| id (string) | source (string) | version (string) | text (string) | added (date) | created (date) | metadata (dict) |
|---|---|---|---|---|---|---|
218572320 | pes2o/s2orc | v3-fos-license | A Critical Reflection on Relationship between ICT and Change Management in Enhancing Teaching and Learning Performances
Abstract Numerous studies have pointed out direct positive impacts of Information Communication Technology (ICT) on education quality. From being viewed as just a tool, ICT is now viewed as quality in itself or as a catalyst for quality improvement. These findings have led universities worldwide to adopt ICT regardless of uncertainties. Consequently, not all of them have succeeded. Evidence from cases around the world shows that ICT produces a positive impact only in an environment that fits it. Such an environment is fostered by an effective change management approach. The main aim of this paper is therefore to present the symbiotic relationship between ICT and change management, zeroing in on how changes are managed to attain a proper ICT ecosystem for education quality improvement. It also aims to understand, through the conduct of extensive bibliographical research along with critical content analyses, the roles of ICT in education, its design and impacts, and what constitutes an effective change management approach for ICT inclusion. Key findings include: that the integration of ICT alone does not necessarily produce a direct positive impact on teaching and learning, but rather its design does; that a good design requires a proper change management process, driven by ICT; and that the involvement of all stakeholders, particularly functional managers, is critical to attaining better performances.
INTRODUCTION
In past decades, much research focused on the role ICT played in the betterment of education. While some argued that students learned better in the presence of ICT (Hewitt-Taylor, 2003; Khan, Hasan, & Clement, 2012; Jacobsen & Forste, 2011; Pajo & Wallace, 2001), others disagreed. Kirkwood and Price (2014), for instance, raised concerns about the limited understanding of the benefits of technology in education, questioning the effectiveness of the long-held assumption that technology as a tool had a direct positive impact on learning and teaching outcomes. This came after a "negative relationship" was found between the two variables: ICT and teaching and learning enhancement. To Jacobsen and Forste (2011), technology per se could not do much; rather, it was its design that shaped the direct influence on teaching and learning performances. As questions intensified, research focus shifted away from the role of media technologies per se to the design of the technology in education. In an article on Technology-Enhanced Learning and Teaching in Higher Education, Kirkwood and Price (2014) cited the introductory statement of the Director of TELRP, who put it: "Does technology enhance learning? It's not unreasonable to ask this question, but unfortunately it's the wrong question. A better question is: how can we design technology that enhances learning…?" (p. 7). The main objective of this paper is to present the symbiotic relationship between ICT and change management, zeroing in on how changes are managed to attain a proper ICT ecosystem for education quality improvement. Alongside this, the paper also aims to underscore the roles of ICT in education, its design and impacts, extending to what constitutes an effective change management approach for ICT inclusion. Given the nature of this topical substance, which will later be used as part of a bigger literature review on ICT and change management, the conduct of extensive bibliographical research, together with critical textual analyses, is used as the methodology to achieve the abovementioned objectives. For better understanding, the paper is organized as follows: the introduction, the general view of ICT in education, the change in the educational landscape, the changing views, the symbiotic relationship between ICT and the change process, different theories in the field of change management, approaches to managing effective change, and findings and discussions. The paper ends with a conclusion.
ICT IN EDUCATION
There is no contest that "current college student population is more digitally active than any previous generation" (Jacobsen & Forste, 2011, p. 275). Their activeness is partly attributable to the advancement of modern technology per se and, more importantly, to its influence on their daily lives. From businesses to public works, media technology has made things easier, more convenient, and even more productive. In the field of education, media technology has revolutionized the way things are taught and learnt and, to a greater extent, replaced old-school practices. Hawkridge, Jaworski, and McMahon (1990) suggested the use of ICT could improve performance, teaching, and administration. ICT has a positive impact on education as a whole, and it could also develop relevant skills in disadvantaged communities, helping in liberation and transformation. Though warning of heavier demands on the side of the teacher, Keengwe, Onchwari, and Wachira (2008) confirmed concrete benefits of employing ICT in the educational field, stressing that technology allows students to work more productively than in the past. Pajo and Wallace (2001) stood on the same premise by agreeing on the importance of ICT in education, seeing its growing power and capacity to trigger change in the learning environment available for education. McLoughlin and Lee (2007) also confirmed the sophistication of employing media technology to enhance learning, through which learners used various tools and multiple forms of interaction to create collective activity, supported by technology affordances. The authors acknowledged the benefits of using information technology in education as it could widen access, decrease the need for onsite teaching accommodation, and enhance explanations through the use of special electronic effects. At the institutional level, including the International Society for Technology in Education (ISTE), the need for technology-based learning is even more obvious (Hamidi, Meshkat, Rezaee & Jafari, 2011). The perceived benefits of the positive influence of media technology use in education have propelled quick adoption and integration of ICT into higher education by many educational leaders and governments around the world (Edmunds, Thorpe & Conole, 2012). In 2002, UNESCO launched "the Asia-Pacific ICT in Education" programme with partner countries in order to prepare them for a comprehensive and informed approach to integrating ICT into education (UNESCO, 2002).
THE CHANGING EDUCATION LANDSCAPE
Not long ago, theorists like John Dewey, Jean Piaget, and Lev Vygotsky proposed the student-centered approach to education. Though there are nuances of explanation and expectation, one common thread exists: students learn better amongst themselves, according to the proposed practice. Constructivist learning theorists cast no doubt on this, stressing that the approach relinquishes some of the teacher's responsibility for providing in-class instruction, while giving students more autonomy and independence in choosing for themselves both the content and the approach for learning. This approach later materialized as ICT came into full swing for harnessing. Khan et al. (2012) stressed the role of ICT in shaping a new learning process whereby a more collaborative learning environment was made possible where it previously was not. Media technology has given way for classes to be conducted from a far corner of the world using a sophisticated chatroom and/or teleconferences where teachers and students can comfortably interact with one another. The new tools have genuinely altered instructional methods and materials, providing simulated practical experiences and enhancing visual explanation and online discussion between teachers and students (Hewitt-Taylor, 2003; Kirkwood & Price, 2005). With no surprise, ICT has ushered in a new way of learning and teaching, transforming the so-called correspondence studies into a new e-learning style where classrooms are assisted by new communication technologies systemically connected for content exchanges. Teachers are no longer leading the classrooms, but are facilitators who direct students to a new type of learning that is, in many ways, assisted by media technologies (Hewitt-Taylor, 2003). Not only does this indicate a sophisticated move from total teacher-directed instruction to more relaxed, student-tailored content and formats, the impact of the change is much greater than one could expect: it requires a change in both classroom format and supports. Soon afterward, the fashion was adopted worldwide as a complementary means to the old teacher-centered approach and, even more drastically, replaced it almost completely in many education systems. With the fast expansion of media technologies and the resounding proof of their usefulness in education, governments in many countries have made it imperative for educational institutions to adopt and integrate ICT into every aspect of school life, even making it compulsory for staff in certain higher education institutions to master certain basic skills for their jobs. Cambodia is no exception. In December 2015, the government, during its annual national budget adoption, laid out solid planning to address the development of physical infrastructure, highlighting the ICT policy priorities within the fifth legislature and a set of planned actions by various MDAs to implement the prioritized policies (KnowledgeConsulting, 2015).
THE CHANGING VIEWS
Despite the widespread growth in practice, concerns continue to be expressed about the extent to which effective use is being made of technology to improve the learning experiences of students (Kirkwood & Price, 2014). In a search for a correlation between media technology use and class grades, Jacobsen and Forste (2011) raised concerns about the limited understanding of the benefits of technology in education, questioning the effectiveness of the long-held discussions on the topic. The concern came after a "negative relationship" was found between the two variables. Jacobsen's finding supported Schramm's take, which concluded that there was no evidence to prove the role of technology in enhancing education. According to Schramm (1977), more variance was found within than between media, and hence there was no evidence to suggest that any particular medium or technology could in or of itself account for enhancing learning outcomes. The above findings also echo what had been highlighted by Alexander and McKenzie (1998), who confirmed a similar point. According to the authors, "the use of a particular information technology did not, in itself, result in improved quality of learning or productivity of learning…. Rather, a range of factors was identified which was necessary for a successful project outcome, the most critical being the design of the students' learning experiences" (Alexander & McKenzie, 1998, p. 3). The argument is very much in line with Hewitt-Taylor's finding, which valued methodology over technology. According to Hewitt-Taylor (2003), students could only realize the maximum benefits from technologies when they were properly applied. In a recently published article, "To Improve Education-Focus on Pedagogy, not Technology", Sharples (2019) reconfirmed the role of pedagogy in enhancing education, putting it thus: "it's not what you use; it is how you use it. We need to focus on how teachers use technology, not just the technology alone. The key to this is pedagogy" (Sharples, 2019, p. 1). It could thus be drawn from the above arguments that to understand what contributes to effective ICT design (or what one may call pedagogy: how technology is used, or the ICT environment), one should, first, understand the symbiotic relationship between ICT integration and organizational change, and second, determine what change approach should be adopted for effective ICT design.
SYMBIOTIC RELATIONSHIP BETWEEN ICT AND CHANGE PROCESS
As ICT increases its influence on our daily lives and impacts many aspects of contemporary organizational change (Barrett, Grant, & Wailes, 2006; Love, Gunasekaran, & Li, 1998), any discussion about it draws in discussion about change, and vice versa. At one time, ICT is adopted as just a tool to assist in some aspects of change; at another, it is a prerequisite for effective change, given the enormous and unpredictable size and pace of the change at stake. Beyond earlier takes which gave weight to the importance of technology as a tool (Hawkridge et al., 1990; Keengwe et al., 2008; Hamidi et al., 2011), other researchers went into the details of technology design for better teaching and learning performances. Khan et al. (2012) discussed the importance of understanding pedagogical, psychological and cognitive barriers to the successful use of information technology. McFadzean (2001) suggested a process of knowledge acquisition which required students to participate passionately in order to achieve fruitful outcomes. Other researchers, including Kirkwood and Price (2014), Hewitt-Taylor (2003), McLoughlin and Lee (2007), and Candy (2000), called for discussions about how technologies are used, in line with appropriate teaching and learning methodology. Byrne, Flood and Willis (2002) acknowledged the technology's benefits on the condition of the administrator's knowledge of the student body and of the environment in which they learn. The above rhetoric clearly suggests a different angle on ICT in the change process, particularly at a time when subsequent new findings alert change agents to look at ICT beyond its material status; one of these findings came from Orlikowski and Yates (2006), who argued for the need to see technology outside its contingent determinism box. This later finding supported Gardner and Ash's take (2003), which suggested that the low benefits obtained thus far from ICT integration were mainly because of the absence of a concrete understanding of the nature of change in the complex organization: "A clear understanding of dynamics of change at the people/technology interface, and the symbiotic relationship between information systems and strategy, is a prerequisite for the successful business benefits realization" (p. 18).
In an attempt to show the relationship between technology and people, Andersen (2018) views technology as an operational work mechanism, linking people's actions at all levels of the business. Barrett et al. (2006), while also agreeing with the argument, acknowledge the scarcity of relevant studies on the interconnectedness of ICT and organizational change, noting that, where such studies exist, they tend to ignore or downplay the role of human agency. Nevertheless, what dictates a common frame for all these researchers is the fact that ICT has an enormous influence on people, but the latter are the ones who decide (Orlikowski & Barley, 2001). In sum, integrating ICT needs to be done with great caution as it may change the way we think about work (Zuboff, 1988). While on one hand ICT is required as a communicative tool to manage bigger institutional outcomes within this increasingly complex organizational environment (Savage, 1996), on the other hand, ICT integration impacts an organizational work culture that needs to be appropriately addressed. To this end, the human role is important in mitigating such culture change so that the chance of failure is minimized. To attain success, technology cannot be left alone to determine success; what matters is how change agents, including top supervisors, functional managers, and staff alike, use it accordingly to project the wanted outcome.
CHANGE MANAGEMENT THEORIES
Much research has been conducted on how to manage change for success. In business, the process of adopting effective change ranges across a number of strategies; these include the initial step of quality assurance (QA), which cares more about the procedure for attaining the best quality, through continuous improvement (CI), which places more emphasis on customer satisfaction and employee participation, towards adopting a continuous change approach to ensure quality, known as total quality management (TQM). In a radical change phase to attain market leadership, companies adopt radical change in part of their procedures, called Process Engineering, or, at the extreme, across the whole business process, known as Business Process Reengineering (BPR) (Love et al., 1998). By (2005) categorized change into three different types: change based on 'the rate of occurrence', in which the researcher included continuous change adopted from Burnes (2004), discontinuous change (Grundy, 1993) and incremental change (Burnes, 2004); change based on 'how it comes about', in which he observed whether the change was planned or emergent; and last, 'change based on scale', which included fine-tuning, incremental adjustment, modular transformation and corporate transformation. Other researchers went down even further into planned internal and external change, and unplanned internal and external change. Nevertheless, all of these centered around two main approaches. One was Kurt Lewin's 'planned approach', and the other was the 'emergent approach'. According to Lewin (1951), effective change required detailed plans and projections made by top managers. Change had to start from clear objectives supported by detailed planned actions, and with projectable results. Change was attainable through the process of unfreezing, changing and refreezing, or, termed differently, displacing, reregulating and rearranging (Heifetz, Grashow, & Linsky, 2009; Sporn, 2001). Bamford and Forrester (2003), however, challenged the concept, casting doubt mainly on the role of top managers. Good change management had to be bottom-up and cross-sectional. Changes took place at functional offices governed by functional managers. Senior managers might dictate general policies, but it was the middle managers who were directly "influenced" by events, who liaised with important customer contacts and spent time with both internal and external auditors (Bamford & Forrester, 2003). Bamford's challenge gets subsequent support from researchers who witness the uncontrollable nature of change, which is in most ways influenced by the advancement of ICT. The notion of emergence is particularly relevant in today's setting, where organizations are greatly affected by unprecedented environmental, technological and organizational changes which cannot be explained and prescribed by prior plans and intentions (Orlikowski, 1996). Technology has gone too far for things to remain predictable, and hence effective organizational change must expect change per se. Change, in their views, is non-linear, emergent, dynamic and situated in nature (Gardner & Ash, 2003; Orlikowski, 1996). Though debate is still ongoing, particularly on how to effectively manage the 'unpredictable', it seems common ground has been built on the facts that, first, the pace of change has never been greater than in the current business environment, and second, change, being triggered by internal and external factors, comes in all shapes, forms and sizes (By, 2005).
APPROACH TO MANAGING EFFECTIVE CHANGE
Although numerous variables have been identified as factors to be keyed in for managing proper change, three groups of change factors are common to almost all researchers. These are strategy, technology, and the human factor. Orlikowski and Yates (2006) term them, in phrase, as making the system workable, dealing with materiality, and focusing on practice. Drawing on Wagner and Newell (2006), Orlikowski referred to 'making the system workable' as a strategy that focused on setting common goals for all change stakeholders in the institution. As for 'dealing with materiality', it referred to the ability to see technology as more than just a tool; a notion shared by Bridgman and Willmott (2006), who put it that the challenge was how to articulate a view of technology's material properties without reifying them through a form of contingent determinism. And 'focusing on practice' touched more on what people did with technologies in practice (what actors at various levels within and across organizations are doing with the technology on the ground and over time). Gardner and Ash (2003) also pointed to three unifying themes for effective change to take place. To exemplify, emergent change, although difficult to manage in a conventional sense, could be shaped and harnessed under certain conditions, which included shared stakeholder goals, a clear understanding of the business model and its objectives (strategy), the role of technology within the process, creation of common "IT change management" protocols and conventions, and on-going use of facilitated forums required to support knowledge integration. Marchesoni, Axelsson, Faltholm and Lindberg (2016) also drew attention to the need to focus on the aspects of strategy, the human (which the authors coined as 'usability needs') and technology. While Muluneh and Gedifew (2018) packaged the change process following adaptive leadership and design thinking, Milis and Mercken (2002) highlighted the roles of top managers in the design process. Top managers' decisions in many ways influenced the level of support provided by functional managers, and this support had a great impact on the behavior of users. Their idea was, however, challenged by Bamford and Forrester (2003), who saw the role of top managers as less important compared with that of middle managers. Top managers, though having the overall responsibility for effective change, were not supposed to plan or implement change, but only to create an environment that was conducive to experimentation and risk-taking, and to develop a workforce that would take responsibility for identifying the need for change and implementing it. Though these findings diverged on the definition of human roles in an organization, they shared a common understanding of the criticality of management involvement in the change process. On the part of technology, there is a need to look at it beyond its materiality and deterministic nature. Since the 1980s, technology has no longer been seen as a material cause, but as a social shaping agent (Markus & Robey, 1998; Williams & Edge, 1996). Orlikowski (1996), for example, pointed to a mutual relationship between humans and technology, stating that ICT was both a concrete artifact and an actor which influenced, and was influenced by, the cognition and actions of its users. Andersen (2016) similarly acknowledged the reciprocal relation between ICT users and organizational change, noting that ICT could bring both increased autonomy and increased controlling power of organizational norms and routines.
Short-term results would be achieved, while increasing instability instead of reducing it, if there was a one-sided intervention on either of these two variables (Genus, 1998; Hartley, Benington, & Binns, 1997; Senior, 1997). Several attempts have been made to develop conceptual frameworks that treat technology as both a material and a social object at the same time (Orlikowski & Barley, 2001), but a unified one has yet to be adopted. What dictated a common frame for all the researchers was the fact that ICT has an enormous influence on people, but the latter are the ones who decide.
With great people at work and sophisticated technology in hand, one must not forget to make the system workable: the strategy. Though this seems complicated, an understanding that change is emergent, non-linear, and continuous is certainly a prerequisite for developing good institutional strategies (Gardner & Ash, 2003; Andersen, 2018; Bamford & Forrester, 2003). According to Bamford and Forrester (2003), organizational change was a continuous process of experimentation and adaptation intended to match the organizational capabilities to the needs of the changing environment, and hence any view of strategy as a linear process, following a particular set model and done within a specific timeframe, was doomed to fail (Gardner & Ash, 2003). Even with these change factors, effective change relies heavily on the process. Summarizing Kanter, Stein and Jick's Ten Commandments for Executing Change (1992), Kotter's Eight-Stage Process for Successful Organizational Transformation (1996), and Luecke's Seven Steps (2003), Muluneh and Gedifew (2018) proposed the following steps for implementing change in people's working culture: first, developing deep investigation and opening discussion of challenges; next, proposing the use of adaptive design as a tool; and last, introducing collaborative thinking for creative solutions. Though the researchers stressed the need to equip staff with the necessary ICT skills at a later stage, clear communication between and among all stakeholders had to be ensured throughout the whole change process. Muluneh's proposal overlaps largely with Andersen's (2018), which stresses more on managerial responsibility and suggests (1) the identification of challenges prior to the choice of ICT solutions; (2) training; (3) revision of organizational routines; and (4) negotiations to develop commitment to the new ICT.
FINDINGS AND DISCUSSIONS
Information Communication Technology (ICT) and organizational change are reciprocally interrelated. When we talk about ICT integration, we must not avoid discussing the changes needed for a successful integration. By the same token, when one talks about change, one must also not forget to consider ICT, given the size and the current pace of change. Effective ICT integration requires an in-depth understanding of the relationship between the ICT itself and the environment in which it operates. Knowing this would give change agents a better chance of exploiting it where they see fit. To make the best use of technology, it should be viewed beyond its materiality; that is, one should not let technology dictate the results, but use it to pursue the wanted results. Change agents also need to understand the nature of organizational change. Change is shown to be emergent, continuous, fast and drastic in nature. This requires change agents to reconsider their change approach. The latest findings have shown that planned change is no longer relevant given the nature of change, and hence all change agents should expect the unplanned nature of change. Rather than following a set of rules or models, what is needed in managing organizational change is an operational style that is more reflective and less reactive (Bamford & Forrester, 2003). Functional managers, not top ones, are claimed to be the most important change agents as they are the ones who face changes directly. Though top managers are still responsible for the overall objectives of the change intent, the simultaneous and interactional involvement of all change stakeholders is critical. Their involvement, facilitated by a free flow of useful information, would contribute to best work practices that integrate new work procedures, which also include new technology (Andersen, 2018). Although the pace of change at a given point in time may be radical, change efforts should be small-scale, incremental, and bottom-up. Over time, these can lead to a major reconfiguration and transformation of an organization (Bamford & Forrester, 2003). To attain this, however, a convincing plan has to be devised, incorporating Muluneh and Gedifew's (2018) and Andersen's (2018) steps: (1) create awareness of the change among all key stakeholders, pointing to the motivation for change, including both individual and institutional benefits; (2) give them ownership of the change by involving them throughout the change process; (3) explain to them the possible consequences of not changing while, at the same time, assuring them of possible benefits such as training, incentives, or new job placements; and (4) explain the stakeholders' roles in ensuring success. Regardless of field, the adoption of technology has to be well considered, with a good change management approach. In education, change has to be made in almost all aspects of teaching and learning at the onset of ICT integration, should one expect better teacher and student performances. The need for inclusion has to be supported first by managers, given their roles, and it should then involve others who have stakes. Change is non-linear and emergent in nature and, according to the earlier research, can best be dealt with by functional managers who know the issues well on the ground. The approach to change should be incremental, flexible and goal-oriented. Any attempt to impose planned change from top managers may risk major problems.
As ICT can be both a tool and a process, key stakeholders need to know which function it plays at a particular time.
CONCLUSIONS
A conclusion could thus be drawn that the integration of ICT alone does not necessarily produce a direct positive impact on teaching and learning performances; rather, it is the overarching process that does, and this requires a proper change management approach. Though ICT is viewed as an inevitable part of the change process, it should in no way be considered a determinant of change results. Humans indeed hold this role, and hence the involvement of all key human actors in the change process is deemed critical. Though this paper does not uncover more substantial findings on the relationship between ICT and change management, it serves as an important reminder to all who have stakes and are venturing into the unexamined supposition of ICT's supernatural power. Furthermore, its thorough literature review shall form a strong base for future researchers in understanding this same focus in more depth in a number of ways, one of which is the symbiotic relationship of ICT and change, that is, ICT is relevant, but not determinant, for change. | 2020-05-11T13:28:01.170Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "facec63830f3afef8eaaff3fdf97dda6f60a7c01",
"oa_license": null,
"oa_url": "https://doi.org/10.2478/cplbu-2020-0049",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "facec63830f3afef8eaaff3fdf97dda6f60a7c01",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
18264589 | pes2o/s2orc | v3-fos-license | Current status of implant prosthetics in Japan: a survey among certified dental lab technicians
Background There are many implant cases in which dental technicians take the initiative with regard to the design of implant prostheses, and to a certain extent, this area of care is one in which dentists do not necessarily play the leading role. Moreover, inadequate communication between dental technicians and dentists and insufficient instructions for technicians have been highlighted as issues in the past. The purpose of this questionnaire is to improve the quality of implant prostheses and thereby contribute to patient service by clarifying, among other aspects of treatment, problem areas and considerations in the fabrication of implant prostheses, the conceptual-level knowledge and awareness of prosthodontics on the part of the dentists in charge of treatment, and methods for preventing prosthetic complications. Methods A cross-sectional survey was given to 120 certified dental technicians. To facilitate coverage of a broad range of topics, we classified the survey content into the following four categories and included detailed questions for (1) the conditions under which implant technicians work, (2) implant fixed prostheses, (3) implant overdentures, and (4) prosthetic complications. Results Out of 120 surveys sent, 74 technicians responded, resulting in a response rate of 61.6%. Conclusions This survey served to clarify the current state of implant prosthodontics, issues and considerations in the fabrication of implant prostheses, and the state of prosthetic complications and preventive initiatives, all from a laboratory perspective. The results of this survey suggested that, to fabricate prostheses with a high level of predictability, functional utility, and aesthetic satisfaction, it is necessary to reaffirm the importance for dentists to increase their prosthetic knowledge and work together with dental technicians to develop comprehensive treatment plans, implement an organized approach to prosthesis design, and accomplish occlusal reconstruction.
Background
Currently, dental implant treatment is evaluated on the basis not only of restoring masticatory function, but also of a variety of other factors, including the implant and superstructure survival rate and psychological impacts [1][2][3]. Numerous factors must be taken into account to offer highly predictable implant treatment, and there is no doubt that prosthetic-related factors such as the type and compatibility of the prosthesis, as well as occlusion, make a major contribution to that goal [4][5][6][7][8][9].
Recently, a restoration-driven approach to implant treatment has gained recognition and is being put into practice on a broad basis [10,11]. However, an increasingly diverse range of patient cases has led to a situation in which it is impossible to ascertain such aspects of actual practice as prosthesis type and design, making it necessary to reaffirm the importance of treatment carried out from a prosthetic perspective [12]. Many surveys querying dentists or patients with regard to implant treatment have been reported in the literature, addressing such topics as the state of implant treatment in particular countries and regions [13,14], quality of life and patient satisfaction [15][16][17], peri-implantitis and mucositis [18], and implant education [19,20]. However, very few surveys have queried dental technicians, whose job it is to fabricate implant prostheses [21,22].
Dental technicians play a major role in current implant treatment because of increases in both the importance of their participation as part of the treatment team from the treatment planning stage [21] and the frequency of prosthesis repairs, refabrication, and related procedures in the event of prosthetic complications. In particular, the types of prosthetic complications being experienced and associated trends are becoming clear thanks to numerous systematic reviews undertaken recently to investigate implant complications. Fixed prostheses are frequently prone to issues such as screw loosening, crown detachment, and fracturing of the veneering material [23][24][25][26][27]. Similarly, implant overdentures are frequently affected by progressive loosening of attachments, denture base fractures, and a sequential need for relining [28,29]. However, because understanding of the status of these complications is based on the results of surveys targeting dentists, information is needed on the situation as seen from the standpoint of implant technicians, to clarify the causes of these complications and the techniques for dealing with them. Issues including inadequate communication between dental technicians and dentists and insufficient instructions for technicians have been pointed out in the past [21,30,31]. These reports derive from surveys targeting older fixed or removable prosthesis designs, leaving it unclear not only whether those issues have been rectified in the face of the expanding use of implant prostheses in recent years, but also to what degree the opinions and wishes of dental technicians are being reflected in implant treatment.
This survey consists of a questionnaire targeting the certified dental technicians of the Japanese Society of Oral Implantology (JSOI) [32] who are primarily involved in fabricating dental implant restorations. It was formulated to clarify the current status of implant prostheses from a prosthetic and technician-oriented standpoint through questions addressing current trends among dental implant technicians, fixed prostheses, implant overdentures, and prosthetic complications and measures. The certified dental technicians of JSOI queried by the survey are involved in implant-related laboratory work on a comparatively frequent basis, and the responses they provided can be expected to accurately reflect the current state of implant laboratory practice in Japan. Our goal through this questionnaire is ultimately to improve the quality of implant prostheses and thereby contribute to patient service. We aim to do this by clarifying, among other aspects of treatment, problem areas and considerations in the fabrication of implant prostheses, the conceptual-level knowledge base and awareness of prosthodontics on the part of the dentists in charge of treatment, and methods for preventing prosthetic complications.
Methods
This cross-sectional questionnaire survey was performed among the certified dental technicians of JSOI from September to December 2011. A total of 120 out of 285 certified dental technicians of JSOI were selected using a random number table, and each questionnaire was mailed directly to the participant. To facilitate coverage of a broad range of topics, the survey classified content into the following four categories and included detailed questions for each: (1) the conditions under which implant technicians work (questions 1 and 2); (2) implant fixed prostheses (methods of retention, abutment, and prosthesis types; questions 3-6); (3) implant overdentures (questions 7 and 8); and (4) prosthetic complications (complication types, methods of treatment and prevention; questions 9-14). Details of the questions and results are provided in Tables 1, 2, 3, and 4. Given that no previous survey regarding implant dental technician data had been developed, an original form for this purpose was constructed following suggested guidelines [33,34]. Important to construct validity, both the questionnaire authors and their audience were clinical specialists and were aware of the topic content. The content sought in the questionnaire was a measure of responder demographics, clinical experiences, and subjective perceptions. Additionally, interpretation errors were minimized because of content familiarity and standardization, which improved reliability, and no pretest measures were obtained given the mail-based assessment method.
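For illustration only, the sampling step described above can be reproduced in a few lines of code. The sketch below is a hypothetical, minimal example of drawing 120 technicians at random from a roster of 285; the authors used a random number table, so the seed, roster format, and member IDs shown here are assumptions for the example rather than part of the original procedure.

```python
import random

ROSTER_SIZE = 285   # certified dental technicians of JSOI at the time of the survey
SAMPLE_SIZE = 120   # questionnaires mailed out

# Hypothetical roster: sequential member IDs standing in for the real register.
roster = [f"JSOI-{i:03d}" for i in range(1, ROSTER_SIZE + 1)]

# Fixed seed so this illustrative draw is reproducible; the paper's random
# number table has no direct equivalent of a seed.
rng = random.Random(2011)
sample = rng.sample(roster, SAMPLE_SIZE)

# Each selected technician would receive one mailed questionnaire.
print(len(sample), "questionnaires to mail, e.g.:", sample[:5])
```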
Results and discussion
Out of 120 surveys sent, 74 technicians responded, resulting in a response rate of 61.6%. A summary of the responses is provided in Tables 1, 2, 3, and 4. Because implant treatment (implant prostheses) requires a significant amount of specialized, high-precision laboratory procedures, this area of dental care exhibits slightly different trends than prosthetic treatment as it was practiced in the past, and this work is concentrated at specialized fabrication labs. Moreover, there are many cases in which dental technicians take the initiative with regard to the design of implant prostheses, and to a certain extent, this area of care is one in which dentists do not necessarily play the leading role. In light of these circumstances, this questionnaire was intended to verify trends in implant treatment from a different perspective than has been used in the past, by investigating the current state of practice in the field from the dental technician's perspective. By evaluating implant treatment from the standpoint of dental technology/prosthodontics and identifying current trends and problem areas, we expected to gain information that enables highly predictable implant treatment.
1. Conditions characterizing implant laboratories (Table 1)
The dental technicians who responded to this questionnaire have an average of about 17 years of experience in the field, indicating that they possess an adequate level of fabrication experience. Given that dental implant treatment is a comparatively new field, these personnel can be proficient with digital techniques, in contrast to past generations of technicians who practiced the craft. On average, each dental technician serves about 36.5 customers, although that number varies depending on the scale of the fabrication lab at which they work. While implant laboratory work consists of complex processes, the fees are high, and labs generate a stable flow of revenue given a constant stream of work requests (Q1).
Dentists play the leading role in implant treatment planning and prosthetic design 39.3% of the time, and dental technicians are consulted concerning cases and part usage 34.7% of the time, suggesting the approach to implants is driven by prosthetic considerations (by dentists) to some degree. However, because dental technicians indicated that they take the initiative 15% of the time, it is impossible to ignore issues involving the care, skill, and judgment of dentists offering implant treatment. This is distinct from the question of whether communication or information transmission between dentists and dental technicians is adequate; rather, it relates to implant treatment knowledge, especially decisions about which prostheses and other treatment tools to use. The repercussions of this problem extend to the rate of incidence of prosthetic complications occurring after the start of functional use, their prevention, and the measures that are undertaken to address them (Q2). Education of dental technicians varies by country, and there are a variety of means by which personnel master fabrication knowledge and skills. For example, a survey of dental technicians in the UK conducted by Bower et al. [35] reveals that while subjects read commercial magazines published for dental technicians, they rarely subscribed to academic journals in the field of prosthodontics, and two thirds of the survey's respondents had never attended a training course on fabrication practices. By contrast, certified dental technicians of JSOI are required to belong to an academic society and to participate in society meetings and certification courses to maintain their credentials. Subscription to JSOI's journal is an example of the advantages of membership for continuing education.
2. Implant fixed prostheses (Table 2)
Implant fixed prostheses employ either cement or screw retention. While there are a variety of reports comparing the two methods in terms of such metrics as their respective prognoses, success rates, and advantages and disadvantages [7,[36][37][38], no reports have been published concerning their relative frequency of use. Our questionnaire indicated a distribution of 61.4% cement-retained versus 38.6% screw-retained prostheses (Q3), suggesting that cement retention is used more frequently in Japan. Unfortunately, the fabrication-oriented focus of this survey prevented clarification of the types of cement used for cement retention and the breakdown between provisional and definitive cement. Next, concerning the types of abutments used with cement-retained prostheses (Q4) (Figure 1), UCLA-type abutments made from cast gold alloy accounted for about the same proportion. It is likely that this breakdown is because, in many cases, implant systems using fabricated crowns are not supported by CAD/CAM abutments. CAD/CAM system use is also subject to numerous limitations because of the licensing process imposed by the Ministry of Health, Labour and Welfare (MHLW) in Japan, which is strict when compared with its counterparts in other countries. The questionnaire also indicated that titanium two-piece abutments (preparable type) are used in about the same proportion, 28% of the time. This reflects such factors as efforts to keep laboratory costs down and to shorten delivery time frames, in addition to the above reasons.
Concerning the types of prostheses used in the anterior region (i.e., veneering materials), the questionnaire indicated a trend toward selection of roughly the same materials for both single crowns and bridges (Q5) (Figure 2). As a rule, porcelain fused to metal (PFM) crowns accounted for 43.7% of the total, but selection of metal-free restorations using zirconia has been increasing in recent years, reaching approximately 27.1%. Incidentally, veneering porcelain was also used as the veneer material for zirconia copings. The questionnaire also indicated that while highly filled indirect composites such as Estenia (Kuraray, Osaka, Japan) were used 21.3% of the time, primarily for facing crowns, these materials were used infrequently for jacket crowns (2.4%). There is a low risk of facing damage and chipping for prostheses in the anterior region. Nonetheless, the questionnaire revealed the unexpected result that indirect composite facing crowns accounted for 21.3% of the total. This may be because there are many indirect composite resins (Estenia, Ceramage, etc.) available in Japan, and crowns and bridges in the anterior region (natural abutment teeth) are covered by certain types of insurance in the country (National Health Insurance and Social Insurance), with the result that Japanese dentists are familiar with these materials and use them frequently. Consequently, it can be surmised that using these materials in implant prostheses is more common than in Europe and the USA. However, no survey of prosthesis selection has yet been carried out, and future research on that subject is expected. Concerning the types of prostheses used in the posterior region (Q6) (Figure 3), the PFM design accounts for about 40% of the total, although the questionnaire also revealed a trend (in 9.1% of all cases) toward metal occlusal designs to avoid fracture and chipping of the veneer material. The same trend is evident for indirect composite facing crowns, where metal occlusal designs are used in about 35% of the cases that this type of prosthesis represents. In the past, the PFM crown was frequently used for implant crowns and bridges. However, a trend is seen toward increasing indirect composite resin use as a veneer material for implant superstructures. In addition to improvements in the physical properties (strength, wear resistance, and discoloration resistance) of indirect composites in recent years, their selection as veneer materials that chemically bond to titanium has increased against the backdrop of increasing CAD/CAM-designed titanium frameworks, because of the low reliability of veneering porcelain, in terms of bonding strength, when used with titanium frames. There is also a greater possibility of direct (in-mouth) repair of failed veneering materials and greater shock-absorbing potential relative to occlusal force in comparison with porcelain [39]. The trend to adhere resin materials instead of porcelain, dating from Brånemark and colleagues' recommendation of acrylic resin as an occlusal surface material in the early 1980s, also cannot be ignored [40].
Figure 8 (Q13) legend: fracturing of the denture base or denture tooth detachment/fracture; mesostructure (attachment) damage; occlusal reconstruction due to denture wear or attrition; replacement of the attachment system (transition to another system); other (chart values: 54%, 8%, 24%, 8%, 6%).
Figure 9 Q14. Do you have any requests for dentists who practice implant treatment? (legend includes: to study more about prostheses and occlusion; other)
All metal crowns were used about 10.3% of the time in molar regions because of a lack of strong aesthetic requirements. Zirconia, however, accounted for 14.3%, only about half of its use in the anterior region. Possible reasons include this region not being an aesthetic area and veneer material fracture and chipping problems that have yet to be completely resolved [23,41,42].
3. Implant overdentures (IODs) (Table 3)
Some 19% of IOD design work is left to technicians, while 80% is performed according to the instructions of, or in consultation with, dentists (Q7). As was the case with the question concerning overall prosthesis design described above, these results indicate that a team approach is being put into practice. Bar and clip attachments were most commonly used for IODs, followed by magnet, ball-and-socket, and Locator attachments (Q8) (Figure 4). It is noteworthy among the questionnaire results that magnetic attachment use is highest in Asian countries, including Japan [43]. Additionally, it is thought that the low use of Locators (5.2%) is strongly influenced by Japan's strict pharmaceutical regulations and because the MHLW in Japan had not yet licensed the device at the time the questionnaire was administered. Conversely, ball-and-socket attachments have been standardized by major implant manufacturers, and the freedom with which prefabricated parts can be used has led to their comparatively broad use. IOD use in Japan is by no means widespread; a survey of IOD use in ten countries by Carlsson et al. [44] revealed that the adoption rate of these devices in Japan was just 7% for individuals with mandibular edentulism. This number was lower than in any of the other nine countries, and future changes in IOD use in Japan remain an interesting topic.
4. Prosthetic complications (Table 4)
According to Papaspyridakos et al. [2], indicators such as implant level (the relationship between the implant and bone) and the state of soft tissue around the implant are the most frequently used indices of implant success, followed by the presence and status of any implant prosthetic complication. Implant prosthetic complications include materials science-related factors, biomechanical and occlusion-related factors, and aesthetic factors. Systematic reviews of the numerous complications that have been reported recently reveal the prostheses, restoration methods, materials, and areas most susceptible to complications [2,[23][24][25][26][27][28][29]. Additionally, the frequency of prosthesis repairs and repair costs cannot be ignored from a medical economic standpoint [2]. Of the problems and issues generally encountered on the laboratory side, compatibility precision, aesthetic issues, and occlusal issues each accounted for about one third of the total (Q9). When these results are examined in connection with laboratory challenges (Q10) (Figure 5), it becomes clear that technicians regard poor implant location and orientation (42.4%) as obstacles to success.
Many other issues deriving from factors such as dentists' skill level and treatment planning knowledge are directly related to the quality of implant treatment, such as defects and inaccuracies in impression-taking and bite registration (29%), inadequate establishment of appropriate occlusal schemes (17%), and deficient or unreasonable prosthesis design (10.6%). These issues can easily give rise to a variety of prosthetic complications after initiating functional use (and may also lead to biological complications), and dentists who offer dental implant treatment should reflect on improving their techniques. In particular, unsuitable implant locations, positions, and orientations can be prevented through appropriate preoperative examination and planning based on diagnostic wax-ups and surgical templates. Looking at repair requests (i.e., complications) involving the superstructures of fixed implant prostheses (Q11) (Figure 6), facing damage and chipping accounted for more than half of all requests (54.5%). Generally speaking, many reports indicate a high incidence of complications related to fixed prostheses involving abutment screw loosening, detachment of cement-retained crowns, and veneer (porcelain/composite resin) fracturing and damage. Because this question addressed repair of implant prostheses, we did not obtain information about complications that can be resolved in a chair-side setting. However, the high rate of requests for facing repairs makes it clear that veneer material chipping and similar issues are occurring at a high frequency [25][26][27].
Although the literature includes reports indicating a greater incidence of chipping and fractures for veneering porcelain than hardened resin [45,46] and for bridges than single crowns [26,27], this questionnaire does not shed light on the relative repair rates for porcelain and composite resin, nor the types of prostheses most likely to experience these issues. In the future, it would be worthwhile to conduct follow-up surveys on the differences among veneering materials and prostheses as well as veneer material failure trends.
Other cases requiring repair seen by technicians include facing discoloration (veneering composite resin) (17%) (Figure 7) and design changes and modification requests associated with additional implants (13.9%). Studies have pointed to issues related to degradation of the materials science characteristics of veneering composites that are distinct from those associated with porcelain, including loss of glossiness because of surface deterioration, as well as discoloration, wear, and attrition due to long-term use [47]. It is interesting to note how relatively frequently repairs are performed to address these issues. It has become clear that no small number of laboratory work requests deal with issues experienced by patients undergoing implant treatment in which changes over time in the area surrounding existing implant treatments occasionally necessitate additional implants and superstructure design changes or modifications. The questionnaire revealed several creative steps, based on laboratory considerations, being taken to prevent veneer chipping and fractures, a frequent and problematic prosthetic complication (Q12) (Figure 7). Technicians were taking into account metal (including zirconia) coping designs (36.3%), covering only the distal-most part of the molar region with metal (24%), using veneering composite resin (15.7%), and using metal occlusal designs (15.1%). The type of coping is important in preventing veneer fractures, and it is necessary to secure adequate veneering material thickness and to consider the dispersion of stress [48]. Particularly as zirconia becomes more common, there has been a move to improve coping designs using CAD/CAM and to exercise care concerning the prevention of veneering porcelain fracture [49,50]. Responses to this survey support the idea that this concept has been gaining popularity among technicians in recent years. Conversely, it was not expected that 15.7% of respondents would indicate that they use composite resin to prevent veneering material fractures. Moreover, there is no evidence that veneering composites are more resistant to fracture than porcelain (as they are more prone to chipping) [45,46]. As noted above, veneering composites are often used in Japan, and one theory is that this trend is driven by a conceptual assumption that veneering composites are softer than porcelain and less likely to fracture from a materials science standpoint. It can be concluded that the ability to repair prostheses directly in the mouth is also a deciding factor. More than half of all repair requests for IODs (i.e., complications) (Q13) (Figure 8) involve fracturing of the denture base or denture tooth detachment (53.8% of all repair requests). The questionnaire also revealed that reconstruction of occlusion because of wear or attrition of denture teeth (24.1%) is a frequent issue leading to laboratory orders. While the literature includes reports of frequent IOD-related prosthetic complications such as attachment-related compromised retention, detachment or fracturing of denture teeth, relining, and attachment damage [25,28,29], this survey showed a somewhat different trend. It can be inferred that these results differ from actual complication trends because they constitute responses to cases sent to labs as repair requests, and because the survey targeted dental technicians. The causes of this phenomenon can be found in the responses to other questions as described above.
In short, the questionnaire suggested the possibility that inadequate awareness of prosthetics is making IOD complications in Japan more complex, with issues including the comparatively frequent use of resin bases, problems with implant location and orientation, and inadequate consideration of occlusion by dentists. Finally, technicians gave voice to several requests for dentists, who are their customers, as a result of their daily experiences accomplishing implant laboratory procedures (Q14) (Figure 9). These included asking dentists to use suitable implant locations and orientations (31.8%), to allow technicians to participate and be consulted from the treatment planning stage (28.3%), to improve consideration of soft tissue as well as its condition (21.8%), and to acquire more in-depth knowledge of prosthesis and occlusal design (14.5%). As observed, implant location and orientation issues in particular not only complicate technical work, but may also cause a variety of complications after the initiation of loading. For cases involving a broad range of implant prostheses and occlusal reconstruction, if not all cases, dental technicians should be part of the team from the treatment planning stage to enable restoration-driven implant treatment in the true sense of the term. At the same time, a dentist with an extensive understanding of prosthodontics should play the leading role in the treatment of such cases. This survey succeeded in identifying prosthetic problems by examining implant prosthetic complications from the dental technician's perspective. As stated in the description of the survey's purpose, it is hoped that dentists make use of this report to reaffirm prosthetic concepts and awareness so that predictable implant prosthetic treatment can be achieved.
Conclusions
This survey served to clarify the current status of implant prosthodontics, issues, and considerations in their fabrication, and the status of prosthetic complications and preventive initiatives, all from a laboratory perspective.
1. Concerning implant treatment, it was concluded that dentists either play the leading role or work in collaboration with technicians, including in the formulation of treatment direction, and that a team approach has been achieved to a certain extent.
2. This survey identified the problems that technicians address on a frequent basis in the fabrication of prostheses (these should be noted by dentists), including implant location and angulation, impression and bite registration precision, and occlusal considerations.
3. Concerning prevention of veneer fractures, it was also concluded that the best approach consists of metal occlusal designs (including a metal backing for the distal-most area) and appropriate coping designs.
4. The results of this survey suggest that, to fabricate prostheses with a high level of predictability, functional utility, and aesthetic satisfaction, it is necessary to reaffirm the importance of dentists increasing their prosthetic knowledge and working together with dental technicians to develop comprehensive treatment plans, design prostheses, and accomplish occlusal reconstruction.
"year": 2015,
"sha1": "3e0ec888b99e87f50e2b8367903b230152939f88",
"oa_license": "CCBY",
"oa_url": "https://journalimplantdent.springeropen.com/track/pdf/10.1186/s40729-015-0005-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52723a4589b96c2e29e32b031ca25182444c5a94",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
TEACHER TALK AND THE PATTERN OF ENGLISH CLASSROOM INTERACTION IN TEACHING ENGLISH
The aim of this research was to explore the teacher talk and the patterns of interaction that occur during the teaching and learning process. The researcher applied a qualitative design in English Zone, a course institute in Makassar. The data source was a teacher who obtained her BA degree from the State University of Makassar two years before the study, together with four teenage students who participated in the class. Data were collected through classroom observation, in which the audio of the English teaching process was recorded. Data analysis consisted of transcribing the recorded classroom observations into written text, classifying the types of teacher talk that occurred in the class, identifying the patterns of interaction that occurred during the teaching-learning process, and drawing conclusions from all of the collected data. In brief, it can be concluded that the teacher employed ten of the eleven categories of teacher talk: dealing with feelings, praising and encouraging, joking, using ideas of students, repeating students' responses verbatim, asking questions, giving information, correcting without rejection, giving direction, and criticizing students' responses. Four patterns of classroom interaction were considered in the study, including: interactions among students (student-student interactions), which happened during small group discussion and during classroom discussions; teacher-whole class interactions, which happened during class discussion and when the teacher conveyed learning material and gave instructions to the students; and teacher-group interactions, which happen during small group discussion when the teacher clarifies the students' difficulties with the given task. It is therefore suggested that the teacher conduct group discussion, which did not occur in the observed class interaction, to improve the students' motivation and willingness to work together.
INTRODUCTION
Since English has become an international language, it is important to learn and to teach. Teaching English has developed dramatically in recent years because of the impact of globalization. As an international language, English plays an important role in the world. Most people use English to communicate with people from different language backgrounds in many parts of the world, and as a means to gain knowledge, information, science, technology, and more (Pratama, 2015). Many people race to master this language because, in all aspects of life, it has become a requirement for obtaining many things, such as going abroad, continuing study, or even looking for a job, as many companies now list English among their requirements. It is therefore no wonder that many people use this opportunity to build course institutes that give students, employees, and job seekers the chance to learn English.
Every course institute sets some rules so that the students' goal of mastering English can be reached. One of them concerns the teachers' professionalism. This means that the teacher must not only master the materials but also manage all aspects of the classroom well during the teaching process, one of which is the interaction between the teacher and students in the classroom (Pratama, 2015).
Based on the Cambridge Dictionary (2017), interaction is an occasion when two or more people or things communicate with or react to each other. In the class, the people who conduct interaction are the teachers and students. They each have their own role in constructing a lively learning process, whether verbally or non-verbally. One verbal way they can interact with each other is by talking to each other. It has been known from Vygotsky's suggestion in (ASCD, 2008) that thinking develops into words in a number of phases, moving from imaging to inner speech to inner speaking to speech. Tracing this idea backward, speech, or talk, is the representation of thinking. So, both teachers and students must produce talk in the class to explore their thinking, which is what we call classroom interaction.
However, the figure that has the biggest role in creating interaction in the class is the teacher. Teachers must focus on building good interaction in order to fulfill the needs of their individual students. They should not only focus on covering the material when teaching; they should also be able to address students as individuals through the language they use, or "Teacher Talk". Thereby they can encourage and motivate their students to develop their proficiency in all English skills, namely reading, writing, speaking, and listening.
Teacher talk is an indispensable part of foreign language teaching in organizing activities, and the way teachers talk not only determines how well they make their lectures, but also guarantees how well students will learn (Yanfen & Yuqin, 2010). Weddel in Fikri, Dewi, and Suarnajaya (2014) reveals that the language that teachers use in class, or "teacher talk," can have a tremendous impact on the success of the interactions they have with students. The kind of language used by the teacher for instruction in the classroom is known as teacher talk (Yan, 2006), while Nunan in Lasantu (2012) argued that teacher talk is of crucial importance not only for the organization of the classroom but also for the processes of acquisition. Based on the definitions above, it can be concluded that teacher talk is the language that the teacher uses in the classroom to build interaction with the students. It is a vital aspect of classroom-based language teaching and learning, since it is one of the main sources of language input for learners. It is a magical thing: it can probably change everything in the classroom.
The students, meanwhile, also have the opportunity to explore their ideas through interaction, because the teaching process essentially gives learners a chance to ask, to guess, to think, and even to discuss the course material, creating interaction between students (Pratama, 2015). Classes where students have opportunities to communicate with each other help students effectively construct their knowledge (Teach to Earth). In this case, the teachers who produce talk must be able to stimulate the students to convey their ideas, whether in written or spoken form.
Talk that the teacher directs to the students can create deep communication between them, whether in a verbal or non-verbal way. Unfortunately, it seems difficult to use the target language all the time in classroom interaction, especially in the learning process at the English Zone course, which has a set rule that students must speak English in the course institute environment, both inside and outside the class. The students, however, especially children and teenagers, still cannot follow the rule well; they still tend to use Bahasa when speaking with the teacher or their friends.
Based on the elaboration above, the researcher will observe students at the teenager level. It is known that teenagers have their own world when studying: sometimes they play, gossip, use headphones in class, or daydream. The researcher will therefore observe how teacher talk can elicit students' responses in classroom interaction at the teenager level in English Zone.
REVIEW OF LITERATURE
In the learning process there must be interaction between the teacher and the students. Interaction is a verbal or non-verbal relation to communicate meaning between one person and another, or between one person and a group of people and vice versa, or among groups of people (Sofyan & Mahmud, 2018). Brown in Sagita (2018) also explains that: "Through interaction, students can increase their language store as they listen to or read authentic language material, or even the output of their fellow students in discussions, skits, joint problem-solving tasks, or dialogue journals. In interaction, students can use all they possess of the language, all they have learned or casually absorbed, in real-life exchanges."
Interaction can be seen when the teacher produces talk in the class that has a great effect on the students' improvement. Rod Ellis in Nurpahmi (2017) states that teacher talk is the special language that teachers use when addressing L2 learners in the classroom. In talk that supports learning, participants share information, invite contributions from others, build on each other's utterances, question and challenge each other, and seek to synthesise information to develop meaningful connections (Hennessy et al., in Rodnes et al., 2020).
a. Deals with feelings
According to Brown in Sofyan & Mahmud (2018), in dealing with students' feelings, it is also important to acknowledge students' past feelings. This is important because their experiences have shaped their minds and underlie their present feelings, and it helps the teacher avoid触 reopening students' traumas. Understanding from the teacher and the right way of handling students' feelings will make teacher-student interaction in the classroom more comfortable.
b. Praises and encourages
Teachers' activities include not only preparing lesson plans and developing teaching materials but also motivating students so that they can find and develop their language skills. Motivating students must be done well so that the objectives of the lesson are achieved as planned. In their daily classroom activities, teachers can support students by praising them, complimenting them, and telling them that their ideas and work are valuable, for instance when students feel stuck or blank in the middle of a speaking performance.
c. Uses ideas of students
The teacher's attention to students' contributions is a great form of appreciation of students' work. There are several ways of expressing this appreciation, such as clarifying, using, interpreting, or summarizing the ideas of students.
The teacher can start a discussion based on students' ideas by rephrasing them while still acknowledging them as the students' contribution.
d. Asks questions
As mentioned in previous pages, questioning in interaction is a way to stimulate students to speak up and share their thoughts. There are many ways to classify the kinds of questions for classroom effectiveness; the questions can be categorized by the level of the students. The teacher usually begins with display questions, for which the answers are common knowledge. Display questions can be used to elicit the content of students' ideas and their language form.
e. Gives information
Giving information is a classic teaching method in which the teacher provides information, facts, personal opinions, or ideas about a topic, simply lecturing to students or asking rhetorical questions. Nowadays, this method is considered out of date for the teaching and learning process because students should be active in the classroom. Avoiding this kind of method does not mean that the teacher leaves all classroom activities to the students; the teacher should still prepare a lesson plan and develop material so that he or she can stimulate students' behavior.
f. Gives directions
Students need some direction and facilitating information on how to demonstrate their ideas systematically. They expect some direction or commands from their teacher.
Therefore, the teacher should direct the various exercises and facilitate them by giving whole-class or small-group activities.
METHOD
In this study the researcher analyzed the teacher talk in classroom interaction and the patterns of interaction that occurred during the teaching-learning process in English Zone, a course institute in Makassar, during nine 90-minute sessions of instruction through observation. Some sessions were audiotaped and were later transcribed by the researcher for the purpose of data analysis.
The participant was a teacher at a course institute in Makassar who obtained her BA degree from the State University of Makassar two years earlier. Four teenage students participated in the class.
At the time of observation, they were studying a lesson about countries and their seasons.
Data were collected through classroom observation, in which the audio of the English teaching process was recorded. Data analysis consisted of transcribing the recorded classroom observations into written text, classifying the types of teacher talk that occurred in the class, identifying the patterns of interaction that occurred during the teaching-learning process, and drawing conclusions from all of the collected data.
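The frequency counts reported below were tallied from the coded transcripts. As a rough illustration of that classification step, the following is a minimal Python sketch, assuming each utterance has already been hand-coded by the researcher with one of Brown's teacher-talk categories; the utterance codes shown are hypothetical examples, not data from this study.

from collections import Counter

# Hypothetical codes assigned to successive teacher utterances during transcription.
coded_utterances = [
    "asking questions", "giving information", "praising and encouraging",
    "asking questions", "giving direction", "repeating students' response verbatim",
]

def tally_teacher_talk(utterance_codes):
    """Return the frequency of each teacher-talk category in a coded transcript."""
    return Counter(utterance_codes)

if __name__ == "__main__":
    for category, freq in tally_teacher_talk(coded_utterances).most_common():
        print(f"{category}: {freq}")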
Types of Teacher Talks that Occur in Interaction Between the Teachers And Students in the Classroom
It is known that interaction refers to the language (or action) used to maintain conversation, teach, or interact with participants involved in teaching and learning in the classroom (Rhamli, 2016). One of the components that produce interaction in the classroom is the teacher. Mercer in Creese (2005) argues that teachers use talk to do three things:
1. Elicit relevant knowledge from students, so that they can see what students already know and understand, and so that the knowledge is seen to be 'owned' by students as well as teachers. Teachers elicit knowledge through the use of cued and direct elicitations.
2. Respond to things that students say, not only so that students get feedback on their attempts but also so that the teacher can incorporate what students say into the flow of the discourse and gather students' contributions together to construct more generalized meanings. Teachers respond to what students say through the use of confirmations, rejections, repetitions, elaborations, and reformulations.
3. Describe the classroom experiences that they share with students in such a way that the educational significance of those joint experiences is revealed and emphasized. Teachers achieve this through the following means: 'we' statements, literal recaps, and reconstructive recaps.
In this section, the researcher gives examples of the teacher talk used by the teacher and explains each type. The teacher performed ten of the eleven types of teacher talk across the meetings. Before explaining each type, the researcher presents the frequency of the types of teacher talk. According to Brown in Lasantu (2012), there are eleven types of teacher talk, namely dealing with feelings, praising and encouraging, joking, using ideas of students, repeating students' responses verbatim, asking questions, giving information, correcting without rejection, giving direction, criticizing students' behavior, and criticizing students' responses.
Dealing with feelings
The type of teacher talk with the smallest frequency in the class was dealing with feelings.
The teacher produced this type of teacher talk in order to help the students understand their feelings and attitudes by letting them know that they would not be punished for expressing their emotions (Lasantu, 2012). An example of teacher talk dealing with feelings produced by the teacher is provided below.
T
: Okey ASSALAMU 'ALAIKUM WR WB. S12 : Good afternoon T : Okey, good afternoon, and how are you toda:::y? S1 : [ oo I am Fine oo ] S2 : [ oo I am Fine oo ] T : Fine. It's rainy in the outside, right? Okey From the transcript above, it can be seen that the teacher open the class by asking the students condition by saying Okey, good afternoon, and how are you toda:::y? and also asking about the condition the outside by saying It's rainy in the outside, right?. This talk is produced by the teacherto make them, the teacher and students feel close each other.
Praising or Encouraging
Praising and encouraging were teacher's statements carrying the value judgment of approval (Lasantu, 2012). According to Burnett in Intervention Central The power of praise in changing student behavior is that it both indicates teacher approval and informs the student about how the praised academic performance or behavior conforms to teacher expectations.Praising and encouraging is like reinforcement that the teacher gave after the students explore their idea or opinion. The teacher often gives praise and encourage tothe students during the class interaction. The teacher's purpose in praising and encouraging thestudent is to give honors to them who actively participate in teaching and learning process. In this meeting, the researcher find 6 utterances as praising and encouraging talk from the teacher like the example below: T : Oke:y (1.0) So, > Do you still remember < our last meeting? What did we (talk)? In the last meeting. S2 : I T : I'd like to pla↑y S2 : Cheese. T : Che:ese. I'd like to play che:ese.Okey. Why do the people use would you? S1 : Would you? T : Ye::s (2.0) When they-(1.0) ask about-S1 : Someone T : Wishes in the-futu : : : re. For example. > Can you make example using would you< S1 : would you like-mmm would you like-would you like to make a cake? T : Would you like to make a cake? S2 : would youl-would you like-to play a che:ese? From the transcript above, it can be seen that teacher always saidOkey after the students answer her question. The word Okey means that the students answer is correct. Even though the teacher always mentioned the word Okeyin the other uttarances, but in this case, it can be concluded that the word Okey actually used to praise the students or to substitute the word good. Another that the teacher repeated the students answer, it means that the teacher appreciate the students answers. Those teacher talks can make the students to be more active in the class because they will think that their answer is appreciated by the teacher, thus they will not be afraid to make mistakes in the next section. It is like the statement from Gartrell (2014) who said that if the teacher gives praise and encouragement in the class it means that they have learn that all children in the class deserve full acceptance and support and work for a community in each every young child feel like a winner.
Joking
In order to make the classroom interaction relaxed, the teacher sometimes made a joke. According to the Director of the Institute for Emotionally Intelligent Learning and consultant to schools for both character and social-emotional learning (SEL), in the present environment of high-stakes testing, budgetary challenges, increased demands on educators, and competition for students' attention, everyone in the school benefits when humor is part of the pedagogy: "Humor builds a learning relationship through the joyful confluence of head and heart." He points to a growing literature on how humor reduces stress and tension in the classroom, improves retention of information, and promotes creative understanding (Edutopia, 2014). The teacher can use humor to reduce the students' anxiety so that they feel relaxed during the learning process. One utterance of joking was produced by the teacher during the classroom interaction. The teacher performed the joke in order to make the students enjoy the classroom activities. The following is the example of joking that occurred in the class.
S1
: Kia is very beautiful and Nida is very smart but T : Kia and Nida ee S1,S2,T : ((laughter)) T : Okey start from kia
Using Ideas of Students
In classroom interaction, the teacher sometimes used the students' ideas. This type concerns the teacher's responses to students' ideas. Using students' ideas can make them feel respected by the teacher. It creates a culture of respect for other views and ideas within the class, which is necessary for students to collaborate with others (Blauman & Burke, 2014), both with the teacher and with other students. The teacher's purpose in using students' ideas was to help develop those ideas and make them clearer. The following are examples of using ideas of students produced by the teacher.
The first data
T : ………………………… and then you say I go to school with my mother= S2 : = and T :and-(2.0) and then you continue then……… The second data S1 : Come bukan came. anu Came itu verb anu(come is not came. Came is verb…) T : two. Come, yes. You're and your is different. Okey. Your kepunyaan and you're you: Are. Fnish Zainab? Any Question?
From the transcript above, it can be seen that the teacher used the students' ideas and opinions in order to appreciate them. By using their ideas, the students will not be afraid to express their opinions on other occasions. This can be seen when the teacher said and then you say I go to school with my mother= and a student immediately continued the teacher's talk by saying and; at that moment the teacher directly used the word mentioned by the student. Another example can be seen in the second data excerpt: the student conveyed her opinion by saying Come bukan came. Came itu verb anu (come is not came. Came is a verb…), and the teacher replied to the student's idea by saying two. Come, yes.
Repeating Students Response Verbatim
During the classroom interaction, the students often responded to the teacher's talk. In this regard, the teacher sometimes repeated the students' responses verbatim. The teacher repeated the students' responses in order to signal that the students' answers were correct. An example of repeating students' responses verbatim, found in all meetings, is provided below.
S1
: would you like-mmm would you like-would you like to make a cake? T : Would you like to make a cake? S2 : would youl-would you like-to play a che:ese? From the transcript above, it can be found that the teacher repeated the students' answer. It seems like the second types of teacher talk namely praise and encouragement, the teacher repeated the students' answer because she appreciated the students' opinion. Consider in the S1 opinion, when the teacher asked her to make sentence, she said would you like to make a cake, directly the teacher said Would you like to make a cake? That is the repetition of the S1 sentence. The teacher clarified that the S1's opinion was correct and must be appreciated. It is just like the second data, the teacher always repeated the students even though just one word.
Asking Questions
A question is any sentence which has an interrogative form or function. In classroom settings, teacher questions are defined as instructional cues or stimuli that convey to students the content elements to be learned and directions for what they are to do and how they are to do it (Cotton).
According to Filippone in Arslan (2006), the greatest attribute of questioning is that it stimulates thinking in the classroom. Asking questions was the second most frequent type of teacher talk that occurred in the class. Asking questions can provoke students' critical thinking; students can elaborate the ideas in their minds if they are given questions. Examples of asking questions made by the teacher are provided below.
The first data T : Oke : y (1.0) So, >Do you still remember < our last meeting? What did we (
The fifth data
T : okey the last, zainab S1 : ((laughter)) S3 : Korea T : Korea, huuh you want to see lee min ho?
From the transcription above, it can be seen that teacher often produce this utterance. It is used to refresh the students' memory when the teacher said Do you still remember < our last meeting? What did we (talk)?in The last meeting.Why do the people use would you?,Can you make example using would you<(in the first data) Because sometimes the students will forget the previous material that had been given, whereas it is still related with ongoing material on the day, so that teacher question will make the students remember the last material and it easy for them to connect with ingoing material.
While when the teacher said what does it mean what would you like to go, What is the meaning
Writing an email to a friend?this kind of question is an introduction of the material that will be learnt.
It is also useful because, before learn about the lesson deeply, the students must know the title or the meaning of sub title that will be learnt. Another example of question that can be seen is in the second data, the teacher said what is spring↑, What is is east?, [What about north]?, and South. This kind of question is delivered by the teacher because she wanted to remind the students about their previous knowledge that the stduents forget. While when the teacher said when and where would you go? why?
When do you want to go to Canada and where?, actually she wanted the students explored their idea about their planning. It is useful for the students to explore their own idea. This kind of question will make the practicing their speaking ability.
Giving Information
Based on the observation, this type of teacher talk was produced very often by the teacher. In classroom interaction, the teacher often gave information to the students. Examples of giving information made by the teacher are provided below.
The first data
T : my pet is cute its really rally funny and adorable I. Actually my it's like this T : I want to continue with its name, namanya because its. For example.
Use he, he is my brother-(1.0) His name is bla bla-(.) She is my sister-(.) Her name is bla bla. That's my pet. Its name is bla… [bla]… gitu. Its for it. Its name is bla bla bla
The second data S1 : In British Columbia↑ di British Columbia↑ you can kayakapa itu kayak? S3 : bangsa kayak kayaknya deh (maybe Kayak people) S1 : oh: bisa:: (0h can) T : Kayak ((Laugter)) Semacam sampan(kayak is like sampan) The third data S1 : hahhh↑ A: (Spring itu terjadi) di bulan e::: februari di Victorian ehh: and west. Ehh west, eh anu: west itu utara atau barat? T : wets is Barat. T : so eats↑ S1 : timur ( From the transcription above,it has be found in the first data that the teacher saidActually my it's like this,I want to continue with its name, namanya because its. For example. Use he, he is my brother-
(1.0) His name is bla bla-(.) She is my sister-(.) Her name is bla bla. That's my pet. Its name is bla…
[bla]… gitu (something like that). Its for it. Its name is bla bla bla. In this case, the teacher explained or gave information to the students about the use of it is and its, because the students in the class still did not know how to differentiate between the two cases.
In the second data excerpt, it can be seen that the teacher said Kayak ((Laughter)) Semacam sampan (a kayak is like a sampan). Here, she gave information to correct the students' misunderstanding about the meaning of kayak. The students thought that Kayak was a nation or ethnic group in Canada, but a kayak is actually like a sampan.
The teacher gave this information so that the students would know that a kayak is a thing that people use to cross a lake.
Other information that the teacher gave in the class concerned the meanings of certain vocabulary items the students did not yet know. There are actually many utterances containing this kind of information (see Appendix 1), but the researcher shows only some samples above as representative examples.
Correcting Without Rejection
During the teaching process, the teacher sometimes corrected the students' answers or responses without rejection. Examples of correcting without rejection made by the teacher are provided below. In the second meeting of the regular class, only seven utterances of correcting without rejection were found; the following are some of the examples.
The first data
S2 : the people, the people ee is very . the people very. The people eee busy and tell: the people are very busy and T : are The third data S3 : <I can wait for you to visit South Korea. And also don't forget to bring some cots coats> T : Coa[ts It can be seen from the transcription above that the teacher just corrected the students answer if they were wrong. When the students said "the people is" directly the teacher corrected it by saying "are". In this case the teacher corrected the grammatical form of the students' sentences. The other example can be seen when the students said (Coats) / Kots. The teacher corrected it by saying "Kowts".
In this case, the teacher corrected the students' pronunciation when reading the text.
Giving Direction
Giving direction was another type of teacher talk that occurred in this study, and it occurred frequently across the four meetings. The directions produced by the teacher (the bold sentences in the transcription above) are a kind of teacher talk that is very useful in class, as they help the students to follow the instructions and complete tasks well.
Criticizing Student Behavior
Criticizing student behavior was the one type of teacher talk that was never produced by the teacher in this study, because the students in the class were four polite girls who followed the teacher's directions well. Thus, the teacher had no reason to criticize the students' behavior.
Criticizing Student Response
Criticizing students' responses is one type of teacher talk that also occurred in this study. As can be seen from the transcription, the teacher sometimes criticized a student's response when the student made a mistake. An example of criticizing a student's response made by the teacher is provided below.
T : Visit. Okey. You write down the place where do you want to go:, and then: you fill this one, when would you like to go:: ee why would you like to go there::: and what would you like to do there= S1 : =kalo 2 hh hh(How if I write down two) T : e::: maybe you can choose one-(1.0) Only one T : [ behind it. Oke:y please replay the letter S1 : kalo singkat padat jelas ((laughter)). (Can I write as short as possible?) T : No:: make it long too. Like-(.) when you write it From the transcription above, it can be seen that the teacher sometimes cricize or reject the students willing. When the S1 said =kalo 2 hh hh (How if I write down two)the teacher answered: e::: maybe you can choose one-(1.0) Only one. Actually the teacher had reason why she did not allow the S1 to write down two. It can be because the limitation of time that they have. The other example, when the S1 said kalo singkat padat jelas ((laughter)). (Can I write as short as possible?)the teacher directly said No:: make it long too. Like-(.) when you write it. It means thatthe teacher wanted the students explore their idea in the written form, not only in in speaking form.
The Patterns of Classroom Interactions
In ELT, interaction patterns are the different ways learners and the teacher can interact in the class.
Using the right interaction pattern is a fundamental factor in the success of any activity and the achievement of its aims (Teaching English, 2009). Based on the observations during the teaching-learning process in the classroom, it was found that the dominant pattern of classroom interaction was individual work rather than group work or group discussion. This was evident when the teacher gave the students the assignment of writing an email to a friend and replying to a friend's email. In addition, the teacher asked the students one by one about their plans for where they would like to go; in this case, the students explored their ideas one by one in front of the teacher.
In individual work, the students finish the assignment by themselves, which makes it easy for them to focus. When someone works on a job alone, it becomes more convenient to concentrate properly.
Interruptions are as well, much less, when the person works alone. It becomes easy to focus. If in a group, one could get easily carried away while conversing. Work also becomes less productive as the group gets more involved in chatting, gossip sessions and so on (Sravanl, 2017). If the students wok in group, it will make them interrupt each other, because they will talk the out of the topic in the lesson. It can be seen in learning process; the students think seriously to write down their idea in an email. At once they asked the teacher about vocabulary that did not understand like the transcription below S4 : Miss, What is pemerintah= (Miss how to say in English pemerintah?) T : =Government S1 : eeh: kalo fasih berbicara (How about fasih berbicara) T : Speak fluently . Fasih berbicara Another that by finishing the assignment individually, the students will be independent. They will finish their task by themselves without bothering the other friends, even though they still needed the other friends and the teacher's help at once.
However, individual work like that observed in the learning process also has disadvantages for the students, such as needing a long time to finish the task; a person may take longer to complete a job alone than in a group (Sravanl, 2017). This was evident in the observation: the students needed thirty minutes to finish writing an email, whereas if they had worked together they would have finished it sooner.
Besides that, the researcher also noticed the four patterns of classroom interaction (Lasantu, 2012). Those patterns were:
Interactions among students, or student-student interactions
These happened during the learning process, often in individual activities; for example, when the students found some difficulty in understanding the material or did not know the meaning of a word in English, they chose to ask a friend, as in the example from the transcript below:
S2 : Miss what is beradaptasi? S1 : adaptation hh [hh T : [adapt] a-d-a-p-t-(4.0)
In the transcript above, S1 (another student) answered S2's question, but the answer was wrong, so the teacher corrected it.
Teacher-whole class interactions
These happened during the class discussion and when the teacher conveyed learning material and gave instructions to the students. This pattern was the most commonly used in classroom interaction. The transcript above shows the teacher-whole class interaction, in which the students could respond to the teacher's explanation.
Interactions between teacher and group discussion
These happen during small group discussion when the teacher clarifies the students' difficulties with the given task. Based on the observation, however, this pattern of interaction did not occur in the learning process, because the students worked individually.
Teacher-individual student interactions
These happened when the teacher had close interaction with one student, such as answering a student's question when one of them had some difficulty. This type of classroom interaction pattern occurred throughout, as can be seen in the transcription below:
CONCLUSION
Concerning the results of the findings and the discussion of the study, it can be concluded that the classroom interaction that occurred during the teaching-learning process generally ran well. The teacher and students were able to carry out their roles in the learning process in the class. In addition, the teacher employed ten of the eleven categories of teacher talk, namely dealing with feelings, praising and encouraging, joking, using ideas of students, repeating students' responses verbatim, asking questions, giving information, correcting without rejection, giving direction, and criticizing students' responses.
Of the four patterns of classroom interaction considered in the study, those observed included: interactions among students (student-student interactions), which happened during small group discussion and classroom discussions; and teacher-whole class interactions, which happened during class discussion and when the teacher conveyed learning material and gave instructions to the students. Teacher-group interaction, which happens during small group discussion when the teacher clarifies the students' difficulties with the given task, did not occur, because there was no group discussion in the class interaction; the students worked individually.
"year": 2021,
"sha1": "39ff7539a8e2759dd4c7415da1c2bdb3a9ff622b",
"oa_license": "CCBY",
"oa_url": "http://journal.uin-alauddin.ac.id/index.php/elstic/article/download/25379/12790",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "167c0f0db12b66dca66704a4f77a74cee71f01ce",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": []
} |
The effect of sacrificial templates on the pore characteristics of sintered diatomite membranes
Recently, porous ceramic membranes have become an interesting subject due to their outstanding thermal and chemical stability. Among the many types of ceramics, as diatomite is inherently porous and irregular, it is worthwhile to investigate the relationship between the characteristics of sacrificial templates and the porous microstructures obtained after sintering. Therefore, sintered diatomite membranes were prepared with 8 μm solid polymer spheres, 20 μm solid polymer spheres, wheat starch, and light clusters of aggregated carbon nanotubes, while varying the amount of sacrificial template material, by dry pressing at 25 MPa. The results show that the characteristics of the sacrificial templates, e.g., the rigidity, directly affect the pore characteristics and accordingly determine the permeability of sintered diatomite membranes. Also, we discuss whether the largest pore sizes and average pore sizes of the sintered diatomite membranes reflect the actual permeability appropriately.
Introduction
Porous ceramics are increasingly important 1) as researchers seek to exploit their unique properties, such as high wear resistance, 2) low thermal conductivity, 3)5) and a low dielectric constant. 6) Notably, porous ceramic membranes 7)11) are among the most feasible applications of porous ceramics. The driving force behind the development of porous ceramic membranes is mostly the need to produce membranes with greater thermal and chemical stability, as most polymeric membranes cannot withstand operating temperatures above 200°C or exposure to organic solvents such as benzene and toluene. 12) Recent developments related to porous ceramic membranes have heightened the need to investigate mass transport through a membrane. Although there have been various reports on commonly used materials for ceramic membranes, including £-Al 2 O 3 , 10),13) ¡-Al 2 O 3 , 9),14) TiO 2 , 15),16) ZrO 2 , 17) SiO 2 , 18) and composites of these materials, 19), 20) there have been few studies on porous and irregular starting particles such as diatomite. Diatomite is a sedimentary rock originating from the siliceous fossilized skeletons of diatoms, which are composed of rigid cell walls called frustules. 21)23) To date, no detailed studies regarding the use of a membrane made with inherently porous and irregular particles have been published.
One of the straightforward processing routes for the preparation of porous ceramic membranes is the sacrificial template method. This method usually consists of the preparation of a twophase composite consisting of a continuous matrix of a ceramic phase and a dispersed sacrificial template phase that is initially homogeneously distributed throughout the matrix and is ultimately pyrolyzed to generate pores within the microstructure. 1) The main advantage of a sacrificial template method in comparison with other methods is the ability to tailor the porosity, pore size distribution, and pore morphology of the sintered ceramic membrane precisely through the appropriate choice of the sacrificial template. However, in particular, when the shapes of the starting particles are inherently porous and irregular, like diatomite particles, it is not certain as to whether the characteristics of sacrificial templates efficiently affect the final properties of a sintered diatomite membrane to the same degree as dense and uniform starting particles such as Al 2 O 3 and ZrO 2 .
Therefore, to determine whether the shape of a sacrificial template can be transferred onto the pore structure of a sintered diatomite membrane effectively, different types of sacrificial templates were investigated. These were (i) 8 μm solid polymer spheres, (ii) 20 μm solid polymer spheres, (iii) wheat starch, and (iv) light clusters of aggregated carbon nanotubes. This study investigates in detail two factors that may dominate the pore characteristics of the sintered diatomite: the relative size and the rigidity of the sacrificial templates.
Material and methods
Diatomite (Celite 499, Celite Korea Co. Ltd., Korea) was used for the preparation of the sintered diatomite membranes. The average particle size of the as-received diatomite was 12.79 μm. To enhance the sinterability of the diatomite particles, the average particle size of the diatomite was reduced by ball-milling. The diatomite particles were mixed with distilled water as a solvent, and the slurry was ball-milled for 24 h with an alumina ball-to-powder volume ratio of 2:1. The particle size of the ball-milled diatomite was analyzed by a particle size analyzer (LS 13 320 MW, Beckman Coulter, USA).
Diatomite particles, together with 0 to 15 vol.% of PMMA (8 μm solid spheres or 20 μm solid spheres, Sigma-Aldrich, USA), wheat starch (Wheat starch, Sigma-Aldrich, USA), or carbon nanotubes (CM95, Hanwha Chemical Co., Ltd., Korea) as a sacrificial template, and a polyethylene glycol binder, were mixed, dry pressed at 25 MPa, and sintered at 1200°C for 1 h. In addition, diatomite particles, together with 0 to 25 vol.% of Expancel hollow polymer spheres (Expancel-920-DET-40-d25, Eka Chemicals AB, Sweden), distilled water, and a polyethylene glycol binder, were mixed, wet-pressed at 1 MPa, and dried for 24 h before being sintered at 1200°C for 1 h.
The pore characteristics of the sintered diatomite membranes were investigated by scanning electron micrography (JSM-5800, JEOL, Japan). Average pore sizes of the sintered diatomite membranes were measured by mercury porosimetry (Autopore IV 9510, Micromeritics, USA). The flow rate, Darcy's permeability constant and the largest pore size of the sintered diatomite membranes were characterized by capillary flow porosimetry (CFP-1200-AEL, Porous Materials Inc., USA). Particularly, the largest pore size was measured by the bubble point method, which is the most widely used approach for evaluating pore sizes and which is capable of determining the largest pore size of a membrane. It is based on the feature, for a given fluid and pore size under constant wetting, that the pressure required to force an air bubble through the pore is inversely proportional to the size of the pore.
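For reference, the bubble point relation underlying this measurement can be written as follows. This is a generic sketch assuming the standard cylindrical-pore (Young-Laplace) approximation; the symbols are defined in the comments and are not values taken from this study.

% Bubble point relation (cylindrical-pore approximation)
\begin{equation}
  D \;=\; \frac{4\,\gamma\cos\theta}{\Delta p}
\end{equation}
% D: largest (limiting) pore diameter, \gamma: surface tension of the wetting liquid,
% \theta: contact angle (\approx 0 for a fully wetting liquid), \Delta p: applied bubble point pressure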
Results and discussion
Typical Scanning Electron Microscope (SEM) images of raw materials, i.e., diatomite particles after ball-milling for 24 h, asreceived 8¯m solid polymer spheres, as-received 20¯m solid polymer spheres, as-received wheat starch, as-received light clusters of aggregated carbon nanotubes, and high magnification images of light clusters of aggregated carbon nanotubes, are shown in Figs. 1(a)1(f ), respectively. In Fig. 1(a), the diatomite particles maintained both the irregular shapes and inherent pores of the fossilized skeleton of diatoms after ball-milling for 24 h. Figures 1(e) and 1(f ) depict light clusters of aggregated carbon nanotubes. Because carbon nanotubes tend to self-associate into micro-scale aggregates, 24), 25) disaggregation and a uniform dispersion of individual carbon nanotubes are critical challenges that must be met to utilize the unique properties of carbon nanotubes successfully. However, in this study, carbon nanotubes were used without an aqueous colloidal dispersion by a surfactant such as sodium dodecyl sulfate (SDS), because, in our preliminary study, above 2.5 vol.% of sacrificial template, regardless of the type, was needed to enhance the permeability of the sintered diatomite membranes observably, and this amount of carbon nanotubes already far exceeded the criteria of a homogeneous dispersion of carbon nanotubes. 26) Also, we only focused on the flexible and soft aspects of light clusters of aggregated carbon nanotubes in the comparison with the rigid solid polymer spheres.
Figures 2(a) and 2(b) show the similar particle size distributions of the sacrificial templates. The particle size distribution of the 8 μm solid polymer spheres corresponds to that of the wheat starch, and the particle size distribution of the 20 μm solid polymer spheres corresponds to that of the light clusters of aggregated carbon nanotubes.
The diatomite membrane prepared by dry pressing at 25 MPa without any sacrificial template had an inherently porous microstructure when sintered for 1 h at 1200°C, as shown in Fig. 3(a). Although dry pressing methods generally introduce pores and voids into green bodies, subsequently degrading the densification of conventional ceramics, 27) it was difficult to find peculiar voids or pores in the sintered diatomite membrane prepared by dry pressing, owing to the highly porous microstructure of the diatomite matrix. The densities of the diatomite membranes depended on the amount and kind of sacrificial template added, and varied from 0.8 to 1.0 g/cm³. Figures 3(b) and 3(c)
show a diatomite membrane prepared with the 8¯m solid polymer spheres and the 20¯m solid polymer spheres, respectively. As the average particle size of diatomite, which was ball-milled for 24 h, was 8.36¯m, the diatomite membrane had more distinct spherical pores in the diatomite matrix when prepared with the 20¯m solid polymer spheres as compared to the 8¯m solid polymer spheres. It is well known that micro-cracks can readily develop within the microstructure and act as escapee paths for the gas phase generated during the pyrolysis of a solid polymer template in a dense microstructure induced by uniform particles such as alumina and zirconia. However, in this study, we successfully sintered the diatomite membrane at a heating rate of 5°C/min without a time-consuming burn-out process, unlike the conventional porous ceramics fabrication process. This can be explained by the highly porous diatomite matrix, which acts as an escape path for the gas phase which evolves during the pyrolysis of the sacrificial template without the generation of micro-cracks. While, even porous and irregular diatomite particles do not mitigate concerns over micro-crack generation completely, the inherent porous microstructure of the sintered diatomite can withstand levels below a certain amount of gas phase locally generated during the pyrolysis process with an addition of up to 15 vol.% of the sacrificial templates. In contrast to the solid polymer template case, the diatomite membrane prepared with wheat starch had more slit-like pores compared to that prepared with solid polymer spheres, as shown in Fig. 3(d). This can be explained by the difference in the degree of rigidity between the solid polymer spheres and the wheat starch. A solid polymer sphere, which consists PMMA, has an elastic modulus of approximately 3 GPa, 28), 29) whereas the elastic moduli of various starches range from 0 to 500 MPa. 30) Therefore, slit-like pores may be induced when the mixture of diatomite particles and wheat starch is exposed to external pressure during dry pressing at 25 MPa. Figure 3(e) shows a diatomite membrane prepared with light clusters of aggregated carbon nanotubes. Although the average particle size of light clusters of aggregated carbon nanotubes was larger than that of the wheat starch, it is difficult to find a trace of light clusters of aggregated carbon nanotubes in the microstructure of the sintered diatomite. It may be that light clusters of aggregated carbon nanotubes tend to locate inside the inter-particle voids rather than in the continuous area of the diatomite matrix due to the flexibility of the carbon nanotube itself and the fragility of light clusters. pore size distributions of the sintered diatomite membranes prepared with 10 vol.% of 8 and 20¯m solid polymer spheres are in good agreement with the particle size distributions of the 8 and 20¯m solid polymer spheres, as shown in Fig. 2(a). Although the average size of the 20¯m solid polymer spheres is more than two times larger than that of the 8¯m solid polymer spheres, the main peak of 20¯m solid polymer spheres is slightly larger than that of 8¯m solid polymer spheres. This occurs because the main peaks of the 20¯m solid polymer spheres correspond to the throats 31) or entrance openings of the spherical pores approximately 20¯m in size, as mercury will enter the spherical pores at a pressure determined by the entrance size rather than the actual spherical pore size. 
Thus, the average pore sizes as measured by mercury porosimetry do not reflect the spherical pores induced by the sacrificial templates, and the average pore sizes are insufficient when seeking to understand the pore characteristics considering the permeability, which will be discussed again when referring to Figs. 5(a) and 5(b). Figure 4(b) shows the pore size distributions of the sintered diatomite membranes prepared with 10 vol.% of wheat starch and light clusters of aggregated carbon nanotubes at a sintering temperature of 1200°C. Unlike the particle size distributions of the wheat starch, the pore size distribution of the sintered diatomite prepared with wheat starch shows a broad peak reflecting the various sizes of the slit-like pores. Also, the pore size distribution of the sintered diatomite prepared with light clusters of aggregated carbon nanotubes shows a negligible difference from the sintered diatomite prepared without any type of sacrificial template, as expected in Fig. 3(e).
The air permeation properties of sintered diatomite membranes prepared with 10 vol.% of different sacrificial templates at a sintering temperature of 1200°C are shown in Fig. 4(c). The permeability of the sintered diatomite membranes with 10 vol.% of 20¯m solid polymer spheres was the highest among the four types of sacrificial templates, and that of the sintered diatomite membrane with 10 vol.% of light clusters of aggregated carbon nanotubes was the lowest. To complement this result, the sintered diatomite membranes were characterized by two different methods: average pore size measurements by mercury porosimetry and largest pore size measurements by capillary flow porosimetry.
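As a point of reference for these permeability comparisons, Darcy's law for viscous flow through a porous layer can be written as below. This is a minimal sketch in its simplest incompressible form (gas-flow measurements may include compressibility corrections), and the symbols are generic definitions rather than values reported in this study.

% Darcy's law for flow through a porous membrane (simplest incompressible form)
\begin{equation}
  Q \;=\; \frac{k\,A\,\Delta p}{\mu\,L}
  \qquad\Longrightarrow\qquad
  k \;=\; \frac{Q\,\mu\,L}{A\,\Delta p}
\end{equation}
% Q: volumetric flow rate, k: Darcy permeability constant, A: membrane cross-sectional area,
% \Delta p: pressure drop across the membrane, \mu: fluid viscosity, L: membrane thickness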
Figures 5(a) and 5(b) show the average pore sizes and the largest pore sizes of the diatomite membranes prepared with various amounts of sacrificial templates ranging from 0 to 15 vol.% while varying the type of sacrificial template (8¯m solid polymer spheres, 20¯m solid polymer spheres, wheat starch, and light clusters of aggregated carbon nanotubes) as a function of darcy's permeability constant, with all samples sintered at 1200°C for 1 h. Although some specimens had higher Darcy's permeability constants than others, the average pore sizes of the sintered diatomite remained nearly unchanged. However, when a sintered diatomite membrane had a Darcy's permeability constant, it had a larger largest pore size. Moreover, for comparison, the data of the sintered diatomite membranes prepared with a 45¯m hollow polymer sphere wet pressed at 1 MPa while varying the amount of the 45¯m hollow polymer spheres ranging 0 to 25 vol.% were also plotted, as shown in Figs. 5(a) and 5(b). Although the largest pore sizes of the sintered diatomite membranes prepared with the 45¯m hollow polymer spheres wet pressed at 1 MPa show a distinctly different trend from those of the sintered diatomite membranes prepared with other templates dry pressed at 25 MPa in Fig. 5(b), the average pore sizes of these membranes show a negligible difference from the average pore sizes of the others in Fig. 5(a).
This occurs because the average pore size as measured by mercury porosimetry accounts for all open pores regardless of the pore type, including blind pores, cross-linked pores, and through pores, 31) whereas, in principle, the largest pore size measured by capillary flow porosimetry ensures that the pores are interconnected and act as pore channels. Thus, the measured largest pore size of the sintered diatomite membrane can not only provide the largest size of the solute that can pass through as a surface membrane but also can describe appropriately the permeability of the sintered diatomite membrane. In the literature, the average pore size and the largest pore size of a track-etched polymeric membrane with a cylindrical pore structure and a very narrow pore size distribution, representing the ideal conditions for measuring the largest pore size, are essentially the same. 32) However, the average pore size and the largest pore size of a metallic membrane with an asymmetrical pore structure 33) are unambiguously different. One important consideration is that these studies 34) focused on the discrepancies among the average pore size, the largest pore size, and pore size distribution as measured by either mercury porosimetry or capillary flow porosimetry. In the present study, we intended to determine experimentally which method is practically more appropriate when designing a membrane with a specific Darcy's permeability constant and a specific pore size, particularly a ceramic membrane prepared by the sacrificial template method.
Also, these results above show that the characteristics of the sacrificial templates, such as the shape and rigidity, directly affect the final pore characteristics after the sintering process and accordingly determine the permeability of the sintered diatomite membranes.
Conclusion
In summary, sintered diatomite membranes were prepared with sacrificial templates of 8 μm solid polymer spheres, 20 μm solid polymer spheres, wheat starch, and light clusters of aggregated carbon nanotubes. The diatomite membranes were sintered at 1200°C for 1 h at a heating rate of 5°C/min, without the time-consuming burn-out process required in the conventional sacrificial template method. Although the diatomite membranes prepared with solid polymer templates had spherical pores, the diatomite membrane prepared with wheat starch had more slit-like pores than those created with solid polymer spheres, due to the difference in the rigidity of the sacrificial templates. Furthermore, the diatomite membrane prepared with light clusters of aggregated carbon nanotubes had no particular pore shape, owing to the flexibility of the carbon nanotubes themselves and the fragility of the light clusters.
It is noteworthy that the characteristics of the sacrificial templates directly affect the pore characteristics after the sintering process, and accordingly determine the permeability of the sintered diatomite membranes. Also, the largest pore size of the sintered diatomite membrane can not only provide the largest size of the solute that can pass through as a surface membrane but can also appropriately describe the permeability of the sintered diatomite membrane.
"year": 2013,
"sha1": "5eeead855fd51055efe95cd71dacaaa6b40b23bc",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jcersj2/121/1419/121_JCSJ-P13123/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "541f69023f96933711e18f61fdc219e02b5b01e0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
A Joint Detection and Recognition Approach to Lung Cancer Diagnosis From CT Images With Label Uncertainty
Automatic lung cancer diagnosis from computed tomography (CT) images requires the detection of nodule locations as well as nodule malignancy prediction. This article proposes a joint lung nodule detection and classification network for simultaneous lung nodule detection, segmentation and classification subject to possible label uncertainty in the training set. It operates in an end-to-end manner and provides detection and classification of nodules together with a segmentation of the detected nodules. Both the nodule detection and classification subnetworks of the proposed joint network adopt a 3-D encoder-decoder architecture for better exploration of the 3-D data. Moreover, the classification subnetwork utilizes the features extracted from the detection subnetwork and multiscale nodule-specific features for boosting the classification performance. The former serves as valuable prior information for optimizing the more complicated 3-D classification network directly, so that suspicious nodules can be better distinguished from other tissues than with direct backpropagation from the decoder. Experimental results show that this co-training yields better performance on both tasks. The framework is validated on the LUNA16 and LIDC-IDRI datasets, and a pseudo-label approach is proposed for addressing the label uncertainty problem due to inconsistent annotations/labels. Experimental results show that the proposed nodule detector outperforms state-of-the-art algorithms and yields comparable performance to state-of-the-art nodule classification algorithms when classification alone is considered. Since our joint detection/recognition approach can directly detect nodules and classify their malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodule detection.
I. INTRODUCTION
Lung cancer is the primary cause of cancer deaths worldwide. The 2018 Global Cancer Statistics [1] show that there are approximately 1.8 million deaths and 2.1 million new cancer cases caused by lung cancer, ranking first among all cancers. Early diagnosis of a small tumor can prevent metastasis of the cancer and substantially improve the prognosis and survival rate [2]. Therefore, the development of an intelligent computer-aided diagnosis system (CADS) can be beneficial to the early treatment of lung cancer.
Volumetric thoracic computed tomography (CT) is the most commonly used imaging technique for lung scans [3], and it can be used to detect lesions in the lung called pulmonary nodules. Such nodules can be benign or malignant, and the detection of the latter is of great importance. One difficulty in detecting the nodules from these CT scans is that the nodules absorb the same level of X-ray as normal body tissues. Thus, there is no apparent intensity discrepancy. The distinctive features of pulmonary nodules are primarily related to shape and location. Figure 1 shows an example 2-D slice from such a volumetric (3-D) CT scan. It can be seen from Figure 1 (c) that the tiny pulmonary nodule has no distinctive feature compared with vessels in the 2-D image. However, the vessels have a continuous structure, while nodules are isolated. This motivates us to develop a network for detecting nodules and malignancy using 3-D volumetric data instead of fusing results from multiple 2-D slices. On the other hand, humans are more proficient in extracting information from 2-D images than 3-D volumetric images. Therefore, a thorough analysis of CT scans by clinicians can take much time, increasing the cost of such checks. Compared with checking by doctors, a CADS has the potential advantage of taking the three-dimensional image data into account and quickly outputting potential nodule candidates for reference or confirmation. More importantly, the CADS approach can even learn and accumulate the experience of radiologists via continuous training. Hence, it may provide very stable predictions comparable to, or even outperforming, a single experienced radiologist [4]. Hence, it is helpful to develop an efficient CADS for the diagnosis of lung cancer from CT images. In the literature, such automatic diagnosis usually consists of two steps: nodule detection and nodule classification [5]. With the success of deep learning in natural image processing, most recent studies on these two tasks are based on the convolutional neural network (CNN) [6]-[9]. Methods for nodule detection usually rely on networks for object detection problems, including faster R-CNN [10] and YOLO [11], which output region proposals of the target objects. The nodule classification problem, on the other hand, is usually regarded as a 3-D image recognition problem using the data at the detected regions as inputs. 3-D extensions of well-known image classification networks such as ResNet [12] are widely used.
Despite these advances, a fully automatic CADS for lung nodule detection and cancer classification still presents several major challenges. First of all, separating the detection and classification tasks usually reduces the overall classification rate, as considerable amounts of detected nodules are, in fact, false positives. Introducing a simple classification stage to refine the detected nodules after the detection task can considerably reduce the false-positive results [13], which would otherwise mislead the classification task later. Therefore, it is desirable to develop a methodology for joint nodule detection and malignancy classification.
Secondly, most pulmonary nodules are small and isolated in the raw CT scans. The shape of the nodule thus serves as an informative feature for distinguishing it from other body tissues. Therefore, it is desirable to exploit the 3-D nature of the data for better classification. However, due to the significantly increased number of parameters of 3-D neural networks, most conventional approaches are still based on multiple 2-D networks [14]-[16]. The primary obstacle to applying the 3-D model in nodule classification is the overfitting problem arising from the increased number of parameters and the limited number of training samples. For instance, while ImageNet [17] uses millions of images for training, there are only 1018 scans in the LIDC-IDRI [18]-[20] lung cancer CT dataset.
Finally, for some cases, the labels of the radiologists may not be consistent or may be missing (say, the nodules may be labelled by only one or two of the radiologists rather than all of them). This arises because labeling nodules as benign or malignant using CT images depends mostly on the experience of radiologists and the limitations in the data collection process. Unless a single consistent label can be agreed on (as in some datasets), such uncertain labels, which we shall also refer to as marginal labels, will arise for some nodules. In fact, they are commonly found in the LIDC-IDRI dataset. If the network is forced to fit these marginal samples, the performance usually deteriorates, as reported in [15], [16]. This problem is usually referred to as the label uncertainty problem. Though a precise probabilistic model to describe such variations can be difficult to obtain, it is desirable that such adverse effects on the overall performance of the network can be mitigated. All these motivate us to develop a joint detection and recognition approach to lung cancer diagnosis and segmentation from CT images with possibly marginal or uncertain labels.
An important advantage of the proposed joint detection/recognition approach is that it can directly detect nodules and classify their malignancy instead of performing the two tasks separately. Therefore, our approach is more practical as it can be applied in an end-to-end manner to automatic cancer and nodule detection. Moreover, the proposed joint nodule segmentation/recognition (JNSC) network is capable of exploring the semantic segmentation information [21] to yield a more detailed segmentation of the nodules and their malignancy instead of a conventional simple region proposal. It is known that nodule malignancy is highly related to its morphology. The segmentation information offered by our proposed JNSC network can provide a valuable morphology description of the detected nodules, which can be useful in differentiating malignant tumors from scars or other complications.
FIGURE 2. System overview of the proposed framework. The detection phase outputs multiple potential nodules. The recognition phase uses features of the detection phase to build an additional classifier to discriminate them into three classes: benign, cancer and non-nodule. Only the benign and cancer nodules are then evaluated for the nodule detection task and classification task. Importantly, in the classification task, the undetected nodules are directly labeled as benign to report the result. The architecture of the joint nodule detection and classification network is shown in Figure 3.
From the neural network training point of view, the encoded features and initial segmentation obtained in our nodule detection network serve as valuable prior information for the subsequent classification process. This not only helps the classification network to extract more discriminative features but also makes it possible to train our 3-D neural network for classification and further refine the segmentation map without suffering from excessive overfitting. Figure 2 shows the system overview of the proposed network, where the input CT image is passed through the proposed joint nodule detection and recognition network to provide a segmentation map of the nodule as well as its malignancy prediction. Our JNSC network is a 3-D network and it adopts the encoder-decoder architecture with multiscale feature extraction, which has the advantage of encoding the desired location information as well as shape information of the nodules. Moreover, instead of simply cascading the detection and classification networks, a path for extracting discriminative features from the output of the encoder of the nodule detection module to the classification network is proposed. These features are jointly trained from the two networks and provide valuable additional information for improving the classification performance.
Thanks to this additional information provided by the nodule detection network, the proposed 3-D JNSC can be trained from scratch despite the limited number of training samples. Moreover, the encoder in our JNSC is trained on the whole CT image, which allows it to distinguish other body tissues for nodule detection. Experimental results to be presented later show that the joint detection and classification framework is superior to the sole classification approach with an improvement of 1.25% in terms of accuracy. This is in accordance with previous studies in scene geometry and semantics research [22], [23], where it has been demonstrated that multi-task learning can effectively boost the overall performance.
Finally, to address the label uncertainty problem, we treat the problem as a training problem with label noise [24] where the noisy label will be corrected during the training phase. In the lung nodule diagnosis problem, samples with inconsistent or missing annotations are commonly encountered and they may be less reliably annotated. Here, we introduce the concept of pseudo-label to alleviate the adverse effect of these possibly less reliable annotations. More precisely, the unreliable annotations are detected and their labels are re-estimated as ''pseudo-labels'' by minimizing a variant of the cross-entropy loss function, which is capable of seeking a better tradeoff between network prediction and fitting errors. While the true model of these less reliable labels is difficult to obtain in practice, the use of the more robust cross-entropy loss function effectively prevents the network from overfitting those less reliable marginal samples. Experimental results show that training with the proposed pseudo-labels can improve the accuracy by 2.44% compared with the hard-label assignment and by 1.31% compared with the soft-label assignment. The proposed approach has been evaluated and compared with state-of-the-art algorithms on the publicly available LIDC-IDRI dataset. In particular, the nodule detection phase is validated on the LUNA16 [13] competition dataset, which is a subset of LIDC-IDRI. The result shows that our proposed nodule detection network outperforms state-of-the-art algorithms while achieving comparable results with state-of-the-art nodule classification algorithms. Since our joint detection/recognition approach can directly detect nodules and classify their malignancy in an end-to-end manner instead of performing the two tasks separately, our approach is more practical for automatic cancer and nodule detection. Moreover, the segmentation map of the nodules and their malignancy are available from the network output, which provides valuable information on the morphology of the tumor. The rest of the paper is organized as follows. Section II briefly reviews the literature of related works. The information of the dataset under study is given in Section III. The proposed network architecture, feature extraction, and joint optimization methods are presented in Section IV. The experimental results, analysis, and comparisons are presented in Section V. Section VI summarizes the major findings/contributions and possible limitations of the work. Finally, conclusions are drawn in Section VII.
II. RELATED WORKS
A. NODULE DETECTION
Nodule detection from CT images usually involves two steps: i) nodule candidate proposal and ii) false-positive reduction [30]. The goal of nodule detection is to identify potential nodule candidates from the remaining lung tissues, whereas the false-positive reduction aims to suppress potential false positives due to interference from tissues such as blood vessels, etc. TABLE 1 summarizes some recent works on nodule detection and their performance. Traditional detection methods usually rely on hand-crafted features and classic image segmentation methods [31]. Recently, a more extensive dataset, LIDC-IDRI, was made publicly available. Hence, more sophisticated deep learning-based methods can be applied, and significantly better performance over traditional approaches on the larger dataset has been demonstrated [13], [32].
In Ding et al. [33], a 2-D region proposal network, transferred from a general image detection framework [10], was proposed, and an impressive sensitivity of 94.6% under 15 candidates per scan was achieved. Though a 2-D network generally has fewer parameters than a 3-D network, it cannot fully utilize the 3-D shape information. Therefore, more recent studies [9], [34], [35] tend to adopt 3-D CNNs to solve the problem directly. For instance, Khosravan and Bagci [35] propose a 3-D densely connected region proposal network to acquire the region proposals. This densely connected network connects every two layers in the network, while a typical network only connects two successive layers. Therefore, it usually improves the overall performance over a normal layer-by-layer connected network, while requiring much fewer parameters than many conventional 3-D networks. Besides the region proposal network, Pezeshk et al. [8] proposed to segment the nodules from the CT scans directly. Similar pixel-wise segmentation has been widely applied to biomedical-related applications, in which the 3-D U-Net [36] and V-Net [37] are prevalent network architectures. While segmentation can provide more accurate information than detection only, it is also more involved as more detailed annotation is required. Since LIDC-IDRI has recently released pixel-wise segmentation labels, training deep networks for nodule segmentation is now feasible, and it can potentially provide more information to the joint detection (segmentation) and classification of lung nodules.
False-positive reduction is another essential step after nodule detection to eliminate false-positive candidates, and 3-D CNNs are usually preferred [4], [8], [32], [33] because of their excellent performance. The network usually undertakes a classical classification task, i.e., classifying nodules against non-nodules. Furthermore, there is no need to develop an independent network as features can simply be transferred from the detection stage for performing classification. In Qin et al. [4], the features from the nodule detection network are directly cropped. As the LUNA16 competition provides an additional false-positive reduction (FPR) task which labels many possible false-positive nodules, better performance is achieved if an FPR network is trained to refine the detection result. Moreover, it is observed that even if the false-positive samples in the detection task are collected without additional labels from the FPR task, training their own FPR networks can also improve the result [25], [35].
B. NODULE CLASSIFICATION
Currently, nodule classification is performed either on the patient level or the nodule level. On the patient level, only the binary label for each patient is available regardless of the number of nodules of the patient. Liao et al. [34] proposed an end-to-end CADS and won the competition for patient-level lung cancer classification. The nodule-level evaluation is popular because it has an accurate label for each nodule and avoids the variance arising from the multiple-instance problem. Indeed, the frameworks of both levels are quite similar, except for the training strategy.
Some classical image processing descriptors, including Local Binary Pattern (LBP) [38], Histogram of Oriented Gradients (HOG) [39], and the Fourier shape descriptor [40], were first exploited in nodule classification. Nevertheless, deep learning-based approaches usually outperform these hand-crafted features [15]. Zhao et al. [41] propose a hybrid approach using the well-known AlexNet and LeNet to classify the nodule slice, and the performance is superior to single-model methods. Moreover, in order to alleviate the overfitting problem, the 3-D nodules can be decomposed into multiple views [32], so that the 3-D network is simplified to multiple 2-D networks. Recently, Xie et al. [16] adopt, in total, 27 ResNets for classifying the 3-D nodules from 9 viewpoints. Similarly, Hussein et al. [42] adopt a slice-by-slice approach by fusing the results from all the slices. Although many studies [9], [15] have focused on 3-D architectures, the performance is usually inferior to these 2-D ensemble methods. Liao et al. [34] first incorporated nodule classification into the nodule detection network and trained the detection and classification networks alternately. Zhang et al. [43] fine-tune the classification network from the detection network and show that classification performance can benefit from information of the detection stage. Moreover, Xie et al. [44] show that joint training can boost segmentation and classification in skin lesion analysis. While the choice of 2-D or 3-D networks in nodule classification remains controversial, we shall focus on the 3-D network as it is more promising in exploring the morphology information of pulmonary nodules. Notably, we extended the co-training method in [34] for training our 3-D network, to be described in Section IV.
C. LABEL NOISE
Estimating the malignancy level of nodules from morphology depends mainly on the experience of the clinicians, and there are inevitably variations and perhaps errors for difficult cases. Therefore, labels may not always be consistent, especially when only a few annotations are available. Although up to 4 radiologists label the data in the LIDC-IDRI database [18]-[20], many samples are labeled by only one radiologist. Such uncertainty in the labels is usually referred to as label noise. Frenay and Verleysen [45] give a comprehensive review on tackling label noise. Manwani and Sastry [46] studied the noise tolerance performance of various loss functions and found that the 0-1 loss has the best noise tolerance ability. Zhang et al. [47] developed a probabilistic model to deal with potential misclassification where the noisy label is used as prior information for updating the posterior probability. These algorithms mainly focus on loss functions and label correction. Other improvements proposed include data cleansing [48], [49] and model-based methods [50], [51].
Since training a neural network is time-consuming, it is hard to train a neural network several times until the noise correction converges. Patrini et al. [52] recently proposed a two-stage training method which adapts the loss function at the first stage and re-trains the network at the second stage. Adjusting the loss function is preferred on the neural network-based model because it can be easily integrated into the current framework if the loss function is differentiable.
III. DATASET
In this study, the LIDC-IDRI [18]- [20] dataset from The Cancer Imaging Archive (TCIA) is used to evaluate the performance of our proposed network. There are 1018 scans obtained from seven institutions in the dataset, and four experienced thoracic radiologists annotate each scan with detailed nodule location as well as malignancy level. However, the radiologists sometimes cannot reach a consensus for some lesions, and therefore, some nodules are annotated by one to three radiologists.
The diameter of the nodules ranges from 3 mm to 30 mm, and the malignancy level is evaluated on a 5-point scale where 1 represents a 'Highly unlikely' nodule, 3 represents 'Indeterminate' and 5 represents 'Highly suspicious'. Following the settings in previous studies [15], [16], [53], we calculate the malignancy score (MS) by taking the median of the malignancy levels from the different annotations and label the nodules with MS<3 as benign, MS=3 as uncertain, and MS>3 as malignant. Note that uncertain nodules are excluded in the testing phase. Moreover, we observe that a considerable number of nodules are marginally classified as benign or malignant, and some nodules are only annotated by one radiologist, which may introduce label uncertainty. Thus, we further categorize the benign and malignant nodules as certain and marginal nodules. Marginal nodules are defined as nodules which are labelled by only one or two radiologists and whose median malignancy levels are between 2 and 4, inclusive. We list the precise number of nodules in each class in TABLE 2. For nodule detection, we adopt the Lung Nodule Analysis 2016 (LUNA16) [13] dataset to evaluate the performance of the nodule detection algorithms. The LUNA16 dataset is a subset of the LIDC-IDRI dataset. To better evaluate the nodule detection algorithms, the scans with a slice thickness greater than 2.5 mm are excluded from the LIDC-IDRI dataset. LUNA16 only consists of nodules whose diameters are larger than 3 mm and which are annotated by at least three radiologists. Therefore, there are in total 888 scans with 1186 nodules in the challenge. Due to the large image size, most works tend to train the detection and classification networks on a small voxel volume, like 64 × 64 × 64, randomly sampled from the entire image. Afterward, to obtain the final detection/classification for a particular subject, one needs to apply the network to the many sub-voxels of the entire image and aggregate the respective outputs. For instance, in the LUNA challenge, the results obtained by applying the detection to 64 × 64 × 64 voxels with a shift of multiples of 32 voxels in any of the three directions are averaged to form the performance metric. In this study, we use the official 10-fold split in LUNA16 to report the detection performance, and we randomly split the scans in LIDC-IDRI into 10 folds five times to report the nodule classification performance.
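The labeling rule above can be made concrete with a short sketch. The following Python snippet is illustrative only; the function and variable names are not taken from the authors' code. It computes the MS as the median of the available ratings and applies the benign/uncertain/malignant and marginal criteria described above.

import statistics

def label_nodule(ratings):
    """ratings: list of 1-5 malignancy levels from the available radiologists."""
    ms = statistics.median(ratings)
    if ms < 3:
        label = "benign"
    elif ms > 3:
        label = "malignant"
    else:
        label = "uncertain"            # excluded in the testing phase
    # Marginal: annotated by only one or two radiologists and 2 <= MS <= 4.
    marginal = len(ratings) <= 2 and 2 <= ms <= 4
    return ms, label, marginal

print(label_nodule([2, 2]))            # (2.0, 'benign', True)
print(label_nodule([5, 4, 5, 4]))      # (4.5, 'malignant', False)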
IV. PROPOSED METHOD
We now present our joint nodule segmentation and recognition network (JNSC) and its construction, which consists of the following steps: 1) data pre-processing and data augmentation (DPA), 2) multiscale voxel-based feature extraction and nodule size estimation (MVFNSE), 3) pseudo-label assignment for marginal samples (PLA), and 4) jointly-optimized nodule segmentation and classification (JNSC). In the DPA step, the training samples are generated from the CT scan data after a standard processing procedure. Moreover, additional training samples are generated using data augmentation techniques to improve the robustness of the neural networks against various variations such as rotation of the input, etc. The input voxel is assumed to be a voxel cube with size 64 × 64 × 64. Next, we shall introduce the network architecture, and the details of the above four steps will then be presented.
A. NETWORK ARCHITECTURE
The proposed joint nodule segmentation and recognition network (JNSC) is shown in Figure 3. It adopts the V-Net [37] as the backbone because the V-Net uses a multiscale encoder-decoder architecture and can perform pixel-wise segmentation. The upper and lower branches form the encoder and decoder in a V-Net architecture where the input voxels are segmented to yield the segmented output at the lower left corner. The encoder and decoder are arranged in a multiscale manner where features are extracted at each scale via the voxel-based feature extraction layer (see also Figure 4). The multiscale features and the nodule size are also estimated in the MVFNSE step, and they are then concatenated (denoted by the block CC in Figure 3) for predicting whether the current block is a nodule, and whether it is benign or malignant (the middle path in Figure 3).
In the MVFNSE step of the nodule detection subnetwork, the possible locations of the nodule at each scale are estimated from the initial segmentation outputs to form the nodule location map (NLM), which consists of bounding boxes containing potential nodules (as shown in Figure 4). The nodule-specific region (NSR) is obtained by applying a threshold to the nodule segmentation map. The nodule location map (NLM) is generated as 3-D bounding boxes encapsulating the NSR, which is introduced to tolerate the irregular shape of the potential nodules. Nodule-specific features are extracted at the locations of the NLM and are fed into the voxel-based feature extraction layer. Finally, the flattened feature vectors from multiple scales as defined in Figure 3 are concatenated (block CC in Figure 3) for classification using the soft-max criterion. Note that, in the example of Figure 4, it is assumed that three possible nodules are detected, each with a multiscale feature vector. Each of these candidate nodules will pass through the linear layer and the softmax unit shown in the middle of Figure 3 to yield the classification output for all these nodule candidates. The number of nodules detected (i.e., the number of NLMs) can vary from one input voxel volume to another.
For classification, the multiscale features of each nodule candidate in each NLM and the nodule size will be fed to the linear layer and softmax layer for classification, as shown in the middle path of Figure 3. It should be noted that there may be more than one nodule candidate (or none) detected inside each voxel volume; each candidate has its own multiscale feature vector, and each of these feature vectors passes through the linear layer and the softmax unit to yield the classification output for all the nodule candidates (please refer to Figure 4 for more details). The PLA step adjusts the labels for the marginal nodules to avoid possible overfitting of the marginal samples. The feature vector, together with the segmentation outputs, enables us to jointly optimize the segmentation and classification in a single network in the JNSC step. Training and other details of the above operations will now be discussed.
B. DATA PRE-PROCESSING AND AUGMENTATION (DPA)
The LIDC-IDRI dataset consists of CT scans from seven institutions. Therefore, the pixel spacing and slice thickness may vary between scans. To reduce the variation from inconsistent resolution, we simply normalize all scans into a resolution of 1.0 mm × 1.0 mm × 1.0 mm by spline interpolation. Besides, the raw CT images are clipped to between −1000 and 400 Hounsfield units (HU), which can reduce the effect of air and bone in the images. The last step is normalizing the CT images to zero mean and unit variance, as commonly done in training neural networks. In each epoch, we extract two voxel volumes from each scan. One of the voxels contains a nodule, and if a scan has multiple nodules, we randomly pick one of the nodules each time. The other voxel is extracted from the normal region, which does not include any nodule. The motivation for sampling voxels from nodules is to increase the occurrence of the nodule in the training data, while sampling other positions is to encourage the network to better distinguish other body tissues.
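A minimal sketch of this pre-processing pipeline is given below, assuming a NumPy volume in HU and its voxel spacing as inputs; the function name and the use of scipy are illustrative, not the authors' implementation.

import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, target=(1.0, 1.0, 1.0)):
    """volume: 3-D array of HU values; spacing: (z, y, x) voxel size in mm."""
    factors = [s / t for s, t in zip(spacing, target)]
    volume = zoom(volume, factors, order=3)                   # spline resampling to 1 mm
    volume = np.clip(volume, -1000, 400).astype(np.float32)   # suppress air and bone
    return (volume - volume.mean()) / (volume.std() + 1e-8)   # zero mean, unit variance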
Different from many studies [15], [16], which mainly consider nodule classification, we do not require the nodules to be located in the center of the voxels. To reduce overfitting and improve the generalization ability of the network, we further adopt data augmentation by randomly rotating the extracted voxels. The rotation is done in one of the x-y, x-z, and y-z planes with equal probability each time. To avoid the blank region caused by rotation, we only rotate the image by one of the angles 0°, 90°, 180°, or 270°, with equal probability.
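The rotation augmentation can be sketched as follows; the helper below illustrates the rule described above (axis-aligned 90-degree rotations in a randomly chosen plane) and is not the authors' exact code.

import numpy as np

def random_rotate(voxel, rng=np.random):
    plane = [(0, 1), (0, 2), (1, 2)][rng.randint(3)]   # x-y, x-z or y-z plane
    k = rng.randint(4)                                  # 0, 90, 180 or 270 degrees
    return np.rot90(voxel, k=k, axes=plane)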
C. MULTISCALE VOXEL-BASED FEATURE EXTRACTION AND NODULE SIZE ESTIMATION (MVFNSE)
As mentioned, we choose the V-Net [37] as the backbone of our JNSC because the V-Net adopts a multiscale encoder-decoder architecture and can perform pixel-wise segmentation. The multiscale voxel-based feature extraction has three steps: i) generation of the nodule location map (NLM), ii) extraction of the multiscale features, and iii) concatenation of the nodule size information to the feature vector. We summarize these procedures in Figure 4. The yellow contours denote the ground-truth nodule boundary annotated by at least three radiologists. The final segmentation result is a binary map obtained by thresholding the network output, which has values from 0 to 1. Thus, the segmentation map will depend on the applied threshold. In this illustration, a conservative threshold of 0.4 is used. For the best performance, it can be further optimized via cross-validation. It should be noted that the CT images (nodules) are 3-D volumetric images, and the 2-D images (nodules) shown are their x-y, y-z and x-z cross sections.
To generate the nodule location map, the network is trained on the pixel-wise segmentation from the radiologists. Therefore, we can acquire the corresponding nodule probability map from the output of the detection network. The nodule probability map contains the probability of each pixel being classified as a nodule. Note that the dimension of the map is identical to the input voxel, which is 64 × 64 × 64. Afterward, we empirically use a detection threshold of 0.4 (40%) to include more suspicious regions for detection. The probability map is then transformed into a binary segmentation map, where 1 represents nodules and 0 represents non-nodules. Because the shape of the detected nodule is irregular at this stage, as shown in Figure 5, we propose to draw a bounding box encapsulating each nodule to tolerate the irregular shape and reduce the variance in extracting nodule-specific features (note that the bounding box is used only for feature extraction; the final segmentation output will be derived from these features, as shown in the lower branch of the joint network in Figure 3). The region inside the box is called a nodule-specific region (NSR). The NSR is found based on its voxel connectivity in the binary map [54]. It should be noted that the segmentation results at this stage may contain errors; say, a single voxel or a small patch of voxels may be detected, which are likely to be false positives. Therefore, the NSRs extracted may be false positives. Fortunately, these false positives are not that many, and their labels are available. Therefore, they are also extracted and will be labelled as non-nodule against benign and malignant nodules, and this preliminary decision information can then be corrected at the classification stage. To this end, we pre-train the detection network at initialization so as to simplify its joint training with the classification network.
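The NLM generation can be summarized by the following sketch, which thresholds the probability map, labels the connected components, and returns one axis-aligned bounding box (and voxel count) per NSR. The use of scipy.ndimage and the function names are assumptions made for illustration.

import numpy as np
from scipy import ndimage

def nodule_location_map(prob_map, threshold=0.4):
    """prob_map: 64x64x64 nodule probability map from the detection decoder."""
    binary = prob_map > threshold
    labels, num = ndimage.label(binary)            # connected components in 3-D
    boxes = ndimage.find_objects(labels)           # one (z, y, x) slice tuple per NSR
    return [(box, int(np.count_nonzero(labels[box] == i + 1)))
            for i, box in enumerate(boxes)]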
Compared with a pixel-wise NSR, using the bounding-box NSR for feature extraction has the following benefits. Firstly, accurate morphology information is prone to segmentation errors. Secondly, it allows information/features surrounding the nodules to be extracted for performing the classification at the final stage. Finally, even if the segmentation is extremely accurate, it may be smeared by the subsequent convolution layers. Therefore, more emphasis should be paid to the features of the nodule voxels as well as their neighborhood. Hence, the final nodule location map (NLM) is generated based on the NSR to tolerate the mentioned effects.
For the extraction of the multiscale features, the size of the input voxel is 64 × 64 × 64, which will be down-sampled 4 times in the encoder network. Therefore, we have feature maps of size 64, 32, 16, 8, and 4, as shown in Figure 3. The NLM is also down-sampled to the same size as each feature map, as shown in Figure 4. For each feature map, we crop the features from the corresponding location in the NLM. Following the feature cropping, we further add 1 × 1 convolution layers to aggregate inter-channel information. An adaptive max-pooling operation on the features is then performed, where the features from the first two voxel-based feature extraction layers V1 and V2 are pooled into a uniform spatial size of 2, while those at the third to fifth layers V3, V4, and V5 are pooled into a spatial size of 1. Because of the adaptive max-pooling layer, the length of the final feature vector is invariant to the size of the NSR, and it can be flattened and concatenated among different scales.
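A PyTorch-style sketch of one voxel-based feature extraction layer is given below: the NLM region is cropped from the encoder feature map, mixed by a 1 × 1 × 1 convolution, adaptively max-pooled to a fixed spatial size (2 for V1/V2, 1 for V3-V5), and flattened. Module and variable names are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelFeature(nn.Module):
    def __init__(self, channels, out_size):
        super().__init__()
        self.mix = nn.Conv3d(channels, channels, kernel_size=1)   # 1x1x1 channel mixing
        self.out_size = out_size                                  # 2 for V1/V2, 1 for V3-V5

    def forward(self, feat, box):
        z, y, x = box                                  # slices of the (down-sampled) NLM box
        crop = feat[:, :, z, y, x]                     # crop the nodule-specific region
        crop = F.relu(self.mix(crop))
        pooled = F.adaptive_max_pool3d(crop, self.out_size)
        return torch.flatten(pooled, start_dim=1)      # fixed-length vector per scale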
The last step of the MVFNSE is to concatenate the nodule size information to the feature vector. It is widely recognized that nodule size is highly related to the malignancy level, and a larger size is usually associated with a higher probability of being malignant. The pooling operation in step 2 is invariant to nodule size, and therefore, we can directly add the information to the concatenated features. The nodule size V is estimated from P, the number of pixels belonging to the given nodule in the NSR. The nodule diameters vary from 3 mm to 30 mm, and the resolution of the segmentation result is 1.0 mm × 1.0 mm × 1.0 mm. Since large values in the features may dominate the classification performance, the estimated size is scaled by a factor of 0.1, which was determined empirically. It was found that the performance is relatively insensitive to this choice. The final feature used for classification consists of the concatenated multiscale features from step 2 and one dimension for the estimated nodule size. Each feature vector will pass through the linear layer and the softmax unit to yield the classification output for all the nodule candidates detected inside the voxel volume (please refer to Figure 4 for more details).
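Since the size formula itself is not reproduced here, the following sketch only illustrates the idea: the voxel count P of the NSR (1 mm^3 voxels) is converted to a single size feature and scaled by 0.1 before concatenation. The equivalent-diameter conversion used below is an assumption, not the paper's exact expression.

import numpy as np

def size_feature(num_voxels_p, scale=0.1):
    # Assumed conversion: equivalent spherical diameter in mm from the voxel count.
    equivalent_diameter = (6.0 * num_voxels_p / np.pi) ** (1.0 / 3.0)
    return scale * equivalent_diameter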
D. PSEUDO-LABEL ASSIGNMENT FOR MARGINAL SAMPLES (PLA)
In nodule classification, some nodules are labelled by only 1 or 2 radiologists. Moreover, radiologists are likely to be inconsistent on the malignancy level, especially for nodules with a marginal level of malignancy. To address this issue in training our network, we propose a pseudo-label approach for those marginal nodules to alleviate the effect caused by label uncertainty. More precisely, the cross-entropy loss on which our training is based is L_ce = -[T_i log(p_i) + (1 - T_i) log(1 - p_i)], where T_i and p_i are the malignancy score and the probability predicted by the network, respectively. Here, the labels ''0'' and ''1'' represent the benign and malignant nodules respectively. However, due to label uncertainty, T_i is usually not chosen as either 0 or 1, and the soft label in (3), computed from M_i, the MS of the i-th nodule, is preferred.
Here, we re-estimate the underlying label, called the pseudo-label p̃_i, for those marginal nodule samples and continuously adapt it based on the network prediction as well as the MS. Specifically, by initializing the pseudo-label with the soft label in (3), the resultant loss function L̃_ce is formed by using the pseudo-label in the cross-entropy term and adding a regularization term weighted by α, where α balances the influence of the MS and the network prediction on the pseudo-label. If α is large, the pseudo-label will mainly depend on the MS and L̃_ce will approach the cross-entropy loss. On the contrary, if α is small, the pseudo-label is dominated by the network output, which is not desirable because the training information T_i cannot guide the learning process. The influence of α on the classification result will be further studied in the experiment section. By introducing the regularization in L̃_ce, the pseudo-label becomes adjustable. The gradients of L̃_ce required for performing the optimization are given in (5) and (6). We now briefly explain the advantage of the proposed pseudo-label approach. Firstly, if the network prediction is consistent with the MS, the first term in (6) will increase the certainty of the pseudo-label, which will implicitly increase the weight on this sample. For example, if the network prediction value p_i is 0.7, the first term in (6) is negative and the corresponding p̃_i will become larger during optimization. This larger p̃_i will increase the absolute value of the gradient in (5), which in turn will encourage learning from the sample. On the other hand, if the network prediction contradicts the MS, forcing the network to fit the sample may cost the network its generalization ability due to the MS noise. Thus, for such samples, the first term in (6) will drive p̃_i towards p_i, which will implicitly lower the weight of learning from such samples. Besides, the second term in (6) is used to penalize the pseudo-label for large deviations from T_i, which avoids large fluctuations in the pseudo-label. Thus, the pseudo-label can be regarded as a weight reflecting our confidence in the marginal label, given the original annotation as well as the current network knowledge.
The pseudo-label can be updated using gradient descent as p̃_i ← p̃_i − r_2 ∂L̃_ce/∂p̃_i, where r_2 is the learning rate for the pseudo-labels, as given in (7). Since the pseudo-label represents the probability of malignancy, it should be bounded between 0 and 1. Therefore, the update in (7) is further projected onto these bound constraints as p̃_i ← min(max(p̃_i, 0), 1), as given in (8).
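The pseudo-label update can be sketched as below. The cross-entropy term follows the description above, while the quadratic form of the α-weighted penalty keeping p̃_i close to T_i is an assumption made for illustration; the projected gradient step and the clamping to [0, 1] follow (7) and (8).

import numpy as np

def update_pseudo_label(p_tilde, p_net, t_soft, alpha=10.0, r2=0.01):
    eps = 1e-7
    # Gradient of -[p_tilde*log(p_net) + (1 - p_tilde)*log(1 - p_net)]
    #             + alpha*(p_tilde - t_soft)**2 with respect to p_tilde.
    grad = -np.log(p_net + eps) + np.log(1.0 - p_net + eps) \
           + 2.0 * alpha * (p_tilde - t_soft)
    p_tilde = p_tilde - r2 * grad                  # gradient-descent step, cf. (7)
    return float(np.clip(p_tilde, 0.0, 1.0))       # projection onto [0, 1], cf. (8)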
E. JOINTLY-OPTIMIZED NODULE SEGMENTATION AND CLASSIFICATION (JNSC)
The proposed JNSC network comprises a nodule detection module and a nodule classification module with a shared structure for information exchange. The features for nodule classification can be extracted from the encoder of the nodule detection module, which provides additional information for feature extraction. For training this joint network, we first train the nodule segmentation network for 100 epochs using a pixel-wise cross-entropy loss, where S_i denotes the predicted probability of pixel i belonging to a nodule. After the initialization of the nodule segmentation network, the output segmentation may still generate many false-positive nodules. To overcome this problem, we extract not only features for true-positive nodules, but also those of false-positive nodules for classification. Moreover, the false-positive nodules are labelled as non-nodule with probability 1. The network is then trained jointly. For the following 100 epochs, we do not update the pseudo-labels because the network prediction is unstable at these early stages. Finally, once the segmentation and classification modules are properly initialized, the network can be optimized using the overall cost function combining the segmentation loss and the pseudo-label classification loss. Different from [34], where the segmentation and classification networks are trained iteratively, the parameters in both the detection and classification modules of the proposed JNSC can be updated simultaneously.
Additionally, because all the parameters in our network are differentiable, they can be optimized by an efficient optimizer like Adam. In each epoch, which consists of a number of iterations, the network parameters are updated at each iteration. Since the parameters are likely to have received sufficient training after each epoch, each pseudo-label is updated once per epoch. To reduce the effect of previous gradients, the pseudo-labels are directly updated by gradient descent without momentum.
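An illustrative PyTorch training step for the joint network is sketched below: an unweighted sum of the segmentation and classification losses is assumed, both modules are updated in the same backward pass, and the per-epoch pseudo-label refresh would follow the loop. All names are hypothetical.

import torch

def train_epoch(model, loader, optimizer, seg_loss_fn, cls_loss_fn):
    for voxels, seg_target, cls_target in loader:
        seg_out, cls_out = model(voxels)                 # joint forward pass
        loss = seg_loss_fn(seg_out, seg_target) + cls_loss_fn(cls_out, cls_target)
        optimizer.zero_grad()
        loss.backward()                                   # updates both modules simultaneously
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
    # Pseudo-labels of the marginal samples are refreshed here, once per epoch.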
F. IMPLEMENTATION DETAILS
Our proposed network mainly consists of three convolution layers, and the parameters of the convolution layers are listed in TABLE 3. Each convolution layer is followed by an instance normalization [55] layer and a ReLU layer. The parameters in our network are optimized by the Adam optimizer with its default settings in PyTorch. The initial learning rate is 0.001, and it is decreased every 250 epochs by a factor of 0.2. The maximum number of training epochs is set to 1000 and the batch size is 12. The spatial dropout strategy is applied to the 3-D convolutions with a dropout rate of 0.1. We also employ gradient clipping during the optimization by clipping the gradient to 1 if the L2 norm of the gradient is larger than 1, for the sake of stability.
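The optimization settings above correspond to the following PyTorch sketch (illustrative only; the model variable is a placeholder).

import torch

def make_optimizer(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=250, gamma=0.2)
    return optimizer, scheduler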
Since the number of benign nodules is almost 2 times that of the cancer nodules, a class-imbalance problem will occur. Specifically, the non-nodule pixels in L_seg and the benign nodules in L_ce will dominate the training phase if no balancing mechanism is used. To alleviate this problem, we therefore adopt different weights in the cross-entropy loss. Specifically, in the nodule detection module, they are chosen as 0.01 and 0.99 for nodule and non-nodule pixels, respectively. In the nodule classification module, the weights for the malignant, benign, and non-nodule classes are set to 0.35, 0.55 and 0.1, respectively. In principle, the weights are chosen according to the ratio of samples in the two classes. Of course, one can increase the weight to allow the network to focus more on the cancer samples. The weights in the nodule segmentation also adopt a similar criterion, where the weight of non-nodule pixels is about 100 times that of the nodule pixels. The performance does not depend critically on these weights as long as they reflect the difference in the sample numbers between classes.
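The class weighting can be expressed with PyTorch's weighted cross-entropy, as in the sketch below. The ordering of the classes inside each weight vector is an assumption for illustration; the values follow the text above.

import torch
import torch.nn as nn

# Classification head: assumed class order [malignant, benign, non-nodule].
cls_loss = nn.CrossEntropyLoss(weight=torch.tensor([0.35, 0.55, 0.10]))
# Pixel-wise detection/segmentation loss: assumed class order [nodule, non-nodule].
seg_loss = nn.CrossEntropyLoss(weight=torch.tensor([0.01, 0.99]))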
V. EXPERIMENTAL RESULTS
A. NODULE DETECTION
We first evaluate the nodule detection performance of our JNSC and other state-of-the-art algorithms on the LUNA16 dataset. The standard ten-fold cross-validation of the LUNA16 competition is adopted, and the standard evaluation script is used to compute the Free-response Receiver Operating Characteristic (FROC) curve.
To extract the nodule candidates from the 3-D nodule detection probability maps, we first set the detection threshold to 0.4 and label the connected regions in the segmentation map based on their voxel connectivity [54]. Then, the region proposals can be extracted from the labelled map, and the center of each proposal is calculated as the center of mass of the proposed region. Lastly, we use non-maximum suppression [56] on the proposed regions and exclude those with a diameter less than 3 mm. Figure 5 shows five examples of our nodule detection results with a wide range of nodule diameters. To visualize the 3-D segmentation results in a 2-D figure, we present the cross sections of the nodules as well as the corresponding segmentation maps along the x-y, y-z and x-z planes. It can be seen from Figure 5 that the detected regions are relatively larger than the ground truth.
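The candidate-extraction procedure just described can be sketched as follows; the equivalent-diameter estimate and the scipy-based implementation are assumptions for illustration, and the final non-maximum suppression step is omitted for brevity.

import numpy as np
from scipy import ndimage

def extract_candidates(prob_map, threshold=0.4, min_diameter=3.0):
    labels, num = ndimage.label(prob_map > threshold)
    centres = ndimage.center_of_mass(prob_map, labels, range(1, num + 1))
    cands = []
    for idx, centre in enumerate(centres, start=1):
        voxels = np.count_nonzero(labels == idx)
        diameter = (6.0 * voxels / np.pi) ** (1.0 / 3.0)   # mm, assuming 1 mm^3 voxels
        if diameter >= min_diameter:
            cands.append((centre, diameter, float(prob_map[labels == idx].max())))
    return sorted(cands, key=lambda c: -c[2])              # NMS [56] is applied afterwards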
Moreover, as shown in Figure 5 (b), our network can detect tiny nodules while distinguishing the small nodule from other body tissues such as vessels. The resolution of CT scans in the z-axis is much lower than the resolution in the x- and y-axes. For instance, the resolution in the x- and y-axes is usually 0.7 mm per pixel, but the resolution in the z-axis can vary from 1.25 to 3 mm per pixel. To ensure similar accuracy in the three dimensions, we employ interpolation to convert the resolution along the three dimensions to 1 mm per pixel. The result shows that our network can tolerate the problem of different resolutions and achieve similar performance along the three dimensions.
1) PERFORMANCE OF JOINTLY OPTIMIZED NODULE DETECTION
To verify the effectiveness of the structure, we compare the performance of our proposed approach on nodule detection under standard settings with and without the classification phase. In the detection-without-classification case, the outputs from the detection phase are directly evaluated using the standard evaluation script. In the joint training case, the outputs are further classified as non-nodule, benign and malignant, and only the benign and malignant nodules in the final result are evaluated. The FROC curves under the two settings are shown in Figure 6. As shown in Figure 6, the jointly-optimized approach significantly outperforms the detection-only case, which shows that the classification stage can significantly reduce false-positive nodules. More specifically, the sensitivity of JNSC with classification at 0.125 false positives per scan is 0.776, while that of the detection-only case is 0.630. Because the undetected nodules at low false-positive levels are primarily small ones, the joint optimization approach is capable of significantly improving the detection performance on such tiny nodules, which is essential to the early detection of the disease.
The detection module of the JNSC is trained using the pixel-wise cross-entropy cost function. Since large nodules have more pixels, they will dominate the training phase, as the gradients are mainly backpropagated from these large nodules. Consequently, the small but important nodules can easily be neglected. Moreover, despite the shortcut path, the gradient backpropagated from the decoder may be less sensitive to the small nodules. On the other hand, the direct path of our JNSC to every encoder level helps to propagate the gradient from the classification network to train the encoder so that undetected small nodules can be distinguished from the non-nodule region during the training phase.
It can facilitate the detection of those small nodule regions which are not detected by the detection network alone. It is also observed that the information backpropagated along this direct path is much more direct and effective than the gradient backpropagated from the classification output through the whole network, due to the large separation between the classification output and the detection encoder. Moreover, the classification phase simultaneously performs false-positive reduction, which further improves the detection rate.
Additionally, from Figure 6, the proposed JNSC with and without classification achieves an impressive sensitivity of 0.953 and 0.942, respectively, at 8 false positives per scan, which further demonstrates the effectiveness of joint optimization. ZNET [13] and Aidence [13] were participants in the competition and won the first and second places. ZNET uses a 2-D U-Net [36] architecture and computes the nodule probability map slice by slice. Though the 2-D network cannot fully utilize the 3-D structure of the nodules, the parameters to be trained are much fewer than those of a 3-D network. ZNET achieves a CPM of 0.811 and a sensitivity of 0.915 at 8 false positives per scan. The detailed method of Aidence is unavailable because of commercial confidentiality; Aidence achieves a CPM of 0.807 in the competition.
Despite the advantage of having fewer parameters in 2-D networks, 3-D neural networks have recently been preferred due to their ability to detect 3-D patterns and the increased availability of computational power. DeepMed [8] was extended to a 3-D architecture, but the network is relatively shallow. Also, an independent false-positive network is trained to distinguish the detected candidates. Our JNSC is deeper than [8], which can capture more complicated structures, and the false-positive reduction stage is implicitly incorporated into the JNSC. SDFPR [4] and DeepLung [9] adopt the faster R-CNN structure, which performs regression of the nodule location as well as its probability, but not pixel-wise segmentation as in our JNSC. Their encoder-decoder architecture is similar to our network, but our network has an additional shortcut path to the encoder. Hence our network can be more sensitive at a low false-positive level. For example, our JNSC obtains 0.776 sensitivity at 0.125 false positives per scan, while SDFPR [4] achieves approximately 0.62.
The 3D-CNN in [25] uses a combination of 2-D and 3-D networks where the 2-D network is used for candidate detection while the 3-D network is used to classify false positives. The candidate detection network can benefit from the pre-trained VGG network while the 3-D network can only be trained from scratch. The conditional non-maximum suppression in [25] is superior to normal NMS. However, the two networks are still independent of each other while our network adopts a joint optimization approach. The result shows that the CPM of our JNSC outperforms [25] by 2.4%.
The S4ND [35] employs a single end-to-end network and replaces convolution blocks with densely connected convolution blocks. The results in [35] show that the densely connected block outperforms the regular residual connection.
However, S4ND does not perform false-positive reduction after detection, while a considerable number of the tiny detected candidates are, in fact, other body tissues. Our JNSC jointly achieves false-positive reduction with the help of the classification network and outperforms state-of-the-art algorithms.
B. NODULE CLASSIFICATION
We now evaluate the nodule classification performance using the LIDC-IDRI dataset. As described in Section III, the uncertain nodules are excluded from evaluation. We randomly split the 1018 scans into ten subsets and adopt 10-fold cross-validation to report the result. Additionally, each fold is trained five times to reduce the effect of network initialization. Note that the uncertain nodules in the testing set are excluded from calculating the accuracy.
The classification network in our proposed JNSC requires the segmentation result from the nodule detection network to perform multiscale voxel-based feature extraction. In order to compare with other classification only algorithms, those undetected nodules are directly labelled as benign. We have also neglected the false positives in the nodule detection process.
1) COMPARISON WITH THE STATE-OF-THE-ART ALGORITHMS ON NODULE CLASSIFICATION
To our knowledge, few studies report the end-to-end result, and therefore, the comparisons can hardly be absolutely fair. We follow the common practice that nodules with MS = 3 are excluded, as these nodules are uncertain as to whether they are benign or cancerous. Therefore, we report algorithms using the same MS criterion and CT scans as ours. It should be noted that our system is end-to-end, which is more challenging than just classification of the nodules, as the nodule detection process may itself be error-prone. On the other hand, our framework is closer to a realistic operating environment.
The accuracy, sensitivity, and specificity of the proposed approach on nodule classification are reported and compared with state-of-the-art algorithms. Moreover, as there are more negative samples than positive samples in the dataset, the network is likely to perform better on the negative samples (thus, the specificity is usually higher than the sensitivity). Hence, the negative samples will have more influence on the accuracy. To illustrate the overall performance of the algorithms despite these effects, we also report the balanced accuracy to better reveal and compare the performance. More precisely, the balanced accuracy is defined as (sensitivity + specificity)/2. Furthermore, to verify the effectiveness of the segmentation information in the proposed joint-optimization approach, the NSR is replaced by the ground-truth region and the JNSC is trained without segmentation, i.e., it is operated in classification-only mode. In particular, we do not backpropagate the gradient from the segmentation module so that the encoder is trained only by the classification network. The results show that the joint training performs better than the classification-only mode.
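For completeness, the reported metrics can be computed from a binary confusion matrix as in the short sketch below.

def classification_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    return sensitivity, specificity, balanced_accuracy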
As shown in TABLE 5, our proposed JNSC achieves the highest balanced accuracy and sensitivity among the algorithms. Although 2D-MV-KBC [16] has the best accuracy, the higher accuracy results from the imbalanced classes, where specificity can contribute more to the overall accuracy. Moreover, 2D-MV-KBC only considers classification on the extracted nodule patches, while our algorithm does not require the nodule location to be known in the training phase. Although the 2-D U-Net is adopted for labelling the nodule from the patch, training the network on the extracted patches is still much easier than on the entire CT scans because the extracted regions are free from the interference of many other body tissues. Moreover, 2D-MV-KBC requires training 27 independent networks so that their results can be aggregated, which significantly increases its complexity. In [16], a three-dimensional version with 3 independent networks based on ResNet-50 is also proposed. Experimental results show the 2-D network outperforms the 3-D network, which is likely due to the fact that the 2-D network can benefit from the pre-trained ResNet-50 network.
On the other hand, the proposed 3-D JNSC can be trained from scratch since the nodule detection network can provide additional information in the form of regularization to alleviate the overfitting problem caused by insufficient training samples. Moreover, the encoder in our JNSC is trained on the whole CT image, which allows it to distinguish other body tissues for nodule detection. The experimental results show that the joint detection and classification framework is superior to the classification-only approach, with an improvement of 1.25% in accuracy. Overall, our approach is more practical for automatic cancer and nodule detection.
The MC-CNN [15] is the first to introduce the approach of cropping nodule-specific features, which is similar to our multiscale feature extraction method. However, our algorithm differs from [15] in that: i) our extraction is based on the nodule detection, while MC-CNN uniformly extracts multiscale features by using successive max-pooling on each feature map, and ii) MC-CNN requires nodule-centric inputs (i.e., prior identification of the location of the nodules to be classified by the network), while our JNSC is more flexible in that the nodule can occur anywhere in the voxels and our feature extraction is invariant to the nodule location. Moreover, MC-CNN employs 2-D convolutions on the 3-D inputs (i.e., treated as multiple 2-D channels), and hence the information among slices may not be efficiently exploited.
In conclusion, our JNSC is at least comparable to the state-of-the-art nodule classification algorithms with respect to accuracy, sensitivity, and specificity for the classification-alone task. On the other hand, the JNSC is fully automatic and does not require pre-selected inputs of the detected nodules. In fact, it can be operated in an end-to-end manner.
2) ANALYSIS OF THE EFFECT OF PSEUDO-LABEL
To examine the effect of labels on the classification performance of our approach, experiments are performed on the following three cases: 1) assigning a hard label to each nodule, by which each nodule is labelled either ''0'' or ''1''; 2) substituting the hard label with the soft label, by which each nodule is labelled based on the MS in (3); and 3) replacing the soft label with our pseudo-label for the marginal nodules. The results are shown in TABLE 6. Apparently, the performance of using the hard label is the worst among the three methods. This phenomenon reveals that classification in the biomedical area is different from natural image recognition because the ground truth is not absolutely correct. Inconsistent labels may arise in the biomedical area due to human errors. It is noted that we are not proposing a physical model to accurately model the probability that a label is uncertain. Instead, we empirically estimate the reliability of the marginal samples and their associated labels via the cross-entropy loss function so as to prevent the network from overfitting these less reliable samples, which would affect the overall performance. Consequently, assigning soft labels in classification can significantly improve classification accuracy. However, the soft label requires the estimation of a probability, which may also introduce additional noise when only a few annotations are available. In this study, we assume that the nodules annotated by at least three radiologists are reliable, while nodules annotated by fewer than three radiologists and not highly confident are marginal. We then estimate and update a soft label in the form of pseudo-labels for the marginal nodules based on the annotation and the network prediction to reduce the noise mentioned above.
To visualize and validate the effectiveness of the proposed pseudo-label during the training phase, the histograms of pseudo-labels before and after training under different values of the regularization parameter α are shown in Figure 7. Figure 7 (a) plots the initial distributions of pseudo-labels. Note that the data are acquired on a randomly selected fold. We then examine the effect of α on the pseudo-labels. As shown in Figure 7 (b), a lower α pushes the pseudo-labels towards the boundary, where they behave like hard labels on the marginal samples. This can be explained by the fact that the network prediction results dominate the pseudo-label update. However, this is undesirable because little information can then be learned from the marginal nodules. The result shows that the network tends to fit the benign nodules; the highest specificity is achieved, but the overall performance is inferior to that of the soft label. When α grows larger, we observe from Figure 7 (d) that α still forces the pseudo-labels towards the boundary, but the changes are less severe than before.
Moreover, the network increases the malignancy probability of some benign nodules, revealing that the network treats such nodules as malignant. The network is not trained to mine the marginal samples. Instead, it relies more on the reliably labelled data for classification, as the marginal samples may not be absolutely correct due to label uncertainty. This problem is commonly encountered in biomedical applications, where the ground truth may not be precisely gauged from limited human labels. This is in great contrast to natural image classification and language understanding, where such labels are usually correct, except for occasional human errors. In summary, the pseudo-label approach addresses label uncertainty by incorporating the network prediction results, or knowledge, in addition to the provided label.
Next, we observe that the regularization power does not grow linearly with increasing α. Figure 7 (h) shows that α = 20 performs similarly to α = 10. When α is set to 10, the majority of pseudo-labels vary only within a small range, as shown in Figure 7 (g). It is reasonable to conclude that the network prediction and the ground-truth annotation are balanced at α = 10, which yields the best overall performance. Theoretically, when α grows to infinity, the annotation governs the pseudo-label, making it identical to the soft label. We therefore do not explore larger values of α, and 10 is selected as the default value in this study.
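As a toy illustration of the role of α, the sketch below assumes that the pseudo-label of a marginal nodule is a convex combination of the annotation-derived soft label and the current network prediction; this is only one plausible reading of the qualitative behaviour reported for Figure 7, not the exact update rule used in the paper.

```python
def update_pseudo_label(soft_label, prediction, alpha):
    """Toy pseudo-label update for a marginal nodule (hypothetical rule).

    alpha -> 0   : the network prediction dominates (pseudo-labels drift
                   towards 0/1 and behave like hard labels);
    alpha -> inf : the annotation dominates (the pseudo-label stays a soft label).
    """
    return (alpha * soft_label + prediction) / (alpha + 1.0)

for alpha in (0.1, 1.0, 10.0, 20.0):
    print(alpha, update_pseudo_label(soft_label=0.4, prediction=0.9, alpha=alpha))
```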
3) ANALYSIS ON MULTISCALE FEATURE EXTRACTION
Our proposed JNSC relies on the features from several encoders to perform nodule classification. Hence, it is important to evaluate the effect of the number of multiscale features on the classification performance. The experiment is designed to observe the classification performance when concatenating features from the first encoder level V1 up to the deepest level V5. Note that the nodule size is still concatenated to the feature.
As shown in TABLE 7, the classification performance generally improves as deeper features are added. Although using features only up to V4 (i.e., discarding V5) yields higher accuracy and specificity, the performance is comparable to that of concatenating all features once the balanced accuracy and sensitivity are considered. Therefore, to maintain the consistency of the structure, we do not discard the feature from V5. The reason for this behaviour can be explained as follows. As features of different scales are extracted from the corresponding location in the multiscale feature maps, the convolution operations expand the receptive field, which means that the extracted features usually represent a larger region of the input CT image. For the features from V1 and V2, this effect is negligible. For the feature from V5, the effect can adversely affect the classification, especially for small nodules, because the feature may encode information from other body tissues. Meanwhile, small nodules are likely benign, and thus the specificity decreases after adding the V5 features.
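The multiscale extraction can be pictured as cropping one channel vector per encoder level at the (down-scaled) detected nodule position and concatenating them together with the nodule size. The sketch below uses invented tensor shapes and plain numpy; it illustrates the idea rather than reproducing the actual JNSC code.

```python
import numpy as np

def extract_multiscale_feature(feature_maps, nodule_xyz, nodule_size):
    """Crop the channel vector at the (down-scaled) nodule location from each
    encoder level and concatenate, appending the nodule size.

    feature_maps : list of arrays with shape (C_l, D_l, H_l, W_l), one per
                   encoder level V1..V5 (shapes here are hypothetical).
    nodule_xyz   : nodule centre (z, y, x) in input-volume coordinates.
    """
    vectors = []
    input_shape = np.array(feature_maps[0].shape[1:], dtype=float)
    for fmap in feature_maps:
        scale = np.array(fmap.shape[1:], dtype=float) / input_shape
        z, y, x = (np.array(nodule_xyz) * scale).astype(int)
        vectors.append(fmap[:, z, y, x])          # channel vector at that voxel
    vectors.append(np.array([nodule_size]))       # append nodule diameter
    return np.concatenate(vectors)

# Hypothetical encoder outputs for a 64^3 input patch.
maps = [np.random.rand(2 ** (4 + l), 64 // 2 ** l, 64 // 2 ** l, 64 // 2 ** l)
        for l in range(5)]
feat = extract_multiscale_feature(maps, nodule_xyz=(40, 21, 12), nodule_size=7.5)
print(feat.shape)
```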
VI. DISCUSSION AND FUTURE WORK
A deep-learning based approach for joint detection, segmentation and classification of nodules from 3-D CT scans has been proposed. Moreover, the concept of the pseudo-label has been proposed to tackle the problem of label uncertainty, which is commonly encountered in biomedical data. While most previously proposed algorithms focus on either detection or classification, the proposed algorithm operates in an end-to-end manner, providing detection and classification of nodules simultaneously, together with a segmentation of the detected nodules. Experimental results show that it outperforms state-of-the-art nodule detection algorithms and yields performance comparable to state-of-the-art nodule classification algorithms when classification alone is considered.
While natural images are usually two-dimensional, biomedical images, such as CT and MRI, are often three-dimensional. Since it is usually difficult for humans to efficiently visualize these three-dimensional data for detection, detailed segmentation and classification of regions of interest, the proposed algorithm offers a promising approach for developing similar computer-aided diagnosis systems.
In this work, we have employed a multi-task framework, which combines the detection and classification in a single network. Such an integrated approach allows essential information to be exchanged between the individual subnetworks and leads to higher performance in both tasks. Moreover, in many practical applications, it is necessary to provide users with the detailed location or morphology of the objects of interest, in addition to the final decision. In this work, we further extend the nodule detection to pixel-wise nodule segmentation, from which a more accurate shape or morphology description of the nodules can be obtained. Therefore, the present framework may also be useful in related applications.
Some limitations do exist in our study. Firstly, the patient-level prediction is not studied in this work. Secondly, the slice thickness of various CT scans can vary dramatically. The nodule detection competition (LUNA16) manually excludes the scans whose slice thicknesses are larger than 2.5 mm. The diameter of the small nodules is around 3 mm, which is very close to the slice thickness. Therefore, the low and variable resolution on the z-axis is another difficulty in nodule detection, especially for small nodules. Many studies [57]-[60] have adopted deep-learning-based super-resolution approaches to address this problem in CT and MRI images. It would be interesting to incorporate super-resolution into the proposed nodule detection and classification framework.
VII. CONCLUSION
A joint lung nodule detection and classification network for end-to-end lung nodule detection, segmentation and classification subject to possible label uncertainty in the training set has been presented. It operates in an end-to-end manner, which provides detection and classification of nodules simultaneously, together with a segmentation of the detected nodules. A 3D encoder-decoder architecture is adopted for better exploration of the 3D nature of the data. The nodule classification subnetwork of the joint network utilizes the features from the encoder output of the detection subnetwork and the multiscale nodule-specific features for boosting the classification performance. This valuable prior information also allows the more complicated 3D nodule classification encoder network to be optimized directly with improved performance on both tasks. Evaluation using the LUNA16 and LIDC-IDRI datasets shows that the proposed nodule detector outperforms the state-of-the-art algorithms and yields performance comparable to state-of-the-art nodule classification algorithms when classification alone is considered. Finally, since our joint detection/recognition approach can directly detect nodules and classify their malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodule detection. | 2020-12-17T09:12:08.517Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "27aa6e42865f9c3c1305583cad7ec5a1ce5a49c3",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09294013.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "67c54314aa9ce50b8d761f38fb7713a74b7db138",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
260719408 | pes2o/s2orc | v3-fos-license | The computational power of the human brain
At the end of the 20th century, analog systems in computer science were widely replaced by digital systems owing to their higher computing power. Nevertheless, the question remains intriguing: is the brain analog or digital? Initially, the latter was favored, the brain being considered a Turing machine that works like a digital computer. More recently, however, digital and analog processes have been combined to implement human behavior in robots, endowing them with artificial intelligence (AI). Therefore, we think it is timely to compare mathematical models with the biology of computation in the brain. To this end, digital and analog processes clearly identified in cellular and molecular interactions in the Central Nervous System are highlighted. Beyond that, we try to pinpoint the features that distinguish in silico computation from biological computation. First, genuinely analog information processing has been observed in electrical synapses and through gap junctions, the latter both in neurons and astrocytes. In apparent contrast, neuronal action potentials (APs) or spikes represent clearly digital events, like the yes/no or 1/0 of a Turing machine. However, spikes are rarely uniform; they can vary in amplitude and width, which has significant, differential effects on transmitter release at the presynaptic terminal, where the quantal (vesicular) release itself is nevertheless digital. Conversely, at the dendritic site of the postsynaptic neuron, there are numerous analog events of computation. Moreover, synaptic transmission of information is not only neuronal, but heavily influenced by astrocytes tightly ensheathing the majority of synapses in the brain (tripartite synapse). At this point, LTP and LTD, which modify synaptic plasticity and are believed to underlie short- and long-term memory processes including consolidation (equivalent to RAM and ROM in electronic devices), have to be discussed. The present knowledge of how the brain stores and retrieves memories includes a variety of options (e.g., neuronal network oscillations, engram cells, astrocytic syncytium). Epigenetic features also play crucial roles in memory formation and its consolidation, which necessarily leads to molecular events like gene transcription and translation. In conclusion, brain computation is not only digital or analog, or a combination of both, but encompasses features operating in parallel and at higher orders of complexity.
Information processing in brain: theoretical concepts
The brain has always been compared with a highly sophisticated computer. To this end, scientists and computer technologists have been working jointly and in parallel to unravel the structural and functional connectivities and the dynamics of communication and information processing in the Central Nervous System. Toward the end of the last century, computer technology began to focus almost exclusively on digital information processing. And indeed, many events in the CNS run in an all-or-none, i.e., digital, manner as well.
Early concepts: Turing machine and reservoir computing
Despite different firing rates, all-or-nothing action potentials or spikes could be used for applications of mathematical algorithms in artificial neural networks (ANN), including series of discrete instructions based on Turing's work (Turing, 1936). In his mathematical analysis of algorithms, Turing assumed discrete time steps and discrete variables for computation [the Turing machine (TM)]. Consequently, the question has been raised whether the brain can be compared to a TM. However, in contrast to the algorithmic system of a TM, the human mind very often faces the problem of proving the truth of propositions. Its solution necessarily includes procedures that take their meaning into account, e.g., not just reading a text, but reading "between the lines." Those procedures, defined as semantical, can be activated in the human brain. This enables the brain to handle the notion of "meaning" (as a condition of truth). In other words, the human mind can associate the notion of proof with that of meaning, which contrasts with a TM. This assertion, however, has been vividly disputed and rejected [e.g., Kerber (2005)].
Analog computation, hence, contrasts profoundly with algorithms implemented in a TM. The great power of analog computation was also appreciated later by Von Neumann (1958) and Turing (1990), who investigated analog computation in brains and in cells, respectively. Additional work highlighting analog computation in the CNS was published at the same time (Tank and Hopfield, 1987). However, both analog and digital computing may be reconciled by analog-digital crossover. The fundamental reason for a substantial improvement of performance through analog-digital crossover lies in information theory: in the digital approach, information is encoded by many 1-bit interacting computational channels but in the analog approach by only one multi-bit computational channel (Sarpeshkar, 1998). In the end, the digital approach distinguished by high informational precision cannot compete with the lower informational precision in analog computation where all the bits are processed in parallel and the task is solved right away.
From that it may be concluded that the human CNS has developed ways of computation that cannot be reduced to the workings of a TM (Toni et al., 2007), because complex brain activities, like abstraction and mentation, require more "elastic" forms of computation (Arbib, 1987) far above any of today's machine learning techniques. More sophisticated information processing is needed such as hybrid computation, joining discrete and continuous forms of communication.
It is essential for the brain to create appropriate behavior based on relatively small amounts of information. To this end, it makes use of unsupervised learning as opposed to supervised learning. In the latter, the system is supplied with the correct answers to model, whereas in the former the learning system finds structural patterns on its own without guidance, i.e., there is no "training set" to learn from; in other words, it has to find statistically "independent" components within the input signal.
In fact, the CNS permanently has to analyze complex events in a steadily changing environment, where incoming stimuli lack any preset "label" or category (Popper and Eccles, 1977; Edelman, 1987). It has been proposed that those environmental signals have to be categorized by computational maps as intermediate steps of information processing (Knudsen et al., 1987). In such computational maps, a systematic variation in the value of the incoming physiological parameters occurs across at least one linear dimension of the neural structure. Groups of neurons belonging to a map can be viewed as analytical processors, filtering incoming signals in slightly different ways dependent on cellular responsiveness to the stimulus and operating jointly and in parallel. In that manner, the environmental input is converted into a place-coded probability distribution of cellular activation states. This parallel information processing has been put forward as a basic requirement for global map formation in Gerald Edelman's Extended Theory of Neuronal Group Selection (Edelman, 1989). On those grounds, it has been hypothesized that representations of complex memories are distributed and stored throughout the brain (Lashley, 1950; Hübener and Bonhoeffer, 2010; Josselyn et al., 2015), although the mechanisms of their formation are still enigmatic.
The vertebrate CNS contains a number of anatomical structures functioning not only as negative but also as positive feedback systems. For instance, the hypothalamus continuously releases neural and humoral signals processed within a black box of the target cells. This may result in either lowering (negative feedback) or enhancing (positive feedback) the discrete (neural) output. Those feedback systems are intrinsically connected by recurrent 3-dimensional neural networks that may or may not require any equivalent of full backpropagation through a multilayer network. Within a computer environment, backpropagation algorithms have been implemented to detect and correct input-layer errors in multi-layer neural networks, e.g., in reservoir computing (RC). As basis sets (or "reservoirs"), randomly connected recurrent networks, like "liquid-" (Maass et al., 2002) or "echo-state machines" (Jaeger and Haas, 2004), have been constructed. A delay-based, mixed analog and digital implementation of RC with a non-linear analog electronic circuit as its main computational unit meets the requirement of high dimensionality, which lies in the many degrees of freedom introduced by the delay time τ (Lakshmanan and Senthilkumar, 2011). Although the reservoir itself (the non-linear delay system) is analog, the input and readout are still digital. Reservoirs of random non-linear filters are one approach to approximate the various tuning properties of many neurons, encompassing high dimensionality and mixed selectivity, as observed in the prefrontal cortex (Enel et al., 2016). The leading hypothesis is that the storage of memories is reflected in the connection strengths between neurons (Crick, 1984), and that learning and storing new memories modify these strengths (Hebb, 2005). An elegant model of memory devised in the computer is the Hopfield network (Chaudhuri and Fiete, 2016). Learning in a Hopfield network (Hopfield, 1982, 1984) is like presenting the network with a noisy version of a previously stored fundamental memory. New attractors in the configuration space of the system, equivalent to a non-linear adaptation to the best fit, are constructed. When the configurations of the system are sufficiently close to a stored pattern, they dynamically relax toward the nearest fundamental memory and stay there indefinitely. But simulations of neuronal interactions in the brain, constructing artificial neural networks (ANN) and introducing supervised and unsupervised learning algorithms resulting in systems of artificial intelligence (AI), still leave many questions unanswered.
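A minimal numpy sketch of the Hopfield picture described above: memories are stored as attractors via Hebbian outer products, and a noisy cue relaxes towards the nearest stored pattern. Network size, number of memories and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian learning: weights are the summed outer products of the
    stored (+1/-1) patterns, with zero self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=200):
    """Asynchronous updates: the state relaxes towards the nearest
    stored 'fundamental memory' (an attractor of the network)."""
    state = state.copy()
    n = len(state)
    for _ in range(steps):
        i = rng.integers(n)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

memories = rng.choice([-1, 1], size=(3, 64))      # three fundamental memories
W = store(memories)

noisy = memories[0].copy()
flip = rng.choice(64, size=8, replace=False)      # corrupt 8 of the 64 units
noisy[flip] *= -1

retrieved = recall(W, noisy)
print(np.array_equal(retrieved, memories[0]))     # usually True at this low memory load
```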
Artificial intelligence
At this point, it is timely to evaluate the basic principles of AI, where it stands presently, and to compare it with the biological facts known until now about information processing and storage (memory) in the CNS.
Let's start with "Moravec's paradox" (Moravec, 1988), which states: "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, but difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility." "The main lesson of more than thirty-five years of AI research is that the hard problems are easy and the easy problems are hard." The fundamental idea that neurons stand out through a capacity for analog computation, similar to adaptive non-linear processing units (McCulloch and Pitts, 1943), is, however, not well covered by the toolbox of formal logic (Rosenblatt, 1957). The next generation of intelligent systems has to be endowed with good implicit biases, able to make smart generalizations across varying data distributions and to learn new tasks quickly without forgetting previous ones.
In contrast to biological brains, only neurons are considered in ANNs (Titley et al., 2017). Moreover, they clearly lack some crucial generalization capabilities. One of those is a lack of robustness of the networks to "minimal adversarial perturbations" even when using the simplest toy datasets of machine learning, such as MNIST (Szegedy et al., 2013). Apparently, the details of network structure at both a coarse (e.g., connectivity between hidden layers) and a fine scale (e.g., cell types, non-linearities, or even dendritic computation and ion channel functions) are at present insufficiently represented according to the available neuroscience data (Markram, 2006).
Nevertheless, the construction of ANNs has included properties of biological networks, such as normalization, winner-takes-all mechanisms like max pooling (Riesenhuber and Poggio, 1999), attention (Larochelle and Hinton, 2010), dropout (Srivastava et al., 2014), or simply the implementation of neurons as basic computational elements. However, many important features are lacking in ANNs: for example, an artificial neuron in the machine learning literature is considered a point neuron. Neuronal spikes, or action potentials, have been considered the minimal units of information generated by a neuron. Analogous to bits in computers, the spike was associated with an "all-or-none" digital phenomenon. Neurons as nodes in ANNs were assigned discrete, repetitive electrical spikes as inputs and the emission of electric signals at the output site. Each cycle of their activation obeyed a sigmoidal function, whereas the activation of biological neurons is more graded, depending on the incoming stimuli over time. Information flow in ANNs is only unidirectional, from input to output. In analogy to digital units, they produce an action potential, or not. There is no graded action potential. Or, as stated by Von Neumann (1951), "The nervous pulses can clearly be viewed as two-valued markers, characterized by the binary digits 0 and 1." There are, indeed, some events in neuronal communication showing very stable action potentials (Sierksma and Borst, 2017). But for most neuronal cell types, these two assertions are incorrect. For example, spike frequencies have to be taken into consideration. One presynaptic neuron may discharge repetitive, monotonous spikes; another may encode information in firing patterns reminiscent of the Morse alphabet (Borst and Theunissen, 1999). Hence, each neuron may have its own firing pattern (language) distinct from others, dependent on environmental impact (spike timing: Gütig, 2014). Fine homeostatic adjustments of membrane voltage may impact the generation of action potentials, which may not qualify as computation (Stuart et al., 1997), but encode the "symbols," or the "alphabet," used by the brain to compute. Therefore, spiking neural networks (SNN) have more recently gained interest due to their closer similarity to biological neural networks and their lower energy consumption. They can be used to attain advanced cognitive capabilities when basic mechanisms of synaptic plasticity are implemented by neuromorphic engineering, e.g., by using IBM's TrueNorth neuromorphic hardware (Walter et al., 2015). Their computational power surpasses the abilities of ANNs in that they can process spike trains over time, decoding temporal information. Moreover, implementation of SNNs even on large scales is not difficult (Cessac et al., 2010; Pietrzak et al., 2023).
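The contrast between graded, analog integration and all-or-none spike emission can be illustrated with a generic leaky integrate-and-fire neuron (a textbook model, not taken from the works cited above); all parameter values here are arbitrary.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=0.02, r_m=1e7,
               v_rest=-0.07, v_thresh=-0.054, v_reset=-0.08):
    """Leaky integrate-and-fire: analog integration below threshold,
    digital (all-or-none) spike emission at the threshold crossing."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau
        v += dv
        if v >= v_thresh:            # threshold crossing -> stereotyped spike
            spikes.append(t * dt)
            v = v_reset              # reset; the graded sub-threshold history is lost
    return spikes

# A weak and a strong constant input: the spikes are identical "digital" events,
# but their rate encodes the graded input strength.
weak = lif_neuron(np.full(10000, 1.8e-9))
strong = lif_neuron(np.full(10000, 2.6e-9))
print(len(weak), len(strong))
```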
Variable numbers of inputs (edges) are associated with individual weights, and their weighted sum (activation) is passed through a scalar non-linear function (ReLU, ELU, sigmoid, etc.) to produce the (yes/no) output. Inputs are external signals, and outputs may recognize those signals. Nevertheless, owing to the remarkable increase in the capacities of electronic devices and the development of new technologies such as 3D integrated circuits, nano-scale transistors, memristors, phase-change materials and organic electronics, AI has entered a more sophisticated level, taking into account more biological features, with the promising approach of neuromorphic engineering (Indiveri and Horiuchi, 2011; Brivio et al., 2019; Yang et al., 2020; Gandolfi et al., 2022). Simulations showed encouraging results where a cerebellum-inspired neuromorphic architecture was mapped onto a large-scale cerebellar network to explore cerebellar learning (Yang et al., 2022). Moreover, canonical neural networks (CNN) have been constructed that apparently reduce the cost function and minimize variational free energy by modulating synaptic plasticity with some delay (Isomura et al., 2022; Fields et al., 2023).
Despite those advancements, energy consumption in high-dimensional, multi-layer ANNs or SNNs is extremely high compared to biological networks. In contrast to biological learning, which is local, machine learning impacts all elements of ANNs. Machine learning has been implemented in practically all AI applications (Kassanos, 2020). Parameters of a flexible non-linear function are adapted to optimize an objective (goal) that depends on data. This optimization is usually implemented, e.g., in ANNs, by backpropagation, an algorithm developed by Paul Werbos in his Ph.D. thesis (Werbos, 1974). Backpropagation is a fast learning algorithm that computes how the cost function of a network changes when any input weight is changed (Rumelhart et al., 1985). It is very often used for learning in recurrent neural networks (RNN), where data from time series have to be retained to be used in subsequent steps.
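For concreteness, a minimal backpropagation example in plain numpy, training a two-layer network on XOR; the architecture, learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)           # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)                    # dLoss/d(pre-activation), MSE loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```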
For example, a simple procedure for optimizing a network's performance is to apply the "twiddle" algorithm or, more technically, "serial perturbation." This means that a single weight is perturbed (i.e., "twiddled") by a small increment, and the perturbation is kept only if the cost function has improved relative to the unperturbed weight. In terms of modeling, negative feedback signals require: (a) an input of quantity K from an external source, fed into the black box of the system through a circuitry S that connects the source to a target, and (b) the target, which steadily feeds back its output value K', whose value is close to that of K, to the circuitry S. An error detector implemented in S calculates the error signal E = K - K'. E is then able to adjust the entire system and improve its performance. The ultimate adjustment of the system is reached when K and K' are equal and E is zero (Wiener, 1961). The computational power of S probably relies on continuous rather than discrete values.
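A sketch of the "twiddle"/serial-perturbation idea on a toy linear read-out (the black box and data are invented for illustration); note that, unlike backpropagation above, every update requires re-evaluating the cost once or twice per weight.

```python
import numpy as np

def cost(weights, X, y):
    """Mean squared error of a linear read-out (stands in for any black box)."""
    return np.mean((X @ weights - y) ** 2)

def twiddle(weights, X, y, delta=0.01):
    """Serial perturbation: perturb one weight at a time and keep the
    perturbation only if the cost function improves."""
    for i in range(len(weights)):
        baseline = cost(weights, X, y)
        for step in (+delta, -delta):
            weights[i] += step
            if cost(weights, X, y) < baseline:
                break                      # keep the improvement
            weights[i] -= step             # otherwise undo the twiddle
    return weights

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w
w = np.zeros(3)
for _ in range(2000):
    w = twiddle(w, X, y)
print(np.round(w, 2))                      # converges towards [0.5, -1.0, 2.0]
```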
Apart from the details outlined above, some important distinctions between ANNs and biological networks have to be highlighted: processing time is faster in ANNs and there is no refractory period, but processing is serial rather than parallel; the network architecture is determined by the designer; ambiguity of incoming data is not tolerated (fault intolerance); activation obeys sigmoidal functions, whereas activation of biological neurons is slower and better tuned to the strength of the input; energy consumption for solving similar tasks is orders of magnitude higher in ANNs (the brain uses approx. 20 watts vs. 250 watts merely for running a GeForce Titan X GPU), and they produce a lot of heat during computation (50-80 vs. 36.5-37.5 degrees Celsius); ANNs are composed of a few hundred to a few thousand neurons, in contrast to approx. 86 billion neurons and 100 trillion synapses in biological networks; the physical units are transistors and not neurons; and all functions, including learning, are not autonomous but have to be programmed.
After more than 60 years of AI research, Moravec's paradox has not been solved.
Real neurons are more sophisticated machines. Moreover, cerebral microcircuits may encompass various types of neurons that are genetically and functionally distinct (Douglas and Martin, 1991; Jiang et al., 2015). Each one may perform operations like gating, homeostatic regulation, and divisive normalization. Our brain can easily perform tasks like grasping, navigation, and scene understanding, which are tasks of subconscious intelligence hard to teach to machines (Sinz et al., 2019). The brain's adaptive capacity persists into adulthood and entails higher-order cognitive functions, such as learning and the formation of memories (Weinberger, 1995; Sanes and Donoghue, 2000; Chklovskii et al., 2004; Pinaud et al., 2005; Yao and Dan, 2005). Understanding how sensory experience affects the functional organization of the vertebrate brain requires deep insights into the ways neuronal ensembles are activated and more knowledge about the influences of experiential factors on neurochemically distinct cell types. Additionally, the development of coordinated gene expression programs that establish stable, long-term changes in neuronal performance has to be considered.
2. Information processing in brain: biological concepts
2.1. Electrical synapses and neuronal gap junctions as fundamentally analog devices
At this point, we want to proceed from theoretical in silico concepts to the potential capacities of cellular and molecular structures of the CNS, outlining similarities and differences to achievements made with electronic devices. Synaptic processes have been considered key events in information processing and storage in the brain. They can be divided into vesicular release-dependent and direct electrical transmission systems. The existence of the latter was a matter of debate for a long time, because neuronal gap junctions in the mammalian CNS were hard to identify by thin-section electron microscopy (EM). When, later on, those gap junctions were found (Rash et al., 1996; Kamasawa et al., 2006), their small sizes did not conform to the prevailing idea that they serve for rapid and efficient intercellular propagation of action potentials (Barr, 1962, 1964; Loewenstein, 1966, 1981). More evidence confirmed the existence of electrical synapses during early stages of mammalian brain development, such as in neocortex (Peinado et al., 1993a), retina (Penn et al., 1994), and spinal cord (Walton and Navarrete, 1991). Those connections were considered to establish functional compartments and early neuronal networks (Yuste et al., 1992; Kandler and Katz, 1998), but were thought to disappear in the course of brain and spinal cord development (Peinado et al., 1993b). However, those types of synapses have also been identified in many areas of the adult brain, where they may function as low-pass filters (Connors and Long, 2004). The gap junction channel proteins Cx36 and Cx45 were detected in ultrastructurally defined gap junctions in retinal and spinal cord neurons (Rash et al., 2000, 2001a; Li et al., 2008). Additionally, mRNA expression of the connexins Cx45 and Cx57 was reported from various neurons (Hombach et al., 2004; Maxeiner et al., 2005; Schubert et al., 2005; Dedek et al., 2006; Van Der Giessen et al., 2006; Ciolofan et al., 2007; Palacios-Prado et al., 2009). Hence, gap junctions, which provide analog information transduction and occur abundantly between mammalian neurons (Kamasawa et al., 2006; Rash et al., 2007a,b), may also execute as-yet-undetermined electrical, ionic, or metabolic functions (Gilula et al., 1972) other than the propagation of action potentials. The resistance and time constants of the coupled cells, as well as the conductance of the gap junction, control the strength of electrical transmission (Bennett, 1966). That means that the time constant of a postsynaptic cell can attenuate high-frequency signals such as spikes, but may have little impact on longer-lasting, low-frequency signals.
Typically, transmission at electrical synapses is bidirectional, which results in the spreading of changes of cellular membrane potentials to all partners within an electrically coupled compartment (Wheal and Thomson, 1984), reminiscent of computer models of ANNs. This also includes subthreshold responses, such as synaptic potentials (Zsiros et al., 2007), as well as spontaneous oscillations (Placantonakis et al., 2006). It has been put forward that "brain oscillations are generated in almost every part of the brain," and that "network oscillations may assist to store and retrieve information in synapses and regulate the flow of information in neural circuits" (Gelperin, 2006; Kahana, 2006; Paulsen and Sejnowski, 2006; Sejnowski and Paulsen, 2006). In this way, electrical synapses are considered pivotal for information processing, learning and memory, and human consciousness in the CNS (Nagy et al., 2018), displaying mechanisms of computation that are fundamentally analog.
In hippocampal pyramidal cells, electrical synapses between inhibitory interneurons facilitate synchronous high-frequency γ-oscillations. In GABAergic interneurons in striatum (Fukuda, 2009) and cortex (Fukuda, 2007), electrical coupling has been shown to synchronize activity in interneuronal networks and in neocortical pyramidal cells (Diesmann et al., 1999; Galarreta and Hestrin, 1999; Gibson et al., 1999; Deans et al., 2001; Blatow et al., 2003; Hestrin and Galarreta, 2005; Fukuda et al., 2006). Fast-spiking basket cells (FS BCs) are one of the major types of hippocampal and neocortical interneurons (Freund and Katona, 2007; Klausberger and Somogyi, 2008; Hu et al., 2010). There is increasing evidence that FS BCs are important in controlling executive functions, such as working memory and attention, and also play a role in neurodegenerative disorders (Baeg et al., 2001; Kann, 2016; Kim et al., 2016). However, a number of studies concluded that FS BCs serve as "on-off" cells (Chiovini et al., 2014) that integrate inputs in linear, or at best sublinear, ways, like point neurons (Martina and Jonas, 1997; Hu et al., 2014). This point of view completely ignored potential dendritic influence. Therefore, FS BCs, similar to pyramidal neurons (Poirazi et al., 2003a), can be better envisaged as a two-stage integrator abstraction than as point neurons. The identification of neuronal gap junctions in excitatory glutamatergic cortical and hippocampal pyramidal cells has been taken as evidence for abundant electrical synapses in those cells (Mercer et al., 2006; Wang et al., 2010). Likewise, this type of synapse has been found in noradrenergic locus coeruleus neurons (Travagli et al., 1995), and between inhibitory interneurons (Kosaka, 1983; Fukuda and Kosaka, 2000a,b). In the suprachiasmatic nucleus, Cx36-containing neuronal gap junctions (Rash et al., 2007a,b) are required for normal circadian behavior, and loss of these gap junctions (in Cx36 null mice) affects circadian rhythms (Jiang et al., 1997; Long et al., 2005). In the hypothalamus, electrical synapses between magnocellular neurons are involved in pulsatile oxytocin release by synchronizing burst firing (Yang and Hatton, 1988; Hatton, 1997; Hatton and Zhao Yang, 2002).
Spike shapes and synaptic transmission
When spikes arrive at the presynaptic terminal, they provoke the opening of voltage-gated calcium channels (Cav), with a subsequent increase of the intracellular Ca2+ concentration and vesicular neurotransmitter release into the synaptic cleft, which are quantal, digital events (Katz, 1969). The shape and time course of the AP depolarizing the nerve terminal membrane modify the gating of calcium channels and the magnitude of calcium flux (Klein and Kandel, 1980; Llinas et al., 1981; Spencer et al., 1989; Augustine et al., 1991; Pattillo et al., 1999). Even small variations in presynaptic calcium entry may significantly affect the strength of synaptic transmission, because of the power-law relationship between intra-terminal Ca2+ concentration and neurotransmitter release (Sabatini and Regehr, 1997; Bollmann et al., 2000; Bischofberger et al., 2002; Fedchyshyn and Wang, 2005; Yang and Wang, 2006; Bucurenciu et al., 2008; Scott et al., 2008; Neishabouri and Faisal, 2014). Those subtle variations of incoming action potentials do not obey all-or-nothing rules and are hence analog events. Further aspects are covered below (see "The postsynaptic element and dendritic computation").
All of these synaptic events serve to accumulate voltage in the postsynaptic neuron, which discharges an action potential when a critical threshold, specific to each neuron, is exceeded.
Incoming action potentials may vary both in amplitude and width, adding complexity to the signals in neuronal computation. They are both digital and analog entities. First, reduced spike amplitudes typically result from a decline in the conductance of voltage-gated sodium channels (Nav), which may be due to repetitive firing, as observed in long-term potentiation (LTP) (Brody and Yue, 2000; Prakriya and Mennerick, 2000; Ma et al., 2017; Ohura and Kamiya, 2018). Reduced spike amplitudes diminish synaptic transmission, as shown at hippocampal (He et al., 2002) and cerebellar synapses (Kawaguchi and Sakaba, 2015).
Second, the speed and magnitude of calcium entry into the presynaptic terminal during an AP are highly dependent on the time course of the repolarization phase, which is under the control of potassium efflux. Therefore, AP broadening with subsequently enhanced calcium influx and transmitter release has been observed upon blockade of voltage-gated potassium channels (Figure 1; Augustine, 1990; Wheeler et al., 1996; Shao et al., 1999; Faber and Sah, 2003; Kim et al., 2005; Liu et al., 2017). For example, spike broadening during repetitive firing results in reinforcement of synaptic transmission in the pituitary nerve (Jackson et al., 1991), in dorsal root ganglion neurons (Park and Dunlap, 1998), and in mossy fibers (Geiger and Jonas, 2000). Moreover, neuromodulators like glutamate and GABA may lower Kv channel conductances in hippocampal neurons, eliciting increased synaptic transmission by depolarizing the axonal membrane potential and broadening spikes (Ruiz et al., 2010; Sasaki et al., 2011).
Thirdly, AP broadening is also influenced by the density of voltage-gated channels, which may be heterogeneous along the axon. This has been shown in cerebellar stellate cell interneurons for peri-terminal Kv3 channels (Rowan et al., 2016).
Furthermore, dopamine D1 receptor activation may induce a decrease in the Kv1-dependent ID current and spike broadening in cortical pyramidal neurons (Dong and White, 2003; Yang et al., 2013). Those admittedly small effects on the shapes of neural spikes are completely different from what we find in digital computers. The phenomenon has been called "analog-digital synaptic transmission" (Clark and Häusser, 2006; Alle and Geiger, 2008; Debanne et al., 2013; Rama et al., 2015; Zbili et al., 2016). Consequently, APs cannot be considered as purely digital events.
Needless to mention that spike broadening and subsequent increased synaptic release due to Kv channel down-regulation have been identified in various neurologic disorders such as schizophrenia, episodic ataxia type 1, fragile X syndrome, autism, and epilepsy (Deng P. Y. et al., 2013; Begum et al.).

FIGURE 1 | Long-term potentiation, spike codes and spike broadening. Opening times of calcium channels and the magnitude of the calcium flux in the presynaptic membrane not only depend on the time course (spike codes) but also on the shape of the incoming action potential (AP) (Llinas et al., 1981; Augustine et al., 1991; Pattillo et al., 1999). Subtle changes in calcium influx characteristics, fine-tuned by both spike codes and the shape of APs, can precisely proportion transmitter release. The speed and magnitude of calcium entry into the presynaptic terminal during an AP are highly dependent on the time of repolarization. Voltage-gated potassium channels are responsible for repolarization. Impairment of these channels results in AP (spike) broadening, subsequently increased calcium influx, and increased transmitter release. Long-term potentiation (LTP), which is associated with repetitive firing, may suppress not only the conductance of voltage-gated potassium channels (Kv), but also that of voltage-gated sodium channels (Nav), which typically results in reduced spike amplitudes. Altogether, one can conclude that incoming APs at the presynaptic terminal may be stereotypic, discrete signals, but can also be graded inputs more equivalent to analog information.
The postsynaptic element and dendritic computation
As described above, learning occurs by implementing optimization algorithms: a prediction is compared with a target, and the prediction error is used to drive top-down changes in bottom-up activity. In contrast to circuit-level computations that use interactions between point-like neurons with single, somatic non-linearities (Gómez González et al., 2011), more advanced studies have taken into account the complex and non-linear capabilities of information processing within the dendritic tree of cortical neurons (dendritic computation) (for overview see: Cuntz et al., 2014). Stimulation of multiple synapses in a single dendrite may result in variations of the supralinearity of electrical integration and of EPSP amplitudes depending on synapse location. In contrast to the base or the middle section of the dendrite, the tip displays higher gain, higher EPSP amplitude, and higher EPSP supralinearity (Branco and Häusser, 2011). Moreover, the positioning of excitation along the dendrite affects the amplitude and threshold of basal dendritic spikes (Behabadi et al., 2012). Proximal excitation enhances the voltage gain of distal inputs, whereas distal excitation lowers the threshold for dendritic spike generation at more proximal inputs. Hence, modulation of dendritic excitability, along with changes in the spatial wiring of synaptic connections, may be viewed as additional ways to store memory in the brain (Chklovskii et al., 2004). Three main types of dendritic spikes can be distinguished: sodium, calcium and NMDA (N-methyl-D-aspartate) spikes. There is ample evidence of their occurrence in pyramidal neurons.
In addition to dendritic spiking events, more analog forms of communication have to be mentioned, such as the influence of subthreshold potentials on the effects of action potentials (Clark and Häusser, 2006), the transmission of voltage signals through gap junctions (Vervaeke et al., 2012), or ephaptic coupling between neighboring cells (Anastassiou et al., 2011). These may be due to slow membrane potential dynamics, to the close proximity of interacting cells, or to large degrees of population synchrony (Sengupta et al., 2014). This led to the "2-layer" model of neuronal integration. First, terminal dendrites represent non-linear and independent thresholding units. Then, the combined output has to pass a second threshold at the cell body (Poirazi et al., 2003b). Hence, the postsynaptic neuron is a multi-task element within the neuronal network that may receive more than a thousand messages from other neurons, both on its dendrites and on its cell body (Figure 1). However, in contrast to earlier views that the cell body makes the decisions, which are digital, it turned out later that dendrites are more often responsible for decision-making than the cell body (London and Häusser, 2005). Those computations are both digital and analog. In terms of non-linear inhibitory and excitatory inputs in active dendrites, it has been shown that their excitability is under the powerful control of local inhibition (Gidon and Segev, 2012; Jadi et al., 2012; Lovett-Barron et al., 2012; Müller et al., 2012; Wilson et al., 2012). Local clustering of synaptic connections in dendritic branches, however, may significantly affect synaptic modifications (Branco and Häusser, 2010). This clustered synaptic plasticity has been associated with increased storage capacity and feature binding (Poirazi and Mel, 2001; Govindarajan et al., 2006; Legenstein and Maass, 2011). The arrangement of synapses in clusters likely stabilizes long-term memories, because clustered spines were more stable than isolated ones. If presynaptic neurons become correlated, the optimal response becomes non-linear. Non-linear dendrites are essential in neural network computations, with their capacity to decode complex spatio-temporal spike patterns. Thus, inputs from presynaptic neurons with correlated activities are integrated non-linearly, while inputs from uncorrelated neuronal activities are integrated linearly (Larkum and Nevian, 2008). This is achieved in the same dendritic tree by clustered synapses of correlated inputs (Harvey and Svoboda, 2007). In other words, there is non-linear summation of synchronous, adjacent inputs on the same dendritic branch, whereas more remote and separated inputs undergo linear combination. Consequently, presynaptic neurons with strongly correlated activities contact nearby locations on dendrites, whereas independent neurons are connected to distinct dendritic subunits. The optimal response can be expressed as a set of non-linear differential equations that requires storing and continuously updating ∼N² variables within the dendritic tree, where N is the number of synapses.
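The "2-layer" integration scheme can be caricatured in a few lines: each dendritic branch applies its own sigmoidal non-linearity to its local synaptic drive, and the soma thresholds the summed branch outputs. The sketch below is in the spirit of the two-stage abstraction discussed above, with entirely invented numbers, and shows why clustered (correlated) inputs on one branch can fire the cell while the same total input dispersed across branches does not.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def two_layer_neuron(synaptic_input, branch_of_synapse, weights,
                     branch_gain=4.0, soma_threshold=1.5):
    """First layer: every dendritic branch applies a sigmoidal non-linearity
    to its local weighted input (clustered, correlated inputs on one branch
    therefore sum supralinearly). Second layer: the soma compares the summed
    branch outputs against a global threshold and emits a binary 'spike'."""
    n_branches = branch_of_synapse.max() + 1
    branch_out = np.zeros(n_branches)
    for b in range(n_branches):
        on_branch = branch_of_synapse == b
        local_drive = np.sum(weights[on_branch] * synaptic_input[on_branch])
        branch_out[b] = sigmoid(branch_gain * (local_drive - 1.0))
    return int(branch_out.sum() >= soma_threshold), branch_out

# Eight synapses distributed over four branches (all values illustrative).
branches = np.array([0, 0, 1, 1, 2, 2, 3, 3])
weights = np.full(8, 0.8)

clustered = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)   # co-active, two branches
dispersed = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)   # same total input, spread out
print(two_layer_neuron(clustered, branches, weights)[0])       # 1: supralinear branches fire the cell
print(two_layer_neuron(dispersed, branches, weights)[0])       # 0: sublinear, below the soma threshold
```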
Moreover, repetitive presynaptic inputs typically reduce responses, whereas APs dissimilar to the recent spiking history cause larger changes. Additionally, changing spike frequencies, e.g., highly synchronized spikes superimposed on few, randomly occurring spikes (quiescent states) can evoke supralinear integration (Gasparini and Magee, 2006).
In this view, synaptic clusters from small neuronal populations in dendrites encode 'related' memories (in time, space, or context) (Silva et al., 2009; Rogerson et al., 2014). Synaptic clusters may hence be considered crucial computational and memory storage units in the brain.
Long-term potentiation
Long-term potentiation (LTP) is viewed as the crucial trigger to consolidate synaptic connections and improve synaptic efficacy (Bliss and Lomo, 1973; Volianskis et al., 2015; Bliss et al., 2018). It is induced by rhythmic bursts of activity reminiscent of the theta rhythms typically occurring in the hippocampus during learning (Grover et al., 2009). The properties of memory formation are critically dependent on the extent of LTP cooperativity, LTP consolidation, and the ability for dendritic protein synthesis. Synaptic tagging depends on the availability of plasticity-related proteins (PRPs) that are either produced in the cell body or translated from preexisting mRNAs in dendrites (Montarolo et al., 1986; Schacher et al., 1988; Scharf et al., 2002; Hernandez and Abel, 2008; Alberini and Kandel, 2014). Because synaptic growth at pre- and postsynaptic terminals depends on protein synthesis (Chen, 1983, 1989), a delayed wave of protein synthesis is required for the consolidation of long-term memory (Katche et al., 2010).
Specific mRNA expression in dendrites and protein synthesis induced in a synaptic spine could convert early-LTP of a nearby spine to late LTP via synaptic capture mechanisms as hypothesized in the synaptic tagging and capture (STC) model (Steward and Schuman, 2003;Cajigas et al., 2012).
An intriguing consequence of dendritic STC is that it can become a mechanism for associating temporally close memories, captured by nearby synapses. This mechanism could support the generation of functional and/or anatomical clusters of synapses facilitating cross-capture of proteins between synapses that express either LTP or LTD, and consolidating formation of memory engrams (Govindarajan et al., 2006).
Bifurcations, storage of information, and engram formation
The beginning and development of human beings appear to depend on yes-no or either-or decisions comparable to the fundamental workings of electronic devices. Those bit-like events, or "bifurcations," may have small or large consequences, but altogether they contribute to the development of an organism. A feature fundamental to all of them is their intrinsic "irreversibility." There is no way to step back. The sum of bifurcations accumulating continuously in a human being is the result of a chaotic process, critically dependent on the time of onset and subsequently progressing during the whole life (Figure 2A). It is irreproducible in any other individual, even in monozygotic twins, shaping personalities that are unique.
Bifurcations can be observed at all levels of an individual, from organs to cells and molecules. For those reasons, the question has been addressed many times whether the way a human brain works is comparable to a computer working in binary mode. In mathematics, bifurcations have been intensely investigated since the seminal publications by Feigenbaum (1978, 1979). After a few steps of period doubling, the logistic map dramatically changes into a chaotic regime with some further bifurcations embedded (Figure 2B). There is also a critical dependence on the initial conditions, which is characteristic of non-linear systems. Moreover, the salient feature of the diagrams is their self-similarity, typical of chaotic systems and highly reminiscent of the fractals described later by Mandelbrot (1980).
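The Feigenbaum diagram is generated by the logistic map x_{n+1} = r·x_n(1 − x_n); the few lines below (parameter values arbitrary) reproduce the period-doubling route into chaos described above.

```python
def logistic_attractor(r, x0=0.5, transient=500, keep=64):
    """Iterate x_{n+1} = r * x_n * (1 - x_n), discard the transient and
    return the distinct values the orbit keeps visiting (the attractor)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 6))
    return sorted(set(orbit))

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, len(logistic_attractor(r)))   # 1 fixed point, period 2, period 4, chaos
```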
Are those fascinating results, delivered by the most basic of natural sciences, equivalents of the engrams formed in the CNS?
Engrams are specific changes in the brain formed by experience (Semon, 1921) and stored in a quiescent state (Figure 2A) that becomes functional under conditions that lead to retrieval (Tulving, 1983) or in psychiatric disorders (Gebicke-Haerter, 2014). Although engrams have not been found in their entirety (Josselyn et al., 2017), significant progress has been made in engram research and theoretical models have been developed. According to Hebb's (1949) influential theory, simultaneously activated synapses in clusters of neurons (e.g., by LTP) are reinforced, and this mechanism is the basis for learning and memory. Alternatively, newly established synaptic weights within an activated neuronal population may result in an engram. This would lead to an expanded storage capacity, because there are significantly greater numbers of combinations of synaptic weights than of neurons in any given cortical network. From these theories, one may conclude that specific connectivity patterns between neurons are engrams (Redondo et al., 2014; Tonegawa et al., 2015b; Roy et al., 2017; Choi et al., 2018).

FIGURE 2 | Bifurcations and engram formation. At some unknown point of origin (arrow ori) in one's life there is a first decision between yes and no (0 vs. 1), followed by innumerable further bifurcations. This happens in each cell of the organism, but in human beings appears to be particularly interesting in the Central Nervous System. Obviously, these are events digital in nature, which raises the question of whether or not information processing and storage is comparable to computer devices (A). The bifurcations exemplarily shown in the figure and their development over time display dynamic events reminiscent of the mathematical model of bifurcations, the Feigenbaum diagram (B). It is constructed according to the differential equation in the inset. The diagram clearly shows that after the second round of bifurcations the system turns into a chaotic process with sporadic additional bifurcations embedded (where the Lyapunov exponent runs back to zero within the red line), but on the whole into a non-linear system almost completely devoid of digital events. In the brain, learning processes and memories stored in so-called "engrams" are founded on higher-order information processing, storage and recall. Many of the bifurcations may have only little effect, but others may have a strong impact during the whole life (a, arrow). There are several theories as to how the brain handles the wealth of information entering from the external world, either focusing on communication within neuronal networks and their oscillations, or putting more weight on the contribution of glial cells, astrocytes in particular, and their information processing largely relying on analog events. Also, recently, engram cells have been identified in the hippocampus. But there is a high likelihood that engrams are dispersed all over the brain, and to maintain the whole system, a higher-order technology of hybrid computation is required. In contrast to computer technologies, however, the construction of the "hard disk" of memory engrams is time-dependent and irreversible. Nothing can be erased or reset to a previous time point to start again.
Alternative concepts are more in favor of the cellular aspect. And indeed, a number of studies have identified engram cells, distinct populations of neurons encoding engrams for specific memories (Han et al., 2007; Josselyn, 2010; Garner et al., 2012; Liu et al., 2012; Ramirez et al., 2013; Kim et al., 2014; Tonegawa et al., 2015a; Josselyn and Tonegawa, 2020), that appear to be distributed across multiple brain regions (Roy et al., 2022). These cells are conditioned by specific cues associated with incoming signals (Guzowski et al., 1999; Deng W. et al., 2013; Denny et al., 2014). Memory reactivation increased engram cell excitability, which enhanced retrieval of specific memory content (Pignatelli et al., 2019), and memory recall can be elicited by their stimulation (Ryan et al., 2015). For example, the intrinsic excitability of dentate neurons results in their self-assembly into a memory engram (Park et al., 2016). This has been shown in great detail by the Tonegawa lab, using hippocampus-dependent context fear conditioning (FC). Their data reveal interesting insights into false memory and valence reversal. Enhanced connectivity of CA3-to-CA1 engram projections strongly disabled LTP. These events balancing excitation and inhibition have been termed homeostatic plasticity (Turrigiano and Nelson, 2004).
Molecular biology studies on the long-term storage of memory (LTM) hypothesized an "intramolecular autocatalytic" reaction (Crick, 1984; Lisman, 1985; Roberson and Sweatt, 1999), a molecular mechanism that, once activated, persists in a self-sustaining manner. Protein-kinase-M-zeta (PKMζ), an atypical isoform of PKC, was a particularly interesting candidate for consolidating LTMs, because its mRNA is transported to dendrites and its translation is induced by LTP. PKMζ can be considered a core molecular mechanism of late-LTP and of the maintenance of LTM, obeying the criteria of necessity, occlusion, erasure, and persistence. All known PKMζ inhibitors abolish this function, but they have no effect on early-LTP and basal synaptic transmission. An LTM trace can be associated with a discrete subset of neurons, reminiscent of engram cells. Those data stimulated studies on remote LTMs (i.e., a few weeks old or older), investigating the fate of memories during systems consolidation (for review see: Frankland and Bontempi, 2005). Systems consolidation progressively relies on cortical areas and less on the hippocampus in a process that involves delayed maturation of cortical neurons and may be mediated by hippocampal sharp-wave ripples (SWR). They are associated with highly synchronous neural firing of subsecond duration and support both memory consolidation and memory retrieval (for reviews see: Squire and Alvarez, 1995; Carr et al., 2011; Buzsaki, 2015; Foster, 2017; Joo and Frank, 2018; Tang and Jadhav, 2018; Tonegawa et al., 2018).
The extracellularly recorded sharp wave component of the SWR corresponds to the accumulated, synchronous depolarization of a large fraction of the neurons in the CA1 region of the hippocampus (Buzsaki, 1986). This effect may be induced by activities from CA3 neurons (Valero et al., 2017) which also excite interneurons. As a result, interneuron-coordinated pyramidal cell ensembles undergo oscillatory excitation and inhibition characterized as a high-amplitude (150-250 Hz), co-incident ripple (English et al., 2014;Stark et al., 2014). The distribution of ripple band power is approximately log-normal with a long tail toward high values, but not bimodal (Cheng and Frank, 2008). SWR rate is at its highest in the contexts of novelty and reward. Therefore, it likely serves to trigger subsequent, slower synaptic consolidation processes (Buzsaki, 1989). Hence, engram formation may be a two-step process.
An interesting understanding of modern engram theory is the view that consolidation depends on retrieval (Lisman et al., 2018). Retrieval is thought to occur if neural activity patterns in the hippocampus that correspond to those that occurred during a previous experience are reactivated. Retrieval appears to occur specifically during REM phases of sleep, where dreaming is dominant and memories from various, seemingly random (engram) sources surface unconsciously. Furthermore, retrieval of a single stimulus-response association can drive behavior directly; or, confronted with multiple options, the brain may recall specific episodes of past experience for decision-making or planning, giving rise to new ideas. Retrieval may hence support imagination or intuition, which can be understood as the rearrangement or elaboration of stored information in the mental simulation of future possibilities (Josselyn and Frankland, 2018).
The epigenetic switchboard
Accumulating evidence supports the view that epigenetic mechanisms of gene regulation are critically involved in processes underlying learning and memory (Meadows et al., 2016;Sweatt, 2017).
At this point it is important to briefly recapitulate the biochemical events involved in transcription and translation in terms of digital and analog information processing.
Epigenetic control of gene expression begins with a relaxation of compact chromatin at the sites of the genes to be activated. Those events depend on posttranslational modifications of histone proteins and on cytidine methylations or hydroxymethylations of DNA, all of which are clearly digital events. Cytosines in DNA can be (hydroxy-)methylated or not, and histones can be acetylated, methylated, phosphorylated, etc., or not. Neuronal activity can influence gene expression by dynamic DNA methylation (Figure 3; Nelson et al., 2008; Sharma et al., 2008; Guo et al., 2011; Halder et al., 2016). In excitatory neurons of the cerebral cortex, DNA methyltransferases (DNMTs) have been shown to modulate synaptic transmission (Levenson et al., 2006; Sweatt, 2016), synaptic scaling (Meadows et al., 2015), and neuronal excitability (Meadows et al., 2016). Conversely, de-regulated expression of DNMTs was associated with defects in the GABAergic system (Matrisciano et al., 2013) in patients with neuropsychiatric diseases such as schizophrenia (Huang and Akbarian, 2007; Sananbenesi and Fischer, 2009; Gebicke-Haerter, 2012; Saradalekshmi et al., 2014; Benes, 2015), which strongly suggests important influences of DNMTs on inhibitory interneurons as well.
The DNA-methylating activity of DNMT1 is often correlated with transcriptional repression (Bestor, 2000; Robertson K. D., 2002; Bordagaray et al., 2022). To investigate in detail how DNMT1 acts on GABAergic transmission, target genes have been studied in Dnmt1-deficient and WT interneurons by correlative global methylome and transcriptome analysis (Pensold et al., 2020). A significant number of differentially expressed genes were associated with clathrin-dependent endocytosis. Since the expression of numerous genes of the clathrin-mediated endocytosis pathway was upregulated and their methylation reduced upon Dnmt1 deletion, DNMT1-mediated DNA methylation likely exerts a direct regulation of endocytosis, slowing down vesicle recycling and, in turn, presynaptic transmission.
Physiologically, ten-eleven translocation (TET) family enzyme-dependent mechanisms result in DNA demethylation of activity-regulated genes (Figure 3) and subsequent memory extinction (Rudenko et al., 2013). TETs oxidize 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), which can then be actively reverted to cytosine. The regulation of synaptic transmission and of surface levels of GluR1 receptors in hippocampal neurons has been shown to be mediated by TET3-dependent DNA demethylation (Yu et al., 2015). Therefore, both demethylation and de novo DNA methylation are important for modulating neuronal plasticity and learning and memory in the adult nervous system (Lister et al., 2013; Sweatt, 2016). Basically, memory formation requires hypermethylation of memory suppressor genes and hypomethylation of memory promoting genes. One of those memory suppressor genes, calcineurin (CaN), showed increased methylation in cortical neurons up to 30 days after fear conditioning (Miller and Sweatt, 2007). The same is true for protein phosphatase 1 (PP1), while the synaptic plasticity gene reelin is demethylated and transcribed. At this point, it looks very likely that, within a certain time scale, adding switches of DNA methylation to some groups of genes and removing those switches from other clusters of specific genes creates new methylation patterns that pave the way for memory (engram) formation and consolidation.

Figure 3 (legend). Digital and analog events involved in gene transcription. Epigenetic DNA and histone modifications, i.e., DNA methylations and posttranslational histone-tail modifications (PTT), are clearly digital. Demethylations, proceeding from methyl-CpGs at low transcription rates near the origin, result in increasing, step-wise transcription; they are shown as single steps along a straight line obeying the equation y = nx. Infinitesimal approximations of the triangular (digital) demethylations can be adapted to the (analog) line of transcription. The combined effects of methylations and PTT fine-tune assembly of the transcription initiation complex and subsequent transcription. Those effects may also result in logistic (sigmoidal) transcription rates described by (analog) non-linear differential equations, as shown in two further examples. The logistic function or logistic curve (also known as sigmoid curve) is a common "S"-shaped curve defined by the equation in the inset, f(x) = L / (1 + e^(-k(x - x0))), where L = the maximum value of the curve, e = the base of the natural logarithm (Euler's number), x0 = the x-value of the sigmoid's midpoint, and k = the steepness of the curve (the logistic growth rate). Sigmoid curves are also very typical of enzyme reactions. The steepness varies from very flat to very steep; merging into a vertical line marks the transition into digital behavior, as shown exemplarily with the transcription factor NFATc2, a kind of double-digital process. The protein is highly phosphorylated in its inactive (off) state while residing in the cytoplasm. It is activated by stepwise dephosphorylations that, however, do not show any visible effect (but probably increase the tension). Removal of the last phosphate overcomes a threshold and unleashes its activity completely: the protein enters the nucleus, binds to its DNA-binding site, and induces transcription.
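To illustrate this switch-like logic compactly, the following sketch treats promoter CpGs as binary (digital) switches and reads out a graded (analog) expression level; the gene roles and numbers are purely hypothetical and not taken from the studies cited above.

    import numpy as np

    rng = np.random.default_rng(1)

    def expression_level(methylated):
        # Analog readout from digital switches: expression scales with the
        # fraction of unmethylated promoter CpGs.
        return 1.0 - methylated.mean()

    n_cpg = 20
    suppressor = rng.random(n_cpg) < 0.2   # memory-suppressor gene: mostly unmethylated before learning
    promoting = rng.random(n_cpg) < 0.8    # memory-promoting gene: mostly methylated before learning
    print("before learning:", expression_level(suppressor), expression_level(promoting))

    # "Learning": hypermethylate the suppressor gene, demethylate the promoting gene
    suppressor[:] = True
    promoting[:] = False
    print("after learning:", expression_level(suppressor), expression_level(promoting))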
Posttranslational histone modifications (PTM)
Proteins modifying histone tails are grouped into three categories: "writers," "readers," and "erasers." "Writers" such as histone acetyltransferases (HATs) modify and prepare specific lysines in histones to be recognized by bromodomain (BRD) "readers" that bind to those acetylated lysines. BRDs were discovered as the first domain to exclusively bind acetylated lysine (Dhalluin et al., 1999). These PTMs are not permanent, however, since "erasers" such as histone deacetylases (HDACs) are able to remove the acetylation PTM (Janzen et al., 2010). Since acetylated histones act as binding sites for the transcriptional machinery, histone acetylation is often associated with transcriptional activation. Due to the efficient activities of HATs and HDACs, histone acetylation is fast and reversible. Transcription and protein synthesis induced after learning are observed only during restricted periods of time, which means that there is a limited time frame for memory consolidation (Igaz et al., 2002). Histone phosphorylation may also induce transcription, while histone methylation can facilitate both transcriptional activation and repression (Levenson et al., 2004). Methylated histones are recognized by chromodomains and by plant homeodomain (PHD) fingers, discovered in 1993 and known to bind histone H3 tri-methylated at lysine 4 (H3K4me3) (Aasland et al., 1995; Wysocka et al., 2006). Transcriptional activation or repression depends on the interaction of chromodomain-containing proteins with the specifically methylated lysine. Histone H3 di- and tri-methylation at lysine 9 (H3K9) results in transcriptional repression, while histone H3 methylation at lysine 4 (H3K4) is associated with transcriptional activation (Vermeulen et al., 2007). As with DNA methylation, the influence of histone methylation on gene expression is required for memory formation as well. Compared with the patterns of DNA methylation described above, it is evident that the digital biochemistry of histone PTMs is orders of magnitude more complex and offers an unprecedented wealth of fine-tuning of memory storage and retrieval.
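A back-of-the-envelope count (with arbitrary, illustrative numbers rather than a real census of modification sites) conveys this difference in combinatorial capacity:

\[ N_{\text{states}} = k^{\,n}, \qquad 2^{10} = 1024 \ \text{(CpGs, methylated or not)}, \qquad 3^{10} = 59\,049 \ \text{(ten tail residues with, say, three possible marks each)} \]

Allowing mixed mark types, multiple methylation states, or phosphorylation on the same tail grows the histone number further, which is the sense in which histone PTMs offer far more fine-tuning than DNA methylation alone.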
Combined DNA methylation and histone PTMs and posttranscriptional events
Noradrenergic stabilization of heterosynaptic ("tagged") LTP requires not only transcription but, specifically, DNA methylation and histone acetylation (Brandwein and Nguyen, 2019). During and after LTP-induced learning, the expression of a "maintenance transcriptome" has to be established and to remain active at least in the range of days. In this period of time, negative epigenetic regulators of gene expression appear, particularly histone deacetylases such as HDAC1 and HDAC2, but also a variety of additional members of the HDAC family (Mahgoub and Monteggia, 2014; Penney and Tsai, 2014). Hence, the maintenance transcriptome negatively regulates the plasticity transcriptome, restraining the plastic capability of a neuron after learning. It elevates the threshold for changes in engram neurons and helps to stabilize new connectivities.
Furthermore, there are additional digital events during post-transcription, such as RNA editing and RNA degradation by miRNAs, controlling the amount of RNA binding to ribosomes. The resultant quantities of those final mature RNAs can be grouped on more or less linear scales, i.e., again a digital-analog conversion. Finally, another digital-analog transition of biological information is associated with the specific aminoacylation of cognate tRNAs. The aminoacyl-tRNA synthetases (aaRS) specifically recognize individual amino acids, which after their activation are conjugated by aaRS to the cognate tRNA molecules (Ling et al., 2009). In this manner, the digital event of tRNA anticodon binding is translated into an analog string of information by adding amino acids and forming the three-dimensional structure of a protein. Here it is necessary to recall the basic principles of, and differences between, the fundamental functions of DNA and proteins in biological systems in terms of digital and analog information processing (Koonin, 2015). We recall the Central Dogma of Francis Crick (1970), stating that "there is no route of reverse information transfer from proteins to nucleic acids," i.e., no reverse translation. This is a fundamental difference between information processing and storage in computers and in the central nervous system. In the former, information can be completely erased.
Alternatively, the system can be reset to any previous stage and restarted from that point on. Corrections or replacements of entered and stored information are possible.
In the brain, there is an epigenetic switchboard of incomprehensibly many yes/no options that are adjusted in response to environmental impact and demands, and that induce optimized adaptations during subsequent, additional digital events. Those mechanisms keep advancing in complex, non-linear ways determined by self-sustained switchboard re-profiling maintained during the whole life span of an organism. Although there is no way back, there are innumerable possibilities to correct existing, stored information and to explore new possibilities. Admittedly, this is somewhat reminiscent of unsupervised learning in computer systems. Nevertheless, it should be kept in mind that the unique, unidirectional flow of information transfer represents the shift from digital to analog encoding of information. In other words, there is a transition between the fundamentally one-dimensional (digital) information contained in nucleic acids and the three-dimensional, analog form of information embodied in proteins (Haykin and Van Veen, 2003). This flow of information is unique to the brain and to biological systems in general.
The all-or-nothing modifications described above do not provoke yes-or-no transcription, but elicit graded transcription that depends on the combination and overall sum of all modifications allowing successful assembly of the initiation complex. This may result in linear or more sigmoidal time-courses of gene expression (Figure 3). Hence, the outcomes are analog events. However, there are also exceptions, where those modifications provoke all-or-nothing events.
For example, in Th2 lymphocytes the transcription factor NFATc2 is required for expression of IL-4. NFATc2 is phosphorylated in its inactive form outside the nucleus. It enters the nucleus to bind the IL-4 promoter only when it has been completely dephosphorylated by the phosphatase calcineurin. Under these conditions, interleukin-4 is fully transcribed without running through any intermediate stages (Figure 3; Köck et al., 2014).
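A minimal numerical sketch (illustrative parameters only, not fitted to any real promoter) contrasts the graded, logistic readout described above with the NFATc2-style all-or-nothing switch:

    import math

    def graded_transcription(n_marks, L=1.0, x0=5.0, k=1.0):
        # Logistic (analog) readout: the rate rises smoothly with the number of
        # activating modifications (L, x0, k as in the Figure 3 legend).
        return L / (1.0 + math.exp(-k * (n_marks - x0)))

    def nfatc2_transcription(phosphates_remaining):
        # All-or-nothing (digital) readout: full transcription only once the
        # protein is completely dephosphorylated.
        return 1.0 if phosphates_remaining == 0 else 0.0

    for step in range(11):
        print(step,
              round(graded_transcription(step), 3),
              nfatc2_transcription(10 - step))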
Additional computational dimension: astrocytes, and the tripartite synapse
For a long time, information processing in the brain has been attributed exclusively to neurons. However, accumulating data have assigned an increasingly important role to protoplasmic astrocytes and put forward the notion that they are instrumental in learning and behavior [reviewed by Wang and Bordey (2008), Verkhratsky et al. (2011), Parpura et al. (2012), Han et al. (2013), and Volterra (2013)]. Apparently, they are not only necessary but also sufficient for new memory formation (Adamsky et al., 2018). The intimate embracement of synapses by thin astrocytic processes was coined the "tripartite synapse" (Araque et al., 1999; Perea et al., 2009). It postulates that the synapse can no longer be considered as engaging only two neuronal elements isolated from the rest of the parenchyma.
Interactions of astrocytes with synapses and neuronal circuits
However, not all synapses are in immediate contact with perisynaptic astrocytic processes (PAPs). PAPs may engage and disengage from synapses spontaneously or in response to physiological (and pathological) stimuli (Panatier et al., 2006; Bellesi et al., 2015). During LTP induction, more PAPs become associated with activated synapses (Lushnikova et al., 2009; Perez-Alvarez et al., 2014), possibly supported by RNA translation within PAPs (Sakers et al., 2017). In neocortex, 30-60% of synapses are enwrapped by astrocytes (Reichenbach et al., 2010), 60-90% in hippocampus (Ventura and Harris, 1999; Witcher et al., 2007), and up to 90% in somatosensory cortex layer IV (Bernardinelli et al., 2014). The numerous synaptic contacts assign an intriguing role to astrocytic processes in spreading signal information to groups of neighboring synapses, and hence an involvement in heterosynaptic plasticity. This plasticity could extend to a number of dendrites even if they do not belong to the same neuron (so-called heteroneuronal plasticity), which could regulate switching between synaptic ensembles during information processing. It is possible, therefore, that an individual astrocyte interferes with the function of all (or subsets of) synapses within its domain. On the other hand, if a dendrite passes through the domains of two distinct astrocytes, its synapses will be functionally divided into two contiguous segments governed independently from one another. This concept embodies an extra layer of complexity in our understanding of brain computation. Apart from the neuronal layout, polarity, and connectivity, a mosaic of independent (though likely cooperating) astrocyte domains adds additional control mechanisms to separate volumes of neuropil. Astrocytes affect spine maturation and the function of mature synapses in a "synaptic island"-restricted manner. Large neuronal dendrites may cross the domains of hundreds of different astrocytes, which results in various synaptic inputs being reprogrammed by independent astroglial cells. Consequently, dendritic synaptic inputs are shaped not only by signals from multiple incoming pre-synaptic neurons, but also by the activities of the multiple astrocytes embedding the dendritic network.
Astrocyte domains and the three-dimensional and seamless expression of consciousness and explicit memories
Ribonucleic acid expression is enhanced in neurons during excitation and declines sharply afterward (De Robertis, 1964). After neuronal excitation, sustained increased RNA production has been observed in astrocytes, which coincides with the period of trace retention. These findings led Luria to conclude that "the hypothesis that the glia is concerned in retention of memory traces is unquestionably one of the most important discoveries in modern neurophysiology and it must shed considerable light on the intimate mechanism of memory" (Luria, 1973).
Astrocytes are not electrically excitable, but they are well-known for both stimulus-induced and spontaneous intracellular calcium signals (Cornell-Bell et al., 1992). Those calcium signals usually do not propagate to neighboring astrocytes through gap junctions (Di Castro et al., 2011; Volterra et al., 2014), and the majority are observed in peripheral thin processes rather than in the soma. They do not result from mobilization of internal calcium stores (Srinivasan et al., 2015).
Communication between astroglia and neurons has a profound impact on synaptic transmission. Astroglia constrain neuronal excitability, release probability, and insertion of postsynaptic AMPA receptors, which can result in synapse silencing. This strongly affects the threshold balance between long-term potentiation and long-term depression (Pannasch et al., 2011). In the absence of functional astroglial networks (Cx30-/- Cx43-/- hippocampal slices), postsynaptic activity was strongly amplified as a result of a massive increase in synaptically evoked firing (Wallraff et al., 2006).
Furthermore, astrocytic release of (glio-)transmitters directly interacts with pre- or post-synaptic neuronal receptors, streamlining synaptic efficacy, potency, or plasticity. For instance, astrocytic ATP, which is rapidly degraded to adenosine, may act on pre-synaptic neuronal A1 receptors to inhibit pre-synaptic release (Schmitt et al., 2012) or on post-synaptic A2 receptors to potentiate synaptic strength (Gordon et al., 2005). Furthermore, stimulation of cholinergic muscarinic receptors in the somatosensory cortex (Takata et al., 2011) can be adjusted by the release of the NMDAR co-agonist D-serine (Rollenhagen et al., 2007; Papouin et al., 2012). This D-serine "boost" affects the threshold of NMDAR activation, facilitating the receptor to trigger the downstream signaling pathway that underlies LTP induction (Papouin et al., 2017; Adamsky et al., 2018; Robin et al., 2018). Hence, transient release of D-serine by astrocytes at hippocampal CA1 synapses is necessary for NMDAR-dependent LTP (Yang et al., 2003; Panatier et al., 2006). This release affects LTP only at synapses located within the domain of that astrocyte and not LTP at synapses located in the domain of a neighboring control astrocyte (Henneberger et al., 2010). Astrocytic D-serine also mediates the integration of adult-born granule neurons into the hippocampal circuitry (Sultan et al., 2015), a process that is ongoing throughout life and may alter local circuit performance in memory processes and mood control (Toni and Schinder, 2015). The D-serine-controlled synaptic NMDAR impact on the sleep-wake cycle clearly relies on analog computation, associating vigilance state with memory formation. During wakefulness, a steady accumulation of sleep-promoting substances enhances the pressure to sleep; those substances are then gradually degraded. Sleep-wake cycles in rodents have been shown to undergo neuronal network oscillations sustained by astrocyte-derived adenosine. Slow-wave oscillations (<1 Hz), in particular, observed during non-rapid eye movement (NREM) sleep, have been associated with memory consolidation (Marshall et al., 2006; Halassa et al., 2009).
Furthermore, astrocytic l-lactate plays a key role in LTP at hippocampal CA1 synapses. Glycogen stored in astrocytes is metabolized to l-lactate during periods of high energy demand and shuttled to neurons (Pellerin and Magistretti, 1994). LTP in CA1 and CA3 was blocked in vivo when l-lactate production was inhibited in astrocytes, suggesting an important role for l-lactate in long-term episodic memory (Suzuki et al., 2011).
Astrocytes express virtually all neurotransmitter and neuromodulator receptors (glutamate, dopamine, norepinephrine, acetylcholine, serotonin, and GABA) (Kettenmann and Zorec, 2013). Individual astrocytes may co-express as many as six different receptors (Shao et al., 1994). Their expression may, however, be region-specific: dopamine receptors, for instance, are found in astrocytes of the substantia nigra (Miyazaki et al., 2004) and prefrontal cortex (Khan et al., 2001), whereas glutamate receptors are encountered throughout the gray matter, reflecting the widespread release of glutamate by excitatory synapses everywhere in the CNS. For this reason, this transmitter is the best candidate to be involved in consciousness and memory formation, provided that consciousness and memory are disseminated all over the brain (Calvin, 1996; Cooper et al., 2003; Jones, 2005; Posner et al., 2007). Moreover, adrenergic receptors are more abundant in astrocytes than in neurons (Stone and John, 1991; Aoki, 1992). Although β-receptors expressed by hippocampal neurons were viewed as potentiating LTP and memory, more recent studies revealed that astrocytic β2-adrenoceptors are more important, because the known positive effect of arousal on memory performance could be attributed to the finding that a key part of the noradrenergic effect is mediated by astrocytes. Moreover, acute stress triggers noradrenaline release activating astrocytic β2-adrenoceptors, which may increase cognitive performance. Conversely, prolonged stress with sustained astrocyte activation impaired cognitive performance. This has been shown by administration of a β2 agonist over days, which improved memory performance, whereas more extensive exposure to the drug resulted in a decline of cognitive ability (Dong et al., 2017). O'Donnell et al. (2012) emphasize that "norepinephrine signaling to astrocytes is necessary to drive the transformation of memory from short to long-term stores" and "is important for supporting processes that bridge short to long-term behavioral adaptation." Obviously, none of those events obey an all-or-nothing regimen as realized in computer memory devices.
Acetylcholine, which is released during vigilance states by long-range neuronal fibers, also activates astrocytic acetylcholine receptors and promotes astrocyte-mediated neuronal cross-talk (Araque et al., 2002; Perea and Araque, 2005; Navarrete et al., 2012; Papouin et al., 2017). Acetylcholine, in concert with noradrenaline, maintains brain-wide oscillations to synchronize different brain areas and to ensure correct cognitive performance and sensory perception (Wang, 2010).
Computational role of astrocytic calcium
It has been shown in vitro, in situ, and in vivo that [Ca2+]i transients occur in astrocytes as rapidly as in neurons (within 500 ms or less) (Winship et al., 2007; Marchaland et al., 2008; Chuquet et al., 2010; Santello et al., 2011). Therefore, these rapid astrocytic responses are "compatible with a physiological role in fast activity-dependent synaptic modulation" (Kastanenka et al., 2020). This communication with neurons is ensured by expression of virtually all types of ionotropic receptors (Lalo et al., 2011; Steinhauser et al., 2013). Astrocyte synaptic-like currents have been shown to be triggered by neuronal activity in vitro and in situ (Dani et al., 1992; Porter and McCarthy, 1997; Matthias et al., 2003; Bergles and Edwards, 2008).
Conversely, rapid rises and long-lasting Ca2+ transients can be evoked in astrocytic perisynaptic processes, several micrometers long and in three-dimensional space, by a single action potential (Di Castro et al., 2011; Panatier et al., 2011). Those Ca2+ currents, which may last for seconds, support a role for astrocytes in working memory (Han et al., 2012). Studies of cholinergic (Takata et al., 2011) and noradrenergic neuromodulation (Ding et al., 2013; Paukert et al., 2014) revealed additional, slowly increasing somatic Ca2+ transients in the range of tens of seconds. In hippocampus, those Ca2+ transients can induce long-term effects on synaptic connections associated with memory formation (Adamsky et al., 2018).
It has to be mentioned that the notion of Ca2+-dependent gliotransmission, the role of astrocytes in long-term potentiation (LTP), and whether D-serine is a gliotransmitter have been debated, as reviewed in Bazargani and Attwell (2016) and Savtchouk and Volterra (2018). However, it has been well studied that, unlike in other glia, the induction of metabotropic calcium waves in astrocytes coincides with electrical currents of synaptic activity in neighboring neurons (Murphy et al., 1993). Those electrical currents could spread via gap junctions and enable long-range astrocyte-neuronal synchrony (Szatkowski et al., 1990). Astrocytes reportedly form extensive networks of electrically coupled cells (Dermietzel et al., 1989). This network communication modulates pre-to-postsynaptic signaling by fine-tuning the amplification of neuronal activity. Electrical coupling of astroglia forms an important part of the intercellular communication between neuronal and tripartite synaptic activity. In terms of computation, those are interesting examples of a one-hit impact triggering a variety of subsequent, long-term analog processes. Crucial elements involved in this communication are gap junctions.

Apart from the involvement of astrocytes in analog information processing, there is also neuronal dendro-dendritic gap junction communication, adding another level of complexity to computation. Specific products made and released by astrocytes at synaptic spines have considerable influence on the processing of arriving neuronal signals. Astrocytes release neurotransmitters (gliotransmitters) and co-transmitters such as D-serine or ATP (converted into adenosine), and they express the respective neurotransmitter receptors as well as glutamate transporters (GLT1) (Chaudhry et al., 1995), glutamine synthetase (Derouiche and Frotscher, 1991), aquaporins (Thrane et al., 2011), potassium channels (Higashi et al., 2001), cell adhesion molecules (ephrins) (Zhuang et al., 2011), and lactate transporters (Puchades et al., 2013). Astrocytes can also communicate via exocytosis of synaptic-like microvesicles (SLMVs) (Vardjan et al., 2019).

Figure legend (The tripartite synapse). Ensheathment of synaptic spines by perisynaptic astrocytic processes (PAPs) can change over time; it depends on neuronal activity and the ensuing actin-dependent motility in PAPs. At high neuronal activity (LTP), activated synapses become ensheathed by more PAPs. One astrocyte may contact 300-600 dendrites and up to 36 spines per dendrite (Halassa et al., 2007). Those dendritic segments with their synaptic spines are under the strict control of processes from only this astrocyte, delineating its territory (shown in orange in the figure; Bushong et al., 2002). That means that an individual astrocyte handles a defined volume of neuropil, without interference from other astrocytes; only this astrocyte is responsible for surveillance and control of the neuronal elements within its domain. Therefore, a single astrocyte theoretically oversees 20,000-160,000 individual synapses in the rodent brain and approximately 270,000 to 2 million synapses in the human brain (Oberheim et al., 2009; Heller and Rusakov, 2015). Because an individual astrocyte affects the function of synapses located solely within its domain, a dendrite passing through the territories of two distinct astrocytes will be functionally divided into two contiguous segments governed independently from one another, as far as synapses are concerned. Decisions are made in dendrites far more often than in the cell body, which underscores the complex and highly non-linear capabilities of information processing within the dendritic tree. Such computations are not just digital, but also analog. For example, dendritic spikes are not stereotypic events. Amplitudes of EPSPs and the supralinearity of electrical integration during the stimulation of multiple synapses, e.g., by LTP, vary from the base to the tip of a single dendrite: the base or the middle section of the dendrite shows lower EPSP supralinearity, lower EPSP amplitude, and lower gain compared with the tip (Branco and Häusser, 2011). Moreover, the positioning of excitation along the dendrite is crucial for the amplitude and threshold of basal dendritic spikes (Behabadi et al., 2012). Proximal excitation lowers the threshold for spike generation and increases the voltage gain of distal inputs, whereas distal excitation lowers the threshold for dendritic spike generation at more proximal inputs. Spiking can then be transmitted to astrocytes via gap junction channels (Cx43) and buffered as bits of information in the astrocytic syncytium. Memory, therefore, reminiscent of structures in electronic devices, appears to be stored both in the form of RAM at the neuron level and in the hard discs of astroglial networks.
Astrocytic gap junctional computing
The most abundant connexin in the brain is the astrocyte-specific Cx43. In contrast to Cx32 and Cx26, Cx43 forms permeable channels. Mice lacking astroglial connexins (Cx30-/- Cx43-/- mice) showed amplified and extended fEPSPs, supposedly due to the combination of (1) enhanced and longer-lasting extracellular potassium levels, and (2) accumulation of extracellular glutamate due to an impaired astroglial clearance rate. Hence, precise neuronal communication depends on intact astroglial gap junctional networks, because they provide large uptake capacities and fast redistribution of extracellular potassium and glutamate via astrocytic networks (Pannasch et al., 2011). Mice lacking connexin-30 show enhanced astrocytic glutamate uptake, diminished LTP expression, and repressed fear memory (Pannasch et al., 2014). In the same way, astrocytic glutamate uptake was increased and hippocampal LTP was reduced in mice deprived of the neuronal ephrin A4 receptor or its astrocytic ligand, ephrin A3 (Filosa et al., 2009), and dendritic spine morphology was altered (Murai et al., 2003).
Furthermore, the notion of a "generalized functional astrocytic syncytium" received strong support from the observation of intercellular calcium waves spreading to numerous cells by traveling through gap junctions (Mugnaini, 1986). Those decisive discoveries lent strong support to the idea that the syncytium embodies the basic structure of memory storage in the brain (hard disc), strongly reinforcing Galambos' original assertion (Galambos, 1961). Gap junction coupling within this syncytium fulfils a neuroprotective role in that it is able to maintain a physiological membrane potential in the presence of elevated extracellular K+ concentrations and, moreover, can efficiently distribute excess K+ across the syncytium. This helps to delay or inhibit the induction of spreading depolarizations. Apart from the involvement of gap junctions in potassium buffering, activity-dependent Na+ waves can also transmit ionic currents through gap junction networks (Langer et al., 2012). All those ionic movements can be classified as analog computational events.
Astrocyte microdomains, which are quasicrystalline gap junctional plaques approximately 1.5-12 µm in diameter, are considered the basic structures of postsynaptic information processing. Those plaques are believed to become assembled into packages of memories by crystallization into a long-lived, highly resistant state and may be activated during consciousness (Robertson J. M., 2002). Indeed, an ultrastructural study reports that "interastrocytic gap junctions are packed in a crystalline array" (Massa and Mugnaini, 1982).
Additionally, astrocytes express heterotypic gap junctions that specifically connect to and communicate with all other macroglia and vascular elements forming a functional "panglial syncytium" (Nagy et al., 2003;Theis and Giaume, 2012). This integrative system of glial communication leads Fields to conclude that "glial cells are engaged in a global communication network that literally coordinates all types of information in the brain" and that "such oversight and regulation must be critical to brain function, and neurons are incapable of it" (Fields, 2009). Moreover, it has been shown that siRNA can use gap junctions to travel from one cell to another and modify gene expression in the recipient cell (Valiunas et al., 2005). In this way, the astroglial syncytium is fundamental for the formation of long-term memories by epigenetic regulation of DNA throughout the brain.
This syncytium is currently viewed as a complex, heterogeneous system that is multifunctional and closely regulated (Giaume et al., 2010; Hervé et al., 2012). It is centrally located between individual synapses and global neuronal networks (Robertson J. M., 2002), and astrocytes modulate both [reviewed by Halassa and Haydon (2010), Verkhratsky and Parpura (2013), Volterra (2013)]. Therefore, it has been put forward that the astroglial syncytium is the primary coordinator of brain information processing, including consciousness (Pereira, 2007; Pereira and Furlan, 2010; Mitterauer, 2013), memories (Caudle, 2006; Banaclocha, 2007), intentionality (Mitterauer, 2007), and the development of motor responses (Hassanpoor et al., 2012). Additionally, the glial network has been proposed as the "true substrate for information processing" ("where the thoughts dwell"; Verkhratsky and Toescu, 2006), synonymous with the "mind," and as the manifestation of the "global workspace" (Pereira and Furlan, 2009). Such a critical position suggests that this massive structure of interconnected astrocyte domains forms the body of the computational power of the brain.
Theoretical concepts
Any adverse effect on the computational tasks of astrocytes delineated above could significantly interfere with neuronal computation. Neurons distinguish incoming stimuli within a few milliseconds as individual entities, whereas astrocyte Ca2+ transients, the tentative astrocytic substrates of neural computing, are too slow to encode ultrafast representations (Vardjan et al., 2016). This slower dynamic evidently serves in a complementary manner to cover various time scales. As stated by Murray, "the brain characteristically operates in parallel on a gradient of time scales that are nested and hierarchically organized" (Murray et al., 2014). For instance, attention and decision-making, as well as the surge of emotions, may take seconds, and mood may change in minutes. Time scales of circadian rhythms are in the range of hours, and other life events with impact on learning and memory may extend to even longer time scales in the range of weeks or years (Hari and Parkkonen, 2015).
Computationally, attention consists of a gain change (in response amplitude or contrast) that results in the prioritization of relevant inputs over irrelevant information (Thiele and Bellgrove, 2018). Astrocytes could assist in identifying signal coincidence and help prioritize information through regulation of gain. Variations of Ca2+-dependent glutamate uptake may impede or enhance excitatory synaptic drive (Schummers et al., 2008) or excitatory and inhibitory neurotransmission (Perea et al., 2014). Regulation of gain may also encompass gliotransmission (Takata et al., 2011) and intrinsic neuronal excitability (Sasaki et al., 2012). Regulation of excitatory synaptic strength through gain control can be achieved by lowering glutamate uptake (Poskanzer and Yuste, 2016), by enhancing glutamate release (Halassa et al., 2009), or by GABA uptake via GAT-3 transporters (Shigetomi et al., 2011).
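Expressed as a minimal sketch (hypothetical numbers, not a biophysical model), this kind of gain control is simply a multiplicative scaling of synaptic drive, with the gain term standing in for astrocyte-dependent regulation such as glutamate uptake:

    import numpy as np

    def synaptic_drive(inputs, gain):
        # Multiplicative gain control: identical inputs produce larger responses
        # on channels where astrocyte-dependent gain is higher.
        return gain * inputs

    inputs = np.array([1.0, 1.0, 1.0])   # identical presynaptic drive on three channels
    gain = np.array([1.5, 1.0, 0.5])     # attended, neutral, and suppressed channels
    print(synaptic_drive(inputs, gain))  # -> [1.5 1.  0.5]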
The involvement of astrocytes in cortical slow oscillations (<1 Hz) (Poskanzer and Yuste, 2016) underlines the involvement of astrocytes in network activity beyond tripartite synapses. Slow oscillations are believed to be the default mode of cortical network activity (Sanchez-Vives et al., 2017). In this light, the notion has been put forward that neurons transmit instructions to astrocytes to make other neurons modify their activity via canonical computations.
Hence, neurons may imprint external signals like odors, position, images, words, abstract categories, and executive functions on networks, but astrocytes enable them to design and to operate canonical computations in local mini-circuits within larger-scale networks. One may hypothesize that those canonical computations are manifestations of computation of error-related statistics and/or time in different contexts.
Astrocyte-mediated filtering of synaptic transmission (denoted "astrocyte-like control") involves the formation of so-called logic gates. Logic gates are essential building blocks in neural circuits for performing Boolean operations such as AND, OR, NOT, XOR, and NAND (Binder et al., 2007). Simple combinations of astrocytes and synapses, comparable to the above-mentioned mini-circuits, might in principle allow for the computation of any real-world function in a scalable manner (Song et al., 2016). Therefore, the neurons studied in neuron-focused work should be viewed as computational elements within astrocyte mini-circuits, because dendrites and spines are embedded in an astrocyte "matrix" (Robertson, 2013). Since astrocytes participate in neuromodulation (Ding et al., 2013; Paukert et al., 2014), they might encode precision by temporally compensating prediction errors arising from multiple synapses in astrocyte mini-circuits, so as to warrant sufficient statistics. The variable "precision" or "standard error" may be improved within a range of seconds by neuromodulators. Those molecules produce slower and more diffuse effects than transmitters, which eventually results in the generation of brain states. State-dependent excitability of neuronal networks is associated with specific cognitive functions (Friston, 2009; Stephan et al., 2015).
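As a toy illustration of such gates (not the specific circuits of Binder et al. or Song et al.), each gate below is a threshold unit whose firing threshold can be thought of as being set by an astrocyte-dependent variable:

    def threshold_unit(x1, x2, w1, w2, theta):
        # Fires (1) when the weighted synaptic input reaches the threshold theta,
        # which in this sketch stands in for an astrocyte-set gating level.
        return int(w1 * x1 + w2 * x2 >= theta)

    def AND(x1, x2):  return threshold_unit(x1, x2, 1.0, 1.0, 2.0)  # high threshold
    def OR(x1, x2):   return threshold_unit(x1, x2, 1.0, 1.0, 1.0)  # lower threshold
    def NAND(x1, x2): return 1 - AND(x1, x2)
    def XOR(x1, x2):  return AND(OR(x1, x2), NAND(x1, x2))          # composed from the gates above

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NAND:", NAND(a, b), "XOR:", XOR(a, b))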
During the induction of synaptic plasticity, the slow temporal properties of astrocytes could be essential for maintaining the history of past activity. Indeed, computational models predict that astrocytes improve the synchronization of firing and synaptic coordination (Amiri et al., 2013). Networks are tuned to oscillatory rhythms underlying memory processing (Tewari and Parpura, 2013), and the integration of astrocytes improves network performance (Porto-Pazos et al., 2011; Fields et al., 2014). Within the syncytium, astrocytes may coordinate the excitability of functional neuronal ensembles and support their energetic demands (Chever et al., 2016; Clasadonte et al., 2017).
It looks as if analog information processing prevails at those levels, which leads to the conclusion that, even at relatively high levels of precision in the cell, analog computation is more efficient in its use of resources than deterministic digital computation.
Concluding remarks
Here we would like to return to the central issue of this endeavor: is the human brain analog or digital?
This question stems from our knowledge of modern computer technology, as described at the beginning of this review. The fundamental difference, however, is that the brain makes use of biomolecules for computation. All interactions of those molecules are distinguished by a probabilistic, analog nature. Because its information is based on statistical approximations, the brain is non-deterministic and not "digital" (Sarpeshkar, 2010, 2014). On the other hand, many signals sent around the brain use "either-or" states: an action potential is triggered or not, a cytosine is methylated or not. These events are fundamental elements of communication in the brain as well. However, the binary arithmetic, binary logic, or binary addressable memory of a computer chip are in no way sufficient to capture the full computational power of a neuron. The inevitable noise is attenuated by computation relying on feedback loops. Moreover, this type of computation involves not only neuronal networks and their oscillatory behavior, but also (astro-)glial networks mutually and intimately connected with them, which encompasses higher-order information processing and more sophisticated ways of storing, consolidating, and retrieving memories than the hard discs of computers.
Along those lines, molecular parts of neural cells such as ion channels, receptors, or enzymes, taken as units of information processing, simply cannot be understood as elements of digital, analog, or even hybrid computation alone. Supervision and control are embedded in various levels of cellular and molecular communication, representing a system of more than sufficient flexibility to react and adapt to environmental challenges. Every single cell in the CNS can be viewed as a specific mini-computer endowed with all the necessary tools to process incoming messages adequately, along with efficient means to communicate with others in cellular and molecular networks. It is endowed with many molecular nanomachines executing their tasks in the plasma membrane, the cytoplasm, or the nucleus almost frictionlessly and with close to 100% efficiency. A fascinating example of an analog-digital hybrid machine is the F0/F1-ATPase (Abrahams et al., 1994) located in the mitochondrial membrane, which phosphorylates ADP during clockwise rotation of its shaft (F0), injecting approximately 80 pN nm (close to the free energy of ATP), and dephosphorylates ATP when turning counterclockwise (F1). The shaft's driving force is provided by a hydrogen-ion current ("a proton-driven motor") (Kinosita et al., 2000), which can increase or slow down the propelling speed and the resultant nucleotide production, controlling production on demand. Another example is the kinesin/dynein system mediating fast axonal (anterograde/retrograde) transport of organelles on microtubules (Vale, 1987). Scrutinizing the literature in this respect easily reveals abundant similar examples of higher-order computation everywhere in the central nervous system.
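A quick order-of-magnitude check of the "approximately 80 pN nm" figure, assuming a free energy of ATP hydrolysis of roughly 50 kJ mol^-1 under cellular conditions (the exact value depends on concentrations):

\[ \frac{5\times10^{4}\ \mathrm{J\,mol^{-1}}}{6.022\times10^{23}\ \mathrm{mol^{-1}}} \approx 8.3\times10^{-20}\ \mathrm{J} = 83\ \mathrm{pN\,nm}, \qquad \text{since } 1\ \mathrm{pN\,nm} = 10^{-21}\ \mathrm{J}, \]

which is indeed close to the quoted mechanical work per rotation step.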
In conclusion, it has to be acknowledged that the brain entails many more computing options than any supercomputer. It has been programmed by nature and not by human beings. It is hard to imagine that a man-made computer program will be able to perform complex, abstract tasks like anticipation or intuition, or express social behaviors as basic requirements for living within human populations. All of those need acquisition, reinforcement, and long-term consolidation. And, last but not least, unlike in electronic devices, there is no option to "erase a folder" or to reset the whole system to a certain previous condition. There is still a lot to learn and to understand about the computational power of our brain, assembled and combined by Nature over tens of thousands of years. It is a big challenge, but a fascinating one.
Author contributions
The author wrote and revised the text and constructed the figures.
Funding
This study was supported by Dr. R. Spanagel from the Institute of Psychopharmacology at the Central Institute of Mental Health in Mannheim; his support is highly appreciated.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
"year": 2023,
"sha1": "0d888424e6301e8418f05c8f42d9074d2eae5928",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2023.1220030/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "545cfd0758c73c4fa2994f5967773d31767ddf61",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Evaluation of Antitranspirants for Enhancing Temporary Water Stress Tolerance in Bedding Plants
SUMMARY. Water stress during shipping and retailing reduces the postproduction quality and marketability of bedding plants. Antitranspirants can temporarily prevent plants from wilting by either physically blocking stomata or physiologically inducing stomatal closure, limiting transpirational water loss from leaves. The goal of this research was to evaluate the efficacy of commercially available antitranspirants in enhancing temporary water stress tolerance in bedding plants. Two physical antitranspirants [β-pinene polymer (βP) and vinyl-acrylic polymer (VP)] and three physiological antitranspirants [two sugar alcohol-based compounds (SACs) and a biologically active form of abscisic acid (s-ABA)] were applied to begonia (Begonia semperflorens-cultorum), new guinea impatiens (Impatiens hawkeri), impatiens (Impatiens walleriana), petunia (Petunia ×hybrida), african marigold (Tagetes erecta), and french marigold (Tagetes patula). Physical antitranspirants were sprayed on foliage and physiological antitranspirants were drenched onto the media. All antitranspirants were applied at half (0.5×), equal to (1×), or twice (2×) the manufacturer's recommended rate. Extended shelf life was observed when βP or s-ABA was applied. Treatment with βP increased the shelf life of impatiens and african marigold by 1 and 1.3 days compared with control plants, respectively. The application of βP at 2× was more effective at delaying visual wilting than lower rates (0.5× and 1×) in african marigold. Applications of s-ABA delayed wilting by 1.3 to 3.7 days in all tested cultivars. The shelf lives of impatiens and petunia treated with s-ABA at 2× were extended the most, by 3.7 and 3.0 days compared with control plants, respectively. A rapid reduction of stomatal conductance (gS) was observed within 4 hours of βP or s-ABA application in plants showing delayed wilting symptoms. s-ABA treatment appeared to cause marginal leaf chlorosis in impatiens, whereas application of βP damaged opened flowers in all tested cultivars. The application of VP or SACs did not extend shelf life in any treated plants. These results suggest that foliar application of βP on selected species and treatment with s-ABA on most species would allow bedding plants to withstand water deficit during shipping and/or retailing.

Ornamental bedding plants represent the largest sector of the floriculture industry in the United States, with a wholesale value of $1.96 billion accounting for 45% of all floriculture crops (U.S. Department of Agriculture, 2014). In the last decade, there has been a shift in the retailing of ornamental crops. Customers tend to purchase bedding plants more in mass market retailers or superstores and general retail outlets (such as supermarkets) than in traditional garden centers and florists because of convenience and lower prices (Yue and Behe, 2008). In addition, major growers have moved their production into areas characterized by lower labor costs and more favorable climate conditions to reduce cultivation costs (Ferrante et al., 2015). As a result, the location of crop production may be further away from markets, forcing plants to spend an extended period of time without proper irrigation during shipping and/or retailing (Waterland et al., 2010a; Weaver and van Iersel, 2014). Additionally, during postproduction periods, plants are often exposed to adverse environmental conditions, including high temperatures and inadequate irrigation, which accelerate substrate drying and plant wilting.
Crop losses caused by these poor postproduction conditions are estimated to result in 5% to 20% of unsalable crops (Healy, 2009), and water stress is one of the major causes of diminished aesthetic quality and salability of plants. Therefore, it is highly desired to minimize crop damage caused by water deficit to maintain high quality and prolong longevity of bedding plants during postproduction.
Water stress causes plants to synthesize a phytohormone called abscisic acid (ABA) in the root system, and it is translocated to leaves through the transpiration stream (Taiz and Zeiger, 2010). When ABA reaches guard cells, it binds to ABA receptors that activate an ion efflux, which reduces turgor pressure in the guard cells. Due to loss of turgidity, the guard cells become flaccid and stomata are closed. Closing of stomata inhibits transpiration and allows the plant to withstand water stress by decreasing water loss. Using this principle, growers can utilize antitranspirants to reduce transpiration, thereby limiting water loss during shipping and retailing (Iriti et al., 2009; Odlum and Colombo, 2007; Waterland et al., 2010b).
Antitranspirants are chemical compounds that increase water stress tolerance by preventing transpirational water loss in plants. Based on their mode of action, antitranspirants can be classified into two major groups, physical and physiological antitranspirants (Anderson and Kreith, 1978; Shinohara and Leskovar, 2014; Waterland et al., 2010b). Physical antitranspirants contain waxes, resins, latexes, or polymers that coat the leaf surface and minimize water loss from the plant by blocking stomata (Goreta et al., 2007). Such physical antitranspirants have shown positive effects on water stress tolerance in pepper [Capsicum annuum (del Amor et al., 2010)], peach tree [Prunus persica (Steinberg et al., 1990)], and herbaceous plants (Anderson and Kreith, 1978). Physiological antitranspirants minimize transpiration by inducing plants to close stomata. These compounds may contain ABA or other chemicals that increase the ABA concentration in plants (Waterland et al., 2010b). Exogenous application of ABA has enhanced water stress tolerance in various horticultural crops (Agehara and Leskovar, 2012; Astacio and van Iersel, 2011; Goreta et al., 2007; Shinohara and Leskovar, 2014). Goreta et al. (2007) found that foliar application of ABA enhanced water deficit tolerance of pepper, which was attributed to decreased gS and increased leaf water potential. Overall, antitranspirants have been shown to reduce wilting caused by water stress. However, some studies have demonstrated that plant responses to antitranspirants vary depending on species, concentrations of antitranspirants applied, developmental stages, and growing environmental conditions (Blanchard et al., 2007; Dunn et al., 2012; Shinohara and Leskovar, 2014; Waterland et al., 2010a).
Antitranspirants have been used to help plants withstand stress caused by water deficit, and many studies have focused on fruits, vegetables, turf, field crops, and woody plants. Little research has been conducted on the effect of antitranspirants on the postproduction quality of bedding plants. Furthermore, most research has evaluated individual antitranspirants, and an efficacy comparison among different physical and physiological antitranspirant products is lacking. The physical antitranspirants in this study contained either βP or VP as a coating agent, and the physiological antitranspirants were two sugar alcohol-based compounds (SAC1 and SAC2), which are supposed to increase the concentration of ABA in plants, and s-ABA. SAC1 contains xylitol, and SAC2 contains polyhydric alcohol and extracts from seaweed [e.g., red algae (Gracilaria sp.)], corn (Zea mays), and berries [e.g., brambles (Rubus sp.), blueberry (Vaccinium sp.), strawberry (Fragaria ×ananassa)]. The goal of this research was to evaluate the efficacy of these commercially available antitranspirants in enhancing water stress tolerance in bedding plants.
All bedding plants were treated with antitranspirants when they reached a marketable stage of at least one open flower per plant. Begonia, new guinea impatiens, and impatiens were treated in June 2013, and marigold and petunia were treated in July 2013. Plants were irrigated with deionized (DI) water to container capacity 12 h before treatment. Physical antitranspirants were sprayed on the top and the underside of the plant canopy (about 35 mL per plant) with a pressurized sprayer (Regulator Bak-Pak; H.D. Hudson, Chicago, IL), and physiological antitranspirants were drenched into the substrate (60 mL per pot). Control treatments for physical and physiological antitranspirant applications were sprayed and drenched with DI water, respectively, and then either irrigated daily with 100 mg·L-1 N (irrigated control) or had water withheld (water-stressed control) during the period of the experiment. The physical antitranspirants used were βP (Wilt-Pruf®; Wilt-Pruf Products, Essex, CT) and VP (Moisturin; WellPlant, Sparks, NV). The physiological antitranspirants used were SAC1 (Stasis™; Natural Industries, Houston, TX), SAC2 (Root-Zone™; GSI Horticultural, Bend, OR), and s-ABA (ConTego™, VBC-30101; Valent BioSciences, Libertyville, IL). All antitranspirants were applied at either half (0.5×), equal to (1×), or twice (2×) the manufacturer's recommended application rate (Table 1). Plants were held in the greenhouse under the previously described environmental conditions for subsequent evaluations. Half of the plants treated with each antitranspirant had water withheld (water-stressed) until all treated plants reached a visual wilt status rating of 3 or below (unmarketable), as described by Waterland et al. (2010c). Wilt status ratings were from 1 to 5, with 5 = completely turgid, 4 = soft to the touch but still upright, 3 = starting to wilt, 2 = severely wilted, and 1 = wilted to the point that leaves are dried and desiccated (Waterland et al., 2010a). The other half were irrigated daily with 100 mg·L-1 N (irrigated daily) to determine whether antitranspirants caused any side effects on plants.
Visual observations of wilt status were taken daily. Evaluation of wilt status started just before the application of antitranspirants and continued until all plants reached a visual wilt status rating of 3 or below. The shelf life of water-stressed plants was calculated as the number of days from the initiation of water withholding until plants reached a wilt status rating of 3 (Waterland et al., 2010c).
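A minimal sketch of this calculation (hypothetical ratings, assuming one rating per day starting the day water was withheld):

    def shelf_life(daily_wilt_ratings):
        # Days from withholding water until the wilt rating first drops to 3 or below
        # (ratings: 5 = completely turgid ... 1 = dried and desiccated).
        for day, rating in enumerate(daily_wilt_ratings, start=1):
            if rating <= 3:
                return day
        return len(daily_wilt_ratings)  # never reached an unmarketable rating in the observation window

    print(shelf_life([5, 5, 4, 4, 3, 2]))  # -> 5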
EXPT. 2: WILT STATUS AND STOMATAL CONDUCTANCE. Stomatal conductance was measured with a portable photosynthesis system (LI-6400XT; LI-COR, Lincoln, NE). Three fully expanded leaves per plant were tagged for measurements. Stomatal conductance measurements were taken 1 d before treatment, 4 h after the treatment, daily until all plants showed visual wilting, and 3 d after plants were rewatered. A leaf was placed into a light-emitting diode light source chamber (6400-02B; LI-COR). Environmental conditions in the chamber were set at 1000 µmol·m-2·s-1 PPFD, 400 µmol·mol-1 carbon dioxide, and 25 °C block temperature. Readings were conducted from 1000 to 1400 HR. Data are the means of measurements from three replications (or three plants), with three leaves measured per plant (n = 3).
STATISTICAL ANALYSIS. Experiments were conducted as a randomized complete block design with three replications (n = 3). Analysis of variance was performed with SAS (version 9.3; SAS Institute, Cary, NC). Bedding plants were blocked by replication based on plant position in the greenhouse and watering regimen (irrigated daily vs. water-stressed). Differences among treatment means were assessed by Tukey's test at P ≤ 0.05.

EXPT. 1: SHELF LIFE OF ANTITRANSPIRANT-TREATED BEDDING PLANTS. […] (Table 2). All tested bedding plants showed delayed visible wilting following s-ABA treatment, with shelf life extended by 1.3 to 3.7 d depending on the cultivar (Table 2). In contrast to βP or s-ABA treatment, shelf life extension was not observed in any species treated with VP or the SACs. Among the three application rates (0.5×, 1×, and 2×), longer shelf life extension was observed at the higher rate (2×) than at the lower rates (0.5× and 1×) in 'Taishan Orange' african marigold treated with βP and in 'Wave Pink' petunia treated with s-ABA (Table 3). Impatiens treated with s-ABA had longer shelf life extension at 1× and 2× than at 0.5× (Table 3). All other plant species treated with βP or s-ABA, and all plants treated with VP or either SAC, showed no difference in shelf life extension regardless of application rate (data not shown). Overall, the longest shelf life extension was observed in 'Taishan Orange' african marigold treated with βP at 2× (by 2 d), and in impatiens and new guinea impatiens (by almost 4 d) when treated with s-ABA at 2× and 1×, respectively (Tables 2 and 3).
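The analysis of variance and Tukey comparisons described in the Statistical Analysis paragraph were run in SAS 9.3; as a rough, non-authoritative equivalent, the same randomized-complete-block model could be sketched in Python with statsmodels as below. The column names (shelf_life, treatment, block) and the input file are illustrative assumptions, not the study's actual data layout.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("shelf_life.csv")  # hypothetical: one row per plant

# Randomized complete block design: treatment effect plus a block term
model = smf.ols("shelf_life ~ C(treatment) + C(block)", data=df).fit()
print(anova_lm(model, typ=2))

# Pairwise comparison of treatment means at alpha = 0.05 (Tukey's HSD)
tukey = pairwise_tukeyhsd(endog=df["shelf_life"], groups=df["treatment"], alpha=0.05)
print(tukey.summary())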
EXPT. 2: WILT STATUS AND STOMATAL CONDUCTANCE OF ANTITRANSPIRANT-TREATED BEDDING PLANTS. Applications of βP and s-ABA, which extended shelf life in Expt. 1, were repeated to examine whether there was a relationship between wilt status and gS. Two cultivars of african marigold, new guinea impatiens, and 'Ultra Red' petunia were treated with βP and s-ABA at the recommended rate (1×). Additionally, the application of βP at 2× was included for both cultivars of african marigold because the longest shelf life was observed at that rate in Expt. 1 (Table 3). As in Expt. 1, the application of βP delayed wilting symptoms in african marigold, but not in new guinea impatiens or petunia (Fig. 1). 'Antigua Yellow' african marigold treated with βP at 1× and 2× did not show any visible wilting symptoms 2 d after treatment, whereas stressed control plants had a lower wilt status rating on the same day (Fig. 1B). The other african marigold cultivar, Taishan Orange, treated with the higher rate of βP (2×) showed less visible wilting than stressed controls or plants treated with βP at 1× on 2 and 3 d after treatment (Fig. 1C). Petunia did not show any difference in visual wilting symptoms between stressed controls and βP-treated plants during the period of water deficit stress (Fig. 1D). Stomatal conductance decreased 45% to 73% relative to control plants within 4 h after βP treatment in both irrigated-daily and water-stressed new guinea impatiens and the two cultivars of african marigold (Fig. 1A-C). As water stress progressed, gS of stressed controls declined and became similar to that of βP-treated plants 4, 2, and 2 d after treatment in new guinea impatiens, 'Antigua Yellow' african marigold, and 'Taishan Orange' african marigold, respectively (Fig. 1A-C). The application of βP reduced gS in water-stressed petunia 4 h after treatment, but gS became similar to that of controls 1 d after treatment (Fig. 1D). Comparing the βP application rates in the two african marigold cultivars, there was no difference in gS between plants treated at 2× and 1× at 4 h after treatment (Fig. 1B and C).
All plants treated with s-ABA had delayed visual wilting (Fig. 2) and exhibited a wilt status rating over 4 for 4 d (new guinea impatiens) and 3 d (african marigold and petunia) after application (Fig. 2). Four hours after s-ABA application, reduced gS was observed in all cultivars tested under both irrigated-daily and water-stressed conditions (Fig. 2). Stomatal conductance of water-stressed control plants reached nearly the same level as that of s-ABA-treated plants 2 to 4 d after treatment (Fig. 2).

[Figs. 1 and 2 captions: βP was applied at the manufacturer's recommended rate (1×) in all tested cultivars; african marigolds were also treated with βP at 2×. Half of the plants were irrigated daily (left panels), and the other half had water withheld until plants reached a wilt status rating of 3, after which irrigation was resumed for 3 d (right panels). Stomatal conductance was measured 1 d before application, 4 h after application, daily until all plants showed wilt symptoms, and 3 d after rewatering. Irrigated controls had a wilt status of 5 for the duration of the experiment, and water-stressed plants had a rating of 5 after the 3-d rewatering period. Wilt status ratings were from 5 to 1, where 5 = completely turgid, 4 = soft to touch but still upright, 3 = starting to wilt, 2 = severely wilted, and 1 = wilted to the point that leaves are desiccated. Vertical bars are standard errors of the means of three replications (n = 3). *, **, *** Significant at P ≤ 0.05, 0.01, or 0.001, respectively.]
Discussion
Two physical and three physiological antitranspirants were evaluated for enhancing temporary water stress tolerance in eight popular cultivars of bedding plants. Among the five antitranspirants examined, only βP in certain cultivars and s-ABA in all cultivars showed positive effects on enhancing water stress tolerance. The three other antitranspirants (VP and the SACs) were not effective, and consequently further evaluation of VP and the SACs was not performed.
A sharp decline in gS 4 h after application of βP and s-ABA, compared with controls, indicated that βP effectively blocked stomata and that s-ABA induced stomatal closure at an early stage after application (Figs. 1 and 2). The delay in wilting was likely due to stomatal closure and the subsequent reduction in water loss, which delayed the loss of leaf turgidity (Figs. 1 and 2). A rapid reduction in gS resulting from application of antitranspirants helped to maintain a high water status under water stress and thus improved water stress tolerance (Astacio and van Iersel, 2011; Kim and van Iersel, 2011; Shinohara and Leskovar, 2014; Waterland et al., 2010b). Additionally, Anderson and Kreith (1978) reported that βP treatment initially reduced the transpiration rate by reducing gS of sweetclover (Melilotus officinalis) leaves. The application of βP reduced water use of peach trees by 40% immediately after treatment, with a subsequent decrease in water use of 30% over the next 30 d (Steinberg et al., 1990). Although βP treatment reduced gS within 4 h in petunia, transpiration resumed 1 d after treatment, as indicated by gS similar to that of controls, and wilting was not delayed (Fig. 1D). Therefore, a rapid and sustained reduction in gS by antitranspirants before water stress appears to greatly reduce transpirational water loss at the beginning of the water stress period, thus delaying wilting symptoms. As βP- and s-ABA-treated plants were irrigated daily, gS gradually increased to the level of irrigated controls (Figs. 1 and 2). Thus, the efficacy of the antitranspirants diminished as plants were irrigated or as time passed.
Our findings also support the idea that the effect of βP on enhancing water stress tolerance is species dependent. Application of βP did not extend shelf life universally, but only in impatiens and african marigold among the six species tested at the manufacturer's recommended rate (Table 2). Studies have reported that physical antitranspirants produce different responses depending on species (Davies and Kozlowski, 1974; Hummel, 1990). Species differences in response to physical antitranspirants might be associated with differences in the shape, size, and density of trichomes (Goreta et al., 2007). Because physical antitranspirants are sprayed on the leaf surface to coat the stomata with a thin film of the chemical, trichome patterns might affect the chemical's adhesion and retention on leaves differently (Palliotti et al., 2010; Pathan et al., 2009). In artichoke (Cynara cardunculus), physical antitranspirants were not effective in mitigating water stress, presumably because of the dense glandular trichomes (Shinohara and Leskovar, 2014). Indeed, the petunia cultivar evaluated in this research has hirsute leaves, whereas african marigold has rather glabrous leaves. The new guinea impatiens tested also has fewer trichomes than petunia, and reduced gS was observed when βP was applied (Fig. 1A). Denser trichomes might have prevented βP from forming a thin film layer over the stomata. The species-dependent response to βP in this experiment may therefore have been due to differences in leaf surface trichome patterns.
Application of s-ABA enhanced water stress tolerance in all plants tested, and the extension of shelf life ranged from 1.3 to 3.7 d depending on species and cultivar. Responses to ABA treatments have been shown to vary according to species (Blanchard et al., 2007; Waterland et al., 2010a). Blanchard et al. (2007) reported that sprench application of s-ABA at 125 or 250 mg·L-1 delayed wilting by 1.1 to 5.8 d in 'Harmony Grape' new guinea impatiens, but no significant effect of s-ABA on shelf life was observed in 'Tempo Lavender' impatiens or 'Vabana' bacopa (Sutera cordata). In 'Double Fiesta Ole Purple Stripe' impatiens, application of s-ABA at 250, 500, and 1000 mg·L-1 extended shelf life by 1.7 to 3.7 d (Table 3). Longer shelf life of 'Xtreme Lavender' impatiens either drenched or sprayed with s-ABA was also observed by Waterland et al. (2010a). The differences between previously published reports and the results of our research may be due to differences in cultivar selection, s-ABA concentration, application method, and environmental conditions.
In contrast to βP and s-ABA, VP and the SACs did not exhibit any positive or negative effect on shelf life extension in any of the eight cultivars at the manufacturers' recommended rates (Table 2). VP has been shown to increase the survival rate of green ash [Fraxinus pennsylvanica (Harris and Bassuk, 1995)] and to reduce water use in nursery trees after transplanting (Englert et al., 1993). However, those plants were subjected to water stress by transplanting and might not have experienced water stress comparable to that in the present study, in which water was withheld from container-grown plants. Dunn et al. (2012) found that SACs delayed visual wilting of herbaceous and woody ornamentals compared with nontreated controls, but in a species-dependent manner. The authors suggested that the species-dependent responses might be due to differences in application rates, retention, and accumulation of the chemicals in soilless media. SACs are expected to lower the water potential of the growing medium to induce a water stress response. In our research, all plants treated with either SAC showed no delay in wilting symptoms even at twice the recommended rate, indicating that the SACs failed to trigger a water stress response.
Although application of βP was effective in certain plant species, βP caused floral damage in all plants tested regardless of application rate (Fig. 3). Floral damage was first observed within a few hours after βP treatment and appeared to accelerate flower senescence, resulting in poor-quality bedding plants. However, no damage was observed on shoots, flower buds, or leaves. Steinberg et al. (1990) reported that βP did not significantly reduce growth or bud and fruit initiation in peach trees. Application of βP should therefore be recommended before flower opening. On the other hand, s-ABA caused chlorosis on the leaf margins of impatiens. Foliar chlorosis and leaf abscission have frequently been mentioned as side effects of ABA application (Agehara and Leskovar, 2012; Astacio and van Iersel, 2011; Kim and van Iersel, 2011; Waterland et al., 2010a, 2010c; Weaver and van Iersel, 2014). Chlorosis is known to increase with increasing ABA concentration (Agehara and Leskovar, 2012; Astacio and van Iersel, 2011; Weaver and van Iersel, 2014).
The efficacy of five antitranspirants in increasing temporary tolerance to water stress was evaluated in six species of bedding plants. Of the five antitranspirants, βP and s-ABA enhanced temporary water stress tolerance in severely water-stressed plants by blocking and closing stomata, respectively. This explanation was supported by the observation that gS decreased quickly upon application of either antitranspirant. Consequently, shelf life was increased by 1 to 3.7 d depending on species and application rate. The application of βP or s-ABA as an antitranspirant would allow some bedding plants to withstand temporary water stress during postproduction, such as in shipping and retail environments. However, the efficacy of βP appears to be species dependent in our research, possibly because of differences in leaf surface trichome patterns. Caution is warranted, given that the application of βP and s-ABA can cause floral damage or leaf chlorosis. Floriculture growers should evaluate the effects of antitranspirants on their crops to maximize the aesthetic quality and longevity of their products without side effects. | 2019-04-01T13:14:56.278Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "03bd7a009d05d982ec82060cbcac1a7181c0daa5",
"oa_license": null,
"oa_url": "https://journals.ashs.org/downloadpdf/journals/horttech/26/4/article-p444.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5d4465d77e55ee7ef84444d8040807a9ae037830",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
6329560 | pes2o/s2orc | v3-fos-license | Who will attack the competitors? How political parties resolve strategic and collective action dilemmas in negative campaigning
Negative campaigning presents parties with a collective action problem. While parties would prefer to have their competitors attacked, potential backlash effects from negative messages mean that individual politicians typically lack the incentives to carry out such attacks. We theorize that parties solve this problem by implementing a division of labour that takes into account the incentives of individual office holders, their availability for campaign activity, and media relevance. Drawing on these arguments we expect that holders of high public office and party leaders are less likely to issue attacks, leaving the bulk of the ‘dirty work’ to be carried out by party floor leaders and general secretaries. Examining almost 8000 press releases issued by over 600 individual politicians during four election campaigns in Austria, we find strong support for our theoretical expectations.
Introduction
In modern democracies electoral campaigns ought to serve the citizens by allowing the candidates to present themselves, their programs, and their records and to conduct a public debate focused on them (LeDuc et al., 2002). However, parties and candidates take an active role in these debates also by attacking the achievements, plans, and candidates of competing parties. Quite simply, the parties' strategic objectives are to appear attractive to the electorate and at the same time to reduce the attractiveness of their competitors. The two resulting types of behaviour are called positive and negative campaigning, respectively.
Parties often consider negative campaigning essential to influence the outcome of the election, as the weaknesses of their competitors may otherwise remain unnoticed. Riding effective attacks is therefore a task parties have to accomplish. Yet there is a tension between the two goals of appearing attractive and reducing the attractiveness of others, as research has established a backlash effect of negative campaigning (Lau et al., 1999, 2007). 1 Even though some studies report beneficial effects of attacks (Geer and Lau, 2006), mass media and voters typically dislike them, with the consequence of popularity losses for the attacker. In multiparty systems, attacking politicians and parties may also suffer policy and office costs, as targeted (prospective) coalition partners may be less willing to cooperate. As a consequence, party elites face a disincentive to attack other parties.
In the United States, parties and candidates have resolved this dilemma by farming out attacks, and toxic ones in particular, to outside groups not formally tied to a candidate or party, the so-called (Super) PACs (Brooks and Murov, 2012;Painter, 2014). In European party democracies such farming out of campaign tasks is at best a nascent development as political parties still dominate the contest (Farrell and Schmitt-Beck, 2008). This leaves them with two strategic dilemmas: First, how to attack competitors while keeping the backlash effect for the party at bay? Second, how to overcome the collective action problem that rests in the conflict between collective party gains in terms of discrediting competitors and individual costs in terms of popularity losses with the electorate and poisoned relationships with other parties' politicians?
To answer these research questions we draw on two rarely connected literatures: that on party organizations and that on political roles. Combining these literatures we begin building what may eventually become a theory of intra-party roles based on various party and public offices in the context of systems with coalition government. From there we derive several hypotheses, which we test with data from the last four national elections in Austria (2002, 2006, 2008, and 2013), a typical European parliamentary democracy. Parties with diverse ideological backgrounds competed in these elections, and the observation period also includes different types of government: until 2006, Austria was governed by a centre-right coalition; since then grand coalitions have been in power.
Empirically, we base our study on a content analysis of press releases. We choose this important communication means as it is accessible to a great number of party actors who all should share the collective party goals but face different individual incentives to act upon them. Hence, this source should reveal different degrees of negativity in the campaign communication as a consequence of varying roles.
Our results largely confirm this expectation. Moreover, these differences in the level of negativity are observable not only between subjects but also within subjects, as shown by our analysis of a sub-group of individuals who changed their offices -and thus their expected roles -over time.
Intra-party roles and campaign communication
Modern democracies are characterized by partisan dealignment and increasing levels of electoral volatility (Dalton and Wattenberg, 2000). Against this background electoral campaigns have greatly gained in importance. It is here where parties present their candidates, ideas for future policies, and records, but also engage with their competitors. This is reflected in a large and growing literature on campaigns and campaigning (Bowler and Farrell, 1992;Brady and Johnston, 2006;Jacobson, 2015;Plasser and Plasser, 2002;Schmitt-Beck and Farrell, 2002;Trent et al., 2011). Much campaigning, this literature has shown, is negative in the sense that its focus is not on the relevant actors' claimed strengths but their competitors' alleged weaknesses and faults (Lau and Rovner, 2009;Nai and Walter, 2015).
However, research on negative campaigning has also established a backlash effect: While attacks may hurt the targets, they also harm the attacker (Lau et al., 1999: 856-857; Lau et al., 2007: 1182-1183). Mass media are more likely to report negative messages, but journalists may also connect the sender to aspects of politics disliked by the voters. Notwithstanding such a backlash effect, political parties may have no better option than to also campaign negatively. If no one else highlights the weaknesses of their competitors, if, for instance, the mass media display a partisan bias, are docile vis-à-vis incumbents, or are simply superficial, there may be no other way to make voters aware of such faults (Geer, 2006). Leaving aside some protest parties, parties as such are unlikely to run entirely negative campaigns. Mixed campaigns with both negative and positive party communication are more likely, so that the backlash effect might be contained. In addition to such balancing, we theorize that parties can further minimize the costs of negative campaigning by an intelligent handling of that task.
Parties are collective organizations but organizations can act only through individuals. According to the political entrepreneurial perspective of politics (Laver, 1997), these individuals 'do not have partisan goals per se' (Aldrich, 2011: 5). They rather have career and policy goals in government for which the party is an instrument. Individually striving for such goals can lead to results that are inferior to coordinated behaviour and hence not the best collective outcome for political parties. In short, political parties face a collective action problem when it comes to negative campaigning. This leaves us with a double puzzle: How do political parties manage to attack their competitors if individual incentives for such behaviour are lacking? And how do parties as organizations contain the detrimental effects of negative campaigning?
Answering these research questions requires looking into political parties and their campaign communication in some detail. However, the literature on negative campaigning in European party democracies typically uses 'party' as the unit of analysis and hence cannot provide an answer to this question (Elmelund-Praestekaer, 2008; Hansen and Pedersen, 2008; Schweitzer, 2010; van Heerde-Hudson, 2011; Walter, 2014; Walter and van der Brug, 2013; Walter and Vliegenthart, 2010; Walter et al., 2014). Nor did researchers who studied (female) party leaders (Walter, 2013) or the behaviour of presidential candidates (Sigelman and Shiraev, 2002) look inside parties. Only a study on communication patterns in a Dutch election campaign provides some intra-party differentiation (de Nooy and Kleinnijenhuis, 2013). Likewise, Schweitzer's (2010) study of online campaigning compares party leaders to other party representatives.
In contrast, the US literature largely focuses on individual candidates and hence allows for comparing the campaign behaviour of candidates within the same party. Yet, their competitive context is very different. In a way each candidate for legislative office resembles a party that aims for success in the relevant single-member constituency and relies on his or her own campaign organization. Presidential elections, by contrast, rather resemble a team effort as the candidates and their running mates are tied together. In this regard Sigelman and Buell (2003) found some evidence for the 'conventional wisdom' that vice-presidential candidates carry the main burden of negative campaigning.
The case of US presidential elections thus suggests some division of labour within a party's elite in negative campaigning. Such division of labour should be much more systematic in Europe's strong party organizations. Although the literature on political parties has always displayed a strong interest in issues of organization and intra-party politics it has not dealt with this particular question. While we know much about the internal structures of parties in terms of collective decision-making bodies (Katz and Mair, 1994), the comparative literature on political parties is largely silent on the internal division of labour. This is even true for the one office given to individuals that has received most attention, that of party leader. Although a sizeable literature exists on party leaders, it is mostly on their election and de-selection rather than what they do in office (Pilet and Cross, 2014). And although their office performance is essential in these processes the literature typically avoids mapping their behaviour but rather draws on external evaluations such as public opinion polls or electoral results. The growing literature on the importance of leaders in elections (Aarts et al., 2011;Bittner, 2011;King, 2002) also rarely focuses on their actual behaviour during campaigns -with the major exception of TV debates -but examines rather stable factors such as their personality or issue positions.
Party statutes may also mention a few more positions given to individuals -such as secretary, financial officer, and keeper of the minutes -but typically they do not describe these jobs in detail. Aldrich (2011: 17-18) provides a basic differentiation based on (a) those who hold elective office ('office seekers') and (b) professional communication experts and activists ('benefit seekers'). Yet the empirical literature has not dealt much with this topic. Regarding the work of party employees, Webb and Kolodny described it as 'one of the most under-researched fields in the study of political parties ' (2006: 337). We may therefore approach our research question from a different angle. This perspective is, as Kitschelt (2006: 288, note 281) dubs it, 'task-directed' functionalism (which is different from 'explanatory' functionalism). In this vein, Schlesinger (1993) takes the competition in elections as the most basic feature in the study of political parties. An 'electoral imperative' dictates office-seeking parties a number of tasks. These tasks are different from the more abstract goals of office-seeking, policy-seeking, or vote-seeking (Müller and Strøm, 1999;Strøm, 1990) which Schlesinger reserves for individuals. 2 He rather provides a list of tasks that need to be fulfilled in the US system, ranging from the declaration of candidacy to behaviour in office (1993: 484-493). One of these tasks is dubbed 'complex communication' delineating the need to 'convince voters'. As indicated above, in modern democracies this often involves discrediting competitors. Discussing different regime types, Schlesinger indicates that the individual incentives of party officials to cooperate in achieving the task of convincing voters differ in unitary (parliamentary) and divided (presidential) systems, with the former ones being more cooperative than the latter. Yet he allows for 'some independent campaigning' (1993: 490) of party nominees even in unitary systems if they campaign in geographically delimited areas or compete for different offices. Why would candidates differ under these circumstances? Perhaps because they relate to different reference groups (constituencies) and face expectations closely tied to their respective offices? Such ideas have been especially prominent in the literature on political roles.
Originally, political roles received most attention in the study of legislatures (Blomgren and Rozenberg, 2012b; Müller and Saalfeld, 1997; Searing, 1994; Wahlke et al., 1962). Their internal organization builds on a number of formal offices such as president or speaker, committee chair, and party floor leader. These offices are associated with very distinctive formal tasks, but they are often additionally related to normative expectations about how the tasks should be performed and how the office holders should behave even beyond their formal duties. Leaving aside the once dominant structural-functional approach (Blomgren and Rozenberg, 2012a: 14-16), contemporary research has integrated the concept of 'roles' into the rational choice paradigm. In Searing's (1994) 'motivational approach', roles take a 'purposive' nature. They are defined according to the purposes the politicians pursue. Specifically, Searing distinguishes 'position roles', tied to specific offices that come with strong expectations about how the role is to be performed, and 'preference roles' that are less well defined and allow politicians to pick and choose among potential activities. Strøm (1997, 2012) has continued the move towards a concept of rational behaviour. Politicians, he argues, have preferences they try to advance by making strategic decisions about the employment of scarce resources within the given institutional environment and its incentive structures. According to Blomgren and Rozenberg, roles are 'systematic behaviour' and 'actions that repeat over time' (2012a: 28-29). Roles, then, are rational responses to institutional incentives.
One important aspect in this regard is the degree of partisanship attributed to a specific office. While Wahlke et al.'s famous dichotomy of 'party man ' vs 'independent, maverick, nonpartisan' (1962: 343-376) is not very useful for application in contemporary European party democracies, its underlying dimension is of relevance to our research. Depending on their particular positions in the political system, politicians can be more or less overt partisans in their behaviour.
Modern role theory thus emanates from the parliamentary context. Although this arena remains central in many respects, contemporary politics has moved the political communication battlefield out of it to a large extent. Political actors not only rely on the mass media to transmit their messages, they also approach them directly, tailoring their messages according to the requirements of journalistic transmitters and a mass audience. That is why we build a theory of actor behaviour in this realm.
Theoretical expectations
Our theorizing starts from formal positions and most basically differentiates them into public vs party offices. These offices can be understood as 'positional roles' with regard to our variable of interest, namely negative campaigning. While we develop strong expectations for high offices, the incentives and opportunities for such behaviour are less clear for many lower offices. They rather resemble Searing's (1994) 'preference roles'.
In order to predict a politician's inclination to carry out attacks we need to answer three questions: First, what is their incentive structure to attack competitors? Second, to what extent are they available for genuine party (rather than public office) work? Third, what is their relevance for media, meaning what chance do they have to get their messages reported by the mass media due to their office(s)? Only when these questions are answered in a particular way can we expect the individuals to internalize the party demand upon negative campaigning, to regularly act accordingly, and to achieve effect.
In terms of public offices, parliamentary regimes appear similar enough to allow a straightforward cross-national application, though we expect differences between systems with single-party and coalition governments. We differentiate the following public offices: head of government, cabinet member, and speaker of parliament. In terms of party offices, by contrast, the empirical variation is certainly greater. A cross-national application of our approach would therefore require starting from the conditions we formulate rather than the specific offices we relate to these conditions in the Austrian context. These party offices are: party leader, party floor leader, and party general secretary. With respect to the six public and party offices we additionally consider differences between parties in government and opposition. All other holders of public and/or party office constitute the group of 'other politicians'.
Incentive structures
Assuming that politicians are rational actors, the first and most fundamental question concerns the office-related incentive structure for negative campaigning. The distinction between government and opposition is crucial for the definition of some of the public offices and this also impacts on the incentive structure of party offices.
Head of government. This office is the main prize of politics in parliamentary systems. For political parties, incumbents (most of the time) are electoral assets that need to be preserved. Clearly, such preservation would also serve the career ambitions of the incumbents. Ascending to statesmanship by meeting with world leaders might help; descending to mere partisan politics by engaging closely with political competitors is more likely to have the opposite effect. At the same time the job of prime ministers is to keep the government running. This means to resolve conflict rather than to forge it in coalition governments. All this suggests that heads of government have strong incentives to avoid negative campaigning.
Cabinet member. The incentive structure for cabinet members is similar to that of the head of government. They are among the most visible party representatives and for the sake of the party and their own career they should avoid public opinion backlash. While they are not primarily responsible for the working of the government tout court, they clearly contribute to it. Moreover, their own success as ministers may depend on the goodwill of coalition partners. They, therefore, have an incentive not to strain relations with them and to avoid clashing with opposition politicians by riding attacks on them.
Speaker of parliament. This office is close to the top of any state's formal political hierarchy and in most European countries it is met with strong non-partisan role expectations (Jenny and Müller, 1995). While this first and foremost means procedural fairness in the conduct of parliamentary affairs, it is easy to see that credibility for such behaviour may suffer from taking a leading role in partisan attacks. Office holders may also aim for even higher office such as head of state. In constitutional monarchies where this career option is not available, the position of speaker is typically taken by elder statespersons who have grown out of party politics. In any case, speakers of parliament typically have very little motivation to expose themselves to the backlash effect that attacks on opponents produce.
Party leader. The party leader is increasingly important as an electoral asset of the party (Aarts et al., 2011;Bittner, 2011;Costa Lobo and Curtice, 2015;McAllister, 2007). He or she has both a party and personal incentive to avoid backlash effects and abstain from negative campaigning. These statements are first and foremost relative to other officials of the same party, allowing some differences between government and opposition parties. Specifically, it is the role expectation of the opposition to criticise and attack the government. We therefore expect that leaders of opposition parties practice less self-constraint in negative campaigning. Their taking a more active part in attacks may also be a necessity if journalists tend not to report what less prominent opposition politicians say.
Party floor leader. Although the basis of this office is a public one -being a member of parliament -leading the parliamentary party is a genuine party office. Leading the party in parliamentary battles without doubt requires attacking competitors. Yet being the party's spearhead is not the only task associated with this office, in particular in government parties where floor leaders are part and parcel of the machinery of government. In coalitions this task typically includes the parliamentary coordination with the other government parties. While floor leaders of opposition parties have strong incentives to attack all their opponents, those of government parties might be interested in smoothing rather than straining intracoalition relations and to concentrate their fire on the opposition.
Party general secretary. In his characterization of party secretaries even in democratic parties, Duverger refers to Lenin's What is to be done? There, Lenin praised the secretaries' 'total and permanent devotion to the party'; together with their availability (see below) this makes them the party's 'real agitators' (see Duverger, 1959: 155). Lenin's revolutionary avant-garde clearly represents the extreme end but, according to Duverger, more than a kernel of truth also for party secretaries in democratic parties. In addition to the material rewards they receive from the party there are also symbolic rewards from the party activists who are believed to be more radical than passive party members and voters and often appreciate offensive behaviour of their leaders (see May, 1973). Despite large variation in their internal organization, most European parties feature a functional equivalent of the party secretary, usually called general secretary or secretary general (e.g. in Germany, Austria, Denmark, Sweden, Finland, Spain, Ireland, or the UK), party secretary (Belgium), or party president (the Netherlands). The job description usually features the day-to-day operation of the extra-parliamentary party organization, in many cases including the management of election campaigns and speaking on behalf of the party.
Other politicians. The offices we have singled out should comprise a large share of politicians who contribute to public campaign discourse. The remaining politicians include MPs, parliamentary candidates, and sub-national office holders. They tend to have less relevance for media and quite heterogeneous incentive structures to participate in the campaign and attack competitors in particular.
While these expectations seem plausible for the public and party offices per se, real world politics is somewhat more complex as several individuals combine party and public offices. In such circumstances we expect the incentives from public office to be stronger. This is in line with Aldrich who argues that politicians take the party as 'the instrument for achieving' their 'more personal and fundamental goals' (2011: 5) in public offices.
Availability for party activity
The second important question is to what extent office holders are available for party activity. While making a contribution to the public political debate may not require much time per se, the precise timing of such interventions is often crucial. Reacting too late may mean that the public floor de facto has been left to the competitors. A too late response may miss the editorial deadlines of important mass media and fail to balance or override messages from political competitors. Availability therefore to a large extent means time flexibility and accessibility for the party's campaign strategists and 'war room' managers.
Such availability is severely limited in the case of members of the executive who may be bound up in meetings or international travel (especially to Brussels), duties that do not vanish in campaign periods. Holders of high parliamentary office -the presidents of parliament and the floor leaders -should display much greater availability, as the parliament typically is not in session when the election approaches. This is probably less true for MPs, many of whom will have to combine private occupation and constituency campaigning.
With respect to party office holders, the party secretaries again are most likely available. Contrary to other politicians they are almost permanently present in the capital and the party headquarters. Again Duverger's reference to Lenin's work is telling: Being employed by the party, they can serve it 'with no interruption or hindrance due to external cares' (Duverger, 1959: 155).
Relevance for media
The classic criteria of 'newsworthiness' applied by journalists include the prominence of the sender in addition to the newness and negativity of the message (O'Neill and Harcup, 2009). The most likely source of prominence is high public office followed by high party office.
Three groups of actors seem plausible: The top group includes the head of government and the (other) party leaders and top candidates respectively. A middle group comprises the members of the cabinet, the speakers of parliament, the parliamentary floor leaders, the party secretaries, and leading sub-national executive officers. A third group, finally, consists of MPs, other sub-national office holders, and candidates without public office.
We can now bring the discussions of the three questions together. Clearly, the incentives to attack constitute the most important factor. Here we see that the holders of high public office have no incentive to attack competitors. Even party leaders have little incentive to do so, though leaders of opposition parties and those who are serious contenders for the office of prime minister should be more prone to attack. Parliamentary floor leaders, especially those of opposition parties, and the parties' general secretaries in particular are the offices that we see most predetermined to ride attacks against competing parties. Conveniently, these offices, and the general secretaries in particular, are also endowed with the required time resources and relevance for media to lend effectiveness to such behaviour. Table 1 summarizes these expectations.
Data and methods
The present article is based on a content analysis of party press releases. This source, to the best of our knowledge, has been hardly used in the study of negative campaigning 3 even though it has two general advantages: First, it is under the direct control of the sender and thus adequately represents a party's campaign strategy. Studies based on media reports, by contrast, might suffer from the media's negativity bias giving conflict a higher chance to get reported (Elmelund-Praestekaer and Molgaard-Svensson, 2014; Hansen and Pedersen, 2008;Ridout and Walter, 2015). Second, press releases are issued frequently and continuously during a campaign and therefore capture its dynamics (Dolezal et al., 2015). For the present article this source is best suited because of a further characteristic: In contrast to TV debates or TV spots, press releases are not an exclusive means for the parties' top candidates. Press releases allow for studying the campaign communication of a much broader range of party representatives. Naturally, leading politicians can easily use other means of communication such as interviews in newspapers or TV news shows. However, press releases typically follow these channels and distribute the messages provided to a broader media audience.
In Austria, press releases are distributed via the APA, the national news agency. They are called 'OTS-Meldungen' (Original Text Service-Messages) and are freely available through a website (www.ots.at). This centralized distribution increases the messages' importance especially for journalists who are their main audience. Research has demonstrated that press releases strongly influence news coverage in many countries, including Austria (Haselmayer et al., 2015;Seethaler and Melischek, 2014).
For each of the four campaigns, we selected all press releases sent during the last six weeks of the campaign by the parties represented in parliament before and/or after the election. We not only included press releases sent by the parties' central offices but also by their parliamentary groups or regional branches. In a further step we manually de-selected all press releases that only informed about coming events (e.g. press conferences or campaign rallies) or provided technical information (such as links to pictures of candidates or audio content). Note that we deliberately do not include press releases distributed by ministries. These releases might have a partisan 'touch' but they are rarely negative. In 2013 we only found one cabinet member using this channel to attack an opponent.
All in all we collected 7858 press releases from seven parties. Apart from the SPÖ (Social Democratic Party of Austria) and the Christian-democratic ÖVP (Austrian People's Party), these parties include two populist radical right parties, the FPÖ (Freedom Party of Austria) and its split-off, the BZÖ (Alliance for the Future of Austria), the Greens, the liberal NEOS (NEOS-The New Austria), and the populist Team Stronach. While the SPÖ, ÖVP, FPÖ, and Greens were present in all four campaigns, the BZÖ was founded in 2005. The NEOS as well as Team Stronach, by contrast, are new parties and only competed in the 2013 election (Dolezal and Zeglovits, 2014; Kritzinger et al., 2014).
In the content analysis we apply a relational method that captures the relationship of actors ('subjects') with issues or other actors ('objects'). A variable called 'predicate' connects them and records their relation as either positive (1), negative (-1), or neutral (0) (see Appendix for examples). This method goes back to the work of Kleinnijenhuis and his collaborators (e.g. Kleinnijenhuis and Pennings, 2001) and was also used in comparative research on election campaigns and public debates (Kriesi et al., 2008, 2012). The Austrian National Election Study (AUTNES) has developed this approach further and uses it for various types of political texts, e.g. party manifestos (Dolezal et al., 2016). Given the high number of press releases we only coded their title. However, because of the length of the headings (a maximum of 138 characters set by the OTS system) and the high quality with which most press releases are written, the content of the titles captures the basic message of most press releases. What is more, press release titles are the main selection criterion for journalists (only titles and subtitles are visible when journalists scroll through the APA system); thus our measure registers whether party actors choose to make the attack the main point in their communication. 4 For the present article we define any negative relation between subject and object actors, thus any form of criticism, as negative campaigning (e.g. Geer, 2006: 26). Every press release is therefore coded as 1 'attack' or 0 'no attack'. For both the subjects (i.e. the senders) and the objects (the targets), names and organizational affiliation (typically a political party) were coded so that we can easily identify the individual politicians who held public or party offices. Of course, in an archetypical party democracy such as Austria it is natural to find some overlap between party and public offices. Parties reserve the highest public office available to them for their leaders. Therefore, leaders of government parties typically take positions in cabinet (mostly as Chancellor or Vice-Chancellor), whereas opposition party leaders usually assume the position of party floor leader in parliament.
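To make the relational coding scheme concrete, the sketch below shows one possible record layout for a coded press-release title: a sender (subject), its targets (objects) each with a predicate of +1, 0, or -1, and the derived attack flag (1 if any relation is negative). The field names and the usage example are illustrative assumptions only, not the actual AUTNES codebook.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Relation:
    object_name: str              # target actor ('object'), individual or collective
    object_party: Optional[str]   # organizational affiliation, if any
    predicate: int                # +1 positive, 0 neutral, -1 negative

@dataclass
class PressRelease:
    release_id: str               # e.g. an OTS identifier
    subject_name: str             # sender of the press release
    subject_party: str
    subject_office: str           # e.g. 'general secretary', 'party leader'
    relations: List[Relation] = field(default_factory=list)

    @property
    def attack(self) -> int:
        # coded 1 if the title contains any negative subject-object relation
        return int(any(r.predicate == -1 for r in self.relations))

# Hypothetical usage (invented names, not coded data):
example = PressRelease("OTS_0001", "A. Sender", "Party X", "general secretary",
                       [Relation("Party Y", "Party Y", -1)])
assert example.attack == 1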
Apart from the office variables, we also control for gender (as men and women are sometimes expected to differ in terms of negative campaigning), government status, and the week of the campaign (as campaigns may systematically vary in emphasis on individuals and attacks over their course).
Analysis
Media relevance, as argued above, is the precondition for any communication strategy based on press releases; otherwise journalists would simply neglect them. Results from a content analysis of the news coverage of the 2013 campaign (AUTNES MedienManuell, 2013;Schönbach et al., 2014) demonstrate the high media presence of the politicians holding the six offices we are especially interested in. Even though in 2013 these members of the political elite comprised only 33 individuals (or four percent of all individuals recorded in the content analysis), they were mentioned in no less than 54 percent of all articles or television pieces analysed. In 2008 individuals belonging to this group were coded as 'main actors' in 40.7 percent of the articles (AUTNES MedienManuell, 2008). Figure 1 presents the level of negativity by political office. Heads of government and leaders of parties in government (i.e. Vice-Chancellors) almost completely refrain from attacking opponents. Other holders of high public office in government and parliament exercise similar levels of restraint. Opposition party leaders are somewhat more likely to direct negative messages at their opponents, yet still stay below the average level of negativity. Party floor leaders are just above average, yet clearly not as aggressive in their messaging as party general secretaries.
To see whether these results hold in a multivariate test, we present a binary logistic regression with random effects at the party-election level (Table 2) to account for structural factors that remain constant for each party during a campaign. The reference category for the political office predictors is the set of non-elite politicians that make up the majority of all senders in the press release data.
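A hedged sketch of how a model of this kind could be estimated in Python follows. It is not the authors' code: it approximates the party-election random intercepts with standard errors clustered on party-election (a closer analogue would be a true random-intercept logit, e.g. a GLMM fitted with R's lme4 or a mixed GLM in statsmodels). All variable names (attack, the office dummies, government, female, campaign_week, party_election) are assumed column labels, not the actual dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("press_releases.csv")  # hypothetical: one row per press release

formula = ("attack ~ head_of_government + cabinet_member + speaker "
           "+ party_leader + floor_leader + general_secretary "
           "+ government + female + campaign_week")

# Binary logit; clustering on party-election stands in for the random intercepts
model = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["party_election"]}, disp=False)
print(model.summary())
print(np.exp(model.params))  # odds ratios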
All groups except the government party floor leaders display statistically significant differences from the reference category, with public offices and party leaders displaying negative coefficients and the remaining party offices exhibiting positive effects. The odds ratios suggest large differences between the groups, with heads of government and government party leaders showing the lowest levels of negativity, and opposition floor leaders and party secretaries the highest propensity to attack.
To make effect sizes comparable, we present predicted probabilities from the regression model ( Figure 2). Four groups emerge: Heads of government are clearly least likely to attack. A somewhat higher probability of attacking is displayed by party leaders, cabinet members, and speakers of parliament. Next, government party floor leaders exhibit a level of negativity that is indistinguishable from that of the reference group. Opposition party floor leaders and party secretaries have the highest probabilities of attacking.
Taken together, these results largely confirm our expectations. Politicians in high public offices that come with expectations of non-partisanship are least likely to attack, whereas somewhat lower-ranking positions that are also more partisan in nature induce higher levels of negativity. Also, government participation dampens negativity for all party offices (although the differences are not statistically significant for general secretaries). These marked differences according to role expectations are especially relevant as we only included press releases distributed by partisan channels -discarding all official government channels such as ministries which would increase the differences even more.
One criticism that could be levelled against our approach is that the willingness to engage in attack behaviour varies primarily across individuals, and this variation may lead to self-selection (or selection by others) into positions that come with specific role expectations. In order to demonstrate that our findings are robust to these concerns, we take advantage of the fact that many individuals moved into, out of, or between high offices in our observation period. We can thus additionally test our expectations on a smaller sample of observations where the same individuals perform different roles. To arrive at this subgroup we identify all subjects that assume more than one role (including the reference category) across the four election campaigns. In total, the pool of office switchers comprises 41 individuals (see Table A2 in the Appendix) producing over 2300 press releases. Table 3 presents the same regression model as in Table 2, but with fixed effects at the level of individuals. Thus, all variation left to explain is within individuals switching between offices (we therefore drop the gender variable which is fully accounted for by the fixed effects). 5 As Table 3 shows, the results are very similar to our analysis of the full sample. Compared to the reference category, holders of public office and party leaders use negative messages to a much lesser extent. The coefficients and odds ratios for the party floor leaders imply little difference compared with the reference group. The same conclusion can be drawn for general secretaries in government parties. By contrast, opposition party general secretaries are significantly more negative than the comparison group and thus constitute the group most prone to attack in our subsample of office switchers. These results strengthen our conjecture that the attack patterns observed in the data are not driven by self-selection of more or less aggressive types of individuals into different political roles, but by a strategic division of labour within parties.
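Continuing the sketch from above, the office-switcher analysis could be approximated by restricting the data to individuals observed in more than one role and adding person dummies as fixed effects; individuals without variation in the outcome drop out, as the paper notes for two cases. Again, the column names (person_id, office, attack, and so on) are assumptions for illustration.

# Keep only politicians who held more than one of the coded roles across campaigns
n_roles = df.groupby("person_id")["office"].nunique()
switchers = df[df["person_id"].isin(n_roles[n_roles > 1].index)]

fe_formula = ("attack ~ head_of_government + cabinet_member + speaker "
              "+ party_leader + floor_leader + general_secretary "
              "+ government + campaign_week + C(person_id)")

# Person fixed effects via dummy variables (unconditional ML; a conditional
# logit would be the more careful alternative for short panels)
fe_model = smf.logit(fe_formula, data=switchers).fit(disp=False)
print(fe_model.summary())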
Conclusion
This article builds on and contributes to the literatures on political roles and party organizations in election campaigns. Our core argument holds that parties have good reasons to implement a division of labour regarding negative campaigning. While most parties clearly prefer to have their competitors attacked during election campaigns, the incentives for individual politicians to carry out such attacks are limited. As our analysis shows, parties respond to this collective action problem by shifting the bulk of the 'dirty work' away from party leaders and public office holders towards the holders of genuine party positions that come with more partisan role expectations.
In the context of parliamentary systems with coalition governments, the collective interest of the party is served by delegating attacks to the offices of party floor leader and, in particular, general secretary. The latter are part of the party leadership (most often) by means of appointment and therefore remain accountable to the party leader. At the same time, the party compensates them financially and controls their further political career. More than half of all general secretaries in our sample were promoted to ministerial positions after their party entered government. They thus have a personal incentive to attack, if this is part of the party's strategy. Delegating much of the attacks to them allows other party elites to largely stay free from such behaviour. They thereby follow their personal motivations and, at the same time, do what is in the collective interest of the party.
It is worth pointing out that the effect sizes reported in the regression models are substantial -especially when considering that the large sample size of almost 8000 reduces the chance that random noise produces such huge differences. Moreover, the analysis of the subset of party elites who switch offices between elections strengthens the claim that the observed differences are, in fact, caused by the intra-party division of labour and are not due to self-selection.
Our study is a first step in building a theory of party offices and is limited to party campaign behaviour. While campaigning is a vital party activity, further analyses should expand the scope of analysis to other realms. Policy innovation may allow for a rather straightforward extension of our theoretical reasoning. When parties want to change course on an issue, for instance to expand their electoral appeal, approach potential coalition partners, or because they now consider earlier ideas unworkable, they may face a problem similar to that inherent in riding attacks. Departing from long-standing and firmly held positions can undermine a party's public image and electoral credibility with traditional voter groups and cause uproar internally. In such uncertainty, a division of labour might be testing the viability of the new policy first by one high-ranking official, for instance a minister or party policy specialist, airing it before the party leader throws his or her authority behind it. Similar to negative campaigning, policy innovation constitutes a collective action problem. While beneficial to the party if successful, it also involves risks. A division of labour similar to the one analysed in this article can resolve this dilemma.
As is true for all single-country studies, there are, of course, limits in how far we can generalize from our findings. However, since Austria is fairly typical of most West European parliamentary democracies regarding party system and party organizational characteristics, we are confident that a similar division of labour is present in many parties in other countries. Even if individual incentives and role expectations may vary somewhat between countries and parties, there are strong reasons to assume that campaign communication will be strongly diversified between holders of different public and party offices.

[Figure 2 note: predicted probabilities based on the model in Table 2; all other variables held constant at mean or mode; the government dummy was set to one for categories that coincide with government status; 95 percent confidence intervals shown.]

[Table 3 note: figures are raw coefficients and corresponding odds ratios from a binary logistic regression with fixed effects at the individual level; two individuals drop from the analysis due to all negative outcomes; press releases with two individuals as subjects discarded; *p < 0.05, **p < 0.01, ***p < 0.001.]

[Note: the sum of N per election is somewhat greater than the total number of press releases because a minority of press releases have two subjects.]

[Table A1 note: the table presents the number of press releases issued by each group of office holders in each campaign; percentages refer to the share of releases that contained an attack. The low N for head of government in 2008 is due to the fact that the incumbent Chancellor, Alfred Gusenbauer (SPÖ), was ousted as party leader and top candidate weeks before the election. The low N for speaker of parliament in 2013 is due to the fact that one of the three individuals (Barbara Prammer, SPÖ) was terminally ill, and the other two (Fritz Neugebauer, ÖVP, and Martin Graf, FPÖ) had fallen out of grace with their parties and had not been re-nominated as parliamentary candidates.]
Coding procedure
In the following, we provide some examples of press releases to explain our coding procedure in more detail. We always present the original title, an English translation, the ID of the press release, and the values we record for actors and their relations. Note that press release titles often use informal language and shorthand expressions. In our relational content analysis we differentiate the subject (the actor producing the message), the object (the actor being addressed, called 'object actor'), and the predicate (a numerical variable capturing the kind of relation between subject and object). We also record the substantive issue of the press releases as well as additional variables such as references to track record and justification claims of issue positions. However, as these aspects are not relevant for the present article we only explain how we capture relations between political actors.
When coding press releases we record up to two subjects and three object actors. In around six percent of all press releases we find two individuals as subjects. For each individual actor we record his or her name and organizational affiliation; for collective actors we record the name of the organization. In most cases the subject of a press release is an individual, whereas the objects comprise individual as well as collective actors. The title typically does not include the first name of actors. Coders typically find this information in the first paragraph of the press release.
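As an illustration of the relational coding scheme described above, a single coded press release can be represented as a small record holding up to two subjects, up to three object actors, and a predicate for each relation. The field names and the numerical predicate code below are illustrative, not the coders' actual codebook; the example instance corresponds to the first press release shown below.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Actor:
    name: str                # e.g. "Petzner" or "FPÖ"
    party: Optional[str]     # organizational affiliation; None when the actor is itself an organization
    is_individual: bool

@dataclass
class CodedRelease:
    release_id: str          # e.g. "OTS_20130819_OTS0126"
    subjects: List[Actor]    # up to two subjects (the actors producing the message)
    objects: List[Actor]     # up to three object actors (the actors being addressed)
    predicate: int           # numerical code for the subject-object relation

# Example record for the first press release below (one subject, two collective objects).
example = CodedRelease(
    release_id="OTS_20130819_OTS0126",
    subjects=[Actor("Petzner", "BZÖ", True)],
    objects=[Actor("FPÖ", None, False), Actor("SPÖ", None, False)],
    predicate=-1,  # hypothetical code marking a negative (attack) relation
)
```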
Example 1. In the first example an individual actor attacks two collective actors, i.e. parties.
BZÖ-Petzner: FPÖ und SPÖ stecken in Kärnten tief im Korruptionssumpf fest
(BZÖ-Petzner: FPÖ and SPÖ are stuck in a swamp of corruption in Carinthia)
ID: OTS_20130819_OTS0126
Example 2. During election campaigns, relations between actors from different parties are mostly negative. Positive relations primarily exist between actors from the same party. In this example a candidate of the ÖVP praises his own party.
Steindl: ÖVP hat die Konzepte für mehr Arbeitsplätze
(Steindl: ÖVP has the concepts for more employment)
ID: OTS_20130819_OTS0133
Example 3. In the following example an individual (male) politician attacks an individual (female) politician. The reference to two colours refers to a potential coalition of the Christian democratic ÖVP ('the blacks') and the populist radical right FPÖ ('the blues'). | 2018-04-03T03:29:23.372Z | 2015-11-29T00:00:00.000 | {
"year": 2015,
"sha1": "b95997804930cb22127c793adbed143735da032a",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc5624298?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b95997804930cb22127c793adbed143735da032a",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
195774813 | pes2o/s2orc | v3-fos-license | Adaptively Dense Feature Pyramid Network for Object Detection
We propose a novel one-stage object detection network, called adaptively dense feature pyramid network (ADFPNet), to detect objects across various scales. The proposed network is developed on the single shot multibox detector (SSD) framework with a newly proposed ADFP module, which consists of two components: a dense multi scales and receptive fields block (DMSRB) and an adaptively feature calibration block (AFCB). Specifically, the DMSRB extracts rich semantic information in a dense way, using atrous convolutions with different atrous rates to produce dense features across multiple scales and receptive fields; the AFCB calibrates the dense features to retain those contributing more and depress those contributing less. Extensive experiments have been conducted on the VOC 2007, VOC 2012, and MS COCO datasets to evaluate our method. In particular, we achieve new state-of-the-art accuracy with an mAP of 82.5% on the VOC 2007 test set and an mAP of 36.4% on the COCO test-dev set using a simple VGG-16 backbone. When testing with a lower resolution (300 × 300), we achieve an mAP of 81.1% on the VOC 2007 test set with an FPS of 62.5 on an NVIDIA 1080ti GPU, which meets the requirement for real-time detection.
I. INTRODUCTION
In recent years, deep convolutional neural networks (CNNs) have advanced many tasks in computer vision, such as classification [1]-[4], semantic segmentation [5]-[7], and object detection [8]-[11], through learning better feature representations. For example, to extract high-level information, VGG [1] uses very small (3 × 3) convolutions to deepen the network. With the same goal, GoogLeNet [2] proposes an inception module to increase the depth and width of the network. The introduction of shortcut connections in ResNet [3] makes the backward propagation of the gradient easier and enables deeper networks to be effectively trained. DenseNet [4] connects every layer in a feed-forward approach to strengthen feature transmission, encourage feature reuse, and improve feature expression.
As for object detection, the purpose is not only to identify the class of objects, but also to localize each object within a bounding box. At present, CNN features have better robustness and stronger representational ability than traditional hand-crafted features. Traditional image processing, which is characterized by hand-crafted features, alleviates the problem of objects appearing at multiple sizes by constructing an image pyramid [12]. The pyramid of an image is a set of images that are progressively reduced in resolution and derived from the same original image. Due to its effectiveness in analyzing images at different scales, the image pyramid has also been introduced into deep-CNN-based object detectors.
Nevertheless, features must be computed separately for each image in the pyramid, which consumes substantial computing resources. Therefore, the Single Shot MultiBox Detector (SSD) [13] designs a pyramidal feature hierarchy to reuse the multi-scale feature maps and detects objects of different sizes at each feature layer. This method greatly reduces the waste of resources compared to the image pyramid. However, SSD uses features from shallow to deep layers, so the high-resolution feature maps lack sufficient semantic information. Furthermore, the Feature Pyramid Network (FPN) [14] constructs a top-down framework with lateral connections to produce feature maps with strong semantic information at all scales.
Recently, it has been shown that the receptive field plays a key role in detecting objects at various scales [15]-[17]. For example, inspired by Atrous Spatial Pyramid Pooling (ASPP) [18], aggregating features through a series of atrous convolutions with different atrous rates is introduced in [15], [16] for object detection. Unlike increasing the field of view through traditional convolution, the atrous convolution alleviates the contradiction between the field of view and feature resolution, which is beneficial for object localization and detection.
In order to better promote the development of multi-scale object detection, we propose a novel network structure, named adaptively dense feature pyramid (ADFP), which enhances the feature representation capabilities of CNN-based network structures. The structure is mainly composed of a dense multi scales and receptive fields block and an adaptively feature calibration block. The dense multi scales and receptive fields block mainly consists of a cascade of densely connected atrous convolution layers, producing dense multi-scale features from multiple receptive fields. The adaptively feature calibration block is then used to calibrate the produced feature maps based on feature dependencies, retaining features that contribute more to detection and depressing features that contribute less. We then construct a novel one-stage object detector based on the SSD [13] framework. By introducing the proposed structure, the new object detector not only achieves state-of-the-art performance, but also maintains fast detection speed. Our work is most closely related to the work in [15]; the difference lies in that we use dense connections to extract features, and the features are then calibrated by a following SENet module. Our experimental results confirm that our approach keeps high-level semantic information and fine details simultaneously for the object detection task. In summary, the contributions of this paper are listed as follows:
1. A novel module called adaptively dense feature pyramid (ADFP) to densely aggregate information at multiple scales and receptive fields is proposed.
directly predict bounding boxes and class probabilities. Although there is a slight loss of precision, YOLO is extremely fast. After that, YOLOv2 [30] and YOLOv3 [31] were proposed to further improve the accuracy in various aspects. Among these one-stage detectors, SSD [13] detects objects of a certain scale through a series of pre-defined anchor boxes on corresponding layers. The anchor boxes over different aspect ratios and scales are set on each layer of a feature pyramid. DSSD [32] replaces the backbone with Residual-101 and adds a large amount of high-level semantic information by deconvolution to improve the accuracy. To inherit the merits of both one-stage detectors and two-stage detectors, the Single-Shot Refinement Neural Network (RefineDet) [33] designs two inter-connected modules, the anchor refinement module and the object detection module, to coarsely refine the anchors and further improve the regression and classification, respectively. Kong et al. [34] reformulate the feature pyramid structure to combine deeper features and shallower features with global attention and local reconfigurations. To enrich the semantic information of features, Zhang et al. introduce a semantic segmentation branch and a global activation to build Detection with Enriched Semantics (DES) [35]. The parallel feature pyramid network (PFPNet) [17] increases the width of the network instead of its depth to avoid integration between features of different layers.
B. ATROUS CONVOLUTION
Traditional CNNs increase semantic information or receptive fields through a series of convolutional filters as well as pooling layers. However, this reduces the image feature resolution, which is important for accurate object localization and detection. To alleviate the contradiction between a sufficient receptive field and image feature resolution, atrous convolution [36], also called dilated convolution, was proposed for the task of semantic image segmentation and later developed by [18], [37]-[39] for semantic segmentation. Recently, in the field of object detection, Liu et al. [15] build a lightweight detector, Receptive Field Block Net (RFBNet), using a Receptive Field Block (RFB) module to enhance feature discriminability and robustness. In contrast to this method, we propose to use densely connected atrous convolutions, which produce features that are dense in both scales and receptive fields. The generated dense features are then calibrated by an adaptively feature calibration block to retain features contributing most to the detection task and depress features contributing less. The details of our proposed module are described in the following section.
III. METHOD
We propose an adaptively dense feature pyramid network (ADFPNet), which is based on the SSD framework with a novel adaptively dense feature pyramid (ADFP) module. The ADFP module is composed of two components, the dense multi scales and receptive fields block and the adaptively feature calibration block, as shown in Fig. 1. We describe the details in the following sections.
A. DENSE MULTI SCALES AND RECEPTIVE FIELDS BLOCK (DMSRB)
Incorporating different scales and receptive fields has been proven to improve detection accuracy [15], [17]. Thus, the purpose of this block is to generate dense features with multiple scales and different receptive fields. Inspired by Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), we design a dense multi scales and receptive fields block consisting of atrous convolution layers with different atrous rates to take full advantage of multiple receptive field sizes and multi-scale features. Compared to DenseNet, we add atrous convolutions into the proposed module, which have been shown to be effective for feature extraction and are widely used in semantic segmentation. In the two-dimensional case, such as images, the atrous convolution operator producing each element i of the output z can be written as

z[i] = Σ_k x[i + a·k] · w[k],    (1)

where x denotes the input feature map, a represents the atrous rate, and w[k] corresponds to the k-th parameter of the filter w. The atrous rate is the stride used for sampling information from the input feature map. The atrous convolution operation can be interpreted as inserting a − 1 zeros between two sequential filter elements along each spatial dimension, thereby expanding the filter used to sample the input x. An example of an atrous convolution processing a two-dimensional signal with an atrous rate of 2 is visualized in Fig. 2(a). However, sampling the input feature x using a large atrous rate causes sparse information, as shown in Fig. 2(b), where the feature is extracted by an atrous convolution with an atrous rate of 5 in one dimension and only 3 pixels contribute to the convolution, leading to a loss of information. This problem can be alleviated by stacking larger atrous rates after smaller ones, so that information is gathered more densely from more contributing pixels. As shown in Fig. 2(c), by stacking atrous convolutions with atrous rates of 1 and 5, 9 pixels of the one-dimensional input participate in the convolution, which is 3 times the number in the single atrous convolution. When a two-dimensional signal is used as input, a single 3 × 3 atrous convolution with an atrous rate of 5 aggregates 9 pixels, while stacked 3 × 3 atrous convolutions with atrous rates of 1 and 5 aggregate 81 pixels. In addition, stacked atrous convolutions with different atrous rates produce image features at different scales, which helps object detection. Specifically, to extract features in a dense mode, we stack the atrous convolutions in a dense connection mode, where each layer has access to the outputs of all preceding layers and feeds all subsequent layers, yielding dense receptive fields. The atrous rates increase as the layers in the module get deeper. The computation of the dense atrous convolution layers can be formulated as

z_d = H_a([z_0; z_1; …; z_{d−1}]),    (2)

where z_d denotes the output of the d-th layer, [z_0; z_1; …; z_{d−1}] corresponds to the concatenation of the outputs of the 0-th, 1-st, …, (d−1)-th layers, and H_a denotes the d-th atrous convolution with atrous rate a. Obviously, the dense multi scales and receptive fields block creates a much denser feature pyramid because each atrous convolution takes all previous layers' outputs as input. Compared to a single atrous convolution, the atrous convolutions in dense mode with the same receptive field can sample more information from the input. In other words, through the dense connection, more pixels are involved in the feature extraction process.
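To make the dense connection pattern concrete, the following is a minimal PyTorch sketch of a DMSRB-style block. It assumes the atrous rates 1, 2, 3, 4, and 5 used for the standard ADFP modules described later; the growth rate of 64 channels and the BatchNorm/ReLU placement are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DMSRB(nn.Module):
    """Dense multi scales and receptive fields block (sketch).

    Each 3x3 atrous convolution receives the concatenation of the block input
    and all previous layers' outputs, so deeper layers see progressively
    larger and denser receptive fields.
    """
    def __init__(self, in_channels, growth=64, rates=(1, 2, 3, 4, 5)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for rate in rates:
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3,
                          padding=rate, dilation=rate, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # dense concatenation grows the next layer's input

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            z = layer(torch.cat(features, dim=1))  # z_d = H_a([z_0; ...; z_{d-1}])
            features.append(z)
        return torch.cat(features, dim=1)
```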
The details of our dense multi scales and receptive fields block are illustrated in Fig. 1. A feature transformation is used to aggregate and fuse the information in the input feature maps. The produced feature maps not only contain rich multi-scale semantic information about categories of objects, but also keep the fine details about the shape and location of objects.
B. ADAPTIVELY FEATURE CALIBRATION BLOCK
The features extracted by the DMSRB contain extensive dense features from different scales and receptive fields. Given the assumption that not every channel of the dense feature contributes equally [40], some of the feature channels might be redundant. Such redundant features obstruct learning as well as back-propagation, so finding the useful features and depressing the redundant ones becomes necessary. The adaptively feature calibration block (AFCB) is therefore proposed to exploit the feature channel dependencies and calibrate the aggregated dense features, retaining relatively important features and weakening relatively unrelated ones, using the Squeeze-and-Excitation block [40]. Mathematically, let x ∈ ℝ^{C×H×W} be the feature map passed from the DMSRB block and F_c be a transformation that produces a channel attention vector c ∈ ℝ^{C×1×1} from the input. The adaptively feature calibration block can be expressed as

x̃ = c ⊙ x,  with  c = F_c(x),    (3)

where x̃ refers to the calibrated features and ⊙ is element-wise multiplication. The channel vector c is created to explicitly model the inter-channel dependency of the features. Each value in c multiplies the corresponding feature channel via the broadcast mechanism along the channel dimension. In order to create c efficiently, we first compress the input feature map x along the spatial dimensions H × W by average pooling, generating a channel descriptor p ∈ ℝ^{C×1×1}. The channel descriptor p is then fed into a two-layer perceptron. To limit model complexity and increase computational efficiency, we first condense p to size (C/r) × 1 × 1, activate it in the hidden layer with the ReLU activation function, and finally restore it to size C × 1 × 1 and activate it with the Sigmoid function. Here r is a reduction rate used to flexibly adjust the number of channels of the descriptor p. The process of producing the channel attention vector c by this multi-layer perceptron (MLP) can be formulated as

c = σ(W_2 δ(W_1 p)),    (4)

where δ denotes the ReLU activation function, σ denotes the Sigmoid activation function, and W_1 and W_2 are the weights of the two perceptron layers. The Sigmoid activation function projects the values into the range [0, 1], with lower values depressing a feature and higher values retaining it.
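A minimal PyTorch sketch of the adaptively feature calibration block, following the Squeeze-and-Excitation formulation above (global average pooling, a two-layer perceptron with reduction rate r, ReLU and Sigmoid activations, and channel-wise rescaling), together with a composite ADFP module that chains it after the DMSRB sketch given earlier. The reduction rate of 16 and the 1 × 1 transition layer are assumptions for illustration, not necessarily the values used in the paper.

```python
import torch
import torch.nn as nn

class AFCB(nn.Module):
    """Adaptively feature calibration block (sketch of an SE-style module)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: H x W -> 1 x 1
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),  # W1
            nn.ReLU(inplace=True),                                      # delta
            nn.Conv2d(channels // reduction, channels, kernel_size=1),  # W2
            nn.Sigmoid(),                                               # sigma
        )

    def forward(self, x):
        c = self.mlp(self.pool(x))   # c = sigma(W2 * delta(W1 * p))
        return x * c                 # broadcast channel-wise rescaling


class ADFP(nn.Module):
    """ADFP module: dense feature extraction followed by calibration (sketch)."""
    def __init__(self, in_channels, out_channels, rates=(1, 2, 3, 4, 5), growth=64):
        super().__init__()
        self.dmsrb = DMSRB(in_channels, growth=growth, rates=rates)
        dense_channels = in_channels + growth * len(rates)
        self.transition = nn.Conv2d(dense_channels, out_channels, kernel_size=1)
        self.afcb = AFCB(out_channels)

    def forward(self, x):
        return self.afcb(self.transition(self.dmsrb(x)))
```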
C. TRAINING LOSS
The overall objective loss function is a weighted sum of the classification loss (clc) and the localization loss (loc):

L(x, c, l, g) = (1/N) (L_clc(x, c) + L_loc(x, l, g)),    (5)

where N is the number of matched default boxes. If N = 0, we set the loss to 0.
The classification loss is the softmax loss over the multiple class confidences (c).
The localization loss is a Smooth L1 loss between the predicted box (l) and the ground truth box (g) parameters. Similar to Faster R-CNN, we regress to offsets for the center (cx, cy) of the default bounding box (d) and for its width (w) and height (h).
Specifically,

L_loc(x, l, g) = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_{ij}^k · smooth_L1(l_i^m − ĝ_j^m),

in which ĝ_j^cx = (g_j^cx − d_i^cx) / d_i^w, ĝ_j^cy = (g_j^cy − d_i^cy) / d_i^h, and ĝ_j^w = log(g_j^w / d_i^w), ĝ_j^h = log(g_j^h / d_i^h).
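The following sketch shows how such a combined objective could be computed in PyTorch, assuming the matching step has already produced per-box class targets and encoded offset targets; it is a simplified rendering of the SSD-style loss (hard negative mining is omitted), not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def multibox_loss(cls_logits, loc_preds, cls_targets, loc_targets):
    """cls_logits: (B, A, num_classes), loc_preds: (B, A, 4),
    cls_targets: (B, A) with 0 = background, loc_targets: (B, A, 4) encoded offsets."""
    pos = cls_targets > 0                        # matched (positive) default boxes
    num_pos = pos.sum().clamp(min=1).float()     # N; the loss is defined as 0 when N = 0

    # Classification: softmax cross-entropy over all boxes
    # (hard negative mining over background boxes is omitted in this sketch).
    cls_loss = F.cross_entropy(
        cls_logits.view(-1, cls_logits.size(-1)),
        cls_targets.view(-1),
        reduction="sum",
    )

    # Localization: Smooth L1 between predicted and encoded ground-truth offsets,
    # computed only over positive boxes.
    loc_loss = F.smooth_l1_loss(loc_preds[pos], loc_targets[pos], reduction="sum")

    return (cls_loss + loc_loss) / num_pos
```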
D. DETAILS OF NETWORK ARCHITECTURE
The motivation of the proposed adaptively dense feature pyramid network (ADFPNet) is to remedy the scale variation of object instances in the object detection task by using different receptive field sizes. To inherit the merits of SSD in accuracy and speed, we construct a feed-forward convolutional network that reuses the pyramidal feature hierarchy, equipped with ADFP modules, to produce category scores and box offsets for a fixed-size set of pre-set bounding boxes. Non-maximum suppression (NMS) is then applied to filter out most boxes and obtain the final detection results. The whole structure of ADFPNet is shown in Fig. 1.
1) BACKBONE-To fairly compare with the original SSD, we choose the VGG-16 network, pre-trained on the ILSVRC dataset [41] for high-quality image classification, as the backbone. Note that other backbones, such as ResNet-50 or ResNet-101, could also serve as alternative candidates. Due to the differences between classification and object detection tasks, we remove the final classification layers of VGG-16 and add corresponding convolutional layers with sub-sampling parameters to meet our needs.
2) PYRAMIDAL FEATURE HIERARCHY-
The original SSD uses multi-scale feature maps with different resolutions from different layers, including conv4_3, conv7, conv8_2, conv9_2, conv10_2, and conv11_2, to predict both locations and confidences of objects at vastly different scales, which is called a pyramidal feature hierarchy. In our network, we keep the pyramidal feature hierarchy but with different configurations using the proposed ADFP module, as in Fig. 1. Firstly, we place an ADFP module after the conv4_3 and conv7 layers. Features from these two layers are first processed and then sent to the prediction layer and successive layers. Secondly, we replace the conv8_x and conv9_x layers in the original SSD with an ADFP module each to produce denser and semantically richer information. All the ADFP modules consist of a cascade of atrous convolution layers with atrous rates of 1, 2, 3, 4, and 5, except the one after conv4_3, where the atrous rates are 1, 3, 5, 7, 9, and 11. The reason is that conv4_3 has a larger feature map resolution and needs larger atrous rates to capture a large receptive field. We indicate this module as ADFP_L, as shown in Fig. 1.
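The placement just described can be summarized as a small configuration: larger atrous rates for the high-resolution conv4_3 feature map (ADFP_L) and the default rates elsewhere. The sketch below assumes the ADFP class from the earlier sketch; the layer names follow the SSD convention used in the text, and the channel numbers are illustrative assumptions.

```python
# Atrous-rate configuration for the ADFP modules in the pyramid (sketch).
ADFP_RATES = {
    "conv4_3": (1, 3, 5, 7, 9, 11),   # ADFP_L: larger rates for the higher-resolution map
    "conv7":   (1, 2, 3, 4, 5),
    "extra_1": (1, 2, 3, 4, 5),       # replaces conv8_x in the original SSD
    "extra_2": (1, 2, 3, 4, 5),       # replaces conv9_x in the original SSD
}

# Hypothetical wiring: one ADFP module per source feature map.
# in_channels/out_channels values are illustrative, not the paper's exact numbers.
adfp_modules = {
    name: ADFP(
        in_channels=512 if name == "conv4_3" else 1024 if name == "conv7" else 256,
        out_channels=256,
        rates=rates,
    )
    for name, rates in ADFP_RATES.items()
}
```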
IV. DATA AND EXPERIMENTS
We have conducted extensive experiments on three widely used benchmarks, namely the Pascal VOC 2007, VOC 2012 [42], and MS COCO [43] datasets. The Pascal VOC datasets have 20 object categories, which are a subset of the 80 object categories in MS COCO. VOC 2007 consists of 5,011 images as the trainval set and 4,952 images as the test set, with all annotations available. In VOC 2012, the trainval set (11,540 images) is annotated, while the test set (10,991 images) annotations are unavailable. The COCO dataset is split into a train set (118k images), a val set (5k images), and a test set (41k images), and is much larger than the Pascal VOC datasets. The details of each dataset are described below.
A. PASCAL VOC 2007
In this experiment, all the methods are trained on the VOC 07+12 trainval set, the union of the VOC 2007 and VOC 2012 trainval sets, and tested on the VOC 2007 test set. In VOC 2007, a predicted bounding box whose Intersection over Union (IoU) with the ground truth is higher than 0.5 is counted as positive for the final results. We train our method for 350 epochs using SGD with a "warm-up" strategy: we ramp up the learning rate from 10−6 to 4 × 10−3 over the first 5 epochs, and then multiply it by 0.1 at epochs 200, 250, and 300. Referring to [13], we set the default batch size to 32, the weight decay to 5 × 10−4, and the momentum to 0.9 during training. Due to memory constraints, we halve the batch size and learning rate when training with 512 × 512 input, and keep the other settings unchanged.
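The schedule described above (a linear warm-up from 10−6 to 4 × 10−3 over the first 5 epochs, followed by division by 10 at epochs 200, 250, and 300) can be written as a small helper function; this is a sketch of the stated schedule, not the authors' training code.

```python
def learning_rate(epoch, base_lr=4e-3, warmup_start=1e-6, warmup_epochs=5,
                  milestones=(200, 250, 300), gamma=0.1):
    """Return the learning rate for a given (0-indexed) epoch."""
    if epoch < warmup_epochs:
        # Linear ramp from warmup_start to base_lr over the warm-up epochs.
        frac = (epoch + 1) / warmup_epochs
        return warmup_start + frac * (base_lr - warmup_start)
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# e.g. learning_rate(0) ~ 8e-4, learning_rate(100) = 4e-3, learning_rate(210) = 4e-4
```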
B. PASCAL VOC 2012
In this experiment, we train our ADFPNet on the union of the VOC 2007 trainval and test sets and the VOC 2012 trainval set (VOC 07++12), then submit the prediction results to the public evaluation server. Considering the larger training set, we adjust the total number of training epochs to 400. We set the learning rate to 4 × 10−3 after the same "warm-up" strategy as for VOC 2007, and divide it by 10 at epochs 250, 300, and 350. The other training settings used for VOC 2007 are kept.
C. MS COCO
To further validate our method, we conduct experiments on the MS COCO dataset, which is larger and more challenging, and submit the prediction results on test-dev (20k images), a subset of the test set, to the official evaluation server to produce the mean Average Precision (mAP). MS COCO uses an evaluation metric different from VOC: the mAP averaged over 10 different IoU thresholds from 0.5 to 0.95 is used to evaluate the performance of detection methods more comprehensively. APs at IoU thresholds of 0.5 and 0.75 are two other important evaluation indicators in COCO. In addition, COCO divides the object instances into large (area > 96^2), medium (32^2 < area < 96^2), and small (area < 32^2), according to the number of pixels in the segmentation mask, to produce the corresponding APs. The training is conducted on the 2017 train set, which is exactly the same as the original public trainval35k set as reported on the official website. We set the batch size to 32 in training and again apply the "warm-up" strategy, increasing the learning rate from 10−6 to 2 × 10−3 over the first 5 epochs. We continue to train the method with a 2 × 10−3 learning rate for 95 epochs, then decay it to 2 × 10−4, 2 × 10−5, and 2 × 10−6 for another 50, 30, and 20 epochs, respectively. Referring to SSD, we reduce the size of the default anchor boxes while keeping the other settings the same as in VOC, since the size of object instances is smaller than in VOC. Similarly, because of the memory issue, we halve the batch size and learning rate for 512 × 512 input, adding 20 epochs at a learning rate of 1 × 10−3.
V. RESULT
A. PASCAL VOC 2007 1) QUANTITATIVE RESULT- Table 1 shows the performance comparison of ADFPNet with the state-of-the-art methods. The results of SSD300 and SSD512 are enhanced by using a "zoom in" operation to produce random crops as training examples. Our ADFPNet fed with low-resolution 300 × 300 input achieves 81.1% mAP without any bells and whistles, which outperforms SSD300 (77.2%) by a large margin and even exceeds SSD512 (79.8%) in performance. It should be noted that, to the best of our knowledge, our ADFPNet is the first method to obtain above 81% mAP with such a low-resolution input. By increasing the input size to 512 × 512, the performance of our method is further improved to 82.5% mAP, which is the best mAP among the most advanced VGG-16 based methods (e.g., RefineDet, RFBNet, and PFPNet-R). Our ADFPNet512 surpasses most of the two-stage object detectors, including ResNet-101 based Faster R-CNN and R-FCN, and shows results similar to CoupleNet [48], which designs different coupling strategies and normalization ways to couple the global structure with local parts for object detection. Note that two-stage object detectors typically use high-resolution images (i.e., ~ 600 × 1000) as input and use ResNet-101 as the base network, which yields higher detection performance but greatly increases the inference time. Compared to the real-time methods such as SSD, YOLOv2, RefineDet, and RFBNet, ADFPNet not only exceeds them in performance, but is also on par with them in inference speed. In order to make our training process more intuitive, the loss and mAP curves of ADFPNet300 during training are shown in Fig. 3 and Fig. 4, respectively. The mAP is evaluated on the VOC 2007 test set every 10 epochs. Moreover, the precision-recall curve of ADFPNet300 tested on the VOC 2007 test set is shown in Fig. 5.
2) QUALITATIVE RESULT-The detection results across multiple objects and different scales on the VOC 2007 test set, compared with SSD300 [13], are shown in Fig. 6. They suggest that SSD misses objects at very small scales while the proposed method can capture and detect them successfully, which is attributable to the proposed module.
3) FEATURE MAP BEFORE AND AFTER CALIBRATION-
We also show the qualitative results of ADFPNet512 on the feature maps before and after self-calibration in Figs. 7 and 8. They suggest that the feature calibration block depresses features that offer less or sparse information by learning a lower weight (shown in the green dot rectangles in Figs. 7 and 8(c)) and assigns a higher weight to features containing useful information (shown in the red dot rectangles in Figs. 7 and 8(c)). Table 2 shows the detection accuracy of the proposed ADFPNet compared with the other state-of-the-art frameworks. To better demonstrate the effectiveness of our ADFPNet, we separately report the results for each category on the VOC 2012 test set. Compared with the frameworks using a similar input size, ADFPNet300 produces the best mAP of 79.0%, which even surpasses most two-stage frameworks using a much deeper base network (i.e., ResNet-101 [3]) and a larger input size of around 1000 × 600. When the input size is increased to 512 × 512, ADFPNet512 achieves the best mAP of 81.9%, outperforming the most recently proposed frameworks aiming to detect multi-scale objects by a large margin (e.g., 80.3% mAP of DES512 [35] and 80.0% mAP of DFPR512 [34]). To the best of our knowledge, ADFPNet is the first framework to obtain performance above 81% mAP on VOC 2012 without any bells and whistles. Table 3 shows the comparison of our method with the other state-of-the-art methods. ADFPNet300 produces 31.8% mAP, which outperforms the other VGG-16 based detectors with the same input size of 300 × 300. It is also noticeable that the accuracy of the proposed ADFPNet300 is higher by 2.4% than that of RefineDet320, which designs an anchor refinement module (ARM) to filter out negative anchors and coarsely adjust positive anchors with slightly larger input images. The accuracy of ADFPNet300 even exceeds R-FCN based on the ResNet-101 backbone and is similar to RetinaNet400, which uses ResNet-101 as the backbone and a 400 × 400 input size. It should be noted that our method is much better than the recent advanced one-stage detectors that try to include multi-scale context information, such as DFPR [34], RFBNet [15], and PFPNet-R [17]. Furthermore, when testing with an input image size of 512 × 512, the performance of ADFPNet512 further improves to 36.4%, which outperforms most one-stage methods except ResNet-101-FPN based RetinaNet800*, which adopted scale jitter, used an 800 × 800 input image, and was trained for 1.5× longer than RetinaNet500. Compared with the two-stage methods, ADFPNet512 surpasses most of them except Faster R-CNN w/ TDM and Deformable R-FCN, which use complex backbones and a large input size (i.e., 1000 × 600).
C. MS COCO
Our proposed ADFPNet also shows excellent performance on small object detection in the COCO dataset. In COCO, approximately 41% of objects are small while only 24% are large, and small object detection is still a fundamental problem in computer vision. As shown in Table 4, ADFPNet300 and ADFPNet512 achieve 12.6% and 19.2% mAP, respectively, on small objects, which demonstrates the efficiency and advantage of the proposed method. Moreover, ADFPNet512 achieves the best AP on small objects among the VGG-based detectors and is even better than most ResNet-backbone-based detectors. We show the detection results of ADFPNet512 on the MS COCO test-dev set in Fig. 9.
A. ARCHITECTURE ABLATION AND STUDY
We conduct experiments on the union of the VOC 2007 and VOC 2012 trainval sets to explore the influence of the ADFP module, the ADFP_L module, the adaptively feature calibration block, and more default boxes. The accuracy is evaluated on the VOC 2007 test set, as shown in Table 4. In all the experiments, the input image size is set to 300 × 300 and all the other hyperparameters are kept the same.
1) ADAPTIVELY FEATURE CALIBRATION BLOCK (ROW 2)-
To verify the effectiveness of the adaptively feature calibration block, we construct a variant network by removing it. As listed in Table 4, this variant increases the performance by 3.5% mAP compared to the baseline. With the adaptively feature calibration block, the mAP is further improved from 80.7% to 81.1%.
2) MORE DEFAULT BOXES (ROW 3)-
In the original SSD, the feature map of conv4_3 contains fine details, which are critical for localization, but lacks strong semantic information, which is used for classification. Therefore, only 4 default anchor boxes are associated with each location of conv4_3, conv10_2, and conv11_2, while 6 default anchor boxes are associated with each location of the other layers. We send the feature maps from conv4_3 into our ADFP_L module to produce a feature map containing rich details and semantic information, which is necessary for detecting small object instances. Thus, in order to improve the performance, especially for small instances, we set 6 default boxes, adding aspect ratios of 1/3 and 3, on the feature map from the ADFP_L module, which has no effect on the original SSD as mentioned in [15]. As shown in the third and fifth rows of Table 4, adding more default boxes increases the mAP from 80.3% to 81.1%.
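For reference, six default box shapes per location can be obtained by following the original SSD convention of aspect ratios {1, 2, 3, 1/2, 1/3} plus an extra box of aspect ratio 1 at an intermediate scale. The sketch below computes the box widths and heights for a given scale under the assumption that the paper follows the SSD formulas; the actual scale values are not specified here.

```python
from math import sqrt

def default_box_shapes(s_k, s_k_next, aspect_ratios=(2, 3)):
    """Return (w, h) pairs for the default boxes at one feature-map location.

    s_k is the box scale for the current layer and s_k_next the scale of the
    next layer (both relative to the input image size), following the SSD
    convention. With aspect_ratios=(2, 3) this yields 6 shapes; with (2,) it
    yields the 4-box configuration.
    """
    shapes = [(s_k, s_k)]                          # aspect ratio 1
    extra = sqrt(s_k * s_k_next)
    shapes.append((extra, extra))                  # extra scale for ratio 1
    for ar in aspect_ratios:
        shapes.append((s_k * sqrt(ar), s_k / sqrt(ar)))   # ratio ar
        shapes.append((s_k / sqrt(ar), s_k * sqrt(ar)))   # ratio 1/ar
    return shapes
```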
3) ADFP MODULE-To demonstrate the effectiveness of the ADFP module, we redesign a simple network with only the ADFP module and use the original SSD with the new data augmentation as a baseline. SSD obtains a detection performance of 77.2%, as shown in the first row of Table 4. With the simple introduction of our ADFP module, this performance is improved to 80.2%. The 3% gain fully demonstrates that our module, which extracts features with different receptive fields in a dense way, can significantly boost detection performance.
Because the feature map produced from conv4_3 is much bigger than the others, we correspondingly adjust the atrous rates to constitute a new module, defined as ADFP_L. As can be seen in the fourth and fifth rows of Table 4, adding the ADFP_L module further increases the performance by 0.9% mAP compared to the network with only the ADFP module. This is probably attributable to the sufficient contextual field obtained from conv4_3 when using larger atrous rates.
B. INFERENCE TIME STUDY
To quantitatively evaluate inference time, we test SSD and ADFPNet with batch size 1 on our machine with an NVIDIA 1080ti, CUDA 9.0, and cuDNN v7 for a fair comparison. All the methods are trained on the VOC 07+12 trainval set and evaluated on the VOC 2007 test set with a 300 × 300 input size. We report all the results in Table 4. ADFPNet300 without the ADFP_L module outperforms the original SSD300 by a large margin (80.2% vs 77.2%), although it spends a little extra time (15 ms/img vs 8 ms/img). The addition of the ADFP_L module consumes almost no extra time but improves performance by 0.9%. Finally, our framework has a 3.9% accuracy gain compared to SSD with an FPS of 62.5. This strongly proves that our proposed ADFP and ADFP_L modules significantly help promote the detection performance while meeting the needs of real-time detection (30 frames per second or better), as mentioned in [57] and [29].
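A minimal sketch of how per-image inference time and FPS could be measured with batch size 1 on a GPU, using explicit synchronization so that asynchronous CUDA execution does not distort the timing; the warm-up iterations and iteration count are assumptions, not a description of the authors' benchmarking code.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_size=300, iters=200, warmup=20, device="cuda"):
    model.eval().to(device)
    x = torch.randn(1, 3, input_size, input_size, device=device)  # batch size 1
    for _ in range(warmup):        # warm-up to stabilize clocks and the allocator
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()       # wait for all kernels before stopping the clock
    elapsed = time.perf_counter() - start
    ms_per_image = 1000.0 * elapsed / iters
    return ms_per_image, 1000.0 / ms_per_image   # (ms/img, FPS)
```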
VII. CONCLUSION
We present a novel adaptively dense feature pyramid network (ADFPNet) for object detection under the Single Shot MultiBox Detector (SSD) framework. The proposed network is able to detect objects across different scales by extracting feature maps with dense multiple scales and receptive fields. Extensive experiments have been conducted on several public benchmarks, Pascal VOC 2007, Pascal VOC 2012, and MS COCO, to demonstrate the efficiency of our method, which achieves state-of-the-art performance without any bells and whistles. Moreover, the proposed method also achieves a good balance between detection accuracy and inference speed.
Figure 1. Architecture of the proposed adaptively dense feature pyramid network (ADFPNet). The proposed ADFP module first produces dense features across multiple scales and receptive fields; then the dense features are re-calibrated according to their contribution to the detection task. The proposed module is seamlessly connected to the conv4_3 and conv7 layers. We use larger atrous rates for the ADFP module after conv4_3 because of its larger feature resolution (denoted as ADFP_L).
Figure 3. Training loss of ADFPNet300 on the VOC 07+12 trainval set. The Conf loss curve signifies the confidence loss, the Loc loss curve the localization loss, and Loss the total of the confidence and localization losses. The horizontal axis represents the training epochs.
Figure 4. The mAP curve of ADFPNet300 trained on the VOC 07+12 trainval set and tested on the VOC 2007 test set. The horizontal axis represents the training epochs.
Figures 7 and 8. The feature maps of ADFPNet512 before and after self-feature calibration. (a) shows a detection result; (b) shows the feature map from the Conv4_3 layer, channels 288 to 303, generated by SSD; (c) shows the feature map generated by our method before feature calibration; and (d) shows the corresponding features of (c) after calibration. The numbers in (c) show the relative feature weights calculated by the adaptively feature calibration block. The red-dot rectangle marks features weighted more, while the green-dot rectangle marks a feature that is depressed. Best viewed in color.
Figure 9. Detection results on the COCO test-dev set.
Note: train2017 signifies the COCO 2017 train set, which consists of exactly the same images as trainval35k. RetinaNet800* adopted scale jitter and was trained for 1.5× longer than RetinaNet500 using an input image size of 800 × 800. | 2019-07-03T14:49:28.404Z | 2019-06-12T00:00:00.000 | {
"year": 2019,
"sha1": "a8af5a65c11d7e031cf4ec371a3102e524f2fdc5",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8600701/08735713.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "200f2856aae1860cc3613f2ccda81a3eb7db3146",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
24300884 | pes2o/s2orc | v3-fos-license | D-form KLKLLLLLKLK-NH2 peptide exerts higher antimicrobial properties than its L-form counterpart via an association with bacterial cell wall components
The antimicrobial peptide KLKLLLLLKLK-NH2 was developed based on sapecin B, and synthesized using D-amino acids. Biochemical properties of the D-form and L-form KLKLLLLLKLK-NH2 peptides were compared. In order to limit the effects due to bacterial resistance to proteolysis, antimicrobial activities of the peptides were evaluated after short-term exposure to bacteria. D-form KLKLLLLLKLK-NH2 exhibited higher antimicrobial activities than L-form KLKLLLLLKLK-NH2 against bacteria, including Staphylococcus aureus and Escherichia coli. In contrast, the D-forms and L-forms of other antimicrobial peptides, including Mastoparan M and Temporin A, exhibited similar antimicrobial activities. Both the D-form and L-form KLKLLLLLKLK-NH2 peptides preferentially disrupted S. aureus-mimetic liposomes over mammalian-mimetic liposomes. Furthermore, the D-form KLKLLLLLKLK-NH2 increased the membrane permeability of S. aureus more than the L-form KLKLLLLLKLK-NH2, suggesting that the enhanced antimicrobial activity of the D-form was likely due to its interaction with bacterial cell wall components. S. aureus peptidoglycan preferentially inhibited the antimicrobial activity of the D-form KLKLLLLLKLK-NH2 relative to the L-form. Furthermore, the D-form KLKLLLLLKLK-NH2 showed higher affinity for S. aureus peptidoglycan than the L-form. Taken together, these results indicate that the D-form KLKLLLLLKLK-NH2 peptide has higher antimicrobial activity than the L-form via a specific association with bacterial cell wall components, including peptidoglycan.
Sapecin B is an antimicrobial peptide that was originally isolated from the culture medium of an embryonic cell line, NIH-Sape-4, derived from Sarcophaga peregrina (flesh fly). It displays potent activity against Gram-positive bacteria 13 . Two other related proteins, sapecin and sapecin C, were also isolated from the culture medium of NIH-Sape-4 [13][14][15] . Sapecin B has significant sequence similarity to a scorpion venom toxin, charybdotoxin 13,16 . Structural comparison of sapecin B and charybdotoxin identified the undecapeptide RSLCLLHCRLK-NH 2 , which corresponds to amino acid residues 7 to 17 of sapecin B with C-terminal amidation 16,17 . The peptide fragment RSLCLLHCRLK-NH 2 showed significant antimicrobial activity, suggesting that this region is responsible for the antimicrobial activity of the peptide 17 . The undecapeptide KLKLLLLLKLK-NH 2 was developed by modifying the primary structure of RSLCLLHCRLK-NH 2 . In addition to its activity against Gram-positive bacteria, Gram-negative bacteria, and fungi 18 , KLKLLLLLKLK-NH 2 has been shown to enhance mammalian immune responses via undefined molecular mechanisms [19][20][21] . The antimicrobial activity of the D-form KLKLLLLLKLK-NH 2 , which was synthesized using D-amino acids, persisted longer than that of the L-form because of its resistance to proteolytic degradation 18 .
In this study, we examined the antimicrobial properties of D-form KLKLLLLLKLK-NH 2 . D-form KLKLLLLLKLK-NH 2 displays higher antimicrobial activity against bacteria than its L-form; however, this elevated activity could not be explained by resistance to proteolytic degradation. It is important to note that other D-form antimicrobial peptides did not show higher antimicrobial activity than their L-form counterparts. Furthermore, D-form KLKLLLLLKLK-NH 2 showed higher affinity for bacterial cell wall components, such as peptidoglycan, than its L-form. Thus, the enhanced antimicrobial activity of the D-form KLKLLLLLKLK-NH 2 relative to its L-form is due to direct interactions with bacterial cell surface components.
Results
MICs of D-form KLKLLLLLKLK-NH 2 were lower than those of L-form KLKLLLLLKLK-NH 2 .
Previously, D-form KLKLLLLLKLK-NH 2 was shown to persist longer in bacterial culture medium and to show higher antimicrobial activity against Staphylococcus aureus than the L-form 18 . In order to further examine the antimicrobial properties of D-form KLKLLLLLKLK-NH 2 , we determined the MICs of the peptides against S. aureus, Escherichia coli, and Candida albicans. MICs of D-form KLKLLLLLKLK-NH 2 were lower than those of its L-form, especially against S. aureus, where the MIC of the D-form was 16-fold lower than that of the L-form (Table 1). We also determined the minimum inhibitory concentrations (MICs) of other antimicrobial peptides against S. aureus, including KLKLLLKLK-NH 2 , a derivative of KLKLLLLLKLK-NH 2 18 , FIKRIARLLRKIF-NH 2 (Kn2-7) derived from Buthus martensii scorpion venom 22 , INLKAIAALAKKLL-NH 2 (Mastoparan M) derived from hornet venom 23 , and FLPLIGRVLSGIL-NH 2 (Temporin A) derived from Rana temporaria 24 . All of these peptides are expected to form a helical structure similar to KLKLLLLLKLK-NH 2 16,17,22-24 . The MIC of D-form KLKLLLKLK-NH 2 against S. aureus was more than 32-fold lower than that of the L-form (Table 2). In contrast, the MICs of the D-forms and L-forms of Mastoparan M, Kn2-7, and Temporin A against S. aureus (Table 2) were similar. These observations indicate that KLKLLLLLKLK-NH 2 and its related peptide KLKLLLKLK-NH 2 are unique because these D-form peptides display lower MICs against S. aureus than their L-forms.
peptide was not observed (Fig. 1g). This observation suggests that the higher antimicrobial activity of D-form KLKLLLLLKLK-NH 2 was not due to its resistance to proteolytic degradation. In addition, in order to exclude the possibility that bovine serum albumin or some components from the culture medium specifically affect the antimicrobial activity of KLKLLLLLKLK-NH 2 , we performed experiments without culture medium and/or bovine serum albumin in the assay mixture. D-form KLKLLLLLKLK-NH 2 also showed higher antimicrobial activity against S. aureus than L-form KLKLLLLLKLK-NH 2 in the absence of culture medium and/or bovine serum albumin (Fig. 1f). It is noteworthy that the antimicrobial activities of both the L-form and D-form peptides in the absence of culture medium and bovine serum albumin were lower than those under our standard assay conditions (Fig. 1a and f). The antimicrobial activity of D-form KLKLLLKLK-NH 2 was also higher than that of its L-form counterpart (Fig. 2a).
In contrast, the D-forms and L-forms of the Kn2-7, Mastoparan M, and Temporin A peptides displayed similar antimicrobial activities against S. aureus (Fig. 2b-d). These results indicate that KLKLLLLLKLK-NH 2 and its derivative KLKLLLKLK-NH 2 are unique in that their D-forms have higher antimicrobial activities than their L-forms.
D-form KLKLLLLLKLK-NH 2 increased bacterial membrane permeability. Cationic antimicrobial peptides bind to the negatively charged bacterial surface and penetrate into the bacterial membrane. Therefore, their effects on bacterial membrane permeability closely correlate with antimicrobial activity. Effects of KLKLLLLLKLK-NH 2 and Mastoparan M on the membrane permeability of S. aureus were monitored by ethidium bromide influx rates. As shown in Fig. 3a, both D-form KLKLLLLLKLK-NH 2 (20 μg/ml) and L-form KLKLLLLLKLK-NH 2 (20 μg/ml) increased ethidium bromide influx rates; however, the rates were higher in response to D-form KLKLLLLLKLK-NH 2 than to L-form KLKLLLLLKLK-NH 2 . In contrast, D-form and L-form Mastoparan M (20 μg/ml) increased ethidium bromide influx rates to a similar extent (Fig. 3b). These observations are consistent with the findings that the antimicrobial activity of D-form KLKLLLLLKLK-NH 2 against S. aureus was higher than that of its L-form (Fig. 1a), whereas the antimicrobial activity of D-form Mastoparan M against S. aureus was similar to that of its L-form (Fig. 2c).
S. aureus peptidoglycan and E. coli lipopolysaccharide preferentially inhibited the antimicrobial activity of D-form KLKLLLLLKLK-NH 2 .
Most cationic antimicrobial peptides interact with bacterial membranes. Previously, sapecin was shown to have a high affinity for cardiolipin 25 . This observation encouraged us to examine whether D-form KLKLLLLLKLK-NH 2 specifically disrupts liposomes that mimic the cellular membrane of S. aureus. Both D-form and L-form KLKLLLLLKLK-NH 2 released calcein from S. aureus-mimetic liposomes 17,26 , which consisted of phosphatidylglycerol and cardiolipin (Fig. 4a). On the other hand, neither D-form nor L-form KLKLLLLLKLK-NH 2 was able to release calcein from mammalian-mimetic liposomes 27 that consisted of phosphatidylcholine, phosphatidylethanolamine, and cholesterol (Fig. 4a). Mammalian-mimetic liposomes demonstrated similar sensitivity to Triton X-100 as S. aureus-mimetic liposomes, excluding the possibility that mammalian-mimetic liposomes are resistant to chemical treatments (Fig. 4b). These observations indicate that both D-form and L-form KLKLLLLLKLK-NH 2 preferentially disrupt S. aureus-mimetic liposomes, which likely contributes to the antimicrobial activity of KLKLLLLLKLK-NH 2 . Thus, the ability to disrupt S. aureus-mimetic liposomes is not the cause of the higher antimicrobial activity of D-form KLKLLLLLKLK-NH 2 relative to its L-form.
To identify a specific target of D-form KLKLLLLLKLK-NH 2 , we analyzed whether bacterial cell wall components were able to inhibit the antimicrobial activities. A comparison of the antimicrobial activities of D-form and L-form KLKLLLLLKLK-NH 2 revealed that 1.9 μg/ml of the D-form and 7.5 μg/ml of the L-form displayed similar antimicrobial activity against S. aureus. The antimicrobial effect of D-form KLKLLLLLKLK-NH 2 was almost completely inhibited by 40 μg/ml of S. aureus peptidoglycan, but the same concentration failed to abrogate the antimicrobial activity of the L-form (Fig. 5a). These observations highlight the potential for a specific interaction between D-form KLKLLLLLKLK-NH 2 and peptidoglycan. In order to exclude the possibility that some contaminants, such as proteases, in the peptidoglycan samples might affect the inhibitory effects, heat-treated peptidoglycan was used for the analysis. As shown in Fig. 5f, heat-treated peptidoglycan showed inhibitory effects on the antimicrobial activities similar to those of untreated peptidoglycan. To further confirm that peptidoglycan is a specific target of D-form KLKLLLLLKLK-NH 2 , the antimicrobial effects were investigated in the presence of lysozyme-digested peptidoglycans (Fig. 5g). The D-form did not show an inhibitory effect on antimicrobial activity. The antimicrobial activity of D-form KLKLLLLLKLK-NH 2 was preferentially inhibited by lipopolysaccharide prepared from E. coli (Fig. 5b). Furthermore, the antimicrobial activity of D-form KLKLLLLLKLK-NH 2 was also preferentially inhibited by synthetic E. coli lipid A, a membrane anchor region of lipopolysaccharide (Fig. 5c). In contrast, lipoteichoic acid prepared from S. aureus inhibited the antimicrobial effect of both the D-form and L-form peptides similarly, indicating that the inhibitory effect was not specific for the D-form peptide (Fig. 5d). Peptidoglycan prepared from E. coli had a weak inhibitory effect on the antimicrobial activity of the D-form and L-form peptides (Fig. 5e). Taken together, these observations indicate that some cell surface components, such as S. aureus peptidoglycan, preferentially associate with D-form KLKLLLLLKLK-NH 2 rather than its L-form. Moreover, this preferential association accounts for the higher antimicrobial activity of D-form KLKLLLLLKLK-NH 2 relative to that of the L-form.
Figure 4. S. aureus-type (S. aureus) and mammalian-type (Mammalian) liposomes containing calcein were exposed to Triton X-100. The amount of calcein that leaked from the liposomes was measured using a spectrofluorophotometer and normalized to determine the % release relative to 0.1% Triton X-100. The error bars represent the mean ± standard deviations from triplicate assays.
Figure 5. Antimicrobial activities of D-form KLKLLLLLKLK-NH 2 (1.9 μg/ml) and L-form KLKLLLLLKLK-NH 2 (7.5 μg/ml) against S. aureus were examined in the presence of the indicated concentrations of peptidoglycan from S. aureus (a), lipopolysaccharide from E. coli (b), lipid A (c), lipoteichoic acid from S. aureus (d), and peptidoglycan from E. coli (e). (f) Antimicrobial activities of D-form KLKLLLLLKLK-NH 2 (1.9 μg/ml) and L-form KLKLLLLLKLK-NH 2 (7.5 μg/ml) against S. aureus were examined in the absence or presence of peptidoglycan (40 μg/ml) or heat-treated peptidoglycan (40 μg/ml) from S. aureus. (g) Antimicrobial activities of D-form KLKLLLLLKLK-NH 2 (2.0 μg/ml) against S. aureus were examined in the absence or presence of 40 μg/ml of peptidoglycan treated with lysozyme (digested peptidoglycan), 40 μg/ml of peptidoglycan treated without lysozyme (peptidoglycan), or control buffer treated with lysozyme (lysozyme). Antimicrobial activities of D-form and L-form peptides of Kn2-7 (6.25 μg/ml) (h) or Mastoparan M (8 μg/ml) (i) against S. aureus were examined in the presence of the indicated concentrations of peptidoglycan from S. aureus. Gray bars and white bars represent CFUs in assay mixtures treated with D-form and L-form peptides, respectively. Black bars represent CFUs in assay mixtures treated without peptide. The error bars represent the mean ± standard deviations from triplicate plates. Concentrations of dimethyl sulfoxide in the assay mixtures were 0.15%.
In addition, the inhibitory effects of peptidoglycan on Kn2-7 and Mastoparan M were examined. As shown in Fig. 5h, peptidoglycan shows significant inhibitory effects on the antimicrobial activities of both the D-form and L-form of Kn2-7. Furthermore, peptidoglycan shows weak inhibitory effects on the antimicrobial activities of Mastoparan M, and the inhibitory effect was not specific for the D-form peptide (Fig. 5i). D-form KLKLLLLLKLK-NH 2 showed higher affinity for S. aureus peptidoglycan than L-form KLKLLLLLKLK-NH 2 . The inhibitory effect of S. aureus peptidoglycan on the antimicrobial activity of D-form KLKLLLLLKLK-NH 2 suggested a specific interaction between these two molecules. To determine whether there was a direct association, direct binding between KLKLLLLLKLK-NH 2 and S. aureus peptidoglycan was examined. Biotin-labeled D-form or L-form KLKLLLLLKLK-NH 2 was added to multi-well plates that were coated with immobilized S. aureus peptidoglycan. Binding of the biotin-labeled D-form or L-form peptides was quantified using avidin-labeled peroxidase. As shown in Fig. 6, D-form KLKLLLLLKLK-NH 2 has a higher affinity for S. aureus peptidoglycan than the L-form counterpart.
Discussion
Incorporation of D-amino acids into antimicrobial peptides has been shown to improve their therapeutic efficacy; however, little is known about the underlying mechanisms that make them distinct from their L-form counterparts (reviewed in ref. 28). In this study we found that D-form KLKLLLLLKLK-NH 2 showed higher antimicrobial activity against both Gram-positive and Gram-negative bacteria, including S. aureus and E. coli, relative to its L-form counterpart. Moreover, the enhanced antimicrobial activity of the D-form was not due to its resistance to proteolytic degradation. D-form KLKLLLLLKLK-NH 2 showed higher affinity for S. aureus peptidoglycan than the L-form counterpart. Peptidoglycan and lipopolysaccharide prepared from S. aureus and E. coli, respectively, selectively inhibited the antimicrobial activities of D-form KLKLLLLLKLK-NH 2 . Thus, specific interactions between D-form peptides and components of the bacterial cell wall may contribute to their elevated antimicrobial activity.
Cationic antimicrobial peptides target the negatively charged cell surface of microorganisms. In some cases, D-forms of naturally occurring antimicrobial peptides have antimicrobial activities similar to those of their L-form counterparts, and it is believed that the interaction between antimicrobial peptide and microbial cell surface is not due to specific, close interactions 10,12 . This general notion is consistent with our observations of similar antimicrobial activities of the D-forms and L-forms of Mastoparan M, Kn2-7, and Temporin A. In addition, D-form KLKLLLLLKLK-NH 2 showed similar activity to disrupt S. aureus-mimetic liposomes when compared to the L-form. These observations indicate that the interaction between antimicrobial peptides and anionic bacterial-type liposomes does not require close structure-based contact, but that charge-based interactions are important for antimicrobial activities. In contrast to the previous studies, our results showed that D-form KLKLLLLLKLK-NH 2 had a higher affinity for some cell surface compounds than its L-form counterpart, and that the affinity of the D-form for bacterial surface components contributed to its antimicrobial activity. Our observations indicate that specific, close contact between antimicrobial peptides and bacterial cell surface components increases antimicrobial activity in addition to charge-based contact. Peptidoglycan consists of sugars and peptides, which are chiral components. The chiral portions of peptidoglycan might be involved in the association with D-form KLKLLLLLKLK-NH 2 . It is noteworthy that the high affinity of D-form KLKLLLLLKLK-NH 2 for cell surface components, including peptidoglycan, does not necessarily indicate direct targeting. There might be mechanisms that facilitate peptide transfer to the plasma membrane, which determine the effective concentration.
Comparison of the D-4Leu and L-4Leu antimicrobial peptides revealed that the D-form had a greater tendency to bind to the biofilm exopolysaccharide alginate 29 . This current study of KLKLLLLLKLK-NH 2 largely recapitulated these findings. To date, the molecular basis for the close interaction of D-form peptides with bacterial cell surface components remains unknown; however, the importance of precise structures of the bacterial molecules involved in these interactions has been shown. Antimicrobial activities of D-form KLKLLLLLKLK-NH 2 were preferentially inhibited by S. aureus peptidoglycan but not by E. coli peptidoglycan. This difference is likely based on the structural differences between S. aureus peptidoglycan and E. coli peptidoglycan.
Based on our observations, replacement of all L-amino acids with D-amino acids in an antimicrobial peptide may introduce structural changes that are beneficial for antimicrobial activity. It is important to note that not all antimicrobial peptides have distinct activities based on whether they are expressed as a D-form or L-form, and the number of these peptides may be fairly low. Future studies should focus on elucidating the specific interactions of the D-form modification with bacteria as well as the molecular basis underlying this phenomenon. This will aid in the development of peptide therapeutics.
Methods
Reagents and antimicrobial peptides. Dimethyl sulfoxide, bovine serum albumin (fraction V), cardiolipin, L-α -phosphatidyl-DL-glycerol, peptidoglycan purified from S. aureus, lysozyme, and lipoteichoic acid purified from S. aureus were purchased from Sigma-Aldrich. Cholesterol, 2-dioleoyl-sn-glycero-3-phosphocholine, and 2-dioleoyl-sn-glycero-3-phosphoethanolamine were purchased from Avanti Polar Lipids Inc. Peptidoglycan purified from E. coli was purchased from InvivoGen. Calcein was purchased from Dojindo. Triton X-100 was purchased from Thermo Fisher Scientific. Lipopolysaccharide purified from E. coli 0111:B4 was purchased from List Biological Laboratories, Inc. Synthetic lipid A was purchased from Peptide Institute Inc. Ruby protein gel stain and Any kD TM precast polyacrylamide gels were purchased from Bio-Rad.
Antimicrobial peptides and biotin-labeled antimicrobial peptides were commercially synthesized by Hayashi Kasei, Thermo Fisher Scientific, and the Toray Research Center. The C-termini of the synthetic peptides were modified by amidation. All peptides were initially suspended in dimethyl sulfoxide.
Determination of MIC.
Bacterial suspensions in Mueller-Hinton II medium were adjusted to an optical density at 550 nm (OD 550 ) of 0.0011. C. albicans suspensions in YM medium were adjusted to OD 650 = 0.033. Peptides were serially diluted in 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride, 0.2% bovine serum albumin, and 2.56% dimethyl sulfoxide. The peptide solution (100 μl) was mixed with 100 μl of bacteria or C. albicans suspensions. Bacterial cultures were incubated for one day at 37 °C. C. albicans cultures were incubated for two days at room temperature. Cell growth was monitored optically and the MIC was determined.
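Reading out the MIC from such a plate amounts to finding the lowest concentration in a two-fold dilution series at which no growth is observed. The small sketch below illustrates that logic; the concentrations and the boolean growth scoring are illustrative assumptions, not the actual plate data.

```python
def mic_from_dilution_series(start_conc_ug_ml, n_dilutions, growth_observed):
    """Return the MIC (µg/ml) from a two-fold serial dilution series.

    start_conc_ug_ml: highest peptide concentration tested.
    growth_observed: list of booleans, one per well, ordered from the
    highest to the lowest concentration (True = visible growth).
    """
    concentrations = [start_conc_ug_ml / (2 ** i) for i in range(n_dilutions)]
    mic = None
    for conc, grew in zip(concentrations, growth_observed):
        if not grew:
            mic = conc       # keep moving down while no growth is seen
        else:
            break            # first well with growth ends the inhibitory range
    return mic               # None: even the highest concentration allowed growth

# Example: 64, 32, 16, 8, 4, 2, 1 µg/ml with growth from 8 µg/ml downward -> MIC = 16
print(mic_from_dilution_series(64, 7, [False, False, False, True, True, True, True]))
```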
Assay for antimicrobial activity. Bacteria and C. albicans were suspended in growth medium. Peptides suspended in dimethyl sulfoxide were serially diluted in 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride and 0.2% bovine serum albumin, as described previously 13 . Concentrations of dimethyl sulfoxide in the assay mixtures are indicated in the figure legends. To examine the effects of bovine serum albumin on the assay, 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride was used for the dilution of peptides. Peptide solution (500 μl) was added to 500 μl of bacterial suspension, and the mixture was incubated at 37 °C for 10 min. To examine the effects of culture medium components on the assay, bacterial suspensions were prepared with 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride. Alternatively, 500 μl of peptide solution was added to 500 μl of C. albicans suspension and the mixture was incubated at room temperature for 10 min. The inhibitory effects of bacterial components were analyzed by incubating 450 μl of S. aureus suspension with 500 μl of peptide solution plus 50 μl of inhibitor sample at 37 °C for 10 min. The peptide/bacteria suspensions were then diluted and plated onto LB agar, LB agar containing 0.5% glucose, or YM agar. After cultivation of the plates, colony forming units (CFU) in the peptide/bacteria suspensions were calculated based on the average of triplicate plates.

Assay for membrane permeability. To examine membrane permeability, ethidium influx rates were examined as previously described 30,31 . S. aureus suspension cultures were adjusted to an OD600 of 0.4 in 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride and 0.2% bovine serum albumin. Peptide in dimethyl sulfoxide (8 μl) or dimethyl sulfoxide alone (8 μl) was then added to 2 ml of S. aureus suspension. At 30 sec after the addition of peptide, ethidium bromide was added to a final concentration of 5 μg/ml, and fluorescence of the ethidium-nucleic acid complex was monitored using an RF-5300PC spectrofluorometer (Shimadzu). Excitation and emission wavelengths were 545 nm with 5 nm slits and 600 nm with 10 nm slits, respectively.
Liposome suspensions were prepared by diluting 1 μl of liposomes into 40 ml of 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride. Peptides were serially diluted in 10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride and 1% dimethyl sulfoxide. Peptide samples (20 μl) were added to 2 ml of liposome suspension, and the mixtures were incubated at room temperature for 10 min. Calcein leakage from the liposomes was examined using an RF-5300PC spectrofluorometer. Excitation and emission wavelengths were 490 nm and 520 nm (with a 5 nm slit width), respectively 32 .
Digestion and heat-inactivation of peptidoglycan. Peptidoglycan (120 μg) prepared from S. aureus was added to 1 mg/ml of lysozyme in phosphate-buffered saline (150 μl) 33 . Peptidoglycan without lysozyme and lysozyme without peptidoglycan were also prepared as controls. Samples were incubated overnight at 37 °C, and then incubated at 100 °C for 15 min to inactivate lysozyme. For heat-inactivation of peptidoglycan, 600 μg of peptidoglycan suspended in water (300 μl) was incubated at 100 °C for 15 min. The samples were sonicated for 10 sec at setting 1 using a Branson sonifier model S-150D. These samples were used as inhibitor samples for antimicrobial activity assays.
Peptidoglycan-binding assay. Peptidoglycan-binding assays were performed as previously described with some modifications [34][35][36] . Peptidoglycan from S. aureus (100 μg/ml) was suspended in 0.2% trifluoroacetic acid and sonicated twice for 10 sec at setting 1 using a Branson sonifier model S-150D. The peptidoglycan suspension (50 μl) was used to coat the wells of a flat-bottom 96-well microplate (Thermo Fisher Scientific). The plate was incubated at room temperature until the water evaporated, placed at 60 °C for 1 h to dry completely, and then blocked with 200 μl of 5 mg/ml bovine serum albumin in binding buffer (10 mM phosphate buffer (pH 6.0) containing 130 mM sodium chloride, 0.05% Tween 20, and 0.01% trifluoroacetic acid) at 37 °C for 2 h. The plate was washed four times with 200 μl of binding buffer. Biotin-labeled peptides in 100 μl of binding buffer containing 0.5% dimethyl sulfoxide were added to the wells and incubated at 37 °C for 2 h. Detection of biotin-labeled peptides was performed using Vectastain ABC reagent (Vector Laboratories) according to the manufacturer's instructions. The wells were washed four times with binding buffer, 100 μl of avidin-labeled peroxidase was added to each well, and the plate was incubated at 37 °C for 1 h. The wells were washed again as described above. After washing, 100 μl of 3,3′,5,5′-tetramethylbenzidine substrate was added and the plate was incubated at room temperature. After 10 min, the reaction was stopped by the addition of 100 μl of 0.5 M sulfuric acid. Absorbance was measured at 450 nm. | 2018-04-03T03:47:32.421Z | 2017-03-06T00:00:00.000 | {
"year": 2017,
"sha1": "a418ec868784430159bcfedf58fb04d8212ab1ce",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep43384.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "41094cd624d174d4393fec66dc3920d73aaa80cf",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
245539373 | pes2o/s2orc | v3-fos-license | Spatial tuning of face part representations within face-selective areas revealed by high-field fMRI
Regions sensitive to specific object categories, as well as organized spatial patterns sensitive to different features, have been found across the whole ventral temporal cortex (VTC). However, it is unclear how, within each object category region, specific feature representations are organized to support object identification. Would object features, such as object parts, be represented with fine-scale spatial tuning within object category-specific regions? Here, we used high-field 7T fMRI to examine the spatial tuning to different face parts within each face-selective region. Our results show consistent spatial tuning to face parts across individuals: within the right posterior fusiform face area (pFFA) and the right occipital face area (OFA), the posterior portion of each region was biased toward eyes, while the anterior portion was biased toward mouth and chin stimuli. Our results demonstrate that within the occipital and fusiform face-processing regions there exists systematic spatial tuning to different face parts that supports further computation combining them.
Introduction
The ventral temporal cortex (VTC) supports our remarkable ability to recognize objects rapidly and accurately from visual input in everyday life. Identity information is extracted from visual input through multiple stages of representation. To fully understand the neural mechanisms of object processing, it is critical to know how these representations are physically mapped onto the anatomical structure of the VTC. Numerous studies have revealed multiple levels of feature representation that manifest at different, superimposed scales of anatomical organization in the VTC. Superordinate category representations (e.g., animate/inanimate, real-world size) manifest at a large scale covering the whole VTC, whereas category-selective representations (e.g., face-, body-, and scene-selective regions in the mid-fusiform gyrus) are revealed at a finer spatial scale (Hasson et al., 2003; Spiridon et al., 2006). Recent evidence suggested a general spatial organization of neural responses to dimensions of object feature space in monkey inferotemporal cortex (Bao et al., 2020). Could such physical organization extend to an even smaller scale, such as representations of object parts/features within each category-selective region? In other words, as part representations play a critical role in object processing, is there consistent spatial tuning across individuals for different object parts within each category-selective region in the VTC?
Fine-scale spatial organizations of low-level visual features have already been found in early visual cortex, such as ocular dominance columns and orientation pinwheels (Blasdel and Salama, 1986; Bonhoeffer and Grinvald, 1991; Hubel et al., 1977; Weliky et al., 1996). Among the object-selective regions in the VTC, the face-selective regions, including the FFA and OFA, constitute one of the most widely examined object-processing networks in cognitive neuroscience. Because faces have spatially separated yet organized features, such as the eyes and mouth, that are easy to define, face parts are well suited for examining whether there is spatial tuning for different object features in the VTC. Neurophysiological studies in non-human primates demonstrated that face-selective neurons in face-selective regions show different sensitivities to various face features or combinations of dimensions in face feature space (Chang and Tsao, 2017; Freiwald et al., 2009). Human fMRI studies also found that neural response patterns in the FFA or OFA could distinguish different face parts (Zhang et al., 2015), suggesting that voxels within the same face-selective region may have different face feature tuning. In addition, a previous study suggested that the spatial distribution of a face feature may be related to the physical location of that feature in a face (Henriksson et al., 2015).
The face-selective regions in the VTC are relatively small, spanning about 1 cm. To investigate potential spatial tuning within each face region, high-resolution fMRI with sufficient sensitivity and spatial precision is necessary. With high-field fMRI, fine-scale patterns have been observed in early visual cortex, such as columnar-like structures in V1, V2, V3, V3a, and hMT (Cheng et al., 2001; Goncalves et al., 2015; Nasr et al., 2016; Schneider et al., 2019; Yacoub et al., 2008; Zimmermann et al., 2011). These findings validate the feasibility of using high-field fMRI to reveal fine-scale (several mm) structures in the visual cortex.
Here, we used 7T fMRI to examine whether category-specific feature information, such as object parts, is represented in a systematic spatial pattern within object-selective regions. With faces as stimuli, high-field fMRI allowed us to measure detailed neural response patterns from multiple face-selective regions. Our results show that in the right pFFA and right OFA, different face parts elicited differential spatial patterns of fMRI responses. Specifically, eyes induced responses biased toward the posterior portion of the ROIs, while responses to mouth and chin were biased toward the anterior portion. Similar spatial tuning was observed in both the pFFA and OFA, and the patterns were highly consistent across participants. Together, these results reveal robust fine-scale spatial tuning to face features within face-selective regions.
Results
One critical challenge in demonstrating spatial tuning within a single face-selective region is finding an anatomical landmark with which to align the functional maps across individuals, as the shape, size, and spatial location of the FFA vary considerably across individuals. Among the anatomical structures in the VTC, the mid-fusiform sulcus (MFS) can serve as such a landmark in the current study. The MFS is a relatively small structure in the VTC but is consistently present in most individuals. On the one hand, the structure of the MFS predicts the coordinates of the face-selective regions around the mid-fusiform gyrus, especially the anterior one. On the other hand, the MFS is highly consistent with many anatomical lateral-medial transitions in the VTC, such as transitions in cytoarchitecture and white-matter connectivity (Caspers et al., 2013; Grill-Spector and Weiner, 2014; Lorenz et al., 2017). In addition, it also predicts transitions in several functional organizations, such as animacy/inanimacy and face/scene preference. Considering its anatomical and functional significance, in the current study we used the direction of the MFS to align the potential spatial tuning of face parts across individuals.
Different face parts (i.e., eyes, nose, mouth, hair, and chin; see Figure 1A) and whole faces were presented to participants, who performed a one-back task in the 7T MRI scanner. For each participant, five face-selective ROIs (i.e., right pFFA, right aFFA, right OFA, left FFA, and left OFA) were defined with independent localizer scans. Before comparing the spatial response patterns between face parts, we assessed the overall neural response amplitudes they generated in each ROI. All face-selective regions showed a similar trend in that eyes generated higher responses than nose, hair, and chin (ts > 2.61, ps < 0.05; except for eyes vs. nose in the left FFA and for eyes vs. chin in the left OFA, ts < 2.40, ps > 0.06; see Figure 1B). However, mouth generated response amplitudes similar to those of eyes (ts < 1.58, ps > 0.17).
Considering that eyes and mouth are two dominant features in face perception (Schyns et al., 2002; Wegrzyn et al., 2017), and that their response amplitudes were similar in face-selective regions, as an initial step we compared the spatial patterns of neural responses to eyes with those to mouth within each ROI. Each pattern was first normalized to remove any overall amplitude difference between conditions. Then we directly contrasted the two patterns and projected the difference onto the inflated brain surface. A spatial pattern was observed consistently in the right pFFA across all participants (Figure 2). Along the dimension parallel to the mid-fusiform gyrus, the posterior portion of the right pFFA was biased to respond more to eyes, whereas the anterior portion was biased to respond more to mouth. Note that in participant S2, the direction of the MFS was more lateral-medial near the position of the right pFFA, and interestingly, the eyes-mouth contrast map was oriented in the same direction, even though S2's map may initially appear oriented differently from those of the other participants. This suggests that the anatomical orientation of the MFS is highly correlated with such spatial tuning of face parts. To estimate the reliability of this spatial tuning, we split the eight runs from each participant in the main experiment into two data sets (odd runs and even runs) and estimated the eyes-mouth biases within each data set. We then calculated the correlation coefficient of these biases across voxels between the two data sets to estimate the reliability of the results in the right pFFA. The results demonstrated strong reliability of the data within participants.

Figure 1 caption (partial): The face parts were generated from 20 male faces. Each stimulus was presented around the fixation and participants performed a one-back task during the scan. (B) Average fMRI responses to different face parts in each face-selective region. Generally, eyes elicited higher responses than nose, hair, and chin in most of the regions. No significant difference was observed between eyes and mouth responses. Error bars reflect ±1 SEM.

Figure 2 caption: Contrast maps between normalized fMRI responses to eyes and mouth in the right pFFA, illustrated in the volume (upper) or on the inflated cortical surface (lower) of each participant. On the surface, the mid-fusiform sulcus is shown in dark gray with an orange outline. The blue line outlines the right pFFA identified with an independent localizer scan. Aligned with the direction of the mid-fusiform sulcus, the posterior part of the right pFFA shows a response bias to eyes (warm colors), while the anterior part shows a mouth bias (cool colors). The posterior-to-anterior pattern is generally consistent across participants.
To further demonstrate this relationship, and to provide a quantitative description of the spatial tuning of face parts within the right pFFA, the fMRI responses to different face parts were projected onto the brain surface of each individual participant. We then grouped vertices based on their location along the direction parallel to the MFS and averaged the fMRI responses at each location to generate the response profile along this posterior-anterior dimension (Figure 3A; see details in Materials and methods). The group-averaged results clearly showed that the difference between eyes and mouth signals changed consistently along the posterior-anterior direction in the right pFFA (Figure 3B). To quantify this trend, we calculated, for each participant, the correlation coefficient between the eyes-mouth neural response differences and the position index along the posterior-anterior dimension (i.e., more posterior locations were assigned smaller values). The group result revealed a significant negative correlation (t(5) = 8.36, p = 0.0004, Cohen's d = 3.41), confirming the consistency across participants: the posterior part of the right pFFA was biased to eyes and the anterior part was biased to mouth.
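A minimal sketch of this position-bias correlation is given below. The vertex positions and responses are simulated placeholders rather than data from the study, and the sign convention (smaller values for more posterior positions) follows the text.

```python
# Sketch: correlate the eyes-minus-mouth response difference with
# posterior-to-anterior position along the mid-fusiform sulcus.
# The data here are simulated placeholders, not values from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_vertices = 200
# Position index along the MFS: smaller = more posterior.
position = np.sort(rng.uniform(0, 1, n_vertices))
# Simulated normalized responses: eyes bias decreasing toward anterior.
eyes = 1.0 - 0.8 * position + rng.normal(0, 0.2, n_vertices)
mouth = 0.2 + 0.8 * position + rng.normal(0, 0.2, n_vertices)

diff = eyes - mouth
r, p = pearsonr(diff, position)
print(f"r = {r:.2f}, p = {p:.3g}")  # expected: a strongly negative r
```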
The contrast map highlighted the differences between eyes and mouth responses. However, the original response patterns elicited by eyes and mouth share the same underlying general 'face-related' pattern, which was subtracted out when contrasting the two response patterns. To extract the response profile of individual face parts, we used the independently obtained response pattern of whole faces as the general face-related pattern and regressed it out from the eyes and mouth response patterns. The fMRI responses can be influenced by factors other than neural responses, such as the distribution of veins, which means there is a shared factor driving the raw fMRI response patterns of different conditions. Thus, to eliminate such a shared pattern from the patterns of different face parts, we regressed out the spatial pattern of whole faces from the pattern of each face part. With the general pattern regressed out, we observed distinct spatial profiles elicited by eyes and mouth in the right pFFA (Figure 3D, top panel). The eye-biased voxels were more posterior than the mouth-biased voxels, consistent with the contrast map shown in Figure 2.
Removing the general pattern helped to reveal the pattern of voxel biases for individual face parts. However, it is possible that removing the general face-related pattern distorted the part-generated response patterns, since they share high-level visual information (i.e., face and eyes stimuli are both face-related). Therefore, it is important to check whether the part-specific patterns can still be seen when a common, face-independent signal distribution is removed instead. In five of the six participants, data were also obtained while they viewed everyday objects. Indeed, non-face objects generated significantly lower but spatially similar patterns of activation compared with faces across the right pFFA (Figure 3C). This result suggests that there is a general intrinsic BOLD sensitivity profile in the pFFA regardless of the stimuli. Both the face and non-face object patterns explained a large part of the variance of the face part patterns (faces: average R² = 0.86; objects: average R² = 0.72). We therefore used the response patterns of either faces or non-face everyday objects to regress out the intrinsic baseline profile from the eyes and mouth response patterns, and plotted the part-specific patterns along the posterior-anterior dimension. Consistently, results with the object patterns removed showed a clear posterior bias for eyes and anterior bias for mouth in the right pFFA (Figure 3D, bottom panel).
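The regression step can be sketched as follows: for each face part, the whole-face (or object) pattern across voxels serves as the single predictor in an ordinary least-squares fit, and the residuals are taken as the part-specific pattern. The function and variable names and the data below are illustrative, not the authors' code.

```python
# Sketch: remove a shared "baseline" spatial pattern from a face-part pattern
# by regressing the whole-face (or object) pattern out across ROI voxels.
import numpy as np

def regress_out(part_pattern, baseline_pattern):
    """Return the residual part pattern and the R^2 of the baseline fit (1D voxel arrays)."""
    X = np.column_stack([np.ones_like(baseline_pattern), baseline_pattern])
    beta, *_ = np.linalg.lstsq(X, part_pattern, rcond=None)
    fitted = X @ beta
    residual = part_pattern - fitted
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((part_pattern - part_pattern.mean()) ** 2)
    return residual, 1.0 - ss_res / ss_tot

# Illustrative use with random data standing in for beta maps over ROI voxels.
rng = np.random.default_rng(1)
face = rng.normal(size=500)
eyes = 0.9 * face + rng.normal(scale=0.3, size=500)
eyes_specific, r2 = regress_out(eyes, face)
print(f"baseline R^2 = {r2:.2f}")
```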
To control for potential contributions from retinotopic biases of the different face part conditions, all stimuli in our experiment were presented at fixation with a 1.3° horizontal jitter, either to the left or to the right, alternately in different trials within a block. Even though the stimuli were centered on the fixation, because of the nature of the face parts (e.g., the two eyes are apart, and the chin depicts the outline of the face), there were still small (less than 3°) retinotopic differences between the eyes and mouth conditions. To further rule out a retinotopic contribution, as well as to replicate our finding, we ran two control experiments. In the first control experiment (Control Experiment 1), data were obtained with a single eye or a mouth presented at either a near-central (1.3°) or a near-peripheral (3.1°) location during the scan (see Figure 3-figure supplement 1A). This 2 × 2 (face part × location) design allowed us to contrast fMRI response patterns between face parts (single eye vs. mouth) regardless of the stimulus location, or between locations (near central vs. near peripheral) regardless of the face part presented. Data from six participants were collected in Control Experiment 1, and two of them (S1 and S5) also participated in the main experiment. In all participants, the eye vs. mouth contrast revealed spatial patterns in the right pFFA very similar to those in the main experiment (Figure 3-figure supplement 1B). However, contrasting fMRI responses between the near-central and near-peripheral locations regardless of the face parts failed to reveal consistent patterns across participants (Figure 3-figure supplement 1C). These results further support that the different fMRI response patterns we observed in the right pFFA were driven by face feature differences rather than retinotopic bias. In the second control experiment (Control Experiment 2), we used the top and bottom parts of the face as stimuli and counterbalanced the stimulus location to verify the spatial tuning in the right pFFA. With a 2 × 2 design (eyes vs. nose and mouth × presented above vs. below fixation) (Figure 3-figure supplement 2A), consistent anterior-posterior spatial patterns in the right pFFA were observed in eight participants (Figure 3-figure supplement 2B), which further corroborated our main finding of spatially organized representation of face parts in the right pFFA.

Figure 3 caption (partial): The spatial profiles of whole faces and everyday objects in the right pFFA; both profiles showed similar patterns, though the whole-face responses were generally higher than the object responses. (D) The spatial profiles of individual face part responses after regressing out the general fMRI response patterns elicited by either whole faces (upper) or everyday objects (lower); in both cases, distinct spatial profiles were observed between eyes and mouth in the right pFFA.
In addition to the two control experiments, we also measured the population receptive field (pRF) of each voxel in the right pFFA in three participants from the main experiment (Figure 3-figure supplement 3A), following established procedures (Dumoulin and Wandell, 2008; Kay et al., 2013; Kriegeskorte et al., 2008). For each voxel, the parameters x and y were estimated, along with other parameters, to represent the receptive field center on the horizontal (x) and vertical (y) axes of the visual field. Although more voxels in the right pFFA were generally biased to the left visual field, which is consistent with previous reports (Kay et al., 2015; Nichols et al., 2016), we observed no consistent spatial pattern in either the x or the y map of the right pFFA across participants (Figure 3-figure supplement 3B).
To examine the spatial patterns of responses to eyes and mouth in other face-selective regions, analyses similar to those for the pFFA were applied to the fMRI response patterns in the right OFA, right aFFA, left FFA, and left OFA. For the left and right OFA, the posterior-anterior dimension was defined as the direction parallel to the occipitotemporal sulcus (OTS), where the OFAs were located in most participants. Among these regions, the right OFA also had distinct response patterns for eyes and mouth along the posterior-anterior dimension (Figure 4), similar to what we observed in the right pFFA. A group-level negative correlation was observed between the eyes-mouth differences and the posterior-anterior location in the right OFA (t(5) = 3.63, p = 0.015, Cohen's d = 1.48). Such a pattern was also observed in the control experiments. We also observed similar spatial patterns between the eyes-mouth bias and the visual field bias in the vertical direction (Figure 4-figure supplement 1), which is consistent with previous findings in the inferior occipital gyrus (de Haas et al., 2021). While the right OFA and right pFFA have been considered sensitive to facial components and whole faces, respectively, in our data they showed similar spatial profiles of eyes and mouth responses along the posterior-anterior dimension. This is consistent with, but adds some constraints to, the idea that the right pFFA may receive face feature information from the right OFA for further processing (Liu et al., 2010; Zhu et al., 2011). In the other face-selective regions, no consistent pattern was observed, as the correlations between the eyes-mouth difference and posterior-anterior location were not significant (ts < 1.09, ps > 0.32; see Figure 4A).
Besides the anterior-posterior dimension, the spatial representation of parts could be organized along other spatial dimensions, such as the lateral-medial dimension in the VTC, or even in more complex nonlinear patterns. However, the right pFFA was located within the sulcus (MFS) in most of our participants, such that voxels distant from each other on the surface along the lateral-medial dimension could be spatially adjacent in volume space, making it difficult to accurately reconstruct the spatial pattern along the lateral-medial dimension within the sulcus. Nevertheless, the finding of an anterior-posterior bias of face parts is sufficient to demonstrate the existence of a fine-scale feature map within object-selective regions.
Our stimuli also included nose, hair, and chin images, giving us a chance to examine their spatial profiles in each face-selective ROI as we did for eyes and mouth, although their neural responses were generally lower than those to eyes and mouth. Chin and mouth elicited similar response patterns along the anterior-posterior dimension in the right pFFA and right OFA after regressing out the general spatial patterns (Figure 5A). Directly contrasting fMRI response patterns between eyes and chin revealed similar spatial profiles in the right pFFA and right OFA, in which the posterior part was biased to eyes and the anterior part was biased to chin (ts > 5.30, ps < 0.01; see Figure 5B). We also observed a similar though less pronounced profile in the left FFA (t(5) = 2.68, p = 0.04, Cohen's d = 1.09), but not in the right aFFA or left OFA (ts < 0.41, ps > 0.71).
Discussion
Our results reveal that within certain face-selective regions in the human occipito-temporal cortex, the neural representations of different facial features have consistent spatial profiles. Such fine-scale spatial tuning is found in both the right pFFA and right OFA, but not in the more anterior right aFFA or in the left hemisphere's face-selective regions. In other words, fine-scale spatial tuning for face parts exists at the early to intermediate stages of the face processing hierarchy in the right hemisphere.
In the current study, five face parts (i.e., eyes, nose, mouth, hair, and chin) were tested, with eyes and mouth showing the most distinct spatial profiles in the right pFFA and right OFA. No obvious spatial pattern was observed for nose and hair in face-selective regions, but it would be premature to conclude that there is no fine-scale spatial profile for their neural representations. For one, the nose and hair stimuli elicited lower fMRI responses compared with the eyes and mouth stimuli, making it more difficult to detect potential spatial patterns. The observation that eyes and mouth elicited the most differential patterns is consistent with their providing more information about faces than other features in face processing (Schyns et al., 2002; Wegrzyn et al., 2017). The dominance of eyes and mouth in face-selective regions could be considered a form of cortical magnification of more informative features, a common principle of functional organization in sensory cortex (Cowey and Rolls, 1974; Daniel and Whitteridge, 1961; Penfield and Boldrey, 1937).
The discovery that some face parts are represented within the face-processing regions with fine-scale spatial tuning improves our understanding of how functional representations are physically mapped onto anatomical structures in the VTC. To further explore neural models of object processing in the VTC, it is important to ask what kinds of constraints, functional or anatomical, give rise to such fine-scale spatial tuning. Many visual cortical areas have retinotopic maps; indeed, having a retinotopic representation of the visual world is one of the key ways to define a visual cortical area. Along the occipitotemporal processing stream, visual areas become increasingly specialized in processing certain features and object categories. What is the relationship between a potential spatial organization of face part representations and the spatial relationship of face parts in a real face?
A recent study revealed tuning for both retinotopic location and face parts in the inferior occipital gyrus, where the OFA is located (de Haas et al., 2021). Although the tuning peak maps were idiosyncratic across individuals, the two tuning maps were correlated within individuals, suggesting a relationship between the configuration of face parts and their typical retinotopic configuration. Our findings provide additional support for face part tuning in the OFA, and further reveal a spatial profile of face part tuning that is consistent across individuals. More importantly, our finding of spatial tuning of face parts in the pFFA indicates that although the organization of feature tuning could be constrained by retinotopic tuning in occipital cortex, a more abstract feature tuning can still be spatially organized in cortical areas with absent or minimal retinotopic properties at later stages of the VTC.
Another previous study tested the idea of 'faciotopy', that there are cortical patches representing different face features within a face-selective region and that the spatial organization of these feature patches on the cortical surface reflects the physical relationships of face features (Henriksson et al., 2015). Their results showed that in the OFA and FFA, the differences between the neural response patterns of face parts were correlated with the physical distances between face parts in a face. Our results support the existence of a stable spatial profile of face features in the right OFA and right pFFA, especially for eyes and mouth. A possible mechanism underlying such a faciotopic organization is local-to-global computation: physically adjacent face parts interact more than parts far apart from each other during the early stages of processing, so it is more efficient for them to have neural representations near each other. However, in the current study we did not find the posterior bias for hair that we found for eyes, even though hair and eyes are spatially adjacent; this could be because hair is generally less invariant and less informative for face identification.
Another potential explanation is that, while both contribute to face identification, eyes and mouth are differentially involved in different neural networks and have distinct functional connectivity profiles with other brain regions. Specifically, the mouth region provides more information for language processing and audio-visual communication perception, and thus may be more strongly connected to the language processing system. Meanwhile, the eyes are more important in face detection, and eye gaze signifies interest, so they may be more strongly connected to the attention system. Previous studies have found that connectivity profiles can predict functional selectivity in the VTC; it would therefore be interesting for future studies to examine whether the face part spatial tuning in the pFFA can be predicted from functional or anatomical connectivity to other brain regions.
A third explanation of our results is that the fine-scale pattern of face part sensitivity is driven by the larger-scale organization of object-selective regions in the ventral pathway. As the FFA overlaps with the body-selective fusiform body area (FBA), it is possible that some face features (e.g., mouth) are represented closer to the FBA, while face parts such as the eyes are represented more in the FFA proper. However, existing evidence does not support a consistent anterior-posterior relationship between the FFA and FBA (Kim et al., 2014). It remains important to directly compare the eyes-mouth pattern against the face-body pattern with high-resolution fMRI in future studies.
The spatial clustering of neurons with similar tuning is one of the organizational principles of the brain. Such clustering may optimize the efficiency of neural computation by reducing the total wiring length (Laughlin and Sejnowski, 2003); thus, the clustering of neurons with similar feature tuning within face-selective regions could improve the efficiency of processing face stimuli. Our results provide neuroimaging evidence for voxel-level neuronal clustering driven by tuning to different face parts. Previous fMRI and single-unit recording studies in the monkey face processing network have demonstrated a strong correspondence between fMRI signals and neuronal responses (Tsao et al., 2006), suggesting that the face part tuning in our results may be driven by neuronal response biases. In addition to neuronal response biases, the clustering could also reflect activity synchronization across neurons. Further neurophysiological studies are needed to delineate the specific mechanisms underlying the spatial clustering observed within face-selective regions.
Among the five face-selective regions we examined, only the right pFFA and right OFA exhibited distinct fMRI response patterns for eyes and mouth. In the face processing network, face parts are believed to be represented in posterior regions such as the OFA (Liu et al., 2010; Arcurio et al., 2012; Pitcher et al., 2007). Part information is transmitted to anterior regions to be further integrated into holistic face representations (Zhang et al., 2015; Rotshtein et al., 2005). In that sense, the more anterior regions in the face processing network are more responsible for representing integrated face information, such as gender or identity, rather than individual face parts (Freiwald and Tsao, 2010; Landi and Freiwald, 2017). Consistent with this idea, there was no obvious spatial tuning of face parts in the right aFFA. Meanwhile, a clear hemispheric difference was found in our results: distinct spatial response patterns for face parts were observed in the right but not the left hemisphere, which is consistent with previous findings that, compared with the left FFA, the right FFA is more sensitive to face-specific features (Meng et al., 2012) and configural information (Rossion et al., 2000). The neural clustering of face part tuning and the consistent spatial patterns across individuals in the right rather than the left face-selective regions provide a potential computational advantage underlying the right lateralization of face processing. The clustering of neurons with similar feature tuning has been found extensively in the ventral pathway and may support more efficient neural processing. Therefore, one of the neural mechanisms underlying the functional lateralization of face processing could be the existence of spatial clustering of face part tuning in the right hemisphere.
Much progress has been made in our understanding of object feature representation in the VTC during the past decade, especially with the view of feature space representation (Bao et al., 2020; Chang and Tsao, 2017). Consequently, we now believe that a large number of features are represented for object recognition, but how does our brain physically organize such complex feature representations in the VTC? One possible solution is that these feature representations manifest at different spatial scales: for more general features (e.g., large/small, animate/inanimate) the representation manifests at a large spatial scale across the whole VTC, whereas for more specific features, such as face parts, it manifests at finer spatial scales within specific object-processing regions. Under this view, we would expect more fine-scale feature organizations to be revealed with more advanced neuroimaging tools, which will be critical for fully understanding the neural algorithms of object processing in the VTC.
Materials and methods

Participants
Six (3 females) human participants were recruited for the main experiment. Six (5 females) participants (two of whom also participated in the main experiment) were recruited for Control Experiment 1. Three participants (2 females) from the main experiment completed the pRF experiment. Ten participants (one female) were recruited for Control Experiment 2, but in two of them the right pFFA could not be localized, so these two participants were excluded from the analyses. All participants were between 21 and 27 years of age, right-handed, and had normal or corrected-to-normal visual acuity. They were recruited from the Chinese Academy of Sciences community with informed consent and received payment for their participation. The experiment was approved by the Committee on the Use of Human Subjects at the Institute of Biophysics of the Chinese Academy of Sciences (#2017-IRB-004).
Stimuli and experimental design
In the main experiment, 20 unique front-view Asian male face images were used as face stimuli. Each face image was gray-scaled and further divided into five parts (i.e., eyes, nose, mouth, hair, and chin; see Figure 1A). Twenty unique gray-scaled everyday objects were used as comparison stimuli. The full face and object images on average subtended around 5° × 7°. For the localizer scans, video clips of faces, objects, and scrambled objects were used (for details, see Pitcher et al., 2011).
There were seven stimulus conditions in total (i.e., eyes, nose, mouth, hair, chin, whole face, and object). Each main experimental run contained two blocks of each stimulus condition. In the scan of participant S6, the object condition was not included. Each stimulus block lasted 16 s and contained 20 images of the same type. Each image was presented for 600 ms at fixation, followed by a 200 ms blank interval. There was a 16 s blank fixation block at the beginning, the middle, and the end of each run. Participants performed a one-back task in which they pressed a button when two successive images were the same. To balance the spatial properties of the different images in the visual field, each image was presented at a slightly shifted location, 1.3° either to the left or to the right of fixation, alternately in different trials within a block. Participants were instructed to maintain central fixation throughout the task.
Each localizer run contained four 18 s blocks of each of the three stimulus conditions (i.e., faces, everyday non-face objects, and scrambled objects) shown in a balanced block order. The 12 stimulus blocks were interleaved with three 18 s fixation blocks inserted at the beginning, middle, and end of each run. Each block contained six video clips of a given stimulus category, each presented for 3 s. Participants were asked to watch the videos without performing any task. No fixation point was presented during the scan.
The eight experimental runs and the two localizer runs were completed within the same scan session for each participant.
In Control Experiment 1, we used a block design similar to that of the main experiment. There were six kinds of stimulus blocks (single eye near central, single eye near peripheral, mouth near central, mouth near peripheral, whole face, and object), each repeated three times in a single run. Each participant completed four runs and two localizer runs. In the eye near-central condition, single left-eye images were presented at 1.3° either to the left or to the right of fixation, alternately in different trials within a block. In the eye near-peripheral condition, single left-eye images were presented at 3.1° either to the left or to the right of fixation. The central and peripheral locations were chosen to match the locations of the eyes and mouth in the main experiment. Stimuli in the mouth near-central and mouth near-peripheral conditions were presented at the same locations as in the two eye conditions, respectively. The whole face and object conditions were the same as in the main experiment.
In the pRF experiment, we adopted stimuli and analysis code from the analyzePRF package (http://kendrickkay.net/analyzePRF/). There were four conditions (i.e., clockwise wedges, counterclockwise wedges, expanding rings, and contracting rings). The angular span of the wedges was 45°, and they revolved for 32 s per cycle. In the ring conditions, the rings swept for 28 s per cycle, followed by 4 s of rest. Colored object images were presented on the wedges or rings. The rings and wedges were presented within a radius of 10°. Each run began and ended with a 22 s blank fixation block. Participants performed a change detection task in which they pressed a button whenever the fixation color changed. In each run, only one kind of pRF stimulus was presented, repeated for eight cycles. Each participant finished four different pRF runs.
In Control Experiment 2, a block design similar to that of the main experiment was used. Four face part conditions (top vs. bottom part × presentation location) were included (Figure 3-figure supplement 2A). The top part contained the eyes (4.02° × 12.08°) and the bottom part contained the nose and mouth (8.08° × 12.08°). To engage observers' attention on the stimuli, four randomly selected images in each block moved slightly either to the left or right during stimulus presentation, and observers were asked to judge the directions of these movements. The same localizer runs as in the main experiment were included for each participant.
Data analysis
Anatomical data were analyzed with FreeSurfer (Cortechs Inc, Charlestown, MA) and custom MATLAB codes. To enhance the contrast between white and gray matter, T1-weighted images were divided by PD-weighted images (Van de Moortele et al., 2009). Anatomical data were further processed with FreeSurfer to reconstruct the cortical surface models.
Functional data were analyzed with AFNI (http://afni.nimh.nih.gov), FreeSurfer, fROI (http://froi.sourceforge.net), and custom MATLAB code. Data preprocessing included slice-timing correction, motion correction, removal of physiological noise using respiration and pulse signals, distortion correction with reversed phase-encoding EPI images, and intensity normalization. For the localizer runs only, spatial smoothing was applied (Gaussian kernel, 2 mm full width at half maximum). After preprocessing, functional images were co-registered to the anatomical images for each participant. To obtain the average response amplitude of each voxel in each stimulus condition for each individual observer, voxel time courses were fitted with a general linear model (GLM), in which each condition was modeled by a boxcar regressor (matched in stimulus duration) convolved with a gamma function (delta = 2.25, tau = 1.25). The resulting beta weights were used to characterize the average response amplitudes.
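A schematic of this GLM step is sketched below: each condition's boxcar is convolved with a gamma-shaped hemodynamic response, and the betas are estimated by least squares. The TR, run length, block onsets, and the exact gamma parameterization in this sketch are illustrative assumptions rather than the kernel or timing used by the authors' software.

```python
# Sketch of the block-design GLM: boxcar regressors convolved with a gamma HRF.
# TR, block timing, and the gamma parameterization are illustrative assumptions.
import numpy as np

def gamma_hrf(t, delta=2.25, tau=1.25):
    """Generic gamma-shaped HRF with onset delay delta and time constant tau."""
    h = np.where(t > delta, ((t - delta) / tau) ** 2 * np.exp(-(t - delta) / tau), 0.0)
    return h / h.sum()

TR = 2.0
n_vols = 240
hrf = gamma_hrf(np.arange(0, 30, TR))

# Boxcar for one condition: 16-s blocks at illustrative onsets (in seconds).
onsets = [16, 112, 208, 304]
boxcar = np.zeros(n_vols)
for onset in onsets:
    boxcar[int(onset / TR): int((onset + 16) / TR)] = 1.0
regressor = np.convolve(boxcar, hrf)[:n_vols]

# Design matrix (intercept + one condition) and least-squares betas per voxel.
X = np.column_stack([np.ones(n_vols), regressor])
y = 0.5 * regressor + np.random.default_rng(2).normal(0, 0.1, n_vols)  # fake voxel
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # beta[1] estimates the condition response amplitude
```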
The face-selective ROIs were identified by contrasting functional data between the face and everyday-object conditions in the localizer runs. Specifically, the FFA and OFA were defined as the sets of contiguous voxels in the fusiform gyrus and inferior occipital gyrus, respectively, that showed significantly higher responses to faces than to objects (p < 0.01, uncorrected). We were able to identify the right pFFA, right anterior FFA (right aFFA), right OFA, and left FFA in all six participants. The left OFA was successfully identified in five of the six participants. In each ROI, to remove vein signals from the functional data, voxels whose signal change to face stimuli was larger than 4% were excluded from further analysis.
For the main experimental data, to remove the general fMRI response pattern shared among different face parts, the response patterns from whole faces or everyday objects were regressed out from the response patterns of each individual face part. The whole-face or object response in each voxel was used to predict the individual part response with linear regression, and the residuals across voxels were taken as the individual part response pattern with the general response pattern removed. To extract the trend of the fMRI response pattern along the anterior-posterior dimension in the FFA, we first drew a line along the mid-fusiform sulcus on the cortical surface of each participant. For all vertices within the FFA ROI, we calculated their shortest (orthogonal) distances to the line, projected their neural responses onto the line along the mid-fusiform sulcus, and obtained the averaged response at each point along the line to produce the response profiles (see Figure 3A). A similar analysis was done for the OFA, with the line drawn along the inferior occipital sulcus.
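The profile construction can be sketched as follows: a line is defined by two endpoints drawn along the sulcus, every ROI vertex is orthogonally projected onto that line, and responses are averaged within bins along the line. The coordinates, endpoints, bin count, and responses below are placeholders; the actual analysis works on each participant's reconstructed cortical surface rather than on arbitrary 3D points.

```python
# Sketch: project ROI vertices onto a line drawn along the mid-fusiform sulcus
# and average responses in bins along the posterior-anterior direction.
import numpy as np

def anterior_posterior_profile(vertex_xyz, response, line_start, line_end, n_bins=10):
    """vertex_xyz: (N, 3) surface coordinates; response: (N,) beta values."""
    direction = line_end - line_start
    direction = direction / np.linalg.norm(direction)
    # Signed position of each vertex along the line (orthogonal projection).
    position = (vertex_xyz - line_start) @ direction
    bins = np.linspace(position.min(), position.max(), n_bins + 1)
    idx = np.clip(np.digitize(position, bins) - 1, 0, n_bins - 1)
    profile = np.array([response[idx == b].mean() if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
    return profile

# Placeholder data standing in for pFFA vertices and their eyes-minus-mouth bias.
rng = np.random.default_rng(3)
xyz = rng.normal(size=(300, 3))
bias = rng.normal(size=300)
print(anterior_posterior_profile(xyz, bias, np.array([0., 0., 0.]), np.array([1., 0., 0.])))
```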
For the control experiments, the same data processing steps as in the main experiment were applied to extract the spatial patterns of the different conditions. For the pRF data, the fMRI response time course of each voxel was fitted with the compressive spatial summation (CSS) model (http://kendrickkay.net/analyzePRF/). To determine the center location (x, y) of each voxel's population receptive field, the CSS model uses an isotropic 2D Gaussian and a static power-law nonlinearity to model the fMRI response. For each voxel, model fit can be quantified as the coefficient of determination between model and data (R²). We only included the pRF results of voxels with R² higher than 2%. | 2021-12-30T06:22:20.667Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "a5a289833404380d592b996b6058e735e318dd71",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.70925",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1c4c7ee436ad4d5b715c019ced9b2a2fe653ae7a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6102161 | pes2o/s2orc | v3-fos-license | Cortical Folding Patterns and Predicting Cytoarchitecture
The human cerebral cortex is made up of a mosaic of structural areas, frequently referred to as Brodmann areas (BAs). Despite the widespread use of cortical folding patterns to perform ad hoc estimations of the locations of the BAs, little is understood regarding 1) how variable the position of a given BA is with respect to the folds, 2) whether the location of some BAs is more variable than others, and 3) whether the variability is related to the level of a BA in a putative cortical hierarchy. We use whole-brain histology of 10 postmortem human brains and surface-based analysis to test how well the folds predict the locations of the BAs. We show that higher order cortical areas exhibit more variability than primary and secondary areas and that the folds are much better predictors of the BAs than had been previously thought. These results further highlight the significance of cortical folding patterns and suggest a common mechanism for the development of the folds and the cytoarchitectonic fields.
Introduction
The human cerebral cortex is a ribbon of gray matter that is highly folded in order to fit a large surface area into the limited volume provided by the human skull. The folds are intriguing in both their variability and their regularity, but little is understood about their relationship to the microstructural organization of the cortex. The cortex itself can be parcellated into a mosaic of microscopically (i.e., architectonically) definable areas based on localizable and more or less pronounced changes in the laminar distribution of neuronal cell bodies (cytoarchitecture) and/or intracortical myelinated fibers (myeloarchitecture) (Brodmann 1909; Vogt 1911; von Economo 1929; Sarkissov et al. 1955). The most famous of these parcellations is the one proposed by Korbinian Brodmann (Brodmann 1909) a century ago. Most current imaging studies of the human cortex report the location of effects as a "Brodmann area" (BA). This determination is typically made by visual comparison of the functional imaging results with Brodmann's schematic drawings and thus comes with no defined estimate of precision or uncertainty.
Cyto- and myeloarchitectonic differences between adjacent areas, which are the basis for defining borders between the BAs, vary considerably in their subtlety. For example, probably the most salient architectural feature of the cortex is the stria of Gennari, a highly myelinated stripe in layer IV present only in the primary visual cortex (BA 17). The stria of Gennari is one of the few architectural features of the cortex that is detectable in vivo using magnetic resonance imaging (MRI) (Clark et al. 1992; Barbier et al. 2002; Walters et al. 2003). Another prominent cytoarchitectonic feature of the cortex is the layer II islands (Ramon y Cajal 1909; Lorente de Nó 1933) in entorhinal cortex (EC, BA 28) that give rise to the perforant pathway, through which most of the input from neocortical areas travels to the hippocampus. Using ex vivo MRI at ultrahigh field, we have recently succeeded in robustly visualizing these cell-dense regions throughout the extent of EC (Augustinack et al. 2005). Despite these examples, the vast majority of the architectural characteristics that define borders between adjacent cortical areas are not visible at the resolutions that can be achieved by current neuroimaging technologies. Microscopic analysis of histologically stained brain sections therefore still remains the most powerful and reliable tool for cortical parcellation and identification of BAs.
Despite the widespread use of cortical folding patterns to perform ad hoc estimations of the locations of the BAs in individuals, little is understood regarding the relationship of the folds to the BAs or whether there is a hierarchy in the predictability of the BAs. The architectonics are of course important, as the mosaic of functionally defined regions arrayed across the cortical sheet (e.g., Allman and Kaas 1971; Tootell et al. 1983; Felleman and Van Essen 1991; Sereno and Allman 1991) is strongly linked to the underlying anatomy.
Here we use whole-brain histology combined with statistically testable parcellation methods for the identification of cortical areas (Zilles et al. 2002) and surface-based analysis (Fischl, Sereno, Tootell et al. 1999) to explicitly test how well the folds predict the locations of the areas. We show that the accuracy with which an area can be predicted from folding patterns appears to be related to its level in the putative cortical hierarchy, with primary and secondary sensory areas being well predicted by surrounding folding patterns, and higher level cognitive areas such as Broca's area (BAs 44 and 45) the most variable with respect to the folds. We anticipate that this type of mapping will allow a more accurate assessment of the uncertainty associated with localization of functional or structural properties of the human brain.
Materials and Methods
Histological Processing Methods

Ten human postmortem brains were processed and analyzed using the techniques described in Schormann and Zilles (1998), Amunts et al. (1999), and Zilles et al. (2002). The silver-stained histological sections of an individual brain were aligned to the postmortem MR volume of the same brain using nonlinear warping (Schormann and Zilles 1998) to build an undistorted 3-dimensional histological volume. The basic steps, which have been employed in numerous studies (e.g., Zilles et al. 1995; Geyer et al. 1997; Schormann and Zilles 1998; Amunts et al. 2000; Geyer et al. 2000; Rademacher et al. 2002; Amunts et al. 2005), are as follows.
1. Histological, cell body-stained sections with cortical regions of interest are imaged under a microscope using a motorized scanning stage and a camera. For subsequent cytoarchitectonic analysis, the gray level index (GLI, [Schleicher and Zilles 1990]) is measured as an index of the volume fraction of cell bodies, and GLI images are obtained. Dark pixels correspond to a low volume fraction of cell bodies, light pixels to a high one.
2. The cortex is covered by intensity line profiles that traverse the cortical ribbon from the gray/white boundary to the pial surface. The shape of each profile reflects the cytoarchitecture (Schleicher and Zilles 1990).
3. A distance function is computed to determine the degree of similarity of adjacent blocks of line profiles. A high degree of dissimilarity (or low similarity) indicates a substantial change in the laminar profiles and hence in the underlying cytoarchitecture (a simplified sketch of steps 2-4 follows this list).
4. Significant maxima in dissimilarity are those for which the location of the maximum does not depend on the block size but remains stable over large block-size intervals.
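The sketch below illustrates steps 2-4 in simplified form: each cortical-depth profile is summarized by a feature vector, a dissimilarity between the blocks of profiles flanking each position is computed, and a border candidate is taken as a dissimilarity maximum that is stable across block sizes. For brevity, a Euclidean distance between block means stands in for the Mahalanobis-distance procedure with significance testing used in the published method; the data and block sizes are illustrative.

```python
# Sketch of observer-independent border detection from GLI profiles.
# profiles: (n_profiles, n_depths) array of cortical-depth intensity profiles
# sampled along the cortical ribbon. A Euclidean distance between block means
# is a simplified stand-in for the published Mahalanobis-distance procedure.
import numpy as np

def block_dissimilarity(profiles, block_size):
    """Dissimilarity between the two blocks of profiles flanking each position."""
    n = profiles.shape[0]
    d = np.full(n, np.nan)
    for i in range(block_size, n - block_size):
        left = profiles[i - block_size:i].mean(axis=0)
        right = profiles[i:i + block_size].mean(axis=0)
        d[i] = np.linalg.norm(left - right)
    return d

def candidate_border(profiles, block_sizes):
    """A border candidate is a dissimilarity maximum that is stable across block sizes."""
    peaks = [int(np.nanargmax(block_dissimilarity(profiles, b))) for b in block_sizes]
    return peaks, int(np.median(peaks))

# Illustrative data: the profiles change character at position 100 (a "border").
rng = np.random.default_rng(4)
profiles = np.vstack([rng.normal(0.0, 1.0, (100, 20)),
                      rng.normal(1.0, 1.0, (100, 20))])
print(candidate_border(profiles, block_sizes=range(10, 30, 4)))
```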
Surface-Based Analysis Methods
The reconstructed histological volumes were used to generate surface models of the gray/white interface. This was accomplished in several steps. First, a set of "control points" was manually added to the body of the white matter to guide an intensity normalization step that resulted in the white matter across most of the volume being close to a prespecified value. This volume was then thresholded and manually edited to separate white matter from other tissue classes. The resulting binary segmentation was used to generate topologically correct and geometrically accurate surface models of the cerebral cortex (Fischl et al. 2001) using a freely available suite of tools (http://surfer.nmr.mgh.harvard.edu/fswiki). An example of the results of this procedure, together with the locations of the manually selected control points, is given in Figure 1, which shows coronal (top), sagittal (middle), and axial (bottom) slices of a typical volume with the reconstructed gray/white surface shown in yellow. Note that small errors in surface positioning, which would be critical, for example, in a study of cortical thickness, are mostly irrelevant in this study, in which we are more concerned with the large-scale geometry of the surface models. The 8 labeled BA maps (areas 2, 4a, 4p, 6, 44, 45, 17, and 18) were sampled onto surface models for each hemisphere, and errors in this sampling were manually corrected (e.g., when a label was erroneously assigned to both banks of a sulcus). A morphological close was then performed on each label. A close of a binary label is a dilation, in which each point that is 0 and neighbors a point that is 1 is set to 1, followed by an erosion, in which each point that is 1 and neighbors a 0 is set to 0. The close was used to remove small holes that arise due to sampling artifacts without distorting the boundary of each label. The 10 left and 10 right hemispheres were morphed into register using a high-dimensional nonlinear morphing technique that aligns cortical folding patterns (Fischl, Sereno, Tootell et al. 1999). Briefly, this technique maps each individual surface model into a spherical space and then represents the geometry of the surfaces as functions on the unit sphere. The registration of the surfaces is accomplished by maximizing the similarity of these spherical functions, while also constraining the mappings to be invertible and to induce only modest amounts of metric distortion. For these datasets, we specifically used 3 sets of geometric features to drive the registration. The first was the mean curvature of the "inflated" surface (the surfaces displayed in Fig. 2). This was necessary to account for the large-scale geometric distortions present in the data. Next we aligned the "average convexity," which has been shown to be representative of the primary folding patterns. Finally, the mean curvature of the gray/white boundary surface was used as the input feature in order to align secondary and tertiary folds where possible. Each of these features in turn was matched to the corresponding feature in our standard in vivo atlas comprising 40 subjects distributed in age and pathology (10 with mild Alzheimer's disease). Note that no specific optimization was performed for aligning the BAs presented in this report.
Rather, a set of parameters that had been determined to be optimal for aligning primary visual cortex (V1) in a separate ex vivo dataset (Hinds OP, Rajendran N, Polimeni JR, Augustinack JC, Wiggins G, Wald LL, Rosas HD, Potthast A, Schwartz EL, Fischl B, unpublished data) was used with no modification.
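The morphological close described above (a dilation followed by an erosion on a binary label) can be sketched directly from that definition. This is a minimal illustration assuming a precomputed vertex-adjacency list for the surface mesh; it is not the FreeSurfer implementation, and the function names are illustrative.

```python
def dilate(label, neighbors):
    """Set every 0-vertex that neighbors a 1-vertex to 1."""
    out = label.copy()
    for v in range(len(label)):
        if label[v] == 0 and any(label[n] == 1 for n in neighbors[v]):
            out[v] = 1
    return out

def erode(label, neighbors):
    """Set every 1-vertex that neighbors a 0-vertex to 0."""
    out = label.copy()
    for v in range(len(label)):
        if label[v] == 1 and any(label[n] == 0 for n in neighbors[v]):
            out[v] = 0
    return out

def close(label, neighbors):
    """Morphological close: dilation then erosion; fills small holes
    without shifting the overall label boundary."""
    return erode(dilate(label, neighbors), neighbors)
```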
In order to quantify the accuracy of the alignment of the underlying BAs, the spherical registration was used to transform each of the 8 BAs for each individual into each of the other individual coordinate systems, and a modified Hausdorff distance was computed. (Note that areas 4a, 4p, and 6 were obtained for only 8 of the 10 total subjects. Each of the other areas was present for every subject.) Specifically, for each point on the boundary of each subject's area in the individual subject space, we computed the minimum distance to the boundary of each other subject projected into the individual subject's original white matter surface model and then computed the average of these. The results of this analysis are displayed in Figure 4. The advantage of this procedure is that it provides a measure in millimeters of the uncertainty of localization and is invariant to the size of an area, a well-known problem for other similarity measures such as the Dice or Jaccard coefficient, which compute the degree of overlap of binary labels, a measure that is affected by the size of the label, with larger labels typically evidencing greater overlap than smaller ones.
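A minimal sketch of this boundary-distance measure, assuming each area boundary has been extracted as an array of 3D vertex coordinates on the individual white-matter surface; the brute-force pairwise distance and the symmetrized variant shown here are illustrative choices, not necessarily the exact implementation used.

```python
import numpy as np

def mean_min_boundary_distance(boundary_a, boundary_b):
    """Average, over the points of boundary_a, of the distance (in mm)
    to the closest point of boundary_b. boundary_* : (n_points, 3) arrays."""
    d = np.linalg.norm(boundary_a[:, None, :] - boundary_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

def modified_hausdorff(boundary_a, boundary_b):
    """One common symmetrization: the larger of the two directed means."""
    return max(mean_min_boundary_distance(boundary_a, boundary_b),
               mean_min_boundary_distance(boundary_b, boundary_a))

# toy usage with two random 3D boundaries
a, b = np.random.rand(50, 3), np.random.rand(60, 3)
print(modified_hausdorff(a, b))
```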
Results
We constructed spatial probability maps for 8 BAs across 10 human brains (both left and right hemispheres) as shown in Figure 2, which displays the average convexity of the in vivo atlas that is used as a common space. These include the primary and secondary visual areas BA 17 and BA 18, respectively; BA 44 and BA 45 (subdivisions of Broca's area); the somatosensory area BA 2; the primary motor areas 4a and 4p; and finally the premotor area BA 6 (note that these last 3 areas were only analyzed in 8 of the 10 datasets). Frequency estimates of the probability that each point was part of each BA were constructed in a surface-based coordinate system by counting the number of times that a label occurred at a given point and dividing by the total number of subjects for each label. Each point in the surface-based coordinate system can then be probed to determine the probability that it is part of any of the set of labeled BAs.
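The frequency estimate described above is a per-vertex count: the probability that a vertex belongs to an area is the number of subjects whose label covers that vertex divided by the number of subjects. A minimal sketch, assuming the individual labels have already been resampled into the common surface space as binary per-vertex arrays (names are illustrative):

```python
import numpy as np

def probability_map(labels_per_subject):
    """labels_per_subject : list of (n_vertices,) binary arrays, one per subject.
    Returns the per-vertex probability of belonging to the labeled area."""
    stack = np.vstack(labels_per_subject)        # (n_subjects, n_vertices)
    return stack.sum(axis=0) / stack.shape[0]    # frequency across subjects

# example: 10 subjects, 5 vertices
labels = [np.random.randint(0, 2, 5) for _ in range(10)]
print(probability_map(labels))
```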
To assess the accuracy of the surface-based results relative to more standard volumetric procedures, we used the publicly available volumetric probability maps (http://www.fz-juelich.de/inb/inb-3//spm_anatomy_toolbox) constructed using a high-dimensional nonlinear fluid warp (Schormann and Zilles 1998). The accuracy of the 2 techniques was quantified by constructing cumulative histograms of the probability for each nonzero voxel (or vertex) in each probability map for each of the 8 areas, as shown in Figure 3. Each bar represents the probability that a point will be at least that accurate. Because the minimum accuracy would be if the label of only one subject occurred at each location, the smallest value on the x-axis is 0.1 (1 subject out of 10). The histograms always achieve their maximum value of 1 at an accuracy of 0.1, indicating that the entire surface/volume is at least this accurate. Subsequent bars then represent the percent of the surface/volume that is at least this accurate. Thus, the bar at 0.2 represents the portion of the data with an accuracy > 0.2. Ideally, if the normalization perfectly aligns the underlying architectonics, these maps will be binary, with ones in the interior of the region and zeros elsewhere, resulting in a flat histogram with the rightmost bin (P ≥ 1) containing as many points as the leftmost (P > 0). The level of the histogram in the high-accuracy bins (more overlap across subjects, toward the right in the histograms) then measures the accuracy with which the underlying coordinate system aligns the borders of the BAs. The accuracy with which the surface-based alignment also aligns the architectonics is summarized in the bottom row, which shows the average of the histograms across the 8 areas (left) and the ratio of the surface and volume histograms on the right. For example, the surface-based coordinate system has greater than 7 times more locations of perfect accuracy than the volumetric one and outperforms the volume at every accuracy level. We believe that this type of result does not reflect the details of the volumetric procedure but rather that surface-based techniques use intrinsically more predictive features (cortical folding patterns), which are not available in the volume. Note that the more commonly used linear alignment procedures (12-parameter affine, not shown) have significantly lower accuracy than the fluid warps.
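A cumulative histogram of this kind can be computed directly from a probability map: for each probability level, the bar is the fraction of non-zero vertices (or voxels) whose map value reaches that level. A short sketch, assuming an at-least-threshold reading of the bins and 0.1 steps matching 10 subjects; both are assumptions for illustration.

```python
import numpy as np

def cumulative_accuracy_histogram(prob_map, levels=np.arange(0.1, 1.01, 0.1)):
    """Fraction of non-zero locations whose probability is >= each level."""
    nonzero = prob_map[prob_map > 0]
    return {round(float(p), 1): float((nonzero >= p).mean()) for p in levels}

prob_map = np.array([0.1, 0.2, 0.2, 0.5, 1.0, 0.0, 0.9])
print(cumulative_accuracy_histogram(prob_map))
```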
In order to explicitly quantify how well the folding patterns that were used to construct the surface-based coordinate system predict the locations of the various BAs, we computed the average distance between the boundaries of each individual instance of each BA in its native space and every other individual instance of that BA mapped into that subject's coordinate system, as described in the Materials and Methods section. The results of this analysis are shown in Figure 4. This measure allows both an estimate of the absolute accuracy of localization of each BA and a means for comparing how well predicted the boundaries of each BA are relative to the others. Note that errors in the surface reconstructions, due to the reduced contrast to noise in the underlying images relative to what can be routinely obtained in vivo, only strengthen these findings, as this type of error will only artificially increase our estimates of the variability. Examining Figure 4, it is clear that 1) primary and secondary sensory areas are extremely well predicted by the surrounding geometry and 2) there appears to be a progression of accuracy, with the level of predictability diminishing as one moves away from areas devoted to processing sensory inputs and into cortical regions implicated in more cognitive domains.
Discussion
The most widely used coordinate system in neuroimaging is the one developed by Talairach and Tournoux (Talairach et al. 1967; Talairach and Tournoux 1988), which provides stereotaxic maps for inferring the architectonic localization of cortical effects (e.g., functional or structural differences between populations or conditions). Unfortunately, although popular tools exist for estimating BA from Talairach coordinates (Lancaster et al. 1997; Lancaster et al. 2000), this coordinate system has been shown to be a poor predictor of the locations of both primary sensory areas (Rademacher et al. 1992; Rademacher et al. 1993; Amunts et al. 2000; Geyer et al. 2000; Morosan et al. 2001; Rademacher et al. 2001) as well as higher order cortical areas (Amunts et al. 2005). An alternative and even more widespread approach is to make an ad hoc estimation of the BA containing a given cortical effect by visually comparing individual folding patterns with those in Brodmann's drawings. This procedure, however, is also problematic, because Brodmann's maps are schematized drawings and thus do not reflect a real individual brain with its folding pattern. Further, Brodmann's drawings give no means of assessing the variability of the relationship between the folds and the cytoarchitectonic boundaries.
The variability of the architectonics has been characterized in several studies, particularly the landmark work of Rajkowska and Goldman-Rakic, in which 7 human left hemispheres were analyzed to characterize the variability in areas 9 and 46 (Rajkowska and Goldman-Rakic 1995a;Rajkowska and Goldman-Rakic 1995b), with reconstructions of the lateral portion of the hemispheres carried out in 5 cases. In this study, considerable variability was found in the morphology of frontal sulcal patterns. Further, by overlaying their architectonic maps on the Talairach atlas, Rajkowska and Goldman-Rakic were able to point out the ambiguity in other published results that reported findings in a particular BA (e.g., an effect reported in area 9 could have been in 45 or 46 also). It has not been clear whether the well-documented inaccuracy of the use of the Talairach coordinate system for localizing BAs reflects the true variability of the underlying architectonic areas or if higher dimensional nonlinear coordinate systems based on other types of macroscopically observable features could be used in order to increase the accuracy of the localization of the underlying cyto-and myeloarchitecture.
In this work, we have shown that computational techniques that explicitly drive folding patterns into register across subjects are also surprisingly accurate at aligning histologically defined BAs, despite having no access to the microscopic properties used to define them. This is particularly true in the primary cortical areas we have investigated, with primary visual cortex (BA 17) being the most predictable, exhibiting in the order of 2.7 mm of median variability in the location of its boundary in both hemispheres across all subjects. In fact, the predictability of all the primary motor and sensory areas that we studied, including BA 17, 4a, 4p (anterior and posterior divisions of BA 4 [Geyer and Ledberg 1996]), and 2 (although recent evidence casts some doubt over whether area 2 should be considered primary or not [Zilles et al. 2004;Toga et al. 2006]), was found to be surprisingly good with a mean uncertainty of approximately 3.7 mm in the surface-based coordinate system. This figure was obtained by computing the median uncertainty of each individual area across each subject and then taking the mean of these. In the few ''higher order'' areas that we analyzed, the variability increased to 7 mm in the left hemisphere for areas 44 and 45 and 9 mm in the right hemisphere, with significant parts of each area overlapping in all subjects. These core areas of 100% overlap indicate that it should be possible to restrict analysis to regions in which a researcher is confident that an effect is indeed within a given BA, although it is important to note that the geometry of the area will play a role in this type of analysis as well. For example, BAs 44 and 45 exhibit more variability than say BA 4a, but the elongated nature of BA 4a would make it difficult to find many functional MRI voxels solely contained within the predicted location of this cortical area.
Several explanations are possible for this apparent hierarchy in the variability of the location of cortical areas. Variability in position may simply relate to the variability of regional folding patterns as, for example, prefrontal regions are more variable geometrically than perirolandic regions or the region around the calcarine. This, however, begs the question of why primary areas occur near primary folds. If cortical folding patterns are reflective of the tension of subcortical and corticocortical axonal projections (Van Essen 1997), then it may be that the variability in the location of a cortical area relates to the degree of heterogeneity in its pattern of connectivity. Thus, primary areas that are connected to relatively few other cortical areas would be less variable than higher order (multimodal ''associ-ation'') areas, which project to and receive projections from many more disparate brain regions (Pandya et al. 1988). V1, for example, has connectivity mainly limited to the lateral geniculate nucleus of the thalamus and secondary visual cortex (V2) (for review see Sincich and Horton [2005]). Conversely, area 44 receives major projections from secondary somatosensory area S2 and inferior parietal lobule as well as projections from prefrontal and premotor areas (9, 46v, 47/12, 13, 6), cingulate motor cortex, superior temporal sulcus, and rostral insula (Geschwind 1965;Jones and Powell 1970;Pandya and Yeterian 1996). Area 45 receives its main inputs from superior temporal gyrus (higher auditory cortex) and multimodal areas in the superior temporal sulcus, in addition to other prefrontal areas, somatosensory areas 1 and 2, caudal insula, and visual areas of the inferior temporal cortex (Geschwind 1965;Jones and Powell 1970;Pandya and Yeterian 1996). Variability in cortical localization could thus largely reflect the complexity of the underlying patterns of connectivity, as opposed to being directly related to relative location in a hierarchical arrangement of cortical areas.
It is worth noting that the cytoarchitectonic changes that define the borders between adjacent association cortices (such as 44/45) are considerably more subtle than in primary areas, which typically show reasonably sharp transitions in cellular properties between one area and its neighbors (Van Hoesen 1993), making the precise and repeatable localization of higher areas considerably more difficult. In the present observations, the cytoarchitectonic maps used were based on a reliable, observer-independent, and statistically testable microscopical technique, which excludes a systematic increase of variability between primary and higher areas due to such methodological reasons. Phylogenetic factors could play a role in the variability of localization, as it has been posited that primary sensory cortices are the most recent to evolve (Sanides 1970), and therefore evolutionary age could be reflected in the degree of variability. This argument is also supported by the fact that the variability in the volume of neocortical areas 44 and 45 greatly exceeds that of the hippocampus (part of the archicortex) (Amunts et al. 2005). One important cautionary point is that homologies between macaque and human for areas 44 and 45 have not been definitively established (Deacon 2004). It is also possible that ontogenetic factors influence cortical localization. For example, the order of development could play a role, with earlier developing areas being less variable than later ones due to a simple propagation of errors. It is known that primary areas myelinate earlier than higher ones (Flechsig 1901), and there is some evidence that they form earlier as well (Flechsig 1920; Brody et al. 1987), although the early myelination of middle temporal area/V5 would then imply that it would have a stable location with respect to surrounding folding patterns, which does not appear to be the case.
Although the variability across areas is intriguing, one striking feature of our results is the stability of the localization of the BAs with respect to the surrounding folding patterns, as might perhaps have been expected given the demonstrated ability of surface-based registration to align structurally and functionally homologous features of the human cortex (Fischl, Sereno, Tootell et al. 1999;Van Essen 2005). This stability may arise from genetic factors, which are likely to play an important role in the location and size of cortical areas. One prominent hypothesis regarding the formation of cortical areas is that the specification of the architectonic regions is present in a protomap in the proliferative ventricular zone in the form of radial columns that guide the formation and migration of cortical neurons during neurodevelopment (Rakic 1988). There is evidence that the protomap exists without the need for sensory input (e.g., Armentano et al. 2007;Cholfin and Rubenstein 2007), although the size and location of the architectonic areas can be modulated by the modification of afferent input (Goldman-Rakic 1980), perhaps contributing to the observed variability in the localization with respect to the surrounding folding patterns. Thus, the protomap may initially specify the location of the cortical areas with respect to one another, with corticocortical and thalamic connectivity then influencing the creation of cortical convolutions and the final position and size of each architectonic field.
An important implication of the current work is that if the size of a cortical area relates to competence of the functional domain in which the area is implicated, then it may be possible to predict performance levels directly from gross morphology. For example, in recent work, Duncan and Boynton (2003) have shown that visual acuity is predicted by the size of the functionally defined primary visual cortex. Given the accuracy with which the borders of V1 appear to be localized by folding patterns alone, visual acuity should be inferable directly from brain structure. Finally, understanding how the underlying cellular characteristics are arranged with respect to the macroscopically visible folding patterns is an important step in understanding how the folds develop and whether they play a computational role in the processing strategies employed by the human brain.
Funding
National Center for Research Resources (P41-RR14075, R01 RR16594-01A1, NCRR BIRN Morphometric Project BIRN002, U24 RR021382); National Institute for Biomedical Imaging and Bioengineering (R01 EB001550, R01EB006758); National Institute for Neurological Disorders and Stroke (R01 NS052585-01); Mental Illness and Neuroscience Discovery Institute, which is part of the National Alliance for Medical Image Computing; National Institutes of Health through the NIH Roadmap for Medical Research (grant U54 EB005149). Helmholtz-Association of Research Centres; the DFG; National Institute of Biomedical Imaging and Bioengineering; National Institute of Neurological Disorders and Stroke; National Institute of Mental Health (to K.Z., K.A.). | 2016-05-04T20:20:58.661Z | 2007-12-01T00:00:00.000 | {
"year": 2007,
"sha1": "d4687f9ae6b2b23d57a53124e7c2d28c30f5a781",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/cercor/article-pdf/18/8/1973/809539/bhm225.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a64197680ff6a2b065bfc2ecee2aa5a05e7ed737",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Psychology"
]
} |
8279095 | pes2o/s2orc | v3-fos-license | Multi-Label Multi-Kernel Transfer Learning for Human Protein Subcellular Localization
Recent years have witnessed much progress in computational modelling for protein subcellular localization. However, the existing sequence-based predictive models demonstrate moderate or unsatisfactory performance, and the gene ontology (GO) based models may run the risk of performance overestimation for novel proteins. Furthermore, many human proteins have multiple subcellular locations, which renders the computational modelling more complicated. Up to the present, there have been very few studies specialized in predicting the subcellular localization of human proteins that may reside in multiple cellular compartments. In this paper, we propose a multi-label multi-kernel transfer learning model for human protein subcellular localization (MLMK-TLM). MLMK-TLM proposes a multi-label confusion matrix, formally formulates three multi-labelling performance measures and adapts one-against-all multi-class probabilistic outputs to the multi-label learning scenario, on the basis of which it further extends our published work GO-TLM (gene ontology based transfer learning model for protein subcellular localization) and MK-TLM (multi-kernel transfer learning based on Chou's PseAAC formulation for protein submitochondria localization) to multiplex human protein subcellular localization. With the advantages of proper homolog knowledge transfer, a comprehensive survey of model performance for novel proteins, and multi-labelling capability, MLMK-TLM gains greater practical applicability. The experiments on a human protein benchmark dataset show that MLMK-TLM significantly outperforms the baseline model and demonstrates good multi-labelling ability for novel human proteins. Some findings (predictions) are validated by the latest Swiss-Prot database. The software can be freely downloaded at http://soft.synu.edu.cn/upload/msy.rar.
Introduction
Recent years have witnessed much progress in computational modelling for protein subcellular localization [1]. However, research on the human genome and proteome is especially urgent and important for human disease diagnosis and drug development. Unfortunately, there have been very few specialized predictive models for human protein subcellular localization thus far [2,3,4,5]. Furthermore, many human proteins have multiple subcellular locations, which renders the computational modelling more complicated. Up to the present, there are only two models (Hum-mPLoc [4] and Hum-mPLoc 2.0 [5]) that are applicable to multiple subcellular localization of human proteins.
Although many protein sequence feature extraction methods have been successfully developed for protein subcellular localization, such as signal peptide [6], sequence domain [7], PSSM [8,9], k-mer [10,11], etc., the accuracy of the models is still moderate or unsatisfactory, with most averaging about 70% [6,7,9,10,11]. Garg A et al (2005) [3] used sequence features only (amino acid composition and its order information) for human protein subcellular localization, and the result was satisfactory (84.9%), but the model covered only 4 subcellular locations. The Gene Ontology (GO) project has developed three structured controlled vocabularies (ontologies) that describe gene products in terms of their associated biological processes, cellular components and molecular functions in a species-independent manner, and the GOA database [12] provides high-quality electronic and manual associations (annotations) of GO terms to UniProt Knowledgebase (UniProtKB) entries [13]. Because the three aspects of gene ontology are closely related and the cellular component GO terms contain directly indicative information about protein subcellular location, GO has become a generally effective feature for the prediction of protein subcellular localization [2,4,5,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29].
Chou K.C. et al [2] proposed an ensemble learning model called Hum-PLoc for human protein subcellular localization. The model consists of two parts, a GO-based kNN and a PseAAC-based kNN, with the latter part designed to compensate for the model performance in the case of GO unavailability. To cover multiplex human proteins that reside in or transport across multiple subcellular locations, Shen HB et al [4] further proposed an improved model called Hum-mPLoc, which extended the number of subcellular locations from 12 to 14 and formally formulated the concept of locative protein and the success rate for multiplex protein subcellular localization. Hum-PLoc, Hum-mPLoc and the work [2,15,16,17,18,19,20,21,22] used the target protein's own GO information to train the model, and are thus inapplicable to novel protein prediction. Many recent GO-based methods generally exploit the homolog GO information for novel protein subcellular localization [5,23,24,25,26,27,28,29,30]. Based on Hum-mPLoc, Shen HB et al [5] further proposed Hum-mPLoc 2.0 for multiplex and novel human protein subcellular localization, where a more stringent human dataset with a 25% sequence similarity threshold was constructed to train a kNN ensemble classifier. Hum-mPLoc 2.0 incorporated those homologs with sequence similarity ≥60%, but achieved relatively low accuracy (62.7%). However, the method of setting a threshold for homolog incorporation has the following disadvantages: (1) a significant homolog (high sequence identity, assuming ≥60%) may potentially be divergent from the target protein in terms of protein subcellular localization; for instance, the target protein P21291 resides in the subcellular location Nucleus, while its significant homolog P67966 (sequence identity: 90.16%; PSI-Blast E-value: 13e-174072, obtained by Blast default options) resides in the subcellular locations Cytoplasm and Cytoskeleton. A high threshold of sequence identity, e.g. 60%, cannot guarantee that no noise would be introduced to the target protein; (2) a remote homolog (low sequence identity, assuming <30%) may be convergent to the target protein in terms of protein subcellular localization; for instance, the target protein P21291 resides in the subcellular locations Endoplasmic reticulum, Membrane and Microsome, while its first 7 significant remote homologs queried against the SwissProt 57.3 database [13] with default Blast options, O75881 (26.82%, 4e-041), O02766 (25.05%, 4e-028), Q63688 (25.66%, 2e-027), P22680 (23.68%, 4e-026), Q16850 (23.92%, 4e-025), O88962 (25.05%, 4e-025) and Q64505 (23.13%, 1e-024) (the first number in parentheses denotes sequence identity and the second number denotes PSI-Blast E-value), also reside in the subcellular locations Endoplasmic reticulum, Membrane and Microsome. A high threshold of sequence identity (60%) would filter out all the convergent remote homologs that are informative for protein subcellular localization, and thus no homolog knowledge would be transferred to the target protein P21291. We can see that both significant homologs and remote homologs can be convergent or divergent in terms of protein subcellular localization; thus we should conduct homolog knowledge transfer in a proper way, so that the noise from divergent homologs can be effectively depressed.
Mei S et al [25] proposed a transfer learning model (gene ontology based transfer learning for protein subcellular localization, GO-TLM) to measure the individual contribution of the three GO aspects to the model performance, where the kernel weights are evaluated by simple nonparametric cross validation. Mei S [26] further proposed an improved transfer learning model (MK-TLM), which improved on GO-TLM with respect to two major concerns: (1) more rational noise control over divergent homolog knowledge transfer; (2) a comprehensive survey of model performance, especially for novel protein prediction. However, many human proteins reside in or transport across multiple cellular compartments, and proteins with multiple locations may help reveal special biological implications for basic research and drug discovery [30,31]. Neither GO-TLM nor MK-TLM is applicable to multiple protein subcellular localization prediction.
In this paper, we propose a multi-label multi-kernel transfer learning model for large-scale human protein subcellular localization (MLMK-TLM). Based on the work [25,26], MLMK-TLM proposes a multi-label confusion matrix and adapts one-against-all multi-class probabilistic outputs to multi-label learning scenario. With the advantages of proper homolog knowledge transfer, comprehensive survey of model performance for novel protein and multi-labelling capability, MLMK-TLM gains more practical applicability. To validate MLMK-TLM's effectiveness, we conduct a comprehensive model evaluation on the latest human protein dataset Hum-mPLoc 2.0 [5].
Transfer learning
As a research field of the machine learning community, transfer learning has attracted more and more attention in recent years [32]. Traditional supervised learning generally assumes that all the data, including training data and unseen test data, are subject to an independent and identical distribution (iid), which does not hold true under many practical circumstances, especially in the field of biological data analysis. For example, microarray gene expression data from different experimental platforms are subjected to different levels of experimental noise [33]. Transfer learning can be viewed as a bridge to transfer useful knowledge across two related domains with heterogeneous feature representations and different distributions. Pan S et al [32] reviewed the recent progress of transfer learning modelling and classified transfer learning into three categories based on the way of knowledge transfer: instance-based knowledge transfer [34], feature-based knowledge transfer [35] and parameter-based knowledge transfer [36].
Transfer learning modelling is generally conducted around three central questions: (1) how to define the relatedness between domains; (2) what to transfer; and (3) how to transfer. In our work, we explicitly define the relatedness between protein sub-families and super-families by protein sequence evolution, i.e. protein homology. Evolutionarily closely related proteins share similar subcellular localization patterns with high probability. Correspondingly, what to transfer is naturally the homolog GO terms. Such a way of transfer learning modelling is computationally simple and biologically interpretable. In order to reduce the risk of negative transfer, GO-TLM [25] and MK-TLM [26] proposed a nonparametric multiple kernel learning method to measure the contribution of the three GO aspects, the target GO information and the homolog GO information to the model performance. In this paper, we redefine the confusion matrix so that the GO kernel weights can be derived by cross validation in the multi-label learning scenario.
GO feature construction
All the proteins are represented with both the target GO terms and the homolog GO terms, which are extracted from the GOA database [12] (Release 77, as of 30 November 2009), and the homologs are extracted from the SwissProt 57.3 database [13] using PSI-Blast [37]. Assume there are u GO terms x_i (i = 1, 2, ..., u); then protein X can be represented as formula (1): X = (x_1, x_2, ..., x_u), where x_i = 1 if GO term x_i is assigned to the protein X in the GOA database, and x_i = 0 otherwise. To expressly estimate the individual contribution of the three GO aspects, GO-TLM [25] decomposed the feature vector (1) into the following three binary feature vectors: X_P = (x_{P,1}, x_{P,2}, ..., x_{P,l}); X_F = (x_{F,1}, x_{F,2}, ..., x_{F,m}); X_C = (x_{C,1}, x_{C,2}, ..., x_{C,n}). However, GO-TLM aggregated the target GO information and the homolog GO information into one single feature vector, such that the two kinds of GO information are treated equally. Such a way of feature construction is not rational, because divergent homolog GO information carries much noise. Figure 1 shows the difference in subcellular localization patterns between target human proteins (P61221 through Q9Y2Q3) and their homolog proteins.
The homologs are queried against the SwissProt 57.3 database [13] with default Blast options (E-value: 10; substitution matrix: BLOSUM62). The E-value is relaxed to 10 to obtain remote homologs for those proteins that have no significant homologs. For a target protein, we may encounter three cases for the selected homologs: (1) all homologs are significant homologs; (2) part of the homologs are significant homologs and the other part are remote homologs; (3) all homologs are remote homologs. Some remote homologs are convergent to the target protein in terms of protein subcellular localization (e.g. remote homologs O75881, O02766, Q63688, P22680, Q16850, O88962 and Q64505 to target protein P21291), thus we should exploit the useful information from remote homologs; meanwhile, some remote homologs are divergent to the target protein, thus we should prevent negative knowledge transfer from such remote homologs. As compared to a remote homolog, a significant homolog is more likely to be convergent in terms of protein subcellular localization, but in some cases a significant homolog is also likely to be divergent. Figure 1 lists one divergent homolog for each target protein. The illustrated divergent homolog has the highest sequence identity and PSI-Blast E-value among the target protein's divergent homologs. From Figure 1, we can see that the significant homologs reside in definitely distinct subcellular locations from the target protein, which implies that we should also depress noise from the significant homologs even when we encounter case (1) above. Similar to MK-TLM [26], we also separate the target GO information from its homolog GO information for the convenience of noise control. Here, we use T to denote the target protein and H to denote its homologs; thus the target GO feature vectors are expressed as formula (3), and the homolog GO terms are aggregated into one set of homolog feature vectors as formula (4): X^T_P = (x_{P,1}, x_{P,2}, ..., x_{P,l}); X^T_F = (x_{F,1}, x_{F,2}, ..., x_{F,m}); X^T_C = (x_{C,1}, x_{C,2}, ..., x_{C,n}) (3); X^H_P = (x_{P,1}, x_{P,2}, ..., x_{P,l}); X^H_F = (x_{F,1}, x_{F,2}, ..., x_{F,m}); X^H_C = (x_{C,1}, x_{C,2}, ..., x_{C,n}) (4). Thus, each protein is represented by six binary feature vectors: {X^T_P, X^T_F, X^T_C; X^H_P, X^H_F, X^H_C}.
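A minimal sketch of this feature construction, assuming the GO term annotations have already been retrieved from GOA for the target protein and for its top homologs returned by PSI-Blast, and that the GO vocabulary has been split by aspect (P, F, C); all function and variable names are illustrative, not part of the published software.

```python
def binary_vector(assigned_terms, vocabulary):
    """x_i = 1 if the GO term is assigned to the protein, else 0."""
    return [1 if term in assigned_terms else 0 for term in vocabulary]

def go_features(target_terms, homolog_terms_list, vocab_P, vocab_F, vocab_C):
    """Six binary vectors: target (T) and aggregated homolog (H) GO terms,
    each split into the P (process), F (function), C (component) aspects."""
    homolog_terms = set().union(*homolog_terms_list)  # aggregate all homologs
    feats = {}
    for aspect, vocab in (("P", vocab_P), ("F", vocab_F), ("C", vocab_C)):
        feats["T_" + aspect] = binary_vector(target_terms, vocab)
        feats["H_" + aspect] = binary_vector(homolog_terms, vocab)
    return feats
```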
Non-parametric multiple kernel learning
The six binary GO feature vectors {X^T_P, X^T_F, X^T_C; X^H_P, X^H_F, X^H_C} are used to derive six GO kernels {K^T_P, K^T_F, K^T_C; K^H_P, K^H_F, K^H_C}, and the GO kernels are further combined in the way that MK-TLM does [26]. In such a setting, a higher homolog GO kernel weight implies more positive knowledge transfer, and a lower homolog GO kernel weight can depress the potential noise from divergent homologs. Different from MK-TLM, MLMK-TLM adapts the confusion matrix to the multi-label learning scenario based on the concept of locative protein [4,5]. For a self-contained description, we give the full description of non-parametric kernel weight estimation in the multi-label learning scenario below, though part of it is identical to MK-TLM [26]. Similar to GO-TLM and MK-TLM, the final kernel is defined as a linear combination of the sub-kernels, K = Σ_{t∈{T,H}} Σ_{s∈{P,F,C}} θ^t_s K^t_s, where each weight θ^t_s reflects the cross-validated performance of the corresponding sub-kernel, SE denotes recall rate or sensitivity, and MCC denotes the Matthews correlation coefficient. The kernel weights are derived by cross validation. Given a training dataset, we divide the training set into k disjoint folds. In each fold of cross validation, one part is used as the validation set and the other parts are merged as the training set to train the combined-kernel SVM. Thus, we can derive a confusion matrix M by evaluating the trained SVM against the validation set. From the confusion matrix M, we can derive the kernel's SE and MCC measures, in particular MCC = (pq − rs) / √((p + r)(p + s)(q + r)(q + s)), where M_{i,j} records the counts of class i classified to class j, the superscript L denotes subcellular locations, and all the other variables (p, q, r, s) are intermediate variables that can be derived from the confusion matrix M.
In the single-label learning scenario, M_{i,j} (i ≠ j) records the counts of class i misclassified to class j, which is not applicable to the multi-label learning scenario. Let us borrow the notion of locative protein [4,5] to describe the multi-label confusion matrix. Assume that a protein p is located at two subcellular locations {C_1, C_2}, i.e., p ∈ S_{C1} ∧ p ∈ S_{C2} (S_{C1} and S_{C2} denote the sets of proteins that reside in C_1 and C_2, respectively); the notion of locative protein means that protein p can be viewed as two different proteins p_1 and p_2 (p_1 ∈ S_{C1} ∧ p_2 ∈ S_{C2}). Now take protein p_1 as a test protein; the trained SVM labels p_1 as C = arg max_l { f(p_1, l) | l = 1, ..., K }, where f(p_1, l) denotes the probability that protein p_1 is assigned the label l (see Section 4 of Methods for how to derive probability outputs). Thus, the multi-label confusion matrix can be defined by formula (9): only if the predicted label of locative protein p_1 hits one of its true labels C_1 or C_2 is the prediction deemed correct; otherwise, the prediction is deemed incorrect.
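A sketch of the locative-protein bookkeeping and of the per-class SE/MCC derivation from the resulting matrix. Because formula (9) itself is not reproduced in the text above, the exact accounting is an assumption: this version credits the hit to the locative copy's own location when the arg-max prediction matches any of the protein's true labels, and otherwise records a misclassification; the SE/MCC counts follow the standard confusion-matrix conventions.

```python
import numpy as np

def update_multilabel_confusion(M, locative_label, true_labels, predicted):
    """M : (K, K) integer confusion matrix; locative_label is the location
    this locative copy represents; true_labels are all of the protein's
    locations; predicted is the arg-max label of the classifier."""
    if predicted in true_labels:              # hit any true label -> correct
        M[locative_label, locative_label] += 1
    else:                                     # miss -> misclassification
        M[locative_label, predicted] += 1
    return M

def se_mcc_for_class(M, k):
    """Per-class sensitivity and Matthews correlation from the matrix."""
    p = M[k, k]                               # true positives for class k
    s = M[k, :].sum() - p                     # false negatives
    r = M[:, k].sum() - p                     # false positives
    q = M.sum() - p - r - s                   # true negatives
    se = p / (p + s) if (p + s) else 0.0
    denom = np.sqrt(float((p + r) * (p + s) * (q + r) * (q + s)))
    mcc = (p * q - r * s) / denom if denom else 0.0
    return se, mcc
```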
As regards the kernels K^t_s, s ∈ {P, F, C}, t ∈ {T, H}, the Gaussian kernel is used here: K^t_s(x, y) = exp(−γ‖x − y‖²).
Multi-label learning
In our work, we extend MK-TLM [26] to the multi-label learning scenario based on one-against-all multi-class learning and binary SVM probability outputs [38]. Probability outputs tell us the confidence level that a query protein belongs to each subcellular location, and are thus more intuitive and reasonable than ensemble voting [4,5,39] and label transfer from the kNN nearest neighbour protein [27,28,30].
Assuming there are K subcellular locations, for each subcellular location k we view the proteins that belong to k as positive examples and the proteins that belong to the other subcellular locations as negative examples, based on which we train one binary SVM. Thus, we have K trained binary SVMs. If each binary SVM outputs {−1, +1} labels, multiple {+1} outputs can be viewed as multiple protein subcellular locations [40]. Because the {−1, +1} labels cannot tell us the confidence level that a query protein belongs to each subcellular location, we do not adopt that method. If each binary SVM yields a probability output, we can choose the label with the highest probability as the protein subcellular location, which is the so-called one-against-all multi-class learning [38,41]; if we set some probability threshold, the labels with probability over the threshold can be viewed as multiple protein subcellular locations, which is thus intuitively applicable to the multi-label learning scenario. Platt J [41] proposed a method to adapt binary SVM {−1, +1} labels to posterior class probabilities as defined in formula (11): P(y = 1 | x) = 1 / (1 + exp(A f(x) + B)), where the coefficients A and B can be derived from data by cross validation, and f(x) is the uncalibrated decision value of the binary SVM. The one-against-all multi-class SVM with probability output has been implemented in the LIBSVM tool (http://www.csie.ntu.edu.tw/~cjlin/libsvm/), which can be easily used for multi-label learning. By setting the LIBSVM prediction option "-b 1" (which requests probability rather than {−1, +1} output), we can obtain the probability vector with which a query protein is predicted to each subcellular location. By setting an optimal probability threshold, we can determine the optimal multiple labelling for the query protein based on the predicted probability vector.
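The probability-threshold labelling can be sketched with scikit-learn as a stand-in for the LIBSVM "-b 1" pipeline described above. Note the hedges: scikit-learn's SVC with probability=True obtains class probabilities by coupling pairwise (one-vs-one) Platt-calibrated estimates rather than fitting one-against-all sigmoids, so this is only an approximation of the scheme in the text; the C and gamma values are the optimum reported later for the optimistic case (C = 2^8, γ = 2^−1), and the 0.1 threshold is a placeholder for the tuned value.

```python
from sklearn.svm import SVC

def multilabel_predict(X_train, y_train, X_test, threshold=0.1):
    """Train an RBF SVM with probability outputs on locative (single-label)
    training examples and assign every location whose predicted probability
    exceeds the threshold to each test protein."""
    clf = SVC(kernel="rbf", C=256, gamma=0.5, probability=True)
    clf.fit(X_train, y_train)                  # y_train: locative single labels
    proba = clf.predict_proba(X_test)          # shape (n_test, n_locations)
    return [list(clf.classes_[row >= threshold]) for row in proba]
```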
Model evaluation and model selection
The existing GO-based models only reported the optimistic performance by evaluating the proposed model against information-rich (GO, PPI, image) test proteins, and seldom reported the performance for novel proteins [4,5,14,15,16,17,18,19,20,21,22,23,24,25,27,28,29,30]. Apparently, the optimistic performance is not enough for a comprehensive survey of the model's true predictive ability, especially for novel protein prediction. MK-TLM [26] attempted to conduct a comprehensive survey of the model performance in the optimistic, moderate and pessimistic cases, and demonstrated good performance for novel proteins and for proteins that belong to protein families we know little about. In this paper, the proposed MLMK-TLM inherits all of MK-TLM's advantages. The Optimistic case means the training set and the test set both abound in GO information; the Moderate case means that the test set contains no GO information at all, which can be simulated by removing the test kernels {K^T_P, K^T_F, K^T_C}; the Pessimistic case means that both the training set and the test set contain no target GO information at all, i.e. the target GO information is removed from both the training set and the test set, which can be simulated by removing the training kernels {K^T_P, K^T_F, K^T_C} and the test kernels {K^T_P, K^T_F, K^T_C}. The performance evaluation under the multi-label learning scenario is more complicated than under the single-label learning scenario. Because the model performance estimation involves both singlex proteins (only one subcellular location) and multiplex proteins (multiple subcellular locations), we should conduct two performance estimation experiments: one experiment is the overall performance estimation on the locative dataset, where a multiplex protein is viewed as multiple singlex proteins, as Hum-mPLoc 2.0 [4], Virus-mPLoc [15], iLoc-Euk [27], iLoc-Virus [28] and Plant-mPLoc [30] did; the other experiment is the multi-labelling estimation for multiplex proteins. The first experiment is similar to traditional supervised learning estimation except that the multi-label confusion matrix is adopted instead (see formulas 8 & 9); in the second experiment, cross validation is conducted on the multiplex proteins only and the singlex proteins are always treated as training data. Thus, the whole training set is composed of two parts: a fixed part from the singlex proteins and a variable part from the multiplex proteins. In addition, the model performance estimation in the second experiment is much more complicated. To simplify the formulation, let us first give several symbol annotations: (1) L_true denotes the true label set of a multiplex protein p; (2) L_predicted denotes the predicted label set of a multiplex protein p; (3) P{p | F} denotes the protein set P whose proteins p satisfy the condition F; (4) [[·]] denotes set cardinality; (5) the minus symbol − denotes set difference; (6) ∧ denotes logical AND. Based on these symbols, we formally define Label Hit Rate (LHR), Perfect Label Match Rate (PLMR) and Non-target Label Hit Rate (NT-LHR) in formula (12), where N denotes the number of subcellular locations a protein may reside in, with maximum value 4 here, and n denotes the number of correct hits or wrong hits, with maximum value 14 − N here (we assume the total number of subcellular locations is 14). The multiplex proteins in Hum-mPLoc 2.0 [5] can be divided into 3 subsets that possess 2, 3 and 4 labels (subcellular locations), respectively. We will report LHR, PLMR and NT-LHR on each subset.
Take the 2-label subset as an example: the prediction may hit 0, 1 or 2 true labels. A low 0-label hit rate and a high 1- and/or 2-label hit rate imply good model performance. However, the prediction may also hit 1-12 non-target labels (excluding the 2 true labels from the total of 14 subcellular locations). A high NT-LHR implies a high misleading tendency, which should be as low as possible. The existing multi-label learning models for protein subcellular localization [4,5,15,19,27,28,29,30] seldom reported NT-LHR. If the prediction hits the true labels and yields no other misleading labels, we call the case a perfect label match; otherwise, we call it a non-perfect label match. A high Perfect Label Match Rate (PLMR) implies good predictive ability and a low misleading tendency.
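A hedged sketch of the three multi-labelling measures for one subset of multiplex proteins. Because the exact normalizations of formula (12) are not visible in the extracted text, the rates below (fractions of the proteins in the subset, broken down by hit count) are one reasonable reading, and the example data are fabricated for illustration only.

```python
from collections import Counter

def multilabel_measures(true_sets, predicted_sets):
    """true_sets / predicted_sets: one label set per multiplex protein.
    Returns (LHR by number of true-label hits, PLMR, NT-LHR by number of
    non-target hits), each as a fraction of the proteins in the subset."""
    total = len(true_sets)
    hit_counts = Counter(len(t & p) for t, p in zip(true_sets, predicted_sets))
    nt_counts = Counter(len(p - t) for t, p in zip(true_sets, predicted_sets))
    plmr = sum(t == p for t, p in zip(true_sets, predicted_sets)) / total
    lhr = {n: c / total for n, c in hit_counts.items()}      # LHR
    nt_lhr = {n: c / total for n, c in nt_counts.items()}    # NT-LHR
    return lhr, plmr, nt_lhr

true_sets = [{"Cytoplasm", "Nucleus"}, {"Membrane", "Golgi apparatus"}]
pred_sets = [{"Cytoplasm", "Nucleus"}, {"Membrane", "Cytoskeleton"}]
print(multilabel_measures(true_sets, pred_sets))
```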
MLMK-TLM is a relatively complex model that requires time-consuming computation for model comparison and model selection. Apart from the SVM regularization parameter C and the kernel parameter γ, MLMK-TLM introduces a hyper-parameter H that denotes the number of homologs used for knowledge transfer. Assume there are N proteins in the dataset and the hyper-parameter sets are C = {2^3, 2^4, 2^5, 2^6, 2^7, 2^8, 2^9, 2^10, 2^11}, γ = {2^−3, 2^−2, 2^−1} and H = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. MLMK-TLM has to fix one hyper-parameter to optimize the other hyper-parameters, and in each iteration has to compute the kernel matrices, thus the computational complexity is [[K]] × [[C]] × [[γ]] × [[H]] × O(N²), where [[·]] denotes set cardinality, [[K]] denotes the number of kernel matrices and O(N²) denotes the computational complexity of kernel computation. For the large-scale human protein dataset Hum-mPLoc 2.0 [5], the model selection is rather time-consuming. Hence, we adopt 5-fold cross validation instead of leave-one-out cross validation (LOOCV, jackknife) as GO-TLM [25] and MK-TLM [26] did. For multi-labelling estimation, the multiplex proteins are divided into 5 nearly even parts, one part serving as the test set while the other parts are merged with the singlex proteins into the training set; this iterates 5 times until all the multiplex proteins have participated in the performance estimation process (see Section 6 of Results).
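The model-selection loop can be sketched as a search over the three hyper-parameter sets above. The hedges: the paper fixes one parameter while tuning the others, whereas for brevity this sketch enumerates the full product; the evaluate() callback is a placeholder that is assumed to rebuild the GO kernels for H homologs (the O(N²) term) and return a cross-validated score.

```python
import itertools

C_grid = [2 ** k for k in range(3, 12)]        # {2^3, ..., 2^11}
gamma_grid = [2 ** -k for k in (3, 2, 1)]      # {2^-3, 2^-2, 2^-1}
H_grid = list(range(1, 11))                    # number of homologs to transfer

def select_model(evaluate):
    """evaluate(C, gamma, H) -> cross-validated score; returns the best
    (C, gamma, H) triple and its score."""
    best_params, best_score = None, float("-inf")
    for C, gamma, H in itertools.product(C_grid, gamma_grid, H_grid):
        score = evaluate(C, gamma, H)          # rebuilds kernels, runs CV
        if score > best_score:
            best_params, best_score = (C, gamma, H), score
    return best_params, best_score
```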
For performance estimation on locative proteins, we adopt the performance measures: Sensitivity (SE), Specificity (SP), Matthew's correlation coefficient (MCC), Overall MCC, and Overall Accuracy. For multi-labelling estimation, we adopt LHR, PLMR and NT-LHR.
Dataset
Shen HB et al [5] constructed a large-scale human protein dataset. The dataset covers 14 subcellular locations and contains 3106 distinct human proteins, where 2580 proteins belong to one subcellular location, 480 to two locations, 43 to three locations, and 3 to four locations. A protein with multiple subcellular locations is treated as one training example for each subcellular location it belongs to; thus the same protein is viewed as a different protein within each subcellular location, referred to as a locative protein in the literature [4,5,15,19,27,28,29,30]. Thus, there are 3681 locative proteins in the dataset [5]. The dataset is a good benchmark for model performance comparison, because none of the proteins has ≥25% sequence identity to any other protein in the same subcellular location. Accordingly, we choose Hum-mPLoc 2.0 [5] as the baseline model for performance comparison. Although the dataset [40] collected many more multiplex human proteins, we do not use it to evaluate the multi-labelling, because its sequence similarity reaches 80%, which is high enough to yield performance overestimation.
Model performance evaluation
2.1 Optimistic case: both training set and test set abound in target GO information. The optimistic case assumes that both the training set and the test set abound in target GO information, that is, the training proteins and the test protein by themselves contain rich GO information before incorporating the homolog GO information. We call this case MLMK-TLM-I. As shown in the MLMK-TLM-I section of Table 1, MLMK-TLM achieves 87.04% accuracy and 0.8606 MCC on the Hum-mPLoc 2.0 human protein data, significantly outperforming the baseline Hum-mPLoc 2.0 (62.7%) [5]. Actually, Hum-mPLoc 2.0 aggregated the target protein's GO information together with the homolog GO information to train the classifier, so the overall accuracy of 62.7% is that model's optimistic performance. The optimal hyper-parameter setting is (H = 1; γ = 2^−1; C = 2^8), where H = 1 means that only one homolog's GO information is transferred to the target protein.
The high MCC value (0.8606) implies that MLMK-TLM achieves good predictive balance among the 14 human protein subcellular locations. Detailed per-location results are given in the MLMK-TLM-I section of Table 1.
2.2 Moderate case: training set abounds in target GO information while test set contains no target GO information
The most common scenario we encounter may be that we have plenty of well-annotated training proteins and need to label some novel proteins at hand. We call this scenario the moderate case, referred to as MLMK-TLM-II. Novel proteins generally have no GO information at all. Most of the existing GO-based models except the work [26] ignored performance estimation in this case. Once the proposed models work in such a scenario, the performance may not be as optimistic as reported. Therefore, experiments should be expressly designed for the moderate case to test MLMK-TLM's applicability to novel proteins.
The test procedure for the moderate case is more complicated than that for the optimistic case, because the proteins in the test set have no target GO information.
As shown in the MLMK-TLM-II section of Table 1, MLMK-TLM achieves 85.22% accuracy and 0.8411 MCC on the benchmark data, still significantly outperforming the baseline Hum-mPLoc 2.0 (62.7%) [5] and only about 2% lower than the optimistic case (87.04% accuracy; 0.8606 MCC).
In this section, we study an extreme case, called the pessimistic case, where a protein subfamily or species is not GO-annotated at all, that is, we know nothing about the protein subfamily or species but the protein sequence information. The key point is whether the homolog GO information is informative enough to train an effective prediction model for the protein subfamily or species we know little about. To validate the point, we assume that at least one GO-annotated homolog can be queried for the target protein, which is not restrictive given the rapid progress of the GOA database [12]. If the experimental results support the idea, MLMK-TLM will gain much wider application. Different from the optimistic case and the moderate case, the pessimistic test procedure contains only the three homolog GO kernels, with the target GO kernels missing. As shown in the MLMK-TLM-III section of Table 1, MLMK-TLM achieves 83.97% accuracy and 0.8277 MCC on the benchmark data, significantly outperforming the baseline Hum-mPLoc 2.0 (62.7%) [5], nearly 3% lower than the optimistic case (87.04% accuracy; 0.8606 MCC) and nearly 1.5% lower than the moderate case (85.22% accuracy; 0.8411 MCC).
Optimal number of homologs
Homologs are a good bridge for knowledge transfer between two evolutionarily related protein subfamilies, super-families or species. However, biological evidence demonstrates that divergent homologs are subjected to different subcellular localization patterns from the target protein (see Figure 1), so incorporating divergent homologs would lead to negative transfer and harm model performance. Thus, it is highly necessary to quantitatively study how much homolog GO information should be transferred to the target protein. Most of the existing homolog-GO-based models, except the work [26], seldom conducted such quantitative analysis. Because the homolog space is generally quite huge, model selection becomes unendurably long if the hyper-parameter H is large, so we empirically define the homolog search space as the 7 homologs with the most significant E-values.
As shown in Figure 2, the optimal number of homologs is 1 for the optimistic case (MLMK-TLM-I), the moderate case (MLMK-TLM-II) and the pessimistic case (MLMK-TLM-III). The model performance slightly decreases for the optimistic case (MLMK-TLM-I) with the incorporation of more homologs, while it decreases sharply for the moderate (MLMK-TLM-II) and pessimistic (MLMK-TLM-III) cases. When the number of homologs reaches 7, the accuracy drops sharply by about 15% for the moderate and pessimistic cases. We can see that divergent homologs do comparatively little harm in the optimistic case, partly because the target protein's own GO information can counteract the unfavourable impact of the divergent homolog GO information. For the moderate and pessimistic cases, the unfavourable divergent homolog GO information greatly deteriorates the model performance. From the results, we can safely conclude that it is highly necessary to quantitatively study how much homolog GO information should be transferred to the target protein.
It is worth noting that the pessimistic case contains no target GO information but slightly outperforms the moderate case, beyond our expectation (except at the first and second points of the curve in Figure 2). The reason may be that the substitution of the homolog GO feature vector for the target GO feature vector results in the slight performance deterioration (see Formula 13).
Kernel weight distribution
The GO kernel weights are evaluated using 3-fold cross validation as described in Section 3 of Methods, rather than the 5-fold cross validation that GO-TLM [25] conducted, because the additional hyper-parameter H makes model selection more time-consuming. Actually, to evaluate the model performance we conduct two-level cross validation: the outer 5-fold cross validation uses the whole dataset to evaluate performance, and the inner 3-fold cross validation uses the training set from the outer cross validation to estimate the kernel weights. Similar to GO-TLM [25] and MK-TLM [26], the kernel weight distributions yielded by the outer 5-fold cross validation are quite similar, so we choose one typical kernel weight distribution to illustrate the GO kernels' contribution to the model performance.
As shown in Figure 3, the x axis denotes the six GO kernels, where T denotes target, H denotes homolog, and F, C and P denote the three aspects of gene ontology (molecular function, cellular component and biological process), respectively. We can see that the optimistic case and the moderate case have similar kernel weight distributions on the benchmark dataset, while the pessimistic case is similar to the homolog GO kernel weight distribution of the optimistic and moderate cases (see the latter part of the curve in Figure 3); the pessimistic case contains only the three homolog GO kernels because the target protein's GO information is missing. For both the target GO kernels and the homolog GO kernels, C (cellular component) demonstrates a much higher kernel weight. For the optimistic and moderate cases, the target GO kernels and the homolog GO kernels make equivalent contributions to the model performance (compare the former and latter halves of the curve in Figure 3). From the results, we can conclude that homolog knowledge transfer is instrumental to novel target protein research.
Multi-labelling estimation
As stated in Section 4 of Methods, MLMK-TLM can yield the probability outputs from Formula 11. We can assign to the test protein the subcellular locations whose predicted probability is greater than the optimal probability threshold. The threshold setting should achieve a rational balance between higher LHR (Label Hit Rate) & PLMR (Perfect Label Match Rate) and lower NT-LHR (Non-target Label Hit Rate), defined by Formula 12 in Section 5 of Methods. Generally, higher LHR & PLMR also imply higher NT-LHR. In this work, the optimal probability threshold is selected from {0.06, 0.07, 0.08, 0.09, 0.10, 0.15, 0.2}. Besides LHR, PLMR and NT-LHR (Table 2 through Table 4), we also list some proteins of perfect label match (Table 5) and non-perfect label match (Table 6).
As shown in Table 2 through Table 4, MLMK-TLM achieves 58.54%, 27.19% and 0 LHR (called complete label hit rate, CLHR, in bold font) for 2, 3 and 4 multiple subcellular locations in the optimistic case, respectively (see Table 2); 56.87%, 25.58% and 0 LHR (CLHR, in bold font) for the moderate case (see Table 3); and 58.13%, 32.56% and 33.33% LHR (CLHR, in bold font) for the pessimistic case (see Table 4). The results seem much more promising than the 24.3% 2-label hit rate, 3.6% 3-label hit rate and 6.7% 4-label hit rate reported in the work [40]. The complete label hit rate (CLHR) for the pessimistic case seems better than for the optimistic and moderate cases because of the probability thresholds: 0.09 for the optimistic case, 0.08 for the moderate case and 0.07 for the pessimistic case. Relaxing the probability threshold would yield a higher Label Hit Rate (LHR), but would also yield a higher Non-target Label Hit Rate (NT-LHR). From Table 2 to Table 4, we can see that the pessimistic case shows a higher NT-LHR than the optimistic and moderate cases. A complete label hit means that all the true labels are correctly hit by the prediction, but it cannot measure the model's misleading tendency, because the prediction is still likely to hit non-target labels. Perfect Label Match Rate (PLMR) is the measure that demonstrates the model's multi-labelling ability with zero misleading tendency. As shown in Table 2 through Table 4, we can see from the PLMR measure that the optimistic case is the best (42.92%, 13.95%, 33.33%), the moderate case the second (38.75%, 9.30%, 0) and the pessimistic case the third (34.58%, 9.30%, 0). Even MLMK-TLM's Perfect Label Match Rate is much better than the Partial Label Match Rate reported in the work [40]. Table 5 lists all the proteins of perfect label match in the optimistic, moderate and pessimistic cases; the detailed probability outputs for the perfect label match proteins are given in the Supporting Information (File S1 for the optimistic case, File S2 for the moderate case and File S3 for the pessimistic case).
To further demonstrate MLMK-TLM's multi-labelling ability, we list some proteins of non-perfect label match in Table 6 to show how the prediction varies from the true labels. Table 6 shows only 8 proteins as examples; for the full list of non-perfect label match proteins, see the Supporting Information (File N S1 for the optimistic case, File N S2 for the moderate case and File N S3 for the pessimistic case). Take protein O43663 in the optimistic case as an example: O43663 is labelled Cytoplasm & Nucleus in the original Hum-mPLoc 2.0 dataset [5] (GOA database version 70.0, released March 10, 2008), and the prediction not only hits the two true labels but also hits a non-target label, Cytoskeleton, with probability 0.136. From the latest Swiss-Prot database (UniProt release 2011_11, Nov 16, 2011, http://www.uniprot.org/), we can see that Cytoskeleton is truly assigned to protein O43663. The non-target labels validated as TRUE Labels are underlined in Table 6. We can see that there are many underlined TRUE Labels in Table 6, where the square-bracketed number denotes probability. The underlined TRUE Labels demonstrate MLMK-TLM's generalization ability rather than a misleading tendency. Actually, MLMK-TLM's misleading tendency is lower than the NT-LHR measures in Table 2 to Table 4 according to the latest Swiss-Prot database. No training proteins in the Hum-mPLoc 2.0 dataset [5] are subjected to the same subcellular localization pattern (Nucleus, Cytoplasm, Endoplasmic reticulum, Golgi apparatus, Plasma membrane) as P42858, whereas MLMK-TLM can correctly hit the five labels with different confidence levels, which is hard to achieve by the nearest neighbour based multi-label classifiers [19,28,29,30], because those classifiers assign to the query protein the labels that belong to the nearest training protein(s). The Hum-mPLoc 2.0 web server (http://www.csbio.sjtu.edu.cn/bioinf/hum-multi-2/) labels O43663, P41222 and P42858 as follows: (1) O43663: Nucleus, without hitting the true label Cytoplasm. From Table 6: (2) Q14145: Cytoplasm and Endoplasmic reticulum, hitting non-target label Endoplasmic reticulum; (3) Q9UHD9: Cytoplasm, Endoplasmic reticulum and Nucleus, hitting non-target label Endoplasmic reticulum. For both the moderate and the pessimistic cases, the test proteins' own GO information is removed to simulate novel proteins, yet MLMK-TLM can correctly predict the test proteins' true labels and underlined TRUE Labels, as illustrated in Table 2 to Table 6. The results show that MLMK-TLM has a good multi-labelling ability for novel multiplex human proteins. Misleading tendency is an important factor that should be given attention in the multi-label learning scenario. The advantage of probability outputs is that they inform biologists of the confidence level of each subcellular location, and thus help biologists make rational decisions.
Discussion
In this paper, we propose a multi-label multi-kernel transfer learning model for human protein subcellular localization (MLMK-TLM), which further extends our published work GO-TLM and MK-TLM to the multi-label learning scenario, such that MLMK-TLM has the following advantages over the existing GO-based models [2,4,5,14,15,16,17,18,19,20,21,22,23,24,25,26]: (1) proper homolog knowledge transfer with rational control over noise from divergent homologs; (2) comprehensive survey of model performance for novel proteins; (3) multi-labelling capability with probability interpretation. Compared to single-label learning, multi-label learning is more complicated. In our work, we propose a multi-label confusion matrix and adapt one-against-all multi-class probabilistic outputs to the multi-label learning scenario; meanwhile, we formally propose three multi-label learning performance measures: LHR (Label Hit Rate), PLMR (Perfect Label Match Rate) and NT-LHR (Non-target Label Hit Rate). NT-LHR is formally formulated to measure the model's misleading tendency. The experiments show that MLMK-TLM significantly outperforms the baseline model and demonstrates good multi-labelling ability for novel human proteins. Some findings (predictions) are validated by the latest Swiss-Prot database.
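As a rough illustration of how the three measures might be computed from true and predicted label sets, the following sketch gives one plausible formulation consistent with their descriptions in the text; the paper's formal definitions (and its separate reporting by number of true labels) may differ, and the example proteins are placeholders.

```python
# Hedged sketch of the three multi-label measures named above (LHR, PLMR, NT-LHR),
# written as one plausible formulation; not the paper's exact formal definitions.

def evaluate(pairs):
    """pairs: list of (true_label_set, predicted_label_set) tuples."""
    n = len(pairs)
    lhr = sum(t.issubset(p) for t, p in pairs) / n     # all true labels hit (complete hit)
    plmr = sum(t == p for t, p in pairs) / n           # hit exactly the true labels, nothing else
    nt_lhr = sum(bool(p - t) for t, p in pairs) / n    # at least one non-target label hit
    return lhr, plmr, nt_lhr

# Toy example with three 2-label proteins (illustrative only).
pairs = [
    ({"Cytoplasm", "Nucleus"}, {"Cytoplasm", "Nucleus"}),                  # perfect match
    ({"Cytoplasm", "Nucleus"}, {"Cytoplasm", "Nucleus", "Cytoskeleton"}),  # complete hit + non-target hit
    ({"Cytoplasm", "Nucleus"}, {"Nucleus"}),                               # partial hit
]
print(evaluate(pairs))  # -> (0.666..., 0.333..., 0.333...)
```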
Supporting Information
File S1 Full list of perfect label match proteins in the optimistic case. For each multiplex protein in the supplementary documents, there are three lines of description. The first line describes the protein accession; the second line describes the true label(s) of the protein; and the third line gives the predicted label(s) of the protein. Each predicted label is followed by a square-bracketed number that denotes the probability that the protein is predicted to the label. | 2017-04-04T08:41:20.850Z | 2012-06-13T00:00:00.000 | {
"year": 2012,
"sha1": "0e84def3e17d8eff6baa98e0ee53d4bc07f829ee",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0037716&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e84def3e17d8eff6baa98e0ee53d4bc07f829ee",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
9160117 | pes2o/s2orc | v3-fos-license | A Research Communication Brief: Gluten Analysis in Beef Samples Collected Using a Rigorous, Nationally Representative Sampling Protocol Confirms That Grain-Finished Beef Is Naturally Gluten-Free
Knowing whether or not a food contains gluten is vital for the growing number of individuals with celiac disease and non-celiac gluten sensitivity. Questions have recently been raised about whether beef from conventionally-raised, grain-finished cattle may contain gluten. To date, basic principles of ruminant digestion have been cited in support of the prevailing expert opinion that beef is inherently gluten-free. For this study, gluten analysis was conducted in beef samples collected using a rigorous nationally representative sampling protocol to determine whether gluten was present. The findings of our research uphold the understanding of the principles of gluten digestion in beef cattle and corroborate recommendations that recognize beef as a naturally gluten-free food.
Introduction
Experts recognize fresh meat such as beef as a naturally gluten-free food that is recommended as part of a healthful gluten-free diet [1][2][3][4]. Beef is an important source of 10 essential nutrients including protein and key micronutrients such as iron, zinc, and B-vitamins, which are nutrients of concern for those following a gluten-free diet [5][6][7][8][9][10]. Questions have recently been raised about whether beef from conventionally-raised, grain-finished cattle may contain gluten. To date, basic principles of ruminant digestion have been cited in support of the prevailing expert opinion that beef is inherently gluten-free [11]. Although wheat, barley, and rye are common gluten-containing feed ingredients in conventional, grain-finished cattle feeds, it is well accepted that gluten proteins are hydrolyzed into individual amino acids during the ruminant digestive process. While there is general scientific consensus based on well-accepted animal physiology that meat from grain-finished beef cattle does not contain gluten, this has not been scientifically validated using current analytical methods for evaluating the gluten content of foods. Thus, gluten analysis was conducted in beef samples collected using a rigorous, nationally representative sampling protocol. The findings confirm the understanding of the principles of gluten digestion in beef cattle and corroborate recommendations that recognize beef as a naturally gluten-free food.
Celiac disease affects an estimated 1% of the population in the United States, while non-celiac gluten sensitivity is estimated to affect another 0.6% to 6% [12,13]. Thus, up to 7% of people in the U.S. may benefit from a gluten-free diet. This gluten analysis in beef provides confidence for the large number of individuals following a gluten-free diet.
Sampling Protocol
In order to provide accurate nutrition information to health professionals, the food industry, and consumers, the national Beef Checkoff Program and the USDA Agricultural Research Service have collaborated to conduct research resulting in updated nutrient composition data for beef retail cuts published in the USDA National Nutrient Database for Standard Reference [14]. To accurately obtain research samples representing beef retail cuts in the U.S., a rigorous nationally representative sampling protocol was developed by nutrition and meat scientists at three universities in collaboration with USDA Nutrient Data Laboratory statisticians [15][16][17][18].
These experts identified a nationally representative sample of 164 beef carcasses at seven meat packing plants in six different regions. A statistically appropriate sample was selected to represent the proper proportions of yield grade, quality grade, breed, genetic type and geographic location. The carcasses were sent to the three collaborating universities for fabrication into retail cuts. Raw and cooked samples were homogenized, and composites were made for each retail cut. A chart illustrating the study sample protocol is published elsewhere [19].
Nutrient Analysis
Comprehensive nutrient analysis was conducted on the raw and cooked composite samples for the retail beef cuts. This included analysis for total protein and amino acids, total fat and fatty acids, cholesterol, minerals such as iron, selenium, and zinc, as well as vitamins including retinol, B-vitamins, choline, vitamin D, and vitamin E. Validated nutrient analysis methods and quality control techniques were performed throughout the study to ensure accurate nutrient data, as has been previously described [13][14][15][16]. The results of this comprehensive nutrient analysis served to update the USDA's National Nutrient Database for Standard Reference [20].
Results
In March 2015, gluten analysis was conducted on archived samples retained from the beef research described above. A total of 17 composite samples representing 17 retail beef cuts were sent to an independent laboratory for gluten analysis. Food Safety Net Services performed the gluten analysis using Veratox® for Gliadin R5 (Neogen Food Safety, Lansing, MI, USA), a validated sandwich enzyme-linked immunoassay (S-ELISA) test (Neogen Corporation and the University of Nebraska Food Allergy Research and Resource Program, Lincoln, NE, USA, 2012), distributed by Neogen® Corporation.
The gluten analysis results for each of the 17 composite beef samples were below the limit of detection for this test, which is equivalent to less than 5 ppm of gluten (see Table 1). According to the FDA's 2013 Gluten-Free Labelling regulations, a food that is inherently free of gluten may be labelled as "gluten-free" [21-23].
Discussion
These findings confirm that today's fresh beef supply from conventionally-raised cattle-the predominant type sold in grocery stores-does not contain measurable levels of gluten, and can be included in a gluten-free diet. This evidence may help individuals with gluten-related conditions avoid unnecessary dietary restriction, and including beef in the diet can provide important nutritional benefits due to the micronutrients found in beef such as iron, zinc, and B-vitamins.
A gluten-free diet is currently the only safe treatment for individuals diagnosed with celiac disease [24,25]. Left untreated, this genetic autoimmune disorder is associated with a wide range of symptoms, nutrient deficiencies, and serious complications. In susceptible individuals, the ingestion of the gluten protein in wheat, barley, rye, and crossbreeds of these grains damages the villi in the small intestine. This damage to the small intestine typically results in the malabsorption of vital nutrients such as iron, zinc, and B-vitamins. Anemia resulting from the malabsorption of iron and vitamin B 12 is of particular concern for those with celiac disease. Avoiding or correcting nutrient deficiencies that are often present with celiac disease is a key focus of medical nutrition therapy.
It is well documented that those following a gluten-free diet are at increased risk of multiple vitamin and mineral deficiencies for a number of reasons [7][8][9]. For example, gluten-free flours and grain products such as breads, pastas, and cereals are not subject to the same enrichment and fortification standards as wheat-based products. Thus, many gluten-free grain products contain lower levels of iron and B-vitamins such as thiamin, riboflavin, niacin, and folate. This further increases the risk of iron and B-vitamin deficiencies in individuals following a gluten-free diet.
Conclusions
Knowing whether or not a food contains gluten is vital information for individuals with celiac disease and non-celiac gluten sensitivity. The approach described in this report can serve as a model for others interested in substantiating the gluten-free nature of their products. The publication of results from gluten analyses such as this can help to further inform health professionals and the food industry and ultimately benefit those who must avoid gluten in their diets.
To our knowledge, this is the first effort to conduct gluten analysis in a nationally representative sample of beef. The rigorous sampling protocol and validated enzyme-linked immunoassay used for this analysis provides scientific evidence to support current recommendations that recognize beef as an inherently gluten-free food that can be enjoyed in a healthful gluten-free diet. This understanding is important since beef is a source of many vital nutrients such as iron, zinc, and B-vitamins that are of concern for those following a gluten-free diet. Encouraging gluten-restricting individuals to enjoy beef as part of a healthful gluten-free diet may reduce unnecessary dietary restriction, and improve diet satisfaction and nutrient adequacy. | 2017-10-01T15:21:17.753Z | 2017-08-25T00:00:00.000 | {
"year": 2017,
"sha1": "b004556c2c817307546a1dcf79cef53fdaf253b1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/9/9/936/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b004556c2c817307546a1dcf79cef53fdaf253b1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222067497 | pes2o/s2orc | v3-fos-license | Nest-site selection and breeding success of passerines in the world’s southernmost forests
Background Birds can maximize their reproductive success through careful selection of nest-sites. The ‘total-foliage’ hypothesis predicts that nests concealed in vegetation should have higher survival. We propose an additional hypothesis, the ‘predator proximity’ hypothesis, which states that nests placed farther from predators would have higher survival. We examined these hypotheses in the world’s southernmost forests of Navarino Island, in the Cape Horn Biosphere reserve, Chile (55°S). This island has been free of mammalian ground predators until recently, and forest passerines have been subject to depredation only by diurnal and nocturnal raptors. Methods During three breeding seasons (2014–2017), we monitored 104 nests for the five most abundant open-cup forest-dwelling passerines (Elaenia albiceps, Zonotrichia capensis, Phrygilus patagonicus, Turdus falcklandii, and Anairetes parulus). We identified nest predators using camera traps and assessed whether habitat characteristics affected nest-site selection and survival. Results Nest predation was the main cause of nest failure (71% of failed nests). Milvago chimango was the most common predator, depredating 13 (87%) of the 15 nests where we could identify a predator. By contrast, the recently introduced mammal Neovison vison, the only ground predator, depredated one nest (7%). Species selected nest-sites with more understory cover and taller understory, which according to the total-foliage hypothesis would provide more concealment against both avian and mammal predators. However, these variables negatively influenced nest survival. The apparent disconnect between selecting nest-sites to avoid predation and the actual risk of predation could be due to recent changes in the predator assemblage driven by an increased abundance of native M. chimango associated with urban development, and/or the introduction of exotic mammalian ground predators to this island. These predator assemblage changes could have resulted in an ecological trap. Further research will be needed to assess hypotheses that could explain this mismatch between nest-site selection and nest survival.
INTRODUCTION
Where do birds place their nests? This question has intrigued ornithologists since the early days of the discipline (Birkhead, Wimpenny & Montgomerie, 2014;Lovette & Fitzpatrick, 2016). For open-cup nesters, early studies pointed to food availability as the most important factor for nest-site selection, but predation has been increasingly considered as another major factor (Martin, 1987;Martin, 1993;Reidy & Thompson, 2018). Predation can directly affect survival of eggs, juveniles, and adults, and has been identified as the main cause of nest failure in passerines (Nice, 1957;Ricklefs, 1969;Liebezeit & George, 2002;Bellamy et al., 2018;Reidy & Thompson, 2018). According to these studies, we predict that birds will select those habitat characteristics that reduce predation risk and thus increase the probabilities of nest survival (Jaenike & Holt, 1991;Fontaine & Martin, 2006).
Several hypotheses have been proposed to explain the mechanisms by which nest placement reduces predation. One of these, the 'total-foliage' hypothesis, predicts that nests located in sites with more surrounding foliage would have higher concealment, as well as more interference with the transmission of odors and sounds that could be detected by a predator. Thus, a larger amount of foliage reduces predation risk (Martin & Roper, 1988;Martin, 1993). In the present study we introduce another, but not mutually exclusive hypothesis, which we call the 'predator proximity' hypothesis. This hypothesis assesses types of predators according to their mode of attack, particularly aerial versus terrestrial. This hypothesis assumes that passerine birds select nest sites that avoid discovery and attack by the major type of predators in their ecosystem, and it predicts that: (i) when predation is dominated by aerial predators, birds will place nests near the ground and (ii), in contrast, when predation is dominated by ground predators, birds will place nests at greater height from the ground (Jara et al., 2019). Another factor that we consider in this hypothesis is canopy cover. Some aerial predators search for prey while perched in the canopy. Hence in habitats dominated by aerial predators that exhibit sit and wait behavior, we predict that passerine birds will place nests in sites where there is less canopy cover and/or where the canopy is taller (both factors, will effectively put raptors farther away from nests placed in the understory).
High-latitude forests offer ideal natural laboratories because they have a simpler structure compared to tropical forests (i.e., the canopy is dominated by a few species belonging to only one genus, and the understory has low abundance and richness of shrub species; Rozzi et al., 2008). Consequently, sub-Antarctic forests of South America provide unique opportunities to test the total-foliage and predator proximity hypotheses and collect evidence to understand the mechanisms that explain nest-site selection and nest survival. Navarino Island (55°S), located in the Cape Horn Biosphere Reserve, hosts the world's southernmost forests (Rozzi et al., 2012) and serves as the breeding ground to 28 bird species (Ippi et al., 2009; Rozzi, 2010). Here, passerines are the most diverse and abundant group of terrestrial vertebrates, due to the absence of herpetofauna and the limited number of native terrestrial mammals (Dardanelli et al., 2014). Hence, nest-site selection takes place in the context of a simple assemblage of vertebrate predators, which until the end of the twentieth century included only diurnal and nocturnal raptors (e.g., Accipiter chilensis, Caracara plancus, Glaucidium nana, Falco sparverius, Milvago chimango, and Strix rufipes; Ippi et al., 2009; Schüttler et al., 2009). Among the most common open-cup passerines breeding in these forests are the White-crested Elaenia (Elaenia albiceps), Rufous-collared Sparrow (Zonotrichia capensis), Patagonian Sierra-Finch (Phrygilus patagonicus), Austral Thrush (Turdus falcklandii), and Tufted Tit-Tyrant (Anairetes parulus). Although abundant across their range (Medrano et al., 2018), little is known about these species regarding their nesting habits and nest survival.
In other systems birds prefer to nest in sites with lower risk of depredation by avian predators (Sergio, Marchesi & Pedrini, 2003; Roos & Pärt, 2004; Latif, Heath & Rotenberry, 2012). On Navarino Island, bird nesting strategies also may be aimed at reducing the risk of depredation by raptors, the top native predators in this ecosystem. Preliminary evidence suggests that, for example, T. falcklandii on Navarino Island breeds closer to the ground than mainland populations (Jara et al., 2019) where the predator assemblage includes several terrestrial species such as wild cats and foxes (Zúñiga, Muñoz Pedreros & Fierro, 2008; Altamirano et al., 2013). However, the simple predator-prey system on Navarino Island, dominated almost exclusively by raptors, was disrupted two decades ago with the introduction of the American mink (Neovison vison) (Rozzi & Sherriffs, 2003), and the rapid increase of feral domestic cats (Felis catus) and dogs (Canis lupus familiaris) (Rozzi et al., 2006b). These three exotic predators actively prey on passerine birds on Navarino Island (Schüttler, Cárcamo & Rozzi, 2008; Schüttler, Saavedra-Aracena & Jiménez, 2018), and worldwide (Ferreras & Macdonald, 1999; Bartoszewicz & Zalewski, 2003; Doherty et al., 2016). Hence, the arrival of these mammals presented a new predation pressure for birds nesting on Navarino Island and may represent an ecological trap for birds that evolved in the absence of terrestrial predators. The increasing abundance of these novel predators during the first two decades of the 21st century coincides with the rapid disappearance from the island of the Magellanic Tapaculo (Scytalopus magellanicus), a small passerine with poor flying capacity that inhabits the understory of South American temperate forests (Rozzi et al., 1996). This bird was detected in the Omora Ethnobotanical Park until 2003 (Ippi et al., 2009), but not in recent surveys of the area (RD Crego, pers. comm., 2015). According to the total-foliage hypothesis, to reduce the risk of predation, passerines should nest in sites that provide more nest concealment (Table 1). According to the predator proximity hypothesis, passerines should select nest-sites that avoid the presence of predators, thus reducing the risk of predation. Based on these hypotheses, we predicted that on Navarino Island birds place nests in sites with denser and taller understory, and would avoid placing nests close to the canopy (exposing them to perched raptors) or too close to the ground (exposing them to recently introduced ground predators) (Table 1). We also predicted that survival rates would be lower in nests located at these extremes of the vertical axis of the forest structure. To test these hypotheses, we collected data on forest-dwelling passerines in the world's southernmost forests with two general goals: (i) to test the importance of habitat characteristics on nest-site selection, and (ii) to determine how habitat characteristics and temporal variables influence daily nest survival rate (DSR). We examined habitat variables that are relevant for nest survival according to the total-foliage and predator proximity hypotheses (Table 2).

Table 1 (excerpt). Hypothesized habitat effects on nest-site selection:
Understory cover (Total-foliage hypothesis): positively associated with nest presence. Rationale: More understory cover provides more visual nest concealment and interferes with the transmission of odors and sounds coming from the nest that could be detected by a predator.
Understory height (Predator proximity/Total-foliage hypotheses): positively associated with nest presence. Rationale: Taller understory provides more nest concealment against predators, and allows for higher nest placement, which reduces accessibility for ground predators.
Study site
The study site is located on the northern coast of Navarino Island (54°S), within the Cape Horn Biosphere Reserve, at the southern end of South America. Its forests encompass a mixture of only six tree species, and are dominated by the broadleaf evergreen species Nothofagus betuloides (Rozzi et al., 2008). The understory has low abundance and diversity of shrub species, but is covered by a diverse and dense carpet of bryophytes (Rozzi et al., 2008). The regional climate is oceanic, resulting in a mean rainfall of 467 mm distributed homogeneously throughout the year, and in a low annual temperature range, with a mean temperature of 10.8 °C during the warmest month in summer and 1.9 °C in the coldest month in winter. We surveyed for nests along 28 km throughout the northern shore forests; however, most of our efforts were concentrated within the more accessible and protected forests of the Omora Ethnobotanical Park (54°56′S, 67°39′W) (Rozzi et al., 2006a).
Nest searching and monitoring
We searched for nests during three breeding seasons: 2014-2015 (November-January), 2015-2016 (October-February) and 2016-2017 (October-January). We located active nests (under construction or containing at least one egg or young) by observing and following adults exhibiting signs of breeding or nesting behavior (carrying nest material, defending territory via alarm calls, or carrying food or fecal sacs in their bills). In cases where we suspected the nest was in a well-delineated small area, but we were unable to see it, we scanned the vegetation with a thermal imaging camera (FLIR One, © 2014 FLIR® Systems, Inc.) to help locate the nest. We monitored active nests until young fledged or the nest failed, using both camera traps (Bushnell Trophy Cam: Bushnell Corp., Overland Park, KS, USA) and nest visitation. We deployed a camera trap between 1-3 m from the nest, depending on the surrounding vegetation. We set cameras to take three consecutive pictures per trigger (to increase chances of detecting the predator) and set a one-minute delay between triggers. We did not deploy cameras during the laying and early incubation period to prevent nest abandonment (Pietz & Granfors, 2000). Approximately 10% of nests did not have cameras deployed at any stage. We typically visited nests every other day, unless we suspected a possible change of nest developmental stage (i.e., laying, incubation, nestling), in which case we visited them every day. During our nest visits, we verified that no predators were in the vicinity that could observe our movements and later prey on the nest; otherwise, we did not approach the nest at that time. We considered a nest successful if: (i) the nest was empty and there were fledglings near it, (ii) the camera detected them fledging in the absence of predators, and/or (iii) the nest was empty and there was fecal matter on the rim of the nest or underneath it. We considered a nest to have failed if: (i) there were dead nestlings on or around it, (ii) it was empty (either intact or destroyed) before the earliest possible date of fledging, or (iii) the eggs never hatched and there was no adult activity (i.e., abandoned during incubation).

Table 2 (excerpt). Predicted temporal and habitat effects on daily nest survival rate (DSR):
Temporal effects.
Day of year: negatively associated with DSR. Rationale: Late nesters will have lower nest survival because of the overlap with increased depredation pressure in the forest interior (i.e., N. vison), due to their breeding dynamics.
Nest age (linear vs quadratic effects) and nest stage: negatively associated with DSR. Rationale: Nest age and stage influence adult behavior around the nest (increased nest visitation for food provisioning), and increase noise and odor from nestlings; these cues could be detected by predators.
Habitat effects.
Concealment: positively associated with DSR. Rationale: Under the 'total-foliage' hypothesis, more nest concealment not only protects the nest and its content from predators, but also the adults entering and leaving it.
Nest height off the ground (linear vs quadratic effects): positively associated with DSR. Rationale: Under the 'predator proximity' hypothesis, nests closer to the ground will be more susceptible to ground predators.
Ground predator index: negatively associated with DSR. Rationale: Under the 'predator proximity' hypothesis, nests with a higher index score will be more susceptible to predation.
Canopy cover, canopy height, understory cover and understory height: variables associated with nest-site selection will have an equivalent effect on DSR. Rationale: The rationale of these variables' effect on DSR is equivalent to that described for nest-site selection (Table 1).
Nest site characteristics
After nesting ended, we characterized the nest site following a modified BBIRD protocol (Martin et al., 1997). We measured habitat features that might influence the presence of predators and their ability to find nests, including potential perching substrates for raptors, and features that contribute to nest concealment. Within a 5-m radius plot, centered on the nest, we recorded nest height from ground (cm) (hereafter nest height, measured to the rim of the nest), mean nest coverage (%) (hereafter, concealment, estimated as the mean nest coverage measured from 1 m above the nest and from each cardinal direction), canopy cover (%), canopy height (m), understory cover (%), and understory height (cm).
We also visually estimated a ground predator (i.e., American mink, rodents, dogs, and cats) accessibility index for every nest. This index ranged from 0-2 with 0 indicating nests that were difficult for a ground predator to access (i.e., nest placed high in a tree without easily accessible branches from the ground), 1 indicating nests that could be accessible from the ground (i.e., nest above ground level but of easy access for a ground predator through climbable branches), and 2 indicating nests that were placed on the ground and could have been easily accessed by potential ground predators.
We assessed nest site selection by measuring the same habitat characteristics (except those specifically related to the nest) using a paired-random plot for each nest. Each random plot was located at a random direction and random distance between 25-70 m from the nest. We chose this distance to maximize the chances the plot was within the home range of the breeding pair. However, because there is no information of home range sizes for these species, these distances are based on personal observations during the study. Before we measured habitat characteristics at the random-paired plot, we verified that active nests of these species were not present at the plot.
Nest-site selection
We used logistic regression to investigate whether habitat characteristics influenced nest-site selection. We developed separate candidate models for each species to assess the probability that a plot contained a nest as a function of canopy cover, canopy height, understory cover, and understory height ( Table 1). The response variable was either 1 or 0, indicating presence or absence of a nest, respectively. We ran these four univariate models, as well as all possible combinations of variables, excluding interactions, and estimated their Akaike information criterion corrected for small sample size (AIC c ) (Burnham & Anderson, 2002). We selected the top model as the one having the lowest AIC c , and evaluated parameter importance by determining whether or not their 95% confidence interval (CI) included zero (Tabachnick & Fidell, 2001). Before fitting the models, we checked for outliers with Cook's distance (D), and for correlation among covariates (r > 0.75). For T. falcklandii there was one outlier for understory height (Cook's D > 1). Replacing this value with the mean of the variable produced similar results as the original value. Furthermore, this variable did not have a meaningful effect on the response variable (see 'Results'). Therefore, we conducted the analysis with this outlier in the data. We used χ 2 tests to determine goodness of fit of the final models, accepting the model if p > 0.05. We calculated the odds ratio to determine the effect of significant habitat predictor variables on the likelihood of a plot containing a nest.
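A minimal sketch of this all-subsets model selection follows, assuming the data were organized as one row per plot (nest or paired random plot) with a binary nest-presence response; the column names and the use of Python/statsmodels here are illustrative stand-ins for the authors' actual R workflow.

```python
# Hedged sketch: fit nest presence (1) vs. random plot (0) on every combination of
# habitat covariates and rank the candidate models by AICc. Column names are assumed.
from itertools import combinations
import pandas as pd
import statsmodels.api as sm

covariates = ["canopy_cover", "canopy_height", "understory_cover", "understory_height"]

def aicc(fit, n_params, n_obs):
    # AICc = AIC + 2k(k+1)/(n - k - 1), the small-sample correction used in the text
    return fit.aic + (2 * n_params * (n_params + 1)) / (n_obs - n_params - 1)

def rank_models(df):
    """Fit all covariate combinations (including the intercept-only model) and sort by AICc."""
    results = []
    for k in range(len(covariates) + 1):  # k = 0 is the null (intercept-only) model
        for subset in combinations(covariates, k):
            if subset:
                X = sm.add_constant(df[list(subset)])
            else:
                X = pd.DataFrame({"const": 1.0}, index=df.index)
            fit = sm.Logit(df["nest_present"], X).fit(disp=0)
            results.append((aicc(fit, X.shape[1], len(df)), subset, fit))
    return sorted(results, key=lambda r: r[0])

# Example use with a hypothetical data frame of nest and paired random plots:
# best_aicc, best_subset, best_fit = rank_models(plots_df)[0]
# import numpy as np; odds_ratios = np.exp(best_fit.params)  # odds ratios for the top model
```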
Nest survival
We used the logistic exposure method (Shaffer, 2004) to investigate temporal and habitat variables that influenced daily nest survival rate (DSR) by species (Table 2). We evaluated alternative models using a two-stage process. First, we evaluated temporal variables: nest age (days since first egg was laid; linear vs quadratic effects), nest stage (egg [laying and incubation] vs nestling), and day of year (linear vs quadratic effects). We used the best model from this first stage (the one with lowest AIC c ) as the starting model and evaluated habitat variables in the second stage: concealment, canopy cover, canopy height, understory height, understory density, nest height (linear vs quadratic effects), and ground predator accessibility index. From the second stage, we selected the model with lowest AIC c as the final model for each species. We evaluated the importance of each parameter in the final model by determining whether their 95% CI included zero (Tabachnick & Fidell, 2001). For both stages, we built candidate models using all possible combinations of variables, excluding interactions. Finally, we assessed the goodness of fit of the final models with χ 2 tests, accepting the model if p > 0.05. We estimated overall nest survival with the final DSR model for every species, holding continuous variables at their standardized mean value (x = 0). For models with categorical variables, we estimated a separate DSR for each level of the variable(s). To estimate total survival, we raised DSR to an exponent equal to the average number of risk days (i.e., either per nesting stage or whole nesting cycle) per species. We used duration of incubation and nestling periods determined for these species in the same study area (Jara et al., 2019). Because the duration of incubation of T. falcklandii is still unknown, we used 13 days as it is the average incubation of T. migratorius (Ehrlich et al., 1988).
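The logistic-exposure likelihood and the overall-survival calculation described above can be sketched as follows. This is a schematic re-implementation under stated assumptions (interval-level data with an exposure length in days and a survival indicator), not the authors' code, and variable names are illustrative.

```python
# Hedged sketch of the logistic-exposure model (Shaffer, 2004): each observation is a
# nest-visit interval with exposure length t days and outcome y (1 = survived the
# interval). DSR depends on covariates through a logit link; interval survival = DSR**t.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, t, y):
    dsr = 1.0 / (1.0 + np.exp(-X @ beta))   # daily survival rate for each interval
    p = dsr ** t                             # probability of surviving the whole interval
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_logistic_exposure(X, t, y):
    beta0 = np.zeros(X.shape[1])
    return minimize(neg_log_lik, beta0, args=(X, t, y), method="BFGS")

# Overall nest survival as described in the text: DSR (here at mean covariate values)
# raised to the number of risk days, e.g. the 13-day incubation assumed for T. falcklandii.
def overall_survival(dsr, risk_days):
    return dsr ** risk_days
```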
For the two species with the largest number of nests (E. albiceps n = 27 and Z. capensis n = 35) we used generalized linear mixed models (R package lme4 v1.1.18.1; Bates et al., 2015), using breeding season as a random factor to control for annual differences. For the other three species (P. patagonicus n = 16, T. falcklandii n = 7, and A. parulus n = 14), sample size was insufficient for mixed model convergence. Therefore, we used generalized linear models (R Development Core Team, 2018) and excluded breeding season from the analysis, which was correlated with ground predator index for these three species. Furthermore, in a prior analysis we determined that breeding season did not have a meaningful effect on DSR for any of these three species. We checked for outliers with Cook's distance, correlation among continuous variables (r > 0.75), and correlation among categorical variables (assessed with a χ2 test, p < 0.05). For T. falcklandii there was one outlier for concealment (Cook's D > 1) that did not affect model results. Therefore, we conducted all the analyses with this outlier in the data. The only significant correlation among covariates was between canopy height and understory height (r = −0.97) for T. falcklandii. We included understory height in the candidate models because it would be easier to measure in the field for future studies. For Z. capensis, we only evaluated explanatory variables for nests that were on the ground because all three nests above the ground were successful (there was quasi-complete separation of data points). We replaced missing values with the mean of the variable (Acock, 2005). Across species and variables, 2.6% of exposure periods (the time between nest visits) had missing values. All continuous variables were standardized to a mean of zero with one unit of standard deviation for analysis (Schielzeth, 2010).
Before we fit nest survival models for each species, we evaluated the potential for a researcher effect on DSR based on camera deployment and nest visitation. Deploying a camera and/or visiting a nest could negatively affect DSR because parents could abandon their nests due to the disturbance. To evaluate the effect of camera presence, we incorporated an indicator variable where 1 = nests with a camera for that exposure period, and 0 = nests without a camera for that exposure period. To evaluate the effect of visits on DSR we created a continuous variable of cumulative number of visits. For this, we assumed that the effect of visiting a nest was delayed (it occurred after we left the nest) and that it was higher the more times we visited a nest. If either the camera or visit effect was significant, we kept the variable(s) in the final model. All analyses were performed in R 3.5.1 (R Development Core Team, 2018).
Nest-site selection
We located 104 nests for the five species during three breeding seasons (E. albiceps n = 28, Z. capensis n = 35, P. patagonicus n = 17, T. falcklandii n = 8, and A. parulus n = 16). Nest-site habitat characteristics varied both within and among species (Table S1). Understory cover positively influenced nest-site selection in three of the five species (Z. capensis, P. patagonicus, and A. parulus) ( Fig. 1 and Table 3). The odds of a plot containing a nest of any of these three species increased by a factor of 1.03 with every 1% increase in understory cover. Conversely, this parameter negatively influenced nest-site selection for E. albiceps; however, its 95% CI overlapped zero (Table 3). Understory height positively influenced nest-site selection of P. patagonicus ( Fig. 1 and Table 3). Finally, there was a weak effect of understory height and canopy height on nest-site selection of A. parulus (Table 3). The models provided a good fit for the data (Table S2). For T. falcklandii, the best model was the null model (Table 3), indicating that none of the habitat characteristics that we measured showed strong effects on nest location. For a complete list of competing nest-site selection models, see Table S3.
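As a worked example of the odds-ratio interpretation reported above: an odds ratio of 1.03 per 1% of understory cover corresponds to a logistic coefficient of roughly ln(1.03) ≈ 0.03 on the logit scale. The snippet below simply back-calculates this; the coefficient value is implied by the reported odds ratio rather than reported directly in this excerpt.

```python
# Hedged illustration of the odds-ratio arithmetic behind the result above.
import math

beta_understory_cover = math.log(1.03)                             # ≈ 0.0296 per 1% cover (implied)
odds_ratio_per_percent = math.exp(beta_understory_cover)           # ≈ 1.03
odds_ratio_per_10_percent = math.exp(10 * beta_understory_cover)   # ≈ 1.34 for a 10% increase
print(round(odds_ratio_per_percent, 3), round(odds_ratio_per_10_percent, 2))
```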
Nest survival
Of the 98 nests monitored that had a known fate, 52% of them failed (n = 51). The success rate per species was: E. albiceps 33% (n = 18), Z. capensis 44% (n = 34), P. patagonicus 63% (n = 16), T. falcklandii 43% (n = 7), and A. parulus 71% (n = 14). Of the 51 failed nests, 71% (n = 36) were due to predation. However, we were unable to identify the predator for 58% (n = 21) of the predation events (either the nest did not have a camera, or the camera failed to capture the event). We only identified three predators in the system: M. chimango, N. vison and Glaucidium nana, which accounted for 13 (87%), 1 (7%), and 1 (7%) of the depredated nests where we were able to identify the predator, respectively. Milvago chimango mostly depredated nestlings, whereas the latter two depredated eggs. Most predation events (69%) occurred during the nestling stage. Nest abandonment accounted for the remaining failed nests (n = 15). Elaenia albiceps and T. falcklandii had the highest abandonment rate, 26% (n = 7) and 29% (n = 2), respectively. In contrast, P. patagonicus and A. parulus abandoned 0% and 7% (n = 1) of their nests, respectively. Zonotrichia capensis abandoned 15% (n = 5) of its nests. Most abandonments (80%, n = 12) occurred during the incubation stage. There was no evidence that researcher visitation affected DSR for any species. Thus, we examined the influence of temporal and habitat variables without considering the effect of our visits. Zonotrichia capensis was the only species where camera presence affected DSR; DSR increased by 22% when a camera was present (β = 1.97; 95% CI [0.20-3.61]; Fig. 2). Thus, for this species only, we proceeded with model selection by including a camera (Fig. 3). For the DSR best-supported model of E. albiceps, we found that there was a positive, non-linear effect of nest age, as well as a negative effect of canopy cover and understory height (Fig. 4). For Z. capensis, in addition to the camera effect, DSR was strongly influenced by nest age (Fig. 2). DSR followed a similar pattern in the presence or absence of a camera, although survival was higher when a camera was present ( Fig. 2A). Nestlings had a higher probability of surviving than eggs (Fig. 2). Overall nest survival during the egg stage (based on DSR) was 26.8% and 1.4%, in the presence and absence of cameras, respectively. Overall nest survival during the nestling stage was 89.2% and 45.0%, in the presence and absence of cameras, respectively (Fig. 3). DSR of P. patagonicus declined slightly with increasing nest age, understory cover, and understory height, and it strongly increased with more nest concealment (Fig. 5). For T. falcklandii, DSR declined with increasing understory cover (Fig. 6). Finally, we did not find any strong temporal or habitat effects on DSR for A. parulus (Table S4), though estimates showed a weak positive relationship with nest height and understory cover, and a positive quadratic relationship with nest age. The best-supported model for every species were good fits for the data (Table S2). For details on parameter estimates and CIs of the best-fitted nest survival models for each species, see Table S4. For a list of competing nest survival models, see Table S5.
DISCUSSION
Our study provides new evidence that nest survival is influenced by predation. In addition to supporting previous studies that have found similar effects (Nice, 1957;Ricklefs, 1969;Liebezeit & George, 2002;Bellamy et al., 2018;Reidy & Thompson, 2018), we propose a novel hypothesis that combines characteristics of the habitat where the nest is built (canopy height, canopy cover, understory height, understory cover-total-foliage hypothesis) and of the habits of potential and actual predators (attack mode of aerial vs terrestrial predators-predator proximity hypothesis) that influence (a) nest-site selection and/or (b) nest survival (breeding success). In several cases, our sample sizes are limited, providing DSR estimates with considerable uncertainty. Therefore, our results should be taken as preliminary findings that represent baseline data for most of these species and, as a whole, provide support for both nest placement hypotheses.
(a) Nest-site selection
We found percentage of understory cover was the most important habitat variable explaining nest-site selection, as it affected three of the five species. Zonotrichia capensis, P. patagonicus and A. parulus significantly preferred nesting sites with a greater percentage of understory cover (Fig. 1). In addition, P. patagonicus preferred to nest in sites with taller understory (Fig. 1). These findings are consistent with the total-foliage hypothesis, which assumes that more foliage reduces the risk of depredation because it interferes with visual, auditory, and olfactory cues for avian and mammalian nest predators (Martin & Roper, 1988; Martin, 1993). Our findings for these three species are also consistent with previous studies on North American passerines, where species placed their nests in higher understory density and/or cover compared to non-nest random plots (Liebezeit & George, 2002; Benson et al., 2009; Wynia, 2013). The other two passerine species, E. albiceps and T. falcklandii, selected nest sites with characteristics that were no different from random-paired plots. There are at least two possible explanations for this lack of effect. First, there is a pattern(s) that we were unable to detect, perhaps due to limited sample size. Among the five species studied, E. albiceps and T. falcklandii exhibit the highest diversity of substrates used for nesting (Jara et al., 2019). In addition, E. albiceps and T. falcklandii nest at the high and low extremes of the vertical forest profile (Jara et al., 2019). The high heterogeneity of nesting substrate and position in the vertical axis of the forest could make it more complex to detect a pattern. Second, birds could be using an unstructured pattern for nest placement to deter predators from learning to scan for nests. This has been suggested for the Hermit Thrush (Catharus guttatus; Martin & Roper, 1988) and the White-tailed Ptarmigan (Lagopus leucurus; Wiebe & Martin, 1998) in North America. This mechanism could also provide an explanation for the lack of significant association between nest-sites and the other two examined habitat variables (canopy height, canopy cover) in the five studied passerine species. More research with larger sample sizes is needed to elucidate this potential explanation for predator avoidance.
Overall nest survival
Overall nest survival rates were high for P. patagonicus (87.0%) and A. parulus (99.9%) (Fig. 3). The nest survival rate recorded for T. falcklandii (48%) in the remote sub-Antarctic forests on Navarino Island is higher than the 20% that has been recorded for conspecific populations in temperate forests farther north in southwestern Patagonia on Chiloé Island (42°S), Chile (Willson et al., 2014). In contrast, survival rates were low for E. albiceps (31.0%) and Z. capensis (1.4-89.2% depending on camera presence and nest stage; Fig. 3). The low rates we found for these two species are similar to the rates found for conspecific populations of E. albiceps breeding on Chiloé Island, Chile (27% nest success) (Willson et al., 2014), and of Z. capensis breeding in the central Monte Desert, Argentina (34°S; 9.4% nest success) (Mezquida & Marone, 2001).
On Navarino Island, T. falcklandii builds its nests closer to the ground than farther north in the temperate forest biome (Jara et al., 2019). This could be associated with the fact that mammalian ground predators are present in temperate forests, but until recently, were absent on Navarino Island (Jara et al., 2019). Our results thus open the following new questions regarding survival rate: (1) Could the difference in proximity to prevailing predators explain the differences in nest-placement and nest survival in T. falcklandii at different latitudes? (2) Can the historical absence of ground mammalian predators and the presence of aerial bird predators on Navarino Island explain the high survival rates of P. patagonicus and A. parulus? (3) Why do the nest-survival rates of these species differ from the low rates detected for E. albiceps and Z. capensis?
Milvago chimango is a common raptor in southern South America that inhabits a variety of habitat types, including forests, shrub-lands, steppes, and coastal ecosystems, as well as anthropogenic habitats such as plantations and cities (Rozzi et al., 1996). This opportunistic raptor is a generalist predator that uses a wide variety of foraging techniques. It can fish using a 'glide-hover' technique, catch fleeing insects while flying through fires, or wade to catch frogs and tadpoles (Del Hoyo, Elliot & Sargatal, 1994;Sazima & Olmos, 2009). In the forests of Navarino Island, it mostly searches for prey while perched and flying overhead (RF Jara and RD Crego, pers. obs., 2015). On Navarino Island M. chimango also depredates nests irrespective of their height from the ground (Crego, 2017). Consequently, it exerts a predation pressure from above (like other raptors) and from below (like ground predators). This suggests that birds on this island may have already developed nesting strategies to avoid ground predation pressure, even before mammalian ground predators were introduced. Milvago chimango populations increase with human disturbance, like those generated on Navarino Island during the last couple of decades by the king-crab industry dumping large quantities of shellfish exoskeletons. Thus, it is possible that this raptor's population has increased on the study site over time, which would represent an ecological trap, because birds on this island evolved under different historical and current predator abundance conditions (Chalfoun & Schmidt, 2012). We therefore recommend monitoring population growth and subsequent impact of M. chimango on nesting passerines in the Cape Horn Biosphere Reserve.
The only ground mammalian predator we identified was N. vison, which depredated 7% of nests with a known predator. This semi-aquatic mustelid was introduced to Navarino Island at the end of the 20th century (Rozzi & Sherriffs, 2003) and is known for its negative impacts on native birds on Navarino Island (Schüttler, Cárcamo & Rozzi, 2008; Schüttler et al., 2009; Maley et al., 2011), and worldwide (Ferreras & Macdonald, 1999; Nordström & Korpimäki, 2004; Bonesi & Palazon, 2007; Brzeziński et al., 2012). However, and contrary to our expectations, its nest depredation rate on passerines was very low. A possible explanation could be a mismatch between the periods of N. vison's peak activity in the forest (summer) (Crego, 2017) and the onset of the passerine nesting season (spring) (Jara et al., 2019). Alternatively, because we were unable to identify the predator in 58% of the events, we may have underestimated the effect of this mustelid, and of other potential predators such as feral cats and dogs, on nest survival, as birds are part of N. vison and cat diets (Schüttler, Cárcamo & Rozzi, 2008; Schüttler, Saavedra-Aracena & Jiménez, 2018). Contrary to previous findings on artificial nests (Willson et al., 2001; Maley et al., 2011), we found no evidence of nest predation by rodents or House Wrens (Troglodytes aedon). This may be because in our study of natural nests, parents can actively deter rodents and/or House Wrens (Jara et al., in prep.).
Habitat and temporal effects on nest survival: support for nest placement hypotheses
Nest-site selection was positively influenced by higher percentage of understory cover (Z. capensis, P. patagonicus, and A. parulus) and taller understory (P. patagonicus) (Table 3). However, for Z. capensis and A. parulus, understory cover and understory height did not affect nest survival. Furthermore, for P. patagonicus, these two habitat characteristics had an opposite effect, negatively influencing DSR (Figs. 4C, 4B, 5C, and 6A). Thus, it seems that these species may be selecting nest-sites that not only have a neutral effect on nest survival, but actually decrease their survival rates. Given that predation was the main cause of nest failure, it is possible that there is a disconnect between birds assessing the risk of predation (and selecting the appropriate nest-site) and the actual risk of predation. This again might be due to the above-mentioned ecological trap regarding the increased abundance of M. chimango due to anthropogenic factors. Furthermore, passerine populations on this island have evolved with a different predator assemblage (i.e., only aerial predators), but this has been disrupted with the introduction of exotic mammalian ground predators to this island, and the rapid increase of feral domestic cats and dogs, less than 20 years ago. This ecological trap would imply a delay in the ability of birds to adapt nesting behavior in response to a new type and/or abundance of predators. Alternatively, the mismatch between nest-site selection and DSR also could be due to methodological problems (e.g., limited sample size, wrong choice of habitat variables, etc.), or ecological-evolutionary reasons (e.g., tradeoffs with other selection pressures such as microclimate and access to food, etc.) (reviewed by Chalfoun & Schmidt, 2012). Further research will be needed to assess hypotheses that could explain this mismatch between nest-site selection and nest survival.
For two of the three species in which we found a nest age effect on DSR (E. albiceps, Z. capensis, and P. patagonicus), the pattern was similar (i.e., quadratic effect) even though its magnitude varied considerably ( Figs. 2A and 4A). The low rates of nest failure during the laying and incubation periods suggest marginal effects of nest abandonment and depredation by M. chimango and N. vison (the only two identified predators during these nest stages) during the first half of the nesting cycle. Daily survival rates were lowest soon after hatching ( Figs. 2A and 4A). This may reflect the sudden increase in cues to predators coming from nestlings (visual, auditory, and olfactory) and parents (visual and auditory, as their nest visitation frequency suddenly rises) (Cresswell, 1997;Martin, Scott & Menge, 2000;Grant et al., 2005), which increases their vulnerability to predation. After reaching its lowest rate after hatching, nest survival increased steadily during the nestling period ( Figs. 2A and 4A). This pattern, which has previously been observed in passerines (Pietz & Granfors, 2000;Grant et al., 2005), could be due to increased parental nest defense as nestlings get closer to fledging (Montgomerie & Weatherhead, 1988). This is particularly relevant for these five species on Navarino Island, as they only have one brood per breeding season (Jara et al., 2019), and therefore have a greater incentive to protect their nest as young near fledging. Another non-exclusive explanation includes 'forced-fledging' of nestlings by potential predators (Pietz & Granfors, 2000). Nestlings that are close to fledging age may avoid depredation by leaving the nest prematurely when they are at imminent risk. This behavior may decrease depredation-induced nest failures towards the end of the nesting cycle.
Higher nest concealment for P. patagonicus increased its nesting success (Fig. 5D), which is consistent with the total-foliage hypothesis (Martin & Roper, 1988;Martin, 1993). According to this hypothesis, predators have a harder time locating nests with higher concealment, because it may be harder to detect them visually, aurally, and/or olfactorily. It has been suggested that M. chimango can detect nests visually (Crego, 2017), so it seems P. patagonicus may be trying to avoid being detected by nest predators in this system. The parental behavior of this passerine may also be an important contributing factor. Phrygilus patagonicus sits still on the nest in response to the presence of a predator, unlike what we observed for the other species, which flush considerably sooner and exhibit alarm behaviors (RF Jara, pers. obs., 2015). In the other species, higher nest concealment may not improve nesting success due to their more agitated parental behavior that, in contrast to P. patagonicus, may counteract any concealment advantage.
We found that higher percentage of canopy cover above nests of E. albiceps decreased their nest survival (Fig. 4B). This is consistent with the predator proximity hypothesis where nests at higher risk of predation (i.e., aerial or ground) should have lower survival. More canopy cover allows for the presence of M. chimango, the most common nest predator we were able to identify, because this forest raptor not only nests in the canopy, but also uses branches in the canopy to perch and look for prey (RF Jara and RD Crego, pers. obs.).
Camera effect on nest survival
We found evidence that for Z. capensis, the presence of a camera increased DSR by 22%. This positive camera effect has been reported for other bird species or systems (Thompson III, Dijak & Burhans, 1999; Buler & Hamilton, 2000; Pietz & Granfors, 2000; Small, 2005; reviewed by Richardson, Gardali & Jenkins, 2009). Cameras may have a deterrent effect on predators, possibly through neophobia towards these devices, which would consequently reduce depredation of these nests. However, this is unlikely to be the case for our study system because M. chimango was the main nest predator across all five species, yet we only found a camera effect for Z. capensis. This suggests M. chimango did not exhibit neophobia towards the cameras. Furthermore, this raptor has been described as having low neophobia (Biondi, Bó & Vassallo, 2010). Alternatively, there could be a bias introduced by delaying camera deployment until later in the nesting cycle. Nests that failed earlier in the cycle, when the camera was absent, may then positively bias our estimates of DSR for nests with a camera later in the cycle. Finally, and possibly a more likely explanation, this result may have been an artifact of the limited number of exposure periods without cameras (i.e., 7.7%; n = 83/1077).
CONCLUSIONS
This study provides the first data on nest-site selection and survival of open-cup-nesting passerines in sub-Antarctic forests. We also propose a novel hypothesis that represents a relationship between the habitat and type of predators. Although our study was conducted on a single location, this hypothesis could be tested for nest-site selection and nest survival in other regions. The bird species we studied selected nest-sites with more understory cover and taller understory, which according to the total-foliage hypothesis would provide more concealment against predators. However, more understory cover and taller understory decreased nest survival. There seems to be a disconnect between birds assessing the risk of predation (and selecting the appropriate nest-site) and the actual risk of predation, resulting in birds selecting riskier sites for nesting. This could be attributed to an ecological trap, where birds on this island evolved with a different predator assemblage, which has been disrupted with the introduction of exotic ground mammal predators to this island and/or the increased abundance of native M. chimango associated with urban development. Further research with larger sample size will be needed to assess hypotheses that could explain this mismatch between nest-site selection and nest survival. | 2020-10-01T05:06:02.476Z | 2020-09-21T00:00:00.000 | {
"year": 2020,
"sha1": "fad0567b3e31ae27e0191abdc5a0c83ee162e469",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.9892",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fad0567b3e31ae27e0191abdc5a0c83ee162e469",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
154651493 | pes2o/s2orc | v3-fos-license | Developing College and Career Readiness Through the Man Up! Men's Leadership Summit
High school guidance counselors have a tremendous job of balancing their administrative responsibilities and providing students with career and college guidance. However, collaborative efforts that bring together guidance counselors, institutions of higher learning, and local community members can provide students with the guidance needed to set and achieve lifelong dreams. A qualitative research design was used to evaluate the merit, worth, and effectiveness of a daylong career development conference offered to male high school juniors and seniors residing in a Midwestern metropolitan area. By following Perna’s (2006) multilevel conceptual model, organizers of the Man Up! Men’s Leadership Summit brought together 166 high school males and community leaders to discuss how to chart a career path. Three themes emerged that support the effectiveness of this model. Implications are discussed and suggestions for future directions are offered.
Introduction
I think what was good about this day is that at my high school my dreams were shot down. I was told I will never succeed in what I do and that scares me. I was at one point so sure who I was but the real world, it gets scary. This program told me that my dream of becoming governor could happen and come true if I put all my heart into it, and that made me feel so much better about myself. After participating in this program, I know I need to go to college to become what I want to become. Being here made me feel welcomed and that there is hope for my future. (student response on evaluation form, March 2011)

For some students, transitioning from high school to postsecondary settings can be difficult and stressful. It seems unnatural for young people to know what they want for their future when they only have eighteen years or less of life experience to draw upon. While some are able to leave high school with the requisite skills to achieve their desired level of success, others lack the academic, social, and financial resources to realize their dreams. Either way, successfully charting a prosperous life-long career path, as a high school upperclassman, can be a daunting experience.
Many schools approach this transition by administering career interest surveys to their students. Effective career and college programming is more complex than simply analyzing the results of such a career interest survey. It requires a coordinated effort among students, parents, community members, and high school professionals to create life exploration opportunities that engage young people in meaningful experiences (Allen & Robbins, 2010). According to Marc (2010), these organized efforts provide high school students with the necessary mindset to begin formulating life-long professional and personal dreams. Once these dreams are established, students can develop the necessary skill set and begin working towards meeting their goals.
Given the multitude of responsibilities placed on high school guidance counselors, scheduling time to provide their students with career counseling and college guidance is difficult. In fact, Truong (2011) reported that high school counselors struggle to balance their administrative responsibilities with career education programming and counseling for their students. As such, collaborative efforts that bring together guidance counselors, institutions of higher education, and local community members are needed. Not only will they alleviate some of the pressure put on high school counselors, but these efforts will also make for a richer learning experience.
In reality, the high school setting is just one location where students can receive information about their future college or career options. Perna (2006) developed a multilevel conceptual model that outlines four factors that contribute to high school graduates' postsecondary decisions: (a) students, parents, and families; (b) the public school system; (c) institutions of higher education; and (d) education policy set at the federal, state, and local levels. In fact, a 2010 report titled Up to the Challenge: The Role of Career and Technical Education and 21st Century Skills in College and Career Readiness supports the benefits of Perna's model. "High school and postsecondary partnerships with employers and postsecondary educators provide pathways to employment and/or associate's, bachelor's, and advanced degrees" (Bray, Green, & Kay, p. 15). While these four factors are interconnected (Perna, 2006; Rowan-Kenyon, Perna, & Swan, 2011), there is a paucity of research documenting the impact that a secondary school-higher education institution-community partnership has on helping students chart a career path. This manuscript addresses this gap in the research by describing the evaluative results of a daylong career readiness conference hosted on the campus of a regional Midwestern university.
Purpose of the Study
We agree with previous researchers (Perna, 2006; Rowan-Kenyon, Perna, & Swan, 2011) that it is important for post-secondary institutions (4-year universities and community colleges) to make a significant contribution to helping high school students successfully transition into career paths. However, at this point we are not trying to test any specific theory; rather, we conducted this study from a constructivist perspective (Ponterotto & Grieger, 2007). Our attempt was to discover answers to our research questions about helping young males transition from high school into college and career paths.
For many adolescents the prospects of transitioning from high school to college, and ultimately into a career, can be daunting. Making the choice to attend the right college and set the right career/life goals is difficult for an eighteen-year-old high school student. While there are many programs offered by high schools that provide students with information on attending college, there are unique resources and activities that post-secondary institutions and community leaders can offer. The purpose of this study was to determine the effectiveness of a career and college readiness development conference offered to male high school juniors and seniors residing in a Midwestern metropolitan region of the United States.
Procedure
Our research team consisted of two university professors in gifted and talented education, whose research focuses on talent development, and one graduate student working on an advanced degree in gifted and talented education. The two university professors consulted with school administrators, guidance counselors, and local community professionals about the organization of the daylong college/career readiness conference. The graduate student did not participate in the organization or implementation phase of the study. Her sole contribution was to help analyze, code, and report data. Finally, the two university professors kept field notes during the conference as a way to record immediate impressions of the conference's events.
This conference was organized to study how male high school students would respond to an opportunity to build a commitment to seek postsecondary college or career options. Our purpose was to provide attendees with (a) an opportunity to interact with other high school students from throughout the region, (b) the opportunity for these students to experience a university campus, and (c) a connection between high school students and prominent business and community leaders through breakout session workshops. We used these three themes as points of departure (Charmaz, 2008) as we designed the conference, developed participant evaluation forms, and analyzed data. These themes should be viewed as a frame of reference for our study around which we defined our data. Our intent was not to form predetermined conclusions to support our pedagogical philosophies.
We invited all juniors and seniors in public schools to attend the Man Up! Men's Leadership Summit, a daylong college and career readiness development conference. Invitation packets with flyers advertising the daylong event and registration forms were sent to all high school guidance counselors in the metropolitan region (n = 35 schools). Guidance counselors were asked to post the flyers in their school's hallways and provide registration forms to all interested male juniors and seniors. While we advertised a $10 registration fee, we informed the guidance counselors that no student would be excluded for financial reasons.
All attendees completed a conference registration form. There were 60 seniors, 105 juniors, and one sophomore from a total of 20 schools. Eighteen of the schools were public and two were private. In addition, three were rural schools, ten suburban, and seven urban. Fourteen of these schools were considered to be diverse schools in terms of the percentage of students attending the schools who qualify for free and reduced lunch status. Of the 166 attendees, 20% (n = 33) had decided on which college they were attending and only 45% (n = 20) of the seniors indicated which college they planned to attend.
Conference Structure
We relied on our professional network, and the networks of our collaboration team, to identify local professionals who potentially would be willing to speak to a group of male high school students. Recruitment letters were sent inviting selected professionals to speak about their professions, describe paths they took to become successful, and offer advice on transitioning from high school to post-secondary settings. The opening keynote speaker was a professor whose research focuses on diversity issues. The themes of his presentation were the importance of setting both personal and professional goals, giving back to the community, and valuing diversity. The luncheon keynote presentation was titled "Finding Your Passion, Living Your Dream," which highlighted the importance of setting and pursuing one's life goals while finding work that is personally and professionally meaningful.
The opening keynote speaker spoke for one hour. Afterwards, participants attended the first of their two 1-hour breakout sessions. The first session focused on college readiness and the second focused on career readiness. The day concluded with a luncheon and a lunch speaker. Topics for Breakout Sessions I and II ranged from workplace skill development and college entrance themes to career readiness and career path themes (see Table 1). Participants selected and attended a variety of interactive discussion panels and session workshops with themed tracks for personal development. When registering for the conference, participants selected specific breakout session topics. Session topics focused on providing career-specific information as well as life and workplace skills. Sessions were designed to teach important skills needed to be successful in the workforce and ways for the participants to develop their individual leadership potential. Successful men who represented a variety of careers, or were experts on issues that male high school students were facing, led the individual breakout sessions. This included current university students and faculty, as well as business and community experts.
Instrumentation
At the end of the conference, participants were given an evaluation form asking them to provide feedback about the summit's merit, worth, and effectiveness (Patton, 2002). The form consisted of 10 questions. Three questions (I learned something from this conference, The information was presented in an interesting way, and I enjoyed the conference) were 4-point Likert-type items (1 = Not At All; 2 = A Bit; 3 = Enough; 4 = Very Much). The following seven items were open-ended questions that allowed participants to share their thoughts and impressions of the conference: (1) What do you think was good about the conference?; (2) What do you think would improve the conference?; (3) How has your career plan changed as a result of this conference?; (4) Describe what you dream your life to be like 15 years from now; (5) What activities will you engage in to make your dreams come true?; (6) Is there more information you require that would help you achieve your dreams?; and (7) Other Comments. No demographic or personally identifiable information was solicited on the evaluation forms in hopes that participants would feel more comfortable responding honestly.
Analysis
A qualitative evaluation research design (Patton, 2002) was used to collect and analyze participants' perceptions of the merit, worth, and effectiveness of the Man Up! Men's Leadership Summit. Data were gathered from evaluation forms submitted by conference participants and from our field notes. In order to make data analysis more manageable, the research team (two university professors and a graduate assistant) randomly selected 30% (n = 38) of the evaluation forms for intensive analysis (Elliott, Fischer, & Rennie, 1999) and compared them with our field notes. Research team members independently analyzed and coded the data into smaller units. Next, we came together to review codes and agree upon a coding structure. Once this coding structure was established, we independently themed the data and analyzed those themes. During the independent thematic analysis, distinctive categories emerged. Afterwards, we reconvened, compared similarities and differences in the themes, and developed theory (Saldaña, 2009). When there was disagreement on themes, we discussed the differing views and reached a consensus about the emerging theme in question. In order to verify the themes and theory that emerged, we conducted a less intensive examination of the rest of the sample (Elliott et al.).
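The subsampling and coder-comparison step described above can be illustrated with a short script. The following Python sketch is purely illustrative and is not the authors' analysis tool; the form identifiers, code labels, and coder assignments are hypothetical placeholders used only to show how a 30% subsample could be drawn and how simple percent agreement between two coders could be tallied before a consensus discussion.

```python
# Illustrative sketch (not the study's actual analysis script): draw a random
# 30% subsample of returned evaluation forms and tally codes assigned by two
# independent coders. All data below are hypothetical placeholders.
import random
from collections import Counter

random.seed(42)  # reproducible subsample

all_form_ids = list(range(1, 129))  # 128 returned evaluation forms
subsample = random.sample(all_form_ids, k=round(0.30 * len(all_form_ids)))

# Hypothetical codes assigned by two coders to the same subsampled forms.
coder_a = {fid: random.choice(["goals", "dreams", "speaker"]) for fid in subsample}
coder_b = {fid: random.choice(["goals", "dreams", "speaker"]) for fid in subsample}

# Simple percent agreement before the consensus discussion described above.
agreements = sum(coder_a[fid] == coder_b[fid] for fid in subsample)
print(f"Subsample size: {len(subsample)} forms")
print(f"Percent agreement: {agreements / len(subsample):.0%}")
print("Code frequencies (coder A):", Counter(coder_a.values()))
```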
Results
Out of the 164 participants who attended the conference, 128 (78%) completed and submitted the evaluation forms (see Table 2). After analyzing students' responses to the open-ended questions, the following three themes emerged from which we developed theory: (1) Motivation to Set and Achieve Career/Life Goals, (2) How Do I Make My Dreams a Reality?, and (3) Making a Personal Connection with the Speaker. The first theme demonstrates participants' desire to have a successful life. The latter two themes represent a need to provide young people with the requisite resources which will allow them to take an active role in making their lives successful.
The reported data below are representative of the entire sample. In an attempt to accurately report and illustrate data, we selected quotes from evaluation forms that represented participants' views, feelings, and intentions (Charmaz, 2008). Our intention was to provide the reader with a thick description, demonstrate evidence of the themes, and illustrate how we built our theory (Ponterotto & Grieger, 2007). Rather than correcting grammar, we left the participants' words intact.
Motivation to Set and Achieve Career/Life Goals
In describing their dreams for the future, many of the participants in this sample desired to have an established career. While some described their career aspirations in general terms like "I will have a good paying job I enjoy" or "Be in some kind of career", others had specific careers in mind. It was an eclectic mix of dreams such as "Be in the law profession or hold public office", "Own an aquaculture farm to help out the oceans against the demands of people", and "A nurse with a great job". Given the number of participants who completed the evaluation form, the wide range of interests supports previous research (Allen & Robbins, 2010; Marc, 2010) about high school male career interests and did not surprise the researchers. While many participants in this sample have set career goals, some have considered how these career goals will benefit their lives.
For example, in describing future career goals, many of the participants dreamt of having a family or being part of a strong family. Sample responses include "I would like to be in a job I enjoy doing and have a family to care about and live happy with", "Having a family and being able to provide for them", or "Own a big business and have kids and a wife". This common theme around family and parenthood among participants was not something we expected to emerge. For participants in this sample, this unanticipated finding represents an opportunity for future research. Prospective studies could investigate high school males' perspectives on parenthood and provide breakout sessions focusing on various husband/father related topics.
The event helped many realize the importance of taking an active role in determining their own future. A majority indicated that the conference either reinforced their future goals (e.g., "My career plans have not changed but I have picked up on some helpful hints" or "It strengthen my passion to join the law profession and gave me a better understanding on the extension of law itself") or helped them make better decisions about their future (e.g., "I need to let go of the less important things and do what I need to succeed" or "Has not changed but it has helped me think of new ways to boost my ambition for my career"). Based on these responses and this theme, follow-up studies should investigate whether similar conferences impact the depth of commitment participants give to pursuing their future aspirations.
As young people transition from high school students to high school graduates, choosing a career path is a big decision. However, making a wrong decision can be costly in both time and resources. Some students indicated that they learned enough about a particular profession to know they no longer wanted to pursue that path ("I learned I do not want to be an Accountant" and "I realized that the engineering field might be too difficult for me"). While these participants did not indicate if they settled on an alternative path, this realization is important because it allows them to begin inquiring about other careers.
Making a Personal Connection with the Speaker
Several students indicated that one successful element of the conference was the valuable information shared by the speakers. For instance, a majority of students indicated that the speakers' information was interesting, informative, and valuable. Comments like "I was given a lot of information no one else gives in school", "It was very informative and I learned a lot about what it was like to be a college student", and "I learned a lot of valuable information that I can use in my future" demonstrate the conference's success in being able to meet the participants' desire to speak with those who have successfully overcome life's obstacles. This was a goal that the researchers had set for participants and suggests that recruiting speakers who can share personal successes and failures is important.
A majority of students reported that the speakers really connected with them and passed on life lessons. Two comments in particular sum up this sentiment: "They [speakers] urged me to think about different aspects of my life than I have in the past. They encouraged me to think about careers and planning, as well as a professional life." Similarly, another participant remarked, "It was good hearing from someone who had overcome many obstacles in life...the speakers could relate to us and tried to guide us based on what they learned from their experiences." This finding supports Allen and Robbins' (2010) assertion that high school students value meaningful and authentic experiences. Participants in this sample valued speakers who provided real-life examples of struggle and eventual accomplishment.
By making personal connections with the participants, we believe the speakers impacted the lives of those who attended the conference. Evidence of this conclusion is supported by comments like, "My life choices have changed. I am going to try as hard as I can in school. I will make my life matter. Now I'm more aware of what I want to plan for and what goes into attending college" and "I have realized that in order to make it through college I'll have to be determined and hardworking." While we recruited speakers who could provide personal experiences about the paths they took to reach their particular level of success, we did not anticipate that they would make deep, meaningful connections with the participants.
How Do I Make My Dreams a Reality?
Although the participants gained information about their futures, they were left with a desire to learn more specifics about how to achieve their dreams. For example, a majority of students reported the conference made them realize that they still had more to learn. One student asked, "What exact steps to take for my future? I want more information on the best path to get me where I want to be and how to pay for that path." Another student commented, "Now that I am more positive of my future I want to know how to achieve it. Nobody has prepared me to do that…its frustrating to know what you want to be but not know how to make it happen." These findings support previous research (Rowan-Kenyon et al., 2011) that as young people gain knowledge about careers and potential career paths, they realize that there is more they do not know. Further evidence of this point emerged as these participants described the types of information they desired, which fell into one of the two following categories: (1) college specific and (2) career specific.
Analysis of the data revealed that many of the participants indicated they desired more specific college information. They appeared to be interested in learning about the pathways to access postsecondary institutions. These ranged from finding the financial assistance to pay for college (e.g., "How am I going to pay for college?", "…scholarships that are available to me.", and "…more information on performance scholarships. Particularly visual arts-based ones.") to selecting the right college for their desired career path (e.g., "It would help to be able to find more information about colleges and what kind of majors those colleges offer", "Knowing which majors require which classes", and "What the best major would be to achieve being an athletic trainer"). These results indicate that participants are thinking about making the right decisions that will influence their future but they lack enough information to achieve their goals.
Equally important for many of the participants was the need for more career-specific information and experiences. The responses ranged from what it is like to work in a particular field (e.g., "What is it like to be a biology teacher?", "I need more information on the different kinds of engineering jobs", and "Real world experience would help me the most") to the specific steps required to enter a given profession (e.g., "How would one join the gaming industry?", "How to get the financial means to start up a product?", and "I need information on foreign language jobs"). While the conclusions gleaned from these data indicate that participants in this sample still require information about pursuing their career aspirations, they suggest that these participants are in the midst of the natural growth of a young person learning how to chart a life-long course.
Discussion
Qualitative results from the Man Up! Men's Leadership Summit suggest the potential effectiveness of a secondary school-higher education institution-community partnership in delivering a daylong career development conference. In this study, the collaborative efforts among these groups provided students with an authentic learning experience. However, this seamless, behind-the-scenes relationship went unnoticed by the conference's participants. Still, its importance cannot be overstated in the discussion of these results. Data support existing literature that high school students in this sample require authentic career development experiences with those currently working in the field. Finally, these results revealed that, in addition to thinking about their future careers, these young males are concerned with playing an active role in the family.
Quality of the Speakers
Based on the results of this qualitative evaluative study, the Man Up! Men's Leadership Summit was ultimately successful because speakers meaningfully connected with the students in this sample. First, practicing professionals, community leaders, and current college students spoke to the obstacles many male adolescents face when transitioning from high school to postsecondary settings. Second, they provided participants with an opportunity to ask detailed questions about specific professions and solicit information about how to access the necessary resources to make entrance into a post-secondary setting possible. Finally, participants viewed the speakers as providing information that is typically not provided in schools.
One explanation for this finding may be that the speakers were not individuals with whom the participants have daily contact. Certainly the messages of hard work, perseverance, and goal setting are common themes expressed by most high school teachers, guidance counselors, and parents. However, the constant interaction with those individuals can cause an adolescent to become immune to the impact of their messages. A critical component of the Man Up! Men's Leadership Summit was that participants were removed from their high school settings and invited to a university campus. This finding supports previous research (Perna, 2006; Rowan-Kenyon et al., 2010) that authentic experiences helped reinforce career-based learning opportunities.
Establishing and Meeting Life Long Goals
The results of this study revealed that the conference helped participants recognize the importance of establishing life-long personal and professional goals. This finding supports previous research (Allen & Robbins, 2010; Marc, 2010) that, when provided with the right information and guidance, students can make informed decisions about their career paths. The detail that participants used to describe their future career goals demonstrates they were able to articulate future aspirations.
In addition to detailing their career goals, a significant number of participants aspired to have a family. While this finding surprised the researchers, perhaps the amazement represents an unidentified bias. In planning the conference, collaborators only focused on college readiness and career path related issues. Any discussion of what it means to be a 21st century male and a male's role in the family was purely coincidental. Given the limited number of participants, it is difficult to generalize these findings to a larger population; however, these findings suggest that high school males in this sample are thinking about their roles as fathers and husbands. Future career readiness conferences should dedicate some time and discussion to this topic.
Limitations
The intent of this study, as with most qualitative research, was to inform and not to generalize. Readers should view the presented data with this in mind and formulate their own opinions as to the results' applicability (Gentry, Steenbergen-Hu, & Choi, 2011). There are a few limitations that need to be addressed. The major data source was participants' evaluation forms completed immediately at the end of the conference. We did not follow up with them to evaluate the longitudinal impact of the day's events. Conducting this type of survey might have provided additional insight. Additionally, by nature of their attendance at the conference, participants were pre-motivated to develop college and career readiness skills. As such, responses on evaluation forms might have been inflated, and conclusions cannot be generalized to those who did not attend.
Conclusion
The results from the Man Up! Men's Leadership Summit bode well for partnerships among secondary schools, institutions of higher education, and community leaders in terms of developing career readiness skills and dispositions of adolescent males. Partnerships and conferences such as this can have a positive impact on assisting male students to successfully transition from secondary education to postsecondary education settings, and hopefully into career pathways. Incorporating the multilevel conceptual model in this manner allowed for a dynamic student experience.
Table 1. Breakout session topics by student participation
Table 2. Mean scores from student evaluation forms | 2018-12-05T01:57:45.622Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "7f3aaf6ecff6038bb1331bcc9f24a981d2840922",
"oa_license": "CCBY",
"oa_url": "http://journalcte.org/articles/10.21061/jcte.v28i1.570/galley/391/download/",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7f3aaf6ecff6038bb1331bcc9f24a981d2840922",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
220509091 | pes2o/s2orc | v3-fos-license | On Sharpness of Error Bounds for Univariate Approximation by Single Hidden Layer Feedforward Neural Networks
A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions. Single hidden layer feedforward neural networks with one input node perform such operations. Errors of best approximation can be expressed using moduli of smoothness of the function to be approximated (i.e., to be learned). In this context, the quantitative extension of the uniform boundedness principle indeed allows to construct counterexamples that show approximation rates to be best possible. Approximation errors do not belong to the little-o class of given bounds. By choosing piecewise linear activation functions, the discussed problem becomes free knot spline approximation. Results of the present paper also hold for non-polynomial (and not piecewise defined) activation functions like inverse tangent. Based on Vapnik–Chervonenkis dimension, first results are shown for the logistic function.
Introduction
A feedforward neural network with an activation function σ, one input node, one output node, and one hidden layer of n neurons as shown in Fig. 1 implements a univariate real function g of type

$g(x) = \sum_{k=1}^{n} a_k \sigma(b_k x + c_k)$

with weights $a_k, b_k, c_k \in \mathbb{R}$. The given paper does not deal with multivariate approximation, but some results can be extended to multiple input nodes, see Sect. 5. A sigmoid activation function σ is a function with $\lim_{x \to -\infty} \sigma(x) = 0$ and $\lim_{x \to \infty} \sigma(x) = 1$. Sometimes also monotonicity, boundedness, continuity, or even differentiability may be prescribed. Deviant definitions are based on convexity and concavity. In case of differentiability, functions have a bell-shaped first derivative. Throughout this paper, approximation properties of the following sigmoid functions are discussed: the (discontinuous) Heaviside function σ_h, the continuous piecewise linear cut function σ_c, the logistic function $\sigma_l(x) := 1/(1 + e^{-x})$, and a sigmoid function σ_a obtained by scaling the inverse tangent. The (non-sigmoid) ReLU function $\sigma_r(x) := \max\{0, x\}$ is often used as activation function for deep neural networks due to its computational simplicity. The Exponential Linear Unit (ELU) activation function

$\sigma_e(x) := \begin{cases} \alpha(e^x - 1), & x < 0 \\ x, & x \ge 0 \end{cases}$

with parameter α ≠ 0 is a smoother variant of ReLU for α = 1.

Qualitative approximation properties of neural networks have been studied extensively. For example, it is possible to choose an infinitely often differentiable, almost monotone, sigmoid activation function σ such that for each continuous function f, each compact interval, and each bound ε > 0, weights a_0, a_1, b_1, c_1 ∈ R exist such that f can be approximated uniformly by a_0 + a_1 σ(b_1 x + c_1) on the interval within bound ε, see [27] and the literature cited there. In this sense, a neural network with only one hidden neuron is capable of approximating every continuous function. However, activation functions typically are chosen fixed when applying neural networks to solve application problems. They do not depend on the unknown function to be approximated. In the late 1980s it was already known that, by increasing the number of neurons, all continuous functions can be approximated arbitrarily well in the sup-norm on a compact set with each non-constant, bounded, monotonically increasing and continuous activation function (universal approximation or density property, see the proof of Funahashi in [22]). For each continuous sigmoid activation function (that does not have to be monotone), the universal approximation property was proved by Cybenko in [14] on the unit cube. The result was extended to bounded sigmoid activation functions by Jones [31] without requiring continuity or monotonicity. For monotone sigmoid (and not necessarily continuous) activation functions, Hornik, Stinchcombe and White [29] extended the universal approximation property to the approximation of measurable functions. Hornik [28] proved density in L_p-spaces for any non-constant, bounded and continuous activation function. A rather general theorem is proved in [35]: Leshno et al. showed for any continuous activation function σ that the universal approximation property is equivalent to the fact that σ is not an algebraic polynomial.
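A minimal Python sketch (not part of the original paper) of the network function g defined at the beginning of this introduction may help fix ideas; the weight values below are arbitrary illustrative choices, not learned parameters, and the activation can be swapped between the logistic and ReLU functions mentioned above.

```python
# Evaluate g(x) = sum_k a_k * sigma(b_k * x + c_k) for a single hidden layer
# with n neurons, one input node, and one output node.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))   # sigma_l

def relu(x):
    return np.maximum(0.0, x)          # sigma_r

def network(x, a, b, c, sigma=logistic):
    """Evaluate g on an array (or scalar) of input points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Broadcast to shape (n_neurons, n_points), then sum over the hidden neurons.
    return np.sum(a[:, None] * sigma(b[:, None] * x[None, :] + c[:, None]), axis=0)

# Example with n = 3 hidden neurons on [0, 1]; weights are arbitrary, not learned.
a = np.array([0.5, -1.0, 2.0])
b = np.array([4.0, 10.0, -3.0])
c = np.array([-2.0, 1.0, 0.5])
xs = np.linspace(0.0, 1.0, 5)
print(network(xs, a, b, c))              # logistic activation
print(network(xs, a, b, c, sigma=relu))  # ReLU activation
```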
To approximate or interpolate a given but unknown function f, constants a_k, b_k, and c_k typically are obtained by learning based on sampled function values of f. The underlying optimization algorithm (like gradient descent with back propagation) might get stuck in a local but not in a global minimum. Thus, it might not find optimal constants to approximate f as well as possible. This paper does not focus on learning but on general approximation properties of the function spaces

$\Phi_{n,\sigma} := \left\{ g : [0,1] \to \mathbb{R} \;:\; g(x) = \sum_{k=1}^{n} a_k \sigma(b_k x + c_k),\; a_k, b_k, c_k \in \mathbb{R} \right\}$.

Thus, we discuss functions on the interval [0, 1]. Without loss of generality, it is used instead of an arbitrary compact interval [a, b]. In some papers, an additional constant function a_0 is allowed as a summand in the definition of Φ_{n,σ}. Please note that a_k σ(0 · x + c_k) already is a constant and that the definitions do not differ significantly. The error of best approximation E(Φ_{n,σ}, f)_p, 1 ≤ p ≤ ∞, is defined via

$E(\Phi_{n,\sigma}, f)_p := \inf\{ \|f - g\|_{X_p[0,1]} : g \in \Phi_{n,\sigma} \}$.

We use the abbreviation E(Φ_{n,σ}, f) := E(Φ_{n,σ}, f)_∞ for p = ∞. A trained network cannot approximate a function better than the error of best approximation. Therefore, it is an important measure of what can and what cannot be done with such a network.
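Since Φ_{n,σ} is not a linear space, the infimum in the definition of E(Φ_{n,σ}, f)_p generally cannot be computed exactly. The sketch below is an illustration only and not a method from the paper: it estimates the sup-norm error numerically with a local least-squares fit over several random restarts. Because the optimization is non-convex, the result is merely an upper bound on the true error of best approximation, which mirrors the local-minima caveat mentioned above. It assumes numpy and scipy are available.

```python
# Heuristic upper estimate of E(Phi_{n,sigma_l}, f) in the sup norm on [0, 1]
# by fitting the 3n free parameters with a local optimizer on a grid.
import numpy as np
from scipy.optimize import least_squares

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def model(params, x, n):
    a, b, c = params[:n], params[n:2 * n], params[2 * n:]
    return np.sum(a[:, None] * logistic(b[:, None] * x[None, :] + c[:, None]), axis=0)

def estimate_best_error(f, n, grid_size=401, restarts=20, seed=0):
    """Upper estimate of the sup-norm error of best approximation by n logistic terms."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, grid_size)
    y = f(x)
    best = np.inf
    for _ in range(restarts):
        p0 = rng.normal(scale=2.0, size=3 * n)                  # random restart
        res = least_squares(lambda p: model(p, x, n) - y, p0)   # local L2 fit on the grid
        best = min(best, np.max(np.abs(model(res.x, x, n) - y)))
    return best

print(estimate_best_error(lambda x: np.abs(x - 0.5), n=2))
```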
The error of best approximation depends on the smoothness of f, which is measured in terms of moduli of smoothness (or moduli of continuity). In contrast to derivatives, first and higher order differences of f always exist. By applying a norm to such differences, moduli of smoothness measure a "degree of continuity" of f.
For a natural number r and a step size h > 0, the rth difference of f at a point x is

$\Delta_h^r f(x) := \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} f(x + jh)$.

The rth uniform modulus of smoothness is the smallest upper bound of the absolute values of rth differences:

$\omega_r(f, \delta) := \sup\{ |\Delta_h^r f(x)| : 0 < h \le \delta,\; x, x + rh \in [0, 1] \}$,

and for r-times continuously differentiable functions f, there holds (cf. [16, p. 46])

$\omega_r(f, \delta) \le \delta^r \|f^{(r)}\|_{B[0,1]}$. (1)

Barron applied Fourier methods in [4], cf. [36], to establish rates of convergence in an L_2-norm, i.e., he estimated the error E(Φ_{n,σ}, f)_2 with respect to n for n → ∞. Makovoz [40] analyzed rates for uniform convergence. With respect to moduli of smoothness, Debao [15] proved a direct estimate that is presented here in a version of the textbook [9, p. 172ff]. This estimate is independent of the choice of a bounded, sigmoid function σ. The doctoral thesis [10], cf. [11], provides an overview of such direct estimates in Section 1.3.
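For readers who want to experiment with these quantities, the following Python sketch approximates ω_r(f, δ) by maximizing |Δ_h^r f(x)| over a grid of step sizes and evaluation points; it is a numerical illustration only and can underestimate the true supremum.

```python
# Grid approximation of the r-th uniform modulus of smoothness on [0, 1].
import numpy as np
from math import comb

def modulus_of_smoothness(f, r, delta, num_h=50, num_x=1000):
    """Approximate omega_r(f, delta) over discrete step sizes h and points x."""
    best = 0.0
    for h in np.linspace(delta / num_h, delta, num_h):
        x = np.linspace(0.0, 1.0 - r * h, num_x)  # admissible points: x + r*h <= 1
        # r-th forward difference: sum_j (-1)^(r-j) * C(r, j) * f(x + j*h)
        diff = sum((-1) ** (r - j) * comb(r, j) * f(x + j * h) for j in range(r + 1))
        best = max(best, np.max(np.abs(diff)))
    return best

f = lambda x: np.sqrt(x)  # an example that is not differentiable at 0
for r in (1, 2):
    print(r, modulus_of_smoothness(f, r, delta=0.1))
```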
According to Debao, holds for each f ∈ C[0, 1]. This is the prototype estimate for which sharpness is discussed in this paper. In fact, the result of Debao for E(Φ n,σ , f) allows to additionally restrict weights such that b k ∈ N and c k ∈ Z. The estimate has to hold true even for σ being a discontinuous Heaviside function. That is the reason why one can only expect an estimate in terms of a first order modulus of smoothness. If the order of approximation of a continuous function f by such piecewise constant functions is o(1/n) then f itself is a constant, see [16, p. 366]. In fact, the idea behind Debao's proof is that sigmoid functions can be asymptotically seen as Heaviside functions. One gets arbitrary step functions to approximate f by superposition of Heaviside functions. For quasiinterpolation operators based on the logistic activation function σ l , Chen and Zhao proved similar estimates in [8] (cf. [2,3] for hyperbolic tangent). However, they only reach a convergence order of O (1/n α ) for α < 1. With respect to the error of best approximation, they prove n by estimating with a polynomial of best approximation. Due to the different technique, constants are larger than in error bound (2). If one takes additional properties of σ into account, higher convergence rates are possible. Continuous sigmoid cut function σ c and ReLU function σ r lead to spaces Φ n,σc,r of continuous, piecewise linear functions. They consist of free knot spline functions of polynomial degree at most one with at most 2n or n knots, cf. [16,Section 12.8]. Spaces Φ n,σc,r include all continuous spline functions g on [0, 1] with polynomial degree at most one that have at most n − 1 simple knots. We show g ∈ Φ n,σc for such a spline g with equidistant knots x k = k n−1 , 0 ≤ k ≤ n − 1, to obtain an error bound for n ≥ 2: One can also represent g by ReLU functions, i.e., g ∈ Φ n,σr : With (1 ≤ k ≤ n − 1) Section 2 deals with even higher order direct estimates. Similarly to (3), not only sup-norm bound (2) but also an L p -bound, 1 ≤ p < ∞, for approximation with Heaviside function σ h can be obtained from the corresponding larger bound of fixed simple knot spline approximation. Each L p [0, 1]-function that is constant between knots x k = k n , 0 ≤ k ≤ n, can be written as a linear combination of n translated Heaviside functions. Thus, [16, p. 225, Theorem 7.3 for δ = 1/n]) yields for n ∈ N Lower error bounds are much harder to obtain than upper bounds, cf. [42] for some results with regard to multilayer feedforward perceptron networks. Often, lower bounds are given using a (non-linear) Kolmogorov n-width W n (cf. [41,45] for a suitable function space X (of functions with certain smoothness) and norm · . Thus, parameters b k and c k cannot be chosen individually for each function f ∈ X. Higher rates of convergence might occur, if that becomes possible. There are three somewhat different types of sharpness results that might be able to show that left sides of Eqs. (2), (3), (4) or (9) and (16) in Sect. 2 do not converge faster to zero than the right sides.
The most far reaching results would provide lower estimates of errors of best approximation in which the lower bound is a modulus of smoothness. In connection with direct upper bounds in terms of the same moduli, this would establish theorems similar to the equivalence between moduli of smoothness and K-functionals (cf. [16,theorem of Johnen,p. 177], [30]) in which the error of best approximation replaces the K-functional. Let σ be r-times continuously differentiable like σ a or σ l . Then for f ∈ C[0, 1], a standard estimate based on (1) is It is unlikely that one can somehow bound g (r) [24]. In fact, we prove with Lemma 1 in Sect. 2 that (5) is not valid even if constant C is allowed to depend on f . A second class of sharpness results consists of inverse and equivalence theorems. Inverse theorems provide upper bounds for moduli of smoothness in terms of weighted sums of approximation errors. For pseudo-interpolation operators based on piecewise linear activation functions and B-splines (but not for errors of best approximation), [37] deals with an inverse estimate based on Bernstein polynomials.
An idea that does not work is to adapt the inverse theorem for best trigonometric approximation [16, p. 208]. Without considering effects related to interval endpoints in algebraic approximation one gets a (wrong) candidate inequality By choosing f ≡ σ for a non-polynomial, rth times continuously differentiable activation function σ, the modulus on the left side of the estimate behaves like n −r . But the errors of best approximation on the right side are zero. At least this can be cured by the additional expression Cr n r f B[0,1] . Typically, the proof of an inverse theorem is based on a Bernstein-type inequality that is difficult to formulate for function spaces discussed here. The Bernstein inequality provides a bound for derivatives. If p n is a trigonometric polynomial of degree at most n then p n B[0,2π] ≤ n p n B[0,2π] , cf. [16, p. 97]. The problem here is that differentiating aσ(bx + c) leads to a factor b that cannot be bounded easily. Indeed, we show for a large class of activation functions that (6) can't hold, see (14). As noticed in [24], the inverse estimates of type (6) proposed in [51] and [53] are wrong. Similar to inverse theorems, equivalence theorems (like (7) below) describe equivalent behavior of expressions of moduli of smoothness and expressions of approximation errors. Both inverse and equivalence theorems allow to determine smoothness properties, typically membership to Lipschitz classes or Besov spaces, from convergence rates. Such a property is proved in [13] for max-product neural network operators activated by sigmoidal functions. The relationship between order of convergence of best approximation and Besov spaces is well understood for approximation with free knot spline functions and rational functions, see [16,Section 12.8], cf. [34]. The Heaviside activation function leads to free knot splines of polynomial degree 0, i.e., less than r = 1, cut and ReLU function correspond with polynomial degree less than r = 2. For σ being one of these functions, and for 0 < α < r, f ∈ L p [0, 1], 1 ≤ p < ∞ (p = ∞ is excluded), k := 1 if α < 1 and k := 2 otherwise, q := 1 α+1/p , there holds the equivalence (see [17]) However, such equivalence theorems might not be suited to obtain little-o results: Assume that E(Φ n,σ , f) p = 1 n β (ln(n+1)) 1/q = o 1 n β , then the right side of (7) converges exactly for the same smoothness parameters 0 < α < β than if E(Φ n,σ , f) p = 1 n β = o 1 n β . The third type of sharpness results is based on counterexamples. The present paper follows this approach to deal with little-o effects. Without further restrictions, counterexamples show that convergence orders can not be faster than stated in terms of moduli of smoothness in (2), (3), (4) and the estimates in following Sect. 2 for some activation functions. To obtain such counterexamples, a general theorem is introduced in Sect. 3. It is applied to neural network approximation in Sect. 4.
Unlike the counterexamples in this paper, counterexamples that do not focus on moduli of smoothness were recently introduced in Almira et al. [1] for continuous piecewise polynomial activation functions σ with finitely many pieces (cf. Corollary 1 below) as well as for rational activation functions (that we also briefly discuss in Sect. 4): Given an arbitrary sequence of positive real numbers (ε n ) ∞ n=1 with lim n→∞ ε n = 0, a continuous counterexample f is constructed such that E(Φ n,σ , f) ≥ ε n for all n ∈ N.
Higher Order Estimates
In this section, two upper bounds in terms of higher order moduli of smoothness are derived from known results. Proofs are given for the sake of completeness. If, e.g., σ is arbitrarily often differentiable on some open interval such that σ is not a polynomial on that interval, then it is known that E(Φ_{n,σ}, p_{n−1}) = 0 for all polynomials p_{n−1} of degree at most n − 1, i.e., $p_{n-1} \in \Pi_n := \{ d_{n-1} x^{n-1} + \dots + d_1 x + d_0 : d_0, \dots, d_{n-1} \in \mathbb{R} \}$, see [42, p. 157] and (15) below. Thus, upper bounds for polynomial approximation can be used as upper bounds for neural network approximation in connection with certain activation functions. Due to a corollary of the classical theorem of Jackson, the best approximation to f ∈ X_p[0, 1], 1 ≤ p ≤ ∞, by algebraic polynomials is bounded by the rth modulus of smoothness. For n ≥ r, we use Theorem 6.3 in [16, p. 220] that is stated for the interval [−1, 1]. However, by applying an affine transformation of [0, 1] to [−1, 1], we see that there exists a constant C independent of f and n such that

$E(\Pi_{n+1}, f)_p \le C\, \omega_r\left(f, \tfrac{1}{n}\right)_p$. (8)

Ritter proved an estimate in terms of a first order modulus of smoothness for approximation with nearly exponential activation functions in [43]. Due to (8), Ritter's proof can be extended in a straightforward manner to higher order moduli. The special case of estimating by a second order modulus is discussed in [51].
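The Jackson-type bound (8) can be checked informally by computer experiment. In the sketch below, a least-squares Chebyshev fit on a fine grid serves as a convenient stand-in for (near-)best polynomial approximation; it does not compute the true minimax error, and the comparison with 1/n only indicates the scale of ω_1(f, 1/n) for the chosen Lipschitz example.

```python
# Informal numerical check of Jackson-type decay for polynomial approximation.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_fit_error(f, degree, num_x=2001):
    x = np.linspace(0.0, 1.0, num_x)
    t = 2.0 * x - 1.0                  # map [0, 1] onto the Chebyshev interval [-1, 1]
    coeffs = C.chebfit(t, f(x), degree)
    return np.max(np.abs(C.chebval(t, coeffs) - f(x)))

f = lambda x: np.abs(x - 0.5)          # Lipschitz but not differentiable at 0.5
for n in (4, 8, 16, 32):
    print(n, cheb_fit_error(f, n), 1.0 / n)  # fitted error vs. the scale of omega_1(f, 1/n)
```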
According to [43], a function σ : R → R is called "nearly exponential" iff for each ε > 0 there exist real numbers a, b, c, and d such that for all The logistic function fulfills this condition with a = 1/σ l (c), b = 1, d = 0, and c < ln(ε) such that for x ≤ 0 there is 0 < e x ≤ 1 and Then, independently of n ≥ max{r, 2} (or n > r) and f , a constant C r exists such that Proof. Let ε > 0. Due to (8), there exists a polynomial p n ∈ Π n+1 of degree at most n such that The Jackson estimate can be used to extend the proof given for r = 1 in [43]: converge to x pointwise for α → ∞ due to the theorem of L'Hospital. Since 1] is obtained at the endpoints 0 or 1, and convergence of h α (x) to x is uniform on [0, 1] for α → ∞. Thus lim α→∞ p n (h α (x)) = p n (x) uniformly on [0, 1], and for the given ε we can choose α large enough to get Therefore, function f is approximated by an exponential sum of type within the bound Cω r f, n −1 +2ε. It remains to approximate the exponential sum by utilizing that σ is nearly exponential.
By combining (10), (11) and (12), we get Since ε can be chosen arbitrarily, we obtain for n ≥ 2 By choosing a = 1/α, b = 1, c = 0, and d = 1, the ELU activation function σ e (x) obviously fulfills the condition to be nearly exponential. But its definition for x ≥ 0 plays no role.
Given a nearly exponential activation function, a lower bound (5) or an inverse estimate (6) with a constant C r , independent of f , is not valid, see [24] for r = 2. Such inverse estimates were proposed in [51] and [53]. Functions For x ≤ 0, one can uniformly approximate e x arbitrarily well by assigning values to a, b, c and d in aσ which is obviously wrong for n → ∞. The same problem occurs with (5).
The "nearly exponential" property only fits with certain activation functions but it does not require continuity. For example, let h(x) be the Dirichlet function that is one for rational and zero for irrational numbers x. Activation function exp(x)(1+ exp(x)h(x)) is nowhere continuous but nearly exponential: For ε > 0 let c = ln(ε) and a = exp(−c), b = 1, then for x ≤ 0 But a bound can also be obtained from arbitrarily often differentiability. Let σ be arbitrarily often differentiable on some open interval such that σ is no polynomial on that interval. Then one can easily obtain an estimate in terms of the rth modulus from the Jackson estimate (8) by considering that polynomials of degree at most n − 1 can be approximated arbitrarily well by functions in Φ n,σ , see [42, Corollary 3.6, p. 157], cf. [33, Theorem 3.1]. The idea is to approximate monomials by differential quotients of σ. This is possible since This theorem can be applied to σ l but also to σ a and σ e . The preliminaries are also fulfilled for σ(x) := sin(x), a function that is obviously not nearly exponential.
Proof. Let ε > 0. As in the previous proof, there exists a polynomial p n of degree at most n such that (10) holds. Due to [42, p. 157] there exists a function Since ε can be chosen arbitrarily, we get (16) via Eq. (13).
Polynomials in the closure of approximation spaces can be utilized to show that a direct lower bound in terms of a (uniform) modulus of smoothness is not possible.
Lemma 1 (Impossible inverse estimate). Let activation function ϕ be given as in the preceding Theorem 2, r ∈ N. For each positive, monotonically decreasing sequence (α n ) ∞ n=1 , α n > 0, and each 0 < β < 1 a counterexample f β ∈ C[0, 1] exists such that (for n → ∞) lim sup n→∞ Even if the constant C = C f > 0 may depend on f (but not on n), estimate (5), as proposed in a similar context in [51] for r = 2, does not apply.
We choose parameters of [21, Theorem 2.1] as follows: We further set ϕ n = τ n = ψ n = 1/n r such that with (1) We compute an r-th difference of x n at the interval endpoint 1 to get the resonance condition (17) and (19) are fulfilled.
A Uniform Boundedness Principle with Rates
In this paper, sharpness results are proved with a quantitative extension of the classical uniform boundedness principle of Functional Analysis. Dickmeis, Nessel and van Wickern developed several versions of such theorems. We already used one of them in the proof of Lemma 1. An overview of applications in Numerical Analysis can be found in [23, Section 6]. The given paper is based on [20, p. 108]. This and most other versions require error functionals to be sub-additive. Let X be a normed space. A functional T on X, i.e., T maps X into R, is said to be (non-negative-valued) sub-linear and bounded, iff for all The set of non-negative-valued sub-linear bounded functionals T on X is denoted by X ∼ . Typically, errors of best approximation are (non-negativevalued) sub-linear bounded functionals. Let U ⊂ X be a linear subspace. The best approximation of f ∈ X by elements u ∈ U = ∅ is defined as Unfortunately, function sets Φ n,σ are not linear spaces, cf. [42, p. 151]. In general, from f, g ∈ Φ n,σ one can only conclude f + g ∈ Φ 2n,σ whereas cf ∈ Φ n,σ , c ∈ R. Functionals of best approximation fulfill E (Φ n,σ , f) But there is no sub-additivity. However, it is easy to prove a similar condition: For each ε > 0 there exists elements u f,ε , u g,ε ∈ Φ n,σ that fulfill Obviously, also In what follows, a quantitative extension of the uniform boundedness principle based on these conditions is presented. The conditions replace subadditivity. Another extension of the uniform boundedness principle to non-sublinear functionals is proved in [19]. But this version of the theorem is stated for a family of error functionals with two parameters that has to fulfill a condition of quasi lower semi-continuity. Functionals S δ measuring smoothness also do not need to be sub-additive but have to fulfill a condition S δ (f +g) ≤ B(S δ (f )+ S δ (g)) for a constant B ≥ 1. This theorem does not consider replacement (20) for sub-additivity.
The aim is to discuss a sequence of remainders (that will be errors of best approximation) (E n ) ∞ n=1 , E n : X → [0, ∞). These functionals do not have to be sub-linear but instead have to fulfill . . , f m ∈ X, and constants c ∈ R. In the boundedness condition (26), D n is a constant only depending on E n but not on f .
Let μ(δ) : (0, ∞) → (0, ∞) be a positive function, and let ϕ : [1, ∞) → (0, ∞) be a strictly decreasing function with lim x→∞ ϕ(x) = 0. An additional requirement is that for each 0 < λ < 1 a point X 0 = X 0 (λ) ≥ λ −1 and constant If there exist test elements h n ∈ X such that for all n ∈ N with n ≥ n 0 ∈ N and δ > 0 then for each abstract modulus of smoothness ω satisfying there exists a counterexample f ω ∈ X such that (δ → 0+, n → ∞) E n (f ω ) = o(ω(ϕ(n))), i.e., lim sup For example, (28) is fulfilled for a standard choice ϕ(x) = 1/x α . The prerequisites of the theorem differ from the Theorems of Dickmeis, Nessel, and van Wickern in conditions (24)-(27) that replace E n ∈ X ∼ . It also requires additional constraint (28). For convenience, resonance condition (31) replaces E n (h n ) ≥ c 3 . Without much effort, (31) can be weakened to The proof is based on a gliding hump and follows the ideas of [20, Section 2.2] (cf. [18]) for sub-linear functionals and the literature cited there. For the sake of completeness, the whole proof is presented although changes were required only for estimates that are effected by missing sub-additivity.
Sharpness
Free knot spline function approximations by Heaviside, cut and ReLU functions are first examples for the application of Theorem 3. Let S^r_n be the space of functions f for which n + 1 intervals ]x_k, x_{k+1}[, 0 = x_0 < x_1 < · · · < x_{n+1} = 1, exist such that f equals (potentially different) polynomials p of degree less than r on each of these intervals, i.e. p ∈ Π_r. No additional smoothness conditions are required at knots. Note that r and r̃ can be chosen independently. This corresponds with the Marchaud inequality for moduli of smoothness.
The following lemma helps in the proof of this and the next corollary. It is used to show the resonance condition of Theorem 3.
≤ 0 for all x ∈ I k holds. Then g can change its sign only at points x k . Let h(x) := sin (2N · 2π · x). Then there exists a constant c > 0 that is independent of g and N such that The prerequisites on g are fulfilled if g is continuous with at most N zeroes. (2N ·2π·(x−a)). Thus, for 1 ≤ p < ∞,
Proof. We discuss 2N intervals
Proof of Corollary 1. Theorem 3 can be applied with the following parameters.
Whereas S δ is a sub-linear, bounded functional, errors of best approximation E n fulfill conditions (24), (25), (26), and (27) obviously satisfy condition (29): h n (x) X p [0,1] ≤ 1 =: C 1 . One obtains (30) because of Let g ∈ Sr 4n , then g is composed from at most 4n + 1 polynomials on 4n + 1 intervals. On each of these intervals, g ≡ 0 or g at most hasr − 1 zeroes. Thus g can change sign at 4n interval borders and at zeroes of polynomials, and g fulfills the prerequisites of Lemma 2 with N = (4n+1)·r > 4n+(4n+1)·(r−1). Due to the lemma, h n − g X p [0,1] ≥ c > 0 independent of n and g. Since this holds true for all g, (31) is shown for c 3 = c. All preliminaries of Theorem 3 are fulfilled such that counterexamples exist as stated.
Corollary 1 can be applied with respect to all activation functions σ belonging to the class of splines with fixed polynomial degree less than r and a finite number of knots k because Φ n,σ ⊂ S r nk . Since Φ n,σ h ⊂ S 1 n , Corollary 1 directly shows sharpness of (2) and (4) for the Heaviside activation function if one chooses r =r = 1. Sharpness of (3) for cut and ReLU function follows for r =r = 2 because Φ n,σc ⊂ S 2 2n , Φ n,σr ⊂ S 2 n . However, the case ω(δ) = δ of maximum non-saturated convergence order is excluded by condition (32). We discuss this case for r =r. Then a simple counterexample is f ω (x) := x r . For each sequence of coefficients d 0 , . . . , d r−1 ∈ R we can apply the fundamental theorem of algebra to find complex zeroes a 0 , . . . , a r−1 ∈ C such that There exists an interval I := (j(r + 1) −1 , (j + 1)(r + 1) −1 ) ⊂ [0, 1] such that real parts of complex numbers a k are not in I for all 0 ≤ k < r. Let I 0 := ((j + 1/4)(r + 1) −1 , (j + 3/4)(r + 1) −1 ) ⊂ I. Then for all x ∈ I 0 This lower bound is independent of coefficients d k such that We also see that Each function g ∈ S r n is a polynomial of degree less than r on at least n intervals (j(2n) −1 , (j + 1)(2n) −1 ), j ∈ J ⊂ {0, 1, . . . , 2n − 1}. For j ∈ J: Thus, E(S r n , x r ) = o 1 n r . In case of L p -spaces, we similarly obtain with sub- Sharpness is demonstrated by combining lower estimates of all n subintervals: Although our counterexample is arbitrarily often differentiable, the convergence order is limited to n −r . Reason is the definition of the activation function by piecewise polynomials. There is no such limitation for activation functions that are arbitrarily often differentiable on an interval without being a polynomial, see Theorem 2. Thus, neural networks based on smooth nonpolynomial activation functions might approximate better if smooth functions have to be learned. Theorem 3 in [15] states for the Heaviside function that for each n ∈ N a function f n ∈ C[0, 1] exits such that the error of best uniform approximation exactly equals ω 1 f n , 1 2(n+1) . This is used to show optimality of the constant. Functions f n might be different for different n. One does not get the condensed sharpness result of Corollary 1.
Another relevant example of a spline of fixed polynomial degree with a finite number of knots is the square non-linearity (SQNL) activation function σ(x) := sign(x) for |x| > 2 and σ(x) := x − sign(x) · x²/4 for |x| ≤ 2. Because σ, restricted to each of the four sub-intervals of piecewise definition, is a polynomial of degree two, we can choose r̃ = 3.
The proof of Corollary 1 is based on Lemma 2. This argument can also be used to deal with rational activation functions σ(x) = q_1(x)/q_2(x) where q_1, q_2 ≢ 0 are polynomials of degree at most ρ. Then non-zero functions g ∈ Φ_{n,σ} have at most ρn zeroes and ρn poles such that there is no change of sign on N + 1 intervals, N = 2ρn. Thus, Corollary 1 can be extended to neural network approximation with rational activation functions in a straightforward manner.
Whereas the direct estimate (3) for cut and ReLU functions is based on linear best approximation, the counterexamples hold for non-linear best approximation. Thus, error bounds in terms of moduli of smoothness may not be able to express the advantages of non-linear free knot spline approximation in contrast to fixed knot spline approximation (cf. [45]). For an error measured in an L p norm with an order like n −α , smoothness only is required in L q , q := 1/(α + 1/p), see (7) and [16, p. 368].
Corollary 2 (Inverse tangent). Let σ = σ a be the sigmoid function based on the inverse tangent function, r ∈ N, and 1 ≤ p ≤ ∞. For each abstract modulus of smoothness ω satisfying (32), there exists a counterexample f ω ∈ X p [0, 1] such that The corollary shows sharpness of the error bound in Theorem 2 applied to the arbitrarily often differentiable function σ a .
Proof. Similarly to the proof of Corollary 1, we apply Theorem 3 with param- with N = N (n) := 8n, such that condition (29) is obvious and (30) can be shown by estimating the modulus in terms of the rth derivative of h n with (1). Let g ∈ Φ 4n,σa , Then where s(x) is a polynomial of degree 2(4n − 1), and q(x) is a polynomial of degree 8n. If g is not constant then g at most has 8n − 2 zeroes and f at most has 8n − 1 zeroes due to the mean value theorem (Rolle's theorem). In both cases, the requirements of Lemma 2 are fulfilled with N (n) = 8n > 8n − 1 such that h n − g X p [0,1] ≥ c > 0 independent of n and g. Since g can be chosen arbitrarily, (31) is shown with E 4n h n ≥ c > 0.
Whereas lower estimates for sums of n inverse tangent functions are easily obtained by considering O(n) zeroes of their derivatives, sums of n logistic functions (or hyperbolic tangent functions) might have an exponential number of zeroes. To illustrate the problem in the context of Theorem 3, let Using a common denominator, the numerator is a sum of type m k=1 α k κ x k for some κ k > 0 and m < 2 4n . According to [50], such a function has at most m − 1 < 16 n − 1 zeroes, or it equals the zero function. Therefore, an interval [k(16) −n , (k + 1)(16) −n ] exists on which g does not change its sign. By using a resonance sequence h n (x) := sin (16 n · 2πx), one gets E(Φ 4n,σ l , h n ) ≥ 1. But factor 16 n is by far too large. One has to choose φ(x) := 1/16 x and μ(δ) := δ to obtain a "counterexample" f ω with The gap between rates is obvious. The same difficulties do not only occur for the logistic function but also for other activation functions based on exp(x) like the softmax function σ m (x) := log(exp(x) + 1). Similar to (15), Thus, sums of n logistic functions can be approximated uniformly and arbitrarily well by sums of differential quotients that can be written by 2n softmax functions. A lower bound for approximation with σ m would also imply a similar bound for σ l and upper bounds for approximation with σ l imply upper bounds for σ m . With respect to the logistic function, a better estimate than (51) is possible. It can be condensed from a sequence of counterexamples that is derived in [39]. However, we show that the Vapnik-Chervonenkis dimension (VC dimension) of related function spaces can also be used to prove sharpness. This is a rather general approach since many VC dimension estimates are known.
Let X be a set and A a family of subsets of X. Throughout this paper, X can be assumed to be finite. One says that A shatters a set S ⊂ X if and only if each subset B ⊂ S can be written as B = S ∩ A with a set A ∈ A. This general definition can be adapted to (non-linear) function spaces V that consist of functions g : X → R on a (finite) set X ⊂ R. By applying the Heaviside function σ_h, let A be the family of sets {x ∈ X : σ_h(g(x)) = 1}, g ∈ V. Then the VC dimension of the function space V is defined as VC-dim(V) := VC-dim(A). This is the largest number m ∈ N for which m points x_1, . . . , x_m ∈ X exist such that for each sign sequence s_1, . . . , s_m ∈ {−1, 1} a function g ∈ V can be found that fulfills (52), cf. [6]. The VC dimension is an indicator for the number of degrees of freedom in the construction of V. Condition (52) is equivalent to the requirement that g realizes the prescribed sign pattern s_1, . . . , s_m at the points x_1, . . . , x_m after thresholding with σ_h. Let the function ϕ(x) be defined as in Theorem 3 such that (28) holds true. If, for a constant C > 0, the function τ fulfills the corresponding conditions for all n ≥ n_0 ∈ N, then for r ∈ N and each abstract modulus of smoothness ω satisfying (32), a counterexample f_ω ∈ C[0, 1] exists whose rth modulus of smoothness is of order ω(δ^r) and whose errors E_n(f_ω) do not decay faster than prescribed by ϕ. Proof. Let n ≥ n_0/4. Due to (53), a sign sequence s_0, . . . , s_{τ(4n)} ∈ {−1, 1} exists such that for each function g ∈ V_{4n} there is a point at which the prescribed sign is not realized. We utilize this sign sequence to construct resonance elements h_n as sums of sign-weighted bump functions. The interiors of the supports of the summands are non-overlapping, i.e., ‖h_n‖_{B[0,1]} ≤ 1, and because of (54) the norms of the derivatives of h_n can be controlled. Then conditions (29) and (30) are fulfilled due to the norms of h_n and its derivatives, cf. (1). Due to the initial argument of the proof, for each g ∈ V_{4n} there is at least one point at which h_n and g differ in sign, which yields the required lower bound (31).
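Condition (52) can be made concrete with a small brute-force experiment. The sketch below (not part of the original paper) samples random members of a shallow logistic-activation family on a small grid, collects the sign patterns they realize via thresholding at zero (the role played by σ_h above), and reports the largest point set that is shattered. The grid, parameter ranges, and sample size are illustrative assumptions, and random sampling can only certify a lower bound on the VC dimension.

```python
import itertools
import numpy as np

def realizable_patterns(values, pts):
    """Sign patterns (thresholding at 0, i.e. applying the Heaviside step)
    realized by the sampled functions on the chosen points."""
    idx = list(pts)
    return {tuple((row[idx] >= 0).astype(int)) for row in values}

def empirical_vc_dim(values):
    """Largest m such that some m grid points are shattered by the sample;
    this is only a lower bound on the true VC dimension of the family."""
    best = 0
    n_pts = values.shape[1]
    for m in range(1, n_pts + 1):
        for pts in itertools.combinations(range(n_pts), m):
            if len(realizable_patterns(values, pts)) == 2 ** m:
                best = m
                break
    return best

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)            # small finite grid X (illustrative)
n_units = 3                              # width of the shallow network
samples = []
for _ in range(2000):                    # random members of the logistic family
    a, b, c = rng.normal(size=(3, n_units)) * 10.0
    samples.append((c[:, None] / (1.0 + np.exp(-(b[:, None] * x + a[:, None])))).sum(axis=0))

print("empirically shattered set size (VC dimension lower bound):",
      empirical_vc_dim(np.array(samples)))
```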
Corollary 4 (Logistic function). Let σ = σ l be the logistic function and r ∈ N.
For each abstract modulus of smoothness ω satisfying (32), a counterexample f_ω ∈ C[0, 1] exists. The corollary extends the Theorem of Maiorov and Meir for worst case approximation with sigmoid functions in the case p = ∞ to Lipschitz classes and one condensed counterexample (instead of a sequence), see [39, p. 99]. The sharpness estimate also holds in L^p[0, 1], 1 ≤ p < ∞. For all these spaces, one can apply Theorem 3 directly with the sequence of counterexamples constructed in [39, Lemma 7, p. 99]. Even more generally, Theorem 1 in [38] utilizes pseudo-dimension, a generalization of VC dimension, to provide bounded sequences of counterexamples in Sobolev spaces.
Thus, all prerequisites of Corollary 3 are shown. The corollary improves (51): there exists a counterexample f_ω ∈ C[0, 1], see (9), (16). The proof is based on the O(n log₂(n)) estimate of the VC dimension in [6]. This requires functions to be defined on a finite grid. Without this prerequisite, the VC dimension is in Ω(n²), see [48, p. 235]. The referenced book also deals with the case that all weights are restricted to floating point numbers with a fixed number of bits. Then the VC dimension becomes bounded by O(n) without the need for the log-factor. However, the direct upper error bounds (9) and (16) are proved for real-valued weights only.
The preceding corollary is a prototype for proving sharpness based on known VC dimensions. Also at the price of a log-factor, the VC dimension estimate for radial basis functions in [6] or [46] can be used similarly in connection with Corollary 3 to construct counterexamples. The sharpness results for Heaviside, cut, ReLU and inverse tangent activation functions shown above for p = ∞ can also be obtained with Corollary 3 by proving that VC dimensions of corresponding function spaces Φ n,σ are in O(n) (whereas the result of [5] only provides an O(n log(n)) bound). This in turn can be shown by estimating the maximum number of zeroes like in the proof of the next corollary and in the same manner as in the proofs of Corollaries 1 and 2.
The problem of different rates in upper and lower bounds arises because different scaling coefficients b_k are allowed. In the case of uniform scaling, i.e., if all coefficients b_k in (50) equal one value B = B(n), the situation improves: the networks in (2), see [15], are defined using such uniform scaling, see [9, p. 172], and for them the error bound holds. This bound is sharp (Corollary 5): ω_r(f_ω, δ) = O(ω(δ^r)) and E(Φ_{n,σ_l}, f_ω) ≠ o(ω(1/n^r)).
To prove the corollary, we apply the following lemma (Lemma 3). Let the grid G_n ⊂ [0, 1] with points indexed by {0, 1, . . . , τ(n)}, and V_{n,τ(n)} := {h : G_n → R : h(x) = g(x) for a function g ∈ V_n}, be given as in Corollary 3. If VC-dim(V_{n,τ(n)}) ≥ τ(n), then there exists a function g ∈ V_n, g ≢ 0, with a set of at least τ(n)/2 zero points in [0, 1] such that g has non-zero function values between each two consecutive points of this set.
Using a common denominator q(x), the numerator is a sum of type s(x) = Σ_{k=0}^{n−1} α_k (e^{−kB})^x, which has at most n − 1 zeroes, see [50]. Because of this contradiction to n zeroes, (53) is fulfilled.
By applying Lemma 2 for N (n) = n − 1 in connection with Theorem 3, one can also show Corollary 5 for L p -spaces, 1 ≤ p < ∞.
Linear VC dimension bounds were proved in [47] for radial basis function networks with uniform width (scaling) or uniform centers. Such bounds can be used with Corollary 3 to prove results that are similar to Corollary 5. Also, such a corollary can be shown for the ELU function σ_e. However, without the restriction b_k = B(n), piecewise superposition of exponential functions leads to O(n²) zeroes of sums of ELU functions. Then, in combination with the direct estimates of Theorems 1 and 2, i.e., E(Φ_{n,σ_e}, f_ω) ≤ C_r ω_r(f_ω, 1/n), we directly obtain the following (improvable) result in a straightforward manner. Corollary 6 (Coarse estimate for ELU activation). Let σ = σ_e be the ELU function and r ∈ N, n ≥ max{2, r} (see Theorem 1). For each abstract modulus of smoothness ω satisfying (32), there exists a counterexample f_ω ∈ C[0, 1] that fulfills (56). Proof. To prove the existence of a function f_ω with ω_r(f_ω, δ) ∈ O(ω(δ^r)) and (56), we apply Corollary 3 with V_n = Φ_{n,σ_e} and E_n(f) = E(Φ_{n,σ_e}, f) such that conditions (24)-(27) are fulfilled. For each function g ∈ V_n the interval [0, 1] can be divided into at most n + 1 subintervals such that on the lth interval g equals a function g_l of type g_l(x) = γ_l + δ_l x + Σ_{k=1}^{n} α_{l,k} exp(β_{l,k} x).
The derivative g_l'(x) = δ_l exp(0 · x) + Σ_{k=1}^{n} α_{l,k} β_{l,k} exp(β_{l,k} x) has at most n zeroes or equals the zero function according to [50]. Thus, due to the mean value theorem (or Rolle's theorem), g_l has at most n + 1 zeroes or is the zero function. By concatenating the functions g_l to g, one observes that g has at most (n + 1)² different zeroes such that g does not vanish between such consecutive zero points. Let τ(n) := 8n² and ϕ(n) = 1/n² such that (54) holds true: τ(4n) = 128n² = 128/ϕ(n). If VC-dim(V_{n,τ(n)}) ≥ τ(n), then due to Lemma 3 and because n ≥ 2 there exists a function in Φ_{n,σ_e} with at least τ(n)/2 = (2n)² > (n + 1)² zeroes such that between consecutive zeroes, the function is not the zero function. This contradicts the previously determined number of zeroes and (53) is fulfilled.
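The zero bound from [50] for sums of exponentials with distinct real exponents, which drives this argument, can also be sanity-checked numerically. The sketch below (an illustration, not part of the proof) counts sign changes of random sums of n exponentials on a grid; the ranges and sample counts are arbitrary assumptions, and counting sign changes only bounds the number of zeroes from below, so this is a consistency check rather than a verification.

```python
import numpy as np

def sign_changes(y):
    s = np.sign(y)
    s = s[s != 0]                       # ignore exact zeros on the grid
    return int(np.sum(s[:-1] != s[1:]))

rng = np.random.default_rng(1)
n = 6                                   # number of exponential terms
x = np.linspace(-10.0, 10.0, 20_001)    # evaluation grid (illustrative)
worst = 0
for _ in range(1000):
    alpha = rng.normal(size=n)          # real coefficients
    beta = rng.normal(size=n)           # real exponents, distinct almost surely
    f = (alpha[:, None] * np.exp(beta[:, None] * x[None, :])).sum(axis=0)
    worst = max(worst, sign_changes(f))

print(f"largest observed number of sign changes: {worst} (theoretical bound: {n - 1})")
```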
Sums of n softsign functions ϕ(x) = x/(1 + |x|) can be expressed piecewise by n + 1 rational functions that each have at most n zeroes. Thus, one also has to deal with O(n²) zeroes.
In terms of (non-linear) Kolmogorov n-width, let X := Lip_r(α, C[0, 1]). Then, for example, condensed counterexamples f_α for piecewise linear or inverse tangent activation functions and p = ∞ imply a corresponding lower estimate for the width of X. The restriction to the univariate case of a single input node was chosen because of compatibility with most cited error bounds. However, the error of multivariate approximation with certain activation functions can be bounded by the error of best multivariate polynomial approximation, see the proof of Theorem 6.8 in [42, p. 176]. Thus, one can obtain estimates in terms of multivariate radial moduli of smoothness similar to Theorem 2 via [30, Corollary 4, p. 139]. Also, Theorem 3 can be applied in a multivariate context in connection with VC dimension bounds. First results are shown in report [25].
Without additional restrictions, a lower estimate for approximation with logistic function σ l could only be obtained with a log-factor in (55). Thus, either direct bounds (2) and (9) or sharpness result (55) can be improved slightly. | 2020-07-02T10:30:05.986Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "c25698a728f6105e58452eb6739eb32f46e59030",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00025-020-01239-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "94229bb69163fd86235492cdbfcb07fb92f50a7d",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
264497269 | pes2o/s2orc | v3-fos-license | Identification of Novel Targeting Sites of Calcineurin and CaMKII in Human CaV3.2 T-Type Calcium Channel
The Cav3.2 T-type calcium channel is implicated in various pathological conditions, including cardiac hypertrophy, epilepsy, autism, and chronic pain. Phosphorylation of Cav3.2 by multiple kinases plays a pivotal role in regulating its calcium channel function. The calcium/calmodulin-dependent serine/threonine phosphatase, calcineurin, interacts physically with Cav3.2 and modulates its activity. However, it remains unclear whether calcineurin dephosphorylates Cav3.2, the specific spatial regions on Cav3.2 involved, and the extent of the quantitative impact. In this study, we elucidated the serine/threonine residues on Cav3.2 targeted by calcineurin using quantitative mass spectrometry. We identified six serine residues in the N-terminus, II–III loop, and C-terminus of Cav3.2 that were dephosphorylated by calcineurin. Notably, a higher level of dephosphorylation was observed in the Cav3.2 C-terminus, where calcineurin binds to this channel. Additionally, a previously known CaMKII-phosphorylated site, S1198, was found to be dephosphorylated by calcineurin. Furthermore, we also discovered that a novel CaMKII-phosphorylated site, S2137, underwent dephosphorylation by calcineurin. In CAD cells, a mouse central nervous system cell line, membrane depolarization led to an increase in the phosphorylation of endogenous Cav3.2 at S2137. Mutation of S2137 affected the calcium channel function of Cav3.2. Our findings advance the understanding of Cav3.2 regulation not only through kinase phosphorylation but also via calcineurin phosphatase dephosphorylation.
Introduction
Calcium entry through voltage-gated calcium channels depolarizes the membrane potential, facilitating the transmission of electrical signals in nerve and muscle tissues [1].Additionally, intracellular calcium serves as a crucial secondary messenger, governing diverse cell signaling pathways and biological processes [2].The regulation of intracellular calcium concentration involves both high-voltage-activated calcium channels (Cav1 and Cav2 subtypes) and low-voltage-activated calcium channels (Cav3 subtypes).The low-voltage-activated calcium channels, known as T-type (T for transient or tiny) channels, exhibit rapid inactivation kinetics and are capable of opening near the resting membrane potential, contributing to membrane depolarization.Vertebrates express three different T-type calcium channels: Cav3.1, Cav3.2, and Cav3.3 [3,4].Dysfunctions in T-type calcium channels are linked to various disease conditions, including epilepsy, autism, neuromuscular disorders, and chronic pain [5].Cav3.2 exhibits high expression levels in dorsal root ganglion sensory neurons and plays an important role in the development of chronic pain [6,7].
The pore-forming α1-subunit of T-type calcium channels consists of four homologous transmembrane domains connected by cytoplasmic N-terminus, interdomain loops, and C-terminus.These cytoplasmic regions of T-type calcium channels serve as sites of posttranslational modifications by intracellular enzymes, thereby fine-tuning the channel functions [8,9].Deubiquitination of Cav3.2 by USP5 promotes channel stability and function, thus mediating the development of neuropathic and inflammatory pain in rodents [7].Additionally, various kinases modulate the functions of Cav3.2 through phosphorylation.Phosphorylation of Cav3.2 at the S1107 residue in the II-III loop by PKA is required for Gβγ-mediated inhibition of Cav3.2 [10].Phosphorylation of Cav3.2 at the S1198 residue in the II-III loop by CaMKII causes a leftward shift in the activation threshold and facilitates channel opening near the resting membrane potential [11,12].Moreover, phosphorylation of Cav3.2 at S561 in the I-II loop and S1987 in the C-terminus by Cdk5 upregulates the channel current density [13].Although activation of kinases, including ROCK and PKC, facilitates the Cav3.2 current, the precise phosphorylation sites at Cav3.2 remain unclear [14,15].
The activity of Cav3.2 is further influenced by specific proteins that interact with its cytoplasmic regions.Syntaxin-1A, for instance, binds to the C-terminus of Cav3.2 channels, regulating both channel function and low-threshold exocytosis [16].Additionally, calcineurin also binds to the C-terminus of Cav3.2 channels, resulting in a reduction in the channel current density [17].This interaction between Cav3.2 and calcineurin is dependent on calmodulin and calcium concentration.The NFAT-binding domain of calcineurin is essential for its binding to Cav3.2.Moreover, the PCISVE (2190-2195) and LTVP (2261-2264) motifs in the C-terminus of Cav3.2 are crucial for the channels' interaction with calcineurin.The 9A-Cav3.2 mutant form, which cannot bind to calcineurin, also exhibits a higher current density [17].
Calcineurin is a serine/threonine phosphatase known for dephosphorylating various target proteins, including transcription factors, receptors, and channels [18].Notably, the dephosphorylation of the transcription factor NF-AT3 by calcineurin is implicated in pathological cardiac hypertrophy [19,20].Similarly, Cav3.2 is also involved in the development of pathological cardiac hypertrophy [21].While it is established that calcineurin interacts with and modulates Cav3.2, the specific dephosphorylation of Cav3.2 by calcineurin has remained unclear.In this study, we aimed to identify the serine/threonine residues of Cav3.2 channels targeted by calcineurin for dephosphorylation.Additionally, we discovered that CaMKII phosphorylates one of the calcineurin-targeted residues, namely S2137.Interestingly, we observed that membrane depolarization increased the S2137 phosphorylation, as confirmed by its specific antibody.The functional implications of S2137 phosphorylation were also investigated in this study.
Plasmid cDNA Construction and Mutagenesis
The QuikChange site-directed mutagenesis kit from Agilent Technologies (Santa Clara, CA, USA) was employed to generate mutant plasmid constructs.Following mutagenesis, the integrity of the constructs was confirmed through sequencing.PCR was employed to amplify the C-terminus of human Cav3.2 prior to its cloning into the pGEX-4T-1 vector obtained from Thermo Fisher Scientific (Waltham, MA, USA).
Cell Lysis and Immunoprecipitation
Transfected cells were lysed and homogenized using an immunoprecipitation buffer composed of the following components: (in mM) 50 Tris HCl, pH 8.0, 150 sodium chloride, 1% Triton-X100, 1 mM EDTA, protease inhibitors, and phosphatase inhibitors.The resulting lysates were incubated on ice for 30 min.Subsequently, undissolved pellets were separated by centrifugation at 13,000 rpm for 30 min at a temperature of 4 • C. For immunoprecipitation, the cell lysates were subjected to incubation with anti-FLAG antibody-conjugated beads (Sigma-Aldrich, Saint Louis, MO, USA) at 4 • C overnight, utilizing rotation.For the in vitro calcineurin reaction, the pulled-down Flag-Cav3.2was washed successively with lysis buffer, PBS, and calcineurin reaction buffer.The Flag-Cav3.2 was eluted by the 3xFlag peptide (Sigma-Aldrich).For the GST pull-down procedure, the GST-fusion protein consisting of the C-terminus of Cav3.2 (GST-CII) was extracted using beads conjugated with glutathione (GE Healthcare, Chicago, IL, USA).Following a thorough wash, the pulled-down proteins were eluted using an excess amount of reduced glutathione [17].
In Vitro Calcineurin and CaMKII Reactions
For the in vitro calcineurin reaction, the pulled-down Flag-Cav3.2 was incubated with active calcineurin enzyme, human recombinant calmodulin, and calcineurin reaction buffer (Abcam, Cambridge, UK). For the control sample, incubation was performed without the active calcineurin enzyme. Following a 1 h incubation at 37 °C, the Flag-Cav3.2 samples were either prepared for gel-assisted digestion or immunoblotting. For the CaMKII reaction, the C-terminus of Cav3.2 (GST-CII) or Flag-Cav3.2 was incubated with CaMKII, calmodulin, and NEB buffer for protein kinases from New England Biolabs (Ipswich, MA, USA). After a 1 h incubation at 37 °C, the GST-CII samples were subjected to SDS-PAGE and stained with Coomassie blue. The resulting gel bands were excised for in-gel digestion. The CaMKII-treated Flag-Cav3.2 was prepared for immunoblotting.
Gel-Assisted Digestion, In-Gel Digestion, and Immobilized Metal Affinity Chromatography
To improve the digestion efficiency of membrane proteins, gel-assisted digestion was employed.[22].To assess variations in digestion efficiency, 0.2 µg of bovine α-casein and 0.05 µg bovine β-casein were added into eluted Flag-Cav3.2samples.Protein reduction was carried out using 5 mM TCEP, followed by room temperature alkylation with 2 mM MMTS for 30 min.For the direct incorporation of proteins into a gel within a micro tube, acrylamide/bisacrylamide solution, APS, and TEMED were added.The proteinincorporated gel was fragmented and subjected to multiple washes with 0.5 mL of 50% (v/v) ACN in TEABC.Subsequently, the dehydration of gel samples was achieved using 100% ACN and the samples were thoroughly dried using a vacuum centrifuge.Next, overnight trypsin digestion was carried out in 25 mm TEABC at 37 • C. Peptide extraction was performed by sequentially adding 0.2 mL of 25 mM TEABC, 0.2 mL of 0.1% (v/v) TFA in water, 0.2 mL of 0.1% (v/v) TFA in ACN, and 0.2 mL of 100% ACN.The collected solutions were pooled and dehydrated using a vacuum centrifuge.For the in-gel digestion of GST-CII, the gel bands were fragmented, washed with 0.5 mL of 50% (v/v) ACN in 25 mM TEABC, completely dried by 100% ACN and vacuum centrifuge, followed by trypsin digestion.The resulting peptides were then extracted.Phosphopeptide enrichment using immobilized metal affinity chromatography (IMAC) was performed following previously reported procedures [23].The IMAC eluate and tryptic peptides were subsequently purified using a C18 Ziptip (Millipore, Bedford, MA, USA) for cleaning.
Mass Spectrometry (MS), Database Searching, and Phosphopeptide Quantification
The phosphopeptides enriched through IMAC and the tryptic peptides of different variants of Cav-3.2 were subjected to analysis using RP-UPLC (nanoACQUITY UPLC, Waters, Milford, MA, USA) in conjunction with Q-TOF MS (QTOF Premier, Waters), following the established procedure [23].The MS peak lists were generated in Mascot generic format (mgf) using Mascot Distiller with default parameters.The mgf files were employed for searching against the UniProt human protein database using Mascot (Matrix Science, London, UK).The database search parameters included trypsin as the protease, allowance for up to 2 missed cleavages, and tolerances of 0.07 Da for both precursor and fragment ion measurements.Variable modifications were set to include methylthio of cysteine, oxidation of methionine, and phosphorylation of serine, threonine, and tyrosine.Proteins were considered identified if they met the significance threshold of p < 0.05.Peptides were considered identified if they had a peptide score of 30 or higher.The Mascot delta score was utilized for the phosphorylation site assignment [23,24].Peptide and phosphopeptide quantification was achieved using IDEAL-Q [25].Bovine α-casein and β-casein, added externally, served as internal references for the quantification of Cav3.2 phosphopeptides.To achieve this, each mgf file was subjected to a Mascot search against the UniProt bovine protein database.The quantities of bovine casein peptides or phosphopeptides were determined using IDEAL-Q and employed for the normalization of peptides or phosphopeptides of Cav3.2.
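The normalization step described above can be illustrated with a toy calculation. In the sketch below, all intensity values, the site names, and the two-fold-change cut-off are hypothetical and chosen only for illustration; the point is to show how externally added casein peptides serve as references before phosphopeptide levels are compared between conditions.

```python
# Hypothetical, illustrative XIC intensities (arbitrary units).
raw = {
    "control":     {"pS2137": 8.4e5, "pS1198": 3.1e5, "casein_reference": 2.0e6},
    "calcineurin": {"pS2137": 2.1e5, "pS1198": 1.6e5, "casein_reference": 1.8e6},
}

def normalize(sample):
    """Divide each phosphopeptide intensity by the spiked-in casein reference."""
    ref = raw[sample]["casein_reference"]
    return {site: value / ref for site, value in raw[sample].items()
            if site != "casein_reference"}

control = normalize("control")
treated = normalize("calcineurin")
for site in control:
    fold = treated[site] / control[site]
    call = "dephosphorylated" if fold < 0.5 else "little change"  # 2-fold cut-off (assumption)
    print(f"{site}: normalized fold change {fold:.2f} -> {call}")
```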
Generation of Phospho-S2137 Cav3.2 Antibody and Immunoblotting
The preparation of the rabbit polyclonal phospho-S2137 Cav3.2 antibody included the synthesis of a corresponding peptide with phosphoserine at the indicated site.This synthetic peptide was then conjugated to BSA before being used as the peptide antigen for immunizing the host rabbits.For immunoblotting, the pulled-down GST-CII or Flag-Cav3.2, the cell lysates of transfected HEK293 cells, and the cell lysates of CAD cells were separated using SDS-PAGE.Subsequently, they were transferred onto a PVDF membrane for immunostaining using specific antibodies.The following antibodies were utilized: anti-Cav3.2(H-300, Santa Cruz Biotechnologies, Dallas, TX, USA), anti-Flag (M2-HRP, Sigma-Aldrich), anti-β actin (Proteintech, Chicago, IL, USA), and a homemade anti-phospho-S2137 Cav3.2 antibody.
Electrophysiological Recording
Borosilicate glass capillary tubes (Warner Instruments, Holliston, MA, USA) were utilized to shape patch pipettes, achieving a tip resistance of 2.8-3.5 MΩ using a P-97 Flaming/Brown type micropipette puller (Sutter Instrument, Novato, CA, USA).An Axon Multiclamp 700B microelectrode amplifier (Molecular Devices, San Jose, CA, USA) was employed for measuring the ionic currents.Data acquisition was performed with a sampling frequency of 50 kHz and a low-pass filter set at 2 kHz.Digidata 1440A interfaced with Clampex 10.4 (Molecular Devices, San Jose, CA, USA) controlled voltage and current commands as well as the digitization of membrane voltages and currents.Data analysis was carried out using pCLAMP 10.4 software (Molecular Devices, San Jose, CA, USA).For the measurement of Cav3.2 currents, cells were immersed in a 300 mOsm bath solution comprising 145 mM TEA-Cl, 5 mM CaCl 2 , 3 mM CsCl, 1 mM MgCl 2 , 5 mM glucose, and 10 mM HEPES, pH-adjusted to 7.4 with TEA-OH.The 310 mOsm pipette solution was composed of 130 mM CsCl, 20 mM HEPES, 10 mM EGTA, 5 mM MgCl 2 , 3 mM Mg-ATP, and 0.3 mM Tris-GTP, pH-adjusted to 7.3 with CsOH.For the measurement of voltagedependent calcium current, the cell membrane potentials were initially held at −90 mV for 20 ms, followed by a depolarization of 10 mV for 150 ms.A 15 s waiting period was employed for channel recovery before the subsequent additional 10 mV depolarization.For the measurement of steady-state inactivation current, transfected cells were initially held at −90 mV before stepping to conditioning potentials for 1500 ms.A 10 s waiting period was employed for channel recovery before the next step.For the inhibition of calcineurin, cyclosporine A (CSA, 10 µM) was added to the bath solution.
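Activation and steady-state inactivation curves obtained from step protocols like the one described above are commonly summarized by Boltzmann fits. The sketch below fits a Boltzmann function to synthetic activation data; the voltages, normalized conductances, and initial guesses are made up for illustration and do not reproduce the recordings reported in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Normalized conductance G/Gmax as a function of test potential (mV)."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Synthetic activation data for illustration (test potentials in mV).
v = np.arange(-80.0, 0.0, 10.0)
g_norm = np.array([0.01, 0.03, 0.10, 0.30, 0.62, 0.85, 0.95, 0.99])

(v_half, k), _ = curve_fit(boltzmann, v, g_norm, p0=(-45.0, 5.0))
print(f"fitted V1/2 = {v_half:.1f} mV, slope factor k = {k:.1f} mV")
```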
Identification of Amino Acid Residues on Cav3.2 Dephosphorylated by Calcineurin
Calcineurin, a calcium/calmodulin-dependent protein phosphatase, interacts with and modulates the functions of Cav3.2 T-type calcium channels [17].To investigate whether calcineurin regulates Cav3.2 through dephosphorylation of specific serine or threonine residues, we expressed Flag-tagged human Cav3.2 in HEK293 cells, where significant phosphorylation of Cav3.2 and its phosphorylation regulation have been previously documented [12,26].For the identification of the exact amino acid residues of Cav3.2 dephosphorylated by calcineurin, we performed mass spectrometry-based identification and label-free quantification of IMAC-enriched phosphopeptides (Supplementary Figure S1).Flag-tagged Cav3.2 was immunoprecipitated (IP) using an anti-Flag antibody and subsequently eluted with 3xFlag peptide.The pulled-down Cav3.2 channels were then reacted with or without calcineurin and digested into tryptic peptides using gel-assisted digestion [22].The phosphopeptides were enriched through immobilized metal affinity chromatography (IMAC).The identities and quantities of peptides or phosphopeptides were revealed by LC-MS/MS analysis (Waters Q-TOF Premier) and IDEAL-Q software (V1.063) [25].To account for quantification bias resulting from different digestion and purification efficiencies, we incorporated the standard phosphoproteins bovine αand βcasein as spike-in controls [23].In total, we identified 39 phosphopeptides matching Cav3.2 (Supplementary Figure S2).Among these phosphopeptides, 30 had single phosphorylation sites, 7 had double phosphorylation sites, and 2 had more than three phosphorylation sites (Table 1).
The assignment of phosphorylation sites in a peptide was based on the Mascot delta score of each MSMS spectrum [24].For example, the MSMS spectrum of a doubly charged phosphopeptide with a mass-to-charge ratio (m/z) of 873.41 corresponded to the amino acid residues from 2135 to 2149 of human Cav3.2.The phosphorylation site was determined as S2137, relying on the Mascot delta score difference between the first and second hits of potential candidate sequences (Figure 1A and Table 1).In this study, 36 phosphosites were assigned, with 34 phosphoserine and 2 phosphothreonine residues.These phosphosites were distributed as follows: 5 in the N-terminus, 9 in the I-II loop, 11 in the II-III loop, 4 in the III-IV loop, and 7 in the C-terminus of Cav3.2 (Figure 1B).Comparing our findings with the results of Blesneac et al. [26] and the PhosphoSitePlus database [27], we identified eight novel phosphorylation sites at S44, S719, S722, S1109, S1165, S1168, S1604, and S2030.
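The delta-score criterion used for site assignment can be expressed as a short helper. In the sketch below, the score values and the minimum delta threshold are hypothetical; the idea is simply that a localization is accepted only when the best candidate outscores the runner-up by a chosen margin.

```python
def assign_phosphosite(candidates, min_delta=10.0):
    """candidates: iterable of (site, mascot_ion_score) pairs for alternative
    localizations of the same phosphopeptide spectrum. Returns the accepted
    site (or None) together with the delta score."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if len(ranked) < 2:                 # only one candidate localization: accept it
        return ranked[0][0], float("inf")
    delta = ranked[0][1] - ranked[1][1]
    return (ranked[0][0] if delta >= min_delta else None), delta

# Hypothetical scores for the peptide spanning residues 2135-2149.
site, delta = assign_phosphosite([("S2137", 52.3), ("T2141", 31.0)])
print(site, delta)
```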
To identify the amino acid residues dephosphorylated by calcineurin, we compared the ion signal intensities of phosphopeptides from Cav3.2 channels treated with or without calcineurin.In Figure 1C, the selective ion chromatograms (XICs) of indicated m/z ratios matched to phosphopeptides with single phosphorylation sites were considered candidates with higher priority.We observed a decrease in ion intensities for 5 single-phosphorylated peptides upon treatment with calcineurin.Specifically, S1999, S2137, and S2222 were located in the C-terminus of Cav3.2, while S1144 and S1198 were in the II-III loop.Therefore, we suggest that calcineurin dephosphorylates Cav3.2 at S1144, S1198, S1999, S2137, and S2222.
Interestingly, the ion intensity of the S2188 single-phosphorylated peptide increased upon incubation with calcineurin.It should be noted that S2188 is located close to the calcineurin-binding motif PCISVE (amino acid 2190-2195) of Cav3.2 [17].One possibility is that the binding of calcineurin may stabilize the phosphorylation at S2188.Another possibility is that a di-phosphorylated peptide might have undergone dephosphorylation in one residue, leading to an increased level of single-phosphorylated peptide phosphorylated in another residue.However, we did not find the corresponding di-phosphorylated peptide of S2188.Conversely, we found an S29S32 di-phosphorylated peptide whose signal intensity was decreased by calcineurin, while the corresponding S32 single-phosphorylated peptide showed an increase (Figure 1D).These results suggest calcineurin dephosphorylates Cav3.2 at N-terminus S29.
CaMKII Kinase Phosphorylates S2137 of Cav3.2
The potential kinases responsible for the identified phosphorylation sites in Cav3.2 were predicted based on kinase recognition motifs [28].Table 2 reveals that 32 phosphorylation sites are associated with at least one potential kinase.Notably, the calcineurindephosphorylated sites S1198 and S2137 were both predicted to be substrates of CaMKII, PKD, or CHK1/2 kinases.Previous studies have revealed that S1198 of Cav3.2 can be phosphorylated by CaMKII [11,12].Therefore, we sought to investigate whether S2137 is also a substrate of CaMKII.To explore this possibility, we incubated the GST-fusion C-terminus of Cav3.2 (GST-CII) with or without CaMKII.In CaMKII-treated samples, a subtle mobility shift of GST-CII bands was observed (Figure 3A).The SDS-PAGE gel bands containing GST-CII were subjected to trypsin digestion.The resulting tryptic peptides were then analyzed by LC-MS/MS and matched to the amino acid residues from 2135 to 2149 of human Cav3.2.We observed that the MSMS spectra of ions with m/z 873.40 corresponded to the S2137 phosphopeptide, while the MSMS spectra of ions with m/z 555.95 corresponded to the unphosphorylated peptide (Figure 3B).The S2137 phosphopeptide ion signal was exclusively found in the CaMKII-treated GST-CII, while the unphosphorylated peptide signal was almost absent.These results suggest that S2137 of full-length Cav3.2 can indeed be phosphorylated by CaMKII.
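The relationship between the two reported ions can be checked with simple mass arithmetic. In the sketch below, the charge state of the unmodified ion (3+) is an assumption chosen for illustration because it makes the two observed m/z values consistent with the addition of a single phosphate group; the text only states the charge of the phosphopeptide ion.

```python
PROTON = 1.00728      # proton mass in Da
PHOSPHO = 79.96633    # monoisotopic mass added by phosphorylation (HPO3)

def neutral_mass(mz, charge):
    """Neutral peptide mass from an observed m/z and an assumed charge state."""
    return mz * charge - charge * PROTON

phospho_mass = neutral_mass(873.40, charge=2)   # reported doubly charged phosphopeptide
unmod_mass = neutral_mass(555.95, charge=3)     # assumed triply charged unmodified peptide

delta = phospho_mass - unmod_mass
print(f"phosphopeptide mass:     {phospho_mass:.2f} Da")
print(f"unmodified peptide mass: {unmod_mass:.2f} Da")
print(f"mass difference:         {delta:.2f} Da "
      f"(one phosphate adds {PHOSPHO:.2f} Da; agreement within m/z rounding)")
```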
Table 2. Kinases predicted for the identified Cav3.2 phosphorylation sites (for example, S32 in the N-terminus: CK1 (S-X-X-S/T), ERK (P-X-S/T-P, P-E-S/T-P)). a Prediction was conducted using the entire amino acid sequence of human Cav3.2 through Phosida.
To further investigate the phosphorylation regulation of Cav3.2 at S2137, we generated an antibody specifically targeting the phosphorylation at this site. To confirm the antibody's specificity for phospho-S2137, we expressed Flag-tagged wild-type, S1198A mutant, and S2137A mutant constructs of Cav3.2 in HEK293 cells. Compared with the untransfected control, the wild-type and S1198A mutant generated phospho-S2137 antibody signals. However, similar to the untransfected control, the S2137A mutant failed to generate signals using the phospho-S2137 antibody (Figure 3C). In alignment with the outcomes from our LC-MS/MS analysis, the phospho-S2137 antibody exhibited robust reactivity with the Flag-tagged full-length Cav3.2 following CaMKII treatment (Figure 3D). Furthermore, co-incubation with calcineurin led to a reduction in the phospho-S2137 antibody signal (Figure 3D). Our findings suggest that S2137 of full-length Cav3.2 undergoes phosphorylation by CaMKII and dephosphorylation by calcineurin.
To investigate the native phosphorylation of S2137 of Cav3.2, we used mouse CAD cells, which express endogenous Cav3.2 [7,29].The homolog sequences of human, rat, and mouse were aligned in the regions around human Cav3.2S2137.In mice and rats, the homologous sites of human Cav3.2S2137 are also serine residues and have CaMKII recognition motifs (Figure 4A).To detect the endogenous Cav3.2 S2137 phosphorylation, we employed the phospho-S2137 antibody.We used KCl depolarization to increase the intracellular calcium concentration and CaMKII activity of mouse CAD cells [7,30].A basal phospho-S2137 Cav3.2 signal in control CAD cells was detected by the antibody.When CAD cells were depolarized by KCl, the phosphorylation of Cav3.2 S2137 increased (Figure 4B).Our results suggest that there is endogenous phosphorylation of Cav3.2 S2137, and membrane depolarization of the neuronal cell line enhances the phosphorylation of Cav3.2 S2137.
Effect of S2137 Phosphorylation on the Functional Properties of Cav3.2
To investigate the impact of Cav3.2 S2137 phosphorylation on the calcium current properties of Cav3.2, we expressed the S2137D phosphorylation mimic mutant in HEK293 cells.Additionally, we validated the functional regulation of Cav3.2 through calcineurinmediated dephosphorylation using the specific phosphatase inhibitor CSA.The voltagegated channel properties of wild-type and phosphorylation-mimicking mutants of Cav3.2 were compared using a whole-cell voltage clamp.Transfected cells were held at −90 mV and then subjected to test potentials.The representative current traces exhibited typical T-type calcium channel behavior (Figure 5A).The current densities of the Cav3.2S2137D mutant were significantly smaller than those of wild-type Cav3.2(Figure 5B).Inhibition of calcineurin-mediated dephosphorylation by CSA increased phosphorylation on S2137 of Cav3.2 (Figure 2A) and led to a reduction in calcium current densities of wild-type Cav3.2(Figure 5B).CSA failed to affect the current densities of S2137D-phosphorylation-mimicking Cav3.2, and this suggests that phosphorylation on S2137 of Cav3.2 is sufficient to inhibit the Cav3.2 calcium channel function.Although CSA also increased phosphorylation on S1198 of Cav3.2 (Figure 2A), the S1198E-phosphorylation-mimicking Cav3.2 itself could not significantly reduce the current densities of Cav3.2 unless further inhibiting the calcineurin-mediated dephosphorylation with CSA (Figure 5B).The above findings indicate that in the regulation of Cav3.2 current density by calcineurin, S2137 holds greater significance compared to S1198.Additionally, the voltage-dependent activation and steady-state inactivation curves indicated similar calcium channel gating properties between wild-type and S2137D Cav3.2 (Figure 5C).Calcineurin binds to Cav3.2 [17] and also dephosphorylates Cav3.2.In Figure 2B, the phosphorylation levels on S1198 and S2137 of Cav3.2 were increased in the calcineurin-binding-deficient 9A mutant of Cav3.2.To distinguish between the effects of calcineurin binding and dephosphorylation, we introduced the phospho-deficient S1198A and S2137A mutants into the calcineurin-binding-deficient 9A mutant of Cav3.2.In the 9A mutant, single mutations in S1198A or S2137A led to increased current densities of Cav3.2, but these changes did not reach statistical significance.However, when both sites were mutated to S1198AS2137A, the current density was significantly increased (Figure 5D).Our results suggest that phosphorylation of S2137 of Cav3.2 inhibits the current densities of Cav3.2 calcium channels, and dephosphorylation of Cav3.2 by calcineurin enhances the current densities of Cav3.2 calcium channels.
Discussion
Previously, the phosphorylation of Cav3.2 by various kinases was elucidated [9].In this study, we identified dephosphorylation sites on Cav3.2 by calcineurin, both in vitro and in vivo.We discovered that calcineurin dephosphorylates the previously identified CaMKII target site, S1198, on Cav3.2.Additionally, we revealed that a novel CaMKII target site, S2137, on Cav3.2 is also subjected to dephosphorylation by calcineurin.To specifically recognize phospho-S2137 Cav3.2, we generated an antibody, and with its application, we confirmed that membrane depolarization increases the phosphorylation of Cav3.2 at S2137.Lastly, we observed that S2137 phosphorylation modulates the calcium channel function of Cav3.2.
In this study, our findings indicate that the residues in the C-terminus of Cav3.2 undergo more significant dephosphorylation by calcineurin when compared to the residues in the II-III loop.The docking of calcineurin to its substrates is a crucial step in the dephosphorylation of various calcineurin targets [31].Moreover, the specificity of calcineurinmediated dephosphorylation relies more on the structural characteristics of substrates rather than a specific consensus sequence [32].Notably, our findings demonstrate that the sites on Cav3.2 dephosphorylated by calcineurin lack a distinct sequence pattern.Since the substrate-binding site is located within the catalytic domain of calcineurin, a higher degree of dephosphorylation is expected in the C-terminus of Cav3.2, as observed in our study.Interestingly, the phosphorylation level of Cav3.2 at S2188, which is situated close to the PCISVE (2190-2195) calcineurin binding motif of Cav3.2, was found to be augmented by the addition of calcineurin.This observation raises the possibility that the protein/protein binding region might create an environment conducive to stabilizing the phosphorylated motifs.
Previously, CaMKII had been identified as the enzyme phosphorylating S1198 of Cav3.2, a process that facilitates the opening of channels near the membrane potential [11,12].However, in the current study, we uncovered that S1198 of Cav3.2 can also be targeted for dephosphorylation by calcineurin.Moreover, our investigation revealed a previously unknown phosphorylation target of CaMKII, S2137 of Cav3.2, which, interestingly, is also subject to dephosphorylation by calcineurin.Notably, when analyzing CAD cells that naturally express Cav3.2, we detected an increased signal of phospho-S2137 Cav3.2 antibody after membrane depolarization caused by KCl.Given that membrane depolarization prompts the opening of Cav3.2, our findings suggest that calcium influx through these channels might stimulate the phosphorylation of Cav3.2 by CaMKII, rather than inducing dephosphorylation by calcineurin.In earlier investigations, we observed a peak binding of calcineurin with Cav3.2 at a calcium concentration of 30 µM, along with 20% binding at a calcium concentration of 1 µM [17].Given the usual cytoplasmic calcium concentration span from 0.1 µM in resting cells to 1 µM in depolarized cells [33], it becomes plausible that the activation of calcineurin could take place in scenarios where there is an excessive influx of calcium through Cav3.2.Moreover, given that CaMKII binds to the II-III loop where S1198 is located, and calcineurin binds to the C-terminus where S2137 is situated, it is plausible that CaMKII would have a preference for phosphorylating S1198 over S2137, while calcineurin could have a predilection for dephosphorylating S2137 rather than S1198.These spatial arrangements of upstream regulators and downstream target sites contribute to the nuanced fine-tuning of the Cav3.2 channel function.We are of the opinion that the dephosphorylation of Cav3.2, in conjunction with its phosphorylation by CaMKII or potentially other kinases, plays a crucial role in maintaining the functional homeostasis of Cav3.2.This is particularly significant considering its involvement in conditions such as chronic pain, autism, epilepsy, and primary aldosteronism [4,5].
Owing to advancements in mass spectrometry technology, the identification of phosphorylation sites in proteins of interest is now a commonly conducted practice [34].Our study underscores the potency of mass spectrometry technology in uncovering new phosphorylation sites, even within proteins that have been extensively studied before.The ability to detect previously undiscovered phosphopeptides could arise from variations in enzyme digestion techniques for membrane proteins, phosphopeptide enrichment strategies, and mass spectrometry analysis protocols.Consequently, it remains a challenge to exhaustively identify all phosphorylation sites of a purified protein using a singular analytical approach.Enhancing identification outcomes can be achieved through a combination of diverse methods involving protein digestion, phosphopeptide enrichment, and mass spectrometry analysis.Previously, Blesneac et al. identified 34 distinct phosphorylation sites from rat brains and 43 phosphorylation sites from human Cav3.2 overexpressed in HEK293T cells using mass spectrometry technology [26].In this study, we identified 36 phosphorylation sites in human Cav3.2, and among them, 8 phosphorylation sites are novel, to our knowledge.The potential implications of these Cav3.2 phosphorylation sites can be speculated by comparing them with variant sequences of Cav3.2 from humans with phenotypes in the ClinVar database [35].In addition to identification, our study expands the scope by incorporating phosphopeptide quantification, allowing us to uncover novel target sites of calcineurin and CaMKII.Furthermore, this phosphopeptide quantification strategy revealed varying fold changes among the target sites, indicating differences in the prioritization of phosphorylation or dephosphorylation among these targets.Regarding the importance of functional implications, it is worth noting that a search in the ClinVar database unveiled mutations at calcineurin-targeted sites, including S29F, S1198D, S1999F, S2188N, and S2222Y.These mutations have been linked to conditions such as type IV familial hyperaldosteronism and idiopathic generalized epilepsy in the ClinVar database.
Certain clinical agents are categorized as T-type calcium channel blockers and are used for treating epilepsy and hypertension [36].Moreover, T-type calcium channel blockers also exhibit promising potential for pain management [37].Given that there are three subtypes of T-type calcium channels in humans, namely Cav3.1, Cav3.2, and Cav3.3, the development of subtype-specific inhibitors for these channels is considered essential for both therapeutic and research purposes [38].Research has shown that intrathecal administration of the deubiquitination target peptide of Cav3.2 to mice resulted in an analgesic effect in the context of neuropathic and inflammatory pain [7].Cell-permeable phosphopeptides have been employed to either inhibit or stimulate intracellular signaling pathways [39].We are confident that identifying Cav3.2 phosphopeptides regulated by kinases or phosphatases will advance our comprehension of channel regulation and consequently contribute to the development of treatment strategies.
Conclusions
The current study has unveiled the sites on Cav3.2 channels that undergo dephosphorylation by calcineurin.Among these calcineurin-dephosphorylated residues, S1198, situated in the II-III loop of Cav3.2, had been previously identified as a target site for CaMKII phosphorylation.Additionally, we have identified a novel site, S2137, located in the C-terminus of Cav3.2, which is both phosphorylated by CaMKII and dephosphorylated by calcineurin.Notably, membrane depolarization in mouse CAD cells led to the phosphorylation of S2137, a phenomenon confirmed by the specific antibody designed for this purpose.Furthermore, our study delved into the functional implications associated with S2137 phosphorylation.
Figure 1 .
Figure 1.Amino acid residues on human Cav3.2 dephosphorylated by calcineurin in vitro.(A) MSMS spectrum of a doubly charged ion with m/z 873.41 matched a phosphopeptide belonging to human Cav3.2, spanning residues 2135 to 2149, with phosphorylation on S2137.The fragment ions experiencing a neutral loss are denoted by an asterisk (*).(B) Cav3.2 phosphorylation sites identified in this study.Phosphorylation of residues highlighted in green indicates a decrease, while phosphorylation of residues highlighted in red indicates an increase in response to calcineurin treatment.The phosphorylation sites emphasized in bold italic font were previously unidentified, to our knowledge.The experiment was repeated twice.The numbers of unique ions/matched spectra were 4/8, 1/3, 3/5, 4/8, 4/19, and 2/6 for S2137, S1999, S2222, S2188, S1198, and S1144, respectively.(C) Selective ion chromatograms (XICs) of ions with indicated m/z at specific liquid chromatography (LC) retention times corresponding to the elution times of the identified phosphopeptides in Table1.XICs of Flag-Cav3.2treated with or without calcineurin were compared.(D) A decrease in the phosphorylation of a di-phosphorylated peptide was accompanied by an increase in the signal of its singly phosphorylated counterpart.The experiment was repeated twice.The numbers of unique ions/matched spectra were 2/6 and 2/6 for S29S32 and S32, respectively.
Figure 2 .
Figure 2. Regulation of Cav3.2 phosphorylation by calcineurin enzyme activity and binding function in HEK293 cells.(A) Effect of calcineurin inhibition on Cav3.2 phosphorylation.HEK293 cells were transfected with Flag-Cav3.2for 24 h followed by a 24 h inhibition of calcineurin using cyclosporine A (CSA).The experiment was repeated twice.The numbers of unique ions/matched spectra were 3/10 and 2/4 for S2137 and S1198, respectively.(B) Disruption of calcineurin binding function and Cav3.2 phosphorylation.HEK293 cells were transfected with Flag-tagged wildtype Cav3.2 or a calcineurin-binding deficient mutant, 9A-Cav3.2, for 48 h.XICs of peptides with indicated phosphorylation sites were compared.The experiment was repeated twice.The numbers of unique ions/matched spectra were 3/8 and 2/4 for S2137 and S1198, respectively.
Figure 3 .
Figure 3. Identified S2137 as a novel CaMKII phosphorylation site on human Cav3.2.(A) CaMKII triggered a mobility shift in the SDS-PAGE gel for GST-CII, the GST-fusion protein derived from the C-terminus of Cav3.2.Incubation of GST-CII (1 or 2µg) with or without CaMKII was conducted.Molecular weight markers were designated in kD.n = 3 for control and CaMKII.(B) CaMKII increased the ion signal of the phospho-S2137 peptide.Tryptic peptides extracted from GST-CII gel bands were analyzed by LC-MS/MS.An ion with m/z 873.40 was identified as the phospho-S2137 peptide, while another ion with m/z 555.95 corresponded to the unmodified peptide counterpart.The experiment was repeated twice.The numbers of unique ions/matched spectra were 4/12 for S2137.(C) Verification of phospho-S2137 antibody specificity.The specificity of the antibody was confirmed by the signals generated from wild-type or mutant forms of Flag-tagged Cav3.2 expressed in HEK293 cells.n = 3 for each group.(D) Cav3.2 S2137 phosphorylation by CaMKII and dephosphorylation by calcineurin in the full-length Cav3.2 were confirmed using phospho-S2137 antibody.n = 3 for each group.
Figure 4 .
Figure 4. Phosphorylation of S2137 in endogenous Cav3.2 induced by membrane depolarization through KCl stimulation.(A) Alignment of Cav3.2 sequences from human, rat, and mouse around human Cav3.2S2137.(B) KCl-induced membrane depolarization led to phosphorylation of S2137 in the native Cav3.2 of mouse CAD cells.Cells were stimulated with 50 mM KCl for 5 min.n = 3 for control and KCl.
Table 1. Human Cav3.2 phosphorylation sites identified from indicated mass spectra. a The MSMS spectrum is designated as Supplementary Figure S2. b Exp_mz: observed m/z ratio. c Phosphorylation sites are indicated with "p" before the abbreviation of serine (S) or threonine (T). d Amino acid positions denoting the beginning and end of phosphopeptides in human Cav3.2. e Amino acid positions indicating the sites of phosphorylation. f The Mascot score of the identified phosphopeptide. g The Mascot delta score of the specified MSMS spectrum. | 2023-10-27T15:32:28.570Z | 2023-10-25T00:00:00.000 | {
"year": 2023,
"sha1": "fd73d9816ad930ccfa89ffc820d9004f0c2e8f80",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/11/2891/pdf?version=1698236132",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00ebc93e6c3f4baed6148a9137f22d4f0749b8e9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
218502124 | pes2o/s2orc | v3-fos-license | Non-commutative black holes of various genera in connection variables
We consider black hole interiors of arbitrary genus number within the paradigm of non-commutative geometry. The study is performed in two ways: One way is a simple smearing of a matter distribution within the black hole. The resulting structure is often known in the literature as a ``model inspired by non-commutative geometry''. The second method involves a more fundamental approach, in which the Hamiltonian formalism is utilized and a non-trivial Poisson bracket is introduced between the configuration degrees of freedom, as well as between the canonical momentum degrees of freedom. This is done in terms of connection variables instead of the more common ADM variables. Connection variables are utilized here since non-commutative effects are usually inspired from the quantum theory, and it is the connection variables that are used in some of the more promising modern theories of quantum gravity. We find that in the first study, the singularity of the black holes can easily be removed. In the second study, we find that introducing a non-trivial bracket between the connections (the configuration variables) may delay the singularity, but not necessarily eliminate it. However, by introducing a non-trivial bracket between the densitized triads (the canonical momentum variables) the singularity can generally be removed. In some cases, new horizons also appear due to the non-commutativity.
I Introduction
The general theory of relativity has, to date, robustly passed a number of experimental tests. These tests are no longer limited to the arena of weak-field gravity but now, due to more recent gravitational wave detection events, also include strong-field regimes such as black hole mergers [1]. As successful as general relativity is, there should be some way to reconcile the fundamental properties of matter fields sourcing gravity (which at the fundamental level are quantum in nature) with the gravitational field that the matter produces. This compatibility could come from a theory of quantum gravity. General relativity, however, possesses the fundamental symmetry of background independence, and this makes the theory difficult to quantize in traditional manners [2], [3]. At the moment there are a number of candidate theories of quantum gravity which are in various stages of development [4]-[7], although none can yet be seen as a complete theory of quantum gravitation. Because of this, it is useful at the classical level to attempt to glean what some effects of a quantum theory of gravity may be.
One issue that is believed to be resolved in a quantum gravity theory is that of the gravitational singularities predicted by various classical theories of gravity. The most famous of these singularities reside in the realm of early universe cosmology, and black hole interiors. It is the latter issue that we wish to discuss in this paper.
The fundamental mathematical object on which quantum theory is based is the non-trivial commutator between a system's configuration variables and associated canonical momentum variables. At the level of classical mechanics this manifests itself as a non-trivial Poisson bracket. The field of non-commutative geometry augments this structure by introducing, in addition to the usual bracket between configuration and momentum variables, a non-trivial bracket between the configuration variables themselves, which at the level of usual quantum mechanics would be a non-trivial commutator of the form $[\hat{x}_a, \hat{x}_b] = i\,\epsilon_{ab}{}^{c}\,\theta_c$ (1), where $\theta_c$ is a vector whose entries measure the amount of non-commutativity between the various coordinates. The bracket (1) of course implies an uncertainty relation between different coordinates, and sets a limit on the amount of localization a particle may have. A measurement along one axis to high precision comes at the expense of losing some information along another axis. Hence, geometry in this sense really does become non-commutative.
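To make the localization statement concrete (this step is not written out in the paper, and the index placement in (1) above is our reconstruction), applying the standard Robertson relation $\Delta A\,\Delta B \ge \tfrac{1}{2}\,|\langle[\hat A,\hat B]\rangle|$ to the coordinate commutator gives the illustrative bound

\[ \Delta x_a \, \Delta x_b \;\ge\; \tfrac{1}{2}\,\bigl|\epsilon_{ab}{}^{c}\,\theta_c\bigr| , \]

so that sharp localization along one axis forces a correspondingly large spread along another whenever the relevant component of $\theta_c$ is non-zero.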
It is natural to then further extend the theory to include a non-trivial bracket between the canonical momenta as $[\hat{p}_a, \hat{p}_b] = i\,\epsilon_{ab}{}^{c}\,\beta_c$ (2), leading to a similar uncertainty between the measurement of momenta in different principal directions. Non-commutative quantum theories have been studied in various fields of physics. The original paper seems to be the pioneering work of Hartland Snyder [8], and since then there has been much application of non-commutative geometry to theoretical physics (see [9]-[12] and references therein). At the classical level the new commutators should manifest themselves as an extension of the usual Poisson algebra of ordinary classical mechanics, leading to a type of "non-commutative classical mechanics". In non-commutative mechanics the usual Poisson algebra is deformed via the introduction of a deformed Moyal product. That is, the brackets of non-commutative mechanics are calculated via a deformed bracket (3) built from the Moyal product (4), in which operators with a tilde operate only on tilde coordinates and un-tilded operators operate on un-tilded coordinates. In the end, the two sets of coordinates are made coincident. The matrix $w_{ab}$ appearing there represents the deformed symplectic form (5). We note here that there is actually a further correction to this symplectic form, but it is proportional to the product $\theta_a \beta_b$ and hence we ignore it, as both these parameters are assumed to be small [13]. It may be seen by explicit calculation that in the limit $\theta_c = 0 = \beta_c$ the expression in (4) yields the usual Poisson brackets of ordinary classical mechanics. Explicitly, evaluating (3) and (4) using (5) yields the deformed bracket algebra (6). Reviews of non-commutative mechanics may be found in [13] and [14] and references therein.
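As a small, self-contained illustration of how such a deformed algebra behaves (our own sketch, not code from the paper: the two-degree-of-freedom phase space, the scalar parameters theta and beta, and the explicit 4x4 matrix W below are assumptions, and the O(theta*beta) cross-correction is dropped exactly as described above), one can build brackets directly from a deformed symplectic matrix and check that the canonical algebra is recovered in the commutative limit:

import sympy as sp

# Phase-space variables z = (x1, x2, p1, p2) and assumed constant deformation parameters.
theta, beta = sp.symbols('theta beta', real=True)
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', real=True)
z = [x1, x2, p1, p2]

# Assumed deformed symplectic matrix: {x1, x2} = theta, {x_a, p_a} = 1, {p1, p2} = beta.
W = sp.Matrix([[0,      theta,  1,     0],
               [-theta, 0,      0,     1],
               [-1,     0,      0,     beta],
               [0,      -1,     -beta, 0]])

def bracket(f, g):
    """Deformed Poisson bracket {f, g} = sum_ij W_ij (df/dz_i)(dg/dz_j)."""
    return sp.simplify(sum(W[i, j] * sp.diff(f, z[i]) * sp.diff(g, z[j])
                           for i in range(4) for j in range(4)))

print(bracket(x1, x2))                  # theta   (configuration-configuration bracket)
print(bracket(p1, p2))                  # beta    (momentum-momentum bracket)
print(bracket(x1, p1))                  # 1       (canonical part unchanged)
print(bracket(x1, x2).subs(theta, 0))   # 0       (commutative limit restores the usual algebra)

The same structure carries over to the minisuperspace variables used later in the paper, with W enlarged to the appropriate number of degrees of freedom.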
The transition from particle mechanics to field theories is not necessarily straightforward, particularly in the realm of gravitation [15]-[20]. However, if one symmetry reduces the system to minisuperspace models, then it can be argued that one augments the field Poisson algebra in a similar manner to what is done in particle mechanics [21]-[24]. That is, in a minisuperspace model with fields $\psi_a$ and corresponding canonical momenta $\pi_b$ we have the analogously deformed algebra (7i)-(7iii). As the brackets are modified from the canonical ones, it is possible that such an algebraic deformation introduces an anomaly in the gravitational constraint algebra. In the symmetry-frozen homogeneous scenarios variable deformations generally do not introduce such anomalies, as the algebra trivializes due to the vanishing of spatial derivatives and the ability to set the shift vector globally to zero. The situation likely needs further study under algebraic deformations, but it is generally believed that at high energies non-commutative effects would anyway alter the symmetry of the low-energy theory [25], [26], so it is not clear if one should demand low-energy symmetries to hold in the regime where non-commutative effects become important. Still, one needs to be cautious in interpreting results in such potentially symmetry-broken theories. The general issue for the case of variable deformations is summarized in [27].
This manuscript is laid out as follows. In section II we analyze models where the black holes are supported by a smeared-out distribution of material, which is sometimes performed in the literature as an approximation of non-commutative effects on the matter fields due to the non-localization that non-commutative geometry introduces. In section III the non-commutativity is manifestly included in the brackets of the Poisson algebra in the configuration and momentum variables of the gravitational Hamiltonian system. The study there is performed in the connection formalism as this formalism is seen as a promising avenue to a theory of quantum gravity. Finally we conclude with a brief summary of the findings.
II Smearing of the matter distribution
The method used here is often said to be "inspired by non-commutative geometry". The idea here is quite simple and straightforward and mainly serves as a segue to the Hamiltonian analysis of the next section. For concreteness in setting up the problem and method, we will assume at the moment that the black hole is a spherically symmetric one, but the ideas apply to all the types of metrics considered in this work. Consider the Einstein equations in mixed form (8). If one restricts these equations to spherical symmetry by utilizing the line element $ds^2 = -\exp(\alpha(r,t))\,dt^2 + \exp(\beta(r,t))\,dr^2 + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2$ (9), then equations (8) may be manipulated to yield the general solution (10i)-(10iv), assuming the stress-energy tensor components $T^{t}{}_{t}$ and $T^{r}{}_{r}$ are free parameters [28], [30]. In these expressions a comma denotes partial differentiation. Equation (10iii) is defined from the r-t Einstein equation, and (10iv) is defined from the conservation law. Now, the Schwarzschild metric may be seen as a solution to the above equations with a "point mass" located at r = 0; that is, one may prescribe the point-source stress-energy (11). It is straightforward, by inserting (11) into equations (10i)-(10ii), to see that the resulting metric functions, $e^{\alpha}$ and $e^{\beta}$, yield, after a trivial re-scaling of the t coordinate, the famous Schwarzschild metric. In non-commutative geometry inspired models, one smears the matter distribution (11) on a scale proportional to the coordinate non-commutativity parameter, θ. The argument is that the matter is not completely localized due to the uncertainty principle between coordinates brought on by the non-commutativity. Such inspired models have been studied in [31]-[33] for spherical black holes without cosmological constant, and in [34], [35] for rotating black holes. Similar studies have been performed in [36]-[39] with respect to wormholes.
We wish to extend the study here to encompass black holes beyond spherical, both in shape and in topology. This is done for the reason of consistency. That is, one wishes to study if and how singularities are affected in as many scenarios as possible to determine how universal the non-commutative effects are. One can then make more general statements about non-commutativity. We also include a cosmological constant, since in four dimensions a cosmological constant is required for black holes of exotic topology [40] - [43].
As we are interested specifically in the singularity issue of black holes, we will be concentrating on the interior region. First we wish to re-write the line element (9) in a form more appropriate for the study of black hole interiors and various topologies. The form is as follows: and reflects the fact that the interior region is time dependent and that the exterior radial coordinate, r, is timelike in the interior region (we do not consider cases here with inner horizons) 2 . We are considering time dependence only, due to the fact that we are smearing classical non-rotating vacuum black holes, save for the "point" source, whose corresponding interiors are also homogeneous. The constants c 0 and d 0 dictate the compatible topology of the spacetime's two-dimensional subspaces. The various cases are as follows: In this scenario ( , φ) submanifolds are tori (and the sub-manifolds for this case are intrinsically flat). iii) d 0 = 1, c 0 = 1: In this case ( , φ) sub-manifolds are surfaces of constant negative curvature of genus g > 1, depending on the identifications chosen. Such surfaces may be compact or not [43], [44].
In the spherical case, such solutions are sometimes referred to in the literature as "T-spheres" [45] - [47] and the time dependent domain inside the event horizon is sometimes referred to as the "T-domain" of the black hole.
Einstein's equations for the line element (12) yield a general solution analogous to (10i)-(10iv), assuming here time dependence only. In the case of a "point" source in the interior region, these solutions yield, after a rescaling of the y coordinate, the commutative black hole solution (14). Such black hole solutions have been studied in detail in [40]-[43], and within quantum gravity theories in [48]-[51]. The non-commutative smearing is often performed by replacing the "point" source with a Gaussian or Lorentzian whose characteristic width is of the scale of the non-commutativity parameter, θ. Without guidance from experiment, this is usually taken to be of the order of the Planck length. We consider here two profile curves for the $T^{y}{}_{y}$ matter component as functions of τ: a Lorentzian profile (15i) and a Gaussian profile (15ii). Further, since the matter profile no longer vanishes abruptly, we relate the above stress-energy components to their corresponding local energy densities via an equation of state, for which we take the polytropic form; that is, in both the Lorentzian and Gaussian scenarios we prescribe an energy density via a polytropic relation in which k and γ are constants. This form works well in idealized studies of stellar structure [52]-[55] and seems a natural choice for the type of exotic "star" we are studying here. By using (15i) and (15ii) in equations (13i) and (13ii) one arrives at analytical solutions for the Lorentzian case and (18) for the Gaussian scenario. Solutions for $e^{B(\tau)}$ were also obtained but, being rather complicated and expressed in terms of quadratures, are not displayed here.
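For orientation only, the following sketch evaluates two smeared profiles of the kind described above and checks numerically that each carries the full mass M while remaining finite at the origin. The specific Gaussian and Lorentzian forms, their normalizations, and the flat integration measure are assumptions borrowed from the broader non-commutative-inspired literature; they are not the paper's equations (15i) and (15ii):

import numpy as np
from scipy.integrate import quad

M, theta = 1.0, 0.1   # assumed mass and non-commutativity scale (Planck-scale in the text)

def rho_gaussian(r):
    # Commonly used Gaussian smearing, normalized so the 3D integral equals M.
    return M / (4.0 * np.pi * theta) ** 1.5 * np.exp(-r**2 / (4.0 * theta))

def rho_lorentzian(r):
    # A Lorentzian-type profile, likewise normalized so the 3D integral equals M.
    return M * np.sqrt(theta) / (np.pi**2 * (r**2 + theta) ** 2)

for name, rho in [("Gaussian", rho_gaussian), ("Lorentzian", rho_lorentzian)]:
    total, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), 0.0, np.inf)
    print(f"{name}: integrated mass = {total:.4f}, rho(0) = {rho(0.0):.4f} (finite)")

The point of the exercise is simply that the delta-function source is replaced by a distribution of characteristic width of order sqrt(theta) whose central density is finite, which is what lies behind the singularity removal in the inspired models.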
Of particular interest here is to study the properties of what replaces the singularity of the commutative theory. To facilitate this we calculate the components of the Riemann curvature tensor in an orthonormal (hatted) frame; the h's appearing in that transformation represent the components of a local orthonormal tetrad, which we choose specifically to be coincident with the coordinate directions. The resulting orthonormal Riemann components are rather lengthy and do not reveal much due to their complication. It is useful therefore to present the lowest-order terms in a series expansion about the commutative solution's singular point (τ = 0). Such an expansion yields the components (20i)-(20iv) for the Lorentzian case and (21i)-(21iv) for the Gaussian case, plus those related by symmetries. The results above are "universal" at τ = 0 in the sense that the topological parameter, $d_0$, does not contribute at zeroth order. This parameter comes in at order $\tau^2$ in $R_{\hat{\tau}\hat{y}\hat{\tau}\hat{y}}$ (although this term is not shown due to its length), and at order $\tau^4$ or higher in the other components.
It may be noted that none of the components in (20i)-(21iv) are singular for finite θ and therefore the classical singularity present in the commutative theory is removed. We should point out here that this result is not really surprising. One has excised the singular distribution of (11) and replaced it with a smooth distribution. In the T-domain this forced smearing is accompanied by the expected energy condition violations which circumvent the singularity theorems for such spacetimes. Therefore, at the level of non-commutative geometry inspired models of black holes, the non-commutativity introduces energy condition violation on scales set by the non-commutativity parameter θ.
We proceed next to a more rigorous analysis where the non-commutativity is truly manifest in the algebra of the field variables.
III Hamiltonian evolution of black hole interiors
In this section we shall study the effects of non-commutativity by directly supplementing the usual Poisson algebra of the gravitational Hamiltonian system with the additional structure on the configuration and momentum variables as in (7i-iii). We work here in the connection variables consisting of the su(2) connection, which we denote A i a , and its conjugate momentum, the densitized triad, denoted by E a i 3 . These variables are chosen since they are the variables utilized in the theory of loop quantum gravity. At the quantum level, within the paradigm of non-commutative geometry, the commutator between the configuration variables is taken to be non-trivial. As well, one may also take the commutator between the conjugate momenta as non-trivial. Working "backwards" towards the corresponding classical theory, these non-trivial commutators should manifest themselves as non-trivial Poisson brackets.
It is generally accepted that loop quantum gravity puts forward a more promising approach towards a theory of quantum gravity than does the original Wheeler-DeWitt theory [2], [56], which utilizes ADM variables. Therefore, in light of this we choose to work in the variables of loop quantum gravity. With the modification of the Poisson brackets introduced by the extra non-commutativity, it is possible that working in these variables yields a different theory from the corresponding non-commutative ADM theory.
III.i Black holes in connection variables
Here we briefly review the mathematical structure of black holes in connection variables. In the connection variables which are utilized in the canonical formulation of loop quantum gravity, one begins with a 3 + 1 decomposition of space-time where the metric is writ-ten in the usual way with N the lapse and N a the shift vector. One then writes the resulting action in terms of the Ashtekar variables [60]. These variables comprise of a generalized su(2) connection A i a and a densitized triad E a i . The connection field plays the role of the configuration variable, and is related to more familiar quantities as follows: where Γ i a is the "fiducial" spin connection and K i a is the densitized extrinsic curvature with K ab the usual extrinsic curvature of a τ = const. surface. The quantity γ is known as the Barbero-Immirzi parameter. In the classical theory its value is arbitrary (though non-zero) but in the resulting quantum theory of loop quantum gravity, it must be set somehow. This is usually done by calculations of black hole entropy within the paradigm of loop quantum gravity and setting the result to one-quarter the area of the black hole [61]- [65]. The momentum variable, E a i is related to the threemetric, q ab , via These two variables, A i a and E a i , are then the configuration and momentum variables respectively of the theory, subject to the Poisson algebra with other brackets equal to zero. Via variation of the action with respect to the lapse and shift one obtains the Hamiltonian (S) and diffeomorphism constraints (V b ): The extrinsic curvature quantities in (28i) are replaced with the connection via (23) (the h a i in Γ i a being functions of E a i ).
There is also the internal SU (2) degree of freedom that can be fixed. (The metric, depending on the "square" of the densitized triad via (26), allows for SU (2) rotations which preserve the metric.) This gauge can be fixed via the Gauss constraint: At this stage we need to choose an ansatz for the su(2) connection and the densitized triad which is compatible with our geometries. An appropriate ansatz is provided by the following pair [48], [50], [66]: where τ i represent the SU (2) generators. The functions a I and E I are functions of the interior time variable, τ , only. In terms of (30ii), using (26), a line-element of the form (12) is written as The Gauss constraint (29) for the cases considered here yields just one condition: which we will satisfy here by choosing The diffeomorphism constraint is automatically satisfied in these cases, leaving only the Hamiltonian (scalar) constraint. With the gauge fixing (33), the resulting Hamiltonian constraint (supplemented with cosmological constant term) may be written as It should be noted that when integrating the Hamilto-nian density the spatial variables have been integrated out and therefore the above result contains an (arbitrary) area from the y and integrals. This is set equal to one and we show below that this does not spoil the Hamiltonian evolution of the system. At this stage one has all the ingredients required to study the evolution of the interior region of the black holes. The evolution proceeds according to the usual Hamiltonian equations of motionȧ 2 = {a 2 , S}, etc. subject to the usual Poisson algebra between the configuration variables, a I , and their corresponding canonical momenta, E I .
III.ii Non-commutative evolution of black holes
Here we study the above gravitational system in connection variables, but where the usual Poisson algebra is augmented by the following brackets: We study scenarios where either θ or β is zero, as well as those where neither parameter is zero. As this is a first study, we take the non-commutative parameters, θ and β to be constants. However, it is possible that they be modified in such a way that they depend on the metric properties (via the densitized triad) of the spacetime. This, for example, could arguably improve the theory by providing a natural way for the brackets to become less significant in low curvature regions. The resulting equations in the noncommutative case are too complex to find analytic solutions so what we are solving here is a classic initial value problem. As such, initial conditions are required in order to study the evolution. We set initial conditions as follows: Note that the coordinate chart in use for the domain of (31) is τ < τ H , where τ H denotes the horizon value of τ and that the commutative solution's singular point is located at τ = 0. The evolution is started far from the singular point, and relatively close to the horizon. We make the assumption that, far from the singular point, non-commutative effects should be small as we know, for example, that the Schwarzschild solution is valid in moderately strong gravitational fields [67]. (In fact commutative general relativity seems to hold well even in the strong field regime [67], [68], so the noncommutative results here really are expected to be manifest only when one is approaching the scale of quantum gravity effects.) Therefore on the initial time surface, which is far from the extremely strong field region, we set the values of of functions a I and E I set to their general relativity values. That is, the following initial values are used: where the densitized triad components have been calculated via comparing (31) and (12). The connection components have been calculated via a rather lengthy calculation utilizing (23), (24) and (25), using metric (12)'s triad pulled back to a τ = const. hypersurface. The function B here is the commutative solution's value given by (14). The lapse, N , is generally arbitrary but as we have set the coordinate system to be that of (14)), we wish to use the time variable proportional to that used in (14). Therefore, we set the lapse equal to γ 2 √ E 3 /E 2 and at this stage one may proceed with the evolution.
The resulting "non-commutative" Hamilton equations of motion follow from the Hamiltonian constraint together with the deformed brackets above. As mentioned previously, these equations are generally too complex to solve analytically, hence we solve them numerically and illustrate the solutions below subject to the initial conditions provided by (36i) and (36ii).
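Since the equations themselves are not reproduced here, the following toy sketch shows only the structure of such an evolution: a phase-space vector is advanced with a deformed symplectic matrix, using an arbitrary placeholder Hamiltonian rather than the gravitational scalar constraint, and invented initial data standing in for (36i)-(36ii). It is meant to indicate how theta and beta enter the equations of motion, not to reproduce the paper's figures:

import numpy as np
from scipy.integrate import solve_ivp

def W(theta, beta):
    # z = (a1, a2, E1, E2); assumed brackets {a1,a2}=theta, {a_i,E_i}=1, {E1,E2}=beta.
    return np.array([[0.0,    theta, 1.0,   0.0],
                     [-theta, 0.0,   0.0,   1.0],
                     [-1.0,   0.0,   0.0,   beta],
                     [0.0,   -1.0,  -beta,  0.0]])

def grad_H(z):
    # Placeholder Hamiltonian H = a1*E1 + a2*E2 + 0.5*(a1**2 + a2**2); gradient w.r.t. (a1,a2,E1,E2).
    a1, a2, E1, E2 = z
    return np.array([E1 + a1, E2 + a2, a1, a2])

def rhs(t, z, theta, beta):
    # Deformed Hamilton equations: dz/dt = W(theta, beta) . grad H(z).
    return W(theta, beta) @ grad_H(z)

z0 = [1.0, 1.0, 0.5, -0.5]   # invented initial data, playing the role of (36i)-(36ii)
for th, be in [(0.0, 0.0), (0.05, 0.05)]:
    sol = solve_ivp(rhs, (0.0, 10.0), z0, args=(th, be), rtol=1e-8)
    print(f"theta={th}, beta={be}: z(t=10) =", np.round(sol.y[:, -1], 4))

Comparing the commutative run (theta = beta = 0) with the deformed one shows directly how the extra off-diagonal entries of W feed the momenta into the configuration equations and vice versa.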
III.ii.1 Non-commutative connection only
Here we briefly summarize the results of the evolution of the above system subject to θ = 0 and β = 0. That is, here the standard theory is augmented with non-trivial configuration bracket only. For each of the three topological compatibilities (spherical, toroidal-/cylindrical/planar, higher genus) the results are shown in the figures 1-3. The figures show a few, but not all, possible scenarios and below in table 1 we summarize all cases. In none of the solutions can the results be evolved indefinitely. In some cases shown there is a true singularity present, with E 3 shrinking to zero, whereas in others it is a (curvature) finite solution (E 3 non-zero) but with a new horizon appearing (see figure captions for details) 4 . In general it is found that if β = 0, and the non-commutativity parameter, θ is fairly large, one may eliminate the singularity, although in some cases an inner horizon results, and we are unable to probe beyond that horizon to glean if there is singular structure hiding behind it. For small enough values of θ the singularity is always present.
III.ii.2 Non-commutative triad only
For the case where the connection remains self-commutative but the triad becomes non-commutative we present the sets of scenarios in figures 4-6, along with a full summary in table 1.
It turns out that for both large and small non-zero values of β the quantity $E_3$ asymptotes to a non-zero constant. The size of the 2D subspaces, governed by the value of $E_3$, depends on the value of β, with larger β values yielding larger volumes. The situation here is somewhat reminiscent of what occurs in effective loop quantum gravity when holonomy corrections are introduced, the main difference being that in the loop quantum gravity scenario the volume of the subspaces oscillates in a damped manner, asymptotically approaching a constant for large negative τ [49]. Although some appear small in the plots, no non-commutative metric component goes to zero in figures 4-6, and this remains true as long as β ≠ 0. The above analyses are somewhat complicated due to all the different possible scenarios; we therefore provide a summary of all the possibilities in Table 1.
IV Concluding remarks
In this manuscript we studied the effects of non-commutative geometry on the interiors of black holes compatible with various topologies. The introduction of non-commutativity was performed in two ways. In the first part of the study a smearing of the gravitating source was performed, mimicking the effects of the non-localization introduced by a non-trivial commutator between the spacetime coordinates. It was found that this smearing was capable of removing the curvature singularity in all scenarios. This result, though, is not that surprising, as one has essentially forced a smoothness onto the system. The resulting matter system, known in the literature as "inspired by non-commutative geometry", will violate the energy conditions in the T-domain, thus circumventing the results of the singularity theorems. However, it does hint that a possible resolution to the singularity issue lies in non-commutative geometry effects.
In the second part of the study the Poisson algebra was directly altered by the introduction of a non-trivial bracket in i) the configuration degrees of freedom only, ii) the momentum degrees of freedom only, and iii) both. It was found that for some cases the singularity is merely delayed, occurring later (earlier in coordinate time) than in the corresponding commutative scenario. However, in many cases some rather interesting results emerge. Either the singularity is removed, or else a new inner horizon forms. In the case of a new horizon, the domain that we are able to study with the method here is also non-singular. Overall, the presence of the parameter β (non-trivial bracket between the triads) is more capable of singularity resolution than the parameter θ (non-trivial bracket between the connection). The results are summarized in table 1. | 2020-05-06T01:00:54.807Z | 2020-05-05T00:00:00.000 | {
"year": 2020,
"sha1": "faf57326cffce70f567eaf1b52f9542f07bd7f28",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "faf57326cffce70f567eaf1b52f9542f07bd7f28",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
8085782 | pes2o/s2orc | v3-fos-license | Identification and functional analysis of SKA2 interaction with the glucocorticoid receptor
Glucocorticoid (GC) receptors (GRs) have profound anti-survival effects on human small cell lung cancer (SCLC). To explore the basis of these effects, protein partners for GRs were sought using a yeast two-hybrid screen. We discovered a novel gene, FAM33A, subsequently identified as a SKA1 partner and involved in mitosis, and so renamed Ska2. We produced an anti-peptide antibody that specifically recognized full-length human SKA2 to measure expression in human cell lines and tissues. There was a wide variation in expression across multiple cell lines, but none was detected in the liver cell line HepG2. A xenograft model of human SCLC had intense staining and archival tissue revealed SKA2 in several human lung and breast tumours. SKA2 was found in the cytoplasm, where it co-localized with GR, but nuclear expression of SKA2 was seen in breast tumours. SKA2 overexpression increased GC transactivation in HepG2 cells while SKA2 knockdown in A549 human lung epithelial cells decreased transactivation and prevented dexamethasone inhibition of proliferation. GC treatment decreased SKA2 protein levels in A549 cells, as did Staurosporine, phorbol ester and trichostatin A; all agents that inhibit cell proliferation. Overexpression of SKA2 potentiated the proliferative response to IGF-I exposure, and knockdown with shRNA caused cells to arrest in mitosis. SKA2 has recently been identified in HeLa S3 cells as part of a complex, which is critical for spindle checkpoint silencing and exit from mitosis. Our new data show involvement in cell proliferation and GC signalling, with implications for understanding how GCs impact on cell fate.
Introduction
Glucocorticoids (GC) act through the GC receptor (GR), a member of the nuclear receptor superfamily of ligandregulated transcription factors (Hollenberg et al. 1985, Weinberger et al. 1985, 1987, Perlmann & Evans 1997, Margolis et al. 2005, Bookout et al. 2006. On activation, the GR is capable of both upregulating and downregulating target gene expression (Ray et al. 1999). The final functional effect of activated GR in a given cell is critically determined by its interactions with a spectrum of co-modulator proteins (Lonard et al. 2004, Wu et al. 2005. Several canonical nuclear receptor interaction motifs are well recognized, including LxxLL and LIM domains (Cheskis et al. 2003, Kassel et al. 2004, but in addition, other interacting partners do not have defined peptide motifs, e.g. RelA (Nissen & Yamamoto 2000, Garside et al. 2004. While the co-modulator families primarily alter the amplitude of the GR effect, e.g. steroid receptor coactivator 1; (SRC1; Stevens et al. 2003), other GR-interacting proteins are responsible for mediating GR action, e.g. activator protein 1 (AP-1; Gottlicher et al. 1998).
GCs are widely used in the initial treatment of patients with lung cancer, primarily for their anti-emetic and antiinflammatory effects, but direct anti-tumour action has also been proposed (Sommer et al. 2007). They have profound inhibitory effects on cell cycle progression and cell proliferation in human lung cancer cell lines (Hofmann et al. 1995). GCs, such as dexamethasone (Dex), have also been shown to have anti-tumourigenic effects in mouse models of lung cancer (Witschi et al. 2005). The mode of action in this model is unclear but it is thought that GCs affect cell differentiation. Alternatively, GCs may be acting in the lung cancer models by inhibiting cell proliferation, given their effects in human lung cancer cells (Hofmann et al. 1995). We have characterized small cell lung carcinoma (SCLC) cells that are deficient in GR expression and resistant to GC action (Ray et al. 1994, Hofmann et al. 1995, Sommer et al. 2007. Importantly, overexpression of the GR in these cells powerfully induced apoptosis (Sommer et al. 2007), suggesting that acquisition of GC resistance is a survival mechanism for human SCLC (Sommer et al. 2007). Thus, the GR is a potentially informative node for novel pathogenic mechanisms in lung cancer.
Our aim was to find novel GR-interacting proteins expressed in the well-characterized SCLC cells, which could provide new insights into the cellular mechanisms associated with their proliferative potential. Using a yeast two-hybrid screen we identified a specific interaction between GR and FAM33A, which has recently been identified as SKA2, a conserved protein involved in the kinetochore complex (Hanisch et al. 2006). Depletion of SKA2 by small interfering RNAs causes the cells to be suspended in a metaphase-like state. This delays the exit from mitosis and the onset of anaphase. From this, the authors propose that the SKA complex is required for stabilizing kinetochore-microtubule attachments and/or checkpoint silencing.
The studies presented here on the identification of SKA2 in human tumour cell lines, its interaction with the GR and the impact of depletion of SKA2 on apoptotic-and proliferationassociated genes suggest SKA2 may play a role in regulation of lung cancer cell proliferation.
All cells were cultured in normal growth medium supplemented with 10% charcoal dextran-stripped FCS (HyClone, Logan, UT, USA) before treatment with Dex.
Yeast two-hybrid screen The yeast two-hybrid screen has been reported before (Garside et al. 2006).
DNA constructs
The pcDNA 3 GR plasmid has been previously described (Ray et al. 1999). The TAT 3 -luc plasmid was a kind gift from Prof. Keith Yamamoto and Dr Jorge Iniguez-Lluhi (Department of Cellular and Molecular Pharmacology, University of California, San Francisco) (Iniguez-Lluhi et al. 1997). The cMyc-SKA2 construct contained the open reading frame inserted into the N-terminal cMyc tagged vector cytomegalovirus (CMV)-Tag3B (Stratagene). A GST-SKA2 fusion plasmid was made by inserting SKA2 cDNA into the pGEX-5X-1 vector (Amersham Pharmacia Biotech).
BglII and HindIII restriction sites added to the sequences were used for subsequent ligation into pSUPER. The oligonucleotides were annealed by adding 3 mg of each into 48 ml annealing buffer (100 mM sodium chloride, 50 mM HEPES (pH 7.4)). The mixture was then incubated at 90 °C for 4 min, followed by 70 °C for 10 min. The mixture was then slowly cooled to 10 °C. The shRNA sequences were then ligated into pSUPER using T4 DNA ligase (Roche).
Interaction between Ska2 and GR in vitro
GST and GST-SKA2 were expressed in Escherichia coli strain DH5a (Promega Corp.) and were purified as described (Stevens et al. 2003).
Immunofluorescence HEK293 cells infected with a GR-eYFP-expressing retrovirus and A549 cells were grown on coverslips and treated with either DMSO (vehicle), Dex (100 nM) or RU486 (100 nM) for 1 h. The cells were washed with 1× PBS and fixed with 4% paraformaldehyde. Human lung tissue was obtained from the North West Lung Centre, Medicines Evaluation Unit, Wythenshawe Hospital, Manchester, UK. After fixation with 4% paraformaldehyde, tissue sections and cells were permeabilized with TD buffer (10 mM Tris-HCl (pH 8.0) and 50 mM NaCl) and blocked with TD buffer containing 1% BSA and 0.2% Triton X-100. A 1:400 dilution of SKA2 anti-peptide antibody (see above) in washing buffer (TD buffer plus 1% BSA and 0.05% Triton X-100) was added for 2 h at room temperature. Images were taken using a Bio-Rad MRC1024 confocal scanning microscope, as previously described (Garside et al. 2006).
Quantification of nuclear/cytoplasmic staining HEK293 cells from triplicate slides of three separate experiments were scored for nuclear and cytoplasmic localization of both Texas Red-conjugated proteins and eYFP-tagged proteins. Treated and untreated cells were assigned 'predominantly cytoplasmic' (C) or 'predominantly nuclear' (N) for both SKA2 and GR-eYFP by a masked observer. These numbers were expressed as a percentage of the total cell number.
Generation of stable cell lines
The cMyc-SKA2 plasmid DNA was linearized and transfected using FuGENE 6 (Roche). Cells were incubated with 1 mg/ml G418 sulphate. After 2 weeks discrete colonies formed, which were cloned and screened for gene expression by western blot.
To create stable cell lines expressing shRNA, cells were co-transfected using FuGENE 6 (Roche) with the appropriate pSUPER vector and the vector pcDNA 3 (to confer G418 resistance to the cell). After 48 h transfection, the growth medium of the cells was supplemented with 1 mg/ml G418 (Calbiochem). Colonies were selected and expanded under G418 selection.
Reporter gene studies
Cells were transfected with the TAT3-luc reporter gene and analysed as previously described (Stevens et al. 2003, Garside et al. 2004.
Proliferation assays
Cells were seeded in 96-well plates and treated as indicated. Cell number was estimated using the MTS reagent (American Tissue Culture Collection) or measured by haemocytometer.
Immunohistochemistry of human lung cancer
Human SCLC xenografts were obtained by inoculating 10^8 DMS 79 cells with Matrigel (1:1) subcutaneously in the flank of athymic, nude mice. In addition, a panel of human lung cancer and malignant mesotheliomas from biopsy tissue surplus to requirements for clinical diagnosis was obtained from patients at Manchester Royal Infirmary and Wythenshawe Hospital, who had placed no restriction on the use of surplus tissue when consented for biopsy or resection of tumour. At least six separate sections were analysed for each tumour type. SKA2 staining was quantitated in triplicate sections of a panel of lung carcinomas using a colour card (0-3 for intensity) compared against a 'no primary' control by a masked observer. Data were averaged across the triplicates and expressed as intensity from 0 to 3 for each type of carcinoma.
Staining of mitotic cells
Cells were grown overnight on coverslips and mounted in 4′,6-diamidino-2-phenylindole-containing mounting medium (Vectashield, Vector Laboratories Ltd, Peterborough, UK). Quantification of mitotic figures was performed using fluorescent microscopy (excitation and emission at 360 and 460 nm respectively); a field of 100 cells was counted and the percentage of cells undergoing mitosis was calculated.
Breast array
The human breast tissue array has been described previously (Zhu et al. 2006). Detection of FAM33A expression followed the standard immunoperoxidase approach. Analysis was done by a masked observer and statistical comparison by χ²-test.
RNAi
Short interfering siRNAs specific to SKA2 were custom designed using the HiPerformance algorithm (Novartis Pharmaceuticals) and synthesized (Qiagen Ltd). Sequences are available on request.
Microarray and data analysis
Labelled cDNA was hybridized to HG-U133 PLUS2 oligonucleotide arrays (Affymetrix, Santa Clara, CA, USA). Technical quality control of the arrays was done to check for outliers using dChip software (Li & Wong 2001). Normalization and expression analysis used robust multiple array average (RMA; Bolstad et al. 2003). Since off-target effects can be expected in siRNA experiments, two independent siRNAs were used. Each experiment was done in duplicate. Statistical analysis was performed by comparing non-silencing control with the combined siRNA samples group with limma (Smyth 2004). False discovery rate correction was performed with QVALUE (Storey et al. 2004). Post-analysis was performed using the MAPPFinder Gene Ontology tool of GenMapp 2.0 software, San Francisco, CA, USA (Doniger et al. 2003), DAVID Bioinformatics Resources 2006 and MetaCore software (GeneGo, Inc., St Joseph, MI, USA). The microarray data were submitted in a Minimum Information About a Microarray Experiment-compliant format to the ArrayExpress database (http://www.ebi.ac.uk/arrayexpress/) and an accession number was assigned (E-MEXP-265).
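To illustrate the thresholding step described above (a sketch only: the actual analysis used limma followed by the QVALUE package, whereas this example applies a Benjamini-Hochberg false-discovery-rate correction to simulated p-values), the filtering of probe sets at a lenient threshold could look as follows in Python:

import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Simulated p-values: a small block of strongly regulated probe sets plus a large null block.
pvals = np.concatenate([rng.uniform(0.0, 0.001, 100),
                        rng.uniform(0.0, 1.0, 10000)])

reject, p_adj, _, _ = multipletests(pvals, alpha=0.2, method="fdr_bh")
print(f"{reject.sum()} probe sets pass an FDR threshold of 0.2")

The numbers here are invented; the purpose is simply to show how a relatively lenient false-discovery threshold translates into a list of retained probe sets.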
A commercially available non-silencing control siRNA (Qiagen Ltd) was transfected to control for off-target effects. The sequence for the sense strand of this duplex was UUC UCC GAA CGU GUC ACG UdT dT and the antisense strand, ACG UGA CAC GUU CGG AGA AdT dT.
Statistical analysis
All additional statistical analysis was carried out using SPSS (Chicago, IL, USA) for Windows version 13.0. Specific tests are described in the figure legends.
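As an illustration of the kind of categorical comparison reported in the figure legends (for example, nuclear versus cytoplasmic SKA2 localization in normal versus tumour tissue), a chi-squared test could be run as below; the contingency counts are invented and do not correspond to the study's data:

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = tissue type, columns = predominant SKA2 localization.
table = [[40, 10],   # normal: [cytoplasmic, nuclear]
         [15, 35]]   # tumour: [cytoplasmic, nuclear]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4g}")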
Identification of SKA2 as a GR-interacting protein
We identified SKA2 as interacting with the C-terminal domain of the GR (amino acids 525-777) in a yeast two-hybrid library generated from the human SCLC cell line CORL103. The interaction between GR and SKA2 was confirmed in re-transformed yeast cells and found to be constitutive but also seen in the presence of Dex and RU486 (Fig. 1a).
SKA2 is encoded by an 831 nucleotide cDNA sequence, which derives from a four-exon gene located on human chromosome 17q23.2 and is highly conserved (Fig. 1b). There were no indications of protein function from its amino acid sequence, and no homology with known nuclear receptor co-modulators.
To confirm a direct interaction between the GR and SKA2, we used a GST pull-down approach (Stevens et al. 2003). There was a clear interaction between the two proteins, but little difference was seen in the presence of either Dex or RU486 (Fig. 1c), as predicted from the yeast studies.
Development of an antibody and analysis of endogenous SKA2 expression
We raised a specific rabbit antibody to human SKA2 (Fig. 2a). SKA2 protein was detected in both human SCLC and NSCLC cells as well as other human cell lines, but strikingly not in the human liver cell line HepG2 (Fig. 2b). Further immunoblot analysis revealed SKA2 expression in several breast cancer cell lines and in HeLa cells (Fig. 2c). When cMyc-SKA2 was transfected into HEK cells, both endogenous and transfected forms of SKA2 were detected (Fig. 2d).
Figure 5 SKA2 co-localizes with GR and moves to the nucleus in Dex-treated cells. (a) Immunofluorescence of HEK293 cells stably expressing GR-eYFP, which were treated with vehicle, Dex (100 nM) or RU486 (100 nM) as indicated, for 1 h before fixation. The cells were stained for SKA2 using the anti-peptide antibody, with and without peptide-blocking and a Texas Red-conjugated secondary antibody. (b) Quantification of SKA2 and GR-eYFP cytoplasmic and nuclear distribution. Cells expressing the GR-eYFP were selected for analysis. A total of 46 untreated cells and 101 treated cells from three separate experiments were assigned 'predominantly cytoplasmic' (C) or 'predominantly nuclear' (N) for both SKA2 and GR-eYFP by a masked observer. Statistical analysis used ANOVA followed by Tukey-Kramer post hoc test, ***P < 0.001.
Figure 4 Expression of SKA2 in normal human lung tissue and normal and tumour sections from a breast tissue array. (a) Immunoperoxidase staining of SKA2 in normal human lung at (i and iv) low power, (ii and v) high power and (iii and vi) peptide adsorbed controls. Expression is seen in epithelium (Ep), alveolar structures (Al) and lymphoid follicles (Ly). (b) Immunostaining for SKA2 showed cytoplasmic staining with nuclear exclusion predominantly in normal tissue. Staining in both the nucleus and cytoplasm was evident in many tumour samples. The peptide adsorbed control is shown.
Expression of SKA2 in human lung and breast carcinomas
Immunofluorescence localized SKA2 mainly to the cytoplasm of the NSCLC cell line, A549 (Fig. 3a). Expression of SKA2 was also seen in human SCLC xenografts (Fig. 3b) and in a resected human lung adenocarcinoma. SKA2 was also detected in a selection of primary human lung cancers of different histological types. However, using the more sensitive immunoperoxidase technique, expression of SKA2 was also seen in non-tumourous human lung recovered from the resection margins of surgical specimens (Fig. 4a).
As high-level SKA2 expression was seen in breast cancer cell lines, expression was sought in a breast tissue array. Expression of SKA2 was easily detected and specificity was ensured by using peptide adsorbed controls (Fig. 4b). As with the lung cancer tissue samples, expression of SKA2 was found in both normal and cancerous breast (Fig. 4b) with similar expression levels (mean intensity: 2.5 for cancers and 2.4 for normal; P = 0.67).
However, there was a striking difference in the intracellular distribution of the SKA2 between normal and tumour tissue, with a greater proportion of the tumour cells having nuclear SKA2 when compared with the normal breast (P < 0.001).
Intracellular localization of SKA2 and GR in HEK293 cells
As the distribution of SKA2 within cells appeared to be different in normal and tumour cells, we studied its localization in relation to that of the GR. We stained for SKA2 expression using our antibody in HEK293 cells infected with a GR-eYFP-expressing retrovirus and analyzed untreated and treated cells by immunofluorescence. GR was cytoplasmic under basal conditions and both Dex, an agonist, and RU486, an antagonist, promoted nuclear translocation of the GR, as expected (Fig. 5a and b). SKA2 was also cytoplasmic under basal conditions, showing co-localization with GR (Fig. 5a). However, in ligand-treated cells expressing high levels of transgenic GR, there was also nuclear translocation of SKA2, further supporting an interaction between the two proteins (Fig. 5).
SKA2 modulates GR function
The effects of SKA2 on GR function were determined by expressing SKA2 in the SKA2 null, HepG2 cells. Both transient co-transfection of SKA2 with a reporter gene (Fig. 6a) and introduction of the reporter to the cells stably overexpressing SKA2 (Fig. 6b) resulted in potentiation of GC transactivation. Further examination of the effects of SKA2 on GR function utilized knockdown of SKA2 by stable expression of shRNA in A549 cells (which we have shown to express SKA2 protein, Fig. 2). This generated clones expressing shRNA with either moderate (clone1) or low (clone 2) residual expression of SKA2 (Fig. 6c). Knockdown of SKA2 (in clone 2) reversed the GC-stimulated transactivation of the TAT3-luciferase reporter, when compared with the effect in the non-silencing control (Fig. 6d).
We also examined the effect of SKA2 on GC regulation of cell proliferation. Dex (100 nM) inhibited proliferation in A549 cells by 50%. In clones with moderate (clone 1) and low (clone 2) expression of SKA2, the effect of Dex was abolished (Fig. 6e). Importantly, there was no effect of altered SKA2 expression on basal proliferation.
Regulation of SKA2 expression
Analysis of the SKA2 gene locus did not suggest how its expression was regulated. Therefore, A549 cells were treated with a panel of agents known to affect cell survival or proliferation. We examined the effects of the various compounds on SKA2 protein expression by immunoblot. This revealed marked inhibition of SKA2 expression by Dex, Staurosporine, phorbol ester and TSA, all of which either induce apoptosis or inhibit cell proliferation in A549 cells (Fig. 7).
SKA2 potentiates cell proliferation
To determine the effect of SKA2 on cell proliferation, a nonexpressing cell line, HepG2 (Fig. 2), was transfected with a SKA2 expression cassette and stable clones were selected (Fig. 8a). The SKA2-expressing clone had a significantly enhanced proliferative response at 24 h post-IGF-I and a nearly twofold increase in proliferation at 48 h, compared with control, but there were no significant differences seen under non-stimulated conditions (Fig. 8b).
Knockdown of SKA2 in A549 cells caused cells to be held in mitosis (Fig. 8c) as evidenced by the percentage of increase in mitotic figures when compared with the non-silencing control (Fig. 8d).
Gene array profiling of SKA2 function
Initial studies identified specific siRNA sequences that potently inhibited SKA2 protein expression. Two separate sequences were sought to allow additional refinement in the post-array analysis (Fig. 9). For post-analysis purposes, a relatively lenient q value threshold of 0.2 produced 119 probe sets that were specifically and significantly regulated by both SKA2-specific siRNAs. Of these genes, SKA2 was itself downregulated eightfold.
Wild-type compared with clone 2, **P = 0.021; other comparisons with wild-type were non-significant.
Figure 9 Two effective siRNA molecules targeting SKA2 were identified and used in microarray analysis to compare the transcriptional changes between wild-type A549 cells and A549 cells transfected with SKA2 siRNA molecules. Three SKA2-specific siRNAs were transiently transfected into cells at the concentrations shown. Two of the siRNAs showed greater than 80% SKA2 knockdown at 24 h (numbers 3 and 4). These two were used in the microarray analysis summarized in Table 1.
These genes were analysed for enrichment of Gene Ontology categories with GennMAPP (Doniger et al. 2003), DAVID (Sherman et al. 2007) and MetaCore (GeneGo, Inc.) softwares. We found that SKA2 knockdown in A549 cells resulted in coordinate regulation of a pattern of genes including gene products known to be involved in cell cycle, apoptosis and signalling (Table 1).
Discussion
As part of a genetic screen for GR-interacting proteins in SCLC we identified SKA2. During our characterization of SKA2, it was independently discovered as part of a complex involved in mitosis (Hanisch et al. 2006). SKA2 is required for assembly of condensed chromosomes on the metaphase plate (Hanisch et al. 2006). This, in turn, regulates the spindle checkpoint which on silencing, allows the exit from metaphase and the onset of anaphase. SKA2 protein requires SKA1, a protein binding partner, for stability, and SKA2 is required for correct assembly of SKA1 on the kinetochore. When cells were depleted of SKA proteins there was a marked delay in progress through mitosis, implying failure to silence the spindle checkpoint. This suggests that the SKA proteins have a role in regulating progression through mitosis.
We found SKA2 expression not only in multiple human lung cancer and breast cancer cell lines and primary tumours, but also in normal lung and breast tissue. HepG2 cells expressed no detectable SKA2 protein, demonstrating that high-level SKA2 expression is not a universal feature of transformed cells. Given that we and others (Hanisch et al. 2006) have found SKA2 in the cytoplasm of interphase cultured cells, it is interesting to note that there was markedly higher nuclear localization of SKA2 in breast cancer than normal breast tissue. Unexpectedly, we found that in cells overexpressing a GR construct, there was partial SKA2 translocation to the nucleus following GC treatment. This suggests that there may be functional interaction between the two proteins in the cytoplasm and that SKA2, which lacks a nuclear localization domain, is being 'drawn' into the nucleus by the movement of GR. However, this effect appears to require overexpression of GR to be seen clearly. This suggests that under particular conditions, as seen in breast cancer or GR overexpression, the nuclear exclusion of SKA2 in interphase is lost, with possible consequences for cell proliferation or survival.
As SKA2 was found to interact with the GR, its effect on GR transactivation function was sought. Overexpressed SKA2 resulted in modest enhancement of GR transactivation, while knockdown of SKA2 markedly inhibited GR transactivation. This supports a functional interaction between the two proteins. SKA2 also appears to have a role in GC inhibition of cell proliferation, in that, knockdown of SKA2 prevented the decrease in cell number seen with Dex treatment.
SKA2 protein expression was inhibited by agents that share anti-proliferative or pro-apoptotic actions on A549 cells. Dex had profound inhibitory effects on SKA2 expression and given that GCs are widely used in the induction phase of anti-cancer chemotherapy, this has implications for cell cycle control and appropriate chromosomal segregation, through interference with the spindle checkpoint (Hanisch et al. 2006).
To explore SKA2 function further, it was overexpressed in the SKA2 deficient cell line HepG2. This significantly enhanced proliferation following incubation with IGF-I, which suggests involvement of SKA2 in either growth factor signalling or cell survival/proliferation pathways. This is supported by knockdown of SKA2 in A549 cells, which produced an increase in cells arrested in mitosis as previously shown for HeLa S3 cells (Hanisch et al. 2006).
Cells were subjected to transcript profiling and genes were identified where expression was regulated by both SKA2 siRNAs, but not by a non-targeting siRNA. The SKA2 transcript was found in this group of genes to be markedly repressed, confirming the success of the knockdown approach. Functional grouping of identified genes using gene ontology approaches identified significant enrichment in pathways regulating cell survival, cell proliferation and also cytokine/ growth factor action, as evidenced by the list of significantly regulated genes clustered in these pathways (Table 1).
To conclude, we have identified and functionally characterized a novel GR-interacting protein, which we isolated from a human SCLC cell line. It has recently been identified as SKA2, a protein involved in regulating anaphase onset in HeLa S3 cells. Our studies show that SKA2 is expressed not only in a range of cell lines but also found in human lung and breast cancer tissue. In cells overexpressing GR, treatment with GC causes SKA2 to be co-localized with the GR in the nucleus. Interestingly, SKA2 is found predominantly in the nucleus in breast tumours but in the cytoplasm in normal tissue. The impact of depletion of SKA2 on apoptotic-and proliferation-associated genes suggests that SKA2 may play a role in regulation of cancer cell proliferation.
Declaration of Interest
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
Funding
The Wellcome Trust (D W R and A W) and MRC (I J S and B T) for funding. BBSRC studentships awarded to C W, L R and P K. D W R was supported by a Glaxo SmithKline Fellowship. | 2014-10-01T00:00:00.000Z | 2008-06-26T00:00:00.000 | {
"year": 2008,
"sha1": "6d1cf1755854f5ec89e590dd905eeebc639ab1a1",
"oa_license": null,
"oa_url": "https://joe.bioscientifica.com/downloadpdf/journals/joe/198/3/499.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4ad32387025c1c6909642aa2e1c75d5eac62cdf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
252903309 | pes2o/s2orc | v3-fos-license | Monitor gastrointestinal tolerance in children who have switched to an “enteral formula with food‐derived ingredients”: A national, multicenter retrospective chart review (RICIMIX study)
Abstract Background Enteral tube feeding intolerances, such as diarrhea, are commonly reported in children. In the pediatric population, interest is growing in the use of blended diets for the management of enteral feeding intolerances. Fiber within a blended diet stimulates the growth of beneficial gut bacteria, which in turn produce short‐chain fatty acids, which are utilized as energy substrates for enterocytes. Enteral formula manufacturers have responded to this trend towards “real‐food” blended diets and developed an enteral formula with food‐derived ingredients. The aim of this study was to collect data relating to feed tolerance in children who had switched to an “enteral formula with food‐derived ingredients.” Methods A national multicenter retrospective study. Results Dietitians collected data from 43 medically unwell children between March 2021 and July 2021. Significant improvements were reported in children who had switched to an “enteral formula with food‐derived ingredients” in retching 17 of 18 children (95%), flatulence 6 of 8 children (85%), loose stools 10 of 11 children (90%), and constipation 10 of 11 children (90%). These improvements in gastrointestinal symptoms were reflected in weight change during the one month period measurements were collected (baseline, 19.5 kg [SD, 9]; 1 month, 20.1 kg [SD, 9]; P = 0.002). Conclusion We have observed beneficial outcomes in medically complex children who have switched to an “enteral formula with food‐derived ingredients.” Our data should motivate healthcare professionals to implement more research to better evaluate the clinical impact and mechanisms of action of blended diets and enteral formulas with food‐derived ingredients.
INTRODUCTION
Enteral nutrition (EN) is the preferred route for the nutrition support of patients who are unable to meet their nutrition requirements orally. 1 Standard enteral formulas are easily quantifiable, convenient, portable, safe, and reasonably cost effective. 2 Clinical manifestations of enteral feeding intolerances, such as abdominal distension and diarrhea, are some of the complications that can occur in patients. 3 The frequency of diarrhea in enterally fed patients ranges from 29% to 72%. 4,5 The management of persistent feed intolerances results in repeated feed withdrawal to allow for gut rest, contributing to malnutrition through a reduction in nutrition intake, a decrease in nutrient absorption, and an increase in nutrient reserve catabolism. 6 In the pediatric population interest is growing in the use of a blended diet for the management of feeding intolerances. Blended diets are food-based formulas liquefied to a consistency that will enable passage through a feeding tube. It is perceived to be more natural and better tolerated compared with commercially available standard enteral formulas. 7 Previous studies have reported positive clinical outcomes with the use of blended diets, including reduced gagging, retching, and vomiting compared with commercially available standard enteral formulas. 7,8 In 2020, the British Dietetic Association amended its guidelines to enable dietitians in the United Kingdom to support a blended diet for tube-fed individuals and to encourage an open, multidisciplinary approach to administering blended diets via a feeding tube (British Dietetic Association Policy Statement 9 ). Prior to this, there had been a lack of clear professional guidance.
The mechanisms as to why a blended diet is better tolerated than a standard enteral formula is unclear. 10 However, it stands to reason that "real food" aids normal gut functioning. Furthermore, there is evidence to suggest that fiber within a blended diet promotes the growth of beneficial gut flora bacteria, thereby inhibiting harmful bacteria. 5 In the large intestine, the microbiota ferment nondigested dietary fiber to produce short-chain fatty acids, primarily acetic, propionic, and butyric acid, which epithelial cells use as an energy source. 11 Butyrate is considered the main energy substrate for enterocytes and a stimulator of growth and differentiation. 12 Moreover, short-chain fatty acids are crucial to inhibit proinflammatory mediator activities in the intestinal epithelium. 13 Fiber that includes fructo-oligosaccharides, galactooligosaccharides, and inulin (also known as prebiotics) were shown in multiple human studies to increase the concentrations of bifidobacteria. 12 Bifidobacteria and Lactobacillus improve gut barrier function and host immunity and reduce the overgrowth of pathogenic bacteria, such as Clostridia. 14 Enteral formula manufacturers are responding to this trend and cultural shift towards "real-food" blended diets and developing formulas designed to address the feeding issues that children experience when receiving standard enteral formulas. Given the increasing requests for blended diets in our population and the paucity of available literature, we report on results collected from a national retrospective study to capture the clinical experience in children, across both acute and community settings, who had switched from a standard enteral formula to an "enteral formula with food-derived food ingredients."
MATERIALS AND METHODS
This is a retrospective, multicenter study that monitored feed tolerance in children who have switched to Compleat Pediatric from Nestlé Health Science, a nutritionally complete enteral tube feed (containing 13.8% food-derived ingredients in the form of rehydrated chicken, peas, green beans, and orange juice, providing 1 g fiber). The study was conducted from March 2021 to July 2021 across four National Health Service Trusts: three pediatric tertiary centers and one district general community hospital. Children were included if they had switched to an "enteral formula with food-derived ingredients" because of previous feed tolerance issues related to retching, vomiting, flatulence, and/or abnormal stool consistency and frequency. Children had to have been receiving an "enteral formula with food-derived ingredients" for at least 1 month, and the enteral formula must have accounted for at least 80% of their total energy requirements. All eligible children were aged between 1 and 17 years old. Data were collected by pediatric dietitians from dietetic records and inputted to a Microsoft form to capture anthropometric and gastrointestinal outcomes over a month-long period when children were switched to an "enteral formula with food-derived ingredients." A link to the Microsoft forms was sent to each site by the clinical research company, Ixia Clinical Ltd. Once the Microsoft forms were completed by the dietitian, forms were automatically sent to Ixia Clinical Ltd. Data were compiled to represent all sites and downloaded into an Excel sheet for analysis performed by the principal investigator.
Clinical dietetic documentation on feeding tolerance was recorded as improved, no change, or worsened for key markers of tolerance (retching, vomiting, flatulence, and stool consistency). Stool consistency and frequency were measured using a stool form scale, a standardized method of classifying stool form into a finite number of categories. The Bristol Stool Form Scale is an ordinal scale of stool types ranging from the hardest (type 1) to the softest (type 7). Data were also collected to capture any changes before and after the switch to the new enteral formula in relation to feed volume, calorie intake, and medication related to stool frequency and consistency.
Statistical analysis
The primary outcome of interest was the change in feed tolerance. For each measurement period, the change in feed tolerance was assessed for each patient to identify any trends. Adverse events while receiving enteral formula with food-derived ingredients were recorded. Anthropometric measures were recorded as median and interquartile range (IQR) for weight (kg) and height (cm). To examine the changes in weight (kg), energy intake (kcal), and feed volume (ml) during the study period, a paired t-test was used to produce a P-value and confidence interval. A P-value <0.05 was deemed statistically significant. Statistical analysis was performed with SPSS software (version 23; IBM SPSS Statistics, Armonk, NY, USA).
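As a rough illustration of the paired before/after comparison described above (the actual analysis was run in SPSS, and the values below are invented for demonstration only), a minimal Python sketch of the paired t-test with its confidence interval might look as follows.

```python
# Minimal sketch of a paired before/after comparison; all values are
# hypothetical and only illustrate the type of test described above.
import numpy as np
from scipy import stats

weight_before = np.array([18.2, 22.5, 15.9, 30.1, 19.4])  # kg, before switch
weight_after = np.array([18.9, 23.1, 16.4, 30.8, 19.6])   # kg, after 1 month

t_stat, p_value = stats.ttest_rel(weight_after, weight_before)

# 95% confidence interval for the mean paired difference.
diff = weight_after - weight_before
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))

print(f"t = {t_stat:.2f}, P = {p_value:.3f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
print("significant at P < 0.05" if p_value < 0.05 else "not significant")
```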
RESULTS
Forty-three children were included in this national multicenter, retrospective study. Demographic, primary medical diagnosis, anthropometric, and feeding history data are provided in Table 2. The median age of children who had switched to an "enteral formula with food-derived ingredients" was 6 years old (IQR, 4-8). The most frequently recorded primary diagnosis of children who had switched to the new enteral formula was neurological or neuro-disability related: 20 of 43 children (47%). The median time children received an enteral formula before switching to the new enteral formula was 52 weeks (IQR, 24-120). The primary mode of nutrition delivery was via a gastrostomy feeding tube: 34 of 43 patients (80%). A breakdown of the type of formula (amino acid, partially hydrolyzed, or whole protein) children were receiving before the switch to the new enteral formula is outlined in Table 2. One child, who was recovering from chemotherapy-induced mucositis and receiving parenteral nutrition (PN), was challenged with a hydrolyzed formula, which resulted in diarrhea, and the enteral formula was stopped. Subsequently, this child was challenged again 3 days later and switched directly from PN to the new enteral formula with no signs of feed intolerance.
Sixteen children were on medication for constipation management before switching to the new enteral formula. After 1 month on the new formula, seven children reduced the quantity or frequency of medication, with one child stopping medication altogether. Parents of children who had gastrointestinal intolerances before switching to the new enteral formula reported improvements in retching, flatulence, loose stools, and constipation after switching formulas (Table 3). One patient presented with vomiting and lethargy after switching to the new enteral formula. This child is now under the care of the local allergy team and has been diagnosed with food protein-induced enterocolitis syndrome. Prior to switching to the new enteral formula, this child was receiving a standard whole-protein formula. Overall, the type of enteral formula (amino acid, partially hydrolyzed, or whole protein) the child was receiving prior to the switch had no influence on feed tolerance outcomes.
A comparative analysis reported weight gain in children who had switched to the new enteral formula after 1 month (P > 0.002) ( Table 4). There was no significant difference in feed volume (P > 0.5) or total daily calorie intake (P > 0.7) after switching formulas ( Table 4).
The Microsoft data forms had a section available for additional comments. A common theme captured from parents was that their child seemed more comfortable after switching to the new enteral formula. One parent reported that, prior to switching, they often had to stop the feed because of retching and very poor feeding tolerance, but this has now improved and feed volume has increased with no retching. Furthermore, another family reported that bowel habits improved so much since switching to the new enteral formula that their child was finally able to successfully toilet train. Overall, seven of 43 children (16%) experienced positive changes in mood or behavior and were happier and more settled; four of 43 children (9%) saw changes in skin or hair; and two of 43 children (4%) saw a change in their schooling patterns, as these children were able to attend school and take part in activities. Finally, 12 of 43 (28%) children saw changes in feeding patterns, such as less time spent on feeding and more simple feeding regimens, such that their families felt confident to go on a holiday. Ninety percent of dietitians reported that the nutrition goals set prior to the switch were met, with 81% reporting an improvement within 1 week of switching.
DISCUSSION
Children who require nutrition support from feeding tubes routinely report feeding intolerances. 4 Our national multicenter, retrospective study found that children who had switched to an "enteral formula with food-derived ingredients" reported a significant improvement in gastrointestinal symptoms, including a reduction in retching, flatulence, and vomiting. Dietitians reported clinical improvements within the first week of switching to the new enteral formula that were sustained throughout the study period.
Our study reported improved feed tolerance in children who had complex gastrointestinal issues and had switched to an "enteral formula with food-derived ingredients." Our findings support those of Samela et al, who monitored the transition of 10 pediatric intestinal failure patients (>1 year of age) from an elemental formula to an "enteral formula with food-derived ingredients." They reported improved stooling patterns and concluded that a commercially available enteral formula with food-derived ingredients is a cost-effective and adequate means of providing nutrition to this patient population. 15 Furthermore, our study supports findings by Coad et al, who reported positive clinical outcomes with the use of blended diets, including reduced gagging and retching in gastrostomy-fed children with fundoplication. 7 The mechanism by which blended diets and "enteral formulas with food-derived ingredients" work has been postulated to be the beneficial effect of fiber on the gut microbiota. 5 A recent study reported that pediatric patients previously fed standard enteral formulas acquired a more diverse microbiome when switched to blended diets. 16 Additionally, the increased viscosity of a blended diet means that digested chyme reaches the small intestine at a pace that stimulates a more regular hormonal response. 2 Antibiotic treatment is strongly associated with diarrhea in patients receiving EN and is linked to intestinal dysbiosis, which leads to an increased risk of pathogen overgrowth and an altered metabolism of macronutrients, which induces osmotic diarrhea and the malabsorption of essential nutrients. 17 For this reason, children admitted to the hospital are the most in need of a high-fiber, nutritionally complete formula to minimize intestinal dysbiosis from the barrage of intravenous antibiotics often administered in acute settings.
However, blended diets may not be suitable for intensive care or other acute clinical settings because of the perceived risk of microbial contamination and the variability in micronutrients and electrolytes. 8 Therefore, having an alternative, such as a complete "enteral formula with food-derived ingredients," may serve as a compromise to a blended diet, bridging the gap between a full blended diet and a standard enteral formula, thus facilitating relationships and engagement between parents and healthcare professionals. However, as Chandrasekar et al correctly point out, there is limited evidence that blended diets can significantly reduce gastrointestinal symptoms associated with tube feeding and improve aspects of quality of life. More research is needed to evaluate whether blended diets and "enteral tube feed containing food-derived ingredients" support growth in children and to explore potential complications. 8 Of note, one child in this study discontinued the new formula because of an undiagnosed allergy-related disorder and, therefore, it is advisable that any children who have not been exposed to whole food since being exclusively tube fed should be carefully monitored and may require further input from the allergy team.
The limitations of this study include its small sample size (and therefore results cannot be generalized across gender and ethnic groups), short trial period, and retrospective design. Rather than stating causation, we can only point to a potential association between an "enteral formula with food-derived ingredients" and improved gastrointestinal symptoms. However, a strength of the study was its national, multicenter design and that data gathering was from a range of dietitians from different specialties and clinical settings.
Given the growing interest among caregivers to trial blended diets and "enteral formulas with food-derived ingredients," we urge that the healthcare community better understand this practice. We have observed the beneficial outcomes of switching to this new formula within a wide range of medically complex children. Our data should motivate healthcare professionals to engage and embrace this cultural shift, implementing more research to better evaluate the clinical impact and mechanisms of action of blended diets and "enteral formulas with food-derived ingredients."
FINANCIAL DISCLOSURE
Graeme O'Connor, Marie Watson, Martha Van Der Linde, Rita Shergill Bonner, and Julia Hopkins received payment per participant recruited from Nestlé Health Science UK during the conduct of the study. Sharan Saduera is a medical affairs dietitian and is employed by Nestlé Health Science UK.
AUTHOR CONTRIBUTIONS
Graeme O'Connor and Sharan Saduera contributed to the conception and design of the research and drafted the manuscript. Graeme O'Connor, Marie Watson, Martha Van Der Linde, Rita Shergill Bonner, and Julia Hopkins contributed to acquisition and data collection, revised the manuscript, agree to be fully accountable for ensuring the integrity and accuracy of the work, and read and approved the final manuscript. | 2021-12-23T06:22:46.747Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "e81d405ac6126747d64f509791f3583ed00aabe4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "56b6caad0c0d16537e41a52703f6bd58b050ceb7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
30238825 | pes2o/s2orc | v3-fos-license | Are the Opsonophagocytic Activities of Antibodies in Infant Sera Measured by Different Pneumococcal Phagocytosis Assays Comparable?
ABSTRACT Host protection against Streptococcus pneumoniae is mainly mediated by opsonin-dependent phagocytosis. Several techniques for measuring opsonophagocytic activity (OPA) of antibodies to S. pneumoniae have been standardized and used. These include the viable-cell assay, flow-cytometric assays, and an assay utilizing radiolabeled bacteria. Using these different methods, we measured the OPA of antibodies to S. pneumoniae types 6B and 19F from the sera of infants immunized with a pneumococcal conjugate vaccine, PncCRM. Generally, the results obtained by the various techniques correlated well, although serotype-specific differences were found (6B, r = 0.78 to 0.95, P < 0.001; 19F, r = 0.50 to 0.84, P < 0.001). The same serotype-specific differences were observed for the relationship between the concentrations of specific immunoglobulin G antibodies measured by enzyme immunoassay and the OPA. Since the sensitivities of the OPA assays differed, the most prominent discrepancies between the techniques were found at low antibody concentrations.
Opsonophagocytosis mediated by antibodies and complement is the major mechanism for clearing Streptococcus pneumoniae (Pnc) from the host (19,22). Therefore, the in vitro opsonophagocytic activity (OPA) of antibodies to pneumococcal capsular polysaccharides (PSs) is believed to be a measure of their functional activity in vivo. Limited data are available on the requirements of protective immune response in humans to conjugate vaccines against pneumococci (3). By contrast, protective levels of human antibodies in animals have been determined in several studies (6,12,18). In two different models of passive protection of mice against bacteremia or lung infection, OPA of human immunoglobulin G (IgG) antibody was found to correlate better with the protection than the IgG concentration (6,17). Thus, to determine the serological correlates or surrogates of protection from the samples of ongoing efficacy trials, both quantitative and qualitative characteristics of antibodies have to be measured reliably. Because the analyses may be done in different laboratories, it is important to use validated methods that give comparable results.
Validation of the enzyme immunoassay (EIA) method for measuring concentrations of serotype-specific antibodies to Pnc has advanced during recent years (10,14). A multicenter study at 12 laboratories has been completed, and similar results have been published (13). The validation of opsonophagocytic assays is far behind, although several techniques have been reported and standardized (5,11,16,20,21). Since each laboratory used its own assay for the measurement of OPA of antibodies against Pnc, it is important to determine whether the results obtained are comparable both to each other and to the IgG concentrations measured by EIA. Therefore, using four different opsonophagocytic assays, we analyzed the OPA of antibodies to Pnc serotypes 6B and 19F from the sera of infants immunized with a pneumococcal conjugate vaccine. Thereafter, we compared the results to each other and to the IgG antibody concentrations.
Vaccine subjects and sampling. Infants (n = 16) were immunized at 2, 4, and 6 months of age with PncCRM and given booster injections at 15 months of age with the homologous conjugate vaccine or PncPS (1). Blood samples were obtained from subjects at 7, 15, and 16 months of age. Sera were separated by centrifugation and stored at -20°C until testing. Infants receiving booster injections of either the homologous conjugate vaccine or the PncPS vaccine were retained as one group.
EIA for anti-Pnc PS IgG. Concentrations of IgG antibodies to pneumococcal PSs were measured by EIA methods as described previously (8). The results are given as micrograms per milliliter calculated on the basis of the officially assigned IgG values of the 89-SF reference serum (15).
Opsonophagocytic assays. Functional activity of antibodies from all serum samples was determined by three different techniques: an opsonophagocytic assay using viable bacteria ("viable assay" [16]), an assay using live, radiolabeled bacteria ("radio assay" [20,21]), and a flow-cytometric assay ("flow assay 1" [5]). In addition, parts of the sera were analyzed by another flow-cytometric technique ("flow assay 2" [11]). The details of the techniques are shown in Table 1. In all assays, serum, from which the internal complement was inactivated, bacteria, external complement source, and phagocytes were mixed, and phagocytosis was allowed to take place. Polymorphonuclear leukocytes (PMNLs) served as phagocytes in viable, radio, and flow 1 assays ( Table 1). The PMNLs were isolated from the fresh peripheral blood of healthy adult donors by dextran sedimentation and Ficoll (Paque [Pharmacia Biotech, Uppsala, Sweden] or Histopaque [Sigma, St. Louis, Mo.]) density gradient centrifugation (viable and radio assays) or by Ficoll-Histopaque gradient followed by two hypotonic shocks (flow assay 1). The isolated cells were washed and dissolved in Hanks' balanced salt solution containing 1% bovine serum albumin or 2.5% fetal calf serum. In flow assay 2, differentiated HL-60 cells were used as phagocytes (11).
The viable-cell assay was a modification (2) of the assay described by Romero-Steiner et al. (16). It measured the killing of live pneumococci by PMNLs in the presence of antibody and complement. OPA of antibodies was expressed as a titer, which was the reciprocal of the serum dilution with 50% killing as compared to the bacterial growth in the controls without serum. A titer of 4 was given to sera with undetectable OPA, with a titer of 8 being the lowest positive result.
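For readers unfamiliar with how such a killing titer is read off a dilution series, the short sketch below illustrates the idea with invented numbers; it is not part of the original protocol, and the undetectable-value convention (a titer of 4) simply follows the description above.

```python
# Hypothetical dilution series illustrating how a 50%-killing OPA titer
# can be interpolated; values are made up and do not come from the study.
import numpy as np

dilutions = np.array([8, 16, 32, 64, 128, 256])        # reciprocal serum dilutions
percent_killing = np.array([92, 85, 70, 55, 38, 20])   # % killing vs. no-serum control

def opa_titer(dilutions, killing, threshold=50.0):
    """Reciprocal dilution at which killing crosses the threshold."""
    if killing.max() < threshold:
        return 4  # convention used above for undetectable activity
    # Interpolate killing against log2(dilution), which is roughly linear.
    order = np.argsort(killing)                          # np.interp needs increasing x
    log_at_threshold = np.interp(threshold, killing[order], np.log2(dilutions)[order])
    return 2 ** log_at_threshold

print(f"OPA titer ~ {opa_titer(dilutions, percent_killing):.0f}")
```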
The radio assay was modified from the assay described previously by Vidarsson et al. (20,21). Instead of using the internal complement of each test serum, an external complement was added. As a complement source, pooled serum of hypo- and agammaglobulinemic patients, provided by J. Plested (Churchill University, Oxford, United Kingdom), or an IgG-depleted serum of a healthy adult volunteer was used (Table 1). Fresh normal serum was depleted of IgG by protein G affinity chromatography (Pharmacia Biotech, Roosendaal, The Netherlands) and stored at -70°C. The success of IgG depletion was assured by radial immunodiffusion (LC Partigen IgG; Behring, Malburg, Germany). Pooled serum of hypo- and agammaglobulinemic patients was used at a concentration of 5% in the experiments with serotype 6B. The experiments with serotype 19F were performed using IgG-depleted serum at a 12% concentration (Table 1). The results were obtained by measuring the radioactivity in a liquid scintillation counter (Packard, Greve, Denmark) and by counting the percent uptake of radiolabeled bacteria in the presence of each serum (21). This was compared to a standard run at various concentrations in every assay. The OPA was then calculated from the standard curve and represented as arbitrary units. Undetectable OPAs were reported as 1 arbitrary unit.
Flow assay 1 was performed as described previously by Jansen et al. (5). The OPA of antibodies was expressed as a titer; it was the reciprocal of the serum dilution resulting in 25% fluorescein isothiocyanate-positive PMNLs. A titer of 1 was given to sera with undetectable OPAs.
Sera taken from 10 infants at 15 and 16 months of age were analyzed by flow assay 2, described by Martinez et al., using differentiated HL-60 cells as phagocytes (11). The OPA of antibodies was expressed as a titer, which was the reciprocal of the serum dilution with at least a 50% decrease in fluorescence compared with the maximal percent fluorescence of each sample. A titer of 4 was given to sera with an undetectable OPA, with 8 being the lowest positive result.
Statistical analysis. Statistical comparisons were carried out using the paired t-test, Pearson's correlation analysis, and kappa statistics, interpreted as the chance-corrected proportional agreement. When the relationship between two different factors (OPA and concentration) was evaluated, sera taken from infants at different time points were retained separately due to their dependency on each other. In statistical analyses, log-transformed data of concentrations and OPAs were used.
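As a hedged illustration of the two central statistics mentioned here (the original analysis was not performed in Python, and the values below are placeholders), the correlation on log-transformed OPAs and the chance-corrected agreement could be computed along these lines.

```python
# Illustrative only: Pearson correlation on log-transformed OPAs and
# Cohen's kappa for detectable/undetectable agreement between two assays.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

opa_assay_a = np.array([4, 8, 64, 128, 512, 4, 32, 256])   # hypothetical titers
opa_assay_b = np.array([1, 2, 40, 150, 600, 1, 20, 300])    # hypothetical units

r, p = stats.pearsonr(np.log10(opa_assay_a), np.log10(opa_assay_b))
print(f"Pearson r = {r:.2f}, P = {p:.3f}")

# Agreement on detectable (1) vs. undetectable (0) results, taking 4 and 1
# as the undetectable values of the respective assays, as described above.
detectable_a = (opa_assay_a > 4).astype(int)
detectable_b = (opa_assay_b > 1).astype(int)
print(f"kappa = {cohen_kappa_score(detectable_a, detectable_b):.2f}")
```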
RESULTS
The IgG concentration and the OPA of antibodies measured by different opsonophagocytic techniques showed similar kinetics (Fig. 1). Both the antibody level and the OPA against serotypes 6B and 19F decreased significantly in samples from subjects between the ages of 7 and 15 months (P < 0.01 to 0.001), with the exception of OPAs determined by flow assay 1. A significant increase (P < 0.001) was seen after the booster vaccination by all methods.
When the data from analyses of serum samples taken from subjects at different ages were combined (n = 48), the OPAs obtained by the three phagocytic assays correlated significantly (Fig. 2). The correlation of the OPAs measured by viable and radio assays was significant at all ages for both serotypes (r = 0.76 to 0.92, P < 0.01 to 0.001). Likewise, after booster vaccination, the correlations between the OPAs of viable and radio assays and of flow assay 1 were mostly significant (r = 0.37 to 0.82, P < 0.2 to 0.001). However, at 7 or 15 months of age, when the activities were low with all assays, no significant correlations were found between flow assay 1 and the other two assays (r = -0.14 to 0.43, P = 0.09 to 0.89).
The OPA measured by viable or radio assay correlated strongly with the IgG concentration measured by EIA at all subject ages for both serotypes (Fig. 3). The correlation between the OPAs of flow assay 1 and the IgG concentration was significant after the booster vaccination for both serotypes 6B and 19F and at the subject age of 7 months for serotype 6B. However, when the concentration of antibody was low, e.g., in the sera taken before booster vaccinations, no significant correlation could be found. Furthermore, the sensitivity of the OPA assays differed. For serotype 6B, the detection limits of the viable and radio assays were both about 1 μg/ml, while more antibodies were usually required to get detectable OPAs with flow assay 1 (Fig. 3). For all serotype 19F OPA assays, the detection limit was higher than that for serotype 6B assays. Generally, more anti-19F than anti-6B antibodies were needed to obtain similar OPAs; this was seen by comparing the slopes for the two serotypes (Fig. 3).
Because two standardized flow-cytometric opsonophagocytosis assays for Pnc have been described in the literature, we wanted to include both of them in the comparison. However, only 20 sera were available for analyses by flow assay 2. Therefore, flow assay 2 and the other three assays were compared using only part of the sera. The OPAs measured by flow assay 2 correlated well with the OPAs of the other assays for serotype 6B (Fig. 4). For serotype 19F, the correlations were not as good between the two flow-cytometric assays but were, however, significant. The correlation between OPAs of flow assay 2 and IgG concentration assays was mostly significant in both age groups for both serotypes (r = 0.49 to 0.80, P < 0.16 to 0.01).
Comparison of the proportions of sera with detectable (+) or undetectable (-) OPAs determined by the different assays further emphasized the better agreement between OPA assays for serotype 6B than those for serotype 19F (Table 2). The agreement between viable and radio assays was generally good for both serotypes. Moderate or poor agreement was found between flow assay 1 and viable or radio assays. Exactly the same samples were detectable for serotype 6B OPAs with the flow assay 2 and the radio assay (κ = 1.00). Since the other two assays detected mostly the same sera (κ = 0.80), very good agreement was found between flow assay 2 and the other assays for serotype 6B. By contrast, conflicting OPAs were obtained with the two flow-cytometric assays for serotype 19F.
DISCUSSION
In this study, comparable results were obtained by the opsonophagocytic assay techniques, which, themselves, differed in many respects. However, the levels of OPAs were different, which may be due to the dissimilarities in the details of the assays (e.g., bacterium/PMNL ratio, complement source, and concentration) and in the ways results are calculated. In addition, there were serotype-specific differences. For serotype 6B, the OPAs obtained by all assays correlated well, but for serotype 19F, the correlations were poorer. High concentrations of anti-19F antibodies were often required to get detectable opsonic activities, and OPAs measured by different assays from the sera with low anti-19F concentrations varied. Sera with high antibody concentrations had generally high OPAs with all methods.
The highest correlation was found for both serotypes between the OPAs obtained by the viable and radio assays. Likewise, functional activities measured by these assays correlated well with IgG concentrations measured by EIA. For data deduced from sera with detected OPAs by viable assay but undetectable OPAs by radio assay, the viable assay seemed to be somewhat more sensitive. This was further confirmed by the results of the correlation between IgG concentration and OPA; for serotype 19F, more antibodies were usually required to get detectable activity by radio assay than by viable assay.
The OPAs of flow assay 1 correlated well with the OPAs of the other types of assays for serotype 6B. For serotype 19F, the correlations were not as high, and there were discrepancies with the sera of low functional activities. The same was seen when the OPAs were correlated to IgG concentration. In fact, the flow assay 1 seemed to be somewhat less sensitive than either the viable or radio assay for both serotypes. This may partly explain the lack of correlation between the concentration and OPA of serotype 19F in the sera having low antibody levels. Considerably better correlations were found for serotype 6B in the postbooster sera that had higher concentrations and OPAs.
Unfortunately, not all sera were available for flow assay 2. Based on the data received by using 20 sera, the OPAs obtained by flow assay 2 correlated well with the OPAs of the other assays for serotype 6B. For serotype 19F, however, there were discrepancies in the OPAs measured by the two flow-cytometric assays. These differences may have resulted from differences between the assays in source and concentration of complement, bacterium/phagocyte ratio, methods of growing and labeling the bacteria, etc. The difference was most probably not due to the use of HL-60 cells instead of fresh PMNLs in flow assay 2.
FIG. 3. Relationship between the IgG concentration measured by EIA and OPA of antibodies to serotypes 6B and 19F measured by the three phagocytic techniques: viable, radio, and flow 1 assays. The correlation between the two parameters was analyzed separately for the sera taken from infants at 7, 15, and 16 months of age.
Each method had its own advantages and drawbacks. The main advantage of the viable-cell assay was its sensitivity and the fact that it was the only assay that measured the killing of the bacteria. The other assays measured binding between the phagocytes and bacteria, but the good correlation between these latter assays and the viable assay suggests that binding most likely indicates killing. This was demonstrated using 10 adult postvaccination sera and removing aliquots from each tube at the end of the radio assay for plating onto agar. A significant correlation was found between percent uptake and percent killing (E. Saeland, unpublished data). The viable assay was more laborious than the other assays and consumed more PMNLs. Therefore, when fresh PMNLs were used, a large volume of blood was needed for their isolation. The number of sera that could be analyzed per day was also lower when the viable assay was used than when the radio assay or the flow assays were used. Radio and flow assays were fast, convenient, and easy to perform. In addition, an advantage of the flow assays is the possibility they afford to semiautomate the technique and measure OPAs for multiple serotypes (using different dyes) in one tube (4). A drawback of the radio assay was the need for large volumes (up to 200 μl per serotype) of the test sera; large volumes were required to be able to count β emissions reliably. Furthermore, the radio assay is very dependent on human complement, and the best sensitivity is gained when intact test sera (21) or sera from the agammaglobulinemic patients, both retaining the full complement activity, are used as sources of complement. Radioactive waste may also be considered as a drawback of the radio assay. Flow assay 1, though very easy to perform if a laboratory has good equipment, is at the moment too insensitive for analyzing samples from infants whose sera contain low concentrations of specific antibodies. However, there is a conflict between sensitivity and specificity; flow assay 1 has been made completely specific for anticapsular PS antibodies by using highly encapsulated bacteria grown three times to log phase. Bacteria encapsulated this heavily may be difficult to phagocytize (7), which would impair the sensitivity of an assay. We ended up using the mentioned sources and concentrations of complement, ways of growing and labeling the bacteria, bacterium/phagocyte ratios, etc., to perform the assays as described previously in the literature. Because none of the assays was optimal, the influence of different factors on each technique should be evaluated in the future. Nevertheless, taking into consideration the large differences in the performance of the assays, the OPAs obtained were fairly comparable; every method correctly detected the sera with high activity and identified the activity as high, although there were variations near the detection limit of the assays. As pointed out earlier, different methods had different sensitivities, and those having the highest sensitivity gave OPAs that correlated best with each other and with IgG concentrations. The issue of sensitivity is especially important when serological correlates of protection induced by immunizing infants with pneumococcal conjugate vaccines are analyzed.
FIG. 4. Relationship between the OPAs determined for serotypes 6B and 19F by flow cytometric assay 2 and by the other three methods: viable, radio, and flow 1 assays. The sera from 10 infants obtained at 15 and 16 months of age were retained as one group (n = 20).
The reports of animal studies and efficacy trials suggest that the minimal protective antibody level might vary between 0.05 and 1.15 μg of specific antibodies per ml, depending on the serotype and the disease in question (different for invasive and mucosal infections) (6,9,12,18). This range is mostly below the detection limit of any tested opsonophagocytic assay. Therefore, future effort is needed to increase the sensitivity of the assays.
ACKNOWLEDGMENTS
This study was supported by World Health Organization (GVP/VRD contract V23/181/76) and by the Academy of Finland, the Nederlandse Organisatie voor Wetenschappelijk Onderzoek, and the Federation of European Microbiological Societies (FEMS). The clinical part of the study was supported by Wyeth-Lederle Vaccines and Pediatrics.
We are grateful to Joseph Martinez and Sandra Romero-Steiner for teaching us one of the flow-cytometric techniques, as well as a method for the treatment and differentiation of the HL-60 cells. We also thank George M. Carlone for giving us an opportunity to do part of the analyses in his laboratory. Furthermore, we thank Maijastiina Voutilainen, Arja Vuorela, Hannele Lehtonen, and Sirkka-Liisa Wahlman for excellent technical assistance; Heidi Åhman for the IgG concentration data; Joyce Plested for providing us with the sera of hypo- and agammaglobulinemic patients; and Virva Jäntti for statistical help. Personnel of the study centers are acknowledged for their help in the clinical part of the study.
"year": 2001,
"sha1": "f2c5b4ac5b5e9a5db2a1f8ebc471f5e950451123",
"oa_license": null,
"oa_url": "https://cvi.asm.org/content/8/2/363.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b571976f47227754dcba49ac379ffdd129161665",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16042242 | pes2o/s2orc | v3-fos-license | In-Line Phase Contrast Imaging of Hepatic Portal Vein Embolization with Radiolucent Embolic Agents in Mice: A Preliminary Study
It is crucial to understand the distribution of embolic agents inside the target liver during and after the hepatic portal vein embolization (PVE) procedure. For a long time, the problem has not been well solved due to the radiolucency of embolic agents and the resolution limitation of conventional radiography. In this study, we first reported the use of fluorescent carboxyl microspheres (FCM) as radiolucent embolic agents for embolizing hepatic portal veins. The fluorescent characteristic of FCM could help to determine their approximate location easily. Additionally, the microspheres were found to be fairly good embolizing agents for PVE. After the livers were excised and fixed, they were imaged by in-line phase contrast imaging (PCI), which greatly improved the detection of the radiolucent embolic agents as compared to absorption contrast imaging (ACI). The preliminary study has for the first time shown that PCI has great potential in the pre-clinical investigation of PVE with radiolucent embolic agents.
Introduction
Preoperative portal vein embolization (PVE) is an effective modality to induce hepatic hypertrophy by selectively obstructing the portal vein supplying the diseased segment of the liver [1][2][3][4]. In order to prevent blockage in the wrong region, it is quite essential to determine the distribution of the embolic agents inside the target liver during and after the embolization. Clinically, gelatin sponge (GS) and polyvinyl alcohol particles (PVA) are the most commonly used embolic agents [5,6]. However, GS and PVA are low-absorption materials which are hardly visualized by conventional radiography. Therefore, embolic agents are always injected by combining them with iodine contrast agent to enhance image contrast. Nevertheless, the iodine-enhanced method only indirectly shows the embolization site and cannot accurately verify the distribution of embolic agents. Additionally, it is still difficult to show fine embolized vessels with a diameter of 200 μm or less by conventional angiography due to the limitation of spatial resolution [7].
To overcome the challenges, novel imaging methods should be applied. Currently, synchrotron radiation (SR) phase contrast imaging (PCI) has been widely utilized to provide excellent image contrast for soft tissues [8][9][10][11]. PCI, utilizing the phase shift, can produce higher contrast images than absorption contrast imaging (ACI) [12,13]. Also, PCI is considered as a powerful preclinical imaging modality to observe fine structures, with resolution higher than that of any available clinical radiography [14,15]. Using PCI, hepatic vessels down to the micron level can be clearly shown without using contrast agents [16,17]. The value of these applications raises the possibility of using PCI for clearly imaging low-absorption embolic materials.
In this study, GS and PVA were imaged by SR imaging. PCI and ACI were performed and compared. We evaluated the feasibility of using PCI for imaging PVE with radiolucent fluorescent carboxyl microspheres (FCM).
Sample Preparation
All experiments were conducted in accordance with the guidelines established and approved by Shanghai Jiao Tong University's Institutional Animal Care and Use Committee. Six male ICR mice were anesthetized using an intraperitoneal injection of ketamine (100 mg kg⁻¹) and xylazine (10 mg kg⁻¹). The main portal trunk was dissected, and then punctured with a thin PE-50 catheter through a midline laparotomy. Then PVE was performed by injecting 100 FCM (in 0.1 ml PBS) into the portal vein via the catheter attached to a 1 ml syringe. Five minutes after the PVE, mice were sacrificed by cervical dislocation under anesthesia. The livers were harvested and placed in 4% formaldehyde solution. The numbers of FCM in the main lobes of the liver were counted under a fluorescence microscope. Data were expressed as mean ± standard deviation. Three non-dehydrated livers were randomly chosen for scanning by phase contrast CT imaging. For imaging dehydrated livers, the other three livers were placed in a 4% formaldehyde solution for 72 hours, dehydrated with 100% ethanol for 48 hours, and then placed in the air for 2 hours.
SR Imaging Parameters
Imaging was performed at the BL13W1 beamline in Shanghai Synchrotron Radiation Facility (SSRF, China). X-rays were derived from a 3.5 GeV electron storage ring. The beamline covered an energy range of 8 to 72.5 keV. The X-ray beam was monochromatized at 19 keV using a double-crystal monochromator with Si(111) and Si(311) crystals. The energy resolution was ΔE/E ~ 5 × 10⁻³. The transmitted x-rays were first converted to visible light by a scintillator consisting of a 100 μm thick CdWO4 cleaved single crystal, and then captured by a CCD camera with a pixel size of 3.7 μm (Photonic Science, UK). Samples were positioned on a translation/rotation stage at a distance of 34 m from the synchrotron source. The distance between the sample and the detector had a changeable range of 8 m (Fig. 1).
Comparison between ACI and PCI
Clinically utilized 150-350-μm GS (Eric Kang, China) and 90-180-μm PVA (PVA-100, Cook) were purchased for imaging. FCM (ACMEmicrospheres, USA) were used for hepatic PVE. FCM had a mean diameter of 100 μm, ranging from 90 to 105 μm. ACI and PCI were performed with the same imaging parameters except sample-to-detector distance (d = 1 cm and 60 cm, respectively). The distance was changed by moving the CCD camera on a rail. Relative densities were evaluated by line profile analysis via Image-Pro Plus 6.0.
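A rough stand-in for that line profile step (the study used Image-Pro Plus, not Python, and the image and coordinates below are synthetic) is sketched here to show how intensity along a straight line can be sampled and compared.

```python
# Illustrative line profile extraction on a synthetic image; not the
# original Image-Pro Plus workflow, and the coordinates are arbitrary.
import numpy as np

def line_profile(image, start, end, n_points=200):
    """Sample image intensity along the straight line from start to end (row, col)."""
    rows = np.linspace(start[0], end[0], n_points).astype(int)
    cols = np.linspace(start[1], end[1], n_points).astype(int)
    return image[rows, cols]

image = np.random.default_rng(0).normal(100.0, 5.0, size=(512, 512))
profile = line_profile(image, start=(100, 50), end=(100, 450))
print("peak-to-trough contrast along the line:", profile.max() - profile.min())
```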
Phase Contrast CT Imaging
1000 projection images were obtained from each sample over 180° in rotation steps of 0.18°. The projections were recorded with sample-to-detector distance of 60 cm and exposure time of 1 s. The raw data were processed by applying the filtered back projection (FBP) algorithm with PITRE software [18]. 3D phase contrast reconstructed images were acquired by using the Amira 5.2 software (Mercury Computer Systems, USA).
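As a hedged, simplified illustration of the filtered back projection step (the actual reconstruction used the PITRE software; here scikit-image and a synthetic phantom stand in, and the acquisition geometry is only loosely mimicked), a Python sketch could look like this.

```python
# Toy FBP reconstruction: 1000 projections over 180 degrees of a synthetic
# phantom, reconstructed with the default ramp-filtered back projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)            # stand-in for one slice

theta = np.linspace(0.0, 180.0, 1000, endpoint=False)   # projection angles (deg)
sinogram = radon(image, theta=theta)                    # forward projections

reconstruction = iradon(sinogram, theta=theta)          # filtered back projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```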
SR Imaging of GS and PVA
The radiolucent characteristic of GS and PVA was demonstrated in Fig. 2. No distinct contrast between GS or PVA and its surrounding air could be observed on the absorption images (Figs. 2c and g). After adjusting the distance to 60 cm, we were able to clearly visualize the GS and PVA on the phase contrast images (Figs. 2d and h). The two embolic agents were both irregular in shape (Figs. 2b and f).
Morphological Observation and SR Imaging of FCM
FCM have smooth and uniform surface morphology, as confirmed by optical microscopy (Fig. 3a). The microspheres also present bright green fluorescence (Fig. 3b). In Figs. 3c and d, the images obtained by absorption contrast could not reveal the microspheres at all. In comparison, PCI could provide clear visualization of the microspheres (Figs. 3e and f). The beads caused a visible change in intensity around the edges on the phase contrast image (Fig. 3f). The fluorescent micrograph (Fig. 4b) shows FCM in the liver more clearly than the optical micrograph (Fig. 4a).
PCI of Hepatic Embolization
FCM could be clearly identified inside the occluded vessel in dehydrated livers (Figs. 4c and 5a). In Fig. 5b, a portal vein of about 100 μm and its branches are clearly shown to be embolized by a 100-μm FCM. In Figs. 5c and d, FCM can be clearly distinguished from the wall of the vessel on axial view. The spatial distribution of FCM inside target vessels could be clearly revealed by 3D reconstruction application (Figs. 5e and f). The FCM could also be clearly revealed to identify the embolized segment of the non-dehydrated liver (Fig. 6).
Discussion
In order to prevent postoperative liver insufficiency, PVE is often clinically used to stimulate growth of the non-embolized liver segment [19][20][21]. The effect will be better if the embolic agents selectively block the blood supply that feeds the tumor. Accordingly, good knowledge of embolic spatial distribution is vital to make embolization at the desired liver segment. The present study has for the first time reported that PCI has great potential for preclinical PVE investigation with radiolucent embolic agents.
FIG. 3. The intensity values along a straight line (green) were displayed by using line profile analysis (red). Note that the beads caused a reduced change in intensity between the left (2) and right (3) boundaries on the phase contrast image (f). Images were obtained at the energy of 19 keV with two sample-to-detector distances of 1 cm (c) and 60 cm (e). The pixel size was 3.7 μm × 3.7 μm; the exposure time was 1 s. doi:10.1371/journal.pone.0080919.g003
Though the effects of x-ray scatter on the absorption contrast images are significantly reduced by using a narrow SR x-ray beam, no distinct contrast between GS or PVA and its surrounding air could be observed by ACI. By increasing the sample-to-detector distance, in-line phase contrast can be obtained for the sample [22,23]. PCI has been shown to be quite suitable for liver research, such as liver fibrosis [24] and liver cancer [25]. PCI, unlike ACI, depends mainly on phase shift properties to offer greatly improved contrast for low-absorption materials [10,26,27]. The phase contrast technique can convert such phase shifts into intensity differences that can be detected directly. The two embolic agents are irregular in shape. However, the irregular shape and variable size of these particles may result in a poor correlation between the occlusion level and the particle size [28]. When large particles are properly oriented, they can reach a distal vessel. In addition, the aggregation behavior of the particles may cause blockage of proximal large vessels rather than other desired vessels. Nowadays, spherical embolic agents with uniform shape have been developed to overcome the disadvantages of conventional embolic agents [28][29][30]. The microspheres can make a predictable occlusion of vessels according to the particle size selected. Here, we use spherical FCM as radiolucent embolic agents. The fluorescent characteristic of the microspheres can offer the potential for multimodal imaging. Because the difference between the absorption coefficients of FCM and air is small, no evident contrast can be detected by ACI. In comparison, PCI exploits the differences in the refractive index and enables clear visualization of the FCM. The hepatic vessels were filled with air to replace blood after they were fully dehydrated. On PCI, the phase shifts arising from FCM-air interfaces can generate sufficient image contrast to show the weak-absorption FCM. After the injection, FCM run through vessels, and then stop when they reach vessels of their own size. Thus, the embolization level was directly related to the diameter of injected embolic agents. Many embolic beads were found in the three main lobes of the liver; still, some FCM remained in the main portal vein. In a further study, a super-selective catheterization technique may be used to concentrate the embolic agents in the targeted hepatic lobe. The phase contrast CT imaging could also provide adequate image contrast to distinguish the FCM from the non-dehydrated liver tissues; consequently, the embolized liver segment could be directly and obviously identified. Besides phase contrast, high spatial-resolution performance is another important characteristic for PCI using SR [31,32]. The spatial resolution of PCI can reach the submicron scale, which is much higher than that of conventional angiography using x-ray, CT, and MRI [33][34][35]. On PCI, we could visualize vessels of about one-tenth of the diameter measured by conventional angiography. So PCI has the potential for imaging fine embolized vessels.
In summary, the characteristics of sensitivity to low-absorption materials and high resolution for PCI are very attractive. Compared with ACI, PCI is capable of creating remarkable visibility of radiolucent embolic agents. In addition, phase contrast CT technology can help to noticeably reveal the 3D spatial distribution of low-absorption embolic agents and clearly identify the embolized liver segment. Therefore, PCI can be currently used for pre-clinical evaluation of new embolic agents in animal models; meanwhile, the imaging modality may allow potential medical application if a compact SR x-ray source is employed in the future. | 2016-05-04T20:20:58.661Z | 2013-12-04T00:00:00.000 | {
"year": 2013,
"sha1": "1f35b1b936a49535a503a11ab0ab3bc42d77c687",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0080919&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f35b1b936a49535a503a11ab0ab3bc42d77c687",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269684550 | pes2o/s2orc | v3-fos-license | DIVERSITY AND PHYTATE-DEGRADING POTENTIAL OF YEAST MICROORGANISMS ISOLATED FROM SOURDOUGH
Phytases, which perform the stepwise hydrolysis of phytic acid to myo-inositol and inorganic phosphate, are used worldwide to reduce phosphorus pollution and improve nutrition in monogastric animals and humans. Yeasts isolated from their natural environments represent rich and still underexplored sources of industrially valuable enzymes, including phytases; therefore, they are widely studied for the production of these enzymes. In this regard, thirteen pure yeast cultures were isolated from the microbial consortium of four types of sourdough obtained during the natural fermentation of different grain-based flours. Ten of the newly isolated yeast strains were selected as potential phytase producers based on their growth in liquid culture media with sodium phytate as the sole source of phosphorus. Using 18S rDNA and D1/D2 26S rDNA analyses, the species affiliation of the selected isolates was established. They belonged to seven yeast species from three families, with the most significant representation of the family Saccharomycetaceae. Intracellular phytate-degrading activity was found in 8 isolates, the highest being in Nakaseomyces glabratus strain 7-4. The highest level of extracellular phytase was measured in Pichia membranifaciens strain 5-2. Both isolates showed significant antioxidant capacity, higher than that of ascorbic acid.
INTRODUCTION
Phytases are a class of enzymes that catalyse the hydrolytic degradation of phytic acid to free inorganic phosphorus and lower molecular weight myo-inositol phosphate esters [1]. Among the phytate-degrading enzymes described so far, the most widespread are those belonging to the group of histidine-acid phosphatases (HAPs) (EC 3.1.3.8), which are found in both microorganisms and higher eukaryotes [2]. The addition of phytases to animal feed, on the one hand, increases the bioavailability of digestible phosphorus and minerals by breaking down phytic acid, thus removing its anti-nutrient effect and ensuring balanced nutrition. On the other hand, phytases, included as feed additives, lead to a reduction in the amount of undigested phytate in the manure, which significantly reduces the negative consequences for the environment [3,4]. Microorganisms are a promising source of phytate-degrading enzymes. A significant number of bacterial and fungal species that produce phytases have been isolated from diverse environments [5,6]. Phytate-degrading enzymes isolated from Aspergillus niger, Peniophora lycii and Escherichia coli are added to animal feed to improve the bioavailability of phosphorus and minerals [6-8]. Even though some of the already described microbial phytases have found industrial applications, they are still unable to meet all the requirements of the feed industry. The search for new phytases with high activity and stability at temperatures above 37°C and acidic pH, accompanied by low production cost, is the subject of increased scientific interest [9]. Yeasts are good candidates for enzyme production due to their ease of cultivation, rapid growth, and genetic stability. Moreover, these eukaryotic microorganisms naturally inhabit the surface of phytic-acid-containing crops, vegetables, and other plants, suggesting that they produce phytate-degrading enzymes. They could be isolated from various cereal-based fermented foods and beverages and are of great importance for preserving and improving the quality of food products due to their antioxidant and hydrolytic capacity. In the present study, we screened thirteen yeast cultures, newly isolated from different types of sourdough, for their extracellular and intracellular phytase production. The isolates with the highest phytase activity were taxonomically identified to a species level. The antioxidant capacity of the selected yeast microorganisms was also studied in relation to their potential industrial application in food and feed processing.
Sourdough preparation
Four different types of cereal flour (wheat white flour (S5), rye flour (S6), wholegrain rye flour (S7) and white wheat flour type 500 (85%) and type 1850 (15%) (S8)) were used as a raw material for spontaneous microbial fermentation and production of sourdough. Mixtures of 5 g flour and 5 mL prewarmed distilled water were prepared and left for 24 hours at room temperature. The sourdough fermentation was performed for 7 days, and every 24 hours a new portion of flour and water was added to the mixture. The obtained fermented product was stored at 4°C and used as a source for further analyses.
Isolation of pure yeast cultures from sourdough
Enrichment of yeast cultures was performed for each sourdough sample (S5, S6, S7 and S8) (~10 g of each) aerobically at 28°C for 2 days, in 500 mL Erlenmeyer flasks containing 100 mL sterile YPD medium (20 g glucose, 10 g peptone and 10 g yeast extract per litre of distilled water, pH 6.3). The antibiotics tetracycline and streptomycin were added at a final concentration of 50 mg L⁻¹ to inhibit bacterial growth. Ten-fold dilutions from the enriched microbial cultures S5, S6, S7 and S8 were inoculated onto YPD plates to isolate single colonies. The cultures were then incubated at 28°C for 48 h. Morphologically different types of colonies were selected after incubation, and pure cultures were obtained after at least three repeated cultivations on agar.
Morphological characterization
The morphology of yeast colonies was observed on a solid YPD medium using a binocular magnifier. The morphology of yeast cells was observed by light microscopy.
Biochemical characterization
To taxonomically identify the newly isolated yeast microorganisms, a biochemical rapid identification test API 20 C Aux (bioMerieux) was performed according to the manufacturer's instructions. The results obtained were processed by apiweb software (bioMerieux).
Genetic characterization Isolation of genomic DNA
The extraction of gDNA was performed according to the protocol described by Biss et al. [10].
PCR amplification and sequencing of 18S rDNA and D1/D2 region of 26S rDNA
Nearly the entire nucleotide sequence of the 18S rDNA gene (22-1771 nt; Saccharomyces cerevisiae numbering) was amplified from the extracted DNA for all yeast species using universal primers NS-1F (5'-GTAGTCATATGCTTGTCTC) and NS-8R (5'-TCCGCAGGTTCACCTACGGA) [11]. Domains 1 and 2 of the 26S rDNA gene (63-642 nt; S. cerevisiae numbering) for 2 yeast species were amplified using oligonucleotide primers NL-1F (5'-GCATATCAATAAGCGGAGGAAAAG) and NL-4R (5'-GGTCCGTGTTTCAAGACGG) as described by Kurtzman and Robnett [11].
The amplified DNA fragments were sequenced in Macrogen Europe B.V, Netherlands.The same oligonucleotide primers were used for the sequencing.The obtained sequences were compared to the known sequences in the GenBank database by using BLASTn search to determine their close relatives.
Screening of the yeast isolates for phytase production
The screening was performed on solid and liquid mineral medium (Phytase Screening Medium, PSM), according to Palla et al. [12]. It contained (w/v): 1.0% glucose, 0.4% sodium phytate (Sigma, USA), 0.2% CaCl2, 0.5% NH4NO3, 0.05% KCl, 0.05% MgSO4, 0.001% FeSO4, 0.001% MnSO4 (pH 5.0). Phytase production by yeast isolates from S1, S2, S3, and S4 sourdough samples was determined qualitatively by the agar diffusion method. The culture broth of each yeast strain (0.1 mL) (grown on PSM for 48 h) was dropped into wells on PSM agar plates, or a loop of 24 h pure yeast culture was streaked on the test media. After incubation for 72 h at 28°C, a specific two-step staining of the medium with an aqueous solution of CoCl2 and subsequent soaking with a solution of 6.25% (NH4)6Mo7O24·4H2O and 0.42% NH4VO3 (1:1) was applied as described by Bae et al. [13]. A commercial phytase enzyme preparation (Sigma-Aldrich) with a concentration of 0.4 U mL⁻¹ was used in the experiments as a control. The positive phytase activity was shown as the presence of greenish zones of phytate degradation against a yellow background.
The ability of the selected yeast isolates to degrade sodium phytate was also detected by their growth in the liquid PSM medium containing 0.4% (w/v) sodium phytate as the sole source of phosphorus. Isolates were cultivated at 28°C for 48 h in flasks, and the growth was monitored by measuring the optical density at 600 nm (OD600nm).
Intracellular and extracellular phytase activity assay
Phytase activity was assayed in a 1.075 mL total reaction mixture containing 0.2% phytic acid sodium salt (Sigma) in sodium-acetate buffer (0.05 M, pH 5.5) with 2 mM CaCl2 at 30°C for 30 min. The reaction was terminated by adding 10% trichloroacetic acid, and phosphorus liberated by the enzymatic action was measured after adding 0.75 mL colour reagent, prepared daily by mixing four volumes of 1.5% (w/v) ammonium molybdate in 5.5% (v/v) sulfuric acid solution and one volume of a 2.7% FeSO4 solution. The absorption was registered at 700 nm [13]. One unit of phytase activity is defined as the release of 1 µM inorganic orthophosphate per minute under the above conditions. The intracellular phytate-degrading activity was analysed in cell-free extracts obtained after cell disruption of the yeast isolates. For this purpose, yeast biomass was mixed with glass beads (size 0.1 mm) and 0.05 M potassium phosphate buffer, pH 7.8, in a ratio of 1:1:2 and subjected to disintegration in a Bullet Blender Storm homogeniser at 8000 rpm, three times for 5 min. Cell debris was removed by centrifugation at 2300 × g for 15 min at 4°C, and the resulting supernatant was clarified after centrifugation at 15500 × g for 20 min at 4°C. The cell-free homogenate was stored at -20°C and used for intracellular phytase assay. Extracellular phytase activity was analysed in culture broth obtained after 48 h cultivation in PSM medium (28°C), followed by centrifugation for 15 min at 2300 × g for cell removal.
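A minimal sketch of how such readings could be turned into units is given below; it assumes a phosphate standard curve measured alongside the samples and an illustrative enzyme volume (neither is specified above), and it reads the stated unit definition as 1 µmol of phosphate released per minute.

```python
# Illustrative conversion of blank-corrected A700 readings into phytase
# units via a phosphate standard curve; all numbers are hypothetical.
import numpy as np

std_pi_umol = np.array([0.0, 0.1, 0.2, 0.4, 0.8])    # umol Pi per reaction
std_a700 = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # measured absorbance

def phytase_units_per_ml(a700_sample, a700_blank,
                         reaction_min=30.0, enzyme_ml=0.1):
    """Units per mL of enzyme preparation (1 U = 1 umol Pi released per min)."""
    a700 = a700_sample - a700_blank
    # Read umol Pi off the blank-corrected standard curve by interpolation.
    pi_released = np.interp(a700, std_a700 - std_a700[0], std_pi_umol)
    return pi_released / reaction_min / enzyme_ml

print(f"Activity ~ {phytase_units_per_ml(0.35, 0.03):.3f} U/mL")
```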
Total antioxidant capacity of the selected yeast isolates
Cell-free extracts of the yeast strains, isolated from sourdough, were tested for their antioxidant capacity according to Kumaran and Karunakaran [14]. The ascorbic acid solution in methanol (0.09% w/v) was used as a positive control. The antioxidant activities of the yeast strains were expressed as a number of equivalents of ascorbic acid (antioxidant activity = 1.0).
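Expressed numerically, this normalization amounts to dividing each extract's measured activity by that of the ascorbic acid control; a small illustration with invented readings follows.

```python
# Made-up scavenging values illustrating the ascorbic acid equivalents
# normalization (ascorbic acid control defined as activity = 1.0).
scavenging_ascorbic = 0.62                       # control reading
scavenging_extract = {"7-4": 0.71, "5-2": 0.68, "8-2": 0.44}

for strain, value in scavenging_extract.items():
    print(f"strain {strain}: {value / scavenging_ascorbic:.2f} ascorbic acid equivalents")
```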
Data analysis
The analyses were performed in triplicate, and the data used represent the mean values with standard error of the mean (±SEM) of the three independent experiments. The statistical analysis was performed using MICROSOFT OFFICE 365 EXCEL 2020 software.
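The same summary can, of course, be computed outside of Excel; the short sketch below (with placeholder replicate values) shows the mean ± SEM calculation used throughout.

```python
# Placeholder triplicate readings; illustrates the mean +/- SEM summary only.
import numpy as np
from scipy import stats

triplicate = np.array([0.542, 0.560, 0.551])  # e.g. OD600 replicates
print(f"{triplicate.mean():.3f} ± {stats.sem(triplicate):.3f} (mean ± SEM)")
```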
Screening of the isolates for growth in phytate mineral medium
The growth of the isolated thirteen pure yeast cultures in a liquid medium containing sodium phytate as the sole source of phosphorus was evaluated based on their OD600nm values (Fig. 2).
Ten isolates showed very good growth after 48 hours of cultivation, with OD600nm varying between 0.143 and 0.560. One isolate showed no growth under the selected culture conditions. Two more strains, 6-1 and 7-5, showed very poor growth (OD600nm < 0.1). Strains 5-1, 5-2, and 5-3, followed by 7-4, 7-3, 8-3, 7-2 and 8-2, showed high optical density values, indicating that they efficiently took up the substrate from the PSM medium and probably expressed phytase activity. The enzyme phytase breaks down the phytate and provides the phosphorus necessary for growth and metabolic processes in these yeast strains [15].
Biochemical and genetic characterization of the potential phytase-producing yeast cultures
The 10 yeast strains showing good growth in the phytate mineral medium were further taxonomically characterized by 18S rRNA gene analysis. After amplification and sequencing of the 18S rRNA gene for each isolate, and subsequent phylogenetic analysis, the ten isolates were assigned to seven yeast species from three families (Table 1): Pichiaceae (Pichia membranifaciens (3 strains) and Pichia kudriavzevii (1 strain)); Saccharomycetaceae (Candida milleri, reclassified as Kazachstania humilis [16] (1 strain), Kazachstania viticola (1 strain), Nakaseomyces glabratus (1 strain) and Saccharomyces cerevisiae (2 strains)); and Phaffomycetaceae (Wickerhamomyces anomalus, 1 strain). Eight of the isolates showed more than 98% similarity to the closest representative in the NCBI database (Table 1), allowing their species identification. In two of the isolates, 6-2 and 7-3, however, a lower percentage of identity of the 18S rRNA gene sequence (97.26 and 87.49%) with the most closely related organisms in the database was found; therefore, additional biochemical and genetic (Domains 1 and 2 (63-642 nt) of the 26S rRNA gene) analyses were performed.
The yeast isolate 7-3 showed 87.49 % similarity in the 18S rRNA gene to that of the yeast species Wickerhamomyces anomalus. According to the NCBI database, this species was originally described as Saccharomyces anomalus E.C. Hansen, based on morphological characteristics [17]. Later, the application of the polyphasic approach to taxonomic identification led to a number of subsequent reclassifications of this species: Endomyces anomalus, Hansenula anomala, Pichia anomala, Willia anomala, Candida beverwijkiae and Candida pelliculosa. Since 2008, the species has been assigned to the newly described genus Wickerhamomyces [17]. This genus belongs to the order Saccharomycetales, family Phaffomycetaceae. Since the strains of W. anomalus show quite considerable morphological and physiological variation, the species has a large number of synonyms. A distinguishing biochemical characteristic of Wickerhamomyces anomalus is its ability to degrade cellobiose and ferment sugars, including xylose [18]. The biochemical analysis of isolate 7-3 confirmed the ability of this strain to ferment a number of sugars, including xylose, and to degrade cellobiose (Table 2). Studies by other authors regarding the biotechnological application of W. anomalus have shown the ability of this species to synthesize cell-bound phytase [19]. Palla et al. reported the isolation of W. anomalus as the dominant species during the natural fermentation of different whole-grain flours [20]. The assignment of isolate 7-3 to this yeast species was also confirmed by phylogenetic analysis based on partial sequencing of the 26S rRNA gene. Isolate 7-3 was determined to share 98.86 % sequence similarity of its 26S rRNA gene D1/D2 domain with the Wickerhamomyces anomalus strain TEMFP3 (MH481638.1).
The yeast strain 6-2 showed 18S rRNA sequence similarity below 98 % to its nearest phylogenetic relative, Pichia membranifaciens. It was distinguished among the other isolated yeast strains by its pink colonies on solid medium and elongated cells that occur singly, in pairs, or in chains forming a pseudomycelium. The cells of Pichia membranifaciens, like those of isolate 6-2, are elongated and form filamentous structures (pseudohyphae). However, there is a difference in the morphology of the colonies, which for the species Pichia membranifaciens are described as yellowish and smooth [21]. Yeasts of this species are most often isolated from various cereals and plants and perform alcoholic fermentation. The ability of this species to ferment sugars to ethanol with high capacity makes it a promising and affordable sustainable biological solution to the global water and energy crisis [22]. The anamorphic species of Pichia membranifaciens are Candida valida and Pichia manchuria [23].
Table 2. Biochemical characteristics of yeast isolates.
Biochemical profiling of isolate 6-2 revealed the strain's ability to ferment a limited range of sugars such as glucose, xylose, xylitol and N-acetyl glucosamine (Table 2). Processing of the obtained results with the apiweb software showed 98 % similarity of isolate 6-2 with Candida boidinii. The assignment of yeast species to genera and families was primarily based on vegetative cell morphology, mating type, and physiological-biochemical characteristics when applying fermentation and growth tests commonly used in yeast systematics. The application of molecular methods based on gene sequence analyses to yeast systematics shows a discrepancy between data from phenotypic and genotypic analyses [11]. To confirm the species identity of isolate 6-2, additional analyses of the D1/D2 sequence of the 26S rRNA gene were performed. The obtained high percent similarity of 98.74 % to the nearest Pichia membranifaciens strain MUT<ITA>:6351 (MT151656.1) proved the assignment of isolate 6-2 to this yeast species.
Intracellular and extracellular phytate-degrading activity in the selected 10 yeast isolates
The 10 yeast strains, selected on the basis of their good growth on liquid phytate medium, were tested for extracellular phytate-degrading activity on agar medium containing the specific substrate (Fig. 3). Clearly visible halos, formed as a result of phytate degradation by extracellular phytase, were observed after specific two-step staining [13]. The application of this staining method makes it possible to distinguish the effect of the phytase from that of acids produced by some yeast strains. The results of the qualitative analysis revealed that 30 % of the isolates (7-2, 7-3, 7-4) formed clearly visible halos (Petri plate A). Weak degradation of phytate (poorly visible zones) was observed for the culture broths of Pichia membranifaciens strains 5-1 and 5-2 (Petri plate C). Bigger halos of phytic acid degradation were detected around the strokes of the same strains (Petri plate D).
All ten yeast strains were quantitatively tested for the presence of intracellular and extracellular enzyme activity, although the qualitative analysis showed the presence of extracellular phytase activity in only 5 of the isolates. The measurement of the intracellular phytate-degrading activity revealed that three of the yeast cultures, Kazachstania humilis, Pichia membranifaciens and Nakaseomyces glabratus, showed significantly higher intracellular enzyme activity than the other studied strains (Fig. 4). It varied between 0.57 and 0.74 U mL-1. In the isolates 5-1 and 5-2 (Pichia membranifaciens), Kazachstania viticola (7-1), Pichia kudriavzevii (7-2) and Wickerhamomyces anomalus, the intracellular phytate-degrading activity was in the range of 0.11-0.51 U mL-1. No phytase activity was registered in the intracellular extracts of strains 8-2 and 8-3, which were identified as Saccharomyces cerevisiae based on 18S rDNA analysis.
The spectrophotometric assay of extracellular phytase production showed the highest level of the enzyme, 0.09 U mL-1, in Pichia membranifaciens isolate 5-2 (Fig. 5). Other authors have also reported the production of phytase by this species [24,25]. For strains Pichia kudriavzevii 7-2, Wickerhamomyces anomalus 7-3 and Nakaseomyces glabratus 7-4, the detected extracellular activity was about 0.05-0.06 U mL-1. In 4 yeast strains (Pichia membranifaciens 6-2, Kazachstania viticola 7-1, S. cerevisiae strains 8-2 and 8-3) no phytase activity was measured in the tested culture broth, although they had been selected on the basis of good growth in the phytate selective medium. Saccharomyces cerevisiae strains have earlier been reported to have phytase activity, which was not confirmed by our results [26]. Similar results, i.e. the presence of growth but a lack of phytase activity or very low levels of the synthesized enzyme, were reported by other authors [27]. In this case, the growth was probably due to the uptake of free phosphorus, which was found, albeit in very low concentrations, in the commercial phytate preparations. Another possible explanation is cell growth at the expense of reserve phosphorus in the inoculum.
Fig. 3. A qualitative method for detection of extracellular phytase. Petri plates A, C and F: 0.1 mL of culture liquids were dropped into wells on phytate agar medium. The supernatants were obtained after 48 hours of cultivation of the selected yeast strains in liquid PSM medium and subsequent centrifugation of the culture broth to remove the cells. Petri plates B and D: strains inoculated by the stroke method.
Antioxidant capacity of yeast cultures isolated from sourdough
Because reactive oxygen species underlie many human diseases, such as cancer, diabetes and autoimmune diseases, interest in the antioxidant properties of foods has greatly increased in the last few years. Recently, yeasts have been shown to enhance the bioactive components of fermented food products through the biosynthesis of enzymes and metabolites such as glutathione, citric acid, coenzyme Q, torularhodin, tocopherols, riboflavin (vitamin B2), cytochrome C, etc., which can act as antioxidants, thus affecting the antioxidant properties of foods, in particular bakery products [20]. In this regard, the antioxidant capacity of the most active phytate-degrading yeast cultures was investigated (Table 3). The study revealed that all the tested strains reduced Mo (VI) to Mo (V) in different proportions, with the level of antioxidant activity varying between 0.42 and 1.5. Furthermore, two of the seven tested yeast strains, 7-4 and 5-2, showed higher antioxidant capacity than ascorbic acid. Among the remaining five yeast strains, the intracellular extract of Pichia membranifaciens strain 5-1 showed the same antioxidant activity as ascorbic acid, while isolates 7-2 and 7-3 were characterized by 8 and 12 % lower reducing power than the positive control. The data obtained suggest that the studied yeast microorganisms, in addition to their phytate-degrading capacity, which has a positive effect on increasing the bioavailability of various minerals, could also serve as a source of natural antioxidants when added as starter cultures to foods.
CONCLUSIONS
An increasing number of studies highlight the important role of yeast metabolism in the functional and nutritional characteristics of sourdough. Therefore, the evaluation of the diversity and important functional properties of these microorganisms is of increased scientific interest. Newly isolated yeast microorganisms offer new metabolic genes and metabolites that could serve to develop new biotechnological products. In this regard, our study revealed the diversity and the phytate-degrading and antioxidant potential of yeast microorganisms newly isolated from a sourdough microbiota. Four different types of sourdough were obtained in the laboratory by spontaneous microbial fermentation of 4 Bulgarian grain-based flours. The results obtained showed a different yeast species composition in the 4 investigated sourdough samples. Some of the isolates are promising producers of industrially important phytate-degrading enzymes and bioactive metabolites with antioxidant properties.
Fig. 5. Level of extracellular phytase in supernatants of the tested yeast isolates.
* AA is compared with the activity of ascorbic acid = 1.0 as a control.
Table 1. Properties of yeast microorganisms isolated from sourdough samples.
Table 3. Antioxidant activity of yeast cultures isolated from different sourdough samples. | 2024-05-11T15:55:38.161Z | 2024-05-07T00:00:00.000 | {
"year": 2024,
"sha1": "632d78b45ce652ac72ec7682cd5f3f9c7e2d8915",
"oa_license": "CCBYNC",
"oa_url": "https://j.uctm.edu/index.php/JCTM/article/download/365/229",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "48582ef596e6708c8e98b355f423df93e843fa78",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
251997056 | pes2o/s2orc | v3-fos-license | Korean Red Ginseng extract treatment prevents post-antibiotic dysbiosis-induced bone loss in mice
Background The intestinal microbiota is an important regulator of bone health. In previous studies we have shown that intestinal microbiota dysbiosis, induced by treatment with broad spectrum antibiotics (ABX) followed by natural repopulation, results in gut barrier dysfunction and bone loss. We have also shown that treatment with probiotics or a gut barrier enhancer can inhibit dysbiosis-induced bone loss. The overall goal of this project was to test the effect of Korean Red Ginseng (KRG) extract on bone and gut health using antibiotics (ABX) dysbiosis-induced bone loss model in mice. Methods Adult male mice (Balb/C, 12-week old) were administered broad spectrum antibiotics (ampicillin and neomycin) for 2 weeks followed by 4 weeks of natural repopulation. During this 4-week period, mice were treated with vehicle (water) or KRG extract. Other controls included mice that did not receive either antibiotics or KRG extract and mice that received only KRG extract. At the end of the experiments, we assessed various parameters to assess bone, microbiota and in vivo intestinal permeability. Results Consistent with our previous results, post-ABX- dysbiosis led to significant bone loss. Importantly, this was associated with a decrease in gut microbiota alpha diversity and an increase in intestinal permeability. All these effects including bone loss were prevented by KRG extract treatment. Furthermore, our studies identified multiple genera including Lactobacillus and rc4-4 as well as Alistipes finegoldii to be potentially linked to the effect of KRG extract on gut-bone axis. Conclusion Together, our results demonstrate that KRG extract regulates the gut-bone axis and is effective at preventing dysbiosis-induced bone loss in mice.
Introduction
Osteoporosis is a pathological condition characterized by decreased bone mass and/or altered bone quality/structure. The detrimental consequence of osteoporosis is an increased risk for bone fracture which can lead to increase in morbidity and mortality and decrease in independence and quality of life. The economic burden of fractures related to osteoporosis accounts for~$17 billion in the US [1]. Current medications to treat osteoporosis have limitations such as off-target effects and unwillingness of patients to take medications out of fear of these off-target effects. Thus, a critical unmet medical need is the identification of novel strategies to prevent or treat osteoporosis without significant side effects (reviewed in [2,3]).
Microbiota refers to collective consortium of microorganisms (including bacteria, viruses, and fungi) found in a particular niche. Based on studies over the last decade, we now know that intestinal microbiota and its metabolites are important in regulating diverse physiological processes [4]. Importantly, altered composition of microbiota or a decrease in bacterial diversity has now been shown to be closely linked with the pathogenesis of several disease processes including IBD, obesity and diabetes [5]. We have shown that pathogenic bacteria such as H. hepaticus causes bone loss [6].
Conversely, we and others have shown that beneficial bacteria (eg. probiotics) can enhance bone health and prevent bone loss in various mouse models of osteoporosis (reviewed in [2]). In recent studies we showed that when mice are administered oral broadspectrum antibiotics for 2 weeks followed by 4 weeks of natural repopulation, microbiota composition is significantly altered and this is associated with gut barrier dysfunction and significant bone loss [7,8]. This suggests that directly perturbing a healthy microbiota with antibiotics can lead to bone loss in mice. We further demonstrated that probiotic bacteria L. reuteri can prevent bone loss induced by post-antibiotic dysbiosis. In this present study we tested the effect of Korean Red Ginseng (KRG) on bone loss induced by post-antibiotic dysbiosis. Although KRG has been shown to be beneficial for preventing or attenuating a number of diseases including bone loss [9], the effect of KRG on gut-bone axis has not been investigated.
Ginseng, known as the "king of herbs", is an herbal root. Korean Red Ginseng (KRG) belongs to the family Araliaceae and is officially called Panax ginseng Meyer. Its cultivation in Korea started in ~11 B.C. The ginseng plant contains many active ingredients including saponins and ginsenosides. Recent studies have identified ~128 ginsenosides in Panax ginseng. Ginsenosides have been reported to have multiple activities including anti-diabetic, anti-cancer, anti-oxidant and anti-adipocyte properties (reviewed in [10]). In addition to these properties, some previous studies have shown anti-osteoporotic activities. Previous studies have shown that ginseng and its ingredients are beneficial to bone health [11-15]. However, none of these studies examined the effect of KRG on the gut-bone axis using the antibiotic-dysbiosis model in mice. In this study we tested the hypothesis that microbiota dysbiosis-induced barrier dysfunction and bone loss would be prevented in KRG-treated mice.
Materials
Korean Red Ginseng (KRG) extract was obtained from Korea Ginseng Corp. (Daejeon, Korea), and the major components of the KRG extract are shown below as reported previously [16].
Animals and experimental design
All animal procedures were approved by the Michigan State University Institutional Animal Care and Use Committee and conformed to NIH guidelines. Eleven-week-old male Balb/C mice (#C0009615) were obtained from Charles River Laboratories (Wilmington, MA, USA) and were allowed to acclimate to the facility for one week prior to beginning experiments. Animals were housed at 4 mice per cage, on a 12:12 hour light-dark cycle, and had ad libitum access to sterilized standard chow (Teklad 2019, Teklad, Madison, WI, USA) and water. Upon reaching 12 weeks of age, mice were treated with oral broad-spectrum antibiotics (2 weeks of ampicillin/neomycin, 160 and 80 mg/kg/day in sterilized drinking water) to deplete gram (+) and (−) bacteria [7]. Following ABX treatment, mice were given 4 weeks to naturally repopulate their intestinal microbiome. Korean Red Ginseng extract (KRG extract) at 500 mg/kg/d was orally (by gavage) administered once daily for the duration of the 4-week treatment in the following groups: 1. Vehicle (H2O); 2. KRG extract 500 mg/kg/d (4 wk); 3. ABX (2 wk) + vehicle (4 wk); 4. ABX (2 wk) + KRG extract 500 mg/kg/d (4 wk).
Microcomputed tomography (µCT) bone analysis
Femurs and vertebrae collected during harvest were scanned in a GE Explore Locus µCT (GE Healthcare, Piscataway, NJ, USA) at a resolution of 20 µm obtained from 720 views and were analyzed as described before [7,8,17]. The distal femur trabecular bone region was defined as 10% proximal to the distal growth plate based on total bone length and excluded cortical bone. Trabecular bone was also analyzed within the body of the L4 vertebrae. Trabecular bone parameter values including volume, thickness, spacing, and number were obtained using GE Healthcare MicroView software version 2.2.
Microbiota analysis
A Qiagen® PowerSoil® DNA extraction kit was used to extract DNA from the fecal pellets following standard protocol. The variable region 4 of the bacterial 16S rRNA gene was amplified with universal primers 515f/806r and sequenced on the Illumina MiSeq platform at the Michigan State University Sequencing Core. The raw sequences were processed using QIITA (qiita.ucsd.edu [18]), which is based on QIIME2 algorithms [19], and quality filtered to generate amplicon sequence variants (ASVs) through the Deblur method [20]. Alpha diversity was assessed by Shannon index using Qiita and the values were input into GraphPad Prism for statistical analysis. Beta diversity was assessed using Bray-Curtis dissimilarity metric using Qiita. For analysis of bacterial composition at various taxonomical ranks, the relative distribution at various taxonomical ranks calculated from ASVs were input into GraphPad prism to analyze the distribution of Phylum, Class, Order and Family in the various groups. To further understand the relationship of the abundance changes in the bacterial taxa to bone health, we performed correlation analysis of the various taxa to that of femur BV/ TV. Based on these correlations, we then assessed if there are statistically significant differences in the relative abundance between the various treatment groups. For this we focused at the level of the genus and species.
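The two downstream calculations named here (Shannon alpha diversity from ASV counts and the taxon-to-BV/TV correlations) can be sketched as follows; the actual analysis used QIITA/QIIME2 and GraphPad Prism, so the snippet is only an illustration with hypothetical sample values and a natural-log Shannon index.

```python
# Illustrative sketch of two steps described above: Shannon alpha diversity
# from ASV counts and Pearson correlation of a taxon's relative abundance
# with femur BV/TV. The study used QIITA/QIIME2 and GraphPad Prism; the
# sample values below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

def shannon_index(asv_counts):
    """Natural-log Shannon index (QIIME reports a log2 variant)."""
    counts = np.asarray(asv_counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

print(shannon_index([120, 30, 5, 0, 45]))

# Hypothetical per-mouse values for one genus vs femur bone volume fraction
rel_abundance = np.array([0.02, 0.10, 0.05, 0.15, 0.08])
femur_bv_tv = np.array([8.1, 12.5, 9.0, 14.2, 11.3])   # %
r, p = pearsonr(rel_abundance, femur_bv_tv)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```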
In vivo intestinal permeability measure
For measuring whole intestinal permeability, mice were gavaged with 300 mg/kg of 4 kD fluorescein isothiocyanate dextran (FITC-dextran) in sterile PBS 4 hours prior to the time of death. Sterile blood was collected via cardiac puncture immediately after euthanasia. Serum fluorescence was analyzed using Tecan Infinite M 1000 fluorescent plate reader (Tecan, Mannedorf, Switzerland) at an excitation/emission wavelength of 485/530 nm. The rate of 4 kD FITC-dextran transfer into the serum was calculated as described before [7].
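A minimal sketch of the serum FITC-dextran readout is given below; the conversion from fluorescence to concentration via a FITC-dextran standard curve and the normalization to the 4-hour gavage-to-harvest interval are assumptions made for illustration (the study follows its previously published calculation in ref. [7]).

```python
# Sketch of the serum FITC-dextran permeability readout described above.
# The standard-curve slope and the per-hour normalization are illustrative
# assumptions; the study follows its previously published calculation [7].

def serum_fitc_ug_per_ml(rfu, rfu_blank, std_slope_rfu_per_ug_ml):
    """Convert background-corrected fluorescence (485/530 nm) to ug/mL."""
    return (rfu - rfu_blank) / std_slope_rfu_per_ug_ml

def transfer_rate_ug_per_ml_per_h(conc_ug_per_ml, hours_since_gavage=4.0):
    """Apparent rate of 4 kDa FITC-dextran appearance in serum."""
    return conc_ug_per_ml / hours_since_gavage

conc = serum_fitc_ug_per_ml(rfu=5200, rfu_blank=400,
                            std_slope_rfu_per_ug_ml=950.0)  # hypothetical
print(f"{conc:.2f} ug/mL, {transfer_rate_ug_per_ml_per_h(conc):.2f} ug/mL/h")
```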
Statistical analyses
Data analysis was performed using GraphPad Prism software version 9 (GraphPad, San Diego, CA, USA). The statistical tests are indicated in the figure legends. Data are shown as violin plots with lines at the median and quartiles.
Dysbiosis model and general body parameters
Adult male mice (12 weeks old, Balb/c) received oral broad-spectrum antibiotics (2 weeks of ampicillin/neomycin, 160 and 80 mg/kg/day in sterilized drinking water) to deplete gram (+) and (−) bacteria [7]. Following ABX treatment, mice were given 4 weeks to naturally repopulate their intestinal microbiome. Korean Red Ginseng extract (KRG extract) at 500 mg/kg/d was orally administered during the duration of the 4-week treatment in the following groups: 1. Vehicle (H2O); 2. KRG extract 500 mg/kg/d (4 wk); 3. ABX (2 wk) + vehicle (4 wk); 4. ABX (2 wk) + KRG extract 500 mg/kg/d (4 wk). At the end of the experimental period, animals were euthanized, and various tissues collected. Analyses of bone, intestinal permeability and microbiota were performed as described in our previous studies and as described in the methods [7,17]. As shown in Supplementary Fig. 1, body weight was similar between the different groups at the end of the experiment. Similarly, spleen and kidney weights were not significantly different between the groups. Interestingly however, liver weight was significantly decreased in the ABX + Vehicle group (without KRG extract) compared to Vehicle. In addition, the ABX + KRG extract group also showed decreased liver weight compared to the Vehicle and ABX + Vehicle groups. The significance of this effect on the liver is unclear.
Korean Red Ginseng (KRG) extract treatment prevents dysbiosis-induced bone loss
In previous studies we demonstrated that post-ABX dysbiosis causes bone loss in mice [7,8]. Consistent with that, we demonstrate here that natural repopulation following ABX treatment, i.e. post-ABX (ABX þ Vehicle group) caused a significant bone loss as evident in femur BV/TV and vertebral BV/TV ( Fig. 1 and Supplementary Fig. 2). Treatment with KRG extract during the repopulation period however, prevented this bone loss (both femur and vertebral BV/TV). Analysis of femoral bone microarchitecture revealed a decrease in trabecular thickness (Tb.Th.) in the ABX þ Vehicle group that was prevented by KRG extract treatment. Trabecular number (Tb.N) and spacing (Tb.Sp) were not significantly altered in the femur of the ABX þ Vehicle group. Analysis of the vertebral bone architecture revealed a significant increase in trabecular spacing (Tb.Sp) and a significant decrease in trabecular number (Tb.N) and thickness (Tb.Th) in the ABX þ Vehicle group compared to the controls. KRG extract treatment significantly prevented the Tb.Sp. and Tb.Th. parameters in the vertebrae. Interestingly, treatment of control mice with KRG extract for 4 weeks significantly increased femur BV/TV but not vertebral BV/TV. Consistent with this, the femur bone architecture revealed an increase in trabecular number (Tb.N.) and a decrease in trabecular spacing (Tb.Sp.) but no significant change to trabecular thickness (Tb.Th.) in the KRG extract treated mice compared to control group. Together these results demonstrate that KRG extract treatment during the microbiota repopulation period following ABX treatment is beneficial to bone health. In addition, KRG extract treatment without any underlying disease conditions can increase the femoral bone volume in healthy mice.
Korean Red Ginseng (KRG) extract treatment modulates intestinal microbiota
To assess the effect of KRG extract on the intestinal microbiota, the relative abundance and composition of the fecal extract was determined prior to the start of the study ("pre" group) and at the end of the study ("post" group). Alpha diversity was assessed by calculating the Shannon index. As shown in Fig. 2, compared to the "pre" group, ABX treatment followed by natural repopulation caused a significant decrease in alpha diversity. KRG extract treatment during this natural repopulation period (ABX + KRG extract group) completely prevented this decrease in alpha diversity. None of the other treatment groups showed any significant changes when compared between the respective -pre and the -post groups. To understand the relationship of changes in alpha diversity to that of bone volume, we correlated the Shannon index to that of femur and vertebral BV/TV. The Shannon index from the post-microbiota samples showed a significant correlation to both femur and vertebral BV/TV (r = 0.3028, p = 0.0364 for femur and r = 0.2966, p = 0.0365 for vertebra) (Fig. 2). These results suggest that changes in alpha diversity following post-antibiotic dysbiosis likely predict femur and vertebral bone volumes.
Further analysis of beta diversity did not reveal any significant findings (not shown). We next analyzed bacterial composition at various taxonomical ranks. For this, the relative distributions at various taxonomical ranks calculated from ASVs were input into GraphPad Prism to analyze the distribution of Phylum, Class, Order and Family in the various groups. For relative abundance graphs at each taxonomic rank (Fig. 3 and Supplementary Fig. 3) the replicates from each treatment group were combined and the aggregate data are shown. At the Phylum level, there were some marked differences in their distribution between the groups. Bacteroidetes and Firmicutes were the predominant phyla in all groups (Fig. 3). KRG extract treatment reduced the abundance of Firmicutes in ABX treated (ABX + KRG extract-post) and untreated mice (KRG extract-post). Conversely, KRG extract treatment increased the abundance of Bacteroidetes in the KRG extract treated groups. Also, KRG extract treatment distinctly increased the abundance of Proteobacteria only in the non-ABX treated mice (KRG extract-post). At the class level, Bacteroidia (of Phylum Bacteroidetes) and Clostridia (of Phylum Firmicutes) were predominant in all groups of mice, and these followed similar trends to that of the phyla when compared between the groups. Like the Class, at the level of the Order, Clostridiales (of Class Clostridia) and Bacteroidales (of Class Bacteroidia) were the predominant groups in all the mice, and these showed trends similar to Phyla and Class in terms of the effect of KRG extract treatment. Because Bacteroidetes and Firmicutes were the predominant groups, we focused on the relative abundance of these two Phyla at the Family level. As shown in Supplementary Fig. 3, the Bacteroidales group S24-7, Porphyromonadaceae, Rikenellaceae and Bacteroidaceae were the predominant members among the different families in Bacteroidetes. While S24-7 increased in abundance in post-ABX mice (ABX + Vehicle-post) compared to control, KRG treatment did not affect the levels. Bacteroidaceae and Porphyromonadaceae were decreased in the post-ABX mice (ABX + Vehicle-post) compared to control and KRG extract treatment did not have any marked effect. KRG treatment markedly decreased the abundance of Rikenellaceae in post-ABX mice (ABX + KRG extract-post) compared to its respective controls (ABX + Vehicle-post or ABX + KRG extract-pre). In the Firmicutes, Ruminococcaceae, Lachnospiraceae, Paenibacillaceae and Lactobacillaceae were predominant. Interestingly, Peptostreptococcaceae was present only in the post-ABX group and was absent in all other groups. Also, KRG extract treatment appeared to increase the abundance of Lactobacillaceae in the KRG treated groups.
To further understand the relationship of the abundance changes in the bacterial taxa to bone health, we performed correlation analysis of the various taxa to that of femur BV/TV. We found that the relative distributions of several bacterial taxa were either positively or negatively correlated with femur BV/TV (the ones that were correlated at various taxa up to the family level are shown in Supplementary Table 1; data not shown for genus and species levels). Based on these correlations, we then assessed if there are statistically significant differences in the relative abundance between the various treatment groups. For this we focused at the level of the genus and species. At the genus level we found that Lactobacillus (family Lactobacillaceae), rc4-4 (family Peptococcaceae) and an unknown genus of the family S24-7 showed significant differences between the KRG extract treated and untreated in the post-ABX mice. Specifically, we found that Lactobacillus was suppressed in the post-ABX group (ABX + Vehicle-post) compared to its control (ABX + Vehicle-pre) (p = 0.07 based on ANOVA Holm-Sidak's multiple comparisons test; p = 0.0021 based on t-test) (Fig. 4). This decrease in Lactobacillus was markedly prevented in the KRG extract treated group (ABX + KRG extract-pre vs ABX + KRG extract-post); p = 0.9084. KRG extract did not have any effect on Lactobacillus in the non-ABX mouse groups. Abundance of genus rc4-4 was significantly suppressed by KRG extract treatment in the post-ABX mice when compared to post-ABX mice without KRG extract treatment (ABX + KRG extract-post vs ABX + Vehicle-post) (Fig. 4). Abundance of the unknown genus in the family f_S24-7 was significantly increased by KRG extract treatment compared to its control (ABX + KRG extract-pre vs ABX + KRG extract-post). Overall, at the genus level, KRG extract treatment appears to regulate the abundance of these 3 genera. In addition, abundance of these 3 genera was significantly correlated with femur BV/TV (Lactobacillus and f_S24-7;g_ were positively correlated and rc4-4 was negatively correlated), suggesting a link between these bacteria and bone health.
We further did a similar analysis at the species level, taking into account all the identifiable species that showed significant correlation to femur BV/TV. We then analyzed and compared the abundance of each of those species in the different treatment groups. Interestingly, our results demonstrate that Alistipes finegoldii is significantly modulated by KRG extract treatment. As shown in Fig. 5, post-ABX dysbiosis increased the abundance of Alistipes finegoldii in the post-ABX mice (compared between ABX + Vehicle-pre vs ABX + Vehicle-post; p = 0.0045). KRG extract treatment significantly inhibited the abundance of this bacterium in the post-ABX mice (compared between ABX_Ginseng_Pre vs ABX_Ginseng_Post; p = 0.2148). When compared between post-ABX mice with and without KRG treatment (ABX + Vehicle-post vs ABX + KRG extract-post), abundance of A. finegoldii was significantly lower in the KRG treated mice (p = 0.002). Importantly, abundance of A. finegoldii was significantly and negatively correlated to femur BV/TV (p = 0.0294). Together, microbiota analysis reveals important changes in bacterial composition in response to KRG extract treatment.
Korean Red Ginseng (KRG) extract treatment prevents dysbiosis-induced intestinal barrier leakage
In previous studies we showed that intestinal barrier function is strongly correlated with bone health in the ABX-dysbiosis-induced bone loss model in mice [7]. To examine if KRG extract treatment prevents dysbiosis-induced barrier leakage, we assessed in vivo permeability using FITC-dextran (4 kDa). Although the post-ABX dysbiosis group showed a modest increase in barrier leakage (as determined by serum FITC-dextran levels; p = 0.1), treatment with KRG extract significantly prevented the barrier leakage (Fig. 6). KRG extract treatment in control mice did not affect serum FITC-dextran levels. These results suggest that treatment with KRG extract prevents intestinal leakage induced by ABX-induced dysbiosis. To understand if these changes correlate with bone health, we analyzed the correlation between serum FITC-dextran and femur and vertebral BV/TV. Our findings reveal a significant negative correlation between serum FITC-dextran and vertebral BV/TV (r = −0.3115; p = 0.0261). To further understand the mechanisms of changes in intestinal permeability, we assessed mRNA levels of various junction proteins in the distal colon and ileum. Except for Claudin-4 in the distal colon, none of the other genes were significantly altered in the ABX-treated group (without KRG extract treatment). KRG treatment of the post-ABX group did not alter any of these genes either in the distal colon or ileum when compared to the post-ABX group (data not shown).
Discussion
The focus of the current study is to understand whether Korean Red Ginseng (KRG) extract can prevent antibiotic dysbiosis-induced bone loss in mice as well as decipher the possible mechanisms of action on the gut-bone axis. As indicated earlier, previous studies have shown that ginseng prevents bone loss in other animal models. Kang et al [21] showed that co-administration of Panax ginseng at 500 mg/kg/day along with Brassica oleracea (cabbage) for 10 weeks prevented ovariectomy-induced bone loss in mice. It is important to note that in this model, Panax ginseng alone did not have any significant effect on either body weight or bone loss. Compared to these studies, Kim et al [22] showed that KRG could prevent glucocorticoid-induced osteoporosis (at 100 mg/kg and 500 mg/kg). Similarly, other studies have looked at the effect of ginseng in different models of bone loss and have found ginseng to be protective [23]. However, these studies did not look at the role of gut microbiota or barrier dysfunction in the context of bone loss. In our studies we used a dose of 500 mg/kg/day and observed significant effects on microbiota, barrier function and bone health (both femur and vertebrae) when mice were treated with KRG extract during post-ABX dysbiosis. Interestingly, healthy control mice (without dysbiosis) treated with KRG extract for 4 weeks showed a significant increase in femur BV/TV but not vertebral BV/TV, suggesting that different mechanisms could be at play in regulating femur and vertebral bone at least in control mice.
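To relate the mouse doses discussed here to a human dose, body-surface-area scaling with the standard FDA Km factors is often used; the sketch below applies that conversion to the 500 mg/kg/day mouse dose and an assumed 60 kg adult purely as an illustration (the Km factors and body weight are assumptions, not values from the paper).

```python
# Body-surface-area (FDA Km factor) scaling of the 500 mg/kg/day mouse dose
# to a human-equivalent dose. Km factors (mouse 3, human 37) and the 60 kg
# adult body weight are standard illustrative assumptions, not study values.

def human_equivalent_dose_mg_per_kg(animal_dose_mg_per_kg,
                                    animal_km=3.0, human_km=37.0):
    return animal_dose_mg_per_kg * (animal_km / human_km)

hed = human_equivalent_dose_mg_per_kg(500.0)   # ~40.5 mg/kg
daily_dose_g = hed * 60.0 / 1000.0             # ~2.4 g/day for a 60 kg adult
print(f"HED ~ {hed:.1f} mg/kg, ~ {daily_dose_g:.1f} g/day for a 60 kg adult")
```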
Even though previous studies have not examined the effect of ginseng on gut-bone axis, the role of ginseng on gut microbiota has been examined extensively (for review, see [24]). Han et al [25], showed that feeding white Korean ginseng to rats, increases Muc2 gene expression in the ileum and increases the number of Lactobacillus strains compared to control. Using human subjects, Song et al [26] have shown that treatment with panax ginseng (4 g twice a day for 8 weeks) was associated with some changes in gut microbiota. In another double-blind, placebo controlled human clinical trial [27], responses to KRG administration (at 6 g/day dose) were associated with changes in gut microbial composition. While there were some limitations to the study, the authors showed that KRG treatment decreased Firmicutes and Proteobacteria and increased Bacteroidetes and that patient responders that had higher abundance of Lachnospiraceae and Clostridiales showed decreases in serum total cholesterol and LDL. In a recent study, Ren et al [28] showed that the polysaccharide extract of the American ginseng can increase the relative richness of Lactobacillus and Bacteroides in an antibiotic-induced diarrhea model in rats. Ginsenoside Rk3, similarly improved antibiotic-induced diarrhea and enriched the beneficial microbiota in a mouse diarrhea model [29]. The antibiotic-induced dysbiosis model we have used in this study is not a diarrhea model but models a clinically relevant broad spectrum antibiotic treatment and subsequent dysbiosis. However, like the antibiotic-induced diarrhea model, KRG treatment in our studies also induced beneficial effects on the microbiota.
Our studies demonstrate a number of unique changes in the microbiome with KRG extract treatment at various taxonomic levels. When examined at the genus level, our results reveal the importance of Lactobacillus, rc4-4 and an unknown genus under family S24-7 in the context of KRG extract treatment. In previous studies our lab and others have demonstrated the importance of Lactobacillus in different models of bone loss [7,8,17,30-32]. In particular, we have shown that Lactobacillus reuteri, a probiotic, can beneficially influence the microbiota and bone health in multiple mouse models. Our results here show that the abundance of genus Lactobacillus is decreased with ABX-dysbiosis and that this decrease is markedly prevented by KRG treatment. In addition, abundance of Lactobacillus was positively correlated with femur BV/TV. Taken together, based on our previous studies on Lactobacillus reuteri, our studies strongly suggest an important role for Lactobacillus in the beneficial effects of KRG extract treatment on bone health. The genus rc4-4 belongs to the Phylum Firmicutes, Class Clostridia and Family Peptococcaceae. The role of rc4-4 in modulating the effect of KRG extract is not well known and its effect on bone has not been studied. Our results reveal that the abundance of rc4-4 is negatively correlated with bone volume and that KRG extract treatment decreases its abundance in post-ABX mice. This suggests that the presence of rc4-4 may not be beneficial to bone health. Further studies are needed to test this hypothesis. The abundance of an unknown genus in the family S24-7 was positively correlated with femur bone volume and KRG extract increased its abundance in post-ABX mice, suggesting that this bacterium may be beneficial to bone health. S24-7 belongs to the Phylum Bacteroidetes and Order Bacteroidales. Although the genus is unknown, the effect of S24-7 on bone health or ginseng treatments has not been well studied.
Fig. 4. Violin plots of relative abundance from the various mouse groups (as shown in Fig. 3) show the distribution of the data and line at the median and quartiles. N = 14-16 for all groups except the KRG extract alone group (n = 5). Statistical analyses were performed using ANOVA with post Holm-Sidak's multiple comparisons test and P values from Holm-Sidak's are as shown. Based on t-test, # p = 0.0021, @ p = 0.0309. Pearson correlation was performed between the respective genus and femur BV/TV and is shown on the side.
At the species level, our studies identify Alistipes finegoldii as a potentially important bacteria that is involved in the effects of KRG extract in the antibiotic-dysbiosis-induced bone loss model in mice. A. finegoldii negatively correlates with bone volume and its abundance increases with post-ABX dysbiosis. Importantly, A. finegoldii abundance is decreased with KRG treatment suggesting that the beneficial effect of KRG on bone in this model may be linked to a decrease in this bacterium. Role of bacteria of the genus Alistipes in terms of bone health is not well known, in part because it is a relatively recently described genus. There are 13 species in this genus including A. finegoldii [33]. Although the role of Alistipes in the pathogenesis of disease processes has been contrasting, A. finegoldii colonization has been shown to induce colitisassociated colon cancer via activation of IL-6/STAT3 pathway [34,35]. Thus, it is possible that A. finegoldii has potentially negative effects on bone health and will be the subject of future studies.
None of these studies however, examined the effect of gut microbiota changes in the context of gut permeability and bone health. In our study, we find that KRG prevents dysbiosis and gut permeability dysfunction, and importantly, these effects were associated with significant prevention of bone loss. We have previously shown that inhibiting an increase in intestinal permeability prevents post-ABX-dysbiosis induced bone loss in mice [7]. Thus, it is likely that KRG's effect on intestinal permeability in part explains the mechanism by which KRG extract prevents post-ABX dysbiosisinduced bone loss. Whether the changes induced by KRG on microbiota precedes gut permeability effects is not known and will be the subject of future studies.
Fig. 5. Korean Red Ginseng (KRG) extract treatment inhibits relative abundance of Alistipes finegoldii: Violin plots of relative abundance of Alistipes finegoldii from the various mouse groups (as shown in Fig. 3) show the distribution of the data and line at the median and quartiles. N = 14-16 for all groups except the KRG extract alone group (n = 5). Statistical analyses were performed using ANOVA with post Holm-Sidak's multiple comparisons test. P values are as shown. Pearson correlation was performed between the respective species and femur BV/TV and is shown above the graph.
Our studies for the first time demonstrate a protective effect of KRG extract on post-ABX-dysbiosis-induced bone loss in mice. However, there are some limitations to our studies. We have used a single dose of KRG to test the effect on bone loss in mice. However, the dose we used in our mouse studies is similar to the human dose (3 g/day) used in a randomized, double-blind placebo-controlled trial that showed improvement in arthritis symptoms and serum osteocalcin concentrations over a 12-week period in osteopenic women [15]. Thus, we believe the dose we used in our study is clinically relevant. Our studies did not identify the cellular and molecular mechanisms by which KRG affects the gut-bone axis and this will be the focus of future studies. | 2022-09-02T15:24:13.480Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "1145e2f9568064261a8fac5d803c62b92469ce40",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jgr.2022.08.006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2ce72ed57f0d7df9ea174586b51ea460385064ec",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249383192 | pes2o/s2orc | v3-fos-license | Anti-Proliferative and Cytoprotective Activity of Aryl Carbamate and Aryl Urea Derivatives with Alkyl Groups and Chlorine as Substituents
Natural cytokinins are a promising group of cytoprotective and anti-tumor agents. In this research, we synthesized a set of aryl carbamate, pyridyl urea, and aryl urea cytokinin analogs with alkyl and chlorine substitutions and tested their anti-proliferative activity in MDA-MB-231, A-375, and U-87 MG cell lines, and their cytoprotective properties in H2O2 and CoCl2 models. Aryl carbamates with the oxamate moiety were selectively anti-proliferative for the cancer cell lines tested, while the aryl ureas were inactive. In the cytoprotection studies, the same aryl carbamates were able to counteract the CoCl2 cytotoxicity by 3-8%. The possible molecular targets of the aryl carbamates during the anti-proliferative action were the adenosine A2 receptor and CDK2. The obtained results are promising for the development of novel anti-cancer therapeutics.
Introduction
The plant hormones cytokinins are predominantly adenine-derived regulatory molecules that take part in almost all stages of plant growth and development. Studies of the biological activity of cytokinins in animal cells, implemented mainly on 6-substituted purines, in particular on kinetin [1,2], have shown the presence of antiviral, antiparasitic, antitumor, antioxidant and other therapeutic properties [3][4][5][6]. At the same time, many chemical compounds with cytokinin activity other than substituted purines are known, but their activity has not been investigated on animal and human cells.
One of the promising classes of cytokinin-like compounds are aryl and heteroaryl urea derivatives. Some derivatives, such as 1-phenyl-3-(4-pyridyl) urea (4PU), exhibit surprisingly high cytokinin activity in tobacco callus culture. Some synthetic derivatives were even more active than natural endogenous cytokinins [7,8], but studies of other types of biological activity of these compounds were not carried out. Another class of synthetic cytokinin analogs contain urea and carbamate moieties with an ethylene linker. Among them, the oxalylaryl carbamates ( Figure 1, Structure II) have anti-stress growth-regulatory activity for crops [9,10], and ethylene diurea (EDU, Figure 1, III) has ozone protective properties [11][12][13][14]. Cytokinin-like phenylureas bind to the same site as cytokinin receptors [15], are stable, resistant to the action of oxidases, and contribute to an increase in the activity of peroxidase and superoxide dismutase.
Chemical modification of cytokinin analogs may result in both an increase in their pro-proliferative effects and in an inversion of their activity. Thus, it was shown that the introduction of substituents into the aromatic ring increases the activity, and electron-withdrawing substituents lead to a greater effect than electron-donating substituents [16]. Chlorine derivatives of nonpurine analogues of cytokinins usually have a higher proliferative activity [17].
In addition to the cytoprotective effect, the analogs of EDU were shown to exhibit anticancer activity via ROS-dependent apoptosis induction with EC50 of about 10-20 µM [18], but the available data on this topic are quite limited. Earlier we synthesized a series of aryl-substituted ureas and carbamates containing aromatic chlorine and a modified imidazolidinone moiety. These compounds were found to be cytotoxic to the breast cancer cell line MDA-MB-231, glioblastoma U-87 MG and neuroblastoma SH-SY5Y, but not to the melanoma A-375 cell line. The introduction of chlorine into the aromatic ring of cytokinin analogues significantly reduced the cytotoxicity, but at the same time provided the capability to protect cells from oxidative stress induced by H2O2 [19]. The observed cytotoxicity was quite low (EC50 of about 100 µM) but, given the scarcity of the data, there was a high probability that there could be more active compounds among other similar cytokinin analogs.
In the current research, we tried to find more active analogs of cytokinins with both anti-cancer and cytoprotective activities. We synthesized a novel set of modifications of 4PU (I), EDU (III), and oxalylaryl carbamates (II) with alkyl and chlorine substitutions and evaluated their anti-proliferative and cytoprotective activity. Aryl carbamates with the oxamate moiety were anti-proliferative for the cancer cell lines tested, while the aryl ureas were inactive. In the cytoprotection studies, all the derivatives displayed little or no activity. The possible molecular targets of aryl carbamates during the anti-proliferative action were the adenosine A2 receptor and CDK2.
Compound Synthesis
The preparation of the 1-phenyl-3-(4-pyridyl) urea derivatives I (Table 1) was carried out according to known methods [20,21]. The last stage consisted of the reaction of phenyl isocyanate with 4-aminopyridine. To convert the 4-aminopyridine and 4-amino-2-chloropyridine salts into the free form and to accelerate the process, basic catalysis was used by adding a few drops of triethylamine to the reaction mixture.
Aryl ureas and aryl carbamates (Table 1) were produced by the reaction of the corresponding aryl isocyanates with an imidazolidinone-substituted alcohol or amine in the presence of triethylamine in anhydrous toluene (for aryl ureas) or acetonitrile (for aryl carbamates), as described in the literature [19]. Oxalylaryl carbamates were obtained in the same way as described in refs. [9,10].
Table 1. Structural formulas of synthesized aryl carbamates and ureas.
Anti-Proliferative Activity Evaluation
We first tested the synthesized compounds for their ability to induce cell death or decrease proliferation in a set of cancer cell lines. We used human cell lines for three major cancer types (glioblastoma U-87 MG, melanoma A-375, metastatic breast cancer MDA-MB-231), and a neuroblastoma SH-SY5Y, which was later intended to be a model in a cytoprotection setting. The cells were incubated with the test compounds overnight, and their proliferation was evaluated using the MTT assay. The compounds were assayed in the concentration range of 1-100 µM to account for the lowest potential load for the patient's organism.
All of the compounds from the analogs of EDU series IIIa-h displayed no cytotoxicity for all cell lines tested (Figure 2). On the other hand, all aryl carbamates except IIc were moderately anti-proliferative for all cell lines, decreasing the cell viability by about 40% at 100 µM (Figure 3). Pyridyl urea derivatives demonstrated low anti-proliferative activity; the most active of them was Ic (Figure 4).
Selectivity of the Active Arylcarbamates
To investigate substance selectivity, we used normal immortalized human fibroblast cell line Bj-5ta in the same experimental setting as in the cytotoxicity studies. The compounds displayed slight anti-proliferative activity with about 10% cell death at 100 µM of the substance ( Figure 5). The selectivity indices were not calculated because of the very low cytotoxicity of the compounds for the Bj-5ta cell line in the designated concentration range. However, at 100 µM, IIa, b and d compounds induced a 32-42% proliferation decrease in the cancer cell lines and only 4-17% in the Bj-5ta cell line. Based on these data, compound selectivity calculated as the anti-proliferative activity ratio at the 100 µM concentration was 2 to 8 ( Table 2).
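As stated above, the selectivity values in Table 2 are simple ratios of the proliferation decrease in a cancer line to that in the Bj-5ta fibroblasts at 100 µM; a one-line illustration with numbers from the ranges quoted in the text is shown below.

```python
# Selectivity as defined above: ratio of the % proliferation decrease in a
# cancer cell line to that in Bj-5ta fibroblasts at 100 uM. The numbers are
# illustrative, taken from the ranges quoted in the text (32-42% vs 4-17%).
def selectivity(cancer_decrease_pct, bj5ta_decrease_pct):
    return cancer_decrease_pct / bj5ta_decrease_pct

print(selectivity(40.0, 10.0))  # 4.0, within the reported range of 2-8
```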
Table 2. Selectivity of the aryl carbamate II cytotoxicity, expressed as the percent of proliferation decrease at the compound concentration of 100 µM. Incubation time 20 h, MTT assay data, percent of proliferation decrease, mean ± standard error (n = 5 experiments). Selectivity was calculated as the ratio of the anti-proliferative activity for the appropriate cell line to the anti-proliferative activity for the Bj-5ta cell line. ND, not defined.
We chose the substance IId as a model to further evaluate the selectivity and cell death type based on the observed anti-proliferative activity.
Cell Death Type and Mechanism of the Active Arylcarbamates
An investigation of cell death type and mechanism was performed for the IId compound on the MDA-MB-231 cell line. Several sets of experiments were performed: (1) cell staining with DNA binding and phosphatidylserine binding dyes with further microscopy to detect necrosis and apoptosis, accordingly; (2) measurement of caspase 3, 8, and 9 activity; (3) measurement of the ability of blockers of necroptosis (necrostatin-1 and necrosulfonate), apoptosis (Z-VAD-FMK), autophagy (hydroxychloroquine), and of a ROS scavenger (N-acetyl cysteine) to prevent IId cytotoxicity.
IId treatment led to a slight increase in caspase 3 activity and, to some extent, induced phosphatidylserine externalization (Figure 6). None of the blockers used was able to prevent the cytotoxicity of the compound, except for the ROS scavenger N-acetylcysteine (Figure 7). Consistent with this, the compound induced an increase in the intracellular ROS concentration (Figure 7). However, inhibition of the ROS-sensitive kinase ASK1 did not reduce the compound's cytotoxicity. These data indicate that the compound induces both apoptosis and necrosis, and possibly slows down cell proliferation.
Cytoprotection
Based on the data on the cytoprotective activity of the structurally similar compounds [5,19,22,23], we tested the synthesized compounds in two antioxidant models (protection against H₂O₂ and CoCl₂ cytotoxicity) in a 24 h incubation. In addition, we evaluated the analogs of EDU III for their ability to stimulate cell proliferation after a 72 h treatment. We did not test aryl carbamates II and pyridyl ureas I for this activity, as these compounds were significantly cytotoxic (Figure 3).
In the cytoprotection experiments, the substances were mainly inactive (Figure 8). However, IIc and IId at concentrations from 1 to 10 µM were able to increase the cell survival in the CoCl₂ cytotoxicity test by 3 to 8%. Among the aryl ureas, IIIf exhibited a statistically significant pro-proliferative effect at concentrations of 1-10 µM, and IIIb, c, and d demonstrated anti-proliferative action at the concentration of 100 µM (Figure 9).
Molecular Docking
Since the compounds demonstrated a substantial anti-proliferative activity with a pro-apoptotic component, we decided to perform a series of experiments to gain some insights into their molecular targets. We hypothesized that the synthesized compounds and their molecular prototypes, the cytokinins, could share at least some of the receptors.
For A2AR, aryl carbamates II typically displayed affinity between the inhibitor caffeine and activator adenosine, while EDU analogs III had a lower affinity than both caffeine and adenosine ( Figure 10, Tables 3, S2 and S3). For APRT, the affinity of all compounds was much lower than for the substrate GMP and inhibitor IMP (Figure 11, Tables 4 and S4-S6). For CDK2, aryl carbamates II displayed affinity close to that of the inhibitor SCP2, and aryl ureas had a much lower affinity ( Figure 12, Tables 5, S7 and S8).
Discussion
In this paper, we report the synthesis of some novel pyridyl urea, aryl urea and carbamate derivatives with alkyl and chlorine substitutions, together with an evaluation of their biological activity. The compounds were designed to fill the gap in the known synthetic analogs of the substituted cytokinin-like derivatives. This research continues our earlier study [19], extending it with novel compounds and data on the activity mechanisms. The task looked promising, as such compounds are known to exert cytoprotective and antitumor activity.
To synthesize the designated derivatives, we used known literature methods with the yield in the range of 15 to 55%, which is typical for such compound types.
The synthesized compounds were evaluated for their ability to induce cell death in a set of human cancer cell lines (glioblastoma U-87 MG, melanoma A-375, and metastatic breast cancer MDA-MB-231), chosen for the clinical significance of the corresponding tumors, and in the neuroblastoma SH-SY5Y cell line, which was later intended to be used in the cytoprotection tests. The EDU analog III derivatives were not toxic up to a concentration of 100 µM (Figure 2) after 24 h of incubation, but IIIb, c, d and g displayed some anti-proliferative activity after 72 h (Figure 9). However, IIIf in the latter experimental setting stimulated cell proliferation in SH-SY5Y. Such pro-proliferative activity is quite typical for cytokinin analogs [5,19,22,23].
Aryl carbamate compounds II were anti-proliferative for all cell lines, and three of them demonstrated substantial selectivity compared to the immortalized fibroblast cell line (Table 2). The activity, however, was relatively low for a cytotoxic compound but substantial for an anti-proliferative one, with a 20-40% cell proliferation decrease at a concentration of 100 µM. Pyridyl urea Ic was also anti-proliferative, with some preference toward the melanoma and breast cancer cell lines (Figure 4). The observed activity was in line with that already described in the literature [29].
Based on the discovered selectivity of the aryl carbamates II, we used a set of methods to describe the type of cell death induced by them, with IId as the model compound. We used blockers of necroptosis, apoptosis, and autophagy, stained the cells with the apoptosis-sensitive dye, and evaluated the activation of caspases 3, 8, and 9. IId induced only a slight increase in caspase 3 activity and apoptotic cell staining, and the only blocker able to decrease its cytotoxicity was the ROS scavenger N-acetyl cysteine. The latter's activity agreed with the IId-induced accumulation of ROS in the cells (Figure 7). These results point to a primarily anti-proliferative mechanism of action of the compounds.
To obtain more insights into the molecular mechanism of action of the aryl carbamates II and EDU analogs III, we performed molecular docking studies with the best-known cytokinin analog targets: the adenosine A2 receptor, APRT, and CDK2 [30,31]. We observed affinities close to those of the known inhibitors toward the A2AR and CDK2 for compounds II, and much lower affinities for the compounds III (Tables 3 and 5). These results agree with the literature data on the anti-proliferative activity of the inhibitors of these proteins [32,33], but a more detailed study is required to prove this interaction.
Based on the literature data on the ability of the cytokinin derivatives to protect cells against various stresses, we tested the synthesized compounds for their ability to protect cells against the cytotoxicity of H₂O₂ and CoCl₂, and for their ability to stimulate cell proliferation directly. In these experiments, the substances were mainly inactive (Figure 8). However, IIc and d at concentrations from 1 to 10 µM were able to increase the cell survival in the CoCl₂ cytotoxicity test by 3 to 8%. Among the aryl ureas, IIIf exhibited a statistically significant pro-proliferative effect at concentrations of 1-10 µM.
The obtained data on the aryl urea, aryl carbamate, and pyridyl urea derivatives demonstrated their ability to inhibit cancer cell proliferation. The probable targets of this activity are adenosine A2 receptor and CDK2, but a more detailed study is required to obtain the molecular details of this interaction.
Compounds from the II-series were synthesized according to refs. [9,10]. O-i-Propyl-N-(2-hydroxyethylamino)carbamate (IIa) was synthesized according to known procedures [35].
Briefly, for the oxamate derivatives, a solution of 1 eq. of O-alkyl-N-(2-hydroxyethyl)oxamate in dry toluene (15 mL per 2 g of substance) was placed in a round bottom flask equipped with a calcium chloride tube and magnetic stirrer. Then, the solution of 1 eq. 4-methyl phenyl isocyanate in dry toluene (30 mL per 1.5 g of isocyanate) and 2-3 drops of triethylamine were added. The reaction mixture was stirred at room temperature for 15 min, whereupon a precipitate formed. The resulting precipitate was filtered off. The synthesis of the compound IIc is described in the literature [9,10]. Compounds from the III-series were synthesized according to ref. [19]. Briefly, for the synthesis of the aryl carbamates (IIIa, b, c, d), 1 eq. of the 2-hydroxyethyl derivative in a small volume of dry acetonitrile (20 mL per 1 g of substance) was placed in a round bottom flask equipped with a calcium chloride tube and magnetic stirrer. Then, a solution of 1 eq. of the relevant phenyl isocyanate in dry acetonitrile (30 mL per 1 g of substance) and 2-3 drops of triethylamine were added. The reaction mixture was stirred at room temperature for 24 h. The solution was evaporated to dryness, and the residue was recrystallized from methanol and from isopropanol. The precipitate was filtered off and washed with a small amount of cold isopropanol. For the compounds IIIc and IIId, see ref. [19]. Briefly, for the synthesis of the aryl ureas (IIIe, f, g, h), 1 eq. of amine in dry toluene (50 mL per 4 g of substance) was placed in a three-necked flask with a thermometer, a dropping funnel, and a magnetic stirrer. The mixture was cooled in an ice bath to a temperature no higher than 5 °C. Then, a solution of 1 eq. of the relevant phenyl isocyanate in dry toluene (50 mL per 3.5-4 g of substance) was added dropwise with stirring, keeping the temperature no higher than 5 °C. The reaction mixture was stirred at room temperature for 24 h. The precipitate was filtered off and recrystallized from acetone. The compounds IIIe, g, h are described in the literature [19].
Cell Culture
All cell lines were maintained in a CO₂ incubator at 37 °C, 95% humidity and 5% CO₂. The composition of the culture medium for the cells was as follows:
Oxidative Stress Induction
For cell viability experiments, the cells were seeded at a density of 30,000 per well of a 96-well plate in 100 µL of the test medium (culture medium with 50 mM HEPES, pH 7.4, and without serum and pyruvate) and incubated for 12 h. After that, a substance solution with or without the toxic agent in 100 µL fresh test medium was added to the medium present in the wells and incubated for 24 h, after which cell viability was measured using the MTT assay. Cytotoxicity was induced by either 100 µM of H₂O₂ or 700 µM of CoCl₂ (from the freshly prepared stock in EtOH).
Cytotoxicity and Proliferation Stimulation
For analysis of cell death induction and ROS generation, the cells were plated in 96-well plates at a density of 1.5 × 10⁴ cells per well for the cytotoxicity assay and 8000 cells per well for the proliferation study, and grown overnight. The dilutions of the test compounds prepared in DMSO and dissolved in the culture medium (without serum starvation) were added to the cells in triplicate for each concentration (100 µL of the fresh medium with the substance to 100 µL of the old medium in the well) and incubated for 18 h in the case of cytotoxicity and 72 h in the case of the proliferation stimulation. The incubation time was chosen based on the most pronounced differences between the compounds tested. The final DMSO concentration was 0.5%. Negative control cells (100% viability) were treated with 0.5% DMSO. Positive control cells (100% cell death) were treated with 3.6 µL of 50% Triton X-100 in ethanol per 200 µL of the cell culture medium. Separate controls were without DMSO (no difference from the 0.5% DMSO control was found). Depending on the experiment series, the effects of the test substances on cell viability and ROS production were evaluated using the MTT assay and DCFH-DA, respectively.
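Since each well already holds 100 µL of medium and receives another 100 µL containing the compound, the added medium has to be prepared at twice the intended final concentration. The short Python sketch below works through that bookkeeping for the tested range; the 20 mM DMSO stock concentration is an assumed example, not a value taken from the paper.

# Sketch of the 1:1 addition scheme: 100 µL of compound-containing medium is added
# to 100 µL already present, so the addition must be at 2x the final concentration.
# The 20 mM DMSO stock is a hypothetical example value.
final_concentrations_uM = [1, 10, 30, 100]     # intended final test concentrations
dmso_stock_uM = 20_000.0                       # assumed 20 mM compound stock in DMSO
for final_uM in final_concentrations_uM:
    addition_uM = 2 * final_uM                 # concentration of the 100 µL addition
    stock_fraction = addition_uM / dmso_stock_uM
    print(f"final {final_uM:>3} µM -> addition at {addition_uM} µM "
          f"({stock_fraction * 100:.2f}% stock, i.e. {stock_fraction * 1000:.1f} µL per mL)")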
Cell Viability Assay
Cell viability was analyzed using the MTT test [36]. In short, the culture medium was removed from the wells and 75 µL of the 0.5 mg/mL solution of MTT with 1 g/L D-glucose in Earle's salts was added to each well and incubated for 90 min in the CO₂ incubator at 37 °C. After that, 75 µL of 0.04 M HCl in isopropanol was added to the MTT solution in each well and incubated on a plate shaker at 37 °C for 30 min. The optical density of the solution was determined using a Hidex Sense Beta Plus microplate reader (Hidex, Turku, Finland) at the wavelength of 570 nm with a reference wavelength of 620 nm.
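The viability percentages plotted in the figures are presumably scaled between the vehicle control (100% viability) and the Triton-treated control (100% cell death); the exact formula is not spelled out, so the Python sketch below shows one common normalisation using background-corrected optical densities (OD at 570 nm minus the 620 nm reference), with invented readings.

# Minimal sketch of MTT viability normalisation, assuming the common scheme:
# viability (%) = (OD_sample - OD_dead) / (OD_vehicle - OD_dead) * 100,
# where each OD is background-corrected (OD570 minus the 620 nm reference reading).
def corrected_od(od_570, od_620):
    # Background-corrected absorbance for one well.
    return od_570 - od_620

def viability_percent(sample, vehicle_control, dead_control):
    # Viability relative to vehicle (100%) and Triton-treated (0%) controls.
    return (sample - dead_control) / (vehicle_control - dead_control) * 100.0

vehicle = corrected_od(1.25, 0.08)    # 0.5% DMSO control, hypothetical readings
dead = corrected_od(0.12, 0.08)       # Triton X-100 control
treated = corrected_od(0.80, 0.08)    # well treated with a test compound
print(f"viability ≈ {viability_percent(treated, vehicle, dead):.0f}%")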
Apoptosis Assay
Apoptosis level was analyzed using an Apoptosis/Necrosis detection kit (ab176749, Abcam, Cambridge, UK). The cells were seeded at a density of 15,000 per well of a 96-well plate and grown for 12 h. After that, 475 µM of H₂O₂ alone or with the peptide was added in 100 µL of the fresh medium to 100 µL of the old medium in the wells and incubated for 1 h at 37 °C in a CO₂ incubator. After that, the medium was removed, and the cells were stained according to the manufacturer's instructions using the phosphatidylserine sensor (apoptotic cells, green fluorescence) and the membrane-impermeable dye 7-AAD (necrotic cells, red fluorescence). The stained cells were photographed with an inverted fluorescence microscope (Nikon Ti-S) using a Semrock GFP-3035D filter cube at 100× magnification. For each well, five non-intersecting view fields were captured, and apoptotic cells were counted.
Caspase Activity Assay
The determination of caspase activity was performed using the specific substrates with a fluorescent 7-amido-4-trifluoromethylcoumarin (AFC) label. Cells were seeded into a 96-well plate (7 × 10⁴ cells/well) and incubated overnight. Test compound solutions in the full culture medium were added to the cells without medium change and incubated in a CO₂ incubator for 4 h at 37 °C. A pan-caspase inhibitor Z-VAD-FMK (80 µM) was used as a negative control. Then, the medium was discarded and 120 µL of the caspase assay buffer (20 mM HEPES, 2 mM EDTA, 0.1% CHAPS, 5 mM dithiothreitol, protease inhibitor cocktail, pH 7.4) was added to the cells. Then, the cells were frozen at −50 °C. After thawing, 120 µL of the caspase substrates Ac-DEVD-AFC (32 µM), Ac-LEHD-AFC (32 µM), and SCP0139 (32 µM) were added to the cell lysates and incubated for 90 min at 37 °C. The released AFC was quantified using the Hidex Sense Beta Plus microplate reader (Hidex, Turku, Finland) at λex = 400 nm, λem = 505 nm.
ROS Assay
ROS generation was measured using the DCFH-DA dye. The cells were seeded at a density of 60,000 per well of a 96-well plate and grown for 12 h. After that, the cells were treated with the substances in the culture medium for 24 h. Cells treated with a medium without H₂O₂ and substances were used as a control. After that, the medium was replaced with a fresh one with 25 µM of the dye, and the cells were incubated in the CO₂ incubator at 37 °C for 1 h. After the incubation, the cells were washed twice with Earle's balanced salt solution, and the fluorescence was measured using the plate reader Hidex Sense Beta Plus (Hidex, Turku, Finland), λex = 490 nm, λem = 535 nm.
Protein structures were obtained from the PDB database (https://www.rcsb.org/, access date 1 May 2022) and optimized using the Chiron service (https://dokhlab.med.psu.edu/chiron/processManager.php, access date 1 May 2022) [38]. Molecular docking was performed using AutoDock Vina 1.1.2 (http://vina.scripps.edu/, access date 1 May 2022) [39]. To detect possible alternative binding sites and compare the affinities of the ligands for them, the procedure described in the literature [40] was used. As such, molecular docking was performed in two steps: first, we docked each molecule to the whole receptor as one large binding area to locate potential alternative binding sites, then the coordinates of the docking results were clustered and averaged to give the centers of the binding sites. The grid center coordinates are represented in Table 6. For large proteins, several grid centers were used to cover the whole protein. In all cases, the grid size was 126 × 126 × 126 Å, chosen to cover the whole protein, and exhaustiveness was set to 16. For each ligand, the docking was performed 10 times with different random seeds generating 10 conformations each time. The resulting coordinates were clustered using the OPTICS algorithm [41] from the package scikit-learn [42].
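As a rough sketch of the pose post-processing described here, the Python snippet below clusters a set of docked-pose centre coordinates with scikit-learn's OPTICS and averages each cluster to obtain candidate binding-site centres; the coordinates and the min_samples value are placeholders, and the original pipeline may have used different parameters.

# Sketch of clustering docked-pose centres to locate candidate binding sites,
# following the two-step procedure described in the text. Pose coordinates and
# OPTICS parameters below are illustrative placeholders.
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# Stand-in for the centres of docked conformations (x, y, z in Å): two artificial
# "binding sites" plus a handful of scattered poses.
poses = np.vstack([
    rng.normal(loc=(10.0, 12.0, 8.0), scale=1.0, size=(40, 3)),
    rng.normal(loc=(25.0, 5.0, 18.0), scale=1.0, size=(40, 3)),
    rng.uniform(low=0.0, high=30.0, size=(10, 3)),
])

clustering = OPTICS(min_samples=10).fit(poses)
labels = clustering.labels_                   # -1 marks unclustered poses
for label in sorted(set(labels) - {-1}):
    members = poses[labels == label]
    centre = members.mean(axis=0)             # averaged coordinates = grid-centre candidate
    print(f"site {label}: {len(members)} poses, centre ≈ {np.round(centre, 1)}")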
Statistics
All experiments were performed at least in triplicate. Statistical analysis was performed with the GraphPad Prism 9.0 software using ANOVA with the Holm-Sidak or Tukey post-tests; p ≤ 0.05 was considered a statistically significant difference.
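To make the statistical workflow concrete, here is a small Python sketch of a one-way ANOVA followed by pairwise comparisons corrected with the Holm-Sidak procedure, using SciPy and statsmodels in place of GraphPad Prism; the replicate values are invented, and the pairwise t-tests are only an approximation of Prism's built-in post-test.

# Sketch of "ANOVA with a Holm-Sidak post-test" using SciPy/statsmodels rather than
# GraphPad Prism. The replicate viability values below are invented examples.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

groups = {
    "control": [100.0, 98.0, 102.0, 99.0, 101.0],
    "compound_10uM": [92.0, 95.0, 90.0, 93.0, 94.0],
    "compound_100uM": [61.0, 65.0, 58.0, 63.0, 60.0],
}

f_stat, p_anova = f_oneway(*groups.values())          # omnibus one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))                 # all pairwise comparisons
raw_p = [ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4f} ({'significant' if sig else 'ns'})")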
Conclusions
In this paper we report the synthesis of some aryl carbamate, pyridyl urea, and aryl urea derivatives with alkyl and chlorine substitutions and tests of their cytotoxic and cytoprotective activity. Aryl carbamates with an oxamate moiety were anti-proliferative for the cancer cell lines tested, while the aryl ureas were inactive. In the cytoprotection studies, aryl carbamates were able to counteract the CoCl₂ cytotoxicity by 3-8%. The possible molecular targets of the aryl carbamates with oxamate moiety during the anti-proliferative action were the adenosine A2 receptor and CDK2.
The novelty of the research was the screening of the chemically synthesized cytokinin analogs, which have never been characterized for such biological activity before. Although most of the compounds displayed little activity in most tests, compounds of series II displayed an interesting, highly selective anti-proliferative capacity. This activity was observed, among others, for the glioblastoma cell line. Given the lack of efficient treatments for this cancer type, such activity could be used in combined or supporting therapy after additional research.
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/molecules27113616/s1. Figure S1. NMR spectroscopy data for the synthesized compounds. Table S1: Affinities of the clusters for the A2AR receptor variant 5mzj. Table S2: Affinities of the clusters for the A2AR receptor variant 2ydo. Table S3: Affinities of the clusters for the A2AR receptor variant 5mzp. Table S4: Affinities of the clusters for the APRT variant 6hgs. Table S5: Affinities of the clusters for the APRT variant 6hgr. Table S6: Affinities of the clusters for the APRT variant 6hgp. Table S7: Affinities of the clusters for the CDK2 variant 5fp5. Table S8: Affinities of the clusters for the CDK2 variant 2jgz.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to legal issues. | 2022-06-06T15:13:12.660Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "9e87470aed1f8ea4989f26ab4e1f3fd590fe61e5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/11/3616/pdf?version=1654338625",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a30b20a5bd3030688f82aef93fa8e4acb70dc17",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247100155 | pes2o/s2orc | v3-fos-license | A Review of Alpha-1 Antitrypsin Binding Partners for Immune Regulation and Potential Therapeutic Application
Alpha-1 antitrypsin (AAT) is the canonical serine protease inhibitor of neutrophil-derived proteases and can modulate innate immune mechanisms through its anti-inflammatory activities mediated by a broad spectrum of protein, cytokine, and cell surface interactions. AAT contains a reactive methionine residue that is critical for its protease-specific binding capacity, whereby AAT entraps the protease on cleavage of its reactive centre loop, neutralises its activity by key changes in its tertiary structure, and permits removal of the AAT-protease complex from the circulation. Recently, however, the immunomodulatory role of AAT has come increasingly to the fore with several prominent studies focused on lipid or protein-protein interactions that are predominantly mediated through electrostatic, glycan, or hydrophobic potential binding sites. The aim of this review was to investigate the spectrum of AAT molecular interactions, with newer studies supporting a potential therapeutic paradigm for AAT augmentation therapy in disorders in which a chronic immune response is strongly linked.
An Introduction to Alpha-1 Antitrypsin
Alpha-1 antitrypsin is a 52 kDa plasma glycoprotein characterised primarily by its function as an extracellular protease inhibitor of neutrophil elastase (NE). AAT is considered the chief serine protease inhibitor and is encoded by the SERPINA1 gene on chromosome 14q32.1-32 [1]. Serine protease inhibitors (SERPINs) share homologous gene regions and have common protein structures [2]. As their name implies, the predominant role of SERPINs is to inhibit their cognate protease, and indeed most SERPINs have evolved in parallel with their specific protease, e.g., antithrombin with thrombin, C1 inhibitor with C1 esterase, and antiplasmin with plasmin [3]. However, the primary function of other extracellular non-inhibitory SERPINs as carrier proteins in plasma is well established, particularly in the case of SERPINA6 (cortisol-binding globulin) and SERPINA7 (thyroxine-binding globulin) [3,4]. The potential role of non-protease AAT-protein interactions has been alluded to in studies of its three-dimensional structure, principally in relation to its corticosteroid-binding domain [5], but also through investigation of candidate binding partners, as summarised in Table 1. However, the status and relative importance of AAT as a carrier protein in the circulation or its non-protease binding at sites of inflammation are incompletely understood at present. Several reports have described electrostatic and hydrophobic protein and peptide interactions with AAT, which highlights the role of AAT beyond protease inhibition [6,7]. In this review, we provide an overview of AAT and the spectrum of its binding partners relevant to immune regulation (Table 1).
Table 1 (footnote). Previously published binding partners of AAT are categorised by biological compartment. The involvement of the respective protein and disease process is provided where applicable. Glossary: IgA = immunoglobulin A; BPH = benign prostatic hypertrophy; OA = osteoarthritis.
Control of Alpha-1 Antitrypsin Production
AAT is abundant in the plasma with a mean concentration of 1.3 g/L (range 0.9-1.75 g/L) and a plasma half-life of 4-5 days. AAT produced by hepatocytes contributes almost all of the circulating AAT, although it is also produced in smaller quantities by other cells such as monocytes, macrophages, pulmonary alveolar cells and intestinal epithelial cells [31-33]. In this regard, SERPINA1 transcriptional regulation occurs at exons IA, IB and IC in a tissue-specific manner, as IC regulates transcription in hepatocytes, while IA and IB mediate AAT release in monocytes and macrophages [34,35]. Moreover, SERPINA1 presents an inflammation-responsive promoter which favours AAT release during inflammatory conditions [36,37] and is furthermore reported to be epigenetically regulated by promoter methylation [38,39].
Following transcription, the tertiary AAT protein structure comprises three β-sheets, nine α-helices and a reactive centre loop (RCL) at the C-terminal region [40], a property that is well conserved among other members of the SERPIN superfamily [41] (Figure 1). The reactive methionine residue at position 358 (Met358) is located in the RCL, which extends out from the body of the protein and directs binding to the target protease.
Figure 1. Molecular model of glycosylated alpha-1 antitrypsin. Blue: peptide; yellow: glycans; red: reactive centre loop (peptide linkage) (residues M382-S383). Methods: molecular modelling was performed on a Silicon Graphics Fuel workstation using InsightII and Discover software (Accelrys Inc., San Diego, USA); figures were produced using the program PyMOL [43]. Protein structures used for modelling were obtained from the PDB database, and the structure of glycosylated AAT was based on the crystal structure of human alpha-1 antitrypsin as previously described [44]. The AAT molecule is post-translationally modified by N-glycosidically linked oligosaccharides at three asparagine residues at positions 70, 107 and 271.
AAT has a key role in innate immune defence and during the acute-phase protein response, the plasma concentration of AAT can rise to between two and four-fold under the influence of the pro-inflammatory cytokines interleukin (IL)-6 (IL-6) and IL-1β, and to some extent IL-8, transforming growth factor β (TGF-β) and IL-17 [45]. As a consequence of the acute-phase protein response, increased local and systemic AAT production results in much higher tissue concentrations of AAT, where its ability to bind proteases, proteins, peptides and cytokines, as well as interact with cell surface domains, may have important implications for the regulation of inflammation [18,44]. In turn, cell signalling mechanisms leading to downregulation of AAT production are underexplored, with one in vitro study indicating the ability of AAT itself to downregulate SERPINA1 mRNA expression in both hepatocytes and peripheral blood mononuclear cells [46].
Alpha-1 Antitrypsin Deficiency States
Much of our understanding of the importance of AAT as an antiprotease, and as an anti-inflammatory molecule, is reliant upon the involvement of patients deficient in AAT in studies exploring its biological effects. Alpha-1 antitrypsin deficiency (AATD) is the best-characterised heritable form of pulmonary emphysema and its discovery was a major breakthrough in our understanding of the role of protease imbalance in pulmonary emphysema [47]. Each SERPINA1 allele is transmitted by autosomal co-dominant Mendelian inheritance. SERPINA1 is also polymorphic, with over 200 mutations recognised to date that reduce plasma AAT levels by altering protein production and folding, or influencing the glycosylation status of AAT [48]. Several studies identified a variety of COPD-associated mutations/single-nucleotide polymorphisms located at untranslated and promoter regions and introns of the SERPINA1 gene [49,50], which comprise only a small fraction of its disease-associated variants identified so far [51,52]. Mutations are classified by their phenotypic expression and electrophoretic mobility during isoelectric focusing; PiM (medium), PiS (slow), and PiZ (very slow) (Figure 2) [53,54]. The most severe deficiency states are defined by AAT plasma levels less than 35% of the mean expected value (11 µM or 50 mg/dL measured by nephelometry). This is commonly a result of a point mutation causing an amino acid change from glutamic acid to lysine at position 342 (Glu342Lys), which is referred to as the Z allele. Additionally reported are the PiSZ, PiSS, and rare or null alleles [55].
The estimated carrier frequency of the Z allele is 1:25, with a disease incidence of 1:1575 to 1:2100 in some western European populations [56,57]. Homozygous ZZ individuals have a marked reduction in circulating plasma AAT levels to less than 10% of the normal protein concentration. Additionally, the Z-AAT protein is a less competent protease inhibitor than normal healthy type M-AAT and can take twice as long to inhibit a given concentration of NE [58]. In homozygous ZZ individuals, the net effect of reduced circulating AAT protein and diminished antiprotease activity culminates in an ineffective humoral protective shield and a marked protease/antiprotease imbalance, particularly affecting the lung, with the resultant pulmonary disease phenotype arising in these deficiency states (Figure 3). The elastolytic proteases that are released during neutrophil recruitment and activation are the predominant cause for the pathological pulmonary findings in AATD [59], and the resultant burden of disease is the major cause for morbidity and mortality in AATD. Interestingly, there is currently no evidence that AAT levels predict lung disease risk within the SZ cohort. SZ individuals who have never smoked are not at an increased risk of lung disease regardless of their AAT level, while those who currently smoke have a significantly increased risk of airflow obstruction [60].
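As a rough consistency check, assuming Hardy-Weinberg equilibrium (an assumption, not a calculation from the cited studies), a heterozygote carrier frequency of about 1 in 25 implies a Z-allele frequency near 2%, which predicts a PiZZ incidence of roughly 1 in 2400, the same order as the 1:1575 to 1:2100 figure quoted above. The short Python sketch below runs the arithmetic.

# Rough Hardy-Weinberg consistency check for the quoted Z-allele figures.
# Assumes random mating and that essentially all carriers are MZ heterozygotes.
import math

carrier_frequency = 1 / 25                     # quoted MZ carrier frequency
# Solve 2*q*(1 - q) = carrier_frequency for the Z-allele frequency q.
q = (2 - math.sqrt(4 - 8 * carrier_frequency)) / 4
zz_incidence = q ** 2                          # expected PiZZ frequency
print(f"Z allele frequency q ≈ {q:.4f}")
print(f"expected PiZZ incidence ≈ 1 in {1 / zz_incidence:,.0f}")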
Figure 3. Polymerised aggregates of Z-AAT protein are implicated in the pathogenesis of liver cirrhosis and chronic hepatitis. Accumulation of Z-AAT in hepatocytes leads to impaired secretion of the protein, with individuals homozygous for the Z mutation having 10-15% of normal circulating levels of AAT. Deficiency in AAT results in a high influx of neutrophils to the airways, where increased release of serine proteases and uninhibited NE activity can cause damage to lung parenchyma, ultimately leading to emphysema and COPD. In rare cases, AATD is associated with a severe skin condition known as panniculitis and with antineutrophil cytoplasmic antibody-associated vasculitis (granulomatosis with polyangiitis, formerly Wegener's granulomatosis). Panniculitis is characterised by intense neutrophil infiltrates in the subcutaneous tissues and resultant tissue destruction due to the low levels of antiprotease and high levels of protease.
Anti-Inflammatory Effects of Alpha-1 Antitrypsin beyond Protease Inhibition
Prior to describing the mechanisms by which AAT can bind inflammatory proteases and mediators, it is important to understand the pleiotropic functions of AAT that mediate a broad range of anti-inflammatory activities beyond protease inhibition [61,62] (Table 2). An example includes the ability of AAT to regulate neutrophil chemotaxis by binding IL-8 and preventing IL-8 interaction with its cognate receptor CXC chemokine receptor 1 (CXCR1) on the neutrophil membrane [7]. It was demonstrated that neutrophils migrate down a functional gradient of AAT in response to an increasing gradient of IL-8, and that glycosylation of AAT is critical for this immunoregulatory effect. Furthermore, AAT prevented immune complex-mediated neutrophil recruitment by modulating disintegrin and metalloprotease domain-17 (ADAM-17) enzymatic activity and shedding of Fc gamma receptor three B (FcγRIIIb) (CD16b) [7]. AAT can also mediate anti-inflammatory effects through the modulation of TNFα signalling. AAT has been shown to bind TNFR thereby preventing TNFα signalling in neutrophils, and to inhibit ADAM-17 activity causing upregulation of TNF receptor 1 (TNF-R1) and reduced TNFα secretion [18]. As a result, AAT promotes an initial augmented response to inflammation in the acute phase followed by selective inhibition later, thereby supporting resolution of chronic inflammation [61]. AAT can also alter neutrophil activity by inducing protein phosphatase 2A (PP2A) activation to prevent the inflammatory and proteolytic responses triggered by TNFα stimulation in the lung [63] and to inhibit TNFα production by monocytes via TLR4 following stimulation with pro-inflammatory cytokines [64].
Illustrative of the effect of the gain-of-function Z mutation, neutrophil apoptosis is accelerated in individuals with AATD by mechanisms involving endoplasmic reticulum (ER) stress and aberrant TNFα signalling [65]. This enhanced neutrophil apoptosis results in decreased neutrophil bactericidal activity, a process that can be ameliorated with AAT augmentation therapy. AAT has also been shown to reduce structural alveolar cell apoptosis independent of elastolytic activity by inhibition of vascular endothelial growth factor (VEGF) receptors with ensuing suppression of caspase-3 activation and oxidative stress [66]. Furthermore, the observation that AAT can inhibit the apoptotic factors, caspase-3 and caspase-1, has widened our perception of the role of AAT in the pathogenesis of emphysema [23,24].
Table 2 (fragment). Antibacterial: bacteriostasis through binding to furin (inhibits bacterial toxin activation) [71]. Antiviral: inhibition of HIV-1 viral cell entry [72]; inhibition of SARS-CoV-2 entry by inhibiting transmembrane serine protease 2 and ADAM-17 [73,74].
Mechanisms of Binding
A protein's function is determined by its interaction in the fluid phase (e.g., with components of blood), within the extracellular matrix, at the cell surface, and at target sites such as the pulmonary epithelium or alveolar lining fluid. Some of the recent observations on the anti-inflammatory effects of AAT are independent of its specific antiprotease binding activity. Indeed, proteomic binding studies are uncovering many potential AAT protein interactions, suggesting that AAT may have a myriad of functional roles beyond those elucidated to date [16]. Knowledge of the mechanisms through which AAT binds to proteins and peptides beyond protease inhibition permits us to explore the extent of its biological function. The affinity between the surface of a complex protein and potential binding partners within a biological system is often divided between specific and nonspecific interactions [75]. Many of the interactions involving AAT that have been described, particularly protease binding, are specific, uniform contacts that result in a molecular structural change that is often irreversible. Non-specific interactions occur across the surface of the molecule and generally do not result in a structural change of either molecule within the macromolecular complex. Instead, they are driven by superimposition of three or four intermolecular interactions (e.g., van der Waals, electrostatic, steric, and hydrophobic forces) and a multiplicity of structurally dependent weak interactions [76]. Indeed, it is increasingly apparent that AAT has an important role as a carrier protein as supported by its abundance, structural similarities to other lipophilic serine protease carrier proteins, its documented hydrophobic binding domain [5], and the relative specificity for binding hydrophobic proteins compared to other plasma glycoproteins [29]. This is illustrated by observations on AAT binding to LTB4 modulating interaction with cognate receptor BLT1, thereby mediating anti-inflammatory effects through downregulation of immune cell recruitment [6]. Non-specific electrostatic protein interactions observed between AAT and its binding partners are likely dependent on the attached carbohydrate residues or the hydrophilic/hydrophobic surface charge on the AAT protein.
Alpha-1 Antitrypsin-Specific RCL Protease Binding
Protease inhibition is central to AAT function and, as with all proteins, its structure and function as a serine protease inhibitor are inextricably linked [5]. The primary antiprotease binding activity of AAT has been well characterised, particularly in the case of NE, though it has a wide range of protease inhibitory activity and contributes up to 90% of the total serine protease inhibitory capacity of plasma. The reactivity of the Met358 residue is primarily responsible for the spectrum of protease binding. This amino acid residue has the highest affinity for the serine hydroxyl group on NE, to which it binds with an association constant of K = 6.5 × 10⁷ M⁻¹ s⁻¹, one of the highest binding constants found in nature, and inhibits it in an equimolar ratio (Figure 4). AAT has been shown to inhibit a wide range of other serine proteases including Cathepsin-G (Cath-G) [77], proteinase-3 (PR3) [78], and Factor XIa [79,80] (Table 3). The delay time of inhibition is an important factor regarding the functional effect of AAT in vivo; if it is too long, the protease may have sufficient time to reach and cleave its substrate, thus rendering the inhibitor inefficient. In addition, the target enzyme could inactivate the inhibitor by proteolytic attack at a site remote from the active site [77].
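To put that association constant in context, a back-of-the-envelope estimate under pseudo-first-order conditions (t ≈ 1/(k·[AAT]), a simplifying assumption rather than anything stated in the cited work) suggests that, at a normal plasma concentration of about 1.3 g/L (roughly 25 µM for a 52 kDa protein), free NE would be neutralised on a sub-millisecond timescale. The Python sketch below shows the arithmetic.

# Back-of-the-envelope estimate of the NE inhibition delay time in plasma,
# assuming pseudo-first-order kinetics: t_delay ≈ 1 / (k_assoc * [AAT]).
# A simplification for illustration only, not a calculation from the cited work.
k_assoc = 6.5e7                 # M^-1 s^-1, association constant quoted in the text
aat_plasma_g_per_L = 1.3        # mean plasma concentration quoted earlier
aat_molar_mass = 52_000         # g/mol (52 kDa)

aat_molar = aat_plasma_g_per_L / aat_molar_mass     # ≈ 2.5e-5 M (about 25 µM)
t_delay = 1 / (k_assoc * aat_molar)                 # seconds
print(f"[AAT] ≈ {aat_molar * 1e6:.0f} µM")
print(f"estimated delay time ≈ {t_delay * 1e3:.2f} ms")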
In an environment where multiple proteases are active, such as sites of inflammation in vivo, AAT will bind preferentially to NE over other proteases present, e.g., should PR-3 and NE be liberated at the same time and in equal concentrations, 89% of AAT would be bound to NE and 11% bound to PR-3 [81]. Cleavage of the active Met358 by the protease establishes a covalent linkage between the carboxyl group of the serpin reactive site and the serine hydroxyl of the protease. This event triggers a major structural rearrangement that involves loosening of the β-sheets and a kinetically irreversible conformational change by incorporation of the RCL into the β-sheet region of the AAT protein. The translocation of the attached protease by 71 angstrom (Å) from its initial position induces irreversible inactivation of the protease through distortion of the protease active binding site [82]. This mechanism is akin to the function of a mousetrap, with the methionine residue serving as the 'bait' that lures the protease to its fateful end [83]. Cleavage of the RCL at Met358 also exposes a new binding pentapeptide domain in the carboxyl terminal fragment of AAT. The inactivated AAT-protease complex is highly stable and can be removed from the circulation through engagement of the newly exposed binding site with the hepatocyte serpin enzyme complex (SEC) receptor [84]. This interaction on the hepatocyte cell surface signals for increased gene expression of SERPINA1 in a positive feedback loop [85,86].
A point mutation at position 358 can drastically alter the antiprotease function of the AAT molecule by reducing or changing the specificity of this bond for its target protease; this is best illustrated by the rare mutation of Met358 to arginine (AAT-Pittsburgh) resulting in greatly diminished antielastase activity and markedly increased antithrombin activity that results in a fatal bleeding disorder [87]. In addition, the reactive Met358 is a surface-exposed methionine residue that is readily oxidised by hydrogen peroxide in cigarette smoke and by oxidising agents released by leukocytes during inflammation [88]. In addition, Met351 and the thiol-reactive cysteine-232 (Cys232) residues of AAT are also susceptible to oxidative inactivation [89,90]. Oxidised AAT persists in a functionally inactive form in the circulation, whereby its protease binding capacity is markedly reduced, and fails to stimulate further upregulation of AAT production [91]. However, under certain conditions oxidative inactivation may be physiologically favourable, and necessary for host protease defence, to enable in vivo function of proteases such as NE within a local microenvironment. Interestingly, oxidised AAT has been shown to retain certain anti-inflammatory properties, despite losing its serum elastase inhibitory capacity, as demonstrated through the prevention of neutrophil recruitment to the lungs in a rat model of smoke-induced emphysema [92]. The observed anti-inflammatory mechanism relates to TNFα suppression that provided partial protection against the development of emphysema in this model. Nevertheless, oxidative inactivation of AAT is of major importance in the pathogenesis of emphysematous lung destruction in smokers, and it is firmly established that cigarette smoke exposure is the major determinant of an accelerated decline in lung function in AATD, causing early death in this population [93]. Additionally, oxidation of the mutant Z-AAT by cigarette smoke can induce Z-AAT polymerisation that may further thwart the residual humoral antiprotease shield [94], which is discussed next.
Alpha-1 Antitrypsin RCL Self-Binding Leading to Polymer Formation
A greater understanding of the pathogenesis of AATD was reached on the discovery that certain AAT mutants, best described in the case of the Z-AAT protein, manifest a gain of function, which causes protein polymerisation or aggregation. In PiZZ homozygotes, the Glu342Lys mutation results in disruption of an intramolecular salt bridge in strand 5 of the five-stranded β-sheet and uncoiling of the upper part of α-helix F [98].
This induces conformational instability of the protein, which involves an initial zero-order conversion of AAT to a polymerogenic monomer intermediate termed M* [99]. Subsequently, a slow concentration-dependent intermolecular association step results in polymerisation through the incorporation of the RCL from an adjacent molecule into the shutter region of the affected β-sheet [58]. Of interest, SERPIN polymerisation and protein aggregation are not unique to AATD; conformational instability of various proteins have been linked to several neurodegenerative processes, including Alzheimer's disease and Creutzfeldt-Jakob disease [100].
Factors that favour AAT polymerisation in vitro include increased temperature, higher Z-AAT concentration and acidosis, all of which can occur at sites of tissue inflammation in vivo. Consequently, the misfolded protein accumulates within the ER and can be visualised as a 'beads on a string' appearance on periodic acid Schiff stain of liver biopsy samples. There is a marked reduction in Z protein egress from the cell leading to ER stress, and thereafter hepatocyte autophagy is overwhelmed and cellular decompensation ensues [101,102]. Due to this mutant gain of function, individuals with severe AATD are at risk of hepatic failure. This is not limited to individuals with the PiZZ phenotype as conformationally unstable AAT variants, such as PiSZ, may also lead to clinically relevant liver disease due to the development of AAT heteropolymers [103]. Of importance, polymerised Z-AAT appear to have inflammatory properties that may contribute to an augmented systemic inflammatory response that influences the clinical phenotype of COPD in AATD [104]. Moreover, AAT polymers may also accumulate in ER of immune cells including monocytes [102] and neutrophils [65], and within bronchial epithelial cells [105]. The accumulation of misfolded Z-AAT in the ER of innate immune cells appears to play a key role in the exaggerated inflammatory response observed in AATD, whereby the accumulation of Z-AAT polymers within the ER of neutrophils leads to ER stress, increased neutrophil apoptosis and defective bacterial killing [65].
A few studies have investigated the potential of autophagy-enhancing drug candidates to treat AATD-mediated liver disease such as phenothiazines, including carbamazepine and fluphenazine, with the aim of degrading mutant Z-AAT that has been retained in the ER of hepatocytes. Both fluphenazine and carbamazepine has been shown to decrease the hepatic load of Z-AAT and hepatic fibrosis in a mouse model of AATD [106,107]. Access to results of clinical trials exploring the impact of therapies aimed at reducing hepatic accumulation of Z-AAT in patients with severe liver disease due to AATD are available online from the publicly available database, https://clinicaltrials.gov/ (accessed on 6 February 2022). Such studies include a phase II clinical trial to determine if carbamazepine therapy leads to a significant reduction in hepatic accumulation of Z-AAT. This latter study was terminated as the number of participants with available pre-and post-treatment biopsies was insufficient to analyse primary and secondary outcomes (NCT01379469) [108]. Clinical trials with small-molecule correctors aimed at correcting misfolding of mutant Z-AAT have been disappointing to date, with phase II clinical trials of VX-814 (NCT04167345) discontinued based on safety and pharmacokinetics data. Furthermore, the results of phase II trials of VX-864 (NCT04474197) resulted in exclusion of the advancement of this molecule into late-stage development. Despite prior failures, investigations are still ongoing into smallmolecule correctors and recruitment is currently underway in the UK for a double-blind, randomised, placebo-controlled study assessing the safety and tolerability of the novel compound ZF874, which hopes to act as a molecular 'patch' for Z-AAT, allowing the protein to fold correctly and potentially to relieve the hepatocyte burden of polymer accumulation (NCT04443192).
RNA interference (RNAi), known also as post-transcriptional gene silencing, is a natural biological process, whereby short oligonucleotide molecules termed RNAi trigger the silencing of gene expression and thus regulate the expression of protein-coding genes. The objective of potentially employing RNAi therapeutics in AATD therapy would be to cease the production of Z-AAT protein by the liver. This could prevent further accumulation of Z-AAT polymers, halt the progression of liver disease, and enable the gradual clearance of the pre-existing polymers [109]. Clinical trials assessing RNAi candidates, ARC-AAT (NCT02363946), ALN-AAT (NCT02503683), ALN-AAT02 (NCT03767829) as a potential therapeutic for AAT-mediated liver disease were terminated in 2016, 2018 and 2020 based on toxicity concerns in non-human primate studies, low incidence of asymptomatic, transiently elevated liver enzymes in a subset of study subjects and sponsor decision, respectively. This spurred the development of another investigational RNAi therapeutic termed ARO-AAT, which was modified to target hepatocytes through conjugation of N-acetylgalactosamine via a linker and therefore did not employ the delivery vehicle (EX1) used in the earlier clinical trial (NCT02363946). Interim results from the AROAAT2002 study (NCT03946449) demonstrated that ARO-AAT was not only well tolerated but also capable of inhibiting Z-AAT expression, reducing intrahepatic Z-AAT accumulation to allow the clearance of Z-AAT polymers and improving liver fibrosis. Phase II trials of AROAAT2001 (SEQUOIA) is underway to evaluate the safety, efficacy and tolerability of multiple doses of the investigational product, ARO-AAT, administered subcutaneously to participants with AATD (NCT03945292) [110]. As of December 2021, the ESTRELLA trial is in the recruitment phase to investigate an alternative RNAi drug named Belcesiran or DCR-A1AT in patients with AATD-associated liver disease (NCT04764448). Thus, there is an exciting number of drugs with potential clinical applications being researched to bridge the gap in therapeutics for the cohort of patients with AATD-mediated liver disease, who currently lack treatment options beyond liver transplantation.
Alpha-1 Antitrypsin Electrostatic Interactions and Post-Translational Glycosylation Effects
The total accessible surface area of AAT (2.34 × 10⁴ Å²) is largely hydrophilic in nature, surrounding a hydrophobic core, and all hydrogen bonds are fulfilled on the surface mainly by interactions with main-chain atoms [5]. The surface of AAT has a dipolar characteristic, with the positive pole at the S-359 end and the negative pole at the M-358 end. The isoelectric point (pI) of AAT is 5.37 and therefore it carries a negative charge at physiologic pH. In AATD, the Glu342Lys mutation results in a slight cathodal shift of the isoelectric point by 0.1, resulting in a more positively charged Z-AAT protein [44,111]. The influence of divalent cations, such as Mg²⁺, Ca²⁺, Cu²⁺, Zn²⁺, and Fe²⁺, is important in modulating AAT protein binding [112], and their levels can change during the acute-phase response, which may potentially alter the bound protein profile of AAT in inflammatory states [45]. Moreover, AAT undergoes a process of co-translational N-glycosylation, resulting in the addition of three oligosaccharide residues contributing 12.5% to the resultant molecular mass of the protein, which may exert electrostatic interactions with potential binding partners.
Comprehensive glycoproteomic analysis of AAT identified glycosylation residues at positions Asn70, Asn107 and Asn271 [113] (Figure 1). N-glycosylation takes place initially within the ER, with final glycan branching occurring in the Golgi apparatus. The transfer of oligosaccharides to the selected asparagine residues is catalysed by the enzyme oligosaccharyltransferase, which is present on the luminal surface of the ER membrane [114,115]. This is the central step in N-glycosylation. Subsequently, an outer α-1,2-linked glucose residue is trimmed by the enzyme α-glucosidase, followed by the removal of an α-1,3-linked glucose residue by α-glucosidase II, which enables the glycoprotein to interact with soluble and membrane bound lectin chaperones that aid protein folding [116]. Before exiting the ER, a further α-1,3-linked glucose is removed and mannose residues are trimmed by mannosidase I. Within the Golgi, N-acetyl-glucosaminyl (GlcNAc) transferase I substitutes GlcNAc residues onto the α-1,3-arm of the high-mannose-type sugar chain, Man5GlcNAc2 [117]. If further glycan branching is possible, this is mediated by GlcNAc transferase II, Glc-NAc transferase IV and GlcNAc transferase V forming bi-antennary, tri-antennary and tetra-antennary structures, respectively. Further branch extension by the GlcNac family of enzymes can be inhibited by GlcNAc transferase III. Chain prolongation is often terminated by the addition of a sialic acid residue to a terminal galactose, and this reaction is catalysed by beta-galactoside alpha-2,6-sialyltransferase 1 (ST6GAL1) [118].
Glycosylation of AAT is crucial for its function: it prolongs the plasma half-life of the protein, confers resistance to proteolytic degradation, modulates intermolecular interactions, and prevents protein aggregation. To illustrate the clinical relevance of this effect, recombinant non-glycosylated AAT protein produced by bacteria demonstrates a markedly decreased half-life [119] and is therefore therapeutically ineffective compared with plasma-derived AAT for the purpose of intravenous augmentation therapy [120]. The predominant mechanism for AAT elimination from the body, which is distinct from SEC receptor-mediated protease complex removal, is through the asialoglycoprotein receptor [121]. This receptor is expressed on hepatocytes and, on recognition of terminal galactose residues, expediently removes the protein from the circulation. The addition of sialic acid to terminal glycans shields these residues from receptor binding and thereby prolongs the half-life of AAT [122].
M-AAT glycan expression is modified during the course of community-acquired pneumonia (CAP). A glycoform shift arises during the resolving phase of the infection, when circulating levels of sialylated, negatively charged AAT glycoforms increase. This increase in negative AAT glycoforms (termed M0 and M1 AAT) coincides with a decline in the white cell count and C-reactive protein levels between days four and six of the infection, and these glycoforms are subsequently cleared by day eight, in keeping with clinical recovery. During the resolving phase of CAP, sialylated AAT has a significant binding capacity for positively charged chemokines, resulting in inhibition of IL-8-mediated neutrophil chemotaxis, further confirmation that AAT glycosylation patterns affect protein-protein interactions and modulate immune cell function [17]. More recently, however, a similar glycoform shift has been shown to occur in coronavirus disease 2019 (COVID-19) infection but appears to be associated with worse clinical outcomes [123]. Here, the presence of highly sialylated M0 and M1 glycoforms does not correlate with AAT serum levels or the intensity of the inflammatory response. With respect to AATD, increased core and outer-arm fucosylation of the Z-AAT protein, including sialyl Lewis-X determinants, has been characterised [111]. This finding may have implications for the role of Z-AAT as an immunomodulatory protein and its effect upon leukocyte-mediated inflammation in AATD, irrespective of its reduced antiprotease activity. Moreover, a family of at least 32 SERPINA1 mutations termed null or Q0 has been described [124], which result in the introduction of a premature termination codon in the mRNA coding region [125]. Q0bolton is one of these rare mutations and results in the production of a truncated 49 kDa Q0bolton-AAT protein, which, despite its altered structure, maintains some antiprotease activity. Seven glycoforms of Q0bolton-AAT have been identified, demonstrating an altered glycosylation pattern compared with native M-AAT, with an anodal shift and increased total fucosylation [125]. Q0bolton-AAT possesses increased levels of tri- and tetra-antennary glycans, but lower levels of bi-antennary branching, compared with M-AAT [126]. This trend toward increased core and outer-arm fucosylation differentiates Q0bolton-AAT from M-AAT and is consistent with persistent inflammation [127], although these differences do not appear to impact the binding capacity of Q0bolton-AAT for IL-8.
In summary, glycan residues, and their resultant electrostatic charge, can modulate intermolecular interactions of AAT through binding to the amino acid backbone of proteins (carbohydrate-amino acid interactions). The glycosylation of AAT also protects the protein from proteolysis, prevents aggregation, renders the protein less polymerogenic, prolongs its plasma half-life, and, importantly, supports anti-inflammatory properties without interfering with AAT antiproteinase activity.
Hydrophobic Binding of Alpha-1 Antitrypsin with the Lipoprotein System
Approximately 13% (3.2 × 10³ Å²) of the accessible surface area of AAT is hydrophobic, with five hydrophobic pockets identified to date. The central hydrophobic core of AAT is filled during relocation of the RCL after protease cleavage or during polymer formation. This site has become a target for drug delivery to prevent loop-sheet polymerisation without abolishing the function of AAT [128]. This location is also a potential binding site for other small hydrophobic molecules, such as the potent neutrophil chemoattractant LTB4 [6]. Apolipoprotein B-100 (ApoB100), a major protein component of low-density lipoprotein (LDL) and very-low-density lipoprotein (VLDL), has previously been identified as a binding partner of AAT [8]. Lipoproteomic analysis of the process of VLDL to LDL conversion has demonstrated that AAT is acquired from plasma or other lipoprotein classes [129]. This may be of particular relevance during the acute inflammatory response, which is characterised by changes in apolipoprotein synthesis and AAT production [130]. From a pathophysiological perspective, the impact of AAT oxidation is apparent in the formation of AAT-LDL complexes in atherosclerotic plaques, which implicates oxidised AAT in atherogenesis. Conversely, incorporation of AAT into high-density lipoprotein (HDL) may confer beneficial antielastase properties that protect against atherogenesis [131]. Moreover, it has been reported that enrichment of HDL with AAT afforded better protection against elastase-induced pulmonary emphysema in a murine model than AAT augmentation therapy alone [132].
Alpha-1 Antitrypsin Cysteine Binding Potential
AAT has a single cysteinyl residue (Cys-232) situated within a protective crevice formed by three nearby lysine residues; this unique structural environment provides the thiolate stabilisation required for a high degree of reactivity across a broad pH range [90]. It has previously been reported that AAT has a strong affinity for monomeric light-chain thiolate ions, whereby complexes between AAT and immunoglobulin-κ light chains form in vivo without affecting protease inhibitory capacity, which may constitute a mechanism for the linkage and transport of peptides with reactive thiols or disulphides released into plasma and extracellular fluids [11]. Cys-232 is reactive under physiological conditions with proteins and small molecules such as cysteine, glutathione, myeloma immunoglobulin light chains, immunoglobulin A and nitric oxide (NO) [9][10][11]. It has been demonstrated that AAT forms a disulphide bond with the penultimate C-terminal cysteine on the alpha chain of IgA [133], while also retaining its antiprotease activity [134]. AAT can also undergo S-nitrosation through the interaction of Cys-232 with NO formed at sites of tissue ischaemia or by the action of endothelial or inducible NO synthases at sites of inflammation [30]. The resultant S-NO-AAT molecule is bacteriostatic, induces vasorelaxation, and inhibits platelet aggregation and neutrophil adhesion to endothelial surfaces [30]. AAT may thereby act as an NO reservoir and mediate cytoprotective effects through the attenuation of ischaemia-reperfusion injury by maintaining tissue blood flow [135,136]. This property has led to clinical trials in humans utilising AAT augmentation therapy in ST-segment elevation myocardial infarction [137].
The Heparin Binding Motif of Alpha-1 Antitrypsin
The effect of heparin binding to serine proteinase inhibitors is illustrated by the potentiation of antithrombin III activity, a property that is exploited in clinical practice with the use of unfractionated heparin and low-molecular-weight heparin for the purpose of anticoagulation [138]. Binding of heparin is mediated by ionic interactions between its sulphate and carboxylate groups and the positively charged side chains of target proteins. AAT does contain a heparin binding motif, and its function in the presence of heparin may be to enhance the binding affinity of the reactive-centre Met-358 compared with the native form of the AAT protein. However, it has previously been demonstrated that the binding affinity of AAT for NE is reduced in the presence of heparin owing to the formation of heparin-elastase complexes; in that study, heparin was not found to bind AAT [139]. Further work is warranted in this area before any definitive conclusion regarding the significance of heparin binding to AAT and its therapeutic potential can be made.
Alpha-1 Antitrypsin Protease Binding and the Coagulation System
The broad spectrum of AAT protease binding raises the possibility that these interactions play a role in the homeostasis of other systems that involve serine protease cleavage, such as the coagulation pathway. Furthermore, this balance may be perturbed in AAT deficiency states. The coagulation cascade is a tightly regulated process and many of the activated coagulation factors are serine proteases. It has been shown previously that AAT accounts for the majority of the plasma inhibition of Factor XIa. However, this has been challenged by the demonstrated roles of other serine protease inhibitors, such as C1 inhibitor and α2-antiplasmin, through the measurement of Factor XIa-protease inhibitor complexes in blood. Nevertheless, the fact that AAT can bind to fibrinogen in blood [9,10] and is found in significant proportions within formed clot samples [140] indicates that AAT plays a role not only in the inhibition of clot propagation through control of the coagulation cascade, but also in fibrinolysis and the homeostasis of thrombus formation [95]. A reciprocal coupling of coagulation and innate immunity via neutrophil serine proteases has been shown previously [141]. Thrombosis is an important facet of the innate immune response, serving as an additional mechanism to prevent the propagation of microbial invasion [142]. Disseminated intravascular coagulation is a fulminant disease process with a high mortality, and previous studies have alluded to the important role that AAT may exert in the humoral response to this devastating complication. It has been postulated that localised fibrin formation may contribute to the pathogenesis of pulmonary emphysema, with increased platelet aggregation potentiating thrombosis and creating a favourable microenvironment for neutrophil attachment [143]. Of particular interest is the role that AAT may play in the protection of fibrinogen from dysregulated proteolysis, particularly by neutrophil-derived proteases [144,145]. A recent study evaluating a specific NE cleavage point in fibrinogen (Aα-Val360) demonstrated that increased fibrinogen cleavage correlated with disease severity in AATD and may be a useful surrogate marker of disease activity in patients with early disease in whom therapeutic intervention may be indicated [146].
Alpha-1 Antitrypsin Protein Complexes and Tissue Inflammation
AAT also has a role in tissue repair and healing through an association with fibronectin. Proteolytically active NE is present in chronic wounds, and it has been shown that AAT protects fibronectin from enzymatic degradation in wound tissues and is necessary for wound healing [147,148]. There has been significant interest in the role of AAT in the inflammation associated with rheumatoid arthritis, particularly in relation to the formation of IgA-AAT complexes in plasma [149] and in the inflammatory milieu of the synovial fluid of individuals with inflammatory arthritis [150]. This association may relate to the glycosylation status of AAT in plasma during the acute-phase response in inflammatory arthritis [151]. However, there is insufficient evidence to date to indicate that AAT levels correlate with disease activity [152], or indeed that AATD is an independent risk factor for inflammatory arthritis, though it may be associated with an increased prevalence of auto-antibody production [153].
Alpha-1 Antitrypsin Binding Partners and the Complement System
Products of complement activation, C3a and C5a, are important neutrophil chemoattractants and complement activation products have been found to be elevated in emphysema [154]. Given the pro-inflammatory properties of C3 activation by-products and the role of AAT in counterbalancing neutrophil-driven inflammation, it is not unexpected that these two abundant plasma proteins may interact. It has been shown that C3b interacts with a range of plasma proteins including AAT, vitamin D binding protein, and α1-acid glycoprotein, by forming high-molecular-weight aggregates through covalent interactions in complement activated serum and plasma [155]. The function of these aggregates is not fully understood at this time and whether they occur in blood in the presence of erythrocytes is not known. Dysregulated complement activation has been described in individuals with AATD, resulting in a diminished capacity to inhibit processing of complement C3 to C3d [16]. Elevated levels of complement fragment C3d have been described in the circulation and airways of patients with AATD, correlating with both the severity of airway obstruction and radiographic pulmonary emphysema. Experiments have shown AAT to bind directly to C3 both in vivo and in vitro, and this is further optimised by AAT glycosylation [16]. Treatment of patients with AATD involves AAT augmentation therapy, which can aid in modulating this uncontrolled complement cascade by disrupting C3 activation and significantly reducing C3d plasma levels when compared with those not on therapy [16]. Moreover, C3d binding to CR3 neutrophil receptors triggered granule release, increased cytokine secretion, and reduced endothelial cell migration and wound healing, with potential implications for AATD-related vasculitis [156].
Alpha-1 Antitrypsin Protease Binding and COVID-19
COVID-19 is a novel emerging infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first identified in December 2019 in Wuhan, China, and classified as a pandemic by the WHO in March 2020. The most serious manifestation of COVID-19 is acute respiratory distress syndrome (ARDS), especially in older age groups and those with cardiopulmonary disease [157]. Individual variations in susceptibility to and severity of SARS-CoV-2 infection are likely to be explained by both genetic and non-genetic factors. AATD is just one example of a heritable condition that may render populations more susceptible to COVID-19. A recent study has shown a significant positive correlation between the combined frequencies of AAT deficiency alleles in 67 countries and their reported COVID-19 mortality rates [158]. The geographical overlap between rates of AATD and severe cases of COVID-19 in Italy has been examined in detail. Genotyping of AAT performed in 3751 Italians from different regions showed a higher prevalence of AATD in northern Italy, the same region that was most affected by SARS-CoV-2 in 2020 [159,160], with 85% of total fatal cases countrywide registered in northern Italy as of 18 April 2020 [161]. These observations suggest that AATD may contribute to regional differences in COVID-19 infection rates, clinical severity and mortality rates, but caution is essential when interpreting these correlations as there are many potential confounding factors. Additional research will be required as the pandemic progresses to further examine this hypothesis and determine the real risk of COVID-19 infection in AATD patients. Nonetheless, the geographical overlap between rates of AATD and severity of COVID-19 suggests that protease-antiprotease imbalance could play a critical role in the pathogenicity and virulence of SARS-CoV-2, or in the host response to COVID-19 infection, and that AAT could be a host protective factor against COVID-19. Beyond genetic deficiency of AAT, studies have also suggested that patients may mount an insufficient AAT acute-phase response during severe COVID-19 illness, which may ultimately increase disease severity and risk of mortality [162].
Protease-antiprotease imbalances can arise during the clinical course of COVID-19 infection, and recent studies have demonstrated that AAT can bind and inhibit key proteases involved in the pathophysiology of COVID-19, including TMPRSS2 [73,74] and ADAM17 [160] (Table 3). TMPRSS2 is a critical protease that primes the SARS-CoV-2 spike protein and the host ACE2 receptor prior to viral entry into the host cell. ADAM17 mediates shedding of ACE2, IL-6 and TNFα, and suppression of ADAM17 may therefore modulate the cytokine storm that has been identified in patients with severe COVID-19. However, evidence to date demonstrates that AAT fails to completely block SARS-CoV-2 entry, possibly due to unprocessed ACE2-mediated cell entry in the absence of TMPRSS2 or the expression of other proteases that may cleave the S protein [74]. Interestingly, NE, the prime target of AAT, has been proposed to act as an alternative spike-priming protease [163]. AAT has been shown to suppress SARS-CoV-2 viral replication in cell lines and primary cells, including human airway epithelial cultures [73,74]. Taken together, these findings suggest AAT may play a critical role in the innate immune defence against SARS-CoV-2 infection and highlight AAT as a potential drug candidate in the treatment of COVID-19 [162]. Moreover, increased sialylation of AAT in COVID-19 has been documented in the literature. AAT immunophenotyping performed on 25 COVID-19 patients in the ICU demonstrated an AAT glycoform shift that appears to be associated with worse clinical outcomes [123]. Highly sialylated M0 and M1 AAT glycoforms were identified in all those who died and in 59% of patients who survived the illness. The synthesis of more negatively charged glycoforms correlated with a higher NE inhibitory capacity ratio, but not with AAT serum levels or the intensity of the inflammatory response. The study postulates that this qualitative shift in AAT glycoforms is an attempt to trigger antielastase activity and boost the anti-inflammatory response [123], as has been reported in patients with community-acquired pneumonia [17], but this attempt appears futile, as the modification correlates with negative outcomes in COVID-19.
AAT Augmentation Therapy
AAT Replacement Therapy in Acute and Chronic Disease
Efforts to restore normal circulating plasma levels of AAT in AATD individuals culminated in the development of AAT augmentation therapy in the 1980s from pooled donor plasma [164]. Initial studies demonstrated safe and effective delivery of the purified AAT protein to maintain levels above a putative protective threshold of 0.5 g/L (11 µmol/L), and augmentation therapy was approved in the United States by the Food and Drug Administration (FDA) based on biochemical efficacy [165]. The first randomised controlled trial of intravenous plasma-purified AAT was performed recently and demonstrated slowing of emphysema progression as measured by computed tomography-determined lung density [166]. Weekly treatment doses of AAT higher than the FDA-approved standard dose (60 mg/kg/week) are not currently recommended [167]. Results of a recent pilot study, however, have demonstrated that double-dose AAT therapy (120 mg/kg/week) is not only well tolerated but may provide additional clinical benefits. Double dosing was found to be effective at further reducing the level of serine proteases in both the airway and the circulation, reducing elastin degradation, and diminishing airway inflammation when compared with standard-dose therapy [168,169]. The RAPID Programme also demonstrated that biweekly dosing with 120 mg/kg of AAT is a safe, well-tolerated and convenient alternative to the dosing regimen currently recommended by the FDA [170]. The SPARTA trial is currently ongoing in a number of European countries to further explore the efficacy and safety of AAT in subjects with pulmonary emphysema due to AATD (NCT01983241). Moreover, the use of nebulised AAT overcomes some of the shortcomings of intravenous therapy and permits delivery to the local site of inflammation [171], and the ability of recombinant AAT to neutralise NE is preserved using this approach [172]. However, recombinant AAT has not been shown to modulate markers of inflammation; such an effect has only been observed using plasma-purified glycosylated AAT to date [173].
The alternative biological effects of AAT, specifically its potential anti-inflammatory and antiapoptotic properties, have led to the speculative use of AAT augmentation therapy in a range of conditions. In this regard, the beneficial effect of AAT was observed in ischaemia-reperfusion injury after myocardial infarction [25]. Subsequently, the first clinical trial outside of AATD was conducted using single-dose augmentation therapy in acute ST elevation myocardial infarction [137]. In this study, augmentation therapy was found to be safe and well tolerated with some blunting of the acute inflammatory response.
A growing body of evidence from preclinical studies has demonstrated that AAT may have therapeutic potential in autoimmune diseases. AAT activity is altered in both developing and established type I diabetes mellitus, as well as in established type II diabetes [174]. Promising results from murine models of pancreatic allograft transplantation [175,176] have culminated in clinical trials in new-onset type I diabetes (NCT02093221 and NCT01183468) [137,177,178]. AAT supplementation was found to be well tolerated and safe; however, its clinical benefit in type I diabetes remains inconclusive. A higher dose of AAT (>90 mg/kg/week) may be needed for optimal therapeutic effect [179,180]. Systemic lupus erythematosus (SLE) is an autoimmune disorder in which reactive dendritic cells appear to play a critical role in disease development and pathogenesis. A mouse model of lupus has demonstrated that AAT can inhibit the activation and functioning of dendritic cells and can attenuate autoimmunity and renal damage [181]. A more recent study showed that treatment with AAT can prevent lupus development and extend the lifespan of lupus-prone mice [182].
Current evidence suggests that loss of AAT in salivary gland cells, with a consequent increase in elastase expression, could contribute to the initiation of primary Sjogren's syndrome [183], although the efficacy of AAT replacement therapy in this condition has not been assessed. A recent clinical trial found AAT infusions to be well tolerated and demonstrated potential efficacy in the treatment of steroid-refractory severe acute graft-versus-host disease [184,185], but additional studies are warranted and further clinical trials remain ongoing (NCT03805789, NCT04167514).
Reports and clinical trials have indicated that AAT may have a significant role in COVID-19 infection (NCT04799873; NCT04495101) [161,186]. Several clinical trials are currently evaluating the therapeutic potential of AAT in hospitalised patients with COVID-19 worldwide, including in the USA, Brazil and Chile (NCT04547140), Saudi Arabia (NCT04385836) and Ireland (EudraCT 2020-001391-15) [187]. The latter trial explored the effect of administration of IV plasma-purified AAT on circulating plasma levels of IL-6 in COVID-19 patients who required invasive and non-invasive respiratory support [187]. In addition, the first successful administration of IV AAT for severe COVID-19 complicated by ARDS was reported in a patient with cystic fibrosis [188]. Systemic and airway inflammatory markers, particularly IL-6, IL-1β, IL-8 and NE, were elevated in the patient's samples prior to AAT administration. A clinical improvement was observed two days after AAT administration, accompanied by a decrease in inflammation. Promising results from a clinical trial in Germany (NCT04799873) investigating the effect of both inhaled and combined inhaled/IV AAT administration on the clinical course of nine patients with mild to moderate COVID-19 have been published, with all patients treated with AAT surviving and showing an eventual improvement in respiratory function before hospital discharge [189].
Conclusions
Alpha-1 antitrypsin (AAT) is the canonical serine protease inhibitor that has been the subject of extensive study. Deficiency of AAT is associated with a heritable form of pulmonary emphysema that is characterised by a markedly reduced humoral protease inhibitory shield, in particular against the effects of neutrophil-derived proteases. AAT can inhibit a broad array of other proteases to varying degrees, which may mediate important biological effects on account of its abundance in plasma. Increasingly, it is recognised that AAT has diverse interactions beyond protease inhibition that have been shown to facilitate beneficial anti-inflammatory and antiapoptotic responses. Uncovering the protease and novel non-protease binding properties of AAT has led to a deeper understanding of the function of this protein in health and disease. Knowledge of the full interaction profile of AAT as it circulates in health and in deficiency states may lead to a deeper understanding of its effects, uncover novel mechanisms of action, and ultimately lead to innovative therapeutic applications of augmentation therapy in a variety of disease states.
"year": 2022,
"sha1": "2af2bd020bac6b26d6e05016c9f98738ecd123db",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms23052441",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae55fba83244fcb070bab570e51ec0b1a3f49e27",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of modafinil and caffeine on night-time vigilance of air force crewmembers: A randomized controlled trial
Background: Fatigue remains an important factor in major aviation accidents. Stimulants may counteract fatigue’s adverse effects, with modafinil as a promising alternative to caffeine. However, the effect of a single dose of modafinil after a limited period of sleep deprivation remains unknown. Aims: This study aims to determine the effect of 200 mg modafinil on vigilance during a limited period of sleep deprivation compared to 300 mg caffeine and placebo. Methods: Thirty-two volunteers of the Royal Netherlands Air Force (RNLAF) were double-blindly administered modafinil, caffeine, and placebo on three non-consecutive trial days after being awake for a median of 17 h. Afterwards, subjects completed six series of the Vigilance and Tracking test (VigTrack), psychomotor vigilance task (PVT), and Stanford Sleepiness Scale (SSS), yielding six primary endpoints. Results: This study revealed statistically significant effects of caffeine and modafinil compared with placebo on all endpoints, except for VigTrack mean tracking error. PVT results were less impaired 2 h after administration, followed by VigTrack parameters and SSS scores 2 h thereafter. Compared with caffeine, modafinil significantly improved PVT and SSS scores at 8 h after administration. Conclusions: The present study demonstrates that 200 mg modafinil and 300 mg caffeine significantly decrease the effects of a limited period of sleep deprivation on vigilance compared with placebo. Although PVT parameters already improved 2 h after administration, the most notable effects occurred 2–4 h later. Modafinil seems to be effective for longer than caffeine, which is consistent with its longer half-life.
Introduction
In 2010, for the first time in an air crash investigation, a recording of snoring was identified on a cockpit voice recorder (Court of Inquiry India, 2010). This cockpit voice recorder belonged to Air India Express Flight 812, which crashed, killing 158 of the 166 persons onboard. The recording indicated that the captain had been asleep for more than 90 min of the 2 h flight. Residual sleepiness and impaired judgment were identified as contributing factors in this accident. The captain's fatigue was suggested to be due to flying during the Window of Circadian Low (WOCL), the period of the circadian cycle when fatigue and sleepiness are greatest and people are least able to perform mental or physical work (Valdez, 2019). This is not an isolated instance of an aviation accident being attributed to fatigue. In the last two decades, fatigue has been identified as the probable cause of 21-24% of major aviation accidents, both in civil and military aviation (Caldwell, 2012;Gaines et al., 2020;Marcus and Rosekind, 2017). As stated in the International Civil Aviation Organization's (ICAO) definition of fatigue, fatigue can impair one's performance: "A physiological state of reduced mental or physical performance capability resulting from sleep loss, extended wakefulness, circadian phase, and/or workload (mental and/or physical activity) that can impair a person's alertness and ability to perform safety related operational duties" (ICAO, 2020). This definition identifies several possible causes of fatigue, with sleep loss probably being the most notable. The optimal method of avoiding fatigue is to have sufficient (night-time) sleep. However, this is often difficult to achieve, especially during military deployments, because sleep in the field is often of poorer quality and shorter duration than sleep at home (Kelley et al., 2018). Moreover, performing military operations at night may be tactically necessary, thereby disrupting the normal sleep pattern. This, combined with possible interfering transient factors like noise or heat, may lead to irregular sleep during deployment, which may cause fatigue. Additionally, the deployment itself, with the mission and potential threats, may induce stress, which may also contribute to fatigue. This is particularly problematic at
the end of flight missions because the landing phase is a risk factor for the occurrence of aviation accidents (European Union Aviation Safety Agency [EASA], 2020). Also, when performing night-time operations, pilots might be forced to fly during circadian phases dedicated to sleep, such as the WOCL, when levels of attention are at their lowest, further increasing the chance of incidents.
Regulations limiting flight times and suggesting optimal rosters have been implemented by aviation authorities (EASA, 2014;Federal Aviation Administration, 2012). Although these cannot completely prevent fatigue, they provide a framework to manage fatigue (Wingelaar-Jagt et al., 2021). However, the introduction of these regulations in the Royal Netherlands Air Force (RNLAF) is complicated by the variety of aircrafts available and the types of operations performed. Additionally, there is the possibility of deviating from these regulations in the case of operational necessity. These circumstances make it impossible to solely rely on these regulations to manage fatigue and its associated risks. Other countermeasures are therefore needed to enhance the fitness of pilots to fly under these circumstances. Currently, the RNLAF allows its pilots to use certain hypnotics to get sufficient sleep prior to flight operations (Military Aviation Authority, 2021).
Depending on the scenario, an alternative option is to prescribe stimulants, that is, medications that increase vigilance and reduce fatigue. Although caffeine is widely available, both in pills and beverages, aircrew members have reported that caffeine supplements are ineffective, which might be due to the high daily caffeine consumption of many (Chou et al., 1985; Nehlig, 2018). Additionally, caffeine has a relatively short half-life of 4-6 h, which might be less favorable when longer periods of vigilance are needed, for example, during long night-time operations.
Modafinil is a relatively new wakefulness-promoting drug that has been approved as an agent to counter fatigue by the air forces of Singapore, the United States, India, and France (Ooi et al., 2019). Although its exact mechanism of action remains undetermined, it is thought to exert a stimulating effect by altering the levels of several neurotransmitters, including serotonin, noradrenalin, dopamine, and gamma-aminobutyric acid (Battleday and Brem, 2015; Kim, 2012). It has a longer Tmax (2-4 h) and T1/2 (12-15 h) than caffeine (30-120 min and 4-6 h, respectively) (Robertson and Hellriegel, 2003; Wingelaar-Jagt et al., 2021). Evaluations of the efficacies of pharmaceutical agents showed that modafinil is a promising fatigue countermeasure. However, this was mostly studied after longer periods of sleep deprivation, sometimes lasting >40 h (Killgore et al., 2006, 2008; Wesensten et al., 2002, 2004, 2005). By contrast, studies evaluating the effect of modafinil after shorter periods of sleep deprivation used multiple doses (Caldwell et al., 2004; Estrada et al., 2012). The effect of a single dose of modafinil after a similar limited period of wakefulness (e.g., 24 h) has not been studied extensively. This timeframe is particularly interesting for military aviation because this scenario is most likely during operational missions.
The present study aimed to determine the effect of a single dose of modafinil (200 mg) on vigilance during a limited period of sleep deprivation compared with those of placebo and a single dose of caffeine (300 mg). The period of sleep deprivation was 24 h, and special attention was paid to the level of vigilance during the WOCL. We expected both modafinil and caffeine to counteract the effects of fatigue on vigilance compared with placebo, with the beneficial effects of caffeine occurring earlier than those of modafinil due to the difference in Tmax.
Participants
This randomized, double-blind, crossover, active- and placebo-controlled clinical trial was conducted at the Center for Man in Aviation, RNLAF (Soesterberg, the Netherlands) and adhered to the principles of the Declaration of Helsinki, the International Council on Harmonization, and the Good Clinical Practice guidelines. The protocol was approved by the Medical Ethical Committee Brabant (reference: NL62145.028.17/P1749) and the Surgeon General of the Ministry of Defence. The study was registered in the Dutch Trial Register (No. NTR6922) and EU Clinical Trials Register (No. 2017-002288-16).
Healthy employees of the RNLAF aged between 18 and 60 years were eligible for inclusion. Eligible participants were fit to fly according to the RNLAF Military Aviation Regulations or European Aviation Regulations (European Aviation Safety Authority [EASA], 2011; Military Aviation Authority, 2020). Exclusion criteria were mainly based on possible side effects or interactions of one or both medicines, for example, pregnancy or breastfeeding, the use of medication that is metabolized through CYP3A4/5, CYP2C19, or CYP2C9, and/or a history of psychiatric illness including sleep disorders, or the use of psychoactive drugs.
After being informed, both verbally and in writing, about the aims, consequences, and constraints of the study, all participants gave written consent. This informed consent was voluntary and could be retracted at any time without any consequences. According to (inter)national privacy regulations, no study data were included in the medical files of the participants.
This study included 32 subjects, two of whom completed only two of the three test days due to operational reasons. Both subjects missed the caffeine administration; per protocol, their test results were included in the analysis. The subjects were aged between 25 and 59 years (mean: 35 years; standard deviation: 10 years). Five (16%) of the 32 subjects were female and 21 (66%) of all subjects were pilots. On the test days, the median waking time of the subjects was 07:00 AM, meaning that at medication administration, the subjects had a median period of wakefulness of 17 h (range: 15.5-20.0 h; interquartile range (IQR): 16.5-18.0 h).
Materials
The Vigilance and Tracking test (VigTrack) is a dual-task that measures vigilance performance under the continuous load of a compensatory tracking task. The test has been used in various studies and is sensitive for measuring vigilance and alertness (Simons, 2017;Valk and Simons, 2009). During the tracking task, participants had to steer a blue dot using a joystick such that it remained below a red dot in the center of the display. The blue dot is programmed to move continuously from the center of the display. While tracking, participants had to perform the vigilance task. Inside the red dot, a black square alternated with a diamond, once per second. At random intervals, a hexagon was presented.
When this occurred, participants had to press an additional key on the joystick. The duration of this test was 10 min, and primary endpoints included root mean square tracking error, percentage omissions, and mean reaction time.
The psychomotor vigilance task (PVT) measures the speed with which subjects respond to a red stimulus and is used to assess the vigilance of subjects (Basner and Dinges, 2011). The interstimulus interval, defined as the period between the last response and the appearance of the next stimulus, varies randomly from 2 to 10 s. The duration of this test was 10 min, and primary endpoints included reciprocal (1/mean) reaction time and lapses. Lapses (errors of omission) were defined as RTs ⩾ 500 ms.
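To make the two PVT endpoints concrete, the following minimal Python sketch (illustrative only; the variable names and example reaction times are hypothetical, not study data) shows how the reciprocal of the mean reaction time and the number of lapses can be derived from a series of trial reaction times:

    # Hypothetical PVT reaction times for one session, in milliseconds.
    reaction_times_ms = [255, 310, 287, 640, 298, 512, 276]

    mean_rt_s = sum(reaction_times_ms) / len(reaction_times_ms) / 1000.0
    reciprocal_rt = 1.0 / mean_rt_s                            # endpoint 1: 1/mean reaction time (1/s)
    lapses = sum(1 for rt in reaction_times_ms if rt >= 500)   # endpoint 2: responses with RT >= 500 ms

    print(f"1/mean RT: {reciprocal_rt:.2f} per s, lapses: {lapses}")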
At the start of every trial day, a familiarization session of 5 min per task was scheduled for all subjects to avoid practice bias during the actual measurements.
The Stanford Sleepiness Scale (SSS) was used to subjectively assess the degree of sleepiness in subjects during the test days (Hoddes et al., 1973). This subjective rating scale is sensitive to detect any significant increase in sleepiness or fatigue, and it is highly correlated with flying performance and the threshold of information-processing speed during periods of intense fatigue (Perelli, 1980). Blood samples were taken four times throughout the night to determine modafinil and caffeine blood levels (at T = 0, T = +3, T = +6 and T = +8). These samples were taken by qualified medical personnel in concordance with Dutch quality and safety standards and were analyzed by an external, qualified diagnostic laboratory.
After each test day, subjects were asked to complete sleep questionnaires about their sleep on the day and night immediately following the test day and night. After the last test day, the participants were asked to report which medication they believed they had been administered on which night.
Design
This trial had a within-subjects 3 × 7 design: treatment (modafinil, caffeine, placebo) × time (T = −6, T = 0, T = +1, T = +2, T = +3, T = +4, T = +6, T = +8). The entire study consisted of three nonconsecutive trial days for every participant during which modafinil, caffeine, and placebo capsules were each administered once just after midnight (see Table 1). The dose of modafinil was 200 mg, which is regarded as an effective dose as a countermeasure for fatigue in military aviation (Caldwell et al., 2009). The dose of caffeine (300 mg) was the dose routinely administered to RNLAF aviators; it is considered a medium-range but effective dose (Caldwell et al., 2009; Lohi et al., 2007).
A wash-out period of at least 7 days was implemented to ensure that the investigational products were completely eliminated and would not interfere on subsequent trial days.
The study was double-blinded as both the subjects and investigators were unaware of the treatment given on test days. The order of the treatments for each individual subject (placebo, caffeine, or modafinil) was based on a computer-generated randomization schedule organized and monitored by an external statistician. Randomization was performed using all possible (six) treatment sequences to ensure balance for carryover effects, that is, improving skills or learning bias on the test battery.
For every test day the researchers received a treatment kit from the pharmacist. The treatment kits were labeled with the subject number and the test day and contained identical capsules.
Procedure
One week prior to the start of every trial day, participants remained within the time zone of the research center (GMT + 1, daylight-saving GMT + 2) to prevent jetlag, which might confound the test results. During the trial days, no strenuous physical exercise (including sports) or sleeping was allowed, and participants kept a log of their activities and caffeine intake. They were able to consume their normal amount of caffeine-based products until 5:00 PM. To avoid interference from caffeine with vigilance, the participants ceased their consumption of caffeine products from 5:00 PM on the test days. On three consecutive days before each test day, the participants recorded their fatigue level, sleep hygiene and habits, and daily caffeine intake in a journal. These results will be analyzed and published separately. Vital signs (temperature, blood pressure, and pulse) were collected four times during each test day, two times prior to medication administration, and 2 and 8 h after administration (see Table 1). Additionally, on every test day, female subjects were tested for pregnancy and all participants were asked if they had taken any concomitant medication or unauthorized medications during the past 3 days.
Adverse events were recorded throughout the study and at every visit after screening. Subjects were asked about any adverse events multiple times during the trial days.
Statistical analysis
Sample size calculations were performed with G*Power (Faul et al., 2007). The assumed means and standard deviations of VigTrack were used to obtain the effect size (d) for sample size analysis (Klopping et al., 2005). Two-way testing using a repeated-measures analysis of variance (ANOVA) within groups, with α = 0.05, power (1 − β) = 0.8, and the aforementioned effect size (d), required a minimum of n = 18 to show the effects of caffeine and modafinil. However, to compensate for dropouts and sample failures, 30 subjects were included. Test results were included if subjects completed at least 2 full days of testing (i.e., results of subjects that completed only one test day were excluded because within-group analyses could not be performed).
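As an illustration of the kind of a priori calculation involved (not a reproduction of the G*Power analysis), the minimal Python sketch below estimates a required sample size; the means and standard deviation are hypothetical placeholders, and the one-way ANOVA power model used here ignores the within-subject correlation that the repeated-measures design exploits, so it is only a conservative approximation:

    from statsmodels.stats.power import FTestAnovaPower

    # Hypothetical pilot values for one endpoint (placeholders, not the study's data).
    mean_placebo, mean_active, pooled_sd = 0.80, 0.65, 0.20
    d = (mean_placebo - mean_active) / pooled_sd   # Cohen's d for the pairwise contrast
    f = d / 2.0                                    # rough conversion to Cohen's f

    # Total number of observations for a one-way ANOVA with 3 conditions,
    # alpha = 0.05 and power = 0.80.
    n_total = FTestAnovaPower().solve_power(effect_size=f, alpha=0.05, power=0.80, k_groups=3)
    print(round(n_total))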
Statistical analyses were performed using IBM SPSS software version 27.0. A factorial repeated-measures ANOVA was conducted to analyze the main and interaction effects of time and treatment on the VigTrack and PVT parameters. When the average test revealed a significant overall difference, pairwise comparisons were conducted to analyze the difference between treatments. These consisted of paired comparisons between scores and between treatment conditions for all separate test sessions (least significant difference). SSS scores were analyzed by nonparametric tests (Friedman test for repeated measures and Wilcoxon matched-pairs signed-rank test for pairwise comparisons). The placebo group was included for reference purposes.
For all primary endpoints, the change from baseline, defined as the difference between the measure before drug intake (T = −6) and at each timepoint thereafter (T = 0 to T = +8), was calculated. Mauchly's test was performed to test if the assumption of sphericity had been violated for the different parameters. If this was the case, the degrees of freedom were corrected using Huynh-Feldt estimates of sphericity. A p-value of <0.05 was considered statistically significant.
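For readers who wish to reproduce this type of analysis outside SPSS, a minimal Python sketch of an analogous repeated-measures ANOVA is shown below; the long-format data frame and its values are synthetic placeholders, and the sketch covers only the omnibus within-subject tests, not the sphericity correction or the pairwise follow-ups described above:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Synthetic balanced long-format data: 6 subjects x 3 treatments x 2 timepoints
    # (scores are random placeholders, not study data).
    rng = np.random.default_rng(0)
    subjects, treatments, times = range(1, 7), ["placebo", "caffeine", "modafinil"], ["T0", "T8"]
    rows = [(s, tr, ti, rng.normal()) for s in subjects for tr in treatments for ti in times]
    data = pd.DataFrame(rows, columns=["subject", "treatment", "time", "score"])

    # Factorial repeated-measures ANOVA: main effects of treatment and time and their interaction.
    print(AnovaRM(data, depvar="score", subject="subject", within=["treatment", "time"]).fit())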
Results
No adverse events were encountered during the study. The subjects' vital signs were unaffected by drug administration. The study ended according to protocol.
After the last test day, the participants were asked to guess which medication they had taken on which night. Of the 94 guesses, 54 (57%) were correct. Of the 32 times modafinil was administered, five (16%) subjects believed they had taken placebo, eight (25%) thought they had taken caffeine, and 19 (59%) guessed correctly. Of the 30 times caffeine was administered, six (20%) subjects thought they had taken placebo, seven (23%) believed they had taken modafinil, one (3%) did not know, and 16 (53%) identified the medication correctly. Of the 32 times placebo was administered, five (16%) subjects assumed they had taken modafinil, seven (22%) suspected they had taken caffeine, one (3%) did not know, and 19 (59%) identified placebo correctly. These results suggest that there was no unblinding of subjects during the study.
Plasma concentrations of modafinil and caffeine can be found in Table 2.
After checking for outliers in the data with boxplots, two participants were removed from the analysis of the VigTrack parameters. These participants showed extreme values for all the VigTrack parameters, likely because they may have not understood the task properly. No outliers were identified when analyzing other parameters.
The results of Mauchly's test and subsequent correction of the degrees of freedom are provided in the appendix. Test results for all primary endpoints are displayed in Figure 1 and described in the following paragraphs, with the p-values of the pairwise comparisons summarized in Supplemental Table A.1.
VigTrack-mean reaction time. There was a significant main effect of treatment on mean reaction time (F(2, 50) = 5.71, p = 0.006). Post-hoc pairwise comparisons revealed that mean reaction time in seconds was significantly lower for both modafinil and caffeine than for placebo (p = 0.005 and p = 0.006, respectively).
There was a significant main effect of time of assessment on mean reaction time (F(2.32, 69.31) = 23.57, p < 0.001). There was also a significant interaction effect between time of assessment and treatment on mean reaction time (F(6.43, 160.62) = 5.02, p < 0.001). This indicates that the treatment had different effects on mean reaction time depending on the time of assessment.
Post-hoc pairwise comparisons revealed that performance was significantly less impaired with both modafinil and caffeine than with placebo during assessment at T = +4, T = +6, and T = +8.
VigTrack-mean percentage omissions.
There was a significant effect of treatment on percentage omissions (F(2,50) = 3.31, p = 0.045). Post-hoc tests revealed that percentage omissions were significantly lower for modafinil than for placebo (p = 0.018).
There was a significant main effect of time of assessment on percentage omissions (F(1.55, 38.65) = 9.57, p = 0.001). There was also a significant interaction effect between time of assessment and treatment on percentage omissions (F(4.86, 121.45) = 4.30, p = 0.001). This indicates that the treatment had different effects on percentage omissions depending on the time of assessment. Post-hoc pairwise comparisons revealed that performance was less impaired with modafinil than with placebo during assessment at T = +6 and T = +8. Performance was less impaired with caffeine than with placebo during assessment at T = +6, and T = +8.
VigTrack-mean tracking error. There was no significant main effect of treatment on mean tracking error (F(1.34, 33.49) = 0.86, p = 0.392). There was a significant main effect of time of assessment on mean tracking error (F(2.24, 55.88) = 9.26, p < 0.001). There was also a significant interaction effect between time of assessment and treatment on mean tracking error (F(3.73, 93.14) = 3.42, p = 0.014). This indicates that the treatment had different effects on mean tracking error depending on the time of assessment.
Post-hoc pairwise comparisons revealed that performance was less impaired with modafinil than with placebo during assessment at T = +6 and T = +8. There were no significant differences between caffeine and placebo.
PVT-1/reaction time.
There was a significant main effect of treatment on 1/mean reaction time (F(2.00, 54.00) = 11.50, p < 0.001). Post-hoc tests revealed that 1/mean reaction time was significantly higher for both modafinil and caffeine than for placebo (p < 0.001 and p = 0.003, respectively).
There was a significant main effect of time of assessment on 1/mean reaction time (F(4.65, 125.54) = 44.86, p < 0.001). There was also a significant interaction effect between time of assessment and treatment on 1/mean reaction time (F(10.52, 284.02) = 9.73, p < 0.001). This indicates that the treatment had different effects on 1/mean reaction time depending on the time of assessment.
Post-hoc pairwise comparisons revealed that performance was less impaired with both caffeine and modafinil than with placebo during assessment at T = +2, T = +3, T = +4, T = +6, and T = +8. Additionally, performance was significantly less impaired with modafinil than with caffeine during assessment at T = +6 and T = +8.
PVT-number of lapses.
There was a significant main effect of treatment on number of lapses (F(2, 54) = 14.15, p < 0.001). Post-hoc tests revealed that the number of lapses was significantly lower for both modafinil and caffeine than for placebo (p < 0.001 and p = 0.001, respectively).
There was a significant main effect of time of assessment on number of lapses (F(3.83, 131.35) = 28.53, p < 0.001). There was also a significant interaction effect between time of assessment and treatment on number of lapses (F(9.49, 256.15) = 7.13, p < 0.001). This indicates that the treatment had different effects on number of lapses depending on the time of assessment.
Post-hoc pairwise comparisons revealed that performance was less impaired with caffeine than with placebo during assessment at T = +2, T = +3, T = +4, T = +6, and T = +8. Performance was less impaired with modafinil than with placebo during assessment at T = +2, T = +3, T = +4, T = +6, and T = +8. Additionally, performance was significantly less impaired with modafinil than with caffeine during assessment at T = +8.
Discussion
The present study demonstrates that 200 mg modafinil and 300 mg caffeine significantly improve vigilance compared with placebo during an extended period of continuous wakefulness (mean 17.3 h), including the WOCL, without causing side effects. The most notable effects occurred in the early morning (between 4:00 and 6:00 AM), although PVT parameters improved as early as 2 h after administration. The increase in vigilance with both modafinil and caffeine was confirmed by the PVT, VigTrack, and SSS parameters. To our knowledge, this is the first randomized placebo-controlled trial to demonstrate the beneficial effects of these pharmaceutical agents after limited sleep deprivation.
Our findings are in line with the literature, although previous studies investigated the effects of caffeine and modafinil after longer periods of sustained wakefulness (Killgore et al., 2008; Wesensten et al., 2005; Wingelaar-Jagt et al., 2021). Modafinil sustains flight performance and mood state during continuous wakefulness when tested during simulated or in-flight operations, while the results for caffeine were mixed and inconclusive in these studies (Ehlert and Wilson, 2021). The effects of modafinil and caffeine appear almost simultaneously, despite their significantly different Tmax (30-120 min for caffeine and 2-4 h for modafinil) (Institute of Medicine (US) Committee on Military Nutrition Research, 2001; Robertson and Hellriegel, 2003). Performance was less impaired with both modafinil and caffeine than with placebo for all PVT parameters from 2 h after administration. Additionally, from T = +4, subjects had faster reaction times in the VigTrack test and lower SSS scores. This was followed by improvements in the remaining study parameters 6 h after administration (except for VigTrack mean tracking error for caffeine). This is consistent with the Tmax of modafinil (2-4 h). However, considering that the Tmax of caffeine is 30-120 min, the effects of caffeine were expected to be visible earlier than the 2-6 h after administration observed in this study. On the other hand, in a previous study in which caffeine was given to counteract the effects of temazepam, it improved performance and alertness after 1.5 h, which is comparable to this study (Klopping et al., 2005). An explanation for the delayed onset of the effects of caffeine in this study may be the relatively early timing of medication intake (12:00 AM). The median regular bedtime of the subjects was 11:05 PM, that is, at the moment of medication administration they had been awake 0.9 h longer than normal. Likewise, at medication administration the subjects had been awake for a median of 17 h. This is slightly longer than the 16 h during which well-rested individuals can maintain high levels of alertness and performance (Van Dongen et al., 2003). Additionally, the WOCL starts at 2:00 AM, initiating the period in which humans are less effective and levels of attention are lowest. This could explain the increase in effects seen after 2:00 AM and the delayed start of the effects of caffeine in this study.
At T = +8, the modafinil test group showed less impaired performance in all parameters, while caffeine showed no effect on the SSS and VigTrack mean tracking error. The PVT parameters and SSS showed an increase in vigilance with modafinil compared with caffeine during assessment at T = +8, which is in line with the longer Tmax (2-4 h) and T1/2 (12-15 h) of modafinil than of caffeine (30-120 min and 4-6 h, respectively) (Klopping et al., 2005; Robertson and Hellriegel, 2003). This explains the decrease in performance improvements with caffeine, but not with modafinil, starting at T = +6. Due to its long half-life, modafinil likely continues to be effective for hours after the end of the test period used in this study. This was shown in previous studies, in which the effects of modafinil remained noticeable after 10-12 h (Killgore et al., 2008; Wesensten et al., 2005). If the measurements had been continued after T = +8, it might have been possible to identify the duration of the effects of caffeine and modafinil on performance and vigilance. However, the test period used in this study is relevant for the RNLAF because it is congruent with common operational missions. RNLAF pilots are not kept awake for more than 24 h, but it is possible that, after being awake for a normal day (16-17 h), they are asked out of operational necessity to perform a mission at the moment their performance starts to decrease (Van Dongen et al., 2003). Even with this restricted test period, it is clear that modafinil and caffeine have different periods of effectiveness. Thus, it is prudent to consider which stimulant offers the desired period of performance improvements.
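The half-life argument can be made concrete with a simple first-order elimination estimate. The short Python sketch below assumes single-dose, first-order kinetics and mid-range half-lives of 5 h for caffeine and 13 h for modafinil (illustrative values taken from the ranges quoted above) and computes the fraction of the peak plasma concentration expected to remain 8 h later:

    def fraction_remaining(hours_after_peak: float, half_life_h: float) -> float:
        """First-order elimination: C(t)/C0 = 2 ** (-t / t_half)."""
        return 2 ** (-hours_after_peak / half_life_h)

    # Assumed mid-range half-lives, purely for illustration.
    for drug, t_half in [("caffeine", 5.0), ("modafinil", 13.0)]:
        print(drug, round(fraction_remaining(8.0, t_half), 2))

Under these assumptions, roughly a third of the caffeine peak but about two thirds of the modafinil peak would remain at 8 h, consistent with the longer-lasting effect of modafinil observed at T = +8.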
Subjects did not always correctly identify which medication they had taken; in slightly more than half of instances, they were correct. Approximately 25% of subjects mistook modafinil for caffeine or vice versa, and 16-20% of subjects mistook modafinil or caffeine for placebo. The effects of modafinil and caffeine were more pronounced in the PVT scores than in the VigTrack parameters. This may be explained by the difference in the difficulty of the tasks. The PVT is a relatively simple task that is more sensitive to (feelings of) fatigue than VigTrack. By contrast, VigTrack is a more complicated and challenging test that may induce more motivation to perform and stay awake. Additionally, although both tests are sensitive for measuring vigilance and alertness, they are not comparable to the workload or complexity of tasks demanded of pilots in the cockpit. Performance improvements are more pronounced in simulator studies than in in-flight testing (Ehlert and Wilson, 2021). Potential explanations are the more demanding conditions and potentially increased arousal of pilots in-flight (Caldwell and Roberts, 2000). This could also be relevant to the present study, which was performed in a controlled laboratory environment and used relatively simple tasks. Therefore, our findings should be extrapolated to real-life scenarios with caution. Future studies are required to determine the effectiveness of stimulants during actual air operations.
Additionally, the effects found in this study may have been biased by the subjects' level of caffeine consumption. Although the subjects ceased all caffeine consumption from 5:00 PM on the test days, the effects of their habitual caffeine intake may have still influenced their performance. Supplementary analysis is needed to determine the effect of daily caffeine consumption on the effects of stimulants during periods of sleep deprivation, and it may help to personalize stimulant use in pilots. Conversely, minor aberrations in the manufacturing process could have affected the results. While we believe these to be negligible, as the manufacturer complied with national legislation and good clinical practice, we cannot rule this out.
The measured caffeine plasma concentrations are in line with its pharmacokinetic characteristics (T max 30-120 min and T 1/2 4-6 h), even though the peak plasma concentration was probably before T = +3. The measured caffeine plasma concentrations from T = +3 on are in the therapeutic range of 4 to 10 μg/ml (Schulz and Schmoldt, 2003). The modafinil peak plasma concentration in the present study is comparable to values reported in the literature, even though in other studies the peak concentration was reached earlier after administration (1.5-2 h) (Darwish et al., 2009;Robertson and Hellriegel, 2003). Furthermore, when comparing the modafinil plasma concentrations with its pharmacokinetic characteristics (T max 2-4 h and T 1/2 12-15 h), one would have expected the peak plasma concentration to occur earlier than at T = +6. A possible explanation is that the true peak plasma concentration was between T = +3 and T = +6 and was missed due to the low number of blood samples. Although this limited number of blood samples is a limitation of this study, with the 6 and 8 h follow-up time, we were able to provide details of serum concentrations relatively long after administration.
Moreover, sleep-related factors were not considered in this study. Both sleep deprivation and an extended period of wakefulness may negatively affect performance (Wingelaar-Jagt et al., 2021). To best reflect circumstances of operational military aviation, no bedtimes or waking times were imposed on the participants; therefore, the time since the last sleeping period and the duration of that sleeping period differed between subjects. These differences may have caused variation in performance during the test periods. It would be insightful from an academic perspective to investigate how large this influence actually is. However, due to the crossover design, we do not believe this affected the results of our study. Additionally, the results presented in this study reflect in vivo benefit from modafinil and caffeine, and therefore they provide operationally relevant data for military aviation. Furthermore, the effects of modafinil and caffeine on subsequent sleep periods were not considered in this analysis. The literature is ambiguous regarding the effects of modafinil on recovery sleep. One study reported that recovery sleep 16 h after modafinil administration was of a lesser quality and quantity (Estrada et al., 2012), while other studies showed that recovery sleep was unaffected (Killgore et al., 2008;Walsh et al., 2004).
In conclusion, both modafinil and caffeine improved vigilance and performance based on the PVT and VigTrack, and resulted in a lower level of reported sleepiness after a limited period of sleep deprivation. Modafinil was effective for longer than caffeine, which is consistent with its longer half-life. The effects of both modafinil and caffeine were noticeable approximately 2 h after drug administration. The delayed effect of caffeine in comparison with its short T max of 30-120 min may be due to the relatively short period of wakefulness and subsequent start of the WOCL. Stimulants may play an important role in military aviation, especially in situations where pilots are already fatigued but operational necessity requires them to continue their mission. Therefore, it is paramount to be able to choose the optimal stimulant for the situation. Additional research evaluating the effects of modafinil and caffeine on in-flight performance, the effects of previous caffeine administration and extent of sleep deprivation, and the effects of modafinil on recovery sleep is needed to provide an evidence-based basis for this choice. Lastly, as our data suggest that modafinil continues to positively affect performance 8 h after administration, future studies could explore this. Aviation is not the only industry in which peak performance is demanded during night-time or after periods of sleep deprivation. Therefore, these results may also prove to be relevant for employees and employers in other fields, such as healthcare and logistics.
Significance statement
Fatigue remains an important safety risk in aviation. Stimulants, like modafinil and caffeine, counteract fatigue's adverse effects on vigilance and performance, and each has its own characteristics and optimal timeframe. Stimulants may be of particular importance in situations where pilots are already fatigued, but operational necessity requires them to continue their mission. Aviation is not the only industry in which peak performance is demanded during night-time or after periods of sleep deprivation. Therefore, it is paramount to better understand these stimulants in order to select the optimal stimulant for each situation. This may improve safety not only in aviation, but also in other fields, such as healthcare and logistics.
Author contributions
YW conceived the idea, designed and performed the experiments, carried out the statistical analysis, and drafted and revised the manuscript. CB designed and performed the experiments, carried out the statistical analysis, and was involved in drafting and revision of the manuscript. WR conceptualized this paper, supervised the experiments, and revised the manuscript. JR conceptualized this paper, supervised the experiments, and revised the manuscript.
Data-sharing plan
Data will be available in the near future on EudraCT (European Union Drug Regulating Authorities Clinical Trials Database) and is available upon reasonable request.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Funding through Dutch Ministry of Defense.
Supplemental material
Supplemental material for this article is available online. | 2022-12-16T16:03:52.577Z | 2022-12-14T00:00:00.000 | {
"year": 2022,
"sha1": "898f71d8ef5dcdf9439203a3cf2ceb01a9c5dd24",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Sage",
"pdf_hash": "e2dba429bac6251261be17ce0f46bbe6bbb9618a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
1551806 | pes2o/s2orc | v3-fos-license | Presentation of Classical Propositional Tableaux on Program Design Premises
We propose a presentation of classical propositional tableaux elaborated by application of methods that are noteworthy in program design, namely program derivation with separation of concerns. We start by deriving from a straightforward specification an algorithm given as a set of recursive equations for computing all models of a finite set of formulae. Thereafter we discuss the employment of data structures, mainly with regard to an easily traceable manual execution of the algorithm. This leads to the kinds of trees usually given as constituting the tableaux. The whole development strives to avoid gaps, both of a logical and of a motivational nature.
Introduction
We teach a course Logic for Computing in a Software Engineering programme of studies. Prior to this, students have received courses in Calculus, Algebra and introductory Programming in Java, plus a course called Foundations of Computing, which introduces polymorphic, higher-order functions and inductive types with the fundamental methods of induction and recursion in their various forms. Foundations of Computing makes emphasis on a mathematical approach to Programming, specifically on correctness proofs. Logic for Computing, in turn, concerns itself essentially with the notion of formal proof. It follows from the foregoing that we should be very much interested in making explicit methods of proof. By this we mean both general strategies for developing and fully understanding solutions to problems, as well as manners of presenting the corresponding proofs which convey natural, concise and complete justifications of their design. Now, as it turns out, we have observed that some methods that have arisen within what could be called the science of Programming can be employed for obtaining or conveniently presenting mathematical results. This is to our mind a fact to be most welcome, for it exposes a unity of method between Programming and Mathematics that cannot but bring about positive outcomes for both sides, at least in as much the learning and teaching aspects are concerned.
In this paper we present an example of the latter, concerning the presentation of the method of tableaux. This is a proof procedure for both propositional and predicate logic dating back to [1] and [2], and whose ultimate variant (termed analytic tableaux) has been introduced in [3]. Specifically, what we do is: (1) We derive the method as a set of equations -to be used as rewriting rules-from a straightforward specification, namely the one demanding the computation of the set of all models of the given set of formulas. (2) We discuss the design of data structures for actually effecting and keeping trace of the execution of the method, which leads to the sorts of trees that are called "the tableaux" in textbooks. The first part yields a compact proof of the correctness of the method, much simpler than the ones in textbooks. The second part introduces the convenient and classical notation and establishes its correctness relating it to the set of equations originally given by a simple inductive argument. As a whole, the process is one in which we repeatedly employ simple techniques of program derivation and separation of concerns to obtain a presentation and justification both modular and simpler of the method of tableaux.
The rest of the paper consists of a general background section whose contents are assumed to be taught prior to the study of tableaux. In section 3 we present the derivation of the equational algorithm calculating the set of all models of a given set of formulae. In section 4 we discuss the data structures for tracing the execution of the algorithm, leading to the usual presentations of tableaux, after which we finish up with a general discussion. The presentation is to be read basically as a concise course handout, with some explicit considerations of logical or didactic nature.
Background
Syntax. It is enough to consider the set of connectives {¬, ∧}. Then the set of formulae is defined as usual, starting out from a denumerable set V of propositional letters p: α, β ::= p | ¬α | α ∧ β. We use signed formulae σ ::= S α, where S ::= F | T, as the forms of assertion or judgement.

Semantics. Interpretations belong to I = V → Bool. The semantic value of each formula is defined as follows (let A be the set of formulae, with [[·]] : A → I → Bool, and let (!) and (&&) denote respectively Boolean negation and conjunction):

[[p]] i = i(p)
[[¬α]] i = ! [[α]] i
[[α ∧ β]] i = [[α]] i && [[β]] i

Using the former we now define truth of an assertion (signed formula) in an interpretation. Call Ŝ the Boolean value corresponding to sign S. Then i |= S α ≡ [[α]] i = Ŝ, which reads: i is a model of S α, and also: i satisfies S α, or S α is valid in i. We shall consider finite sets Γ of signed formulae and define models thereof (i.e. i |= Γ) as the interpretations satisfying every formula of Γ.

Truth in an interpretation. It is generally interesting to develop a method for checking truth of signed formulae in an interpretation. If we start with the propositional letters, we get i |= S p ≡ i(p) = Ŝ. For the other cases we wish to obtain (structurally) recursive equations. As to negation, writing S̄ for the sign opposite to S, we obtain i |= S (¬α) ≡ i |= S̄ α. For conjunction we would like a single equation of the form i |= S (α ∧ β) ≡ …, where we seem to get stuck. Indeed, to rewrite the left-hand side requires to consider the definition of (&&), and this is not uniform with respect to truth and falsity. Therefore we are led to try instead distinguishing the cases of S. Ultimately, we arrive at the following characterisation of the satisfaction relation:

Signed letter: i |= S p ≡ i(p) = Ŝ
Negation: i |= S (¬α) ≡ i |= S̄ α
True conjunction: i |= T (α ∧ β) ≡ i |= T α and i |= T β
False conjunction: i |= F (α ∧ β) ≡ i |= F α or i |= F β
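To make the satisfaction relation concrete, here is a minimal executable sketch in Haskell. It is our own illustration, not part of the original course material; the names value, hat and sat are ours.

```haskell
-- Formulae over propositional letters, restricted to the connectives ¬ and ∧.
data Formula = Var String | Not Formula | And Formula Formula

-- Signs T and F for signed formulae S α.
data Sign = T | F deriving (Eq, Show)

-- Interpretations i ∈ I = V → Bool.
type Interpretation = String -> Bool

-- Semantic value [[α]] i, using Boolean negation and conjunction.
value :: Formula -> Interpretation -> Bool
value (Var p)   i = i p
value (Not a)   i = not (value a i)
value (And a b) i = value a i && value b i

-- The Boolean Ŝ corresponding to a sign S.
hat :: Sign -> Bool
hat T = True
hat F = False

-- i |= S α  iff  [[α]] i = Ŝ.
sat :: Interpretation -> (Sign, Formula) -> Bool
sat i (s, a) = value a i == hat s
```

For instance, with an interpretation i mapping "p" to True and every other letter to False, sat i (F, Not (Var "p")) evaluates to True, matching the characterisation above.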
The Set of All Models
We now set ourselves the problem of computing all models of any given finite set Γ of signed formulae. This is accomplished by the function M :: P fin (Σ) → P(I) defined by M(Γ) = {i ∈ I | i |= Γ}, where Σ is the set of signed formulae, P is the power set operator yielding the set of subsets of a given set, and P fin does the latter for the finite subsets. Now this straightforward definition presents the inconvenience that, as a method of computation, it obliges us to construct all the interpretations and check each of them against the formulae in Γ. We are rather in search of a syntactic procedure, i.e. one that, applied exclusively to the formulae in Γ, ends up arriving at the desired set of models. Let us then examine Γ.
First of all, Γ could be empty, which is indeed a plainly uninteresting case. Indeed, every interpretation trivially satisfies the empty set of formulae and so the result in such case is I. So let us assume Γ ≠ ∅. If this is the case, then we can pick any one of the formulae σ in Γ and write the latter in the form ∆ | σ, which means that Γ = ∆ ∪ {σ} and σ ∉ ∆. Given the former, we can now write M(∆ | σ) = {i ∈ I | i |= ∆ and i |= σ}. The only source of information in the latter expression is the analysis of the form of σ, and so we are led to an examination of cases, i.e. to considering σ = S p, σ = S (¬α) and σ = S (α ∧ β). We can profit from this analysis by using the results obtained at the end of the preceding section for checking the truth of signed formulae in a given interpretation. As it happens, the first case is a bit discouraging, for the satisfiability condition i |= S p takes us to consider the value of p in the given interpretation, i.e. a semantic rather than a syntactic move. But it pays off to insist. Negation gives the following:

M(∆ | S (¬α)) = M(∆, S̄ α)

where we have used (,) instead of (∪) for set union. Notice that it is indeed this operation and not the formerly used split (|) which is to be employed in this case, for we do not now know whether the formula S̄ α belongs or not to ∆. The equation thus obtained, namely M(∆ | S (¬α)) = M(∆, S̄ α), is very convenient, for it rewrites the desired set of all models into an expression in which the overall complexity of the formulae has been strictly decreased. The same works for conjunction, whose results with respect to satisfiability can be used by distinguishing the two cases of the sign affecting it:

M(∆ | T (α ∧ β)) = M(∆, T α, T β)
M(∆ | F (α ∧ β)) = M(∆, F α) ∪ M(∆, F β)

As a result we have so far obtained equations for the negation and conjunction cases, where the case of a signed letter, i.e. a literal, could not be included. Now taking a look at the preceding equations for M, we readily realize that the missing case is actually that of a set not containing any composite formulae, i.e. that of a set of literals. Such is the base case of our recursion, since this proceeds by decreasing the size of the formulae of the set being treated, and not the size of the set itself. Therefore it is natural to wonder whether the solution of such base case could actually be just immediate. This is indeed the case, because there is a straightforward manner of converting a set Γ of literals into the set of all its models. There are two cases: Γ contains pairs of opposite literals; then it is inconsistent and the set of its models is ∅.
Otherwise the models of Γ are the interpretations that coincide with Γ at the letters mentioned in it. Formally, call Γ̂ the set of all models of the set Γ of literals. It is defined as follows: Γ̂ = ∅ if Γ contains a pair of opposite literals, and otherwise Γ̂ = {i ∈ I | i(p) = Ŝ for every literal S p ∈ Γ}. Notice that the alternative is decidable and that in the second case the result is sufficiently characterised by the set Γ of literals and so we get a finite representation of it. We can then put together equations for actually computing M:

M(∆ | T (α ∧ β)) = M(∆, T α, T β)
M(∆ | F (α ∧ β)) = M(∆, F α) ∪ M(∆, F β)
M(∆ | S (¬α)) = M(∆, S̄ α)
M(Γ) = Γ̂, if Γ consists of literals only

We claim that M captures the essence of the method of tableaux, and the derivation carried out above gives actually a quite simple proof of its correctness. Nevertheless, its actual execution needs to employ some kind of data structure to record the successive transformations leading to the final result. That is what we turn now to examining.
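Before turning to data structures, here is an executable reading of the equations just derived. This is our own Haskell sketch with hypothetical names, repeating the data types of the earlier fragment so that it is self-contained; finite sets are represented as lists, and each set of models Γ̂ is returned as its characterising set of literals.

```haskell
import Data.List (partition, nub)

data Formula = Var String | Not Formula | And Formula Formula deriving (Eq, Show)
data Sign    = T | F deriving (Eq, Show)
type Signed  = (Sign, Formula)

opposite :: Sign -> Sign
opposite T = F
opposite F = T

isLiteral :: Signed -> Bool
isLiteral (_, Var _) = True
isLiteral _          = False

-- models gamma returns one characterising set of literals per consistent
-- branch (the finite representation of a class of models); an empty list
-- of results means that gamma has no models at all.
models :: [Signed] -> [[Signed]]
models gamma =
  case partition isLiteral gamma of
    (lits, []) ->
      -- Base case: only literals are left.
      if any (\(s, a) -> (opposite s, a) `elem` lits) lits
        then []           -- a pair of opposite literals: inconsistent
        else [nub lits]   -- otherwise the literal set characterises the models
    (lits, (s, c) : rest) ->
      -- Split gamma as Delta | sigma, with sigma the first composite formula.
      case (s, c) of
        (_, Not a)   -> models ((opposite s, a) : lits ++ rest)
        (T, And a b) -> models ((T, a) : (T, b) : lits ++ rest)
        (F, And a b) -> models ((F, a) : lits ++ rest)
                        ++ models ((F, b) : lits ++ rest)
        _            -> error "unreachable: literals were filtered out"
```

For example, models [(T, And (Var "p") (Not (Var "q")))] returns a single literal set containing (T, Var "p") and (F, Var "q"), i.e. the class of models in which p is true and q is false.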
Data Structures for the Tableaux
List of lists. If we ignored the second equation above we would be in the presence of a tail-recursive algorithm, i.e. one whose execution could consist merely in successively rewriting the finite set of formulae at hand. We could then simply work with a list of formulae from which we would choose the next formula to be transformed. Now, consideration of the second equation does not in principle introduce any dramatic modification of this situation: it is enough that each application of the equation produces a split of the list from which the formula F (α ∧ β) is taken into two lists, each of them containing exactly one of the two formulae F α and F β. There seem to be three inconveniences as to this execution. The first is that we have treated one and the same formula twice, and that on two different occasions. One readily realizes that the issue is avoidable if the use of the branching equation corresponding to a false conjunction is always subsequent to the use of every other (non-branching) equation formerly applicable. The second inconvenience is that we have rewritten many a formula that was without change. And, finally, the execution is awkwardly traceable: we have namely indicated the successive steps taken by means of narrative text interspersed in the successive rewritings. The latter is of importance when we consider executions by hand; then a more formal and easily checkable notation would be most welcome by both students and teachers. We shall consider these two remaining issues in the next two subsections, beginning with the latter, about an easily traceable notation.

Tree of lists. The straightforward manner of making executions like the former traceable and easily verifiable is just to record the application of each rule, including mention of the formula used. We should therefore begin by naming the equations of the algorithm, say T∧, F∧, ¬ and l, in the order in which they are written above. The procedure leads to the deployment of a tree structure whose nodes are lists of formulae as in the preceding section, and whose internal nodes (not leaves) are decorated by labels as explained presently:
0. To begin with, we have only one item, namely the original list of formulae. This is of course a tree with only one terminal node (leaf).
1. At each step we choose a composite formula within a leaf (call this leaf L) and apply the corresponding rule as already explained. As a result one or two new lists of formulae are obtained, which are linked to L, becoming successors of L in the tree. At the same time we label L with the name of the equation and the formula used.
The leaves of these trees coincide with the lists of formulae obtained by the procedure explained in the preceding paragraph; we have only added a tree structure on top of them for tracing their computation. Therefore, the set of models of the root of the tree obtains as the union of the sets of models of the leaves. Formally, this much becomes clear after the consideration that the union of the sets of models of the leaves, and therefore the invariant just mentioned, are indeed preserved by each application of one equation as described above. Therefore the correctness of the computation procedure using these trees follows by straightforward mathematical induction. The right formulation and proof of this result is left as an exercise.
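Such trace trees can be rendered as an inductive data type. The following Haskell sketch is our own illustration (parameterised over the type of signed formulae), not a definition taken from the paper:

```haskell
-- Trees of lists of signed formulae. Internal nodes record the equation
-- applied (T∧, F∧ or ¬) through their constructor, together with the list
-- at the node and the formula used there; leaves carry either an as yet
-- untreated list or a list of literals.
data Trace sf
  = Leaf  [sf]                            -- constructor l
  | NegN  [sf] sf (Trace sf)              -- equation ¬: one successor
  | TAndN [sf] sf (Trace sf)              -- equation T∧: one successor
  | FAndN [sf] sf (Trace sf) (Trace sf)   -- equation F∧: two successors (branching)
  deriving Show
```

The set of models of such a tree is then, as stated above, the union of the sets of models of its leaves.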
Notice that the preceding description amounts to inductively defining these trees as a family T(Γ), indexed by the finite sets Γ, in a manner such that the constructors stand in correspondence with the equations as named above, in the following manner: to internal nodes, constructors T∧, F∧ and ¬ are associated, corresponding to the equation used in each case. The leaves are the as yet untreated nodes or those already formed by literals only. In either case we associate to the leaf the constructor l. Unfortunately, we must skip a detailed explanation for reasons of space.

Tree of formulae. The repetition of possibly large lists of formulae along the trees as introduced in the preceding section can be avoided, e.g. by employing the procedure described in [3]. We describe these less expensive trees as follows. The general idea is to write at each node of the tree different from the root only the formulae originated by the use (decomposition) of another formula. The root of the tree will contain the originally given set (list) of formulae. With this information it is possible to compute the full trees of the preceding paragraph, provided the used formulae are recorded at each step, i.e. at each node. Therefore, the correctness of the present method with improved trees will follow from the correctness of the prior method. Specifically, we define the improved trees as follows:
1. Each node will have associated an explicit set E and an implicit set Γ of formulae. E is to be written down explicitly, whereas Γ is to be computed when necessary.
2. For the root of the tree, both E and Γ coincide with the originally given set of formulae.
3. For the other nodes, E will consist of one or two formulae.
4. All internal (i.e. non-leaf) nodes will also have associated one formula, to be called the one used at the node.
We now indicate how to extend the tree down from a leaf:
1. A formula σ in Γ is chosen and written down at the node as its used formula.
2. Then one proceeds according to the form of the chosen formula:
   a. In case it is of the form T (α ∧ β), then the tree is extended with one child node. For this new node, E = {T α, T β}.
   b. In case it is of the form F (α ∧ β), then the tree is extended with two children nodes. One of them will have E = {F α}, whereas the other one will have E = {F β}.
   c. Finally, in case the chosen formula is of the form S (¬α), then the tree is extended with one child node having E = {S̄ α}.
For every case of newly created node, the set Γ is computed as follows: if Γ0 is the implicit set of the parent node, then Γ = (Γ0 − σ) ∪ E, where − denotes deletion of a member from a set.

Now, to each improved tree t with a non-leaf root, which has associated explicit set E and implicit set Γ of formulae, as well as used formula σ, a full tree of type T(Γ) can be associated, whose constructor is the one corresponding to the form of σ, i.e. T∧, F∧ or ¬, and whose children trees are the ones (recursively) corresponding to the children trees of t. If otherwise t is just a leaf, then its corresponding full tree is l(Γ), where Γ is the implicit set of formulae of the leaf in question. This correspondence already gives a method for using the improved trees in order to compute all the models of any given set of formulae. Nevertheless, the following result makes such process easier: the implicit set at each leaf is the union of the explicit sets on the branch ending up at the leaf in question, minus those formulae that have been used on that branch. Thereby one can determine when a branch is completed, which happens when the implicit set at the corresponding leaf is a set Γ of literals. Further, Γ̂ is then the corresponding set of models, and one can then compute the set of models of the whole tree (i.e. of the originally given set of formulae) by taking the union of the sets at each leaf, just as with the full trees.
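To close this section, here is a compact sketch of our own (again parameterised over the type of signed formulae, not code from the paper) showing how the implicit sets of the leaves can be recovered from the explicit sets and used formulae along each branch:

```haskell
import Data.List (delete, union)

-- Improved trees: every node stores only its explicit set E; internal nodes
-- also store the formula used there.
data Improved f
  = ILeaf [f]                  -- explicit set only
  | INode [f] f [Improved f]   -- explicit set, used formula, children
  deriving Show

explicit :: Improved f -> [f]
explicit (ILeaf e)     = e
explicit (INode e _ _) = e

-- Implicit sets of all leaves, given the implicit set of the current node
-- (for the root, the implicit set coincides with the originally given set).
-- Each child's implicit set is Γ = (Γ0 − σ) ∪ E, as in the text above.
leafImplicits :: Eq f => [f] -> Improved f -> [[f]]
leafImplicits gamma (ILeaf _)         = [gamma]
leafImplicits gamma (INode _ used cs) =
  concat [ leafImplicits (delete used gamma `union` explicit c) c | c <- cs ]
```

Taking the union of Γ̂ over those leaves whose implicit sets consist of literals only then yields, as above, the set of all models of the original set of formulae.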
Conclusions
We have put forward a presentation of classical propositional tableaux elaborated by application of some principles that are noteworthy in program design. Foremost among those principles is the one of separation of concerns: We have namely started by deriving from a straightforward specification an algorithm given as a set of recursive equations for computing all models of a finite set of formulae. The correctness of the algorithm is brought about hand-in-hand with its derivation by means of a basic inductive argument whose cases are each solved by calculational reasoning yielding identities between sets of interpretations that need not the usual "ping-pong" (or direct-and-converse) argument. Thereafter we discussed the employment of data structures, mainly with regard to a manual execution of the algorithm. A requirement of natural traceability and verification led us to the trees of sets or lists of formulae presented in [1,4], the correctness of which is immediate after their derivation as traces of the employment of the original equations. A further improvement avoids repetition of unmodified formulae giving rise to the trees presented in [3], whose correctness is in turn guaranteed by showing that they carry the same information as the former trees.
Smullyan's classical presentation [3] introduces instead the method as a proof procedure for establishing unsatisfiability of (finite) sets of (signed) formulae. The tableaux are given directly in the form of our improved trees of formulae. The proof of correctness is then as usual composed of two arguments, one of soundness and one of completeness, to the effect that unsatisfiable sets give rise to closed tableaux, i.e. ones in which every branch contains a contradiction and thus has no model. The proof of soundness is by a quite direct tree induction, whereas the proof of completeness involves showing that an open completed branch, i.e. one in which every formula has been fully decomposed, is a Hintikka set. Besides, Hintikka's lemma is proven, to the effect that every Hintikka set has a model.
In our experience, the use of the method as in the classical presentation leads students to the realisation that they either prove the given set of formulae inconsistent or can compute every counter-example (i.e. a sufficient characterisation thereof). Subsequently they tend to ask why we cannot establish such fact as a meta-theoretical result. Our presentation does precisely that -and the correctness of the method as a proof procedure follows as immediate corollary. The idea of computing all models of the given set of formula has led us to give an abstract formulation of the procedure. We then treat as a separate matter the question of the concrete trace of the manual execution of the method. As we have been able to check, this treatment provides the students with improved command over the method, i.e. they exercise a more sound domain over what they are doing and also over the various possible notations or manners of justification they can give thereof.
It could be argued that Smullyan's presentation and proof are scalable to infinite sets of formulae and to first-order logic, and one may therefore ask about this feature of our presentation. Concerning infinite sets of formulae, the first thing to say is that the validity of our equations is certainly not affected. Nevertheless, they cannot of course be interpreted anymore as an algorithm. Even if we assume as usual a principle of omniscience concerning the infinite sets, the method of choice of the formulae to be successively decomposed by application of the equations is essential for getting the right result. But, as is the case also with the classical presentation, there exist methods of orderly choice that guarantee (under the omniscience principle) the computation of all models and thus the correctness of the method. Generalisation to first-order logic, on the other hand, requires abandoning the idea of "computing all models", replacing it by e.g. "determining whether the set of formulae is or not (un)satisfiable".
We conclude that our presentation may contribute in a better way to the achievement of proficiency with understanding, which is our main learning objective. It also emphasizes design methodology, which we strive to do along and across the whole of the program of studies. It could also be argued that the method is tailored to just students of Computing Science or Software Engineering. We however believe that it can be taught also without much difficulty to Mathematics or Philosophy students and that the advantages we claim to obtain can also be appreciated in such cases. This, however, is yet to be checked out.
Finally, we should like to think of this work as one interpretation and case of the disclosing of the "doing" of Mathematics as advocated by Dijkstra [5]. We have tried to avoid all gaps of both mathematical and motivational nature. To our mind, this case is yet another sample of the unity of structure and method that mathematics and programming share. Exploiting such unity should be fruitful for improving understanding and thus better helping learning. | 2015-07-14T02:51:48.000Z | 2015-07-14T00:00:00.000 | {
"year": 2015,
"sha1": "90c9070b68a623e450eb08860e3597c1a71f7726",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "87ca4a090f85836cff6fbb35ab54fd155248f3f1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
55977308 | pes2o/s2orc | v3-fos-license | Career Maturity of Students with Visual Impairment in Relation to their Self Efficacy and Self Advocacy
Corresponding Author: Kaur Supreet, Department of Education, USOL, P.U. Chandigarh, India. Email: supreet10000@gmail.com Abstract: The study investigated the relationship of career maturity with career decision making self-efficacy and self-advocacy of students with visual impairment. A mixed-methods approach was employed for the study. Using purposive sampling, a sample of 100 students was taken for the study. The main findings are: there is a significant relationship between career maturity (competence test) and career decision making self-efficacy of the students with visual impairment; there is a significant relationship between career maturity (self-appraisal, occupational information, goal selection, planning) and self-advocacy of the students with visual impairment; there is no significant gender difference in career maturity of students with visual impairment. Career decision making self-efficacy and self-advocacy were found to be predictors of career maturity (self-appraisal, goal selection, planning) and contribute significantly to career maturity (self-appraisal, goal selection, planning) of students with visual impairment. Career decision making self-efficacy was found to be a predictor of career maturity (occupational information, problem solving) and contributes significantly to career maturity (occupational information, problem solving) of students with visual impairment. Various factors, such as socioeconomic status, uneducated parents, knowledge of Braille, etc., contributed to low career maturity of the students with visual impairment.
Introduction
The social organization of any society depends greatly on the career development of individuals. It is assumed that by the end of high school, adolescents have sufficient knowledge about the world of work and are in a position to make a career choice (Coertse and Schepers, 2004). Individuals with disabilities have a harder career development process than their peers and are more susceptible to vocational identity and career decision-making problems. There are a number of factors that influence the career decision making process for people with disabilities. Individual factors include gender, cultural background, socioeconomic status, self-esteem, self-efficacy (Szymanski and Hershenson, 1998) and disability status. Environmental factors such as family involvement, work experiences (Blustein et al., 2000;Ohler et al., 1996) and decision-making opportunities (Hagner and Salomone, 1989) have been found to affect the career decision-making abilities of individuals with disabilities. Individuals with disabilities should have an understanding of their disabilities so that they can advocate for themselves (Mellard and Hazel, 1992;Minskoff, 1994).
The World Health Organization has clearly distinguished the use of three terms: impairment, disability and handicap. Impairment refers to abnormalities of body structure and of the functions of an organ or organ system, i.e. problems at the organ level. Disability considers the results of impairment in terms of the functional performance of organs and activity by the individual. Handicap refers to the disadvantages faced by the individual as a result of impairments and disabilities; handicaps thus reflect the adjustment of the individual to his or her surroundings (WHO, 1976).
Vision Impairment
Visual handicap is defined in terms of visual sensitivity, ability to see and visual efficiency. Visual ability is the ability of the eye to see distant as well as nearby objects clearly, measured using the Snellen chart. Individuals who can see the letter 'E' only from a distance of 20 feet rather than 200 feet are legally blind. Children with low or residual vision have some sight, but their visual sensitivity does not exceed 20/70. These children have coordination and mobility problems. The causes of blindness can be both genetic and environmental. Students with visual impairment have problems in reading, viewing boards, overheads, videos and other visual demonstrations. They have difficulties in getting around the campus as well as in finding places or materials in classrooms.
Career Maturity
Career maturity can be defined as an individual's readiness to make informed career choices and decisions that are both realistic and age-appropriate (Savickas, 1984). Career maturity implies knowledge of professional behaviour. Therefore, in the career development process, career maturity plays an important role in influencing career decision-making (Creed et al., 2006). Career maturity is the extent to which an individual is able to accomplish the career developmental tasks suitable to the appropriate stage of his or her life (Miles, 2008). Therefore, career maturity can be a useful measure for individuals with disabilities, because they can perform according to their ability.
Dimensions of Career Maturity
There are two dimensions of career maturity (Crites, 1976):
Attitudinal
The attitudinal dimension refers to the attitudes and emotions of the individual about making a career choice and whether they enable him or her to continue with that choice when engaging in the occupation. Career-related orientation, involvement, planning and exploration have been defined as representing career attitudes or career behaviours (Schimitt-Rodermund and Silbereisen, 1998).
Cognitive
The cognitive dimension refers to individuals' awareness of how to make a career choice and their understanding of other career possibilities. It consists of the knowledge and competencies required to make career decisions, which include occupational information, self-appraisal and planning (Creed and Patton, 2003). For students with disabilities, the cognitive dimension is significant in their career decision making process. These students face the cognitive challenges of selecting a career, finishing college and finding employment.
Career Decision Making Self-Efficacy
According to Taylor and Betz (1983), career decision making self-efficacy is an individual's belief that he or she can successfully complete the tasks indispensable to career decision making. Career decision making self-efficacy is an individual's belief that he or she can perform the activities and obligations correctly to make a powerful career decision (Gati and Amir, 2006).

Self-Advocacy

Balcazar et al. (1996) defined self-advocacy as "an assertion by people with developmental disabilities that they want to be seen as people who have something to offer and skills to share, rather than be seen as people with handicaps or limitations." Self-advocacy influences one's ability to maintain an independent standard of living. It impacts one's achievements within the school and the community. Particularly for students with disabilities, communicating rights and requesting appropriate accommodations have been areas of weakness (Swanson, 2008). Self-advocacy refers to the skills one uses to communicate, express needs and desires or state one's own interests, in order to gain access to one's needs and rights (Van Reusen et al., 2015).
Rationale of the Study
The main purpose of the present work is to study the career maturity of students with visual impairment in relation to their career decision making self-efficacy and self-advocacy. The selection of an occupation is one of the most important decisions for an adolescent. These days a large number of career options are available to students; therefore, it is a difficult task for individuals to make a mature choice. Students with visual impairment face many challenges in their career decision making process and in the school-to-work transition. They face environmental and behavioral barriers that can impede their advancement in career development.
Delimitations
The study under investigation was delimited to the following:
• The study was delimited to special schools of Delhi and Chandigarh only
• The study was delimited to students with visual impairment in the age group of 14-22 years
• The study was further delimited to one type of disability, i.e., visual impairment
Objectives
The specific objectives of the study were:
• To study the nature of the variables under study, viz. career maturity, career decision making self-efficacy and self-advocacy, in the students with visual impairment
• To study the relationship of career maturity with career decision making self-efficacy of the students with visual impairment
• To study the relationship of career maturity with self-advocacy of the students with visual impairment
• To find out whether boys and girls with visual impairment exhibit any differences with regard to their career maturity
• To find out the predictors of career maturity from among the independent variables of career decision making self-efficacy and self-advocacy of the students with visual impairment
Hypotheses
Based on above mentioned objectives following hypotheses have been framed: • There exists no significant relationship between the sub dimension of career maturity (self-appraisal) and career decision making self-efficacy of students with visual impairment • There exists no significant relationship between the sub dimension of career maturity (occupational information) and career decision making selfefficacy of students with visual impairment • There exists no significant relationship between the sub dimension of career maturity (goal selection) and career decision making self-efficacy of students with visual impairment • There exists no significant relationship between the sub dimension of career maturity (planning) and career decision making self-efficacy of students with visual impairment • There exists no significant relationship between sub dimension of career maturity (problem solving) and career decision making self-efficacy of students with visual impairment • There exists no significant relationship between dimension of career maturity (attitude scale) and career decision making self-efficacy of students with visual impairment H 2 • There exists no significant relationship between sub dimension of career maturity (self-appraisal) and self-advocacy of students with visual impairment • There exists no significant relationship between sub dimension of career maturity (occupational information) and self-advocacy of students with visual impairment • There exists no significant relationship between sub dimension of career maturity (goal selection) and self-advocacy of students with visual impairment. • There exists no significant relationship between the sub dimension of career maturity (planning) and self-advocacy of students with visual impairment. 
• There exists no significant relationship between the sub dimension of career maturity (problem solving) and self-advocacy of students with visual impairment • There exists no significant relationship between the dimension of career maturity (attitude scale) and self-advocacy of students with visual impairment H 3 • There exists no significant gender difference in the sub dimension of career maturity (self-appraisal) of students with visual impairment • There exists no significant gender difference in the sub dimension of career maturity (occupational information) of students with visual impairment • There exists no significant gender difference in the sub dimension of career maturity (goal selection) of students with visual impairment • There exists no significant gender difference in the sub dimension of career maturity (planning) of students with visual impairment • There exists no significant gender difference in the sub dimension of career maturity (problem solving) of students with visual impairment • There exists no significant gender difference in the dimension of career maturity (attitude scale) of students with visual impairment H 4 • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (self-appraisal) independently as well as conjointly among the students with visual impairment • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (occupational information) independently as well as conjointly among the students with visual impairment • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (goal selection) independently as well as conjointly among the students with visual impairment • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (planning) independently as well as conjointly among the students with visual impairment • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (problem solving) independently as well as conjointly among the students with visual impairment • None of the independent variable of career decision making self-efficacy and self-advocacy would contribute significantly in predicting the career maturity (attitude scale) independently as well as conjointly among the students with visual impairment
Method and Procedure
The method of the study was designed to best address the research questions through a combination of quantitative and qualitative approaches in a sequential manner. This is a mixed-methods design in which one method is used to further explore and expand the findings of another (Creswell, 2003;Tashakkori and Teddlie, 1998).
Quantitative Descriptive Phase
The purpose of this phase of the research study was to collect quantitative data through survey instruments from students with visual impairment studying in special schools of Delhi and Chandigarh. An exploratory descriptive survey method was employed in this study. The study was completed in two phases. In the first phase, the tool, i.e., the self-advocacy questionnaire, was constructed and validated by the investigator. In the second phase the data were collected, analyzed and interpreted. A sample of 100 respondents (62 boys and 38 girls) was selected using purposive sampling. The sample consisted of students with visual impairment studying in different special schools: 50 students from the National Confederation of Blind (Rohini) and 50 students from the National Confederation of Blind (Chandigarh).
Tools Used
• Career Maturity Inventory (CMI; Gupta, 1989)
• Career Decision Making Self-Efficacy Scale-Short Form (CDSE-SF; Betz, Klein and Taylor, 1996)
• Self-advocacy questionnaire developed by the investigator herself
Statistical Techniques Used
For the analysis of data, the following statistical techniques were used (the standard formulas are sketched below):
• Descriptive statistics such as mean, standard deviation, skewness and kurtosis were worked out to ascertain the nature of the distribution of the scores on the dependent variable of career maturity and its dimensions and on the independent variables of career decision making self-efficacy and self-advocacy
• Pearson's product-moment method was used to compute the correlation of career maturity and its dimensions with career decision making self-efficacy and self-advocacy
• The t-ratio was employed to find out the gender differences on career maturity and its dimensions
• Step-wise multiple regression analysis was done to find out the predictors (contributors) of the criterion variable of career maturity and its dimensions from among the independent variables of career decision making self-efficacy and self-advocacy
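For reference, the textbook forms of the two inferential statistics named above are as follows (our addition; the study reports only the resulting values and presumably computed them with standard software):

$$r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sqrt{\sum (X - \bar{X})^2 \, \sum (Y - \bar{Y})^2}}, \qquad t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$$

where X and Y are paired scores (for example, a career maturity dimension and self-advocacy), and the subscripts 1 and 2 refer to the boys' and girls' groups respectively.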
Qualitative Phase
As qualitative studies depend on collecting data from participants in their natural settings, and most of the data are unstructured text rather than numbers, a high level of language skill is required. In selecting students for interviews, the researcher first and foremost used their career maturity score as the selection criterion, choosing those students who had low scores on the career maturity scale. The interviews were semi-structured, and each participant was asked the same questions. For the qualitative phase of the study, the sample comprised 4 students with visual impairment. The data for this phase were collected through in-depth interviews and observations. All interviews were conducted at the convenience of the participants. Notes were taken to record relevant information.
Analysis and Interpretation of the Results
• Analysis of descriptive statistics for the students with visual impairment
• Correlation of career maturity with career decision making self-efficacy and self-advocacy for the students with visual impairment
• Gender difference for the students with visual impairment on career maturity
• Regression Analysis for students with visual impairment
• Factors contributing to low career maturity of the students with visual impairment
Analysis of Descriptive Statistics for Students with Visual Impairment
The mean, S.D., Sk and Ku of the variables under study, i.e., career maturity, career decision making self-efficacy and self-advocacy, in the case of students with visual impairment have been given in Table 1.
Career Maturity
The variable of career maturity includes two types of measures, i.e., the attitude scale and the competence test. The attitude scale measures the conative aspects of decision-making. The competence test measures the cognitive variables in choosing a vocation. In all there are five parts of the competence test (self-appraisal; occupational information; goal selection; planning; problem solving).
Competence Test
Self-Appraisal

Table 1 reveals that for the students with visual impairment, the mean and S.D. of career maturity (self-appraisal) were 6.47 and 2.01 respectively. The values of mean and S.D. were average. This depicts that students had average ability to accurately appraise their own strengths related to career decisions. Sk is found to be -0.59, which is negative and shows that the distribution is negatively skewed. Ku is -0.073, which is lesser than Ku for the normal curve and shows that the curve is leptokurtic.

Occupational Information

Table 1 reveals that for the students with visual impairment, the mean and S.D. of career maturity (occupational information) were 8.61 and 1.96 respectively. The values of mean and S.D. were average. It reveals that students had average ability to locate various sources of information about college majors and occupations. Sk is found to be -0.71, which is negative and shows that the distribution is negatively skewed. Ku is 1.420, which is greater than Ku for the normal curve and shows that the curve is platykurtic.

Goal Selection

Table 1 reveals that for the students with visual impairment, the mean and S.D. of career maturity (goal selection) were 8.39 and 2.06 respectively. The mean was found to be high as per norms. It reveals that students had high ability to match their own characteristics to the demands of careers. Sk was found to be -0.87, which is negative and shows that the distribution is negatively skewed. Ku is 0.706, which is greater than Ku for the normal curve and shows that the curve is platykurtic.

Planning

Table 1 reveals that for the students with visual impairment, the mean and S.D. of career maturity (planning) were 9.29 and 2.35 respectively. The mean was found to be high as per norms. It depicts that students had a high tendency to think about the various means which are necessary to attain a desired end. Sk is found to be -0.94, which is negative and shows that the distribution is negatively skewed. Ku is 1.325, which is greater than Ku for the normal curve and shows that the curve is platykurtic.

Problem Solving

Table 1 reveals that for the students with visual impairment, the mean and S.D. of career maturity (problem solving) were 6.33 and 1.80 respectively. The mean was found to be high as per norms. It reveals that students possessed high capability in solving problems that arise in the process of decision making. Sk is found to be -0.38, which is negative and shows that the distribution is negatively skewed. Ku is -0.625, which is lesser than Ku for the normal curve and shows that the curve is leptokurtic.

Attitude Scale

Table 1 reveals that for the students with visual impairment, the mean and S.D. of the career attitude dimension of career maturity were 26.72 and 3.15 respectively. The mean was found to be low as per norms. The low score indicates that students with visual impairment had a less developed attitude towards career decisions. Sk is found to be -0.09, which is negative and shows that the distribution is negatively skewed. Ku is 0.959, which is greater than Ku for the normal curve and shows that the curve is platykurtic.

Career Decision Making Self Efficacy

Table 1 shows that for the students with visual impairment, the mean and S.D. of career decision making self-efficacy were 94.09 and 17.31 respectively. The values of mean and S.D. were average. This depicts that students with visual impairment can complete the tasks necessary to make career decisions. Sk is found to be -0.87, which is negative and shows that the data are negatively skewed. Ku is 4.090, which is greater than the Ku of 0.263 for the normal curve and exhibits that the curve is platykurtic.

Self-Advocacy

Table 1 shows that the mean and S.D. of self-advocacy for the students with visual impairment were 27.85 and 3.72 respectively. The values of mean and S.D. were average. This reveals that students with visual impairment understand their disability and were aware of their strengths and weaknesses. Sk is found to be -0.72, which is negative and shows that the data are negatively skewed. Ku is -0.464, which is lesser than the Ku of 0.263 for the normal curve and exhibits that the curve is leptokurtic.
Analysis of Correlation
Correlation of Career Maturity with Career Decision Making Self-Efficacy

Table 2 shows that the correlations of the dimensions of career maturity (competence and attitude) with career decision making self-efficacy, viz. self-appraisal, occupational information, goal selection, planning, problem solving and attitude scale, are 0.443, 0.467, 0.384, 0.432, 0.340 and 0.037 respectively. The values 0.443, 0.467, 0.384, 0.432 and 0.340 are more than the table value of 0.254 at the 0.01 level of significance and hence significant at the 0.01 level. No significant correlation is found between the dimension of career maturity (attitude scale = 0.037) and career decision making self-efficacy. Thus there exists a significant relationship between the dimension of career maturity (competence test) and career decision making self-efficacy.
Correlation of sub Dimension of Career Maturity (Self-Appraisal) with Career Decision Making Self Efficacy Table 2 shows that the significant relationship is found between sub dimension of career maturity (self-appraisal) and career decision making self-efficacy (0.443) at 0.01 level, for the students with visual impairment. Hence, the hypothesis H 1 (i) there exists no significant relationship between self-appraisal and career decision making selfefficacy for the students with visual impairment, is not accepted. The result shows that self-appraisal contributes towards career decision making self -efficacy. Above result also depicts that students who were more confident in studying their own abilities, talents and potentialities faced lesser difficulties in decision making. Table 2 reveals that significant relationship is found between sub dimension of career maturity (occupational information) and career decision making self-efficacy (0.467) at 0.01 level, for the students with visual impairment. Hence, the hypothesis H 1 (ii) there exists no significant relationship between sub dimension of career maturity (occupational information) and career decision making self-efficacy for the students with visual impairment, is not accepted. The result reveals that students who were able to collect occupational information from various sources, could progress in the career decision-making process. Table 2 shows that significant relationship is found between sub dimension of career maturity (goal selection) and career decision making self-efficacy (0.384) at 0.01 level, for the students with visual impairment. H 1 (iii) there exists no significant relationship between sub dimension of career maturity (goal selection) and career decision making self-efficacy for the students with visual impairment, is not accepted. The result reveals that students were able to select a goal suitable to their capacities in decision making.
Correlation of sub dimension of Career Maturity (Goal Selection) with Career Decision Making Self Efficacy
Correlation of sub Dimension of Career Maturity (Planning) with Career Decision Making Self Efficacy Table 2 shows that significant relationship is found between sub dimension of career maturity (planning) and career decision making self-efficacy (0.432) at 0.01 level, for the students with visual impairment. Hence, the hypothesis H 1 (iv) there exists no significant relationship between sub dimension of career maturity (planning) and career decision making self-efficacy for the students with visual impairment, is not accepted. The result demonstrates that students, who were able to make and execute plans, faced less difficulty in career decision making. Table 2 shows that significant relationship is found between sub dimension of career maturity (problem solving) and career decision making self-efficacy (0.340) at 0.01 level, for the students with visual impairment. Hence, the hypothesis H 1 (v) there exists no significant relationship between career maturity (problem solving) and career decision making self-efficacy for the students with visual impairment, is not accepted. The result depicts that individuals with higher problem-solving skills were more confident in their decision-making ability and career potential and more certain about their educational and career choice.
Correlation of sub Dimension of Career Maturity (Problem Solving) with Career Decision Making Self Efficacy
It is clear from above discussion that out of six dimensions of career maturity correlation is found on five dimensions namely self-appraisal, occupational information, goal selection, planning and problem solving with career decision making self-efficacy for the students with visual impairment. These findings are supported by Barker and Kellen (1998) who found that one of the most important tasks that one undertakes as part of the career decision making process is to collect information about the possible career options that one is interested in. These finding are also supported by Powell and Luzzo (1998) who found that those who had more personal control over their career decisions had more positive attitudes toward career decision-making and were more career aware.
Correlation of Career Maturity with Self advocacy
The correlations between the variables are presented below. Table 2 shows that the correlations of self-advocacy with the sub dimensions of career maturity, viz. self-appraisal, occupational information, goal selection, planning, problem solving, and the attitude scale, are 0.339, 0.292, 0.302, 0.310, 0.068, and 0.171, respectively. The values 0.339, 0.292, 0.302, and 0.310 exceed the table value of 0.254 at the 0.01 level of significance and hence are significant at the 0.01 level. No significant relationship is found between the problem solving (0.068) sub dimension of career maturity and self-advocacy, or between the attitude (0.171) dimension of career maturity and self-advocacy. Thus, there exists a significant relationship between the sub dimensions of career maturity, i.e., self-appraisal, occupational information, goal selection, and planning, and self-advocacy.
Correlation of Sub Dimension of Career Maturity (Self-Appraisal) with Self-Advocacy
Table 2 shows that a significant relationship is found between the sub dimension of career maturity (self-appraisal) and self-advocacy (0.339) at the 0.01 level for the students with visual impairment. Hence, the hypothesis H2(i), that there exists no significant relationship between the sub dimension of career maturity (self-appraisal) and self-advocacy for the students with visual impairment, is not accepted. It means that self-appraisal contributes towards self-advocacy. The students who were aware of their disability could choose a better career for themselves.
Correlation of Sub Dimension of Career Maturity (Occupational Information) with Self-Advocacy
Table 2 shows that a significant relationship is found between the sub dimension of career maturity (occupational information) and self-advocacy (0.292) at the 0.01 level for the students with visual impairment. Hence, the hypothesis H2(ii), that there exists no significant relationship between the sub dimension of career maturity (occupational information) and self-advocacy for the students with visual impairment, is not accepted. It means that the students with disabilities who understood their strengths and weaknesses were able to obtain information about various occupations.
Correlation of Sub Dimension of Career Maturity (Goal Selection) with Self-Advocacy
Table 2 shows that a significant relationship is found between the sub dimension of career maturity (goal selection) and self-advocacy (0.302) at the 0.01 level for the students with visual impairment. Hence, the hypothesis H2(iii), that there exists no significant relationship between career maturity (goal selection) and self-advocacy for the students with visual impairment, is not accepted. The result demonstrates that students who were aware of their strengths, weaknesses, and disability rights were able to attain their goals.
Correlation of Sub Dimension of Career Maturity (Planning) with Self-Advocacy
Table 2 shows that a significant relationship is found between the sub dimension of career maturity (planning) and self-advocacy (0.310) at the 0.01 level for the students with visual impairment. Hence, the hypothesis H2(iv), that there exists no significant relationship between career maturity (planning) and self-advocacy for the students with visual impairment, is not accepted. The result depicts that self-advocacy strategies helped the students to function more independently by identifying their areas of weakness and accessing resources to meet their goals.
It is clear from the above discussion that, out of the six dimensions of career maturity, a significant correlation with self-advocacy is found for four dimensions, namely self-appraisal, occupational information, goal selection, and planning, for the students with visual impairment. This finding is supported by Luzzo (1995), who reported that students with hearing, visual, or physical disabilities were better able to describe the impact of their disability on academic and career development than were students with other types of disabilities. Students with disabilities who participate in self-advocacy training can develop individualized career plans.
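For readers who wish to reproduce this kind of analysis, the sketch below shows one way to compute Pearson correlations between career-maturity sub-dimension scores and a criterion (here, self-advocacy) and to compare them against the tabled critical value used above (r = 0.254 at the 0.01 level for n = 100, df = 98). The variable names and the simulated scores are illustrative assumptions, not the study's data; only the testing logic mirrors the analysis reported in Table 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # sample size used in the study

# Illustrative (simulated) scores; replace with the actual measurements.
self_advocacy = rng.normal(50, 10, n)
sub_dimensions = {
    "self_appraisal": self_advocacy * 0.3 + rng.normal(0, 10, n),
    "planning": self_advocacy * 0.3 + rng.normal(0, 10, n),
    "problem_solving": rng.normal(0, 10, n),
}

CRITICAL_R_01 = 0.254  # tabled critical value at the 0.01 level for df = 98

for name, scores in sub_dimensions.items():
    r, p = stats.pearsonr(scores, self_advocacy)
    significant = abs(r) > CRITICAL_R_01
    print(f"{name}: r = {r:.3f}, p = {p:.4f}, significant at 0.01: {significant}")
```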
Gender Differences on Career Maturity for the Students with Visual Impairment
Discussion Based on Table 3
The significance of differences between the means of boys and girls on career maturity, along with the six sub-variables of the CMI, was computed using t-ratios to examine gender differences (Table 4). All mean differences presented in Table 4 are insignificant between the boys and girls on the various dimensions of career maturity, namely self-appraisal (t = -0.31), occupational information (t = -0.29), goal selection (t = 0.18), planning (t = 0.61), problem solving (t = 0.53), and the attitude scale (sub test) (t = 0.40). Table 4 shows that the F-value for this step is 6.44, which is significant at the 0.05 level. This demonstrates that the increase in the prediction value after the addition of self-advocacy is significant. In the present study, career decision making self-efficacy and self-advocacy contribute conjointly as well as independently towards the prediction of career maturity (self-appraisal). Thus, the null hypothesis H4(i), that none of the independent variables of career decision making self-efficacy and self-advocacy would contribute significantly in predicting career maturity (self-appraisal) independently as well as conjointly among students with visual impairment, is not accepted. Table 5 shows that the F-value for this step was 27.38, which is significant at the 0.01 level. This demonstrates that career decision making self-efficacy was a significant predictor of the criterion variable, i.e., career maturity (occupational information), of the students with visual impairment. Thus, the null hypothesis H4(ii), that none of the independent variables of career decision making self-efficacy and self-advocacy would contribute significantly in predicting career maturity (occupational information) independently as well as conjointly among students with visual impairment, is not accepted in the present investigation.
Regression Analysis for Students with Visual Impairment
From Table 3, it is clear that, out of the six dimensions of career maturity, no significant gender difference was observed in the case of students with visual impairment. These findings are also supported by Salami (2008), who demonstrated that no significant differences were found between males and females in their career maturity.
Table: Step-wise multiple regression equation for the sub dimension of career maturity (self-appraisal) for the students with visual impairment (N = 100); the change statistics are not reproduced here. **Significant at 0.01 level; *Significant at 0.05 level; a. Critical value 3.95 at the 0.05 level and 6.90 at the 0.01 level for df = 98; b. Predictors: (constant), CDMSE; c. Predictors: (constant), CDMSE, SA.
Table 6 shows that the F-value for this step is 4.910, which is significant at the 0.05 level. This demonstrates that the increase in the prediction value after the addition of self-advocacy is significant. In the present study, career decision making self-efficacy and self-advocacy contribute conjointly as well as independently towards the prediction of career maturity (goal selection). Thus, the null hypothesis H4(iii), that none of the independent variables of career decision making self-efficacy and self-advocacy would contribute significantly in predicting career maturity (goal selection) independently as well as conjointly among students with visual impairment, is not accepted. Table 7 shows that the F-value for this step is 4.850, which is significant at the 0.05 level. This demonstrates that the increase in the prediction value after the addition of self-advocacy is significant. Career decision making self-efficacy and self-advocacy contribute conjointly as well as independently towards the prediction of career maturity (planning). Thus, the null hypothesis H4(iv), that none of the independent variables of career decision making self-efficacy and self-advocacy would contribute significantly in predicting career maturity (planning) independently as well as conjointly among students with visual impairment, is not accepted. Table 8 shows that the F-value for this step is 12.708, which is significant at the 0.01 level. This demonstrates that career decision making self-efficacy is a significant predictor of career maturity (problem solving) of the students with visual impairment. Thus, the null hypothesis H4(v), that none of the independent variables of career decision making self-efficacy and self-advocacy would contribute significantly in predicting career maturity (problem solving) independently as well as conjointly among students with visual impairment, is not accepted.
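The step-wise (hierarchical) regressions reported above enter career decision making self-efficacy (CDMSE) first and add self-advocacy (SA) in a second step, using the F-value for the change in R-squared to test whether the addition improves prediction of a career-maturity sub-dimension. The sketch below is a minimal illustration of that F-change logic with simulated placeholder data; it is not the study's dataset or its exact SPSS output.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
cdmse = rng.normal(0, 1, n)             # career decision making self-efficacy
sa = 0.4 * cdmse + rng.normal(0, 1, n)  # self-advocacy
goal_selection = 0.5 * cdmse + 0.3 * sa + rng.normal(0, 1, n)  # criterion

# Step 1: CDMSE only
X1 = sm.add_constant(cdmse)
step1 = sm.OLS(goal_selection, X1).fit()

# Step 2: CDMSE + SA
X2 = sm.add_constant(np.column_stack([cdmse, sa]))
step2 = sm.OLS(goal_selection, X2).fit()

# F-change for adding one predictor (SA) at step 2
df_num = 1                        # predictors added at step 2
df_den = n - X2.shape[1]          # residual df of the full model
r2_change = step2.rsquared - step1.rsquared
f_change = (r2_change / df_num) / ((1 - step2.rsquared) / df_den)
print(f"R2 step1 = {step1.rsquared:.3f}, R2 step2 = {step2.rsquared:.3f}")
print(f"F-change({df_num}, {df_den}) = {f_change:.2f}")
```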
Factors Contributing to Low Career Maturity of Students with Visual Impairment
Educating a person with visual impairment is challenging, because an individual with visual impairment faces many barriers related to participation in social activities, locomotion, education, employment, and so on. It also requires more resources to educate children with visual impairment than their sighted peers. Unlike sighted children, who can easily learn many things merely by observing and imitating others, a child with visual impairment requires help to learn many concepts and thus requires more time and care. Nevertheless, people with visual impairment have the ability to learn various useful skills if they are properly guided by relevant professionals, and with these skills they can live independent lives. The research questions explored possible factors contributing to the low career maturity of the students with visual impairment.
Student-I
Student 1 is a 17-year-old male; his scores on career attitude, self-appraisal, occupational information, goal selection, planning, and problem solving were 20, 2, 2, 2, 2, and 3, respectively. Due to a lack of confidence and self-knowledge, he could not make a decision regarding career choice. Self-knowledge and self-confidence are primary factors for students with visual impairment; self-knowledge means an understanding of one's own abilities, character, feelings, or motivations. The student had uneducated parents, and less educated parents are less likely to be involved in their children's education. Parents play an important role in their children's career decision making process and are important role models for their children (Morrow, 1995). The student belonged to a low socioeconomic status family, and parents of low-income families participate less in their children's education. The student also had poor decision-making and problem-solving skills, because he had no experience of part-time employment during his secondary school years. He was not interested in his studies and was not getting good marks. He did not have knowledge about various occupations and had no knowledge of braille or computers; he was only now learning braille, because he had earlier studied in a mainstream school. When asked what his ideal job would be, he did not respond confidently. He stated that nobody had told him what he should do, and that it was very difficult for him to decide about his future because he did not know what he wanted to do.
Student-II
Student 2's scores on career attitude, self-appraisal, occupational information, goal selection, planning, and problem solving were 23, 1, 2, 2, 2, and 3, respectively. Due to a lack of confidence and self-knowledge, the student could not make a decision regarding career choice; poor self-knowledge makes it difficult for an individual to engage in the career decision-making process at all. The student had uneducated parents, and less educated parents are less likely to be involved in their children's education. The family is the most basic institution in our culture and the primary setting where children learn to interact with their environment. The student belonged to a low socioeconomic status family, and low-SES students have lower levels of career maturity due to a lack of occupational information and employment opportunities (Kerka, 1998). The student also had poor decision-making and problem-solving skills, as evident from the scores, because he had no experience of part-time employment during his secondary school years and was less likely to have participated in work activities. The student did not have knowledge about various occupations, had no knowledge of braille (he was learning it now) or computers, and no facilities were available at home. He was not interested in his studies and did not get good marks. He stated that he had an interest in music; when asked whether he could choose music as a career, he did not respond confidently. He stated that there was no proper guidance provided in the school.
Student-III
Student 3's scores on career attitude, self-appraisal, occupational information, goal selection, planning, and problem solving were 25, 3, 6, 5, 5, and 4, respectively. The student reported that he had not decided what he would like to do after school and was not confident about his capabilities; due to this lack of confidence and self-knowledge, he could not make a decision regarding career choice. His father had completed matric and ran his own shop, and the student belonged to a middle socioeconomic status family. The family is the most basic institution in our culture and the primary setting where children learn to interact with their environment. The student also had poor decision-making and problem-solving skills and was less likely to have participated in school activities. He had a little knowledge about various occupations, but he was not aware of what kind of job he could do. His father, a shopkeeper, often discussed some occupations with him, but he had not decided what he would do in the future. The student used braille, had knowledge of computers, and was able to access the internet in school, but he was not interested in his studies and was getting average marks. He had an interest in music. He stated that people without disabilities think that disabled people cannot do any kind of job.
He was also not aware of how the subjects he learned in class could be helpful to him in the future (Table 9).
Student-IV
Student 4's scores on career attitude, self-appraisal, occupational information, goal selection, planning, and problem solving were 27, 2, 5, 4, 4, and 3, respectively. Due to a lack of confidence and self-knowledge, the student could not make a decision regarding career choice. The student had less educated parents; parents maintain an important role in the lives of students with disabilities by encouraging, supporting, and understanding them and the issues they face in life. The student had poor decision-making and problem-solving skills, as evident from the scores, because he had little experience of part-time employment during his school years. Students with disabilities are less likely to obtain work experience and therefore do not receive the benefit of having worked; as a result, these students may not have the chance to practice decision-making, problem solving, and exploration of different jobs. This student had little knowledge about various occupations. His father was a farmer, and he said that his father did not know about various kinds of jobs, so he was dependent on his school teachers for his studies. The student used braille and had knowledge of computers. He said that he could gather information from the internet about some occupations that were suitable for him, but he did not know how to proceed further.
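For a side-by-side comparison, the sub-scale scores quoted in the four case descriptions above can be gathered into a small table. The sketch below simply restates those reported values with pandas; it adds no new data, and the column names are shorthand for the sub-dimensions named in the text.

```python
import pandas as pd

# Sub-scale scores reported in the four case descriptions above
# (columns follow the order used in the text).
cases = pd.DataFrame(
    [
        ["Student-I",   20, 2, 2, 2, 2, 3],
        ["Student-II",  23, 1, 2, 2, 2, 3],
        ["Student-III", 25, 3, 6, 5, 5, 4],
        ["Student-IV",  27, 2, 5, 4, 4, 3],
    ],
    columns=["student", "attitude", "self_appraisal", "occupational_info",
             "goal_selection", "planning", "problem_solving"],
)
print(cases.set_index("student"))
```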
The above discussion reveals that various factors contribute towards the low career maturity of students with visual impairment. Students were not aware that the subjects they had learned in class could be helpful in the future. Poor self-knowledge regarding one's capabilities, interests, or personal traits is one of the issues relating to lack of information; the students had limited information about occupations and what is involved in them, about the various options that were available, and about the ways in which one can obtain career information. Parents maintain an important role in the lives of students with disabilities by encouraging, supporting, and understanding them and the issues they face in life; students who feel they are not supported are less likely to be engaged in school and have a less positive attitude towards their future careers. Lindstrom et al. (2007) found that the family plays an important role in the career decision making process and that low socioeconomic status has a direct effect on career development and vocational identity. These students depend upon their school teachers for guidance.
Educational Implications of the Study
• Career maturity levels and career decision-making abilities are very important to people with disabilities. The study reveals that there is a relationship between career maturity and career decision making for the students with visual impairment, which encourages the growth of career maturity in the students by assisting them to obtain employment.
• Facilitating informed choice for students by guiding them in the discovery of more detailed information about particular occupations, any potential difficulties their disability may cause in particular occupational roles, and potential solutions to these difficulties.
• Educating students about the responsibilities of workplaces and institutions of higher education and training to provide necessary accommodations.
• Providing assertiveness training to enable disabled students to confidently and appropriately explain their needs and make requests for accommodations, in job interview situations and in everyday interactions with workmates, colleagues, customers, clients, and others.
Conclusion
The present study investigated the relationship of career maturity with career decision making self-efficacy and self-advocacy among students with visual impairment. The results show that there exists a significant relationship between the dimensions of career maturity and career decision making self-efficacy of the students with visual impairment, and a significant relationship between the sub dimensions of career maturity and self-advocacy of students with visual impairment. Career decision making self-efficacy and self-advocacy were significant predictors of the sub dimensions of career maturity of students with visual impairment.
There were various factors which contributed to the low career maturity of students with various types of disabilities. The study would be useful for teachers and administrators in providing appropriate guidance and counseling to develop self-awareness, enlarge career awareness, and develop an appropriate attitude towards work in students with disabilities.
225631635 | pes2o/s2orc | v3-fos-license | Normative Values of College-Aged Men and Women for the YMCA Bench Press Test for Muscular Endurance
The current investigation reports percentile normative values for college-aged men and women (18-25 years) on the YMCA bench press endurance test (YMCA-BPT). Previously reported normative values did not include the sample size or population-specific information. Participants in this investigation were healthy men and women who completed a standardized warm-up and then underwent the procedures for the YMCA-BPT. Demographic data are reported along with YMCA-BPT scores, including averages of 34.18 ± 12.51 for men, 25.26 ± 10.20 for women, and 29.69 ± 12.26 for the combined group (repetitions ± standard deviation). Percentile scores by gender are also reported.
Introduction
Physical fitness, a multi-faceted parameter, is comprised of cardiorespiratory fitness, body composition, flexibility and muscular strength and endurance. Muscular endurance (ME), the ability to sustain a given level of submaximal force or repetitions of that force over time [1], is inversely proportional to intensity [2]. In addition, ME is a common measure selectively associated with physical performance [1] and has also been used as a predictor of health [3]. The Young Men's Christian Association (YMCA) Muscular Endurance Bench Press Test (YMCA-BPT) is a protocol specifically used to assess upper body ME that was first reported in 1989 [4] in the well-known text, "Y's Ways to Fitness." However, the normative values utilized for comparisons of performance were reported with no information on the total number or characteristics of the participants beyond age and gender.
The purpose of the current investigation was to re-evaluate and report new percentile norms for the YMCA-BPT for a college-aged population (18 -25 years) of men and women. A previous version of the American College of Sports Medicine (ACSM) guidelines [5] references the norms of this test that are now more than 20 years old. Overall, normative referencing is a valuable tool that allows an individual to be compared by their test score to a population of peers' performance of the same test. However, it is important that the norms are relevant to the population to maintain their value.
Participants
Two hundred healthy participants, men (n=100) and women (n=100), between 18-29 years of age were recruited from a university population to participate in this project. Participants were healthy, had not been involved in varsity or club athletics (i.e., organized sports teams) in the last six months preceding data collection, had no orthopedic limitations that would have impaired their ability to perform physical tasks, and did not consume alcohol or participate in acute exercise (i.e., physical activity above their daily routine) within twelve hours prior to laboratory testing. The university's Institutional Review Board approved the study and all methods were carried out in accordance with approved institutional guidelines and regulations. All participants provided written informed consent to participate.
Overview of Procedures
Following the informed consent process, demographic information and anthropometric measurements were collected. Height was measured using a stadiometer (Invicta Plastics Limited, Leicester, England) to the nearest centimeter, while weight was measured using a digital scale (BWB-800, Tanita Corp, Japan) to the nearest tenth of a kilogram. Each participant completed an aerobic warm-up and a total of seven stretches designed to stretch all major muscle groups. Following warm-up each participant was instructed on the testing protocol, sat quietly for five minutes, and then performed the test.
Warm-up
All participants completed an aerobic warm-up consisting of treadmill (TMX425 Trackmaster, Full Vision, Newton, KS) walking for five minutes at a self-selected speed between 2.5 and 3.5 mph (4.0 to 5.6 kph) and a 1.0 to 5.0% grade incline. Upon completion, participants performed a series of static stretches targeting the major muscle groups of the legs, trunk, shoulder/chest, and back. Following the stretching protocol, each participant underwent barbell bench press familiarization. Using a flat bench with a barbell rack, each individual completed five repetitions at 15 lbs (6.8 kg) for women and 45 lbs (20.4 kg) for men (Ultra Lite 6-foot [1.8 m] aluminum Olympic barbell and weight plates, Hampton, Inc., Ventura, CA) at a pace of 30 reps/min via metronome tones at 60 beats/min (tone in-synch with the top and bottom of the bench press cycle; Franz Metronome, New Haven, CT), followed by two repetitions with the testing weight of 35 lbs (15.9 kg) for women and 80 lbs (36.3 kg) for men at the same pace.
Muscular Endurance Testing
The YMCA-BPT protocol [4] was administered with the participant lying supine on the flat bench with their feet flat on the floor. The barbell was handed to the participant by a spotter in the down position (elbows flexed and palms up), with the participant gripping the barbell with hands shoulder-width apart. The metronome was set at 60 beats/min; on the sound of the tone, the participant pressed the barbell up, fully extending the elbows, which was counted as one repetition. In order to continue, participants were required to stay in-synch with the metronome during both the down and extended positions. Participants continued repetitions until fatigue or until the 30 reps/min pace could not be sustained; if a participant paused between repetitions, they were immediately prompted to continue, and if they did not continue immediately, the test was considered finished. Participants were encouraged to breathe regularly and perform maximally, with a trained and suitable spotter(s) standing over and behind the head of the participant or spotters at each end of the barbell. The maximal number of repetitions was recorded [4].
Data Analyses
Data were analyzed using SPSS 25.0 (SPSS, Inc., Chicago, IL) and are reported as means ± standard deviations, ranges, and percentiles.
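As an illustration of how such normative tables can be generated, the sketch below computes means, standard deviations, and selected percentiles of YMCA-BPT repetitions by gender using pandas. The repetition values are simulated around the reported group averages (34.18 ± 12.51 for men, 25.26 ± 10.20 for women), so the exact percentiles will not match Table 2; the actual analysis was performed in SPSS 25.0.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Simulated repetition counts approximating the reported means and SDs.
men = np.clip(np.round(rng.normal(34.18, 12.51, 100)), 0, None)
women = np.clip(np.round(rng.normal(25.26, 10.20, 100)), 0, None)

df = pd.DataFrame({
    "gender": ["M"] * 100 + ["F"] * 100,
    "reps": np.concatenate([men, women]),
})

# Descriptive statistics by gender
print(df.groupby("gender")["reps"].agg(["mean", "std", "min", "max"]))

# Percentile norms (10th-90th) by gender
percentiles = [10, 20, 30, 40, 50, 60, 70, 80, 90]
norms = df.groupby("gender")["reps"].quantile([p / 100 for p in percentiles]).unstack()
norms.columns = [f"P{p}" for p in percentiles]
print(norms.round(1))
```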
Results
Anthropometric and demographic information and descriptive results are included in Table 1, while Table 2 illustrates the percentile scores for men and women.
Discussion
The current investigation provides updated normative values for the YMCA-BPT derived from a large group of college-aged men and women. The YMCA-BPT is a common test to assess upper-body muscular endurance that is taught throughout collegiate coursework in exercise science and often utilized in regimens to assess physical fitness and athletic performance. The guidelines for the YMCA-BPT, including normative values to interpret individual performance on this test, were originally published in 1989 in the Y's Way to Physical Fitness [4]. In fact, the National Council of YMCAs of the United States has been a leader in the promotion of physical fitness since the mid-1960s; during the first YMCA national consultations on physical fitness in the early- to mid-1970s, a group of experts in the fields of physical fitness, physiology, and sports medicine joined together and introduced standards for physical fitness, which were published as The Y's Way to Physical Fitness [6]. Since the first edition of the Y's Way to Physical Fitness [6], it has been updated, revised, and edited several times in order to include new tests and updated normative values based on the specific needs of the community and an overall comprehensive approach towards greater physical fitness [7,8]. However, the normative data for the YMCA-BPT against which to compare an individual's performance were provided at that time without a scientific reference: no detailed information was given on the testing methodology or on specific subject numbers or characteristics (beyond age and sex).
Research delimitations, the methodologically established parameters placed on data collection, are necessary to guide interpretation of the results and are well established in the current investigation. The main limitation, the generalized applicability of the results, was accepted deliberately, as the intent was to evaluate college-aged men and women. Normative data for a test provide a valuable tool for comparing performance within specified limits. However, beyond the present data providing definitive participant characteristics and detail sufficient to compare against the sample to which an end-user will apply the norms, the need to re-evaluate physical fitness performance test normative data is often less evident. As the original data are over 20 years old, it is important to consider the changes in physical activity over the last few decades frequently reported in adolescent [9] and adult [10] populations. Clearly, changing patterns of physical activity may have a significant influence on physical performance [11] and likely on muscular endurance. In addition, in the United States, average height and weight [12,13] and the incidence of obesity [14] have increased over the last few decades. Given the plethora of anecdotal evidence and the literature highlighting that, to some extent, physical dimensions are proportional to measures of muscular strength and may also influence muscular endurance [15], it is worthwhile to re-evaluate normative values of physical performance tests. Moreover, the current data are derived from a group of men and women who are largely in line with the current mean height, weight, and body mass index of the U.S. population for this age group [16].
Other popular tests used to assess upper-body muscular endurance are based on the number of push-ups or chin-ups completed by an individual [4]. These tests are considered good field tests (i.e., limited equipment is necessary to complete them); however, since they rely on body weight for resistance, they are considered more difficult for participants to complete and do not standardize the workload between participants. Therefore, the proposed benefits of utilizing the YMCA-BPT are to predict 1RM strength [17,18] and to compare muscular endurance among participants without having to correct for body weight [17]. Additionally, the YMCA-BPT requires minimal form development for those unfamiliar with the testing modality; thus, advocates of this test believe novices are more effective in executing the repetitions and at reduced risk of injury compared with the push-up or chin-up exercise tests [19].
Conclusions
Normative data are an important aspect of any test. Whereas previously published normative values for the YMCA-BPT did not describe the sample size or the anthropometric or demographic information for the age group reported on, the present investigation provides these details. Reporting individual performance in comparison to normative values is an important form of feedback, as performance relative to peers may provide insight into the strengths and weaknesses of an individual's overall health or may provide additional information to aid in the development of an exercise prescription to improve their physical fitness.
266835731 | pes2o/s2orc | v3-fos-license | Immigration status-related exclusive e-cigarette use and cannabis use and their dual use disparities associated with mental health disorder symptoms
Introduction: E-cigarette and cannabis use has been linked to various health risks, including respiratory and cardiovascular conditions. Yet, extant knowledge about the risk factors for exclusive and dual use of e-cigarettes and cannabis is limited, especially among immigrants. We examined exclusive e-cigarette use, exclusive cannabis use, and their dual use associated with mental health disorders among immigrants and the U.S.-born. Methods: We analyzed national cross-sectional data collected between May 13, 2021, and January 9, 2022, among adults aged ≥18 years (n = 4766) living in the U.S. Multinomial logistic regression analyses were conducted to model the associations of exclusive and dual use (reference group = non-use) with anxiety/depression. Results: The prevalence of dual use was higher than that of exclusive e-cigarette and cannabis use, especially among the U.S.-born (dual use = 14.79% vs. cannabis use = 13.53% vs. e-cigarette use = 7.11%) compared to immigrants (dual use = 8.23% vs. cannabis use = 5.03% vs. e-cigarette use = 6.31%). Immigrants had lower risks of exclusive cannabis use and dual use compared to the U.S.-born. Anxiety/depression was associated with higher risks of exclusive cannabis use and dual use across immigration status, but was associated with exclusive e-cigarette use only among immigrants. While the effect sizes of dual use associated with anxiety/depression were higher among the U.S.-born, the effect sizes of exclusive e-cigarette and cannabis use associated with anxiety/depression were higher among immigrants. Conclusions: The findings revealed significant mental health risks for e-cigarette use, cannabis use, and their dual use among immigrants and the U.S.-born, especially among the U.S.-born. These findings highlight the need for public health research and interventions to consider immigration status-related disparities in substance use.
Introduction
The United States (U.S.) is currently witnessing a profound shift in the patterns of prevalent substance use, primarily triggered by the rise of electronic cigarettes (known as e-cigarettes) and the progressive legalization of cannabis.E-cigarettes, commonly touted as a less harmful substitute to conventional tobacco products, have experienced a rapid rise in popularity, especially among younger age groups (Golan et al., 2023;Lewis et al., 2022;Short and Cole, 2021).Their marketing often highlights the perceived reduced risk, creating a narrative that leads to an increase in adoption among populations previously hesitant about tobacco usage (Do et al., 2022;Mantey et al., 2016;Ozga et al., 2023;Stanton et al., 2022;Zheng and Lin, 2023).Concurrently, the decriminalization and legalization of cannabis in numerous states may have aided in an uptick in the use of e-cigarettes and cannabis (Adhikari et al., 2021;Bhatia et al., 2022;Meng et al., 2022;Nicksic et al., 2020;Veligati et al., 2020).While these substances are often consumed independently, an emerging trend of dual use, defined as the concurrent use of e-cigarettes and cannabis, has been observed (Islam et al., 2023;Mattingly et al., 2023;Roberts et al., 2022;Williams et al., 2023).The increase in both e-cigarettes and cannabis may also be attributed to the use of e-cigarette products to deliver or administer cannabis (Chadi et al., 2020;Fataar and Hammond, 2019).This pattern of dual-use presents unique health risks that are becoming a significant public health concern (Azagba, 2018;Carlini et al., 2022;Davis et al., 2022).E-cigarette use, despite its perceived safety, has been linked to a range of health risks, including respiratory diseases and cardiovascular conditions (CDC, 2021;Cho et al., 2023;Marques et al., 2021).Furthermore, nicotine, the primary addictive substance in e-cigarettes, has been well-documented to have deleterious effects on cardiovascular health, including increased heart rate and blood pressure (CDC, 2023;HHS, 2016;Singh et al., 2020;Williams et al., 2013).Cannabis use, particularly at high doses, has also been associated with mental health disorders, cognitive impairment, and an increased risk of accidents (Albaugh et al., 2023;Brown et al., 2023;Cheng et al., 2023).The synergistic effects of these substances' dual use could potentially exacerbate these health risks, leading to severe health outcomes.For instance, individuals who engage in dual-use may experience heightened respiratory issues due to inhaling e-cigarette vapor and cannabis smoke (Buckner et al., 2021).Additionally, the simultaneous use of these substances may lead to increased psychoactive effects of both substances and potentially higher risk of mental health disorders, such as anxiety and depression (SAMHSA, 2020b).Other areas of concern for dual use of these substances are the possible exacerbation of dependency issues, complicating the treatment process and negatively impacting the overall health outcomes.
Despite the potential health risks associated with the dual use of these substances, research in this area remains sparse.Most existing studies focus on using e-cigarettes (Kim et al., 2022;O'Brien et al., 2021;Stallings-Smith and Ballantyne, 2019) or cannabis (Robinson et al., 2022;Schlossarek et al., 2016;Van der Steur et al., 2020) in isolation, thereby overlooking the unique risks associated with their combined usage.This one-sided focus inadvertently leaves a significant gap in our understanding of the cumulative effects of these substances on an individual's health, limiting our ability to respond effectively to this growing public health concern.Although some few emerging studies (Jacobs et al., 2023;Jones et al., 2023;Mattingly et al., 2022;McClure et al., 2023;Reboussin et al., 2021;Smith et al., 2022) have examined dual use of e-cigarettes and cannabis, there is a deficit of research investigating the risk factors for this dual use behavior based immigration status, especially when considering population subgroup-related risk factors such as mental health disorders, patterns of substance use, and sociodemographic differences.
In the general population, mental health disorder symptoms (e.g., anxiety, depression) are well documented risk factors for substance use (e.g., e-cigarette use, cannabis use, alcohol use), including dual and poly substance use (Conway et al., 2017;Duan et al., 2022;Kondracki et al., 2022;Lewis et al., 2022;Spears et al., 2019Spears et al., , 2020;;Thrul et al., 2020).Individuals with mental health disorder symptoms are more likely to use substances such as e-cigarettes, cannabis, or their combination (Conway et al., 2017;Duan et al., 2022;Kondracki et al., 2022;Lewis et al., 2022;Spears et al., 2019Spears et al., , 2020;;Thrul et al., 2020).However, the key factors such as underlying associations of the established patterns of substance use mental health disorder symptoms and sociodemographic differences have not been studied in the immigrant (i.e., individuals not born in the U.S.) and U.S.-born (i.e., individuals born in the U.S.) populations.Immigration status is a significant social determinant of health, especially in the U.S. where the highest number of immigrants worldwide live (Castañeda et al., 2015;DeFries et al., 2022;Kagotho et al., 2020;Martinez et al., 2015).Immigrants are also one of the most vulnerable, disadvantaged, and minority groups that experience greater risks of poor health and substance use disorder complications (DeFries et al., 2022;Grace et al., 2018;Kagotho et al., 2020).This gap in the literature is particularly pronounced for immigrant populations, who are often understudied in substance use research despite potentially facing unique risks and challenges (Bustamante et al., 2021).Despite immigrants' significant presence in the U.S. (Budiman, 2020), they often remain understudied, potentially due to language barriers, cultural nuances, or logistical issues related to data collection (Berry, 2006;Klein et al., 2020;Lee et al., 2013).This lack of attention is concerning, as immigrant populations may face unique risks and challenges associated with substance use due to acculturation stress, socioeconomic inequalities, or limited access to healthcare services (Berry, 2006;Lee et al., 2013).The existing studies indicate that immigration status plays major roles in substance use because immigration stressors (e.g., legal status, forced migration, historical trauma, violence, family separation, and poverty) contribute to vulnerability and increased risk of substance use (DeFries et al., 2022;Marginean et al., 2023;Salas-Wright et al., 2014).However, none of the existing studies examined dual use of e-cigarette and cannabis, or their exclusivity with mental health disorder symptoms in immigrants and U.S.-born to identify the disparities in this behavior for personalized public health interventions.
Given the increasing prevalence of e-cigarette and cannabis use and the potential health risks associated with their dual use, it is crucial to expand the literature on and our understanding of the factors contributing to this dual-use behavior based on immigration status.This study aims to fill the gap in the literature by (1) estimating exclusive e-cigarette and cannabis use and their dual use by mental health disorder symptoms and sociodemographic characteristics based on immigration status, (2) the associations of exclusive e-cigarette and cannabis use and their dual use with immigration status, adjusting for mental health disorder symptoms and sociodemographic characteristics, and (3) the associations of exclusive e-cigarette and cannabis use and their dual use with mental health disorder symptoms, adjusting for sociodemographic characteristics, among immigrants and U.S.-born.Consequently, our findings will shed light on this understudied area and inform targeted interventions to mitigate the potential risks associated with dual-use behaviors.
Study design and participants
We analyzed national cross-sectional data that were collected as part of a study, Understanding the Impact of the Novel Coronavirus (COVID-19) and Social Distancing on Physical and Psychosocial (Mental) Health and Chronic Diseases, among adults aged 18 years or older living in the U.S. This survey was an anonymized, online (web-based) survey conducted among a random sample of U.S. adults. Participant recruitment, screening, enrolment in the study, and survey administration were conducted between May 13, 2021, and January 9, 2022, by Qualtrics LLC using their existing survey panels. Qualtrics used the demographic characteristics of a theoretical cohort to randomly match eligible panel members and drew the sample, including US- and foreign-born (i.e., immigrant) adults, from the American Community Survey. Low-income (<$30,000 annual household income) and rural adults were oversampled among the US-born White, Black, and Hispanic and the foreign-born populations to ensure representativeness of the participants. Qualtrics compensated each participant with a $5-$10 gift card for completing the survey. The survey was developed in English and distributed to 10,000 participants, with an approximately 59.38% response rate representing 5938 surveys received by Qualtrics LLC. Information Management Services (IMS), Inc. was contracted to review and correct the de-identified data based on the survey completeness criteria (completed ≥80% of the 102 survey questions for not less than 5 minutes). IMS determined that 5413 participants had accurately completed the surveys, and these formed the final sample for the study. Further details about this survey have been published elsewhere (Talham and Williams, 2023). We conducted a complete case analysis of 4766 of the 5413 participants for the current analysis to ensure that we included only participants with no missing data on our variables of interest. The National Institutes of Health's Institutional Review Board determined the study to be exempt (IRB #000308) on 12/23/2020. We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines to guide the writing of this paper (Vandenbroucke et al., 2007).
Measures
2.2.1. Outcome variable-Exclusive e-cigarette and cannabis use and dual use of e-cigarettes and cannabis were assessed with two questions: During the past month, how often did you (1) smoke e-cigarettes or use vaping products? and (2) use cannabis? The response options were 1 = not at all, 2 = once during the month, 3 = several times during the month, 4 = once a week, 5 = several times a week, 6 = every day or almost every day, and 7 = several times a day. We dichotomized the responses into use (response options 2-7) and non-use (option 1). Next, we combined the responses into a single categorical variable indicating non-use (if participants did not use either product), exclusive e-cigarette use (if participants used e-cigarettes but not cannabis), exclusive cannabis use (if participants used cannabis but not e-cigarettes), and dual use (if participants used both products).
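To make the construction of the outcome concrete, the sketch below derives the four-level use variable (non-use, exclusive e-cigarette use, exclusive cannabis use, dual use) from the two past-month frequency items. The column names and the example responses are hypothetical; the recoding follows the rule described above (option 1 = did not use, options 2-7 = any past-month use).

```python
import pandas as pd

# Hypothetical survey responses on the 1-7 frequency scale described above.
df = pd.DataFrame({
    "ecig_freq": [1, 3, 1, 6, 2],
    "cannabis_freq": [1, 1, 4, 7, 2],
})

df["ecig_use"] = df["ecig_freq"].ge(2)        # options 2-7 indicate any past-month use
df["cannabis_use"] = df["cannabis_freq"].ge(2)

def classify(row):
    if row["ecig_use"] and row["cannabis_use"]:
        return "dual use"
    if row["ecig_use"]:
        return "exclusive e-cigarette use"
    if row["cannabis_use"]:
        return "exclusive cannabis use"
    return "non-use"

df["use_group"] = df.apply(classify, axis=1)
print(df[["ecig_freq", "cannabis_freq", "use_group"]])
```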
2.2.2. Exposure variables-Immigration status and mental health disorder symptoms (anxiety/depression, Post-Traumatic Stress Disorder [PTSD], and loneliness) were the exposure variables. Immigration status was determined by asking the participants whether they were born in the U.S. (including all 50 states and the District of Columbia) or outside the U.S. (not excluding Puerto Rico and other US territories). Those not born in the U.S. were referred to as immigrants, while those born in the U.S. were considered U.S.-born or non-immigrants.
Anxiety/depression symptoms were assessed with four survey questions based on the Patient Health Questionnaire-4 (PHQ-4) scale.The participants were asked how often they experienced anxiety and depression symptoms in the last two weeks.Specifically, they were asked if they have been disconcerted by (1) feeling nervous, anxious or on edge, (2) not being able to stop or control worrying, (3) feeling down, depressed, or hopeless, and (4) little interest or pleasure in doing things (Kroenke et al., 2009a;Löwe et al., 2010).The response options for each of the four questions include not at all = 0, several days = 1, more than half the days = 2, or nearly every day = 3, with a total PHQ-4 score ranging from 0-12.In this study, we analyzed the PHQ-4 cutoff points used to determine minimal/ negative (score= 0-2), mild (score= 3-5), moderate (score= 6-8), and severe (score= 9-12) anxiety/depression symptoms (Kroenke et al., 2009b;Löwe et al., 2010).PTSD, based on the Primary Care PTSD (PC-PTSD-5) screen for the Diagnostic and Statistical Manual of Mental Disorders 5th edition (DSM-5), involves five items used to identify probable PTSD in participants.The participants were first asked (to determine their eligibility for the five items) if they ever (yes/no) experienced any frightening, horrible, or traumatic events (e.g., accident/fire, war, environmental disaster, assault).Those who reported experiencing any of such events were further asked to respond (yes/no) to five items about their experiences in the past month: (1) Had nightmares about the event(s) or thought about the event(s) when you did not want to? (2) Tried hard not to think about the event(s) or went out of your way to avoid situations that reminded you of the event(s)?
(3) Been constantly on guard, watchful, or easily startled? (4) Felt numb or detached from people, activities, or your surroundings? (5) Felt guilty or unable to stop blaming yourself or others for the event(s) or any problems the event(s) may have caused? (Prins et al., 2016). Participants who answered "yes" to three or more of the five questions were considered to have PTSD symptoms (Prins et al., 2016). Otherwise, the participants screened negative for PTSD. We further categorized the participants into three groups: (1) ineligible/unqualified for the PC-PTSD-5, (2) eligible/qualified for the PC-PTSD-5 but with no PTSD, and (3) eligible/qualified for the PC-PTSD-5 and with PTSD.
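A minimal sketch of how the PHQ-4 and PC-PTSD-5 responses described above can be scored is given below. The cut points follow the text (PHQ-4 total of 0-2 minimal/negative, 3-5 mild, 6-8 moderate, 9-12 severe; PC-PTSD-5 positive when three or more of the five items are endorsed among eligible participants); the item values passed in are illustrative only.

```python
from typing import List

def phq4_category(item_scores: List[int]) -> str:
    """Classify PHQ-4 anxiety/depression severity from four items each scored 0-3."""
    total = sum(item_scores)
    if total <= 2:
        return "minimal/negative"
    if total <= 5:
        return "mild"
    if total <= 8:
        return "moderate"
    return "severe"

def pc_ptsd5_status(eligible: bool, item_yes: List[bool]) -> str:
    """Screen for probable PTSD: positive if >=3 of the 5 items are endorsed."""
    if not eligible:                      # never exposed to a traumatic event
        return "ineligible for PC-PTSD-5"
    return "PTSD symptoms" if sum(item_yes) >= 3 else "no PTSD symptoms"

print(phq4_category([2, 1, 2, 1]))                                  # total 6 -> moderate
print(pc_ptsd5_status(True, [True, True, True, False, False]))      # positive screen
```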
Loneliness was measured with the 3-item UCLA Loneliness Scale (short version). The participants were asked how often they (1) lack companionship, (2) feel left out, and (3) feel isolated from others (Hudiyana et al., 2022; Hughes et al., 2004; Russell, 1996). The possible answers to the three items were hardly ever = 1, some of the time = 2, and often = 3, and the participants could select only one option per item. The total score for the three items ranges from 3 to 9, with higher scores indicating greater loneliness. We evaluated the reliability of the loneliness scale (UCLA Loneliness Scale - Short) using Cronbach's alpha (α) and found strong internal consistency or reliability of the scale among the U.S.-born (α = 0.88) and immigrants (α = 0.87).
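The reliability figures reported above (α = 0.88 for the U.S.-born and 0.87 for immigrants) come from a standard Cronbach's alpha computation. The sketch below shows one way to compute alpha for a 3-item scale; the responses are simulated on the 1-3 scale used by the loneliness items and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(3)
latent = rng.normal(0, 1, 500)
# Three correlated items on the 1-3 response scale (simulated)
items = np.clip(np.round(2 + latent[:, None] * 0.6 + rng.normal(0, 0.5, (500, 3))), 1, 3)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```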
2.2.3. Covariates/confounders-Individual-level factors/variables were included in the analysis based on previous studies that established their significant associations with substance use (Barger et al., 2021; Compton et al., 2023; Garrison-Desany et al., 2023; Jun et al., 2019; SAMHSA, 2020a, 2022). These factors were age, gender identity (man, woman, non-binary, or transgender), sexual orientation (lesbian, gay, bisexual, or heterosexual), race/ethnicity (Black/African American, other [American Indian/Alaskan Native, Pacific Islander, Asian, multi-racial], Hispanic/Latino, or White), level of education completed (less than high school, high school diploma or GED, some college/vocational or technical school, or college/higher education), and U.S. census region (Northeast, West, Midwest, and South). For this study, and due to limited samples within groups, we dichotomized sexual orientation into heterosexual and sexual minority (lesbian, gay, and bisexual). We also included past-month alcohol use, which was determined with questions similar to those used for cannabis and e-cigarette use.
Statistical analysis
Before we combined e-cigarette and cannabis use, we estimated the intersection of their use frequencies stratified by immigration status (Fig. 1). Next, we conducted descriptive and bivariate analyses to determine the prevalence of exclusive e-cigarette use, exclusive cannabis use, and their dual use by sociodemographic characteristics, mental health disorder symptoms, and alcohol use based on immigration status (Table 1). The bivariate statistics were computed using Chi-squared tests or analysis of variance (ANOVA) to determine group differences in the outcome variable. We conducted multinomial logistic regression analyses to model the associations of exclusive e-cigarette and cannabis use and their dual use (reference group = non-use) with mental health disorder symptoms, adjusting for sociodemographic characteristics and alcohol use, among immigrants and the U.S.-born (Table 2.2). Before stratifying the logistic regression model by immigration status, we examined the association between the outcome variable and immigration status, adjusting for sociodemographic, mental health, and alcohol use characteristics (Table 2.1). We report relative risk ratios (RRRs) with 95% confidence intervals (CIs) for the estimates. Statistical significance was determined at p < 0.05. Before conducting the multinomial logistic regression analyses, we evaluated the associations among the predictors to assess multicollinearity. The mean variance inflation factor (VIF) was 1.21, indicating no serious multicollinearity, as this is well below the conventional VIF threshold of 10. Analyses were conducted using Stata version 16.1.
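The sketch below illustrates the modeling approach described above: a multinomial logit with non-use as the base category, relative risk ratios obtained by exponentiating the coefficients, and a variance inflation factor check on the predictors. It uses simulated data and hypothetical predictor names, and it is written in Python/statsmodels purely for illustration; the study itself was run in Stata 16.1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "anxiety_depression": rng.integers(0, 4, n),  # 0=none ... 3=severe
    "alcohol_use": rng.integers(0, 2, n),
    "age_group": rng.integers(0, 5, n),
})

# Simulated 4-level outcome: 0=non-use (base), 1=e-cig only, 2=cannabis only, 3=dual
logits = np.column_stack([
    np.zeros(n),
    0.3 * df["anxiety_depression"] + 0.5 * df["alcohol_use"] - 1.5,
    0.4 * df["anxiety_depression"] + 0.6 * df["alcohol_use"] - 1.5,
    0.6 * df["anxiety_depression"] + 0.8 * df["alcohol_use"] - 2.0,
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(4, p=p) for p in probs])

X = sm.add_constant(df)
model = sm.MNLogit(outcome, X).fit(disp=False)
rrr = np.exp(model.params)           # relative risk ratios vs. the non-use category
print(rrr.round(2))

# VIF for each predictor (excluding the constant)
vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("Mean VIF:", round(float(np.mean(vif)), 2))
```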
Descriptive characteristics of the participants by immigration status
The characteristics of the participants are presented by immigration status in Table 1. Of the 3672 U.S.-born participants, most were aged 35-49 years (32.30%), identified as a woman (62.99%), heterosexual (89.79%), and White American (49.51%), had some college/vocational or technical school education (35.78%), and resided in the U.S. South (46.16%). A significant proportion of the U.S.-born also experienced mild (22.88%), moderate (13.97%), and severe (12.83%) anxiety/depression symptoms, and about 10.29% of them experienced PTSD symptoms. They had a mean loneliness score of 5.06 (SD = 2.06), and most of them engaged in alcohol use (60.32%) in the past month. Of the 1094 immigrants, most were aged 35-49 years (29.25%), identified as a woman (63.62%), heterosexual (87.75%), and Latino/Hispanic (35.37%), had college or higher education (51.19%), and resided in the U.S. South (42.96%). A higher proportion of them experienced mild anxiety/depression symptoms (21.48%), followed by moderate (11.15%) and severe (8.96%) symptoms, respectively, and about 8.32% experienced PTSD symptoms. A mean loneliness score of 4.79 (SD = 1.93) was reported among them. More than half of them engaged in alcohol use in the past month (51.74%).
Differences in the prevalence of exclusive e-cigarette and cannabis use and their dual use by immigration status
Among the U.S.-born (Fig. 1), most individuals who used e-cigarettes and used them once to several times per week also used cannabis once to several times per week (32.48%).The next groups were those who used both e-cigarettes and cannabis daily to several times per day (24.18%) or used both products once to several times per month (24.39%).For the distributions in the immigrants, the majority of those who used e-cigarettes and used them once to several times per month also used cannabis once to several times per month (45%) or week (27.03%).The next group of individuals who used e-cigarettes was those who used both products once to several times per week (43.24%).About 29.17% of them who used e-cigarettes daily to several times per day also used cannabis daily to several times per day.
Stratified by immigration status, Table 1 shows that the prevalence of e-cigarette and cannabis use varied significantly by the participants' sociodemographic characteristics, mental health, and alcohol use. The prevalence of dual use of e-cigarettes and cannabis (14.79%) was higher than the prevalence of exclusive cannabis use (13.53%) and exclusive e-cigarette use (7.11%) among the U.S.-born, while the prevalence of dual use (8.23%) was also higher than that of exclusive e-cigarette use (6.31%) and exclusive cannabis use (5.03%) among immigrants. Dual use of e-cigarettes and cannabis was more common than exclusive use within all the subgroups of the U.S.-born and immigrants. For instance, within both the U.S.-born and immigrant groups, dual use was most prevalent among individuals who identified as non-binary/transgender, sexual minority, or Black/African American, had less than a high school education, experienced severe anxiety/depression symptoms, had higher loneliness scores, and used alcohol in the past month. Individuals who engaged in dual use in the U.S.-born group had statistically significant sociodemographic, mental health, and alcohol use characteristics similar to those in the immigrant group, except for age (U.S.-born aged 18-25 or 26-34 years vs. immigrants aged 18-25 years), U.S. census region (U.S.-born: U.S. West vs. immigrants: results did not significantly vary), and PTSD status (U.S.-born had PTSD vs. immigrants did not have PTSD).
Associations of e-cigarette and cannabis use with mental health, sociodemographic, and alcohol use factors
As shown in Table 2.1, immigrants (vs. U.S.-born) had significantly lower risks of engaging in exclusive cannabis use and dual use of e-cigarettes and cannabis (reference group: non-use), adjusting for sociodemographic, mental health, and alcohol use characteristics. The model fit information (χ²(72, N = 4766) = 1668.09, p < 0.001) for the model in Table 2.1 suggests that this model fits significantly better than a model without any predictors. Table 2.2 presents multinomial logistic regression models for e-cigarette and cannabis use (reference group: non-use), stratified by immigration status. The model fit information for the U.S.-born (χ²(69, n = 3672) = 1297.84, p < 0.001) and immigrant (χ²(69, n = 1094) = 324.64, p < 0.001) samples in Table 2.2 indicates that these models improved significantly with the addition of the predictors and covariates. Among the U.S.-born, individuals with mild, moderate, and severe anxiety/depression symptoms (reference group: no symptoms) had significantly higher risks of engaging in exclusive cannabis use and dual use. Those who were ineligible for the PTSD assessment (reference group: had no PTSD symptoms) had significantly lower risks of using cannabis exclusively. The results also showed risks associated with the covariates/controlled factors. Compared to individuals aged 18-25, those aged 50 years or older were significantly less likely to exclusively use e-cigarettes or to dual-use e-cigarettes and cannabis. Those who identified as a woman had significantly lower risks of engaging in exclusive e-cigarette use, exclusive cannabis use, or dual use compared to those who identified as a man; non-binary/transgender/other individuals had significantly lower risks of exclusive cannabis use. Sexual minority individuals (vs. heterosexual persons) had significantly higher risks of exclusive cannabis use. Black/African American individuals (vs. White American individuals) were significantly more likely to engage in exclusive cannabis use and dual use. Education was significantly associated with lower risks of engaging in dual use; college or higher education (vs. less than high school) significantly decreased the risk of exclusive cannabis use. Residing in the U.S. Northeast, Midwest, or South was significantly associated with lower risks of dual use compared to residing in the West, and those residing in the Midwest and South also had significantly lower risks of exclusive cannabis use. Alcohol use was significantly associated with exclusive e-cigarette use, exclusive cannabis use, and their dual use.
Among immigrants (Table 2.2), individuals with moderate anxiety/depression symptoms had significantly higher risks of engaging in exclusive e-cigarette use and dual use, while severe anxiety/depression symptoms were significantly associated with higher risks of exclusive e-cigarette use, exclusive cannabis use, and dual use. Those with PTSD symptoms were significantly more likely to engage in exclusive cannabis use. The following covariates or controlled factors were significantly associated with e-cigarette and cannabis use. Individuals aged 65 years or older had lower risks of engaging in exclusive e-cigarette use, exclusive cannabis use, and dual use compared to those aged 18-25. Persons who identified as a woman (vs. a man) were less likely to engage in exclusive cannabis use or dual use. Black/African American and other racial/ethnic groups had lower risks of exclusive use compared to White American individuals. College or higher education (vs. less than high school) was associated with lower risks of exclusive e-cigarette use. The risks of engaging in exclusive e-cigarette use, exclusive cannabis use, and dual use were associated with alcohol use.
Discussion
Our study provides a comprehensive analysis that sheds light on the prevalence of e-cigarette and cannabis use behaviors among immigrant and U.S.-born populations.It also examines the correlations of these behaviors with mental health disorder symptoms, including anxiety/ depression, PTSD, and loneliness.A key finding from our research is that the dual use of e-cigarettes and cannabis was more prevalent than their exclusive use across all subgroups in U.S.-born and immigrants, especially in U.S.-born subgroups, in our study.Specifically, the prevalence of dual use was higher in both immigrant and U.S.-born individuals who identified as non-binary/transgender, sexual minority, Black/African American, young adult, had less than high school education, experienced severe anxiety/depression symptoms, had higher loneliness scores, and used alcohol.These findings are consistent with the observations of other researchers, who found that individuals with lower and underserved socioeconomic and sociodemographic characteristics (e.g., young adults, people with less than high school education, sexual and gender minority persons, Black/African American individuals) and mental health disorder symptoms (e.g., anxiety/depression, stress) were more likely to engage in substance use, including e-cigarette and cannabis use, particularly their combinations (Adzrago et al., 2022;Clendennen et al., 2021Clendennen et al., , 2023;;Conway et al., 2017;Duan et al., 2022;Kondracki et al., 2022;Lewis et al., 2022;Spears et al., 2019Spears et al., , 2020;;Thrul et al., 2020).However, none of the aforementioned studies examined subgroup differences in dual use of e-cigarettes and cannabis within immigrant and U.S.born populations to identify immigration status-related disparities in dual use behavior for tailored substance use interventions aimed at reducing substance use and its health consequences, especially the increased risks of dual use of substances.Our findings also revealed a shared vulnerability to dual substance use behavior among individuals with mental health disorder symptoms irrespective of immigration status, but with higher effect sizes among U.S.-born.This association between substance use and mental health underlines the importance of an integrated approach to tackling this issue, where mental health and significant social determinants of health (e.g., immigration status) considerations are treated as integral to substance use interventions (Castañeda et al., 2015;Conway et al., 2017;DeFries et al., 2022;Duan et al., 2022;Grace et al., 2018;Kagotho et al., 2020;Kondracki et al., 2022;Lewis et al., 2022;Martinez et al., 2015;Spears et al., 2019Spears et al., , 2020;;Thrul et al., 2020).Furthermore, the widespread prevalence of dual use across immigrant and U.S.-born populations, particularly U.S.-born, indicates the need for personalized prevention strategies alongside those targeted toward specific high-risk groups.
Our research also found that immigrant populations, adjusting for sociodemographic factors, mental health status, and alcohol use characteristics, exhibited lower risks of engaging in exclusive cannabis use and dual use of e-cigarettes and cannabis. Immigrants, in general, are known to be less likely to engage in substance use or misuse behaviors than U.S.-born individuals (Johnson et al., 2002; Salas-Wright et al., 2018). This finding further supports the premises of the "healthy immigrant effect," which suggests that immigrants tend to have better health outcomes than their host country natives (Ru and Li, 2021; Salas-Wright et al., 2018). Immigrants' lower engagement in substance use may be due to fear or concerns of being involved in risky or illegal behaviors that have immigration consequences (e.g., deportation) (Vaughn et al., 2014). Nonetheless, few studies have examined exclusive and dual use of cannabis and e-cigarettes among immigrants and U.S.-born individuals. The unique cultural and immigration-related experiences of immigrants may have different implications for their substance use behaviors. Thus, comparing the behavior of subgroups of immigrants and U.S.-born individuals may provide detailed information about specific group differences for tailored substance use prevention interventions. The findings also underline the importance of a context-specific understanding of the diverse factors related to e-cigarette or cannabis use among immigrants and U.S.-born individuals. Immigrants, depending on their cultural backgrounds, reasons for migration, and experiences post-migration, may have different attitudes toward substance use compared to U.S.-born individuals or other demographic groups (Prado et al., 2009; Tran et al., 2010). These attitudes are likely shaped by a confluence of factors such as societal norms, personal experiences, and access to substances, which can substantially impact the patterns of substance use.
In relation to mental health, our findings showed that individuals with mild, moderate, and severe anxiety/depression symptoms had higher risks of engaging in exclusive cannabis and e-cigarette use and their dual use across immigrant and U.S.-born populations. These findings align with the studies by Buckner et al. and Chloe et al., which found positive associations of tobacco use with mood disorder, psychotic disorder, and anxiety disorder (Buckner et al., 2021; Chloe et al., 2023). They also found a positive association between cannabis use and these disorders (Buckner et al., 2021; Chloe et al., 2023). This consistency across studies underscores the complex interplay between substance use and mental health disorders and the need for integrated interventions that address both issues. It also indicates that mental health status may contribute to the observed heterogeneity in substance use behaviors within these groups. Our findings further revealed that while exclusive e-cigarette use behavior was not significantly associated with anxiety/depression symptoms among U.S.-born individuals, this behavior was significantly associated with anxiety/depression symptoms among immigrants, suggesting unique immigration status-related disparities in substance use behavior and mental health for consideration.
The findings on the influence of mental health further revealed that while U.S.-born individuals who were ineligible for PTSD assessment were significantly less likely to exclusively use cannabis, immigrant individuals with PTSD symptoms were significantly more likely to exclusively use cannabis. However, exclusive e-cigarette use behavior and dual use behavior were not significantly associated with PTSD across immigrant and U.S.-born individuals. Similarly, neither exclusive nor dual use of e-cigarettes and cannabis was significantly associated with loneliness across immigrant and U.S.-born individuals. The findings suggest that while mental health disorder symptoms may be associated with substance use behavior, the associations may vary depending on specific substances, mental health disorder symptoms, and target populations. Thus, substance use behavior and mental health disorder symptoms may be different or similar in immigrant and U.S.-born populations depending on the specific substances and mental health disorder symptoms. These findings also emphasize the importance of disaggregating data to delineate and identify specific health behaviors associated with specific mental health disorder symptoms within specific populations for tailored public health and clinical interventions in addressing health disparities, especially in minority populations (Diaz et al., 2021; Etowa et al., 2021; Kauh et al., 2021; Quint et al., 2021). Aggregated data on immigration status can mask or obscure health behavior disparities among subgroups (Choi et al., 2023; Etowa et al., 2021; Lee et al., 2022; Quint et al., 2021). Consequently, we found that some subgroups within U.S.-born and immigrant populations exhibited more noticeable disparities in substance use behaviors. For instance, while some groups had consistent substance use behavior in immigrant and U.S.-born populations, others had inconsistent behaviors. Similar to other research findings, we observed that non-Hispanic Black/African American individuals were more likely to engage in dual-use behaviors in the U.S.-born population, but no differences in such behavior were observed in the immigrant population (Uddin et al., 2020). The findings highlight the need for more studies, especially longitudinal studies, to quantify changes in substance use behavior within the subgroups to enhance deeper understanding of the consistency of this behavior and mental health among immigrants and U.S.-born individuals.
Consistent with the findings of other studies, we found that among U.S.-born individuals, those identifying as a sexual minority and Black/African American had higher risks of exclusive cannabis use and dual use (Adzrago et al., 2021; Dyar et al., 2021; Swann et al., 2020). Among immigrants, sexual minority and Black/African American individuals had lower risks of exclusive e-cigarette use. This observation could be further explained by the findings of other studies that observed lower smoking, e-cigarette use, and substance use prevalence among immigrants (Bosdriesz et al., 2013; Salas-Wright et al., 2014; Wang et al., 2016). These findings emphasize the need to evaluate health behaviors and outcomes within specific population subgroups to better delineate the related disparities. While immigrants generally are less likely to use cannabis and e-cigarettes, the findings also revealed that some of them (e.g., younger individuals, those with anxiety/depression, lower education, identifying as a man, or using alcohol) are more likely to use these substances.
While our study provides valuable insights into the disparities in e-cigarette and cannabis use between U.S.-born individuals and immigrants, it is not without limitations. This study was based on a web survey, which excludes individuals without internet access or with limited comprehension of the study materials and does not allow us to determine whether the qualified participants completed the online survey themselves; this often results in disproportionate distributions among groups, leading to under- or overestimation of findings. The study's cross-sectional design limits the ability to establish causal relationships between immigration status, mental health disorder symptoms, and substance use behaviors. The reliance on self-reported data may also introduce response bias. Also, the study did not assess the severity of e-cigarette or cannabis use in relation to mental health because only patterns of dual use frequency were assessed between the immigrant and U.S.-born groups. Such an assessment might have provided a more in-depth comparison of the immigrant and U.S.-born groups regarding the association between anxiety/depression and frequency of use (exclusive and dual). Furthermore, the study did not account for the composition and potency of cannabis use, which may refer to a range of forms including combustibles, vapes, edibles, and flowers. These may differ considerably between immigrant and U.S.-born populations regarding the association with mental health factors. Due to limited samples across e-cigarette and cannabis use categories within immigrant and U.S.-born groups, we dichotomized alcohol use instead of assessing alcohol use frequency (1 = not at all, 2 = once during the month, 3 = several times during the month, 4 = once a week, 5 = several times a week, 6 = every day or almost every day, and 7 = several times a day) in the past month. Assessing severity (frequency) of alcohol use in the past month might have revealed greater differences between the U.S.-born and immigrant groups in terms of risk for cannabis and e-cigarette use. Because the study was conducted only in English, the findings cannot be generalized to individuals who do not speak, read, or write English. The English-only design might also have affected findings relating to factors such as loneliness, which, in this population, was not associated with e-cigarette and cannabis use by immigration status. Limited English proficiency is a major barrier to communication and healthcare utilization among immigrants who come from culturally, ethnically, and linguistically diverse countries. Although we controlled for several factors in this study, residual confounders such as acculturation, generational status, country of origin, religion, access to healthcare services, and use of other substances (e.g., cocaine, opioids, and ecstasy) could influence the observed associations. These residual confounders could have led to overestimation or underestimation of the findings. Future research should address these limitations to provide a more comprehensive understanding of the factors influencing e-cigarette and cannabis use among U.S.-born and immigrant populations in the United States.
Conclusions
Our study contributes to expanding the limited health disparity literature on the prevalence of e-cigarette and cannabis use and its association with mental health disorder symptoms based on immigration status. The findings revealed significant mental health risks for e-cigarette use, cannabis use, and their dual use among immigrants and U.S.-born individuals, especially among U.S.-born individuals. However, these associations varied depending on specific substance use behavior and mental health disorder symptoms across immigrant and U.S.-born populations. Exclusive e-cigarette use behavior was not associated with anxiety/depression symptoms among U.S.-born individuals, but it was associated with anxiety/depression symptoms among immigrants. Anxiety/depression symptoms, particularly severe symptoms, were associated with higher likelihoods of exclusive cannabis use and dual use among immigrants and U.S.-born individuals. The general effect sizes of dual use associated with anxiety/depression symptoms were higher among U.S.-born individuals, but the effect sizes of exclusive e-cigarette and cannabis use associated with anxiety/depression symptoms were higher among immigrants. Exclusive e-cigarette use and dual use behaviors were not associated with PTSD symptoms across immigrant and U.S.-born individuals, while exclusive cannabis use behavior was associated with PTSD symptoms among immigrant and U.S.-born individuals. Exclusive and dual use of e-cigarettes and cannabis was not associated with loneliness across immigrant and U.S.-born individuals. The findings suggest the need to disaggregate data to examine specific substance use behaviors and mental health disorder symptoms to improve personalized substance use and mental health interventions in addressing health disparities, especially in minority populations. Future longitudinal or prospective studies should explore the mechanisms driving the associations between substance use behavior and mental health, especially the immigration status-related disparities.
Prevalence of e-cigarette use frequency by cannabis use frequency stratified by immigration status among U.S. adults.
Table 1
Descriptive and bivariate analyses of the past-month exclusive e-cigarette use, cannabis use, and their dual use by sociodemographic characteristics, mental health disorder symptoms, and alcohol use among U.S.-born (n= 3672) and immigrants (n= 1094).
Statistical significance at p < 0.05. All p-values are based on chi-square tests for the categorical variables and ANOVA tests for the continuous variables.
Table 2.1
Multinomial logistic regression analysis of past-month exclusive e-cigarette use, cannabis use, dual use, and their associations with immigration status, adjusting for sociodemographic characteristics, mental health symptoms, and alcohol use among adults living in the U.S. (N= 4766).
Reference category: None used. Outcome columns: Exclusive e-cigarette use, Exclusive cannabis use, Dual use of e-cigarettes and cannabis.
| 2024-01-08T16:06:53.591Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "61630a8ea75f00d092be79426a202b1a79896f02",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.drugalcdep.2024.111083",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc7ce7c02aa9a916e17df08a8601d834d5c38009",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220964692 | pes2o/s2orc | v3-fos-license | Supercritical Water is not Hydrogen Bonded
Abstract Thinking about water is inextricably linked to hydrogen bonds, which are highly directional in character and determine the unique structure of water, in particular its tetrahedral H‐bond network. Here, we assess if this common connotation also holds for supercritical water. We employ extensive ab initio molecular dynamics simulations to systematically monitor the evolution of the H‐bond network mode of water from room temperature, where it is the hallmark of its fluctuating three‐dimensional network structure, to supercritical conditions. Our simulations reveal that the oscillation period required for H‐bond vibrations to occur exceeds the lifetime of H‐bonds in supercritical water by far. Instead, the corresponding low‐frequency intermolecular vibrations of water pairs as seen in supercritical water are found to be well represented by isotropic van‐der‐Waals interactions only. Based on these findings, we conclude that water in its supercritical phase is not a H‐bonded fluid.
Introduction
Hydrogen-bonding and the resulting three-dimensional network topology certainly is the hallmark of water [1-3] and provides much of the mechanistic underpinnings of its many so-called anomalies. [4,5] Thus, thinking about water implies thinking about H-bonding. In this article, we are going to ask (and answer) the simple question if this remains true in the supercritical phase of water.
Recently, supercritical fluids and supercritical water (SCW) in particular have attracted enormous cross-disciplinary attention [6-9] as "tunable solvent environments" [8] in chemical synthesis [10] and catalytic processes. [11,12] SCW is even envisaged as a mediator in nuclear power plants. [13] Moreover, SCW and supercritical fluids in general have attracted interest in fundamental physics in view of putative transitions from liquid-like to gas-like regions discussed in terms of separating Widom, Frenkel or percolation lines. [14-20] In nature, SCW occurs in the Earth's mantle, where it takes an active part in hydrothermal formation processes, [21] and it could be discovered close to so-called "black smokers" at the bottom of the deep sea. [22] In all these applications and natural occurrences SCW acts as a solvent. Solvation properties, at least in room temperature water (RTW), are inextricably linked to H-bonds and the famous tetrahedral H-bond network, and therefore the question if H-bonds exist in SCW and how their behavior changes with respect to RTW is of fundamental importance.
The currently established picture concerning whether H-bonds exist in SCW at all is primarily based on structural considerations in terms of ensemble averaged radial distribution functions (RDFs), in particular O-O and O-H RDFs. From ND experiments it was concluded that there is "…little room for doubt that the hydrogen bond persists in the supercritical regime…", [39] yet there is "…a reduction of the H-bond population in the supercritical state". [26] Noteworthy, the H-bonding feature of the O-H RDF "…has been washed out into a broad shoulder" [40] in SCW while it is a prominent peak in RTW. This qualitative change of the RDF led to the question "…as to whether this can still be regarded as hydrogen bonding at this temperature." [40] An alternative experimental approach is to analyze the time-averaged proton NMR chemical shift as a function of density, temperature, and pressure. Here, it was concluded for SCW that "…there are still 29% as many hydrogen bonds at 400 °C and 400 bar (ρ = 0.52 g cm⁻³) as for room temperature water". [28] Note that these landmark papers from the mid-1990s still represent the state-of-the-art in the field even today.
Perhaps more importantly, it has been confirmed experimentally and computationally that the famous tetrahedral arrangement [41] of the water molecules in RTW, due to the preferred local fourfold coordination of the individual water molecules, is completely lost in SCW. [24,33,38,42] Moreover, the extent of H-bonding, as quantified by the number of H-bonds per water molecule, systematically and significantly decreases as a function of increasing temperature. [33,38] Last but not least, the famous cooperative H-bonding effect, which is responsible for the unusually large molecular dipole moment of H2O molecules in RTW, [43] is drastically reduced in SCW. [38] These experimental and computational findings indicate that H-bonds are present in SCW, but are strongly weakened, and that the properties of the H-bond network are significantly modified compared to RTW. This viewpoint culminates in the currently accepted picture of SCW as summarized in a recent authoritative monograph: "There is sufficient evidence that hydrogen bonds do exist in SCW, with general agreement that the tetrahedral hydrogen-bonded network present in ambient water is no longer present in SCW". [9] Based on all these studies and clear statements it is nowadays broadly assumed at the outset that H-bonds do exist in SCW without questioning it. This is despite the fact that reported orientationally and time-averaged observables might also be explained differently than assuming sufficiently stable and properly directional water-water arrangements (as we will demonstrate in what follows). This might also explain why there usually is no clear commitment made in recent studies if SCW is H-bonded or not. In this vein, advanced experimental and computational studies tacitly assume the existence of H-bonds when analyzing and interpreting the data and, thus, conclude that "H-bonding is drastically reduced", see for example refs. [20,27,34,38].
Very complementary to the aforementioned approaches is vibrational spectroscopy in the THz frequency window, since that technique has been shown to most directly probe the H-bond dynamics within the water network. [44] Here, the famous H-bond network mode is directly probed, which monitors the intermolecular hindered translations of the water molecules. THz spectroscopy is thereby different from both traditional mid-IR and Raman experiments, where the H-bond is only indirectly probed by induced changes of the intramolecular O-H stretching motion, as well as from NMR, ND, or XRD experiments, where a time-averaged and mostly also orientationally averaged picture is obtained. In the case of RTW, [45] the H-bond network THz mode is located around 200 cm⁻¹. It could be shown that this pronounced resonance is sensitive to local perturbations of the H-bond network induced by, for example, simple ions [46-49] or small molecules. [50] Recently, the network mode has been shown to also respond very sensitively to increasing hydrostatic pressure. [51] At supercritical conditions, preliminary FFMD simulations [18] yielded qualitatively different THz spectra compared to RTW, however without being able to disclose the underlying molecular mechanism due to methodological shortcomings of the simulation method. Based on all this evidence accumulated in recent years, it is therefore suggestive that the H-bond THz mode should provide a most sensitive probe to also monitor H-bonding in the supercritical state of water.
In this Research Article, we go back to square one and ask, in a fresh effort, the question if supercritical water is a H-bonded fluid by using advanced simulation and spectral analysis techniques.
Ab Initio Supercritical Water
Our investigation is based on extensive AIMD simulations [35] using the RPBE-D3 functional, which allows us to sample a total of more than 20 ns of AIMD trajectories using 128 water molecules; see SI for details. Only such long AIMD trajectories allow us to compute well-converged THz spectra of SCW as illustrated in the SI. Concerning the choice of the functional, we note that RPBE-D3 has been shown repeatedly by several groups to reliably represent both RTW and SCW [38,52-54] with respect to experimental data. In AIMD, the computationally much more demanding revPBE0-D3 hybrid functional has been demonstrated to provide an excellent representation of the full-dimensional many-body potential energy surface that describes RTW. [55] In supporting Figure S1, we additionally compare the RDFs of SCW as obtained from RPBE-D3 to the revPBE0-D3 benchmark with most favorable agreement, which explicitly validates the accuracy of RPBE-D3 also for supercritical water. Finally, when it comes to H-bond dynamics and THz spectroscopy, we refer to direct comparisons of our RPBE-D3 results to NMR relaxation data (Figure 3b) and to THz spectroscopy (Figure 1a) with good agreement for these dynamical properties.
THz Spectra and Two-Body Vibrational Densities of States
We begin our discussion by presenting molar absorption coefficients, k(ν̃), in the THz regime (dubbed "THz spectra" for short) at selected super- and subcritical state points in Figure 1(a). To illustrate the location of the shown state points we also present the corresponding phase diagram in Figure 1(b). In case of RTW, our spectrum computed at 300 K agrees favorably with the experimental one [45] and close to perfectly reproduces the absolute intensities and positions of the two prominent peaks around 200 and 650 cm⁻¹, stemming from the intermolecular H-bonding and the librational dynamics, respectively. The effect of thermal fluctuations on the THz spectrum is probed upon increasing the temperature until reaching supercritical conditions while keeping the density fixed at its RTW value, 1.0 kg L⁻¹. The two distinct peaks are seen to vanish in favor of a single broad peak which, moreover, systematically red-shifts as a function of increasing temperature. Once in the supercritical phase, decreasing the density of SCW isothermally at 750 K is found to systematically red-shift that peak even more until it phenomenologically reaches at low density the frequency of the intermolecular H-bonding peak of RTW, that is, roughly 200 cm⁻¹.
Given these pronounced changes of the THz response, it is key to separate the H-bond mode, being the prominent probe of H-bonding dynamics in ambient liquid water, [44,45] from the librational band to assess their changes individually upon reaching supercritical conditions. In an effort to dissect the single broad peak in SCW in terms of molecular motion, we employ a projected relative velocity [Eq. (1)] [56]

Δv_IJ(t) = [ṽ_I(t) − ṽ_J(t)] · d_IJ(t)/|d_IJ(t)|   (1)

where ṽ_I(t) and ṽ_J(t) are the center of mass velocities of two different water molecules and d_IJ(t) is the connecting vector between their centers of mass. The corresponding relative two-body vibrational density of states (2B-VDOS) is then given by [Eq. (2)]

L_2B(ν̃) ∝ (1/N) Σ_IJ |F[Δv_IJ(t)]|²   (2)

where N is the total number of water pairs considered and F[···] denotes the forward Fourier transform. We compute this specific spectral density for all water pairs whose centers of mass are closer than 4 Å, but irrespective of whether they are H-bonded or not, meaning that their relative orientation is fully ignored. The resulting 2B-VDOS is presented in Figure 2 at selected state points together with the computed THz spectrum of RTW as reference. Evidently, any vibrational DOS exclusively probes the particle dynamics and, thus, does not carry dipolar intensity (contrary to k(ν̃)), which allows us to scale their maxima to a convenient reference value for better comparison. Note that we have also separately determined the librational contribution L_rot(ν̃) to the total THz band. It turns out that the THz spectrum of all SCW states is overwhelmingly dominated by this librational band, whereas L_2B(ν̃) contributes only little to the total THz response as detailed in the SI.
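As a rough illustration of Eqs. (1) and (2), the following minimal Python sketch computes the projected relative velocities for a set of tagged water pairs and averages their Fourier power spectra. All array names, the time step, and the pair list are hypothetical placeholders, and the simple single-window FFT stands in for the windowed, time-origin-averaged estimators used in practice.

```python
import numpy as np

def two_body_vdos(com_vel, com_pos, pairs, dt_fs):
    """Sketch of the relative two-body VDOS, cf. Eqs. (1)-(2).

    com_vel : (n_frames, n_mol, 3) center-of-mass velocities
    com_pos : (n_frames, n_mol, 3) center-of-mass positions
    pairs   : list of (I, J) molecule index pairs (e.g. all pairs within 4 Angstrom)
    dt_fs   : time step between frames in femtoseconds
    """
    spectra = []
    for I, J in pairs:
        d = com_pos[:, J] - com_pos[:, I]                      # connecting vector d_IJ(t)
        d_hat = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit vector along the pair axis
        dv = np.einsum('ti,ti->t', com_vel[:, I] - com_vel[:, J], d_hat)  # projected relative velocity
        spectra.append(np.abs(np.fft.rfft(dv)) ** 2)           # |F[dv_IJ]|^2 for this pair
    l2b = np.mean(spectra, axis=0)                             # average over the N pairs
    freq_per_fs = np.fft.rfftfreq(com_vel.shape[0], d=dt_fs)   # frequency in cycles per fs
    wavenumber_cm1 = freq_per_fs / 2.99792458e-5               # divide by c in cm/fs -> cm^-1
    return wavenumber_cm1, l2b
```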
When directly comparing in Figure 2 the two-body spectral density L_2B(ν̃) with the THz spectrum k(ν̃) in the limit of RTW, one observes a slight red-shift of the maximum and a shoulder around 75 cm⁻¹. Both features can be tracked back to motion ranging beyond H-bonded water pairs at RTW conditions because we employ a fairly large cutoff of 4 Å to still meaningfully identify water pairs in SCW (where the average OO distances are considerably longer, in particular at low densities). Importantly, when reducing that cutoff to a value of 3.4 Å as appropriate for RTW, [57] L_2B(ν̃) exactly reproduces the THz response k(ν̃) of the H-bond spectral feature of RTW, see dotted line in Figure 2. Therefore, L_2B(ν̃) is indeed capable to perfectly represent the intermolecular H-bond stretching motion of RTW (giving rise to its THz network peak) and, thus, does monitor the intermolecular stretching motion of water pairs in more general terms, that is, without taking the relative orientations into account. Now, increasing the temperature at constant density, 1.0 kg L⁻¹, the pronounced L_2B(ν̃) peak is found to systematically red-shift from a maximum frequency of 180 cm⁻¹ in case of RTW to about 150 cm⁻¹ at 750 K in SCW according to Figure 2. Decreasing next the density at that supercritical temperature, the maximum frequency is seen to red-shift even further down to about 100 cm⁻¹ at 0.1 kg L⁻¹.

Figure 1. Molar THz absorption coefficients k(ν̃) of room temperature and supercritical water (a) and the phase diagram of water as given by the accurate experimental IAPWS95 equation of state [32] (b), where the coexistence curve is given as black solid line and the critical and triple points are marked using black solid squares. In panel (b), the green triangles mark all simulated state points on our isochore (down triangles) and isotherm (up triangles) scans; the violet square places the previously estimated CP of RPBE-D3 water. [54] Those state points where the THz spectra are presented in panel (a) are highlighted in (b) using colored circles and the very same color code as in (a). In panel (a), we present representative THz spectra computed from RPBE-D3 simulations of room temperature water (black solid line, RTW), of subcritical liquid water at 1.0 kg L⁻¹ and temperatures of 400 and 550 K (brown and red dashed lines, respectively) as well as of supercritical water at 750 K and densities of 1.0, 0.6, and 0.1 kg L⁻¹ (green, red and brown solid lines, respectively). The corresponding RTW experimental THz spectrum [45] is reproduced in panel (a) as a blue dotted line for reference; note that neither the frequency nor the intensity of the computed THz spectra have been scaled or adjusted.
At this stage, it seems that the analyses presented so far strongly support the notion that SCW remains H-bonded even in the low-density limit. The only difference of supercritical compared to ambient water appears to be a pronounced red-shift of the H-bond resonance from 200 down to 100 cm⁻¹, see Figure 2, which in turn gets simply masked by the much more intense low-frequency wing of the pronounced librational band since that shifts dramatically from ≈650 cm⁻¹ at RTW to ≈250 cm⁻¹ in low-density SCW, see Figure 1. It will be demonstrated in what follows that this suggestive conclusion does not hold true.
Hydrogen-Bond Lifetimes and Reorientational Relaxation Times
As the next step we analyze the reorientational and H-bond dynamics in terms of the associated relaxation and lifetimes, [58] respectively, when moving from RTW along subcritical states to the supercritical phase of water. These dynamical properties complement the analyses of intermolecular H-bond vibrations and have been proven valuable to investigate SCW by FFMD and AIMD simulations, see for example refs. [19,38,56,59]. Taking into account the well-known ambiguities in the selected H-bond criterion, [19,20,33,38] existing studies nevertheless broadly agree that the continuous H-bond lifetime τ_HB is more than one order of magnitude smaller compared to RTW [9,38,56,59,60] and amounts to about 100 fs in SCW including long-time tails.
In order to provide a physical observable that on the one hand probes the dynamics within the water network, but on the other hand is independent of any H-bond criterion and thus H-bonding bias, we analyze now the reorientational relaxation time [Eq. (3)] [29]

τ_2R = ∫₀^∞ ⟨P₂[cos Ω(t)]⟩ dt   (3)

where Ω(t) is the angle between the unit vector of the intramolecular O-H bond at time t and at time 0, and P₂ denotes the second-order Legendre polynomial. Being a proper observable, τ_2R can not only be computed but also determined experimentally by NMR relaxometry even in SCW. [29] Obviously, τ_2R is a single-molecule quantity and, therefore, cannot be directly compared to H-bond lifetimes, but both quantities are certainly closely related: At high densities, the average coordination number (within a distance radius of 3.43 Å) per water molecule, n_c, is rather large in SCW, for example, n_c > 3 for densities exceeding 0.6 kg L⁻¹ and n_c ≈ 5 at 1.0 kg L⁻¹ for RPBE-D3 water. [38] If a molecule rotates in such an environment at least one H-bond must be broken, [61] irrespective of the rotation axis, and it follows that H-bond and reorientational dynamics must be closely related at the level of their intrinsic time scales. Therefore, τ_2R provides an independent, measurable observable to probe H-bond motion that is completely unrelated to any H-bond definition. It thus does not suffer from the well-known dispersion of H-bond lifetimes reported in the literature.
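A minimal sketch of how τ_2R in Eq. (3) can be estimated from a trajectory is given below: the second-order Legendre polynomial of the O-H bond orientation is autocorrelated and integrated over time. The function and array names are illustrative assumptions, and the single-time-origin correlation is a simplification of the production analysis, not a reproduction of it.

```python
import numpy as np

def tau_2R(oh_unit_vectors, dt_fs):
    """Estimate the reorientational relaxation time of Eq. (3).

    oh_unit_vectors : (n_frames, n_bonds, 3) unit vectors of intramolecular O-H bonds
    dt_fs           : time step between frames in femtoseconds
    Returns an estimate of tau_2R in femtoseconds.
    """
    u0 = oh_unit_vectors[0]                                    # reference orientations at t = 0
    cos_omega = np.einsum('tbi,bi->tb', oh_unit_vectors, u0)   # cos of angle Omega(t) per bond
    p2 = 0.5 * (3.0 * cos_omega ** 2 - 1.0)                    # second Legendre polynomial
    c2 = p2.mean(axis=1)                                       # average over all O-H bonds
    return np.trapz(c2, dx=dt_fs)                              # time integral of <P2(cos Omega(t))>
```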
Turning now to the data in Figure 3(a), both time scales, τ_HB and τ_2R, are found to dramatically decrease with respect to RTW when the temperature is increased. This qualitative behavior is not unexpected since the H-bond lifetime follows an Arrhenius-type behavior [62] as indeed explicitly confirmed by us for RPBE-D3 water. [38] Given that τ_HB is sensitive to the H-bond criterion, we used our RTW and SCW criteria throughout as explained in the SI. Interestingly, the impact of these two quite different criteria is seen to be negligible on the scale of the physical changes of that lifetime as a function of temperature. Turning now to the supercritical isotherm at 750 K in panel (b), the reorientational relaxation time τ_2R is found to systematically and significantly increase with increasing density, from about 34 fs at 0.1 kg L⁻¹ to 65 fs at 1.1 kg L⁻¹. Importantly, our RPBE-D3 values of τ_2R perfectly match the available experimental NMR data [29] at a comparable temperature of 673 K (corresponding to T* = 1.040, given that our simulations are conducted at T* = 1.056).
Overall, numerous computational studies which use vastly different H-bond criteria, sampling protocols and water models predict H-bond lifetimes of about 100 fs in SCW. [9,38,56,59,60] Moreover, our computed reorientational relaxation times, which do not depend on any H-bond definition, excellently agree with available experiments and yield values somewhat smaller than 100 fs. Even a gross extrapolation of the H-bond lifetime from RTW utilizing the Arrhenius-type behavior [62] (which is observed irrespective of the given H-bond criterion [38]) yields a H-bond lifetime of 87 fs in SCW, and thus is consistent with both the literature and our study. We are therefore confident that H-bond lifetimes not exceeding 100 fs in SCW as computed by us are reasonable estimates of the true dynamical behavior of SCW as quantified in terms of the lifetimes of intermolecular H-bonds.

Figure 3(b): Same properties as in (a) using the same color code but along the supercritical isotherm at 750 K as a function of density from 0.1 to 1.1 kg L⁻¹. Experimental NMR τ_2R data [29] at a comparable reduced temperature of T* = T/T_c = 1.040 are shown as reference (black squares) for the RPBE-D3 data (red crosses) without any adjustments; recall that our simulations are conducted at T* = 1.056. The H-bond lifetimes are adapted from ref. [38].
Lifetimes vs. Intermolecular Vibrational Periods
Having now quantitative access to these molecular time scales, we can compare them in Figure 4 to the oscillation periods of the intermolecular stretching vibrations τ_osci (as quantified by the maxima of the corresponding two-body spectral densities L_2B(ν̃) shown in Figure 2). In case of RTW, we find an oscillation period of about 0.18 ps and a H-bond lifetime of approximately 1.41 ps. This implies that H-bonds oscillate roughly ten times before they break in RTW. This familiar picture of the H-bond water network changes dramatically when increasing the temperature towards supercritical conditions. At 750 K and at the same density as RTW, 1.0 kg L⁻¹, we find τ_osci ≈ 224 fs whereas τ_HB is even less than half of it, only about 78 fs. Note that from a spectroscopic viewpoint this corresponds to values of about 150 and 430 cm⁻¹, respectively, and thus corresponds to very distinct frequency regimes in vibrational spectroscopy! In other words: The lifetime of putative H-bonds is much shorter than the oscillation period of the intermolecular stretching vibrations due to hindered translations! This observation holds true for all investigated state points of supercritical water, irrespective of their density and irrespective of the chosen H-bond criterion, and is confirmed when using the experimentally accessible reorientational relaxation times, τ_2R, instead.
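The correspondence between oscillation periods and wavenumbers quoted above follows directly from T = 1/(c·ν̃); the following few lines are purely illustrative and simply reproduce the quoted numbers.

```python
C_CM_PER_FS = 2.99792458e-5          # speed of light in cm per femtosecond

def period_fs(wavenumber_cm1):
    """Oscillation period T = 1 / (c * wavenumber)."""
    return 1.0 / (C_CM_PER_FS * wavenumber_cm1)

print(round(period_fs(150.0)))  # ~222 fs, close to the 224 fs quoted for SCW at 1.0 kg/L
print(round(period_fs(430.0)))  # ~78 fs, matching the H-bond lifetime scale quoted in the text
print(round(period_fs(180.0)))  # ~185 fs, i.e. about 0.18 ps as quoted for RTW
```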
The fact that the H-bond lifetime is much smaller than the oscillation period of the intermolecular vibrations imposes important consequences for H-bonding in SCW. It implies that H-bonds are broken while an intermolecular vibration is still ongoing. Hence, these short-range hindered translations are unaffected by any orientational directionality. This utmost dynamical picture of intermolecular encounters in SCW stimulates the question if it is meaningful to call SCW "H-bonded" if on average a H-bond does not even survive a single intermolecular oscillation period.
How does our conclusion of supercritical water not being H-bonded compare to existing viewpoints regarding its H-bonding properties? In a nutshell, THz spectroscopy offers a direct and, most importantly, time-dependent approach to study intermolecular vibrations in water. In contrast, ND/XRD and NMR spectroscopy only yield a time-averaged picture. Our conclusion does not imply that there are no structural H-bond motifs at all; however, they are remarkably short-lived. This means that all H-bond contacts are counted by such time-averaged methods although they exist only fleetingly. In Sec. IV in the SI we more elaborately discuss that our conclusion, as concisely announced by the title of this publication, is indeed perfectly consistent with existing experimental data.
Mid-IR spectroscopy, differently from diffraction, probes H-bonds by using the intramolecular O-H stretching dynamics as a proxy. [63] These vibrations are located around 3500 cm⁻¹ and are thus more than one order of magnitude faster than the intermolecular vibrations probed at THz frequencies. It follows that the intramolecular O-H stretching vibrations are fast enough to detect also very short-lived instantaneous H-bond contacts, although these contacts exist too shortly to be recognized by the much slower intermolecular O···O stretching vibrations detected by THz radiation. Indeed, we show in supporting Figure S6 that the very pronounced red-shift of the O-H stretch in RTW, being the hallmark of H-bonding in liquid water (where intermolecular O···H distances are short and O-H···O angles close to linear), gets dramatically reduced in SCW. Note that the same observation as obtained here from AIMD was also made recently in a sophisticated quantum-classical study of the intramolecular stretching vibrations of SCW. [34] At the level of the underlying structural dynamics [64-67] such surprisingly small O-H shifts in SCW directly correlate with an enormously enhanced population of distinctly non-linear intermolecular O-H···O orientations, which red-shift much less even if the two water molecules come close. Here, this enhanced population of non-linear orientations in SCW is statistically captured by the joint distance-angle distribution functions in supporting Figure S7: Whereas in RTW the majority of nearest-neighbor water-water orientations is quasi-linear as expected for H-bonded liquids, the majority of them is indeed distinctly non-linear in SCW even at densities that exceed that of RTW (see Sec. II.D in the SI for details). This essentially flat distribution of water-water orientations, where quasi-linear O-H···O H-bonding arrangements are scarce compared to strongly bent orientations, is due to the dramatically decreased reorientational relaxation time in SCW (from roughly 2500 fs in RTW to ≈60 fs in SCW at 1.0 kg L⁻¹ according to Figure 3b), which makes proper H-bonding orientations ultra-short lived and thus transient in SCW. Remarkably, even in case of the highest density SCW this flat distribution of water-water orientations is observed, although the coordination number exceeds six.

Figure 4. Oscillation period of the intermolecular stretching mode, τ_osci, plotted against the continuous H-bond lifetime, τ_HB; see text for definitions and note the logarithmic scale of the abscissa. The dashed lines mark the regime to the left where the H-bond lifetime is smaller than a certain multiple n of the oscillation period, τ_HB = n·τ_osci where n = 1, 2, 5, 10. The H-bond lifetimes are determined using the RTW and SCW criteria (blue and green, respectively) for all supercritical state points (circles) to demonstrate their invariance w.r.t. the definition, while the RTW criterion is applied to all subcritical states (filled squares) for simplicity. The subcritical state point of highest temperature, 700 K (i.e. T* = 0.93), is marked using a large grey circle. In addition, the reorientational relaxation time, τ_2R, along the supercritical isotherm is shown using red triangles.
In other words: Such orientations, which look like H-bonds on an ultra-short timescale, are mechanistically due to essentially isotropic statistical encounters of two water molecules at high temperatures, rather than due to directional intermolecular bonding. We are going to provide strong evidence in the following section that this picture indeed holds true.
Coda: Supercritical Water as an Isotropic van der Waals Fluid
What else can the physical underpinnings be that lead to signatures in observables that have long been considered to support H-bonding in SCW? To answer that question, we go back to the peak of the two-body spectral density L_2B(ν̃) analyzed in Figure 2 with the aim to understand to which kind of vibrational motion this outstandingly pronounced resonance corresponds in SCW. At ambient conditions, we have already demonstrated that it unambiguously corresponds to intermolecular H-bond stretching vibrations along essentially linear donor-acceptor arrangements within the tetrahedral H-bond network, thus confirming H-bonding in RTW. However, we have also shown that at supercritical conditions, the H-bond lifetime is way too short to support the same interpretation.
As a first step toward understanding, we probe the role that directional H-bonding plays for the hindered translational dynamics in SCW by using a standard water model that is well-suited to describe SCW, [18] namely SPC. However, in order to probe the impact of H-bonding on the structural dynamics of water, we have switched off all those directional intermolecular water-water interactions which imprint the respective orientational dependences (as described in more detail in the SI), thus leaving us with the corresponding purely isotropic Lennard-Jones interactions between the oxygen sites only. In the absence of any directional H-bonding, this so-called LJ-wat model enables us to qualitatively disentangle the spectral response of SCW at THz frequencies due to directional H-bonding from that due to purely isotropic van der Waals bonding.
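To make the LJ-wat construction concrete, a minimal sketch of the purely isotropic oxygen-oxygen interaction that remains after the directional terms are removed is given below. The numerical σ and ε values are the commonly quoted SPC oxygen Lennard-Jones parameters and are given here only as an illustrative assumption; the exact parameters used in this work are specified in the SI.

```python
import numpy as np

# Assumed SPC oxygen-oxygen Lennard-Jones parameters (illustrative values only)
SIGMA_NM = 0.3166        # sigma in nanometers
EPSILON_KJ_MOL = 0.650   # epsilon in kJ/mol

def lj_oo(r_nm):
    """Isotropic O-O pair potential V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    x = (SIGMA_NM / np.asarray(r_nm)) ** 6
    return 4.0 * EPSILON_KJ_MOL * (x * x - x)

# The potential depends on the O-O distance only, i.e. it carries no
# orientational (H-bond-like) directionality whatsoever.
print(lj_oo(0.28), lj_oo(2 ** (1 / 6) * SIGMA_NM), lj_oo(0.50))
```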
In practice, we use these simple LJ-wat reference simulations exclusively to systematically analyze the low-frequency vibrations when moving from RTW to hot subcritical to supercritical conditions as a function of temperature and density, but without any H-bonds being present. Therefore, no short-range tetrahedral orientational order is imprinted at all. Let us note in this context that Rahman [68] already realized that there are indeed low-frequency vibrations (unveiled by him using the standard single-particle vibrational DOS) between the individual particles in the subcritical LJ liquid which lead to a pronounced resonance exclusively due to hindered translational motion (obviously in the absence of any angular interactions and thus orientational order). He has also worked out that this dynamical resonance of the subcritical LJ liquid is distinctly different from that of a Langevin liquid which is only subject to ballistic motion but not to any van der Waals attraction as the LJ fluid. Coming now back to SCW, in order to compare LJ water to realistic water, it must be considered that the critical points of LJ-wat and RPBE-D3 water differ quantitatively. For the sake of mapping, we therefore use the principle of corresponding states to obtain comparable state points; we refer to the SI for details as well as for a comparison of the LJ-wat and RPBE-D3 phase diagrams. In full analogy to our RPBE-D3 simulations, we sampled the LJ-wat fluid along a corresponding supercritical isotherm from very low to very high densities at the corresponding reduced temperature T* = T/T_c = 1.056 as well as along a corresponding isochore from the triple point up to that supercritical isotherm as illustrated in supporting Figure S2.
The resulting two-body spectral densities L_2B(ν̃) of the LJ-wat fluid along the isotherm and isochore scans are depicted in panels (a) and (b) of Figure 5, respectively, in direct comparison to those of RPBE-D3 water in Figure 2 using the same line code. In the supercritical phase, see Figure 5(a), a systematic red-shift of the hindered translational mode of the simple LJ-wat fluid is observed when the density is decreased along the supercritical isotherm. This implies that, indeed, LJ-wat reproduces at supercritical conditions the same qualitative trend as observed for SCW in Figure 2, but obviously without any directional order (meaning here H-bonding) being present. The situation is distinctly different, however, when starting from ambient conditions, corresponding to RTW, and heating the LJ-wat liquid up to the supercritical isotherm (while keeping the RTW density constant) as compiled in Figure 5(b). Now, the hindered translational LJ-wat mode blue-shifts as a function of increasing temperature, while RPBE-D3 water shows exactly the opposite qualitative behavior, that is, the H-bond mode in Figure 2 red-shifts as a function of increasing temperature from RTW to SCW. Given these facts, one must conclude that preferred water-water orientations are key to describe the low-frequency intermolecular motion in RTW, being a H-bonded liquid, while they are not at all required to describe that same resonance in SCW.
How can this qualitative difference in the super- and subcritical phases of water be interpreted? There are evidently no directional (i.e., H-bonding) interactions whatsoever operational in the LJ-wat fluid, being exclusively subject to isotropic van der Waals interactions, which perfectly describe simple liquids such as rare gas atoms. This implies that any low-frequency resonance observed in the LJ-wat fluid must be unrelated to any tetrahedral directional H-bond dynamics but rather exclusively due to isotropic interactions. Yet, the spectral changes of that intermolecular resonance L_2B(ν̃) in the supercritical phase of RPBE-D3 water in response to changing the density are perfectly captured by the LJ-wat fluid. The LJ-wat fluid, however, qualitatively fails to describe the corresponding spectral changes observed when isochorically cooling the supercritical phase until ambient temperature is reached, where H-bonding interactions are decisive to describe RTW. The most obvious (Occam's Razor type) inference based on these facts is that tetrahedral directionality and thus H-bonding do not play any role in the supercritical state of water, whereas they clearly do in subcritical water.
Conclusions and Outlook
In conclusion, we unveil that the H-bond lifetime in supercritical water is on average shorter than a single oscillation period of an intermolecular vibration between two adjacent water molecules. This raises the question if supercritical water should be considered as "H-bonded". On the one hand, our ab initio simulation results are shown to nicely agree with long-existing experimental data in the supercritical phase of water, such as reorientational relaxation times obtained from NMR relaxometry or orientationally and time-averaged radial distribution functions from XRD or ND experiments. On the other hand, our original time-dependent and orientation-resolved analyses of the structural dynamics and, in particular, the low-frequency vibrational spectral response do not support the notion that supercritical water is a H-bonded fluid.
Instead, we rather show that the low-frequency intermolecular vibrations, which are clearly detected in supercritical fluid water at THz frequencies, are unambiguously due to isotropic water-water contacts, which of course include ultra-short-lived linear donor-acceptor arrangements among many other orientations. This scenario is in stark contrast to ambient liquid water where the THz resonance is very clearly ascribed to long-lived linear donor-acceptor arrangements and thus to the tetrahedral H-bonded water network. Here, long-lived implies that many water-water oscillations are possible in linear donor-acceptor arrangements being the hallmark of H-bonding, whereas short-lived means that not even a single such intermolecular oscillation is possible. As such, the underlying hindered translational motion of water molecules at supercritical conditions does not correspond to intermolecular H-bond stretching vibrations in highly directional tetrahedral arrangements, as opposed to ambient liquid water. This is the reason why the hindered translational motion, and thus the low-frequency vibrational spectral response, in supercritical water is the same as that of supercritical van der Waals fluids. The latter are clearly not subject to any directional H-bonding and, thus, can be perfectly described using purely isotropic Lennard-Jones interactions as we explicitly demonstrate here for the supercritical state. We think that the absence of H-bonding is the fundamental reason why supercritical water is a distinctly different solvent than ambient liquid water.
Acknowledgements
We are grateful to Bikramjit Sharma (Bochum) for having provided the hybrid revPBE0-D3 simulation for room temperature water as used for reference in the SI. This work was partially supported by DFG via MA 1547/11 and is also part of the Cluster of Excellence "RESOLV".
Figure 5 (caption, continued): Note that all L_2B(ν̃) spectra are scaled, see text, such that their maximum intensities are identical. The color code again corresponds to the highlighted state points in the phase diagram in Figure 1(b). In the SI we detail how exactly thermodynamic state points of the RPBE-D3 model and a simple LJ fluid model can be compared. | 2020-08-05T13:06:30.064Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "642d4051e9775c5acb6e33ab3501312da02771e1",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.202009640",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f7a1fcdb329b0298f1b745e209a79acbbd21c85",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
203653498 | pes2o/s2orc | v3-fos-license | Association between palliative care and end-of-Life care for patients with hematological malignancies
Abstract To date, few studies have examined end-of-life (EOL) care for patients with hematological malignancies (HMs). We evaluated the effects of palliative care on the quality of EOL care and health care costs for adult patients with HMs in the final month of life. We conducted a population-based study and analyzed data from Taiwan's Longitudinal Health Insurance Database, which contains claims information for patient medical records, health care costs, and insurance system exit dates (our proxy for death) between 2000 and 2011. A total of 724 adult patients who died of HMs were investigated. Of these patients, 43 (5.9%) had received only inpatient palliative care (i-Pal group), and 19 (2.6%) had received home palliative care (h-Pal group). The mean health care costs during the final month of life did not differ significantly between the non-Pal and Pal groups (p=0.315), nor among the non-Pal, i-Pal, and h-Pal groups (p=0.293). In the multivariate regression model, the i-Pal group had lower risks of chemotherapy, ICU admission, and receipt of CPR, but higher risks of at least two hospitalizations and of dying in hospital after adjustments. The h-Pal group showed similar trends to the i-Pal group but a lower risk of dying in hospital after adjustments. Patients with HMs who had received palliative care could benefit from less aggressive EOL cancer care in the final month of life. However, only 8.6% of patients with HMs received palliative care. The related factors of more hospitalizations and dying in hospital warrant further investigation.
Introduction
The incidence of hematological malignancies (HMs) in Taiwan is markedly lower than that in Western countries, but it has risen sharply in recent decades. [1] In Taiwan, HMs accounted for 4.53% of all cancer deaths and were the seventh most common cause of cancer-related deaths in 2012. [2] Despite advancements in diagnosis and treatment, mortality due to HMs has not decreased. A previous study reported that the mean number of symptoms and level of distress among patients with HMs were comparable to those of patients with metastatic nonhematological malignancy. [3] In addition, patients with HMs were often treated with intensive antineoplastic regimens until the last days of life. [4] Under some medical conditions, such as infections, cytopenias, and coagulopathies, patients with HMs needed frequent hospitalizations, invasive investigations, monitoring, and therapies. [5,6] A cohort study reported that patients with HMs received more inappropriate care at the end of life (EOL). [7] Continued efforts are needed to improve the provision of quality EOL care for patients with HMs.
Palliative care is an interdisciplinary approach to symptom management, psychological support, and treatment decision-making for patients with serious illnesses and their family members. Growing evidence highlights that patients with cancer could benefit greatly from palliative care, which can reduce symptom burden, [8] improve quality of life and mood, [9,10] increase the likelihood of survival, [10,11] and improve outcomes for caregivers as well as for patients. [12,13] In addition, palliative care services may assist hematologists in managing their patients' suffering and quality of life during periods of increased symptom burden. [3] In Taiwan, there are no residential facilities to provide hospice care. "Palliative care" encompasses much more than just EOL or hospice care. [13] In the current study, palliative care included hospital-based inpatient care, outpatient services, and home care.
Six QIs of EOL cancer care have been developed and are outlined as follows: undergoing chemotherapy during the last 2 weeks of life, having more than one emergency room (ER) visit in the final month of life, being admitted to a hospital at least twice in the final month of life, receiving intensive care unit (ICU) care in the final month of life, receiving cardiopulmonary resuscitation (CPR) in the final month of life, and dying in a hospital. [14,15] These QIs have been adopted in the United States, [16] Canada, [17] and Taiwan [18] and are considered indicators of aggressive EOL cancer care. All indicators are considered to reflect poor-quality care. More aggressive EOL care is considered inappropriate for terminally ill patients. [17] Inappropriate EOL care was examined by a composite score adapted from Tang et al. [19] Therefore, measuring this score is crucial for evaluating the quality of palliative care programs. In this study, we used Taiwan's National Health Insurance Research Database (NHIRD) to evaluate the impact of palliative care on QIs of EOL cancer care and health care costs for patients with HMs in the final month of life.
Data source
Taiwan's National Health Insurance (NHI) program was implemented in March 1995; it is a single-payer program that covered as many as 99.9% of Taiwan's residents in 2012. [20] Taiwan's NHI has the unique characteristics of universal insurance coverage, comprehensive services (including medications, home care, even Chinese herbal medicine therapy) provided, and a single-payer system with the government as sole insurer. Patients have free access to any health care system and provider they choose. Health care systems are reimbursed for services provided, and copayment is waived for patients with examined catastrophic illness certificate (CIC), including malignancy. In the present study, patient data were linked to Taiwan's 2000 Longitudinal Health Insurance Database (LHID2000), a subset of the NHIRD. The LHID2000 contains all original claims data for 1 million individuals randomly sampled from the 2000 NHIRD Registry. All patients who had a diagnosis of hematological malignancies with matching CIC between January 1, 2000 and December 31, 2011 were included in our study. We followed patients with HMs until December 2012 by using the LHID2000. Claims data included medical records (inpatient care, outpatient records, and home care) of patients who had and had not received palliative care. Patients under 20 years old and those who had died within 1 month after HM diagnosis were excluded. The International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes (200-208) and A codes (A140, A141, and A149) were used to define HMs. To increase the validity of diagnoses of diabetes or hypertension, we defined patients with these conditions as those with 3 reported diagnoses of diabetes or three instances of hypertension in their medical claims data based on the ICD-9-CM or A codes for these disease entities. [21,22]
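The cohort definition described above amounts to a straightforward filter over the claims records. The sketch below illustrates that logic with pandas; the column names and file layout are assumptions made for illustration and do not correspond to the actual LHID2000 file structure.

```python
import pandas as pd

HM_ICD9 = {str(code) for code in range(200, 209)}   # ICD-9-CM codes 200-208
HM_A_CODES = {"A140", "A141", "A149"}

def select_hm_cohort(claims: pd.DataFrame) -> pd.DataFrame:
    """Keep adult patients with an HM diagnosis who survived more than 1 month after diagnosis.

    Assumed columns (hypothetical names): patient_id, icd9_3digit, a_code,
    age_at_dx, diagnosis_date, death_date.
    """
    is_hm = claims["icd9_3digit"].isin(HM_ICD9) | claims["a_code"].isin(HM_A_CODES)
    cohort = claims[is_hm & (claims["age_at_dx"] >= 20)].copy()   # exclude patients under 20
    survival_days = (cohort["death_date"] - cohort["diagnosis_date"]).dt.days
    return cohort[survival_days > 30]                             # exclude deaths within 1 month
```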
Variables
Patient characteristics included age, sex, age at death, median (range) survival in years after HM diagnosis, chemotherapy (chemotherapy was assumed whenever there was an order for a reimbursement code of oral or intravenous chemotherapy during the study period), geographic location, [23] level of urbanization, and whether diagnosis was made at a teaching hospital (Table 1). Comorbid conditions listed in the Charlson comorbidity index (CCI) and common comorbidities (e.g., diabetes, hypertension, stroke, and chronic kidney disease) were identified based on ICD-9-CM codes. [24]
2.3. Variable definitions
Inpatient palliative, home palliative, palliative, and non-palliative care groups: We searched the claims data for the reimbursement codes of inpatient palliative care and home palliative care. Patients with the codes for inpatient palliative care and without the codes for home palliative care were classified as the inpatient palliative (i-Pal) group. Patients with the codes for home palliative care were classified as the home palliative (h-Pal) group. Accordingly, the inpatient palliative units also served as the back-up system for patients receiving home palliative care when they experienced exacerbating symptoms that required further readmission to the palliative unit or whose families needed respite care from caregiving. Patients in the i-Pal and h-Pal groups were combined into the Pal group. Patients with HMs who had not received palliative care were categorized as the non-palliative (non-Pal) group.
2.3.1. Charlson comorbidity index (CCI). We calculated CCI scores from ICD-9-CM-based diagnosis and procedure codes using the Deyo method, and we applied the calculated indices to inpatient and outpatient claims as reported by Klabundle et al. [25,26]

2.3.2. Health care costs. We calculated each patient's health care costs by summing the inpatient and outpatient service costs listed in his or her claims records. We converted these costs to US dollars based on the average U.S. Dollar to New Taiwan Dollar exchange rate in 2006 (US$1.00 = NT$32.53).
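A minimal worked example of the cost calculation, assuming per-patient inpatient and outpatient totals are already available; the function name and the example amounts are illustrative, not study values.

```python
NTD_PER_USD_2006 = 32.53   # exchange rate stated in the text

def cost_in_usd(inpatient_cost_ntd: float, outpatient_cost_ntd: float) -> float:
    """Sum a patient's inpatient and outpatient claim costs and convert NT$ to US$."""
    return (inpatient_cost_ntd + outpatient_cost_ntd) / NTD_PER_USD_2006

# Example: NT$100,000 inpatient + NT$26,000 outpatient ≈ US$3,873.
print(round(cost_in_usd(100_000, 26_000)))
```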
2.3.3. Socioeconomic status (SES).
According to the procedures described in a previous study, [27] the income categories are generally representative of the 5 income groups in Taiwan in 2005. [28] In this study, we classified SES into three groups: the low SES group comprised patients earning less than US$922 (NT$30,000) per month, the moderate SES group comprised patients earning between US$922 and US$3074 (NT$30,000-100,000) per month, and the high SES group comprised patients earning more than US$3074 (NT$100,000) per month.
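The three-level SES grouping can be expressed as a simple threshold function; the NT$ cut-offs mirror those quoted above, and the function name is illustrative.

```python
def ses_group(monthly_income_ntd: float) -> str:
    """Classify a patient's SES from monthly income in NT$, using the cut-offs above."""
    if monthly_income_ntd < 30_000:
        return "low"
    if monthly_income_ntd <= 100_000:
        return "moderate"
    return "high"
```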
2.3.4. Aggressive EOL care and composite scores. The aggressiveness of EOL care was examined using a composite measure adapted from Earle et al. [14] The following 6 QIs of EOL cancer care in the final month of life were employed: chemotherapy within the final 2 weeks of life; more than 1 ER visit, more than 1 hospitalization, ICU admission, and CPR during the final month of life; and dying in hospital. Rather than using this measure only to determine whether any of the 6 indicators occurred, we scored 1 point per indicator per person. Composite scores ranged from 0 to 6, with a higher score indicating more aggressive EOL care, as adapted from Tang et al. [19] In recent years, aggressive EOL care has been considered inappropriate EOL cancer care. The protocol for this study was reviewed and approved by the Research Ethics Committee of Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taiwan (No. B10301001). Because the analyzed NHIRD files contained only deidentified secondary data, the review board waived the requirement for informed consent.
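A small sketch of the 0-6 composite score described above: one point per indicator met, with higher totals indicating more aggressive EOL care. The indicator key names are invented for illustration.

```python
EOL_INDICATORS = (
    "chemotherapy_last_2_weeks",
    "more_than_1_er_visit",
    "more_than_1_hospitalization",
    "icu_admission",
    "cpr",
    "died_in_hospital",
)

def composite_score(flags: dict) -> int:
    """Score 1 point per indicator met; range 0-6, higher = more aggressive EOL care."""
    return sum(int(bool(flags.get(name, False))) for name in EOL_INDICATORS)

# Example: a patient with >1 hospitalization and in-hospital death scores 2.
print(composite_score({"more_than_1_hospitalization": True, "died_in_hospital": True}))
```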
Statistical analysis
All statistical analyses were performed using R version 3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). A two-sided P value of less than .05 was considered statistically significant. The distributional properties of continuous and categorical variables were expressed as the median (range) or frequency (percentage). Survival was defined as the time from HM diagnosis until death. The Kaplan-Meier estimator was used to estimate the survival probabilities of patients after HM diagnosis, and differences were tested using the log-rank test. [29] Normality was examined using the Shapiro-Wilk test. In the univariate analysis, the Wilcoxon rank-sum test, Kruskal-Wallis rank sum test, chi-squared test, and Fisher's exact test were conducted to examine differences in the distributions of continuous and categorical variables between 2 or 3 groups. We assessed patients' demographic and clinical characteristics, including age, sex, CCI score, geographic area of residence, and treatment modality (Tables 1 and 2).
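The univariate tests named above were run in R; the following Python sketch (using scipy and lifelines purely as stand-ins) shows equivalent calls on toy data to illustrate the analysis steps. The sample sizes, distributions, and contingency table are invented and do not reproduce the study's values.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
pal_surv = rng.exponential(0.7, size=62)        # toy survival times in years
nonpal_surv = rng.exponential(0.7, size=600)
pal_cost = rng.gamma(2.0, 1500.0, size=62)      # toy cost data
nonpal_cost = rng.gamma(2.0, 2000.0, size=600)

print(stats.shapiro(pal_cost))                                    # normality check
print(stats.mannwhitneyu(pal_cost, nonpal_cost))                  # Wilcoxon rank-sum test
print(stats.kruskal(pal_cost[:30], pal_cost[30:], nonpal_cost))   # Kruskal-Wallis, 3 groups
table = np.array([[4, 58], [175, 425]])                           # e.g. one indicator by group
chi2, p, _, _ = stats.chi2_contingency(table)                     # chi-squared test
print(chi2, p)
print(stats.fisher_exact(table))                                  # Fisher's exact test
print(logrank_test(pal_surv, nonpal_surv).p_value)                # compare survival curves
```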
A multivariate analysis was conducted by fitting multiple logistic regression models with the stepwise variable selection procedure to determine vital predictors of QIs during the final month of life. Generalized additive models were fitted to detect the potential nonlinear effects of continuous covariates and determine appropriate cutoff points for discretizing continuous covariates if necessary, during stepwise variable selection.
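A minimal sketch, on synthetic data, of fitting a logistic model for one binary QI and computing the c-statistic described in the next paragraph. It omits the stepwise selection and GAM-based cutoff checks, and the covariates and outcome shown in the comments are assumptions rather than the authors' exact model specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                     # e.g. age, CCI score, SES level, Pal group
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)   # e.g. in-hospital death

model = LogisticRegression(max_iter=1000).fit(X, y)
c_statistic = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"c-statistic = {c_statistic:.2f}")         # >= 0.7 read as acceptable discrimination
```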
We assessed the goodness of fit of the final logistic regression model based on the estimated area under the receiver operating characteristic curve (AUC), also called the "c-statistic". In practice, a c-statistic value (c = 0-1) of ≥0.7 suggests an acceptable level of discriminatory power. Statistical tools for regression diagnostics, including multicollinearity checking, were applied to identify any problems associated with the regression model or data.

Results

receiving palliative care for more than 3 months, and 4 patients (6.5%) receiving palliative care for more than 6 months after enrollment. The study flowchart is shown in Figure 1.
The median age at diagnosis of the patients in the Pal group was higher than that of the patients in the non-Pal group (76.1 vs 68.4 years; P < .001). Compared with the non-Pal group, the Pal group had significantly higher proportions of patients with low SES (P = .002) and patients living in urban areas (P = .024). The trend remained similar after separating the Pal group into the i-Pal and h-Pal groups (Table 1). Kaplan-Meier analysis revealed that median survival after diagnosis did not differ between the 2 groups (0.75 vs 0.67 years; P = .377; Fig. 2) or among the 3 groups (P = .585). The six QIs of EOL cancer care and health care costs in the final month of life were compared between the Pal and non-Pal groups (Table 2). The median composite scores did not differ significantly between the Pal and non-Pal groups (2 vs 2, P = .758). Compared with the non-Pal group, the Pal group had lower proportions of patients receiving chemotherapy in the final 2 weeks of life (4.8% vs 35.2%; P < .001), requiring ICU admission (4.8% vs 29.2%; P < .001), and requiring CPR (9.7% vs 39.4%; P < .001). However, the Pal group had significantly higher proportions of patients requiring more than 1 hospitalization (32.3% vs 16.5%; P = .005) and dying in hospital (69.4% vs 54.4%; P = .024). No difference was observed in the proportion of patients requiring more than 1 ER visit (56.5% vs 49.8%; P = .354). The median health care cost per person during the final month of life did not differ significantly between the Pal and non-Pal groups (US$3096 [0-13,762] vs US$3900 [0-43,783], P = .315). Similar results were found after separating the Pal group into the i-Pal and h-Pal groups.
Discussion
The novel finding in this study was that patients with HMs who received palliative care had less inappropriate EOL cancer care in the final month of life. One contributing factor might be that patients with HMs have more complications near the end of life, such as those related to antineoplastic regimens, as well as infections, cytopenias, and coagulopathies, and therefore need more frequent ER visits and hospitalizations; palliative care teams may assist hematologists in managing their patients' suffering and quality of life during periods of increased symptom burden. [3] In Taiwan, palliative care programs include both inpatient and home care models and have been available since 1990 for patients with serious illnesses, without an absolute limit on predicted survival duration. [30] Palliative care is covered by the NHI, allows palliative chemotherapy or radiotherapy, and includes inpatient and home services. Thus, patients requiring inpatient palliative care were admitted to hospitals in Taiwan. Although most EOL quality measures were met by hematological oncologists, indicators such as at least 2 hospitalizations and dying in hospital did not decrease in this model. We suggest that the hospitalization QI for EOL care could be modified to days of hospital stay in the final month of life; in this study, days of hospital stay in the final month of life did not differ between the Pal and non-Pal groups. One reason for more hospital stays might be that both patients with HMs and those with solid tumors have a significant symptom burden at the time of referral for palliative care, and patients with HMs exhibit more substantial drowsiness and tiredness than do those with solid tumors. [31] A previous study reported that home-based palliative care was associated with a significant reduction in dying in a hospital. [32] In this study, we further separated the palliative group into the h-Pal and i-Pal groups and found that the h-Pal group showed trends similar to the i-Pal group, with a lower trend of dying in hospital after adjustment; the small sample size of the h-Pal group may explain why this did not reach significance. We also found that the median survival of patients with HMs was 0.74 years, which differs from previous reports; for example, the 5-year survival rate for patients with chronic lymphocytic lymphoma in Taiwan during 1990 to 2004 was 51.1%. [33] Because patients with HMs who were still alive at the end of the study were excluded, selection bias may exist.
Another issue in the care of patients with HMs is how to increase the proportion who receive palliative care. Although the patients with HMs in this study who received palliative care benefited from it during EOL care, palliative care use was only 8.6% among patients with HMs. A cohort study reported that 8.0% of patients with HMs had received inpatient palliative care in the United States, [7] and a previous study reported that 19.9% of patients with lung cancer received inpatient palliative care in Taiwan. [34] An integrative systematic review reported that palliative care for patients with HMs is often limited to the EOL phase, with late referral to palliative care. [35] Previous studies have reported that HMs were associated with inappropriate cancer-directed care during EOL care and underuse of palliative care. [16,36,37] Possible reasons for lower use of palliative wards are (1) patients maintaining strong relationships with their oncologists and not wishing their care to become fragmented, (2) lower severity of symptoms among patients with HMs, and (3) oncologists tending toward optimism in their prognostication for patients with advanced cancers. [38] Identifying when the EOL period begins in patients with HMs is crucial for hematological oncologists. In a previous study conducted as a series of focus groups with hematological oncologists, researchers reported that the factors influencing initiation of EOL care for patients with HMs were age, comorbidities, and performance status; the researchers also found that disease-directed treatments were causing a significant decline in patient quality of life. [39] Other barriers included hematologic oncologists' attitudes and beliefs toward EOL care and patients' and family members' preferences. [40] There are limited data on the quality of EOL care for patients with HMs. A previous study reported that translating evidence into action improves chronic illness care; [41] the successful approaches included provider-oriented components, such as continuing education or physician feedback, information systems changes, and patient-oriented interventions. [41] We could learn from this model of improving chronic illness care. A potential method for increasing palliative care use concerns the timing of integrating palliative care. In 2012, the American Society of Clinical Oncology offered a guideline update on the integration of palliative care into standard oncologic care, [42] recommending that inpatients and outpatients with advanced cancers receive dedicated palliative care as early as possible in the disease course, alongside standard oncologic care. [43] One study reported that patients with HMs had a considerable physical and psychological symptom burden and that the most appropriate time to introduce palliative care might be during increased symptom burden. [3] Prospective studies evaluating earlier implementation of palliative care alongside standard care of HMs are warranted. Another potential method is to reinforce the criteria (the indicators of EOL cancer care) that the health care team should follow when making final decisions to continue or discontinue treatment in EOL cancer care.
Further studies should also look into patient-reported quality of life outcomes.
Limitations
HMs were defined as incurable diseases at presentation or in relapsed/refractory status. [7] Information about the staging of HMs was not available in the claims data, which is a major limitation of the current study. We classified HMs as leukemia, lymphoma, and multiple myeloma, but the numbers in these 3 subgroups were too small to analyze separately; this was another major limitation. This study had other limitations. First, restricting our cohort to adult patients might limit the generalizability of our findings to people younger than 20 years of age. Second, misclassification bias may have occurred because of inaccuracy in some of the variables used, including calculations of comorbidity scores. Third, the patients included in this study were not randomized to the Pal (i-Pal, h-Pal) and non-Pal groups, so selection bias is possible. Fourth, the risk factors related to each QI (e.g., clinical symptoms and signs, patients' or family members' preferences, physician recommendations, and do-not-resuscitate designations) were not recorded in the administrative database; patients' and family members' preferences may have influenced some outcomes. Fifth, the care of HMs has improved over time, so the claims data used in the current study may reflect out-of-date care; previous studies have reported that clinical trials in HMs have grown rapidly since 2010, yet patient-centered outcome measurements, such as quality of life, health care utilization, and functional capacity, were incorporated in only a small number of trials. [44] Sixth, we used insurance system exit dates as our proxy for death; the proxy date might be a few days later than the actual death date, although such discrepancies occurred only sparsely. Finally, only 35.5% of the patients survived for more than 30 days after receiving palliative care, so the inappropriate EOL care score might be overestimated and the health care costs underestimated.
Conclusion
Patients with HMs who receive palliative care could benefit from less inappropriate EOL cancer care in the final month of life. However, palliative care was received by only 8.6% of patients with HMs. The factors related to more hospitalizations and dying in hospital warrant further investigation. | 2019-10-04T13:17:00.555Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "5b6bf8d51089e168a476b12bebee1c66389eda2a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000017395",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a745316bd3614e99583bbb82eaff8c88c87dbd54",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
150461571 | pes2o/s2orc | v3-fos-license | The Development of an Electronic Book on Quantum Phenomena to Enhance Higher-Order Thinking Skills of the Students
This research aims to produce an electronic book on quantum phenomena that can be used to enhance students' higher-order thinking skills. This is development research using the ADDIE method, which consists of 5 stages, namely: (1) analysis, (2) design, (3) development, (4) implementation, and (5) evaluation. Pre-research data were collected from 130 students and 4 physics teachers of 3 Senior High Schools in Bandar Lampung using a questionnaire and were analyzed descriptively and quantitatively. At the design stage, a storyboard was made based on the needs of teachers and students in the schools as well as the opinions of four experts in physics education. The electronic book on quantum phenomena was developed with a scientific approach based on indicators of higher-order thinking skills. At the development stage, the electronic book design was realized, followed by construction of the higher-order thinking skills instrument. The electronic book was implemented with the students of one class of a Senior High School in Bandar Lampung. The result of the development of the electronic book on quantum phenomena was assessed on 3 aspects: (1) validity, (2) practicality, and (3) effectiveness. Validity and practicality data were obtained using a validation sheet and questionnaire; effectiveness data were obtained using a higher-order thinking test instrument. Based on the data analysis, the resulting electronic book on quantum phenomena is valid, practical, and effective for enhancing students' higher-order thinking skills.
Introduction
The basic purpose of education is to enable someone to use knowledge in solving problems in daily life. Problem-solving in daily life allows someone to keep learning by developing thinking skills [1, 2]. Generally, learning activity at school: (1) is based only on books (books used as guidance or as physics learning sources), (2) is teacher-centered with the lecture method, (3) ends with an assignment to be submitted at the next meeting, and (4) is assessed based on the student's final answer. The assignments given by teachers involve only completing the calculation questions contained in the student worksheet or guidebook, so the thinking skills of the students are not improved.
In general, students' difficulties in learning physics are found in conducting experiments, using formulas and calculations, reading graphs, and providing conceptual explanations at the same time [3], [4]. Quantum phenomena are among the abstract physics topics [4,5]; they are highly microscopic, cannot be observed directly [7], and require higher-order thinking skills to comprehend comprehensively. The study of quantum phenomena covers black body radiation, the Stefan-Boltzmann law, Wien's displacement law, the Rayleigh-Jeans law, Planck's quantum theory, the photoelectric effect, the Compton effect, and X-rays, all of which are essential to study because they form the basis for the development of modern science and technology [6].
Based on the data from the requirement analysis, because the time available for grade XII students in the even semester is very limited, teachers only explain summaries of the materials they consider likely to appear in the exam. The limited face-to-face learning time meant that 72% of students had problems understanding the materials. These limitations push teachers to innovate by using various media and learning resources. Learning that uses an electronic book on quantum phenomena based on the Learning Content Development System (LCDS) program is expected to help teachers deliver learning content visually and interactively, so that learning becomes more interesting and effective. Innovations that can be made in learning the material on quantum phenomena include providing practical simulations, visualizing quantum phenomena, and conducting interactive tests so that the material can be easily understood by the students [7,8].
Effective and innovative learning based on the requirements of teachers and students can be achieved by providing an interactive electronic book that students can use independently. The electronic book on quantum phenomena consists of descriptions of learning materials, animations, simulations, videos, sample questions, and practice questions that can enhance higher-order thinking skills. Higher-order thinking skills are needed to solve problems, make decisions, and explain phenomena encountered in daily life [10]. This research aims to produce an electronic book on quantum phenomena that can be used to enhance students' higher-order thinking skills.
Research Method
The method used in this research is development research based on the ADDIE model, consisting of: (1) analysis, (2) design, (3) development, (4) implementation, and (5) evaluation. Analysis activities include analysis of requirements and of learning materials in schools. The instrument used at the analysis stage was a questionnaire given to 130 high school students and 4 physics teachers drawn randomly from 3 different Senior High Schools in Bandar Lampung to establish the criteria for the required electronic book. At the design stage, a design for the electronic book on quantum phenomena, encompassing the breadth and depth of the material, was produced so that it could foster students' higher-order thinking skills.
Figure 1 Flowchart of research and development
At the development stage, the electronic book design was realized, followed by construction of the higher-order thinking skills test instrument. The resulting electronic book on quantum phenomena was validated by 5 experts in physics education using a validation sheet. The data collected were then analyzed descriptively and quantitatively. The feasibility of the developed product was assessed on three aspects, namely validity, practicality, and effectiveness, and the data were collected using a questionnaire and a higher-order thinking test. The validity of the electronic book was judged on the material and design aspects. The practicality of the electronic book was seen in the implementation of learning activities using the electronic book on quantum phenomena. The effectiveness of the developed product was assessed based on students' responses and improvements in their learning outcomes. The electronic book was implemented with high school students in Bandar Lampung, and its effectiveness was measured using students' pre-test and post-test scores expressed through the n-gain value [11].
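A brief sketch of the n-gain calculation, assuming the standard Hake normalized-gain formulation commonly associated with reference [11]; the example pre-test and post-test averages are illustrative, not the study's actual scores.

```python
def n_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: improvement achieved relative to the maximum possible improvement."""
    return (post - pre) / (max_score - pre)

# Example: average pre-test 40 and post-test 78 (out of 100) gives
# n-gain = (78 - 40) / (100 - 40) ≈ 0.63, the "quite effective" category.
print(round(n_gain(40, 78), 2))
```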
Results and Discussion
Based on the results of the analysis of teachers' and students' requirements, it is known that changes are needed in learning activities: many activities can be conducted to make students active in learning, such as providing other learning resources, conducting experiments, and using various learning media [5]. Technological advances have the potential to drive innovation in learning and can engage students in innovative, exploratory learning activities [12]. An electronic book, as a portable reading device, provides ease of access and use [12,13]. The right design allows students to save and share information in a short time [14,15] and to integrate it with other sources and learning media. The interactive electronic book chart is presented in Figure 2. The electronic book on quantum phenomena consists of material on black body radiation, the Stefan-Boltzmann law, Wien's displacement law, the Rayleigh-Jeans law, Planck's quantum theory, the photoelectric effect, the Compton effect, and X-rays. Each topic contains visualizations of quantum phenomena, graphics, animations, simulations, concepts, theories, principles, formulas, example questions, discussions, interactive tests, and implementations in daily life, which can be used to enhance higher-order thinking skills. The feasibility assessment results are presented in Table 1. Based on the data in Table 1, it is concluded that the electronic book on quantum phenomena is feasible to use in teaching the material of quantum phenomena, because it has met the aspects of validity (material and design), practicality (implementation of learning activities, implementation of the social system, and implementation of reaction principles), and effectiveness (learning outcomes and student responses in learning activities). The effectiveness of the electronic book in terms of student learning outcomes obtained an n-gain score of 0,63 (quite effective), meaning that the electronic book on quantum phenomena is effective in enhancing students' higher-order thinking skills. The strengths of the electronic book compared with a textbook in schools are the presence of practical simulations, visualization of physical phenomena, and interactive tests with feedback [16][17][18][19]. Based on the practicum simulation for the black body radiation material (Figure 3), students can find examples of black bodies, identify their characteristics, and find out how radiation is used in daily life. Higher-order thinking skills that can be grown through the simulation include: (1) building basic skills through observation, (2) making conclusions, (3) considering and integrating, (4) thinking fluently, and (5) thinking flexibly. Simulations and animations, examples of virtual technology that have so far been regarded only as learning media, can be used to improve students' understanding of a material and help students develop explanations of complex material [21]. Virtual technology in the form of a virtual laboratory can incorporate all components of laboratory activities through observed phenomena to improve students' physics learning outcomes and enhance their learning experience [21,22]. One way to ease students' understanding is to make learning meaningful [24][25][26] by preparing materials and concepts that can be linked to daily life.
In studying the concept of radiation as the basis of the black body radiation material, a number of questions are asked based on the presented phenomena, including: "What do you feel when you are under the heat of the sun?" (Figure 4); "Notice the colors of the clothes they are wearing! Who feels the hottest?"; and "According to your prediction, what color of shirt dries fastest when dried under the heat of the sun?" (Figure 5). Thus, teachers are required to deliver material by building concepts as the basis for enhancing higher-order thinking skills [27]. In the learning activities, Figure 4 and Figure 5 are used to build basic concepts related to radiation phenomena in daily life. After students can build the basic concepts, the teacher guides them to analyze the available questions, so that students can enhance their higher-order thinking skills in the learning activity.
Conclusion
According to the research results, the electronic book on quantum phenomena, containing learning materials equipped with physics phenomena displayed directly or through animations, practical simulations, visualizations of physics phenomena, and interactive tests, is feasible and can foster students' higher-order thinking skills. The validation test results show a value of 3,60 (very valid), meaning that the electronic book on quantum phenomena has met the validity aspects of material and design. The practicality test obtained a value of 3,88 (very practical), meaning that the electronic book on quantum phenomena is very practical to use in learning activities. The effectiveness of the electronic book on quantum phenomena is based on students' responses and learning outcomes. Students' responses gave a value of 3,15 (positive response), meaning that students feel that the electronic book on quantum phenomena is very effective, interactive, efficient, and easy to use in learning activities. Based on the results of the test using the higher-order thinking test instrument, the n-gain score is 0,63 (quite effective), meaning that the electronic book on quantum phenomena can enhance students' higher-order thinking skills. | 2019-05-13T13:05:27.886Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "e7c14900889e4ac4cbdd7172d2e1d07cd3ffb762",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1155/1/012012/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e3792e11b7f4c3804ccbbecae60351ba0912b7ec",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
232164560 | pes2o/s2orc | v3-fos-license | Rapid prioritisation of topics for rapid evaluation: the case of innovations in adult social care and social work
Background Prioritisation processes are widely used in healthcare research and increasingly in social care research. Previous research has recommended using consensus development methods for inclusive research agenda setting. This research has highlighted the need for transparent and systematic methods for priority setting. Yet there has been little research on how to conduct prioritisation processes using rapid methods. This is a particular concern when prioritisation needs to happen rapidly. This paper aims to describe and discuss a process of rapidly identifying and prioritising a shortlist of innovations for rapid evaluation applied in the field of adult social care and social work. Method We adapted the James Lind Alliance approach to priority setting for rapid use. We followed four stages: (1) Identified a long list of innovations, (2) Developed shortlisting criteria, (3) Grouped and sifted innovations, and (4) Prioritised innovations in a multi-stakeholder workshop (n = 23). Project initiation through to completion of the final report took four months. Results Twenty innovations were included in the final shortlist (out of 158 suggested innovations). The top five innovations for evaluation were identified and findings highlighted key themes which influenced prioritisation. The top five priorities (listed here in alphabetical order) were: Care coordination for dementia in the community, family group conferencing, Greenwich prisons social care, local area coordination and MySense.Ai. Feedback from workshop participants (n = 15) highlighted tensions from using a rapid process (e.g. challenges of reaching consensus in one workshop). Conclusion The method outlined in this manuscript can be used to rapidly prioritise innovations for evaluation in a feasible and robust way. We outline some implications and compromises of rapid prioritisation processes for future users of this approach to consider. Supplementary Information The online version contains supplementary material available at 10.1186/s12961-021-00693-2.
social care broadly, e.g. new models of care, service innovations, payment and commissioning innovations, person and community-centred approaches to innovations and technological innovations. This approach was undertaken in respect of England but is relevant anywhere.
The majority of innovations that have been piloted and implemented in adult social care and social work in England are mostly small in scale and/or inconsistently implemented [2]. It is essential to learn more about their effectiveness, cost-effectiveness, context and generalisability when considering their potential to be rolled-out more widely [3]. High quality and timely evaluation is needed to identify which innovations are priorities for adoption and scale-up [3]. Given the large number of innovations that could be evaluated, it is necessary to prioritise where to focus limited resources that are available to undertake evaluations.
Identification and prioritisation processes are more widely conducted in healthcare research and evaluation than in social care. It is important to note that there are likely differences between priority setting for research compared to priority setting for evaluation and implementation (e.g. different priority setting exercises may require different criteria and nuanced applications). There is a substantial, international literature on priority setting in healthcare research, but there is currently no agreed gold standard approach [4][5][6][7]. Yoshida et al. highlighted the need for a "transparent, replicable, systematic and structured approach" to priority setting [6]. Viergever et al. identified nine best practice themes when conducting health priority setting exercises: context, use of a comprehensive approach, inclusiveness, information gathering, planning for implementation, selection of relevant criteria, methods for deciding on priorities, evaluation and transparency [5]. The WHO reviewed its health research priorities using a research cycle framework [7]. Recognising the need for transparent reporting of priority setting, Tong et al. developed the REporting guideline for PRIority SEtting of health research (REPRISE) guideline, which covers 10 domains: context and scope, governance and team, framework for priority setting, stakeholders/participants, identification and collection of priorities, prioritization of research topics, output, evaluation and feedback, translation and implementation, and funding and conflict of interest [8].
Prioritisation of research and evaluation in social care and social work has attracted less attention than in healthcare, but that may be changing. Increasing demand and constraints in public spending have put social care and social work services in England under severe pressure and that is heightening the need for service evaluations. The Health Foundation and The King's Fund estimates that demand for publicly funded social care will increase in real terms by 3.7% annually on average over the period to 2030-2031, which is much faster than historic growth in public funding [9]. In response, local government, care providers and other organisations have been looking at novel approaches to deliver services. The Local Government Association's Green Paper for adult social care and wellbeing [10] and other reports such as "Six Innovations in Social Care" [11] and "Total Transformation of care and support" [12] highlight innovations across the UK in: demand management; working in closer partnership with the National Health Service (NHS), voluntary and social enterprise sectors; and using community-based assets to provide care solutions to local populations [10,12]. Government statements on the future of social care in England highlight the importance of innovation [13], and in 2019 the Department of Health and Social Care (England) funded the development of the Social Care Innovation Network, a collaboration between the Social Care Institute for Excellence (SCIE), Think Local Act Personal (TLAP) and Shared Lives Plus to support local providers, commissioners and citizens to adopt evidence-based innovations in social care [14,15].
Whilst priority setting exercises are more numerous in healthcare research, there are also examples in social care and social work. The James Lind Alliance (JLA) Adult Social Work Priority Setting Partnership (PSP) identified in 2018 the top 10 priority research questions in adult social work in England using the JLA's long established PSP approach for the first time in a non-health related setting [16]. The approach included stakeholders from the adult social care workforce and providers, but also service users and their carers [16]. In 2019, the National Institute for Health Research (NIHR) undertook a scoping review of adult social care research priorities to guide decisions about funding further research in the area. Thirty distinct research priorities were identified from National Institute for Health and Care Excellence guidelines, NIHR-funded reviews and research, JLA PSPs and other documents [17]. Stakeholder engagement is necessary when conducting priority setting processes as there can be differences between researcher priorities for research topics and end user priorities for research topics [18]. Additionally, stakeholder engagement may increase relevance and reduce research waste [19].
To supplement the JLA and NIHR exercises, which focused on research questions, and to identify priorities specifically for immediate rapid evaluation in adult social care and social work, in July 2019 the NIHR commissioned the rapid prioritisation exercise presented in this paper.
Our rapid prioritisation process aimed to identify and prioritise adult social care and social work innovations for evaluation. This manuscript describes and explains the rapid prioritisation method that we used.
Methods
Our approach to rapid prioritisation of social care innovations (for subsequent rapid evaluation) focused on achieving speed while retaining an acceptable level of coverage (of the range of social care and social work innovations) and reliability (participation by all key stakeholder groups in the prioritisation process). To achieve this balance, we adapted the JLA approach to priority setting, which uses a dialogue model for multi-stakeholder involvement [18,20]. The JLA model draws on and adapts consensus development models such as the Nominal group Technique and Delphi methods [21,22]. Our adapted approach followed four steps: (1) identification of innovations; (2) development of shortlisting criteria; (3) grouping and sifting innovations; and (4) prioritisation of innovations in a multi-stakeholder workshop (See Additional file 1: S1). The whole process (including project initiation through to completion of the final report) took four months (July-November 2019).
We started by identifying a long list of specific, named innovations, rather than topics/questions for research, as would be more conventional in a JLA context. We identified specific innovations rather than underlying topics/ questions for research as the purpose of this prioritisation process was to rapidly identify specific innovations for evaluation in the short term. The horizon scanning encompassed all types of innovation in adult social care and social work, including: new models of care; service innovations; payment and commissioning innovations; person-and community-centred approaches; and technological innovations. Emails were sent to 182 individuals or organisations with knowledge of social care and social work, including people who use adult social care services, carers, frontline professionals, service providers, commissioners, national organisations, think tanks and researchers. The stakeholder list was created using the combined networks of all the authors and colleagues (including their research units, contacts of contacts, Google searches and forwarding of the email to interested parties). The email asked stakeholders to identify interesting innovations in adult social care and/or social work, which would benefit from being evaluated. Stakeholders were asked to provide, within a four-week deadline (compared to three months or more in the usual JLA process): the innovation(s)'s name(s); short description; where/who is implementing the innovation; brief description of any known evaluations of the innovation; and links to further information. As part of this exercise, we were recommended to include the innovations in the 'Six Innovations in Social Care' report by Think Local Act Personal [11]. Two team members (HW/SMT) grouped innovations which were identical or similar, based on information from the recommender(s) and/or further information from rapid web searches by team members (PLN/HW/SMT). Innovations were grouped into nine categories: (1) Workforce capacity building innovations, (2) Training and support innovations, (3) Technology innovations to support care, (4) Housing community innovations, (5) Home adaptation innovations, (6) Relationship based innovations, (7) Innovations linking patients with health or social care professionals for provision of care, (8) Innovations on social services in the community and (9) Innovations relating to the provision of funding support.
To shorten the (very) long list of innovations to a reduced list, we developed shortlisting criteria. To be included, innovations had to fit within our scope, focus on adult social care and social work, take place within the four nations of the UK, provide enough detail to understand what the innovation is, focus on social care and social work, be amenable to evaluation and rapid evaluation and focus on a relevant outcome for social care (see Additional file 1: S2). To develop the criteria, we drew on literature on research prioritisation in health and social care and our own experiences of prioritisation. Criteria were discussed and refined in follow-up team discussions (see Additional file 1: S2).
The criteria were initially applied to the resultant list of innovations by two team members (HW/SMT) in consultation with an expert social care academic adviser independent of the team (CN). We were as inclusive as possible at this stage. Innovations were automatically omitted only where all three of the initial reviewers advised that. To further condense the remaining list of innovations, the full project team met in September 2019. The discussion focused on innovations where there had been disagreement among the three initial reviewers, but also reconsidered those innovations where all three had so far agreed to retain them. Innovations were excluded by the full team at this stage if, when considered by the larger group, it was known that: they had already been evaluated thoroughly; they were mainly healthcare rather than social care/social work focussed; they were not innovative; or were too non-specific. A final reduced list of 20 adult social care and social work innovations was then taken to a multi-stakeholder workshop to identify the top five priorities for evaluation.
A key element of the rapid prioritisation was to engage in depth with a full range of stakeholder perspectives in an open and purposeful discussion to arrive at a wellfounded shortlist of adult social care and social work innovations for evaluation. To achieve this, we ran a one-day multi-stakeholder workshop in October 2019, in London. We aimed to recruit around 25 workshop participants including: people who use adult social care services, carers, practitioners, providers, commissioners, researchers and key national organisations. To identify participants, we included an invitation to the workshop in the initial request for innovations (described earlier) and encouraged recipients to pass on the invitation to other individuals likely to be interested. Twenty-three people (beyond the project team) took part in the workshop. Travel costs were covered for participants. People who use adult social care services and carers were offered payment for preparation, workshop attendance and travel time. It was agreed that participants would not be identified in any reporting.
The workshop materials-agenda, participant worksheets and workshop guide-were prepared by an expert practitioner in using the JLA approach, who also led the facilitation of the workshop itself (KC). The format of the workshop was adapted from the JLA model [23]. Prior to the workshop, participants were sent an approximately 200-word description, plus a web link for further information where available, for each of the 20 innovations. Participants were asked to read these and rank all 20 from most to least important to evaluate. These initial views were to be shared with other participants at the start of the workshop.
At the start of the workshop, participants received short presentations which explained its purpose and how the 20 innovations had been identified. Participants were given the opportunity to ask questions and seek clarification. During the workshop, participants were at different times split into three sub-groups with similar numbers of members and a balanced range of stakeholder perspectives. Three facilitators (KC, NJF and JS) guided participants through group activities in which participants discussed and prioritised the list of 20 innovations for evaluations. Facilitators were neutral and did not contribute to discussions or prioritisations.
The work on the day was undertaken in three successive stages of prioritisation, each building on the one before. The criteria for prioritising one innovation ahead of another for evaluation were deliberately not pre-set but were left to the workshop participants to propose (explicitly or implicitly). An initial discussion took place within each small group. Participants took turns to describe their top three and bottom three priorities for evaluation from the list of 20 and their reasoning. The facilitator of each group then summarised and presented back to the group the aggregate of their initial proposals for the highest and lowest priorities to evaluate. The first round of prioritisation then took place, within the same small groups. Drawing on the prior discussion, the facilitator in each group arranged 20 cards (with the individual innovations outlined) to create a diamond shape. The top of the diamond represented the most important innovations expressed in the previous discussion and the lower tip the less important topics. The middle reflected innovations that received divided opinions. The diamond was then developed into a more linear and prioritised list through discussion and negotiation, with all innovations ranked one to 20 by each small group separately. The three groups' rankings were then combined in a spreadsheet and presented back to all the workshop participants in a plenary session. One facilitator (KC) gave an overview of the combination of all small group rankings, drawing attention to areas of agreement or disagreement between the groups.
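As a purely hypothetical illustration of how the three groups' rankings might be combined in a spreadsheet, the sketch below orders innovations by their summed (equivalently, mean) rank across groups; the innovation names, the rank values, and the aggregation rule itself are assumptions, since the paper does not specify the exact formula used.

```python
# Example rankings (1 = highest priority) from three small groups for three
# of the twenty innovations; the values are invented for illustration.
group_ranks = {
    "Innovation A": [1, 3, 2],
    "Innovation B": [5, 2, 4],
    "Innovation C": [2, 6, 1],
}

combined = sorted(group_ranks.items(), key=lambda item: sum(item[1]))
for name, ranks in combined:
    print(f"{name}: group ranks {ranks}, mean rank {sum(ranks) / len(ranks):.1f}")
```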
In the next prioritisation session, participants were allocated to three new small groups. Thus, participants were with a largely different group from that with which they had discussed priorities in the preceding session. This provided an opportunity for participants to hear and understand different views, and to review and, if agreed, revise the shared ranked list. Participants were advised to focus on the top half of the combined list from the plenary session, in order to work towards a final prioritisation. Again, the three groups' rankings were entered on a spreadsheet.
In the final prioritisation session, all workshop participants met in plenary to review the aggregate of the second round of group rankings and completed a final round of prioritisation together. This focused on agreeing the top five priorities. It was agreed that the ranking positions of the remaining 15 innovations were of less importance.
Three observers (JE, HW and JN) took notes throughout the workshop. Notes covered how decisions were made, areas of agreement and disagreement, key themes and insights into participants' perceptions of innovations and their importance for evaluation. These notes were used to provide further insight into why decisions were made and why certain innovations were prioritised over others.
To better understand the participants' perspectives on the prioritisation workshop process, we asked them to complete feedback forms. Feedback forms included questions on: the stakeholder's role at the workshop, how they found the information sent in advance of the workshop, the extent to which the prioritisation process was helpful in agreeing priorities, whether they felt able to voice their opinions, whether everyone was encouraged to join in equally, fairness and independence of facilitators, and suitability of the venue and refreshments. Stakeholders were also given the opportunity to provide free-text comments.
Results
In total, 158 different innovations were suggested by 59 individuals from 43 academic, government, NHS and third sector organisations. After grouping and sifting by the project team, 20 innovations (12.7% of the original total) were included in the final shortlist (see Fig. 1). Table 1 indicates the stakeholder groups from which the 23 workshop participants came. There was strong representation by service users and care practitioners, in particular.
Several themes emerged during the workshop discussions, around the criteria for determining which innovations should be prioritised for evaluation. There was a desire for a range of types of innovations to be evaluated, to include a mix of community-centred, individual-centred and technological innovations. Participants wanted to ensure that some innovations that focused on community-centred support and connecting communities were prioritised. They also wanted innovations that support individuals and families to maintain independence, e.g. innovations that focus on prevention, self-directed support or helping people to do more for themselves. Technological innovations such as apps and web-based interventions were also considered necessary to be included, although some participants contested the suitability of technological innovations for certain user groups and their suitability for rapid evaluation. To facilitate the desire for a mix, participants decided to prioritise innovations that they considered represented groups of similar innovations. For example, among all the technological innovations, one group prioritised the one that was perceived to be the most sophisticated or potentially beneficial. Innovations that were seen to be breaking wholly new ground, were novel or shaking up the current social care system, were ranked higher.
There was discussion of the relative priority of evaluating innovations expected to be quality improving but also cost increasing, versus those focused on cost saving. One group queried whether we should be evaluating innovations that local authorities were unlikely to be able to afford to implement. But other participants considered it preferable to prioritise evaluating innovations that might strengthen how social care services are delivered, ahead of innovations aimed at cost saving.
Innovations that appeared potentially generalisable but were currently lacking supporting evidence were prioritised over those that were known to have been evaluated or currently being evaluated. However, there were varying views on what counted as having enough evidence: e.g. the extent to which innovations had been evaluated for use with different target groups within a given population or evaluated in other countries (whereby findings might not be wholly applicable to UK settings). Some participants accorded higher priority to innovations that focused on underrepresented or relatively neglected groups with unmet needs (such as individuals living with brain injury, or prisoners). The top five innovations to prioritise for evaluation that were identified at the end of the workshop are described in Walton et al. (2019) [23]. These innovations (listed here in alphabetical order) were: (i) Care coordination for dementia in the community, (ii) Family group conferencing, (iii) Greenwich prisons social care, (iv) Local area coordination and (v) MySense.AI. Taken together the top five span a wide diversity of innovation types. All aim mainly to improve care quality rather than save costs. They include two innovations around ways of coordinating care locally, including community assets; one focused at the individual level, looking at how to improve planning of individualised care; one was a technology-focused innovation; and one aimed at a particularly under-served group of the population, namely prisoners.
Fifteen out of 23 (65%) participants provided feedback. Feedback indicated that participants welcomed the information sent prior to the workshops about the innovations and the plans for the workshop itself, although some participants felt there was less information on some innovations than on others and would have welcomed greater information (if available) as well as more time than one week to gather their thoughts prior to attending the workshop.
The key tensions expressed by participants were: (1) the application of the JLA format and reaching consensus in a single workshop; and (2) achieving a balance of voices in discussions to prioritise innovations. Some participants felt there was some 'disconnect' between the small group discussions in the first two stages of prioritisation at the workshop, and the final, plenary discussion. The lack of pre-set criteria left some participants feeling uneasy. Some participants considered that some of the discussions had, despite the facilitators' efforts to ensure inclusion, been disproportionately driven by "stronger characters" who were more informed and knowledgeable about particular innovations. Hence, there were notable differences between how innovations were prioritised in a small group and the final order agreed in the large group discussion. Nevertheless, participants welcomed the opportunity to contribute in a workshop attended by a range of individuals committed to improving services in adult social care; and, while some participants critiqued the methodology of the adapted JLA process, they felt overall that the workshop achieved a sufficient level of inclusivity and consensus and was able to prioritise innovations based on informative discussions.
Discussion and conclusions
An established, dialogue-based model for inclusive research priority setting, the JLA approach has been used internationally in over 100 areas of health and social care [24]. JLA Priority Setting Partnerships (PSP) focus on the prioritisation of health research questions, facilitating a process that involves multiple stakeholders in a 12-to 18-month process. In order to rapidly to set priorities for the evaluation of innovations in social care and social work, the JLA framework [18,20] was adapted with the aim of being done rapidly, but still with input from a range of stakeholder groups, including people who use services, carers, practitioners, providers, commissioners and researchers. In this section, we review the adaptations we made to the JLA process and the steps we took to try and uphold the rigour of the exercise. We examine the implications of undertaking rapid prioritisation, including the impact on the engagement of stakeholders and on the innovations prioritised for rapid evaluation.
The rapid priority setting exercise was delivered in four months' elapsed time, from commencing initial planning to reporting the priorities to NIHR. This rapid exercise took a lot less time than most JLA priority setting processes which usually take 12-18 months (from design through to comprehensive checking processes) [25][26][27]. Project management was a key factor in achieving speed. A multi-disciplinary project team was established to design and run the project, including coordination and administration support, evaluation and research expertise, social care and social work research expertise, and priority setting methods experience. JLA PSPs are similarly operationally supported by a project team but, unlike our rapid adaptation, they are also overseen and led by a steering group involving service users and practitioners [25][26][27]. The team for the rapid prioritisation was able to focus on the innovation identification and priority setting and convene meetings quickly and regularly, including overseeing the shortlisting. Additional professional expertise was brought in on an ad hoc basis, including calling on colleagues to help identify networks for recruiting workshop participants, and to sense-check the selection of innovations shortlisted for discussion at the workshop.
One reason for adapting the JLA method, as opposed to any other priority setting method, was to draw on its methods for involving diverse stakeholders in the decision-making process. It was agreed that the results should be shaped by a range of groups, including people with lived experience of accessing social care and social work services, and their carers. However, undertaking a rapid prioritisation did affect the ability to engage with service users in the earlier stages of the process (as is recommended in JLA PSPs) (e.g. [25][26][27]). For example, no individual service users submitted innovations to the consultation seeking candidate innovations. The short time available meant that there was no opportunity to develop a separate consultation approach tailored for this group (which was something the JLA Adult Social Work PSP did) or to develop relationships with individuals and groups who could provide access to those who use services and their carers. Instead, the consultation relied on the project team's existing networks, with no opportunity to build relationships with community groups and secure buy-in and participation that way. Individuals with lived experience were recruited to the priority setting workshop and actively influenced the outcomes of that, but these individuals tended to have established links to advocacy groups and other involvement activities. More vulnerable potential service users, including people with learning disabilities and cognitive impairment, were not involved. Participants were mindful of this and tried to represent the interests of the under-represented where possible.
Achieving buy-in was potentially a challenge. This is consistent with previous research which has highlighted the challenges of engagement [28]. JLA PSPs that focus on single healthcare conditions can engage with established patient and clinician communities who have common experiences, knowledge, terminology and a clear understanding of their clinical area (e.g. [29]). Their vested interest in the topic helps secure their buy-in. A broad topic such as adult social care and social work does not involve a single constituent group. Undertaking rapid priority setting on this topic meant that there was no time to build a partnership of engaged parties and participants, to develop shared understanding and establish common goals. Achieving engagement for social care may also be more challenging due to social care being a large and diverse topic. Securing engagement may be easier for rapid priority setting on narrower topics.
The rapidity of prioritisation may affect its results. In a full JLA priority setting exercise, the consultation to collect people's 'unanswered questions' lasts at least three months (e.g. [25][26][27]). A further two to three months are then spent collating that data, creating summary questions and checking them against systematic reviews and guidelines, in order to remove questions that do not require further research. This is supported by a patient-and-clinician-led Steering Group, which reviews the interpretation of the raw data and the development and wording of the summary research questions, before those questions go back into the public domain for prioritisation. In the exercise presented in this paper, the consultation asked people to suggest known, named innovations in social care or social work that could benefit from evaluation. This approach required people to understand the notion of innovation and to have enough knowledge to be able to identify one or more. Taking a rapid approach meant that the search for innovations could not be comprehensive, as not all current innovations would be known to the individuals consulted.
There was no time prior to the workshop to do more than a cursory check of the extent to which the shortlisted innovations had been evaluated already, although this was done more thoroughly subsequently for the top five proposed priorities for evaluation. While colleagues in the field were asked to sense-check the list, it is possible that some items went forward to the prioritisation workshop that were low priorities for further evaluation. Indeed, during the priority setting workshop, one innovation that had been a high priority in some of the initial group discussions was deprioritised in the final, plenary, workshop session when a participant had the opportunity to inform all participants of an ongoing evaluation. Had there been more time before the workshop, this could have been avoided: the item would not have made it to the workshop. Future rapid prioritisation exercises could develop a plan for managing this, at the cost of more elapsed time and more researcher inputs.
Whilst criteria were used by the research team to reduce the very long initial list to a list of 20 for consideration in the workshop (see Additional file 1: S2), we did not wish to constrain workshop participants with pre-set criteria. Some workshop participants were concerned about the (deliberate, in the JLA method) absence of pre-set criteria for determining priorities for evaluation. The aim was to permit the participants to generate the criteria according to their different perspectives. This aim appears to have been achieved: despite some expressing concerns in feedback, no participants proved unable or unwilling to contribute actively to the prioritisation discussions. It was also noted by some participants in their post-workshop feedback that some voices had proved more powerful than others in discussions, particularly at the final, plenary, stage of the workshop. There was no suggestion of disrespectful behaviour by any participants, and it is to be expected that those with more knowledge of an item are more likely to speak, and to speak more, about it. Nevertheless, this is an important and sensitive issue that requires careful and deliberate mitigation by workshop facilitators.
One limitation of this process was that the speed with which we identified innovations meant that service users were unable to suggest innovations at the beginning of the process and that the responses to our call for innovations may have been limited, or potentially biased. Therefore, some innovations may have been missed. Given the nature of our rapid prioritisation process (which included a call for innovations), service user involvement at the beginning of the process would likely have required a longer process, including discussion groups. However, we tried to ensure that we identified as many innovations as possible and we conducted checks to ensure that the risk of bias or missing innovations was minimised (e.g. consulting experts). A further limitation is that we were unable to include input from service users on the analysis and reporting of the prioritisation process.
Despite the limitations discussed here, we managed to ensure that innovations were suggested, in rapid responses, by a wide range of organisations and individuals with varying roles within these organisations and the adult social care and social work field. We were then able to adapt and apply an established, robust method for priority setting (that used by the JLA [18]) rapidly and with clear outcomes agreed by a multi-stakeholder group.
We have outlined a systematic but pragmatic method that other researchers who would like to undertake rapid prioritisation processes could imitate. The findings from the specific prioritisation of social care and social work innovations for evaluation can be used to inform the selection of social care innovations to be evaluated. This will support increasingly effective social care and social work to the benefit of service users in future.
We conclude that a rapid version of a priority setting method such as the JLA's may be helpful when engaging in rapid prioritisation processes. Our experience indicates that an adapted version of this prioritisation method was feasible for identifying priorities in a rapid and systematic way, with limitations. However, users of this approach should be aware of the implications and the compromises it entails.
"year": 2021,
"sha1": "0987fc5a612f31dec6aec52e5e50d1f52176cc15",
"oa_license": "CCBY",
"oa_url": "https://health-policy-systems.biomedcentral.com/track/pdf/10.1186/s12961-021-00693-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0987fc5a612f31dec6aec52e5e50d1f52176cc15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Correlation between Microstructure and Hydrogen Degradation of 690 MPa Grade Marine Engineering Steel
Electrochemical H charging, hydrogen permeation, and hydrogen-induced cracking (HIC) behavior of 690 MPa grade steel substrate and different heat-treatment states (annealed, quenched, normalized, tempered) are investigated by cyclic voltammetry (CV), hydrogen permeation, electrochemical H charging, and slow strain rate tensile test (SSRT). The results show that hydrogen diffuses through the steel with the highest rate in base metal and the lowest rate in annealed steel. The hydrogen-induced cracks in base metal show an obvious step shape with tiny cracks near the main crack. The cracks of annealed steel are mainly distributed along pearlite. The crack propagation of quenched steel is mainly transgranular, while the hydrogen-induced crack propagation of tempered steel is along the prior austenite grain boundary. HIC sensitivity of base metal is the lowest due to its fine homogeneous grain structure and its low hydrogen flux and apparent hydrogen concentration. There are many hydrogen traps in annealed steel, such as the two-phase interface which provides accommodation sites for H atoms and increases the HIC susceptibility.
Introduction
Hydrogen in microstructures can be roughly divided into two forms: diffusible hydrogen and trapped hydrogen [1]. The diffusion rate and solubility of hydrogen in steel substrates are influenced by hydrogen traps, which can enhance or decrease the hydrogen-induced cracking (HIC) sensitivity of the steel. Pressouyre [2] classified hydrogen traps as reversible and irreversible traps according to the desorption activation energy of hydrogen, which can be measured by thermal desorption spectroscopy. If the Ea (binding energy) of the trap is higher than 50 kJ/mol, the trap is irreversible and can capture hydrogen until it reaches the saturated state [3]. When the temperature rises or the hydrogen content exceeds the saturation concentration, the hydrogen leaves and diffuses into the lattice. The Ea of reversible traps, which can easily capture and release hydrogen even at low temperatures, is lower than 30 kJ/mol [4]. One important factor affecting HIC sensitivity is the amount of hydrogen in steel [5]. The HIC sensitivity of steel may increase with increasing hydrogen concentration both internally and externally. However, the hydrogen content that causes HIC may be affected by other factors. For example, the critical value of hydrogen content that causes HIC may be affected by the applied stress, microstructure, and tensile strength [6][7][8]. Moreover, when the hydrogen content in steel reaches a saturation concentration, the HIC sensitivity of steel will not change significantly. Hydrogen capture has a great influence on hydrogen accumulation and mobility in steel, and the microstructure of steel can capture and limit hydrogen.
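As a simple illustration of the trap classification described above, the following sketch labels a trap by its desorption activation energy. The function name is illustrative only, and the label used for the 30-50 kJ/mol range is an assumption, since the text does not name that intermediate case.

```python
def classify_hydrogen_trap(e_a_kj_per_mol: float) -> str:
    """Classify a hydrogen trap by its desorption activation energy E_a.

    Thresholds follow the text above: E_a > 50 kJ/mol is treated as an
    irreversible trap, E_a < 30 kJ/mol as a reversible trap.  The label for
    the 30-50 kJ/mol range is an assumption (not named in the text).
    """
    if e_a_kj_per_mol > 50:
        return "irreversible"
    if e_a_kj_per_mol < 30:
        return "reversible"
    return "intermediate"

# Example: a weak trap with E_a = 20 kJ/mol is reversible; a strong one at 60 kJ/mol is not.
print(classify_hydrogen_trap(20))  # -> "reversible"
print(classify_hydrogen_trap(60))  # -> "irreversible"
```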
The HIC sensitivity of steels with different microstructures is also different, because of the difference in the distribution of phases, grain size, and defects, which can affect hydrogen diffusion and accommodation [9]. When tensile strength is lower than
Microstructure Characterization and Microhardness Test
The steel substrate and heat-treated specimens were ground to 5000 # waterproof sandpaper successively, polished with a 1.5 μm diamond grinding paste, cleaned with ethanol and acetone, rinsed with deionized water, and dried in air for microstructure observation. The steel substrate and heat-treated specimens were etched with 4 vol.% nital solution for 10 s and saturated picric acid containing a small amount of detergent at 70 °C for 3-5 min to observe the microstructure and prior austenite grain boundary, respectively, with a light microscope (Zeiss Lab. A1 (Zeiss, Oberkochen, Germany)) and scanning electron microscope (SEM, Quanta 250, Portland, OR, USA).
The Vickers microhardness profile of the steels was measured by automatic turret digital display microhardness tester with image analysis (JMHVS-1000ZCCD Shanghai Precision Instrument Co., Ltd, Shanghai, China). The different microstructure sites were observed through a microscope and were subjected to continuous hardness tests, with a load of 100 g for 10 s. At least 40 sites were detected to get reproducible values.
Electrochemical Test
The specimens (10 mm × 10 mm × 2 mm) used for the electrochemical tests were encapsulated with epoxy, leaving the surface of 1 cm 2 as the working electrode [26]. The specimens were ground sequentially to 1500 # by an emery paper and then degreased in dehydrated ethanol, ultrasonic rinsed with deionized water, and dried in air.
The electrochemical tests were conducted through an electrochemical workstation (Autolab PGSTAT 302 N Metrohm Autolab B.V., Utrecht, The Netherlands) with the three-electrode system, in which the platinum was the counter electrode and a saturated Ag/AgCl electrode (+0.197 V vs. SHE) was the reference electrode. The electrolyte solution is 1 mol/L NaOH solution containing 8 g/L of thiourea. Many researchers have found that thiourea can act as a hydrogen recombination inhibitor to prevent H + binding and escaping from the specimen surface [27,28]. The NaOH solution was chosen to prevent corrosion of the steels during electrochemical experiments.
The cyclic voltammetry adopted in this paper consisted of three steps: Step 1: Surface pretreatment of specimens before electrochemical H charging was conducted, namely, two consecutive CV scans, recording the current and potential curves, to analyze and judge the reaction mechanism of the specimen surface.
Step 2: The potential of −1.25 V (vs. Ag/AgCl) was applied to the specimen for cathodic polarization, and H charging was carried out by electrolytic water reaction (Equation (1)) on the working electrode.
The current transient curves were recorded under potentiostatic polarization H charging to assess the current response and determine the charge as a function of time. The H charging was performed with different times (10 min, 30 min, 1 h, 2 h, 3 h, and 4 h) to compare the hydrogen saturation levels of the different microstructures.
Step 3: Three CV scans were recorded after the H-charging step. For most experiments, the potential started at −1.25 V (vs. Ag/AgCl) with a scanning rate of 10 mV/s. The cyclic voltammetry method proposed and validated by Ozdirik et al. [1] includes the H-discharging step in addition to the H-charging step. The CV method can be used to monitor the relationship between the adsorption/desorption behavior of diffused hydrogen and the hydrogen charging time, and to determine the hydrogen saturation level of the different microstructures. Additionally, special H-discharging experiments are required for the quantitative analysis of diffused hydrogen in an electrochemical test. The H-discharging experiment has one more step than the H-charging experiment, that is, potentiostatic polarization H-discharging (at −0.9 V vs. Ag/AgCl for 30 min) is carried out after the H-charging step and before step 3. The purpose of this experimental procedure was to verify the adsorption/desorption of hydrogen and to compare the amount of accumulated and released hydrogen during charging and discharging between pure NaOH and NaOH solution containing thiourea. By integrating the current transient curves recorded during the H-discharging experiment (only the first 100 s of discharge), the quantity of electric charge after H-discharging (QH.charg) and the background quantity of electric charge (QNon.H) could be obtained. After two CV scans (step 1), a constant potential of −0.9 V (vs. Ag/AgCl) was immediately applied for H-discharging, and the transient current recorded during this period was the background current. Therefore, the background quantity of electric charge (QNon.H) of the steel was about 3.5 × 10⁻³ C/cm² through the current-time transient integral.
According to Faraday's formula (Equation (2)), the concentration of absorbed hydrogen (C0) that is oxidized during discharging can be calculated from the measured charge, where n is the number of electrons involved in the oxidation reaction (H → H⁺ + e⁻), F is Faraday's constant (96,485 C/mol), and V is the effective volume of the specimen (cm³). The effective volume of all specimens used in the electrochemical H-charging experiment is 0.02 cm³. The effect of thiourea on the electrochemical behavior of the steel at cathodic potential was studied by linear sweep voltammetry (LSV). The experiment began with two consecutive CV scans as described above (step 1), and then a linear scan was performed in NaOH solution with and without thiourea at a scanning rate of 2 mV/s from the open circuit potential (OCP) to −1.6 V (vs. Ag/AgCl).
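A plausible form of Equation (2), assuming the charge attributable to hydrogen oxidation is the difference between QH.charg and QNon.H (both total charges over the 1 cm² exposed area, so the per-area values quoted above can be used directly), is:

```latex
C_{0} = \frac{Q_{\mathrm{H.charg}} - Q_{\mathrm{Non.H}}}{n\,F\,V}
```

With the charges in coulombs, n = 1, F = 96,485 C/mol, and V = 0.02 cm³, this gives C0 in mol per unit volume of the specimen.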
Hydrogen Permeation Test
According to ASTM G148, the Devanathan-Stachurski cell, which consists of two compartments, the charging cell and the oxidation cell, was used in the hydrogen permeation test. The hydrogen permeation specimens were rectangular thin slices with a thickness of 1.5 mm, leaving a round exposed area of 1.5 cm² as the working surface. To ensure stable oxidation, the specimens were polished with a 1.5 µm diamond grinding paste. Afterward, constant-current nickel plating with a galvanostatic current density of 3 mA/cm² for 10 min in a Watts coating bath (250 g/L NiSO4·6H2O + 45 g/L NiCl2·6H2O + 40 g/L H3BO3) was conducted. The charging side of the specimen was only ground up to 1500 # and then cleaned with deionized water. To avoid the influence of hydrogen produced during specimen preparation on the testing results, the specimen was placed in a vacuum drying oven at 150 °C for 24 h.
To fully oxidize the diffused hydrogen atoms on the oxidizing side, a potential of +300 mV (vs. SCE, saturated calomel electrode) was imposed on the specimens in deaerated NaOH (0.2 mol/L). After the background current density dropped below 0.1 µA/cm², the H-charging solution (1 mol/L NaOH solution containing 8 g/L of thiourea) was introduced into the charging cell and an H-charging potential of −1.25 V (vs. Ag/AgCl) was applied to the specimen. Hydrogen permeation tests of the five microstructures were repeated at least three times at room temperature (22 °C).
The hydrogen permeation parameters were calculated from the obtained permeation curves by Equations (3)-(5), where Deff (cm²/s) is the effective diffusion coefficient, J is the hydrogen flux (mol/(cm²·s)), Capp is the apparent hydrogen concentration (mol/cm³), iss is the steady-state permeation current density, L is the specimen thickness, and tlag is the time at which the permeation current reaches 63% of its steady-state value.
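A likely form of Equations (3)-(5), following the standard ASTM G148 time-lag analysis that this section describes (a reconstruction, not a verbatim quotation of the paper's equations), is:

```latex
D_{\mathrm{eff}} = \frac{L^{2}}{6\,t_{\mathrm{lag}}}, \qquad
J_{\mathrm{ss}} = \frac{i_{\mathrm{ss}}}{F}, \qquad
C_{\mathrm{app}} = \frac{J_{\mathrm{ss}}\,L}{D_{\mathrm{eff}}} = \frac{i_{\mathrm{ss}}\,L}{F\,D_{\mathrm{eff}}}
```

Here iss is the steady-state permeation current density (A/cm²), so Jss comes out in mol/(cm²·s) and Capp in mol/cm³, consistent with the units quoted for the base metal in the results below.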
Hydrogen-Induced Cracking Test
The location and propagation path of hydrogen-induced cracks in different microstructures were observed. For this test, the six surfaces of the specimen (75 mm × 40 mm × 12 mm) were ground to 5000 # with waterproof sandpaper, cleaned with acetone, and dried in air. Then, the samples were electrochemically charged in deaerated 0.5 mol/L H 2 SO 4 for 12 h at −1.25 V (vs. Ag/AgCl). After H charging, the specimen was ground to 5000 #, polished with diamond polishing paste, and etched with 4 vol.% nital solution. Then, the crack initiation and propagation were observed with a metallographic microscope.
Slow Strain Rate Tensile Test (SSRT)
The SSRT method was used to investigate the hydrogen-induced cracking behavior of the steel with different microstructures in the H-charging solution (1 mol/L NaOH solution containing 8 g/L of thiourea) at −1.25 V (vs. Ag/AgCl). According to GB/T 15970, SSRT specimens were prepared and then ground with sandpaper along the tensile direction to 1500 # on the working surface. Before the test, the specimens were pre-charged for 12 h to ensure steady-state surface conditions and a uniform hydrogen concentration. A tensile rate of 0.0018 mm/min (yielding a strain rate of 10⁻⁶/s) was used to carry out the SSRT tests with a WDML-30KN material test system. The SSRT results under each condition were tested three times to check the reproducibility.
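As a quick arithmetic check of the quoted rates: the strain rate is the crosshead speed divided by the gauge length. The 30 mm gauge length used below is an assumption for illustration only; it is not stated in the text.

```python
crosshead_speed_mm_per_min = 0.0018          # tensile rate quoted in the text
gauge_length_mm = 30.0                       # assumed gauge length (not stated in the paper)

# strain rate = crosshead speed / gauge length, converted to per second
strain_rate_per_s = (crosshead_speed_mm_per_min / 60.0) / gauge_length_mm
print(f"{strain_rate_per_s:.1e} 1/s")        # -> 1.0e-06 1/s, matching the quoted 10^-6 /s
```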
To quantitatively characterize the cracking susceptibilities of the steel with different microstructures, the sensitivity was expressed as an elongation loss rate (Iδ) [29], where δs and δ0 are the elongation of the steel in the test solution and in air, respectively.
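A common form of this elongation-loss index, consistent with the definitions of δs and δ0 given above (a reconstruction rather than a verbatim quotation of the paper's equation), is:

```latex
I_{\delta} = \left(1 - \frac{\delta_{s}}{\delta_{0}}\right) \times 100\%
```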
Results and Discussion
Microstructure
Figure 2 shows the microstructure characteristics of the 690 MPa grade steel with different heat-treatment states. The austenitizing temperature was chosen at 1150 °C for 10 min because the microstructure difference was obvious and relatively clear at this temperature. Ma et al. [30] found that the critical transition temperature of E690 steel in re-austenitizing along the prior-austenitic grain boundary is about 745 °C. Farzad et al. [14] reported that the lower the heat treatment temperature, the more M-A island components existed in API X80 pipeline steel. According to previous experimental results, the critical quenching temperature for the roughening of the quenched martensite microstructure of the steel is around 1150 °C (not shown).
The microstructure of the steel substrate consists of granular bainite (GB) and a small amount of lath bainite (LB), with fine and uniform grains (Figure 2a,b). The microstructure after annealing treatment consists of ferrite and pearlite, which exhibit white and black color in the light microscope image (Figure 2c) and vice versa in the SEM picture (Figure 2d). Five different fields of view were selected to analyze the ratio of the two phases after annealing through binary extraction in metallographic analysis software. The results show that the volume fractions of ferrite and pearlite are 71% and 29%, respectively. The microstructure of the steel after quenching treatment consists of lath martensite (LM) and a small amount of bainite (Figure 2e,f). The white network at the grain boundary is ferrite, and there are feather-like bainite (FB) structures. A martensite packet is defined as a lath structure with the same habit plane, which is composed of laths or blocks [31]. With further study of the martensite substructure, a martensite lath block is generally considered to be a lath structure with a similar orientation. The microstructure of the steel after normalizing consists of granular low-carbon bainite (GLCB), which is an island structure composed of ferrite and cementite (Figure 2g,h). After tempering heat treatment, the microstructure consists of tempered sorbite; the gray area is fine acicular martensite, and the black part is sorbite in the SEM image (Figure 2n).
Figure 3 shows the morphology of the prior austenite grains of the steel with different heat-treatment states. The effective grain size is an important parameter to indicate the anti-crack ability of high-strength low-alloy steel [31]. The average size of the prior austenite grains of the original microstructure and the heat-treated steels was quantitatively analyzed using the straight-line transversal method. For the accuracy of the statistical results, at least 200 prior austenite grains were counted for each heat-treated steel.
The results are shown in Figure 4. The metallographic and statistical results of the prior austenite grains show that the steel substrate has the finest grain with an average grain diameter of 16.7 μm. Annealed steel, quenched steel, normalized steel, and tempered steel have a similar grain size. Due to the different positions of samples in the muffle furnace during heat treatment, the grain size has some fluctuations in the acceptable range. Because annealed steel has no prior austenite grain boundary, ferrite grain size was calculated.
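The straight-line transversal (mean linear intercept) measurement mentioned above can be sketched as follows; the function and the example numbers are illustrative only and are not the authors' data-processing code or measured counts.

```python
def mean_linear_intercept_um(total_line_length_um: float, n_intercepts: int) -> float:
    """Average grain diameter estimated as total test-line length divided by the
    number of grain-boundary intercepts (straight-line transversal method)."""
    return total_line_length_um / n_intercepts

# Illustrative numbers only: 250 intercepts over 4175 um of test lines would give
# roughly the 16.7 um average reported for the base metal.
print(mean_linear_intercept_um(4175.0, 250))  # -> 16.7
```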
Microhardness is an important parameter to evaluate the resistance of metal to local deformation. It can be seen from Figure 4b that the microhardness of quenched steel is the highest (402 HV), which conforms to the characteristics of quenched martensite. The microhardness of annealed steel is relatively lower because of the existence of a large amount of ferrite (195 HV). The average Vickers microhardness of base metal, normalized steel, and tempered steel is 282 HV, 295 HV, and 326 HV, respectively.
Effects of Thiourea on the Electrochemical Behavior
Figure 5 shows the linear sweep voltammogram of the base metal in NaOH solution with and without thiourea and a larger view of the LSV at −1.25 V (vs. Ag/AgCl). As shown in Figure 5, a higher overpotential is required to produce the same current in the thiourea-containing NaOH solution than in the pure NaOH solution. At −1.25 V (vs. Ag/AgCl), the current in the pure NaOH solution is approximately four times higher than that in the thiourea-containing NaOH solution. In addition, in the pure NaOH solution, the formation of bubbles can be observed on the specimen surface when the potential is more negative than −1.25 V (vs. Ag/AgCl), and bubbles completely cover the specimen surface as the potential is shifted lower than −1.4 V (vs. Ag/AgCl). However, in the solution containing thiourea, no bubble formation was observed above −1.4 V (vs. Ag/AgCl). From −1.4 V (vs. Ag/AgCl) to −1.6 V (vs. Ag/AgCl), slight bubble formation was detected, which is much less than that in pure NaOH. This further proves that the hydrogen release reaction is inhibited by thiourea. Ozdirik et al. [1] proved that thiourea inhibits the recombination of H atoms to form H2 when they studied hydrogen adsorption/desorption of SAE 1010 steel. Some researchers have shown that thiourea prevents the recombination of H atoms to form hydrogen, but promotes the entry of H atoms into the steel [27,32,33].
The results show that there are two anodic reactions before and after hydrogen charging in the thiourea-containing NaOH solution, as shown in Figure 6a. The first anodic reaction peak before H charging is labeled as peak a (at −0.87 V (vs. Ag/AgCl)) and the first anodic reaction peaks after H charging are labeled as peak a.1 (at −0.92 V (vs. Ag/AgCl)) and a.2 (at −0.87 V (vs. Ag/AgCl)). In NaOH solution without thiourea, the anodic reaction peak (namely peak a′ at −0.92 V (vs. Ag/AgCl)) also changes significantly after hydrogen charging (Figure 6b). It is worth noting that the current density in the thiourea-containing NaOH solution is much higher than that in pure NaOH. As shown in Figure 6, in thiourea-containing NaOH solution, peak a.1 only appears after H charging, and peak a.1 appears at the same potential as peak a′ in pure NaOH solution. This indicates that the position of this peak is related to H charging in both solutions. In addition, in the thiourea-containing NaOH solution, the potential of peak a.2 is consistent with that of peak a before H charging, while in the pure NaOH solution, no peak appears at this potential, which proves that peak a.2 is a reaction related to thiourea.
Effects of H charging on the Electrochemical Behavior
Two consecutive CV scans were performed on the base metal as a pretreatment before H charging, as shown in Figure 7a. Two anodic and one cathodic response can be observed in the CV curves (Figure 7a). The current density value of peak a decreases with the number of scans, while those of peak b and peak c increase with the cycle. The results of the area integrals of peak b and peak c show that the quantity of electric charge released by the two peaks is similar. The color of the specimen changed to brownish-yellow during the anodic scan (starting from −0.8 V (vs. Ag/AgCl)) and disappeared again during the reverse scan. Peaks b and c in the cyclic voltammetry are usually attributed to hydrogen oxidation/reduction reactions [34][35][36]. In Figure 7b, three continuous CV curves recorded after electrochemical H charging are shown. The first anodic response in the first CV scan was marked as two peaks of a.1 and a.2. In the second CV scan, peak a.1 disappeared while peak a.2 continued to exist. However, it can be seen from Figure 7b that there was a gradual downward trend of peak a.2 current density from the first scan to the third scan. From the third scan, there was no significant change in peak a.2 [1]. The potential of the peak a before H charging was the same as that of peak a.2 after H charging. The current density values of peak b and peak c increased with the number of scans, which may be related to the formation of multilayer iron oxide or the surface roughness of the specimen [37].
To clarify the relationship between H-charging time and peak of CV, the first CV scanning curve of the base metal after H charging is shown in Figure 8.
The shape and height of peak a.1 and peak a.2 depend on the duration of H charging. After 10 min of H charging, peaks a.1 and a.2 are nearly the same height in the first CV scan. After H charging for 30 min, the current density values of peak a.1 and peak a.2 increase with a more obvious increasing trend of the former peak. As the H-charging time continues from 1 to 3 h, the heights of peaks a.1 and a.2 show little change. The results show that the peaks a.1 and a.2 are related to the H-charging time, and the steel substrate almost reaches the hydrogen saturation state after the H charging for 1 h. In addition, under all H-charging time conditions, the three consecutive CV curves after H charging show that the current density value of peak a.2 tends to decrease from one scan to the next. Moreover, peak a.1 did not appear again during the second CV scan, indicating that hydrogen was completely desorbed from the steel during the first CV scan, independent of the scanning rate and the H-charging time [1,27]. The potential corresponding to peak a.1 was −0.92 V (vs. Ag/AgCl), which was taken as the discharge potential in the potentiostatic polarization H-discharging experiment. In three consecutive CV scans, the peak a.2 decreased but did not disappear. Since peak a.2 only existed in the CV curve of thiourea-containing NaOH solution, it is speculated that it must be an oxidation process related to thiourea [1].
Figure 9 shows the first CV scans for five different microstructure specimens after hydrogen charging for 30 min in NaOH solution containing thiourea. The CV curves mainly included peak a and peak b, wherein peak a was further divided into peak a.1 and peak a.2, which are attributed to hydrogen oxidation and thiourea-related reactions, respectively. The current density value of peak b in CV curves was very similar in the five microstructures, while significant differences could be seen in peak a.
Though the five microstructures were similar in oxidation peak shape after H charging for 30 min, the current density value of peak a was the highest in the tempered steel and the lowest in the normalized steel. This may be related to hydrogen permeation in the microstructure [38].
Effects of Microstructure on the Electrochemical Behavior
As discussed above, peak a can be viewed as a function of H-charging time. The base metal reached the hydrogen saturation state after H charging for 1 h, as indicated by the relatively stable current density of peak a.1 when the charging time exceeded 1 h (Figure 8). Figure 10 shows the first CV scans of the steel with different microstructures after hydrogen charging for different times in NaOH solution containing thiourea. In the cases of annealed steel, quenched steel, normalized steel, and tempered steel, the current density values of peak a did not change significantly after the H-charging time reached 4 h, 3 h, 3 h, and 2 h, respectively, indicating that the saturation state had been reached. The H-charging time required for the annealed steel to reach the hydrogen saturation state was the longest (4 h) and that of the base metal was the shortest (1 h), consistent with the slower hydrogen accumulation in the annealed steel with its ferrite and pearlite microstructure. Zhang et al. [39] found that the hydrogen diffusivity within fine granular bainite was higher than that of ferrite for welded X80 steel under pressurized gaseous hydrogen. To further study the relationship between the different microstructures and hydrogen behavior, the concentration of absorbed hydrogen (C0) was calculated by Equation (2) and is shown in Figure 11. The base metal could reach a hydrogen saturation state in a shorter time (1 h), which may be attributed to the lower number of dislocations and other defects within the fine granular bainite, so that the diffused hydrogen was not easily trapped and stored. However, the annealed steel has obvious interfaces and defects and also has a thick pearlite structure, which is convenient for hydrogen capture and storage.
Hydrogen Permeation Behavior
The microstructure is an important factor affecting hydrogen permeation and HIC sensitivity [7,38,39]. To accurately indicate the hydrogen absorption and diffusion in five microstructures, the results of hydrogen permeation were analyzed. Figure 12 shows the hydrogen permeation curves and the calculated parameters of different microstructures.
At the charging side, a negative potential of −1.25 V (vs. Ag/AgCl) was applied to reduce hydrogen ions to hydrogen atoms that could be captured by the steel. The thiourea prevented the recombination of H atoms to form H2 and promoted the adsorption of H atoms. Then, the hydrogen atoms diffused to the oxidation side across the steel sheet. The diffused hydrogen was oxidized in the NaOH solution in the oxidation cell to generate hydrogen ions, and the reaction current was recorded as the hydrogen permeation current [40,41]. The hydrogen permeation current density (0.414 × 10⁻⁶ A/cm²), hydrogen flux (0.429 × 10⁻¹¹ mol/(cm²·s)), and apparent hydrogen concentration (0.405 × 10⁻⁶ mol/cm³) of the base metal were the lowest, and its effective diffusion coefficient (1.58 × 10⁻⁶ cm²/s) was the highest. Hydrogen atoms had the highest diffusion rate inside the base metal, and the fewest were captured by hydrogen traps; thus, it was easier to reach the saturation state. It was also seen in the CV tests that the base metal reached the hydrogen saturation state after H charging for only 1 h. The annealed steel consists mainly of ferrite phases and a small portion of flake pearlite, which can be considered reversible hydrogen traps [19,42]. Many researchers believe that hydrogen is preferentially enriched at dislocations, grain boundaries, inclusions, and two-phase interfaces after entering the material [43,44]. Annealed steel has more active sites for hydrogen capture and storage than the other microstructures, and hydrogen atoms are more easily pinned in the annealed steel (ferrite and pearlite) [30,38]. In the case of martensitic steel, the martensitic lath boundary and the prior austenite grain boundary are considered to be the microstructural hydrogen traps [45,46]. Depover et al. [47] reported that martensitic steel has a high dislocation density in its microstructure, and the reversible hydrogen content in martensitic steel accounted for about 75% of the total diffused hydrogen content. Therefore, diffused hydrogen in martensitic steel can be derived from the martensitic lath boundaries, prior-austenite grain boundaries, and dislocations. Nava et al. [48] proved that dislocations controlled the effective diffusion coefficient of hydrogen in martensitic steel and that the absorption of hydrogen in martensitic steel was mainly due to the contribution of dislocations. Jiang et al. [49] demonstrated that dislocation walls and cells hindered the diffusion of hydrogen, and that a homogeneous distribution of dislocations dispersed the trap sites for capturing hydrogen. In the absence of an external force, the amount of H pinned in the martensitic steel is low due to fewer internal dislocations. The hydrogen permeation results show that the annealed steel pins more H than the quenched steel, but its diffusible H is less than that of the quenched steel; the J and Capp of the annealed steel are slightly smaller than those of the quenched steel (Figure 12).
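The base-metal values quoted above are mutually consistent under the standard permeation relations sketched earlier; a quick numerical check, assuming a specimen thickness of 0.15 cm (the 1.5 mm permeation sheet) and Faraday's constant 96,485 C/mol:

```python
F = 96485.0            # C/mol, Faraday's constant
L = 0.15               # cm, 1.5 mm permeation specimen thickness (assumed as the relevant L)
i_ss = 0.414e-6        # A/cm^2, steady-state permeation current density (base metal)
D_eff = 1.58e-6        # cm^2/s, effective diffusion coefficient (base metal)

J = i_ss / F           # mol/(cm^2*s)
C_app = J * L / D_eff  # mol/cm^3

print(f"J     = {J:.3e} mol/(cm^2*s)")   # ~4.29e-12, i.e. 0.429e-11 as quoted
print(f"C_app = {C_app:.3e} mol/cm^3")   # ~4.07e-07, close to the quoted 0.405e-6
```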
HIC Analysis
Figure 13 shows the light microscope micrographs of the hydrogen-induced cracks formed on the steel with different microstructures after H charging for 12 h in 0.5 mol/L H2SO4. In general, hydrogen-induced crack initiation occurs easily at the "hydrogen traps" in steel, such as inclusions, dislocations, voids, and grain boundaries [50][51][52]. Under the condition of electrochemical H charging, bubbles and surface crack propagation easily occur on the surface of the specimen. During electrochemical H charging, hydrogen atoms are formed on the surface of the specimen, and some hydrogen atoms enter the steel. Hydrogen molecules are formed by recombination at hydrogen traps (grain boundaries, dislocation entanglements, second phases, inclusions, etc.), and the high pressure generated at these sites leads to nucleation and propagation of cracks. It can be seen from Figure 13a that the hydrogen-induced crack in the base metal shows an obvious step shape, and tiny cracks are generated near the main crack. If there is an external force, these stepped cracks will link up under the action of shear stress at both ends, resulting in a reduction of the bearing capacity of the steel structure. Huang et al. [53] reported that low-carbon bainite has lower hydrogen cracking sensitivity than quenched and tempered martensite. The crack distribution of the annealed steel was mainly along the pearlite (Figure 13b), mainly because carbides and some inclusions are easily precipitated at the pearlite boundary. Hydrogen atoms are easily enriched at these sites, causing hydrogen embrittlement and forming cracks, which expand under stress. Some researchers have found that HIC initiates at the interfaces of ferrite and pearlite bands [54,55]. Quenched and tempered steel is widely used because of its excellent toughness with high hardness and strength at low weight [9]. These types of steel are prone to HIC, resulting in poor impact resistance [9,56].
The crack of martensite steel mainly propagated by the transgranular method (quenched steel, Figure 13c) and some tiny cracks germinated on the main crack. For the tempered steel, intergranular cracking (IGC) along the prior-austenite grain boundary (PAGB) dominated the fracture process (Figure 13d). IGC along the PAGB could be detected if the H charging was powerful enough, which has been observed in many materials [41,[57][58][59]. Figure 14 shows the stress-strain curves of the steel with different microstructures in 1 mol/L NaOH solution containing 8 g/L of thiourea at −1.25 V (vs. Ag/AgCl) (a) and the HIC susceptibility in terms of the elongation loss rate (b).
Due to the influence of hydrogen, the elongation of the base metal and the heat-treated steel in air was significantly higher than that in the NaOH solution. Alvaro et al. [60] reported, using 3D cohesive modeling, that hydrogen in lattice interstitials was mainly responsible for the embrittlement. Researchers have reported that the intercritical heat-affected zone (ferrite and M-A island microstructure) has a high deformation capacity due to its high proportion of the soft ferrite phase [30,38]. Although the elongation of the annealed steel (ferrite and pearlite microstructure) was the highest, its HIC sensitivity calculated from the loss of elongation was the largest, which indicates that the annealed steel is more susceptible to hydrogen. This may be related to the "hydrogen traps" in the annealed and normalized steel [27,31]. Laurent et al. [61] found that residual austenite, pearlite, and bainite grain boundaries could be active sites for pinned hydrogen. The HIC sensitivity of the heat-treated steel is higher than that of the base metal (BM), probably due to the large number of dislocations, phase interfaces and grain boundaries in the heat-treated steel. Zhang et al. [39] found that fine-grain bainite had lower hydrogen embrittlement sensitivity than coarse-grain bainite. The microstructure characteristics, test data, and cracking characteristics for the BM and the heat-treated microstructures are summarized in Table 1.
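The HIC susceptibility index used in Figure 14b, the elongation loss rate, can be computed as in the short sketch below. The elongation values shown are illustrative placeholders, not the measured data of this study.

```python
def elongation_loss_rate(elongation_air: float, elongation_hydrogen: float) -> float:
    """Elongation loss rate (%), a common index of HIC susceptibility:
    the relative drop in elongation caused by hydrogen charging."""
    return (elongation_air - elongation_hydrogen) / elongation_air * 100.0

# Placeholder elongations (%) in air vs. in the H-charging solution -- illustrative only.
samples = {
    "base metal": (22.0, 18.5),
    "annealed":   (27.0, 16.0),
    "quenched":   (15.0, 11.0),
}

for name, (e_air, e_h) in samples.items():
    print(f"{name:10s}  loss rate = {elongation_loss_rate(e_air, e_h):.1f}%")
```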
HIC Analysis
The microstructure morphologies of the base metal and the normalized steel were similar, but the grain size of the normalized steel was three times higher than that of the base metal. In addition, the apparent hydrogen concentration and the hydrogen permeation current density of the BM were the lowest, so the base metal had the lowest HIC sensitivity among the five microstructures. Under the test conditions in this paper, the hydrogen concentration in the steel easily reached the critical value that induces hydrogen embrittlement. When the hydrogen content was high, the hydrogen concentration and hydrogen pressure at the grain boundaries were relatively high, resulting in instantaneous cracking along the grain boundaries and hydrogen-induced bubbles. HIC sensitivity increased in the following order: fine granular bainite; acicular martensite and tempered sorbite; lath martensite; coarse granular bainite; ferrite and pearlite.
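For context on the permeation quantities compared above (hydrogen flux and apparent hydrogen concentration), the sketch below shows one standard way such parameters are derived from an electrochemical permeation transient using the time-lag method. The membrane dimensions and current are illustrative assumptions, not values from this study.

```python
# Time-lag analysis of a Devanathan-Stachurski permeation test (illustrative sketch).
F = 96485.0  # Faraday constant, C/mol

def permeation_parameters(i_ss_A: float, area_cm2: float, thickness_cm: float, t_lag_s: float):
    """Steady-state flux J (mol H / cm^2 / s), effective diffusivity D_eff (cm^2/s)
    and apparent hydrogen concentration C_app (mol H / cm^3)."""
    J = i_ss_A / (F * area_cm2)                # flux from the steady-state permeation current
    D_eff = thickness_cm**2 / (6.0 * t_lag_s)  # time-lag estimate of the effective diffusivity
    C_app = J * thickness_cm / D_eff           # sub-surface (apparent) hydrogen concentration
    return J, D_eff, C_app

# Placeholder inputs: 12 uA steady-state current, 1 cm^2 area, 0.5 mm membrane, 600 s time lag.
J, D_eff, C_app = permeation_parameters(12e-6, 1.0, 0.05, 600.0)
print(f"J = {J:.2e} mol/(cm^2*s), D_eff = {D_eff:.2e} cm^2/s, C_app = {C_app:.2e} mol/cm^3")
```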
Conclusions
The correlation between the microstructure and hydrogen degradation of 690 MPa grade marine engineering steel was investigated in the present work. The following conclusions are obtained: (1) The CV tests show that thiourea is an effective hydrogen permeation accelerator.
The hydrogen diffusion rate in the steel base metal, with its uniform microstructure and fine grains, is the highest, while that in the annealed steel with a ferrite and pearlite microstructure is the lowest. (2) The hydrogen-induced cracks in the steel base metal show an obvious step shape, with tiny cracks near the main crack. The cracks in the annealed steel are mainly distributed along the pearlite. Crack propagation in the martensitic steel (quenched steel) is mainly transgranular, while the cracks in the tempered steel propagate along the prior-austenite grain boundaries. (3) The HIC sensitivity of the base metal is the lowest due to its low hydrogen flux and apparent hydrogen concentration. The annealed steel exhibits higher HIC sensitivity despite its lower hydrogen diffusion flux and surface hydrogen concentration, owing to the many hydrogen traps it contains; the annealed steel with a ferrite and pearlite microstructure is therefore more susceptible to hydrogen.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-02-14T06:16:18.321Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "7920c6eb212537a92f28b6d0d80f6f8f2bb228fb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/4/851/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "838f02e7dd6e98f6a4eb56d53b11241718f13283",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255215555 | pes2o/s2orc | v3-fos-license | RNA-Seq Analysis Identifies Differentially Expressed Genes in the Longissimus dorsi of Wagyu and Chinese Red Steppe Cattle
Meat quality has a close relationship with fat and connective tissue; therefore, screening and identifying functional genes related to lipid metabolism is essential for the production of high-grade beef. The transcriptomes of the Longissimus dorsi muscle in Wagyu and Chinese Red Steppe cattle, breeds with significant differences in meat quality and intramuscular fat deposition, were analyzed using RNA-seq to screen for candidate genes associated with beef quality traits. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis showed that the 388 differentially expressed genes (DEGs) were involved in biological processes such as short-chain fatty acid metabolism, regulation of fatty acid transport and the peroxisome proliferator-activated receptor (PPAR) signaling pathway. In addition, crystallin alpha B (CRYAB), ankyrin repeat domain 2 (ANKRD2), aldehyde dehydrogenase 9 family member A1 (ALDH9A1) and enoyl-CoA hydratase and 3-hydroxyacyl CoA dehydrogenase (EHHADH) were investigated for their effects on intracellular triglyceride and fatty acid content and their regulatory effects on genes in lipogenesis and fatty acid metabolism pathways. This study generated a dataset from transcriptome profiling of two cattle breeds, with differing capacities for fat-deposition in the muscle, and revealed molecular evidence that CRYAB, ANKRD2, ALDH9A1 and EHHADH are related to fat metabolism in bovine fetal fibroblasts (BFFs). The results provide potential functional genes for maker-assisted selection and molecular breeding to improve meat quality traits in beef cattle.
Introduction
Beef is a popular meat, known for being rich in nutrients that are important for antioxidant and anti-inflammatory responses as well as nerve, muscle, retinal, immune and cardiovascular function [1]. In recent years, meat characteristics, particularly fat deposition, have become an important factor influencing consumers' meat purchasing decisions [2]. Livestock lipid metabolism is mainly affected by heredity, feeding and the external environment [3]. With the development of modern molecular breeding techniques, such as cell engineering and molecular markers, modern beef cattle breeding combines conventional selective breeding with molecular biology, bioinformatics and computer information technology. Screening and verification of functional genes has become important for molecular breeding to improve meat quality traits. In addition, the cellular and genetic mechanisms of different fat deposition sites involve complex and highly coordinated gene expression programs. Therefore, lipid metabolism has always been a topic that offers both challenge and interest in livestock animal research.
Transcriptomics was the first (and is now the most widely used) molecular technology in basic research, clinical diagnosis and drug development [4]. In recent years, with the rapid development of RNA-seq, it has been applied in the livestock industry. RNA-seq analyses have been used to determine large numbers of candidate genes, new transcripts, single nucleotide polymorphisms (SNPs), and regulatory networks for different species and for tissues of different animals such as pigs, cattle and sheep [5]. Understanding the transcriptomes of livestock animals is critical for explaining the function of their genomes, revealing the molecular makeup of cells and tissues and exploring development and disease. With transcriptomics now applied broadly to livestock studies [6][7][8][9][10][11][12][13], several RNA-seq analyses have already been performed in the Wagyu and other breeds in the U.S. and European regions to select functional genes for improved meat quality [14,15]. However, only a few studies have been applied to screen for differences between functional genes in Wagyu and the yellow cattle of Asia.
Chinese Red Steppe cattle are a native Chinese breed used for meat and dairy, mainly distributed in the northeast of China [11]. They have unique features, such as disease resistance and better meat quality, compared to other local Chinese cattle. They are popular among meat consumers because of the unique flavor of their meat. However, Chinese Red Steppe cattle have a lower intramuscular fat (IMF) content than foreign commercial beef cattle, which limits the beef quality and economic benefits. The Wagyu is a Japanese beef cattle breed derived from native Asian cattle. The most distinctive feature of Wagyu beef is its beautiful marbling, due mainly to its high intramuscular fat content, which improves the overall taste [16]. A previous meat quality assessment found that Wagyu beef is high in fat, which is essential for improving the texture of the meat [16]. Studies have shown that genetic differences in meat quality are expressed as inter-varietal differences [17]. Therefore, the two breeds of cattle are perfect models for screening functional genes associated with fat deposition affecting meat quality traits. In addition, the study of these model breeds could reveal novel key genes and regulatory networks that are involved in regulating fat deposition in muscle tissue, leading to further improvement of meat quality traits in Asian cattle.
In the present study, the Longissimus dorsi muscles of Wagyu and Chinese Red Steppe cattle were used to screen differentially expressed genes (DEGs) by RNA-seq, and the functions of candidate genes related to meat traits in lipid metabolism were analyzed. These results revealed novel functional genes for the molecular breeding of cattle, which will be useful for further study on the meat quality of different breeds.
Analysis of DEGs in Longissimus dorsi Muscle between Cattle Breeds
The details of the transcriptome sequencing data are reported in our previous study on the alternative splicing comparative analysis of the Longissimus dorsi muscle in Wagyu and Chinese Red Steppe cattle [11]. The data were submitted to the Gene Expression Omnibus (GEO) of the National Center for Biotechnology Information (NCBI) [18] and are accessible through GEO Series accession number GSE161967 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE161967; accessed on 1 May 2022).
DEG Participation in Biological Processes Related to Lipid Metabolism
To understand their putative functions, 388 DEGs in Wagyu were mapped to the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) orthology (KO) databases for analysis. The results showed that 475 biological processes were significantly enriched in Wagyu cattle compared to Chinese Red Steppe cattle ( Figure 1B). Among these enriched biological processes, seven (β-oxidation of fatty acids using acyl-CoA oxidase, regulation of fatty acid transport, metabolic processes of short-chain fatty acids, metabolic processes of fatty acids, biosynthetic processes of fatty acids, transport of plasma membrane long-chain fatty acids and negative regulation of fatty acid oxidation) were closely related to beef quality traits.
The results of KEGG enrichment analysis were shown in Figure 1C. The 138 DEGs were enriched in 171 KEGG pathways, of which 11 pathways were significantly enriched (p < 0.05), such as Alanine, aspartate and glutamate metabolism (PATH:00250), ECM-receptor interaction (PATH:04512) and D-Arginine and D-Ornithine Metabolism (PATH:00472). Genes specifically expressed in Chinese Red Steppe cattle were mainly enriched in 20 pathways, which also included D-arginine and D-ornithine metabolism. In addition, genes specifically expressed in Wagyu were enriched in seven pathways (p < 0.05), with the highest enrichment in dilated cardiomyopathy and ECM-receptor interaction.
Prediction Analysis of Genes Interaction Network
The results of the gene interaction network prediction analysis are shown in Figure 2A. Among these, the down-regulated genes enoyl-CoA hydratase and 3-hydroxyacyl CoA dehydrogenase (EHHADH), the up-regulated gene ACOX1 and the down-regulated genes aldehyde dehydrogenase 9 family member A1 (ALDH9A1) and acyl-CoA synthetase (ACSS) formed regulatory relationships through intermediate compounds, including transhexadec-2-enoyl-CoA, (S)-methylmalonate semialdehyde and acetoacetyl-CoA in: fatty acid degradation (bta00071); valine, leucine and isoleucine degradation (bta00280); and butanoate metabolism (bta00650), respectively. The up-regulated collagen type IV alpha 1 chain (COL4A1) gene may play a role in activating the up-regulated genes SDC3 and CD44 via ECM-receptor interaction (bta04512), and SDC3 is also activated by the upregulated genes thrombospondin 4 (THBS4); the up-regulated gene adenylate cyclase 5 (ADCY5) may interact with the up-regulated gene protein kinase cAMP-activated catalytic subunit beta (PRKACB) via several pathways, and it also has an interaction with the down-regulated gene phosphodiesterase 4C (PDE4C) via cyclic GMP in pathways of purine metabolism (bta00230). PRKACB acts as an activator of the up-regulated calcium voltagegated channel auxiliary subunit gamma 4 (CACNG4) gene in dilated cardiomyopathy (bta05414); the down-regulated gene glutamic-oxaloacetic transaminase 1 (GOT1) is associated with the down-regulated genes adenylosuccinate synthase (ADSS) and D-aspartate oxidase (DDO). The down-regulated gene methionine adenosyltransferase 2B (MAT2B) and the up-regulated gene DNA methyltransferase 3 alpha (DNMT3A) form an interaction through S-adenosyl-L-methionine in cysteine and methionine metabolism (bta00270); the down-regulated genes thymidylate synthetase (TYMS) and the down-regulated gene methylenetetrahydrofolate dehydrogenase, cyclohydrolase and formyltetrahydrofolate synthetase 1 (MTHFD1) may interact with 5,10-methylene-tetrahydrofolate in one carbon pool via folate (bta00670).
Effects of Candidate Genes on Triglyceride Content in Bovine Fetal Fibroblasts (BFFs)
To analyze the functions of key candidate DEGs involved in lipid metabolism, overexpression vectors of related genes were constructed separately. The coding sequences (CDSs) of the ANKRD2, CRYAB, ALDH9A1 and EHHADH genes were successfully amplified by PCR and ligated to the pBI-CMV3 vector ( Figure 3A). The BFFs in overexpression and negative control groups showed regular morphological expression of green fluorescent proteins (GFPs) 24 h post-transfection ( Figure 3B). RT-qPCR was used to detect the candidate genes. The results showed that the mRNA expression levels of ANKRD2, CRYAB, ALDH9A1 and EHHADH in each overexpression group significantly increased (p < 0.01) compared with the control group ( Figure 3C).
Detection of intracellular triglycerides showed that the overexpression of ANKRD2, ALDH9A1 and EHHADH resulted in significant decreases in intracellular triglycerides (p < 0.01). We found the most pronounced decrease after the overexpression of EHHADH (33.132 ± 4.127 µmol/g). Overexpression of CRYAB in BFFs also lowered intracellular triglycerides (49.326 ± 0.341 µmol/g), although there was no significant difference between the pBI-CMV3-CRYAB and pBI-CMV3 groups (54.387 ± 0.791 µmol/g, Figure 3D). These results suggest that CRYAB, ANKRD2, ALDH9A1 and EHHADH are negative regulators of triglyceride synthesis in BFFs.
Effects of Candidate Genes on Bovine Intracellular Fatty Acids
The results of the gas chromatography analysis showed that six fatty acid components were detected in the cells: caproic, caprylic, palmitic, stearic, linoleic and cis-4,7,10,13,16,19-docosahexaenoic acids (Figure 4). Short-chain fatty acids (hexanoic) are critical nutrients for ruminants, and medium-chain fatty acids (octanoic) are required for the regulation of neural energy balance. Palmitic acid is thought to promote the accumulation of triglycerides in livestock, stearic acid reduces low-density lipoprotein (LDL) cholesterol and linoleic acid is an essential fatty acid. Therefore, these fatty acids are of considerable interest for their nutritional and therapeutic properties.
Compared with the control group, each fatty acid (except for hexanoic acid) and total fatty acids decreased in the CRYAB and ANKRD2 overexpression groups; the content of each fatty acid and total fatty acids within the ALDH9A1 overexpression group (83.48 ± 20.22 µg) was higher than that for the control (34.29 ± 20.10 µg). In the EHHADH group, the amount of linoleic acid (1.79 ± 0.863 µg) and cis-4,7,10,13,16,19-docosahexaenoic acid (2.01 ± 0.686 µg) increased in BFFs, while the rest of the fatty acids decreased. In addition, hexanoic acid was not detected in the ALDH9A1 and EHHADH overexpression groups.
Regulation of Genes Related to Lipid and Fatty Acid Metabolism by Candidate Genes
ANKRD2, CRYAB, ALDH9A1 and EHHADH gene overexpression in BFFs was analyzed using the RT2 Profiler PCR Array to detect their effects on key genes involved in fat and fatty acid metabolism (Figure 5). Overexpression of CRYAB resulted in up-regulation of glycerol-3-phosphate dehydrogenase 1 (FC > 1.5, p < 0.05) and down-regulation of acyl-
The protein interaction network between the candidate genes and the regulated fatty acid metabolism pathway genes was predicted and analyzed using STRING. The results showed that ANKRD2 did not have protein interactions with up-and down-regulated genes. The ALDH9A1 gene interacted with MUT ( Figure 6B). The EHHADH gene had direct and indirect interactions with 28 genes; among the down-regulated genes, HSL/LIPE interacted with protein kinase AMP-activated non-catalytic subunit beta 2 (PRKAB2) and PRKACB. GPD1 interacted with glycerol-3-phosphate dehydrogenase 2 (GPD2) ( Figure 6C).
Discussion
Indicators such as fat deposition and fatty acid content play crucial roles in related economic traits, such as growth, reproduction and meat quality in livestock. The intramuscular fat content of beef affects tenderness, flavor, marbling and nutritional value. A moderate amount of intramuscular fat deposition can increase the marbling level, reduce the shear force of beef and improve the flavor [19]. Beef is rich in fatty acids and essential polyunsaturated fatty acids, and the type of fatty acid and unsaturated fatty-acids have also become important indicators for evaluating beef quality [20].
Of the DEGs, the ASIP gene has been reported to regulate coat color phenotype in animals [21]. Furthermore, with the discovery of the expression of ASIP mRNA in subcutaneous fat cells [22], the role of ASIP genes in lipid metabolism has become a subject of interest. Elke et al. [23] reported that the ASIP gene is widely expressed not only in adipose tissue but also in the muscle tissue of cattle. Compared with DEGs in Holstein cattle, ASIP mRNA was up-regulated more than nine-fold in the intramuscular fat of Japanese Wagyu cattle (p < 0.001), which suggested that it may be a functional gene related to the excellent beef quality of this breed.
This study showed that the expression level of AGRN in the Longissimus dorsi of Wagyu was 1.79 times higher than that of Chinese Red Steppe cattle. Epigenetic studies also suggest a role for AGRN in human obesity as a link between DNA methylation and AGRN was found in a study involving obese and healthy individuals [24]. Meanwhile, our previous study found that the expression levels of AGRN were negatively correlated with its promoter DNA methylation levels [11]. Furthermore, Maak et al. [25] found that 11 genes were expressed in Longissimus dorsi samples with higher fat content in the F2 generation of a Charolais × Holstein cross. The up-regulated genes included AGRN, which indicates that it may be one of the candidate genes that increases fat deposition.
Hematopoietic and immune cells highly express the CD44 protein as a hyaluronanbinding surface receptor [26,27]. It is also highly expressed on the surface of human adipose stem cells, suggesting that CD44 may be involved in the pluripotency and differentiation of preadipocytes [28]. Previous studies have shown that treating high-fat-diet mice with a monoclonal antibody of CD44 suppresses the development of obesity and reduces adipose tissue inflammation [29]. Our recent study showed that the CD44 gene is a key regulator of lipid metabolism in bovine mammary epithelial cells [30]. The above evidence suggests that CD44 is important for lipid metabolism.
The SDC3 gene is thought to be a novel regulator of feeding behavior and body weight that participates in energy metabolism. It has been previously demonstrated that male SDC3-deficient mice respond to food deprivation by reflexively reducing their feeding [31].
In the presence of high-fat diets, SDC3-null mice accumulated less fat, demonstrated better glucose tolerance, and were resistant to obesity induced by high-fat diets [32]. In humans, SDC3 polymorphisms have been linked to obesity and female hyperandrogenemia [33]. Interestingly, recent studies have found that SNPs in the SDC3 gene are associated with growth traits in cattle [34]. This evidence suggests that SDC3 may affect the formation of meat traits by regulating energy metabolism in domestic animals.
The expression level of CRYAB in the Longissimus dorsi of Wagyu was 2.57 times higher than that of the Chinese Red Steppe cattle (p < 0.05) in the present study. The expression level of the CRYAB gene in 3-month-old Japanese Wagyu × Hereford cattle was 1.73 times higher in Pyrmont × Hereford cattle, and the intermuscular fat content of Japanese Wagyu × Hereford hybrid cattle was significantly higher than that of the Pyrmont × Hereford hybrid cattle [35], suggesting that CRYAB may be related to multiple traits, such as high intermuscular fat content and muscle development. However, the overexpression of CRYAB inhibited fat deposition and reduced intracellular fatty acid content, which is inconsistent with the results in which the expression level of CRYAB was proportional to fat content. It is suggested that the effect of CRYAB in vivo on intermuscular fat deposition and lipid metabolism may occur through the coregulation of endocrine and paracrine pathways. Thus, gene overexpression could not increase the intracellular fat content in fibroblasts and might not be entirely consistent with the result in vivo. Hence, the regulatory effects of this gene on lipid metabolism and networks need further in vivo validation. In addition, CRYAB overexpression up-regulated GPD1 (FC > 1.5, p < 0.05), and in a human obesity study, triglyceride synthesis was reduced and muscle mass was increased by growth factor receptor bound protein 14 (GRB14) and GPD1. The expression levels of GPD1 and GDF8 were down-regulated after weight loss, but increased in obese women compared with lean women [36]. At the same time, studies have also shown that in the omental adipose tissue of obese women, LPL, GPD1 and leptin (LEP) are significantly reduced, suggesting that the increased expression of GPD1 promotes fat deposition [37]. The down-regulated gene PRKAG1 was located in a quantitative trait locus (QTL) associated with pig fat traits, suggesting that the PRKAG1 gene may also be associated with bovine fat traits. The above results suggest that CRYAB-mediated regulation of fat content has an effect on fat deposition capacity, but its effects on fat deposition and the regulatory mechanisms involved need further analysis in vivo.
ANKRD2 is localized to the nucleus and sarcomere in muscle cells and plays a role in the differentiation of muscle cells, as shown by its expression being induced during C2C12 differentiation in vitro [38]. The results of this study showed that ANKRD2 gene overexpression inhibited intracellular fat deposition, reduced intracellular fatty acid composition and regulated expression levels of genes in fatty acid metabolism pathways. At the same time, studies in a diabetic mouse model showed that the expression level of ANKRD2 was also changed in diabetes [39]. Overexpression of ANKRD2 could up-regulate HMGCS2 and down-regulate GK2. Previous studies found that HMGCS2 induced fatty acid α-oxidation and ketone production in hepatoma cells and played a crucial role in fatty acid oxidation [40]. In addition, HMGCS2 up-regulation increased intracellular fat oxidation and reduced triglyceride content [40]. It is worth noting that GK2 was found to be associated with pig backfat thickness in a genome-wide association analysis of Duroc pigs [41]. Overall, the ANKRD2 gene could improve meat quality traits by regulating the expression of genes such as HMGCS2 and GK2, which are involved in glucose and lipid metabolism, to affect intracellular fat and fatty acid composition.
Transcriptome analysis showed that the expression level of the ALDH9A1 gene in the Longissimus dorsi of Wagyu was 62.93% of the level in Chinese Red Steppe cattle. We speculate that the ALDH9A1 gene is also a potential functional gene for meat quality traits in cattle. In a study in which blood lipids were reduced in rats fed mulberry leaves, the expression level of ALDH9A1 was significantly lower in these rats, suggesting that ALDH9A1 also participates in lipid metabolism [42]. Notably, genome-wide association analysis in multiple pig populations found that ALDH9A1 was associated with fatty-acids in muscle and abdominal adipose tissue in pigs [43], suggesting a potential role for ALDH9A1 in meat quality traits and fatty acid content in pork [44]. Furthermore, overexpression of ALDH9A1 could result in up-regulation of the ACADSB expression level (FC > 1.5, p < 0.05). Our previous studies found that the ACADSB gene could significantly increase the intracellular triglyceride content (p < 0.05) [45] and that the knockout of ACADSB in bovine mammary epithelial cells may have been an important regulator of intracellular fatty acid content [46]. However, we found that overexpression of the ALDH9A1 gene caused a significant decrease in intracellular triglyceride content (p < 0.05), suggesting that it may not regulate intracellular triglyceride contents through increased ACADSB gene expression alone. We found that the expression level of EHHADH in the Longissimus dorsi muscle of Wagyu was only 25.33% of the expression level of Chinese Red Steppe cattle (p < 0.05). EHHADH is part of the canonical peroxisomal fatty acid β-oxidation pathway that can be induced by PPARα activation [47]. In a dairy cow genome-wide association analysis study, EHHADH and 19 other genes were correlated with milk fatty acid traits in a Chinese Holstein dairy cow population [48], suggesting that EHHADH is a potential functional gene that affects fatty acid metabolism in cattle. Overexpression of EHHADH in BFFs significantly reduced intracellular triglycerides. Meanwhile, it resulted in up-regulation of seven genes and down-regulation of 28 genes in the fatty acid metabolism pathway, among which the expression level of carnitine palmitoyltransferase 1A (CPT1A) was down-regulated by a factor of 57.28. CPT1 is a key rate-limiting enzyme responsible for the transport of longchain fatty acids into mitochondria, and overexpression of CPT1 in skeletal muscle in vivo increases fatty acid oxidation and reduces triacylglycerol esterification [49]. Stimulation of systemic CPT1 activity may also accelerate peripheral fatty acid oxidation [50]. Furthermore, CPT1 has three paralogous genes in mammals: CPT1A, B, and C. CPT1A is mainly expressed in the liver, whereas CPT1B is expressed in muscle and, to a lesser degree, in the liver. After treatment with tetradecylglycine, PPARα, CPT1A and CPT1B were significantly upregulated in the livers of mice [51]. It appears that the expression of CPT1A and CPT1B may vary with the timing of PPARα activation or may not be fully mediated by the PPARα pathway. Notably, it was found in our previous study that HSL gene overexpression led to up-regulation of the EHHADH gene [52], whereas overexpression of the EHHADH gene resulted in down-regulation of HSL gene expression. Although both HSL and EHHADH are essential genes in regulation of fat and fatty acid metabolism, the regulatory roles of HSL and EHHADH are poorly defined [53][54][55]. 
From the results of our previous study, it is speculated that HSL overexpression increased intracellular triglyceride content, so the oxidation of EHHADH was activated. However, EHHADH gene overexpression led to a decrease in intracellular triglyceride in this study. We believe that with decreasing triglyceride levels, HSL gene expression became inactive and the decreasing content of triglycerides reduced its expression. Therefore, the regulatory mechanism of the level of expression and the interaction between HSL and EHHADH need further study.
Animals and Longissimus dorsi Sample
The Longissimus dorsi samples of Wagyu and Chinese Red Steppe cattle used in this experiment were provided by Inner Mongolia University (Hohhot, China) and the Agricultural Science Academy of Jilin Province (Gongzhuling, China), respectively. The two farms raising the two groups of cattle are located at similar altitudes and have similar natural climatic conditions. Cattle in both groups were raised under similar feeding conditions and fed corn and hay with free access to fodder and water. The Wagyu and Chinese Red Steppe cattle were slaughtered at 28 months of age to collect Longissimus dorsi muscle tissues, with three biological replicates per breed. All the tissue details were given in our previous study [11]. The samples were cut into pieces, aliquoted into cryovials, and quickly stored in liquid nitrogen. All the animal experiments in the present study strictly complied with the relevant regulations regarding the care and use of experimental animals issued by the Jilin University Animal Care and Use Committee (Approval ID: 20140310).
RNA Extraction and RNA Sequencing
Total RNA was extracted from the tissues with Trizol (Takara, Dalian, China). The extracts were treated with DNase I (NEB, Beijing, China), and the concentration of the total RNA obtained was determined with an Agilent 2100 Bioanalyzer (Davis, CA, USA). The purified RNA was then used for RNA sequencing. First, eukaryotic mRNA was enriched with Oligo(dT) magnetic beads. The mRNA was then broken into short fragments by adding a fragmentation reagent at the appropriate temperature in a thermomixer (Eppendorf AG, Hamburg, Germany). The fragmented mRNA was used as a template to synthesize the first-strand cDNA, followed by second-strand synthesis. Finally, the product was purified and recovered, the sticky ends were repaired, a base "A" was added to the 3′ end of the cDNA, and the adapters were ligated. The constructed library was sequenced on the Illumina HiSeq2000 (BGI, Shenzhen, China) platform after quality checking with an ABI StepOnePlus Real-Time PCR System (StepOnePlus; Applied Biosystems, Waltham, MA, USA).
Sequencing Data Analysis
The original sequencing data are called raw reads, and clean reads were obtained after filtering out low-quality reads. The clean reads were mapped to the reference genome (Bos taurus UMD_3.1.1) using HISAT2 [56]. Next, we used Bowtie2 [57] to align the clean reads to the reference sequence and then used RSEM [58] to calculate gene and transcript expression levels. Differentially expressed genes were identified with the EBSeq algorithm using thresholds of log2 FC > 0.585 or < −0.585 and FDR < 0.05.
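To make the screening criterion above concrete, the following sketch applies the same |log2 fold change| > 0.585 and FDR < 0.05 cut-offs to a table of expression results. The column names and the pandas-based workflow are assumptions for illustration; the actual screening in this study was performed with EBSeq.

```python
import pandas as pd

# Hypothetical EBSeq-style output: one row per gene with log2 fold change and FDR.
results = pd.DataFrame({
    "gene":   ["CRYAB", "ANKRD2", "ALDH9A1", "EHHADH", "ACTB"],
    "log2fc": [1.36, 0.90, -0.67, -1.98, 0.05],
    "fdr":    [0.001, 0.02, 0.03, 0.0005, 0.90],
})

# Screening thresholds used in the text: |log2 FC| > 0.585 and FDR < 0.05.
degs = results[(results["log2fc"].abs() > 0.585) & (results["fdr"] < 0.05)]
up   = degs[degs["log2fc"] > 0]
down = degs[degs["log2fc"] < 0]

print("DEGs:", degs["gene"].tolist())
print("Up-regulated:", up["gene"].tolist(), "Down-regulated:", down["gene"].tolist())
```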
Gene Ontology Enrichment and Kyoto Encyclopedia of Genes and Genomes Pathway Analysis
The DEGs were subjected to GO enrichment analysis by the R language package of GO seq [59], with gene length corrected for bias. KOBAS software [60] was used to test the statistical enrichment of related genes in the KEGG pathways [61]. A corrected p < 0.05 was considered significant enrichment.
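The significance testing behind such enrichment analyses is typically a hypergeometric (or Fisher's exact) test followed by multiple-testing correction. The sketch below illustrates the idea for a single GO term with made-up counts; the actual analysis in this study used GOseq and KOBAS.

```python
from scipy.stats import hypergeom

# Hypothetical counts for one GO term (illustrative only):
M = 20000   # annotated background genes
n = 150     # background genes annotated with this GO term
N = 388     # DEGs tested
k = 12      # DEGs annotated with this GO term

# P(X >= k) under the hypergeometric null hypothesis of no enrichment.
p_value = hypergeom.sf(k - 1, M, n, N)
fold_enrichment = (k / N) / (n / M)
print(f"p = {p_value:.3e}, fold enrichment = {fold_enrichment:.1f}")
```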
Real-Time Quantitative PCR Analysis
To detect the relative mRNA expression levels by real-time quantitative PCR (RT-qPCR), the primers were designed using Primer Premier 6.0 software. The gene IDs and primer sequences are given in Table S1. The β-actin (ACTB) gene was selected as the internal reference gene. The RT-qPCR reaction system was: SYBR® Premix Ex Taq (Tli RNaseH Plus) (2×) 5.0 µL, PCR upstream and downstream primers (10 µmol/L) 0.2 µL each, cDNA template 1.0 µL, and nuclease-free water 3.6 µL, for a total reaction volume of 10.0 µL. The reaction conditions were: 95 °C for 5 min; 40 cycles of 95 °C for 10 s and 60 °C for 30 s; followed by a melting curve program of 95 °C for 15 s, 60 °C for 20 s, and 95 °C for 15 s.
Construction of Candidate Gene Overexpression Vector
Two pairs of primers were designed for each candidate gene, and the CDS region sequence of the target fragment was amplified by nested PCR. The details of the primer sequences and mRNA ID are shown in Table S2. PCR products were verified by Sanger sequencing and ligated into pBI-CMV3 plasmid (#631632, Clontech Laboratories, Mountain View, CA, USA) to generate overexpression vectors for candidate genes.
Cell Culture and Transfection
The BFFs used in the study were purified and cultured from newborn cattle ear tip tissues according to our laboratory's previous tissue-nubble culture method [62]. The BFFs were cultured in 10 cm culture plates (Falcon, 353003, Franklin Lakes, NJ, USA) in DMEM/F12 (HyClone, 12-719Q, Logan, UT, USA) supplemented with 10% FBS (fetal bovine serum, 11011-6123, Tian Hang, Hangzhou, Zhejiang, China). To investigate the role and regulatory mechanism of the candidate genes for meat quality traits in fat and fatty acid metabolism, we seeded cells at a concentration of 2 × 10^6 cells/well in six-well culture plates (353090, Falcon) and cultured them at 37 °C and 5% CO2 in an incubator (Thermo Fisher Scientific, Inc., Waltham, MA, USA). When the cell density reached 80%, the culture medium was exchanged for transfection. Each overexpression vector of the candidate genes was transfected into cells using FuGENE HD Transfection Reagent (PRE2311, Promega, Madison, WI, USA) according to the manufacturer's protocol, and the group of BFFs transfected with pBI-CMV3 served as the negative control group. At 24 h post-transfection, cell morphology and growth state were observed under a microscope, and the expression of green fluorescent protein (GFP) in the cells was observed under a fluorescence microscope (Nikon TE2000, Tokyo, Japan) to assess the transfection efficiency. Triplicate experiments were performed by transfecting the same number of cells with the same vector in different wells.
Analysis of the Triglyceride Contents in BFFs
Cells were collected 48 h post-transfection. The triglyceride content in each group of BFFs was detected according to the manufacturer's protocol of the triglyceride assay kit (Applygen Technologies, E1015-105, Beijing, China), and the absorbance at a wavelength of 550 nm was measured with a multi-function microplate reader (Biotech, San Francisco, CA, USA). The protein concentration of the same sample was determined with a BCA protein detection kit (KeyGEN BioTECH, KGP902, Nanjing, Jiangsu, China) following the standard operating procedure in the instruction manual. The triglyceride content was finally corrected per mg of protein.
Fatty Acids (FAs) Extraction and Content Analysis in BFFs by GC
FAs were extracted from the BFFs 48 h post-transfection. After washing three times with phosphate-buffered saline (PBS), cells were trypsinized with 0.25% trypsin solution (Gibco, Grand Island, NY, USA). Cell pellets were collected by centrifugation at 500× g. The FA detection methods followed Pingjiang [30]. Briefly, Folch solution (2:1 CHCl3:CH3OH, v/v) and the internal reference FA (ginkgolic acid C13:0, 49,962, Sigma-Aldrich, St. Louis, MO, USA) were added to the cells of each group. Then, the tube with the cell pellet was filled with high-purity nitrogen and shaken vigorously. After the chloroform was evaporated, a methylation solvent mixture consisting of 35% BF3 methanol (14%) (33040-U, Sigma-Aldrich), 45% methanol and 20% hexane was added to the glass tube. Finally, 1 mL of hexane and 0.4 mL of NaCl (0.88%) were added, and the supernatant was transferred with a long Pasteur pipette into a clean, capped glass vial for gas chromatography analysis (GC7980, ALS7020, Techcomp, Hong Kong, China). A standard FA mixture (Supelco 37, 18919-1 AMP, Sigma-Aldrich) was used as the standard solution. The Gas Chromatograph System (Agilent 7890A) was used with an HP-FFAP elastic quartz capillary column (100 m × 0.25 mm, film thickness 0.2 µm) (CP-Sil 88 for methyl esters, Agilent, Santa Clara, CA, USA) to detect the fatty acid methyl esters. The specific operating conditions were as follows. The initial column temperature was set to 70 °C. The injection and detector temperatures were 250 °C and 255 °C, respectively. The split ratio was 10:1 and the carrier gas was nitrogen. The injection volume was 1.0 µL. The flow rates of hydrogen, nitrogen, and air at the outlet were 25, 20 and 150 mL/min, respectively. A cleaning run was performed after every four samples. The content of the FA components was calculated by peak area normalization. Thirty-seven standard FAs were measured, and the content of each FA was calculated from the amount of the internal reference FA and the ratio of that FA to the total FAs.
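The peak-area normalization with an internal reference described above can be expressed as in the sketch below: each fatty acid's mass is estimated from its peak area relative to the spiked internal standard of known amount. The peak areas and amounts are placeholders, and the assumption of equal response factors is a deliberate simplification for illustration.

```python
def fatty_acid_contents(peak_areas: dict, internal_std_area: float, internal_std_ug: float) -> dict:
    """Estimate each fatty acid's mass (ug) from GC peak areas using a single-point
    internal standard (assumes equal response factors -- an illustrative simplification)."""
    return {fa: area / internal_std_area * internal_std_ug for fa, area in peak_areas.items()}

# Placeholder peak areas for the six detected fatty acids and a C13:0 internal standard.
areas = {"C6:0": 120.0, "C8:0": 95.0, "C16:0": 4200.0, "C18:0": 2300.0,
         "C18:2": 800.0, "C22:6": 150.0}
contents = fatty_acid_contents(areas, internal_std_area=1000.0, internal_std_ug=10.0)

total = sum(contents.values())
for fa, ug in contents.items():
    print(f"{fa}: {ug:.2f} ug ({ug / total:.1%} of total)")
```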
Analysis of Lipid Metabolism by RT 2 Profiler PCR Array
Two micrograms of total RNA extracted from BFFs with the RNeasy mini kit (74134, Qiagen, Frederick, MD, USA) were used. The cDNA was then synthesized using the RT2 First Strand Kit (330404, Qiagen, Frederick, MD, USA) according to the manufacturer's protocol. RT-qPCR was performed with an Mx3005p system (Stratagene, Agilent, Santa Clara, CA, USA). The transcript levels of lipid metabolism genes were detected with the RT2 Profiler PCR Array (CLAB24070A, Qiagen, Frederick, MD, USA), using RT2 SYBR Green ROX qPCR Master Mix (Qiagen, Frederick, MD, USA) according to the manufacturer's instructions. The reactions were incubated at 95 °C for 10 min, followed by 40 cycles of 95 °C for 10 s and 60 °C for 1 min, with a final melting curve program of 95 °C for 15 s, 60 °C for 20 s and 95 °C for 15 s. β-actin, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein zeta (YWHAZ), hypoxanthine phosphoribosyltransferase (HPRT1) and TATA-binding protein (TBP) were used as reference controls. Reference genes were selected based on normalized threshold cycle (CT) values. The online RT2 Profiler PCR Array data analysis software (https://geneglobe.qiagen.com/cn/analyze, accessed on 1 May 2021) was used to analyze the relative gene expression data. A p value < 0.05 was considered statistically significant.
Statistical Analysis
Experimental results are expressed as the mean ± standard error of the mean (SEM). Relative expression levels of DEGs were calculated using the comparative Ct method (2^−ΔCt). Expression levels of the candidate genes in the overexpression groups relative to the negative control group were calculated using the comparative Ct method (2^−ΔΔCt). The expression level of each mRNA was analyzed and calculated relative to β-actin. GraphPad Prism 6 software (GraphPad Software, San Diego, CA, USA) was used to analyze the data with a t-test. The statistical analysis of triglyceride contents was performed with GraphPad Prism 6 software by one-way analysis of variance (ANOVA) with Dunnett's multiple comparisons tests. p < 0.05 was defined as statistically significant.
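A minimal sketch of the comparative Ct calculations mentioned above is given below. The Ct values are illustrative placeholders, and β-actin is assumed as the reference gene, as in the study.

```python
def delta_ct(ct_target: float, ct_reference: float) -> float:
    """ΔCt: target gene Ct normalized to the reference gene Ct."""
    return ct_target - ct_reference

def relative_expression_2_ddct(ct_target_treated, ct_ref_treated,
                               ct_target_control, ct_ref_control):
    """Fold change of a target gene (treated vs. control) by the 2^-ΔΔCt method."""
    ddct = delta_ct(ct_target_treated, ct_ref_treated) - delta_ct(ct_target_control, ct_ref_control)
    return 2 ** (-ddct)

# Placeholder Ct values: a target gene in the overexpression group vs. the pBI-CMV3 control.
fold_change = relative_expression_2_ddct(
    ct_target_treated=22.1, ct_ref_treated=17.0,   # overexpression group
    ct_target_control=26.3, ct_ref_control=17.2,   # negative control group
)
print(f"Fold change (2^-ΔΔCt): {fold_change:.1f}x")
```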
Conclusions
This study generated a dataset from transcriptome profiling of two cattle breeds with differing fat deposition capacities of the muscle and identified 388 DEGs in the Longissimus dorsi between Wagyu and Chinese Red Steppe. The presented DEGs confirm in part previous reports about the functional genes related to intramuscular fat deposition and also provide novel candidate genes related to meat quality traits in cattle. Meanwhile, CRYAB, ANKRD2, ALDH9A1 and EHHADH gene overexpression inhibited intracellular triglycerides and affected intracellular fatty acid components by regulating gene expression levels in fat and fatty acid metabolic pathways. The results provide valuable insight into the significant variation between Wagyu and Chinese Red Steppe cattle meat quality and offer useful genetic markers for the breeding of high-grade beef.
Data Availability Statement:
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www. ncbi.nlm.nih.gov/genbank/ (accessed on 1 May 2022), GSE161967. | 2022-12-29T16:17:01.224Z | 2022-12-26T00:00:00.000 | {
"year": 2022,
"sha1": "9a9eab58912ac5097e20a169f7a765c309895cdf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/1/387/pdf?version=1672049805",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "048b84db6d50cfc70ec6af0236717980b1ace943",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
247962520 | pes2o/s2orc | v3-fos-license | Spontaneous pseudointimal graft dissection in a patient with coarctation of the aorta
CASE SUMMARY A 45-year-old male with a past medical history significant for chronic chest pain syndrome and a past surgical history significant for a coarctation repair 14 years prior. The coarctation repair was performed using an 18 mm Gelweave bypass graft between the descending thoracic aorta and the left subclavian artery. The patient developed graft thrombosis one year ago and was treated with long-term anticoagulation. On presentation, the patient complained of acute substernal chest discomfort radiating to his back. Vital signs and physical exam were normal. There were no pulse deficits or radio-femoral delay. A CTA revealed a dissection flap involving the synthetic graft without evidence of graft thrombosis. The patient was treated conservatively with beta-blockers and continuation of his anticoagulation medication. One month later, the patient returned with similar substernal chest pain. Repeat CTA demonstrated a patent graft with chronic dissection. The patient was scheduled for evaluation of his graft at an outside facility.
Spontaneous pseudointimal graft dissection in a patient with coarctation of the aorta Shams Jubouri, MD; Mani Razmjoo, MD; and Paulomi Kanzaria MD
CASE SUMMARY
A 45-year-old male with a past medical history significant for chronic chest pain syndrome and a past surgical history significant for a coarctation repair 14 years prior. The coarctation repair was performed using an 18 mm Gelweave bypass graft between the descending thoracic aorta and the left subclavian artery. The patient developed graft thrombosis one year ago and was treated with long-term anticoagulation.
On presentation, the patient complained of acute substernal chest discomfort radiating to his back. Vital signs and physical exam were normal. There were no pulse deficits or radio-femoral delay. A CTA revealed a dissection flap involving the synthetic graft without evidence of graft thrombosis.
The patient was treated conservatively with beta-blockers and continuation of his anticoagulation medication. One month later, the patient returned with similar substernal chest pain. Repeat CTA demonstrated a patent graft with chronic dissection. The patient was scheduled for evaluation of his graft at an outside facility.
IMAGING FINDINGS
A CT angiogram was obtained (Figures 1 and 2), which demonstrated coarctation of the aorta and a bypass graft between the left subclavian artery and the descending thoracic aorta. A dissection flap within the graft was noted along its entire length, with equal contrast opacification of the true and false lumens. There was no evidence of graft thrombosis. Additionally, there was narrowing of the side-to-side anastomosis of the graft with the aorta (Figure 2). This narrowing was stable in comparison to prior imaging.
DISCUSSION
Post-coarctation graft complications include pseudoaneurysm, true aneurysm, late graft rupture, fistulization, and graft stenosis.1 One report has described a pseudointimal dissection in a Dacron graft leading to re-coarctation, which was managed surgically.2 The current literature lacks adequate research on synthetic graft dissection. Intimal hyperplasia, together with a secondary insult, predisposes patients to synthetic graft dissection.3 Graft dissection may be spontaneous or secondary to blunt trauma. Spontaneous dissection of the neo-intima has been described in a porcine-valve Dacron conduit used in the surgical correction of various congenital cardiac anomalies.4 Shigematsu et al discussed a non-anastomotic stenosis of a knitted Dacron graft following a repeated ilio-femoral bypass surgery. The short segmental stenosis was attributed to a pseudointimal dissection resulting from intraoperative clamping of the graft.3 Large vessel reconstruction using synthetic grafts alters the reflexes of systemic cardiovascular function. For instance, Dacron is 24 times stiffer than healthy aortic arch tissue. The mismatched compliance
between the two results in excessive stress at the suture line and leads to subsequent intimal hyperplasia.5 Intimal hyperplasia is a well-recognized cause of delayed graft failure. It is a physiologic response to vascular irritation resulting in an abnormal proliferation and migration of smooth muscle cells and extracellular matrix accumulation. This response is commonly seen at the distal anastomosis. High oscillatory shearing during the systolic phase, platelet-derived growth factor, and the release of mitogens from leukocytes, smooth muscle cells, and endothelial cells are the contributing factors. The endothelialization acts as a trigger for smooth muscle proliferation in the underlying layer.6,7 Intimal hyperplasia has also been recognized in autologous vein grafts. This phenomenon, however, is a sequel of arterialization of the graft.3
CONCLUSION
We report an unusual graft complication, a pseudointimal graft dissection, in a patient after repair of coarctation of the aorta. While extremely rare, this case highlights that synthetic grafts can dissect along their pseudointima. Intimal hyperplasia can extend beyond the anastomotic site, as suggested in our case, where the dissection flap spans the entire graft. This complication should be considered in the evaluation of patients with synthetic grafts.
FIGURE 1. Axial (A) and coronal (B) CTA reformats demonstrate an end-to-side bypass graft (short blue arrows) between the left subclavian artery (not demonstrated on these figures) and the descending thoracic aorta (b). There is a dissection flap within the graft along its entire length (short red dotted arrows) with equal contrast opacification of the true (medial) and false (lateral) lumens. Figure A demonstrates the aortic arch (a), the ascending aorta (b) and the existing coarctation (blue circle).
FIGURE 2. 3D volume-rendered CT image showing the ascending (a) and descending thoracic aorta (b), again demonstrating dissection in the bypass graft between the left subclavian artery (c) and the descending thoracic aorta (b). There is narrowing of the side-to-side anastomosis of the graft with the aorta (blue circle). | 2019-10-05T23:34:12.642Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "bf945b602605eb088e1a41e4a59c485615c28f2e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.37549/ar2450",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bf945b602605eb088e1a41e4a59c485615c28f2e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
257884080 | pes2o/s2orc | v3-fos-license | Multi-Agent Deep Reinforcement Learning for Multi-Robot Applications: A Survey
Deep reinforcement learning has produced many success stories in recent years. Some example fields in which these successes have taken place include mathematics, games, health care, and robotics. In this paper, we are especially interested in multi-agent deep reinforcement learning, where multiple agents present in the environment not only learn from their own experiences but also from each other and its applications in multi-robot systems. In many real-world scenarios, one robot might not be enough to complete the given task on its own, and, therefore, we might need to deploy multiple robots who work together towards a common global objective of finishing the task. Although multi-agent deep reinforcement learning and its applications in multi-robot systems are of tremendous significance from theoretical and applied standpoints, the latest survey in this domain dates to 2004 albeit for traditional learning applications as deep reinforcement learning was not invented. We classify the reviewed papers in our survey primarily based on their multi-robot applications. Our survey also discusses a few challenges that the current research in this domain faces and provides a potential list of future applications involving multi-robot systems that can benefit from advances in multi-agent deep reinforcement learning.
Introduction
In a multi-robot application, several robots are usually deployed in the same environment [1][2][3]. Over time, they interact with each other, via radio communication for example, and coordinate to complete a task. Application areas include precision agriculture, space exploration, and ocean monitoring, among others. However, in all such real-world applications, many situations might arise that have not been thought of before deployment, and, therefore, the robots must plan online based on their past experiences. Reinforcement learning (RL) is one computing principle that we can use to tackle such dynamic and non-deterministic scenarios. Its primary foundation is trial and error: in a single-agent setting, the agent takes an action in a particular state of the environment, receives a corresponding reward, and transitions to a new state [4]. Over time, the agent learns which state-action pairs are worth re-experiencing based on the received rewards and which ones are not [5]. However, the number of state-action pairs becomes intractable, even for smallish computational problems. This has led to the technique known as deep reinforcement learning (DRL), where the expected utilities of the state-action pairs are approximated using deep neural networks [6]. Such deep networks can have hundreds of hidden layers [7]. Deep reinforcement learning has recently been used for finding faster matrix multiplication algorithms [8], for drug discovery [9], to beat humans in Go [10], to play Atari [6], and for routing in communication networks [11], among others. Robotics is no different: DRL has been used in applications ranging from path planning [12] and coverage [13] to locomotion learning [14] and manipulation [15].
Going one step further, if we introduce multiple agents to the environment, this increases the complexity [16]. Now, the agents not only need to learn from their own observations in the environment but also be mindful of other agents' transitions. This essentially means that one agent's reward may now be influenced by the actions of other agents, and this might lead to a non-stationary system. Although inherently more difficult, the use of multiple robots and, consequently, a multi-agent reinforcement learning framework for the robots is significant [17]. Such learning multi-robot systems (MRS) may be used for precision agriculture [18], underwater exploration [19], search and rescue [20], and space missions [21]. Robots' onboard sensors play a significant role in such applications. For example, the state space of the robots might include the current discovered map of the environment, which could be created by the robots' laser scanners [22]. The state might also include locations and velocities, for which the robot might need sensory information from GPS or an overhead camera [23]. Furthermore, vision systems, such as regular or multi-spectral cameras, can be used by the robots to observe the current state of the environment, and data collected by such cameras can be used for robot-to-robot coordination [24]. Therefore, designing deep reinforcement learning algorithms, potentially lightweight and sample-efficient, that will properly utilize such sensory information, is not only of interest to the artificial intelligence research community but to robotics as well. However, the last survey that reviewed the relevant multi-robot system application papers that use multi-agent reinforcement learning techniques was conducted by Yang and Gu in 2004 [17]. Note that the entire sub-field of DRL was not invented until 2015 [6].
In this paper, we fill this significant void by reviewing and documenting relevant MRS papers that specifically use multi-agent deep reinforcement learning (MADRL). Since today's robotic applications can have a large state space and, potentially, large action spaces, we believe that reviewing only the DRL-based approaches, and not the classic RL frameworks, is of interest to the relevant communities. The primary contribution of this article is that, to the best of our knowledge, this is the only study that surveys multirobot applications via multi-agent deep reinforcement learning technologies. This survey provides a foundation for future researchers to build upon in order to develop state-of-theart multi-robot solutions, for applications ranging from task allocation and swarm behavior modeling to path planning and object transportation. An illustration of this is shown in Figure 1.
Figure 1. The main contribution of this article is that we have reviewed the latest multi-robot application papers that use multi-agent learning techniques via deep reinforcement learning. Readers will be able to find out how these three concepts are used together in the discussed studies and this survey will provide them with insight into possible future developments in this field, which, in turn, will advance the state-of-the-art.
We first provide a brief technical background and introduce the terminology necessary to understand the concepts and algorithms described in the reviewed papers (Section 2). In Section 3, we categorize the multi-robot applications into (1) coverage, (2) path planning, (3) swarm behavior, (4) task allocation, (5) information collection, (6) pursuit-evasion, (7) object transportation, and (8) construction. In Section 4, we identify and discuss a list of crucial challenges that, in our opinion, the current studies in the literature face, and then, finally, we conclude.
Background
In this section, we provide the technical background on the relevant computing principles.
MDP and Q-Learning
Let S and A denote the set of all states and actions available to an agent. Let R: S × A → ℝ denote a reward function that gives the agent a virtual reward for taking action a ∈ A in state s ∈ S. Let T denote the transition function. In a deterministic world, T: S × A → S, i.e., the actions of the agent are deterministic, whereas in a stochastic world, these actions might be probabilistic: T: S × A → prob(S). We can use a Markov Decision Process (MDP) to model such a stochastic environment, which is defined as a tuple ⟨S, A, T, R⟩. The objective is to find an (optimal) policy π: S → A that maximizes the expected cumulative reward. To give higher preference to immediate rewards than to future ones, we discount the future reward values. The sum of the discounted rewards is called the value. Therefore, to solve an MDP, we maximize the expected value (V) over all possible sequences of states. Thus, the expected utility in a state s ∈ S can be recursively defined as follows:

V(s) = max_{a ∈ A} [ R(s, a) + γ ∑_{s'} P(s' | s, a) V(s') ]

The above is called the Bellman equation, where P(s' | s, a) is the probability of transitioning into s' from s by taking action a. We can use value or policy iteration algorithms to solve an MDP. However, in a situation where the R and T functions are unknown, the agent will have to try out different actions in every state to learn which states are good and what action it should take in a particular state to maximize its utility. This leads to the idea of reinforcement learning (RL), where the agent executes a in state s of the environment and receives a reward signal R from the environment as a result. Over time, the agent learns the optimal policy based on this interaction between the agent and the environment [25]. An illustration is shown in Figure 2. In model-based RL, the agent learns an empirical MDP by using estimated transition and reward functions. Note that these functions are approximated by interacting with the environment, as mentioned earlier. Next, similar to an MDP, a value or policy iteration algorithm can be employed to solve this empirical MDP model. In model-free RL, the agent does not have access to T and R. This is true for numerous robotic applications in the real world. Therefore, most of the robotics papers we review in this survey use model-free RL techniques. This is also true for RL algorithms in general.
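To make the value-iteration route to solving a known MDP concrete, the following minimal Python sketch repeatedly applies the Bellman backup above to a small, invented three-state MDP; the transition and reward tables are purely illustrative assumptions, not taken from any surveyed work.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP (tables are illustrative only).
# T[s, a, s'] = P(s' | s, a), R[s, a] = immediate reward.
T = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.6, 0.4], [0.0, 0.1, 0.9]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],   # state 2 is absorbing
])
R = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 1.0]])                 # reward only in the goal state
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    # Bellman backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * (T @ V)                # shape (num_states, num_actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:   # stop when the values have converged
        break
    V = V_new

policy = Q.argmax(axis=1)                  # greedy policy w.r.t. converged values
print("V* =", V, "pi* =", policy)
```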
The goal of RL is to find a policy that maximizes the expected reward of the agent. Temporal difference learning is one of the most popular approaches in model-free RL for learning the optimal utility values of each state. Q-learning is one such model-free RL technique, where the Q-value of a state-action pair (s, a) indicates the expected usefulness of that pair and is updated as follows:

Q(s, a) ← Q(s, a) + α [ R(s, a) + γ max_{a'} Q(s', a') − Q(s, a) ]
Here, α is the learning rate that weighs the new observations against the old. Q-learning is an off-policy method and converges to an optimal policy π* following π*(s) = arg max_{a ∈ A} Q(s, a). An excellent overview of classic RL applications in robotics can be found in [26]. Keeping track of Q-values for all possible state-action pairs in such an RL setting becomes infeasible when there are, for example, a million such combinations. In recent years, artificial neural networks have been used to approximate the optimal Q-values instead of storing the values in a table. This has given birth to the domain of deep reinforcement learning.
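A minimal tabular Q-learning loop implementing the update rule above might look as follows; the environment interface (reset(), step(), num_actions) is a hypothetical stand-in for any environment with discrete states and actions.

```python
import random
from collections import defaultdict

def q_learning(env, num_episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular, off-policy Q-learning; `env` is any object exposing
    reset() -> state and step(action) -> (next_state, reward, done)."""
    Q = defaultdict(lambda: [0.0] * env.num_actions)
    for _ in range(num_episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(env.num_actions)
            else:
                a = max(range(env.num_actions), key=lambda i: Q[s][i])
            s_next, r, done = env.step(a)
            # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
            td_target = r + gamma * max(Q[s_next])
            Q[s][a] += alpha * (td_target - Q[s][a])
            s = s_next
    # greedy policy derived from the learned Q-table
    return {s: max(range(env.num_actions), key=lambda i: Q[s][i]) for s in Q}
```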
Figure 2. The agent-environment interaction loop in RL: the agent executes action a in state s, and the environment returns the reward R(s, a) and the next state s'.
Multi-Agent Q-Learning
Assuming that the state space S is shared among n agents N and that there exists a common transition function T, an MDP for N is represented by the tuple ⟨N, S, A, O, T, R⟩, where the joint action space is denoted by A = A_1 × A_2 × ··· × A_n; the joint reward is denoted by R = R_1 × R_2 × ··· × R_n; and O denotes the joint observation of the agents. As there is more than one agent present, the action of one agent can potentially affect the reward and the consequent actions of the other agents. Therefore, the goal is to find a joint policy π*. However, due to the non-stationary environment and, consequently, the removal of the Markov property, convergence cannot be guaranteed, unlike in the single-agent setting [27]. One of the earliest approaches to learning a joint policy for two competitive agents is due to Littman [28], who modeled the problem as a zero-sum two-player stochastic game (SG), also known as a Markov game in game theory. In an SG, the goal is to find the Nash equilibrium, assuming the R and T functions are known. In a Nash equilibrium, the agents (or the players) will not have any incentive to change their adopted strategies. We slightly abuse the notation here and denote the strategy of agent N_i with π_i. Therefore, in a Nash equilibrium, the following holds for every agent N_i and every state s ∈ S:

V_i(s; π_i*, π_{-i}*) ≥ V_i(s; π_i, π_{-i}*) for all alternative strategies π_i,

where V_i(s; ·) denotes the value of state s ∈ S to the i-th agent and π_{-i} is the joint strategy of the other players. Here, we assume the agents to be rational, and, therefore, all the agents always follow their optimal strategies. This general SG setting can now be used to solve multi-agent reinforcement learning (MARL) problems. In a cooperative setting, the agents have a common goal in mind. Most of the studies in the robotics literature that use MARL adopt such a cooperative setting. In this case, the agents have the same reward function, R. Given this, all the agents in N will have the same value function and, consequently, the same Q-function. The Nash equilibrium will be the optimal solution for this problem. Two main types of learning frameworks are prevalent: independent and joint learners. In an independent learning scenario, each agent ignores the presence of other agents in the environment and considers their influence as noise. The biggest advantage is that each agent/robot can implement its own RL algorithm and there is no need for coordination and, consequently, for a joint policy calculation [16,27]. Independent classic Q-learners have shown promising results in AI [29,30], as well as in robotics [31,32]. On the other hand, joint learners aim to learn the joint optimal policy from O and A. Typically, an explicit coordination mechanism, potentially via communication in an MRS, is in place, and the agents learn a better joint policy compared to the independent learners [16,27]. However, the complexity increases exponentially with the number of agents, causing these approaches not to scale very well. Joint Q-learning algorithms are also popular in robotics [24,33,34], as well as in general AI [28,35]. A comprehensive survey of MARL techniques can be found in [27,36]. The authors in [36] also discuss the application domains for MARL, which include multi-robot teams. A specific relevant example that is discussed is multi-robot object transportation.
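As a sketch of the independent-learner framework described above, each robot can simply run its own tabular Q-learner and treat the other agents' influence as noise; the multi-agent environment interface below is an assumption made for illustration, and observations are assumed to be hashable (e.g., tuples of grid coordinates).

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """Each agent keeps its own Q-table and ignores the other agents,
    treating their effect on the environment as noise."""
    def __init__(self, num_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.Q = defaultdict(lambda: [0.0] * num_actions)
        self.num_actions, self.alpha = num_actions, alpha
        self.gamma, self.epsilon = gamma, epsilon

    def act(self, obs):
        if random.random() < self.epsilon:
            return random.randrange(self.num_actions)
        return max(range(self.num_actions), key=lambda a: self.Q[obs][a])

    def update(self, obs, action, reward, next_obs):
        td_target = reward + self.gamma * max(self.Q[next_obs])
        self.Q[obs][action] += self.alpha * (td_target - self.Q[obs][action])

# Hypothetical usage with a multi-agent environment that returns per-agent
# observations and rewards (e.g., one learner per robot in a shared grid).
def train(env, num_agents, num_actions, episodes=200):
    agents = [IndependentQLearner(num_actions) for _ in range(num_agents)]
    for _ in range(episodes):
        obs, done = env.reset(), False          # obs: list of per-agent observations
        while not done:
            actions = [ag.act(o) for ag, o in zip(agents, obs)]
            next_obs, rewards, done = env.step(actions)
            for ag, o, a, r, o2 in zip(agents, obs, actions, rewards, next_obs):
                ag.update(o, a, r, o2)
            obs = next_obs
    return agents
```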
(Multi-Agent) Deep Q-Learning
As the state and the action spaces increase in size, maintaining a table of the Q-values for all possible state-action pairs might be infeasible. To tackle this challenge, Mnih et al. [6] have proposed a neural network-based approach to approximate the Q-values directly from the sensory inputs. This has given birth to 'deep' Q-learning, as the Q-values of the state-action pairs are updated using a deep neural network.
Q-Networks
In their seminal paper, Mnih et al. [6] proposed DQN, a convolutional neural network (CNN) that approximates the Q-values for a single agent. This is called the Q-network, which is parameterized by θ. The current state s_t is passed as an input to the network, which outputs the Q-values for all the possible actions. An action is chosen next based on the highest Q-value, i.e., a* = arg max_{a ∈ A} Q(s_t, a). To ensure that the agent explores the state space sufficiently, an ε-greedy strategy is used: a* is chosen with probability 1 − ε and the agent takes a random action with probability ε. Due to this action, the state transitions to s_{t+1}. To avoid instability, a target network is maintained; it is identical to the Q-network, but the parameter set θ is only periodically copied to the parameters of this target network, θ⁻. The state transitions are maintained in an experience replay buffer D. Mini-batches from D are sampled and target Q-values are predicted. θ is regressed toward the target values via gradient descent on the following temporal-difference loss function:

L(θ) = E_{(s, a, r, s') ∼ D} [ ( r + γ max_{a'} Q(s', a'; θ⁻) − Q(s, a; θ) )² ]

One of the most popular extensions of DQN is Double DQN (DDQN) [37], which reduces the overestimation bias in Q-learning. DDQN uses the Q-network for action selection following the ε-greedy policy, as mentioned above, but uses the target network for the evaluation of the state-action values. DQN and DDQN are extremely popular in robotics [13,38-41]. A visual working procedure of the generic DQN algorithm is presented in Figure 3.

Figure 3. An illustration of the DQN architecture with a target network and an experience replay.
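The core DQN machinery described above (Q-network, target network, experience replay, and the temporal-difference loss) can be sketched in PyTorch as follows; the small fully connected network and the hyperparameters are illustrative assumptions, whereas the original DQN used convolutional layers over raw images.

```python
import random
from collections import deque
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_actions))
    def forward(self, obs):
        return self.net(obs)                       # Q-values for all actions

obs_dim, num_actions, gamma = 4, 2, 0.99           # illustrative sizes
q_net = QNetwork(obs_dim, num_actions)
target_net = QNetwork(obs_dim, num_actions)
target_net.load_state_dict(q_net.state_dict())     # theta- starts as a copy of theta
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                       # experience replay buffer D
# The interaction loop (not shown) appends transitions (s, a, r, s2, done) to `replay`
# and periodically refreshes the target network:
# target_net.load_state_dict(q_net.state_dict())

def select_action(obs, epsilon=0.1):
    """epsilon-greedy action selection from the Q-network."""
    if random.random() < epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One gradient step on the TD loss using a mini-batch from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(lambda x: torch.as_tensor(x, dtype=torch.float32),
                            zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)    # Q(s, a; theta)
    with torch.no_grad():                                       # target uses theta-
        target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```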
Policy Optimization Techniques
In policy optimization methods, the neural network outputs a probability distribution over the available actions instead of outputting their Q-values. Instead of using something like the ε-greedy strategy to derive a policy from the Q-values, the actions with higher probability outputs from the network have higher chances of being selected. Let us denote a θ-parameterized policy by π_θ. The objective is to maximize the expected cumulative discounted reward

J(π_θ) = E_{τ ∼ π_θ} [ R(τ) ],

where R(τ) is the finite-horizon discounted cumulative reward of a trajectory τ. By optimizing the parameter set θ, e.g., by following the gradient of the policy objective, we aim to maximize the expected reward. Similar to the Q-networks, the learning happens in episodes. In general, the parameters in episode i + 1, θ_{i+1}, will be an optimized version of θ_i following the standard gradient ascent formula

θ_{i+1} = θ_i + α ∇_θ J(π_θ) |_{θ_i}.

In the vanilla form, similar to the Q-networks, the mean squared error between the value of the policy (usually approximated using a neural network) and the reward-to-go (i.e., the sum of rewards received after every state transition so far) is calculated and the approximate value function parameters are regressed. Some of the popular policy optimization techniques include Deep Deterministic Policy Gradient (DDPG) [42], Proximal Policy Optimization (PPO) [43], Trust Region Policy Optimization (TRPO) [44], and Asynchronous Advantage Actor-Critic (A3C) [45], among others. Among these, DDPG is one of the most widely used for multi-robot applications [46][47][48][49][50]. It learns a Q-function similar to DQN and uses that to learn a policy. The policy DDPG learns is deterministic, and its objective is to find actions that maximize the Q-values. As the action space A is assumed to be continuous, the Q-function is differentiable with respect to the actions. To optimize θ and update the policy, we perform one-step gradient ascent as follows:

θ_{k+1} = θ_k + α ∇_θ E_s [ Q_φ(s, μ_θ(s)) ],

where μ_θ denotes the deterministic policy and Q_φ the learned Q-function. DDPG uses a technique called actor-critic to achieve the successful combination of these two types of deep learning: the actor represents the policy and the critic represents the value network. The actor is updated towards the target and the critic is regressed by minimizing the error with respect to the target [51]. The difference between the Q-value for an action a and the expected state value, A(s, a) = Q(s, a) − V(s), is called the advantage. One of the most popular algorithms that uses such an actor-critic framework is A3C [45]. In this algorithm, parallel actors explore the state space via different trajectories, making the algorithm asynchronous; therefore, it does not require maintaining an experience replay. Another popular algorithm in the multi-robot domain is PPO, potentially because of its relatively simple implementation [43]. PPO-clip and PPO-penalty are its two primary variants that are used in robotics [52][53][54][55][56][57].
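As a sketch of the deterministic actor-critic update used by DDPG, the critic is regressed toward a bootstrapped target and the actor is updated by ascending the critic's estimate of Q(s, μ_θ(s)); the network sizes are illustrative assumptions, and the target networks and exploration noise of the full algorithm are omitted for brevity.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2                               # illustrative dimensions

# Deterministic actor mu_theta(s) and critic Q_phi(s, a).
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def update(batch):
    """One DDPG-style update from a mini-batch of (s, a, r, s2, done) tensors."""
    s, a, r, s2, done = batch
    # Critic regression toward the one-step bootstrapped target.
    with torch.no_grad():
        next_q = critic(torch.cat([s2, actor(s2)], dim=1)).squeeze(1)
        target = r + gamma * (1 - done) * next_q
    q = critic(torch.cat([s, a], dim=1)).squeeze(1)
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: one step of gradient ascent on E[ Q_phi(s, mu_theta(s)) ],
    # implemented as descent on the negated objective.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```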
Extensions to Multi-Agent
As described earlier, in independent learning frameworks, any of the previously mentioned deep RL techniques, such as DQN, DDPG, A3C, or PPO, can be implemented on each agent. Note that no coordination mechanism needs to be implemented for this [16,27,58].
For multi-agent DQN, a common experience memory can be used, which will combine the transitions of all the agents, and, consequently, they will learn from their global experiences while virtually emulating a stationary environment. Each agent can have its own network that will lead it to take an action from its Q-values [59]. Yang et al. [60] have proposed a mean field Q-learning algorithm for large-scale multi-agent learning applications. A mean-field formulation essentially brings down the complexity of an n-agent learning problem to a 2-agent learning problem by creating a virtual mean agent from the other (n − 1) agents in the environment. In [61], the authors have introduced the multi-agent extension of DDPG (MADDPG). Here, the actor remains decentralized, but the critic is centralized. Therefore, the critic needs information on the actions, observations, and target policies of all of the agents to evaluate the quality of the joint actions. Figure 4 shows an illustration of this process. Yu et al. [62] have proposed a multi-agent extension of PPO in cooperative settings (MAPPO). Similar to MADDPG, it uses centralized training with a decentralized execution strategy.
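The centralized-critic idea behind MADDPG can be illustrated with the following sketch, in which each robot keeps a decentralized actor over its local observation while a single critic scores the joint observation-action vector during training; the two-agent setup and network dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 2, 6, 2                  # illustrative setup

# Decentralized actors: each maps its *local* observation to its own action.
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, act_dim), nn.Tanh())
          for _ in range(n_agents)]

# Centralized critic: sees the observations and actions of *all* agents.
joint_dim = n_agents * (obs_dim + act_dim)
critic = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))

def joint_q_value(all_obs, all_actions):
    """all_obs, all_actions: lists of per-agent tensors of shape (batch, dim)."""
    joint = torch.cat(all_obs + all_actions, dim=1)
    return critic(joint)                               # centralized Q(o_1..o_n, a_1..a_n)

def actor_loss(agent_idx, all_obs):
    """Agent i's actor is improved by ascending the centralized critic while
    holding the other agents' current actions fixed (centralized training,
    decentralized execution)."""
    all_actions = [actors[j](o) if j == agent_idx else actors[j](o).detach()
                   for j, o in enumerate(all_obs)]
    return -joint_q_value(all_obs, all_actions).mean()
```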
Another approach to extending a single-agent DRL algorithm to a multi-agent setting is to model it as a centralized RL problem, where all the information from the agents is input together. This might create an infeasibly large state and action space for the joint agent. To alleviate this, researchers have looked into how to find each agent's contribution to the joint reward. This is named Value Function Factorization. VDN [63] is one such algorithm for cooperative settings, where the joint Q-value is the sum of the local Q-values of the agents (a minimal sketch of this idea is given after Table 1). A summary of the main types of RL algorithms used in multi-robot applications is presented in Table 1. The reader is referred to [64,65] for recent comprehensive surveys on state-of-the-art MADRL techniques and challenges. Furthermore, Oroojlooy and Hajinezhad [66] have recently published a survey paper reviewing the state-of-the-art MADRL algorithms specifically for cooperative multi-agent systems. As, in most scenarios, the robots in an MRS work together towards solving a common problem, we believe that the survey in [66] would be a valuable asset for the robotics community. Table 1. Types of deep RL algorithms used in the surveyed papers are listed. If a popular algorithm is used as a foundation, the algorithm's name is also mentioned within parentheses.
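The following minimal sketch illustrates the VDN-style factorization mentioned above, in which the joint Q-value used for the temporal-difference loss is simply the sum of per-agent utilities for the chosen actions; the network sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_agents, obs_dim, num_actions = 3, 5, 4               # illustrative sizes

# One small utility network per agent, operating on local observations.
utility_nets = [nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                              nn.Linear(32, num_actions))
                for _ in range(n_agents)]

def joint_q(all_obs, all_actions):
    """VDN factorization: Q_joint = sum_i Q_i(o_i, a_i).
    all_obs: list of (batch, obs_dim) tensors;
    all_actions: list of (batch,) long tensors with the chosen actions."""
    per_agent_q = [net(o).gather(1, a.unsqueeze(1)).squeeze(1)
                   for net, o, a in zip(utility_nets, all_obs, all_actions)]
    return torch.stack(per_agent_q, dim=0).sum(dim=0)   # shape (batch,)
```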
Multi-Robot System Applications of Multi-Agent Deep Reinforcement Learning
A summary of the discussed multi-robot applications is presented in Figure 5.
Coverage and Exploration
The goal of an MRS in a coverage path planning (CPP) application is that every point in the environment is visited by at least one robot while some constraints are satisfied (e.g., no collision among the robots) and user-defined criteria are optimized (e.g., minimizing the travel time) [145]. CPP is one of the most popular topics in robotics. For multi-robot coverage, several popular algorithms exist, even with performance guarantees and worst-case time bounds [146][147][148][149]. In exploration, however, the objective might not be the same as in the multi-robot CPP problem. It is assumed that the sensor radius r > 0, and, therefore, the robots do not need to visit all the points on the plane. For example, the robots might be equipped with magnetic, acoustic, or infrared sensors in ground and aerial applications, whereas a group of underwater vehicles might be equipped with water temperature and current measuring sensors. The robots will need GPS for outdoor localization. Such exploration can be used for mapping and searching applications, among others [150][151][152]. Constraints such as maintaining wireless connectivity for robots with limited communication ranges might be present [153]. Inter-robot communication can be achieved via ZigBee or Wi-Fi. An example is shown in Figure 6.
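To show how such a coverage problem can be cast in the RL terms used by the papers reviewed below, here is a hypothetical grid-world environment skeleton in which the shared state is the set of visited cells and each robot is rewarded for newly covered cells; the interface and reward shaping are illustrative assumptions rather than a reconstruction of any surveyed method.

```python
import numpy as np

class MultiRobotCoverageGrid:
    """Toy multi-robot coverage environment: robots move on a grid and the
    team is rewarded for visiting previously uncovered cells."""
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # up, down, left, right, stay

    def __init__(self, size=10, n_robots=3):
        self.size, self.n_robots = size, n_robots

    def reset(self):
        self.visited = np.zeros((self.size, self.size), dtype=bool)
        self.positions = [(0, i) for i in range(self.n_robots)]   # start along one edge
        for p in self.positions:
            self.visited[p] = True
        return self._obs()

    def step(self, actions):
        rewards = []
        for i, a in enumerate(actions):
            dr, dc = self.MOVES[a]
            r = min(max(self.positions[i][0] + dr, 0), self.size - 1)
            c = min(max(self.positions[i][1] + dc, 0), self.size - 1)
            self.positions[i] = (r, c)
            rewards.append(1.0 if not self.visited[r, c] else -0.01)  # reward new cells
            self.visited[r, c] = True
        done = self.visited.all()                       # episode ends at full coverage
        return self._obs(), rewards, done

    def _obs(self):
        # Each robot observes the shared coverage map plus its own position.
        return [(self.visited.copy(), p) for p in self.positions]
```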
Mou et al. [68] studied area coverage problems and proposed deep reinforcement learning for UAV swarms to efficiently cover irregular three-dimensional terrain. Their UAV swarm structure is based on leader and follower UAVs. The authors implement an observation history model based on convolutional neural networks and a mean embedding method to address limited communication. Li et al. [69] proposed the use of DDQN to train individual agents in a simulated grid-world environment. Then, during the decision-making stage, where previously trained agents are placed in a test environment, the authors use their proposed multi-robot deduction method, which has foundations in Monte Carlo Tree Search. Zhou et al. [154] have developed a multi-robot coverage path planning mechanism that incorporates four different modules: (1) a map module, (2) a communication module, (3) a motion control module, and (4) a path generation module. They implement an actor-critic framework and a natural gradient for updating the network. Up to three robots have been used in a simulation for testing the proposed coverage technique in a grid world. The two cornerstones of the study by Hu et al. [155] are (1) Voronoi partitioning-based area assignment to the robots and (2) the proposed DDPG-based DRL technique for the robots to have a collision-avoidance policy and evade objects in the field. The control of the robots is provided by the underlying neural network. The authors use a Prioritised Experience Replay (PER) [156] to store human demonstrations. The simulation was performed within Gazebo [157], and three Turtlebot3 Waffle Pi mobile robots were used to explore an unknown room during validation. Bromo [53], in his thesis, used MADRL on a team of UAVs using a modified version of PPO to map an area. During training, the policy function is shared among the robots and updated based on their current paths. For multi-UAV coverage, Tolstaya et al. [110] use graph neural networks (GNNs) [158] as a way for the robots to learn the environment through the abstractions of nodes and edges. GNNs have been successfully used in various coordination problems for multi-robot systems, and, more recently, Graph Convolution Networks (GCNs) have been used [158]. The authors in this paper use "behavior cloning" as a heuristic to train the GNN on the robots' previous experiences. In order for the individual UAVs to learn information distant from their position, they use up to 19 graph operation layers. PPO is the base DRL algorithm in this paper. Aydemir and Cetin [159] proposed a distributed system for multi-UAV coverage in partially observable environments using DRL. Only nearby robots share their state information and observations with each other. Blumenkamp et al. [111] developed a framework for decentralized coordination of an MRS. The RL aspect of their system uses GNNs and PPO. The agents train and develop a policy within a simulated environment, and then the physical implementation of the policy with the robots occurs in a test environment. The authors also compare centralized control and communication levels to decentralized decision-making.
Similarly, Zhang et al. [160] have also proposed to employ graph neural networks for multi-robot exploration. The authors emphasize the "coarse-to-fine" exploration method of the robots, where the graph representation of the state space to be explored is explored in "hops" of greater detail. Simulation experiments involved up to 100 robots. Exploration can also be used for searching for a target asset. Liu et al. [84] have proposed a novel algorithm for cooperative search missions with a group of unmanned surface vehicles. Their algorithm makes use of two modules based on a divide-and-conquer architecture: an environmental sense module that utilizes sensing information and a policy module that is responsible for the optimal policy of the robots. Gao and Zhang [161] study a cooperative search problem while using MADRL as the solution method. The authors use independent learners on the robots to find the Nash equilibrium solution with the incomplete information available to the robots. Setyawan et al. [101] also use MADRL for multi-robot search and exploration. Unlike the previously mentioned studies, the authors have adopted a hierarchical RL approach, where they break down an abstraction of the global problem space into smaller sub-problem levels in order for the robot system to more efficiently learn in an actor-critic style. The lowest level in this order decides the robots' motor actions in the field. Sheng et al. [162] propose a novel probability density factorized multi-agent DRL method for solving the multi-robot reliable search problem. According to this study, when each robot follows its own policy to maximize its own reliability metric (e.g., probability of finding the target), the global reliability metric is also maximized. The authors implement the proposed technique on multiple simulated search environments including offices and museums, as well as on real robots. Another study in a similar application domain is done by Xia et al. [127]. Specifically, the authors have used MADRL for the multi-agent multi-target hunting problem. The authors make use of a feature embedding block to extract features from the agents' observations. The neural network architecture uses fully connected layers and a Gated Recurrent Unit (GRU) [163]. Simulation experiments included up to 24 robots and 12 targets. Caccavale et al. [96] proposed a DRL framework for a multi-robot system to clean and sanitize a railway station by coordinating the robots' efforts for maximum coverage. Their approach is decentralized where each robot runs its own CNN and the foundation of their technique is DQN. Note that the robots learn to cooperate online while taking the presence of the passengers into account.
MADRL has been used not only with ground and aerial vehicles but also for ocean monitoring with a team of floating buoys. Kouzehgar et al. [105] proposed two area coverage approaches for such monitoring: (1) swarm-based (i.e., the robots follow simple swarming rules [164]) and (2) coverage-range-based (i.e., the robots have a fixed sensing radius). The swarm-based model was trained using MADDPG, and the latter model was trained using a modified MADDPG algorithm (eliminating reward sharing and the collective reward, so that each agent senses its own share of the reward function and learns independently based on its individual reward).
Communication is one of the most important methods of coordination among a group of robots. More often than not, when and with whom the communication will happen is pre-defined. However, if the robots are non-cooperative, such an assumption does not hold. Blumenkamp and Prorok [118] propose a learning model based on reinforcement learning that allows individual, potentially non-cooperative, agents to manipulate communication policies while the robots share a differentiable communication channel. The authors use a GNN with PPO in their method. The proposed technique has also been successfully employed for multi-robot path planning. Along a similar path, Liang et al. [165] proposed the use of DRL to learn a high-level communication strategy. The authors presume the environment to be partially observable and take a hierarchical learning approach. The implemented application is a cooperative patrolling field with moving targets. Meng and Kan [102] also put multi-robot communication at the forefront of their study while tackling the coverage problem. The goal of the robots is to cover an entire environment while maintaining connectivity in the team, e.g., via a tree topology. The authors use a modified version of MADDPG to solve the stated problem.
MADRL has also been used for sensor coverage, alongside area coverage [166]. In sensor-based coverage, the objective is to cover all the points in an environment with a sensor footprint. An example of this is communication coverage, where the goal of a team of UAVs is to provide Wi-Fi access to all the locations in a particular region. This might be extremely valuable after losing communication in a natural disaster, for example. The authors in [167] presented a solution for UAV coverage using mean field games [168]. This study was targeted toward UAVs that provide network coverage when network availability is down due to natural disasters. The authors constructed the Hamilton-Jacobi-Bellman [169] and Fokker-Planck-Kolmogorov [170] equations via mean field games. Their proposed neural network-based learning method is a modification of TRPO [44] and named mean-field trust region policy optimization (MFTRPO). Liu et al. [104] proposed a coverage method to have a system of UAVs cover an area and provide communication connectivity while maintaining energy efficiency and fairness of coverage. The authors utilize an actor-critic-based DDPG algorithm. Simulation experiments were carried out with up to 10 UAVs. Similar to these, Nemer et al. [171] proposed a DDPG-based MADRL framework for multi-UAV systems to provide better coverage, efficiency, and fairness for network coverage of an area. One of the key differentiating factors of this paper is that the authors also model energy-efficient controls of the UAVs to reduce the overall energy consumption by them during the mission. For a similar communication coverage application, Liu et al. [172] proposed that the UAVs have their own actor-critic networks for a fully-distributed control framework to maximize temporal mean coverage reward.
Path Planning and Navigation
In multi-robot path planning (or path finding), each robot is given a unique start and a goal location. Their objective is to plan a set of joint paths from the start to the goal, such that some pre-defined criteria, such as time and/or distance, are optimized and the robots avoid colliding with each other while following the paths. An illustration is presented in Figure 7. Planning such paths optimally has been proven to be NP-complete [173]. Like A* [174], which is used for single-agent path planning in a discrete space, M* [175] can be used for an MRS. Unfortunately, M* lacks scalability. There exist numerous heuristic solutions for such multi-robot planning that scale well [176][177][178][179]. Overhead cameras and GPS can be used to localize the robots in indoor and outdoor applications, respectively. In GPS- and communication-denied environments, vision systems can be used as a proxy [180]. Recently, researchers have started looking into deep reinforcement learning solutions to solve this notoriously difficult problem. One of the most popular works that uses MADRL for collision avoidance is due to Long et al. [22]. They propose a decentralized method using PPO while using CNNs to train the robots, which use their onboard sensors to detect obstacles. Up to 100 robots were trained and tested via simulation. Lin et al. [109] proposed a novel approach for centralized training and decentralized execution for a team of robots that need to concurrently reach a destination while avoiding objects in the environment. The authors implement their method using CNNs and PPO as well. The learned policy maps LiDAR measurements to the controls of the robots. Bae et al. [72] also use CNNs to train multiple robots to plan paths. The environment is treated as an image where the CNN extracts the features from the environment, and the robots share the network parameters.
Fan et al. [107] have proposed a DRL-based technique using the policy gradient method to train the robots to avoid collisions with each other while navigating in an environment. The authors use LiDAR data for training, and, during testing, this drives the decision-making process to avoid collisions. The authors then transfer the learned policy to physical robots for real-world feasibility testing. The simulation included up to 100 robots with the objective of avoiding collisions with each other, static objects, and, finally, pedestrians. It builds on their previous work from 2018 [131]. Wang et al. [181] also use CNNs for multi-robot collision avoidance and coordination. The authors also use a recurrent module, namely Long Short-Term Memory (LSTM) [182], to memorize the actions of the robots and smooth the trajectories. The authors have shown that the combined use of CNN and LSTM can produce smoother paths for the robots in a continuous domain.
Yang et al. [71] use a priori knowledge to augment the DDQN algorithm to improve the learning efficiency in multi-robot path planning. To avoid random exploration at the beginning of the learning process, the authors have used A * [174] paths for single robots in static environments. This provides better preliminary Q-values to the networks, and, thus, the overall learning process converges relatively quickly. Wang and Deng [39] propose a novel neural network structure for task assignment and path planning where one network processes a top-down view of the environment and another network processes the first-person view of the robot. The foundation of the algorithm is also based on DQN. Na et al. [49] have used MADRL for collision avoidance among autonomous vehicles via modeling virtual pheromones inspired by nature. The authors also used a similar pheromone-based technique, along with a modified version of PPO in [55] for the same objective. Ourari et al. [123] also used a biologically-inspired method (specifically from the behavior of flocks of starlings) for multi-robot collision avoidance while a DRL method, namely PPO, is at its foundation. Their method is executed in a distributed manner and each robot incorporates information from k-nearest neighbors.
For multi-robot target assignment and navigation, Han, Chen, and Hao [117] proposed to train the policy in a simulated environment using randomization to reduce the performance drop when transferring from simulation to the real world. The architecture they developed utilized communication amongst the robots to share experiences. They also developed a training algorithm for navigation policy, target allocation, and collision avoidance. It uses PPO as a foundation. Moon et al. [38] used MADRL for the coordination of multiple UAVs that track first responders in an emergency response situation. One of the key ideas behind their method is the inclusion of the Cramér-Rao lower bound in the learning process. The intent of the authors was to use the DRL-based UAV control algorithm to accurately track the target(s) of the UAV system. They used DDQN as their foundation technique.
Marchesini and Farinelli [74] extended their prior work [75] (which used DDQN and LSTM at its core) by incorporating an Evolutionary Policy Search (EPS) for multi-robot navigation. Their approach has two main components: navigation (reaching a target) and collision avoidance. The EPS integrates randomization and genetic learning into the MARL technique to enhance the policy's ability to explore and to help the robots learn to navigate better.
Lin et al. [112] developed a novel deep reinforcement learning approach for coordinating the movements of an MRS such that the geometric center of the robots reached a target destination while maintaining a connected communication graph throughout the mission. Similarly, Li et al. [183] proposed a DRL method for multi-robot navigation while maintaining connectivity among the robots. The presented technique used constrained policy optimization [184] and behavior cloning. Real-world experiments with five ground robots show the efficacy of the proposed method. Maintaining such connectivity has previously been studied in an information collection application [185] applied to precision agriculture, albeit from a combinatorial optimization perspective [18].
On the other hand, Huang et al. [167] proposed a deep Q-learning method for maintaining connectivity between leader and follower robots. Interestingly, the authors do not use CNNs; instead, they rely only on dense fully connected layers in their network. Similar to these, Challita et al. [167] developed a novel DRL framework for UAVs to learn an optimal joint path while maintaining cellular connectivity. Their main contribution is founded in game theory. The authors used an Echo State Network (ESN), a type of recurrent neural network. In a similar setting, the authors' other work [186] studied minimizing interference from the cellular network using MADRL. Wang et al. [187] proposed to incorporate environmental spatiotemporal information. The proposed method used a global path planning algorithm with reinforcement learning at the local level via DDQN combined with an LSTM module. Choi et al. [92] also used a recurrent module, namely a GRU, along with a CNN for the multi-agent path planning problem in an autonomous warehouse setting. The base of their work was the popular QMIX [188] algorithm, a form of value function factorization algorithm similar to VDN [63]. Another study of multi-robot path planning for warehouse production scenarios was carried out by Li and Guo [128]. They proposed a supervised DRL approach for efficient path planning and collision avoidance. More specifically, using imitation learning and PPO, Li and Guo aimed to increase the learning performance of the vehicles in object transportation tasks.
Yao et al. [115] developed a map-based deep reinforcement learning approach for multi-robot collision avoidance, where the robots do not communicate with one another for coordination. The authors used an egocentric map as the basis of information that the robots use to avoid collisions. Three robots have been used for real-world implementations. Similar to this, Chen et al. [189] also did not rely on inter-robot communication for multi-robot coordinated path planning and collision avoidance while also navigating around pedestrians. Chen et al. [94]'s study on multi-robot path planning also considered non-communicating and decentralized agents using DDQN. Simulation experiments involved up to 96 robots.
Tan et al. [116] have developed a novel algorithm, called DeepMNavigate that uses local and global map information, PPO, and CNNs for navigation and collision avoidance learning. Their algorithm also makes use of multi-staged training for robots. Simulation experiments involved up to 90 robots. Chen et al. [190] proposed a method of using DRL in order for robots to learn human social patterns to better avoid collisions. As human behaviors are difficult to model mathematically, the authors noted that social rules usually emerge from local interactions, which drives the formulation of the problem. Chen et al. [87] proposed a novel DRL framework using hot-supervised contrastive loss (via supervised contrastive learning) combined with DRL loss for pathfinding. The robots do not use communication. They also incorporated a self-attention mechanism in the training. Their network structure used CNNs with DQN while up to 64 agents have been used for testing the approach in simulation. Navigation control using MADRL was also studied in [88], where the authors showed that the robots could recover from reaching a dead end. Alon and Zhou [135] have proposed a multi-critic architecture that also included multiple value networks. Path planning is also important in delivering products to the correct destinations. Ding et al. [91] have proposed a DQN-based MADRL technique for this specific application while combining it with a classic search technique, namely the Warshall-Floyd algorithm.
Transfer learning and federated deep learning have also been used for multi-robot path planning. In transfer learning, the assumption that the training and the testing data come from the same domain does not need to hold, which makes it attractive in many real-world scenarios, including robotics [191]. The objective here is to transfer the learning from one or more source domains to a potentially different target domain. Wen et al. [133] developed two novel reinforcement learning frameworks that extend the PPO algorithm and incorporate transfer learning via meta-learning for path planning. The robots learn policies in the source environments and obtain their policies following the proposed training algorithm. Next, this learning is then transferred to target environments, which might have more complex obstacle configurations. This increases the efficiency of finding the solutions in the target environments. The authors used LSTM in their neural network for memorizing the history of robot actions. In federated deep learning, training data might still be limited similar to the transfer learning applications. In this case, each agent has its own training data instead of using data shared by a central observer [192]. For example, each robot might have access to a portion of the environment, and they are not allowed to share the local images with each other, where the objective is still to train a high-quality global model. Luo et al. [193] have employed such a federated deep RL technique for multi-robot communication. The authors, in this paper, avoid blockages in communication signals due to large obstacles while avoiding inter-robot collisions. It has been shown that the proposed semi-distributed optimization technique is 86% more efficient than a central RL technique. Another federated learning-based path planning technique can be found in [130]. To reduce the volume of exchanged data between a central server and an individual robot, the proposed technique only shares the weights and biases of the networks from each agent. This might be significant in scenarios where the communication bandwidth is limited. The authors show that the presented technique in their paper offers higher robustness than a centralized training model.
PRIMAL is a multi-agent path-finding framework that uses MADRL and was proposed by Sartoretti et al. [194]. PRIMAL uses the A3C [45] algorithm and an LSTM module. It also makes use of imitation learning, whereby each agent can be given a copy of the centrally trained policy by an expert [195]. One of the highlights of this paper is that the proposed technique could scale up to 1024 robots, albeit in simulation. PRIMAL 2 [196] is the advanced version of PRIMAL and was proposed by Damani et al. in 2021. It also uses A3C, like its predecessor, offers real-time path re-planning, and scales up to 2048 robots, double what PRIMAL could handle.
Curriculum learning [197] has also been used for multi-robot path planning in [198], where the path planning is modeled as a lesson, going from easy to hard difficulty levels. An end-to-end MADRL system for multi-UAV collision avoidance using PPO has been proposed by Wang et al. [57]. Asayesh et al. [137] proposed a novel module for safety control of a system of robots to avoid collisions. The authors use an LSTM and a Variational Auto-Encoder [199]. Li [200] has proposed a lightweight decentralized learning framework for multi-agent collision avoidance that uses only a two-layer neural network. Thumiger and Deghat [56] used PPO with an LSTM module for multi-UAV decentralized collision avoidance. Along the same line, Han et al. [54] used GRUs, and their proposed reward function incorporated the reciprocal velocity obstacle for distributed collision avoidance.
For collaborative motion planning with multiple manipulators, Zhao et al. [108] proposed a PPO-based technique. The manipulators learned from their own experiences, and then, a common policy was updated while the arms continued to learn from individual experiences. This created differences in accuracy or actuator ability among the manipulators. Similarly, Gu et al. [50] proposed a method for asynchronous training of manipulator arms using DDPG and Normalized Advantage Function (NAF). Real-world experiments were carried out with two manipulators. Prianto et al. [143] proposed the use of the Soft Actor-Critic (SAC) algorithm [14] due to its efficiency in exploring large state spaces for path planning with a multi-arm manipulator system, i.e., each arm has its own unique start and goal configurations. Unlike the previous works in this domain, the authors used Hindsight Experience Replay (HER) [201] for sample-efficient training. On the other hand, Cao et al. [144] proposed a DRL framework for a multi-arm manipulator to track trajectories. Similarly to [143], Cao et al. also used SAC as their base algorithm. The main distinguishing factor of this study is that the multiple manipulator arms were capturing a non-cooperative object. Results show that the dual-arm manipulator can capture a rotating object in space with variable rotating speeds. An illustration of such a dual-arm manipulation application is shown in Figure 8.
Everett et al. [202] have proposed to use LSTM and extend their previous DRL algorithm [189] for multi-robot path planning to enhance the ability of the robots to avoid collisions. Semnani et al. [203] proposed an extension of the work proposed in [202] by using a new reward function for multi-agent motion planning in three-dimensional dense spaces. They used a hybrid control framework by combining DRL and force-based motion planning. Khan et al. [136] have proposed using GCN and a DRL algorithm called Graph Policy Gradients [134] for unlabeled motion planning of a system of robots. The multi-robot system must find the goal assignments while optimizing their trajectories.
Song et al. [90] designed a new actor-critic algorithm and a method for extracting the state features via a local-and-global attention module for a more robust MADRL method with an increasing number of agents present in the environment. The simulated experiments used dynamic environments with simulated pedestrians. Zhang et al. [204] proposed a method for using a place-timed Petri net and DRL for the multi-vehicle path planning problem. They used a curriculum-based DRL model. Huang et al. [205] proposed a vision-based decentralized policy for path planning. The authors use Soft Actor-Critic with auto-encoders [206] as their deep RL technique for training a multi-UAV system. The 3D images captured by the UAVs and their inertial measurement values were used as inputs, whereas the control commands were output by the neural network. Simulation experiments with up to 14 UAVs were performed within the Airsim simulator. Jeon et al. [207] proposed to use MADRL to improve the energy efficiency of coordinating multiple UAVs within a logistic delivery service. The authors show that their model performs better in terms of consumed energy while delivering similar numbers of goods. MADRL has also found its way into coordinating multiple autonomous vehicles. The authors in [208] provide a solution to the "double merge" scenario for autonomous driving cars that consists of three primary contributions in this field: (1) the variance of the gradient estimate can be minimized without Markovian assumptions, (2) trajectory planning with hard constraints to maintain the safety of the maneuver [209], and (3) the introduction of a hierarchical temporal abstraction [25] that they call an "Option Graph" to reduce the effective horizon, which ultimately reduces the variance of the gradient estimation [210,211]. Similar to this, Liang et al. [212] have modeled the cooperative lane changing problem among autonomous cars as a multi-agent cooperation problem and solved it via MADRL. Specifically, the authors have used a hierarchical DRL method that breaks down the problem into "high-level option selection" and "low-level control" of the agent. Real-world experiments were performed using a robotic test track with four robots, where two of them performed the cooperative lane change.
Finally, Sivanathan et al. [119] proposed a decentralized motion planning framework and a Unity-based simulator specifically for a multi-robot system that uses DRL. The simulator can handle both independent learners and common policies. The simulator was tested with up to four cooperative non-holonomic robots that shared limited information. PPO was used as the base algorithm to train the policies.
Swarm Behavior Modeling
Navigation of a swarm of robots through a complex environment is one of the most researched topics in swarm robotics. To have a stable formation, each robot should be aware of the positions of the nearby robots. A swarm consisting of miniature robots might not have a sophisticated set of sensors available. For example, a compass can be used to know the heading of the robot. Additionally, range and bearing sensors can also be available [213,214]. Infrared sensors can be used for communication in such a swarm system [215]. Inspired by swarms of birds or schools of fish, robots usually follow three simple rules to maintain such formations: cohesion, collision avoidance, and velocity alignment [164]. It is no surprise that multi-agent deep reinforcement learning techniques have been extensively employed to mimic such swarm behaviors and solve similar problems. An illustration of forming a circle with a swarm of five e-puck robots is presented in Figure 9. Zhu et al. [216] proposed a novel algorithm for multi-robot flocking. The algorithm builds on MADDPG and uses PER. Results from three robots show that the proposed algorithm improves over the standard MADDPG. Similarly, Salimi and Pasquier [106] have proposed the use of DDPG with centralized training and a decentralized execution mechanism to train the flocking policy for a system of UAVs. Such flocking with UAVs might be challenging due to complex kinematics. The authors show that the UAVs reach the flocking formation using a leader-follower technique without any parameter tuning. Lan et al. [217] developed a control scheme for the cooperative behavior of a swarm. The basis of their control scheme is pulled from joint multi-agent reinforcement learning theory, where the robots not only share state information, but also a performance index designed by the authors. Notably, the convergence of the policy and the value networks is theoretically guaranteed. Following the above-mentioned works, Kheawkhem and Khuankrue [99] also proposed using MADDPG to solve the multi-agent flocking control problem.
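The three classic flocking rules mentioned at the start of this subsection (cohesion, collision avoidance, and velocity alignment [164]) can be written down in a few lines; this hand-crafted controller is the kind of behavior that the learning-based methods above aim to reproduce or improve upon, and the gains and neighborhood radius below are illustrative assumptions.

```python
import numpy as np

def flocking_velocity(i, positions, velocities, radius=2.0,
                      w_cohesion=0.05, w_separation=0.2, w_alignment=0.1):
    """Reynolds-style velocity update for robot i from its neighbors' states.
    positions, velocities: arrays of shape (n_robots, 2)."""
    deltas = positions - positions[i]
    dists = np.linalg.norm(deltas, axis=1)
    neighbors = (dists < radius) & (dists > 0)
    if not neighbors.any():
        return velocities[i]
    # Cohesion: steer toward the local center of mass.
    cohesion = positions[neighbors].mean(axis=0) - positions[i]
    # Separation (collision avoidance): steer away from close neighbors.
    separation = -(deltas[neighbors] / dists[neighbors, None] ** 2).sum(axis=0)
    # Alignment: match the neighbors' average velocity.
    alignment = velocities[neighbors].mean(axis=0) - velocities[i]
    return (velocities[i] + w_cohesion * cohesion
            + w_separation * separation + w_alignment * alignment)
```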
Qiu et al. [100] used MADRL to improve sample efficiency, reduce overfitting, and allow better performance, even when agents had little or "bad" sample data in a flocking application. The main idea was to train a swarm offline with demonstration data for pre-training. The presented method is based on MADDPG. GNNs are popular not only for coverage, as described earlier, but also in general for coordination in a swarm system, especially in spatial domains. For example, Kortvelesy and Prorok [218] developed a framework, called ModGNN, which aims to provide a generalized neural network framework that can be applied to varying multi-robot applications. The architecture is modular in nature. They tested the framework for a UAV flocking application with 32 simulated robots.
Yan et al. [219] studied flocking in a swarm of fixed-wing UAVs operating in a continuous space. Similar studies on flocking can also be found in more recent papers from these authors [83,220]. Similar to Yan et al.'s body of work, Wang et al. [142] proposed a TD3-based [221] solution for a similar application, flocking with fixed-wing UAVs, where the authors test the method with up to 30 simulated UAVs. Not strictly for swarms, Lyu et al. [47] addressed the multi-agent flocking control problem specifically for a multi-vehicle system using DDPG with centralized training and decentralized execution. Notably, the authors take connectivity preservation into account while designing their reward function: the maximum inter-vehicle distance could not go beyond the communication range, and the minimum distance was kept at d_s, a physically safe distance between two vehicles. Interestingly, the mission waypoints are pre-defined in this paper. Bezcioglu et al. [48] also study flocking in a swarm system using DDPG and a CNN, and tested it with up to 100 robots. The authors have used bio-inspired self-organizing dynamics for the joint motion of the robots.
Wang et al. [113] used MADRL to organize a swarm in specific patterns using autoencoders [222] to learn compressed versions of the states and they tested the presented solution with up to 20 robots. Li et al. [46] proposed using a policy gradient method, namely MADDPG, with an actor-critic structure for circle formation control with a swarm of quad-rotors. Although circle formation is a popular application [223][224][225][226], this is one of the few studies that employed MADRL techniques. Sadhukhan and Selmic [121] have used PPO in order to train a multi-robot system to navigate through narrow spaces and reform into a designated formation. They used two reward schemes (one individual to the agents and one depending on the contributions to the team) and the system was centrally trained. In [125], Sadhukhan and Selmic extended their prior works by proposing a bearing-based reward function for training the swarm system, which utilizes a single policy shared among the robots.
Chen et al. [97] have developed an improved DDPG to enhance the ability of a robot to learn human intuition-style navigation without using a map. Furthermore, they create a parallel version of DDPG to extend their algorithm to a multi-robot application, thereby providing the robots with a method of sharing information and experiences in order to maintain formation, navigate an indoor environment, and avoid collisions. Qamar et al. [138] proposed novel reward functions and an island policy-based optimization framework for multiple target tracking using a swarm system. Along a similar line, Ma et al. [98] developed a DDPG-based algorithm for multi-robot formation control around a target, particularly in a circle around a designated object. The algorithm allows the robots to independently control their actions using local teammates' information.
Recently, Zhang et al. [124] have also proposed a target encirclement solution that uses a decentralized DRL technique. The main contribution of their work is the use of three relational graphs among the robots and other entities in the system, designed using a graph attention network [227]. In their simulation experiments, the authors use six robots encircling two targets. Similarly, Khan et al. [134] have used a graph representation of the robot formation and proposed using graph convolutional neural networks [158,228,229] to extract features, i.e., local features of robot formations, for policy learning. Policies were trained in simulation on three robots and then transferred to over 100 robots for testing. The robots are initialized to certain positions and are to form a specific formation while reaching an end goal.
Zhou et al. [230] recognized the problem of computational complexity with existing MADRL methods for multi-UAV multi-target tracking while proposing a decentralized solution. Their proposed solution has its root in the reciprocal altruism mechanism of cooperation theory [231]. The experience replay is shared among the UAVs in this work. Zhou et al. [139] also study target tracking with a swarm of UAVs. Not only do they learn to track a target, but the robots also learn to communicate better (i.e., the content of the message) for such tracking following the proposed policy gradient technique.
Yasuda and Ohkura [78] used a shared replay memory along with DQN to accelerate the training process for the swarm with regard to path planning. By using more robots contributing their individual experiences to the replay memory, the swarm system was able to learn the joint policy faster. Communication is an important aspect of swarm systems. Usually, researchers use pre-defined communication protocols for coordination among the swarm robots. Hüttenrauch et al. [140] proposed a histogram-based communication protocol for swarm coordination, where the robots use DRL to learn decentralized policies using TRPO [44]. An example task is graph building formation, where the robots aim to cover a certain area through coordination. Another considered task is establishing a communication link between the robots and connecting two points on a map. Along the same line, in 2019, Hüttenrauch et al. [141] used TRPO again to find MADRL-based solutions for rendezvous and pursuit-evasion in a swarm system. The main contribution of their work is the incorporation of Mean Embedding [232] into the DRL method they use to simplify the state information each agent obtains from other agents. Up to 100 robots were used in simulation experiments.
Pursuit-Evasion
In a pursuit-evasion game, usually, multiple pursuers try to capture potentially multiple evaders. When all the evaders are captured or a given maximum time elapses, the game finishes [233][234][235]. For a detailed taxonomy of such problems, the reader is referred to [233]. Some of the sensors that the robots might use in this application include sonar, LiDAR, and 3D cameras, among others. A unified model to analyze data from a suite of sensors can also be used [236]. An illustration is shown in Figure 10.
Egorov [59] proposed a solution for the classic pursuit-evasion problem [233] using an extension of single-agent DQN, called multi-agent DQN (MADQN). The state space is represented as a four-channel image consisting of a map, opponent location(s), ally location(s), and a self-channel. Yu et al. [40] proposed the use of a decentralized training method for pursuit evasion where each agent learns its policy individually and used limited communication with other agents during the training process. This is unlike traditional MADRL techniques where the training is centralized. The execution of the policy for each agent is also decentralized. Wang et al. [23] proposed to extend a MARL algorithm called cooperative double Q-learning (Co-DQL) for the multi-UAV pursuit-evasion problem. The foundation of Co-DQL is Q-networks with multi-layer perceptrons. Unlike traditional applications where the evader might move around randomly, in this paper, the authors assume that the target also learns to move intelligently up to a certain degree via RL. In [237], the authors consider a setup with one superior evader and multiple pursuers. They use a centralized critic model, where the actors are distributed. Unlike traditional broadcasting techniques, the authors smartly use a leader-follower line topology network for inter-robot communication that reduces the communication cost drastically. Although not strictly pursuit-evasion, Zhang et al. [76] use MADRL for coordinated territory defense, which is modeled as a game where two defender robots coordinate to block an intruder from entering a target zone.
Gupta et al. [81] argue that instead of using a centralized multi-agent DRL framework, where the model learns joint actions from joint states and observations, a more sophisticated parameter-sharing approach can be used. A drawback of the centralized learning system is that the complexity grows exponentially with the number of agents. The authors use TRPO as their base algorithm and the policy is trained with the experiences of all agents simultaneously via parameter sharing. The multi-agent scenarios they use for testing the quality of the proposed solution are pursuit-evasion and a multi-walker system with bipedal walkers.
Information Collection
The objective of information gathering about an ambient phenomenon (e.g., temperature monitoring or weed mapping) using a group of mobile robots is to explore parts of an unknown environment, such that uncertainty about the unseen locations is minimized. Relevant sensors for information gathering include RGB, Normalized Difference Vegetation Index (NDVI), or multi-spectral cameras, and thermal and humidity sensors, among others. This is unlike coverage, where the goal is to visit all the locations. There are two main reasons for this: (1) information (e.g., temperature measurements) at nearby points is highly correlated, and, therefore, the robots do not need to go to all the locations within a neighborhood [238]; and (2) the robot might not have enough battery power to cover the entire environment. This is especially true in precision agriculture, where the fields are usually too large to cover [18]. An illustration is shown in Figure 11, where the robots are tasked with collecting information from their unique sub-regions, and, through communication, they will need to learn the underlying model. Viseras and Garcia [240] have developed a novel DRL algorithm based on the popular A3C [45] algorithm. They also provide a model-based version of their original algorithm for gathering information, which uses CNNs. Said et al. [241] have proposed a mean field-based DRL technique that uses an LSTM module (a type of recurrent neural network) for multi-robot information collection about an unknown ambient phenomenon. The robots are battery-powered with limited travel ranges. Recently, Wei and Zheng [67] also used MADRL for multi-robot informative path planning. They develop two strategies for cooperative learning: (1) independent Q-learning with credit assignment [4], and (2) sequential rollout using a GRU. Along the same line, Viseras et al. [85] have proposed using a MADRL framework for a multi-robot team to monitor a wildfire front. The two main components in this framework are (1) individually trained Q-learning robots and (2) value decomposition networks. The authors have used up to nine UAVs for testing the efficiency of their presented work.
Task Allocation
Multi-robot task allocation (MRTA) is a combinatorial optimization problem. Given a set of n robots and m tasks, the goal is to allocate the robots to the tasks such that a given utility function is optimized. If multiple robots need to form a team to complete a single task, then it is a single-task, multi-robot allocation problem. On the other hand, if one robot can offer its services to multiple tasks, then it is called a single-robot, multi-task allocation problem. The robots might be connected to a central server via Wi-Fi, e.g., in a warehouse setting, and can receive information about tasks and other robots. Similarly, communication can happen with other robots via this central server using Wi-Fi as well. Overhead cameras or tracking systems can be used for robot localization in such a scenario. Comprehensive reviews about such MRTA concepts and solutions can be found in [242,243]. An example task allocation scenario is presented in Figure 12.
Figure 12. An illustration of multi-robot task allocation: there are 3 iRobot Roombas (r1-r3) and 3 rooms to clean. In a one-to-one matching scenario, the objective would be to assign one Roomba to a certain room. However, as room 2 is larger in size, two robots might be needed to clean it, whereas the third robot (r3) might be assigned to rooms 1 and 3.
Elfakharany and Ismail [132] developed a novel multi-robot task allocation and navigation method. This is the first work to propose a MADRL method to tackle task allocation, as well as the navigation problem. They use PPO with actor-critic. Their centralized training and decentralized execution method uses CNNs. Paul et al. [129] proposed to use DRL for multi-robot task allocation. They proposed a neural network architecture that they called a Capsule Attention-based Mechanism, which contains a Graph Capsule Convolutional Neural Network (GCapCN) [244] and a Multi-head Attention mechanism (MHA) [245,246]. The underlying architecture is a GNN. The task graph is encoded using GCapCN and combined with the context, which contains information on the robot, time, and neighboring robots. This information is then decoded with the MHA. Although not strictly task assignment, MADRL has been used for forming teams of heterogeneous agents (such as ambulance and fire brigade in a rescue operation) to complete a given task by Goyal [86]. Goyal has applied this technique for training a team of fire brigades to collaboratively extinguish a fire in a city within the Robocup Rescue Simulator.
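For the one-to-one matching case in the Figure 12 example, a classical (non-learning) baseline is optimal assignment via the Hungarian algorithm; the sketch below uses SciPy, and the cost values are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: e.g., travel distance for robot i to reach room j (illustrative values)
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.5, 5.0],
                 [3.0, 2.0, 2.0]])

robots, rooms = linear_sum_assignment(cost)  # minimizes total assignment cost
for r, t in zip(robots, rooms):
    print(f"robot r{r + 1} -> room {t + 1}, cost {cost[r, t]}")
```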
Devin et al. [247] developed a novel method of compartmentalizing a trained deep reinforcement learning model into task-specific and robot-specific components. Due to this, the policies can be transferred between robots and/or tasks. Park et al. [114] propose a PPO-based DRL technique for task allocation. Their solution is tested with single-task, multi-robot, and time-extended assignments. They use an encoder-decoder architecture to represent robots and tasks, where a cross-attention layer is used to derive the relative importance of the tasks for the robots.
Scheduling tasks is another important aspect of task planning. Wang and Gombolay [82] used GNNs and imitation learning for a multi-robot system to learn a policy for task scheduling. The proposed model is based on graph attention networks [227]. The scheduling policy is first learned using a Q-network with two fully-connected layers. Imitation learning is then used to train the network from an expert dataset that contains schedules from other solutions. On the other hand, Johnson et al. [93] study the problem of dynamic flexible job shop scheduling, where an assembly line of robots must dynamically change tasks for a new job series over time. The robots learn to coordinate their actions in the assembly line. Agrawal et al. [52] performed a case study on a DRL approach to handling a homogeneous multi-robot system that can communicate while operating in an industry setting. PPO is used as the foundation algorithm. The objective of this work is to train the robots to work with each other to increase throughput and minimize the travel distances to the allocated tasks while taking the current states of the robots and the machines on the floor into account.
One of the most recent studies on deep RL-based MRTA is due to [89], which aims to use DRL to parallelize task processing for MRTA. The authors base their method on the Branching Dueling Q-Network [248] and apply it to multi-robot search and rescue tasks. In such a network, multiple branches share a common decision-making module, and each branch handles one action dimension. This helps to reduce the curse of dimensionality in the action space. In total, 20 robots have been used within a simulation to test the feasibility of the proposed technique.
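A minimal PyTorch sketch of the branching idea follows: a shared torso feeds a single state-value head and one advantage head per action dimension, and each branch's Q-values are formed with the usual dueling aggregation. Layer sizes and dimensions are assumptions for illustration, not the architecture used in [89].

```python
import torch
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    """One Q-value stream per action dimension, all sharing a common torso."""
    def __init__(self, state_dim, n_dims, n_actions_per_dim, hidden=128):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)  # shared state-value head
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, n_actions_per_dim) for _ in range(n_dims)])

    def forward(self, state):
        h = self.torso(state)
        v = self.value(h)                                # (batch, 1)
        q_per_dim = []
        for branch in self.branches:
            adv = branch(h)                              # (batch, n_actions)
            q = v + adv - adv.mean(dim=1, keepdim=True)  # dueling aggregation
            q_per_dim.append(q)
        return q_per_dim  # one Q-vector per action dimension

net = BranchingDuelingQNet(state_dim=32, n_dims=4, n_actions_per_dim=5)
qs = net(torch.randn(8, 32))
print(len(qs), qs[0].shape)  # 4 branches, each of shape (8, 5)
```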
A very different and interesting task assignment application in defense systems is studied by Liu et al. [120]. The authors presented a DRL framework for multi-agent task allocation for weapon target assignment in air defense systems. They use PPO-clip along with a multi-head attention mechanism for task assignment between an (army) general agent and multiple narrow agents. The neural network architecture uses fully connected layers and a GRU. The major aim of this work is to increase the processing efficiency and solution speed of the multi-agent task assignment problem at a large scale. Simulation experiments were carried out in a virtual digital battlefield. The experimental setup includes offensive forces and defensive forces. The defensive forces have places to protect and need to make real-time task allocation decisions for defense purposes. The defensive forces are tested with 12 agents and the offensive forces with a total of 32 agents.
Object Transportation
To transport an object using two or more cooperative mobile robots, the goal is to design a strategy where the robots' actions are highly coordinated. Communication among the robots may or may not be possible. The robots can use depth cameras or laser scanners for avoiding obstacles. On the other hand, an optic-flow sensor can be used to determine if the pushing force from the robot has resulted in any object movement or not [249]. A force-torque sensor can be used on the robot to measure the amount of force placed on the object. For a comprehensive review of this topic, please refer to [250]. An illustration is shown in Figure 13.
Zhang et al. [73] have used a modified version of DQN that controls each robot individually without a centralized controller or a decision maker. To quantitatively measure how well the robots are working together, they use the absolute error of estimated state-action values. The main idea is to use DQN to have homogeneous robots carry a rod to a target location. Each robot acts independently, with neither leading nor following. Niwa et al. [251] proposed a MADRL-based solution to the cooperative transportation and obstacle removal problem. The basis of their solution is to use MARL to train individual robots' decentralized policies in a virtual environment. The policies are trained using MADDPG [61]. The authors then use the trained policies on real teams of robots to validate the effectiveness. The robots are supposed to push a target object to a final waypoint while moving a physical barrier out of the way to accomplish the task. Manko et al. [77] used a CNN-based DRL architecture for multi-robot collaborative transportation where the objective is to carry an object from the start to the goal location. Eoh and Park [79] proposed a curriculum-based deep reinforcement learning method for training robots to cooperatively transport an object. In curriculum-based RL, past experiences are organized and sorted to improve training efficiency [252]. In this paper, a region-based curriculum starts by training robots in a smaller area before transitioning to a larger area, and a single-robot to multi-robot curriculum begins by training a single robot to move an object and then transfers that learned policy to multiple robots for multi-robot transportation (a schedule of this kind is sketched after the figure caption below).
Figure 13. An illustration of multi-robot object transportation is presented where 2 iRobot Create robots carry a cardboard box and plan to go through a door in front of them.
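The curriculum schedule referenced above can be sketched as a simple loop over progressively harder training stages; the train() stub and the specific arena sizes and robot counts below are placeholders for illustration, not details from [79].

```python
# Hypothetical curriculum schedule; train() stands in for any MADRL
# training routine (e.g., MADDPG or PPO) and simply reports progress here.
def train(policy, arena_size, n_robots, episodes):
    print(f"training {n_robots} robot(s) in a {arena_size}x{arena_size} arena "
          f"for {episodes} episodes")
    return policy  # in a real run, this would return updated network weights

policy = {}  # placeholder for learned parameters

# Region-based curriculum: start in a small arena, then move to larger ones
for arena_size in (5, 10, 20):
    policy = train(policy, arena_size, n_robots=1, episodes=500)

# Single-robot to multi-robot curriculum: reuse the policy with more robots
for n_robots in (2, 3):
    policy = train(policy, arena_size=20, n_robots=n_robots, episodes=500)
```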
Collective Construction
In a collective construction setup, multiple cooperative mobile robots are required. The robots might have heterogeneous properties [253]. The robots can follow simple rules and only rely on local information [254]. In the popular TERMES project from Harvard University [254], a large number of simple robots collect, carry, and place building blocks to develop a user-specified 3D structure. The robots might use onboard vision systems to assess the progress of construction. A force sensor-equipped gripper can be used for holding the materials. Furthermore, a distance sensor, e.g., sonar, can be used for maintaining a safe distance from the construction as well as from other robots [255].
Sartoretti et al. [256] developed a framework using A3C to train robots to coordinate the construction of a user-defined structure. The proposed neural network architecture includes CNNs and an LSTM module. Each robot runs its own copy of the policy without communicating with other agents during testing.
A summary of the state and action spaces and reward functions used in some of the papers reviewed in this article is listed in Table 2.

Table 2. Examples of state and action spaces and reward functions used in prior studies.

Ref. | State | Action | Reward
[59] | Map of the environment and robots' locations with 4 channels | Discrete | Based on the locations of the robots
[67] | Robot locations and budgets | Discrete | Based on collected sensor data
[68] | Position of the leader UAVs, the coverage map, and the connection network | Discrete | Based on the overall coverage and connectivity of the leader UAVs
[69] | Map with obstacles and the coverage area | Discrete | Based on the robot reaching a coverage region within its task area
[24] | Map with covered area | Discrete | Based on robot coverage
[71] | Map of the environment | Discrete | Based on the robot movements and reaching the target without collisions
[70] | Map with robots' positions | Discrete | Based on the herding pattern
[72] | Map with robots' positions and target locations | Discrete | Distance from the goal and collision status
[39] | Map with robots' positions and target locations | Discrete | Distance from the goal and collision status
[40] | Pursuer and evader positions | Discrete | Collision status and time to capture the predator
[73] | The map, locations, and orientations of the robots, and the objects the robots are connected to | Discrete | Based on the position of the object and the robots hitting the boundaries
[74] | The map, and the locations of the robots | Discrete | Based on distance from the target and collisions
[76] | The regions of the robots, positions of the defender and the attacker UAVs and the intruder | Discrete | Based on distance
[77] | The distance from the MRS center to the goal, the difference in orientation of the direction of MRS to the goal, and the distance between the robots | Discrete | Based on the distance to the goal, orientation to the goal, proximity of obstacles, and the distance between the robots
[78] | Sensor input information that includes distance to other robots and the target landmarks | Discrete | Based on becoming closer to the target landmark
[79] | Spatial information on the robots, the object, and the goal | Discrete | Based on the object reaching the goal while avoiding collisions
[80] | Robot position and velocity | Discrete | Based on the robots being within sensing range of one another
[38] | The positions and speed of the first responders and UAVs | Discrete | Based on the Cramér-Rao lower bound (CRLB) for the whole system
[93] | The agents' positions, types, and remaining jobs | Discrete | Based on minimizing the makespan
[83] | The position and direction of the leader and the followers | Discrete | Based on the distance from followers to leaders and collision status
[84] | Information on the target, other agents, maps, and collisions | Discrete | Based on finding targets and avoiding obstacles
[85] | The robot's position, position relative to other robots, and angle and direction of the robots | Discrete | Based on covering a location on fire
[23] | The positions, velocities, and distances between UAVs | Discrete | Determined by the distance from the target and the evader being reached by the pursuer
[87] | Static obstacle locations and the locations of other agents | Discrete | Based on the robots' movements toward the goal while avoiding collisions
[94] | Sensor data for the location of the target relative to the robot and the last action done by the robot | Discrete | Determined by the robot reaching the goal, reducing the number of direction changes, and avoiding collisions
[95] | A map that includes the agent's locations, empty cells, obstacle cells, and the location of the tasks | Discrete | Determined by laying pieces of flooring in the installation area
[110] | Map of the environment represented with waypoints, locations of the UAVs, and points of interest | Discrete | Based on the coverage of the team of robots
[114] | The positions and tasks of the robots, the state of the robot | Discrete | Based on minimizing the number of timesteps in an episode
[96] | Map of the area to be sterilized and the positions of the agents, the cleaning priority, size, and area of the cleaning zone | Discrete | Based on the agents cleaning priority areas for sanitation
[86] | Temperature and "fieryness" of a building, location of the robots, water in the tanks, and busy or idle status | Discrete | Based on keeping the fires to a minimum "fieryness" level
[53] | Sensory information on obstacles | Discrete | Based on the UAV's coverage of the area
[52] | Robots' positions and velocities and the machine status | Discrete | Determined by robots completing machine jobs to meet the throughput goal, and their motions while avoiding collisions
[107] | Laser readings of the robots, the goal position, and the robot's velocity | Continuous | Based on the smooth movements of the robots while avoiding collisions
[22] | Laser readings of the robots, the goal position, and the robot's velocity | Continuous | Based on the time to reach the target while avoiding collisions
[108] | An environment that includes the coordinates of the manipulator arm gripper | Continuous | Based on reaching the target object
[109] | Laser measurements of the robots and their velocities | Continuous | Based on the centroid of the robot team reaching the goal
[117] | The state of the robot, other robots, obstacles, and the target position | Continuous | Based on the robots' relative distance from the target location
[97] | Sensed LiDAR data | Continuous | Based on the robot approaching and arriving at the target, avoiding collisions, and the formation of the robots
[132] | The goal positions, the robots' positions, and past observations | Continuous | Based on the robot moving towards the goal in the shortest amount of time
[133] | Laser data, speeds and positions of the robots, and the target position | Continuous | Based on arriving at the target, avoiding collisions, and relative position to other robots
[113] | The position information of other robots (three consecutive frames) | Continuous | Based on time for formation, collisions, and the formation progress
[115] | The most recent three frames of the map, local goals that include positions and directions | Continuous | Based on minimizing the arrival time of each robot while avoiding collisions
[116] | Map of the environment, robot positions and velocities, and laser scans | Continuous | Based on the arrival time of the robot to the destination, avoiding collisions, and smoothness of travel
[104] | The coverage score and coverage state for each point of interest and the energy consumption of each UAV | Continuous | Defined by coverage score, connectivity fairness, and energy consumption
[134] | Robot's relative position to the goal and its velocity | Continuous | Based on the robots having collisions
[88] | Robot motion parameters, relative distance and orientation to the goal, and their laser scanner data | Discrete/Continuous | Determined by reaching the goal without timing out and avoiding collisions
Challenges and Discussion
Although we find that a plethora of studies have used multi-agent deep reinforcement learning techniques in recent years, a number of challenges remain before we can expect their wide adoption in academia as well as commercially. One of the biggest challenges that we identify is scalability. Most of the papers reviewed in this article do not scale beyond tens of robots. This limits real-world adoption. Although this is an issue with multi-robot systems in general, the data-hungry nature of most of today's DRL techniques makes the situation worse. In the future, the research community needs to develop lightweight techniques, potentially inspired by natural systems such as biological swarms or particle physics, while making the necessary changes to the underlying RL techniques to fit them appropriately.
The second drawback we found in most of the studies is the lack of resources to make them reproducible. One of the overarching goals of academic research is that researchers across the world should be able to reproduce the results reported in one paper and propose a novel technique that potentially advances the field. In the current setup, most papers employing MADRL use their own (simulation) environments for their robots, which makes it extremely difficult for others to reproduce the results. As a community, we need to come up with an accepted set of benchmarks and/or simulators that the majority of the researchers can use for method design and experiments, which, in turn, will advance the field.
The next challenge is to transfer the learned models to real robots and real-world applications. We find that most experiments in the literature are conducted virtually, i.e., in simulation, rather than with physical robots. This leaves a gap in understanding the real-world feasibility of these methods. This corroborates the finding by Liang et al. [257]. Unless we can readily use the learned models on real robots in real-world situations, we might not be able to widely adopt such techniques. This challenge is tied to the previously mentioned issue of scalability. Additionally, in the deployment phase, the algorithms need to be lightweight while considering the bandwidth limitations for communication among the robots.
Software plays a significant role in developing and testing novel techniques in any robotic domain, and applications of MADRL are no different. Here, we discuss some software tools that are popularly used for testing the feasibility of proposed techniques in simulation.
• VMAS: The Vectorized Multi-Agent Simulator for Collective Robot Learning (VMAS) is an open-source software package for multi-robot application benchmarking [258]. Some applications that are part of the software include swarm behaviors, such as flocking and dispersion, as well as object transportation and multi-robot football. Note that it is a 2D physics simulator powered by PyTorch [259].
• MultiRoboLearn: Similar to VMAS, this is an open-source framework for multi-robot deep reinforcement learning applications [260]. The authors aim to unify simulation and real-world experiments with multiple robots via this software tool, which is accomplished by integrating ROS into the simulator. Mostly multi-robot navigation scenarios have been tested. It would be interesting to extend this software to other multi-robot applications, especially where the robots might be static.
• MARLlib: Although not strictly built for robots, Multi-Agent RLlib (MARLlib) [261] is a multi-agent DRL software framework built upon Ray [262] and its toolkit RLlib [263]. This rich open-source software follows OpenAI Gym standards and provides frameworks not only for cooperative tasks but for competitive multi-robot applications as well. Currently, ten environments are supported by MARLlib, among which the grid-world environment might be the most relevant to multi-robot researchers. Many baseline algorithms, including ones that are highly popular among roboticists, e.g., DDPG, PPO, and TRPO, are available. The authors also show that this software is more versatile than some existing frameworks, including [264,265].
Beyond these specialized tools, traditional robot simulators, such as Webots [266], V-rep [267], and Gazebo [157], can also be used for training and testing multiple robots. These established software platforms provide close-to-reality simulation models for many popular robot platforms. This is especially useful for robotics researchers, as we have seen in this survey that MADRL applications range from aerial and ground robots to underwater robots and manipulators. Table 3 summarizes the main types of robots that have been used for MADRL applications. Beyond software, another challenge is training data. As most state-of-the-art algorithms rely on massive amounts of training data, it is not always easy to train a robot with sufficient data. Dasari et al. [268] have created an open-source database for sharing robotic experiences. It contains 15 million video frames from 7 different robot manipulators, including Baxter, Sawyer, Kuka, and Fetch arms. Researchers can use this dataset for efficient training while adding new experiences from their own experiments to the dataset itself.
Table 3. Types of robots in the reviewed papers. If the type is not specified in the paper, it is not listed here.
Although many robotic applications are utilizing the progress in multi-agent reinforcement learning, we have not seen any paper on modular self-reconfigurable robotics (MSRs) [270][271][272][273] where MADRL has been utilized. We believe that the field of modular robots can benefit from these developments, especially given that MSRs can change their shapes and the new shape might not have been pre-defined. Therefore, the controller for the new shape is undefined as well, and the robot might need to learn to move around and complete tasks on the fly using techniques such as MADRL, where each module acts as an intelligent agent.
On the other hand, we have found MADRL-based solutions for manipulation and motion separately. The next question that should be answered is how one can simultaneously learn these two actions when they might affect each other. For example, consider a scenario where multiple UAVs learn to maintain a formation while manipulating an object with their onboard manipulators. This task would potentially require the robots to learn two actions simultaneously. The research question then would be how to best model the agents, their goals, and the rewards in this complex scenario.
Conclusions
In this paper, we have reviewed state-of-the-art studies that use multi-agent deep reinforcement learning techniques for multi-robot system applications. The types of such applications range from exploration and path planning to manipulation and object transportation. The types of robots that have been used encompass ground, aerial, and underwater applications. Although most applications involve mobile robots, we reviewed a few papers that use non-mobile (manipulator) robots as well. Most of the reviewed papers have used convolutional neural networks, potentially combining them with fully connected layers, recurrent layers, and/or graph neural networks. It is worth investigating such reinforcement learning techniques for robotics as they have the potential to learn high-level causal relationships among the robots, as well as between the robots and their environment, which might have been extremely difficult to model using a non-learning approach. As better hardware is available on a smaller scale and at a lower price, we expect to see significant growth in novel multi-robot system applications that use multi-agent reinforcement learning techniques. Furthermore, with the progress of the field of artificial intelligence in general, we expect that more studies will have theoretical underpinnings along with their showcased empirical advancements. Although a number of challenges remain to be solved, we are perhaps not too far away from seeing autonomous robots tightly integrated into our daily lives.
"year": 2023,
"sha1": "92c590df020b448c06c8eacb17cbf9c869dd88d8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/7/3625/pdf?version=1680188318",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2581887cd2848fc28a78019360953e2046368326",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
THE EXTROVERTED FIRM: HOW EXTERNAL INFORMATION PRACTICES AFFECT INNOVATION AND PRODUCTIVITY
Forthcoming, Management Science
We gather detailed data on organizational practices and IT use at 253 firms to examine the hypothesis that external focus – the ability of a firm to detect and therefore respond to changes in its external operating environment – increases returns to information technology, especially when combined with decentralized decision-making. First, using survey-based measures, we find that external focus is correlated with both organizational decentralization and IT investment. Second, we find that a cluster of practices including external focus, decentralization and IT is associated with improved product innovation capabilities. Third, we develop and test a 3-way complementarities model that indicates that the combination of external focus, decentralization and IT is associated with significantly higher productivity in our sample. We also introduce a new set of instrumental variables representing barriers to IT-related organizational change and find that our results are robust when we account for the potential endogeneity of organizational investments. Our results may help explain why firms that operate in information-rich environments such as high-technology clusters or areas with high worker mobility have experienced especially high returns to IT investment and suggest a set of practices that some managers may be able to use to increase their returns from IT investments.
Introduction
Falling internal communication costs and new internal information practices enable information-age firms to quickly respond to changes in consumer preferences, technology and competition. However, improvements in the accuracy and timeliness of information are valuable only when combined with appropriate changes in decision rights and organizational practices (Brynjolfsson and Mendelson, 1993; Mendelson and Pillai, 1999). This suggests that the adoption of practices used to detect and respond to changes in the external operating environment should become increasingly common. Internet companies are an extreme example: firms like Amazon and Google record each customer's keystrokes and analyze the data to continuously optimize their products, processes and marketing. Off-line companies are also using customer data extensively. For example, Harrah's invested heavily in capturing data on consumer gaming patterns, which they used to design compelling packages to attract high-value customers and outperform competitors (Loveman, 2003). Similarly, firms like Cisco, Capital One, UPS, and Wal-Mart have been described as gaining competitive advantage by adopting an aggressive approach to learning about their customers and competitors (Davenport and Harris, 2007).
A growing research literature on the behavior of modern organizations has linked firm performance to the ability to identify and respond to changes in a firm's competitive environment (Saxenian, 1996;Dyer and Singh, 1998;Dyer and Nobeoka, 2000;Powell, Koput, and Smith-Doerr, 1996;Bradley and Nolan, 1998;Von Hippel, 1998).Researchers have also emphasized the role of IT in the development of information gathering and processing capabilities that facilitate external orientation (Mendelson and Pillai, 1999;Malhotra et al., 2005;Pavlou and El Sawy, 2006;Rai et al., 2006;Bharadwaj et al., 2007).However, the growing emphasis on external orientation has not been integrated into the IT productivity literature, which has primarily emphasized the importance of adopting organizational changes like decentralization in conjunction with IT investments (Bresnahan, Brynjolfsson, and Hitt, 2002;Brynjolfsson, Hitt, and Yang, 2002).
In this study, we argue that information technologies are most productive when they allow firms to quickly respond to external information.The central argument of this paper is that the combination of external focus, changes in decision-rights and IT investments forms a 3-way system of complements resulting in higher productivity levels (Figure 1).For example, Harrah's, in addition to adopting new information technologies to monitor consumer gaming patterns, simultaneously made extensive changes to internal practices, such as implementing the appropriate incentives for customer service personnel to keep high-value customers happy.These changes were required to successfully handle the massive amounts of customer intelligence being generated.
The implication is that organizations that do not have the appropriate receptors in place through which to sense environmental change will not experience the same returns to IT investments, even if they have re-organized decision-making.In keeping with earlier research (Mendelson and Pillai, 1999), we define "external focus" to be a set of practices firms use to detect changes in their external operating environment.In information-rich environments, firms should engage in practices that make up-to-date, accurate information available to decision-makers.The literature has emphasized several mechanisms through which firms can capture external information, such as customer interaction, benchmarking, and using inter-organizational project teams.We argue that returns to IT and decentralization are higher in firms that have adopted these practices.
Conceptually, complementarities between external information awareness and internal information practices are grounded in the literature on information processing organizations (Radner, 1992;Cyert and March, 1973).Because 'boundedly rational' organizations are limited in the amount of information they can effectively process, improvements in internal information processing capabilities, such as those offered by information technologies, increase the firm's capacity to process information for decision-making and to therefore respond to external information.Thus, the largest productivity benefits from improving a firm's internal information-processing infrastructure should be observed in dynamic environments where firms continuously capture and respond to external signals.Beyond broad performance benefits, this literature places special emphasis on product development as an important mechanism through which IT-led improvements in information processing lead to higher productivity (Mendelson, 2000;Pavlou and El Sawy, 2006;Bartel, Ichniowski, and Shaw, 2007).Firms that effectively sense and process external information should have market-based advantages when introducing new products (Kohli and Jaworski, 1990;Mendelson and Pillai, 1999).
Our study is based on a 2001 survey of organizational practices in 253 moderate and large sized firms, matched to data on IT investment and firm performance from private and public sources.In addition to including measures of internal organization used in prior work, we included constructs to capture external focus and product innovation, motivated specifically by the work done by Mendelson and Pillai (1999) on external practices in the computer manufacturing industry, but adapted to a more heterogeneous set of firms, and broadened to include other sources of external information such as tacit knowledge obtained from the strategic recruitment of new employees.
We find that external focus, decentralized organization, and IT investment are correlated.
Second, we find that these practices lead to higher product innovation rates. Third, we estimate a three-way complementarities model (IT, external focus, decentralization) and demonstrate that firms that combine all three practices derive substantially greater benefits from their IT investments. Our econometric identification strategy includes the assumption that organizational practices are quasi-fixed in the short run. However, we also introduce an innovative set of instrumental variables based on inhibitors of organizational change to demonstrate that our results are not sensitive to this assumption. In our preferred specifications, the output elasticity of IT investment is about 7 percentage points higher in firms that are one standard deviation above the mean on both our external focus and organizational decentralization measures compared to the average firm in our sample.
These findings suggest that firms can more successfully leverage IT investments if they effectively capture external information through networks of customers, suppliers, partners, and new employees. Mounting a more effective response to external information requires firms to have the mechanisms in place through which to absorb this information, as well as the mechanisms to allow effective local information processing. Internal workplace organization, external information practices, and information technologies appear to be part of a mutually reinforcing cluster associated with faster product cycles and higher productivity.
Our paper contributes to a literature on IT value, supporting the argument that organizational complements lead to higher IT returns (Brynjolfsson and Hitt, 1995;Brynjolfsson and Hitt, 2000;Dedrick, Kraemer, and Gurbaxani, 2003;Melville, Kraemer and Gurbaxani, 2004).We build upon prior work that addresses complementarities between IT and internal practices such as decentralized decision making (Bresnahan, Brynjolfsson, and Hitt, 2002;Caroli and Van Reenen, 2002) but add the external orientation dimension which has been shown to be important in technology-intensive firms (Mendelson and Pillai, 1999;Pavlou and El Sawy, 2006).Identifying organizational complements is useful for managers who are restructuring their organizations to take advantage of improvements in computing.In addition, our results improve our understanding of why firms in information-rich environments such as Silicon Valley (Saxenian, 1996) appear to receive greater benefits from technology investments and why IT returns may be influenced by geographic position (Dewan and Kraemer, 2000;Bloom, Sadun, and Van Reenen, 2008).
Data and Measures
Our organizational practice measures are generated from a survey that was administered to 253 senior human resource managers in 2001.The survey was conducted by telephone on a sample of 1,309 large and upper middle-market firms 1 that appear in a database of IT spending compiled by Harte Hanks (see further detail below) and also have the requisite financial data in Compustat.The survey yielded a response rate of 19.3%, which was typical for large-scale corporate surveys at the time.The sample of responding firms has a slightly higher proportion of manufacturing firms relative to the sample population (62% vs. 54%) and the firms tend to be slightly smaller when measured in sales, assets, employees and market value.However, after conditioning on industry, the size differences between responding and non-responding firms are not statistically significant.Furthermore, there is no significant difference between responding and non-responding firms on performance measures such as return on assets or sales per employee.
The questions for this survey were drawn from a previous wave of surveys on IT usage and workplace organization administered in 1995-1996, and by incorporating additional questions on external and internal information practices motivated by research on IT and organizational design (Mendelson and Pillai, 1998).Our survey also includes questions related to firms' human capital mix, including occupational and educational distributions (see Table 1 for a summary of variables and their descriptive statistics).
External Focus
Our measure of external focus is based on an industry-specific "external information" construct utilized by Mendelson and Pillai (1999) (designated as MP hereafter), which is in turn closely related to the customer-specific concept of "market orientation" defined by Narver and Slater (1990) and Jaworski and Kohli (1993) and operationalized by Kohli, Jaworski and Kumar (1993) (designated as KJK hereafter). In Table 2, we present the components of our external focus measure alongside the components used in related work. Both KJK and MP include constructs for direct customer interaction (see Table 2, KJK scale items 1-3, MP scale items 1-2), which we capture in a question related to customer participation on project teams, but we also include partners and suppliers (variable PROJTEAM). Our second question focuses on the use of competitive benchmarking (BNCHMRK), which relates to a firm's awareness of the industry and broader business environment in KJK (scale items 5, 6) and the industry-specific measure of order throughput benchmarking used in MP (scale item 3).
To these measures, we add additional constructs for incorporating new technology (scale item 3, variable NEWTECH) as well as measures that examine how the firm might capture external information through employee mobility -the involvement of executives in recruiting (EXECRCT) and the use of higher pay as an inducement to attract new employees (NEWEMP).The inclusion of employee mobility was motivated by work in strategic management that emphasizes this particular pathway as a means of gathering tacit knowledge related to the competitive or technological environment (Argote and Ingram, 2000;Song, Almeida, and Wu, 2003).Executive involvement in recruiting and pay for performance were specifically identified as key components of digital strategy in a case study of Cisco Systems (Woerner, 2001).Pay for performance has also been central to numerous other studies, including recent work by Aral, Brynjolfsson and Wu (2009).In summary, we cover many of the same constructs as prior work, but adapt them to apply to a broader set of industries than the industry-specific measures in MP, and we place greater emphasis on non-customer information (in contrast to KJK) to reflect an operations rather than marketing focus that may better fit a heterogeneous cross-section of firms.
Correlations between the individual constructs are shown in Table 3. The measures are positively correlated, but not very highly correlated, and Cronbach's alpha for a five-item scale constructed from the individual variables is 0.521. The relatively lower alpha value is because these external measures are multi-dimensional in the sense that just because firms do one of these activities, they do not necessarily also do the others. This implies that firms in different industries may access environmental information in many ways, all of which may have similar economic impact. Indeed, in our main analysis, we could not reject the hypothesis that the standardized values of the five components of external focus have the same coefficients when entered into the regression individually. Consequently, we combined these measures in a similar manner to our workplace organization variables, where each factor is first standardized (STD) by removing the mean and then scaled by its standard deviation, yielding an external focus measure with a mean of zero and a standard deviation of one. The full form of our aggregate external focus variable is shown below.

EXT = STD( STD(PROJTEAM) + STD(BNCHMRK) + STD(NEWTECH) + STD(EXECRCT) + STD(NEWEMP) )

While higher values on this scale represent more channels of external information acquisition, firms that use none of these practices can still be externally focused (Type II error), although it is likely that firms that have implemented unmeasured external information practices will also rate high on our external focus scale. It is somewhat less likely that a firm that rates high on our external focus scale will know little about the external environment (Type I error). Regardless, to the extent that our construct mis-measures the true underlying external focus of some firms, measurement error is likely to bias downwards the estimates on our external focus variables (Griliches and Hausman, 1986). Productivity regressions using a variety of alternative external focus measure constructions, including one that omits the two variables associated with employee mobility (and is thus more directly comparable to MP and KJK), show similar results (available from the authors upon request).
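A small pandas sketch of this scale construction is given below; the survey values are invented, and the helper name is ours rather than part of the original analysis.

```python
import pandas as pd

def std(s: pd.Series) -> pd.Series:
    """Standardize: remove the mean and scale by the standard deviation."""
    return (s - s.mean()) / s.std()

# One row per firm with the five survey components (values are illustrative)
df = pd.DataFrame({
    "PROJTEAM": [3, 4, 2, 5],
    "BNCHMRK":  [2, 5, 3, 4],
    "NEWTECH":  [4, 4, 1, 5],
    "EXECRCT":  [1, 3, 2, 4],
    "NEWEMP":   [2, 4, 3, 3],
})

components = ["PROJTEAM", "BNCHMRK", "NEWTECH", "EXECRCT", "NEWEMP"]
# Standardized sum of standardized components, then standardized again
df["EXT"] = std(df[components].apply(std).sum(axis=1))
print(round(df["EXT"].mean(), 6), round(df["EXT"].std(), 6))  # ~0 and 1
```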
Workplace Organization
To capture internal organizational processes that are complementary to external focus, we rely on a scale focused on decentralized and team-oriented work practices used in prior work (Bresnahan, Brynjolfsson, and Hitt, 2003;Brynjolfsson, Hitt and Yang, 2002), which was originally motivated by the extensive literature on "high performance work systems" (Ichniowski, Kochan, Levine, Olson, and Strauss, 1996).
The measure contains four constructs of group-based decentralized decision-making [the use of self-managed teams in production (SMTEAM), the use of team-building activities (TEAMBLD), the use of teamwork as a promotion criterion (PROMTEAM), and the use of quality circles or employee involvement groups (QUALCIR)] and two measures capturing individual decision rights [the extent to which individual workers decide the pace of work (PACE) and the extent to which individual workers decide methods of work (METHOD)]. The Cronbach's alpha for the four team-based measures is .732, and the alpha for all six measures is .671. Similar to external focus, we construct a scale (WO) from these measures using the standardized sum of the standardized values of each component. We utilized this scale because it shows significant variation across firms, it has been previously shown to be a useful summary metric of IT-related work practices (Brynjolfsson and Hitt, 1997), and it has a clear economic interpretation as decentralized, team-based decision making, which is relatively narrow and specific, making our model and econometrics more precise and interpretable.
Organizational Inhibitors
Some of our analyses are based on the assumption that the organizational measures described above are quasi-fixed over short time periods, which is theoretically justified by a large literature on organizational adjustment costs (Applegate, Cash, and Mills, 1988;Attewell and Rule, 1984;David, 1990;Milgrom and Roberts, 1990;Murnane, Levy, and Autor, 1999;Zuboff, 1988;Bresnahan and Greenstein, 1996).
However, in addition to organizational practice variables, our survey data includes questions on individual inhibitors of organizational change.These were designed to allow us to create direct measures of organizational adjustment costs, which we can use as instrumental variables for our organizational asset measures.These survey questions ask respondents to describe the degree to which the following factors facilitate or inhibit the ability to make organizational changes: Skill Mix of Existing Staff, Employment Contracts, Work Rules, Organizational Culture, Customer Relationships, Technological Infrastructure, and Senior Management Support.These responses are used as instruments in our product development and productivity regressions, as well as to create an aggregate adjustment cost measure which was computed as the standardized sum of the standardized values of the individual inhibitors.
Cronbach's Alpha for the seven individual inhibitors is 0.725.These organizational inhibitors are suitable as instrumental variables because they reflect the costs faced by firms in adopting new organizational practices.Firms that face constraints in terms of culture, work rules, or staff mix may find it more difficult or costly to reengineer existing practices, or to adopt practices complementary to new IT investments.Therefore, these organizational inhibitors are a source of exogenous variation in the degree to which we are likely to observe the adoption of organizational practices when firms adopt IT.These inhibitors, however, are less likely to be correlated with firm performance directly.
Innovation, Product Cycles and Technological Change
Three of the variables from our survey data reflect a firm's innovation and product development capabilities with respect to its competitors. Our goal in choosing these measures is not to fully characterize a firm's product development processes; the literature on product development is very large and includes a variety of perspectives on effective product development (Ulrich and Krishnan, 2001).
Instead, our variables were chosen to reflect different aspects of the innovation and product development process for which access to information might prove beneficial.We measure 1) whether a firm is normally the first to introduce a new product in its industry (FIRST), 2) the speed of internal product development once a new product has been approved (SPEED) and 3) whether a firm regularly weeds out marginal products (PLMGMT), which is a measure of the effectiveness of a firm's product line management.Access to different product development variables is useful because introduction of new products is related to innovation and the firm's ability to collect and process external information, but product development speed should be more closely associated with the ability to process information within the organization.Our innovation and product development measures are standardized to have a zero mean and standard deviation of one.
Information Technology
We use two types of measures of computerization, one from our survey and one constructed from a separate data set on IT employment. Managers responding to our survey were asked both the percentage of workers in the organization that used personal computers (%PC), as well as the percentage of workers in the organization that used email (%EMAIL). However, these internal measures are only available in the survey base year. To construct our data set for the longitudinal productivity analysis, we use panel IT measures based on an external data set describing firm-level IT employment from 1987 to 2006 (Tambe and Hitt 2011), which we use as a proxy for firms' aggregate IT expenditures.
IT employment in this data set is estimated using the employment history data from a very large sample of US-based information technology workers.Table 4 shows the occupational composition of these IT workers.These data include fewer programmers and higher numbers of support personnel.For our purposes, this employment-based data set compares favorably to alternative archival data sets, such as the Harte-Hanks CITDB capital stock data, in several ways.Although much recent research on IT productivity has relied on the Computer Intelligence Technology Database (CITDB), complete panel data is generally only available for Fortune 1000 firms, the definitions of variables changed significantly after 1994 and most importantly, the CITDB no longer includes direct measures of IT capital stock.
Consequently, even using methods to infer capital stock from available data only yields self-consistent capital stock measures through about 2000. 2 Our employment-based data, by contrast, are available on a consistent basis through 2006 and include matches for nearly all the firms we surveyed. We have benchmarked these data against a number of other sources of IT data from ComputerWorld, Computer Intelligence, and InformationWeek and generally find high correlations between these different sources in both cross-section and time series.
Descriptive statistics and correlations for the IT employment measures and the survey-based IT measures are shown in Table 5. The mean usage of both PCs and email for firms in our sample is about 60%. By comparison, similar measures from a survey conducted in 1995 indicated that in the average firm, about 50% of workers used computers, and only about 30% of workers used email, implying significant growth in IT intensity in the six-year interim period. The average firm in our sample had about 470 IT workers in 2001, comprising about 2.3% of total employment, compared to 2.2% of total employment accounted for by workers in "Computer and Mathematical Occupations" in the Bureau of Labor Statistics 2001 Occupational Employment Survey. 3 The large variation across firms for our measures of the fraction of IT workers, email use, and computer use suggests that some firms, such as those in IT-producing industries, have much greater IT usage than others. Therefore, we log transform our IT measures to facilitate direct comparisons with our organizational factor data. Where we require normalized measures for size, we compute IT workers as a proportion of total workers.
Value Added and Non-IT Production Inputs
We obtained longitudinal data on capital, labor, research & development expense, and value-added for the firms in our sample by using the Compustat database. We used standard methods from the micro-productivity literature to create our variables of interest from the underlying data. Price deflators for inputs and outputs are taken from the Bureau of Labor Statistics (BLS) and Bureau of Economic Analysis (BEA) web sites. Eight industry dummies were created using 1-digit NAICS headers. Table 6 shows statistics for the 2001 cross section of the Compustat variables included in our analysis. In 2001, the average firm in our sample had about $3.8 billion in sales and 15,200 employees.
Methods
Providing direct evidence of complementarities is challenging due to the endogeneity of organizational practices in observational data (Athey and Stern, 1998;Brynjolfsson and Milgrom, 2009;Cassiman and Veugelers, 2006;Novak and Stern, 2009).Moreover, lack of information about the costs and value of specific organizational practices limits the ability to implement structural models of organizational investment.The existing empirical literature on organizational complements has therefore focused instead on providing evidence of the economic implications of complementarities between organizational practices (Arora and Gambardella, 1990;Bresnahan, Brynjolfsson and Hitt, 2002).The empirical strategy followed in these studies is to marshal a number of different types of evidence consistent with the complementarities hypothesis, which when considered in whole, strongly suggest complementarities between organizational practices.
In particular, complementarities imply that we should observe 1) the clustering of practices across firms and 2) that the simultaneous presence of these complements impacts performance more than the sum of the individual effects.To the extent managers understand and embrace complementarities, they would be expected to adopt them jointly, which should lead to significant correlations, but lower power for the performance tests.In contrast, to the extent that the practices vary due to random shocks, the performance tests can be expected to have more power (Brynjolfsson and Milgrom, 2009).We measure clustering as correlation within a survey base year as well as changes in correlations over time, and performance by regression models with interactions as well as newer tests proposed by Brynjolfsson and Milgrom (2009) that contrast performance for different combinations of complementary practices.We also include two useful measurement innovations.First, unobserved human capital among firms is likely to be a significant omitted variable in prior work on organizational practices.Using our survey data we are able to include human capital controls at the firm level.Second, we are able to consider the potential endogeneity of work practices by instrumenting these measures with our data on inhibitors to organizational innovation, which indirectly capture the cost variation of organizational investments across firms.Thus, we substantially increase the number of factors that we are able to directly measure, reducing the role that unobserved heterogeneity and endogeneity play in the analysis relative to earlier studies on organizational complementarities.
Correlation Tests
The first test we conduct is based on correlations among these organizational practices. First, using our cross-sectional data, we examine how the use of IT and the proposed complementary practices co-vary in the survey base year. If these practices are complements, price declines in IT should be accompanied by greater use of both complementary organizational practices. Second, we can examine time trends in correlations. If IT is complementary to the proposed organizational practices, we should see rising correlations over time as managers adjust IT levels to match levels of other complementary inputs.
Innovation and Product Development Regressions
We can also use our data to develop some insight into how these inputs affect the productivity of firms.
We test how our organizational and IT variables are associated with various stages of the product development process by estimating the following model:

PROD_i = β0 + β1·EXT_i + β2·WO_i + β3·IT_i + β4·RD_i + controls_i + ε_i

PROD represents one of our three possible product development outcomes (FIRST, SPEED, and PLMGMT), EXT is our external focus variable, WO measures workplace decentralization, IT is a measure of IT usage within the firm, RD measures R&D intensity computed as the R&D expense per employee, and i indexes firms. For our IT usage variable, we use the percentage of workers who use email. As control variables, we include dummy variables for industry and the percentage of a firm's workers that are college educated.
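A hedged sketch of estimating this model with statsmodels follows; the column names mirror the variables defined above, but the data values and the industry coding are invented for illustration rather than drawn from the actual sample.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per firm; all values are illustrative placeholders
df = pd.DataFrame({
    "FIRST": [0.5, -0.2, 1.1, -0.8, 0.3, 0.9, -0.4, 0.2],
    "EXT":   [0.7, -0.5, 1.2, -1.0, 0.1, 0.8, -0.6, 0.0],
    "WO":    [0.2, -0.3, 0.9, -0.7, 0.4, 0.5, -0.2, 0.1],
    "IT":    [0.6, 0.4, 0.8, 0.2, 0.5, 0.7, 0.3, 0.6],   # % of workers using email
    "RD":    [0.1, 0.0, 0.3, 0.0, 0.2, 0.4, 0.1, 0.2],   # R&D expense per employee
    "pct_college": [0.4, 0.2, 0.6, 0.1, 0.3, 0.5, 0.2, 0.4],
    "industry": ["mfg", "svc", "mfg", "svc", "mfg", "svc", "mfg", "svc"],
})

# OLS with industry dummies and the human capital control
model = smf.ols("FIRST ~ EXT + WO + IT + RD + pct_college + C(industry)",
                data=df).fit()
print(model.params)
```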
One concern with these regression estimates is that organizational practice variables and product development measures may be simultaneously determined. Therefore, we use instrumental variables to conduct regressions in which the organizational measures (WO and EXT) are treated as endogenous. As instruments, we use our individual inhibitors of organizational transformation, which reflect the ease or difficulty through which firms can develop these organizational assets, as well as the state in which a firm's corporate headquarters are located, which may affect a firm's cost for external information gathering.
Productivity Tests
We test complementarities in production by embedding our measures within a production function. The productivity framework has been widely used in IT productivity research (Brynjolfsson and Yang, 1995, and Stiroh, 2004, review much of this literature). IT productivity scholars embed measures of information technology, along with levels of other production inputs, into an econometric model of how firms convert these inputs to outputs. Economic theory places some constraints on the functional form used to relate these inputs to outputs, but a number of different functional forms are widely used depending on the firm's economic circumstances.
We use the Cobb-Douglas specification, which, aside from being among the simplest functional forms, has the advantage that it has been the most commonly used model in research relating inputs such as information technology to output growth (e.g., Brynjolfsson and Hitt, 1993, 1995, 1996; Dewan and Min, 1997), and has been used extensively in research testing for complementarities between IT and organization (Bresnahan, Brynjolfsson, and Hitt, 2002; Brynjolfsson, Hitt, and Yang, 2002). Our primary regression model can be written as

va = β0 + βk·k + βIT·it + βL·nite + βWO·WO + βEXT·EXT + ε

where va is the log of value added, k is the log of capital, it is the log of IT employees, nite is the log of non-IT employees, and WO and EXT are our organizational variables. Dummy variables are included for industry and year. In some specifications, we also control for the firm's human capital to rule out some alternative explanations for our principal results.
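For illustration, the following sketch estimates a Cobb-Douglas specification of this form on simulated data, including the interaction terms used to examine three-way complementarity; variable names follow the definitions above, but the coefficients, sample size, and the omission of industry and year dummies are simplifications rather than features of the actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # simulated firm-year observations
df = pd.DataFrame({
    "k":    rng.normal(8, 1, n),    # log capital
    "it":   rng.normal(4, 1, n),    # log IT employment
    "nite": rng.normal(7, 1, n),    # log non-IT employment
    "WO":   rng.normal(0, 1, n),    # decentralization scale
    "EXT":  rng.normal(0, 1, n),    # external focus scale
})
# Simulated log value added with a built-in three-way complementarity
df["va"] = (0.2 * df.k + 0.05 * df.it + 0.6 * df.nite
            + 0.03 * df.it * df.WO * df.EXT + rng.normal(0, 0.1, n))

# Cobb-Douglas in logs with the three-way complementarity term
model = smf.ols("va ~ k + it + nite + WO + EXT + it:WO + it:EXT + WO:EXT + it:WO:EXT",
                data=df).fit()
print(model.params["it:WO:EXT"])  # positive if IT, WO, and EXT are complements
```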
In the productivity regression, the organizational variables are entered in levels as well as in interactions with the IT measure, following prior work (Bresnahan, Brynjolfsson, and Hitt, 2002), and the assumption that organizational factors are associated with substantial adjustment costs and take considerable time to change is supported by the substantial case and econometric evidence cited earlier. Furthermore, in our analysis, we use adjustment cost data as instrumental variables to directly test this assumption.
An additional potentially important source of endogeneity is our IT measures. Unobserved productivity shocks will tend to exert an upward bias on the IT estimates as firms adjust IT to accommodate higher production levels. However, the endogeneity of IT investment may not exert too large an influence on our key estimates for two reasons. First, in other work we show that using GMM-based estimators that account for the endogeneity of IT investment (such as the Levinsohn-Petrin estimator) lowers our IT estimates by no more than 10% when using these data (Tambe and Hitt 2011).
Second, our key estimates, based on the 3-way complementarity between IT, external focus, and decentralization are less subject to bias relative to our main effect estimates because any biases that affect the complementarity term must be present only at the confluence of all three of these factors, but not when factors are present individually or in pairs. 4For example, unobservable factors like "good management" might explain why some firms are simultaneously productive and extroverted.However, such an unobservable would not explain why EXT is productive in the presence of IT and WO but not in its absence.That would require a much more unusual sort of unobservable factor which increased productivity only when the other inputs were present as a group, but not individually.Thus, although we cannot completely eliminate all sources of bias, the effects of unobservables on our key estimates should be limited.
Correlation Tests
Table 7 shows partial correlations between our IT measures and our organizational practice variables. All correlations include controls for firm size. We also control for 1-digit NAICS industry, as well as the percent of skilled blue-collar workers and the percent of professional workers, to control for the nature of the firm's production process. Although these correlations by themselves are neither necessary nor sufficient evidence of complementarities (Athey and Stern, 1998; Brynjolfsson and Milgrom, 2009), they provide preliminary evidence as to whether managers perceive these practices as mutually beneficial.
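A partial correlation of this kind can be obtained by residualizing both variables on the controls and correlating the residuals. The sketch below is illustrative only; the column names are hypothetical placeholders for firm size, industry, and occupational mix controls.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def partial_corr(df, x, y, controls):
    """Correlation between x and y after partialling out the control variables."""
    # Categorical controls (e.g., 1-digit industry) are expanded into dummies;
    # numeric controls pass through unchanged.
    C = sm.add_constant(pd.get_dummies(df[controls], drop_first=True, dtype=float))
    resid_x = sm.OLS(df[x], C).fit().resid
    resid_y = sm.OLS(df[y], C).fit().resid
    return np.corrcoef(resid_x, resid_y)[0, 1]

# Example usage with hypothetical column names:
# partial_corr(firm_df, "EXT", "WO",
#              ["log_employment", "naics1", "pct_bluecollar", "pct_professional"])
```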
Our external focus measure is correlated with our IT measure, and is highly correlated with the decentralization measure. Workplace organization is also positively associated with our IT measures. The correlation between workplace organization and external focus is 0.45 (p<.01), indicating that external information practices are significantly more likely to be found in firms with decentralized decision architectures. These correlations between external focus, workplace organization, and IT support the argument that external focus, workplace organization, and information technology usage are complements in the production process. Furthermore, our aggregated adjustment cost variable, which we use as an instrument in both our product development and productivity regressions, is negatively and significantly associated with both organizational measures, indicating that firms that have higher adjustment costs are less likely to have implemented either of these systems of work practices, as theory would predict.
We can also examine how managers adjust IT levels over time to match organizational practices.
Figure 2 compares changes in aggregate IT employment levels over time where firms are separated according to whether they are above or below the median in terms of adoption of EXT and WO. The trend lines suggest that IT demand in firms with high levels of both EXT and WO has been increasing faster than in firms that have not adopted these practices or firms that are mismatched on these practices.
Innovation and Product Cycle Regressions
Table 8 shows associations between our innovation and product development measures and our technology and organizational variables. In Columns (1)-(3), we report OLS regressions of how the different organizational practice and IT measures are related to product development. In (1), the dependent variable is how likely a firm is to be the first in its industry to introduce a new product. The point estimate on external focus is positive and significant (t=3.44), suggesting that extroverted firms also tend to exhibit product leadership. The dependent variable in (2) is related to internal product development speed, which captures how quickly a firm can introduce a new product or service after it has been approved. Thus, this measure captures speed of execution, rather than innovation per se. The estimates in (2) indicate that in addition to R&D intensity, technology usage, rather than organizational variables, is more closely associated with faster internal product development (t=2.12). The dependent variable in (3) is effective management of the product line, and the coefficient estimates indicate that external focus (t=3.16) and, to a lesser degree, decentralization (t=1.69) are closely related to how well a firm manages its product line.
In Columns (4)-(6), we report estimates from 2SLS regressions where our organizational measures are treated as endogenous, and individual inhibitors of organizational transformation and location variables are used as instruments. As in our OLS regressions, the estimates from this set of regressions indicate that external focus is positively and significantly associated with new product introduction (t=3.26), and that IT investment is most closely associated with product development speed (t=2.19). However, in our IV estimates, decentralization rather than external focus appears to be most closely associated with effective management of the product line (t=2.18). Hausman test statistics from all three IV regressions, displayed at the bottom of Table 8, indicate that we cannot reject the null hypothesis that decentralization and external focus are exogenous to our regression models, consistent with our assumption that organizational factors are difficult to change in the short run.
In aggregate, these results indicate that the ability to exercise product leadership is more closely connected to a firm's ability to capture information from its environment, but its ability to internally process and manage products in a timely manner is governed by its internal information processing capacity. Competing in quickly changing product environments, therefore, appears to require external receptors in addition to decentralization and technology.
Full-Sample Regression-Based Productivity Tests
The central hypothesis of this paper is that external focus is an important organizational asset affecting the returns to IT investment, especially when combined with decentralization. Table 9 shows the results from our regressions directly testing this hypothesis in a complementarities framework. All estimates are from pooled OLS regressions, and errors are clustered by firm to provide consistent estimates of the standard errors under repeated sampling of the same firms over time. First, we establish a baseline estimate of the contribution of IT to productivity during our panel, which extends from 1999 to 2006. The coefficient estimate on our IT employment variable is about .084 (t=2.3), consistent with many pooled OLS regressions of this type that appear in the literature using other sources of data on IT expenditures (e.g., Brynjolfsson and Hitt, 1996).
In Column (2), we include only decentralization measures, for comparison with earlier studies.
Although the coefficient estimate on decentralization is significant (t=3.3), the interaction term between decentralization and IT is insignificant, in contrast with earlier work. This may be because decentralized work practices have more broadly diffused to most IT-intensive firms that can benefit from them, leading to minimal marginal effects on productivity in recent data.5 The coefficient estimate on IT is slightly smaller but is close to the estimate without any organizational factors explicitly modeled. In Column (3), we include only our external focus measure plus an interaction term with information technology. The results are similar: the estimate on the external focus measure is significant (t=2.08), but the two-way interaction term between external focus and IT is not significant.
In our main results, reported in Column (4), we include the full set of organizational factors and interaction terms. The coefficient estimates on the three-way interaction term as well as on the decentralization term are positive and significant. For IT returns within our sample range, the estimates imply that IT returns are increasing when EXT and WO are matched in either direction. This is consistent with the interpretation that unless high IT firms have adopted these organizational complements together, adopting only one or the other in isolation may make them worse off than adopting neither. Therefore, IT is complementary with the EXT*WO combination rather than just WO in isolation. In the cube-based productivity analysis presented later in the paper, we show that of the possibilities for matching EXT and WO for high IT firms (either high-high or low-low), the highest productivity group corresponds to firms that have adopted both practices along with IT, not those that have invested in IT but adopted neither of the two organizational practices. Based on supplemental analysis (see Tambe, Hitt, and Brynjolfsson, 2011), these point estimates suggest that complementarities are present among any two factors when the third factor is close to or above the sample mean, and a single factor is complementary to the combination of two other factors when the two factors are above the sample mean. After including the organizational factors and all interaction terms, the IT main effect estimate in Column (4) is no longer significantly different from zero. Although our benchmark estimates in Column (1) indicate an output elasticity of about 0.08, our Column (4) estimates suggest that these benefits are only captured by firms that have also chosen the right combination of decentralization and external focus to match their IT investments.6 To gauge the robustness of these results, we re-estimate our model (Columns 5 and 6) including a control for workforce composition (percentage of skilled workers and professionals out of total employment) to account for the fact that human capital is closely related to organizational innovation and technology adoption (Bartel and Lichtenberg, 1987). Our coefficient estimates do not change substantively after including these human capital measures or after including more detailed industry dummies. Second, we conduct instrumental variables regressions using our organizational inhibitors measures as instruments for external focus, decentralization, and the interaction terms. The pattern of IV estimates (Column 7) is similar to that in earlier regressions and indicates that our core findings are unlikely to be heavily influenced by the endogeneity of organizational investments. At the bottom of Column (7), we report values of the Hansen J-statistic, which tests the instrument exclusion restriction, and the Anderson Canonical Correlation, which tests for weak instruments. The reported values indicate that instrument validity is not likely to be a problem in our IV regression model. A Hausman test falls just short of rejecting the null hypothesis that our organizational measures are exogenous with respect to productivity, and that our OLS regressions in Columns (1)-(5) produce consistent estimates.
Sample Difference Tests
We can use a number of contrasts among subsamples of our data to further investigate potential endogeneity or other specification problems. For instance, we construct a measure of adjustment costs by creating a composite scale (comparable to what we did with EXT and WO) for our organizational inhibitor variables, which allows us to segment the sample into firms that have high and low organizational adjustment costs. Firms facing higher adjustment costs are likely to have been endowed with whatever organizational practices we observe, so our quasi-fixed assumption is most likely to be valid, while firms with lower adjustment costs are more likely in the midst of change to more modern work practices. If unusually high performing firms are also likely to be investing in decentralized work practices, we would expect the endogeneity problem to be concentrated in the low adjustment cost firms.
In Columns (1) and (2) of Table 10, we report regression estimates for the subsamples of firms that have lower than average and higher than average adjustment costs, respectively, and find results that suggest our analyses are not biased upwards by endogeneity. The coefficient estimate on the 3-way interaction term for firms with lower organizational adjustment costs is .058 (t=1.93), only slightly lower than our baseline estimate, and we cannot reject the hypothesis that the coefficient on the 3-way interaction term is the same across the two regressions. The comparable coefficient estimate for firms with high adjustment costs, for whom our assumption of quasi-fixed organizational factors is more likely to be accurate, is .106 (t=2.72). Therefore, consistent with our IV estimates, it appears that to the extent that our organizational factors are changing during the sample period, it would introduce a downward bias to our productivity estimates.
We can also test for other specification problems by varying the length and sample frame of our panel. In particular, our organizational practice measures are likely to accurately reflect actual practices in the interval around 2001, and be less accurate in the early and late years. Moreover, if firms adopt these practices over time as IT prices decline, as our theory would predict, we will likely overstate the use of these practices in early periods, and understate them in later periods. In Column (3), when we restrict the sample to a five-year panel close to 2001, we obtain estimates similar to our full estimates in Table 9, and we cannot reject the hypothesis that the coefficients on the 3-way interaction term are the same across the two regressions. In Columns (4) and (5), we run separate regressions from 1999-2001 and from 2002-2006. The higher coefficient estimates on the organizational measures in the 1999-2001 period are consistent with the interpretation that our survey measures understate organizational differences before 2001 and overstate them after 2001. Overall, our estimates in (1) through (5) suggest that even if firms were becoming more externally focused during these years, measurement error in organizational factors is unlikely to have had a significant effect on our productivity estimates.
In Table 11, we implement a series of tests for complementarities proposed by Brynjolfsson and Milgrom (2009) that contrast the productivity of firms that have adopted different combinations of IT, EXT, and WO. We first dichotomize each of the three variables, where a 1 represents high levels of the organizational practice and a 0 represents low levels. This yields eight cells (2x2x2), one for each possible combination of practices. Each cell in the table is instantiated with average productivity differences of firms in that cell relative to the (0, 0, 0) cell. Unlike the productivity tests shown above, this test distinguishes productivity differences between high IT firms that have invested in EXT and WO and high IT firms that have invested in neither.
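The mechanics of such a 2x2x2 table can be illustrated with a short sketch. The column names below are hypothetical placeholders (indicator variables for being above the median on each practice and a productivity residual with other inputs netted out); the snippet only reproduces the bookkeeping, not the study's data.

```python
import pandas as pd

def cell_means(df):
    """Average productivity residual for each combination of IT, EXT, and WO,
    expressed relative to the (0, 0, 0) cell."""
    cells = (df.groupby(["IT_hi", "EXT_hi", "WO_hi"])["productivity_resid"]
               .agg(["mean", "count"]))
    baseline = cells.loc[(0, 0, 0), "mean"]
    cells["diff_vs_000"] = cells["mean"] - baseline
    return cells

# Example usage with a hypothetical firm-level data frame:
# table_11_like = cell_means(firm_df)
```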
We find that the highest productivity cell is that in which firms invest in all three factors (1, 1, 1). F-tests indicate that the productivity differences between the (1, 1, 1) cell and cells with any combination of two factors are all significant at the 5% level. This pattern of results is what would be predicted by the complementarities story, and provides additional evidence that our results are not being driven by endogenous organizational investment. Although reverse causality between performance and organizational investment might explain the (1, 1, 1) cell, it does not explain why firms that have neither factor in place would be more productive than those with one but not the other in place.
Furthermore, Chi-squared tests (shown with Table 11) indicate that the majority of firms appear to cluster into one of the two main diagonal corners within this group, as would be expected given the observed productivity differences and the expected clustering of complementary practices. Interestingly, these results also suggest that even for low IT firms, the combination of decentralization and external focus appears to provide benefits that are independent of IT investment levels.
Complementarities arguments also predict that the marginal benefit of adopting a practice should be increasing in the presence of complementary practices. As noted by Aral, Brynjolfsson, and Wu (2009) and Brynjolfsson and Milgrom (2009), this can be viewed as comparisons along the edges of a cube where each axis represents one of the (dichotomized) practice measures (see Figure 3). This increasing returns argument implies three specific tests, each along a pair of edges, plus a fourth test that simultaneously considers all three pairs of edges. For instance, one test is whether the adoption of EXT adds greater benefit in the presence of IT and WO [the comparison of (1,1,0) vs. (1,1,1)] than adopting EXT alone [the comparison of (0,0,0) vs. (0,0,1)]. The results of these tests suggest that the benefits of adopting external focus in the presence of IT and decentralization are greater than the benefits of adopting external focus alone (p<.01), and a test of whether the benefits of adopting decentralization are increasing in the presence of IT and external focus falls slightly short of being significant at the 10% level. IT adoption provides greater productivity benefits in the presence of decentralization and external focus, but this is not significant, perhaps due to the substantial complementarity between external focus and decentralization alone.7 Finally, we reject the null hypothesis of no increasing returns when we consider the most comprehensive test, which examines all three comparisons simultaneously (p<.05).
The findings from Table 11 and Figure 3 are visually captured in Figure 4, in which we show a plot of fitted values from a regression of organizational and IT inputs on the productivity residuals when other variables have been netted out. Lighter areas in Figure 4 correspond to higher productivity values.
The surface contours corresponding to changing EXT*WO while holding IT fixed indicate that high IT firms perform better when EXT and WO are matched. Furthermore, the contours that correspond to changing IT levels with EXT*WO held fixed indicate that returns to IT increase much more rapidly in firms in which EXT and WO are matched.
Conclusion
Our results suggest that a 3-way system of complements that includes external focus, decentralization, and IT intensity is associated with productivity in modern firms. IT has the strongest effect on productivity in firms that simultaneously have the right organizational structures in place, whether through wise management or luck. While prior work has demonstrated the importance of decentralization in explaining differences in returns to IT investment, the central contribution of this paper is the integration of a third variable, external focus, into the IT productivity framework.
7 Alternatively, this could reflect lower adjustment costs of IT, and a resulting faster adoption rate.
Our hypothesis that decentralized decision-making and external focus are complementary to IT investment is supported by a number of different analyses. First, these three factors are highly correlated, indicating that firms are likely to invest in them together. This pattern of joint investment is predicted if managers are at least somewhat aware of these complementarities or if competition selects for companies with more productive combinations of practices. We also found evidence that one of the principal mechanisms through which external focus affects productivity is via improved product development.
Some of the strongest evidence of complementarities comes from our production function estimates: the combination of IT, decentralization, and external focus is positively associated with firm productivity. Moreover, when these complements are included in a production model, main effect estimates of IT and other organizational factors essentially disappear, indicating that firms derive the most benefit from implementing the system of technological and organizational resources, and not IT alone.
From a research perspective, our study contributes to a literature on determinants of IT value, and in particular, on IT-related organizational complements. Our findings highlight the benefits of information technologies in an environment in which innovation largely takes place through external linkages with other firms, rather than within insular firms. Information technologies appear to provide greater benefits for firms that must process information effectively to respond to frequent environmental signals. This observation is also consistent with recent research suggesting cross-regional variation in returns to IT adoption, since these complementarities are likely to be most valuable when firms are located in information-rich environments. Finally, from a research methods standpoint, we have identified an effective set of instruments for work organization and external focus, providing greater confidence that these and prior results on the benefits of IT-related organizational practices are not driven by endogeneity.
A key managerial implication of our research is that "extroverted" firms are more productive and derive disproportionate benefits from advances in IT and workplace organization. Companies that exploit this opportunity by using more information from customers, suppliers, and competitive benchmarks appear to outperform their rivals. Moreover, theoretical arguments suggest that managers should implement all of the elements in a system of complements to realize the maximum benefits (Milgrom and Roberts, 1990). Therefore, managers in firms with decentralized structures may not realize productive returns to IT-related investments unless they also find a way to promote cross-boundary information flows through external practices such as competitive benchmarking and inter-organizational product teams. Thus, while the two types of organizational practices are complementary, external focus is distinct from organizational decentralization both theoretically and empirically. However, it is likely that our measures are only a subset of an even wider set of practices that firms use to bring information into the organization.
Our findings may also have implications for policy makers. There has been recent discussion of why IT appears to have led to greater productivity growth in some regions within the US than in others, and in some parts of the world than others (Dewan and Kraemer, 2000; Bloom, Sadun, and Van Reenen, 2008). Our findings suggest that the degree to which firms are networked with customers, suppliers, and partners is a potentially important factor explaining differences in IT-led productivity growth. Even within the same industry in the US, scholars have shown that considerable variation can exist among the degree to which firms share information across regions (Saxenian, 1996).
There are some important limitations to our study. Because of the research design, we were not able to conduct fixed effect productivity regressions to determine if changes in organizational assets drive productivity changes. Thus it is possible that the organizational assets that we have focused on here are reflecting some unobserved heterogeneity among the firms in our sample. However, we controlled for the most likely candidate, human capital endowments, and supplementary data allowed us to test whether our results were sensitive to this assumption. Furthermore, while heterogeneity could explain correlations between any given practice and our performance measures, it is more difficult to construct a story of heterogeneity that drives correlations with 3-way combinations, but not one- or two-way combinations of these practices.
An increasing body of evidence suggests that organizational practices, such as the ones that we identify in this paper, are critical to the success of technological innovation. We expect that future research using more fine-grained measures of organization will continue to identify other organizational and management practices that interact with technology to affect productivity and innovation.
Table 8 notes: Huber-White robust standard errors in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%. All regressions use the 2001 cross-sectional survey data. FIRST is a measure of the extent to which firms are the first to introduce new products in an industry. SPEED is a measure of how long it takes to design and introduce a new product after approval. PLMGMT is a measure of internal product line management, and it indicates whether firms regularly weed out marginal products from their product line. Instrumental variables used in the 2SLS regressions include individual inhibitors of organizational adjustment as well as state dummies. All first-stage regressions in (4)-(6) have an R² of at least .42. The Hausman Test is a test of the null hypothesis that OLS is consistent.
Appendix A: INTERPRETING 3-WAY INTERACTION TERMS
Consider the Cobb-Douglas production function in the text, with all inputs standardized to mean zero and standard deviation one, and with all factors (except WO and EXT) measured in logarithms:
va = β_k·k + β_n·nite + β_it·it + β_wo·WO + β_ext·EXT + β_it,wo·(it·WO) + β_it,ext·(it·EXT) + β_wo,ext·(WO·EXT) + β_it,wo,ext·(it·WO·EXT) + controls.
The output elasticity of a factor (say IT) is then given by:
∂va/∂it = β_it + β_it,wo·WO + β_it,ext·EXT + β_it,wo,ext·(WO·EXT).
(The following conditions refer to the simultaneous-movement case discussed below, in which both EXT and WO move from 0 to G.) As before, two of the three conditions hold for any G>0 (the EXT and the WO elasticities are increasing in G for G>0, the specific cutoffs being -.12 and -.36). For the IT elasticity to be simultaneously increasing in WO and EXT, they both must be at least .06 standard deviations above the mean, which is essentially the entire upper portion of the sample. Thus, given the point estimates, all three tests hold.
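The algebra behind these conditions can be checked symbolically. The sketch below is an illustration only (not part of the original derivation); the coefficient names mirror the notation used above, and only the terms of the production function that involve the interactions are included.

```python
import sympy as sp

it, WO, EXT, G = sp.symbols("it WO EXT G")
b_it, b_iw, b_ie, b_we, b_3 = sp.symbols(
    "beta_it beta_itWO beta_itEXT beta_WOEXT beta_3")

# Interaction part of the standardized production function involving IT.
va_part = b_it*it + b_iw*it*WO + b_ie*it*EXT + b_we*WO*EXT + b_3*it*WO*EXT

elasticity_it = sp.diff(va_part, it)        # output elasticity of IT
cond_it_wo = sp.diff(elasticity_it, WO)     # beta_itWO + beta_3*EXT > 0 for IT-WO complementarity
cond_it_ext = sp.diff(elasticity_it, EXT)   # beta_itEXT + beta_3*WO > 0 for IT-EXT complementarity

# Simultaneous movement: set WO = EXT = G and inspect how the IT elasticity changes in G.
elasticity_it_G = sp.expand(elasticity_it.subs({WO: G, EXT: G}))
print(elasticity_it_G)  # beta_it + (beta_itWO + beta_itEXT)*G + beta_3*G**2
```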
8 Because of the standard errors of the estimates, we focus on point estimates, as the confidence intervals of these comparisons are relatively wide in comparison to the range of the sample. This is, perhaps, not surprising given that our complementarities arguments would suggest these factors are all multicollinear.
In the above table, we report results from our main regressions (the specification shown in Table 9, Column (4)) where we vary the construction of our external focus measure. In Columns (1) through (5), we test each of the external focus constructs individually. In Column (6), we report results when using only the three practices most closely related to those investigated in earlier research (Mendelson, 2000). In Column (6), we report results when only using the labor market variables. This set of regressions indicates that our results are not sensitive to any single underlying construct, and instead represent a broader firm orientation towards external information acquisition.
Huber-White robust standard errors are clustered on firm and shown in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%. Dependent variable in all regressions is Log(Value Added). Regressions are from the baseline model in Column (4) of Table 9, and also include Capital, Non-IT Employment, and controls for 1-digit industry and year.
Figure 4: Plot of fitted productivity values from a regression of organizational and IT inputs on productivity residuals, with other variables netted out.
The elasticities of the other factors are defined analogously. Complementarities arguments suggest increasing returns: when a factor is considered separately (say WO), returns are increasing in that factor at the value of the other factor (EXT). It is natural to consider complementarities as being present when all factors are "high" in the sense of being above the mean. Using point estimates from the preferred specification (Table 9, Column 4), β*_it,wo = .013, so the first condition holds for any value of EXT > -0.18, and β*_wo,ext = .038, so the third condition holds for any value of IT > -0.55. For the EXT-IT interaction (the second condition), β*_it,ext = -.021, so the complementarity holds as long as WO > .30. Thus, two of the three tests hold for the upper half of the sample, and the third holds for nearly all of the upper half of the sample.8 If we consider simultaneous movements of factors (both EXT and WO move from 0 to G) and we consider the IT elasticity, then we have:
∂va/∂it = β_it + (β_it,wo + β_it,ext)·G + β_it,wo,ext·G².
Table 5: Means, Standard Deviations, and Correlations for IT Measures
† Survey variables.
Table 9: Regressions of IT and Organizational Practices on Productivity Measures
Huber-White robust standard errors are clustered on firm and shown in parentheses; * significant at 10%; ** significant at 5%; *** significant at 1%. IT Employment, Non-IT Employment, and Capital are in logs. Dependent variable in all regressions is Log(Value Added). R² of first-stage regressions in (7) vary from a low of .12 to a high of .23, with a mean of .18. The Hansen J Statistic tests the null hypothesis that the instrumental variables are uncorrelated with the residual terms (exclusion restriction). The Anderson Test tests the correlations between the endogenous regressors and instrumental variables, and therefore tests for instrument weakness. The Hausman Test tests the null hypothesis that OLS is consistent.
Table 11: Productivity with Matches and Mismatches on Complements
| 2014-10-01T00:00:00.000Z | 2011-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "170734b1eeb93fb72e83e15185f7e0da9890d5b0",
"oa_license": "CCBYNCSA",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/77239/1/Brynjolfsson_The%20extroverted%20firm.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "21ab347ef446c75207dd1598863db1830ddd1de8",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business",
"Computer Science"
]
} |
253461 | pes2o/s2orc | v3-fos-license | Relevant baseline characteristics for describing patients with knee osteoarthritis: results from a Delphi survey
Background: Inclusion/exclusion criteria and baseline characteristics are essential for assessing the applicability of trial results to a given patient and the comparability of study populations for meta-analyses. This Delphi survey aimed to generate a set of baseline characteristics for describing patients with knee osteoarthritis enrolled in clinical studies.
Methods: Survey participants comprised clinical experts (n = 23; mean age 54 y; from 4 continents) who had authored at least two randomized trials on knee osteoarthritis. First, given a prepared list of baseline patient characteristics, the experts were asked to add characteristics they considered important for assessing comparability of patient populations in different trials that evaluated the efficacy of non-surgical interventions for treating knee osteoarthritis. Next, they were asked to rate the importance of each characteristic, on a scale of 0 (not important) to 10 (highly important), according to three outcome categories: pain, function, and structure.
Results: Participants identified 121 baseline characteristics. A rating ≥7 points was assigned to 39 characteristics (e.g., age, depression, global knee pain, daily dose of pain killers, Kellgren-Lawrence grading); of these, 20 were related to pain, 15 to function, and 23 to structural outcomes. Global knee pain was the only baseline characteristic that fulfilled the predefined consensus criteria among experts.
Conclusions: Experts identified a large number of characteristics for describing patients with knee osteoarthritis. Disagreement and uncertainty prevailed over the relevance of these characteristics. Our findings justified further efforts to define appropriate, broadly acceptable sets of baseline characteristics for describing patients with knee osteoarthritis.
Background
After reading and critically appraising a publication on the effects of a particular treatment, clinicians must consider the patients to which the reported results might apply. The study authors typically present inclusion/exclusion criteria in the Methods section and the baseline characteristics of the included patients in the Results section (typically in Table 1). Inclusion/exclusion criteria inform readers how eligible patients were selected (e.g., age, illness, duration of complaints, severity of illness, and co-morbidities) for participating in the trial. Baseline characteristics describe the participants within a given boundary of inclusion/exclusion criteria. The reported baseline characteristics represent, in general, prognostic factors that can impact the future course of the illness. For example, among patients with knee osteoarthritis, those with knee malalignments have a less favorable future course than patients without malalignments [1].
Detailed baseline characteristics of trial participants are also important for researchers in conducting systematic reviews and meta-analyses. Guidelines for preparing meta-analyses and systematic reviews recommend assessing the comparability of patient populations in different primary studies and determining whether it is reasonable to combine the results in a single value [2][3][4]. An appropriate comparison is possible only when the required data are reported in primary studies. Reported baseline characteristics may be incomplete. Improper attention to the comparability between patients of different trials may raise criticism of results. For example, a systematic review that compared the effects of chondroitin or glucosamine in patients with knee osteoarthritis should ensure the patient populations were comparable [5]. To our knowledge, recommendations from experts are lacking on selecting the most relevant baseline characteristics for patients with osteoarthritis of the knee. The aim of this survey was to generate a set of baseline characteristics that could, based on expert opinions, appropriately describe patients with knee osteoarthritis enrolled in clinical studies.
Table 1 legend: List of baseline characteristics with a median rating of importance ≥7 according to three different outcome categories, based on the opinions of an expert panel.
Methods
We assembled an international panel of clinical experts on osteoarthritis to generate a list of baseline characteristics for describing patients included in clinical trials with knee osteoarthritis. A preliminary list was prepared and sent to these experts with the request that they add patient characteristics that they considered relevant. In a second round, the experts were asked to rate the relevance of each patient characteristic.
Selection and recruitment of experts
We searched Medline and EMBASE to identify clinical experts on osteoarthritis of the knee. The following MESH terms were used: Osteoarthritis, Knee, Physical Therapy Modalities, Steroids, Viscosupplementation, Anti-Inflammatory Agents, Non-Steroidal, Randomized controlled trial (a detailed list of the search strategy is available upon request from the corresponding author). The search was restricted to articles published between the years 2007 and 2012. We included only studies that evaluated the treatment effect of steroid injections, viscosupplementation, non-steroidal analgesics, or physical therapy. The aim, set arbitrarily, was to identify 20 experts that would participate in our survey. From the list of all authors, we selected those that co-authored three or more trials plus a random sample of authors that were listed on two publications. Based on the medical specialty and/or affiliation mentioned in the publication, we categorized authors into groups of clinically-oriented (e.g., rheumatologists, physiotherapists) or methodology-oriented (e.g., clinical epidemiologists, biostatisticians) researchers. Only authors categorized as clinically-oriented researchers were contacted for participation in the survey.
First round
The selected experts were contacted by E-mail and informed about the aim of the study. Those that agreed to participate received a prepared form and a request to add characteristics to complete a preliminary list of baseline characteristics (indicated by § in Additional file 1).
The experts received the following information: Patients with knee osteoarthritis have been included in four randomized trials (A/B/C/D, Additional file 2). In each of the four trials, one group of patients received an active treatment X (non-surgical) and the other group a placebo. In trials A and B, the outcome of interest was pain; in trials C and D, the outcome was a functional parameter. The results between trials differed significantly. In trials A and C, treatment X showed a significant benefit, and in trials B and D, the identical treatment X showed no benefit. The execution of the trials (intervention, treatment, measurement of outcomes, randomization etc.) was identical; however, one reason for the contradictory results may have been the inclusion of different patient populations in all four trials. Then, the experts were asked, "From your experience as a clinical expert, what baseline patient characteristics would be necessary to identify patients to whom the trial results might be applicable, and furthermore, to evaluate the comparability of the two populations in trials A/B and C/D?" The participants were asked to add characteristics that they thought should be included in an attached list of baseline characteristics. Participants returned the completed lists by mail or fax.
Second round
Based on the answers from the first round of inquiries, we updated the list of patient characteristics. To avoid redundancy, nearly identical characteristics were merged into one parameter. The final list included 121 items that were assigned to one of six categories, including general information about the patient (e.g., age, gender), psychosocial factors (e.g., depression, anxiety), history (e.g., duration of pain, pain provoking maneuvers), physical examination results (e.g., periarticular tenderness, instability), laboratory tests (e.g., C-reactive protein, serum hyaluronic acid concentration), and imaging results (e.g., Kellgren-Lawrence grading, bone marrow lesion).
This updated list was sent to the experts that had returned the questionnaire from the first round. The participants were asked to rate the 'importance' of each baseline characteristic on a scale of 0 to 10, where 0 indicated no importance, and 10 indicated the utmost importance. We informed participants that the 'degree of importance' was related to two issues; first, the relevance of the characteristic in identifying patients in daily practice to whom the results of a study were applicable; and, second, its usefulness in meta-analyses for assessing the comparability of patient populations from different primary trials for a potential pooling of results.
We reasoned that the experts would probably estimate the importance of baseline characteristics differently, depending on the outcome of interest. Therefore, we asked the participants to rate each baseline characteristic according to three different outcome categories; pain (e.g., VAS), function (e.g., WOMAC-function sub-score), and structure (e.g., change of the joint space width over time).
Participants that completed both questionnaires were asked to provide information about their age, gender, and primary medical specialty. Completed questionnaires were returned by E-mail or fax. All experts that responded in the first round were informed that they would receive a voucher for $100 after returning the completed questionnaire for round 2.
Statistical analysis
For this study, the medians and interquartile ranges for each parameter were calculated to quantify the importance assigned to single items. The median is a measure of central tendency; the 25-75% interquartile range (IQR) and the range are measures of the dispersion of values. The median value means that half of the values lie below and half above it. The 25-75% IQR is the difference between the values of the 25th and 75th percentiles. The 0th and 100th percentiles (minimal and maximal values) define the range. The final list only included baseline characteristics with a median rating ≥7 (on a 0 to 10 point scale). We arbitrarily defined a consensus among experts as a rating with an interquartile range ≤4 (±2) points.
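As an illustration, the two decision rules described above (median ≥7 to retain a characteristic, IQR ≤4 for consensus) can be applied to a set of panel ratings in a few lines; the example ratings below are hypothetical and do not come from the survey data.

```python
import numpy as np

def summarize_ratings(ratings):
    """Summarize 0-10 importance ratings for one baseline characteristic
    in one outcome category and apply the study's decision rules."""
    ratings = np.asarray(ratings, dtype=float)
    median = np.median(ratings)
    q25, q75 = np.percentile(ratings, [25, 75])
    iqr = q75 - q25
    return {
        "median": median,
        "IQR": iqr,
        "retained": median >= 7,   # kept in the final list
        "consensus": iqr <= 4,     # a priori consensus criterion (+/- 2 points)
    }

# Example usage with hypothetical ratings from a 12-member subset of the panel:
# summarize_ratings([8, 9, 7, 10, 6, 8, 9, 7, 8, 5, 9, 8])
```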
Ethical approval
This work did not involve human subjects or animals. Thus, according to national laws and institutional regulations, review board (IRB) approval was not necessary. All experts acknowledged in the manuscript gave their permission to list their names.
Recruitment and selection of experts
Between 2007 and 2012, we identified 364 randomized trials involving patients with osteoarthritis of the knee. From 1149 authors, 76 were invited to participate in the survey. Of those, 32 agreed to participate, and 23 finally took part in both rounds. The mean age of the panel members was 54 years (range 39 to 75 years); 7 of the participants were female. Medical specialties of the panel members included rheumatologists (n = 13), physiotherapy (n = 6), rehabilitation medicine (n = 1), occupational therapy (n = 1), orthopedic biomechanics (n = 1), and musculoskeletal medicine (n = 1). Nine panel members were from North America (USA/Canada), eleven from Europe, two from Australia, and one from Asia.
Results of first round
Twenty-three of 32 experts returned the list with additional baseline characteristics. The additions comprised a total of 267 baseline characteristics in addition to the 30 characteristics nominated in the first list sent out. After deleting repeated characteristics and merging highly similar characteristics, 121 remained in the final list (Additional file 1).
Results of second round
All 23 participants from the first round completed the questionnaire in round 2. The medians of the expert ratings varied between 1 (e.g., click on knee motion) and 10 (e.g., global knee pain, WOMAC score).
To ensure the list of baseline characteristics remained within reasonable limits, we only included characteristics that had been rated a median of seven or above by the expert panel. A total of 39 characteristics fulfilled this criterion; 20 were related to a pain-reduction outcome, 15 were related to a functional improvement outcome, and 23 to a structural improvement outcome. The list of baseline characteristics with a median rating ≥7 is displayed in Table 1. Details of the median, 25-75% interquartile range, and the range of estimates are shown in Additional file 1. Six baseline characteristics were rated ≥7 in all three outcome categories (age, gender, BMI, global knee pain, function of knee, duration since onset of symptoms indicating knee osteoarthritis).
A consensus on the relevance of a baseline characteristic was arbitrarily defined as a calculated range of four (± 2) points or less around the median. Only one parameter, the global knee pain (e.g., VAS, WOMAC), fulfilled this criterion; all other characteristics listed in Table 1 displayed ranges greater than four points but are still rated as relevant baseline characteristics.
Discussion
There were three main results of this study. First, experts listed a large number of baseline characteristics that described patients with osteoarthritis of the knee included in trials that evaluated treatment effects. Second, experts agreed on the relevance of only one baseline characteristic. All other baseline characteristics received ratings scattered over a broad range, which indicated disagreement among experts. Third, the relevance of baseline characteristics varied according to the outcome measure in a trial.
Researchers have published a number of relevant articles that emphasized the definitions and measurements of outcomes in clinical trials that evaluated treatment effects in patients with knee osteoarthritis [6][7][8][9]. Despite a thorough search in various databases, we could not find any publications that focused on how to select baseline characteristics of patients that participated in trials on osteoarthritis of the knee. However, we identified a few publications that summarized the evidence for prognostic factors that characterized patients with knee osteoarthritis. Cheung et al. [1] stated that strong or moderate evidence indicated that progression was associated with age, generalized osteoarthritis, knee malalignment, and serum hyaluronic acid concentration; limited evidence indicated associations with knee pain, synovitis, the adduction moment of the knee, vitamin D and C concentrations, and MRI bone marrow lesions in the knee; and conflicting evidence indicated associations with body mass index, initial severity of x-ray changes, cartilage oligomeric protein (Comp), and urinary CTX-II. In a recent systematic review, Chapple et al. [10] reported some of the same results. They found that age, generalized osteoarthritis, varus knee alignment, and radiographic features, particularly joint space narrowing were strongly associated with prognosis. The latter review [10] provided no specific statements about the prognostic relevance of serum hyaluronic acid concentration.
In part, our results were in agreement with the previous studies [1,10]; but in part, our findings disagreed with those studies. For example, the panel members of our survey considered psychosocial factors important, e.g., anxiety and fear; however, the supporting evidence for these factors appeared to be scarce. The most striking discrepancy was the difference between the number of prognostic factors gathered from the synthesis of original studies and the number collected from the clinical experts of the present study. The clinical experts listed a much higher number of relevant factors than the numbers listed in the current literature.
The results of our survey might be helpful for clinicians and researchers. This study aimed to provide guidance to clinicians for assessing the applicability of trial results to a different clinical application. After reading the results of a clinical trial, the main task of the clinician is to assess which patients might benefit from the treatment. Apart from the inclusion/exclusion criteria, the most significant information for this assessment are the baseline characteristics of study participants. The present study provides a list of relevant factors based on clinical expert opinions. Clinicians can consult this list to evaluate the comprehensiveness of the baseline characteristics in the reports they are considering.
Researchers may also find this list of baseline characteristics important for two reasons. First, our results may inform the design of future trials in patients with knee osteoarthritis. Researchers can consult the present list of baseline characteristics for each outcome of interest to decide which patient characteristics should be reported. The careful selection and reporting of baseline characteristics can facilitate the translation of research results into patient care, and this increases the usefulness of trial results. Second, researchers may find the list relevant when synthesizing the results of original studies. Guidelines for preparing systematic reviews by metaanalyses recommend checking the comparability of patient populations between original studies before pooling the results to derive a single value [2,4,11]. A prerequisite for this type of assessment is the availability of detailed information about the distribution of baseline characteristics among the patients included in the original studies.
Our study had both strengths and limitations. The primary limitation, inherent in most surveys, was that a different panel of experts may provide different results. A strength was that the members of the panel were experts in the field and had authored two or more clinical trials that evaluated the effects of treatments for patients with osteoarthritis of the knee. Furthermore, we included a large number of panel members, and they were from different countries. A panel with about 15 members is recommended for surveys to reach a consensus or to assess the degree of disagreement [12]. With 23 panelists, we exceeded that recommended number. The international composition of the panel assured a broad spectrum of opinions and eliminated the domination of an opinion based on a single clinic or region-specific beliefs. A further limitation of our study was that, in the first questionnaire, we only included pain-related and functional outcomes, but no structural outcomes. In the second questionnaire, we included the structural outcomes. We assume that the addition of an outcome parameter did not impact the results.
Conclusions
In conclusion, it remains uncertain which baseline characteristics are most important to collect and report in knee osteoarthritis trials. We cannot claim that our results provided a standard for reporting baseline characteristics. However, the results of this survey may serve to guide clinicians and meta-analysts in assessing whether the baseline characteristics of a given clinical trial are comprehensively reported. In addition, we provided a list of characteristics considered important for the respective study outcomes, based on the opinions of an expert panel. Finally, the extent of disagreement among experts on the relevance of baseline characteristics should motivate further research.
Participating experts (alphabetic order) | 2016-05-12T22:15:10.714Z | 2013-12-30T00:00:00.000 | {
"year": 2013,
"sha1": "8401d53a5fed230673ac03ce24b0365ab2619d0f",
"oa_license": "CCBY",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/1471-2474-14-369",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce69f1dc448c91b964e33482d3fae62303b68ab4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235573173 | pes2o/s2orc | v3-fos-license | Replicating cohesive and stress-history-dependent behavior of bulk solids Feasibility and definiteness in DEM calibration procedure
This paper presents a multi-step DEM calibration procedure for cohesive solid materials, incorporating feasibility in finding a non-empty solution space and definiteness in capturing bulk responses independently of calibration targets. Our procedure follows four steps: (I) feasibility; (II) screening of DEM variables; (III) surrogate modeling-based optimization; and (IV) verification. Both types of input parameter, continuous (e.g. coefficient of static friction) and categorical (e.g. contact module), can be used in our calibration procedure. The cohesive and stress-history-dependent behavior of a moist iron ore sample is replicated using experimental data from four different laboratory tests, such as a ring shear test. This results in a high number of bulk responses (i.e. ≥ 4) as calibration targets in combination with a high number of significant DEM input variables (i.e. > 2) in the calibration procedure. Coefficient of static friction, surface energy, and particle shear modulus are found to be the most significant continuous variables for the simulated processes. The optimal DEM parameter set and its definiteness are verified using 20 different bulk response values. The multi-step optimization framework thus can be used to calibrate material models when both a high number of input variables (i.e. > 2) and a high number of calibration targets (i.e. ≥ 4) are involved.
Introduction
To simulate, design, and optimize processes and equipment for handling bulk solids, such as iron ore and coal, the discrete element method (DEM) is a suitable computational method. However, DEM simulations can only predict bulk level responses (e.g. shear strength) accurately if their input parameters are selected appropriately. To select the input parameters with confidence, the common procedure is to calibrate and to validate DEM simulations [1][2][3][4]. The calibration can be done by finding an optimal combination set of DEM input parameters that replicates the captured bulk response [5].
Over the past decade, reliable DEM calibration procedures have been developed to model free-flowing bulk solids, such as iron ore pellets [1], glass beads [6], sinter ore [7], sand [8,9], and gravel [10,11]. By setting multiple targets for the DEM calibration, more than a single bulk response can be considered. This prevents the "ambiguous parameter combinations" problem in the DEM calibration procedure, which is discussed in detail in [11]. For example, to calibrate DEM input variables for simulating iron pellets in interaction with ship unloader grabs, Lommen et al. [1] considered at least three different calibration targets. They replicated the static angle of repose using the ledge and free-cone methods; the penetration resistance of iron pellets was also replicated, using a wedge penetration test setup.
In general, DEM calibration is performed following the generic procedure shown in Fig. 1. To find an optimal combination of DEM input parameters that satisfies multiple calibration targets, optimization methods can offer a solution. Various optimization methods have already been applied to calibrate the continuous type of DEM variables successfully [6,7,10,12]. Continuous DEM variables are numerical variables that have an infinite number of values between any two values [13]. For example, the coefficient of static friction is an important continuous DEM variable during calibration [5]. Richter et al. [10] concluded that surrogate modeling-based optimization methods are most promising for DEM calibration when continuous variables are included.
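For illustration, a surrogate modeling-based optimization loop for continuous DEM variables could look like the sketch below, which fits a Gaussian process surrogate to previously simulated (parameter set, calibration error) pairs and picks the next candidate by a simple greedy search over the surrogate prediction. The function and variable names are placeholders, and this greedy infill rule is only one of several possible strategies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def propose_next_parameters(X, y, bounds, n_candidates=2000, seed=None):
    """X: DEM parameter sets already simulated (n_runs x n_params).
    y: calibration error of each run (e.g., deviation from measured bulk responses).
    bounds: array of shape (n_params, 2) with lower/upper limits per parameter."""
    rng = np.random.default_rng(seed)
    surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                         normalize_y=True)
    surrogate.fit(X, y)
    lower, upper = bounds[:, 0], bounds[:, 1]
    candidates = rng.uniform(lower, upper, size=(n_candidates, bounds.shape[0]))
    predicted_error = surrogate.predict(candidates)
    return candidates[np.argmin(predicted_error)]  # lowest predicted calibration error
```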
Categorical-type DEM variables have not yet been included in the calibration procedure when optimization methods are used. Categorical variables are finite numbers of groups or categories that might not have a logical order [13]. For example, shape of particles is a DEM categorical variable that plays an important role during calibration [14]. One can use design of experiments (DoE) methods to include categorical variables in the DEM calibration procedure. However, a high number of simulations might have to be run with no guarantee of finding an optimal set of DEM input parameters [10]. Additionally, iron ore fines and other similar bulk solids (e.g. coal) have an irregular distribution of particle shape [15] as well as fine particle sizes [16]. Modeling accurate particle shapes and sizes for cohesive bulk solids in DEM simulations thus leads to a computation time that is generally impractical for studying industrial bulk handling processes, such as flow in silo [16].
Furthermore, selecting an appropriate contact model from the available options is an important challenge in the DEM calibration. Applying optimization methods without choosing a proper contact model might, for example, lead to an empty solution space or inadequacy in meeting macroscopic bulk behaviors other than the selected calibration targets [7]. A contact model generally includes multiple modules to calculate forces and torques between elements (e.g. particles). Fig. 2 schematically illustrates a contact spring-damper system between two particles, a and b. Here, three main modules are identified: contact force in the normal direction is denoted by f_N, while f_T and τ_R represent force in the tangential direction and rotational torque, respectively. Contact modules can be selected independently of each other. For instance, a rolling friction module can be implemented in various ways to determine rotational torque between two particles [1,17,18]. Therefore, each module of the contact model can be considered as a categorical variable in DEM calibration.
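One possible way to express the three modules of Fig. 2 as independent, interchangeable functions is sketched below. The specific force laws shown (a linear spring-dashpot normal force, a Coulomb-limited tangential spring, and a constant directional rolling resistance) are illustrative choices only, not the contact models calibrated in this paper; swapping any one of them leaves the others untouched, which is what makes the module choice a categorical calibration variable.

```python
import numpy as np

def normal_force(overlap, overlap_rate, k_n, c_n):
    """Linear spring-dashpot normal force (no tension for separating particles)."""
    return max(0.0, k_n * overlap + c_n * overlap_rate)

def tangential_force(tangential_disp, k_t, f_n, mu_s):
    """Linear tangential spring capped by the Coulomb sliding limit mu_s * f_n."""
    f_t = -k_t * tangential_disp
    limit = mu_s * f_n
    return float(np.clip(f_t, -limit, limit))

def rolling_torque(rel_angular_velocity, mu_r, f_n, r_eff):
    """Constant directional rolling resistance opposing relative rotation."""
    if rel_angular_velocity == 0.0:
        return 0.0
    return -np.sign(rel_angular_velocity) * mu_r * f_n * r_eff
```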
By contrast with free-flowing materials, cohesive bulk solids such as moist iron ore fines usually show a stress-history-dependent and cohesive behavior [19]. Their bulk responses, such as shear strength, bulk density, and penetration resistance, depend on the history of applied normal pressure on the bulk specimen [19][20][21]. This stress-history-dependent and cohesive behavior can be simulated by using contact models based on an elasto-plastic adhesive spring [20,[22][23][24][25]. Orefice and Khinast [25] used a multi-stage sequential DEM calibration procedure to model cohesive bulk solids using a linear elasto-plastic adhesive model; the calibration was done by replicating a specific bulk response at each stage, starting with the angle of repose (measured using the funnel test) as the first calibration target. Three continuous DEM variables were included during the calibration; other DEM input parameters, continuous and categorical, needed to be kept constant during their calibration procedure. The multi-stage sequential calibration procedure might fail to meet the following criteria.
- Feasibility. Replicating all the selected bulk responses can be infeasible using chosen values for the input parameters that are constant during the calibration, such as a specific contact module. Therefore, considering the necessity of including multiple calibration targets, the calibration procedure can lead to an empty solution space for one or more of the calibration targets.
- Definiteness, or avoiding "ambiguous parameter combinations" [11]. To meet this criterion, a bulk response independent of the calibration targets needs to be simulated successfully using the calibrated set of DEM input parameters. Additionally, properly selecting all modules of the DEM contact model is a prerequisite. Otherwise, the calibrated set of input parameters might fail to capture a bulk response different than the selected calibration targets.
For example, the "definiteness" criterion was the focus of the automated calibration procedure developed by [26], which is based on a genetic algorithm to replicate the stress-history-dependent and cohesive behavior of bulk solids in the ring shear test. By introducing cohesive forces as well as elasto-plastic stiffness into the DEM calibration procedure, the number of DEM input variables and the number of required bulk responses increase [25,[27][28][29]. For that reason, the abovementioned criteria become important in developing a reliable calibration procedure to simulate the cohesive and stress-history-dependent behavior of bulk solids. As yet, however, no literature has addressed how to ensure that both criteria, feasibility and definiteness, are met in a DEM calibration procedure considering both continuous and categorical DEM input variables. Additionally, calibrating DEM input parameters is still a challenge when a high number (i.e. > 2) of variables in combination with a high number of bulk responses (i.e. > 2) is involved.
In this paper, we develop a reliable multi-step DEM calibration procedure to capture the cohesive and stress-history-dependent behavior of bulk solids. In each step of the calibration procedure, the variable space is narrowed down to be further optimized in the next step. The first step uses a feasibility analysis, based on Latin hypercube design (LHD), to choose a suitable contact model by efficiently searching for a non-empty solution space. This ensures that the calibration procedure meets the "feasibility" criterion. The second step screens the significant DEM variables to quantify their influence on the selected bulk responses. This allows us to find an optimal combination of DEM input variables in the third step by applying a surrogate modeling-based optimization method. In the third step, we use a different set of calibration targets, compared to the first and second steps, to consider the "definiteness" criterion. The final step is to verify the adequacy of the optimal combination in replicating the cohesive and stress-history-dependent behavior for several bulk responses, such as bulk density, shear strength, and penetration resistance.
Fig. 2. A contact spring-damper system between two particles, including normal, tangential, and rotational directions.
DEM calibration procedure: a multi-step optimization framework
In general, a calibration procedure aims at identifying an optimal combination of DEM input parameters, X = x_1, ..., x_Ns, that produces simulated bulk responses adequately similar to responses captured in physical laboratory or in-situ tests, Y = y_1, ..., y_Ny [5]. Ns is the number of DEM input parameters and Ny the number of calibration targets. Bulk responses such as bulk density and shear strength thus need to be determined first, using appropriate physical tests. This allows us to set calibration targets and to quantify the difference in bulk responses between simulated and physically determined values. To ensure that the feasibility and definiteness criteria are satisfied for multiple calibration targets, a multi-step DEM calibration procedure considering categorical input parameters is proposed in Fig. 3. The following four steps are included: (I) feasibility; (II) screening of DEM variables; (III) surrogate modeling-based optimization; and (IV) verification.
To apply surrogate modeling-based optimization, the parameter space needs to be searched effectively to be able to approximate Y'. Accordingly, F(X) maps relationships between the new calibration targets, Y = y_1, ..., y_My, and the (significant) DEM variables.
Although the full factorial design can be used to create multi-variate samples, all the possible combinations between significant DEM variables must then be included. This leads to a high number of simulations needing to be done. Fractional factorial designs, such as Taguchi [30], Plackett-Burman [31], and Box-Behnken [32] designs, can be used to generate the multi-variate samples required for surrogate modeling without the need to create all the possible combinations of variables. For example, if a full factorial design is used for 4 input variables having 3 levels each, that leads to 3^4 = 81 combinations to run. Using the Taguchi (orthogonal) method, a fractional factorial design can be created by running only 9 or 27 possible combinations. The accuracy of the surrogate model is evaluated using the coefficient of determination, R². This coefficient quantifies the surrogate model's accuracy in representing the variability of values obtained from DEM simulations. To ensure that the surrogate model converges to a verifiable X*, a minimum R² value of 0.75 is required for all calibration targets. Otherwise, more samples are used to train the surrogate model.
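To make the run-count comparison above concrete, the sketch below contrasts a full factorial design with a standard Taguchi L9 orthogonal array for 4 factors at 3 levels each. This is a generic illustration in Python; the coded levels and the L9 matrix are textbook values and are not taken from this study's design.

```python
# Minimal sketch: full factorial vs. a Taguchi L9 orthogonal array
# for 4 factors at 3 coded levels each (levels 0, 1, 2 are illustrative).
from itertools import product

levels = [0, 1, 2]
full_factorial = list(product(levels, repeat=4))
print(len(full_factorial))   # 3**4 = 81 runs

# Standard L9(3^4) orthogonal array: 9 runs, one row per run, one column per factor.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]
print(len(L9))               # 9 runs instead of 81
```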
Next, the response optimizer searches for an optimal combination of input variables, X*, that jointly meets the set of calibration targets, Y. To find X* using the surrogate model, we use the response optimizer toolbox available in Minitab [33].
The mean of absolute relative differences is used to quantify the error in the verification step. If y and y' represent the measured bulk responses in the experiment and the simulation, respectively, then |e|_mean is determined according to Eq. (1) for a number of bulk responses, N_e. In the current study, an |e|_mean ≤ 10% is considered an acceptable outcome during verification.
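Eq. (1) itself is not reproduced in the extracted text, so the snippet below assumes the usual definition of a mean absolute relative difference between experimental and simulated responses; the numbers are placeholders, not measured data.

```python
import numpy as np

def mean_abs_relative_error(y_exp, y_sim):
    """Mean of absolute relative differences between measured (experiment)
    and simulated bulk responses, expressed as a percentage."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    return float(np.mean(np.abs(y_sim - y_exp) / np.abs(y_exp)) * 100.0)

# Illustrative values only (not the paper's data):
e_mean = mean_abs_relative_error([19.4, 2668.0], [18.7, 2650.0])
print(f"|e|_mean = {e_mean:.1f}%  ->  accepted: {e_mean <= 10.0}")
```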
Therefore, in each step of the calibration procedure the variable space is narrowed down to be further optimized in the next step. In the final step, a verified parameter set is found by checking |e|_mean.
DEM calibration targets: Y
In this study, DEM calibration targets are set to values reported from our comprehensive measurement campaign on cohesive iron ores [19]. Bulk property variability of cohesive iron ores has been characterized using the following laboratory tests: (A) Schulze ring shear test; (B) ledge angle of repose test; and (C) consolidation-penetration test. Additionally, three influencing parameters related to bulk properties were varied in the laboratory tests: (1) iron ore sample; (2) moisture content, denoted by MC; and (3) vertical consolidation pressure, denoted by σ. The results obtained in the laboratory tests listed above (A, B, and C) are used in the current study to set DEM calibration targets. During the calibration procedure, two out of three influencing parameters, MC and σ, are considered as sources of possible bulk property variability. Below we describe the characteristics of the selected bulk solid sample as well as the measured bulk responses.
Bulk solid sample
The bulk solid sample is a sinter feed type of iron ore from the Carajas mines, one of the largest iron ore resources on earth [34]. The average density of the particles is 4500 kg/m³, with a standard deviation of 125 kg/m³. The median particle size, d_50, is equal to 0.88 mm [35]. The dry-based moisture content was determined according to the method described in [36], in which the sample is dried using a ventilated oven. This resulted in MC = 8.7%. An overview of the measured properties of the sample is presented in Table 1.
Measured bulk responses
Table 2 displays physically measured bulk responses of the sample using the ring shear and ledge angle of repose tests when σ_pre ≤ 20 kPa and ΔMC = ±2%. Pre-consolidation or pre-shear stress, σ_pre, is a normal confining pressure that is applied initially. In the ring shear test, for example, a normal confining pressure of 20 kPa is applied initially during the pre-shear stage, and next a normal confining pressure of 2 kPa (σ_shear) is applied. Fig. 4 shows the results of shear stress measurements in the ring shear test, including one pre-shear stage and one shearing stage. In general, σ_shear is smaller than σ_pre, which allows us to investigate a stress-history-dependent bulk response, such as shear strength in the case of shear tests. The ledge angle of repose test has been conducted under no pre-consolidation stress, which represents the free-surface flow of bulk solids under gravity. The maximum and minimum values of the physically measured bulk responses are shown for ΔMC up to ±2%, compared to the as-received condition. By considering the maximum and minimum measured values of the bulk responses, extreme values can be included in the feasibility evaluation step of the DEM calibration procedure. In other words, the feasibility is evaluated for a range of bulk response values.
According to the Mohr-Coulomb equation, the shear strength of the bulk material, τ_s, is often approximated by Eq. (2) [37]:

τ_s = c + σ_n·tan(φ)    (2)

where φ indicates the angle of internal friction and c is the shear strength of the bulk material when σ_n = 0, thus denoting the cohesion of the bulk material. Eq. (2) suggests that increasing the normal stress σ_n decreases the relative contribution of c to the shear strength. Additionally, increasing σ_n results in a higher contribution of particle-particle friction to the shear strength.
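A small numerical illustration of Eq. (2) follows; the cohesion and friction angle used here are arbitrary placeholder values chosen only to show how the cohesion term loses relative weight as the normal stress grows.

```python
import numpy as np

def shear_strength(sigma_n_kpa, cohesion_kpa, phi_deg):
    """Mohr-Coulomb approximation: tau_s = c + sigma_n * tan(phi)."""
    return cohesion_kpa + sigma_n_kpa * np.tan(np.radians(phi_deg))

# Placeholder parameters: c = 1.5 kPa, phi = 35 degrees.
for sigma_n in (2.0, 20.0):  # kPa, roughly the shearing and pre-shear levels
    tau = shear_strength(sigma_n, cohesion_kpa=1.5, phi_deg=35.0)
    print(f"sigma_n = {sigma_n:4.1f} kPa  ->  tau_s = {tau:.2f} kPa")
```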
Fig. 3 (step descriptions): Step I, Feasibility: searching for a feasible solution space that covers the selected bulk responses; using DoE techniques, the variable space can be searched effectively with a minimum number of sampling points; X is feasible if a satisfactory coverage of the solution space is reached. Step II, Significant DEM variables: a sensitivity analysis to identify the significance of the DEM variables; one-variable-at-a-time (OVAT) is the most suitable DoE technique for this step. Step III, Surrogate modelling-based optimization: to ensure that the definiteness criterion is met, a different set of calibration targets is used in this step, compared to previous steps; F(X) maps relationships between the new calibration targets and the (significant) DEM variables, X. Step IV, Verification: |e|_mean is used to quantify the error; the definiteness of X* is confirmed if bulk response(s) different than the calibration targets are simulated successfully (verify X* for various bulk responses, i.e. |e|_mean ≤ 10%), yielding a verified X*.
The wall friction was also determined in [19]; this was done using the ring shear test by applying small adjustments according to [38]. The test was done with a σ_pre equal to 20 kPa and then the wall friction was measured for eight different levels of σ_shear between 2 and 17 kPa. The wall friction measurements resulted in a wall yield locus with an average wall friction angle of 19° and a negligible adhesion strength of 0.1 kPa. Table 3 displays the measured bulk responses of the sample using the consolidation-penetration test when σ_pre ≥ 65 kPa and ΔMC = 0%. This test procedure is designed to represent the penetration resistance of iron ore cargoes during ship unloading when grabs are being used [39]. To consider the stress-history dependency, two levels of σ_pre are included in the calibration procedure, equal to 65 and 300 kPa, respectively. As the first bulk response parameter, the accumulative penetration resistance [J] on the wedge-shaped penetration tool is determined by integrating the reaction force over penetration depth [40]. The second measured bulk response in the test is the bulk density after removing σ_pre. For example, after removing a σ_pre of 300 kPa, the bulk density was measured according to the procedure described in [39], which for this sample was equal to 2807 kg/m³ on average over three test iterations. Therefore, bulk property variability of the cohesive iron ore sample has been determined under variation of confining pressure as well as moisture content. This provides a comprehensive set of measurement data to be used in the DEM calibration procedure (illustrated in Fig. 3).
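The accumulative penetration resistance described above is obtained by integrating the reaction force on the penetration tool over depth. The sketch below shows one way such an integral could be evaluated with the trapezoidal rule; the force signal is invented for illustration and does not correspond to the measured data in Table 3.

```python
import numpy as np

def accumulative_penetration_resistance(depth_m, force_N):
    """Trapezoidal integration of the reaction force over penetration depth [J]."""
    depth = np.asarray(depth_m, dtype=float)
    force = np.asarray(force_N, dtype=float)
    return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(depth)))

# Illustrative signal only: a force ramp up to 70 mm depth.
depth = np.linspace(0.0, 0.070, 200)          # m
force = 3.0e3 * depth / 0.070                  # N, made-up linear ramp
print(f"W_70 ~ {accumulative_penetration_resistance(depth, force):.1f} J")
```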
Contact modules in normal and tangential directions: elastoplastic adhesive
The EDEM (v2020) software package is used to create and run the simulations. To capture the stress-history-dependent bulk responses as well as the cohesive forces, an elasto-plastic adhesive contact model built into the software package is used. This is formulated in [41] under the name Edinburgh Elasto-Plastic Adhesive (EEPA). For details, refer to [41] or [20,26,42]. This model has been used successfully by [20,26] to simulate bulk responses of cohesive bulk solids.
A schematic diagram of the EEPA contact spring for the normal direction, in terms of f-δ (force-overlap), is provided in Fig. 5. The contact spring consists of three main spring-based parts during loading and unloading, as well as a constant pull-off force, f_0.
Part 1. The contact starts with the loading part, with a spring stiffness of k_1, when the distance between the centers of two approaching particles is smaller than the sum of their radii. The non-linear mode of the contact module is used in the current study by setting the slope exponent value to 1.5.
Part 2. By reducing the contact force, unloading commences; during this process, the plastic deformation is replicated by switching the spring stiffness to k_2. The plasticity ratio, λ_p, determines the ratio between k_2 and k_1.
Part 3. As unloading continues, a minimum attractive (adhesive) force is reached, denoted by f_min. This limit is determined using Eq. (3) [41].
Here, Δγ and a are the surface adhesion energy [J/m²] and the contact radius [m], respectively. If the unloading of the contact spring continues, the f-δ curve follows the adhesive path with a stiffness of -k_adh. In this study, an exponent value equal to 1.5 is used for δ in part 3, which is similar to the slope exponent value used in part 1. Therefore, during the calibration procedure the adhesion path (part 3) can be controlled by adjusting f_0 and Δγ as DEM input variables. The tangential stiffness of the contact model is varied as a multiplier, k_t,mult, of the initial loading stiffness.
Simulation setups
DEM simulation setups are created representing the physical laboratory tests in the geometry scale of 1:1.
(A) Ring shear test
The ring shear test device used in [19] to characterize the shear strength of the iron ore sample is the same as the device used in our earlier study [26]. For that reason, the same simulation setup and procedure is applied in this study. Fig. 6a and b illustrate the components of the ring shear test in the laboratory and simulation environments, respectively. In the simulation setup, we use cylindrical periodic boundaries to simulate a quarter of the shear cell (Fig. 6b). This allows us to reduce the computation time by 50% with no undesirable influence on the simulation accuracy [26].
Table 3 (displaced fragment; columns appear to be symbol, unit, mean, spread): accumulative penetration resistance at 70 mm depth when σ_pre = 300 kPa: W_70,300, J, 121, 5; bulk density after applying σ_pre = 65 kPa: ρ_b,65, kg/m³, 2668, 65; bulk density after applying σ_pre = 300 kPa: ρ_b,300, kg/m³, 2807, 14.
(B) Ledge angle of repose test
A ledge test method, according to [1], for measuring the static angle of repose, α_M, of the cohesive iron ore sample was used in [19]. The test setup and its procedure are also referred to by other names in the literature, such as "shear box" [44] and "rectangular container test" [8]. Fig. 7a and b show the test box dimensions, including the slope formed after failure, in the laboratory and simulation environments, respectively. The container is 250 mm high, but it has been filled only up to the flap opening's height of 200 mm. After opening the flap, the bulk solid can thus flow out of the container. Once a static angle of repose is created, α_M is quantified by applying the linear regression technique to fit a line on the slope of the bulk surface.
(C) Uni-axial consolidation-penetration test
Fig. 8 shows the three main components of the consolidation-penetration test: a container, a lid, and a wedge-shaped penetration tool. The lid's surface area is equal to the container's sectional area. The wedge-shaped tool is 200 mm long, which allows it to create a plane contact with particles.
The procedure of the simulated consolidation-penetration test is illustrated in Fig. 9.
- First, the container is filled with DEM particles. A stable situation is reached when the maximum velocity of the particles is smaller than 0.1 mm/s.
- Second, the lid is moved downward with a constant velocity of 10 mm/s to create a consolidated situation. This is continued until the desired pressure on the lid is reached (i.e. 65 and 300 kPa).
- Third, the sample is unloaded by moving the lid upward with a velocity of 10 mm/s.
- Finally, the wedge-shaped tool is moved downward with a velocity of 10 mm/s, similar to the laboratory test procedure [19].
Initial sampling strategy for step I (Feasibility) using LHD
The initial sampling aims at evaluating the feasibility of capturing the calibration targets using the selected DEM input constants and variables. This allows us to select a suitable solution, including levels of categorical variables and constants. Two simulation setups, the ring shear and ledge angle of repose tests, are used in step I, feasibility. This means that the shear flow in two different test setups is simulated for σ_pre of up to 20 kPa. Three different bulk responses, τ_pre=20, τ_2,20, and α_M (angle of repose), are analyzed using DEM simulations for various combinations of input parameters.
During a calibration procedure, DEM input parameters, X = x_1, ..., x_Ns, are divided into two groups: input variables and constants. Levels of the input variables are varied within a range to meet the calibration targets (Fig. 1). Levels of the DEM input constants are chosen based on available literature, if applicable; otherwise, their level is selected based on rational assumptions, as recommended by [25], or by the direct measurement method, as discussed in [5]. For example, modeling the actual shape and size distribution of a cohesive iron ore sample leads to a computational time that is impractical [45,46]. For that reason, a simplified representation of particle shape and size can be used to develop a DEM simulation of cohesive iron ore. This technique has been applied successfully by [20,26,47] to model bulk solids that have fine particles with an irregular shape distribution. Nevertheless, the rotational torque between particles needs to be considered; according to [48], two options are possible: (a) introducing a certain level of non-sphericity in the particle shape; and/or (b) suppressing the rotational freedom of the particles. In this study, option (b) is applied, as, compared to using multi-spherical particles, it does not have a negative influence on the computational time. The rotational freedom of particles can be suppressed artificially by either introducing a rolling friction module [17] or restricting the rotation of the particles [1,26,49]. Both techniques are included as a categorical variable in step I, feasibility. The rolling friction module is implemented according to [18]. This implementation was classified as "rolling model C" by [17], so we refer to the rolling friction module as RC in this article. Restricting the rotation of particles is done by applying a counterbalance torque in each time-step as necessary to prevent rotational movement. This leads to an increase in the particles' resistance to rotational torque. Restricting the rotation of particles has been used successfully to resemble realistic material behavior [1,24,40,48]. Additionally, the number of input variables is reduced because, when using the restricted rotation (RR) technique, the rolling friction coefficient does not play a role in the rotational torque.
DEM input variables when RC option is used
Table 4 displays the DEM input variables when the RC option, rolling friction module C, is used. Based on the available literature, the coefficient of static friction between particles, μ_s,p-p, is probably the most influential parameter on the internal shear strength of bulk solids [7,14,28,[50][51][52][53][54][55][56][57][58][59][60]. The coefficient of rolling friction is also usually considered as an influential variable on shear flow [5]. To calibrate the shear flow of cohesive bulk solids, [61] found that a range of 0.2 to 1.0 is reasonable for the coefficients of static friction and rolling friction when rolling model C is used. The particle shear modulus determines the stiffness of the contact spring. Therefore, G, the particle shear modulus, is included as a continuous DEM variable in our investigation. A range between 2.5 and 10 MPa is used for G, which covers values used by other researchers modeling cohesive bulk solids using the same elasto-plastic contact model [20,26].
The constant pull-off force (f_0) and surface energy (Δγ) are included in the calibration to control the magnitude of the adhesive forces in the contact spring. f_0 is varied between -0.0005 and -0.005 N, and Δγ between 5 and 50 J/m². These ranges are expected to be sufficient to capture a realistic shear flow based on the DEM calibration done in [26].
DEM input variables when RR option is used
Table 5 displays the DEM input variables when the RR option, rotation restricted, is used. First, based on our simulation results reported in [26], the ranges of the coefficient of static friction and surface energy are changed compared to the values in Table 4. By restricting the rotation of particles, their mobility decreases and so lower restrictive forces (e.g. cohesive and friction) can be used during the calibration procedure, compared to the case when the RC option is used. The coefficient of static friction is varied between 0.2 and 0.4, while the surface energy is varied between 2.5 and 25 J/m². Second, the ranges of the other input variables are similar to the case when the RC option is used.
DEM input constants
Table 6 presents the other DEM input parameters that are kept constant during the initial sampling for step I, feasibility. The particle density is set to 4500 kg/m³, similar to the measured value (Table 1). As discussed earlier, the representation of the particles' shape and size is simplified. Spherical particles are used and the mean particle diameter is set to 4 mm with a normal particle size distribution with a standard deviation of 0.1. In addition to a reasonable computation time when spherical particles are used, the coarse-graining principles for the elasto-plastic adhesive contact model [46] can be applied during the calibration procedure to further minimize the computation time. For example, the ledge angle of repose simulations are done using coarse-grained particles with a scaling factor of S_P = 2.25, as per [46]. The constant pull-off force and surface energy are scaled with factors of S_P² and S_P, respectively, to maintain comparable bulk responses with the unscaled simulation. For further details of the particle scaling rules, please refer to [46].
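The scaling of the cohesive contact parameters with the coarse-graining factor quoted above (f_0 with S_P² and Δγ with S_P) can be expressed as a small helper; the input values below are placeholders, not calibrated parameters.

```python
def scale_cohesive_parameters(f0_N, delta_gamma, S_P):
    """Scale the constant pull-off force and surface energy for coarse-grained
    particles, using the S_P**2 and S_P factors quoted in the text (after [46])."""
    return f0_N * S_P**2, delta_gamma * S_P

# Placeholder values: f_0 = -0.001 N, surface energy = 10 J/m^2, S_P = 2.25.
f0_scaled, dgamma_scaled = scale_cohesive_parameters(-0.001, 10.0, S_P=2.25)
print(f0_scaled, dgamma_scaled)   # -> -0.0050625 N and 22.5 J/m^2
```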
The tangential stiffness multiplier, k_t,mult, is recommended as 2/3 [62] for non-linear elastic contact springs. According to [63], to maintain simultaneous harmonic oscillatory positions between the normal and tangential elastic springs, a value of 2/7 is recommended. However, no recommendation was found in the literature for selecting k_t,mult when a non-linear elasto-plastic normal spring is used. For that reason, a range of k_t,mult bounded between 0.2 and 1 was used in the ledge angle of repose simulation. Within this range, no significant influence on the simulation stability and simulated bulk responses was found, and therefore k_t,mult is set to 0.4. As suggested by [26], if a negligible adhesion strength is measured in the wall friction test, the Hertz-Mindlin (no-slip) contact model [64] can be used to describe the interaction between particles and geometry. The sliding friction coefficient between particles and wall geometry, μ_s,p-w, is therefore determined directly by Eq. (4), which results in μ_s,p-w = 0.37 for the measured average angle of the wall yield locus, φ_x, of 19° (Section 2.2). The rolling friction coefficient between particles and wall geometry has a negligible influence on the simulated shear stress [65], and therefore μ_r,p-w is set to 0.5.
Initial samples
Using design of experiments (DoE) techniques, parameter spaces, including their levels and possible combinations, can be searched effectively using a minimum number of sampling points. A Latin hypercube design (LHD) is constructed in such a way that each of the parameters is divided into p equal levels, where p is the number of samples. Based on the φ_p criterion [66], the locations of the levels for each parameter are randomly, simultaneously, and evenly distributed over the parameter space, maintaining a maximized distance between points. The LHD is constructed according to the algorithm developed in [67], which satisfies the φ_p criterion for up to 6 parameters. This allows us to include up to 6 DEM input parameters in a feasibility evaluation. Fig. 10 displays the levels of the 5 continuous DEM input variables at S_P = 1 when the RR option, restricted rotation, is used. Forty different samples are created using the LHD to simulate the ring shear and ledge angle of repose tests. Similarly, using the LHD, 40 different samples are created for the 6 continuous DEM input variables (based on Table 4) at S_P = 1 when the RC option, rolling friction module C, is used.
In total, 160 simulations are run during step I, feasibility, which cover 2 categorical variables and 6 continuous variables.
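A Latin hypercube sample such as the 40-point designs described above can be generated with standard tools; the sketch below uses SciPy's quasi-Monte Carlo module. The variable names and bounds are only loosely based on the ranges quoted for the RR option and are meant as an assumption-laden illustration, not a reproduction of the actual design of [67].

```python
import numpy as np
from scipy.stats import qmc

# Five continuous variables (RR option): mu_s,p-p, G [Pa], lambda_p, f_0 [N], dgamma [J/m^2].
lower = [0.2, 2.5e6, 0.1, -0.005, 2.5]
upper = [0.4, 1.0e7, 0.9, -0.0005, 25.0]

sampler = qmc.LatinHypercube(d=5, seed=42)     # space-filling design in [0, 1]^5
unit_samples = sampler.random(n=40)            # 40 points, one level per variable per point
samples = qmc.scale(unit_samples, lower, upper)
print(samples.shape)                           # (40, 5)
```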
Results
In this section, first the simulation results of the initial samples (step I) are presented. Then a feasible solution is chosen to continue the calibration procedure when executing its next steps. Additionally, new samples are created at the beginning of each new step to meet its specific objective.
Step I: feasibility
Fig. 11 displays the simulation results of the 40 initial samples when the RC option, rolling friction module C, is used. Three different bulk responses are quantified:
- shear stress in the pre-shear stage, τ_pre=20;
- shear stress in the shearing stage, τ_2,20; and
- average angle of repose in the ledge test, α_M.
Thus, N_y is equal to 3 in step I, the feasibility evaluation. Simulation results are also compared with the maximum and minimum values that were measured in the laboratory environment (shown in Table 2). For example, τ_exp,max and τ_exp,min are shown using blue and red dashed lines, respectively.
Using the RC option, a range of τ_pre=20 between 6.2 and 12.3 kPa is captured. This shows that the 40 samples created using the LHD could vary τ_pre=20 by around 100%. The maximum simulated τ_pre=20, 12.3 kPa, is around 25% lower than τ_exp,min. This means that simulating a comparable τ_pre=20 is probably infeasible using the RC option. To confirm whether this conclusion is limited to the selected ranges of the 6 DEM input variables, additional simulations using extreme values of the DEM input variables are conducted. Extreme values are selected outside the ranges shown in Table 4. For example, using sample 32, which produced τ_pre=20 = 12.3 kPa, an additional sample is created by increasing the particle shear modulus, G, to 100 MPa. This leads to only a marginal increase in the simulated τ_pre=20. Even though the angle of repose, α_M, is simulated in a range of 43° to 90°, simulating comparable bulk responses is infeasible in the ring shear test. Therefore, according to Fig. 11 we can conclude that an empty solution space is reached when the RC option is used.
Fig. 12 displays the simulation results of the 40 initial samples when the RR option, rotation restricted, is used. The same list of bulk responses as in Fig. 11 is analyzed here, and therefore the feasibility is evaluated for N_y = 3.
First, a range of τ_pre=20 between 13.9 and 26.6 kPa is simulated; this covers both τ_exp,max and τ_exp,min. Second, a range of τ_2,20 between 2.5 and 6.5 kPa is simulated. This range also covers both τ_exp,max and τ_exp,min. Third, a range of α_M between 60° and 90° is simulated; this covers the maximum and minimum values measured in the laboratory environment. Therefore, according to Fig. 12, a non-empty solution space is reached when the RR option is used. However, no sample satisfies all three calibration targets jointly. For example, sample 39 seems to be an optimal parameter set; however, the simulated bulk responses compared to τ_exp,max(pre=20), τ_exp,max(2,20), and α_exp,max have errors, |e|, of 1.13%, 22.53%, and 5.88%, respectively. By establishing mathematical relationships between the input variables and each calibration target, such errors can be minimized. For that reason, the RR option is used in the next steps as a feasible solution to be optimized further.
Step II: significant DEM variables
A one-variable-at-a-time (OVAT) technique is used to create samples that allow us to investigate the direct effect of each DEM variable, x_j, on the simulated bulk responses by running a limited number of simulations.
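A one-variable-at-a-time design is straightforward to generate programmatically: every variable is swept over its own levels while the remaining variables stay at the reference values. The reference values and levels below are illustrative placeholders, not the entries of Table 7.

```python
# Minimal OVAT sample generator (illustrative names and values only).
reference = {"mu_s": 0.3, "G_Pa": 5.0e6, "lambda_p": 0.4,
             "f0_N": -0.001, "dgamma": 10.0, "kt_mult": 0.4}
levels = {"mu_s": [0.1, 0.2, 0.4, 0.6, 0.8],
          "G_Pa": [1.0e6, 4.0e6, 1.6e7, 6.4e7, 1.28e8]}

samples = []
for name, values in levels.items():
    for value in values:
        sample = dict(reference)   # all other variables stay at the reference
        sample[name] = value
        samples.append(sample)
print(len(samples))                # one simulation per generated sample
```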
Table 7 displays the samples created for this step, including 6 DEM input variables at the reference particle scale (S_P = 1), when the RR option is used. This results in 60 samples in total, to be simulated in the ring shear and ledge angle of repose tests. When one variable is changed, the others are maintained at the displayed reference values. The reference values are based on one of the samples that was used in step I. In addition to the 5 DEM input variables that were included in step I, the tangential stiffness multiplier, k_t,mult, is also varied in this step. This allows us to check whether k_t,mult has any significant influence on the selected bulk responses. A similar list of bulk responses, including τ_pre=20, τ_2,20, and α_M, is analyzed in step II. Furthermore, larger ranges for the DEM input variables, compared to the previous step, are used to create samples. This allows us to run a comprehensive sensitivity analysis showing relationships between the DEM input variables and the selected bulk responses. Fig. 13 displays the isolated effects of the 6 DEM input variables at S_P = 2.25 on the simulated angle of repose. Since the ledge test is performed in a rectangular container (as shown in Fig. 7), α_M is always equal to or smaller than 90°. By varying the coefficient of static friction, the maximum possible angle of repose, α_M = 90°, is reached when μ_s,p-p ≥ 0.6. As expected based on the Mohr-Coulomb theory (Eq. (2)), there is a strong positive correlation between μ_s,p-p and α_M, as shown in Fig. 13a. A higher particle-particle friction results in a higher shear strength when the normal pressure and cohesion strength are constant. By contrast, there is a negative correlation between G and α_M, as can be seen in Fig. 13b. By increasing G from 1 to 128 MPa, α_M decreases by around 20°. By increasing G, a lower contact overlap, δ, is created. This is expected to result in lower forces in the adhesive branch of the contact spring (part 3). Increasing G to higher values has a negligible influence on α_M. The ledge angle of repose simulations using λ_p equal to 0 and 0.99 result in unstable simulations, in which the stable situation (as discussed in Section 2.4) is not reached. As shown in Fig. 13c, by increasing λ_p from 0.1 to 0.5, α_M decreases by around 20°, and further increasing λ_p has a negligible influence on α_M. There is a strong positive correlation between Δγ and α_M, showing a non-linear trend near the extreme values (Fig. 13e). According to the Mohr-Coulomb theory (Eq. (2)), a higher cohesion strength results in a higher shear strength.
The constant pull-off force and tangential stiffness multiplier are found to have negligible effects on α_M in the investigated range, as shown in Fig. 13d and f, respectively. The coefficient of static friction, particle shear modulus, surface energy, and plasticity ratio are the significant DEM variables influencing the angle of repose. Fig. 14 displays the results of the OVAT-based sensitivity analysis for simulated τ_pre=20. There is a strong positive correlation between μ_s,p-p and τ_pre=20. According to the Mohr-Coulomb theory (Eq. (2)), a higher angle of internal friction of the bulk material results in a higher shear strength when the normal pressure and cohesion strength are constant. A linear trend seems to exist between these two parameters. The other 5 DEM input variables have a weaker influence on τ_pre=20 compared to μ_s,p-p. The particle shear modulus and surface energy have positive correlations with τ_pre=20.
The surface energy contributes to the cohesion strength of the bulk material (denoted by c in Eq. (2)), and thus to the shear strength too. Fig. 15 displays the results of the OVAT-based sensitivity analysis for simulated τ_2,20. The coefficient of static friction has a strong positive correlation with τ_2,20, similar to its correlation with τ_pre=20. The surface energy plays a more important role in τ_2,20 than in τ_pre=20. Increasing the surface energy, Δγ, from 0 to 25 J/m² causes an increase of more than 200% in τ_2,20. According to the Mohr-Coulomb theory (Eq. (2)), at relatively low vertical pressure values the cohesion strength, c, has a higher contribution to the shear strength, compared to shear flow at high vertical pressure values.
As expected, based on the results of the ledge angle of repose simulations, G has a negative correlation with τ_2,20. This is probably due to a lower normal overlap being created in the contact spring when increasing the value of G. The contact plasticity ratio, λ_p, also has some level of influence on τ_2,20, but not in a predictive manner.
In conclusion, only one input variable, k_t,mult, has a negligible influence on the investigated bulk responses. Therefore, all the other 5 input variables are included in the surrogate modeling-based optimization in the next step.
Step III: surrogate modeling-based optimization
In this step, first the Taguchi method is used to create multi-variate samples covering variations of the 5 significant DEM input variables when the RR option is used. Second, relationships between each calibration target and the DEM input variables are mapped to create F(X). This is done using the multiple linear regression technique. As discussed in Section 2.1, to consider the definiteness criterion, the calibration targets are modified by excluding the ledge angle of repose test and by including W_80,65 and W_70,300 measured in the consolidation-penetration test. This means that four calibration targets are included in step III, and therefore M_y = 4. Additionally, the maximum values of shear strength (shown in Table 2) are used as calibration targets in the simulation of the ring shear test. Third, an optimal set of DEM input parameters is found; these jointly satisfy the four selected calibration targets.
Table 8 presents the levels of the 5 significant DEM input variables at S_P = 1 that are used to create the multi-variate samples. Given the adequate simulated bulk responses in step I, μ_s,p-p is bounded between 0.2 and 0.4. For the same reason, the levels of G are set to 2.5, 5, and 7.5 MPa. Three levels are selected for G to capture any possible non-linear relationship between G and the DEM calibration targets. λ_p is bounded between 0.2 and 0.6. This range is expected to be enough to capture a wide range of plasticity in the contact spring. The two other parameters, f_0 and Δγ, which control the cohesive forces in part 3 of the contact spring, are confounded. In other words, their levels are varied simultaneously in a way that allows us to minimize the number of samples. Thus, 4 coded variables are used in the Taguchi design to create samples. In total, 18 samples are created using the Taguchi method.
As investigated in [1], the reaction force on the wedge-shaped penetration tool is affected by the particle scaling factor. For that reason, the consolidation-penetration simulation is calibrated for only one level of particle size (S_P = 2.25), which is similar to the particle size used in the ledge angle of repose simulations.
Next, the matrix of simulated bulk responses, [Y'], including 4 different bulk responses for the 18 samples, is created. This matrix is used to map relationships between the DEM variables, X, and the simulated bulk responses, Y'. Details of F(X) are presented in Table 9, including the coefficients of the DEM variables in the linear regressions fitted on the simulated bulk responses, Y'. Cte. stands for the constant term in the regression model. Remarkably, in all the fitted linear regression models the coefficient of static friction has the highest level of significance. Values of the coefficient of determination, R², are also presented; in all the regression models, these are higher than 0.75.
Therefore, the multiple linear regression model is found to be adequate for us to continue with response optimization. If insufficient values of R² are reached in this step of the calibration procedure, either a higher number of training samples or more advanced surrogate modeling techniques can be used.
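The surrogate used here, one linear regression per calibration target with an R² acceptance check, can be sketched as follows. The data are random stand-ins for the 18 Taguchi samples and 4 bulk responses; scikit-learn is used as one possible implementation, whereas the study itself relied on Minitab.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(18, 5))                      # 18 samples, 5 significant variables
Y = X @ rng.uniform(size=(5, 4)) + 0.05 * rng.normal(size=(18, 4))   # 4 simulated responses

surrogates, r2_values = [], []
for j in range(Y.shape[1]):                        # one regression per calibration target
    model = LinearRegression().fit(X, Y[:, j])
    surrogates.append(model)
    r2_values.append(r2_score(Y[:, j], model.predict(X)))

print([round(r2, 3) for r2 in r2_values])          # each target should reach R^2 >= 0.75
```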
Fig. 16 presents an optimal set of DEM input variables that jointly satisfies the four different calibration targets in step III with a composite desirability, d_composite, equal to 0.61. The composite desirability, d_composite, represents the geometric mean of the individual desirability values, d, as shown in Eq. (5) and Eq. (6), respectively.
Here, f(X) is the predicted bulk response using the linear regression, and y is the target bulk response that is measured physically. y'_min and y'_max respectively represent the lowest and highest simulated values of a specific bulk response among all the samples in step III. Each row in Fig. 16, except the top one, represents a specific simulated bulk response with its maximum possible d value obtained by finding an optimal set of DEM input variables. For example, the last row represents the response optimization for the shear strength in the pre-shear stage, τ_pre=20. For this bulk response, the physically measured value, y, is equal to 19.4 kPa.
Using the mapped relationship between the DEM variables and the simulated bulk response, y', a combination of variables is found that is predicted to lead to f(X*) = 18.7 kPa. This means that the predicted outcome in the simulation of a ring shear test using the current solution, shown in red, is a τ_pre=20 equal to 18.7 kPa, with d = 0.80.
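Since Eqs. (5) and (6) are not reproduced in the extracted text, the sketch below uses a generic Derringer-style "target-is-best" desirability and its geometric mean; Minitab's response optimizer may differ in detail, so this is an assumption-based illustration with placeholder numbers.

```python
import numpy as np

def desirability(predicted, target, y_min, y_max):
    """Generic target-is-best desirability: 1 at the target, 0 at the extremes."""
    if predicted <= y_min or predicted >= y_max:
        return 0.0
    if predicted <= target:
        return (predicted - y_min) / (target - y_min)
    return (y_max - predicted) / (y_max - target)

def composite_desirability(d_values):
    """Geometric mean of the individual desirability values."""
    d = np.asarray(d_values, dtype=float)
    return float(np.prod(d) ** (1.0 / len(d)))

# Placeholder desirability values for four calibration targets:
print(round(composite_desirability([0.80, 0.55, 0.50, 0.65]), 2))
```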
Verifying the calibration procedure
This section discusses the verification of the calibration procedure, step IV. First, we need to verify whether the outcome of the surrogate modeling-based optimization is adequate. This is done by running simulations using the optimal set of DEM input parameters and comparing the simulated bulk responses to the predicted values, f(X*). Second, |e|_mean is used to compare the simulated bulk responses, using the optimal set, with all the calibration targets, corresponding to the maximum values in Table 2 and the target values in Table 3. Third, the entire yield locus in the ring shear test, including 1 level of σ_pre and 4 levels of σ_shear, is compared between the calibrated simulation and the experiment. Fourth, the wall friction test, as an independent bulk response, is verified for various stress states.
First, the ring shear and consolidation-penetration tests are simulated using the optimal set found in Fig. 16. In Table 10, four different simulated bulk responses are compared with the values predicted using the surrogate-based optimization. The relative difference is within 10% in all cases, and therefore the adequacy of the multiple linear regression technique together with the response optimizer is confirmed for our DEM calibration problem. If large differences between y' and f(X*) had been captured, a higher number of samples or more advanced regression techniques could have been used to minimize the relative difference.
Second, |e|_mean is used to compare the simulated bulk responses, using the optimal set, with all the calibration targets, corresponding to the maximum values in Table 2 and the target values in Table 3. In other words, bulk density, shear strength, ledge angle of repose, and accumulative penetration resistance values are verified here. The shear stress in the pre-shear and shearing stages is simulated with |e| equal to 1% and 12.5%, respectively. Bulk density values in loose and pre-sheared conditions, ρ_b,0 and ρ_b,20, are simulated with |e| equal to 5.8% and 1.4%. On average, a relative deviation of 7% is captured in the ring shear test, including four calibration targets. In the consolidation-penetration test, four different calibration targets are evaluated, including accumulative penetration resistance and bulk density values measured at two different pre-consolidation levels. In the consolidation-penetration test, the accumulative penetration resistance parameters, W_80,65 and W_70,300, are simulated with |e| smaller than 10%. Additionally, bulk density values at two different levels of σ_pre, 65 and 300 kPa, are simulated with negligible |e| values (smaller than 1%). This confirms that, using the elasto-plastic adhesive contact model, the calibration procedure was successful in capturing the history-dependent behavior of the cohesive iron ore sample in terms of penetration resistance and bulk density. Finally, the ledge angle of repose, which was not used during the surrogate modeling-based optimization, is replicated with |e| = 7.1%. Therefore, considering the simulated bulk density values in four different stress states and α_M, the definiteness criterion is met using the optimal set of calibrated parameters, X*. Third, the entire yield locus is verified for the ring shear test conducted with σ_pre=20. Fig. 17 compares the results of the ring shear test simulation using the optimal parameter set. Comparable shear stress values are measured in both simulation and experiment, with |e|_mean = 6.7%. This verifies that the calibration procedure is able to replicate the shear strength in various stress states and is able to capture the non-linear yield locus. Finally, wall friction measurements, as a bulk response independent of the calibration targets, are compared in Fig. 18, including 8 different stress states. The simulated wall yield locus shows a linear trend that replicates the experimental values, with |e|_mean = 5.5%. Since the Hertz-Mindlin (no-slip) contact model (without adhesive forces) was used to model the particle-wall interactions, this linear trend could be expected. This finding is similar to the conclusion of [26], obtained by modeling a cohesive coal sample in a wall friction test.
Fig. 16. Finding an optimal set of DEM input variables that jointly satisfies calibration targets using response optimization.
Table 10. Comparing simulated bulk responses using the optimal set with predicted values of surrogate modeling-based optimization.
Conclusions
This paper has established a reliable and novel DEM calibration procedure by incorporating two important criteria: feasibility and definiteness. The DEM calibration procedure was applied successfully to model the cohesive and stress-history-dependent behavior of moist iron ore based on an elasto-plastic adhesive contact model. The definiteness of the calibrated parameter set has been verified using 20 different bulk response values in four test cases, including the ring shear, consolidation-penetration, and wall friction tests.
- The established calibration procedure can be used to calibrate material models when a high number of DEM input variables (e.g. 6) as well as a high number of calibration targets (i.e. > 2) are involved.
- Both continuous and categorical variables can be used in step I, feasibility. Using the Latin hypercube design (LHD) method, it has been shown how a categorical DEM variable (i.e. the rolling friction module) can be included during calibration.
- During the calibration procedure, significant DEM variables can be screened using the one-variable-at-a-time (OVAT) method in step II. For the ring shear and ledge angle of repose simulations, the coefficient of static friction between particles (μ_s,p-p) was found to be the most significant DEM variable. In general terms, this outcome is consistent with findings by other researchers [5]. Particle shear modulus (G), surface energy (Δγ), and contact plasticity ratio (λ_p) were the other significant variables when the elasto-plastic adhesive contact model was used.
- In the current study, we have shown that surrogate modeling-based optimization is applicable when a high number (i.e. ≥ 4) of DEM input variables is involved.
- The combination of Taguchi and multiple linear regression techniques was successful in the surrogate modeling-based optimization, with coefficient of determination values > 0.75 for all the calibration targets.
Further research is recommended to focus on, firstly, validating the calibrated model of the cohesive iron ore in simulating an industrial process (e.g.ship unloader grabs) where all the bulk responses (discussed in Section 4) play a role.Secondly, future researchers should apply the calibration procedure established here to other applications where high numbers of input variables and bulk responses are present.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Main components of a generic DEM calibration procedure.
Fig. 3. Main steps of the DEM calibration procedure considering feasibility and definiteness criteria.
Fig. 4. Schematic shear stress measurements in the ring shear test, including pre-shear and shearing stages.
Fig. 7. The ledge test box to determine the angle of repose, including dimensions: a) laboratory environment [19], side view; b) simulation environment, cross-sectional view.
Table 4. DEM input variables to model interaction between particles when the RC option is used.
Fig. 10. Forty different samples for the RR option at S_P = 1, created using Latin hypercube design for 5 variables.
Fig. 11. Shear strength and angle of repose values captured in 40 samples when the RC option is used: a) τ_pre=20; b) τ_2,20; c) ledge angle of repose (α_M).
Fig. 12. Shear strength and angle of repose values captured in 40 samples when the RR option is used: a) τ_pre=20; b) τ_2,20; c) ledge angle of repose (α_M).
Fig. 13. Isolated effects of 6 DEM input variables at S_P = 2.25 on the average angle of repose: a) coefficient of static friction; b) particle shear modulus; c) contact plasticity ratio; d) constant pull-off force; e) surface energy; f) tangential stiffness multiplier.
Fig. 14. Isolated effects of 6 DEM input variables on the shear stress in the pre-shear stage (τ_pre=20): a) coefficient of static friction; b) particle shear modulus; c) contact plasticity ratio; d) constant pull-off force; e) surface energy; f) tangential stiffness multiplier.
Fig. 15. Isolated effects of 6 DEM input variables on the shear stress in the shearing stage (τ_2,20): a) coefficient of static friction; b) particle shear modulus; c) contact plasticity ratio; d) constant pull-off force; e) surface energy; f) tangential stiffness multiplier.
Table 5. DEM input variables to model interaction between particles when the RR option is used.
Table 7. Sampling for step II, finding significant DEM variables.
Table 11 compares 9 different simulated bulk responses with their target values, which were measured physically using the laboratory tests.Four parameters in the ring shear test are compared, indicating shear strength and bulk density.
Table 8. Levels of DEM input variables at S_P = 1 in step III: surrogate modeling-based optimization.
Table 9. F(X) when i ∈ M_y; mapped relationships between DEM variables and simulated bulk responses.
Table 11
Verification of calibration procedure; comparing simulated bulk responses with their calibration targets. | 2021-06-22T17:55:07.765Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "96ab472912104b832b2262b8a8bf3c842484f721",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.apt.2021.02.044",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "20aa93d42d02a3a91f7a07e27ee989d0dd841215",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
29583171 | pes2o/s2orc | v3-fos-license | Alterations in Metabolism and Diurnal Rhythms following Bilateral Surgical Removal of the Superior Cervical Ganglia in Rats
Mammalian circadian rhythms are controlled by a master pacemaker located in the suprachiasmatic nuclei (SCN), which is synchronized to the environment by photic and nonphotic stimuli. One of the main functions of the SCN is to regulate peripheral oscillators to set temporal variations in the homeostatic control of physiology and metabolism. In this sense, the SCN coordinate the activity/rest and feeding/fasting rhythms setting the timing of food intake, energy expenditure, thermogenesis, and active and basal metabolism. One of the major time cues to the periphery is the nocturnal melatonin, which is synthesized and secreted by the pineal gland. Under SCN control, arylalkylamine N-acetyltransferase (AA-NAT)—the main enzyme regulating melatonin synthesis in vertebrates—is activated at night by sympathetic innervation that includes the superior cervical ganglia (SCG). Bilateral surgical removal of the superior cervical ganglia (SCGx) is considered a reliable procedure to completely prevent the nocturnal AA-NAT activation, irreversibly suppressing melatonin rhythmicity. In the present work, we studied the effects of SCGx on rat metabolic parameters and diurnal rhythms of feeding and locomotor activity. We found a significant difference between SCGx and sham-operated rats in metabolic variables such as an increased body weight/food intake ratio, increased adipose tissue, and decreased glycemia with a normal glucose tolerance. An analysis of locomotor activity and feeding rhythms showed an increased daytime (lights on) activity (including food consumption) in the SCGx group. These alterations suggest that superior cervical ganglia-related feedback mechanisms play a role in SCN-periphery phase coordination and that SCGx is a valid model without brain-invasive surgery to explore how sympathetic innervation affects daily (24 h) patterns of activity, food consumption and, ultimately, its role in metabolism homeostasis.
Mammalian circadian rhythms are controlled by a master pacemaker located in the suprachiasmatic nuclei (SCN), which is synchronized to the environment by photic and nonphotic stimuli. One of the main functions of the SCN is to regulate peripheral oscillators to set temporal variations in the homeostatic control of physiology and metabolism. In this sense, the SCN coordinate the activity/rest and feeding/fasting rhythms setting the timing of food intake, energy expenditure, thermogenesis, and active and basal metabolism. One of the major time cues to the periphery is the nocturnal melatonin, which is synthesized and secreted by the pineal gland. Under SCN control, arylalkylamine N-acetyltransferase (AA-NAT)-the main enzyme regulating melatonin synthesis in vertebrates-is activated at night by sympathetic innervation that includes the superior cervical ganglia (SCG). Bilateral surgical removal of the superior cervical ganglia (SCGx) is considered a reliable procedure to completely prevent the nocturnal AA-NAT activation, irreversibly suppressing melatonin rhythmicity. In the present work, we studied the effects of SCGx on rat metabolic parameters and diurnal rhythms of feeding and locomotor activity. We found a significant difference between SCGx and sham-operated rats in metabolic variables such as an increased body weight/food intake ratio, increased adipose tissue, and decreased glycemia with a normal glucose tolerance. An analysis of locomotor activity and feeding rhythms showed an increased daytime (lights on) activity (including food consumption) in the SCGx group. These alterations suggest that superior cervical ganglia-related feedback mechanisms play a role in SCN-periphery phase coordination and that SCGx is a valid model without brain-invasive surgery to explore how sympathetic innervation affects daily (24 h) patterns of activity, food consumption and, ultimately, its role in metabolism homeostasis.
Keywords: superior cervical ganglion, scgx, circadian rhythm, metabolism, melatonin inTrODUcTiOn The circadian system, a set of biological clocks that regulate almost all physiological and behavioral processes, has evolved to adapt the organism's physiology to cyclic environmental changes (1-4). In mammals, the master clock resides in the suprachiasmatic nuclei (SCN) of the hypothalamus and is mainly synchronized by the light-dark (LD) cycle (5). The circadian system also includes peripheral clocks, entrained by the SCN via neural and humoral cues, such as rhythmically secreted hormones (6)(7)(8), and other SCN-independent cues like food (9).
One of the major physiological processes controlled by the SCN is metabolism, including metabolic rate and circadian rhythms of food intake (3). Food consumption is normally confined to the wake/active phase, while fasting periods occur during the rest/sleep phase, correlating to the anabolic, and catabolic phases of metabolism, respectively (10). Alterations of the circadian pacemaker can lead to metabolic pathologies, such as obesity or metabolic syndrome (11). For example, shift work, chronic forced circadian desynchronization or mutations of clock genes can affect the pattern of food intake and lead to increased levels of circulating triglycerides, and adipose tissue masses resulting in an augmented body weight (12)(13)(14)(15).
Melatonin is a hormone produced by the pineal gland during the dark phase and is considered one of the most important circadian outputs (16). It regulates major physiological processes, including the sleep-wake cycle, and lipid and glucose metabolism (17)(18)(19)(20)(21)(22). The SCN interact with the pineal gland through the sympathetic neurons of the superior cervical ganglia (SCG) (23). This interaction modulates the arylalkylamine N-acetyltransferase (AA-NAT) activity, the main enzyme responsible for melatonin rhythm generation in vertebrates (24). The elimination of the pineal melatonin rhythm, or a reduction of its amplitude, renders the circadian pacemaker a less self-sustained, often damped, oscillatory system (25). On the other hand, forced circadian desynchronization induced by an LD cycle of 22 h in rats (26) or by shift work in humans (27) disrupts rhythmic melatonin secretion.
The SCG are the uppermost ganglia of the paravertebral sympathetic chain and innervate the pineal gland, among other structures (28). Superior cervical ganglionectomy (SCGx) is a reliable model to study the role of sympathetic innervation in neuroendocrine interactions (29)(30)(31). Moreover, SCGx has been used to determine the influences of the circadian clock (i.e., the SCN) on neuroendocrine functions. In this sense, SCGx disrupts the circadian system by depressing melatonin secretion and suppressing its rhythm (32,33), presumably by the inhibition of pineal AA-NAT activity (34). This also results in an abolition of the rhythmic excretion of urinary 6-sulphatoxymelatonin, a melatonin metabolite (35). In addition, the SCG also cover other territories such as other glands, brain areas, and the cardiovascular system, which might also be implicated in metabolic regulation (36)(37)(38)(39)(40)(41).
Taking into account that the lack of melatonin can produce circadian alterations, and that sympathetic innervation from the SCG covers diverse neuroendocrine effectors, the aim of our work was to study if SCGx can affect rat metabolism and whether this is related to an impairment of the circadian clock.
Ethics Statement
All animal procedures were approved by the Institutional Animal Care and Use Committee at the School of Medicine, National University of Cuyo, Mendoza, Argentina (Protocol ID 9/2012) and were conducted in accordance with the National Institutes of Health's Guide for Care and Use of Laboratory Animals and the Animal Research: Reporting In Vivo Experiments (ARRIVE) Guidelines.
Animals
Young (3 months old) male Wistar rats were raised in our colony and maintained in a 12:12 h LD cycle (with zeitgeber time 12, ZT 12, defined as the time of lights off; light intensity averaging 300 lux at the cage level), in a controlled environment with food and water ad libitum.
Locomotor Activity Rhythms
Animals were transferred to individual cages equipped with infrared motion sensors. Locomotor activity was assessed by the interruption of the infrared beam and recorded every 5 min (Archron, Argentina). The locomotor activity rhythm analysis was performed using the "el Temps" program (http://www.el-temps.com). Locomotor activity onset was defined as the 10-min bin that contained at least 50% of the maximum activity/bin followed by another bin of at least another 50% of the maximum activity bin within 40 min. Entrainment to the LD cycle was confirmed by periodogram analysis (χ² test). Phase angle was measured as the difference (in minutes) between activity onset and lights off. Total daytime activity was assessed by the area under the curve (AUC) of the waveform of each animal. Activity was expressed as a percentage of the total activity or as relative activity by comparing post-surgery activity to the activity counts of the 3 weeks previous to the surgery (pre-surgery) as the post-/pre-ratio.
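The activity-onset rule described above (a 10-min bin holding at least 50% of the maximum bin, followed by another such bin within 40 min) can be expressed as a short routine. This is a hypothetical re-implementation for illustration; the actual analysis was done with the "el Temps" program, and the counts below are invented.

```python
import numpy as np

def activity_onset(counts_10min):
    """Index of the first 10-min bin with >= 50% of the maximum bin that is
    followed by another such bin within the next 40 minutes (4 bins)."""
    counts = np.asarray(counts_10min, dtype=float)
    threshold = 0.5 * counts.max()
    for i in np.where(counts >= threshold)[0]:
        if np.any(counts[i + 1:i + 5] >= threshold):
            return int(i)
    return None

# Invented counts: low daytime activity followed by a nocturnal burst.
counts = [2, 1, 3, 2, 40, 10, 55, 60, 30, 20]
print(activity_onset(counts))   # -> 4
```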
Surgery

Bilateral superior cervical ganglionectomy (SCGx) was performed as described by Savastano et al. (31). Briefly, under ketamine (50 mg/kg of body weight)/xylazine (5 mg/kg of body weight) anesthesia, the ventral neck region was shaved and disinfected. The salivary glands were exposed through a 2.5 cm vertical incision and retracted to uncover the underlying muscles. The carotid bifurcations were identified through the carotid triangles and the SCG were removed after sectioning the sympathetic trunks, the external carotid nerves, and the internal carotid nerves. For sham-operated animals, the same procedure was performed but the ganglia were not removed.
Animal Weight and Food Intake Measurements
Body weight and food consumption were monitored weekly at ZT10. After a 3-week pre-surgery baseline, animals were subjected to bilateral SCGx or a sham procedure (n = 9 per group), and body weight and food intake were measured for another 10 weeks. Food efficiency (FE) was analyzed by the body weight/food intake ratio.
The food intake rhythm was analyzed in both groups at week 11. Daytime (i.e., during lights on) and nighttime (during lights off) food intakes were measured daily at the end of the light and dark phases for 10 days (n = 5 per group). Daytime and nighttime
Glycemia and Glucose Tolerance Test (GTT)
At week 10, glycemia was measured at ZT10 using PTS Panels™ test strips for a CardioChek™ brand analyzer (Hannover, Germany) (n = 9 per group).
At week 13, a GTT was performed after an 18-h fast (n = 5 per group). Glycemia was measured as described above, before and 15, 30, 60, and 120 min after glucose administration (orogastric, 3 g/kg of body weight from a 30% solution of D-glucose), at ZT10. The AUC of glycemia vs. time was calculated above each individual baseline (basal glycemia).
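For reference, the incremental AUC used here (area above each animal's own basal glycemia) can be computed as in the sketch below; the function name and the choice to ignore excursions below baseline are ours, and the sampling times follow the text.

```python
import numpy as np

def gtt_incremental_auc(glycemia, times=(0, 15, 30, 60, 120)):
    """Trapezoidal area of the glycemia-vs-time curve above the t = 0 baseline."""
    g = np.asarray(glycemia, dtype=float)
    excess = np.clip(g - g[0], 0.0, None)  # negative excursions ignored (assumption)
    return float(np.trapz(excess, np.asarray(times, dtype=float)))

# example with made-up values in mg/dl at 0, 15, 30, 60 and 120 min:
print(gtt_incremental_auc([80, 150, 170, 130, 95]))
```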
Fat Weight Measurements
At the end of week 13, animals were decapitated under anesthesia, and epididymal, retroperitoneal, mesenteric, and inguinal adipose tissues were collected and weighed (n = 5 per group). Fat weight was expressed as relative to body weight.
Statistical Analysis
Data were expressed as mean ± SEM and analyzed using PRISM5 (GraphPad Software Inc., La Jolla, CA, USA). Statistical differences between means were determined by Student's t-test. For grouped statistical analyses, two-way ANOVA or repeated-measures two-way ANOVA was used with Bonferroni post-tests. p < 0.05 was considered significant and p < 0.01 highly significant.
Results

Global Metabolism Is Affected by Bilateral Superior Cervical Ganglionectomy
To study the effect of SCGx on rat metabolism, animals were subjected to ganglionectomy or a sham procedure in the middle of week 3 (n = 9 per group). Body weight and food consumption were measured, and FE (body weight/food intake ratio) was calculated. Rats subjected to SCGx did not exhibit differences in body weight (Figure 1A) but had significantly lower food intake than sham animals (Figure 1B) throughout the 10 weeks after surgery. An FE analysis (42) showed metabolic differences between the two groups. FE was higher in ganglionectomized animals, revealing that these rats gained more body mass per gram of consumed food than controls (Figure 1C).
Ganglionectomy Increases Daytime Locomotor Activity
Rats subjected to SCGx or sham surgeries (n = 9 per group) were placed individually in cages with infrared sensors to study their activity distribution during the day. An activity rhythm analysis demonstrated that entrainment to the LD cycle and activity phase angle were not affected by ganglionectomy (Table 1; Figure 2A). Moreover, SCGx animals did not show differences in the levels of total activity as post-/pre-surgery ratio (Table 1; Figure 2B; SCGx group: 1.08 ± 0.083; sham-operated group: 0.99 ± 0.042; data expressed as mean of post-/pre-surgery ± SEM). However, locomotor activity of ganglionectomized animals during the lights-on phase increased after surgery and remained higher throughout the 10-week post-surgery interval (Figure 2C). Moreover, the relation between the AUC of daytime activity after and before surgery was significantly higher in the SCGx animals (Table 1; Figure 2D; SCGx group: 5.492 ± 0.4126; sham group: 1.992 ± 0.2212; data expressed as mean of post-/pre-surgery ± SEM). This increase occurs at the expense of a reduced nighttime activity.

[Figure 2 caption fragment: red lines indicate periods in which the system did not record activity. (B) A locomotor activity analysis showed no differences in total activity, as post-/pre-surgery ratio (SCGx group, 1.08 ± 0.083; sham group, 0.99 ± 0.042; mean ± SEM; t-test: p = 0.353; n = 9 per group), but the daytime (lights-on) activity of SCGx animals increased after surgery and remained higher throughout the 10 weeks.]
Ganglionectomy Increases Food Intake during Daytime
We next studied the daily pattern of food consumption, which can be affected by circadian alterations (13). Ganglionectomized animals had a lower level of food intake per day (Figure 3A; 19.06 ± 0.5960 g for SCGx group; 22.80 ± 0.8027 g for sham group, n = 5 per group).
As observed with the activity rhythm, a food intake rhythm analysis revealed increased food consumption during daytime (Figure 3B; 16.68 ± 0.9030% of daily intake for the SCGx group; 6.160 ± 0.2015% for the sham group), and a slightly but significantly lower feeding activity during the night (Figure 3C; 83.48 ± 0.8864% for the SCGx group; 93.63 ± 0.7122% for the sham group).
SCGx Animals Exhibit Lower Basal Levels of Blood Glucose but Higher Adipose Tissue
Six weeks after surgery, a glycemia analysis at ZT10 showed lower levels of blood glucose in SCGx rats (Figure 4A; 48.89 ± 4.464 mg/dl for SCGx group; 78.50 ± 4.392 mg/dl for sham group; n = 9 per group). At week 13, a GTT was performed (n = 5 per group). Surprisingly, there were no differences in glycemia kinetics (Figure 4B) or in the AUC of the GTT (Figure 4C; 935 ± 57.04 mg/dl for SCGx; 1,008 ± 65.66 mg/dl for sham) between ganglionectomized and sham animals.
Discussion
The impact of superior cervical ganglionectomy (SCGx) on hormone secretion, and on blood glucose and insulin release, has been reported before (40,(43)(44)(45)(46), but its role in body weight homeostasis remains to be fully established. In this work, we assessed the impact of SCGx on rat metabolism and diurnal rhythms. Rats subjected to SCGx showed: (1) increased FE (i.e., gained more weight per gram of food consumed); (2) increased activity during the lights-on phase of the photoperiod; (3) increased feeding during daytime; (4) reduced glucose levels, without changes in glucose tolerance, at ZT10; and (5) increased adipose tissue mass.
The SCG provide sympathetic innervation to diverse areas including the hypothalamus, the pineal gland, cephalic blood vessels, the choroid plexus, the eye, the myocardium, the salivary and thyroid glands, and the carotid body (12,40,41). Removal of the superior cervical ganglia can cause loss of vasoconstriction control of brain and pituitary blood vessels (47), changes in cerebrospinal fluid production from the choroid plexus (48), and other central effects in response to partial sympathetic denervation (49). Moreover, abolition of the peripheral sympathetic innervation of the brain by SCGx is associated with several neuroendocrine changes in mammals, which include the disruption of water balance (37), and the alteration of normal photoperiodic control of reproduction (50,51).
As previously mentioned, the mammalian circadian system is held in synchrony by the SCN through endocrine and autonomic outputs (52,53). One of the major endocrine cues is the pineal hormone melatonin. Its synthesis and release are driven by the SCN through a multisynaptic pathway relaying in the SCG (54,55). This interaction determines the rhythmic production of the hormone, whose day-night profile is modulated by daylength (23), encoding photoperiodic changes in the metabolic state (56).
Previous evidence has shown that SCGx decreases the secretion of melatonin and suppresses its rhythm (32,33). The relationship between melatonin and the circadian control of metabolism has been demonstrated before. Pinealectomy and melatonin administration or replacement (57, 58) significantly change body weight, as well as glucose levels and glucose utilization in different tissues (59). In our model, we found decreased levels of glucose at ZT10, but a GTT showed no differences between SCGx and sham-operated animals. In contrast, pineal ablation in rats was shown to increase glucose levels (57). Furthermore, leptin secretion is strongly associated with glucose and lipid metabolism, and has been shown to be modulated by melatonin (60). Moreover, the administration of melatonin in experiments conducted in rats and rabbits induced a reduction in body weight, serum lipids, adiposity, blood glucose, and insulin levels associated with the intake of a high-fat diet, suggesting a protective role of melatonin (20,61,62).
Taking into account our results, SCGx mimics the effect of pinealectomy on the neuroendocrine system only in some aspects, affecting several areas that include, but are not restricted to, the pineal gland. Although we cannot state that all SCGx-induced changes presented here are exerted via a suppressed pineal function, it is tempting to speculate that the diurnal timing of locomotion and feeding might be related to the lack of melatonin feedback to the circadian clock.
On the other hand, SCGx rats exhibit significantly augmented serum corticosterone and adrenocorticotropic hormone levels, and a suppression of their rhythm (35,70). Glucocorticoids (GCs) can stimulate the de novo synthesis of lipids (71). It has been reported that rats exposed to long-term treatment with GCs show a slower body weight gain, reduced food intake, and increased epididymal fat mass (72). Some of the effects reported here might be related to alterations in GC turnover that, in turn, could lead to the increase in FE and lipid accumulation. Indeed, the role of the sympathetic neuro-adipose connections in the regulation of lipolysis and body weight has been studied before (73). Sympathetic denervation leads to an increase in adipose tissue, while nerve stimulation results in fatty acid release, and sympathetic or ganglionic blockade inhibits the mobilization of lipids (74)(75)(76). Leptin production is also under the control of the sympathetic system (77), with participation of the SCG (78).
Regarding light synchronization, it has been demonstrated that pinealectomy accelerates the re-entrainment of rats to a new LD schedule (79)(80)(81)(82). Moreover, in rodents, melatonin administration synchronizes free-running rhythms and accelerates re-entrainment after phase shifts of the LD cycle (83)(84)(85), and reinforces entrainment to shortened 22 h LD cycles in both SCGx and pinealectomized rats (86). We studied the effect of SCGx on the entrainment to the LD cycle and found no significant differences in period, phase angle, or total locomotor activity between SCGx and sham-operated animals. However, SCGx rats showed significant differences in activity during daytime (lights on). In addition, the food intake analysis evidenced augmented food consumption during daytime, which may correlate with the activity bouts during the light phase.
Also, it was previously observed that bilateral removal of the SCG delays the synchronization of feeding rhythms with a newly imposed diurnal lighting regimen, but, again, the response to pinealectomy was different (87). In fact, the elimination of pineal rhythmicity cannot account for all of the effects of SCGx on photic entrainment of feeding and locomotor activity rhythms. It can be suggested that SCGx alters the sympathetic innervation of hypothalamic structures implicated in the neural control of feeding, affecting the diurnal rhythm of food intake.
Rhythms in metabolism are orchestrated by the SCN and by other inputs from different areas of the hypothalamus, like the mediobasal region, which plays a significant role in metabolic homeostasis (88)(89)(90)(91)(92)(93). Other areas, like the dorsomedial hypothalamus, have an important role as a component of the SCN-independent food-entrainable oscillator (94)(95)(96)(97). The circadian regulation of body weight depends on the integration of multiple signals from several hypothalamic areas, including the SCN, the arcuate nucleus, the ventromedial hypothalamic nucleus, and the paraventricular nucleus, that control appetite and food intake, deposition of fat, and energy expenditure (11,53,98). Melatonin not only couples circadian cues to many body functions but might also be a key player in the regulation of basal metabolic rate (99), independently of other SCG-innervated territories, such as the hypothalamus. In this sense, the results shown in this work provide evidence suggesting that SCGx may be affecting metabolism by changing the feeding pattern (i.e., increasing feeding during daytime), acting over peripheral clocks without affecting the SCN.
In conclusion, these findings provide insights into the metabolic and diurnal rhythms of ganglionectomized rats. SCGx is not only a good model to study the influence of the circadian clock on neuroendocrine functions, but also a reliable approach to investigate the relationship between the circadian system and metabolism, as well as the role of SCG innervation in the synchronization of the master circadian clock with the peripheral clocks, especially the ones that drive metabolic variables. | 2018-01-09T18:04:05.034Z | 2018-01-09T00:00:00.000 | {
"year": 2017,
"sha1": "18968fb6571ba1b03c925ab5279b2ef294a85958",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2017.00370/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18968fb6571ba1b03c925ab5279b2ef294a85958",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
203836151 | pes2o/s2orc | v3-fos-license | Cross lingual transfer learning for zero-resource domain adaptation
We propose a method for zero-resource domain adaptation of DNN acoustic models, for use in low-resource situations where the only in-language training data available may be poorly matched to the intended target domain. Our method uses a multi-lingual model in which several DNN layers are shared between languages. This architecture enables domain adaptation transforms learned for one well-resourced language to be applied to an entirely different low-resource language. First, to develop the technique we use English as a well-resourced language and take Spanish to mimic a low-resource language. Experiments in domain adaptation between the conversational telephone speech (CTS) domain and broadcast news (BN) domain demonstrate a 29% relative WER improvement on Spanish BN test data by using only English adaptation data. Second, we demonstrate the effectiveness of the method for low-resource languages with a poor match to the well-resourced language. Even in this scenario, the proposed method achieves relative WER improvements of 18-27% by using solely English data for domain adaptation. Compared to other related approaches based on multi-task and multi-condition training, the proposed method is able to better exploit well-resourced language data for improved acoustic modelling of the low-resource target domain.
INTRODUCTION
In automatic speech recognition (ASR), the problem of building acoustic models that behave robustly in different usage domains is still an open research challenge, despite the emergence of deep neural network (DNN) models. Several approaches have been proposed in recent years to adapt well-trained DNNs from a source domain to a new target domain, perhaps with limited training data. Examples include data augmentation strategies [1]; the use of auxiliary features such as i-vectors [2], posterior or bottleneck features [3,4] trained on source-domain data; adapting selected parameters [5,6,7]; adversarial methods [8]; as well as simple yet effective approaches such as applying further rounds of training to DNNs initialised on source data. The common ground in the vast majority of these works is that some transcribed data (even if usually a limited amount) from the target domain is available for adaptation of the acoustic models. This assumption, reasonable for well-resourced languages (WR), may not hold in the case of low-resourced languages (LR) for which even the amount of data available in the source domain may be very limited, and it is expensive or impractical to arrange for transcription of data from a new domain. This is the scenario tackled in the IARPA MATERIAL programme. The programme seeks to develop methods for searching speech and text in low-resource languages using English queries. In particular, ASR systems must operate on diverse multi-genre data, including telephone conversations, news and topical broadcasts. However, the only manually annotated training data available is from the telephone conversations domain.

(Acknowledgements: This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Air Force Research Laboratory (AFRL) contract #FA8650-17-C-9117. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, AFRL or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Alberto Abad was supported by Portuguese national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UID/CEC/50021/2013.)
One approach to this problem is to collect a corpus of untranscribed data from the target domain in the LR language (for example, by web-crawling) and use an ASR system built for the source domain to create an automatic transcription, which is then used to train domain-adapted models. This semi-supervised approach to DNN training has been used successfully, e.g. [9,10,11]. However, the technique requires careful confidence-based data selection, and is very sensitive to the performance of the source system on the target data. Another drawback, when rapid deployment to a new domain is required, is the need to run computationally expensive decoding on large quantities of data in order to harvest sufficient quantities of training material.
In this work, inspired by the challenges posed by the MATERIAL programme, we adopt a completely different approach: we explore whether it is possible to transfer a specific domain transform learned in a WR language to a LR language for which no target training data is available at all, in other words, is a method for adaptation between two given domains portable across languages? We thus aim to improve the performance of a LR ASR system in a new target domain by using only data of a WR language in both the source and target domains. To this end, we propose an adaptation scheme that uses multi-lingual AM training to enable cross-lingual sharing of domain adaptation techniques. Then, based on the hypothesis that initial layers of a DNN encode language-independent acoustic characteristics, we are able to transfer the adapted layers learned for one target domain from one language to another.
For the development of the proposed cross-lingual domain adaptation approach, it is more convenient to select a pair of languages for which data is available from both source and target domains in each language, enabling oracle experiments to be carried out. Hence, in this work we initially use English as the WR language and pretend that Spanish is an LR language. As in the real MATERIAL task, we choose conversational telephone speech (CTS) as the source domain and broadcast news (BN) as the target domain. The rest of this paper is organized as follows. Section 2 describes the proposed cross-lingual domain adaptation approach. Then, the experimental setup, including corpora and details on the architecture of the developed ASR systems, is reported in section 3. Finally, the experimental evaluation is presented in section 4 before the final concluding remarks.
CROSS-LINGUAL DOMAIN ADAPTATION
The main objective of this work is to propose an adaptation scheme for DNN-based acoustic models that allows for cross-lingual domain adaptation of an LR language system trained for one source domain into a target domain using solely adaptation data of a WR language. Considering that the role of the DNN is to learn a non-linear mapping between the acoustic features (e.g. MFCC) and phoneme-related classes (e.g. senones), it is a common interpretation that the initial layers of a phonetic network are expected to encode lower-level acoustic information, while deeper layers codify more complex cues closer to phonetic classes [12]. Thus, following this interpretation, we hypothesize that the initial layers of a phonetic network encode basic acoustic information that is independent of the language and the task at hand, while the later ones are specific to each language. Under this interpretation, we suggest that the modifications that could be applied to the initial layers of a phonetic network in any language to adapt to the specific characteristics of a new domain should be similar (and transferable) among different languages. To take advantage of this possibility, it is necessary to design a network architecture in which parameter transforms can be meaningfully shared between the LR and WR languages, followed by a set of final language-specific layers. This solution can be attained through multi-task learning and it is the backbone of our proposed scheme. As can be seen in Figure 1, the process consists mainly of three steps: 1) a multi-lingual network is trained with data of both the LR and WR languages in the source domain (left); 2) the shared layers are adapted using WR data of the target domain (center); and 3) the adapted shared layers are transferred back to the original LR language network, resulting in a domain-adapted version of the LR network (right).
Multi-lingual training
Multi-task learning [13] refers to the process of simultaneously learning multiple tasks from a single data set that contains annotations for different tasks. Typically, the network architecture consists of some initial layers that are shared by the multiple tasks and some final task-specific layers, one for each considered task. Back-propagation is applied for each task alternately, using all the training data propagated through both the task-specific and shared layers. This type of learning provides an improved regularization effect in the resulting networks compared to conventional single-task learning approaches and has been used successfully in single-language acoustic modelling [14,15].
This type of approach has also been successfully applied in ASR using data from multiple languages to learn multi-lingual networks, in which each task objective corresponds to the phoneme (senone) classification of one of the different languages [16,17]. In general, multi-lingual learning has been shown to be particularly beneficial when languages with limited training resources are involved. In this work, we use multi-lingual learning to train an initial network using data from both the LR and the WR languages in the source domain. Hence, the source multi-language network has a set of shared layers followed by two language-specific (LR and WR) sets of final layers.
Domain adaptation
Given the multi-lingual architecture described previously, we adapt the shared components of the network using target-domain data of the WR language whilst freezing the remaining language-specific layers. By doing so, our expectation is that the transformations that the newly adapted network learns will be language-independent and will mostly be related to the particular acoustic characteristics of the new data. Keeping the upper layers frozen ensures that the newly adapted lower layers of the network will continue to be appropriate as inputs for the layers specific to the LR language, despite no LR data being used for the adaptation.
In general, one may explore any of the well-researched strategies for network weight adaptation, such as LHUC [6] or LIN [18]. In this work, given that substantial quantities of target domain data are available in the WR language, we simply adapt the weights of a selected subset of layers of the shared network (initialized with the weights learned in the multi-lingual learning stage) through simple backpropagation updates, with a varying number of training epochs and an appropriately chosen learning rate.
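As a concrete illustration of this adaptation step, the fragment below freezes everything except the first few shared layers of a trained multi-lingual model and runs a short pass over WR target-domain data. It is only a PyTorch sketch of the idea (the systems in this paper are Kaldi TDNNs); the `model.shared` attribute, the `lang`-routed forward call and the data loader are placeholder assumptions.

```python
import torch
import torch.nn as nn

def adapt_shared_layers(model, wr_target_loader, n_adapt_layers=3,
                        lr=1.5e-3, epochs=1, device="cpu"):
    """Update only the first `n_adapt_layers` shared layers on WR target data;
    the remaining shared layers and both language-specific heads stay frozen."""
    for p in model.parameters():
        p.requires_grad = False
    trainable = []
    for layer in model.shared[:n_adapt_layers]:      # assumes an indexable shared trunk
        for p in layer.parameters():
            p.requires_grad = True
            trainable.append(p)

    optimizer = torch.optim.SGD(trainable, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for feats, senones in wr_target_loader:
            optimizer.zero_grad()
            logits = model(feats.to(device), lang="WR")  # WR head, kept frozen
            loss_fn(logits, senones.to(device)).backward()
            optimizer.step()
    return model
```

After this pass, the adapted shared trunk is simply reused with the untouched LR head (step 3 of Figure 1), so no LR data is needed at any point.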
Corpora
In all experiments we take conversational telephone speech (CTS) as the source domain, for which transcribed training data is available for all languages. For English and Spanish, we train on data from the Fisher corpus [19] (~200 hours and ~163 hours respectively). Note that for the former, we use only a subset of the full corpus. For Tagalog and Lithuanian, we use data from the IARPA Babel full language packs (80 hours and 40 hours respectively). CTS data is all sampled at 8 kHz. In the MATERIAL task, the target domain is a mixture of the broadcast news (BN) and topical broadcast (TB) domains, both with wideband 16 kHz audio. We approximate this target in English and Spanish by using broadcast news (BN) data from HUB4 [20], with 150 hours of English data, used for adaptation, and ~30 hours of Spanish, used for oracle experiments only. For each corpus we use the standard evaluation sets: the 1997 HUB4 English Evaluation set is used for English BN; and for Tagalog and Lithuanian, we use the BN and TB "Analysis" test sets provided by the MATERIAL programme.
System description
The Kaldi toolkit [21] has been used for the development of all the ASR systems. To obtain the set of language-specific senones and frame-level phonetic alignments needed for training the DNNs, initial HMM-GMM systems have been built for each language and domain. HMM-GMM training follows the conventional recipes in Kaldi, consisting of several stages of refinement from monophone to context-dependent models trained on LDA+MLLT+fMLLR features [22].
HMM-DNN ASR systems share a common input feature representation of 43 dimensions, corresponding to 40 high-resolution MFCC components plus 3 additional pitch and voicing related features [23]. Neither speaker side information (i-vectors) nor speed-perturbation data augmentation has been used in these experiments. Note that all data is downsampled to 8 kHz to match the sampling rate of the CTS source domain data. Hence, all the systems reported in this work have been trained and evaluated at 8 kHz.
The acoustic models are TDNN networks trained with a frame-level cross-entropy loss criterion [24]. Network architectures consist of a stack of 7 TDNN hidden layers, each containing 650 units, with ReLU activation functions. These correspond to the shared language-independent layers of the network. For each language, a pre-final fully connected layer of 650 units with ReLU activations and a final softmax layer is appended. The size of each output layer corresponds to the size of the outputs of the single-language network of the source CTS domain. During training, the samples of each language are not scaled; thus no compensation for different training data sizes is performed.
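The shared-trunk-plus-heads layout described above can be sketched as follows. This is a simplified PyTorch stand-in that replaces the TDNN temporal splicing with plain feed-forward layers; the layer sizes follow the text, while the senone counts and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiLingualAM(nn.Module):
    """Seven shared 650-unit ReLU layers plus, per language, a 650-unit
    pre-final layer and an output layer over that language's senones."""
    def __init__(self, feat_dim=43, hidden=650, senone_counts=None):
        super().__init__()
        senone_counts = senone_counts or {"WR": 4000, "LR": 3500}  # illustrative sizes
        layers, in_dim = [], feat_dim
        for _ in range(7):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.shared = nn.Sequential(*layers)
        self.heads = nn.ModuleDict({
            lang: nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, n_out))
            for lang, n_out in senone_counts.items()
        })

    def forward(self, feats, lang):
        # logits for the requested language head; softmax/CE is applied in the loss
        return self.heads[lang](self.shared(feats))

model = MultiLingualAM()
print(model(torch.randn(8, 43), lang="LR").shape)  # torch.Size([8, 3500])
```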
The optimization method used is natural gradient stabilized stochastic gradient descent [25] with an exponentially decaying learning rate. The starting learning rate is set to 0.0015 and decays by a factor of 10 over the entire training. The baseline and multilingual AMs were trained for 3 epochs with a minibatch size of 256. The parameter change is limited to a maximum of 2 for each minibatch to avoid parameter explosion. In the adaptation stage of the proposed approach, the network is initialized with the multi-lingual network weights and the learning rate of the frozen layers is set to 0. The number of adaptation epochs is a varying parameter investigated in the results section. All the remaining configurations are identical to that of the multi-lingual training stage. Since the focus of this work is on adaptation of the AM, domain matched language models are always used in decoding. That is, CTS source and BN target domain test data is decoded with corresponding language-specific CTS and BN LMs. While this seems to go against the assumption of working with low resource languages, text data is usually much easier to obtain compared to transcribed speech. CTS LMs are trained on the training transcriptions of the relevant corpora. BN LM for Spanish is also trained only on the training transcriptions, while the BN LM for English is trained using the transcriptions and additional text from the 1996 CSR HUB-4 Language Model and the North American News Text Corpus. BN LMs for Lithuanian and Tagalog are trained on around 30M words of web-crawled text from CommonCrawl and other online sources.
English and Spanish baseline systems
In this and the following two sections, we take Spanish to be the LR language; as always, the WR language is English. Table 1 shows, in the first row, word error rate (WER) performance of single language baseline systems using BN target in-domain acoustic models. In the case of the LR language, this is an oracle experiment, since we assume that in reality, no target domain data is available for this language. The second row shows results of CTS source domain acoustic models evaluated with both CTS and BN test data. As expected, one observes a large degradation when decoding LR BN target domain data with CTS acoustic models: about 20% absolute WER compared with the matched experiments. The performance drop can also be observed in the WR experiments, but it is not so acute, probably due to the increased amount of training data used in the WR system. Note that the objective of this work is to make the 40.0% of the LR baseline system when decoding target domain data as close as possible to the oracle 19.2% figure, without using any in-domain LR language training data.
The last row of Table 1 shows the WER performance of the multi-lingual system trained with LR and WR data of the CTS source domain. For both languages, the performance of the multi-lingual models on the CTS source domain is close to that obtained using the respective mono-lingual model. However, for the cross-domain test case, there is a remarkable improvement in the LR language performance from 40.0% to 32.9%. This is a 17.8% relative improvement in the BN target domain achieved by using only out-of-domain CTS WR data, thanks to the multi-lingual training scheme. While the benefits of multi-lingual training for LR were expected, it is very interesting to observe that these are much more significant in the cross-domain case. This may be partially explained by the considerably increased amount of data and variation to which this network is exposed compared to the LR baseline.

Table 2. WER (%) of the mono-lingual AM (1), the multi-lingual AM (2), the proposed adapted AM (3) and alternative cross-lingual AMs (4-7) obtained on the WR and LR BN target domain test sets.

                                 WR BN target    LR BN target
  mono-ling CTS AM (1)               19.6            40.0
  multi-ling CTS AM (2)              19.2            32.9
  proposed CL adapt AM (3)           14.5            28.4
  multi-task CL AM (4)               12.4            29.1
  multi-task CL + adapt AM (5)       12.3            29.1
  multi-cond CL AM (6)               12.5            29.2
  multi-cond CL + adapt AM (7)       12.2            29.1
Cross-lingual network adaptation results
The third row of Table 2 reports the performance of the proposed method in contrast to the mono-lingual (first row) and multi-lingual (second row) baselines. The proposed cross-lingual domain adaptation method has been tested for a varying number of training epochs (from 0.5 to 3) and adapted shared layers (from 1 to 6); the reported result corresponds to the best adaptation configuration, which is obtained when the first 3 hidden shared layers are adapted using all WR target domain data for 1 epoch of training. We observed in the complete set of experiments that results on LR data are not particularly sensitive to the number of epochs or layers amongst those tested, ranging from 28.4% to 29.1% in all cases. However, as expected, this is not the case for the WR language results, in which performance tends to keep improving with an increased number of adapted layers and epochs. Hence, an improvement in the WR case does not necessarily imply an improvement in the LR case. We conclude that there seems to be a limit on the amount of information that is transferable from the WR system to the LR system. Overall, we observe that by using WR CTS source domain data for multi-lingual learning we are able to improve from 40.0% to 32.9%, and by then using WR BN target domain data and the proposed adaptation method, we further improve performance to 28.4% WER. This is an absolute 11.6% WER decrease, which recovers around 50% of the performance loss due to the lack of LR training data in the BN target domain, when compared to a system fully trained with BN data (see Table 1). This is attained by using only WR data and no additional LR data.
Comparison with similar cross-lingual approaches
In this section, the proposed approach is compared with two related cross-lingual information transfer methods: first, with an AM trained in a multi-task fashion, considering LR source, WR source and WR target as three separate tasks, referred to as multi-task; and second, with an AM trained in a multi-task and multi-condition fashion, considering the LR source as one task, and the mix of WR source and WR target as a second task, referred to here as multi-cond. Rows 4 and 6 in Table 2 report the performance of these two alternative cross-lingual approaches. For these experiments, the exact same network architecture, training and decoding recipes as previously have been followed. Notice that, as in the proposed method, it is possible to use these networks as an initialization for further fine-tuning using only WR target domain data for a varying number of epochs and shared adaptation layers. Thus, rows 5 and 7 of Table 2 report results after fine-tuning adaptation for the best configuration of epochs and number of adapted layers in each case. For both approaches, the additional fine-tuning does not provide significant improvements. Performance converges already after the initial training. In fact, we observe that the best adaptation configuration is attained with the minimal number of epochs and adaptation layers. For any other configuration, performance oscillates within absolute differences of ±0.1. Overall, the proposed cross-lingual scheme outperforms any of the other methods, being able to better leverage information from the WR data for improved LR acoustic modelling in the target domain.

Table 3. WER (%) of the single-language AM (1), the multi-lingual AM (2) and the proposed adapted AM on the Tagalog and Lithuanian MATERIAL evaluation sets.
Experiments with MATERIAL languages
In this section we investigate the proposed cross-lingual adaptation approach considering two of the IARPA MATERIAL languages: Tagalog and Lithuanian. For these experiments, we keep the same network architecture and training and decoding recipes as previously, including the set of best parameters found for the proposed cross-lingual method: adaptation of the first 3 hidden shared layers for 1 epoch. The new languages are less related to English than Spanish is, and the available data present a poorer match between source and target conditions. Despite the significant differences among languages and target domain conditions, the results reported in Table 3 show that the proposed method is able to effectively exploit English data to improve ASR performance of LR languages in any of the wideband data sub-conditions. As expected, the proposed method is more effective for target conditions closer to those of the WR data: for the BN wideband sub-condition the relative WER improvements are 21.2% for Tagalog and 30.7% for Lithuanian, while the improvements for the TB wideband sub-condition are 17.4% for Tagalog and 25.3% for Lithuanian. Overall, the average relative WER improvements for the wideband conditions are 18.3% and 27.5% for the Tagalog and Lithuanian languages, respectively.
CONCLUSIONS
This paper has demonstrated that it is possible to transfer domain adaptation of DNNs from one language to another, enabling adaptation of a low-resourced language to be performed with absolutely no data from the target domain. This has been achieved thanks to a multi-lingual network architecture that allows for meaningful sharing of the parameter transforms among languages. In our experiments, the proposed cross-lingual domain adaptation approach outperforms other similar methods, achieving up to a 29% relative WER improvement in the target domain when similar languages and source and target domain conditions are considered. Moreover, the proposed adaptation scheme also allows for remarkable WER improvements in the case of less favorable language and domain conditions. Future work will extend the method to sequence-trained models and also investigate the combination with other cross-lingual information transfer methods, such as bottleneck features trained on multi-lingual multi-domain data, and SAT vector-based approaches (e.g. i-vectors). | 2019-10-04T23:21:27.000Z | 2019-10-04T00:00:00.000 | {
"year": 2019,
"sha1": "d59fb35260c5ce66b33b77bb2e89a47115273077",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.02168",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d59fb35260c5ce66b33b77bb2e89a47115273077",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
254117258 | pes2o/s2orc | v3-fos-license | Percolation of polyatomic species on a simple cubic lattice
In the present paper, the site-percolation problem corresponding to linear k-mers (containing k identical units, each one occupying a lattice site) on a simple cubic lattice has been studied. The k-mers were irreversibly and isotropically deposited into the lattice. Then, the percolation threshold and critical exponents were obtained by numerical simulations and finite-size scaling theory. The results, obtained for k ranging from 1 to 100, revealed that (i) the percolation threshold exhibits a decreasing function when it is plotted as a function of the k-mer size; and (ii) the phase transition occurring in the system belongs to the standard 3D percolation universality class regardless of the value of k considered.
Introduction
Percolation problems have been attracting a great deal of interest for several decades, and the activity in this field is still growing. They provide the basis for understanding the behavior of many systems such as network theory [3,8,9,13], transport and flow in porous media [3][4][5], transport in disordered media [14,15], spread of disease in populations [16], forest fire propagation [17], simulated fire spread in multi-compartmented structures [18], spread of computer viruses [19], network failures [20], and formation of gels [21]. These are just a few examples of the wide applicability of percolation theory.
The first mathematical formulation of the classical percolation problem was that of Broadbent and Hammersley [26,27]. They introduced concepts that are nowadays widely used, representing the flow of fluid through porous media by a simplified lattice percolation model. In addition, the authors were able to prove that their model has a percolation threshold. To illustrate this percolation threshold, we shall describe the stages of the percolation problem on a lattice of sites which are occupied with probability p or empty (nonoccupied) with probability (1 − p). Nearest-neighboring occupied sites form structures called clusters. Quantities relevant to percolation will depend on the concentration of sites and the geometry of the lattice.
When the concentration is low, the sites appear singly or in small isolated clusters of adjacent elements. As p increases, the mean size of the clusters increases monotonically. When the occupation probability exceeds a critical value (called the percolation threshold p_c), a macroscopic, spanning, or infinite cluster, occupying a finite fraction of the total number of sites, emerges. The percolation threshold can be depicted as the concentration of sites for which a complete path of adjacent sites crossing the entire system becomes possible. The percolation transition is then a geometrical phase transition where the critical concentration separates a phase of finite clusters (p < p_c) from a phase where an infinite cluster is present (p > p_c). This transition is a second-order phase transition and can be characterized by well-defined critical exponents.
One may also consider a percolation problem in which both sites and bonds are independently occupied, with occupancy fractions p_s and p_b, respectively. This more general model, known as the site-bond percolation model [28], has been widely used to study the phenomenon of polymer gelation [29].
Most studies are devoted to single occupied site (bond) on different lattices (like square, triangular, simple cubic, face centered cubic, and many others) in the framework of Monte Carlo (MC) analysis. On the other hand, there have been a few studies focused on generalizing the pure percolation model by including deposition of elements occupying more than one site (bond) [30][31][32][33][34][35][36][37][38][39][40].
In reference [31] it is shown (by studying the multiple-site percolation problem) that p_c exhibits an exponentially decreasing behavior as a function of the k-mer size. This feature was observed both for straight rigid k-mers and for tortuous k-mers isotropically deposited on 2D square lattices. In all the studied cases, the problem was shown to belong to the random percolation universality class. Nevertheless, in a recent work by Tarasevich et al. [40] a different behavior was found for the percolation threshold: namely, a non-monotonic k-mer size dependence. In this context, the present paper deals with the percolation of straight rigid k-mers on a simple cubic lattice. Using MC simulations and a detailed finite-size scaling analysis, the main percolation properties are studied. The main objectives of the paper are (i) to determine the dependence of the percolation threshold on the size of the deposited k-mers and (ii) to discuss the universality class of the phase transition. This work is also motivated by the particular behavior reported in reference [40]. The only study on these systems has been reported for dimers (k-mers with k = 2) in reference [39].
The paper is organized as follows. In Section 2, the basis of the model for the deposition of the k-mers on the simple cubic lattice is presented. In Section 3, finite-size scaling analysis of MC simulations is carried out. In Section 4, the dependence of the percolation threshold on the k-mer size is discussed. Finally, in Section 5 conclusions are drawn.
Model and Monte Carlo simulation details
The following scheme is usually called the standard model of deposition, or Random Sequential Adsorption (RSA). Let us consider an initially empty simple cubic lattice of linear size L on which k-mers are randomly deposited. When the size of the k-mers is one (monomers), the deposition procedure is as follows: a lattice site is chosen at random; if the selected site is unoccupied, the monomer is deposited; otherwise, the attempt is rejected. When k > 1 the process is as follows: (i) one of the three possible directions (x, y, z) and a starting site are randomly chosen; (ii) if, beginning at the chosen site, there are k empty sites, then a k-mer is deposited on those sites. Otherwise, the attempt is rejected. When N k-mers are deposited, the concentration is p = kN/L^3. In Figure 1, a typical final state generated by RSA is depicted.
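A minimal NumPy sketch of this deposition step is given below. It is only an illustration of the rules as stated (it is not the simulation code used for the reported results); the no-wrap treatment of the lattice boundary and the function names are assumptions.

```python
import numpy as np

def deposit_kmers(L, k, target_p, seed=None, max_tries=10_000_000):
    """RSA of straight k-mers on an L x L x L lattice up to coverage target_p."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros((L, L, L), dtype=bool)
    occupied, needed = 0, int(target_p * L**3)
    for _ in range(max_tries):
        if occupied >= needed:
            break
        axis = int(rng.integers(3))                  # random orientation: x, y or z
        start = rng.integers(L, size=3)
        if start[axis] + k > L:                      # no wrap-around (assumption)
            continue
        idx = [slice(s, s + 1) for s in start]
        idx[axis] = slice(start[axis], start[axis] + k)
        idx = tuple(idx)
        if lattice[idx].any():                       # overlap: attempt rejected
            continue
        lattice[idx] = True                          # deposit the k-mer
        occupied += k
    return lattice

lat = deposit_kmers(L=24, k=4, target_p=0.20, seed=0)
print(lat.mean())   # achieved concentration p = kN/L**3, close to 0.20
```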
As it was already mentioned, the central idea of percolation theory is based on finding the minimum concentration p for which a cluster extends from one side of the system to the opposite. This particular value of the concentration is called critical concentration or percolation threshold and determines a well defined phase transition in the system. We are interested in determining (i) how the percolation threshold is modified when the size of the k-mer is increased and (ii) what universality class the phase transition of this problem belongs to.
Finite-size scaling theory provides the basis for determining the percolation threshold and the critical exponents of a system with reasonable accuracy. The probability R = R^X_{L,k}(p) that an L × L × L lattice percolates at the concentration p of sites occupied by k-mers of size k can be defined according to [3,[41][42][43]]. According to this definition, for our problem X can mean:
- R^R_{L,k}(p): the probability of finding a rightward percolating cluster, along the x-direction;
- R^D_{L,k}(p): the probability of finding a downward percolating cluster, along the z-direction;
- R^F_{L,k}(p): the probability of finding a frontward percolating cluster, along the y-direction.
Other useful definitions for the finite-size analysis are:
- R^U_{L,k}(p): the probability of finding a cluster which percolates in any direction;
- R^I_{L,k}(p): the probability of finding a cluster which percolates in the three (mutually perpendicular) directions;
- R^A_{L,k}(p): the average of the two previous probabilities, R^A_{L,k}(p) = [R^U_{L,k}(p) + R^I_{L,k}(p)]/2.
In order to express R^X_{L,k}(p) as a function of continuous values of p, it is convenient to fit R^X_{L,k}(p) with some approximating function through the least-squares method. The fitting curve is the error function, because dR^X_{L,k}(p)/dp is expected to behave like the Gaussian distribution [42,43]

dR^X_{L,k}/dp = [1/(√(2π) Δ^X_{L,k})] exp{−(1/2)[(p − p^X_{c,k}(L))/Δ^X_{L,k}]²},   (1)

where p^X_{c,k}(L) is the concentration at which the slope of R^X_{L,k}(p) is the largest and Δ^X_{L,k} is the standard deviation from p^X_{c,k}(L). In addition to the different probabilities, the percolation order parameter P = ⟨S_L⟩/L^3 [44,45] has been measured, where S_L represents the size of the largest cluster and ⟨...⟩ means an average over MC runs. The corresponding percolation susceptibility χ has also been calculated, χ = [⟨S_L²⟩ − ⟨S_L⟩²]/L^3. MC simulations were applied to determine each of the previously mentioned quantities. Thus, each MC run consists of the following steps: (a) construction of a simple cubic lattice of linear size L with a given coverage p, and (b) cluster analysis using the Hoshen and Kopelman algorithm [46]. In the last step, the size of the largest cluster S_L is determined, as well as the existence of a percolating cluster. This spanning cluster, as was mentioned, could be R, D or F. At the same time, I, U and A were determined.
For the above algorithm, n ∼ 10^5 runs were carried out for several values of the system size (L/k = 6, 8, 10, 12, 24). The L/k ratio is kept constant to prevent spurious effects due to the k-mer size in comparison with the lattice linear size L.
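The per-run cluster analysis just described can be prototyped with SciPy's labelling routine standing in for the Hoshen-Kopelman algorithm. The sketch below is illustrative only; the 6-connectivity, free boundaries and function names are assumptions consistent with the text.

```python
import numpy as np
from scipy import ndimage

def percolation_flags(lattice):
    """Label nearest-neighbour clusters and check, for each axis, whether some
    cluster touches both opposite faces (the R, D, F criteria), together with
    the 'any direction' and 'all three directions' criteria."""
    labels, _ = ndimage.label(lattice)       # 6-connectivity by default in 3D
    spans = []
    for axis in range(3):
        first = np.take(labels, 0, axis=axis)
        last = np.take(labels, -1, axis=axis)
        common = np.intersect1d(first[first > 0], last[last > 0])
        spans.append(common.size > 0)
    return {"x": spans[0], "y": spans[1], "z": spans[2],
            "any": any(spans), "all": all(spans)}

def largest_cluster_fraction(lattice):
    """Contribution of a single configuration to the order parameter P = <S_L>/L^3."""
    labels, n = ndimage.label(lattice)
    if n == 0:
        return 0.0
    sizes = ndimage.sum(lattice, labels, index=np.arange(1, n + 1))
    return float(np.max(sizes)) / lattice.size

# Averaging the flags over many deposited lattices at a given p estimates
# R^U_{L,k}(p) ('any') and R^I_{L,k}(p) ('all').
```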
Results
In Figure 2, the probabilities R^U_{L,k}(p), R^I_{L,k}(p) and R^A_{L,k}(p) are shown for two different values of k (k = 1 and k = 5, as indicated).
From a simple inspection of the figure (and from data not shown here for the sake of clarity) it is observed that: (a) the curves, corresponding to the various percolation criteria (R, D, F, etc.), cross each other at a unique universal point, R^{X*}, which depends on the criterion X used; (b) those points do not change their numerical value for the different k used (ranging from k = 1 to k = 100); (c) those points are located at very well defined values on the p-axis, determining the critical percolation threshold for each k; and (d) p_c decreases for increasing k-mer sizes.
The probability R^X_{L,k}(p) is also called in the literature the percolation cumulant, whose properties are identical to those of the Binder cumulant U_L in standard thermal transitions [41,47]. Namely, R^X_{L,k}(p) obeys the same scaling relation as U_L, and the intersection of the curves of R^X_{L,k}(p) for different system sizes can be used to determine the critical point that characterizes the phase transition occurring in the system [3,31,[48][49][50]]. From this perspective, the result given in point (b) could be taken as a preliminary indication that the universality class of the phase transition involved in the problem is conserved no matter the value of k. However, as pointed out by Selke and Shchur [51,52], the measure of the cumulant intersection may depend on various details of the model which do not affect the universality class, in particular, the boundary condition, the shape of the lattice, and the anisotropy of the system. Consequently, more research is required to determine the universality class of the phase transition.
For each R^X_{L,k}(p) and dR^X_{L,k}(p)/dp curve, the fitting function was determined by the least-squares method using equation (1). In this way, p^X_{c,k}(L) is determined for the different values of k and L.
We extrapolate the previous results of p^X_{c,k}(L) for L → ∞ by using the finite-size scaling hypothesis. Thus, the correlation length ξ, associated with the emergence of the percolation cluster, has the scaling relation

ξ ∝ |p − p_{c,k}|^{−ν},   (2)

where ν is the critical exponent. It is known [53] that ν = 7/8 for random 3D percolation. As p → p^X_{c,k}(L) the correlation length ξ → L, L being the linear dimension of the system. Thus, we have

p^X_{c,k}(L) = p^X_{c,k}(∞) + A^X L^{−1/ν},   (3)

where A^X is a non-universal constant. Figure 3 shows the extrapolation towards the thermodynamic limit of p^X_{c,k}(L) according to equation (3). From the procedure shown in Figure 3, one obtains p^X_{c,k}(∞) for the criteria I, A and U. Combining the three estimates for each k, the final values of p_{c,k}(∞) are obtained. The maximum of the differences between these estimates gives the error bar for p_{c,k}(∞).
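Both numerical steps (the error-function fit of equation (1) and the L → ∞ extrapolation of equation (3)) can be prototyped as below. The R values would come from many runs of the deposition and cluster analysis sketched earlier; the fit's starting guesses are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def r_model(p, pc, delta):
    """Error-function step: the integral of the Gaussian in equation (1)."""
    return 0.5 * (1.0 + erf((p - pc) / (np.sqrt(2.0) * delta)))

def fit_pc_of_L(p_values, r_values):
    """Least-squares fit of R^X_{L,k}(p); returns p_c(L) and the width Delta."""
    popt, _ = curve_fit(r_model, p_values, r_values,
                        p0=(float(np.median(p_values)), 0.01))
    return popt[0], popt[1]

def extrapolate_pc(L_values, pc_of_L, nu=7.0 / 8.0):
    """Linear fit of p_c(L) against L**(-1/nu); the intercept is p_c(infinity)."""
    x = np.asarray(L_values, dtype=float) ** (-1.0 / nu)
    slope, intercept = np.polyfit(x, np.asarray(pc_of_L, dtype=float), 1)
    return intercept

# usage: pc_L = [fit_pc_of_L(p, R)[0] for (p, R) in curves]; pc_inf = extrapolate_pc(Ls, pc_L)
```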
In Figure 4 the percolation threshold p_{c,k}(∞) is plotted as a function of the k-mer size. The values for k = 1 [p_{c,k=1}(∞) = 0.3116077(4)] and k = 2 [p_{c,k=2}(∞) = 0.2555(1)] have already been reported in [39] and [54], respectively. The points corresponding to k = 80 and k = 100 were calculated for three relatively small values of L/k (4, 6, 8), with an effort reaching almost the limits of our computational capabilities. A compilation of the numerical values is also presented in Table 1.
For all the range of studied sizes, the percolation threshold decreases upon increasing k. This result contrasts with the one of Tarasevich et al. [40], who found that, for two-dimensional square lattices, the percolation threshold shows a nonmonotonic k-mer size dependence. Namely, the percolation threshold decreases for small particle sizes, goes through a minimum at k ≈ 13, and finally monotonically increases as k increases. This nonmonotonic behavior observed in two dimensions has been explained accounting for the local alignment effects occurring for large values of k [40]. In the case of cubic lattices, the same effects are not detected in the range of values of k between 1 and 80.
In Figure 3 the value ν = 7/8 was used. Nevertheless, the value of ν can be obtained through the scaling relationship for R^X_{L,k}(p):

R^X_{L,k}(p) = R̄^X_k[(p − p_{c,k}) L^{1/ν}],   (4)

where R̄^X_k(u) is the scaling function and u ≡ (p − p_{c,k}) L^{1/ν}. The value of ν obtained in this way is, within numerical errors, the value of the critical exponent reported in reference [53]. The scaling behavior can be further tested by plotting R^X_{L,k}(p) vs. (p − p_{c,k})L^{1/ν} and looking for data collapse. Using the values of p_{c,k} previously calculated and the value ν = 7/8, an excellent scaling collapse was obtained (Fig. 5) for R^I_{L,k} and all values of the k-mer size. This provides an independent consistency check of the numerical value of the critical exponent ν.
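The collapse test amounts to re-plotting each curve against the scaling variable, as in this small sketch (the data dictionary is a placeholder for the simulated curves):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_collapse(curves, pc, nu=7.0 / 8.0):
    """curves: {L: (p_array, R_array)}. Plot R vs (p - p_c) * L**(1/nu);
    with the right p_c and nu, curves for different L fall on one function."""
    for L, (p, R) in sorted(curves.items()):
        u = (np.asarray(p, dtype=float) - pc) * L ** (1.0 / nu)
        plt.plot(u, R, "o-", label=f"L = {L}")
    plt.xlabel("(p - p_c) L^(1/nu)")
    plt.ylabel("R_{L,k}(p)")
    plt.legend()
    plt.show()
```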
In order to bear out the universality class of the problem, the critical exponents β and γ were calculated from the scaling behavior of P and χ [3] as follows:

P = L^{−β/ν} P̄[(p − p_{c,k}) L^{1/ν}]   (5)

and

χ = L^{γ/ν} χ̄[(p − p_{c,k}) L^{1/ν}],   (6)

where P̄ and χ̄ are scaling functions for the respective quantities. According to equations (5) and (6), Figure 6 shows the excellent collapse of the curves of P and χ (inset) for a typical k-mer size (k = 5) and different lattice sizes, as indicated. The data scaled extremely well using the reported percolation exponents β = 0.41 and γ = 1.82 [3]. The results obtained in Figures 5 and 6 suggest that the universality class corresponds to the 3D percolation problem and clearly does not depend on the k-mer size. This kind of behavior has been observed in previous studies of percolation of extended objects. Thus, Cornette et al. [31] found that straight rigid k-mers and tortuous k-mers isotropically deposited on two-dimensional square lattices are in the same universality class as standard percolation in two dimensions. The same result was obtained for percolation of aligned rigid rods [38] and percolation of rigid rods under equilibrium conditions [55] on two-dimensional square lattices. The authors reported that even though the intersection points of the curves of R^X_{L,k}(p) for different system sizes exhibit nonuniversal critical behavior, the percolation transition occurring in the system belongs to the standard random percolation universality class regardless of the value of k considered.
Conclusions
In this paper, the percolation behavior of straight rigid k-mers deposited on a simple cubic lattice has been studied by numerical MC simulations and finite-size analysis.
For each value of k, the probability R^X_{L,k}(p) that a system of linear size L percolates at concentration p was used to obtain the critical concentration p_{c,k}.
The plot of p_{c,k} vs. k showed a monotonic decrease over the whole studied k range. This result is quite surprising when compared with the results reported in reference [40], where a steep increase is shown after an initial low-k [k ∈ (1, 13)] decrease. This finding allows two possible conclusions: (a) the results reported by Tarasevich et al. in reference [40] are not applicable to the present system; or (b) the largest k value studied in the present work is not large enough to reveal the reported behavior.
Finally, the analysis of the critical exponents ν, β and γ, supported by the excellent data collapse of the curves of R^X_{L,k}(p), P and χ, strongly suggested that the percolation phase transition involved in the considered problem belongs to the same universality class as ordinary 3D random percolation. | 2022-12-01T15:03:46.158Z | 2013-09-01T00:00:00.000 | {
"year": 2013,
"sha1": "ed9cae6e45c7ce50185b1d166b1d0b6b00dcacfd",
"oa_license": "CCBYNCSA",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/5673/8/CONICET_Digital_Nro.7666_G.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "ed9cae6e45c7ce50185b1d166b1d0b6b00dcacfd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
247967826 | pes2o/s2orc | v3-fos-license | Intra Parietal Sulcus Area 1–2 and Angular Gyrus Differentiates Visual Short-Term Memory and Sustained Attention Activities
Background: Visual short-term memory (VSTM) and attention have been found to modulate neural activity predominantly in the superior parietal lobule. This is thought to reflect the importance of selective attention for encoding and manipulation in VSTM. The major area of investigation has mainly rested with the differences in the neural substrates and networks mediating these cognitive processes in near and far cortical structures. Summary: Based on previous investigations, the dynamic temporal-window route of attention and the time-locked associated cognitive processes and sub-processes are sketched, and their implication for VSTM study is discussed. Imaging cortical structures to isolate closely linked cognitive tasks requires circumscribing to certain time-windows in which the paradigm should support tapping time-locked associated processes and sub-processes. Key Messages: The neural activities in intraparietal sulcus area 1-2 and angular gyrus during VSTM encoding are beyond the modulatory effects of selective and sustained attention.
The human posterior parietal cortex (PPC) is known to activate during a diverse range of cognitive tasks, among which the involvement of the PPC in visual attention and working memory remains a hot spot in cognitive neuroscience. Consistent neuroimaging evidence indicates that dorsal parts of the PPC, particularly the superior intraparietal sulcus (IPS), encode visual attention and the subsequent modulation of visual short-term memory (VSTM) processes. [1][2][3] Using multivoxel pattern analysis, human retinotopic areas have been shown to support active maintenance during working memory encoding. [4][5][6] If a task requires top-down attention maintenance, selective attention and memory are strongly linked, and such links are less obvious when a singleton target, by reducing attention control demand, is presented. 7,8 This suggests that selective modulation of attention needs reorientation at the target phase, which may involve VSTM maintenance. A major challenge in this field is dissociating the spatial and temporal characteristics of these two differing, yet interwoven, cognitive processes and further clarifying how these spatial and temporal differences account for differences in neuroimaging data. A recent functional magnetic resonance imaging study undertaken by Sheremata et al. 9 complemented and extended the evidence that human PPC (retinotopically defined IPS1-2) encodes VSTM and that its associated content is dissociable from visuospatial attention related subprocesses.
Briefly, a summary of the work of Sheremata et al. 9 is provided before reviewing core properties of visuospatial attention, given that active maintenance in VSTM relates to attentional selection. On this account, studies have shown that attentional mechanisms are involved in updating cue-dependent, trial-by-trial rule-sets. 10 Previous behavioral and neuroimaging studies aimed at tapping the temporal and spatial characteristics of attention and VSTM are then highlighted, together with how the cognitive task used by Sheremata et al. 9 elucidated the two basic cognitive processes served by the PPC. Following this, different views on the theoretical/conceptual underpinnings of attention and VSTM are examined, offering an alternative explanation of how the cognitive demand of the task leads to different neuroimaging results. Finally, converging evidence on how the study of Sheremata et al. 9 tackles the distinct neural processes associated with attention and VSTM, and the implications of that study, are discussed.
Sheremata et al. 9 investigated retinotopically defined substructures of the PPC and early visual areas by scrutinizing blood oxygen level dependent (BOLD) signal changes, functional connectivity, and hemispheric asymmetries within the PPC to identify whether memory-specific task demand exists beyond visuospatial attention confounds. Sheremata et al. 9 argue that previous tasks used to study VSTM may contain subprocesses of top-down attention control, such as sustained and selective attention, and posed the question of whether a stimulus-specific delay period, which requires working memory, is dissociable from the ability to sustain attention to an object in space. Their paradigm includes two visually identical tasks with differing rules. By controlling for and removing sustained attention-related confounds, the authors were able to tap processes distinctively associated with VSTM. Manipulating the time window was the key element of their paradigm, because the time courses of VSTM and visual attention are distinctly different. A spatial cue was first displayed for 500 ms, followed by a series of rapid stimulus presentations, 150 ms each with a 150 ms interval, after which a response cue was presented for a fixed duration of 750 ms. To isolate sustained attention, the spatial and response cues capping either end of the task are ignored and attention is directed at the orientation of the target shapes within the stimulus presentations. Participants were asked to identify the number of stimulus presentations in which all target shapes were vertically aligned. In a second task, designed to engage VSTM mechanisms, participants were asked to determine whether the orientation of the shapes in the response cue was congruent with the initial "sample" cue. The short-term memory task requires maintenance of visual information over time after the masked delay, to be compared with a response cue 1500 ms later, whereas the attention task requires rapid decoding of object orientation within the stimulus presentations. By controlling for the possible variance due to substructures of the IPS, hemisphere, and the visual field of the stimuli, the authors demonstrated that VSTM maintenance in IPS1-2 goes beyond attentional confounds and task difficulty. 9 Functional connectivity analysis of the retinotopically defined IPS with other dorsal parts of the attention network extends the evidence of modulatory interactions between anterior parts of the parietal cortex and task-positive and task-negative networks 11,12 during VSTM tasks. These findings provide novel insight in two ways: VSTM was not confounded with attentional modulation, and the network-based connectivity analysis revealed possible interactive mechanisms within the dorsal attention network and modulatory effects of selective attention during VSTM encoding.
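To make the timing of the two tasks concrete, the trial structure described above can be laid out programmatically. This is an illustrative sketch only: the number of rapid stimulus presentations per trial (n_stim) is not stated in the text and is a hypothetical parameter, and placing the 1500 ms delay only before the memory-task response cue follows the description above.

# Sketch of the trial timeline described above (all times in ms).
def build_trial(n_stim=8, memory_task=True):
    """n_stim is a hypothetical count of rapid stimulus presentations."""
    events, t = [], 0
    events.append(("spatial_cue", t, 500)); t += 500             # 500 ms spatial cue
    for i in range(n_stim):
        events.append((f"stimulus_{i + 1}", t, 150)); t += 150   # 150 ms stimulus
        t += 150                                                  # 150 ms inter-stimulus interval
    if memory_task:
        t += 1500                                                 # masked delay before response cue (VSTM task)
    events.append(("response_cue", t, 750)); t += 750             # 750 ms response cue
    return events, t

for task, flag in (("attention", False), ("VSTM", True)):
    events, total = build_trial(memory_task=flag)
    print(f"{task} trial: {len(events)} events, {total} ms total")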
Previous studies examining the behavioral and neural basis of attention and VSTM report a complicated picture: attention can be directed to specific items encoded in VSTM, 13 and cognitive tasks with multiple distractors may be confounded with reorienting of attention. 14 Top-down attentional modulation with multiple distractors may place resource demands on VSTM, 14,15 and response inhibition, which is considered to be part of attention, has been argued to draw on working memory resources. 16 Other behavioral evidence on VSTM shows that orienting attention influences internal representations of the encoding process. 10,17 Taken together, this evidence suggests that the late stage of attentional modulation might be time-locked with the early encoding processes of VSTM (see Figure 2, column "D"). Kuo et al. 18 demonstrated that attentional mechanisms operating through top-down processing during task encoding among competing items are maintained by the PPC (particularly the posterior IPS) while modulating VSTM. This suggests that the posterior IPS may operate both attention and VSTM flexibly in an interactive and translative manner, as suggested by Awh and Jonides. 14 Previous studies have not demonstrated whether there is a clear distinction between these two cognitive processes and the role of the PPC, which shows extensive overlap for short-term memory and visuospatial attention tasks (Figure 1). By controlling cue-related attentional processes (indicated in column "C" of Figure 2), Sheremata et al. 9 showed that activities in IPS 1-2 and the angular gyrus are distinctive to VSTM encoding.
Even though behavioral experiments have demonstrated that an attention task with distractors requires more top-down attentional control than one with a singleton target, some behavioral results from VSTM are rather puzzling. In the VSTM decoding task there was no significant difference in accuracy between targets with and without distractors, 19,20 suggesting that VSTM and sustained attention operate somewhat differently. This might indicate that the neural regions maintaining attention are more sensitive to time than the VSTM encoding process.
In summary, the evidence from behavioral and neuroimaging studies provides two key insights regarding the "distinctive" processes and neural mechanisms of attention and VSTM: (a) imaging cortical structures to isolate closely linked cognitive tasks requires circumscribing the paradigm to time windows that tap time-locked associated processes and subprocesses; in this regard, the authors present a clear distinction of VSTM from the remnants of attentional components acting as confounders; (b) ensuring that the BOLD signal differences between attention and VSTM are not associated with the cognitive demand of the task served as a milestone in disentangling the distinctive neural processes of the PPC during VSTM encoding.
In the study by Sheremata et al., 9 only cue-related attentional confounds are controlled. Examining stimulus-response mapping processes (as indicated in column "D" of Figure 2) with the same task may add another dimension to attention and VSTM. Trial-by-trial multivoxel pattern analysis in retinotopically defined parts of the PPC may help to increase control over spatial attention confounds during VSTM encoding, and over the involvement of VSTM during top-down attention control and target-related spatial updating as a confound. The latter is particularly appealing to scrutinize, as neural processes related to reorienting attention when competing priorities are presented at the target phase may require resource allocation from VSTM. Modeling the target onset of the BOLD signal, while scrutinizing the possible variance due to substructures of the IPS, hemisphere, and visual field of the stimuli, would help to further isolate attentional confounds in VSTM. | 2022-04-06T15:11:25.873Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "cd9230a92f26e589c566c3fd5bc5002f0a2e0991",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/09727531211072301",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "681884a1fb6c629f8f06983abc1d28bbca4a7ded",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17243060 | pes2o/s2orc | v3-fos-license | C6: A Monoclonal Antibody Specific for a Fibronectin Epitope Situated at the Interface between the Oncofoetal Extra-Domain B and the Repeat III8
Background: Fibronectin (FN) is a large multidomain molecule that is involved in many cellular processes. Different FN isoforms arise from alternative splicing of the pre-mRNA including, most notably, the FN isoform that contains the “extra-domain-B” (ED-B). The FN isoform containing ED-B (known as B-FN) is undetectable in healthy adult tissues but is present in large amounts in neoplastic and foetal tissues as well as on the blood vessels during angiogenesis. Thus, antibodies specific for B-FN can be useful for detecting and targeting neoplastic tissues in vivo. We previously characterised C6, a new monoclonal antibody specific for human B-FN, and we suggested that it reacts with the B-C loop of the type III repeat 8, which is masked in FN isoforms lacking ED-B, and that the insertion of ED-B into FN molecules unmasks it. Here we consolidate and refine the characterization of this B-FN-specific antibody, demonstrating that the epitope recognized by C6 also includes loop E-F of ED-B. Methodology: We built the three-dimensional models of the variable regions of the mAb C6 and of the FN fragment EDB-III8 and performed protein:protein docking simulations using the web server ClusPro2.0. To confirm the data obtained by protein:protein docking, we generated mutant fragments of the recombinant FN fragment EDB-III8 and tested their reactivity with C6. Conclusion: The monoclonal antibody C6 reacts with an epitope formed by the B-C loop of domain III8 and the E-F loop of ED-B. Both loops are required for the immunological reaction; thus this monoclonal antibody is strictly specific for B-FN, while the part of the epitope on III8 confers its human specificity.
Introduction
Fibronectin (FN) is a multi-domain molecule present in the extracellular matrix (ECM) and in body fluids. It is a dimer of two subunits of about 220–250 kDa, linked at the C-termini by two disulfide bonds; each monomer consists of three types of repeating units. FN is involved in many cellular processes, and different FN isoforms arise from the alternative splicing of its pre-mRNA [1][2]. In particular, the FN isoform containing the extra-domain B (ED-B), a complete FN type III repeat formed by 91 amino acids, is expressed only during physiological or pathological tissue remodelling, such as in embryogenesis, wound healing, in the uterus and ovary during the female reproductive cycle, in tumorigenesis, and in degenerative chronic inflammatory diseases. The ED-B primary structure is highly conserved across species, having 100% homology in all mammals tested thus far and 96% homology with a similar domain in chicken.
The FN isoform containing ED-B (B-FN) is undetectable in healthy adult tissues, but its expression is highly increased in tumour tissues and it accumulates around neovasculature during angiogenesis. This makes it one of the oncologist's best markers of angiogenesis and neoplastic tissues [3][4][5][6][7][8]. The demonstration that monoclonal antibodies to B-FN can be used to selectively deliver therapeutic substances to diseased tissues [9] prompted the generation of human recombinant antibodies for preclinical and clinical diagnostic and therapeutic purposes [10][11][12][13]. The biological function(s) of B-FN are still unclear; however, it has been suggested that B-FN increases vascular endothelial growth factor (VEGF) expression, endothelial proliferation, and tube formation [14]. More recently, Kraft et al. [15] reported that B-FN enhances phagocytosis more than plasma FN and that this enhancement is mediated by the integrin alphaVbeta3. On the whole, the biological activities of FN are mediated by exposed loops located mainly at the inter-domain interfaces; therefore, the insertion of ED-B between repeats III7 and III8 modifies the domain-domain interface and would be expected to lead to changes in biological activities [16].
We have previously described C6, a monoclonal antibody specific for human B-FN [17]. Using various recombinant FN fragments containing mutations, we concluded that its epitope was located within the loop B-C of III8, and we speculated that, in FN isoforms lacking ED-B, this loop is masked [18]. Here, to better understand the interaction between human B-FN and C6, we performed protein:protein docking simulations of the three-dimensional models of the scFv of the mAb C6 and of the FN recombinant fragment containing the type III domains B and 8. The results confirm the interaction not only with the loop B-C of domain III8 but also with the loop E-F of ED-B. Further experiments using an FN fragment with a mutation in ED-B confirmed that its loop E-F is part of the epitope recognized by C6.
Results and Discussion
In immunohistochemistry experiments on human tissue, the mAb C6 behaves exactly like an antibody that reacts directly with ED-B, as it shows no reaction with healthy adult human tissues but a strong reaction with cancer tissue. However, unlike other ED-B-specific antibodies, C6 shows no reaction with mouse tumours (Fig 1 and [17]). The absence of reaction of C6 with murine B-FN suggested that it did not react simply with ED-B, because human and mouse ED-B share 100 percent homology and other ED-B antibodies react equally well with human and mouse B-FN. In fact, the mAb C6 specifically recognises human B-FN and human FN recombinant fragments containing at least the type III domains B and 8. C6 does not react with mouse B-FN, with human recombinant fragments formed by the type III domains 7-8 or 7-B, or with isolated ED-B, and it interacts only weakly with isolated III8 [17]. Balza et al. [17] located the epitope recognized by C6 on III8 and excluded the possibility that the epitope was located on ED-B: since the ED-B sequence is highly conserved, having 100% homology in all mammals tested thus far, an ED-B epitope should react with the B-FN of all mammalian species, whereas C6 is specific only for human B-FN.
Furthermore, Ventura et al. [18] compared the sequences of human and mouse III8, which differ in only four residues; the generation of chimeric mutants, inserting residues present in mouse FN into the human recombinant fragment EDB-III8, allowed the localization of the epitope on the loop B-C. In fact, it was sufficient to mutate Asp1385, located within the loop B-C, to a Glu, present at the same position in mouse, to completely abolish the ability of the mAb C6 to react with the FN fragment formed by the type III domains B-8 [18].
Since C6 does not react with the fragment formed by type III domains 7 and 8 but reacts with the fragment B-8, Ventura et al. suggested that the epitope recognized by C6 was within the loop BC of III8, and speculated that this epitope was masked in FN molecules lacking ED-B and unmasked when ED-B was inserted within the FN molecule [18]. Fig 2 shows the domain structure of FN (A); the amino acid sequence of the scFv C6 with its complementarity determining region (CDR) (B); the amino acid sequence of the recombinant fragment containing the domains of type III 7-B-8 and the loops of the various repeats (C).
Here, in order to better understand the interaction between the antibody C6 and FN, we built the three-dimensional models of the scFv C6 [17] and of the FN fragment EDB-III8 and performed protein:protein docking simulations using the web server ClusPro2.0 [19]. The most probable binding mode of C6 to FN was the lowest-energy solution belonging to the most populated cluster, as determined by the program. This cluster was formed by 152 individuals and was well separated from the second- and third-ranked ensembles, populated by 77 and 71 individuals respectively.
The results of the docking simulation, shown in Fig 3, clearly indicated that FN D1385, located on III8, could be a component of the epitope, as previously demonstrated by Ventura et al. using chimeric mutants of FN fragments [18]. In fact, the residues R183 and N235 of C6, on the VH-CDR2 and VH-CDR3 respectively, lie at a distance of 2.5-3 Angstrom from D1385 of III8, a distance that allows hydrogen bond formation (Fig 3A and 3B). However, the results also indicated that Glu1329, located on ED-B, is a possible component of the epitope. In fact, the residues W56 and W239 of C6, located on VL-CDR2 and VH-CDR3 respectively, also lie at a distance of 2.5-3 Angstrom from E1329 of ED-B (Fig 3A and 3B). Fig 3C shows a representation of the interaction between the type III domains B-8 and the scFv C6.
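The distance criterion used in the docking analysis above can be checked directly on a coordinate file of the docked complex. This is only a sketch: the PDB file name and chain identifiers are hypothetical, while the residue numbers and the ~3 Å hydrogen-bond range follow the text.

from Bio.PDB import PDBParser

CUTOFF = 3.0  # Å, upper bound of the 2.5-3 Å hydrogen-bond range quoted above
pairs = [(("C", 183), ("F", 1385)),   # C6 R183 (VH-CDR2)  -- FN D1385 (III8, loop B-C)
         (("C", 235), ("F", 1385)),   # C6 N235 (VH-CDR3)  -- FN D1385
         (("C", 56),  ("F", 1329)),   # C6 W56  (VL-CDR2)  -- FN E1329 (ED-B, loop E-F)
         (("C", 239), ("F", 1329))]   # C6 W239 (VH-CDR3)  -- FN E1329

# Hypothetical file of the docked complex; chain "C" = scFv C6, chain "F" = FN fragment.
structure = PDBParser(QUIET=True).get_structure("complex", "c6_edb_iii8_docked.pdb")
model = structure[0]
for (ch1, res1), (ch2, res2) in pairs:
    r1, r2 = model[ch1][res1], model[ch2][res2]
    dmin = min(a1 - a2 for a1 in r1 for a2 in r2)   # Bio.PDB atoms subtract to a distance in Å
    status = "contact" if dmin <= CUTOFF else "no contact"
    print(f"{ch1}{res1} -- {ch2}{res2}: closest atom pair {dmin:.2f} Å ({status})")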
To confirm that E1329 of ED-B is also relevant for the C6-FN interaction, we generated a mutant of the type III repeats B-8 by substituting Glu1329 with an Ala and tested this fragment with the antibody C6 in an ELISA assay. The results shown in Fig 4 indicate that this mutant does not react with C6, thus confirming the prediction of the protein:protein docking simulation that ED-B is involved in the C6-B-FN interaction.
Thus, the epitope of C6 encompasses both E1329 on ED-B and D1385 on III8, and the simultaneous presence of both residues is required for reaction with C6; neither of the two alone is sufficient to ensure the interaction of C6 with B-FN. The specificity of C6 for the B-FN isoform is due to the E-F loop of ED-B, whereas the human specificity is due to the B-C loop of III8, since in the majority of other mammalian species position 1385 is occupied by a Glu instead of an Asp, and this is sufficient to abolish the reactivity of FN with C6.
Bencharit et al. [16] reported various differences at the interface of III8 with ED-B or III7; the main differences concern the conformation and location of the loops AB, CC' and EF of ED-B, which differ from those of III7. Furthermore, the inter-domain linker of ED-B and III8 buries 416 Å², while that between III7 and III8 buries 578 Å² [16]. These data explain the absence of interaction between III7-III8 and C6.
In conclusion, here we consolidate and refine the previous reports on the epitope recognized by the mAb C6; we report that its epitope encompasses the loop B-C of III8 (which confers the specificity for human B-FN) and also the loop E-F of ED-B (which confers the specificity for B-FN). In previous works the authors did not take into account the possibility that the epitope could consist of a part of ED-B and a part of III8 [17][18]. This is because the ED-B sequence is known not to be immunogenic in mice, and mouse antibodies to ED-B have never been reported. This is the first report of a murine mAb that reacts with an epitope partially formed by an ED-B sequence. This peculiarity makes C6 strictly specific for human B-FN, and it can therefore be used, for example, to distinguish human B-FN in models of human tumours transplanted into mice.
Considering that the mAb C6 interacts at the ED-B/III8 interface (Fig 3C), to which possible biological functions have been attributed, C6 will also be helpful in uncovering biological activities of B-FN. For example, C6 can be tested for its ability to inhibit the functions that have been attributed to B-FN, such as phagocytosis and the increase in VEGF expression [14][15].
There are other B-FN-specific antibodies, such as BC-1, which was generated over 25 years ago. BC-1 [4] was extensively used to demonstrate that B-FN is an excellent marker of angiogenesis and that mAbs to ED-B can be used in vivo to selectively target tumours. BC-1 recognizes an epitope, localized on the repeat III7, which is hindered in FN molecules lacking ED-B and exposed in FN molecules containing ED-B. The results obtained with BC-1 prompted the generation of recombinant antibodies directly reacting with ED-B [10][11][12][13]. These antibodies have been used to generate radio-immunoconjugates as well as fusion proteins with cytokines such as TNF and IL2, for selective delivery of drugs to tumours, and they are currently used in both diagnostic and therapeutic clinical trials [10][11][12][13]. (Fig 2 legend: FN type III repeats 7 (blue), B (black) and 8 (red); the various loops between the beta-sheet structures are framed; the amino acids involved in the interaction with C6 are in bold; sequence from http://www.ncbi.nlm.gov.nuccore/47132556. doi:10.1371/journal.pone.0148103.g002) However, bio-distribution experiments in tumour-bearing mice showed that C6 has a longer residence time in tumours when compared to other antibodies currently used in clinical trials [17], probably as a consequence of the higher resistance of the C6 epitope to proteolytic enzymes. This makes C6 an attractive antibody for clinical application.
Protein modelling and docking simulations
The three-dimensional model structure of the scFv C6 was built using the Phyre2 program [19] and further minimized by simulated annealing using the program CNS [20]. Docking calculations were carried out by the web server ClusPro 2.0 [21] using the atomic X-ray structure of FN domains 7-B-8-9 as a target ([22]; PDB code 3T1W), through a systematic rigid-body search in which one molecule is translated and rotated about the other. ClusPro2.0 was run using the standard parameters after removing the ions and water molecules from the FN coordinate file. The intermolecular energies, for all configurations generated by this search, were calculated as the sum of electrostatic and van der Waals energies. The software selects the lowest-energy solutions, clusters them, and considers the lowest-energy individual of the most populated cluster as the best candidate.
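The selection rule described above (lowest-energy member of the most populated cluster) amounts to a few lines of code; the pose records below are placeholders standing in for ClusPro output, and the field names are hypothetical.

from collections import defaultdict

# Placeholder pose records; in practice these would be parsed from ClusPro output.
poses = [
    {"cluster": 0, "energy": -812.4},
    {"cluster": 0, "energy": -790.1},
    {"cluster": 1, "energy": -845.7},
    {"cluster": 0, "energy": -801.3},
    {"cluster": 2, "energy": -760.0},
]

clusters = defaultdict(list)
for pose in poses:
    clusters[pose["cluster"]].append(pose)

# Most populated cluster first; then its lowest-energy member is the best candidate.
best_cluster = max(clusters.values(), key=len)
best_pose = min(best_cluster, key=lambda p: p["energy"])
print(f"most populated cluster: {len(best_cluster)} members; "
      f"best candidate energy = {best_pose['energy']}")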
Preparation of wild type and mutated human FN recombinant fragments and ELISA assay
The human recombinant FN fragment EDB-III8, as well as its mutant (D1385-E), was previously described [17]. The cDNA encoding the human FN recombinant fragment EDB-III8 with the mutation (E1329-A) was obtained in two steps from the cDNA of human FNIII B-8. In the first step, two fragments were amplified: (1) an FN fragment corresponding to amino acids 1266-1332 with the mutation (E1329-A), obtained with the forward primer TI-147 (5'-ctcgaattcaagaggtgccccaactcact-3'), which includes the Eco restriction site, and the reverse primer sb-18 (5'-aatgcccggcgccagccctgt-3'), which introduces the substitution (E1329-A); (2) an FN fragment corresponding to amino acids 1326-1447 with the mutation (E1329-A), obtained with the forward primer sb-17 (5'-acagggctggcgccgggcatt-3'), which introduces the substitution (E1329-A) and is complementary to primer sb-18, and the reverse primer sb-9 (5'-ctcgcggccgctcatcatgttttctgtcttcctct-3'), which includes the Not restriction site and two stop codons. In the second step, the two cDNA fragments were assembled by PCR with primers TI-147 and sb-9. The resulting cDNA fragment was digested with Eco/Not and inserted into Eco/Not-digested pProEX-1 (Life Technologies, Gaithersburg, MA, USA). All PCR reactions were performed with high-fidelity PWO DNA Polymerase (Roche Diagnostics, Basel, Switzerland) following the manufacturer's instructions. The restriction enzymes were from Roche Diagnostics. The DNA construct was used to transform DH5α competent bacterial cells. All FN recombinant fragments were purified from the bacterial lysate on Ni-NTA columns (Qiagen, Hilden, Germany) using the His6 tag at the N-termini of the FN fragments. The purified FN fragments were analyzed by SDS-PAGE as previously described. The reactivity of the monoclonal antibody C6 with the FN fragments was assessed by ELISA as previously described [17]. Recombinant FN fragments containing ED-B were used in immunohistochemistry control experiments to inhibit the antibodies. | 2018-04-03T02:05:49.206Z | 2016-02-11T00:00:00.000 | {
"year": 2016,
"sha1": "0ad3c7c4769eef0677a53cf2f7bf1833bf69ee9b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0148103&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ad3c7c4769eef0677a53cf2f7bf1833bf69ee9b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
9651194 | pes2o/s2orc | v3-fos-license | Magnetoresistance measurements of Graphene at the Charge Neutrality Point
We report on transport measurements of the insulating state that forms at the charge neutrality point of graphene in a magnetic field. Using both conventional two-terminal measurements, sensitive to bulk and edge conductance, and Corbino measurements, sensitive only to the bulk conductance, we observed a vanishing conductance with increasing magnetic fields. By examining the resistance changes of this insulating state with varying perpendicular and in-plane fields, we probe the spin-active components of the excitations in total fields of up to 45 Tesla. Our results indicate that \nu=0 quantum Hall state in single layer graphene is not spin polarized.
Under a magnetic field, the linear dispersion relation of the low-energy electron spectrum in graphene leads to unique Landau levels (LLs) whose energies are unequally spaced [1][2][3]. The LL spectrum, given by E_n = ±√(2nħv_F²eB/c), where v_F is the Fermi velocity and n = 0, ±1, ±2, ... is the LL index, contains an n = 0 level, termed the zero-energy LL (ZLL). In the absence of appreciable interactions or Zeeman splitting, each LL has a 4-fold degeneracy arising from real spin and valley degeneracy. The appearance of the quantum Hall (QH) effect in graphene at the LL filling fractions ν = ±2, ±6, ... is a manifestation of this 4-fold degeneracy of graphene LLs [4,6]. In the high magnetic field regime, however, this effective SU(4) spin-pseudospin symmetry can be broken, with more QH plateaus appearing at ν = 0, ±1, ±4 and developing signatures of QH states at other integer filling fractions [7][8][9]. The ν = 0 filling factor that appears at the center of the ZLL presents something of a paradox in QH physics, as it is not marked by the usual longitudinal resistance minima that typify all other filling factors. While initial measurements on disordered samples at this filling factor reported high-field (above 30 T) resistances in the regime of tens of kΩ [19], subsequent reports on this quantum Hall state have shown a strong insulating behavior as sample mobility is increased [10][11][12][13], with two-terminal measurements of the highest-mobility suspended samples reaching into the GΩ range at fields as low as 5 Tesla [14,15].
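The Landau level spectrum quoted above is easy to evaluate numerically (here in SI units, so the factor 1/c of the Gaussian-units expression is absorbed); the Fermi velocity v_F ≈ 10^6 m/s is an assumed, commonly quoted value for graphene, not one taken from this work.

import numpy as np
from scipy.constants import hbar, e

# Graphene Landau levels |E_n| = sqrt(2 n hbar v_F^2 e B) in SI units.
V_F = 1.0e6  # m/s, assumed Fermi velocity

def landau_level_meV(n, B):
    """|E_n| in meV for Landau index n >= 0 at perpendicular field B in Tesla."""
    return np.sqrt(2 * n * hbar * V_F**2 * e * B) / e * 1e3

for B in (4.0, 14.0, 45.0):
    levels = ", ".join(f"{landau_level_meV(n, B):6.1f}" for n in range(4))
    print(f"B = {B:4.1f} T:  |E_n| (meV) for n = 0..3: {levels}")
# Note the sqrt(n) spacing, in contrast to the equally spaced levels of a
# conventional two-dimensional electron gas.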
Theoretically, various models of the symmetry breaking and ordering underlying this ν = 0 insulating state have been proposed. Most of the models fall under the framework of exchange-driven quantum Hall ferromagnetism that separates different sectors of the SU(4) spin-pseudospin space [16,17]. These include: a fully spin-polarized ferromagnet [20,23], a fully pseudospin-polarized charge density wave [22,24], a Kekule distortion with a spontaneous ordering of pseudospin [25,26,29], and a canted antiferromagnet [27]. An alternative approach is based on magnetic catalysis: long-range electron-electron interactions that induce an excitonic gap [28]. Experimental reports on the nonzero filling fractions [7,8] suggest that the excitations of the ν = 1 state carry no spin, while the Kosterlitz-Thouless insulating behavior at ν = 0 [11] is consistent with a Kekule distortion origin. The various models of the broken-symmetry states involve unique bulk spin/pseudospin textures and corresponding edge state configurations [20,21]. Thus transport measurements require careful comparison of the bulk and edge state conduction in order to answer questions related to the nature of the symmetry breaking at ν = 0. In this letter we investigate the spin response of the ν = 0 QH state in monolayer graphene by measuring the bulk and edge conduction as a function of in-plane magnetic field, using a high-mobility suspended graphene device and an on-substrate graphene Corbino device. Our experiments reveal a vanishing conductance at ν = 0, but neither device type exhibits a gap that increases with increasing in-plane field, suggesting that the ν = 0 state is not spin-polarized.
The suspended graphene devices are prepared using the methods described in reference [30]: after thermally evaporating Cr/Au electrical contacts onto the mechanically exfoliated graphene samples [5], a chemical etch in buffered hydrofluoric acid is performed to remove the SiO₂ under the graphene sample, leaving the whole device suspended approximately 200 nm above the SiO₂/Si substrate. An atomic force microscope (AFM) image of the device is shown in the inset of Fig. 1(c). DC current annealing is then performed at low temperature (T = 1.7 K) to remove residual impurities from the suspended graphene. Four-terminal transport measurements are conducted using conventional low-frequency lock-in techniques. The carrier density of the graphene is tuned by applying a back gate voltage V_g to the degenerately-doped Si substrate, with the magnitude of the tuned density determined using Hall measurements. The mobility of this annealed device is ∼80,000 cm²/V·s. In Fig. 1(a), we show the longitudinal conductivity σ_xx and Hall conductivity σ_xy versus back gate at B = 4 T normal to the graphene basal plane. As indicated by the vertical arrows, along with a clearly developed ν = 2 QH state, a strong ν = 0 and a developing ν = 1 state are observed as plateaus in σ_xy and suppression of σ_xx at the corresponding filling fractions. The appearance of the ν = 0 and ν = ±1 QH states indicates that the four-fold degeneracy of the ZLL is completely broken.
To discern whether the ν = 0 symmetry breaking is spin-active, we apply a sequence of tilted magnetic fields that fix the perpendicular magnetic field B_⊥ while varying the total magnetic field B_t. By fixing B_⊥, the magnetic length l_B = √(ħ/eB_⊥) and the Coulomb energy scale E_e−e = e²/(4πε₀ε_r l_B) are held constant, meaning the electron-electron and exchange interactions that underlie the ν = 0 state are unchanged. However, if this state is fully spin polarized, the current-carrying excitations will have net spins that are affected by changes in B_t via the Zeeman energy ΔE_z = gμ_B B_t, where g is the electron g-factor and μ_B is the Bohr magneton. At a fixed temperature, the changes in the carrier excitation energy will result in a change in the conductance observed at the ν = 0 filling factor. Thus, by tuning only the Zeeman energy and examining changes in the conductance, we can determine whether the activation of the ν = 0 state is spin-sensitive.
The results of measuring the insulating state of the suspended device at several different tilting angles are shown in Fig. 1(c), where the resistance maximum R_max is measured at the charge neutrality point V_g = V_D, at a fixed base temperature T = 1.6 K. Since the resistance of the ν = 0 QH state tends to increase rapidly as a function of B in [10,11,14,15], R_max is a good measure to probe this insulating state. Here we use a two-terminal current measurement with a constant voltage bias in order to eliminate any self-heating effects (≤ pW) and to maximize the measurable resistance range. At T = 1.7 K, we found that R_max increases from ∼10 kΩ up to 100 MΩ (comparable to the limit of our measurement set-up) as B_⊥ changes from 0 to 3 T. The tilting angle dependence of the R_max versus B_⊥ curves shows the following trend: while we do not observe appreciable dependence of R_max on in-plane magnetic field at lower values of the tilting angle θ (i.e., larger B_⊥/B_t ratio), there is an indication that R_max decreases at larger θ (i.e., smaller B_⊥/B_t ratio). This trend becomes most obvious for the largest tilting angle we measured, θ = 80.8°, corresponding to B_⊥/B_t , where we observe that the R_max versus B_⊥ curve is substantially lower than any other curve in the graph. The observed trend in the suspended device, i.e., decreasing R_max with decreasing B_⊥/B_t at fixed B_⊥, suggests that the ν = 0 gap decreases as B_t increases. This dependence can be viewed as strong evidence against a fully spin-polarized ordering of the ν = 0 QH state, as such ordering would result in an increase in the gap as B_t increases. The relative insensitivity of R_max to changes in angle at small tilt angles may be due to broadening induced by thermal smearing or disorder.
There are two obstacles to using suspended samples to draw more quantitative conclusions about the nature of the ν = 0 QH state. First, due to the mechanical instability of suspended samples, R_max drifts slightly with respect to V_g. Fig. 1(b) shows the conductance as a function of V_g measured at two different tilting angles. Although the overall behavior is consistent, the position of V_g where R_max occurs is slightly shifted. Even worse, this shift changes when the device is thermally cycled, making it difficult to estimate the energy gap from the thermally-activated behavior. Second, the four-/two-terminal device geometry measures both the bulk conductance and any possible edge conductance in parallel. This becomes a major source of ambiguity in distinguishing whether the observed insulating behavior originates from a bulk insulating state without edge conduction or from the localization of edge states by spin/pseudospin-flip scattering [19,20]. In order to avoid the mechanical instability and to isolate the bulk conductance, we employ an on-substrate Corbino geometry, a disk-shaped sample with coaxial contacts in which the current flows radially from an inner contact to an outer ring contact. This geometry not only eliminates any unknown edge effects that might interfere with determining the ν = 0 conductance, but is also insensitive to the formation of the known quantized edge conductances of other filling factors. The Corbino geometry thus directly probes bulk conduction, putting the ν = 0 insulator on an even footing with the bulk insulating character of every other filling factor [31].
The fabrication procedure for our Corbino devices is shown in Fig. 2(a). Monolayer graphene pieces are deposited on SiO₂ (300 nm)/Si substrates using established mechanical exfoliation techniques; Au/Cr ring-like electrodes are then fabricated by e-beam lithography (an optical image is shown in Fig. 2(b)), followed by deposition of a dielectric layer and a top Au/Cr plate contact connecting to the inner contact (as shown in Fig. 2(c)). The plate geometry connecting to the inner contact guarantees that any voltage applied to this contact will result in a uniform change in the graphene carrier density. To measure the bulk conductance of the graphene, we apply an AC voltage bias (V_bias) across the inner and outer contacts, and measure the current (I) using a current preamplifier and lock-in amplifier. The bulk conductivity is then given by σ_xx = (ln(r_out/r_in)/2π)(I/V_bias), where r_out and r_in are the radii of the outer and inner contacts, respectively. By changing the back gate voltage V_g, we can tune the carrier density in the graphene channel connecting the inner and outer contacts of the Corbino device. Fig. 2(d) shows the bulk conductivity σ_xx vs. back gate voltage V_g, at B = 0 T and 14 T, at temperatures below 7 K. The mobility of this particular Corbino device is ∼13,000 cm²/V·s, obtained from the zero-field resistance. At B = 14 T, the four-fold degenerate QH state filling factors ν = ±2, ±6, ±10 appear as vanishing σ_xx at their corresponding carrier densities. The gate capacitance of this device is estimated to be C_g/e = 7.1 × 10^10 cm⁻²V⁻¹ from the positions of the observed conductivity minima.
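The conversion from the measured Corbino current to bulk conductivity given above is a one-liner; the radii and measured values below are placeholders, not the actual device parameters.

import numpy as np

E2_OVER_H = 3.874e-5  # S, conductance quantum e^2/h

def corbino_sigma_xx(current_A, v_bias_V, r_in_m, r_out_m):
    """Bulk (sheet) conductivity sigma_xx = ln(r_out/r_in)/(2*pi) * I/V_bias."""
    return np.log(r_out_m / r_in_m) / (2.0 * np.pi) * current_A / v_bias_V

# Placeholder numbers, not the actual device geometry or measurement.
sigma = corbino_sigma_xx(current_A=2.0e-7, v_bias_V=1.0e-3, r_in_m=1.0e-6, r_out_m=5.0e-6)
print(f"sigma_xx = {sigma:.3e} S/sq = {sigma / E2_OVER_H:.2f} e^2/h")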
Since the mobility of the on-substrate Corbino devices is lower than that of the suspended devices, relatively higher magnetic fields are required to access the degenerately broken filling factors. As shown in Fig. 3, at low field (B = 11.5 T) well-defined ν = ±2 states are observed on both sides of the charge neutrality point, indicative of the four-fold QH degeneracy. As the magnetic field increases to 18 T, a dip of bulk conductivity appears at the charge neutrality point. This dip fully evolves and the current flow falls below the noise level at B = 30 T. This observation of a vanishing bulk conductivity is consistent with the formation of the ν = 0 QH state [31]. At the same magnetic field, the conductivity minima corresponding to the ν = ±1 filling factors are visible. At B = 45 T, the four-fold degeneracy at the zero energy level is completely lifted, and the LL splitting at ν = −4 that marks the degeneracy breaking of the n = 1 LL is apparent, similar to the previous observation [7]. In all measured devices, the magneto-conductance is strongly suppressed in the regime between the ν = −1 and ν = −2 filling factors and is not measurable within our experimental sensitivity, which remains not fully understood.
As with the suspended devices, we adjust the relative strengths of the Zeeman and Coulomb energies in the Corbino devices by tilting the field in order to explore the nature of the ν = 0 degeneracy breaking. In Fig. 4(a), σ_xx vs. filling factor ν is plotted at constant normal field (B_⊥ = 21 T) with the total field (B_t = B_⊥/cos θ) increasing. Taking the dielectric constant ε_r = 4, the characteristic Coulomb interaction energy at B_⊥ = 21 T is E_e−e = 740 K, while the Zeeman energy varies from E_z = 47 K at B_t = 35 T to E_z = 60 K at B_t = 45 T. As the Zeeman energy is increased, the behaviors of the ν = 0 and ν = 4 states are completely different. For the ν = ±4 QH state, the σ_xx minima decrease with increasing B_t, indicating that a spin polarization underlies this LL, a finding consistent with previous experiments on Hall bar devices [7]. In contrast, the conductance curves of the ν = 0 state coincide with each other as the total field is increased from B_t = 35 T to 45 T. The conductance minima are unvarying even in a magnified logarithmic-scale view, as shown in the middle inset of Fig. 4(a). The fact that these minima are independent, within disorder broadening, of changes in the in-plane field is also consistent with a state that is not fully spin-polarized, and adds further credence to the hypothesis that the ν = 0 symmetry breaking is not of spin origin. We also perform fine-tuned tilted-field measurements in a range where the change of Zeeman energy is larger (increased by 50%) and the ν = 0 minima are more sensitive to small changes in B_⊥. Fig. 4(b) shows log-scale σ_xx vs. filling factor ν at B_⊥ = 14 T and 15 T. As the normal field increases by ∼6%, there is a decrease in the bulk conductivity minima, showing that the ν = 0 state is not yet fully developed. Increasing the total field by ∼50% while fixing B_⊥, the minima display the same insensitivity to in-plane field as in Fig. 4(a), reaffirming that the excitations of the ν = 0 state carry no net spin. We note that we do not observe a decreasing R_max with decreasing B_⊥/B_t at fixed B_⊥ in the Corbino device, even though the range over which the Zeeman-to-Coulomb energy ratio is varied in the Corbino device is similar to that of the suspended device. The discrepancy between the behaviors could be understood as a consequence of the different disorder energy scales.
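The energy scales quoted above follow directly from the stated definitions; the sketch below reproduces them, taking g ≈ 2 for the electron g-factor (an assumption consistent with the quoted numbers) and ε_r = 4 as stated.

import numpy as np
from scipy.constants import e, hbar, epsilon_0, k as k_B, physical_constants

mu_B = physical_constants["Bohr magneton"][0]

def coulomb_scale_K(B_perp_T, eps_r=4.0):
    """E_ee = e^2 / (4*pi*eps0*eps_r*l_B) in Kelvin, with l_B = sqrt(hbar/(e*B_perp))."""
    l_B = np.sqrt(hbar / (e * B_perp_T))
    return e**2 / (4.0 * np.pi * epsilon_0 * eps_r * l_B) / k_B

def zeeman_K(B_total_T, g=2.0):
    """E_z = g * mu_B * B_total in Kelvin; g = 2 is an assumed electron g-factor."""
    return g * mu_B * B_total_T / k_B

print(f"E_ee at B_perp = 21 T : {coulomb_scale_K(21.0):.0f} K")   # text quotes ~740 K
print(f"E_z  at B_t   = 35 T : {zeeman_K(35.0):.0f} K")           # text quotes ~47 K
print(f"E_z  at B_t   = 45 T : {zeeman_K(45.0):.0f} K")           # text quotes ~60 K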
As to the ν = 1 QH state, experimental data of its tilted-field dependence is also shown in Fig. 4(a). The σ xx minima at this filling factor decrease as B t increases.
This observation implies that the origin of this state is due in part to a lifting of the real spin degeneracy. Combined with the observation for the ν = 0 state, it produces a symmetry-breaking picture of the ZLL in which a non-spin-polarized state forms at ν = 0 and a spin-polarized state with spin-flip excitations forms at ν = 1 [32].
We are also aware that the observation of the spin-active ν = 1 character is inconsistent with the observations of Jiang et al. [8], whose measurements implied that the excitations of the ν = ±1 QH states involve no spin flip. This raises the possibility that the excitations at ν = 1 and its ground state may depend on the specific disorder concentration in individual samples [16,18]. However, the fact that the insulating ν = 0 state does not respond to increasing in-plane magnetic fields in both suspended and Corbino devices, where disorder densities are very different, provides evidence that disorder effects do not alter our conclusion that the ν = 0 QH state is not spin-polarized over a wide range of disorder. | 2012-01-21T05:48:02.000Z | 2012-01-21T00:00:00.000 | {
"year": 2012,
"sha1": "f5764eecf38ee5c48af6dc2cbc2024d0a4b6b551",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.108.106804",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "f5764eecf38ee5c48af6dc2cbc2024d0a4b6b551",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
244986736 | pes2o/s2orc | v3-fos-license | Are Avoidance Goals the Right Prescription For a Pandemic? A COVID-19 Case Study
Background: Motivation scientists study goals, self-regulatory tools that are used to help people approach or avoid objects of desire or disdain. Purpose: Using these tools, motivation science can offer insights to guide behaviour and help individuals maintain optimal health and well-being during pandemics, including COVID-19. Results: Avoidance goals help guide behaviour away from negative objects like COVID-19, and are necessary in situations where survival is at stake. Formulating the goal of avoiding COVID-19 is therefore recommended during the pandemic. However, avoidance goals have inherent limitations, in that they tax one’s energy and well-being. To minimize these costs, the pursuit of approach sub-goals may be recommended, such as increasing social connection online or exercising outdoors (particularly prior to widespread vaccination). Conclusion: Adhering to the goal of avoiding COVID-19 prevents infection and saves lives when safe and effective vaccines and treatments are lacking. But avoidance goals have known costs that must be acknowledged and addressed. One solution is to pair avoidance goals with approach sub-goals to bolster mental and physical health while adhering to the ultimate goal of avoiding COVID-19, viral variants, and future contagions.
Introduction
A new zoonotic disease emerged at the close of 2019, caused by the novel coronavirus SARS-CoV-2 [1]. The virus spread rapidly and rampantly, and by mid-March 2020, this disease, COVID-19, had been declared a global pandemic [2]. Initially hampered by a paucity of pharmaceutical options like vaccines or effective treatments, individuals struggled to find the best prescription to maintain health. We look back on the early response to COVID-19 from the perspective of motivation science, and offer some suggestions for future outbreaks (or, in countries where vaccination is sparse, suggestions for the present).
A time for Avoidance?
To understand this phenomenon, we must first understand the central issue: COVID-19 is a disease; it is aberrant to normal human physiology. Therefore, the COVID-19 pandemic, at its core, is a health-related issue. True, the pandemic has engendered myriad downstream effects: on a macro level, jolting world economies and political institutions, and on a micro level, affecting plans and protocols regarding school closings and public gatherings. The scope of these issues belies the fact that at the heart of the pandemic is a simple, straightforward assessment: health and lives are at stake. Second, the pandemic is a decidedly negative event: it threatens the health and safety of countless millions. In its best light, COVID-19 could be viewed (by some) as health-neutral, dismissed as akin to the common cold. Former President Donald Trump, after his bout with the virus, tweeted: "Don't be afraid of Covid. Don't let it dominate your life" [3]. More consistent with the data (in our estimation), SARS-CoV-2 is a deadly virus, especially for those who are older and have certain co-morbidities [4], and has killed nearly five million people worldwide [5].
In light of this threat, how should one respond? What goals should one pursue? Here, motivation science offers guidance. Goals have been studied from ancient to modern times, informed by philosophical insights and scientific inquiries, and a few key concepts have emerged. Goals are consciously chosen (which distinguishes them from unconscious urges or propensities). Goals are forms of regulation that guide behaviour (which distinguishes them from wants and wishes). And, goals guide behaviour toward or away from objects (a term used broadly to capture objects, events, or possibilities) [6]. Some objects are positive and are pursued (approached), whereas other objects are negative and are shunned (avoided). The commitment to approach positive objects (move toward them) and avoid negative objects (move away from them) is a fundamental property of goals and goal pursuit [7]. This approach-avoidance dichotomy offers a useful lens to examine responses to the pandemic.
Notably, approach goals are generally preferred to avoidance goals with respect to guiding behaviour and fostering well-being. Structurally, an approach goal focuses on a positive possibility and offers precise directionmoving toward a desired object. Thus, successful goal pursuit results in achieving a desired (and previously absent) end-state. Successful pursuit also yields positive emotions and well-beingthe plaudits of success [8]. Not achieving an approach goal may leave the pursuer worse off temporarily, but the current absence of the desired state is thought to be a neutral launching pad toward further efforts [9].
Given the benefits of approach goals, one wonders if they would be ideal for navigating a pandemic like COVID-19. While the advantages of approach goals are well-documented, avoidance goals nonetheless seem a better fit, as the hub of regulation is decidedly negative (COVID-19 is a negative object, and the pandemic a negative event). In other words, it makes sense to use avoidance goals during a pandemic to guide behaviour and avoid disease. An approach goal focused on the disease itself is incongruent, as there is no positive object to move toward. We find no support for the adage: "What doesn't kill you makes you stronger." There are no touted benefits of infection, as far as we are aware, no physiologic improvements after infection that would qualify as a plus, with the possible exception of developing antibodies that help prevent future infection (though even in this case, the benefit is avoiding another negative). Approach motivation is about thriving; avoidance motivation is about surviving [10]. In a health crisis, surviving is the crucial end-state; in these circumstances, avoidance goals seem the best fit.
In practice, avoidance goals have already been implemented on macro (global) and micro (personal) scales. Prior to vaccines, avoiding COVID-19 centred around behavioural interventions like banning travel; closing schools, businesses, and places of worship; disbanding large gatherings; quarantining disease carriers and suspected disease carriers; issuing stay-at-home orders; and encouraging social distancing [11]. Is there evidence to suggest that these avoidance-based interventions, designed to avoid person-to-person viral spread, are effective? In a word, yes: such measures helped people stay safe during the pandemic. However, the regulation of avoidance behaviour is a double-edged sword. Avoidance behaviour saps energy, undermines well-being, and undercuts performance over time [17]. Avoidance regulation focuses on a negative object, and can evoke fear and anxiety. This heightened state can help one navigate threats (like navigating icy highways in winter, when avoiding a car accident helps drivers arrive safely at their destination; or, in the case of COVID-19, avoiding deadly microbes). But this mindset exacts a cost, and can feel "urgent and all-consuming" [16]. It is little surprise that air traffic controllers, whose job description focuses inherently on avoidance (ensuring planes evade mid-air collisions), suffer from a high rate of burn-out [18]. It is also little surprise that individuals in the pandemic have had worsening anxiety and mental health. In mid-March 2020, with the declaration of a national emergency in the United States, internet searches regarding acute anxiety spiked compared to historical levels [19]. A study conducted in late March and early April found depression symptoms had jumped more than three-fold in the U.S. [20]. A U.S. Census Bureau study echoed these results. Compared to a sample in 2019 (pre-pandemic), the prevalence of anxiety and depression climbed three times higher during the pandemic [21]. Spikes in mental health disorders have extended beyond the United States as well. A systematic review reported increased rates of anxiety and depression as a result of COVID-19 (in addition to stress and post-traumatic stress disorder) in the general populations of China, Spain, Italy, Iran, the US, Turkey, Nepal, and Denmark [22].
Avoidance goals have inherent limitations, as revealed by their structure. Given that the hub of regulation is a negative object, the best outcome is successfully avoiding that negative object and, hence, the best emotional outcome is relief [23]. Given that avoidance regulation is directed away from an object, an unanswered question remains: toward what object should one move? This type of regulation offers little guidance [9]. That does not mean avoidance behaviour should be eschewed; it means instead that the utility of avoidance behaviour may be limited to situations that represent dire threats to health and safety. Avoidance goals seem to be the right tool for times when danger is imminent and survival uncertain, when it is prudent to be on high alert. Survive today in order to thrive tomorrow, when danger has dissipated and safety is assured.
But if the costs of avoidance goals are so burdensome, are there ways to offset these costs? Motivation science answers in the affirmative. Avoidance and approach goals can exist in a hierarchical model, with a superordinate goal coupled with sub-goals [24]. Avoiding COVID-19 may be the superordinate goal, the unifying framework that guides behaviour, but sub-goals are possible that are not avoidant in nature. For example, prior to widespread vaccination, avoiding COVID-19 may have one practicing social isolation; to counterbalance this loneliness, an approach goal could be considered to promote healthy relationships (a positive object) through online interactions (via Zoom or FaceTime). Avoiding COVID-19 may prompt the avoidance of gyms; an approach goal to offset this loss may be to exercise and maintain a healthy body (a positive object) by walking or jogging outside, or riding a stationary bike inside. People have noted pandemic "fatigue" and burn-out; approach sub-goals may help to "replenish and reinvigorate" [16]. Approach goals, like those above, can help guide behaviour even as the crux of behaviour falls under the auspices of avoidance.
Conclusion
At its core, COVID-19 is a disease. At its worst, it represents a deadly assault on human health and physiology. The pandemic is a negative event, a threat to millions. During this and future pandemics, motivation scientists can offer insights to guide behaviour through their work on goals (regulatory tools by which people approach or avoid objects of desire or disdain). Avoidance goals are uniquely structured to help guide behaviour away from a negative object, and are necessary in situations where survival is at stake. Thus, formulating the goal of avoiding COVID-19 is recommended during the pandemic. Data suggests that adhering to this avoidance goal, at the societal and individual level, prevents infection and saves lives, particularly prior to widespread vaccination. But like any medical prescription, adherence entails risks and benefits. Avoidance goals come at a steep cost to energy, well-being, and long-term performance. To minimize these known "side-effects," a hierarchical model of goal pursuit is recommended: adding approach sub-goals to bolster mental and physical health while adhering to the ultimate superordinate goal of avoiding COVID-19, viral variants, and future contagions.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Ethical Approval
Not applicable.
Data Availability Statement
Data sharing is not applicable to this article.
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-12-09T17:54:26.764Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "7de7eb21a95429ad46aab4fef57894e1a54ff998",
"oa_license": "CCBY",
"oa_url": "https://www.scimedjournal.org/index.php/SMJ/article/download/384/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0e9aa0a6256fd3c58d9f92ebaf14d415cfed24b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
54719445 | pes2o/s2orc | v3-fos-license | Determination of Cardiac Ejection Fraction by Electrical Impedance Tomography using a Hybrid Heuristic Approach, a Simulation Study
An important parameter for analyzing the efficiency of the heart as a pump is the Cardiac Ejection Fraction (EF), which is clinically highly correlated with the functional status of the heart. Diverse non-invasive methods can be applied to measure EF, such as Computed Tomography, Magnetic Resonance, Echocardiography, and others. Nevertheless, none of these techniques can be used for continuous monitoring of this parameter. On the other hand, electrical impedance tomography (EIT) may be applied to accomplish this goal. In addition, low cost and high portability are features of EIT that justify the search for solutions involving this technique to monitor EF. EIT consists in reconstructing images of the conductivity distribution of the interior of a conductive domain by applying electric currents and measuring electrical potentials on the boundary of the body. Mathematically, EIT can be classified as a non-linear inverse problem. This work proposes a method for the continuous estimation of cardiac ejection fraction, addressing it as an optimization problem. The models used in our approach assume that recent two-dimensional magnetic resonance images of the patient are available, and use them to reduce the search space. Another important feature is the parametrization of the geometry of the internal inclusions of the domain, which also reduces the cost of the method. This work proposes a Hybrid Iterated Local Search (ILS) heuristic for the EIT inverse problem using the Levenberg-Marquardt Method as local search. Experiments are performed on two-dimensional images with synthetically generated data for the electric potentials. Two different current injection protocols are tested in these experiments and preliminary results are presented.
Introduction
Cardiac ejection fraction (EF) is an important parameter to analyze the efficiency of the heart as a pump. It indicates the amount of blood that is pumped from each ventricle in each heart cycle. In other words, EF is a measure of the blood fraction ejected from the ventricles in one heart cycle. Although it is possible to determine both the left and right ventricular ejection fractions, clinically it is more common to use only the ejection fraction of the left ventricle (EFLV), so the general term "ejection fraction" is often used to refer to the EFLV. By definition, the ejection fraction is calculated as EF = PV / EDV, where PV denotes the volume of blood pumped, given by the difference between end-diastolic volume (EDV) and end-systolic volume (ESV). Diverse non-invasive techniques can be applied to determine EF, such as echocardiography, cardiac magnetic resonance, and others. Although such techniques are able to produce high-definition images for accurate diagnostics, they cannot be used for continuous monitoring, due especially to their high costs. To reach this goal, an alternative technique could be Electrical Impedance Tomography (EIT), which has advantages in terms of portability and in not using ionizing radiation, besides its low cost.
Electrical Impedance Tomography consists in reconstructing conductivity distribution images of the inside of a body, based on current injection and potential measurement protocols, where the potential measurements are taken on the boundary of the domain. This technique has been widely applied in different fields, such as industrial monitoring [1], geophysics [2], and biomedical engineering [3,4]. In the context of the latter field, recent work [5] has discussed the viability of EIT for continuous monitoring of cardiac ejection fraction, and other related works [6][7][8] have shown preliminary results on the same subject. Such works deal with a 2-D model of the human torso, contemplating internal inclusions for the heart ventricles and the lungs. The lungs are considered in the studies since their low conductivities work as barriers to the electrical currents used in the experiments. The mentioned 2-D model of the human torso, with cavity inclusions for the lungs and heart ventricles, is used in the study presented here.
This work addresses the inverse problem of determining cardiac ejection fraction by means of EIT from an optimization point of view. Recent work has shown that the Levenberg-Marquardt Method (LMM) is well suited for this purpose [9]. However, LMM is a local search method, and the quality of the local optima obtained by that kind of strategy depends on the initial solution given to the technique. Global techniques, like Genetic Algorithms, have also been tested for the problem [10], but LMM has provided the best results so far. Hence, a natural evolution of the research is to try methods using multiple local searches, fed with different initial solutions, like Multistart Local Search or Iterated Local Search (ILS), which has some advantages over the former, as will be further discussed in this work. Therefore, our work proposes the application of the ILS heuristic to the EIT inverse problem, in order to investigate the impact of using this approach versus the classic version of Levenberg-Marquardt. The methods used in this study are presented in the next section, starting with the 2-D torso model.
Two-Dimensional Models for the Human Torso
The human torso model used in this work is the same used in [9]. It models the torso as a 2-D surface with five different regions: two of them represent the lungs, one for each heart ventricle, and the last one represents the rest of the torso. The shapes of the regions of interest are obtained by manual segmentation of magnetic resonance images, in two different phases of the heart cycle: end of systole and end of diastole. For simplicity, the shapes of the lungs and torso are considered constant during a heart cycle. Figure 1 illustrates the result of such manual segmentation.
Figure 1. Manual segmentation of a magnetic resonance image
After the segmentation, an important step in modeling is to represent the boundary lines of the regions by means of extended x-spline [11] curves with a minimum number of control points. There are 7 control points for the left ventricle and 8 control points for the right ventricle. As the goal of our method is to recover the ventricle shapes from electric potential measurements taken on the boundary of the body, and with each control point represented by two coordinates, there are 7 × 2 + 8 × 2 = 30 spline parameters (variables) to be estimated during the optimization process. In other words, the technique would have to find the set of values of the spline parameters that minimizes the geometric error in shape recovery. In addition, we apply a strategy to reduce the number of parameters. The main idea behind the strategy is to use only one parameter to define the position of each control point. Since the same control points are used in both the systolic and diastolic phases, it is possible to connect their positions in each phase through a line. Then, for each control point i, a linear interpolation, parametrized by a scalar t_i, is performed to determine intermediate positions between the two phases. In this convention, t_i = 0, ∀i, corresponds to the position of spline control point i at the end of systole, while t_i = 1, ∀i, corresponds to the position of spline control point i at the end of diastole. Given that, the goal is redefined to recover the cavity shapes by estimating the 15 parameters t_i, i = 1...15, instead of the 30 original ones.
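As an illustrative sketch only (the original implementation was written in C/C++ and Fortran77, and the function and variable names below are ours), this one-parameter interpolation of the control points could be expressed as:

import numpy as np

def interpolate_control_points(p_systole, p_diastole, t):
    # Linearly interpolate each spline control point between its end-systolic
    # position (t_i = 0) and its end-diastolic position (t_i = 1).
    # p_systole, p_diastole: arrays of shape (n_points, 2) with (x, y) coordinates.
    # t: array of shape (n_points,) with one scalar parameter per control point.
    p_systole = np.asarray(p_systole, dtype=float)
    p_diastole = np.asarray(p_diastole, dtype=float)
    t = np.asarray(t, dtype=float)[:, None]  # broadcast each t_i over the two coordinates
    return (1.0 - t) * p_systole + t * p_diastole

With the 7 + 8 = 15 control points of the two ventricles, the optimization variable becomes a vector t of length 15 instead of the 30 raw coordinates.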
Besides geometrical issues, it is also important to discuss the electrical issues of our model. The main feature that electrically identifies a biological tissue is its conductivity. The main factors that influence the electrical properties of biological tissues are presented by Grimnes [12]. Tissues can be classified into thirty different kinds according to their electrical properties [13] and can be grouped into four major groups: epithelium, muscle, connective tissue, and nervous tissue. The conductivity of a tissue can also be influenced by other "environmental" factors, like the frequency of the electrical current, presence of water, temperature, etc.
In this work, we have made some assumptions in order to simplify the problem. The first assumption is that the conductivity of a tissue is known, constant, and isotropic. The last one has to do with the kinds of tissue themselves: we assume there are three different tissue conductivities in our model, associated with the lungs, the heart cavities, and the torso. Such assumptions are important since biological tissues are very difficult to characterize, and even in the literature the reported electrical property values vary substantially.
For the tissues that compose the torso region, Bruder et al. [14] suggest working with a mean resistivity value to represent such region. The resistivity of air is 10^20 Ωcm, but it is difficult to determine the resistivity of a lung filled with air. Rush et al. [15] propose a scheme to represent heart cavities filled with blood, which comprises a simplified resistivity distribution for the blood tissue surrounded by a homogeneous material with resistivity ten times greater. Based on this, we have extended this scheme to represent lung regions filled with air. Table 1 presents some resistivity values found in the literature for our tissues of interest: lungs, blood, heart, and torso. In our experiments, we have taken the value of 1000 Ωcm for the torso and of 100 Ωcm for blood. For the lungs, we used two different values of the Ratio of Lung to Torso resistivity (RLT): RLT = 20, corresponding to 20000 Ωcm for lung resistivity, and RLT = 50, corresponding to 50000 Ωcm for lung resistivity. Subsections 2.2 and 2.3 present, respectively, the forward and inverse problems of EIT using the model described here.
The Forward Problem
The forward problem of EIT consists of calculating the electrical potentials on the external boundary of the torso generated by a current injection on a pair of electrodes. In the forward problem, the conductivity (or resistivity) distribution of the domain is known. In the model used in this work, the domain is divided into regions with different conductivities, as mentioned before. The electrical potential (φ) at every point must satisfy Laplace's equation,

∇²φ = 0 in each region of the domain,

subject to the boundary conditions

φ_L = φ_T and σ_L ∂φ_L/∂n = σ_T ∂φ_T/∂n on Γ_1,
φ_B = φ_T and σ_B ∂φ_B/∂n = σ_T ∂φ_T/∂n on Γ_2,
σ_T ∂φ/∂n = J_i on Γ_3^ie and σ_T ∂φ/∂n = 0 on the remainder of Γ_3,

where Γ_1 is the interface between the lung and torso regions; Γ_2 is the interface between the blood and torso regions; Γ_3 is the external boundary of the body; Γ_3^ie is the portion of Γ_3 on which the i-th electrode is placed; J_i is the electric current injected through the i-th electrode; and σ_T, σ_B, and σ_L are, respectively, the torso, blood, and lung conductivities.
The present work uses the Boundary Elements Method (BEM) [21] to solve the forward problem, with an implementation based on the one used in [22]. The next section describes the inverse problem associated with the forward problem presented here.
The Inverse Problem
From an electrical point of view, the inverse problem associated with EIT aims at generating an image of the electrical resistivity from measurements of electrical potential on the external boundary. From a geometric point of view, the aim is to recover the shape of the ventricular cavities via the estimation of the vector t, containing the parameters t_i, i = 1...15, as described in Section 2.1. This problem can be formulated as an optimization problem. Our goal is to minimize an objective function that computes the distance between the measured electrical potential values (taken from a pair of electrodes on the external boundary of the body) and the computed ones. The computed potential values depend on the heart cavity shapes, parametrized by the vector t, and they are calculated as described in Section 2.2. Therefore, the goal is to find the parameter vector t that minimizes

F(t) = Σ_{j=1..m} ( φ̄_j − φ(t)_j )² = ||R(t)||²,   (3)

where φ̄_j is the j-th measured electrical potential; φ(t)_j is the corresponding computed electrical potential, which depends on the heart cavity shape parametrized by t and is calculated as described in Section 2.2; m is the number of measurements taken, which depends on the current injection pattern; and R(t) is the so-called residual vector. It is important to note that, in this work, the values of φ̄_j, which are supposed to be measured, are synthetically generated. The optimization problem presented in this section is solved, in this work, using two different approaches, described in Subsections 2.3.2, 2.3.3 and 2.3.4.
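This objective can be sketched as follows (an illustrative Python fragment, not the original implementation; forward_potentials is a hypothetical handle to the BEM forward solver of Section 2.2):

import numpy as np

def residual(t, phi_measured, forward_potentials):
    # R(t): difference between the "measured" potentials and the potentials
    # computed by the forward solver for the cavity shapes parametrized by t.
    return np.asarray(phi_measured, dtype=float) - np.asarray(forward_potentials(t), dtype=float)

def objective(t, phi_measured, forward_potentials):
    # Least-squares objective ||R(t)||^2 to be minimized over t.
    r = residual(t, phi_measured, forward_potentials)
    return float(r @ r)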
Before discussing the optimization methods, it is important to pay attention to one aspect. The inverse problem is defined here in terms of minimizing the residual between measured and computed values, so there must be some criterion to determine such residuals, or errors. As we are interested in obtaining images of the inside domain of a body, we used the geometric error, presented in the next subsection, to determine the accuracy of the implemented techniques. Nevertheless, although this metric is used inside the optimization methods, for instance, to determine whether a solution is better than a previous one, in Section 3.3, which presents the computational results of our work, another kind of metric is used to compare those techniques.
Geometric Error
In the context of this work, we define the geometric error as the difference between the geometry of the inclusion obtained by an optimization method and the "real", known, geometry. In this work, the "real" geometries refer to the ones corresponding to the synthetically generated data. Intuitively, the geometric error can be evaluated by visual inspection of the generated images. However, when two or more images are too close to each other, such visual inspection cannot be precise enough to determine which one is more accurate. Besides, for the purpose of automating the calculation of the geometric error, visual inspection is ineffective. Thus, objective metrics are needed to accomplish such purposes.
The geometric error is calculated in the following way. The known mesh of the target inclusion is refined by dividing each element into ten parts. For each node of the mesh of the identified inclusion, the distance to the closest node of the refined target mesh is calculated. The obtained values are accumulated until all nodes of the identified inclusion are considered. The total value obtained is divided by the number of nodes of the boundary of the identified inclusion and by the target perimeter. The result is the geometric error. The obtained value is non-dimensional. A unitary value means that, on average, each node of the boundary of the identified inclusion is far from the target by the value of the target perimeter. The value of the error aims to reflect the quality of the obtained images in the optimization process. Further studies could contemplate other geometric features, like the area of an inclusion. More on the geometric error and some strategies to compute it in the EIT problem can be found in [23]. Having presented the metric used to determine the quality (or the error) of a minimizing solution, the next subsections present the optimization techniques themselves.
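A minimal sketch of this computation, assuming both boundaries are given as closed polygons with node coordinates ordered along the boundary (the names below are ours), could be:

import numpy as np

def refine_boundary(nodes, subdivisions=10):
    # Split each boundary segment of a closed polygon into `subdivisions` parts.
    nodes = np.asarray(nodes, dtype=float)
    refined = []
    for a, b in zip(nodes, np.roll(nodes, -1, axis=0)):
        for k in range(subdivisions):
            refined.append(a + (b - a) * k / subdivisions)
    return np.array(refined)

def geometric_error(identified_nodes, target_nodes):
    # Non-dimensional geometric error between the identified inclusion and the target.
    identified_nodes = np.asarray(identified_nodes, dtype=float)
    target_nodes = np.asarray(target_nodes, dtype=float)
    target_refined = refine_boundary(target_nodes)
    # Distance from each node of the identified boundary to the closest refined target node
    dists = [np.min(np.linalg.norm(target_refined - p, axis=1)) for p in identified_nodes]
    # Perimeter of the target (closed polygon)
    segments = np.roll(target_nodes, -1, axis=0) - target_nodes
    perimeter = np.sum(np.linalg.norm(segments, axis=1))
    return float(np.sum(dists)) / (len(identified_nodes) * perimeter)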
Levenberg-Marquardt Method and Related Work
The problem represented by Equation 3 is a non-linear least-squares problem, and different methods can be applied to solve it. Peters et al. [9] addressed such a problem with the Levenberg-Marquardt Method (LMM) [24], which is also the basis for our approach. LMM can be viewed as a modification of the Gauss-Newton method with the model trust-region approach. The minimizer of the non-linear least-squares problem is obtained iteratively in the method. At each step, updates to the approximation of t are given by the minimizer t+ of the following constrained linear least-squares problem:

minimize ||R(t_0) + J (t+ − t_0)||²  subject to  ||t+ − t_0|| ≤ δ_0,

where R(t) is the residual vector; t_0 is the current value of the minimization parameter vector; t+ is the updated solution vector; J is the Jacobian matrix with the derivatives of each element of the residual vector with respect to the optimization variables; and δ_0 is the initial radius value for the trust region. The vector t+, solution of the constrained minimization problem, is given by

t+ = t_0 − (J^T J + μ I)⁻¹ J^T R(t_0),

where I is the identity matrix. The parameter μ provides the modification of the Gauss-Newton method mentioned earlier, and its value changes from one iteration to another. A more detailed description of the Levenberg-Marquardt Method can be found in [24].
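A single damped update can be sketched as below (illustrative only; in the actual method the damping parameter μ and the trust-region radius are adapted from one iteration to the next, as detailed in [24]):

import numpy as np

def levenberg_marquardt_step(t0, residual, jacobian, mu):
    # One damped Levenberg-Marquardt update.
    # residual(t0) returns R(t0) with shape (m,); jacobian(t0) returns J with shape (m, n);
    # mu is the damping parameter (mu = 0 recovers the Gauss-Newton step).
    r = residual(t0)
    J = jacobian(t0)
    n = J.shape[1]
    # Solve (J^T J + mu I) * step = -J^T R(t0)
    step = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ r)
    return t0 + step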
In this work, LMM is used in two different ways. First, a set of independent executions of LMM, each one using a different, randomly chosen initial guess for t, composes a traditional Multistart Local Search method based on LMM. In the second scheme, LMM is used as the local search method of the heuristic called Iterated Local Search (ILS) [25].
A more detailed discussion of the experimental setup and other related issues is presented in the next sections. For now, we limit the discussion to the fact that Levenberg-Marquardt was the method chosen to implement the local search, in spite of other alternatives, due to its promising results shown in [9]. Other previous works have adopted different alternatives, such as Powell's method [26], Genetic Algorithms [27,28], and the Feasible Arc Interior Point Algorithm (FAIPA) [29], but LMM has been shown to be the best alternative so far.
In the next subsection, we describe the ILS metaheuristic, used to compose the hybrid approach proposed in this work.
Iterated Local Search
ILS is a template for the development of a heuristic, i.e., it is a metaheuristic. The ILS template defines that first a local search is applied to an initial solution. Then, at each iteration, a perturbation of the obtained local optimum is performed, followed by a local search applied to the perturbed solution, resulting in a new local optimum. Finally, this new local optimum is subjected to some acceptance criteria and replaces the old one if it meets certain predefined conditions. This process is repeated until some stopping criteria are met. Such a process is described by Algorithm 1.
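The generic template can be sketched as follows (an illustrative fragment; the concrete local search, perturbation, and acceptance criteria used in ILS-LM are described in the next subsections):

def iterated_local_search(s0, local_search, perturb, accept, stop):
    # Generic ILS loop: the local search is treated as a black box.
    history = []                           # search history (a k-d tree in ILS-LM)
    s_best = local_search(s0)              # first local search, applied to the initial solution
    while not stop():
        s_pert = perturb(s_best, history)  # large move away from the current local optimum
        s_new = local_search(s_pert)       # new local optimum
        s_best = accept(s_best, s_new, history)
    return s_best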
Before presenting our implementation of the ILS template, it is necessary, though, to shed some light on a few details of the ILS method.
• Local search. Any method, deterministic or stochastic, can be used as the local search in the ILS metaheuristic. That method is treated from a black-box point of view. It is important to note that population-based heuristics, like Genetic Algorithms, are not suitable for this purpose, in principle, since the local search is usually a single-solution-based method, like Levenberg-Marquardt.
• Perturbation method. This should be a large move of the current solution, in order to provide diversification to ILS solutions and as an attempt to push the search to another basin of attraction. It is important that the perturbation method preserves some part of the given solution while strongly perturbing the rest. This is intended to keep some "good" information and, at the same time, to diversify the obtained solutions.
• Acceptance criteria. These define the conditions that a new solution (local optimum) has to satisfy to replace the current one. Next, we present our implementation of ILS using Levenberg-Marquardt as the local search method, as well as the perturbation method and acceptance criteria used. We call our implementation ILS-LM.
Hybrid ILS-LM Heuristic
To define our ILS implementation, we need to determine some important points. The first one, and most important of all, is that we set the LM method as the local search procedure. We also propose a perturbation method that guarantees a minimum of diversification of the local optima. This perturbation method, which we call K-dPerturb, is described next. Finally, we have to establish the acceptance and stopping criteria. Algorithm 2 illustrates the choices made in this work. Before continuing, an important observation is that, for simplicity, we omitted the passing of the "measured" values (φ̄_j in the inverse problem formulation) as parameters to the procedures described by the algorithms presented in this section, but one should keep in mind that they are needed to calculate the precision of the methods used. Having observed this, we can continue with the analysis of the ILS-LM heuristic.
SimpleAcceptance implements a criterion in which the candidate solution s*′ replaces the current best solution s* if its relative error is lower than that of the current solution. Accepted or not, every considered solution s*′ is added to the search history, implemented with a k-d tree data structure [30,31], described later. This approach focuses only on intensification, i.e., it aims only at the quality of solutions. In our proposal, diversification aspects are provided inside the perturbation method. Thus, K-dPerturb is a partially random method using the same k-d tree structure as SimpleAcceptance to avoid generating repeated or very similar solutions, which could lead to the same local optima. As already mentioned, the k-d tree structure also works, in this case, as a search history. High-level algorithms for SimpleAcceptance and K-dPerturb are presented, respectively, in Algorithms 3 and 4.
It is worth observing that we have defined in the implementation that a node of the k-d tree can be expanded when the distance between two solutions belonging to that node is greater than a given precision ε_0, which was kept at the value 10⁻² during our experiments. Also, we kept constant the values perc = 0.15 and maxit = 11, the maximum number of performed iterations. The choice of maxit = 11 was due to how we compared ILS-LM and LMM, and is discussed later. The steps related to the perturbation of a control point p, that is, the process comprised by lines 6 to 16 in Algorithm 4, implement a way of disturbing such a control point in a controlled manner. The basic idea is simple and consists in producing a new value for the parameter t_i corresponding to that point. In the description of the perturbation method, the terms p and t_i are used interchangeably.
These controlled perturbations work as follows. After a control point p is selected to be perturbed, its two neighbors are identified (p_left and p_right). Then, a line segment connecting these neighbors is traced (segRef), and the orthogonal projection of p onto segRef is obtained (p_orth). These are the preparation steps. Next, the method calculates limits for the perturbation. Those limits are calculated so as to be dimensionless, just like the parameters t_i. They are also intended to promote as much perturbation as possible, but without producing values too far from a feasible geometry, so the maximum normalized proportion between the distances d1_p, d2_p, and d3_p is used as this limit. The resulting factor has the desired features: it is dimensionless and promotes good perturbations, without too much relaxation. Figure 3 illustrates the distances used in the calculation of the perturbation factor. The k-d tree structure used as the search history is presented in the next subsection.
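One possible reading of this procedure is sketched below (illustrative only; in particular, taking d_p as the maximum of the three distances normalized by their sum is our interpretation of the "maximum normalized proportion", and the function name is ours):

import numpy as np

def perturb_control_point(t_i, p, p_left, p_right, rng):
    # Controlled perturbation of one control-point parameter t_i.
    p, p_left, p_right = (np.asarray(v, dtype=float) for v in (p, p_left, p_right))
    seg = p_right - p_left                            # reference segment between the neighbors
    alpha = np.dot(p - p_left, seg) / np.dot(seg, seg)
    p_orth = p_left + alpha * seg                     # orthogonal projection of p onto the segment
    d1 = np.linalg.norm(p - p_orth)
    d2 = np.linalg.norm(p_orth - p_left)
    d3 = np.linalg.norm(p_orth - p_right)
    d_p = max(d1, d2, d3) / (d1 + d2 + d3)            # dimensionless perturbation limit (assumption)
    factor = rng.uniform(-d_p, d_p)
    return t_i + factor                               # the new value may fall outside [0, 1]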
K-d trees
A k-d tree [30,31] is a data structure for organizing points in a k-dimensional space by means of establishing partitions of that space. In this work, the space in question is the search space of the optimization method, i.e., the space composed of the feasible values of the vector t. From a computational point of view, a k-d tree can be implemented as a binary tree with good performance in retrieving information. Each node of the current state of the tree represents a point in the space. If the node is not a leaf, its geometric interpretation can be thought of as implicitly generating a splitting hyperplane dividing the space into two parts.
Every node in the tree is associated with one of the k dimensions. The direction of the splitting hyperplane is orthogonal (perpendicular) to the axis corresponding to that dimension. This way, all points of the space whose coordinate value in the selected dimension is smaller than that of the node lie in the left subtree, while the points with values greater than that of the node lie in the right subtree. Therefore, the hyperplane is the region of the space with that value in the selected dimension, and its normal is the corresponding axis.
As there are different possible ways of choosing the order of axis selection, there are also different possible ways to construct k-d trees. The most common one consists in choosing the axes in a cyclic manner, returning to the first axis after the last one has been chosen. The order of the axes is usually predefined. For example, in the R³ space, one could select the x-axis for the root node, then the y-axis for the children of the root, and then the z-axis for the grandchildren of the root, returning to the x-axis for the next generation, to the y-axis for the generation after that, and so on. Figure 4 shows an example of both the generated data structure and the corresponding space division, for a particular case in R² where the points in Table 2 are inserted into the structure in the order they appear in that table. Table 2. Sequence of points inserted into the k-d tree example.
In our implementation, the use of a k-d tree provides a method for fast retrieval of the spatial position of a stored local minimum (a vector t representation), as well as the possibility of prioritizing different regions of the space when perturbing a solution. If a perturbation generates a new solution that is not far enough (such a criterion is determined by the parameter ε_0) from an existing point in the tree, this perturbation is not considered and a new one is generated. Otherwise, if a new solution is accepted by the distance criterion, it can be placed in the tree, splitting the search space into new, smaller portions that can be explored later.
Summing up, the k-d tree structure is a log of all intermediate solutions generated during the execution of ILS-LM. That is, it keeps all local optima generated by each Levenberg-Marquardt (local search) execution and also all the solutions returned (approved) by K-dPerturb. Having discussed this last aspect of our proposal, in Section 3 we discuss the experiments performed in this work, as well as the obtained results.
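A simplified sketch of such a history structure is given below (illustrative only; the minimum-distance test here is applied along the insertion path rather than as a full nearest-neighbor query, and the class and method names are ours):

import numpy as np

class KDTreeHistory:
    # Minimal k-d tree used as a search history: cyclic axis splitting plus a
    # minimum-distance test before a new solution (vector t) is stored.
    class Node:
        def __init__(self, point, axis):
            self.point = np.asarray(point, dtype=float)
            self.axis = axis
            self.left = None
            self.right = None

    def __init__(self, k, eps0=1e-2):
        self.k = k          # dimension of the search space (15 for the vector t)
        self.eps0 = eps0    # minimum allowed distance between stored solutions
        self.root = None

    def insert(self, point):
        # Insert `point` unless it lies closer than eps0 to a solution met on the way down.
        point = np.asarray(point, dtype=float)
        if self.root is None:
            self.root = self.Node(point, 0)
            return True
        node = self.root
        while True:
            if np.linalg.norm(point - node.point) < self.eps0:
                return False    # too close to a known solution: reject the new point
            side = 'left' if point[node.axis] < node.point[node.axis] else 'right'
            child = getattr(node, side)
            if child is None:
                setattr(node, side, self.Node(point, (node.axis + 1) % self.k))
                return True
            node = child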
Results
Before looking into the experiments performed and their corresponding computational results, it is important to briefly discuss a factor that directly influences the experiments: the stimulation patterns.
Stimulation Patterns
The choice of the current injection and electrical potential measurement protocols is an important aspect of studying the EIT problem. The problem is ill-conditioned, so the generated image is very sensitive to the choices made. Nevertheless, this work does not focus on the study of such protocols and measurements; a deeper discussion of the subject can be found in [32]. We limit ourselves here to testing the same two patterns used in [9]. The first one is called diametrical. It is called this way because of an analogy with a circular domain, where the electrodes used to inject current would be diametrically opposed. In this pattern, eight different cases of current injection are taken. For each case, there are thirteen measurements of potential. This yields 13 × 8 = 104 measurements.
The other pattern, called alternative, is an attempt to explore the region of interest better than the other regions. Hence, in this pattern, the electrodes used to inject current are placed near the heart. These electrodes are also called driven electrodes. There are six cases of current injection with thirteen measurements each, which yields 13 × 6 = 78 measurements. Figure 5 presents diagrams for both patterns. Each double-arrow line indicates a pair of driven electrodes in each case of current injection.
Once again, it is important to note that the terms "measure" or "measured", in the context of this study, refer to synthetically generated values produced by numerical methods. The next section presents the experimental setup for the tests performed.
Experimental Setup
As mentioned before, the experiments performed in this work aim at reducing the geometric error, which is directly related to minimizing the error in determining EF. To do so, we need to establish values of EF that the tested techniques have to find. We call these values target values. Such target values were determined as described next.
For the two-dimensional model used in the present work, the areas of the transversal section of the heart cavities were assumed to be proportional to their volumes, that is, a cylindrical approximation is used, so that EF is calculated by

EF = (EDA − ESA) / EDA,   (5)

where EDA stands for the area of the transversal section of the ventricle at the end of diastole, while ESA stands for the area of the transversal section of the ventricle at the end of systole. Applying Equation 5 to values obtained from the segmentation of MR images taken at the end of systole and at the end of diastole, we have calculated that the EF of the left ventricle is 59.24%, while the EF of the right ventricle is 29.95%.
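Assuming the segmented (or spline-generated) boundaries are sampled into closed polygons, this computation can be sketched as follows (illustrative only):

import numpy as np

def polygon_area(vertices):
    # Area of a closed polygon via the shoelace formula; vertices is an (n, 2) array.
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def ejection_fraction(diastole_vertices, systole_vertices):
    # EF under the cylindrical approximation: (EDA - ESA) / EDA.
    eda = polygon_area(diastole_vertices)
    esa = polygon_area(systole_vertices)
    return (eda - esa) / eda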
An artificial cardiac dysfunction was then synthetically generated. Such dysfunction consists in altering the cardiac cycle by making the end-systolic volume greater than normal, while the end-diastolic volume remains unaltered. This configures a new heart cycle, in which the EF of the left ventricle is 33.01% and the EF of the right ventricle is 16.19%. These are the target values to be estimated by the optimization methods used here.
Once the target values were defined, we needed to determine initial guesses for the minimization techniques. Two different initial guesses for the vector t were used. The first one corresponds to the shape of the ventricles at the end of diastole (t_i = 1, ∀i), while the second corresponds to the shape at the end of systole (t_i = 0, ∀i). In both cases, they refer to the values of a normal heart. A comparison of the target values and the initial guesses can be viewed in Figure 6.
Altogether, eight different experimental setups were submitted to LM and ILS-LM. Each of these setups is composed of one of two possible values for the Ratio of Lung to Torso resistivity (RLT = 20 or RLT = 50); one of two possible stimulus patterns (diametrical or alternative); and one of two possible initial guesses (t_i = 0 or t_i = 1). Thus, there are 2 × 2 × 2 = 8 different experimental setup combinations.
Computational Results
The experiments were set up as described in the last subsection. In addition, the other configurations used in this work were already mentioned, namely ε_0 = 10⁻², perc = 0.15, and n = 11 for our ILS-LM method. One should remember that the first local search performed by ILS-LM is executed outside the loop, so we have a total of 1 + 11 = 12 LM executions inside the metaheuristic. In order to make a fair comparison, we executed the LM algorithm 12 times, each one using a different, randomly chosen initial guess. One of the 12 Levenberg-Marquardt executions receives the same initial guess as the one used for the ILS approach. The rest of them receive randomly perturbed (in 15% of the control points) versions of this initial approximation. The perturbation used to generate these initial approximations is the same used inside K-dPerturb, as described in Subsection 2.3.4.
As already mentioned, the metric used here to compare the optimization methods (relative errors with respect to the target EF values) is not the same as the one used inside the implementations to measure the quality of solutions (geometric error). The first one is better suited to the analysis performed in this section, while the second can be more easily applied inside the algorithm. As the two metrics are directly correlated, this change of measurement does not affect the conclusions drawn. The relative error of the ejection fraction is computed as

Δ% = 100 × |ẼF − EF| / EF,

where Δ% is the relative error, ẼF is the ejection fraction calculated from the values obtained by the optimization techniques, and EF is the target value of the ejection fraction. Such target values are 16.19 for the right ventricle and 33.01 for the left ventricle, as mentioned in Subsection 3.2. The value of ẼF is calculated according to Equation 5, using the areas of the geometric shapes resulting from the solutions returned by each minimization technique tested.
As the hybrid technique returns only its best solution, we did the same for the set of executions of classic LM: to compare results, we consider only the best result of the twelve executions of LM. Table 3 summarizes those results. One can see that, in general, the diametrical pattern presents better results than the alternative pattern. The only exception is the case of the left ventricle with RLT = 50 for classic Levenberg-Marquardt. Since the diametrical pattern uses more pairs of electrodes, this may indicate that more studies should be conducted toward finding the best injection protocol with the minimum number of measurements taken. Another important observation is that, also with only one exception (right ventricle, RLT = 20, diametrical pattern), the hybrid approach found better results than classic LM.
Concerning the initial guesses, for the diametrical pattern, all results using t_i = 1, ∀i (end of diastole) were better than those using t_i = 0, ∀i (end of systole). In some cases, like the right ventricle with RLT = 50, the improvement was significant. For the alternative pattern, the end of diastole was a better initial guess in only three out of eight cases. Hence, the best initial guess may vary with the protocol of current injection and measurement used.
Concerning the Ratio of Lung to Torso resistivity, the value RLT = 20 showed better results for both techniques when considering the diametrical pattern, once again with only one exception (right ventricle, t_i = 1, ILS-LM). The opposite occurred when considering the alternative pattern.
In terms of computational cost, each execution of classic LM took about 25 minutes for the diametrical pattern and about 20 minutes for the alternative pattern, which yields nearly 5 hours of execution for the more expensive case (diametrical) and about 4 hours for the cheapest one (alternative). The execution of ILS-LM was a little faster: around 4 hours and 30 minutes for the diametrical case and around 3 hours and 45 minutes for the alternative pattern. Such tests were run on a machine with an Intel Core i5 at 2.8 GHz and 4 GB of RAM, and the methods were implemented in the C/C++ and Fortran77 languages.
Conclusions
Comparing the different protocols, the behavior of the diametrical one seems to be more stable. The results of the diametrical pattern were more accurate than those of the alternative one, but at higher cost and using more measurements. However, the reduction in execution time of about 20% obtained by the alternative pattern justifies further investigation into the topic of stimulation patterns.
The most important conclusion of this work concerns the optimization technique used. The Iterated Local Search implementation improved the quality of the obtained solutions and reduced the execution time of the inverse problem. The results suggest that this new hybrid method explores the search space in a more efficient way, probably due to the perturbation procedure that allows diversification of the solutions while prioritizing regions not yet explored (due to the use of k-d trees). In addition, the fact that variations of previously found local optima are used to feed the local search procedures might have provided faster convergence of the subsequent local searches, which would explain why this new hybrid method was faster than the traditional Multistart Local Search method using LM.
Therefore, the results suggest that this new hybrid method, ILS-LM, is a promising technique for the solution of the inverse problem associated with Electrical Impedance Tomography. Considering also the fact that some aspects of ILS-LM were implemented in a very simple fashion, like the SimpleAcceptance criterion or the stopping condition, there is much to be done toward further improving the cost vs. benefit relation of the techniques used in the EIT inverse problem. In addition, future work should also investigate more intelligent and sophisticated perturbation and acceptance methods than those proposed in this work.
Figure 2
Figure 2 illustrates the basic scheme on which Iterated Local Search is based. A last important observation about ILS is that it differs from a Multistart Local Search, where initial guesses are usually chosen randomly. ILS presents an improvement, since it is an iterative process where local searches are performed using perturbed versions of previously found local optima.
Figure 3 .
Figure 3. Distances used in perturbation factor calculation.
Figure 4 .
Figure 4. Left: space partitions and right: generated k-d tree for the point sequence in Table 2
Table 1 .
Resistivity values for biological tissues found in literature.
Algorithm 1: Template for the Iterated Local Search algorithm
input : An initial solution s_0
output: Best solution found
// Apply a predefined local search method
1 s* ← local_search(s_0);
2 repeat
3     s′ ← Perturbation(s*, search_history);
4     s*′ ← local_search(s′);
5     s* ← Acceptance(s*, s*′, search_history);
6 until Stopping criteria;
Algorithm 3: Simple acceptance method
input : s*: current best solution
input : s*′: candidate to be the new current best solution
input : k-dTree structure, with the history of all s*′ found
output: Solution s*, the updated current best solution
1 n ← node of the k-dTree where s*′ should be placed;
2 if n can be expanded then
3     expand n;
4     insert s*′ in the k-dTree;
5 if error(s*′) < error(s*) then
6     s* ← s*′;

Algorithm 4: K-d Perturbation method
input : s*: current best solution
input : k-dTree structure, with the history of all s*′ found
input : perc and ε_0: the same as in Algorithm 2
output: Solution s′ to be provided as the initial solution for the Levenberg-Marquardt local search
1 n ← k-dTree node corresponding to s*;
2 accept ← false;
3 repeat
4     {ts} ← randomly selected perc percent of the t parameters of s*;
6     foreach control point p associated with a parameter t_i in {ts} do
7         p_left ← neighbor of p in the anti-clockwise direction;
8         p_right ← neighbor of p in the clockwise direction;
9         refSeg ← line segment from p_left to p_right;
10        p_orth ← orthogonal projection of p onto refSeg;
          // Perturbation factor calculation steps
11        d1_p ← dist(p, p_orth);
12        d2_p ← dist(p_orth, p_left);
13        d3_p ← dist(p_orth, p_right);
15        perturbFactor ← randomly selected value in [−d_p, d_p];
          // Perturb: the new value of t does not need to be in [0, 1]
16        t ← t + perturbFactor;
      // Validation of the diversification phase
17    n′ ← k-dTree node corresponding to s′;
18    if n′ ≠ n then
19        insert s′ in the k-dTree;
20        accept ← true;
28 until accept = true;
Table 3 .
Relative errors obtained in the set of experiments. | 2018-12-14T18:09:57.294Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "acfc8fe8f01128a446e1a82e35f835aba510643c",
"oa_license": "CCBY",
"oa_url": "http://pdf.blucher.com.br/mechanicalengineeringproceedings/10wccm/19371.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "acfc8fe8f01128a446e1a82e35f835aba510643c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
52284076 | pes2o/s2orc | v3-fos-license | Current understanding of magnetic resonance imaging biomarkers and memory in Alzheimer's disease
Alzheimer's disease (AD) is caused by a cascade of changes to brain integrity. Neuroimaging biomarkers are important in diagnosis and monitoring the effects of interventions. As memory impairments are among the first symptoms of AD, the relationship between imaging findings and memory deficits is important in biomarker research. The most established magnetic resonance imaging (MRI) finding is hippocampal atrophy, which is related to memory decline and currently used as a diagnostic criterion for AD. While the medial temporal lobes are impacted early by the spread of neurofibrillary tangles, other networks and regional changes can be found quite early in the progression. Atrophy in several frontal and parietal regions, cortical thinning, and white matter alterations correlate with memory deficits in early AD. Changes in activation and connectivity have been detected by functional MRI (fMRI). Task-based fMRI studies have revealed medial temporal lobe hypoactivation, parietal hyperactivation, and frontal hyperactivation in AD during memory tasks, and activation patterns of these regions are also altered in preclinical and prodromal AD. Resting state fMRI has revealed alterations in default mode network activity related to memory in early AD. These studies are limited in part due to the historic inclusion of patients who had suspected AD but likely did not have the disorder. Modern biomarkers allow for more diagnostic certainty, allowing better understanding of neuroimaging markers in true AD, even in the preclinical stage. Larger patient cohorts, comparison of candidate imaging biomarkers to more established biomarkers, and inclusion of more detailed neuropsychological batteries to assess multiple aspects of memory are needed to better understand the memory deficit in AD and help develop new biomarkers. This article reviews MRI findings related to episodic memory impairments in AD and introduces a new study with multimodal imaging and comprehensive neuropsychiatric evaluation to overcome current limitations.
Introduction
Alzheimer's disease (AD) is a progressive neurodegenerative disorder resulting from pathological changes which typically spread through brain networks in a predictable pattern. AD pathology leads to early decline in memory, and some pathology can be detected years before measurable cognitive or functional change. At present, there are no disease-modifying treatments, and symptomatic treatment is limited in efficacy. Trials of novel therapeutics increasingly target the earliest brain changes, when a disease-modifying trajectory could potentially result in reduction or elimination of clinical impact. Memory measures remain an important way of assessing such clinical impact and are required in clinical trials in the United States [1].
Accurate diagnosis of AD was, until recently, confirmed only at autopsy. Today, there are several imaging biomarkers measuring neurodegeneration and amyloid β (Aβ) deposition in the brain to support the diagnosis [2]. Atrophy on structural magnetic resonance imaging (MRI), hypometabolism on fluorodeoxyglucose positron emission tomography (FDG-PET), and increased levels of cerebrospinal fluid (CSF) total and phosphorylated tau are used to assess neurodegeneration. CSF Aβ42 and Aβ PET, on the other hand, are used to assess Aβ pathology. Preclinical studies focus on groups at risk for AD, as defined by apolipoprotein E (APOE) status, or examine cognitively normal control (CNC) performance in the context of other AD biomarkers, such as CSF Aβ. Large, shared, multisite, longitudinal multimodal data sets such as the AD Neuroimaging Initiative and similar studies initiated in Asia, Europe, and Australia allow for widespread exploration of structural and functional magnetic resonance imaging (fMRI) and PET data in addition to clinical, cognitive, and fluid biomarker data across the spectrum of disease. While there are several limitations, these data sets are an important resource in understanding imaging biomarkers in AD.
Memory is a complex construct. AD has an early and specific impact on episodic memory (i.e., the ability to learn and remember new information) [3], which can broadly be subdivided into encoding (or learning), recall, and recognition. Different types of stimuli (e.g., words, faces, and shapes) and memory tests (e.g., single trial and multi-trial presentations, free and prompted recall) can be used to detect deficits in these aspects of memory, and typically used measures often differ between clinical and research settings. Nonetheless, many studies of AD MRI biomarkers and biomarker candidates have included memory measures as correlates or validating factors.
At present, despite exploration of imaging biomarkers for AD, few have become widely accepted and approved for clinical use, and most remain experimental. In this targeted review, we focused on MRI studies. Following the conceptualization of AD as a biological and clinical continuum by Aisen et al [4], we assessed the MRI findings within preclinical (clinically normal individuals with evidence of AD pathology), and clinical (mild cognitive impairment [MCI] or prodromal AD, and AD dementia [ADD]) phases of AD. The transition between these phases is subtle, and individuals may report cognitive decline even when neuropsychological testing does not suggest any impairment. As episodic memory is the first cognitive domain to be affected along the course of AD, we aimed to investigate the association between MRI findings and episodic memory performance specifically. Current diagnostic criteria of MCI (prodromal AD) and ADD are based on clinical history, neuropsychological testing, and neurologic and psychiatric examinations [5,6]. Imaging methods, CSF, and blood tests are used only to support the diagnosis and to exclude other dementia causes. Nevertheless, subtle findings on MRI have been reported years before the onset of clinical symptoms. Thus, imaging findings correlating with the clinical profile may help identify underlying mechanisms and therapeutic targets for the debilitating memory deficit in AD.
Structural MRI
Aging is associated with a slow decline in both white matter (WM) and gray matter (GM) volumes, and this atrophy rate is increased in AD [7]. Although GM atrophy has been more frequently assessed in AD, structural MRI approaches also allow for the assessment of cortical thickness, as well as shape and WM alterations. This section will focus on studies investigating the relationship between episodic memory performance and structural changes in GM and WM using different imaging analysis approaches ( Table 1). Structural differences between CNC and participants within AD spectrum without any episodic memory associations are beyond the scope of this review and will not be discussed.
GM changes
Hippocampal atrophy is included in the 2011 National Institute on Aging criteria for ADD and MCI due to AD [3,5]. Before the advent of Aβ PET imaging, hippocampal volumetric changes, which can be determined noninvasively and relatively cheaply using MRI, were one of the earliest detectable imaging changes in AD. These changes can be quantified using NeuroQuant, a Food and Drug Administration-approved image processing tool [42]. Decline in hippocampal volume and thickness has been consistently associated with memory deficits in the AD continuum. In preclinical AD, hippocampal and entorhinal cortex volume, and hippocampal and parahippocampal thickness have been associated with verbal memory [9,12,30]. Associations have also been reported between reduced medial temporal lobe (MTL) volume in CNC and both AD risk factors and future memory decline [8]. Further along the course of the disease, in MCI and ADD, decline in hippocampal volume and MTL thickness was associated with worsening in verbal memory [13,16,19,21,[23][24][25][26][27]29,[33][34][35]39,40]. Although less extensively studied, visual memory has been associated with hippocampal volume in amnestic MCI (aMCI) [39]. Studies of hippocampal subregions revealed that CA1 volume declines within the hippocampus were particularly related to recall performance in aMCI and ADD [26,29,37].
With time, GM changes in AD spread outside the MTL. Extratemporal regions implicated in episodic memory decline include the posterior cingulate gyrus (PCG)/precuneus [28,30,31] and middle frontal gyrus [27,28]. Both atrophy and thinning of these regions were associated with memory decline. In MCI patients, who converted to ADD over time, decreased inferior frontal gyrus volume was associated with the verbal memory decline [38], suggesting extratemporal involvement may be predictive of disease progression.
WM changes
While AD is a disease primarily associated with GM loss, concomitant WM change has a role in cognitive expression. Diffusion tensor imaging metrics characterizing brain WM integrity are commonly affected in the AD continuum. Increased WM integrity for the whole brain was associated with better memory performance in CNC, MCI, and ADD, suggesting whole brain fractional anisotropy might be an overall marker of severity, rather than a specific measure [15,36]. Genetic status may mediate the relationship between MRI findings and cognition. In APOE ε4 carriers, loss of entorhinal WM integrity was related to worse memory performance [10]. However, other factors such as lower baseline MTL WM integrity have also been identified as predictors of memory decline in CNC converting to aMCI within 2 years [11], which has the potential to be used as a biomarker for early diagnosis. MTL WM volume and integrity continued to have positive correlations with memory in aMCI and ADD [14,17,18,26,32,34]. Similar to GM changes, which include both MTL and extratemporal regions, precuneus WM volume reduction was also associated with worsened memory in aMCI [18]. Several fasciculi including the uncinate, fornix, and cingulum, which are connected to medial temporal regions, were implicated in studies associating fiber density and memory [20,22,41]. In addition to these WM volume and integrity changes, Fujishima et al [19] reported that an increased number of white matter hyperintensities (WMHs) in the bilateral periventricular regions, pointing to increased vascular impairment, was related to worse recall performance in MCI.
Besides these more common MRI techniques, other approaches including diffusion kurtosis imaging, relaxometry, and magnetic transfer imaging may prove to be helpful in investigating WM integrity with high accuracy for whole brain mapping [43,44]. However, the number of studies using these approaches within the AD continuum is currently relatively small.
In summary, structural imaging studies show that hippocampal atrophy, which is closely related to episodic memory performance, is an established neurodegeneration biomarker in AD. Volume and cortical thickness of several additional regions, including PCG and precuneus, require further attention in terms of relationship to memory performance. WM changes, including loss of WM integrity in MTL and fasciculi connected to MTL assessed by formal diffusion tensor imaging metrics and hyperintensities in posterior regions of the brain, were also related to memory decline and should be assessed further in confirmed AD samples.
Functional MRI
fMRI is an indirect measure of brain activity relying on the blood-oxygen-level dependent response, which is a proxy for neural activation. fMRI can be separated into task-based, when a participant is asked to engage in a task during scanning, or resting state, when the participant is asked to lie still without engaging in a task. In this section, we will summarize studies finding differences between those with preclinical or clinical AD and CNC, either on memory tasks during fMRI, or with resting state fMRI interpreted in relation to memory scores.
Task-based fMRI
Many studies have implemented task-based fMRI to investigate memory-related activation patterns in AD ( Table 2). A variety of tasks have been used, most notably association tasks that pair two different stimuli (e.g., a face and a name). Whereas most studies include verbal stimuli, several studies use nonverbal stimuli (e.g., scene and picture encoding). Results of these studies support and extend the previously mentioned structural MRI findings.
Preclinical AD
Individuals with AD risk exhibit changes in blood-oxygen-level dependent responses even before the onset of memory deficits. These changes are nonlinear, with different activation patterns in MTL and heightened activation in frontal lobes sometimes reported. For example, reduced deactivation of PCG/precuneus [45,49,82], increased frontal activation [45], and altered MTL activation have all been reported, with one study reporting hyperactivation [45] and another reporting hypoactivation in preclinical APOE ε4 carriers [47]. Both presenilin 1 mutation carriers and individuals with subjective memory impairment had hippocampal hypoactivation [46,48]. Frontal hyperactivation was also observed in individuals with subjective memory impairment [48]. These activation patterns in preclinical AD are suggestive of compensatory mechanisms within these regions which are capable of maintaining normal levels of cognition.
Mild cognitive impairment
Both hypoactivation [50,51,57,59,61,62,64,77] and hyperactivation [53,78,80,83] of the MTL during memory tasks have been reported in MCI. This difference may be a result of the particular memory process being assessed, as suggested by a study by Trivedi et al [55] reporting hypoactivation of parahippocampal cortices during encoding and hyperactivation of hippocampus during recognition in aMCI. A study showing CA3/dentate hyperactivation and entorhinal hypoactivation also suggested that discrepant findings in MTL may be caused by different activation patterns in MTL structures and hippocampal subregions [60]. The discrepancy may also be due to the mixed sample of MCI patients included in the studies. For example, MCI patients with lower dementia score as determined by Clinical Dementia Rating had hippocampal hyperactivation and decreased default mode network (DMN) deactivation, whereas the activation pattern was completely opposite in MCI patients with higher dementia scores [79].
Similar to MTL, while some studies show reduced PCG/precuneus activation [50,61,64], some report hyperactivation or reduced deactivation within these regions [53,81,82,84]. PCG/precuneus is part of the default mode network (DMN), and hyperactivation of these areas is possibly due to reduced deactivation of the DMN while performing a task. Frontal cortex activation is usually reduced [50,51,[54][55][56][57]59,61,63] while several studies show hyperactivation in several frontal regions including precentral gyrus [51,52,59,64]. Dividing the MCI sample into two groups depending on cognitive performance, Clement and Belleville [58] revealed that frontal activation during a verbal memory task was decreased in MCI patients with more cognitive decline. Temporoparietal regions are also reported to be affected, with some studies showing hypoactivation of these regions during picture or scene encoding tasks [55][56][57]64] and some reporting hyperactivation [61]. These findings suggest that future studies may benefit from better defined samples instead of including different types of MCI (aMCI and naMCI) patients with various levels of dementia.
Overall, task-based fMRI findings suggest that episodic memory tasks lead to MTL hypoactivation, frontal hyperactivation, and reduced PCG/precuneus deactivation in ADD. Although preclinical AD and MCI samples have activation differences within these regions, the results are not consistent yet to provide early diagnosis or disease-tracking biomarker candidates. The discrepancy of the results appear to be caused by inclusion of mixed patient samples, distinct verbal and visual memory tasks, and implementing different analysis methods for imaging. In conclusion, task-based fMRI seems like a promising tool which can detect early changes along the AD continuum requiring further investigations for biomarker research in AD.
Resting state fMRI
By its nature, resting state fMRI (rsfMRI) does not involve a task, but the connectivity metrics calculated from these data can be used to assess relationships with memory tasks completed outside of the scanner (Table 3). This technique allows the investigation of functional connectivity between two regions and/or within specific networks impaired in AD.
Preclinical AD
In APOE ε4 carriers, verbal memory decline was related to reduced anterior and posterior connectivity as shown by whole brain dynamic functional connectivity [87]. Studies using seed-based analysis reported that verbal memory decline was associated with reduced left medial temporal gyrus; and DMN and executive control network connectivity [85,86]. When episodic memory performance related to structural changes within DMN regions, reduced deactivation shown by task-based fMRI and connectivity decline of this network shown by rsfMRI are considered altogether, this network appears to play a significant role in AD and could be used for early diagnosis.
Clinical AD
The relationship between DMN connectivity reduction and episodic memory decline persisted in MCI [88,93,101,104] and ADD [100,101,104]. Longitudinal studies showed that the progression of memory decline in aMCI was related to the decline of functional connectivity between posterior cingulate cortex and other DMN regions [88], precentral gyrus [99], hippocampal formation [94], and hippocampus subregions [89]. Xie et al investigated the connectivity between regions with atrophy in aMCI and revealed that both atrophy of hippocampus, precuneus, insula, postcentral gyrus, and frontal regions and connectivity reduction between these regions were associated with worse memory performance. Decreased MTL connectivity with locus coeruleus [95], frontal medial cortex, and lateral occipital cortex [97] was associated with worse verbal memory scores. Focusing on insula subregions revealed that increased intrinsic connectivity of insula was also associated with better memory performance [92]. Combining both rsfMRI and FDG-PET approaches, Franzmeier et al [98] revealed an interaction between functional connectivity of frontal cortex and precuneus hypometabolism. With decreased frontal connectivity, precuneus hypometabolism was associated with reduced memory performance, whereas this association was lower at higher levels of frontal connectivity in aMCI. This study suggests that memory performance does not only rely on functional connectivity but also metabolism of DMN regions. Finally, in contrast to findings in aMCI, worse memory performance was associated with increased middle frontal gyrus and parahippocampus connectivity [103], and intrinsic hippocampal connectivity [102] in ADD. To summarize, rsfMRI findings have revealed that MTL and DMN connectivity changes in AD are related to episodic memory. Reductions in DMN connectivity are closely related to memory decline, whereas MTL connectivity results are not that consistent throughout the AD continuum. Whereas preclinical and prodromal AD samples have reduced connectivity in association with worse memory performance, this pattern in reversed in ADD. Although DMN findings are rather consistent, there is still a need for more studies with sufficient power before rsfMRI can provide a reliable AD biomarker or tracking tool. Future studies may benefit from combining rsfMRI with other imaging techniques, including FDG-PET, and defining patient samples better by supporting the clinical criteria with established structural MRI, PET, and CSF findings.
Molecular MRI
Proton magnetic resonance spectroscopy can be used to assess changes in cell-specific metabolites, including choline, creatine, glutamine, glutamate, glutathione, N-acetyl aspartate (NAA), and myo-inositol. Levels of NAA, reflecting neuronal loss or dysfunction, decrease in AD; whereas increased myo-inositol levels, reflecting glial cell activation, have been reported in MCI and AD [105,106]. Glutathione is an intracellular antioxidant in the brain and has yet to be extensively studied in AD [107]. In addition, there are only a few studies evaluating the association between these metabolite alterations and memory performance in particular ( Table 4).
Levels of NAA in MTL have been consistently reported to have positive associations with verbal memory performance both in MCI and ADD [108,109,[111][112][113]. In addition to the positive correlation between PCG NAA and verbal memory scores [110], NAA within this region decreases along the AD continuum [114]. Levels of NAA were shown to decrease with age (as shown by the difference between young and old CNCs) and AD progression. Patients with ADD had the lowest NAA and creatine concentrations, followed by aMCI patients, whereas young CNCs had the highest concentration in PCG/precuneus. As myo-inositol increases in AD, it also seems to be negatively correlated with verbal memory in MCI and AD [110,113]. These results suggest that increased neuronal dysfunction coupled with glial cell activation plays a role in verbal memory deterioration in MCI and AD. Elevated glutathione levels with decreased memory performance are suggestive of early compensation in MCI [107]. These molecules may prove to be markers to track disease progression, with future longitudinal studies investigating the course of the levels of these molecules within specific regions in association with cognitive decline.
Arterial spin labeling MRI
Arterial spin labeling MRI measures cerebral blood flow (CBF), which is a more direct evaluation of brain physiology compared with the blood-oxygen-level dependent response measured by fMRI. A small number of studies on this MRI technique reported that the CBF alterations are associated with episodic memory within the AD continuum (Table 5).
Decreases in MTL CBF are detected even in the preclinical phase in individuals with AD risk [115]. CBF in MTL structures and the PCG/precuneus is closely associated with verbal memory performance in this sample of individuals. Individuals with positive Aβ, those with subjective cognitive decline, and APOE ε4 carriers show a decline in verbal memory performance coupled with increased CBF [117-119]. Although there are no directional data regarding this association, this may be suggestive of a compensatory response within these regions aimed at improving performance.
In line with other MRI approaches, MCI patients show decreased CBF responses in MTL and PCG/precuneus, which correlate with the verbal memory performance [122,123]. Superior occipital lobe CBF is reduced when tasks demanding visual encoding are used [123].
Owing to the diversity of the episodic memory tests used in the current studies, and the small number of studies to date, conclusions about how arterial spin labeling relates to episodic memory across the AD process would be premature. However, results to date suggest that arterial spin labeling magnetic resonance imaging holds the potential to provide biomarkers for the early diagnosis of AD and for tracking its progression.
Limitations and future directions
Existing literature suggests that MRI, widely available in clinical and research settings, may offer several potential biomarkers related to episodic memory impairment in AD. Structural and functional alterations in different regions may increase the predictive value of hippocampal atrophy assessed by MRI for AD diagnosis. As MRI findings correlate with episodic memory deficits, they have the potential to offer more insight into the etiology of the disease and more utility for tracking progression over time.
Nevertheless, there are several limitations to using MRI in AD. Imaging is expensive, requires skilled staff for acquisition and analysis, and is time consuming. In most of the studies, cohort sizes tend to be small, limiting confidence in results [28,31,65,67,69-74,77-81,124,125]. The existence of large shared data sets such as the AD Neuroimaging Initiative mitigates this to some extent and has been extremely useful in better understanding structural aspects of the disease. However, the AD Neuroimaging Initiative is also limited in functional imaging data as it includes only rsfMRI and no task-based sequences. In addition, the neuropsychological battery includes only verbal memory testing. This is also true of many clinical research studies, which limits our understanding of the relationship between rsfMRI and nonverbal measures. This differs from the task-based literature, where many tasks found to differentiate between AD and other cohorts involve nonverbal stimuli such as faces and scenes.
Another limitation is the use of clinical criteria for probable AD in most of the mentioned studies. For example, only a few used hippocampal atrophy, CSF Aβ, or PET to support the AD diagnosis [20,26,41,73,78,80,90,95,97,98]. Remy et al. [20] included hypometabolism assessed by FDG-PET, medial temporal atrophy shown by MRI, and the level of phospho-tau and the Aβ-tau index to confirm the AD diagnosis within their patient sample. The rest of the studies included in our review relied only on clinical criteria. Without the integration of supporting biomarkers, the positive predictive value of clinical diagnostic criteria is rather limited, with poor negative predictive value [126]. If biomarkers revealing Aβ deposition and neurodegeneration are present at the same time as clinical criteria, the likelihood of AD dementia is significantly increased [127]. Thus, whenever possible, these biomarkers should be implemented to reliably define study samples.
Although investigating differences on a whole brain level may help discover other regions implicated in episodic memory performance, these analyses may not be efficient in detecting subtle changes. Compared with region-of-interest analyses, whole brain analyses require spatial blurring and corrections for multiple comparisons, leading to a decline in power to detect small changes [45]. More powerful analysis methods should be favored in biomarker research to obtain more reliable results.
Moving forward, it seems that multimodal biomarker studies that use both Aβ and/or tau PET ligands and both structural and functional MRI might become more common in AD. Our own research supported by a Center for Biomedical Research Excellence award from the National Institute of General Medical Sciences will use Aβ PET, resting state fMRI, and neuropsychological testing including verbal, nonverbal, and navigational memory techniques in an attempt to fill some of the gaps in the current understanding of AD. Future work building from the current protocol will incorporate task-based fMRI to further understand task-based network connectivity in relation to Aβ status and neuropsychological performance. Using multimodal imaging and including nonverbal memory tests in addition to verbal tests will expand on previous imaging studies. Navigational tasks used in animal studies are rarely implemented in human research, limiting the translational value of these studies. Thus, by using navigational tasks, we aim to overcome this existing limitation.
Conclusions
Several MRI and fMRI metrics, including hippocampal atrophy, hold the potential to become AD biomarkers and may be more relevant to the preclinical stages. However, most imaging studies include only one modality with either verbal or nonverbal memory tasks, which prevents generalized conclusions from being drawn from their findings. Investigating the underlying pathology of AD through the combination of multimodal imaging and extensive neuropsychological evaluation may help in early diagnosis and in testing the effectiveness of novel therapeutics. Longitudinal studies with larger participant samples, where clinical AD diagnosis has been supported by multiple biomarkers, could provide a better understanding of the disease.
Acknowledgments
This work was supported by the National Institute of General Medical Sciences (Grant: P20GM109025).
RESEARCH IN CONTEXT
1. Systematic review: Memory impairments are among the most common and early symptoms of Alzheimer's disease (AD). Structural and functional changes assessed by magnetic resonance imaging are related to memory performance.
2. Interpretation: Magnetic resonance imaging findings in AD associated with memory performance can be used as potential biomarkers in the future. However, current conflicting results are probably due to the fact that most studies use limited memory tests in small patient samples with probable AD diagnosis.
3. Future directions: More extensive neuropsychological batteries should be implemented in larger patient groups with multimodal imaging. The diagnosis for AD should be supported by currently available biomarkers to achieve more reliable results. | 2018-09-24T15:18:28.320Z | 2018-06-14T00:00:00.000 | {
"year": 2018,
"sha1": "8505c032234a4c0e784bff86148641efe2838cf5",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.trci.2018.04.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8505c032234a4c0e784bff86148641efe2838cf5",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256626981 | pes2o/s2orc | v3-fos-license | Reprogramming the Human Gut Microbiome Reduces Dietary Energy Harvest
The gut microbiome is emerging as a key modulator of host energy balance1. We conducted a quantitative bioenergetics study aimed at understanding microbial and host factors contributing to energy balance. We used a Microbiome Enhancer Diet (MBD) to reprogram the gut microbiome by delivering more dietary substrates to the colon and randomized healthy participants into a within-subject crossover study with a Western Diet (WD) as a comparator. In a metabolic ward where the environment was strictly controlled, we measured energy intake, energy expenditure, and energy output (fecal, urinary, and methane)2. The primary endpoint was the within-participant difference in host metabolizable energy between experimental conditions. The MBD led to an additional 116 ± 56 kcals lost in feces daily and thus, lower metabolizable energy for the host by channeling more energy to the colon and microbes. The MBD drove significant shifts in microbial biomass, community structure, and fermentation, with parallel alterations to the host enteroendocrine system and without altering appetite or energy expenditure. Host metabolizable energy on the MBD had quantitatively significant interindividual variability, which was associated with differences in the composition of the gut microbiota experimentally and colonic transit time and short-chain fatty acid absorption in silico. Our results provide key insights into how a diet designed to optimize the gut microbiome lowers host metabolizable energy in healthy humans.
paradigm of quantitative bioenergetics (NCT02939703) 2 (Extended Data Fig. 1a-b). The intervention included a highly digestible control Western Diet (WD) and a Microbiome Enhancer Diet (MBD). The MBD was designed to maximize the availability of dietary substrates to the gut microbiome and included these four drivers: dietary fiber, resistant starch, large food particle size, and limited quantities of processed foods (Extended Figure 1a). Our design provided equivalent metabolizable energy and total macronutrients (fat, protein, carbohydrates) based on classic principles and equations of food digestibility 13 . Diets were prepared in

To avoid the confounding effects of energy imbalance on host and microbial metabolism, the diet intervention maintained each participant in energy balance. Energy balance, evaluated by real-time energy intake and energy expenditure (measured via whole-room indirect calorimetry), was maintained within our target of +/- 50 kcals per 6-day calorimeter stay (WD 4.1 ± 5.1 kcal/day; MBD 5.4 ± 2.8 kcal/day; p = 0.8) (Extended Data Fig. 2a). Weight stability was a secondary criterion for evaluating energy balance, and we previously reported that weight was stable during the 6-day calorimetry assessment period whilst the primary endpoint was measured; the study team members were blinded to the diet assignment 2 .
Surveillance of adverse events revealed minimal gastrointestinal or other side effects (Extended Data Table 1).

1b; P < 0.0001), which equates to an additional 116 ± 56 kcals daily channeled to feces (Fig. 1c; P < 0.0001). These data align with the preclinical literature showing that the quantitative impact of the gut microbiome on host energy balance is primarily via its critical roles on energy harvest from the diet 8,9 .

Diet reprogramed the gut microbiome

Given our primary finding that diet produced a clinically significant change in host metabolizable energy, we next evaluated the microbial phenotype associated with host energy balance. Mean daily fecal weight was higher on the MBD (P < 0.0001; Extended Data Fig. 3a), and a proportion of this additional weight was due to a significant increase in 16S rRNA genes (P < 0.0001; Fig. 2a), an indication of fecal bacterial biomass increase since the MBD produced 19.6 ± 3.5 gCOD/d of microbial biomass compared to 9.4 ± 1.2 gCOD/d on the WD. To further explore the compositional changes in the microbiome associated with diet-induced changes in host metabolizable energy, we used metagenomic sequences to evaluate microbial taxonomic differences and derived regression coefficients describing each microbe's association with diet using Maaslin2's compound Poisson regression model.

metabolizable energy intake based strictly on existing food digestibility paradigms. These paradigms do not account specifically for the microbial biomass or microbial energy harvest 13 .
One of the gaps in prior human studies was the lack of a precise quantitation of the entire energy balance equation. In addition to our evaluation of energy intake (Extended Data Table 2) and fecal energy loss to derive host metabolizable energy (Fig. 1 a-c), we measured energy expenditure with whole room indirect calorimetry over 6 days and found no diet difference in sleep metabolic rate (in kcal/day) by diet (P = 0.15; Fig. 3d), despite being able to detect a posteriori a 26.5 kcal/day difference 2 . This suggests that, under conditions of fixed energy intake, the main quantitative contribution of the gut microbiome to host energy balance was through its effect on energy harvested from the diet, particularly when sufficient substrates were available for fermentation, as

The relationships among diet composition, gut microbes, and colonic transit time (CTT) are complex, multi-directional, and vary within individuals over time and between individuals 27 . Given the potential importance of CTT on the microbiota-driven host response to dietary manipulations, we evaluated whole-gut transit using a pH-sensing radiotransmitter device. We did not find a statistically significant difference in CTT by diet (39.2 ± 6.2 hours on WD vs. 29.7 ± 4.4 hours on MBD; P = 0.14; Fig. 3e). Gastric emptying evaluated by acetaminophen appearance in the blood after a fixed liquid meal also was not different by diet (Extended Data Fig. 4a). The pH of the colon can be an indicator of microbial fermentation activity. Neither median pH (which reflects both fermentation and the impact of food mixing in the colon) nor the median pH within a 1-hour window of the ileocecal passage (which is impacted primarily by microbial fermentation products) 28 differed by diet (P = 0.11 and 0.23, respectively; Fig. 3f; Extended Data Fig 4b). The lack of statistically significant effects likely was due to the substantial amount of interindividual variability in CTT, gastric emptying and colonic pH, confirming the complex and individualized relationships among these parameters, which may be critical to understanding the host-microbiota axis within individuals 27 .

We hypothesized that the MBD might decrease appetite relative to the WD via the inclusion of high-fiber foods and production of metabolites through gut microbial fermentation 29 . This hypothesis was rejected (Extended Data Fig. 4c-h). Thus, the observed negative energy balance and small changes in body composition on the MBD did not trigger a compensatory change in appetitive behaviors or food intake compared to the WD.
The mammalian gut senses nutrients and microbial fermentation products and is part of the larger

Fig 3h), with a significantly higher AUC at breakfast and lunch and a trend towards a higher AUC at dinner (P = 0.02, 0.04 and 0.08, respectively) on the MBD compared with the WD. Pancreatic Polypeptide (PP) iAUC was significantly increased on the MBD (Fig 3i). GLP-1 and PP decrease food intake 33 . Therefore, the short-term negative energy balance within our experimental paradigm did not trigger the

Given the robust response to our diet intervention by the gut microbiome and host, we sought to determine the quantitative role of the gut microbiome on energy harvest from the diet versus the impact driven solely by food digestibility 11 . We tested the hypothesis that methane production by methanogenic archaea contributes to a net negative energy balance. We developed and validated a first-in-human method to quantify 24-hour methane production in a whole room calorimeter at part-per-billion resolution 34 . The range of methane measured within our study was 0.28-1613 ml/day, translating to 0.002-14 kcals lost per day. While this negative energy balance

This led us to hypothesize that the variability in host energy balance could be associated with the repertoire of gut microbes in the colon. To test this hypothesis, we asked whether the quantitatively important variability in host metabolizable energy on the MBD could be related to a unique microbial signature. To identify those microbial signatures, we derived regression coefficients describing each microbe's association with the independent variable of host metabolizable energy using Maaslin2's compound Poisson regression model 18 . In total, host metabolizable energy was associated with 16 species (Extended Data Fig. 5a-b). The significant microbes with the largest effect size (Q < 0.05; effect size ≥ 2) were Clostridium bolteae, Streptococcus parasanguinis, Streptococcus australis, and Erysipelatoclostridium ramosum. All were inversely associated with host metabolizable energy, indicating that reduced energy availability to the host may increase substrate availability for the growth of these specific microbes (Fig. 4a).

absorbed by the host due to microbial fermentation in the colon and the associated biomass. We applied this model to predict the host metabolizable energy we measured in our study by inputting actual energy intake components and fecal energy in grams COD/day. Our previously published model used a fixed CTT of 48 hours, which is a reasonable population-level estimate for healthy adults 37 . With a fixed CTT, the modeled host metabolizable energy for participants on the WD was 95.2 ± 0.001% and for MBD was 92.4 ± 0.001% (Fig. 4b). This is similar to the mean host metabolizable energy we measured on the WD and the MBD (95.4 ± 0.21% and 89.5 ± 0.73%, respectively; Fig. 1b). However, the variability we saw experimentally on the MBD was not reproduced by the mathematical model. We hypothesized that we could improve the model's predictive ability by adding measured CTT since it is a key modulator of microbial composition, fermentation, and host energy balance 27 . When we included measured CTT, the modeled range of metabolizable energy on the MBD was 84.6-92.9%, which was very similar to the measured range of 84.2-96.1%; furthermore, systematic and proportional bias was minimized (Extended Data Fig. 5c-d).
Thus, using the CTT explained some of the variability in host metabolizable energy.
Microbes contributed to energy balance
A significant proportion of the reduced metabolizable energy on high-fiber diets is due to colonic microbial fermentation of fiber and resistant starch into absorbable SCFA 38 . Our model predicted that more total energy (g COD) as SCFAs was absorbed by the host on the MBD, compared to the WD (72.3 ± 13 gCOD/d on the MBD vs. 36.4 ± 4.3 gCOD/d of microbially-derived SCFAs; P < 0.00001; Fig. 4d). When we adjusted the SCFA absorption for energy intake, we found a nearly 2-fold greater absorption of energy as SCFAs on the MBD as compared to the WD (P < 0.00001; Fig. 4e). Therefore, despite less total energy being absorbed by the host on the MBD, a larger proportion was derived from SCFAs. Consistent with our experimental data, our model strongly supports a significant microbial contribution to host metabolizable energy and, therefore, the overall energy balance.

The reduction in energy harvest from the diet on the MBD relative to the WD was not accompanied by a reduction in energy expenditure or an increase in hunger or ad libitum energy intake. However, the significant

focus on whole, minimally processed foods resets the integrated sensing mechanisms known to affect food intake and body energy stores. One or more of these mechanisms or other unknown mechanisms might be responsible for the population associations between a diverse human gut microbiome and lower body mass 1 .
The slightly greater reduction in weight and body fat on the MBD, compared to the WD, over the inpatient period despite daily titration of energy requirements to match calorimetry-derived measures of energy expenditure, suggests that the use of a diet that adequately feeds colonic microbes and increases microbial fermentation products (i.e., short-chain fatty acids) will not lead to additional absolute energy availability to the host. In contrast, diets such as the MBD promote additional fecal energy loss and an increase in host uptake of SCFAs from the colon, despite the overall decrease in host uptake of energy. Future microbiome-focused research should delve into these systems for controlling body weight.
The quantitative contributions of gut microbes to host energy balance were addressed in two forms. First, the energy in feces increased by 40.9 ± 4.6 g COD/d (116 ± 56 kcal/day) for participants on the MBD, even though their total metabolizable energy intake was the same. Second, the microbial community increased in size (biomass) and in fermentation processes, as reflected by increased fecal and serum SCFAs on the MBD as compared to the WD. Thus, the host's energy intake shifted towards microbially produced SCFAs and away from proximally digested and absorbed carbohydrates in the food. While the quantitative contribution of microbially generated SCFAs was overshadowed by the additional loss of microbial biomass in the feces, the uptake of more microbially produced SCFAs was associated with increased GLP-1 and pancreatic polypeptide concentrations.
We also found a taxonomic signature that was in alignment with the expected impacts of the substrates available to the gut microbes on the two diets. First, many of the species detected at higher abundance on the MBD were fiber degraders and/or butyrate producers. Second, our data reveal that, when the gut microbiome

Host metabolizable energy was highly variable on the MBD. Given our tight control of energy intake and energy expenditure, this suggests that the microbial contribution to this variability was greater in some hosts than others. Indeed, with a proportionally equivalent input of substrates for microbes, fecal energy losses varied over an ~6-fold range. Understanding the mechanisms by which the microbial communities in the human colon modulate energy harvest and their interaction with host factors such as CTT will provide valuable quantitative data to drive personalized strategies to optimize host-microbiota-diet interactions and prevent or treat obesity.
Host metabolizable energy was associated with a unique microbial profile on the MBD, with 4 microbial species whose relative abundance increased in association with decreasing host metabolizable energy. One of those species, Streptococcus australis, transiently increases after weight loss due to bariatric surgery as compared to normal weight controls 44 . Hungatella hathewayi and Erysipelatoclostridium ramosum were more abundant in germ-free mice colonized with feces from a human that underwent caloric restriction with a concomitant phenotype characterized by lower adiposity 45 . Clostridium bolteae, in addition to being a lactic-acid producing bacterium 46 , has recently been reported to bind phenylalanine, tyrosine, or leucine amino acids to microbially deconjugated bile acids. While the clinical effects of these microbially transformed bile acids are unclear, bile acids are known to play an important role in microbial energy extraction 47 . Overall, these findings make it plausible that the variability in host metabolizable energy on the MBD is related to a specific microbial signature and the metabolic processes driven by the relationships between host and microbes.
We also investigated, in silico, the factors that might be contributing to host metabolizable energy variability and found that colonic transit time was an important driver. Host metabolizable energy prediction with measured CTT more closely captures the variability seen in measured host metabolizable energy. Our mathematical model, which generated outputs consistent with the clinical data describing the metabolizable energy of participants consuming WD and MBD, allowed us to determine the important role of CTT and to quantify that a host on the MBD produced feces containing 19.6 ± 3.5 gCOD/day of microbial biomass (about 10 gCOD/day more than WD) and led to 36.4 ± 4.3 gCOD/day more uptake of microbially derived SCFAs. We believe these factors, and others that may be revealed in future studies, could capitalize on the adaptability of the gut microbiome as a target for personalized medicine 48 .
Given the size and scope of the global obesity epidemic and its continued increase, new solutions are needed.
The scientific community has recently reoriented itself towards population interventions that promote small changes in energy intake and expenditure as a means of preventing weight gain 49 . This study demonstrates the potential to enact this "small changes" principle through the consumption of a simple whole food intervention

high-range digestion vials followed by a colorimetric assay (HACH, Loveland, CO; Product # 2125925). To ensure that fecal energy was accurately reflective of 24-hour fecal production, we utilized the non-absorbable, non-digestible fecal marker polyethylene glycol (PEG). Participants consumed 1.5g/day of PEG of molecular weight 3350 g/mol (PEG3350). The PEG3350 was procured by a compounding pharmacy that prepared 0.5g capsules (percent error = 2.8%) (Pharmacy Specialists, Altamonte Springs, FL). The details of the PEG assay are below. Fecal energy was measured in 6-day composites of feces collected in our calorimeters. We normalized fecal energy produced to the weight of all feces produced in those 6 days and then to PEG recovery.
Fecal energy loss was converted to host metabolizable energy by calculating the percentage of energy that was lost in feces (in g COD) relative to total energy intake (in g COD). The conversion from energy in COD to kcals lost in feces per day (non-metabolizable kcals) was calculated by multiplying total EI in kcals by the percent host metabolizable energy.
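As an illustration of this bookkeeping, the short Python sketch below computes percent host metabolizable energy from COD intake and fecal COD, and converts the non-metabolizable fraction of caloric intake to kcal lost in feces. All variable names and numbers are hypothetical placeholders rather than study data, and the kcal-conversion step is one reading of the description above.

    # Sketch of the host metabolizable energy bookkeeping described above.
    # All values are illustrative placeholders, not study data.

    def host_metabolizable_energy_pct(intake_gcod_per_day, fecal_gcod_per_day):
        """Percent of ingested energy retained by the host: 100% minus the
        percent of intake (in g COD) that was lost in feces."""
        percent_lost = 100.0 * fecal_gcod_per_day / intake_gcod_per_day
        return 100.0 - percent_lost

    def fecal_energy_kcal_per_day(intake_kcal_per_day, metabolizable_pct):
        """Non-metabolizable kcal/day, i.e. the share of caloric intake not
        captured by the metabolizable-energy percentage."""
        return intake_kcal_per_day * (1.0 - metabolizable_pct / 100.0)

    me_pct = host_metabolizable_energy_pct(intake_gcod_per_day=300.0,
                                           fecal_gcod_per_day=30.0)
    print(round(me_pct, 1), "% metabolizable")                     # 90.0
    print(round(fecal_energy_kcal_per_day(2400.0, me_pct), 1),
          "kcal lost in feces per day")                            # 240.0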
Polyethylene glycol assay. We utilized a method that is slightly modified from the initial published method by
The assay is linear as evidenced by the R^2 of the calibration curve (0.9987). The linear range of the assay was from 0.1 µM to 20 µM with PEG3350 recovery ranging from 96.2-104.5%. The relative standard deviation of the assay was 1.8%. There was no co-elution of analyte with expected excipients or related compounds in chromatograms, demonstrating the assay is specific for PEG3350.
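For illustration only, the following Python sketch fits a straight calibration line and reports R^2 and percent recovery for a spiked standard, mirroring the linearity and recovery checks reported above; the concentrations and detector responses are invented, not assay data.

    # Illustrative linear-calibration check in the spirit of the PEG3350 assay.
    # Standard concentrations (uM) and detector responses are invented numbers.
    import numpy as np

    conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0])         # standards, uM
    signal = np.array([0.9, 4.8, 10.2, 49.5, 101.0, 199.0])    # detector response

    slope, intercept = np.polyfit(conc, signal, 1)             # least-squares fit
    pred = slope * conc + intercept
    r_squared = 1.0 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

    # Percent recovery of a spiked quality-control sample read back off the curve.
    qc_true_uM, qc_signal = 2.0, 20.6
    qc_measured_uM = (qc_signal - intercept) / slope
    recovery_pct = 100.0 * qc_measured_uM / qc_true_uM

    print(f"R^2 = {r_squared:.4f}, recovery = {recovery_pct:.1f}%")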
Calibration curves using 7 data points were generated on each run using plasmids with 16S rRNA genes, adding a plasmid concentration to achieve copy numbers in the range from 10^1 to 10^9 per reaction, where copies per reaction = (plasmid DNA mass [ng] × 6.022 × 10^23) / (plasmid length [bp] × 10^9 [ng/g] × 660 [g/mol per bp]).

Sequencing reads were trimmed using TrimGalore 11 . DNA sequences were aligned to Hg38 using bowtie2 12 and RNA sequences were aligned to Hg38 using STAR 13 . DNA and RNA sequences were then analyzed for taxonomic composition with MetaPhlAn3 14 , using standard parameters.
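A minimal Python version of the plasmid copy-number arithmetic reconstructed above (Avogadro's number, 10^9 ng per g, and an average of 660 g/mol per base pair); the example mass and plasmid length are hypothetical.

    # Plasmid copy-number calculation used for qPCR calibration standards.
    # Example mass and plasmid length are illustrative, not study values.
    AVOGADRO = 6.022e23        # molecules per mole
    NG_PER_G = 1e9
    G_PER_MOL_PER_BP = 660.0   # average molar mass of one double-stranded bp

    def plasmid_copies(mass_ng, length_bp):
        """Copies of a double-stranded plasmid standard present in mass_ng."""
        return (mass_ng * AVOGADRO) / (length_bp * NG_PER_G * G_PER_MOL_PER_BP)

    print(f"{plasmid_copies(mass_ng=1.0, length_bp=4000):.2e} copies")  # ~2.3e+08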
Species Alpha- and Beta-Diversity. All calculations and analyses were conducted in R 15 . Taxonomic composition output from MetaPhlAn3 was processed for beta-diversity analysis using the "phyloseq" R package 16 . A rarefaction curve was created using the "vegan" R package 17 to determine the optimal count-depth for rarefaction. Once the optimal count-depth was determined, rarefaction was performed using phyloseq.
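The rarefaction step described here was run in R with vegan/phyloseq; as a language-neutral illustration of the same idea, the Python sketch below subsamples one sample's per-taxon counts to an even depth and computes Shannon alpha diversity on the result, using made-up counts.

    # Rarefaction: subsample reads without replacement to a common depth so
    # diversity metrics are comparable across samples. Counts are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def rarefy(counts, depth):
        """Draw `depth` reads without replacement from one sample's per-taxon
        counts and return the subsampled per-taxon counts."""
        reads = np.repeat(np.arange(counts.size), counts)   # one entry per read
        keep = rng.choice(reads, size=depth, replace=False)
        return np.bincount(keep, minlength=counts.size)

    sample = np.array([500, 300, 150, 40, 10])   # reads per taxon (placeholder)
    rarefied = rarefy(sample, depth=200)
    p = rarefied[rarefied > 0] / rarefied.sum()
    shannon = -np.sum(p * np.log(p))             # alpha diversity on rarefied data
    print(rarefied, round(float(shannon), 3))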
Alpha-diversity metrics were calculated using the "microbiome" R package 18 . After samples were rarefied, each sample had 3,578,445 sequences. Bray-Curtis and Jaccard distance matrices were calculated on the rarefied count data using vegan. The distance matrices were tested for significance by PERMANOVA using vegan.

Differential Abundance. Differential abundance testing by diet and host metabolizable energy was carried out using the output of MetaPhlAn3 in the "MaAsLin2" R package 21 . Taxonomic counts were filtered with a 25% prevalence cut-off. Compound Poisson multivariate linear models were used to account for zero-inflated data 21 .
In the diet analysis the dependent variable was microbial abundance, the fixed variables were diet, period, and period sequence, and participant ID was a random factor. In the host metabolizable energy analysis, the dependent variable was microbial abundance, and the fixed independent variable was host metabolizable energy.

Appetite. Subjective ratings of appetite were determined using visual analog scales (VAS) administered at -30,
-15, +30, +60, +120, and +180 min pre/post each meal. Breakfast was fixed at 500 kcals and lunch and dinner

Acknowledgements: We thank our study participants, without whom this work would not have been possible.

Figure shows all significant associations with Q < 0.05 and effect size ≤ 2.

inpatient days where all 3 meals were consumed on-site, and no changes were made to the feeding for testing.
All data reported as mean ± s.e.m. N=17 per diet for both panels. | 2023-02-08T05:08:19.668Z | 2023-01-25T00:00:00.000 | {
"year": 2023,
"sha1": "11836f230df59f9ac918c4cfc44201270bdd9b44",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-2382790/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "11836f230df59f9ac918c4cfc44201270bdd9b44",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8682753 | pes2o/s2orc | v3-fos-license | Profiles of Human Serum Antibody Responses Elicited by Three Leading HIV Vaccines Focusing on the Induction of Env-Specific Antibodies
In the current report, we compared the specificities of antibody responses in sera from volunteers enrolled in three US NIH-supported HIV vaccine trials using different immunization regimens. HIV-1 Env-specific binding antibody, neutralizing antibody, antibody-dependent cell-mediated cytotoxicity (ADCC), and profiles of antibody specificity were analyzed for human immune sera collected from vaccinees enrolled in the NIH HIV Vaccine Trial Network (HVTN) Study #041 (recombinant protein alone), HVTN Study #203 (poxviral vector prime-protein boost), and the DP6-001 study (DNA prime-protein boost). Vaccinees from HVTN Study #041 had the highest neutralizing antibody activities against the sensitive virus along with the highest binding antibody responses, particularly those directed toward the V3 loop. DP6-001 sera showed a higher frequency of positive neutralizing antibody activities against a more resistant viral isolate, with a significantly higher CD4 binding site (CD4bs) antibody response compared to both HVTN studies #041 and #203. No differences were found in CD4-induced (CD4i) antibody responses, ADCC activity, or complement activation by Env-specific antibody among these sera. Given the recent renewed interest in the importance of antibody responses for next-generation HIV vaccine development, the different antibody profiles shown in the current report, based on the analysis of a wide range of antibody parameters, provide critical biomarker information for the selection of HIV vaccines for more advanced human studies and, in particular, those that can elicit antibodies targeting conformation-sensitive and functionally conserved epitopes.
Introduction
Developing a safe and effective vaccine to control the global transmission of Human Immunodeficiency Virus Type 1 (HIV-1) remains one of the greatest challenges. The surprising outcome of the STEP trial [1] demonstrated the danger of relying on one type of vaccine and not paying equal attention to other vaccination approaches [2,3]. Passive protection studies using neutralizing monoclonal antibodies (mAbs) have demonstrated the utility of antibodies in controlling infection in non-human primates [4,5,6,7,8,9,10]. Furthermore, the recently completed Phase III human HIV-1 vaccine trial, RV144, which used a canarypox vector prime-recombinant envelope (Env) protein boost design, showed a low but significant 31% reduction of infection compared with placebo [11]. The mechanism for such protection in RV144 is unknown but protective antibody is suspected to play a key role.
However, in-depth analysis of antibody responses elicited in RV144 trial volunteers requires baseline information on the qualities of human anti-Env antibody responses elicited by other types of HIV-1 vaccines. Currently, such comparative analysis is lacking in the literature. Recently, several new vaccination approaches have significantly improved the magnitude or quality of HIV-1 Env-specific antibody responses in humans and, thus, provide the opportunity to compare the unique profiles of antibody responses elicited by different HIV vaccine strategies.
In the current report, human vaccinee sera from three HIV-1 vaccine studies using different immunization approaches (Table 1) were analyzed for the relative levels of binding and neutralizing antibodies, the fine specificities of antibodies present in each serum, and the ability to mediate other potentially protective processes, including complement activation and Antibody-Dependent Cell-mediated Cytotoxicity (ADCC). Our results indicated that each HIV vaccine regimen can elicit a unique profile of antibody responses. This finding will be very useful for improving the design of HIV vaccines to elicit optimal protective antibody responses in humans.
Results
All three candidate HIV vaccines included in the current analysis were designed to elicit HIV-1 Env-specific antibody responses (Table 1). HVTN 203 was an early phase clinical study using a canarypox prime-protein boost regimen prior to the full-scale RV144 efficacy trial. Volunteers from HVTN 203 (Group B) received the canarypox vector expressing a clade B Env, and were boosted with a bivalent clade B/B Env protein formulation from HIV-1 isolates MN and GNE8 [12], whereas RV144 expressed a clade E Env by canarypox vector, which was then boosted with bivalent clade B/E Env proteins [11]. Volunteers in the HVTN 203 trial received a total of four canarypox vector immunizations in addition to two protein boosts adjuvanted with alum that overlapped with the last two canarypox immunizations. Protein boosts consisted of the same recombinant Env protein vaccine that failed to show protective efficacy in a Phase III clinical trial when used alone [13]. HVTN 041 tested the immunogenicity of recombinant Env protein derived from the HIV-1 isolate W61D, adjuvanted in AS02A, without any prime immunizations [14]. The DP6-001 trial used a DNA prime-recombinant protein boost immunization approach delivering a 5-valent Env formulation from HIV-1 isolates of clades A, B, C, and E [15]. Human volunteers were first immunized three times with Env-expressing DNA vaccines, followed by two boosts using matched recombinant Env proteins (gp120) in QS-21 adjuvant.
Neutralizing antibody activity has been a key parameter in HIV vaccine research to measure the protective potential of immune sera specific for HIV-1 Env antigens [16,17]. Results of neutralizing antibody activities in the three sets of sera included in the current report were previously reported and showed diverse profiles [12,14,15]. In contrast to sera from the DP6-001 study, which were capable of neutralizing a broad range of T-cell line adapted (TCLA) and primary HIV-1 isolates [15], sera from the HVTN 041 and HVTN 203 studies were only capable of neutralizing autologous and TCLA viral strains [12,14]. Because previous neutralizing activity analyses from each trial were done in different assay systems, making direct comparisons difficult, a new but limited set of neutralization assays was conducted by using pseudotyped viruses expressing three model HIV-1 primary Env antigens with varying degrees of sensitivity to neutralization to confirm the previously reported neutralizing patterns for these three sets of human sera. No extensive NAb analysis was done in the current study, as such analyses have already been reported in previously published studies [12,14,15].
Neutralizing activities against SS1196, a primary isolate that is moderately sensitive to neutralization, allowed for some differentiation of the neutralization potential of each trial sera ( Fig 1B). Only 4 of the 12 sera (33%) from the HVTN 203 trial were capable of neutralizing SS1196 at a 1:10 dilution but 8 of the 12 sera (67%) from the HVTN 041 trial were capable of neutralizing this virus. In contrast, 18 of the 21 sera (86%) from the DP6-001 trial were capable of neutralizing SS1196. Both the HVTN 041 and DP6-001 trials elicited higher titers than the HVTN 203 trial (p = 0.02 and p = 0.03, respectively). The third pseudotyped virus tested in the current analysis expressed Env from the HIV-1 isolate, SC422661.8, a Tier 2 virus representative of those found shortly after the establishment of HIV-1 infection and known to be highly resistant to neutralization [18]. A significant drop of neutralizing activities was observed with sera from all three vaccine trials against this virus ( Fig 1C). None of the sera from the HVTN 203 trial were capable of reaching 50% neutralization at the lowest dilution tested (1:10). Similarly, neutralizing activity against this isolate was only observed in two sera (17%) from the HVTN 041 trial. However, 10 of the 21 sera (48%) from the DP6-001 trial were capable of neutralizing SC422661.8 at a 1:10 dilution. This occurred despite the fact that, on average, individuals in the DP6-001 had either lower or equivalent titers of Env-specific binding antibodies when compared to other two trial sera (Fig 2 below). The lack of neutralizing activity from the HVTN 203 and 041 trials against more resistant isolates, and the low titer neutralization seen in the samples from the DP6-001 trial are both consistent with previously reported neutralization profiles [12,14,15].
In order to understand which features of the antibody responses present in each of these sera may be responsible for the difference in their neutralization profiles, a wide spectrum of analyses was conducted to characterize the quality of the different sera. The first was the level of Env-specific binding antibodies. The gp120 protein from the clade B JR-FL strain was chosen as the model antigen to examine binding titers because it is derived from a well-characterized primary isolate and, while each trial tested here was formulated with at least one clade B component, JR-FL was not a component of any of the formulations. Antibody levels generated by the HVTN 041 formulation were found to be significantly higher than the titers of binding antibodies generated in either the HVTN 203 or DP6-001 clinical trials (p = 0.035 and p = 0.0003, respectively) (Fig 2), suggesting that gp120 adjuvanted with AS02A is an exceptionally immunogenic formulation.
Antibodies directed to CD4 inducible (CD4i) epitopes are frequently elicited in HIV-infected individuals [19] although their role in controlling viral infection is currently unknown. Prior exposure of pseudovirus to soluble CD4 (sCD4) can expose CD4i epitopes, such as the co-receptor binding site, on the viral envelope [20]. Sera from each trial included in the current study were assayed for their ability to outcompete binding to 17b, a mAb that targets the co-receptor binding site. High frequency and titers of 17b-like antibodies were detected in all three vaccine trials (Fig 3A). Seven out of 12 sera (58%) from the HVTN 203 trial, 9 out of 12 (75%) from HVTN 041, and 17 out of 21 (81%) from DP6-001 were able to outcompete binding to 17b. Interestingly, those sera that did compete did so at high titer, indicating an abundance of antibodies with this specificity.
The next assay evaluated whether the CD4i antibodies found in the sera are functional, using a modified neutralization assay. Pseudotyped viruses expressing Env from the JR-FL isolate were treated with sCD4 prior to incubation with serum. While JR-FL was difficult to neutralize by sera from all three trials without prior sCD4 treatment (Fig. 3B), significant neutralizing activities against sCD4-treated JR-FL Env pseudotyped viruses were found in these sera: 7 out of 12 (58%) from HVTN 203, 10 out of 12 (83%) from HVTN 041, and 20 out of 21 (95%) from DP6-001 showed positive neutralizing activities (Fig 3C). Geometric mean neutralizing titers for HVTN 203, HVTN 041, and DP6-001 were 1:28, 1:44, and 1:49, respectively. These data suggest that, under the proper conditions, CD4i antibodies present in vaccinee sera would be capable of neutralizing heterologous isolates of HIV-1.
Because it has been reported that sCD4 treatment leads to increased exposure of the V3 loop [21], we attempted to determine if the neutralizing activity observed after sCD4 treatment was due to recognition of the V3 loop or recognition of the co-receptor binding site by the 17b-like antibodies detected through competition. Vaccinee immune sera were incubated with a synthetic peptide matched to the V3 loop sequence of the JR-FL Env prior to the exposure of sCD4-treated JR-FL. This resulted in a slight drop in the geometric mean NAb titer of HVTN 203 sera to 26, of HVTN 041 sera to 25, and of DP6-001 sera to 34 ( Fig 3D). This drop in potency was also accompanied by a drop in the frequency of positive neutralizing sera to 6 out of 12 sera (50%) in the HVTN 041 trial and to 16 out of 21 (76%) in the DP6-001 trial ( Fig 3D). This data indicates that both V3 and co-receptor binding site antibodies play a role in neutralizing the sCD4-treated JR-FL virus.
A unique profile of CD4bs-directed antibodies was observed upon examination of the ability of the immune sera to outcompete binding against mAb b12 (Fig 4C). Only 4 out of 12 sera (33%) from either the HVTN 203 trial or HVTN 041 generated an antibody response capable of outcompeting binding to b12. However, 20 out of 21 sera (95%) from the DP6-001 trial were capable of outcompeting binding to b12 and did so with

Additional functions of gp120-specific antibodies were analyzed. Immune sera elicited by all three vaccine regimens were capable of mediating ADCC function in an equivalent fashion with 19-21% lysis of the recombinant gp120 protein pulsed CEMNKr target cells (Fig. 5). An additional intrinsic characteristic of antigen-specific antibody is the ability to mediate activation of the complement pathway. Complement activation by gp120-specific antibody was assessed for sera from all three trials; however, they all activated complement in a similar fashion. A representative assay result is shown in Fig. 6A-B. A summary of the antibody profiles for each set of immune sera analyzed in the current study is provided (Table 2).
Discussion
In the current report, a side-by-side comparison was conducted on the quality of human antibody responses elicited by three candidate AIDS vaccines focusing on HIV-1 Env-specific antibodies. Vaccines from all three studies had a gp120 protein vaccine component but only two of the studies included priming immunizations using either a viral vector- or DNA-based vaccine. Although the sample sizes are relatively small, our results suggest

It has been shown in many published studies that passive transfusion of antibodies in non-human primate models can provide protection against challenge [4,5,6,7,8,9,10], but the real challenge is that there is limited information on the antibody specificities which may have contributed to such protection. This knowledge is important because it may contribute to the design of more effective vaccine antigens.
All three vaccines generated a high titer binding antibody response. HVTN trial 041 volunteers had the highest Env-specific serum IgG titers. Previous studies using recombinant gp120 proteins alone adjuvanted in alum did not generate high binding antibodies [13]. Therefore, it is very likely that the strong adjuvant effect of AS02 A (MPL and QS21 formulated in o/w emulsion) played an important role in the high immunogenicity observed for this gp120 protein-based vaccine.
The levels of binding antibody did not correlate with the presence or titer of neutralizing antibodies against a panel of heterologous isolates. Consistent with previous reports [12,14,15], the sera from HVTN 041 and HVTN 203 trials were, for the most part, only capable of neutralizing sensitive isolates of HIV-1, whereas sera from the DP6-001 displayed neutralization activities against more resistant isolates.
In an attempt to explain this disparity, the fine specificity of antibodies elicited by each trial was characterized. While antibodies specific for glycan, CD4i, and V3 loop epitopes on gp120 were all found at relatively similar levels among the three sets of immune sera, antibodies specific for the CD4bs were found more frequently and in significantly higher titer in the DP6-001 study. This may be important as CD4bs antibodies were found responsible for the broad neutralizing activities in HIV-infected patients [22]. Very few differences were observed when other biological functions, such as complement activation and ADCC activity, were determined. This somewhat contradicts previous data indicating that the canarypox prime-recombinant Env protein boost was more effective in eliciting higher levels of binding antibody and higher frequencies of ADCC responses than the same recombinant Env when used alone [23,24]. However, the immunogenicity of recombinant Env proteins used in those studies was clearly less optimal as shown by low levels of binding antibodies to the V3 loop. The reason for this difference is not entirely clear but a more immunogenic recombinant Env formulation with a potent adjuvant system may be responsible for the high binding titers and high frequency of ADCC observed in HVTN 041 sera. However, this finding does not change the fact that canarypox prime-recombinant Env boost is also highly immunogenic although it is not more effective in generating ADCC than recombinant protein vaccines when optimally formulated. Sera from HVTN 203 had the least unique antibody profile. It is less effective than DP6-001 sera in eliciting conformationally sensitive antibodies and neutralizing activity, and less effective at raising binding antibody responses than HVTN 041 sera. It is not clear whether these differences between the canarypox vector prime and the DNA vaccine prime can be attributed to the fact that the canarypox vector expresses multiple unrelated viral vector proteins in addition to the HIV-1 Env while priming with the DNA vaccine only focuses on the expression of Env.
Since the same canarypox prime-recombinant Env protein boost approach was used in the recent RV144 trial which showed statistically significant protection against HIV-1 in an efficacy field trial, the results presented in the current report raise several interesting questions. If a canarypox prime-recombinant Env protein boost approach indeed offers any unique protective benefit over the other two approaches, it is then necessary to identify new biomarkers other than those included in the current study since none stood out as a unique marker for the success of the HVTN 203 trial vaccine. Alternatively, either of the two other HIV vaccines evaluated in the current study may have the potential to provide even better protection than the canarypox prime-recombinant Env protein boost approach if the higher responses in certain assays observed only in the HVTN 041 or DP6-001 trial sera are any indication. Additional late phase clinical studies are needed to answer these questions. However, since the current report showed that each vaccination approach has a relatively specific antibody response profile, it may become feasible to start linking the efficacy of any future vaccine formulation to the antibody profile it exhibits.

Figure 5. Ability of vaccinee sera to mediate ADCC activity. CEMNKr target cells were pulsed with gp120 prior to exposure of vaccine serum at a 1:100 dilution. Target cell lysis indicates the ability of vaccinee serum to mediate cell killing by PBMC from a normal human donor. Dotted line indicates background cell lysis observed with a normal human sera control. doi:10.1371/journal.pone.0013916.g005

Figure 6. The ability of Env-specific antibodies to activate the complement cascade present in complement intact normal human sera was determined using deposition of C4 as a marker for complement activation. A representative plot with data from a single individual from each trial is shown. A) gp120-specific IgG measurement and B) C4 detection in the same testing sera. doi:10.1371/journal.pone.0013916.g006
The current report also pointed to a great need to expand the scope of research to include diverse types of antibody responses when a candidate HIV vaccine is evaluated. The presence of neutralizing antibodies has been used almost exclusively to judge the protective potential of vaccine-induced antibody responses. Other parameters, especially the induction of conformation-dependent antibodies, can provide unique insight to differentiate the quality of antibodies elicited by vaccines.
In recent studies of HIV-infected individuals with broadly neutralizing activity, the neutralizing fraction of sera has often been mapped to those antibodies directed towards the CD4bs [22,25,26,27]. Several new broadly neutralizing mAbs were developed targeting the CD4bs [28]. Two other new mAbs, PG9 and PG16, also target conformationally sensitive epitopes formed by domains from different adjacent gp120 antigens [29]. Other non-HIV recombinant protein-based vaccines, such as HBV and HPV vaccines, also require highly conformational antigens [30,31,32]. Because of this, it is exciting to observe the elicitation of antibodies against the conformationally sensitive CD4bs, like those seen in HIV-infected individuals, through the use of a DNA prime-protein boost regimen. The unique antibody profile and ability to better neutralize primary isolates provides evidence that the DNA prime-protein boost regimen offers another promising heterologous prime-boost platform for further HIV vaccine development in addition to the recent RV144 canarypox prime-protein boost regimen.
HIV vaccine trial vaccinee sera
Human serum samples from the HVTN 041 (NCT00027365) and 203 (NCT00007332) trials [12,14] were obtained through an ancillary study agreement with the US NIH HIV Vaccine Trials Network (HVTN). Sera from DP6-001 study (NCT00061243) were collected as previously described [15]. All serum samples used in this study were collected two weeks after the final immunization.
Ethics Statement
Human serum samples used in the current study were provided by previously closed human clinical trials. The samples used for the current analysis do not have any identifying information about the volunteers that were included in the original studies. Two of these previously closed studies, the HVTN trials (HVTN 203 (NCT00007332) and HVTN 041 (NCT00027365)), were conducted by the US National Institutes of Health's HIV Vaccine Trials Network (HVTN). The Institutional Review Board (IRB) of each participating site of these trials reviewed and approved these study protocols and informed consent forms according to ethics requirements established by HVTN. For the DP6-001 study (NCT00061243), the study protocol and informed consent were reviewed and approved by the IRB at the University of Massachusetts Medical School (UMMS), Worcester, MA, USA. For each study, IRB-approved written consent was obtained from all study participants. The UMMS IRB reviewed the current serum analysis study and waived the requirement for informed consent since these sera were unused samples from previously closed studies without any volunteer identifier information.
Cells and Cell Lines
TZM-bl and CEMNKr cells were obtained from the NIH AIDS Research and Reference Reagent program. PBMC used as effector cells in the ADCC assays were obtained from Dr. Marjorie Robert-Guroff.

Enzyme-linked Immunosorbent Assay

Endpoint binding titers were determined by applying serially diluted serum samples from each trial to JR-FL gp120-coated microtiter plates (coated at 1 µg/mL). Bound gp120-specific IgG was detected using a biotinylated anti-human antibody and a subsequent incubation with streptavidin-HRP. After development with a 3,3′,5,5′-tetramethylbenzidine substrate solution, endpoint titers were defined as the last dilution of sera providing at least twice the background optical density of a normal human sera control.
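A toy Python sketch of that endpoint rule ("last dilution giving at least twice the background OD"); the dilution series and OD readings below are hypothetical, not assay data.

    # Endpoint-titer logic for a serially diluted ELISA, as defined above.
    # Dilutions are reciprocal titers (100 means 1:100); values are invented.

    def endpoint_titer(dilutions, ods, background_od):
        """Return the highest (most dilute) reciprocal dilution whose OD is at
        least twice the normal-human-serum background, or None if none is."""
        cutoff = 2.0 * background_od
        positive = [d for d, od in zip(dilutions, ods) if od >= cutoff]
        return max(positive) if positive else None

    dilutions = [100, 400, 1600, 6400, 25600, 102400]
    ods       = [2.10, 1.85, 1.20, 0.55, 0.18, 0.09]
    print(endpoint_titer(dilutions, ods, background_od=0.10))   # -> 6400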
Neutralization Assays
Neutralization assays were done as previously described [33]. Briefly, 200 TCID50 was incubated with human sera for 1 hr, followed by the addition of 10^5 TZM-bl cells in a final concentration of 20 µg/mL DEAE-dextran. Plates were incubated at 37°C for 48 hours and developed with luciferase reagent (Promega). Neutralization was calculated as the percent change in luciferase activity in the presence of normal human sera versus that of luciferase activity in the presence of immune sera [(NHS RLUs - Immune RLUs)/(NHS RLUs)]*100. In some neutralization assays, JR-FL pseudovirus was treated with 5 µg/mL sCD4 for 1 hr at 37°C prior to the addition of serum. When peptide adsorptions were reported, serum was incubated with a consensus clade B V3 peptide (CTRPNNNTRKSIHIGPGRAFYTTGEIIG-DIRQAHC) at 25 µg/mL for 1 hr at 37°C prior to the addition of virus.
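For illustration, the percent-neutralization formula quoted above can be applied to a dilution series and the 50% titer interpolated; the RLU readings and dilutions below are invented placeholders.

    # Percent neutralization as defined above, plus a simple interpolation of
    # the reciprocal serum dilution giving 50% neutralization. Values invented.
    import numpy as np

    def pct_neutralization(nhs_rlu, immune_rlu):
        return 100.0 * (nhs_rlu - immune_rlu) / nhs_rlu

    nhs_rlu = 120000.0                              # normal human serum control
    recip_dilutions = np.array([10, 40, 160, 640])
    immune_rlu = np.array([30000.0, 66000.0, 102000.0, 114000.0])
    neut = pct_neutralization(nhs_rlu, immune_rlu)  # [75., 45., 15., 5.]

    # Interpolate the reciprocal 50% titer on a log10 dilution scale.
    id50 = 10 ** np.interp(50.0, neut[::-1], np.log10(recip_dilutions)[::-1])
    print(neut, round(float(id50)))                 # roughly a 1:32 titer here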
Competitive Binding Assays
Competitive binding assays were performed as previously described [34,35] with minor modifications. Pseudovirions bearing the JR-FL Env and Vesicular Stomatitis Virus were incubated with serial dilutions of human vaccinee sera prior to the addition to a mAb-coated microtiter plate. The virus/sera mixture was then incubated in the ELISA wells for 3 hrs at room temperature. Plates were washed and 10,000 TZM-bl cells per well were overlaid and incubated for 48 hrs at 37°C. Competition activity is reported as the serum dilution at which the luciferase signal is reduced by 50%.
ADCC
The ability of serum from immunized individuals to mediate ADCC activity was assessed as previously described with minor modifications [36]: 1 × 10^6 CEMNKr cells were dual stained with 2.5 × 10^-6 M PKH-26 (Sigma) and 5 × 10^-8 M CFSE (Molecular Probes, Invitrogen). The labeled cells were pulsed with 5 µg gp120, and exposed to vaccine sera prior to incubation with PBMC from an HIV-negative donor for 4 hours. Cells were then subjected to flow cytometric analysis, where CEMNKr target cell lysis was defined as the percentage of CEMNKr cells in the PKH-26^hi population that lost CFSE fluorescence.
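A small arithmetic example of the lysis readout described here, with made-up flow cytometry event counts; the function and numbers are illustrative only.

    # Target cell lysis = percent of PKH-26-high target cells that lost CFSE.
    # Event counts below are made-up flow cytometry numbers.

    def pct_target_lysis(pkh26_hi_events, pkh26_hi_cfse_negative_events):
        return 100.0 * pkh26_hi_cfse_negative_events / pkh26_hi_events

    immune = pct_target_lysis(pkh26_hi_events=12000, pkh26_hi_cfse_negative_events=2640)
    control = pct_target_lysis(pkh26_hi_events=12500, pkh26_hi_cfse_negative_events=375)
    print(f"{immune:.1f}% lysis with immune serum, {control:.1f}% with normal human serum")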
Detection of complement activation
The downstream product of complement activation, C4, was detected in an ELISA-based assay. ELISAs were performed as described above, where JR-FL gp120 protein was coated on a microtiter plate and exposed to serial dilutions of heat inactivated vaccinee sera. After washing, intact normal human serum was used as a source of complement and was incubated on the plate at a 1:100 dilution for 1 hr at RT. Deposited C4 was then detected with a goat anti-C4 antibody (1:1000 dilution for 1 hr incubation at RT). An AP conjugated anti-goat secondary antibody was used for final detection. | 2014-10-01T00:00:00.000Z | 2010-11-09T00:00:00.000 | {
"year": 2010,
"sha1": "c4b8f4d59f02c87f794cf87de1ac6bcb82013a0e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0013916",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50f43227fffee2cf33ddb61c635417259c9ca7f0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
21741275 | pes2o/s2orc | v3-fos-license | Nucleus Accumbens Microcircuit Underlying D2-MSN-Driven Increase in Motivation
Abstract The nucleus accumbens (NAc) plays a central role in reinforcement and motivation. Around 95% of the NAc neurons are medium spiny neurons (MSNs), divided into those expressing dopamine receptor D1 (D1R) or dopamine receptor D2 (D2R). Optogenetic activation of D2-MSNs increased motivation, whereas inhibition of these neurons produced the opposite effect. Yet, it is still unclear how activation of D2-MSNs affects other local neurons/interneurons or input terminals and how this contributes to motivation enhancement. To answer this question, in this work we combined optogenetic modulation of D2-MSNs with in loco pharmacological delivery of specific neurotransmitter antagonists in rats. First, we showed that optogenetic activation of D2-MSNs increases motivation in a progressive ratio (PR) task. We demonstrated that this behavioral effect relies on cholinergic-dependent modulation of dopaminergic signalling of ventral tegmental area (VTA) terminals, which requires D1R and D2R signalling in the NAc. D2-MSN optogenetic activation decreased ventral pallidum (VP) activity, reducing the inhibitory tone to VTA, leading to increased dopaminergic activity. Importantly, optogenetic activation of D2-MSN terminals in the VP was sufficient to recapitulate the motivation enhancement. In summary, our data suggest that optogenetic stimulation of NAc D2-MSNs indirectly modulates VTA dopaminergic activity, contributing to increased motivation. Moreover, both types of dopamine receptor signalling in the NAc are required in order to produce the positive behavioral effects.
Introduction
Dopaminergic projections from the ventral tegmental area (VTA) to the nucleus accumbens (NAc) have been classically described as the core of the reward circuit (Wise, 2004).
Evidence in animal models and humans showed that the motivational aspects of reward processing are greatly mediated by these projections (Wise, 1998;Kelley and Berridge, 2002;Hyman et al., 2006;Bailey et al., 2016).
In recent years, compelling data supported a role for D1-MSNs in positive reinforcement, while D2-MSNs have been mostly associated with aversion. Nonetheless, recent data emerged in opposition to this dichotomy; whereas the division of direct and indirect neurons based on the respective expression of D1R and D2R in dorsal striatum appears to be precise, in the NAc the indirect pathway contains a mixture of D1-MSNs and D2-MSNs (Lobo et al., 2010;Kravitz et al., 2012). This implies that both NAc D1- and D2-MSNs can inhibit or disinhibit thalamic activity, with clear repercussions on behavior. In agreement with this view, a previous study showed that activation of either NAc D1- or D2-MSNs is sufficient to increase motivation in a progressive ratio (PR) task (Soares-Cunha et al., 2016a). In the same direction, in the ventrolateral striatum, both D1- and D2-MSNs are activated at the trial start cue in the PR test, and inhibition of either population immediately after the cue resulted in decreased motivation (Natsubori et al., 2017).
These seminal findings showed that D2-MSNs play a more pro-motivation/reward role than initially anticipated and suggest that the prevailing notion of a functional segregation of MSNs should be reconsidered. Yet, it is still unclear how activation of D2-MSNs affects other local neurons/interneurons and downstream regions, and how this contributes to motivation enhancement. Therefore, we combined optogenetic activation of NAc D2-MSNs with in loco pharmacological delivery of specific antagonists to identify the contribution of different NAc inputs and neuronal populations to motivational drive.
Animals
Male Wistar Han rats (two to three months old at the beginning of the tests) were used. Animals were maintained under standard laboratory conditions: 12/12 h light/dark cycle (lights on from 8 A.M. to 8 P.M.) and room temperature of 21 ± 1°C, with relative humidity of 50-60%; rats were individually housed after optical fiber implantation; standard diet (4RF21, Mucedola SRL) and water were given ad libitum, until the beginning of the behavioral experiments, in which animals switched to food restriction to maintain 85% of initial body weight.
Behavioral manipulations occurred during the light period of the light/dark cycle. Health monitoring was performed according to FELASA guidelines (Nicklas et al., 2002). All procedures were conducted in accordance with European Regulations (European Union Directive 2010/63/EU). Animal facilities and animal experimenters were certified by the national regulatory entity, Direção-Geral de Alimentação e Veterinária (DGAV). All protocols were approved by the Ethics Committee of the Life and Health Sciences Research Institute (ICVS) and by DGAV.
Experimental design
Group I of animals (n D2-ChR2 = 10, n D2-eYFP = 7), which received intracranial viral injection and optical fiber placement in the NAc, performed the PR test (described in the behavior section below) and were killed 90 min after the beginning of the last PR session for c-fos analysis (Extended Data Fig. 1-1A).
Group II of animals (n D2-ChR2 = 8, n D2-eYFP = 7), which received intracranial viral injection and hybrid cannula (optics and fluid) placement in the NAc, performed the PR test (described below) and performed two additional PR sessions with antagonist injections. On day 1, half of the animals received antagonist injection and the other half received vehicle injection. On day 2, animals receiving drug on the first day received vehicle and vice versa. All animals were treated with vehicle and drug. After behavioral performance, all rats were killed, and cannula placement and viral expression were confirmed (Extended Data Fig. 1-1B).

C.S.-C. was recipient of the Fundação para a Ciência e Tecnologia (FCT) Fellowship SFRH/BD/51992/2012 and is currently recipient of a post-doctoral fellowship from the Programa de Atividades Conjuntas (PAC), through MED-PERSYST Project POCI-01-0145-FEDER-016428 (supported by the Portugal2020 Programme). B.C. is recipient of a PhD scholarship funded by FCT (SFRH/BD/98675/2013). A.J.R. is a FCT Investigator Fellow (IF/00883/2013). N.V. is a recipient of the CNPQ Grant 249991/2013-6 and the CAPES Grant 88887.131435/2016-00. This work was developed under the scope of the project NORTE-01-0145-FEDER-000013, supported by the Northern Portugal Regional Operational Programme (NORTE 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). Part of the work was supported by the Janssen Neuroscience Prize (1st edition) and by the BIAL Grant 30/2016.
Group III of animals (n D2-ChR2 NAc-VP = 8, n D2-eYFP NAc-VP = 6), which received intracranial viral injection in the NAc and optical fiber placement in the VP, performed the PR test (described below; Extended Data Fig. 1-1C).
Group IV of animals (n D2-ChR2 = 4) was injected with ChR2 in the NAc, and after three weeks to allow viral expression, in vivo single unit electrophysiological recordings were performed (Extended Data Fig. 1-1D).
Subjects and apparatus
Rats were habituated to 45-mg food pellets (F0021; Bio-Serv), which were used as reward during the behavioral protocol, 1 d before training initiation. Behavioral sessions were performed in operant chambers (Med Associates) that contained a central, recessed magazine providing access to the 45-mg food pellets (Bio-Serv) and two retractable levers, with cue lights above them, positioned on each side of the magazine. Chamber illumination was obtained through a 2.8-W, 100-mA light positioned at the top-center of the wall opposite to the magazine. The chambers were controlled by a computer equipped with the Med-PC software (Med Associates).
PR schedule of reinforcement
All training sessions started with illumination of the house light, which remained on until the end of the session. On the first training session [continuous reinforcement (CRF) session], one lever was extended. The lever would remain extended throughout the session, and a single lever press would deliver a food pellet (maximum of 50 pellets earned within 30 min). In some cases, food pellets were placed on the lever to promote lever pressing. After successful completion of the CRF training, rats were trained to lever press on the opposite lever using the same training procedure. In the four following days, the side of the active lever was alternated between sessions. Then, rats were trained to lever press one time for a single food pellet in a fixed ratio (FR) schedule consisting of 50 trials in which both levers are presented, but the active lever is signaled by the illumination of the cue light above it. FR sessions began with extension of both levers (active and inactive) and illumination of the house light and the cue light over the active lever. Completion of the correct number of lever presses led to a pellet delivery, retraction of the levers and the cue light turning off for a 20-s intertrial interval (ITI). Rats were trained first with one lever active and then with the opposite lever active in separate sessions (on the same day). In a similar manner, rats were then trained using an FR4 reinforcement schedule for 4 d and an FR8 for 1 d. On the test day, rats were exposed to PR or FR experimental sessions (one session per day) according to the following schedule: day 1, FR4; day 2, PR (optical stimulation); day 3, FR4; day 4, PR (no optical stimulation). PR sessions were identical to FR4 sessions except that the operant requirement on each trial (T) was the integer (rounded down) of 1.4^(T-1) lever presses, starting at 1 lever press. PR sessions ended after 15 min elapsed without completion of the response requirement in a trial.
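To make the response requirement concrete, here is a minimal sketch (assuming the 1.4^(T-1) rule quoted above, with trial indexing starting at 1) that lists the number of lever presses needed on each PR trial:

import math

def pr_requirement(trial):
    # Lever presses required on PR trial T: floor(1.4 ** (T - 1)), starting at 1.
    return math.floor(1.4 ** (trial - 1))

reqs = [pr_requirement(t) for t in range(1, 13)]
print(reqs)       # [1, 1, 1, 2, 3, 5, 7, 10, 14, 20, 28, 40]
print(sum(reqs))  # 132 presses to complete the first 12 trials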
Before the PR session, rats were connected to an opaque optical fiber through a previously implanted cannula guide placed in the NAc. At the beginning of each trial of the PR session with optical stimulation, when the retractable levers were exposed to the animal together with the cue light, animals received optical stimulation. After basal assessment of PR (one session with optical stimulation and one session without), all animals performed seven additional sessions (with a one-week interval and one FR4 reminder session before the PR test) with optical stimulation and local pharmacological administration of receptor antagonists (Extended Data Fig. 1-1).
Optical stimulation was performed as follows: 473 nm; frequency of 40 Hz; 12.5-ms pulses over 1 s; 10 mW at the tip of the implanted fiber.
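As a sanity check on these parameters (an illustrative sketch, not the stimulation code used in the study), the pulse onsets and duty cycle of a 1-s train at 40 Hz with 12.5-ms pulses can be laid out as follows:

freq_hz = 40        # pulse rate
pulse_ms = 12.5     # pulse width
train_s = 1.0       # train duration

period_ms = 1000.0 / freq_hz                         # 25 ms between pulse onsets
onsets_ms = [i * period_ms for i in range(int(freq_hz * train_s))]
duty_cycle = pulse_ms / period_ms                    # fraction of time the laser is on

print(len(onsets_ms), onsets_ms[:4], duty_cycle)     # 40 [0.0, 25.0, 50.0, 75.0] 0.5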
Constructs and virus preparation
eYFP or hChR2(H134R)-eYFP were cloned under the control of the D2R minimal promoter region as described before (Soares-Cunha et al., 2016a;Zalocusky et al., 2016). Constructs were packaged in AAV5 serotype by the University of North Carolina at Chapel Hill (UNC) Gene Therapy Center Vector Core (UNC). AAV5 vector titers were 3.7-6 × 10^12 viral molecules/ml as determined by dot blot.
Surgery and cannula implantation
Rats were anesthetized with 75 mg kg-1 ketamine (Imalgene, Merial) plus 0.5 mg kg-1 medetomidine (Dorbene, Cymedica). Virus was unilaterally injected into the NAc; coordinates from bregma, according to Paxinos and Watson (2005): +1.2 mm anteroposterior (AP), +1.2 mm mediolateral (ML), and -6.5 mm dorsoventral (DV; D2-ChR2 group and D2-eYFP control group). Rats that performed the PR with only optical stimulation were implanted with an optic fiber (200 µm in diameter) attached to a 2.5-mm ferrule (Thorlabs), and rats that performed the PR test with both optical stimulation and local administration of antagonists were implanted with opto-fluid cannulas (Doric Lenses) using the injection coordinates (except for the DV: -6.4 mm) that were secured to the skull using 2.4-mm screws (Bilaney) and dental cement (C&B kit, Sun Medical).
For NAc terminal stimulation in the VP, virus was injected as above but rats were implanted with an optic fiber in the VP (coordinates from bregma: -0.1 mm AP, +2.4 mm ML, and -7 mm DV; D2-ChR2 NAc-VP group and D2-eYFP NAc-VP control group).
Rats were allowed to recover for two weeks before initiation of the behavioral trainings.
Single neuron activity was recorded extracellularly with a tungsten electrode (tip impedance 5-10 MΩ at 1 kHz) and data sampling was performed using a CED Micro1401 interface and Spike2 software (Cambridge Electronic Design). The DPSS 473 nm laser system, controlled by a stimulator (Master-8, AMPI), was used for intracranial light delivery. Optical stimulation was performed as follows: 473 nm; frequency of 40 Hz; 12.5-ms pulses over 1 s, 10 mW.
Firing rate histograms were calculated for the baseline (10 s before stimulation), stimulation period and after stimulation period (10 s after the end of stimulation). Spike latency was determined by measuring the time between half-peak amplitude for the falling and rising edges of the unfiltered extracellular spike.
NAc neurons were classified according to previous descriptions (Vicente et al., 2016). In short, fast-spiking interneurons (FSIs), putative parvalbumin-containing neurons (pFSs), were identified as having a waveform half-width of less than 100 µs and a baseline firing rate higher than 10 Hz; tonically active putative CINs (pCINs) were identified as those with a waveform half-width larger than 300 µs. Putative MSNs (pMSNs) were identified as those with a baseline firing rate lower than 5 Hz that did not meet the waveform criteria for pCIN or pFS neurons.
VP GABAergic neurons were identified as those having a baseline firing rate between 0.2 and 18.7 Hz (Richard et al., 2016). Other nonidentified neurons (corresponding to less than 5% of recorded cells) were excluded from the analysis.
Single units in the VTA were separated into putative dopaminergic (pDAergic) and putative GABAergic (pGABAergic) neurons. This classification was based on firing rate and waveform duration (Ungless et al., 2004;Ungless and Grace, 2012;Totah et al., 2013). Cells presenting a baseline firing rate lower than 10 Hz and a waveform duration higher than 1.5 ms were considered pDAergic neurons. Cells presenting a baseline firing rate higher than 10 Hz and a waveform duration lower than 1.5 ms were classified as pGABAergic. Other single units that did not fit any classification (<5% of recorded cells) were excluded from the analysis.
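The three classification rules above (NAc, VP, and VTA) are simple threshold tests on baseline firing rate and waveform width, so they can be consolidated into a small helper; this is an illustrative sketch, with thresholds copied from the criteria quoted above and label names chosen here.

def classify_unit(region, firing_hz, halfwidth_us=None, duration_ms=None):
    # Assign a putative cell type from baseline firing rate and waveform width.
    # "unclassified" units would be excluded from analysis.
    if region == "NAc":
        if halfwidth_us < 100 and firing_hz > 10:
            return "pFS"      # fast-spiking interneuron
        if halfwidth_us > 300:
            return "pCIN"     # tonically active cholinergic interneuron
        if firing_hz < 5:
            return "pMSN"
        return "unclassified"
    if region == "VP":
        return "GABAergic" if 0.2 <= firing_hz <= 18.7 else "unclassified"
    if region == "VTA":
        if firing_hz < 10 and duration_ms > 1.5:
            return "pDAergic"
        if firing_hz > 10 and duration_ms < 1.5:
            return "pGABAergic"
        return "unclassified"
    raise ValueError("unknown region")

print(classify_unit("NAc", firing_hz=2.1, halfwidth_us=180))   # pMSN
print(classify_unit("VTA", firing_hz=4.0, duration_ms=2.0))    # pDAergic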
For each brain region, counts were performed in five distinct 50-µm sections. Images were collected and analyzed by confocal microscopy (Olympus FluoView FV1000). Cell counts were normalized to the area of the brain region.
Drugs
All drugs were delivered 10 min before animals performed the PR test, through an opto-fluid system chronically implanted in the NAc. Injections were performed using a 5-µl gastight syringe (Hamilton), attached to the implanted injection cannula of the rats through 22-gauge tubing, at a constant rate of 1 µl/min.
Statistical analysis
Normality tests were performed for all data analyzed, as well as outlier analysis using Tukey's test. Statistical analysis between two groups was made using two-tailed Student's t test (unpaired t test for comparison between two groups; paired t test for comparison within the same group). One- or two-way ANOVA was used when appropriate. Bonferroni's post hoc multiple comparisons were used for group differences determination. Statistical results are displayed in Table 1. Results are presented as mean ± SEM. All statistical analysis was performed using GraphPad Prism (v7.0), and results were considered significant for p ≤ 0.05.
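Below is a minimal sketch of the comparison logic described above (Tukey-fence outlier screening, a normality check, an unpaired two-group test, and a one-way ANOVA) using SciPy; the input arrays are illustrative, and the actual analysis was run in GraphPad Prism.

import numpy as np
from scipy import stats

def tukey_filter(x, k=1.5):
    # Drop values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences).
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x >= q1 - k * iqr) & (x <= q3 + k * iqr)]

# Hypothetical breakpoint values for two groups
chr2 = tukey_filter([118, 131, 125, 140, 122, 135, 128, 133, 126, 138])
eyfp = tukey_filter([78, 85, 91, 80, 88, 83, 86])

print(stats.shapiro(chr2)[1], stats.shapiro(eyfp)[1])  # normality p-values
print(stats.ttest_ind(chr2, eyfp))                     # unpaired two-group comparison
print(stats.f_oneway(chr2, eyfp, eyfp + 5))            # one-way ANOVA across >2 groups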
Optogenetic stimulation of NAc D2-MSNs increases motivation
To specifically modulate the activity of NAc D2R-expressing neurons, we injected into the NAc of rats a construct containing channelrhodopsin (ChR2) under the control of the D2R minimal promoter (Fig. 1C). In addition, only 1.5% of eYFP+ cells were D1R+, and 2% were ChAT+. Forty percent of ChAT+ cells (CINs) were transfected, since they express eYFP (Extended Data Fig. 1-2). Using single-cell in vivo electrophysiology, we showed that D2-MSN optical stimulation (40 Hz, 40 light pulses at 12.5 ms) significantly increased NAc firing rate during stimulation in comparison with baseline, and 84% of the cells returned to basal activity after stimulation (F(2,48) = 76.7, p < 0.000, one-way ANOVA; Fig. 1D-F). A total of 68% of recorded cells increased activity, 16% decreased, and 24% did not change activity in response to stimulation. Spike latency was ~2 ms (Fig. 1G).
Afterward, animals were submitted to the PR test (Extended Data Fig. 1-1) to evaluate their willingness to work for a food reward, a direct measure of individual motivation. During CRF training, both groups increased lever pressing throughout days in a similar manner (F(1,15) = 0.43, p = 0.522, two-way ANOVA; Fig. 1H). Likewise, all animals increased lever pressing on the active versus the inactive lever across the FR schedule days (F(3,30) = 126.8, p < 0.000, two-way ANOVA; Fig. 1I).
In agreement with previous findings (Soares-Cunha et al., 2016a), D2-MSN optical stimulation (40 light pulses of 12.5 ms at 40 Hz) occurring at the same time as the conditioned stimulus (light above the active lever) induced a significant increase in the breakpoint of D2-ChR2 rats in comparison with D2-eYFP-stimulated rats (63.6% increase; t(15) = 7.7, p < 0.000, unpaired t test; Fig. 1J). All D2-ChR2 rats displayed a significant increase in the breakpoint in the session with optical stimulation (ON) in comparison with the session without stimulation (OFF; two-way ANOVA post hoc, p < 0.000; Fig. 1K). This increase in motivation was not due to differences in the number of food pellets earned during the PR session (t(15) = 1.5, p = 0.1380, unpaired t test; Fig. 1L). Stimulation occurring during the ITI had no effect on motivation (Fig. 1M,N), showing that the positive effect of stimulation on behavior was restricted to particular stages of the test.
Increase in motivation is dependent on NAc GABA signaling
MSNs are GABAergic in nature and synapse onto each other in the NAc (Dobbs et al., 2016). In addition, local interneurons provide a further source of GABA that also controls MSN activity (Fig. 2A). To further understand the impact of GABAergic neurotransmission on the D2-MSN-mediated enhancement of motivation, we used hybrid cannulas, which allow dual delivery of drugs and light in the same region (Extended Data Figs. 1-1, 2-1). Immediately before behavioral testing and optogenetic activation of D2-MSNs, we injected in the NAc either a GABA A receptor antagonist (bicuculline, 75 ng) or a GABA B receptor antagonist (CGP 55845 hydrochloride, 44 ng), in dosages that have been shown previously to induce a behavioral effect (Giorgetti et al., 2002;Kandov et al., 2006;Ikeda et al., 2010).
For the GABA A receptor antagonist, we found no significant effect of treatment but there was a group effect, with D2-ChR2-stimulated animals presenting increased breakpoint (two-way ANOVA; treatment effect: F(1,13) = 0.1, p = 0.117; group effect: F(1,13) = 118.8, p < 0.000; Fig. 2B). For the GABA B receptor antagonist, there was a significant effect of treatment and group (two-way ANOVA; treatment effect: F(1,13) = 30.7, p < 0.000; group effect: F(1,13) = 193, p < 0.000; Fig. 2C). Neither of the GABA antagonists altered the breakpoint of control D2-eYFP animals (Fig. 2B,C), although there was a trend for an increased number of lever presses with GABA B receptor antagonist treatment (12% increase; p = 0.070, two-way ANOVA post hoc). GABA A receptor antagonist administration before D2-MSN stimulation did not impair the breakpoint enhancement (D2-ChR2 vehicle vs D2-ChR2 GABA A antag, p = 0.787, two-way ANOVA post hoc; Fig. 2B). However, administration of the GABA B receptor antagonist led to an additional increase in the breakpoint of D2-stimulated animals (15.8% increase; p < 0.000, two-way ANOVA post hoc; Fig. 2C). No differences were found between groups in the number of pellets earned during the session (Extended Data Fig. 2-2).
These results suggest that GABA signaling arising from MSNs or local interneurons can modulate motivational drive in a GABA B -dependent manner.
Further studies using either one of the antagonists revealed that this blockade was mediated by nAChR (D2-ChR2 vehicle vs D2-ChR2 nAChR antag, two-way ANOVA post hoc, p < 0.000; Fig. 2D). No differences in the number of pellets earned during the session were found (Extended Data Fig. 2-2).
In the NAc, MSNs express mAChR (M1 and M4; Yan et al., 2001) but not nAChR (Jones et al., 2001;Jones and Wonnacott, 2004). The latter receptors are mainly expressed in VTA dopaminergic terminals (Hill et al., 1993) and some GABAergic interneurons (Koós and Tepper, 1999; Fig. 2A). Tonic striatal ACh is able to promote dopamine release through β2-subunit-containing (β2*) nAChRs in VTA terminals (Rice and Cragg, 2004). Using different KO strains, Champtiaux and colleagues proposed that a combination of α6β2* and α4β2* nAChRs mediates the endogenous cholinergic modulation of dopamine release at the terminal level (Champtiaux et al., 2003). Considering this, we injected DHβE (0.7 µg; dosage validated in Löf et al., 2007), an antagonist of α4 subunit-containing nAChRs, in the NAc before performing the PR test. By blocking α4* receptors, we are abolishing at least 50% of dopamine release in the NAc (Champtiaux et al., 2003). Treatment with the α4* antagonist had a significant effect on behavioral performance (F(1,13) = 43.0, p < 0.000, two-way ANOVA; Fig. 2E). No effect on the breakpoint of control animals was found; yet, this treatment abolished the enhancement of breakpoint induced by D2-MSN stimulation (20.8% decrease; p < 0.000, two-way ANOVA post hoc). No effect on the number of pellets earned during the session was found (Extended Data Fig. 2-2).
These results suggest that cholinergic activation of VTA terminals is required for the observed behavioral effect of D2-MSN stimulation.
Enhancement of motivation by D2-MSN activation requires dopamine signaling through D1R and D2R
Activating α6β2* and/or α4β2* nAChRs in VTA terminals greatly enhances dopamine release in the NAc (Wonnacott et al., 2000;Cachope et al., 2012), and our previous results suggested that cholinergic modulation of VTA terminals was necessary for the observed motivation enhancement induced by D2-MSN optogenetic activation. Thus, we next tried to clarify the role of the NAc dopamine receptors D1R and D2R in this process. To do so, before performance of the PR test with optogenetic stimulation of D2-MSNs, we injected in the NAc R(+)-SCH-23390 hydrochloride (0.5 µg; D1R antagonist) or sulpiride (0.2 µg; D2R antagonist), in doses that were previously shown to have a behavioral effect (Vezina et al., 1994).
Additionally, pharmacological inhibition of either D1R or D2R abolished the increase in motivation induced by D2-MSN optogenetic activation (D2-ChR2 vehicle vs D2-ChR2 D1R antag: p < 0.000, two-way ANOVA post hoc; D2-ChR2 vehicle vs D2-ChR2 D2R antag: p < 0.000, two-way ANOVA post hoc). A reduction in the number of pellets consumed in D1R-treated D2-eYFP rats was found (p = 0.0164, two-way ANOVA post hoc; Extended Data Fig. 2-2). No significant differences in the number of pellets consumed were found in other groups. These results suggest that the motivation improvement is dependent on both types of dopamine receptor signaling in the NAc.

(Figure 1 legend, continued.) E', Example of a ChR2 neuron that responds to each pulse of stimulation. Right, Example of a representative MSN waveform. F, Increase in NAc average firing rate during optogenetic stimulation of D2-MSNs. G, Spike latency in response to D2-MSN optical stimulation. H, CRF training sessions of the PR test. I, FR training sessions of the PR test. J, Optogenetic activation of D2-MSNs during cue exposure strongly enhanced breakpoint. K, All animals increase breakpoint in the session with D2-MSN stimulation (ON versus OFF session). L, Number of pellets consumed in the PR session with stimulation was similar between groups. M, Optogenetic activation of D2-MSNs during ITI does not alter breakpoint. N, Number of pellets earned in the PR session with stimulation on ITI was similar between groups. n D2-eYFP = 7; n D2-ChR2 = 10. Error bars denote SEM; ***p < 0.001 (Extended Data Figs. 1-1, 1-2).
Optogenetic stimulation of NAc D2-MSNs recruits the VP and the VTA
The preceding results suggested a dopamine-dependent effect of D2-MSN optogenetic activation in motivation (summarized in Fig. 2H). D2-MSNs do not directly project to VTA but indirectly modulate VTA dopaminergic activity through the VP (Wu et al., 1996;Floresco et al., 2003;Grace et al., 2007;Hjelmstad et al., 2013;Kupchik et al., 2015). So, we next examined the pattern of expression of c-fos, an immediate early gene used as a marker of neuronal recruitment, after the PR test in the NAc and connected regions.
In addition, we evaluated the number of c-fos+ cells in accumbal downstream regions: the VTA, which is innervated solely by NAc D1-MSNs (Bocklisch et al., 2013); the VP, which is directly innervated by NAc D1- and D2-MSNs (Creed et al., 2016); and the substantia nigra pars compacta (SNc) as a control region, since it is mainly innervated by dorsal striatum MSNs (Gerfen, 1984).
Optogenetic activation of NAc-VP terminals recapitulates motivation enhancement
Next, we analyzed the activity of the VP and VTA during D2-MSN optogenetic stimulation using in vivo single-cell electrophysiology (Fig. 4A).
Concordant with a GABAergic input, NAc D2-MSN stimulation elicited an overall reduction in the firing rate of the VP (F(2,87) = 10.6, p < 0.000, one-way ANOVA; Fig. 4B), with an average spike latency of 5.7 ms (Extended Data Fig. 4-1A), consistent with the expected monosynaptic input from the NAc to the VP. More than 90% of recorded neurons in the VP decreased their activity during stimulation, which normalized thereafter (Fig. 4C,D).
Conversely, in the VTA, we found a significant increase in the global firing rate of putative VTA dopaminergic neurons (pDAergic; F(2,56) = 17.6, p < 0.000, one-way ANOVA; Fig. 4E), with an average spike latency of 170 ms (Extended Data Fig. 4-1A), indicative of polysynaptic modulation. Of these pDAergic neurons, 82.8% increased activity during stimulation (Fig. 4F,G). No significant differences were observed in the activity of pGABAergic VTA neurons, although there was a trend for decreased activity during D2-MSN stimulation (Fig. 4E,G).

(Figure 2 legend, continued.) The NAc receives cortical (prefrontal cortex (PFC)) glutamatergic inputs and VTA dopaminergic inputs. NAc D1- and D2-MSNs send GABAergic projections to the VP, which in turn projects back to the NAc (not represented) and to the VTA (among other regions). Besides MSNs, the NAc contains CINs and GABAergic interneurons of different natures, including FSIs, which tightly regulate striatal activity. Right, Expression of different neurotransmitter receptors in striatal neurons and terminals. Of relevance, CINs also express the dopamine receptor D2R and can stimulate dopamine release from VTA terminals mainly in an α4β2*-nAChR- or α6β2*-nAChR-dependent manner. Activation of D2R autoreceptors located in VTA terminals also controls dopamine release. iGluR: ionotropic glutamate receptors; mGluR: metabotropic glutamate receptors; nAChR: nicotinic (ionotropic) cholinergic receptors; M1/M4: muscarinic (metabotropic) cholinergic receptors. B-G, Effects of different receptor antagonists on behavior. Rats were injected in the NAc with a specific antagonist immediately before the PR test with D2-MSN optogenetic activation. B, GABA A receptor antagonist did not alter breakpoint of control D2-eYFP animals, nor of D2-ChR2-stimulated animals. C, GABA B receptor antagonist did not alter breakpoint of control animals, but it further increased the breakpoint of D2-ChR2-stimulated animals. D, Injection of mAChR + nAChR antagonist combination abolished the increased breakpoint of D2-ChR2-stimulated animals. This effect is mediated mainly by nAChR since mecamylamine per se normalized breakpoint. E, Local administration of α4-nAChR antagonist blocked the effect of D2-MSN optogenetic activation.
Optical stimulation (40 light pulses of 12.5 ms at 40 Hz) of D2-MSN-VP terminals elicited a significant increase in the breakpoint of ChR2-stimulated rats in comparison with control-stimulated rats (40% increase; t(11) = 10.7, p < 0.000, unpaired t test; Fig. 4I). All D2-ChR2 NAc-VP rats displayed a significant increase in breakpoint in the session with optical stimulation (ON) in comparison with the session without stimulation (OFF).

(Figure 3 legend, continued.) Interestingly, no significant differences were found between the stimulated versus nonstimulated side. Error bars denote SEM; *p < 0.05, **p < 0.01, ***p < 0.001 (Extended Data Fig. 3-1).
Discussion
Local microcircuits in combination with excitatory and inhibitory inputs from upstream regions play an important role in striatal function. Here, we show that activation of D2-MSNs during cue exposure increases willingness to work in the PR test, and that a concerted action of different neurotransmitter systems in the striatum is required for this behavioral effect (Fig. 5).
We first evaluated the impact of GABAergic transmission, since GABAergic MSNs synapse extensively onto each other in the NAc (Sesack and Pickel, 1990;Dobbs et al., 2016), providing a weak lateral inhibitory network (feedback inhibition; Tepper et al., 2008). This MSN-MSN reciprocal regulation mainly occurs in a GABA A receptor-mediated manner (Tunstall et al., 2002). Our results suggest that the D2-MSN-driven enhancement in motivation is not dependent on GABAergic signaling, since neither GABA A nor GABA B antagonists normalized the phenotype. However, we do observe an additional increase in the breakpoint of both control and D2-MSN-stimulated animals upon GABA B antagonist administration in the NAc. Such a finding is likely to rely on enhanced corticostriatal glutamatergic release upon blockade of presynaptic GABA B receptors. In fact, although MSNs express GABA B receptors, application of exogenous GABA B agonists does not lead to any MSN electrophysiological effect (Logie et al., 2013); it does, however, significantly suppress glutamatergic inputs onto MSNs via a presynaptic mechanism (Nisenbaum et al., 1993;Logie et al., 2013). Apart from classic studies showing that NAc cue-evoked firing is abolished by VTA inactivation (Yun et al., 2004), there is also evidence that cue-evoked excitations of NAc core neurons depend on mPFC glutamatergic projections and contribute to the behavioral response to reward-predictive cues (Ishikawa et al., 2008).
Yet, it is important to note that, although sparse, GABAergic interneurons (which do not express D2R; Tritsch and Sabatini, 2012) display highly branched dendritic and extensive axonal arborisations (Kawaguchi, 1997;Ibáñez-Sandoval et al., 2011;English et al., 2012) and are capable of exerting powerful control over striatal excitability (feedforward inhibition; Tepper et al., 2008). They also express GABA B receptors (Logie et al., 2013), so the blockade of this specific feedforward inhibition might also contribute to the observed increase in motivational drive.
In addition to local GABA control, the striatum also contains CINs, which have both excitatory and inhibitory effects in striatal MSNs (Sullivan and Brake, 2003;Pakhotin and Bracci, 2007;Witten et al., 2010). In primates, CINs exhibit multiphasic responses to motivationally salient stimuli that mirror those of midbrain dopamine neurons, being important for reward-related learning (Kitabatake et al., 2003;Joshua et al., 2008;Witten et al., 2010;Cachope et al., 2012). Since 80% of CINs express D2R (Alcantara et al., 2003), one can argue that our optogenetic stimulation protocol directly activates these interneurons, enhancing ACh release in the striatum. In line with this, we found an increase in ChAT ϩ /c-fos ϩ neurons in stimulated animals.
In vivo selective activation of CINs is sufficient to elicit dopamine release directly in the NAc and independently of the soma, by activation of nAChRs in VTA terminals (Cachope et al., 2012;Threlfell et al., 2012). It has been suggested that these nAChRs act as dynamic detectors of ACh concentrations, enhancing the contrast between tonic and burst dopaminergic firing (Brunzell et al., 2010). In an elegant study using different KO strains, Champtiaux and colleagues proposed that a combination of α6β2* and α4β2* nAChRs mediates endogenous cholinergic modulation of dopamine release at the VTA terminal level (Champtiaux et al., 2003). Here, we show that the α4* antagonist, DHβE, blocks the D2-MSN-dependent increase in motivation, suggesting that ACh-mediated dopamine release from VTA terminals is crucial for the observed behavioral effect. It is important to note that, besides CINs, the NAc may also receive cholinergic inputs from the laterodorsal tegmentum (Dautan et al., 2014), although the function of these projections remains completely unknown.

(Figure 5 legend, continued.) ... (2), reducing VP-to-VTA inhibitory tone (3). This triggers an increase in VTA dopaminergic activity (4). These VTA dopaminergic signals require D1R and D2R signaling in the NAc (5', 5). Interestingly, cholinergic-dependent control of VTA dopaminergic terminals in the NAc (via α4-nAChR) is essential for this process (6). (7) Optical stimulation can also be activating D2-expressing CINs that strongly influence dopamine release and shape behavior.
In the NAc, α4 nAChR subunits are expressed mainly in VTA dopaminergic terminals but also in some GABAergic FSIs. So, the observed dampening of motivation with the α4 antagonist could also depend on these interneurons. However, our data do not support this, because GABA receptor antagonists did not abolish the optogenetically induced behavioral effect.
In addition to local cholinergic control, our data suggest an indirect effect on VTA dopaminergic activity through the VP. First, c-fos analysis revealed increased recruitment of both VP and VTA regions. The VP data are somewhat surprising considering the GABAergic nature of accumbal-VP monosynaptic projections (Root et al., 2010;Kupchik et al., 2015). Although most studies associate c-fos expression with increased neuronal activity, at least one study has shown that activating striatal MSNs increases c-fos in the VP (Page and Everitt, 1993). Yet, rather than directly associating D2-MSN activation with this increase in c-fos in the VP, we simply aim to illustrate that the VP is being differently recruited in stimulated animals. In fact, animals were killed 90 min after the beginning of the PR test, so c-fos reactivity is a sum of all neuronal events that occurred during the test, and does not reflect only the optogenetic activation period.
D2-MSN stimulation decreased VP firing rate and indirectly increased VTA dopaminergic activity, with smaller effects on GABAergic VTA neurons, consistent with the preferential innervation of VTA dopaminergic neurons by VP inputs (Mahler et al., 2014). So, our hypothesis is that D2-MSNs reduce the tonic VP-VTA inhibitory input, contributing to enhanced dopaminergic activity, which is known to boost motivational drive (Peciña et al., 2003;Cagniard et al., 2006). In fact, it was shown that inhibition of NAc afferents to the VP, or direct infusion of GABAergic agonists into the VP, selectively increased the population activity of dopamine neurons, raising NAc dopamine efflux (Floresco et al., 2003). In line with this, we showed that optogenetic activation of D2-MSN terminals in the VP was sufficient to increase motivation. These findings are in agreement with the emerging notion that the VP is crucial for reward and motivation toward natural rewards and drugs of abuse. In fact, different subregions of the VP mediate different aspects of rewarded behavior, from motivation/incentive salience to reward prediction and consumption (Smith et al., 2009;Root et al., 2015). Yet, it is important to note that the VP is not only a relay area for indirect NAc inputs, since VP neuron responses can occur at a shorter latency than cue-elicited responses in NAc neurons (Richard et al., 2016), and VP firing rate reflects the strength of incentive motivation (Ahrens et al., 2016).
The increased dopaminergic signals arising from the VTA act mainly (though not exclusively, since some interneurons also express dopamine receptors) on MSNs, either by activating D1R or D2R. Local administration of either D1R or D2R antagonists decreased motivation in control animals and also abolished the D2-MSN-induced positive effects on motivation, indicating a synergistic effect of both MSN populations. In this perspective, it is important to note that blockade of D2R would be expected to enhance the activity of D2-MSNs, since D2Rs are coupled to inhibitory G-proteins (Beaulieu and Gainetdinov, 2011). Yet, one has to bear in mind that D2R antagonists can also act on D2 autoreceptors in VTA terminals, disinhibiting presynaptic control of dopamine release (Anzalone et al., 2012).
Interestingly, D2-MSN optogenetic activation during cue exposure also indirectly recruited D1-MSNs, as assessed by an increase in the number of D1+/c-fos+ cells in the NAc upon stimulation. Considering the proposed role for D1R-expressing neurons in reinforcement (Lobo et al., 2010;Kravitz et al., 2012), this activation probably also contributes to the behavioral output.
In summary, we show that NAc D2-MSN optogenetic activation enhances motivation through enhanced VTA-driven dopaminergic signaling. The behavioral effect was dependent on both D1R and D2R signaling in the NAc, suggesting that a coordinated action between these two striatal populations is needed to increase motivational levels. | 2018-05-25T21:26:22.459Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "61a53eb1fee73f8da8a133d95b5ce6eaec60bb46",
"oa_license": "CCBY",
"oa_url": "https://www.eneuro.org/content/eneuro/5/2/ENEURO.0386-18.2018.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e12330d43772e59a4c770d3c91a45e9d9f96e17a",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
14581174 | pes2o/s2orc | v3-fos-license | Microring resonators with flow-through nanopores for nanoparticle counting and sizing
This paper proposes a high precision method for nanoparticle counting and sizing using a microring resonator-waveguide system that contains a flow-through nanopore. Theoretical analysis is carried out based on the coupled-mode theory, showing that when the nanoparticle passes the nanopore a temporal pulse signal can be detected and that the peak amplitude depends linearly on the nanoparticle volume. It is estimated that a nanoparticle of sub-10 nm in size may be detectable. ©2013 Optical Society of America OCIS codes: (120.1880) Detection; (230.5750) Resonators; (230.3990) Micro-optical devices; (280.1415) Biological sensing and sensors. References and links 1. G. S. Roberts, D. Kozak, W. Anderson, M. F. Broom, R. Vogel, and M. Trau, “Tunable nano/micropores for particle detection and discrimination: Scanning ion occlusion spectroscopy,” Small 6(23), 2653–2658 (2010). 2. R. Vogel, G. Willmott, D. Kozak, G. S. Roberts, W. Anderson, L. Groenewegen, B. Glossop, A. Barnett, A. Turner, and M. Trau, “Quantitative sizing of nano/microparticles with a tunable elastomeric pore sensor,” Anal. Chem. 83(9), 3499–3506 (2011). 3. F. Vollmer, S. Arnold, and D. Keng, “Single virus detection from the reactive shift of a whispering-gallery mode,” Proc. Natl. Acad. Sci. U.S.A. 105(52), 20701–20704 (2008). 4. H. Zhu, I. M. White, J. D. Suter, M. Zourob, and X. Fan, “Opto-fluidic micro-ring resonator for sensitive labelfree viral detection,” Analyst (Lond.) 133(3), 356–360 (2008). 5. B. Koch, Y. Yi, J.-Y. Zhang, S. Znameroski, and T. Smith, “Reflection-mode sensing using optical microresonators,” Appl. Phys. Lett. 95(20), 201111 (2009). 6. B. Koch, L. Carson, C.-M. Guo, C.-Y. Lee, Y. Yi, J.-Y. Zhang, M. Zin, S. Znameroski, and T. Smith, “Hurricane: A simplified optical resonator for optical-power-based sensing with nano-particle taggants,” Sens. Actuators B Chem. 147(2), 573–580 (2010). 7. S. Wang, K. Broderick, H. Smith, and Y. Yi, “Strong coupling between on chip notched ring resonator and nanoparticle,” Appl. Phys. Lett. 97(5), 051102 (2010). 8. A. Haddadpour and Y. Yi, “Metallic nanoparticle on micro ring resonator for bio optical detection and sensing,” Biomed. Opt. Express 1(2), 378–384 (2010). 9. J. Zhu, S. K. Ozdemir, Y.-F. Xiao, L. Li, L. He, D.-R. Chen, and L. Yang, “On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator,” Nat. Photonics 4(1), 46–49 (2010). 10. S. I. Shopova, R. Rajmangal, Y. Nishida, and S. Arnold, “Ultrasensitive nanoparticle detection using a portable whispering gallery mode biosensor driven by a periodically poled lithium-niobate frequency doubled distributed feedback laser,” Rev. Sci. Instrum. 81(10), 103110 (2010). 11. T. Lu, H. Lee, T. Chen, S. Herchak, J.-H. Kim, S. E. Fraser, R. C. Flagan, and K. Vahala, “High sensitivity nanoparticle detection using optical microcavities,” Proc. Natl. Acad. Sci. U.S.A. 108(15), 5976–5979 (2011). 12. J. Zhu, Ş. K. Özdemir, L. He, D.-R. Chen, and L. Yang, “Single virus and nanoparticle size spectrometry by whispering-gallery-mode microcavities,” Opt. Express 19(17), 16195–16206 (2011). 13. L. He, Ş. K. Özdemir, J. Zhu, W. Kim, and L. Yang, “Detecting single viruses and nanoparticles usingwhispering gallery microlasers,” Nat. Nanotechnol. 6(7), 428 (2011). 14. V. R. Dantham, S. Holler, V. Kolchenko, Z. Wan, and S. Arnold, “Taking whispering gallery-mode single virus detection and sizing to the limit,” Appl. Phys. Lett. 101(4), 043704 (2012). 15. B. E. Little, J.-P. Laine, and H. A. 
Haus, “Analytic theory of coupling from tapered fibers and half-blocks into microsphere resonators,” J. Lightwave Technol. 17(4), 704–715 (1999). 16. I. M. White, H. Oveys, X. Fan, T. L. Smith, and J. Zhang, “Integrated multiplexed biosensors based on liquid core optical ring resonators and antiresonant reflecting optical waveguides,” Appl. Phys. Lett. 89(19), 191106 (2006). 17. K. Okamoto, Fundamentals of Optical Waveguides (Academic Press, 2000). 18. H. Li, L. Shang, X. Tu, L. Liu, and L. Xu, “Coupling variation induced ultrasensitive label-free biosensing by using single mode coupled microcavity laser,” J. Am. Chem. Soc. 131(46), 16612–16613 (2009). 19. Y. Guo, J. Y. Ye, C. Divin, B. Huang, T. P. Thomas, J. R. Baker, Jr., and T. B. Norris, “Real-time biomolecular binding detection using a sensitive photonic crystal biosensor,” Anal. Chem. 82(12), 5211–5218 (2010).
Introduction
Nanoparticle counting and sizing are essential for a broad range of applications such as nanotechnology, virology, disease diagnosis, and biomedical research [1,2]. Electron microscopes such as SEM and TEM have long been used for particle sizing, but they are very expensive and bulky. Dynamic light scattering provides a much simpler way to measure the particle size down to the order of 10 nm, but requires a relatively large sample concentration. Nanopore technology based on the Coulter principle has also been used for nanoparticle counting and sizing. Recently, scanning ion occlusion spectroscopy was developed based on size-tunable micro/nanopores fabricated on a polymer membrane and was able to measure the particle size down to 50 nm [1,2].
The optical microring resonator is an emerging sensing technology that has been used for highly sensitive biomolecular detection in the past decade. Recently, the ring resonator was also employed for nanoparticle detection, counting, and sizing [3][4][5][6][7][8][9][10][11][12][13][14]. For example, using the whispering gallery mode (WGM) frequency shift method, Arnold et al. detected single Influenza A viral particles (~100 nm in diameter) attached to the ring resonator surface [3]. Furthermore, by using the plasmonic enhancement mechanism, his group was able to detect and size single MS2 viral particles (~25 nm in diameter) with the ring resonator [14]. Using the self-referencing mode splitting method, Yang et al. demonstrated detection and sizing of single nanoparticles on the order of 30 nm in diameter [9,12,13]. Koch et al. and Yi et al. employed backscattering caused by the nanoparticles attached to the ring resonator surface to detect nanoparticles [5][6][7][8]. Despite these unprecedented achievements, the aforementioned approaches may suffer from the following limitations. First, they all rely on the direct attachment of the nanoparticles to the ring resonator. Accumulation of nanoparticles on the ring makes it difficult to continuously monitor and count nanoparticles, as the signal generated by later nanoparticles may be significantly affected by the presence of nanoparticles deposited earlier. Second, they rely primarily on diffusion for nanoparticles to reach the ring resonator surface, a process that is slow, less controllable, and does not generate accurate information about the nanoparticle concentration. Third, the attachment position of the nanoparticle on the ring resonator surface is random. Consequently, the interaction between the nanoparticle and the WGM varies, which is likely to produce erroneous size information, in particular when the frequency shift method [3,10,11,14] or the backscattering method [5][6][7][8] is used.
In this paper, we propose and analyze a microring resonator with flow-through nanopores, which overcomes the aforementioned problems. The proposed device is illustrated in Fig. 1. A dielectric microring resonator and an adjacent waveguide bus are embedded in a low-index medium. A flow-through nanopore (or nanopores) can be created in the coupling region between the ring and the waveguide. When a nanoparticle of interest is present in the nanopore, the coupling between the ring resonator and the waveguide changes. In addition, the scattering arising from the nanoparticle in the nanopore causes additional loss in the WGM. Both effects may result in a change in transmitted light intensity at the detector located at the distal end of the waveguide.
The device has a few distinct advantages over the current microring resonator designs. First, the flow-through structure allows nanoparticles to be transported directly to the detection zone by convection, a process that is rapid and highly controllable. Second, the detection zone is well defined, which renders much better detection consistency and hence accuracy in size measurement. Third, the device enables continuous particle counting and sizing, thus enabling measurement of the particle concentration. Fourth, it is compatible with conductance-based nanopore technologies. Thus, hybrid dual-mode (optical and electrical) detection becomes possible.
In this paper, we present a detailed theoretical analysis of the coupling effect and the scattering effect due to the presence of a nanoparticle inside the nanopore. It is shown that the coupling effect is dominant in determining the sensing signal, and that detection and sizing of nanoparticles below 10 nm is possible.
Theory
Referring to Fig. 2 and assuming that the laser is on resonance with the WGM of the ring resonator, we can write the transmitted light intensity, T, as

T = [(t - a)/(1 - t·a)]², (1)

where t is the transmission coefficient and is related to the total coupling coefficient κ by t² = 1 - κ², and a = (1 - L)^(1/2) is the round-trip amplitude transmission of the ring with intrinsic loss L. When κ² << 1 and L << 1, Eq. (1) can be approximated as

T ≈ [(κ² - L)/(κ² + L)]². (2)

In the presence of a nanoparticle in the nanopore, the coupling between the ring and the waveguide as well as the loss will be modified, which will affect the transmitted light intensity, i.e.,

ΔT = S_c·Δ(κ²) + S_L·ΔL, (3)

where ∂T/∂(κ²) = S_c and ∂T/∂L = S_L are the device sensitivities with respect to the coupling change and the loss change. Δ(κ²) and ΔL have different dependences on the nanoparticle size and refractive index, as discussed later.
Contribution from the coupling change
Let us consider S_c first. Figure 3(a) plots the transmitted light intensity as a function of κ² for different intrinsic ring resonator losses, L. In contrast to the featureless transmission seen in a typical waveguide-waveguide system, which is linearly proportional to the total coupling κ², the existence of the ring resonator completely changes the characteristics of the transmission.
The transmission becomes much more sensitive when the ring resonator-waveguide system is operated in the under-coupling regime where κ² < L. The lower the loss (or the larger Q_0) is, the steeper the transmission curve is in the under-coupling regime. When κ² = L, the critical coupling at which T = 0 is achieved. As discussed later, we will employ these phenomena for enhanced nanoparticle detection. Figure 3(b) provides more quantitative details about S_c as a function of κ² for various ring resonator losses. For example, according to Fig. 3(b), for L = 0.2, 0.1 and 0.01, S_c can be as large as -13.5, -27, and -270, respectively, for κ²/L = 0.1. When κ² = L or when κ² >> L, S_c approaches zero. The total coupling coefficient, κ², between the ring resonator and the waveguide bus in the absence of a nanoparticle can be calculated based on the coupled-mode theory [15,16] (Eq. (4)), where R is the ring resonator radius.
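As a quick numerical check of these sensitivity figures (a sketch using the approximate on-resonance transmission T ≈ [(κ² - L)/(κ² + L)]² discussed above, not the authors' code), S_c = ∂T/∂(κ²) evaluated at κ²/L = 0.1 reproduces the quoted values:

def transmission(k2, L):
    # Approximate on-resonance transmission of the ring-waveguide system.
    return ((k2 - L) / (k2 + L)) ** 2

def S_c(k2, L, eps=1e-7):
    # Sensitivity dT/d(kappa^2), by central finite difference.
    return (transmission(k2 + eps, L) - transmission(k2 - eps, L)) / (2 * eps)

for L in (0.2, 0.1, 0.01):
    print(L, round(S_c(0.1 * L, L), 1))   # 0.2 -> -13.5, 0.1 -> -27.0, 0.01 -> -270.5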
where H_x^i is the magnetic field component along the x-direction. Based on H_x^i, we can obtain the E-field along the y-direction, E_y^RR for the ring resonator and E_y^W for the waveguide bus, where ε_0, μ_0, and ω are the permittivity, permeability, and angular frequency, respectively, and β is the propagation constant.
In the presence of a nanoparticle (assumed to be a cube with a lateral size of d) as illustrated in Fig. 4, the additional local coupling coefficient and total coupling coefficient can be expressed as in Eqs. (7) and (8), where n_NP and n_Water are the refractive indices of the nanoparticle and of the water that fills the nanopore, respectively. Note that the overlap integration in Eq. (7) takes place only in the region where the nanoparticle is present. In Eqs. (7) and (8), the nanoparticle is located at (x, y, z) = (0, 0, 0), i.e., the center of the nanopore. Detailed calculation shows that ΔC_NP(0) and hence Δκ remain nearly unchanged when the nanoparticle moves within the x-z plane inside the nanopore. This can be explained by the fact that any decrease of the electric field of one mode (e.g., E_RR) when the nanoparticle moves off the center in the x-z plane is compensated for by the increase of the electric field of the other mode (e.g., E_W). This phenomenon is important for detection consistency, as the detection signal will remain the same regardless of the nanoparticle's position in the x-z plane. In contrast, when the nanoparticle moves along the y-direction, the largest ΔC_NP(0) and hence Δκ are obtained when the nanoparticle is located at y = 0 (the middle of the nanopore along the y-direction). Therefore, when a nanoparticle flows through the nanopore, a temporal peak in the coupling will emerge, which can be used in particle counting. For simplicity, in the remainder of the paper, all the calculations are carried out for the nanoparticle located at the origin. Furthermore, we assume that the WGM of the ring resonator has the same mode field distribution as a straight waveguide of the same dimension. Based on these assumptions, we obtain Eq. (9), where E(r_0) (= E_y^W(r_0) = E_y^RR(r_0)) is the electric field at the location of the nanoparticle (i.e., at the origin). It is important to note that Δκ is proportional to d³, which is the volume of the nanoparticle. For a spherical nanoparticle with a diameter of d, d³ in Eq. (9) should be replaced by πd³/6, i.e., the sphere volume. Table 1 lists κ and Δκ for different gaps between the ring resonator and the waveguide bus, based on Eqs. (4) and (9). In the calculation, we use the following parameters: ring resonator radius R = 15 μm; width w = 0.45 μm and height h = 0.1 μm for both the ring and the waveguide; n_RR = n_W = 1.7, n_M = 1.45, n_Water = 1.33, and n_NP = 1.55; λ_0 = 1.55 μm; particle size d = 100 nm. From Table 1, it is important to observe that for a given ring resonator-waveguide system, Δκ/κ remains nearly the same regardless of the size of the gap. This is understandable by comparing Eqs. (5) and (7), as both C(0) and ΔC_NP(0) (hence κ and Δκ) have the same dependence on the gap.
Contribution from the loss change
The Rayleigh scattering-induced loss arising from the presence of the nanoparticle, L_s, can be written in terms of the Rayleigh scattering cross section. We now compare the relative contributions of the coupling and the scattering to the sensing signal, ΔT, according to Eq. (3). Comparing Δ(κ²) and ΔL using Eqs. (9) and (12), and using n_NP = 1.55, n_Water = 1.33, and n_RR = 1.7, we find that at λ_0 = 1550 nm, where κ is usually much larger than 0.01 (see, for example, Table 1), Δ(κ²)/ΔL exceeds 10² and 10⁵ for d = 100 nm and 10 nm, respectively. In addition, according to Fig. 3(b) and Fig. 5, when the ring resonator is operated in the under-coupling regime or near the critical point, |S_c| is larger than or close to |S_L|. Therefore, the contribution of the scattering loss to the transmitted light intensity change, ΔT, can be ignored, especially when we deal with nanoparticles whose size is well below 100 nm. Our analysis so far reveals that the coupling between the ring resonator and the waveguide plays a dominant role in determining the nanoparticle sensing signal, which concurs with recent experimental results suggesting that the change in the coupling between ring resonators induced by protein molecules may significantly modulate the characteristics of the ring resonator system [18].
Detection principle and sensor design
In actual measurements, we use the fractional change of the transmitted light intensity, ΔT/T, as the sensing signal. Based on the discussion in Section 2.2 and Eq. (2), we obtain Eq. (16). Considering a situation in which κ² is near L, Eq. (16) can be simplified to Eq. (17), where f = κ²/L. To highlight the advantage of using a ring resonator for nanoparticle detection, we consider a situation in which the ring resonator is simply replaced by a curved waveguide that has the same κ². In this case, the fractional change in the transmitted light (ΔT/T) induced by the nanoparticle would be the same as the coupling-induced loss (i.e., Δ(κ²)). Therefore, with the ring resonator, the sensing signal (i.e., ΔT/T) is enhanced by a factor of approximately 2/[L(1-f)]. Apparently, the lower L is, the higher the enhancement is for a given f.
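To illustrate the enhancement factor quoted above with concrete numbers (an illustrative sketch; the loss and coupling values below are assumptions, not values taken from the paper's Table 1), the gain of ΔT/T over a bare-waveguide signal with the same Δ(κ²) can be tabulated as follows:

def enhancement(L, f):
    # Approximate boost of the fractional signal dT/T relative to a plain
    # waveguide with the same coupling-induced loss: ~2 / (L * (1 - f)).
    return 2.0 / (L * (1.0 - f))

# Assumed intrinsic losses L and coupling ratios f = kappa^2 / L near critical coupling
for L in (0.1, 0.09, 0.01):
    for f in (0.8, 0.9, 0.95):
        print(f"L={L}, f={f}: enhancement ~ {enhancement(L, f):.0f}x")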
Here we present a quantitative example of detecting and sizing nanoparticles with the proposed ring resonator-waveguide system. For the parameters used in Table 1, we choose a gap of 370 nm so that κ = 0.31834 and κ² = 0.1. To optimize the sensitivity, we place the ring resonator system close to the critical coupling point (but not exactly at the critical coupling point, as there would be no light transmitted). In the actual experiment, this can be done through appropriate designs by adjusting the gap between the ring resonator and the waveguide to find the optimal coupling (i.e., κ²). Another approach is to tune L to match κ², which can be accomplished by first fabricating a ring resonator with L slightly lower than needed for the critical coupling and then introducing a small additional loss (for example, using an AFM tip [5]) to increase L. For the purpose of discussion, we can set L = 0.09, which corresponds to Q_0 = 6.9E3 and should be quite easy to achieve with state-of-the-art lithographic methods. Based on Fig. 6 (f = 1.1) and Table 1, we obtain a ΔT/T for a 100 nm nanoparticle that can easily be detected. Using a normalized standard deviation in light intensity of 8E-6 that has been demonstrated earlier [19], we estimate that a nanoparticle of 5 nm in size is detectable. Certainly a larger signal (ΔT/T), and hence smaller detectable nanoparticles, can be achieved if κ² and L are brought even closer to each other or a ring resonator with a higher Q-factor is used. In actual experiments, once ΔT/T is measured, the nanoparticle volume (and its cube- or sphere-equivalent diameter) can be deduced from Δκ/κ through Eq. (9) and Table 1.
Conclusion
We have performed detailed analysis of using the ring resonator-waveguide system for possible nanoparticle detection.It is found that the coupling between the ring resonator and the waveguide bus can be significantly modulated by the presence of a nanoparticle in the gap, thus generating a sensing signal that depends linearly on the nanoparticle volume.Our work presents a new method for rapid and accurate counting and sizing of nanoparticles ranging from a few hundred nanometers to sub-10 nm.
Fig. 1 .
Fig. 1. Conceptual illustration of the microring resonator with a flow-through nanopore for nanoparticle counting and sizing. (A) Top view. (B) Side view.
Fig. 2 .
Fig. 2. Geometries and parameters used in the calculation and simulation. The origin is set to be the location where the nanopore is located (black dot in the figure). (A) Top view. (B) Side view.
Fig. 3 .
Fig. 3. (A) Normalized power transmission as a function of coupling coefficient κ² based on Eq. (1). The critical coupling (zero transmission) occurs when κ² = L. The under-coupling regime refers to κ² < L. (B) S_c as a function of coupling coefficient κ² for various ring resonator losses.
Fig. 4 .
Fig. 4. E-field distribution of the (E_y)_11 mode inside the gap between the ring resonator and the waveguide. Here we assume that the WGM of the ring resonator has the same mode field distribution as a straight waveguide of the same dimension. The squares show the cube-shaped nanoparticle at different locations within the gap.
Fig. 5 .
Fig. 5. S_L as a function of coupling coefficient κ² for various ring resonator losses.
Table 1 lists κ and Δκ for different gaps between the ring resonator and the waveguide bus.
Table 1. κ and Δκ for a 100 nm nanoparticle with different gaps
| 2016-01-09T01:06:55.066Z | 2013-01-14T00:00:00.000 | {
"year": 2013,
"sha1": "5bcae81cebd5b4a916b45634707cc1fa4d34bd2e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.21.000229",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5bcae81cebd5b4a916b45634707cc1fa4d34bd2e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
234395827 | pes2o/s2orc | v3-fos-license | Intermammary pilonidal sinus: rare location of a common condition
Pilonidal sinus (PNS) is a common inflammatory condition caused by penetration of hair into the epidermis of the skin. It is characterized by a pus- and hair-containing cavity in the skin lined by granulation tissue, and typically presents as a chronic discharging sinus with recurrent abscess formation. PNS most commonly occurs in the natal cleft; rarely, it affects other areas of the body such as the groin, axilla, umbilicus, interdigital webs, suprapubic area, nose, clitoris, prepuce, penis, or occiput. Intermammary pilonidal sinus is an extremely rare condition, with very few cases reported in the literature. We describe the case of a 22-year-old female with a pilonidal sinus in the intermammary region.
INTRODUCTION
Pilonidal sinus (PNS) is a common inflammatory condition caused by penetration of hair into the epidermis of the skin. It is characterized by a pus- and hair-containing cavity in the skin lined by granulation tissue. 1 It typically presents as a chronic discharging sinus with recurrent abscess formation. PNS most commonly occurs in the natal cleft; rarely, it affects other areas of the body such as the groin, axilla, umbilicus, interdigital webs, suprapubic area, nose, clitoris, prepuce, penis, or occiput. Intermammary pilonidal sinus is an extremely rare condition, with very few cases reported in the literature. 2-7 We describe the case of a 22-year-old female with a pilonidal sinus in the intermammary region.
CASE REPORT
A 22-year-old female presented with a chronic discharging sinus in the intermammary region of one year's duration. She had previously been misdiagnosed and treated for a recurrent fungal infection, but her symptoms persisted. Local examination revealed multiple discharging sinuses in the intermammary region. Her X-ray, erythrocyte sedimentation rate (ESR) and Mantoux test were normal, thus ruling out tuberculosis. She underwent complete excision of the sinus tracts with primary closure under general anaesthesia. Histopathology revealed a pilonidal sinus tract with acute on chronic inflammation. The patient underwent laser epilation of the chest wall to prevent recurrence. She was also advised silicone dressing for prevention of keloid formation. At follow-up after six months, the scar was healthy with no keloid formation.
DISCUSSION
The word "pilonidal" is derived from the Latin words pilus ("hair") and nidus ("nest"). Ever since its first report in 1833, the aetiology of pilonidal disease has remained controversial. The most accepted view is that of an acquired pathology with multiple contributing factors. It is still unknown whether hair (either loose or native to the region) is the primary cause or whether the hair follicles become infected, leading to microabscesses and PNS. 8 The commonest location of a pilonidal sinus is the natal cleft: 97.8% of pilonidal sinuses are seen in the sacrococcygeal region, and the intermammary location is extremely rare. 9 Sacrococcygeal pilonidal sinus is a common disorder among young hairy adults aged 15-30 years, with a 3:1 male-to-female ratio. In contrast, intermammary pilonidal sinus is seen mostly in young obese females with bulky breasts. 7 It has been proposed that tight brassieres increase the pressure in the intermammary region and enhance skin penetration by hair.
Pilonidal sinus typically presents as a chronic discharging sinus with recurrent abscess formation. In the intermammary region, a few other conditions may mimic the clinical picture, such as hidradenitis suppurativa, pyoderma gangrenosum, syphilis or tuberculosis. 10 Hairiness has been reported to be among the most important risk factors for developing PNS; however, this risk factor has not been mentioned in the case of intermammary PNS. 7 The most commonly used treatment modality for typical PNS is surgery, which classically consists of local excision. Depending upon the presence of contamination, the wound is either left to heal by secondary intention or closed primarily. Post-operative wound complications are well known and lead to increased morbidity. Another alternative for wound closure is the use of local flaps. However, a flap may not be required for closure in intermammary PNS because the lax skin in that region can be mobilised for cover. 7 Non-surgical therapies described for PNS include phenol injection or topical polyphenols. 11,12 Laser epilation as primary treatment for PNS has also been tried with promising results. 13,14 A newer technique of excision and tension-free primary closure using fibrin glue, in order to obliterate the dead space and promote wound healing, has also been described. 15 However, for intermammary PNS, excision with primary closure remains the mainstay of treatment. 7
CONCLUSION
Intermammary pilonidal sinus is a relatively rare presentation of a common condition. Complete excision with primary closure is the mainstay of treatment. Due precautions to prevent keloid formation are mandatory for excellent cosmetic results, especially in young females. | 2020-12-31T09:04:39.645Z | 2020-12-28T00:00:00.000 | {
"year": 2020,
"sha1": "85adb7e5ea349156822c9a246ebcacaf40350484",
"oa_license": null,
"oa_url": "https://www.ijsurgery.com/index.php/isj/article/download/6250/4388",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "212404ae8fe7e3dd765d69a0981250d25b1e9449",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201654150 | pes2o/s2orc | v3-fos-license | Comparative diversity of microbiomes and Resistomes in beef feedlots, downstream environments and urban sewage influent
Background Comparative knowledge of microbiomes and resistomes across environmental interfaces between animal production systems and urban settings is lacking. In this study, we executed a comparative analysis of the microbiota and resistomes of metagenomes from cattle feces, catch basin water, manured agricultural soil and urban sewage. Results Metagenomic DNA from composite fecal samples (FC; n = 12) collected from penned cattle at four feedlots in Alberta, Canada, along with water from adjacent catchment basins (CB; n = 13), soil (n = 4) from fields in the vicinity of one of the feedlots and urban sewage influent (SI; n = 6) from two municipalities, was subjected to Illumina HiSeq2000 sequencing. Firmicutes exhibited the highest prevalence (40%) in FC, whereas Proteobacteria were most abundant in CB (64%), soil (60%) and SI (83%). Among sample types, SI had the highest diversity of antimicrobial resistance (AMR), and metal and biocide resistance (MBR) classes (13 & 15), followed by FC (10 & 8), CB (8 & 4), and soil (6 & 1). The highest antimicrobial resistance gene (ARG) abundance was harboured by FC, whereas soil samples had a very small, but unique, resistome which did not overlap with the FC & CB resistomes. In the beef production system, tetracycline resistance predominated, followed by macrolide resistance. The SI resistome harboured β-lactam, macrolide, tetracycline, aminoglycoside, fluoroquinolone and fosfomycin resistance determinants. Metal and biocide resistance accounted for 26% of the SI resistome, with a predominance of mercury resistance. Conclusions This study demonstrates an increasing divergence in the nature of the microbiome and resistome as the distance from the feedlot increases. Consistent with antimicrobial use, tetracycline and macrolide resistance genes were predominant in the beef production system. One of the feedlots contributed both conventional (raised with antibiotics) and natural (raised without antibiotics) pen samples. Although natural pen samples exhibited a microbiota composition that was similar to samples from conventional pens, their resistome was less complex. Similarly, the SI resistome was indicative of drug classes used in humans, and the greater abundance of mercury resistance may be associated with contamination of municipal water with household and industrial products. Electronic supplementary material The online version of this article (10.1186/s12866-019-1548-x) contains supplementary material, which is available to authorized users.
Background
Antimicrobials have played an important role in controlling bacterial infectious diseases in both humans and animals. In livestock, antimicrobials are used mainly for the treatment and prevention of disease as label claims for their use at sub-therapeutic levels to promote growth are being removed [1]. The worldwide consumption of antimicrobials in food animal production has been reported at ≥57 million kg with a projected increase to ≥95 million kilogram by 2030 [2]. In North American beef feedlots, a number of antimicrobials are administered to cattle, with macrolides and tetracyclines accounting for the majority of antimicrobial use (AMU) [3]. Bacteria residing in the bovine gastrointestinal tract may become resistant to these antibiotics and, once released into the environment, they may transfer antimicrobial resistance (AMR) genes (ARGs) to other bacteria including potential human pathogens [4,5]. Furthermore, residual antibiotics may enter the environment through runoff from manure, where they may select for antimicrobial resistant bacteria [6,7]. Consequently, it is not surprising that for almost every livestock-associated bacterial pathogen, resistance to at least one antimicrobial from each antimicrobial class has been reported [8].
Antimicrobials are not fully metabolized when administered to either humans or livestock. Gao et al. [9] estimated that up to 90% of many of the antibiotics used in livestock are excreted in urine or feces. Sewage treatment plants (STP) receive waste streams that contain a mixture of nutrients, metals, antibiotics, and industrial/ household chemicals from a variety of sources [10]. Antimicrobials, antimicrobial resistant bacteria (ARB) and ARGs are frequently detected in STP [11,12] and as a result these facilities have been identified as a potential hotspot for antibiotic resistance, where ARGs spread among bacteria via horizontal gene transfer. These biological pollutants are also released into the environment in STP effluent [13][14][15].
Knowledge of the microbiome and resistome across the environmental interface between animal production systems and urban centres is lacking. An understanding of this interface could help support more prudent use of antimicrobials in livestock, more specifically by defining targeted treatment options and distinguishing between essential and non-essential AMU to ensure safer food production practices.
Culture independent techniques, such as next generation sequencing (NGS) can be used to quantitatively assess the microbiota composition and its associated resistome. Advances in high-throughput NGS technologies have enabled rapid understanding of overall microbial ecology as well as occurrence and diversity of ARGs from diverse environments. Whole-metagenome shotgun analyses are accomplished by unrestricted sequencing of the genomes of most microorganisms present in a sample, including currently uncultured organisms. The present study describes the microbial metagenomes and resistomes of a variety of environmental samples from beef production to human-associated wastes (urban sewage). We utilize a NGS approach to inform surveillance as well as to improve the current understanding of the microbial community structure, the prevalence of ARGs within these microbial communities and to investigate overlaps between various components of the environmental spectrum.
Results and discussion
All 35 samples (FC = 12, CB = 13, soil = 4 and SI = 6) were sequenced to an average of ~54 million reads per sample. This sequencing depth was found to be appropriate, as indicated by the saturation of novel taxa and ARGs in our previous study, which investigated the microbiota and resistome of bovine fecal samples [16]. The average read quality score for samples in the present study ranged from 33 to 37, indicative of high-quality reads. Of the total number of reads generated, 94-97% survived quality filtering and trimming across all datasets.
Each sampling group exhibited distinct composition of microbiota Across all samples 5.9% of total reads aligned to bacterial and archaeal species, representing 816 genera and 35 phyla. The proportion of prokaryote-associated (bacteria and archaea) raw (trimmed and quality filtered) reads arising from the total metagenomic raw reads varied among various sample types. Sewage influent (SI) had the highest number of prokaryote-associated reads, followed by soil, catch-basin (CB) water, and bovine feces (FC). For SI, 24.5% of the sequence reads were associated with bacteria and archaea, whereas soil, CB and FC had a much smaller proportion of prokaryote-associated reads (3.4, 4.5 and 2.1%, respectively), as revealed by the taxonomic classification via Kraken. The majority of remaining read fractions in these samples were uncharacterized, most likely originating from uncharacterized prokaryotes as well as eukaryotic organisms including algae, plants, small eukaryotes, avian or mammalian sources that are absent from the Kraken database. The comparatively high proportion of prokaryote-associated reads in SI is reflective of the very high density (2-10 g dry weight/L) of microorganisms within sewage [17]. Comparison of normalized data across all samples also supported the largest abundance of microbial taxa reads in SI, being 6.2, 6.7, and 2.4 fold higher than in FC, CB and soil, respectively (Fig. 1).
The catch basin water community was dominated by Proteobacteria (67.4%), Actinobacteria (9.3%), Firmicutes (7.9%), Bacteroidetes (5.9%), Euryarchaeota (3.3%) and Spirochaetes (3.3%), accounting for 97% of prokaryotic microbiota reads (Fig. 1). Bacterial classes ɣ-proteobacteria and β-proteobacteria were abundant (Fig. 2) and constituted 45% of the prokaryotic reads, while Rhodocyclaceae and Moraxellaceae were the most abundant families in CB. Within these families, Thauera and Psychrobacter were the most abundant Proteobacterial genera in catch basin samples (Table 1). Psychrobacter are salt-tolerant, chemoheterotrophic, cold-adapted bacteria, which oxidize ammonia in high concentration under saline conditions [26]. Species from genus Thauera are frequently found in wet soil and polluted freshwater and have been considered important for industrial wastewater treatment systems as they play a key role in refractory aromatic hydrocarbon (e.g., indole and toluene) degradation under anaerobic and denitrifying conditions [26,27]. Thauera were also observed in sewage influent. Occurrence of species from this genus in these polluted waters indicates the potential presence of aromatic hydrocarbons in these environments and as a result these functional species are of great significance for wastewater management. The soil microbial community was predominated by Proteobacteria (60.3%) and Actinobacteria (35.2%), constituting 95.5% of the prokaryotic microbiota (Fig. 1). North American and European agroecosystems studies have also identified a high abundance of Proteobacteria and Actinobacteria associated with rhizosphere and rhizoplane [28,29]. Wang et al. [30] have reported a 27 and 14% abundance of these two phyla respectively, in Chinese soils, followed by Acidobacteria (14%), Chloroflexi (8%) and Firmicutes (6%). In our soil samples, Bacteroidetes was the third most abundant phylum (1.6%), whereas Acidobacteria, Chloroflexi and Firmicutes were only present at 0.45, 0.41 and 0.13%, respectively. Lower abundance of Acidobacteria, and higher abundance of Proteobacteria, Actinobacteria, Firmicutes and Bacteroidetes has been associated with healthy agricultural soils with higher available phosphorus content [30]. Soil microbial communities can be highly diverse due to heterogeneity of soils, manure application as well as the nature of the rhizosphere [31]. In our soil samples, plant-associated species belonging to family Rhizobeaceae (α-Proteobacteria) were most prevalent (Table 1). Healthy soils generally have higher abundances of beneficial microbes including nitrogen-fixing and plant growth-promoting bacteria [32]. Interestingly, in present study, the soil collected 6 months after manure application had a higher number of Bacteroidetes (> 5 fold) and Euryarchaeota (> 3 fold) compared to non-manured and not recently manured fields. This likely reflects the presence of residual fecal bacteria from manure. Lupwayi et al. [33] also reported a higher proportion of Bacteroidetes in soils receiving composted beef feedlot manure in southern Alberta. While acknowledging the low number of soil samples originating from two agricultural fields in the vicinity of feedlot C over two years, inclusion of these samples in the analysis presents a snapshot of the influence of the feedlot manure on the soil microbiota and resistome. 
Proteobacteria (83.5%), Bacteroidetes (10.4%) and Firmicutes (3.8%) represented the majority of sewage microbes, with Acinetobacter (29%) and Aeromonas (16%) being the most abundant of the Proteobacteria. Others have found Proteobacteria to be among the most abundant bacteria in urban wastewater, followed by Bacteroidetes and Firmicutes [34]. Acinetobacter johnsonii and Acinetobacter baumannii accounted for the majority of the Acinetobacter identified. The former species rarely causes human infections, whereas the latter is an emerging hospital pathogen. In addition to being frequently recovered from patients during hospital outbreaks, A. baumannii has been reported in untreated as well as in biologically or chemically treated hospital and municipal wastewaters [35-38]. Our normalized species richness data indicated that SI harbored on average 2000 or more A. baumannii sequence reads as compared to FC, CB and soil (only 4, 15 and 1 respectively; Additional file 1). This suggests that the risk to human health from A. baumannii is far greater with SI than with the other environmental samples examined. In addition to Acinetobacter spp., the most abundant bacterial taxa detected in SI by others are Campylobacteraceae (Arcobacter spp.), Aeromonadaceae and Carnobacteriaceae [39-42]. Consistent with these studies, Arcobacter and Aeromonas were among the most abundant genera in SI samples in our study, followed by Acinetobacter. Among Aeromonas spp., A. hydrophila, A. media, A. veronii, A. salmonicida, and A. schubertii were prevalent in SI. Most of these species are emerging human pathogens and have been associated with gastroenteritis, wound and soft tissue infections, necrotizing fasciitis, urinary tract infections, pulmonary infections in cystic fibrosis, and septicemia [43,44]. Aeromonas spp. produce an array of virulence factors including cytolytic toxins with hemolytic activity and enterotoxins. Prevalence of these pathogens in FC, CB and soil was negligibly low as compared to SI.
Although 793 of the total 816 prokaryotic genera detected across all samples were represented in all sample types, their relative distribution was very unique between matrices ( Fig. 2; Additional file 1). The nonmetric multidimensional scaling (NMDS) plot formed distinct sample type-specific clusters ( Fig. 3) with significant separation at all taxa levels (ANOSIM R: 0.9-0.98, P < 0.05; Fig. 3). As expected, the distinct microbial composition of each sample matrix appears to be a reflection of the unique composition of nutrients, physical, physicochemical and other biotic and abiotic factors within each niche.
The SI microbiome exhibited the highest richness of microbial genera as indicated by the number of unique taxonomic (genus) assignments corresponding to discovery of new species, but the lowest α-diversity and evenness as depicted by low inverse Simpson and Pielou's evenness indexes respectively, across all sample types (Fig. 4). Wastewater biosolids are a rich source of nitrogen, phosphorus, potassium and organic matter as well as micro-nutrients [45]. This nutrient-rich environment may allow certain resident bacteria to thrive and therefore promotes richness over diversity. Although the median α-diversity of phyla was higher for fecal samples than for any other matrices, soil had the largest (p < 0.05) median α-diversity at the lower taxonomic ranks.
Distinct resistome composition of each sample matrix with predominance of tetracycline resistance in the beef production system
Across all samples, ~0.12% of total reads aligned to 35 mechanisms of antimicrobial resistance (AMR), coding resistance to 15 classes of antimicrobials, and ~0.04% of all reads corresponded to 15 classes of metal and biocide resistance (MBR) spanning 32 mechanisms. The proportion of AMR-MBR associated raw reads to the corresponding total reads was highest in conventional FC (0.25%), followed by SI (0.12%), CB (0.03%) and soil (0.002%), indicating a high prevalence of resistance genes in bovine feces. The proportion of AMR-MBR associated reads to the corresponding prokaryote-microbial reads was highest in conventional FC (11.3%), followed by CB (0.8%), SI (0.5%) and soil (0.07%), indicating that a higher fraction of bacteria and archaea in bovine feces harboured ARGs compared to other sample types. Comparison of normalized data across all samples also supported the larger abundance of ARG-associated reads in FC compared to soil, CB and SI (Fig. 5).
Overall, the CB resistome was represented by 84 ARG and MBRG groups. Similar to FC, in the CB resistome tetracycline resistance (59%) was the most abundant followed by resistance to macrolide (17.5%), aminoglycosides (7.2%) β-lactams (4.2%), sulfonamides (3.3%), mercury (2.8%) and multidrug resistance (MDR; 2.8%) (Fig. 5). This likely reflects the surface runoff of manure-associated tetracycline resistant ARB from feedlot pen floors into the catch basins. Miller et al. [49] quantified a runoff depth of 54 mm during a major rainfall event at a southern Alberta feedlot. Feedlots A, B, C and D shared 24, 31, 28 and 38 ARG groups between FC and their associated CB, respectively. The shared ARG groups were members of the tetracycline, macrolide and aminoglycoside resistance classes (Additional file 2). Among the tetracycline resistance groups, TETQ, TETM, TETW, TET36, TETT and TET44 were most prevalent. However, the relative abundance profile of these ARG classes differed between CB and FC reflecting the niche specificity of bacteria harboring these ARGs, considering that Proteobacteria were predominant in the CB microbial community as compared to Firmicutes and Bacteroidetes in FC. Among macrolide resistance ARG groups, MEFA, MEFB and MSR were more abundant in CB. Interestingly, MEFB was not detected in FC, but was present in SI samples. This gene has been found to be generally hosted by Proteobacteria [50], whereas MEFA and MSR genes have been associated with a wide variety of enteric bacterial phyla including Proteobacteria, Bacteroidetes, Actinobacteria and Firmicutes [51]. The high relative abundance of these genes could reflect their common presence in enteric bacteria, and/or due to co-selection with other ARGs as many tetracycline ARGs are linked to macrolide ARGs through common mobile genetic elements [52].
Fig. 4. Quantitative comparisons of microbiota between various sample types. Richness (a), as indicated by the number of unique taxa (genus discovery) assignments, α-diversity (b), as measured through the inverse Simpson index, and evenness (c) of microbiota, as Pielou's evenness index at the genus level, among various sample matrices are depicted by box-and-whisker plots. Boxes represent the interquartile ranges (upper line is the 75% quantile, and the lower line is the 25% quantile), the lines inside the boxes are the medians, the whiskers span the range of the 25% quantile or the 75% quantile plus 1.5 times the interquartile range, and dots are outliers.
In North America, the use of in-feed tetracycline and macrolides to prevent liver abscesses and other bacterial diseases is a common management strategy in beef cattle production. Macrolides are also used to treat and manage Bovine Respiratory Disease (BRD). Conventional feedlots in the present study administered ionophores in combination with chlortetracycline or tylosin in-feed on a daily basis throughout the feeding period. Occasionally, therapeutic doses of antimicrobials were also administered to clinically ill cattle within a pen. It is acknowledged that the physical presence of a resistance gene may not always be interpretable as functional presence in the absence of gene expression data. However, the presence of an abundant gene is generally associated with some degree of its functional expression within a particular environment. The high prevalence of both tetracycline and macrolide resistance gene classes in FC and CB is therefore likely a reflection of the ubiquitous use of these antibiotics in beef production [53,54].
Soil samples originating from agricultural fields adjacent to feedlot C had a small and unique resistome with only 9 ARG groups belonging to 6 classes and did not align with the feedlot resistome ( Fig. 5; Additional file 1). Tetracycline ARG TETL was only found in recently manured soil. Compared to soil, this ARG group had a 9-17 times lower prevalence in FC and CB and was completely absent in SI. It may be that TETL harboring bacterial species from manure survived better in soil compared to other tetracycline ARG carrying bacteria. Tetracycline was the most widely used antibiotic class in the feedlots enrolled in this study. Glycopeptide resistance associated genes were present across all soil samples, but were absent from any other sample type. Specifically, VanO-type regulators (VANRO) [55] were the only glycopeptide-related genes detected in soil samples. The vanO operon initially identified in Rhodococcus equi [55], harbors a vanHOX resistance gene cluster transcribed convergent to that of the vanS-vanR two-component regulatory system. The vanO locus in Rhodococcus equi exhibits similarity to genera Amycolatopsis and the nitrogen fixing, root nodule-forming Frankia [55] and to the teicoplanin producer Actinoplanes teichomyceticus [56]. The Amycolatopsis and Actinoplanes were among the most prevalent genera in soil samples from our study (Table 1). Other than vanO-type regulators no other vancomycin resistance operon-associated reads (Vancomycin D-alanyl-D-alanine dipeptidase and/or ligase etc.) were detected, which may be due to low homology or absence of the vanO operon associated genes in soil bacteria. The second most abundant ARGs in soil were multidrug resistance (MDR) efflux pump coding genes. The organisms with the largest number of MDR pumps are in fact found in the soil or in association with plants [57]. Along with their potential roles as multidrug efflux pumps, these are important for detoxification of intracellular metabolites, bacterial virulence in both animal and plant hosts, cell homeostasis, and intercellular signal trafficking [58]. Therefore, bacteria harboring MDR pumps are not always associated only with high antibiotic load environments.
The SI from two urban municipalities in Southern Alberta exhibited similar resistome compositions. Across all sample matrices, SI had the largest number of ARG groups (229), belonging to 28 classes of ARGs and MBRGs. The most prevalent resistance classes in SI included multi-drug resistance (28%), β-lactam (15.28%), mercury (11.83%), tetracycline (11.16%), macrolide (10.72%) and aminoglycoside resistance (5.78%) (Fig. 5). Historically, mercury contamination of wastewater occurs from a variety of sources including dental practice wastes, lawn fertilisers, landfill leachate, paints, domestic waste inputs, groundwater infiltration and storm water drainage. Of the 2000 tonnes per year of global atmospheric mercury that is discharged into the air and water from anthropogenic sources, Canada's share accounts for <0.5% of the world's emissions (https://www.canada.ca/en/environment-climate-change/services/pollutants/mercury-environment.html).
Among β-lactam ARGs, cephalosporin resistance groups OXA and CTX were predominant, with 8 fold more richness of OXA in SI compared to CB, and its complete absence in FC and soil. Conversely, CTX was 71 fold more abundant in SI compared to FC and absent in CB and soil (Additional file 1). QNRD, a plasmid-mediated quinolone resistance (PMQR) gene group was only present in SI, likely reflecting its use in human medicine. Among all sample types, only the SI resistome contained a large variety of metal and biocide resistance genes (Additional file 1). Recently, Gupta et al. [42] reported a similar relative abundance of ARGs and a high prevalence of heavy metal resistance genes (HMRGs) in samples from a wastewater treatment plant.
Sewage wastewater is an effective source of fecal bacteria and provides a unique opportunity to monitor fecal microbes from large human populations without compromising privacy [63]. Wastewater treatment plants are considered hotspots of ARB and ARGs [15,64,65], as they receive wastewater from households and hospitals where antimicrobials are administered. The persistent selective pressure posed by sub-inhibitory concentrations of antimicrobial residues in wastewater combined with the high density [17] and diversity [66] of microorganisms could promote horizontal transfer of ARGs and HMRGs [67][68][69]. Co-selection of ARGs and HMRGs in SI [70,71] is favoured when these genes are carried on the same mobile genetic element [72]. Furthermore, leachate from wastewater sludge disposed of in landfills may promote the spread of ARGs into sub-soils and ground water [73].
A heat map of prevalent ARG groups across all samples, grouped by AMR class (Fig. 6), indicated that the majority of AMR/MBR classes represented in the FC, CB and SI resistomes were absent in soil. Tetracycline, β-lactam and multidrug efflux ARGs were present among all sample types, whereas ARGs for fluoroquinolones, fosfomycin and metronidazole were only present in SI (Additional file 1), suggesting that use of these antimicrobials in humans selected for these genes. The NMDS analysis showed that the resistomes from different sample types differed at the AMR gene group level (ANOSIM P = 0.001, ANOSIM R = 0.98) (Fig. 3B) and at all other levels of ARG categories (ANOSIM P < 0.05, R: 0.92-0.98), confirming the uniqueness of the resistome in each sample type. Across sample types, 5, 9, 98 and 5 resistance gene groups were uniquely present in FC, CB, SI and soil, respectively (Fig. 6; Additional file 2). In addition to the microbial source and the microbial niche specificity in different environments, the distinct resistome composition of each sample matrix could also be a reflection of the specific antimicrobial residues in each environment. Recent studies have identified a link between community structure and antibiotic resistance gene dynamics [74]. Future metagenomics-based microbiome and resistome studies that include bacterial genome assemblies from deep metagenomic sequencing data will shed light on the association of ARGs with their host bacteria.
The SI wastewater resistome exhibited the highest richness of ARG mechanism types among sample types (Fig. 7). In addition to having high richness, SI contained the most diverse and even resistome among all sample types as indicated by high inverse Simpson index of α-diversity and Pielou's evenness index (Fig. 7B), which reflects the diverse classes of antimicrobials used in human medicine [75] as compared to those used in cattle. After ionophores, tetracycline and macrolides are among the most frequently used antimicrobials in livestock [76,77].
Natural feedlot FC samples harboured relatively similar microbiota but smaller resistome compared to conventional samples
The microbial composition of fecal samples from 'natural' and 'conventional' beef production systems had comparable richness, diversity, and similar prevalence of microbial phyla. The exception was that natural FC microbiota had a lower abundance of two bacterial (Bacteroidetes, Spirochaetes; log FC values −0.7 and −2.3, respectively; p < 0.05) and one archaeal (Euryarchaeota; log FC value −3.8; p < 0.001) phyla compared with conventional FC. A 17-fold increase in the methanogenic archaeal genus Methanobrevibacter (Phylum Euryarchaeota) was observed in the samples originating from conventional pens as compared to the natural pens (Additional file 1). Considering that the animal diets between the natural and conventional feedlot practices were similar, these differences in fecal microbiota may be related to antimicrobial use. Given the small number of samples compared between natural and conventional feedlots, further studies are needed to more thoroughly investigate this phenomenon.
Fig. 6. Heat map of prevalent antimicrobial resistance gene groups across all samples, grouped by antimicrobial resistance class. As described in the methods section, fecal composite samples were obtained from four feedlots a, b, c and d. The subscript letters C and N denote conventional and natural practices, respectively.
The proportion of AMR-MBR associated raw reads to the corresponding total reads for feedlot D was higher for conventional FC samples (0.23%) than for natural FC samples (0.09%), indicating a high prevalence of resistance genes in bovine feces. The average number of ARG-associated reads identified was higher for the conventional FC compared to natural FC (Fig. 8). This trend was observed across the top three abundant ARG classes including tetracycline, macrolide and aminoglycoside (p < 0.05). Regardless of the higher ARG abundance in conventional samples, the diversity of resistomes between natural and conventional pen samples was similar (Additional file 1). Prior studies have concluded that there is no correlation between the presence of antimicrobial resistance genes in the gut microbiota and the administration of antibiotic feed additives [78-81]. However, in contrast to our study, most of these studies either did not quantify the comparative prevalence of ARGs in production systems managed with and without antimicrobials, or their comparative investigation was limited to a few bacterial species and ARGs. Single-colony subcultures do not recover the actual AMR reservoir of a microbial community.
Phenicol and sulfonamide were the only resistance classes absent in the natural samples. Other groups belonging to tetracycline (TETA, TETB, TET32, TETW, TET40, TET44, TETO, TETQ, TETX), macrolide (MEFA, LNUC), aminoglycoside (APH3', ANT6) and βlactams (CFX, ACI) resistance were present in both natural and conventional FC, whereas tetracycline (TETH, TET36, TETZ, TETS, TETT), macrolide (APH6, MPHE, MPHB, MSRD ERMA, MPHE, MEL, ERMR, ERMC, ERMT), aminoglycoside (ANT3"), β-lactamase (CARB), phenicol (FLOR, CMXAB) and sulfonamide (SULII) were absent in natural samples, but were present in at least one of three conventional samples. The ARG groups MSR and TETM belong to macrolide and tetracycline drug classes respectively, and were present in all conventional FC pen samples from feedlot D, but were absent in all natural pen samples. Assuming that the presence of a gene means that it is being expressed, their presence may be associated with the use of these drug classes in the conventional feedlot. Genes belonging to this family have been shown to be associated with transposons and integrative conjugative elements [82,83], which may contribute to their ubiquitous prevalence through intra-and inter-species mobility under the added selective pressure of antimicrobial use. Considering that ARGs are ancient [84] their diverse presence in natural production systems is not surprising. The occurrence of certain ARGs within bacterial populations is likely a reflection of their association with fitness traits that enable bacteria to persist within a particular environment. While antibiotic resistance and its spread by horizontal gene transfer are ancient mechanisms, the rate at which these processes occur and the proliferation of certain ARG-harboring bacteria has increased tremendously over the last decades due to the selective pressure exerted through anthropogenic administration of antimicrobials. We argue that a holistic approach of identifying ARGs and microbiota, and quantitating their prevalence as undertaken in our study is necessary for informing surveillance and to understand the evolution and transmission of AMR in an environmental spectrum.
Conclusions
Consistent with its abundant use in feedlots, tetracycline resistance was predominant in the beef production system, followed by macrolide resistance. Despite possessing a comparable composition of microbiota, fecal samples collected from cattle raised without antibiotics exhibited a smaller resistome than fecal samples collected from conventionally raised cattle. This study enhances our understanding of the microbial composition and the occurrence of ARGs, identifies common elements between components of the environmental spectrum, and indicates a distinct separation of the associated microbial communities. The specific resistance profiles across the various sample matrices were dependent upon the microbial community composition as well as differences in the nature and prevalence of drug, metal and biocide contaminants.
Fig. 7. Quantitative comparisons of resistome between various sample types. Richness (a), as indicated by the number of unique gene group (gene group discovery) assignments, α-diversity (b), as measured through the inverse Simpson index, and evenness (c) of resistome, as Pielou's evenness index at the resistance gene group level, among various sample matrices are depicted by box-and-whisker plots. Boxes represent the interquartile ranges (upper line is the 75% quantile, and the lower line is the 25% quantile), the lines inside the boxes are the medians, the whiskers span the range of the 25% quantile or the 75% quantile plus 1.5 times the interquartile range, and dots are outliers.
Sample collection, DNA isolation, quantitation and quality assessment
Composite fecal samples analyzed in this study (n = 12) were collected from four different beef cattle feedlots (A, B, C, D) within the province of Alberta Canada (sampling locations in Additional file 6: Fig. S1). Feedlot sampling was conducted from April -June 2014. The feedlots had operating capacities of ∼15,000-30,000 head of cattle. Production conditions were typical of western Canadian commercial feedlots, with animals housed in open-air, clayfloor pens arranged side-by-side with central feed alleys. Feedlot D had two separate wings for hosting natural (raised without antibiotics) and conventional (with antibiotics) cattle pens. Samples in Feedlot D were collected from both natural (n = 3) and conventional (n = 3) pens. The rest of the fecal composite samples (n = 6 of a total of 12) originated from conventional feedlots A, B and C (Supplementary data_3), where antimicrobials were used in a routine manner similar to the conventional wing in Feedlot D. Within a feedlot, samples were collected on the same day from pens containing 150-300 animals. Sampling procedures were reviewed and approved by the Lethbridge Research Centre Animal Care and Use Committee (AC# 14-0029), and were conducted according to the Canadian Council of Animal Care Guidelines. Each composite fecal sample comprised~20 g aliquots collected from 20 individual fresh fecal pats within each pen. Fecal samples were thoroughly mixed, placed in 532 mL Whirl-Pak bags, flash frozen in liquid nitrogen and stored at -80°C. The antimicrobials used in the sampled conventional feedlots are listed in Additional file 4. The in-feed antimicrobials (ionophores, chlortetracycline or tylosin) were administered to all cattle in the conventional feedlot throughout the feeding period with the therapeutic parenteral drugs administered to clinically ill cattle as required. Natural resources legislation in Alberta stipulates that feedlots must have catch basins (also known as retention or runoff holding ponds) for containment of surface runoff water from pens or manure storage areas generated by rainfall or snowmelt. At each feedlot, surface water was sampled from a catchment basin adjacent to the sampled feedlot pens. Water samples (2, 3, 4 and 4 samples were collected from catch basins at feedlots A, B, C, and D respectively, n = 13) (Additional file 3). One liter of water was collected at a depth of 0.5 m into a 1.3 L polyethylene bottle attached to a telescopic pole. Water was collected from four different locations within the catchment basin and the samples were combined to generate a single composite sample which was immediately transferred to the lab on ice. To complement the cattle production and associated environmental sampling, two wastewater treatment plants in Southern Alberta (Additional file 1: Fig. S1) provided sewage influent samples (n = 6) to represent the urban element of the environmental spectrum. One liter of sewage influent water was collected from post-grit tanks of the wastewater treatment facility.
Catchment basin or sewage influent water samples (n = 13, up to 100 mL each) were filtered through 0.45 μm pore size nylon filters (MilliporeSigma, Etobicoke, ON, Canada) using a water filtration manifold and membrane filtration units (Pall Corporation Ltd. Mississauga, Canada). The membrane filter was aseptically removed from the filter base using sterile forceps and stored at − 20°C in a sterile 5 ml OMNI Bead Ruptor tube (Cole-Parmer, Montreal, Canada) for later DNA extraction. If the membrane filter became plugged, samples were centrifuged at 10,000 x g in 50 mL tube to obtain a pelleted biomass for DNA extraction.
Composite core soil samples (n = 4) were collected from agricultural fields adjacent to feedlot C and included the following sample types: field with no history of manure application, from the same field as above but 6 months after manure application, and from a field with a continuous history of manure application, but not within 1-2 year prior to sampling. Soil samples were collected twice over two years (see Additional file 3 for details). Soil sampling was carried out using a soil coring kit (5 cm diameter) to a depth of 10 cm and samples at 10 points along a 100 m transect were collected and pooled for each field to constitute a composite sample.
Metagenomic DNA isolation from the bovine fecal samples was performed as previously described [16]. The DNA was extracted from soil and pelleted biomass from water samples in a manner similar to feces, with the nylon filters subject to bead-beating and incubation steps at 70°C [16]. The DNA concentrations were measured using the Quant-iT™ PicoGreen (Thermo Fisher Scientific, Mississauga,ON, Canada) and the DNA purity was determined by measuring the ratios of absorbance at 260/280 and 260/230 using a NanoDrop spectrophotometer (Thermo Fisher Scientific). The DNA extracts with a 260/280 ratio between 1.8-2.0 and a 260/230 ratio between 2.0-2.2 were regarded as pure. The presence of PCR-inhibitors was also evaluated by amplifying the 16S rRNA gene using the universal 16S rRNA gene primers 27F and 1492R [85] with undiluted and diluted samples [16].
Metagenomic DNA sequencing and data processing
All library preparations, next generation sequencing and quality control steps were performed by the McGill University and Genome Quebec Innovation Centre (Montréal, QC, Canada). TruSeq DNA libraries were prepared and samples were run on an Illumina HiSeq2000 platform, with 4 samples multiplexed per sequencing lane to generate 2 × 100 base paired-end (PE) sequences [16]. As a quality control for cluster generation and sequencing, each HiSeq2000 sequencing lane was spiked with the PhiX174 sensu lato virus genomic DNA library at ~1% concentration of the total DNA loaded per lane.
Trimmomatic version 0.36 [86] was used to remove adapter contamination and low quality reads using the following parameters: trimming leading and the trailing low quality or N bases (below quality 3) from sequence reads; performing quality score filtering using a sliding window at every four bases with a minimum Phred score of 15; discarding sequences with < 36 nucleotides; removing adapters supplied in the TruSeq3 adapter sequence file using a maximum of 2 mismatches in the initial seed, and clipping the adapter if a match score of 30 was reached. Singleton reads, whereby the other pair was discarded were also included in downstream analysis.
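As an illustration of how the trimming parameters above translate into a command line, the following Python snippet assembles a Trimmomatic paired-end call with the settings described (ILLUMINACLIP with the TruSeq3 adapters, 2 seed mismatches and an adapter match score of 30; LEADING:3; TRAILING:3; SLIDINGWINDOW:4:15; MINLEN:36). The file names and jar path are placeholders, and the simple-clip score (the final "10" in ILLUMINACLIP) is an assumed default, since the text only specifies the mismatch and match-score values.

```python
import subprocess

# Placeholder paths; adjust to the actual read files and adapter file.
r1, r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"
adapters = "TruSeq3-PE.fa"

cmd = [
    "java", "-jar", "trimmomatic-0.36.jar", "PE", "-phred33",
    r1, r2,
    "sample_R1.paired.fq.gz", "sample_R1.unpaired.fq.gz",
    "sample_R2.paired.fq.gz", "sample_R2.unpaired.fq.gz",
    # clip TruSeq3 adapters: 2 seed mismatches, adapter match score 30,
    # simple clip score 10 (the last value is an assumed default)
    f"ILLUMINACLIP:{adapters}:2:30:10",
    "LEADING:3",            # trim leading bases below quality 3 (or N)
    "TRAILING:3",           # trim trailing bases below quality 3 (or N)
    "SLIDINGWINDOW:4:15",   # 4-base window, minimum mean Phred score 15
    "MINLEN:36",            # discard reads shorter than 36 nucleotides
]
subprocess.run(cmd, check=True)
```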
Determination of the taxonomic and ARG composition of microbiota
Taxonomic classification of microbiota and determination of ARG assignments for resistome analysis of the sequence data were performed using previous methods and parameters [16] via a Galaxy Web server instance (https://galaxyproject.org/) supported by the National Microbiology Laboratory, Public Health Agency of Canada (PHAC NML Galaxy). The Kraken taxonomic classification tools (version 0.10.5 beta) and the resistome analysis tools were integrated into a workflow to obtain output for both the resistome and microbiome analyses (workflow details in Additional file 6: Fig. S2).
In that workflow, the trimmed paired reads that passed the quality assessment criteria from the pre-processing step with Trimmomatic were aligned to the genome of the enterobacteria phage phiX174 (GenBank accession NC_001422.1) using the maximal exact match (MEM) algorithm of the Burrows-Wheeler aligner (BWA) [87]. The sorted alignments were then processed with samtools [88] to retain only the reads that did not map to the PhiX174 bacteriophage genome. This was done using a flag value of 4 to extract the unmapped reads in binary alignment map (BAM) format. The paired reads that did not map to the PhiX174 bacteriophage were then extracted from the alignment using the bamToFastq tool of BEDTools [89]. The PhiX-filtered reads were then classified with Kraken v 1.2.3 [90] using the custom Kraken database bvfpa [16]. Kraken results were filtered using a confidence threshold of 0.05 to select for taxonomic assignments with high precision and sensitivity, and thus high accuracy at the genus rank [http://ccb.jhu.edu/software/kraken/MANUAL.html; 16]. Resistome analysis was conducted in parallel with the taxonomic classification as follows: trimmed paired reads were mapped to the ARG sequences in the MEGARes database v1.01 [91], combined with a custom metal and biocide resistance (MBR) database (MegaBio; P.S. Morley's lab; Additional file 5), using BWA-MEM v 0.7.17.1 [87]; alignments in BAM format were converted to sequence alignment map (SAM) format and post-processed with the Coverage Sampler tool (https://github.com/cdeanj/coveragesampler) using a 75% gene fraction threshold and other parameters [15].
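The 75% gene fraction threshold mentioned above means that a resistance gene is only counted if at least 75% of its positions are covered by at least one aligned read. The snippet below is a minimal sketch of that criterion; it is not the Coverage Sampler code itself, and the per-base coverage array stands in for what would normally be derived from the BAM/SAM alignments.

```python
import numpy as np

def passes_gene_fraction(per_base_coverage, threshold=0.75):
    """Return True if the fraction of gene positions with coverage > 0
    meets the gene fraction threshold (75% by default)."""
    cov = np.asarray(per_base_coverage)
    fraction_covered = np.count_nonzero(cov > 0) / cov.size
    return fraction_covered >= threshold

# Toy example: a 1,000 bp resistance gene with reads covering 80% of it.
coverage = np.zeros(1000, dtype=int)
coverage[:800] = 5   # 800 positions covered by five reads each
print(passes_gene_fraction(coverage))                  # True  -> gene is counted
print(passes_gene_fraction(coverage, threshold=0.9))   # False -> gene is discarded
```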
Data analyses
The microbiome and resistome data reports from individual samples were aggregated into corresponding matrices using R for downstream analyses. Microbiome and resistome matrices were normalized using the data-driven approach of Cumulative Sum Scaling (CSS) normalization with the metagenomeSeq R package [92]. This method calculates a scaling threshold that is the quantile after which the distribution of raw counts among samples is invariant, and computes the sum of counts up to and including that quantile threshold for re-scaling. In this study, a CSS normalization quantile threshold of 0.5 (the median) was used. The cumulative sum scaling method has previously been reported for normalization of comparative metagenomic sequencing data from various environments [93]. CSS has greater sensitivity and specificity compared to other normalization methods, and it corrects the bias in the assessment of differential abundance introduced by total-sum normalization, thereby improving sample clustering [94]. Other methods, such as rarefaction analysis, can lead to a higher false discovery rate when comparing differentially abundant genes [95]. The exploratory analyses performed in this study included: relative abundance analysis of the microbiome and resistome for all sample matrix types, assessment of α-diversity and richness for all sample types, ordination using nonmetric multidimensional scaling (NMDS), and comparative visualization of the data with heatmaps and barplots. Observed richness, the Shannon and Inverse Simpson α-diversity indices, and Pielou's evenness were calculated using functions of the vegan package version 2.5.1 [96], and their distributions were plotted for each sample type as box-and-whisker plots using ggplot2 [97]. Heatmaps were constructed using the log2-transformed CSS-normalized counts, which were plotted using a white-to-orange gradient scale.
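To make the normalization and diversity calculations concrete, the following is a minimal Python sketch of the same ideas: cumulative sum scaling at the median quantile (the 0.5 threshold used here), the inverse Simpson index, and Pielou's evenness. The original analysis was done with the metagenomeSeq and vegan R packages; this re-implementation is only illustrative and simplifies the data-driven threshold selection of metagenomeSeq to a fixed quantile.

```python
import numpy as np

def css_normalize(counts, quantile=0.5, scale=1000.0):
    """Cumulative sum scaling of a samples-by-features count matrix.

    For each sample, the scaling factor is the sum of counts up to and
    including the chosen quantile of its non-zero count distribution
    (simplified sketch of the metagenomeSeq CSS approach)."""
    counts = np.asarray(counts, dtype=float)
    normalized = np.empty_like(counts)
    for j, row in enumerate(counts):
        q = np.quantile(row[row > 0], quantile)  # per-sample quantile threshold
        s = row[row <= q].sum()                  # cumulative sum up to threshold
        normalized[j] = row / s * scale
    return normalized

def inverse_simpson(row):
    p = row / row.sum()
    return 1.0 / np.sum(p ** 2)

def pielou_evenness(row):
    p = row[row > 0] / row.sum()
    shannon = -np.sum(p * np.log(p))
    return shannon / np.log(np.count_nonzero(row))

# Toy example: 3 samples x 5 taxa (arbitrary counts for demonstration)
counts = np.array([[120, 30, 0, 4, 1],
                   [10, 200, 5, 0, 3],
                   [50, 60, 55, 40, 45]])
norm = css_normalize(counts)
for row in counts:
    print(inverse_simpson(row), pielou_evenness(row))
```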
A zero-inflated Gaussian (ZIG) mixture model was applied to evaluate differentially abundant features in the resistomes and microbiomes between sample matrix types. This model has been reported to increase sensitivity and specificity when working with datasets with high sparsity (abundance of zero counts). Ordination plots were generated using NMDS, and statistical inference was made using the analysis of similarity (ANOSIM) with the vegan R package version 2.5.1 [96]. ANOSIM R-values ranged from 0 (total similarity) to 1 (total dissimilarity). The Kruskal-Wallis test [98] was performed to compare the distributions of richness and the Inverse Simpson indices of α-diversity for both ARGs and microbial taxa among the various sample types. Nemenyi post-hoc comparisons [99] were conducted for instances where differences were declared significant at P < 0.05 as per the Kruskal-Wallis analysis. The R code for the data analysis is available at https://github.com/ropolomx/one_health_continuum. | 2019-08-28T02:51:59.627Z | 2019-08-27T00:00:00.000 | {
"year": 2019,
"sha1": "b0d4704ca9ba4ea53e25a74ad9db25a01c57b754",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/s12866-019-1548-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0d4704ca9ba4ea53e25a74ad9db25a01c57b754",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
235368212 | pes2o/s2orc | v3-fos-license | Simulating relic gravitational waves from inflationary magnetogenesis
We present three-dimensional direct numerical simulations of the production of magnetic fields and gravitational waves (GWs) in the early Universe during a low energy scale matter-dominated post-inflationary reheating era, and during the early subsequent radiative era, which is strongly turbulent. The parameters of the model are determined such that it avoids a number of known physical problems and produces magnetic energy densities between 0.2% and 2% of the critical energy density at the end of reheating. During the subsequent development of a turbulent magnetohydrodynamic cascade, magnetic fields and GWs develop a spectrum that extends to higher frequencies in the millihertz (nanohertz) range for models with reheating temperatures of around 100 GeV (150 MeV) at the beginning of the radiation-dominated era. However, even though the turbulent cascade is fully developed, the GW spectrum shows a sharp drop for frequencies above the peak value. This suggests that the turbulence is less efficient in driving GWs than previously thought. The peaks of the resulting GW spectra may well be in the range accessible to space interferometers, pulsar timing arrays, and other facilities.
INTRODUCTION
During the past few years, numerical simulations of gravitational wave (GW) generation from early Universe turbulence have become an essential tool in predicting the stochastic background that the Laser Interferometer Space Antenna (LISA; see, e.g., Amaro-Seoane et al. 2017) and other space interferometers (e.g., Taiji Scientific Collaboration et al. 2021) might see in the future. Most of the existing predictions are based on analytical models (Dolgov et al. 2002;Caprini & Durrer 2006;Niksa et al. 2018), which tend to make simplifying assumptions about the nature of turbulent wave generation; see Caprini et al. (2016) and Caprini & Figueroa (2018) for recent reviews emphasizing the feasibility and prospects of observing such relic GWs.
A particularly popular source of turbulence in the early Universe is the electroweak phase transition. Hindmarsh et al. (2015, 2017) have produced numerical simulations of GW generation by assuming a first order phase transition (Kosowsky et al. 1992; Kamionkowski et al. 1994; Nicolis 2004; Ellis et al. 2019, 2020). Even if the phase transition is not a first order one, as initially assumed, it is still possible to produce primordial turbulence from magnetic fields that could be generated during various epochs in the early Universe (see, e.g., Cornwall 1997; Joyce & Shaposhnikov 1997; Bhatt & Pandey 2016; Miniati et al. 2018). The existence of large-scale magnetic fields in the early Universe is motivated by indirect evidence of their presence in the intergalactic regime from the non-detection of GeV photons in blazar observations (Neronov & Vovk 2010; Taylor et al. 2011; Tavecchio et al. 2011; Ackermann et al. 2018; Archambault et al. 2017).
Both analytical considerations (Dolgov et al. 2002;Gogoberidze et al. 2007; Kahniashvili et al. 2008) and numerical simulations (Roper Pol et al. 2020b) have demonstrated that there can be a direct correspondence between the turbulence spectrum and the resulting GW spectrum. An important additional property of GWs might be their circular polarization, which could be caused by helical turbulence (Kahniashvili et al. 2005) or by helical magnetic fields (Namba et al. 2016;Niksa et al. 2018;Anand et al. 2019;Sharma et al. 2020;Okano & Fujita 2021). Again, numerical simulations have confirmed the direct correspondence between the fractional helicity of magnetic fields and the resulting circular polarization of GWs (Kahniashvili et al. 2021).
Magnetogenesis during quantum chromodynamic (QCD) phase transitions (Quashnock et al. 1989; Sigl et al. 1997; Tevzadze et al. 2012) provides another possible avenue for GW generation at low frequencies in the nanohertz range (Kahniashvili et al. 2010; Neronov et al. 2021). If the characteristic scale of QCD turbulence is a significant fraction of the Hubble horizon at that time, as suggested by some models (e.g., Kisslinger et al. 2005), the resulting GW spectrum could show a marked drop in the spectral energy density for frequencies above the value typical of the turbulent driving scale (Brandenburg et al. 2021a). This result has been obtained by assuming the turbulence to be driven by a monochromatic forcing function. However, it remains unclear how sensitive such results are to the assumption of an artificially adopted forcing function. For this reason, it is essential to include the magnetogenesis mechanism in the actual simulations of GW production, without using any artificial forcing. One such mechanism is the dynamo effect associated with the chirality of fermions (Joyce & Shaposhnikov 1997), which is referred to as the chiral magnetic effect (Vilenkin 1980). Numerical simulations of the resulting GW generation predict that their power depends on the speeds of magnetic field generation and saturation (Brandenburg et al. 2021c). However, this model suffers from the difficulty that the typical length scale associated with the chiral magnetic effect is very short. For this reason, we focus here on magnetogenesis during inflation. It is traditionally expected to produce a large-scale magnetic field (Ratra 1992; Martin & Yokoyama 2008; Subramanian 2010; Kahniashvili et al. 2017; Fujita & Durrer 2019). However, as we discuss next, it is unclear how to properly model GW production for such a magnetic field.
Earlier approaches modeled the process of GW production from magnetic fields by assuming a magnetic field to be given; see the models ini1-ini3 in Roper Pol et al. (2020b) for such examples.
However, this corresponds to switching on a magnetic field abruptly at a particular time. Therefore, the process of switching on a magnetic field with a given spectrum played a decisive role in the resulting GW energy and strain. The result would be different if the field were gradually being produced by some magnetogenesis mechanism. Including a suitable magnetogenesis model is what will be presented in this paper. This also allows us to quantify the resulting differences, because we can demonstrate what difference it would make if we just switched on the magnetic field from our magnetogenesis simulation without including the corresponding GW generation until that time.
A popular model of inflationary magnetogenesis is that of Ratra (1992), where electromagnetic fields originate from quantum fluctuations (Fischler et al. 1985) that are being amplified during inflation owing to the breaking of conformal invariance (Turner & Widrow 1988; Dolgov 1993). This is achieved by a suitable coupling of the inflaton field to the electromagnetic field through a function f, leading to a term of the form f²F_µν F^µν in the Lagrangian density, where F_µν is the Faraday tensor. One usually assumes f to be proportional to some power α of the scale factor a (Bamba & Sasaki 2007; Martin & Yokoyama 2008; Subramanian 2010). Of particular interest are scale-invariant magnetic fields, which can be obtained for α = 2 and −3, assuming a constant expansion rate during inflation. Although the strength of the resulting magnetic field in such models may well be of astrophysical interest, they suffered from three major shortcomings. (i) In the case α = 2, the function f increases from a certain initial value to a very large value at the end of inflation. Demanding standard electromagnetism at the end of inflation requires f = 1, but this would imply a very small value of f at the beginning of inflation. This results in very large values of the effective electric charge, defined as e_eff = e/f² (Subramanian 2010; Kobayashi 2014), where e is the standard elementary charge. Therefore, there will be a very large coupling between the electromagnetic and charged fields at the beginning of inflation. For this reason, this theory would be in the nonperturbative regime and would therefore not be reliable. This is known as the strong coupling problem (Demozzi et al. 2009). (ii) In the other case, where α = −3, the electric energy density diverges and may overshoot the background energy density during inflation. This problem is known as the backreaction problem (Demozzi et al. 2009). (iii) The production of charged particles in the presence of strong electric fields due to the Schwinger effect can lead to a premature increase in the electric conductivity, which shorts the electric field and prevents further magnetic field growth; see Kobayashi & Afshordi (2014). This problem also applies to models that solve the backreaction problem by choosing a low energy scale of inflation (Ferreira et al. 2013). Such a problem could be avoided if charged particles acquire sufficiently large masses by some mechanism in the early Universe, as suggested by Kobayashi & Sloth (2019). The Sharma et al. (2017, 2018) model addresses the three problems 1 by constraining the form of f(a) such that f ∝ a^α with α = 2 during inflation, starting at an initial value of unity, thereby solving the strong coupling problem, and such that f ∝ a^(−β) during a post-inflationary era, which is assumed to be matter dominated. The exponent β > 0 is calculated for a given reheating temperature. By choosing the reheating temperature to be at the electroweak scale of 100 GeV and the total electromagnetic energy density to be 1% of the background energy density, the number of e-folds of the scale factor during inflation, N, and during reheating, N_r, are found to be 34 and 9.3, respectively. To arrive at standard electrodynamics with f = 1 at the end of reheating, we require αN = βN_r, and therefore β = αN/N_r = 7.3. Alternatively, following Sharma et al.
(2020) and Sharma (2021), we can consider the case where the reheating temperature is at the QCD energy scale of 150 MeV, for which one finds N = 36, N_r = 28, and therefore β = 2.7; see Appendix A for the details. A similar model for helical magnetic field generation and polarized GWs was recently considered by Sharma et al. (2020) and Okano & Fujita (2021). These studies are based on earlier work of Durrer et al. (2011), Caprini & Sorbo (2014), Fujita et al. (2015), Sharma et al. (2018), and Fujita & Durrer (2019). However, the numerical consideration of such models is beyond the scope of the present work and is the subject of a separate paper (Brandenburg et al. 2021d).
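For orientation, the e-fold bookkeeping above can be checked with a minimal Python sketch (ours, not part of the authors' simulation code); the only inputs are the values quoted in the text for the electroweak-scale case.

```python
# Minimal illustrative sketch: beta of the post-inflationary decay f ∝ a^(-beta)
# follows from requiring f = 1 at the end of reheating, i.e., alpha*N = beta*N_r.
def beta_from_efolds(alpha, N, N_r):
    """Return beta such that alpha*N = beta*N_r."""
    return alpha * N / N_r

# Electroweak-scale reheating (values quoted in the text): N = 34, N_r = 9.3
print(beta_from_efolds(alpha=2, N=34, N_r=9.3))   # ~7.3
```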
1 Sharma et al. (2017, 2018) discuss the Schwinger effect constraint during inflation, but not in the post-inflationary matter-dominated era. The reason is that a calculation of the conductivity due to the Schwinger mechanism in a matter-dominated Universe has yet to be done. If one considers instead the expression for the conductivity in a de Sitter spacetime, given by Kobayashi & Afshordi (2014) and Kobayashi & Sloth (2019), it appears that the Schwinger effect constraint becomes important in the last phase of the matter-dominated era. However, more meaningful conclusions require a detailed investigation.
Here, we adopt the aforementioned magnetogenesis model of Sharma et al. (2017) to compute both electromagnetic fields and GWs resulting from the electromagnetic stress during the late reheating phase, when the conductivity is still negligible, and during the early radiation-dominated phase when the conductivity is high and the laws of magnetohydrodynamics (MHD) are applicable. In the first step, magnetic fields and GWs exist only on very large length scales. The significance of the second step is therefore to produce magnetic fields and GWs at smaller length scales through turbulence that is being driven by the Lorentz force from electric currents once the conductivity is high. By making simplifying assumptions, Sharma et al. (2020) have already considered this case, but avoiding the restrictions resulting from these assumptions requires self-consistently computed turbulence, which can only be done numerically.
THE MODEL
We consider a periodic domain of size L^3. The smallest wavenumber is then 2π/L ≡ k_1. In this work, we use cubic domains with n = 512 or 1024 mesh points in each direction with k_1 = 1, so the Nyquist wavenumber, k_Ny = k_1 n/2, is either 256 or 512. We adopt a spatially flat Friedmann-Lemaître-Robertson-Walker metric. Throughout this paper, we use conformal time η = ∫ dt/a(t), where t is the physical time, and work with comoving variables that are scaled by the appropriate powers of a. In particular, the MHD equations then become equal to the MHD equations in a non-expanding universe (Brandenburg et al. 1996). Our comoving variables therefore describe the departures from the expansion of the Universe. The speed of light is always set to unity and the Lorentz-Heaviside unit system is used for the Maxwell equations. We also set the density at the beginning of the radiation-dominated era to unity. This also implies that the mean radiation energy density is unity and therefore the magnetic energy densities quoted below are automatically the fractional magnetic energy densities with respect to the radiative energy density.
As was suggested in the introduction, we perform the simulations in two separate steps. In step I, during the end of reheating, we solve the Maxwell equations with zero conductivity, but, owing to the breaking of conformal invariance, with a nonvanishing f ′′ /f term, where primes denote derivatives with respect to η. Similarly, in the GW equation, the a ′′ /a term is nonvanishing. In step II, we assume a rapid transition into the radiation-dominated era, where the electric field can be neglected and we thus solve the MHD equations. Owing to finite conductivity, electric currents can flow and drive fluid motions through the Lorentz force, which leads to additional induction and magnetic field amplification at small length scales. In that case, f = 1 and a ∝ η grows linearly, so f ′′ /f = a ′′ /a = 0.
Following Roper Pol et al. (2020b), we scale the conformal time at the beginning of the radiative era to unity, i.e., η = 1. We simulate the last phase η_ini ≤ η ≤ 1 of the reheating interval, where the scale factor is taken to be a = (η + 1)^2/4 so as to match a = 1 at η = 1 (Sharma et al. 2017). In practice, we consider as the initial value of the conformal time η_ini = −0.9, corresponding to an initial scale factor of a_ini = 1/400. Thus, we have a′′/a = 2/(η + 1)^2 and, since f ∝ a^{−β} during reheating, f′′/f = 2β(2β + 1)/(η + 1)^2. In step I, we solve evolution equations for variables in Fourier space, denoted by tildes: Equation (4) for the scaled magnetic vector potential, defined as f times the magnetic vector potential A, and Equation (5) for the strains h_+ and h_× of the two linear polarization modes. Here, T̃_{+/×}(η, k) = e^{ij}_{+/×} T̃_{ij}(η, k) are the + and × polarizations of the traceless-transverse projected stress in Fourier space, where T̃_{ij}(η, k) = ∫ T_{ij}(η, x) e^{−ik·x} d^3x is the Fourier transformation of the electromagnetic stress T_{ij}(η, x), given in real space in terms of E and B (Equation (6)). Here, E = −∂A/∂η and B = ∇ × A are computed through inverse Fourier transformation, E(η, x) = ∫ Ẽ(η, k) e^{ik·x} d^3k/(2π)^3, and likewise for B̃(η, k), which is given by B̃ = ik × Ã. As initial condition for η = η_ini, we employ a random, Gaussian-distributed magnetic field with a magnetic energy spectrum E_M(k) ∝ k^3 (for k < k_*) and ∝ k^{1−4β} (for k > k_*), where k_*(η) = √(2β(2β + 1))/(η + 1) is evaluated at η = η_ini; see Appendix B. This implies that the magnetic and electric energy spectra peak at a wavenumber that lies well within the computational domain, i.e., k_1 < k_*(η) < k_Ny. The magnetic energy spectrum is normalized such that ⟨B^2⟩/2 ≡ E_M(η) = ∫ E_M(η, k) dk. It is important to emphasize that k_* is sufficiently far away from the minimal and maximal wavenumbers available in our simulation, so the k-integrated spectral energy densities are not sensitive to our precise choice of domain size and resolution. We denote the shell-integrated spectrum of a quantity B, obtained through integration over concentric shells in wavenumber space, by Sp(B), so that Sp(B)/2 ≡ E_M(k). Likewise, the electric energy spectrum is E_E(k) = Sp(E)/2 in our normalization, where the critical energy density is unity; see also Roper Pol et al. (2020b). In step II, we also present kinetic energy spectra, which are defined as E_K(k) = Sp(u)/2. Fluctuations of the radiation energy density are ignored. We recall in this connection that the mean radiation energy density is normalized to unity.
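To make the shape of this initial condition concrete, here is a minimal Python sketch (not the authors' code; the overall amplitude and wavenumber grid are purely illustrative) that builds the broken-power-law magnetic energy spectrum E_M(k) ∝ k^3 below k_* and ∝ k^{1−4β} above it, normalized to a prescribed k-integrated energy.

```python
import numpy as np

def initial_magnetic_spectrum(k, beta, eta_ini=-0.9, E_M_total=1e-4):
    """Broken power-law spectrum: ~k^3 below k_*, ~k^(1-4*beta) above,
    normalized so that the k-integrated energy equals E_M_total."""
    k_star = np.sqrt(2 * beta * (2 * beta + 1)) / (eta_ini + 1.0)
    spec = np.where(k < k_star, (k / k_star)**3, (k / k_star)**(1 - 4 * beta))
    norm = np.trapz(spec, k)              # current k-integrated energy
    return E_M_total * spec / norm, k_star

# Illustrative grid: k_1 = 1 up to a Nyquist-like cutoff of 256
k = np.arange(1.0, 257.0)
E_M, k_star = initial_magnetic_spectrum(k, beta=7.3)
print(k_star)                              # ~151 for beta = 7.3 and eta_ini = -0.9
print(np.trapz(E_M, k))                    # recovers the prescribed total energy
```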
Owing to the rapid increase of the spectra at small k, the detailed initialization of Ẽ turns out not to be critical and it suffices to initialize the electric field such that it is a solution to the electromagnetic wave equation with E = −A′, and therefore Ẽ = ikÃ, where k = |k| is the length of the wavevector. This implies that E_E(k) is initially equal to the magnetic energy spectrum at all k, i.e., E_M(k) = E_E(k) for η = η_ini. These spectra then begin to change and grow rapidly at small wavenumbers, but E_M(k) retains its initial k^3 scaling and E_E(k) attains a k^1 scaling. This is because the f′′/f term in Equation (4) now dominates over the k^2 term at small k, so there is no longer the k-dependent factor between Ã and Ẽ. The electric field spectrum then becomes proportional to the spectrum of the vector potential, which explains the k^1 scaling. Since Equation (4) is linear, we can easily find by trial and error the magnetic energy that is needed so that at η = 1, the mean electromagnetic energy density is a few percent of the radiation energy density. In step II, for η > 1, the conductivity σ is finite and so the evolution of E can be omitted and the magnetic and GW fields are evolved by solving the MHD and GW equations, as described in previous papers (Roper Pol et al. 2020a,b), where the evolution equation is solved in real space. This equation includes the induction effects from the velocity and the finite conductivity. It is solved together with the momentum and continuity equations (Brandenburg et al. 1996), where S_ij = (u_{i,j} + u_{j,i})/2 − δ_ij u_{k,k}/3 are the components of the rate-of-strain tensor, with commas denoting partial derivatives, and ν is the kinematic viscosity. In all cases considered below, we assume a magnetic Prandtl number of unity, i.e., νσ = 1. We recall that a = η during the radiation-dominated phase, and therefore we have a′′/a = 0 in Equation (5).
In step II, the stress associated with the electric field is absent in the expression for T_ij. Instead, the Reynolds stress γ^2 ρ u_i u_j now enters. Here, γ is the Lorentz factor with γ^2 = 1/(1 − u^2).
For both steps I and II, we use the Pencil Code (Pencil Code Collaboration et al. 2021), which is primarily designed for solving large sets of partial differential equations on massively parallel computers using sixth-order finite differences and a third-order time stepping scheme. However, the code is versatile and allows the GW equations to be advanced analytically in Fourier space from one time step to the next; see the detailed description in Roper Pol et al. (2020a). In step I, we solve Equation (4) in a similar fashion as Equation (5), where the time advance from one time step to the next is done analytically; see Appendix C, where we describe the more general case with σ ≠ 0. It should be emphasized that, although Equations (4) and (5) are linear in A and h_{+/×}, respectively, the combined problem is not, because T̃_{+/×}(η, k) depends quadratically on E and B through Equation (6).
In some of our simulations, the electromagnetic energy density exceeds 10% of the radiation energy density. This would be unrealistically large and those cases are only included for comparison with others of smaller electromagnetic energy density. Indeed, it would then no longer be obvious that the linearized GW equations are still applicable and that quadratic terms can be neglected. Although the fractional GW energy densities are always much below the fractional electromagnetic energy densities, it is conceivable that nonlinear effects could play a role in certain wavenumber ranges. The Pencil Code does allow for such nonlinear effects in the GW field to be incorporated. Preliminary studies suggest that nonlinear contributions to the stress begin to enhance the resulting GW energy spectra at large wavenumbers when the electromagnetic energy density reaches about 30% of the radiation energy density. However, such cases are not included in the present study and their details will be presented elsewhere.
Evolution during step I
During reheating, the energies of various fields increase rapidly in power-law fashion, i.e., E_i(η) ∝ (η + 1)^{p_i} with i = M, E, or GW for the magnetic, electric, and GW energies, respectively. Analytically, as shown in Appendix B, we expect p_M = p_E = 4β − 2, which is 27.2, 25.2, and 8.8 for β = 7.3, 6.8, and 2.7, respectively. The growth continues to occur at progressively smaller wavenumbers. The results are qualitatively similar for β = 2.7, which is relevant to reheating at the QCD energy scale; see Appendix A. At all larger wavenumbers, the magnetic field oscillates in space and time, but does not increase on average. The GW field evolves in a similar fashion, but even more rapidly, and empirically with p_GW = 2(p_M − 1).
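These exponents, and the corresponding slopes in terms of the scale factor, can be verified with a short standalone sketch (illustrative only; the helper name is ours). During reheating a ∝ (η + 1)^2, so the slope with respect to a is p_M/4 for B_rms and p_GW/2 for E_GW.

```python
# Exponents quoted in the text: p_M = p_E = 4*beta - 2 and, empirically, p_GW = 2*(p_M - 1).
def growth_exponents(beta):
    p_M = 4 * beta - 2
    p_GW = 2 * (p_M - 1)
    # During reheating a ∝ (eta+1)^2, so slopes versus a are halved,
    # and B_rms ~ sqrt(E_M) halves the magnetic slope once more.
    return {"p_M": p_M, "p_GW": p_GW,
            "slope_Brms_vs_a": p_M / 4, "slope_EGW_vs_a": p_GW / 2}

for beta in (7.3, 6.8, 2.7):
    print(beta, growth_exponents(beta))
# beta = 7.3 -> p_M = 27.2, B_rms slope 6.8, E_GW slope 26.2
# beta = 6.8 -> p_M = 25.2, 6.3, 24.2;  beta = 2.7 -> p_M = 8.8, 2.2, 7.8
```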
In Figure 1(a), we show, for the case with β = 6.8, magnetic, electric, and GW energy spectra at regular time intervals. The spectra collapse on top of each other when the wavenumber axis is rescaled and the spectra are multiplied by a compensating factor (η + 1)^{−(p_i+1)}. Thus, we define compensated spectra for η ≤ 1. This implies E_M(η) ≡ ∫ E_M(k, η) dk ∝ (η + 1)^{p_M} for the temporal growth of the (k-integrated) magnetic energy density for η < 1. In Figure 2, we show visualizations of B_z and h_+ on the periphery of the computational domain for Run A1. We see that the typical length scales of both fields increase with time. This is due to the fact that the destabilizing term f′′/f in Equation (4) decreases with time and remains important only on progressively larger length scales; see Equation (3). We have also inspected visualizations of ḣ_+ and found that they looked virtually identical to those of h_+. Unlike in step II, where this is not the case (discussed below), we have therefore not shown ḣ_+ here. However, we have looked at the local correlation between the two at each mesh point and found that ḣ_+ ≈ s h_+, with s being compatible with p_M/(η + 1). This suggests that the GW evolution is almost entirely dominated by the rapid algebraic increase at each point in space.
It also turned out that, as expected from the work of Sharma et al. (2017), the electric energy exceeds the magnetic energy by a certain factor. This factor depends on the value of β and is about 8.6, 8.2, and 2.7 for β = 7.3, 6.8, and 2.7, respectively.
Magnetic and electric fields still grow rapidly at the end of reheating, but only at large length scales. At η = 1, we assume that the electric conductivity increases rapidly to sufficiently high values, so there will be no electric fields anymore, but there will be electric currents, J = ∇ × B, and they will exert a Lorentz force, J × B. We then switch to MHD and solve for the resulting velocity field, which facilitates a turbulent cascade toward smaller length scales. In Appendix D, we demonstrate quantitatively how a faster increase of conductivity reduces the magnetic energy loss during this transition into the high conductivity regime.
Evolution during step II
In all runs of step II, we have initially u = ln ρ = 0. We chose ν = 10^−4, which was the smallest possible value that still allowed us to resolve the smallest length scales when k_1 = 1 and 512^3 mesh points were used. For two pairs of runs (Runs A1 and A2), we had to use 1024^3 mesh points. Yet smaller values of ν and the magnetic diffusivity σ^−1 = ν would be physically more realistic, but would require an even larger number of mesh points. As is commonly known in turbulence theory, this would only extend the turbulent cascade to smaller length scales, but would not strongly affect the rest of the turbulent inertial range.
The magnetic, kinetic, and GW spectra are shown in Figure 3. There is a gradual establishment of a turbulent cascade in the magnetic and kinetic energy spectra approximately proportional to k^−2. However, during an intermediate stage of our investigations, we have also experimented with even larger values of the exponent β and found that the turbulence in those cases is even more vigorous and can exhibit a k^−5/3 spectrum, suggestive of Kolmogorov-like turbulence; see Appendix E for an example.
The kinetic energy spectrum shows approximate equipartition with the magnetic one at small length scales, i.e., E_K(k) ≈ E_M(k). The GW energy spectrum shows a characteristic drop at the smallest unstable scale at the end of reheating, followed by an approximate power-law spectrum at higher wavenumbers. Such a drop was found particularly clearly in recent GW simulations driven by an underlying magnetic field that was forced at very large length scales (Brandenburg et al. 2021a). More generally, such a drop is seen to various extents in all GW simulations sourced by monochromatically driven vortical turbulence; see, for example, Figure 6 of Roper Pol et al. (2020a). By comparing with their Figure 4, one sees that this drop is not seen when a turbulence spectrum is initialized through an initial condition rather than through gradual driving. This implies a sudden jump in time, even at small length scales, which is unrealistic. Our simulations predict for the first time a natural time scale of the temporal increase of the stress, especially at high wavenumbers. The relatively low amplitude of GWs at these high wavenumbers suggests that GW generation is predominantly a large-scale phenomenon and therefore also not strongly dependent on the exact details at small length scales. Another reason for this sharp drop at higher k is that the turbulent stress develops only later, when the 1/a factor on the right-hand side of Equation (5) has diminished its effect.
We find the ratio between magnetic and kinetic energies to be only about 1.3 and not as large as in some earlier simulations of turbulence driven by an initial magnetic field with a spectrum peaked at intermediate wavenumbers, where E_M ∝ k^−2 was found. In the present case, the spectrum is peaked at large length scales, so there is no possibility for the magnetic field to display marked inverse transfer to larger length scales, as was the case in simulations of Brandenburg et al. (2015), who found inverse transfer even without magnetic helicity; see also Zrake (2014) for relativistic turbulence simulations.
Earlier work on inflationary magnetogenesis presumed the appearance of a scale-invariant spectrum proportional to k^−1; see, e.g., Kahniashvili et al. (2012, 2017). In the present case, the magnetic and velocity fluctuations are being driven by a turbulent cascade that is fed by the large-scale magnetic field. As already noted by Sharma et al. (2017, 2018), the magnetic field has a blue k^3 spectrum at small wavenumbers, k < k_*(1). During the early part of the radiation-dominated phase, we see that the peak gradually shifts to smaller wavenumbers, so the initial k^3 spectrum can hardly be recognized within the limited wavenumber range accessible to our simulations; see Figure 3(b).
In Figure 4, we show visualizations of B_z, u_z, h_+, and ḣ_+ on the periphery of the computational domain for Run A1 during step II at η = 1.2, 2, 4, and 16. We see that for η ≤ 2, the magnetic field has almost not changed at all. The velocity is still small, but begins to become important for η > 2. Fully developed turbulence is seen at η = 16. However, the strain field and its time derivative are not visibly affected by the fully developed turbulence.
Time series for different values of β
In Figure 5, we show the evolution of B_rms and E_GW both for steps I and II as a double-logarithmic plot. Since η can be negative, we express time in terms of a = (η + 1)^2/4 for η < 1 (and a = η otherwise). Owing to the quadratic scaling in time and the additional quadratic scaling of magnetic energy with B_rms in step I, the slopes of 6.8, 6.3, and 2.2 for Runs A1-C1 correspond to the exponents p_M = 27.2, 25.2, and 8.8, respectively. In step II, we see that B_rms displays a comparatively slow decay relative to the rapid increase for η < 1. The decay for η ≫ 1 follows a power law ∝ η^−1 for β = 7.3, and ∝ η^−0.7 for β = 2.7; see Figure 6(a). This panel also shows that the GW energy fluctuates in time, but is otherwise statistically stationary. There is, however, a systematic wiggle in all curves of E_GW at around η = 1.05. This is caused by the discontinuity in a′′/a and f′′/f at a = 1. In Appendix F, we examine the effects of removing the discontinuity on the occurrence of oscillations and we also study the effects on the GW energy spectrum between the end of step I and the beginning of step II.

Figure 5. Comparison of Runs A1 (black line), B1 (blue line), and C1 (red line) showing the evolution (expressed in terms of a) of (a) B_rms (expressed in gauss), and (b) E_GW. In (a), the dashed-dotted lines have slopes of 6.3, 6.8, and 2.2 for Runs A1, B1, and C1, respectively, and in (b) the slopes are 24.2, 26.2, and 7.8.

Figure 6. Evolution of (a) E_GW and E_M as solid and dashed lines, respectively, and (b) kE_GW(η, k) and kE_M(η, k), also as solid and dashed lines, respectively, with k = 40 for Runs A1 (black line), B1-B3 (blue lines), and C1-C3 (red lines). In (a), the empirical decays ∝ η^−0.7 and η^−1 are indicated.
A decay of E_M proportional to η^−1 is also what has been obtained in earlier simulations of magnetically dominated decaying turbulence, but the slower decay proportional to η^−0.7 has only been seen in the presence of magnetic helicity. Here, however, the magnetic helicity is zero. The reason for this slower decay is probably connected with the absence of an extended subinertial range in our simulations, where k_*(1) is too close to the minimal wavenumber k_1. If we allowed for more mesh points and larger domains, the expected η^−1 decay should be recovered.
Our simulations yield a temporal increase of the GW energy at length scales smaller than k_*^−1(η) for η > 1. In the absence of turbulence, GWs would only have existed on large length scales. To see the development at intermediate length scales more clearly, we compare in Figure 6 the temporal evolution of the GW and magnetic energy densities in panel (a), and in panel (b) the GW and magnetic energies at the wavenumber k = 40. We see that magnetic and GW energies increase with time. The larger the magnetic energy at η = 1, the more rapidly E_GW(η) increases and the larger is the final GW energy. There is considerable spread in the final values of the scale-dependent GW energy densities, while the spread in the scale-dependent magnetic energy densities is much less. Unlike the total, wavenumber-integrated GW energy, which is nearly perfectly statistically stationary already after a short time, the scale-dependent values are in some cases not yet steady and are still decreasing after having reached a certain maximum value.
Present-day frequency spectra
To compute the strain at the present time, we have to multiply our values of h_rms by the ratio C_1 = a_*/a_0, where a_* and a_0 are the scale factors at reheating and the present time, respectively. To obtain the GW energy at the present time, we also have to take the Hubble factor C_2 = H_*/H_0 into account, where H_* and H_0 are the Hubble parameters at reheating and the present time, respectively. Thus, we have to multiply our value of E_GW by the dilution factor, C_* ≡ (a_*/a_0)^4 (H_*/H_0)^2 ≡ C_1^4 C_2^2; see Roper Pol et al. (2020b). The values of C_1, C_2, and C_* are given in Table 1, and are also consistent with those used earlier (Brandenburg et al. 2021a; He et al. 2021).

Table 1. C_1 = a_*/a_0, C_2 = H_*/H_0, and C_* for two values of T_r.

T_r      | C_2 = H_*/H_0 | C_1 = a_*/a_0 | C_*
100 GeV  | 6.4 × 10^27   | 8.0 × 10^−16  | 1.6 × 10^−5
150 MeV  | 5.6 × 10^21   | 1.0 × 10^−12  | 3.1 × 10^−5

The resulting GW energy and strain spectra, scaled to the present time, are shown in Figure 7, where we show the frequency spectra with f_phys(k) = H_* k/2πa_0 being the physical frequency for Runs A1-C3; see also Table 2 for the parameters. In Run A3, the GW field has been reset to zero at η = 1 (see Section 3.5), but in all other cases, h̃_{+/×} and h̃′_{+/×} have been inherited from step I. In Figure 7, we have also indicated the 2σ confidence contour for the 30-frequency power law of the NANOGrav 12.5-year data set (Arzoumanian et al. 2020); see Brandenburg et al. (2021a) for the details of the determination of the present contours.
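As a sanity check on these numbers, the following short Python sketch (illustrative only, not from the paper) evaluates the dilution factor C_* from the tabulated ratios.

```python
# Dilution factor C_* = (a_*/a_0)^4 * (H_*/H_0)^2, using the ratios from Table 1.
def dilution_factor(a_ratio, H_ratio):
    return a_ratio**4 * H_ratio**2

print(dilution_factor(a_ratio=8.0e-16, H_ratio=6.4e27))   # ~1.7e-5, cf. 1.6e-5 in Table 1 (rounded inputs)
print(dilution_factor(a_ratio=1.0e-12, H_ratio=5.6e21))   # ~3.1e-5 for T_r = 150 MeV
```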
We see that most of the power occurs at frequencies of about 20 µHz (10 nHz) for T r = 100 GeV (150 MeV), followed by a drop of GW energy and strain by four and two orders of magnitude, respectively. This may suggest that the turbulent production of GWs is only moderately effective in converting the turbulent energy from the forward cascade to GW energy. In the following, we shall look at this more quantitatively.
3.5. Significance of using the GW field from step I

Except for the recent simulations of Brandenburg et al. (2021c), previous work on numerical investigations of GW generation from hydrodynamic or MHD turbulence assumed that turbulence was either switched on instantaneously or it was gradually being produced; see Roper Pol et al. (2020b) for comparisons of such models. In either case, the GW field was always initially zero, i.e., h̃_{+/×} = h̃′_{+/×} = 0. Here, by contrast, both h̃_{+/×} and h̃′_{+/×} are finite at η = 1. As suggested in the introduction, this can make a difference, and ignoring it could underestimate the resulting GW energy. To study this in more detail, we now perform an additional simulation (Run A3 in Table 2) with h̃_{+/×} = h̃′_{+/×} = 0 at η = 1, using just the magnetic field from step I. We see that the GW energy is now about 10 times weaker than otherwise (Run A1).
The resulting spectra are shown in Figure 8, where we plot the GW energy and strain for a series of models in which we turned on the stress either instantaneously (as in Run A3) or gradually, using a linearly varying profile function that multiplies the stress by a factor that grows linearly to unity within a time span ∆η. The case of instantaneously switching on the stress then corresponds to ∆η → 0. The original Run A1 is shown for comparison. We see that an instantaneously switched on stress produces a GW spectrum that agrees with the original one at high wavenumbers, and only at low wavenumbers is there a small deficiency in GW energy and strain. This shows that the inheritance of the GW field from step I is of relatively minor importance. The agreement at high wavenumbers is no surprise because at those high k values, the GW field was absent at η = 1. The speed of regeneration of the GW field at low k is more surprising, but the qualitative agreement with Run A1 is probably related to the fact that the generation in step I is so rapid that only the last moment has a decisive effect on the GW field. This then also demonstrates that the reason for the rapid drop in spectral GW energy and strain past the peak wavenumber is, at least in part, a consequence of the existence of magnetic fields only for k < k_*(1) at η = 1.

Figure 8. (a) h^2_0 Ω_GW(f_phys) and (b) h_c(f_phys) for Run A1 compared with runs where the GW field from step I has been ignored, so h̃_{+/×} = h̃′_{+/×} = 0 has been set and the hydromagnetic stress was applied instantaneously (Run A3, ∆η = 0, red solid line), or gradually over a time span ∆η = 5 (orange dashed line) or ∆η = 20 (blue dotted line).

Figure 9. (a) Dependence of E_GW on E_M/k_* for Runs A1 and A2 (black), B1-B3 (blue), and C1-C3 (red). The solid line is a quadratic fit through the black and blue symbols, E_GW = (qE_M/k_*)^2, with q = 37 and through the red symbols with q = 14.6. (b) Similar to panel (a), but plotted versus E_M. Here, the solid line corresponds to the fit given by Equation (17).
To examine this further, we now describe two models where ∆η = 5 and 20. We see that the dominance of the GW energy and strain at small k diminishes and that also the sharp drop in spectral GW energy and strain becomes smaller. Whether the remaining lack of continuity of the GW spectrum at k ≈ k * (1) is caused by the limited vigor of turbulence is unclear. Nevertheless, the existence of the sharp drop in spectral GW energy and strain seems to be a physical effect that was not previously anticipated in relic GW modeling.
GW efficiency and scaling with E M
The GW energy of our runs scales approximately quadratically with magnetic energy. Following earlier work (Roper Pol et al. 2020b;Brandenburg et al. 2021b), we confirm a relation of the form E GW = (qE M /k c ) 2 , where q is the efficiency and k c is the characteristic wavenumber, for which the value k c = k * (1) has been used. The values of E M range between 0.03% and 0.5% of the radiation energy density.
With that, we find for Runs B1-B3 an efficiency parameter of q = 37, and a smaller value of q = 14.6 for Runs C1-C3, where β = 2.7; see Figure 9(a). The obtained efficiency q is smaller for smaller values of β, suggesting a dependence between q and β. In particular, using q = 5β appears to be a good empirical description of our data. However, since k_* also depends on β, and the ratio β/k_*(1) is about unity, a good fit to the data is then given by E_GW ≈ (5 E_M)^2; see Figure 9(b). We thus see that the previously obtained k_c dependence of E_GW (Roper Pol et al. 2020b; Brandenburg et al. 2021a,b) can just be subsumed into the dependence on E_M, at least in the present case.
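The following small Python sketch (illustrative, not from the paper's analysis scripts) evaluates this quadratic efficiency relation for representative values; the fit coefficient q = 5β and the range of E_M are those quoted in the text, and k_c is taken as k_*(1).

```python
# GW efficiency relation E_GW = (q * E_M / k_c)^2 with the empirical q ≈ 5*beta,
# and k_c = k_*(1) = sqrt(2*beta*(2*beta + 1))/2 (so beta/k_c is about unity).
import math

def k_star_end_of_reheating(beta):
    return math.sqrt(2 * beta * (2 * beta + 1)) / 2

def E_GW_fit(E_M, beta):
    q = 5 * beta
    k_c = k_star_end_of_reheating(beta)
    return (q * E_M / k_c)**2

# E_M between 0.03% and 0.5% of the radiation energy density:
for E_M in (3e-4, 5e-3):
    print(E_M, E_GW_fit(E_M, beta=7.3))   # roughly (5*E_M)^2 since beta/k_c ~ 1
```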
Change of GW amplitude between steps I and II
It turns out that the final values of E GW and h rms are not the same as those at the end of step I. However, they are proportional to each other in such a way that the final E GW in step II is about 0.66 times the value at the end of step I, and the final h rms in step II is about four times the value at the end of step I for β = 7.3 and about twice the value at the end of step I for β = 2.7; see Figure 10.
CONCLUSIONS
We have presented three-dimensional direct numerical simulations of inflationary magnetogenesis and relic GW production during the end of a matter-dominated reheating era, and the subsequent evolution in the beginning of the radiation-dominated era. As expected based on earlier analytic work, electromagnetic fields grow in power-law fashion, with the electric field exceeding the magnetic one. GWs are driven by the electromagnetic stress and also grow in power-law fashion. The growth terminates with the beginning of the radiation-dominated era, when high electric conductivity leads to a turbulent MHD cascade. Vigorous motions produce small-scale hydromagnetic stresses, but they are too weak to have a significant effect on the GW spectrum, which therefore remains dominated by large-scale features. This is seen as a marked drop in the GW energy at present-day frequencies of around 20 µHz (3 nHz) for a reheating temperature of 100 GeV (150 MeV).
In comparison with earlier analytical work estimating the efficiency of GW production from inflationary magnetogenesis by Sharma et al. (2020), our present work has highlighted some important aspects and discrepancies with numerical modeling. There is, most importantly, the spectral drop in the GW spectra above the peak wavenumber k_*(1), which remained almost unchanged after η = 1. The drop is clearly seen both in wavenumber spectra (Figure 3) and in the diagnostic frequency spectra (Figure 7). Here, the frequency spectra have been obtained from the wavenumber spectra, but let us emphasize in this connection that the numerical equivalence between the two was recently confirmed and that even a spectral drop similar to that seen here is faithfully reproduced from the temporal Fourier transform of the time series (He et al. 2021). Discrepancies between temporal and spatial spectra can occur, however, when the dispersion relation of GWs is no longer linear (for example for a finite graviton mass), or when there is a long-term effect from the stress associated with the slowly decaying turbulence (He et al. 2021). This effect only plays a very small role because most of the GW energies are dominated by contributions from small wavenumbers. Nevertheless, it could explain small discrepancies between strain spectra and the anticipated GW energy spectra at frequencies above the spectral drop of the GW energy. This drop can then reveal subtle features associated with the turbulent inertial range, but the aforementioned differences should disappear at very late times, well beyond what has been simulated so far, and would not be observationally significant.
The spectral drop in GW energy is not a particular feature associated with inflationary magnetogenesis, but it appears to be a feature associated with turbulence driven at scales comparable to the horizon scale with k ≈ 1. It should be emphasized that, at the level of the present model, there is no immediate association between the value of the reheating temperature and the physics of phase transitions. In particular, there is no obvious feature distinguishing the resulting GW spectra of magnetogenesis during reheating from those of any other hypothetical source of turbulence. This can be seen by comparing the present GW spectra with those of Brandenburg et al. (2021a), where turbulence was driven by an assumed low wavenumber forcing function. A possible difference may lie in the dependence of E_GW on E_M, which does not involve an independent scaling with the inverse of the characteristic wavenumber k_c. This finding came as somewhat of a surprise, but it may just mean that, in the present model, the efficiency parameter does indeed scale with β. Preliminary experiments with helical magnetogenesis (Brandenburg et al. 2021d) suggest that the 1/k_c scaling remains in general justified.
A particular problem in previous numerical modeling of GW production by magnetic stresses lay in the fact that most of the GW energy resides at small wavenumbers or large length scales. This was particularly evident in some of the simulations describing the chiral magnetic effect; see Run B1 of Brandenburg et al. (2021c) for such an example. This meant that such results remained sensitive to the value of k_1 and thus the choice of the size of the computational domain. In the present work, we have alleviated this problem by extending our domain to larger length scales, so that the spectral peak was still well within the domain of the model. The strength of the underlying magnetic field was then only limited by the condition that the electromagnetic energy should not exceed about 10% of the radiation energy density at the end of reheating. Owing to the subsequent emergence of a turbulent cascade, the magnetic energy is then being fed into smaller length scales. This leads to a temporal growth of both magnetic and GW energies at subhorizon length scales. However, the strength of GWs at small length scales remained weak compared with that at larger length scales. We have not yet seen a strong dependence on the numerical resolution. In fact, while both velocity and magnetic fields showed a well-resolved fine structure, the strain field and also its time derivative remained dominated by large-scale features.
It is conceivable that even higher numerical resolution would be needed to produce sufficiently rapid variations at small length scales to enhance GW production at those small scales, but at the moment there is no evidence for this.
An important extension of the present work is to consider helical magnetogenesis (Turner & Widrow 1988;Garretson et al. 1992;Field & Carroll 2000;Anber & Sorbo 2006;Campanelli 2009;Caprini & Sorbo 2014;Adshead et al. 2016;Sobol et al. 2019;Adshead et al. 2020a,b). These authors considered an additional chiral symmetry breaking term involving the dual Faraday tensor in the Lagrangian; see also the papers by Sharma et al. (2018) and Okano & Fujita (2021). Helical magnetic fields decay more slowly than nonhelical ones and are therefore more likely to survive until the present time; see Figure 11 of Brandenburg et al. (2017). It would then also be interesting to see whether there are any other similarities with magnetogenesis from the chiral magnetic effect (Brandenburg et al. 2021c), in addition to the relation between the GW efficiency and the exponent β identified in the present work.
We thank Tina Kahniashvili and Kandaswamy Subramanian for inspiring discussions. We also thank the anonymous referee for making useful suggestions. Nordita's support during the program on Gravitational Waves from the Early Universe in Stockholm in 2019 is gratefully acknowledged. This work was supported through grants from the Swedish Research Council (Vetenskapsrådet, 2019-04234). We acknowledge the allocation of computing resources provided by the Swedish National Allocations Committee at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm and Linköping.
Software and Data Availability. The source code used for the simulations of this study, the Pencil Code (Pencil Code Collaboration et al. 2021), is freely available on https://github.com/pencil-code/. The DOI of the code is https://doi.org/10.5281/zenodo.2315093 v2018.12.16 (Brandenburg 2018).
The simulation setup and the corresponding data are freely available on doi:10.5281/zenodo.4900075; see also https://www.nordita.org/~brandenb/projects/InflationaryMagneto for easier access to the same material as on the Zenodo site.
A. RELATION BETWEEN β AND THE REHEATING TEMPERATURE
At the end of the introduction, we stated that the values of β = 7.3 and 2.7 are appropriate for reheating temperatures of 100 GeV and 150 MeV, respectively. Here, we provide the details of this calculation.
To compute β for a given reheating energy scale, T_r, we follow the formalism of Sharma et al. (2017). The first step is to calculate the Hubble parameter during inflation, H_f. It is related to T_r through their Equation (51), which we state in corrected form as Equation (A1), where C and D are functions of α and β, whose values are amended by additional β-dependent factors that take into account that the spectrum peaks not at the Hubble wavenumber, as assumed in Sharma et al. (2017), but at k = k_*. For α = 2, the numerical values are C = 0.01266 β(β + 1/2) and D = 0.0063 [β(β + 1/2)/(2β + 1/2)]^2. In Equation (A1), we have corrected the following typo relative to the expression given by Sharma et al. (2017): the (C + D)/g_r factor in their Equation (51) is replaced by (C + D)/(g_r π^2/30). By incorporating this correction and combining it with (g_r π^2/30)^{(7+α)/3}, their exponent (7 + α)/3 becomes (4 + α)/3 instead; see Equation (A1). We have also incorporated the parameter E_EM, which represents the ratio of the electromagnetic energy density to the background. This value was taken to be unity in Equation (51) of Sharma et al. (2017). Given an estimate for H_f, the values of N_r and N are obtained from the corresponding expressions for the number of e-folds during reheating and during inflation, which then yields β = αN/N_r; see Table 3 for examples considered in this paper.
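For orientation, a minimal Python sketch (ours, not the authors' calculation) evaluating the α = 2 coefficients quoted above for the two β values considered in this paper:

```python
# beta-dependent coefficients entering the corrected Equation (A1) for alpha = 2,
# as quoted in the text.
def C_coeff(beta):
    return 0.01266 * beta * (beta + 0.5)

def D_coeff(beta):
    return 0.0063 * (beta * (beta + 0.5) / (2 * beta + 0.5))**2

for beta in (7.3, 2.7):
    print(beta, C_coeff(beta), D_coeff(beta))
```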
B. INITIAL CONDITION FOR MAGNETIC AND ELECTRIC ENERGY SPECTRUM DURING MATTER-DOMINATED ERA
In Section 2, we stated that the initial magnetic energy spectrum for k < k * (η) is proportional to k 3 . Here, we provide the details and discuss the β dependence of the steep slope for k > k * (η).
We discuss here the initial condition for the numerical simulations in the post-inflation matter-dominated era. The initial magnetic and electric field spectra can be understood by the following argument. For α = 2, the magnetic field spectrum is scale-invariant during inflation. However, the scale-invariant part does not contribute to the growing solution in the post-inflation matter-dominated era. Only at the next order is there a contribution proportional to k^3. Therefore, for the scales of interest, there is only the k^3 spectrum at super-Hubble scales. Using Equations (9) and (31) of Sharma et al. (2017), it is concluded that the magnetic energy spectrum for a particular mode k grows with time as ∝ (η + 1)^{4β+2} until the mode satisfies the condition k(η + 1) ≤ √(2β(2β + 1)). Defining η_k as the time for which the mode k obeys k(η + 1) = √(2β(2β + 1)), the initial condition for the magnetic energy spectrum is then given by Equation (B4). In that expression, η_k represents the time for the mode k when k(η + 1) = 1. Similarly, the initial condition for the electric energy spectrum is given by an analogous expression. To get the total magnetic energy density, we can integrate Equation (B4) over k. Since the magnetic energy spectrum falls very steeply for k > k_*(η), taking k_* as the upper limit of the integration is a good approximation. Thus, we have E_M(η) ∝ (η + 1)^{4β−2}, which is well obeyed by the numerical results.
C. SOLVING THE MAXWELL EQUATION
At the end of Section 2, we described the numerical approach to solving Equation (4) and highlighted the analogy to solving Equation (5). Here, we give the details.

Figure 11. Decay of ln B_rms for a linearly increasing conductivity.

Figure 12. Dependence of the decay of ln B_rms on the slope s for a linearly increasing conductivity.
In the initial stage of this project, we solved the wave equation for A with σ = 0 using the default sixth-order finite differences of the Pencil Code using the module special/disp_current. However, we noticed artificial degrading of the solution at small length scales, similarly to what was experienced when solving the GW equation in real space; see Roper Pol et al. (2020a) for details. Therefore, we decided to solve the Maxwell equation in Fourier space. Assuming ik · Ã = ik · Ẽ = 0, we have Ã′′ + σÃ′ + k^2 Ã = 0 with the characteristic equation λ^2 + λσ + k^2 = 0 and eigenvalues λ_± = (−σ ± D)/2, where D = √(σ^2 − 4k^2). If σ = 0, then D = 2ik and λ_± = ±ik.
Analogously to our solution of the GW equation (Roper Pol et al. 2020a), we solve the equation for Ã from one time step to the next. We make the ansatz that, within one time step, Ã = Ã_+ e^{λ_+ δη} + Ã_− e^{λ_− δη}, where the coefficients Ã_+ and Ã_− are determined from Ã and Ẽ at the time η. We can then write the solution for the time η + δη in matrix form, (Ã, Ẽ)(η + δη) = M (Ã, Ẽ)(η), where M reduces to a rotation matrix for σ = 0, and

M = (1/D) [ λ_+ e^{λ_− δη} − λ_− e^{λ_+ δη}      e^{λ_− δη} − e^{λ_+ δη}
            λ_+ λ_− (e^{λ_+ δη} − e^{λ_− δη})    λ_+ e^{λ_+ δη} − λ_− e^{λ_− δη} ]    (C10)

in the general case σ ≠ 0. This solution is now implemented in the new module magnetic/maxwell.
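As a cross-check of this analytic update, here is a small standalone Python sketch (ours, not the Pencil Code implementation) that advances (Ã, Ẽ) with the matrix above and compares against the exact undamped oscillation for σ = 0.

```python
import numpy as np

def propagate(A, E, k, sigma, dt):
    """One exact time step of A'' + sigma*A' + k^2*A = 0 with E = -A',
    using lambda_± = (-sigma ± D)/2 and D = sqrt(sigma^2 - 4*k^2)."""
    D = np.sqrt(complex(sigma**2 - 4 * k**2))
    lp, lm = (-sigma + D) / 2, (-sigma - D) / 2
    ep, em = np.exp(lp * dt), np.exp(lm * dt)
    A_new = ((lp * em - lm * ep) * A + (em - ep) * E) / D
    E_new = (lp * lm * (ep - em) * A + (lp * ep - lm * em) * E) / D
    return A_new, E_new

# Undamped test case: A(t) = cos(k t), E(t) = -A'(t) = k sin(k t)
k, dt = 2.0, 0.05
A, E = 1.0 + 0j, 0.0 + 0j
for n in range(100):
    A, E = propagate(A, E, k, sigma=0.0, dt=dt)
print(abs(A.real - np.cos(k * 100 * dt)))   # ~1e-15: exact up to round-off
```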
D. CONDUCTIVITY CHANGES
At the end of Section 3.2, we emphasized that both for σ = 0 and σ → ∞, magnetic fields are undamped, but that there can be strong decay for intermediate values.
Here, we demonstrate that this decay depends on the duration T of the transition from σ = 0 to σ → ∞.
When σ = 0, there are electromagnetic waves that remain undamped. For σ ≠ 0, however, the magnetic field experiences magnetic diffusion such that B_rms ∝ exp(−t k^2/σ), where t is ordinary time. Only for σ → ∞ can the magnetic field survive. Thus, we expect that a slow transition from σ = 0 to σ → ∞ can result in a significant loss of magnetic energy.
To quantify the transitional loss of B_rms when σ is not yet large enough, we assume a linear conductivity increase of the form σ = t/T with a transition time T, after which σ reaches a value σ_max such that B_rms follows a slow exponential decay at late times. We determine the logarithmic drop, ∆ln B_rms, and extrapolate its value back to the time t_0 when σ was still constant; see Figure 11. Figure 12 shows the dependence of ∆ln B_rms on T, which is seen to follow an approximately linear behavior. For small T, the losses are small. This justifies our approach of assuming an instantaneous switch from σ = 0 to σ → ∞.
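To illustrate the scaling just described, a minimal Python sketch (illustrative assumptions: a single mode k, the diffusive damping law quoted above, and a ramp σ = t/T starting from a small offset t_0 to avoid the σ = 0 singularity) integrates the logarithmic loss ∆ln B_rms = ∫ k^2/σ dt.

```python
import numpy as np

def log_drop(k, T, t0, t1, n=10_000):
    """Logarithmic decay of B_rms for a linear conductivity ramp sigma = t/T,
    integrating d(ln B)/dt = -k^2/sigma from t0 to t1 (t0 > 0 avoids sigma = 0)."""
    t = np.linspace(t0, t1, n)
    sigma = t / T
    return np.trapz(k**2 / sigma, t)       # equals k^2 * T * ln(t1/t0)

k, t0, t1 = 1.0, 0.01, 1.0
for T in (0.01, 0.05, 0.1, 0.2):
    print(T, log_drop(k, T, t0, t1))        # grows approximately linearly with T
```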
E. STRONGER FIELD STRENGTHS
In Section 3.2, we discussed the possibility of a Kolmogorov-type scaling for large magnetic energies, which could imply an f_phys^{−8/3} scaling for h^2_0 Ω_GW(f_phys); see Roper Pol et al. (2020b). Figure 13 shows a comparison between Run A1 and two new ones, Runs D1 and D2, for which the initial electromagnetic energies are unphysically large and even exceed unity; see Table 4. Runs D1 and D2 differ in the values of the viscosity (2 × 10^−4 and 5 × 10^−4, respectively), but in both cases they are still larger than the value used in Run A1 (10^−4). This is because of the stronger magnetic field strength in these runs, which requires larger viscosity and magnetic diffusivity.

Figure 13. (a) h^2_0 Ω_GW(f_phys) and (b) h_c(f_phys) for Runs A1 (red lines) and D1 and D2 (blue lines) for T_r = 100 GeV. The theoretically anticipated slopes for Kolmogorov-type turbulence are indicated as well.
F. AVOIDING THE DISCONTINUITY AT THE END OF REHEATING
In Section 3.3, we discussed whether the discontinuities in a ′′ /a and f ′′ /f could be responsible for the occurrence of oscillations. In Figure 14, we compare two runs, where in one case the discontinuities have been smoothed out. The original run corresponds here to Run D2 of Brandenburg et al. (2021d), which differs from the present runs in that this one has helicity. This also leads to a much larger value of k * (1) of about 18 for β = 7.3. The smoothing of the discontinuities has been accomplished by dividing the two ratios by a quenching term Q = 1 + (a/a c ) n , where n = 20 has been chosen to make the onset of quenching sharp, and a c has been chosen so as to ensure that 1/Q is small enough well before η = 1 has been reached, i.e., when a ≈ a c . Generally, the quenching leads to a decrease of GW energy by the end of the run. We have therefore rescaled the spectra so as to have comparable spectral amplitudes. Looking at Figure 14, we see that the growth of E GW slows down before η = 1 is reached and that the oscillations are now absent (red line).
In Figure 15, we show GW spectra for different values of a_c. We see that the effect of quenching is to make the spectra somewhat shallower. This suggests that a correspondence between the stress spectra and the spectra of GW energy can only be reached when the rapid growth of GW energy has come to an end. Denoting the computation of spectra again by an operator Sp, we can say that Sp(T) ≈ Sp(k^2 h) = k^2 Sp(kh) = E_GW(k), which agrees with our findings. Thus, in conclusion, the change from a k^1 spectrum to k^0 found in the simulations of Brandenburg et al. (2021d) occurs when the growth of electromagnetic energy has stopped. This is when f′ = f′′ = 0, but it is not a direct consequence of the discontinuity at η = 1 and therefore not an artifact.
"year": 2021,
"sha1": "36d440f3ecd9c8feabe6b6ec459d28e5783e71e0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.03857",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "36d440f3ecd9c8feabe6b6ec459d28e5783e71e0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Observations of the Effects of Angiotensin II Receptor Blocker on Angiotensin II-Induced Morphological and Mechanical Changes in Renal Tubular Epithelial Cells Using Atomic Force Microscopy
Objective Angiotensin II (Ang II) plays a profibrotic role in the kidneys. Although many pathways of Ang II have been discovered, the morphological and mechanical aspects have not been well investigated. We observed the changes in tubular epithelial cells (TECs) after Ang II treatment with or without Ang II receptor blockers (ARBs) using atomic force microscopy (AFM). Methods TECs were stimulated with Ang II with or without telmisartan, PD123319, and blebbistatin. AFM was performed to measure the cellular stiffness, cell volume, and cell surface roughness. Epithelial to mesenchymal transition markers were determined via immunocytochemistry. Results After Ang II stimulation, cells transformed to a flattened and elongated mesenchymal morphology. Cell surface roughness and volume significantly increased in Ang II treated TECs. Ang II also induced an increase in phospho-myosin light chain and F-actin and a decrease in E-cadherin. Ang II coincubation with either telmisartan or blebbistatin attenuated these Ang II-induced changes. Conclusion We report, for the first time, the use of AFM in directly observing the changes in TECs after Ang II treatment with or without ARBs. Simultaneously, we successfully measured the selective effect of PD123319 or blebbistatin. AFM could be a noninvasive evaluating strategy for cellular processes in TECs.
Introduction
Renal fibrosis, characterized by increased extracellular matrix (ECM) accumulation in the kidney parenchyma, is the final common manifestation of chronic kidney disease (CKD), regardless of the primary causes [1,2]. Previous studies reported that renal tubular epithelial cells (TECs) played an important role in the development of renal tubulointerstitial fibrosis [3]. TECs release chemokines and profibrogenic cytokines and undergo epithelial to mesenchymal transition (EMT) in pathological conditions [4][5][6]. Therefore, understanding the changes of TECs is important for the prevention and effective treatment of renal fibrosis.
Angiotensin II (Ang II), a major component of the renin angiotensin aldosterone system (RAAS), is known to be a crucial mediator of renal fibrosis [7,8]. Several studies have demonstrated the ability of Ang II to induce EMT of TECs by regulating the synthesis of ECM and production of profibrotic molecules such as transforming growth factor-β [9]. Ang II binds to two specific receptors, the angiotensin type 1 (AT 1 ) and angiotensin type 2 (AT 2 ) receptors [10]. The AT 1 receptor is known to mediate most of the classical physiologic and pathologic effects of Ang II, while the role of the AT 2 receptor is not completely established [11]. Many in vitro and in vivo studies have established that RAAS blockade using AT 1 receptor blockers has therapeutic effects on renal tubulointerstitial fibrosis [12,13]. However, most of these studies demonstrated this mechanism in indirect ways, including gene and protein expression, associated with renal fibrosis and RAAS. Therefore, further studies with direct measurement of the morphological and mechanical changes of TECs during Ang II stimulation and treatment with Ang II receptor blockers (ARBs) are needed.
Atomic force microscopy (AFM), invented in 1986 by Binnig et al. [14], has become a useful noninvasive imaging tool in biological and medical research [15]. AFM shows the force-distance (FD) curve by measuring the force between its probe tip and the sample surface and can be used to evaluate a sample's physical properties. Hence, the stiffness and adhesive characteristics of cell membranes can be evaluated by AFM [16]. Recently, many studies suggested that the information obtained via AFM helps in understanding the biological and physical mechanism of renal injury [17,18]. Our group previously used AFM to monitor Ang II-induced conformational changes in mesangial cells [19], and we also successfully observed that the changes in the Ang II-stimulated mesangial cells were effectively disrupted by treatment with telmisartan, a specific AT 1 receptor blocker [20]. However, only a few studies have investigated the changes of TECs using AFM. In this study, we used AFM to observe the Ang II-induced morphological and mechanical changes in TECs. Moreover, the effects of various ARBs on Ang II-treated TECs were also investigated [21].
Cell Culture and Treatment.
A well characterized, normal rat kidney cell line (NRK-52E; Sigma-Aldrich, MO, USA) was used in this study. Cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM; Gibco-Invitrogen, CA, USA) containing 4.5 g/L of glucose with 10% fetal calf serum in a humidified 5% CO 2 incubator at 37 ∘ C and passaged twice a week. NRK-52E cells between the 28th and 30th passages were used. In preparation for AFM observation, the cells were seeded into a collagen type I-coated 60 mm cell culture dish. After the cells reached confluence, they were washed once with filtered phosphate buffered saline (PBS; pH 7.4), and new DMEM was added. Cells were then incubated for 24 hours with Ang II (Sigma-Aldrich, MO, USA) in the presence or absence of telmisartan (Sigma-Aldrich, MO, USA), an AT 1 receptor antagonist. We also used PD123319 (Sigma-Aldrich, MO, USA), an AT 2 receptor antagonist, as a negative control and blebbistatin (Sigma-Aldrich, MO, USA), a myosin II inhibitor, as a positive control for telmisartan at the same concentration (1 × 10 −6 M) for 24 hours.
AFM Observations.
Contact mode AFM images were obtained using a NANO Station II (Surface Imaging Systems, Herzogenrath, Germany). The AFM was placed on an active vibration isolation table (TS-150; S.I.S., Herzogenrath, Germany) inside a passive vibration isolation table (Pucotech, Seoul, Korea) to eliminate external noise. Silicon cantilevers with the reflective side coated with gold were used for the measurements under liquid conditions. The properties of the probe used in contact mode were as follows: resonance frequency: 13 kHz (±4 kHz); force constant: 0.2 N/m (±0.14 N/m); cantilever length: 450 µm (±10 µm); cantilever width: 38 µm (±5 µm); cantilever thickness: 2 µm (±1 µm); tip radius: 5 nm (±1 nm); and tip height: 17 µm (±2 µm). The AFM probe tips were stabilized with DMEM or PBS for at least 10 minutes prior to scanning.
For AFM imaging, the cells were washed twice with filtered PBS and fixed for 20 min in 2.5% glutaraldehyde in PBS at room temperature, and 5 ml PBS was added to culture dishes containing fixed cells. TECs fixed with glutaraldehyde were scanned in PBS solution at a resolution of 512 × 512 pixels, at a scan speed of 0.5 line/s. We fixed the cells with glutaraldehyde to obtain high-resolution images.
The cell stiffness was obtained from the force-distance (FD) curve on live TECs after 24 hours of treatment with Ang II or the various ARBs. The live cells were first identified using the contact imaging mode to determine an appropriate site for the FD curve without defects or impurities, and force data were obtained at locations with similar height to prevent edge effects. Live TECs were scanned in DMEM solution at a resolution of 256 × 256 pixels and a scan rate of 2 lines/s. The loading force was adjusted to below 1-2 nN to minimize cell damage. We calculated k_cell, the cellular spring constant, by modeling the cell-tip interaction as two springs in order to quantify cell elasticity [16]. k_cell was defined as 1/k_cell = 1/k_eff − 1/k_cantilever, where k_eff is the slope of the linear region of the FD curve for a cell and k_cantilever is determined for each cantilever using a clean culture dish containing DMEM. Data acquisition and image processing were performed with SPIP6 (Scanning Probe Image Processor Version 5.0.3, Image Metrology, Denmark). Because the fixation process could damage the cytoskeleton, we determined the FD curve on live cells without fixation. After FD curve measurements were completed, a second image was obtained to ensure that the cell had not shifted or been damaged.
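As a simple illustration of this two-spring correction (a hedged sketch, not the SPIP workflow; all numerical values are made up), the cellular spring constant can be recovered from the measured FD-curve slope and the calibrated cantilever constant as follows.

```python
def cell_spring_constant(k_eff, k_cantilever):
    """Two springs in series: 1/k_cell = 1/k_eff - 1/k_cantilever.
    k_eff is the slope of the linear region of the FD curve (N/m),
    k_cantilever the calibrated cantilever constant (N/m)."""
    if k_eff >= k_cantilever:
        raise ValueError("k_eff must be smaller than k_cantilever for a deformable cell")
    return 1.0 / (1.0 / k_eff - 1.0 / k_cantilever)

# Hypothetical numbers: a 0.2 N/m cantilever and a measured FD slope of 0.02 N/m
print(cell_spring_constant(k_eff=0.02, k_cantilever=0.2))   # ~0.022 N/m
```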
Immunocytochemistry.
TECs were washed in PBS before fixing in 4% paraformaldehyde for 1 hour at room temperature (RT). Cells were permeabilized with 0.1% Triton X-100 for 15 min at RT. Nonspecific antibody binding was blocked by incubating in 1% BSA for 30 min at RT, followed by overnight incubation with anti-rabbit phospho-myosin light chain (pMLC; Cell Signaling, #3671; 1 : 200 dilution) or antirabbit E-cadherin (Cell signaling, #3195; 1 : 200 dilution) antibodies at 4 ∘ C. The cells were then incubated with secondary antibodies consisting of anti-rabbit IgG FITC (Sigma, F0382; 1 : 500 dilution) for 2 hours at RT. Rhodamine phalloidin (Invitrogen, R415) is a high-affinity probe for F-actin that is synthesized from a mushroom toxin conjugated with the orange-fluorescent dye, tetramethylrhodamine (TRITC). F-actin staining was carried out for 2 hours at RT with rhodamine phalloidin (0.2 U/mL dilution). Finally, the slides were mounted using the VECTASHIELD HardSet Antifade Mounting Medium with DAPI (H-1500, Vector labs) and detected using a fluorescence or confocal microscope.
Statistical Analysis.
The calculated spring constants of TECs are expressed as mean ± standard deviation (SD). ANOVA was used to evaluate the significance of the differences between the groups. All statistical analyses were carried out using SPSS software version 19.0 (SPSS Inc., Chicago, IL, USA); p < 0.05 was considered to be statistically significant.
Morphological Changes in TECs after Treatment with Ang II and ARB Treatment.

Figure 1 shows representative AFM topography (upper panels) and deflection images (lower panels) taken from TECs fixed with glutaraldehyde in liquid conditions. After 24 hours in culture, control cells exhibited a typical epithelial cuboidal shape with a cobblestone-like appearance. Cell bodies were convex, and many microvilli were regularly spread over the cell surface (Figure 1(a)). However, TECs cultured in Ang II revealed profound morphological changes. As shown in Figure 1(b), the cells became flattened and elongated and changed to a spindle-like shape. There were small bumps around the nucleus on the cell surface, and microvilli presence decreased when compared to control cells. Simultaneous incubation with telmisartan or blebbistatin disrupted the Ang II-induced morphological changes in the majority of the cells, which retained a cobblestone-like appearance without hypertrophy or an elongated morphology (Figures 1(c) and 1(e)). PD123319, which was used as a negative control, had no significant effect on the morphological change of the TECs treated with Ang II (Figure 1(d)).
Mechanical Changes in Live TECs Induced by Ang II and ARBs.

We calculated k_cell, the cellular spring constant, from the FD curve in Figure 2. In this study, FD measurements were obtained for 30 cells in each group.
Immunofluorescent Findings.
To confirm the transformation of TECs into a fibroblastic phenotype, the expression of pMLC, F-actin, and E-cadherin was investigated via immunofluorescent staining. As shown in Figure 3, pMLC (upper panels) and F-actin (mid panels) expressions markedly increased in TECs with Ang II treatment when compared to the control cells (Figures 3(a) and 3(b)). Conversely, E-cadherin expression (lower panels) markedly decreased in TECs with Ang II treatment when compared to the control cells (Figure 3(c)). As shown in Figures 3(d)
Discussion
In the present study, we observed the Ang II-induced morphological and mechanical changes in TECs and investigated the effect of ARBs on Ang II-stimulated TECs. To our knowledge, this is the first study to visualize and characterize the changes in TECs induced by Ang II and ARBs using AFM. Our principal findings were as follows: (1) after treatment with Ang II, TECs exhibited notable morphological and mechanical changes; (2) Ang II induced changes in the expression of EMT markers, including decreased expression of E-cadherin and increased expression of pMLC and F-actin; and (3) these changes and the phenotypic conversion were disrupted by the addition of telmisartan.
Ang II has been reported to promote renal fibrosis by regulating ECM accumulation, inflammation, and cellular proliferation [9,13]. Activation of the RAAS is also widely known to play a crucial role in the EMT of TECs [22]. Many studies have demonstrated that the suppression of RAAS results in renal protective effects and prevents renal fibrosis [10]. Therefore, understanding the changes in TECs and RAAS activation during the renal fibrosis process is important in understanding the mechanisms underlying renal damage.
Although several studies have investigated changes in TECs after Ang II treatment, studies that demonstrate the morphological and mechanical aspects are limited. In this study, we effectively examined the cell response to Ang II and ARBs with AFM imaging and FD curve measurement. As shown in Figure 1, we performed AFM imaging to directly observe the morphological changes in TECs after treatment with Ang II with or without ARBs. Although the fixation process could lead to cell damage, we used TECs fixed with glutaraldehyde to obtain high-resolution images. After treatment with Ang II, TECs exhibited marked hypertrophy, lost their cobblestone-like morphology, and became elongated in shape, which is typical of fibroblasts. These morphological changes were accompanied by phenotypic changes. Immunofluorescent staining showed that TECs treated with Ang II lost their epithelial marker and newly acquired mesenchymal markers (Figure 3).
The structural and physical changes of TECs are difficult to visualize. Rabinovich et al. [23] found the existence of repulsive forces between the AFM tip and renal tubular epithelial cells. They reported that oxalate treatment of renal TECs gave rise to an increased elastic modulus of the cells. In the present study, we also used AFM to monitor and obtain mechanical properties and cell stiffness. Table 1 shows the mechanical changes in TECs during Ang II-induced EMT. By using the AFM spring constant, we showed that the contractile response of TECs can generate stiffness, which may deform the surrounding ECM or affect exchange in tissue containing a TEC layer. In addition to the spring-constant method used in this study to quantify cell stiffness, more advanced methods have been suggested [24,25], in which Young's modulus is determined by AFM to evaluate cell elasticity. In this study, we calculated the spring constant to quantify cell elasticity. In future studies, we will also consider applying these methods.
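As a rough illustration of the Young's modulus approach mentioned above, the sketch below inverts the Hertz contact model for a spherical indenter to estimate E from a single force-indentation pair; the tip radius, Poisson's ratio and force values are assumptions for demonstration, not parameters from this study or from [24,25].

```python
# Hedged sketch: Young's modulus from the Hertz model for a spherical indenter,
# F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2), solved for E.
# All numerical inputs below are illustrative assumptions.
import math

def youngs_modulus_hertz(force_N, indentation_m, tip_radius_m, poisson_ratio=0.5):
    """Return E (Pa) from one force-indentation pair under the Hertz sphere model."""
    return (3.0 * force_N * (1.0 - poisson_ratio ** 2)) / (
        4.0 * math.sqrt(tip_radius_m) * indentation_m ** 1.5
    )

# Example: 0.5 nN force at 300 nm indentation with a 1 µm radius tip (hypothetical values),
# which yields an elastic modulus on the order of a few kPa, typical for soft cells.
print(youngs_modulus_hertz(force_N=0.5e-9, indentation_m=300e-9, tip_radius_m=1e-6))
```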
As mentioned above, we revealed that Ang II induced morphological and mechanical changes that were attenuated by telmisartan treatment. It is now widely recognized that RAAS blockade by ARBs exhibits a therapeutic effect in renal injury [10]. Our results suggest that telmisartan may disrupt Ang II-induced renal damage by reducing the morphological changes and contraction of TECs. Several studies have reported that ARBs diminished renal fibrosis and the expression of profibrotic growth factors such as transforming growth factor-β and connective tissue growth factor [7,9,13]. The reduction in the molecular and mechanical changes of TECs in our study was presumed to be due to telmisartan-induced biochemical modification. The morphological and mechanical changes of cells have been reported to be associated with changes in cytoskeletal structures. Ang II could increase cytoskeletal activity and lead to changes in the elastic modulus of the cell [26]. These cytoskeletal dynamics could be considered a cause of the observed cell stiffness, but further studies are needed to confirm their contribution.
Conclusion
In summary, we observed morphological changes in TECs induced by Ang II treatment with the help of AFM imaging. Furthermore, the mechanical changes in TECs were evaluated using FD curve analysis. We also demonstrated that these morphological and mechanical changes were effectively prevented by telmisartan treatment. Although the mechanisms underlying these physical changes in TECs have not yet been fully elucidated, AFM could provide noninvasive measurements of the cellular processes in TECs.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request. | 2018-06-12T03:30:44.294Z | 2018-05-20T00:00:00.000 | {
"year": 2018,
"sha1": "c051715ebc954c802cabd6bcbe1b9a74362c7320",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2018/9208795",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8da14fc1031fdd2df19c847790711da1231d1c08",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
226156019 | pes2o/s2orc | v3-fos-license | The state of wooden housing architecture preservation from the interwar period in cities of the Lublin region. Protection possibilities
The article describes the condition of the wooden buildings created in the interwar period in the Lublin region and the methods of their conservation, taking into account the legal forms in force in Poland and in the world. This is an important topic, because buildings of this type are increasingly disappearing from the landscape of small cities in favor of catalog construction, thereby destroying the unique small-town landscape.
Introduction
The interwar period in Poland was a time of rapid state reconstruction, not only at the political level but also in architecture. It must be stressed that the war losses incurred in the built environment were enormous. The prevailing housing crisis was therefore substantial, and the State decided to help citizens with the reconstruction1. At the beginning of the 1920s, the first housing associations began their operation, the major one of which, established in 1934, was Towarzystwo Osiedli Robotniczych (Eng. Society of Workers' Housing Estates, own translation). The aim of the aforementioned organization was to raise housing estates for the less prosperous part of society. First of all, small single-family houses were built2, most often in a terraced manner, along with blocks of flats. The newly designed households were located mostly in suburbs or on vacant plots.
Wooden buildings were very popular in the Lublin region at that time, due to several factors. Not only was wood an easily available material, but the areas in question also had a rich tradition of timber construction. Hence, above all, it was simpler and more economical to use this technology than the other possibilities.
The starting research material used in this study for determining the preservation condition of wooden buildings consisted of designs retained in the State Archives and Karty Ewidencji Zabytków Architektury i Budownictwa (Architecture and Building Monuments Record Cards), the so-called white cards (Polish: białe karty). The records analyzed allowed the date of each object's creation to be determined. However, it must be highlighted that there is no certainty that the retained house designs were implemented in the form shown; some of the buildings may never have been built at all. In addition, it should be noted that at the beginning of the operation of the Polish state, the building regulations of the partitioning powers were in force, including those of the former Kingdom of Poland until 1928, under which no design documentation was required. This raises some issues in determining the actual date when objects were created. To carry out a reliable analysis, houses with sufficient archival documentation were selected.
Wooden buildings of the interwar period -form
In the first years after the war, the building regulations of both the partitioning powers and the former Kingdom of Poland remained simultaneously in force3 until 1928, when a standardized building code was adopted4. The new law limited the height of timber or mixed wooden-brick buildings erected in urban areas to four fathoms, i.e. 8.52 m (this distance was measured from the ground level to the eaves). Moreover, non-fireproof buildings longer than 25 meters had to be divided by a fire wall every 25 meters5. The roofs of the buildings had to be covered with flameproof material.
Concerning the style of the buildings under discussion, several catalogs of typical single-family wooden houses were created in the interwar period, but none of them was adopted in this area. The local building tradition prevailed at all times, despite the proposed patterns and the weak influence of a few model designs, especially those erected in the national style. Also noticeable are rare examples of the same design being used repeatedly within one city. In the Lublin region, a style referring to traditional house construction gained popularity, while the so-called manor style, understood in the classical sense, spread in the north. At the end of the 1930s, functionalism became popular, especially in the larger cities of the region discussed. Focusing on structural details, most timber buildings were raised in post-and-plank construction, sometimes with the use of log construction. This frame structure was mainly implemented in porch structures, outbuildings and elements of small architecture, while mixed-construction houses, with a stone ground floor and a wooden upper floor, were rather rare. Building facades, depending on the style and the region, were plastered or boarded. Roofs were most often built as gable, jerkin-head (so-called half-hip), stepped or Dutch gable roofs (the last in the southern region).
State of preservation
Not many wooden or mixed-type residential buildings have survived to the present day, and their current technical condition is often poor. In this situation, comparing the design documentation of a particular house with its current state is often difficult, sometimes impossible. Reasons for the poor condition can be seen in the damage brought by World War II and in the occupier's policy of mass deportations. Abandoned houses, lacking proper maintenance, fell into ruin and were subjected to a process of destruction leading to their complete disintegration (Fig. 1A). In addition, most of the present inhabitants of these houses are elderly, destitute people. Some of the buildings discussed are also owned by the city or municipality, serving among other purposes as low-standard social flats. Another reason for this condition is the poor quality of the original construction. Wooden buildings of the period discussed were a response to the prevailing housing crisis, and some of them were supposed to be temporary. Frequently their original function changed over time, and they are currently adapted to new uses, e.g. as shops (Fig. 1B).
Moreover, wooden houses were often rebuilt, and their transformations were carried out without any conservation oversight, so in this process the initial form was often destroyed. Window frames were replaced in many cases, with the original casement windows exchanged for standard plastic ones. The new standard joinery usually retained neither the original window size nor the initial glass division, and thus the openings lost their historical form. The same is true for roofing: in place of the historical coverings, cheap corrugated sheets were introduced. However, the most destructive procedure was the thermal modernization of the buildings. Although this process is required in order to bring buildings up to contemporary energy-saving requirements, on many occasions it was performed improperly and destroyed not only the detail but also the entire structure of the building by changing its historical form (Fig. 1C). A second negative factor was improper reconstruction. At the time of erection most of the houses had no utilities, and sanitary facilities inside the building were rare. Owners wanting to adapt their homes to new living conditions therefore usually created additions. These were often made from modern materials, which changed the original shapes of the houses and disfigured their original form (Fig. 1D). In addition, constant improper maintenance of wooden elements, or a complete absence of conservation activities, leads to the destruction of the building material, including unique details such as decorative corners (Polish: kożuchowanie), porches, and ornamental window headers and aprons.
The enrichment of society is another factor in the disappearance of wooden buildings, often combined with their poor technical condition and the excessively high cost of reconstruction. As a result, many timber structures are being demolished in order to make room for new brick houses. A rare positive practice noted in this context is the translocation of a building, consisting of disassembling the whole structure and reassembling it in a new place. Such activities took place in the past and are now gaining supporters again. Another reason for the decreasing number of wooden houses is their location. Previously situated on plots at the outskirts of cities, these buildings are now located in downtown areas, at sites attractive for new investments such as multi-family estates or commercial enterprises. In spite of the many negative factors affecting these buildings, positive examples have also been noted, in which owners look after their houses with care, keeping their original form and detail. Single objects are bought by, or offered to, open-air museums or antique building museums, and are thus subject to proper conservation, after which they can serve as exhibits. One example is the Museum of the Lublin Village in Lublin, where a project is currently being carried out aimed at reconstructing a typical provincial town of Central Europe from the 1930s. The exhibition area contains not only residential wooden buildings but also public edifices. This display, due to its attention to detail, allows the character of a small town of the interwar period to be presented.
Unfortunately, despite the positive examples, the current condition of wooden architecture is poor. This is indicated, among others, by statistical data from the 1980s presented by Ignacy Tłoczek, showing that the percentage share of wooden houses in the Lublin region, in relation to the total number of buildings, was at that time between 75 and 90%. An example of a city for which a plan showing the type of buildings was made in 1928 is Łuków (Fig. 2); in that plan about 70% of the existing buildings were timber structures, located outside the city center. The quantitative changes in the wooden housing stock are large. To illustrate this process, the author analyzed available archival materials and unpublished papers for one representative city, Międzyrzec Podlaski, including, among others, the Measure Plan from 19426. To illustrate the changes that have occurred in the last decade, a site visit was carried out in 2019. On the plan from 2008, objects were marked according to the state of 2019, together with the changes which occurred since the valorisation of 2005, i.e. objects that have been irretrievably lost (Fig. 3). In their place, contemporary residential buildings or commercial and service facilities were erected. These changes mainly concern the main communication routes, due to the favorable location close to the center. The dynamics of building erection in interwar Międzyrzec was greater than before World War I (Table 1). In the years 1918−1933, 194 new wooden houses were erected on newly incorporated areas and as infill of the city center. In the years 1933−1939 this dynamic was even greater, because in 1939 there were already around 1300 wooden houses and residential outbuildings. The war brought significant damage; thus in 1948, despite three years of reconstruction, timber residential buildings still constituted just over half of the pre-war stock. The following decades brought further changes to the city's landscape. As a result of the factors discussed earlier, wooden buildings began gradually to give way to stone ones. In 2005, the number of wooden houses was estimated at around 360. Between 1948 and 2005, the stock of this form of development decreased by almost 44%, while in 2005−2019 alone 49 objects disappeared (17% of the previous stock). As can be seen from the analysis, without systemic protection wooden housing architecture will soon cease to exist, completely changing the landscape and character of the small towns of the Lublin region. The situation is similar in other cities in the region.
Table 1. Change in the condition of wooden residential buildings in Międzyrzec Podlaski.
Legal forms regarding the protection of wooden buildings
Currently, the law on the protection and care of monuments provides four forms of monument protection (Dz.U. nr 162 poz. 1568 z późn. zm.)13: entry in the register of monuments, recognition as a historical monument, creation of a cultural park, and establishing protection in the local spatial development plan, as well as an entry in the records of monuments, which is not formally regulated14 (the issue of monument protection in the commune and voivodship records constitutes a separate, complex matter that will not be developed in this article, because no wooden residential buildings in the presented city are covered by this form of protection). An immovable monument is an object that is a property in itself or is part of a larger complex, created by man or related to his activities. Such a monument stands as testimony to a previous era or an important event, and its conservation is understood to be in the public interest due to its unique historical, artistic or scientific values. It is worth noting that objects are protected and cared for regardless of their condition; thus a poor technical state cannot be grounds to question an object's value or to reject the procedure of entry in the register of monuments15. Entry in the register of monuments is the elementary form of protection established in Polish legislation. It takes place on the basis of a decision issued by the Provincial Conservator of Monuments (Wojewódzki Konserwator Zabytków, WKZ), preceded by detailed archival, field and other relevant research. An entry in the register of immovable monuments is initiated ex officio at the request of the WKZ or on the application of a submitter, who can be an institution, the property owner, or its perpetual usufructuary. In addition, social associations and organizations whose purpose is to protect cultural heritage (Article 31 § 2 and 4 of the Code of Administrative Procedure) also have the right to propose an object for the register. After successful completion of the entire procedure, the object is entered in the register of monuments16. Unfortunately, this form of protection is neglected in the case of wooden houses built in the interwar period. Only a few Architecture and Building Monuments Record Cards held by the WKZ have been created. There are many reasons for the unpopularity of the monument register as a form of preservation of these unique objects. One of them is the owners' insufficient awareness of the value of timber construction; another is society's lack of knowledge about its cultural value.
Another form of protection is a cultural park with special values for a particular region. It is created to protect a unique cultural landscape and to preserve distinctive landscapes with immovable monuments characteristic of the local building and settlement tradition (Article 16(1) of the Act)17. Each area covered by this form of protection must have a spatial development plan which includes conservation protection provisions.
Currently, it is difficult to delineate such an area in which a majority of the original urban tissue containing traditional wooden buildings could be preserved, because many of these objects no longer exist or have been repeatedly rebuilt, thus losing their unique character. What is more, the surroundings have also been degraded; among other things, secondary parcel divisions have been introduced.
In pursuing methods of protecting historical buildings, it may be highly useful to include appropriate information in the study of the conditions and directions of spatial development for the commune and in the local spatial development plan. Such information could relate to the manner of use, the building's character, the method of maintenance and possible expansion. The provisions contained in the plans would make it possible to preserve the unique values of wooden buildings, which were once an element creating the characteristic landscape of a small town.
The form of protection that has the greatest potential for preserving this type of building is the creation of a cultural route. It is one of the forms supported by UNESCO (United Nations Educational, Scientific and Cultural Organization) in the context of cultural heritage18. The definition of a cultural route was presented by the International Council on Monuments and Sites (ICOMOS), which since 1998 has had its own International Committee on Cultural Routes (ICOMOS-CIIC); it assumes that a cultural route is a water, land or mixed trail with a unique history, showing the development of humanity as a multifaceted exchange of goods, ideas, knowledge and cultural values within and between countries and regions, through the long-term mutual interaction of cultures in time and space, which results in material or immaterial heritage1920. In Europe, a resolution of the Assembly of the European Parliament mentioned the European Cultural Route as early as 1985; in 1987 the Programme of European Cultural Routes was created, and in 1997 the Institute of Cultural Routes was established in Luxembourg21. These institutions promote the concept of cultural travel in an international context. However, in the case of timber buildings in the Lublin region, more adequate is the definition proposed by L. Puczek and T. Ratz, who present the cultural route as a thematic trail whose ethnic value or heritage element is focused on a given factor, which is itself both an educational and a tourist attraction. They propose a division of routes according to their coverage into local, regional, national and international22. Unfortunately, the lack of an unambiguous definition of route categorization causes difficulties in the proper naming of specific enterprises. Armin Mikos v. Rohrscheidt, who introduced a categorization of tourist and sightseeing routes23, attempts to order these definitions.
This form of protection would give greater protection to the wooden buildings of the Lublin region, thus allowing the changes taking place in the structure of a small town to be monitored. Another effect should be an increase in the local community's awareness of the uniqueness of its timber heritage. The wooden architecture route of the Lublin region could become a tourist product, attracting not only the inhabitants of the region but also foreign tourists.
Summary
The number of wooden residential buildings is decreasing every year, as they are replaced by brick buildings. In many cases the remaining timber houses have little architectural value or are kept in poor technical condition. In some places, the wooden heritage has been almost completely erased from the landscape and subsequently built over with catalog houses. This is a huge loss for the cultural scenery, especially in small towns. Wooden buildings were most often located in the suburbs, creating a certain "backdrop" for the city. In addition, the plots were large and had sizable gardens. All this provided a unique small-town character24.
What is more, those buildings that have survived to this day are subject to significant biological corrosion, which is a consequence of the owners' negligence. Another factor destroying this unique timber heritage is poor maintenance and thermal modernization, which is most often carried out in an inappropriate manner and results in the destruction of unique detail.
In light of the aforementioned considerations, it can be stated that appropriate investor training should be carried out, showing how to carry out thermal modernization properly in this type of construction. It would also be necessary to create a catalogue of the characteristic functional-spatial systems and details of this region, which could be used to reproduce unique detail. That is why archival and field research is a very important element in deepening knowledge about the construction tradition of the interwar period. In addition, it is necessary to carry out a detailed inventory of the preserved objects, in particular the preserved original details and woodwork.
It is worth making every effort to preserve the wooden heritage of the Lublin (Lubelszczyzna) region in Poland, which is still a witness of past years and a reminder of the difficult time of post-war re-housing of the towns, especially where, after the war damage, timber houses were preserved and renovated in accordance with all conservation rules. In addition, these buildings represent the construction tradition of a bygone era, which is now almost completely forgotten. Preservation should be carried out now, using all legal instruments available under monument protection law. An important step in this respect is to enter eligible buildings in the register of immovable monuments. Additionally, areas with wooden buildings should be included in the study of the conditions and directions of spatial development for the commune, as well as in the local spatial development plan, and both areas and individual objects should become subjects of conservation care. Such provisions would limit investors' freedom during renovation works, at the same time preventing secondary divisions of plots.
Yet the most beneficial solution for both investors and municipalities would be to create a wooden architecture route of the Lublin region, thus demonstrating the uniqueness of this area's timber development. Propagating this idea would increase tourism in the region and raise sensitivity to the unique buildings of the interwar period, becoming a marketing product encouraging visits to these picturesque places. Bibliography | 2020-06-11T09:08:59.823Z | 2020-01-31T00:00:00.000 | {
"year": 2020,
"sha1": "62c9bf8698d340bcb34ded5c1761a79e5217fe5e",
"oa_license": null,
"oa_url": "https://ph.pollub.pl/index.php/teka/article/download/601/1690",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1ab1c75d2dda3397f4454a886434c6e63f9e1c72",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
215411172 | pes2o/s2orc | v3-fos-license | Cell-permeable succinate prodrugs rescue mitochondrial respiration in cellular models of acute acetaminophen overdose
Acetaminophen is one of the most common over-the-counter pain medications used worldwide and is considered safe at therapeutic dose. However, intentional and unintentional overdose accounts for up to 70% of acute liver failure cases in the western world. Extensive research has demonstrated that the induction of oxidative stress and mitochondrial dysfunction are central to the development of acetaminophen-induced liver injury. Despite the insight gained on the mechanism of acetaminophen toxicity, there still is only one clinically approved pharmacological treatment option, N-acetylcysteine. N-acetylcysteine increases the cell’s antioxidant defense and protects liver cells from further acetaminophen-induced oxidative damage. Because it primarily protects healthy liver cells rather than rescuing the already injured cells alternative treatment strategies that target the latter cell population are warranted. In this study, we investigated mitochondria as therapeutic target for the development of novel treatment strategies for acetaminophen-induced liver injury. Characterization of the mitochondrial toxicity due to acute acetaminophen overdose in vitro in human cells using detailed respirometric analysis revealed that complex I-linked (NADH-dependent) but not complex II-linked (succinate-dependent) mitochondrial respiration is inhibited by acetaminophen. Treatment with a novel cell-permeable succinate prodrug rescues acetaminophen-induced impaired mitochondrial respiration. This suggests cell-permeable succinate prodrugs as a potential alternative treatment strategy to counteract acetaminophen-induced liver injury.
Introduction
Acetaminophen (paracetamol, N-acetyl-p-aminophenol; APAP) is one of the most common over-the-counter medications used worldwide [1,2]. APAP is considered safe at therapeutic dose but has been associated with acute liver injury and liver failure in cases of intentional and unintentional overdose. In the western world, APAP accounts for up to 70% of acute liver failure cases [1][2][3][4][5]. Central to the development of APAP-induced liver injury is the formation of reactive oxygen species (ROS) and depletion of glutathione [6]. As a result, oxidative stress damages cellular proteins, including mitochondrial proteins, which induces further oxidative stress [1,2,6]. Within recent years, the critical role of mitochondrial function in the development of APAP-induced liver injury has been well established, but details on the exact mechanism of APAP's mitochondrial toxicity still remain controversial [2,3,[6][7][8]. In addition, the majority of research was done in rodent models and the number of ex vivo or in vivo human studies addressing the mechanism of APAP-induced hepatotoxicity and the role of mitochondrial dysfunction are limited [9,10]. Despite the extensive research that has been performed to date on APAP-induced liver failure, the only clinically approved pharmacological treatment option for APAP intoxication is N-acetylcysteine (NAC). NAC replenishes glutathione levels, increases the cell's antioxidant defense and thus, protects from further oxidative damage induced by APAP. It is more of preventive rather than rescuing nature, with lesser benefit for the already damaged cells [5,7,11]. Therefore, alternative treatment strategies that target the already damage liver cells are warranted.
In this study, we investigated mitochondria as a potential therapeutic target for the treatment of APAP-induced liver injury in vitro. We first characterized the acute effect of APAP on mitochondrial function in primary human hepatocytes, HepG2 cells, and human platelets using respirometry. We then evaluated the efficacy of a cell-permeable succinate prodrug (NV241), a mitochondrially targeted alternative energy substrate, to rescue the impaired mitochondrial respiration following acute overdose of APAP.
Human liver cells
Human plateable primary hepatocytes (male, Caucasian, 69 years of age) were acquired from ThermoFisher Scientific (Bleiswijk, Netherlands) and plated as previously described [13].
Human platelets
The study was carried out in accordance with the Declaration of Helsinki. All blood cell experiments were performed with approval of the regional ethics committee of Lund University, Sweden (permit no. 2013/181). After written informed consent was acquired, venous blood from healthy volunteers was drawn into K 2 EDTA tubes (Vacutainer, BD, Franklin Lakes, USA) according to standard clinical practice. Human platelets were isolated and counted as previously described [14].
Respirometry
Respiration of human primary hepatocytes was measured using the Seahorse XFe96 Analyzer (Agilent technologies, Massachusetts, USA). The day before the experiment, the primary hepatocytes were plated for four hours at 37˚C and 5% CO 2 at a cell density of 20 000 cells per well on collagen-coated 96-well plates (Agilent Seahorse XFe96 products, Agilent technologies, Waghaeusel-Wiesental, Germany). The plating medium was subsequently removed and replaced with culture medium of the same composition as for HepG2 cells. The cells were kept overnight at 37˚C and 5% CO 2 until use. Prior to the experiment the culture medium was replaced with XF-Base medium (Agilent Seahorse XF, Agilent technologies, Waghaeusel-Wiesental, Germany) containing 10 mM glucose, 2 mM L-glutamine and 5 mM sodium pyruvate (pH 7.4) and the cells were left to equilibrate for 1.5 hours at 37˚C and atmospheric O 2 and CO 2 until start of the respirometric protocol [13].
Mitochondrial respiration of the human carcinoma liver cell line HepG2 and of human platelets was measured with a high-resolution oxygraph (O2k, Oroboros Instruments, Innsbruck, Austria). Data were recorded using DatLab software versions 6 and 7 (Oroboros Instruments, Innsbruck, Austria) and respirometry was performed at 37˚C, with 2 mL active chamber volume and a stirrer speed of 750 rpm. Respirometry protocols with human platelets and HepG2 cells were performed in MiR05 medium (0.5 mM EGTA, 3 mM MgCl 2 , 60 mM Klactobionate, 20 mM Taurine, 10 mM KH 2 PO 4 , 20 mM HEPES, 110 mM sucrose and 1g/L bovine serum albumin) and all respiratory values were corrected for the oxygen solubility factor of the medium (0.92) [15]. Mitochondrial respiration was measured at cell concentrations of 200 x 10 6 platelets per mL and 0.5 x 10 6 HepG2 cells per mL [13,14,16,17].
Respirometric protocols for intact cells
The effect of APAP on mitochondrial respiration was first evaluated in intact human primary hepatocytes. Due to the restriction of four additions per sample in the Seahorse Analyzer, increasing doses of APAP or vehicle were added to separate samples/wells. After routine respiration (the respiration dependent on oxidative phosphorylation of endogenous substrates) was measured, cells were exposed to vehicle (DMSO, control) or APAP (2.5, 5, 7.5 or 10 mM) for 15 min, followed by the addition of the protonophore carbonyl-cyanide p-(trifluoromethoxy) phenylhydrazone (FCCP, 1 μM) to uncouple the electron transport system (ETS) from the phosphorylation pathway and measure maximal respiration dependent on the ETS alone. This was followed by a simultaneous addition of rotenone (2 μM) and the cell-permeable succinate prodrug NV241 (250 μM) to evaluate if mitochondrial complexes downstream of complex I (CI) are affected by APAP and if the cell-permeable succinate prodrug NV241 can bypass APAP-induced inhibition of mitochondrial respiration. Non-mitochondrial respiration was measured by addition of the complex III (CIII) inhibitor antimycin A (1 μg/ml) and was subtracted from all respiratory values.
Next, the translatability of the human liver carcinoma cell line HepG2 and of human platelets for studying drug-induced mitochondrial and organ-specific toxicity was evaluated. HepG2 cells and human platelets were re-suspended in MiR05 and routine respiration was measured. After routine respiration stabilized, increasing, cumulative doses of APAP or vehicle (DMSO, control) were added to each sample. After the highest dose of APAP (10 mM) or vehicle was given, CI-linked mitochondrial respiration was inhibited by rotenone (2 μM) and the cell-permeable succinate prodrug NV241 (250 μM) was added subsequently to investigate if mitochondrial complexes downstream of CI are affected by APAP and if the cell-permeable succinate prodrug NV241 can bypass APAP-induced inhibition of mitochondrial respiration. Non-mitochondrial respiration was measured by addition of antimycin A (1 μg/ml), which all respiratory values were corrected for.
Respirometric protocols for permeabilized cells
To further characterize the inhibitory effect of APAP on mitochondrial respiration, a substrate-uncoupler-inhibitor titration (SUIT) protocol was applied using HepG2 cells and human platelets. After routine respiration was measured, intact HepG2 cells and human platelets received either APAP (10 mM) or vehicle and were exposed for 10 min. Following the exposure, the plasma membrane was permeabilized using digitonin to give otherwise membrane-impermeable substrates access to the cell, followed by sequential additions of complex-specific substrates and inhibitors [17]. Platelets were permeabilized with 1 μg digitonin per 1 x 10 6 platelets [14] and HepG2 cells were permeabilized with 10 μg digitonin per 1 x 10 6 cells. The optimal digitonin concentrations were determined in separate experiments and found to induce maximal cell membrane permeabilization without disruption of mitochondrial respiration.
Respirometric protocol to evaluate the coupling potential of the cellpermeable succinate prodrug NV241
In human platelets, the effect of APAP (10 mM) or vehicle (DMSO, control) on routine respiration was evaluated for 10 min, followed by the addition of the cell-permeable succinate prodrug NV241 (250 μM) or its vehicle (DMSO, control). Subsequently, coupled mitochondrial respiration, the respiration coupled to phosphorylation by the ATP-synthase, was measured and calculated as the difference before and after addition of the ATP-synthase inhibitor oligomycin (1 μg/ml) [18]. The respirometric protocol was completed by measuring non-mitochondrial respiration following the addition of the CI inhibitor rotenone (2 μM) and the CIII inhibitor antimycin A (1 μg/ml), which all respiratory values were corrected for.
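To make the arithmetic described above explicit, the sketch below computes oligomycin-sensitive (coupled) respiration with correction for the non-mitochondrial oxygen flux remaining after antimycin A; the function name and example values are illustrative, not study data.

```python
# Hedged sketch of the calculation described above: coupled respiration is the drop in
# oxygen flux upon ATP-synthase inhibition, after subtracting non-mitochondrial respiration.

def coupled_respiration(jo2_before_oligomycin, jo2_after_oligomycin, jo2_after_antimycin_a):
    """Return oligomycin-sensitive respiration, corrected for non-mitochondrial O2 flux.

    All inputs are oxygen fluxes in the same units (e.g. pmol O2 * s^-1 per cell number).
    """
    routine = jo2_before_oligomycin - jo2_after_antimycin_a   # mitochondrial part of routine respiration
    leak = jo2_after_oligomycin - jo2_after_antimycin_a       # respiration not coupled to ATP synthesis
    return routine - leak                                     # respiration coupled to phosphorylation

# Hypothetical example: 60 before oligomycin, 25 after oligomycin, 5 after antimycin A.
print(coupled_respiration(60.0, 25.0, 5.0))  # -> 35.0
```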
Data analysis
As the magnitude of change in the evaluated parameter was not pre-defined, power calculation for sample size was not applied. Experiments with HepG2 cells and human platelets were performed with a group size of six replicates, and experiments with primary human hepatocytes were conducted with three separately prepared replicates of the same donor (each including ≥ 4 technical replicates per group). Statistical analyses were performed using GraphPad Prism version 7 (GraphPad Software, Inc., La Jolla, California, USA). Data are presented as mean ± range or as scatter plots with mean ± range. Because the baseline routine respiration of primary hepatocytes demonstrated more variation before the start of exposure to APAP compared to HepG2 cells and human platelets, quantification and data analysis were performed with data expressed as a percentage (%) of routine respiration (first measurement of routine respiration). All other data are expressed as pmol O 2 × sec -1 × cell number -1 . Respiratory states measured by high-resolution respirometry of human platelets were previously found to be normally distributed [14], justifying the use of parametric tests in the present study. Analyses of differences between ≥3 groups were performed by one-way ANOVA with Dunnett's (Fig 1) or Tukey's (Fig 7) multiple comparison test. A paired, two-tailed Student's t-test was used for comparison of two groups (Figs 2, 3, 5 and 6). The half maximal inhibitory concentrations (IC 50 ) were calculated by standard nonlinear curve fitting of normalized values (% of routine respiration, Fig 4). A p-value of 0.05 or less was considered to indicate significant differences.
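For illustration, IC 50 estimation by nonlinear curve fitting of respiration normalized to % of routine respiration could look like the sketch below; the dose-response model, starting values and data points are assumptions for demonstration, not the authors' fitting code or data.

```python
# Hedged sketch: IC50 from a simple dose-response fit of normalized respiration.
import numpy as np
from scipy.optimize import curve_fit

def inhibition_curve(dose_mM, ic50, hill):
    """Two-parameter dose-response model with top fixed at 100% and bottom at 0%."""
    return 100.0 / (1.0 + (dose_mM / ic50) ** hill)

# Placeholder data: APAP dose (mM) vs. respiration as % of routine respiration.
dose = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
resp = np.array([100.0, 88.0, 65.0, 48.0, 40.0])

# Fit on non-zero doses to avoid a zero base in the power term during optimization.
params, _ = curve_fit(inhibition_curve, dose[1:], resp[1:], p0=[7.0, 1.5])
print(f"IC50 ≈ {params[0]:.1f} mM, Hill slope ≈ {params[1]:.2f}")
```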
Effect of acetaminophen on mitochondrial respiration of intact primary hepatocytes, HepG2 cells and human platelets
We first assessed the effect of APAP on mitochondrial respiration in intact human primary hepatocytes. Following exposure to APAP for 15 min, routine respiration was dose-dependently decreased compared to control (Figs 1 and 4). Subsequent uncoupling of the electron transport from the phosphorylation pathway with FCCP, to measure mitochondrial respiration related to the ETS alone, also showed a dose-dependent decrease with APAP as compared to vehicle control (Fig 1C). To evaluate whether mitochondrial complexes downstream of CI would be affected by APAP and if a cell-permeable succinate prodrug can bypass APAP-induced inhibition of mitochondrial respiration, a simultaneous addition of rotenone and the cell-permeable succinate prodrug NV241 followed. While the magnitude of decrease in mitochondrial respiration in response to this addition differed between vehicle-treated and APAP-intoxicated cells (Fig 1A), the respiration levels after the simultaneous addition of rotenone and NV241 were mostly similar between groups (Fig 1D). The remaining complex II (CII)-linked mitochondrial respiration supported by the cell-permeable succinate prodrug NV241 (Fig 1D) showed a minor difference between control and APAP-treated primary hepatocytes at the lowest dose tested (2.5 mM, p<0.05) but no effect at higher doses, indicating a lack of dose-dependency. Next, we assessed the suitability of the human hepatocyte carcinoma cell line HepG2 for in-depth characterization of the mitochondrial inhibition in liver cells induced by APAP. Similar to primary human hepatocytes, routine respiration supported by endogenous substrates decreased dose-dependently following exposure to APAP (Figs 2A and 4). At a dose of 10 mM, routine respiration was significantly reduced by 60% compared to vehicle control (p<0.01) (Fig 2B). After addition of APAP, CI was inhibited with rotenone so that CII-linked mitochondrial respiration could subsequently be measured in the presence of the cell-permeable succinate prodrug NV241, isolated from any effects of APAP on CI, and so that the ability of NV241 to bypass APAP-induced mitochondrial dysfunction could additionally be evaluated. Addition of NV241 resulted in similar levels of CII-linked mitochondrial respiration in vehicle controls and APAP-intoxicated cells (Fig 2C).
We then evaluated the translatability of human platelets as surrogate tissue to study APAP's effect on mitochondrial function, using the same protocol as described for HepG2 cells. As primary, non-cultured human cells, human platelets from healthy donors present a source of viable, fresh mitochondria. In intact human platelets, routine respiration supported by endogenous substrates was likewise reduced dose-dependently in response to APAP (Figs 3A and 4). At the highest dose, APAP (10 mM) reduced routine respiration by 52% compared to vehicle control (p<0.001) (Fig 3B). We continued the protocol with the addition of rotenone followed by the cell-permeable succinate prodrug NV241. Like in primary human hepatocytes and HepG2 cells, treatment with the cell-permeable succinate prodrug NV241 resulted in similar levels of respiration in vehicle controls and APAP-intoxicated cells (Fig 3C).
Despite differences in routine respiration before exposure, the sensitivity to inhibition by APAP was similar between the three cell types, with primary hepatocytes demonstrating a slightly lower IC 50 value than HepG2 cells and human platelets (primary hepatocytes: IC 50 6.0 mM, HepG2 cells: IC 50 : 6.6 mM and human platelets: IC 50 : 7.4 mM, Fig 4). This demonstrates that HepG2 cells and human platelets are suitable cellular models for further evaluation of the inhibition of mitochondrial respiration by APAP.
Characterization of the inhibition of mitochondrial respiration in HepG2 cells and human platelets
Further in-depth characterization of the inhibitory effect of APAP on mitochondrial respiration was performed using a Substrate-Uncoupler-Inhibitor-Titration (SUIT) protocol. After exposure to APAP (10 mM) for 10 min, intact platelets or HepG2 cells were permeabilized using digitonin, which was followed by sequential additions of complex-specific substrates and inhibitors at saturating concentrations to allow measurements of maximal respiratory capacities. Representative traces of simultaneously measured respiration of vehicle-treated and APAP-treated HepG2 cells are illustrated in Fig 5A. In HepG2 cells, maximal CI-linked, ADP-stimulated mitochondrial respiration in the presence of the substrates malate, pyruvate, and glutamate (OXPHOS CI-linked ) was significantly decreased by 66% in APAP-treated cells (p<0.001) (Fig 5A and 5B). Despite decreased OXPHOS CI-linked respiration, convergent complex I+II (CI+II)-linked, maximal ADP-stimulated mitochondrial respiration in the presence of malate, pyruvate, glutamate, and succinate (OXPHOS CI+II-linked ) was unchanged in APAP-intoxicated HepG2 cells as compared to control (Fig 5A and 5C). Both maximal convergent CI+CII-and CII-linked mitochondrial respiration dependent on the ETS alone were unaffected in HepG2 cells by APAP (S1 Fig). In human platelets, maximal CI-linked and convergent CI+II-linked, ADP-stimulated mitochondrial respiration, as well as maximal convergent CI+II-linked respiration dependent on the ETS alone was reduced following exposure to APAP: OXPHOS CI-linked (p<0.01, Fig 6A), OXPHOS CI+II-linked (p<0.01, Fig 6B) and ETS CI+II-linked (p<0.01, Fig 6C), respectively. Like in HepG2 cells, maximal CII-linked respiration dependent on the ETS alone (ETS CII-linked ) remained unaffected by APAP (Fig 6D).
Treatment effect of a cell-permeable succinate prodrug on acetaminophen-induced inhibition of mitochondrial respiration
Lastly, we evaluated if the normalization of mitochondrial respiration by this novel pharmacological treatment strategy is linked to phosphorylation activity by the ATP-synthase. This was evaluated in intact human platelets following exposure to APAP (10 mM) for 10 min, with and without subsequent treatment, and calculated as the difference in respiration before and after the inhibition of the ATP-synthase. Mitochondrial respiration coupled to phosphorylation by the ATP-synthase, here referred to as coupled respiration, was decreased by 40% (p<0.01) by APAP (Fig 7). Treatment with the cell-permeable succinate prodrug NV241 rescued coupled respiration and restored it to the level of controls (Fig 7).
Discussion
In this study, we demonstrated that APAP induces an immediate inhibition of mitochondrial respiration in human-derived cells through interference with CI or upstream metabolism, while respiration associated with CII and downstream complexes remains unaffected. The toxicity profile of APAP on mitochondrial respiration was not exclusive to hepatic cells and was confirmed in fresh human platelets, presenting them as a suitable surrogate tissue to study the role of mitochondrial dysfunction in acute APAP-induced toxicity. Treatment with a cell-permeable succinate prodrug normalized the drug-induced impairment of mitochondrial respiration, demonstrating the ability of succinate to bypass APAP-induced mitochondrial dysfunction and presenting cell-permeable succinate as a potential novel pharmacological treatment strategy for APAP-induced liver injury. APAP is the main cause of acute liver failure in the western world and, with a mortality rate of 0.4%, not uncommonly ends fatally [1][2][3][4]. The critical role of mitochondrial dysfunction in the development of APAP-induced liver injury and failure has been previously reported by others [2,3,6,7,9,10,19]. Inhibition of the respiratory chain, induction of mitochondrial permeability transition, increased mitochondrial oxidative stress, decreased mitochondrial ATP production and increased mitophagy have been associated with APAP overdose [2,3,6,7]. In this study, we demonstrated that APAP induces mitochondrial toxicity, either directly or through toxic species generated intracellularly at excessive amounts when the APAP-induced oxidative stress has depleted cellular glutathione. Independent of the origin of the toxic species, CII and downstream complexes were left mostly unaffected. The effect on CII-linked mitochondrial respiration observed in primary hepatocytes did not follow a dose-response pattern, as only the lowest concentration of APAP tested showed a minor reduction of respiration. Therefore, the observed reduction in CII-linked mitochondrial respiration in primary hepatocytes is likely unspecific and not related to APAP. Currently, the only clinically approved pharmacological treatment option for APAP overdose is NAC. NAC replenishes glutathione levels, which increases the cell's ability to scavenge ROS. Thus, it protects liver cells from further APAP-induced oxidative injury [1,5,22]. Already damaged liver cells, however, benefit little from NAC treatment. Therefore, alternative treatment strategies are needed that can rescue the already damaged liver cells and prevent the resulting acute liver failure. At the preclinical stage, a limited number of mitochondria-targeted treatment strategies have shown success. The most promising pharmacological strategy, a mitochondrial-targeted antioxidant, decreased the magnitude of liver injury in mouse models of late-stage presenting APAP intoxication by reducing mitochondrial-related ROS production [7,22,23]. In this study, we demonstrated that CII-linked mitochondrial metabolism of a cell-permeable succinate prodrug can bypass and compensate for the decreased CI-linked mitochondrial metabolism following acute APAP exposure. These findings point towards a novel alternative treatment strategy for APAP-induced liver failure: mitochondrial-targeted, cell-permeable succinate prodrugs. Supplementation of an alternative energy source that liver cells can utilize despite the inhibitory effect of APAP on CI-linked metabolism could potentially allow them to maintain the required level of energy production and thus rescue already injured liver cells.
Succinate treatment has previously been demonstrated by others to improve bioenergetics and reduce cell death in vitro in models of traumatic brain injury and of metformin-induced and oxidant-induced mitochondrial dysfunction [24-26], thus further supporting this hypothesis. The cell-permeable succinate prodrug presented in this study is the lead candidate of the first generation of an extensive rational drug design program focused around Krebs cycle intermediates for treatment of mitochondrial dysfunction and related disorders. The succinate prodrug has improved cell-membrane permeability over succinate and has been shown to release succinate intracellularly, bypass mitochondrial complex I-related dysfunction and support oxidative phosphorylation [12,18,27]. Because NV241 lacks sufficient stability in plasma and serum-containing media, we were not able to investigate its treatment effect on long-term cellular effects caused by APAP or in vivo. Currently, compounds which are more stable and suitable for in vivo use are under development for future studies [28]. Succinate has primarily been known as a metabolite of the TCA cycle. Over time, it has emerged to play a role in epigenetics, cell proliferation, paracrine signaling, ROS formation through reversed electron transport (RET) and inflammation [29][30][31]. While the risk of increased ROS through RET is low in the presence of CI inhibition [32], the role of succinate in inflammation, especially during APAP-induced liver damage, needs to be further investigated. In the liver, succinate has been shown to contribute to activation of hepatic stellate cells and Kupffer cells, which phagocytose dead and apoptotic parenchymal cells but also send out pro-inflammatory signals and thus potentially further aggravate APAP-induced liver injury [30,33]. Whether a succinate-induced pro-inflammatory response would aggravate APAP-induced liver injury or stimulate tissue repair pathways remains to be elucidated [30,34]. In this study, human platelets and the human carcinoma liver cell line HepG2 were used as surrogate tissues to study the effect of APAP on mitochondrial respiration as well as the rescue effect of NV241 in acute APAP intoxication. The translatability between human platelets and tissue-specific cell lines is continuously reevaluated. Human platelets, a fresh source of viable mitochondria, have been described to rely on oxidative phosphorylation and to reflect mitochondrial function of other, more metabolically active tissues [35][36][37][38]. Also cancer cells, long believed to rely solely on glycolysis, have now been described to upregulate their mitochondrial metabolism under certain conditions and to rely on mitochondrial function for several cancerogenic processes [39,40]. Even though there are important differences between primary hepatocytes, HepG2 cells and human platelets, our data indicate that they show a relatively comparable sensitivity towards drug-induced mitochondrial dysfunction, as indicated by the similar IC 50 values determined in this study. Our data therefore present these cell types as suitable surrogate tissues to study the role of mitochondrial dysfunction in drug-induced toxicity and further indicate that the liver-specific toxicity in patients with acute APAP intoxication is likely due to the first-pass metabolism of APAP instead of liver-specific metabolism of the drug.
Fig 7. The cell-permeable succinate prodrug NV241 rescues coupled respiration of intact human platelets following acute intoxication with acetaminophen. Intact human platelets were exposed to acetaminophen (10 mM, green triangle) or vehicle (control, open triangle) for 10 min and subsequently treated with the cell-permeable succinate prodrug NV241 (half-filled green triangle). Coupled respiration, defined as the mitochondrial respiration linked to phosphorylation by the ATP synthase, was calculated as the difference in respiration before and after addition of the ATP-synthase inhibitor oligomycin. Data are expressed as mean plus range. One-way ANOVA with Tukey's post hoc test was performed for analysis of differences. APAP: acetaminophen. **p<0.01. ***p<0.001. n = 6. https://doi.org/10.1371/journal.pone.0231173.g007
In conclusion, in this study we demonstrated, using human-derived cells, that APAP induces mitochondrial inhibition through CI (or upstream thereof) while CII and downstream complexes are unaffected. We further showed that a cell-permeable succinate prodrug normalizes APAP-induced inhibition of mitochondrial respiration, presenting pharmacological bypass of APAP-induced mitochondrial toxicity with cell-permeable succinate prodrugs as a promising alternative treatment strategy for APAP-induced mitochondrial dysfunction and, potentially, liver injury.
Supporting information S1 Fig. Effect of acetaminophen on the electron transport system of HepG2 cells. Effect of the exposure of intact HepG2 cells to acetaminophen (red square) or vehicle (control, open square) in subsequently permeabilized cells to apply a Substrate-Uncoupler-Inhibitor-Titration protocol and assess the effect of acetaminophen on mitochondrial respiration which was uncoupled from the phosphorylation pathway using FCCP and dependent on the electron transport system alone. (a) Maximal convergent complex I and II-linked mitochondrial respiration dependent on the electron transport system alone (ETS CI+II-linked ) and (b) maximal complex II-linked mitochondrial respiration dependent on the electron transport system alone (ETS CII-linked | 2020-04-08T19:07:43.879Z | 2020-04-06T00:00:00.000 | {
"year": 2020,
"sha1": "215cfd82afef33a71ba193d7e3ec98d99c8229f5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0231173&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "44523b66b23884826bd027a8eedd11fa2f7ef176",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221402912 | pes2o/s2orc | v3-fos-license | Evaluating implementation of WHO Trauma Care Checklist vs. modified WHO checklist in improving trauma patient clinical outcomes and satisfaction
Abstract: Background: Use of checklists in the evaluation of trauma patients has been a critical component of improving the care process, reducing medical errors and increasing patients' quality of life. We aim to assess the impact of the modified World Health Organization Trauma Care Checklist (WHO TCC) on the management of pain, complications, mortality and patient satisfaction in trauma patients. Methods: This was a randomized controlled trial (RCT). Trauma patients who were referred to the trauma center and met the eligibility criteria were randomly assigned into three study groups. Group 1 were patients who received trauma care without using the WHO checklist, only according to the standard of care. Group 2 were patients who received trauma care according to the WHO checklist, and group 3 were patients who received trauma care according to the WHO modified checklist. We used independent t-tests and chi-square tests to assess the associations between the study variables and the checklist groups. The significance level of the tests was set at a p-value of less than 0.05. Results: We observed that patients' level of pain, Injury Severity Score (ISS), Glasgow Coma Scale (GCS) and patient satisfaction significantly improved across the checklist groups, but more so in the modified checklist group (p less than 0.001). Similarly, findings reveal significant relationships between all clinical characteristics of the patients and the checklist groups, except for a CT scan of the spinal cord. We were unable to establish any significant associations between the checklist groups and the majority of the selected trauma care process measures, except for missed injury (p = 0.001). Conclusions: Both the WHO TCC and the WHO modified checklist, applied in the initial assessment and during the treatment and care processes, enhance patients' clinical outcomes. However, patients in the modified checklist group reported a higher level of satisfaction than those in the WHO TCC group. Implications and future directions are discussed.
Introduction
Trauma is any wound or penetrating or non-penetrating injury caused intentionally or unintentionally by external factors to the human body. 1 Trauma is the second leading cause of premature death in the young population, regardless of gender. [3][4][5] In patients with severe trauma, the primary goal is patient survival, and the secondary goals are avoiding organ failure and other complications, speeding up recovery, and ultimately achieving the desired quality of life. 6 Therefore, early systematic evaluation of trauma patients is a critical component of improving the care process, reducing medical errors, and increasing patients' quality of life. 7 The efficacy of checklist implementation in improving patient safety, optimizing care, and reducing medical errors has been reported in areas such as airway management, fluid resuscitation, and the diagnosis of life-threatening injuries. [8][9][10][11][12][13]
WHO checklist
The WHO Trauma Care checklist (TCC) is a simple tool that is designed to ensure the safety of trauma patients in life-threatening conditions. 14 TCC identifies minimum sets of steps taken in care of all trauma patients admitted in emergency units, regardless of resource availability. 15 It is designed to standardize and reinforce aspects of early assessment of patients with trauma, thereby reducing the likelihood of diagnostic, therapeutic, and care errors during initial resuscitation. 15 TCC validity has been tested by global collaboration across different emergency units. 14 The WHO TCC consists of two main sections. The first section of the checklist includes immediate and urgent activities that should be followed right after the primary and secondary examinations, which involve eleven steps.
The steps include:
1) assess whether airway intervention is needed;
2) evaluate for tension pneumo-haemothorax;
3) check that the pulse oximeter is placed and functioning;
4) check that large-bore IV access is in place and fluids have been started;
5) conduct a complete assessment for, and control of, external bleeding;
6) evaluate for any pelvic fracture;
7) evaluate for any internal bleeding;
8) assess whether spinal immobilization is needed;
9) check the neurovascular status of all four limbs;
10) assess whether the patient is hypothermic;
11) evaluate for other patient needs (if no contraindication).
The second part of the WHO TCC includes five steps that should be followed before the medical team could leave the patient.
Step 1: Has the patient been given the prescribed medications?
Step 2: Have all lab tests and imaging been reviewed?
Step 3: Has it been identified which serial examinations are needed?
Step 4: Has the patient's treatment plan been discussed with the patient or the assigned representative?
Step 5: Have the patient's trauma-related charts been completed?
Results of recent studies show that the WHO's checklist for trauma care reduces mortality, 16 delivers favorable treatment results, 17 and improves patient self-report of the treatment outcome. 18 Although the WHO checklist has been useful in coordinating and harmonizing trauma care and services, the checklist falls short of providing the critical steps for the management of pain in trauma care. Therefore, due to the vital role of pain management in patients, in the current study we added 'pain management' as an additional step to the first part of the checklist. Hereafter we call the modified checklist the "WHO modified checklist". The pain management items include assessing the patient's pain intensity and prescribing medications according to the level of pain, as indicated below. We aim to assess the impact of the modified World Health Organization Trauma Care Checklist (WHO TCC) on the management of pain, complications, mortality and patient satisfaction in trauma patients.
Methods
This was a randomized controlled trial (RCT). The patient population included all trauma patients referred to the trauma center of Ayatollah Taleghani Hospital in Kermanshah, the research site. To be eligible for the study, participants had to have the following characteristics:
Inclusion criteria
1. Age between 18 and 60 years
2. Glasgow Coma Scale (GCS) score equal to or greater than 10
3. Life-threatening damage to an internal organ(s), determined by the clinical judgment of the treating physician
4. No pregnancy
5. No history of chronic mental illness, lung or kidney disease
6. Not undergoing chemotherapy
7. No illicit drug dependency
8. Consent to participate in the study
Patients who did not meet the inclusion criteria were excluded from study participation. Also, during the study, the principal investigator excluded patients who declined to continue the study and those with an incomplete checklist.
Sampling method and sample size
We used a computer-generated random sample of patients from the list of eligible patients. We determined the sample size by assuming that the relative percentage of improvement in the 19 indicators of the WHO checklist is 25% in cases where the checklist was used compared to cases where it was not used. 15 We calculated the sample size using a confidence level of 95% and a power of 80%, which led to a sample of 60 patients for each of the three study groups: WHO checklist, modified WHO checklist, and no checklist.
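The paper does not report the exact formula behind this calculation. As a rough, non-authoritative check, a standard normal-approximation sample-size formula for comparing two proportions gives a figure in the same range when the 25% improvement is read as a 25-percentage-point difference. The baseline proportion (0.50), the improved proportion (0.75) and the function name below are illustrative assumptions, not values or code from the study.

```python
import math
from scipy.stats import norm

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for detecting a difference between
    two proportions (normal-approximation formula, two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 95% confidence level -> about 1.96
    z_beta = norm.ppf(power)            # 80% power -> about 0.84
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / (p1 - p2) ** 2)

# Assumed proportions of improved checklist indicators without vs. with the checklist.
print(two_proportion_sample_size(0.50, 0.75))  # about 55 per group, in the range of the reported 60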
Assignment to the treatment groups
After obtaining study approval from the ethics committee of Kermanshah University of Medical Sciences (KUMS), patients who were referred to the trauma center of Ayatollah Taleghani Hospital in Kermanshah over a period of three months in 2018 and met the eligibility criteria were randomly assigned into three study groups.
Group 1: Patients who received trauma care without using the WHO checklist, and only by the standard of care. Group 2: Patients who received trauma care according to the WHO's checklist. Group 3: Patients who received trauma care according to the WHO's modified checklist.
During the study, the pain intensity of patients was assessed with a numerical scale, and therapeutic interventions were performed. Patients were treated for one month and then assessed for pain severity, severity of the injury, treatment received, mortality rate, and post-trauma complications. Patients were discharged from the hospital and were followed on an as-needed basis either by phone or face-to-face. We obtained the approval of the ethics committee of Kermanshah University of Medical Sciences to conduct the study.
Data collection tools
We collected demographic information through a direct interview with the patient or the patient's companion, or using the information in their medical chart. Demographic information included gender, age, education, marital status, and place of residence.
Assessment of the severity of the injury: Three researchers in the current study received training regarding the calculation of the Injury Severity Score (ISS) to ensure standardized scoring across their checklist evaluations. The ISS measures the severity of the injury on a scale of zero to 75. To examine the extent of trauma, we used a typical trauma scale ranging from a score of 1, meaning a mild injury, to a score of 6, meaning a lethal injury (2 = moderate injury, 3 = serious injury, 4 = severe injury, 5 = critical injury, and 6 = fatal injury) for each of the face, chest, abdomen, limbs, and external surfaces. To estimate the ISS, the squares of the Abbreviated Injury Scale (AIS) scores for the three most damaged areas were calculated and summed.
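As a minimal sketch of the scoring rule just described, the snippet below sums the squares of the three highest regional AIS scores. The region names and example values are hypothetical, and the special-case handling of an AIS of 6 follows the usual ISS convention rather than anything stated in the paper.

```python
def injury_severity_score(ais_by_region):
    """ISS = sum of the squares of the three highest regional AIS scores (0-75).
    By the usual convention, an AIS of 6 in any region yields the maximum ISS of 75."""
    scores = sorted(ais_by_region.values(), reverse=True)
    if scores and scores[0] == 6:
        return 75
    return sum(s ** 2 for s in scores[:3])

# Hypothetical patient with injuries scored per body region.
example = {"face": 1, "chest": 4, "abdomen": 2, "limbs": 3, "external": 1}
print(injury_severity_score(example))  # 4**2 + 3**2 + 2**2 = 29
```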
Pain intensity assessment scale: This scale has been used in various studies, and its reliability has been reported (α = 0.94). 19 Patient self-report of pain intensity was assessed by asking the patient to indicate the amount of pain experienced on a scale of zero to ten on a ten-centimeter calibrated line, where zero indicates no pain and ten indicates the maximum intolerable pain.
Mortality: We estimated mortality by dividing the number of injured patients in this study who died by the total number of injured patients, multiplied by 100.
Medical chart data: Using the patient's medical chart, we recorded and monitored critical clinical data and medical histories, such as vital signs, diagnoses, medications, physical and radiological examinations, the status of clinical examinations, radiological images, and laboratory and test results. We also recorded complications from trauma, including cardiac arrest, pneumonia, pulmonary embolism, renal failure, sepsis, septic shock, wound infection, and more.
Complications: This information was extracted from the patient chart and included cardiac arrest, pneumonia, pulmonary embolism, renal failure, sepsis, septic shock, wound infection, etc.
Data analysis
We used STATA software for data analysis. In addition to reporting descriptive statistics, we used independent t-tests and chi-square tests to assess the association between the study variables and the checklist groups. The significance level of the tests was set at a p-value < 0.05.
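The analyses were run in STATA, but the same comparisons are easy to sketch in a few lines of Python with SciPy. The numbers below are invented illustrative data, not the study's measurements; only the choice of tests (independent t-test for continuous outcomes, chi-square for categorical ones) mirrors the description above.

```python
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

# Hypothetical pain scores (0-10) for the no-checklist and modified-checklist groups.
pain_no_checklist = np.array([7, 6, 8, 5, 7, 6, 8, 7])
pain_modified = np.array([4, 3, 5, 4, 2, 3, 4, 3])
t_stat, p_val = ttest_ind(pain_no_checklist, pain_modified)
print(f"independent t-test: t = {t_stat:.2f}, p = {p_val:.4f}")

# Hypothetical counts of missed injuries (rows: yes/no) across the three groups.
missed_injury = np.array([[9, 3, 1],
                          [51, 57, 59]])
chi2, p, dof, _ = chi2_contingency(missed_injury)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```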
Results
Sample demographic characteristics are presented in Table 1, which shows there were no significant differences in these variables across the study groups. As illustrated in Table 2, patients' level of pain, ISS, GCS, and satisfaction significantly improved across the checklist groups, but more so in the modified checklist group (P < 0.001). Similarly, findings based on Table 3 reveal that there were significant relationships between all clinical characteristics of the patients and the checklist groups, except for CT scan of the spinal cord. We were unable to establish any significant associations between the checklist groups and the majority of the selected trauma care process measures, except for missed injury (p = 0.001) (Table 3).
Discussion
Our study showed that the use of a modified WHO checklist based on pain management in trauma and accident patients is associated with a higher level of patient satisfaction due to the reduction of pain in these patients, compared to the WHO checklist.
Evaluation of patients showed that, for the gross sensory test, abdominal ultrasound, and abdominal CT scan, use of the modified checklist resulted in better evaluation and management compared to patients who were evaluated and treated with the WHO checklist and to the group without a checklist. It is possible that the modified checklist has the potential to meet the needs and condition of the patients and significantly reduce the incidence of medical error. Similarly, the findings of the Ebrahimi and Fakhar study showed that the use of a checklist and standard protocol resulted in better evaluation and treatment of patients. 20 However, we did not find any significant difference between the WHO checklist and the modified checklist in the evaluation of patients for the end pulse test, spinal physical examination, gross motor skill test, abdominal test, temperature assessment, CT scan of the spinal cord, history of receiving tetanus vaccine, pneumonia evaluation, and evaluation of vascular thrombosis. However, both groups were better off compared with the group that was evaluated without a checklist, which means that using the modified checklist or the WHO checklist assists the treatment team in evaluating and managing patients. Other studies confirm our findings. 21,22 The use of checklists and guidelines can effectively guide the treatment team in evaluating patients. 18 The use of patient evaluation protocols can speed up action, increase the accuracy of the team in evaluating patients, and ultimately create more appropriate results. 23 Furthermore, the results showed that in the auditory section and scalp test, patients who were assessed with the WHO checklist were better evaluated than the other groups, and these results were statistically significant. In the study by Lashour et al., 15 use of the WHO checklist in patient evaluations resulted in a better outcome. Also, the results showed that mortality, the incidence of shock, pulmonary embolism, renal failure, the incidence of septic shock, and sepsis were not significantly different among the patients in any of the three groups. However, in most of these areas, the outcomes observed in the modified checklist group were better. In general, our findings support other studies, which have shown that the use of checklists and guidelines can improve patient outcomes. 24,25
Limitations
Our study has several limitations, including the possibility of incomplete recording of information in the patients' files. We tried to compensate for this limitation by training the data abstractors to be consistent, accurate, and objective in extracting information from patients' charts. Furthermore, this was a single-site study with a small sample. Multisite studies with a larger sample size that include children and older adults (60 and over) are needed to replicate our findings. Additionally, our inclusion criteria prevented us from enrolling patients with a GCS of less than 10. Future studies should include patients with a low GCS and use a behavioral pain scale (BPS), 26 such as facial expression. 27
Conclusion
Both the WHO TCC and the WHO modified checklist, in the initial assessment and during the treatment and care process, enhance patients' clinical outcomes. However, patients in the modified checklist group, compared to the WHO TCC group, reported a higher level of satisfaction. | 2020-08-20T10:02:49.262Z | 2020-08-16T00:00:00.000 | {
"year": 2021,
"sha1": "541bb47dfbce579400018c7eb3aceded8f72f832",
"oa_license": "CCBY",
"oa_url": "https://www.jivresearch.org/jivr/index.php/jivr/article/download/1579/870",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59c3cd0f3d32f83746ef842fb99b002faa1b7aa7",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255596191 | pes2o/s2orc | v3-fos-license | Molecular elements: novel approaches for molecular building
Classically, a molecular element (ME) is a pure substance composed of two or more atoms of the same element. However, MEs, in the context of this review, can be any molecules as elements bonded together into the backbone of synthetic oligonucleotides (ONs) with designed sequences and functions, including natural A, T, C, G, U, and unnatural bases. The use of MEs can facilitate the synthesis of designer molecules and smart materials. In particular, we discuss the landmarks associated with DNA structure and related technologies, as well as the extensive application of ONs, the ideal type of molecules for intervention therapy aimed at correcting disease-causing genetic errors (indels). It is herein concluded that the discovery of ON therapeutics and the fabrication of designer molecules or nanostructures depend on the ME concept that we previously published. Accordingly, ME will be our focal point as we discuss related research directions and perspectives in making molecules and materials. This article is part of the theme issue ‘Reactivity and mechanism in chemical and synthetic biology’.
Introduction
DNA is the molecular foundation of the biological system as it is the genetic information carrier stored in the nucleus [1]. Compared with other biomacromolecules, such as proteins and polysaccharides, the structure of DNA is much simpler since it is composed of only four types of units. The evolution of DNA structure and biology is the central part of science, having produced many revolutionary technologies changing the lifestyle of human beings (figure 1). Although DNA was discovered as a natural chemical from living systems by Miescher in 1869 [2], the ring structures of ATCG bases were not identified by Levene and Tipson until 1932 [3,4]. The role of DNA as a gene information carrier in the cell was proposed by Oswald Avery in 1944 for the first time [5,6]. Shortly after the establishment of A = T and C = G base-pairing rules by Chargaff [7], Watson and Crick discovered the double-helix structure of DNA inspired by Franklin's X-ray crystallography of DNA in 1953 [8][9][10][11]. The start of molecular biology signalled a breakthrough in genomics and thereafter spurred landmark discoveries in science and technological advances (figure 1).
Molecular biology, as a new field, started from the discovery of the DNA duplex structure, which had not been fully established until Crick et al. enunciated their 'Central Dogma' [12,13] and deciphered the genetic code [14]. In the words of Crick, nearly 'all aspects of life are engineered at the molecular level, and without understanding molecules, we can only have a very sketchy understanding of life itself' (see https://profiles.nlm.nih.gov/spotlight/sc/feature/biographical-overview). Indeed, Crick's opinion is supported by ever more scientific discoveries. For example, the discovery of DNA polymerase by Kornberg [15,16] led to the establishment of enzymatic synthesis technology [17] and DNA sequencing [18,19] upon which modern biotechnology is based. DNA sequencing, initiated by Sanger, has made gene sequencing accessible to the general public. With genetic information, we can understand diseases at the molecular level and find cures.
Designable nucleic acids are unique probes for biological studies. Many structure-function studies have resulted in the efficient preparation of oligonucleotides (ONs) ever since Khorana managed to chemically synthesize a gene in the laboratory for the first time [20,21]. Aided by ON synthesis technology, the Dickerson and Rich groups reported the crystal structures of A-, B-and Z-DNA fragments [22,23], which is important because the helical structure of DNA is variable under different environments and closely related to biological properties. Beyond biotechnology, ONs prepared by DNA synthesizers have been extensively applied in materials science [24], nanotechnology [25][26][27], information technology [28,29] and clinical diagnosis and therapies [30]. Accordingly, many different functionalities have been designed and incorporated into nucleic acids to meet specific requirements [31,32].
Unnatural DNA bases: bottom-up elements
The fundamental role of DNA and its quite simple structure has long evoked curiosity. For instance, to investigate if ATCG could be replaced by other functionalities, the Benner group designed unnatural bases with close similarity to ATCG bases and realized enzymatic incorporation of unnatural bases into RNA with high accuracy [33,34]. To determine if hydrogen bonding is necessary for DNA base-pairing, the Kool group found that well-designed aromatic functionalities could work as a pair of hydrophobic bases stabilizing the DNA duplex [35,36]. They also designed a series of size-expanded bases from which more thermodynamically stable DNA duplexes were prepared, such as xDNA and yDNA [37][38][39]. We have designed and synthesized the most size-expanded unnatural base by fusion of an azobenzene with a natural T base to give base zT, which is capable of specific base-pairing with natural A through hydrogen bonding [40]. Unnatural base-pairing was also introduced to the duplex by replacing hydrogen bonding with metal-mediated bonding [41,42]. The Romesberg group designed and synthesized more than a thousand hydrophobic bases out of which they screened some unnatural bases that function in a manner similar to that of ATCG bases in the cell system following the 'Central Dogma' [43][44][45][46].
The incorporation of unnatural bases into nucleic acids provides a unique insight into DNA biology and function. In order to distinguish natural AT(U)GC bases from unnatural bases, Benner proposed unnatural bases as DNA's new alphabets [33]. Encouraged by the discovery of Benner's unnatural bases in the 1980s, a few groups have since made major contributions to this field [47][48][49][50][51][52][53][54]. In fact, more than 100 unnatural bases have been reported with base-pairing properties, enough to fill the elemental table (figure 2). Several recently published reviews have described the progress in this field, which we are not discussing in detail here [48,[55][56][57].
It is exciting, but challenging, to reconstruct a life system in a bottom-up approach with unnatural bases. On the other hand, unnatural bases may find unique applications in biotechnology and biomedicine [58]. The Hirao group has created high-affinity DNA aptamers with unnatural bases, which specifically bind to target proteins with improved biostability [59,60]. Our collaboration with the Benner group resulted in the generation of aptamers that selectively bind liver cancer cells. These aptamers evolved from a six-letter DNA library with unnatural bases Z and P [61][62][63].
It has been demonstrated that DNA is a unique data storage device for information technology (IT) [64]. Accordingly, unnatural bases may find unique functions in IT as data storage and read-out systems. Simpler than cellular systems, the addition of a base 'byte' would multiply the capacity of data storage.
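A quick worked example makes the data-storage point concrete: the information a single position can carry grows with the logarithm of the alphabet size, so adding letters to the genetic alphabet increases capacity per base. The figures below follow directly from that definition; they are illustrative arithmetic, not results from the cited work.

```python
import math

def bits_per_base(alphabet_size):
    """Information capacity of one sequence position for a given alphabet size."""
    return math.log2(alphabet_size)

for k in (4, 6, 8):
    gain = bits_per_base(k) / bits_per_base(4)
    print(f"{k}-letter alphabet: {bits_per_base(k):.2f} bits per base "
          f"({gain:.2f}x the natural ATCG capacity)")
```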
Molecular elements: concept and the significance
The convenience of building ON molecules by automated synthesis has inspired applications of ONs in basic research and clinical medicine. To produce such ONs, nucleoside phosphoramidites, first identified in 1981 [65,66], allow sequential addition of new bases to the DNA chain. More than 1000 phosphoramidites have been reported [67][68][69][70][71], providing infinite possibilities for all kinds of technical nucleic acids, or TcNA, available for functionalization within the scope of the periodic table of elements shown in figure 2. However, the rational design of small molecules with specialized functionalities calls for the development of better guidance to address the demand for new nucleic acid-based materials and nucleic acid-based therapeutics, leading to the expansion of TcNA applications. Accordingly, in 2017, we introduced the concept of the molecular element (ME) [40], which we describe below. From the structure of a single-stranded DNA (figure 3a), it is obvious that the backbone of the molecule is uniform and that every unit differs from others by base moieties. Hence, the sugar-phosphate-sugar backbone may be abstracted as the 'bond' of nucleic acids and the base moiety as the 'element'.
Adenine pairs with thymine, and cytosine pairs with guanine. Based on this simplified DNA single-strand structure, we propose the ME concept.
A ME can be any molecule with special functions, including both the natural A, T, C, G and U bases and unnatural bases (figure 3b). MEs are converted to the corresponding phosphoramidites by organic synthesis as elementary substances for the construction of TcNAs. As shown in figure 3b, from phosphoramidites, individual MEs are bonded into the backbone of ONs step-by-step during automated synthesis, in a programmable approach with yields of up to 99.9% (figure 3c). As demonstrated by nature, the sequences of four MEs, A, T, C and G, store huge amounts of genetic information and biological functions. The discovery of more functional MEs will lead to the construction of TcNAs with infinite functions, and people will be able to build their dream molecules as easily as shopping in a molecular supermarket.
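Because each ME is added in a separate coupling cycle, the overall fraction of full-length product falls off geometrically with sequence length. The sketch below assumes the quoted 99.9% is a per-step coupling yield applied uniformly to every cycle, which is an interpretation rather than a statement made in the text.

```python
def full_length_yield(per_step_yield, length):
    """Fraction of full-length oligonucleotide after (length - 1) sequential
    coupling steps, assuming an identical yield at every step."""
    return per_step_yield ** (length - 1)

for n in (20, 60, 100, 200):
    print(f"{n}-mer: {full_length_yield(0.999, n):.1%} full-length at 99.9% per step, "
          f"{full_length_yield(0.99, n):.1%} at 99% per step")
```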
Through the programmable assembly of functional moieties onto the DNA backbone, TcNAs can be turned into diagnostic probes, catalytic molecules or therapeutic molecules. Examples of such conjugates include lipid-, polymer-or nanoparticle-DNA. MEs can also guide the construction of nanodevices with diverse functions. More to the point, the confluence of ME and TcNA offers these merits: (i) simple and efficient design of molecular-level constructs in that all MEs are bonded in sequence through the same phosphoramidite chemistry; (ii) infinite molecular properties since the properties of TcNAs are determined by the incorporated MEs and their sequences; and (iii) rational design of TcNAs realized under the ME framework in future when more underlying disciplines are discovered.
Engineering technical nucleic acids
The progress in nucleic acid preparations, including both chemical and biological synthesis, has allowed researchers to use nucleic acids as unique tools. TcNAs can be prepared by automated synthesis, and the sequences are programmable. Furthermore, the structure can be readily modified through the incorporation of functional moieties. Commercially available TcNAs have been extensively explored in medicine, chemistry, physics, materials science and even information technology. DNA nanotechnology, DNA-based advanced materials and nucleic acid therapeutics have emerged as the frontiers in interdisciplinary fields (figure 4). Programmable sequence and specific A-T and C-G base-pairing modality afford researchers the opportunity to construct designer molecules or nano-scaled devices. For example, aptamers are generated from a library of ONs binding to targets with high affinity and specificity, many of which have been developed as therapeutic molecules for clinical applications [72][73][74][75][76][77]. Molecular beacons, single-stranded ONs with hairpin-loop conformation, are also used in a variety of formats, such as in vitro RNA and DNA monitoring, biosensors and real-time monitoring of gene expression in living systems [78][79][80][81]. DNAzymes are also single-stranded ONs capable of catalysing chemical reactions as enzymes; DNAzymes have received attention for bioimaging and biosensor development [82][83][84]. A molecular nanomotor can be constructed by a single-stranded ON, which is fuelled by the hybridization of DNA [85][86][87]. Hybridization and dehybridization of base pairs are processes which, under programmable control, have been used to prepare intelligent hydrogels, soft nanomotor devices and biomimetic DNA nanostructures [88][89][90][91][92][93][94].
DNA has had a remarkable impact on nanoscience and nanotechnology with the most predictable interactions of all molecules. Through specific base-pairing, programmed TcNAs can assemble structural motifs and then connect them, fabricating nano-scaled structures in a bottom-up approach [95][96][97]. DNA nanotechnology presents the advantages of chemical diversity, highly programmable synthesis and precisely controllable structure, as demonstrated by DNA origami and DNA-mediated nanoparticle assembly [98][99][100]. When more MEs are introduced into TcNAs besides natural ATCG, more powerful nanodevices can be engineered with functional TcNAs.
Nucleic acid therapeutics
We are entering an era in which a vast amount of gene sequencing information is available to medical researchers able to take advantage of the completed human genome project, the breakthrough in DNA sequencing technology and large-scale studies of genetic variation.
Originally, molecular medicine involved the application of genetic knowledge to the practice of therapy. Today, a major mission of molecular medicine is to identify pathogenic genetic mutations and develop molecular interventions. Indeed, a major challenge in molecular medicine involves the discovery of molecular therapeutics against disease-causing genetic mutations.
In 1978, it was discovered that the expression of proteins could be disturbed by an exogenous ON complementary to the target gene [101], and the mechanisms of RNA interference were addressed by Mello and Fire in 1998 [102]. Biological experiments have unequivocally verified that both antisense ONs and siRNAs can silence the expression of target genes. However, it took almost 20 years until the first siRNA was approved for clinical treatment of rare diseases in 2018 [106]. It was a long scientific trek marked by trial and error and continuous structural optimization for therapeutic nucleic acids [31,32,[107][108][109][110][111]. Chemically modified ONs have been extensively studied for the development of therapeutic antisense and siRNA because modifications were found to dramatically improve the drug-like properties of ONs, such as cellular uptake, biostability, target specificity and binding affinity. Simple structure, programmable synthesis and ready functionalization are the prominent properties of TcNA, which will benefit clinical applications. However, major challenges arise from the difficulty in delivering TcNA therapeutics to their target tissues. Many efforts have contributed to the development of delivery systems for TcNAs.
Recently, we demonstrated that the structure of ONs can be optimized by incorporating functional MEs using a programmable approach [112]. For delivery systems, these innovations include our lipid nanoparticle (LNP), as well as viral and polymeric delivery systems [113][114][115][116][117][118]. LNP is the most prominent system, which has been successfully used in the formulation of the first approved siRNA. The formulation of LNP suitable for clinical applications is challenging because the complicated system involves a huge number of chemicals being used as the components. To handle such difficulties, artificial intelligence technology has been applied to the fabrication of LNP [119,120].
Future directions
(a) The development of novel molecular elements
Targeted delivery of TcNA therapeutics is very important, as has been demonstrated by the GalNAc platform applied in siRNA functionalization [118]. The development of targeting-ME could enhance the accumulation of TcNA in target tissues and thus improve efficacy. We recently developed a series of microenvironment-targeting MEs and demonstrated that the tumor-specific delivery of ASO is achievable using MEs. Different diseases may vary from each other with a characteristic microenvironment [121][122][123], which can be used in the design and development of novel MEs.
Pseudouridine (Ψ) is an isomer of the nucleoside uridine in which the uracil is attached via a carbon-carbon instead of a nitrogen-carbon glycosidic bond. Its incorporation into messenger RNA (mRNA) enhances translation efficiency [124,125]. This ME has been used as an important tool for the development of mRNA therapeutics. Recently, we demonstrated that the structure of ONs can be optimized by incorporating functionalities in a programmable approach [112]. DNA structure is predictable owing to specific base-pairing through hydrogen bonding. While the interaction of TcNAs with serum proteins is weak and uncontrollable, the introduction of hydrophobic or cationic functionalities may alter the binding affinity and change the fate of TcNAs in blood circulation [126]. The development of such assembling MEs may provide a sheltering function for TcNAs in the circulatory system.
(b) Correlations between the sequences of molecular elements and functions
Studies in the structural modification of TcNA have increased through the years. Indeed, over the past decades, ON modification has largely broadened the functions and applications of TcNAs and contributed to the clinical success of therapeutic nucleic acids. The advent of phosphoramidite chemistry, as previously noted, has accelerated the functionalization of ONs with modified nucleobases, phosphate-protecting groups and modified sugars [31,32,108,[110][111][112][113][114]. Mother Nature has exhibited the power of sequence with four MEs, A, T, C and G, since one single-base mutation results in quite different biological morphology. It will be even more striking and significant to find out how ME sequences change biological properties and discover the existence of synergism between drug payloads and MEs. The discovery of latent principles may provide guidance for molecular design of TcNAs and lead to further breakthroughs.
Data accessibility. This article has no additional data. Authors' contributions. R.W.: project administration, writing-original draft; X.W.: writing-original draft; S.X.: data curation; Y.Z.: data curation; D.J.: data curation, writing-original draft; X.Z.: validation; C.C.: writing-original draft; J.J.: writing-review and editing; W.T.: conceptualization, funding acquisition, supervision, writing-review and editing. All authors gave final approval for publication and agreed to be held accountable for the work performed therein.
Conflict of interest declaration. We declare we have no competing interests. Funding. This work was supported by the Ministry of Science and Technology of China (grant no. 2021YFA0909400) and the National Science Foundation of China (grant nos t2188102 and 21877079). | 2023-01-12T14:04:41.230Z | 2023-01-11T00:00:00.000 | {
"year": 2023,
"sha1": "d0aeb1c0126add6b02cd10ee152b27acd3ffa446",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "RoyalSociety",
"pdf_hash": "d0aeb1c0126add6b02cd10ee152b27acd3ffa446",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268619577 | pes2o/s2orc | v3-fos-license | Prevalence of Sinus Mucosal Abnormalities on CT of the Head Performed for Headache When Compared With Those Performed for Other Indications
Background There is a high prevalence of mucosal abnormalities of paranasal sinuses on CT Head scans performed for all indications. The purpose of this study is to see whether or not such abnormalities are more common in scans performed on patients presenting with headaches when compared with those without headaches. Methods Images of CT scans of the brain of 100 consecutive patients from each of the two study groups (a total of 200 scans) were retrospectively reviewed for the presence of sinus mucosal abnormalities and their Lund-Mackay (LM) scores were calculated. A corrected LM score was also calculated using a correction factor for non-visualized sinuses in some scans and osteomeatal complexes in all scans. Radiological reports for these scans were also reviewed to note whether or not they contained any comments on the sinuses. All the reviewed scans were performed between January 1, 2021 and January 22, 2021. Results In the headache group, 17 patients had an LM score above 4 (which was used as the main cut-off point for this study). In the non-headache group, 16 patients had a score greater than 4. The mean LM score in the headache group was 1.24 and in the non-headache group was 1.4. There has been no significant difference in the comparison when corrected LM scores were used. In the headache group, 22 radiology reports contained comments on the sinuses compared to 11 reports in the non-headache group. Conclusion Results of this study indicate that there is no significant difference in the prevalence of clinically important sinus mucosal abnormalities in patients who had a brain CT for headache when compared with other indications. It was found that radiologists tend to comment on the sinuses more often when the indication was headache. It may be reasonable for radiologists to consider reviewing this practice. This might reduce unnecessary referrals to ENT and, more importantly, avoid missing other reasons for headaches.
Introduction
There is a high prevalence of sinus mucosal abnormalities detected incidentally on CT scans of the Head, with some studies reporting rates as high as 42.5% in asymptomatic adults [1]. A CT head is used to investigate headache, especially when it is of abrupt or acute onset [2]. Also, headache is a common complaint for patients presenting in Accident and Emergency Departments and nearly 15% of patients from these departments are referred for a CT Head as one of the investigations [3].
It is, therefore, common for radiologists to encounter findings of sinus mucosal abnormalities on CT scans performed for headaches and to face a dilemma about whether or not these findings need to be mentioned in the report. When reported, further dilemmas might ensue within the referring clinical team about whether or not they are clinically significant and relevant to the patient's symptoms. Assuming that the sinus findings are the cause of a patient's headache can lead to misdiagnosis [3].
The purpose of this study is to see if there is any difference in the prevalence and severity of sinus mucosal findings on CT Heads performed for headaches when compared with a similar number of scans in a non-headache group. The observations from this study could be of value to radiologists in making an informed decision on whether and when they should comment on the presence or absence of sinus findings on head CT in patients with headaches.
Several systems have been developed to stage sinus findings on CT. These include the Kennedy, Levine and May, Harvard, and Lund-Mackay (LM) systems [4]. Out of these scores, the LM score is considered to be simple and has been shown to have high interobserver reliability and to correlate well with disease severity. For this study, we used the LM score to compare both groups. We used a cut-off of 4 as the main cut-off point, which has been described as the minimum score indicating clinically significant disease that might need treatment [5].
Materials And Methods
We performed a retrospective review of images and reports of 200 non-contrast CT Head scans performed at Southmead Hospital, North Bristol NHS Trust, United Kingdom, between January 1, 2021 and January 22, 2021. This study followed the University of Bristol and North Bristol NHS Trust's ethics committee guidelines for a retrospective review.
The study group included 100 consecutive patients who had a CT of the Head for headache and another 100 consecutive patients who had it for an indication other than headache. Patients with a history of trauma were excluded from the study. The review also included radiological reports of these 200 patients.
All images were reviewed on the Picture Archiving and Communication System (PACS). All the available series were reviewed with windowing as required. The radiological reports were reviewed on the Radiology Information System (RIS). Other information, such as patient demographics, source of referral, and indication for the study, was also obtained from the RIS.
Each study was evaluated for the presence of any mucosal abnormality in the frontal, maxillary, anterior ethmoid, posterior ethmoid, and sphenoid sinuses on each side. Each sinus was graded as either normal, partially opacified, or totally opacified. Sinuses with small polyps were included in the partially opacified category and those with large ones were classified as totally opacified.
A score of 0 was assigned to each normal sinus, a score of 1 for each partially opacified sinus, and a score of 2 for each totally opacified sinus. The score was 0 for osteomeatal complexes as these could not be scored on these standard brain CT scans. An LM score was calculated for each patient based on this evaluation.
In addition to the calculated LM score as obtained above, a "corrected LM score" was calculated for all patients using the method described by Nazri et al. [4]. The correction factor applied was based on the number of partially or totally unseen sinuses on each scan. The osteomeatal complexes were considered totally unseen. The correction factor used was 1 for a partially unseen sinus and 2 for a totally unseen sinus.
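For illustration, a minimal sketch of the calculated LM scoring described above is given below. The dictionary keys, example findings and the binarization at the study's main cut-off of 4 are illustrative; the corrected LM score is not reproduced here because its exact arithmetic follows Nazri et al. [4].

```python
SIDES = ("right", "left")
SINUSES = ("frontal", "maxillary", "anterior_ethmoid", "posterior_ethmoid", "sphenoid")

def lund_mackay_score(findings):
    """Sum per-sinus scores over both sides (0 = normal, 1 = partially opacified,
    2 = totally opacified); osteomeatal complexes contribute 0 here because they
    cannot be scored on a standard brain CT, as in the study."""
    return sum(findings.get((side, sinus), 0) for side in SIDES for sinus in SINUSES)

# Illustrative patient: partially opacified right maxillary sinus,
# totally opacified left anterior ethmoid sinus.
patient = {("right", "maxillary"): 1, ("left", "anterior_ethmoid"): 2}
score = lund_mackay_score(patient)
print(score, "above the main cut-off of 4" if score > 4 else "at or below the main cut-off of 4")
```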
Demographic data and source of referral
The mean age for the whole group of 200 patients was 61 years (SD 21). The mean age for the headache group was 52.8 years (SD 20). For the non-headache group, the mean age was 69.25 years (SD 19). In the headache group, there were 42 males and 58 females. In the non-headache group, there were 51 males and 49 females.
As shown in Table 1, more than 50% of referrals in all groups were from the A&E department. Out of the remainder, inpatients were the next most common source, followed by GPs and outpatients. This pattern is similar in both groups.
Analysis of the sinus abnormalities
In both groups, ethmoid sinuses showed the highest rate of mucosal abnormality. This was followed by the maxillary sinuses. Frontal sinuses were the least commonly involved. The pattern was somewhat similar for both groups.
TABLE 2: Comparative analysis of sinus mucosal abnormalities
There was no evidence of any increased prevalence of mucosal abnormalities in any of the sinuses in the headache group when compared with the non-headache group. On the other hand, in this study group, there was a mildly increased prevalence of mucosal abnormalities in the ethmoid sinuses and the left sphenoid sinus within the non-headache group.
The LM score (calculated and corrected)
LM scores were calculated for all patients and corrected LM scores were also generated. The correction factor was based on the non-scoring status of OM complexes in all patients and the number of other partially or totally unseen sinuses.
Both groups were compared using several LM score cut-off points (above which the patients could be considered as having sinusitis). Table 3 shows the number of patients above various cut-off points in the headache group compared with those in the non-headache group. A comparison between the two groups was also made using the mean LM scores (Table 4).
TABLE 4: LM score comparison (mean and SD)
There was no significant difference in any of the comparative analyses if corrected LM scores were used instead of calculated LM scores. So, these were not represented in the results presented here.
When lower cut-off points are used (above which the mucosal abnormality was considered to be potentially clinically significant), there were more abnormal scans in the non-headache group than in the headache group. At the higher cut-off points (4 and 5), the number was similar in both groups. There was no statistically significant difference in the mean LM scores between the headache and non-headache groups.
The radiological reports of all patients were reviewed to see if comments were made on the paranasal sinuses. Out of the total 200 reports, 33 contained comments about the sinuses whereas 167 reports did not include any relevant comments. Table 5 compares the mention of sinuses within reports for the headache group vs the non-headache group.
TABLE 5: Mention of imaging observations related to sinuses in radiology reports
The reporting radiologists commented on the sinuses more often when the indication was headache (22%) when compared with scans done for non-headache indications (11%).
Discussion
Even when trauma is excluded, CT Head remains the investigation of choice for headaches of acute onset. Referrals from Accident and Emergency departments for patients with acute atraumatic headaches are one of the main sources for such scans, followed by referrals from inpatients, GPs, and outpatients.
There is a very high prevalence of incidental sinus mucosal abnormalities in the general population, which will be seen in CT Head scans performed for any indication. This will result in a significant number of scans performed for atraumatic headaches showing features that could be interpreted as "sinusitis." As shown in this study, radiologists could be twice as likely to comment on the findings in the sinuses when the indication for the CT Head was headache.
Mention of the presence of sinus mucosal abnormality in a CT scan report in a patient with atraumatic headache could lead to a clinical misdiagnosis of sinusitis being the cause of the patient's symptoms [6]. It has been reported that in several cases where there was a misdiagnosis of subarachnoid hemorrhage, a diagnosis of "sinusitis" was made instead [7].
Also, studies have shown that findings of sinus mucosal abnormalities should not be used to predict that they are the cause of the patient's symptoms or to localize areas of facial pain or pressure [8]. It has also been reported that there is no statistically significant association between the extent and stage of CT findings and the severity of the patient's symptoms [9].
In this study, we set out to see if there is any reason to justify or warrant a mention of the presence of sinus mucosal abnormalities in patients who had a CT of the Head for atraumatic headache. To do this, we compared not only the prevalence of sinus findings in each group but also the stage of those findings using the LM scoring.
The results of the analysis of this relatively large study group, containing 100 patients who had CT Heads for atraumatic headache and a control group of 100 patients who had no headache, have shown that there is no evidence of any increased prevalence of sinus mucosal abnormalities in the headache group. The sinus findings and their distribution were largely similar between the two groups. In addition, this study also shows that there is no significant difference in the grade of the disease amongst the sinus-positive scans across both groups, as demonstrated by the comparative evaluation of the LM scores.
Based on these observations, it appears reasonable to recommend that radiologists review their current practice and consider not reporting sinus mucosal abnormalities detected on CT head scans performed for atraumatic headaches and not making any conclusions about sinusitis. The exception to this would be patients where the clinical diagnosis is suspected rhinosinusitis or when there is extensive opacification of multiple sinuses.
Some radiologists may feel strongly about reporting all abnormalities, including sinus mucosal thickening. Therefore, another alternative would be to add a caveat to CT reports that sinus mucosal abnormalities are common and might not be the actual cause of the patient's symptoms.
Conclusions
This study included a comparison of sinus mucosal findings on a relatively large number of CT Head scans performed for atraumatic headaches with a control group of an equal number of CT Heads performed for other non-headache indications. In this study group, it is evident that there is no evidence of any increased prevalence of mucosal thickening or opacification of the sinuses in the headache group when compared to the control group. There was also no evidence of an increased number of more advanced-stage mucosal abnormalities based on the LM score. In view of these observations, it may be reasonable for radiologists to consider reviewing their practice of commenting on sinus mucosal abnormalities in such scans.
Table 2 compares the sinus mucosal abnormalities in both groups. | 2024-03-23T15:18:05.135Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "8e391aabc8728fc0fff6a2cb2774f37d419269db",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/151954/20240321-11593-ek7zc7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d54c24363da6cb665fb09ccf18d818cadc423cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3873212 | pes2o/s2orc | v3-fos-license | mTOR/Raptor signaling is critical for skeletogenesis in mice through the regulation of Runx2 expression
The mammalian target of rapamycin (mTOR)/regulatory-associated protein of mTOR (Raptor) pathway transmits and integrates different signals including growth factors, nutrients, and energy metabolism. Nearly all these signals have been found to play roles in skeletal biology. However, the contribution of mTOR/Raptor to osteoblast biology in vivo remains to be elucidated as the conclusions of recent studies are controversial. Here we report that mice with a deficiency of either mTOR or Raptor in preosteoblasts exhibited clavicular hypoplasia and delayed fontanelle fusion, similar to those found in human patients with cleidocranial dysplasia (CCD) haploinsufficient for the transcription factor runt-related transcription factor 2 (Runx2) or those identified in Runx2+/− mice. Mechanistic analysis revealed that the mTOR-Raptor-S6K1 axis regulates Runx2 expression through phosphorylation of estrogen receptor α, which binds to Distal-less homeobox 5 (DLX5) and augments the activity of Runx2 enhancer. Moreover, heterozygous mutation of raptor in osteoblasts aggravates the bone defects observed in Runx2+/− mice, indicating a genetic interaction between Raptor and Runx2. Collectively, these findings reveal that mTOR/Raptor signaling is essential for bone formation in vivo through the regulation of Runx2 expression. These results also suggest that a selective mTOR/Raptor antagonist, which has been developed for treatment of many diseases, may have the side effect of causing bone loss.
Osteoblasts, the bone-forming cells active during bone development and remodeling, 1,2 are derived from bone marrow mesenchymal stem cells (BMSCs). The differentiation of osteoblasts from BMSCs is controlled by transcription factors that are expressed in a defined temporal and spatial sequence. Among them, runt-related transcription factor 2 (Runx2) is considered to be the master transcription factor 3,4 as mice deficient in Runx2 exhibit a lack of mineralization in the skeleton and absence of mature osteoblasts. 5 Runx2-heterozygous mice display clavicular hypoplasia and delayed closure of the fontanelles, a phenotype resembling the cleidocranial dysplasia (CCD) syndrome caused by mutations of Runx2 in humans. 6 Runx2 expression at appropriate times and sites is essential for bone development and bone remodeling. However, regulation of Runx2 expression, especially the upstream signaling pathways involved, has not been completely clarified.
The mammalian/mechanistic target of rapamycin (mTOR) is an evolutionarily conserved protein kinase. mTOR functions in two structurally and functionally distinct multiprotein complexes, namely mTORC1 and mTORC2, 7 which are distinct in their unique components and downstream targets. mTORC1 contains Raptor and is sensitive to rapamycin, while mTORC2 contains Rictor and is resistant to rapamycin. S6 kinase 1 (S6K1) is the mTORC1 downstream target and can phosphorylate a series of substrates including estrogen receptor alpha (ERα) and S6 ribosomal protein (S6) to control gene transcription, protein synthesis, and other biological processes. 8-10 mTORC1 signaling is considered the checkpoint of several extracellular and intracellular signals including growth factors, nutrients, energy metabolism and stress 11 and has been the target for drug development in many diseases, 11 which highlights the necessity of studying the role of mTORC1 in osteoblasts and bone development to monitor and avoid possible side effects on bone.
Recently, different studies of the role of mTORC1 signaling in osteoblast differentiation and bone development produced controversial results. 12 Previous in vitro studies have shown that mTORC1 can activate 13,14 or inhibit 15 osteogenesis, and these controversies may result from differences in the cell types or cell differentiation stages examined. Furthermore, disturbance of mTOR signaling in osteoblast-lineage cells induced various skeletal disorders. A mouse model with increased mTORC1 activity in neural crest-derived cells due to deletion of tuberous sclerosis 1 (Tsc1) led to increased bone mass through enlargement of the osteoprogenitor pool. 16 Interestingly, two independent studies indicated that osteoblast-specific inactivation of the Tsc complex caused osteoblasts to differentiate poorly and produce disorganized bone. 17,18 On the other hand, Chen et al. reported that enhanced mTORC1 signaling due to heterozygous mutations in the fibrillin-1 gene resulted in osteopenia. 19 Furthermore, decreased mTORC1 signaling with deletion of mTOR or Raptor in mesenchyme resulted in death shortly after birth and skeletal discrepancy. 20 Raptor deletion in Osterix-expressing preosteoblasts led to osteopenia. 21 In contrast, an in vitro study showed that depletion of Raptor promoted osteoblast differentiation of BMSCs. 22 Taken together, these data reveal that the role of mTORC1 signaling in osteoblasts is still ambiguous and the underlying mechanisms have not been fully illuminated.
In the present study, we found that loss of mTORC1 signaling in preosteoblasts through the deletion of mTOR or Raptor in mice induced severe skeletal defects secondary to impaired osteogenesis and osteoblast differentiation. Further molecular mechanism studies revealed that the mTOR-Raptor-S6K1 axis could promote osteoblast differentiation through regulating Runx2 expression by augmenting the activity of Runx2 enhancer.
Results
mTOR deficiency in preosteoblasts causes a CCD phenotype with impaired osteoblast differentiation. To determine the role of mTOR signaling in preosteoblasts in vivo, we generated conditional mTOR knockout mice (mTOR fl/fl ;Osx-cre, hereafter mTOR Osx ) (Figure 1a) by crossing mTOR fl/fl mice with Osx-cre mice, a transgenic line in which Cre activation is confined to osteoblast precursors. Western blot assay confirmed the loss of mTOR protein in parietal bones from mTOR Osx newborns (Figure 1b). Compared to WT mice and mice heterozygous for the mTOR floxed allele (mTOR fl/+ ;Osx-cre, hereafter mTOR Osx/+ ), mTOR Osx mice showed a slower growth rate (Figures 1c and d). Alizarin red and Alcian blue staining showed that although newborn mTOR Osx mice had similar skeletal size to their WT littermates (Figure 1e), mTOR Osx mice exhibited the CCD phenotype with clavicular hypoplasia and hypomineralization of the calvarium (Figure 1f), features of mice heterozygous for Runx2.
In addition to the clavicular and calvarial phenotypes, mTOR Osx mice displayed a substantial reduction in bone mass. As shown in Figures 2a and b, femoral trabecular bone of 4-week-old male mTOR Osx mice displayed an approximately 70% reduction in bone volume fraction (BV/TV) compared to WT littermates while mTOR Osx/+ showed an intermediate effect (Figure 2b). The significant reduction in bone mass between mTOR Osx and mTOR Osx/+ confirmed that the decreased bone density in mTOR Osx mice is due to the loss of mTOR in osteoblasts, not to the presence of the Cre transgene. mTOR Osx mice also displayed a decrease of trabecular number (Tb.N., Figure 2c) and trabecular thickness (Tb.Th., Figure 2d). mTOR osx mice also displayed a dramatic reduction of the cortical thickness (Ct.Th., Figure 2e).
We next examined whether the abnormal osteogenesis in mTOR osx mice was a result of inadequate osteoblast differentiation. We cultured calvarial cells from mTOR Osx and WT mice and found that mTOR Osx calvarial cells showed reduced osteoblast differentiation, revealed by decreased alkaline phosphatase (ALP) activity and fewer calcified nodules, measured by alizarin red staining (Figure 2f). Consistent with the reduced ossification and osteoblast differentiation, expression of the characteristic osteoblast marker genes, collagen 1α1 (Col1α1, Figures 2i and j) and osteocalcin (Ocn, Figures 2i and k) were reduced in osteoblasts from mTOR osx mice in vivo accompanied with the decrease of Runx2 expression (Figures 2g and h). Taken together, the CCD phenotype and reduced bone mass, possibly secondary to impaired osteoblastic differentiation in mTOR osx mice, supported the hypothesis that mTOR is critical for bone development.
Raptor deficiency in preosteoblasts also causes a CCD phenotype and reduced bone mass with impaired osteoblast differentiation. To further investigate the specific contribution of mTORC1 in osteoblasts, we generated conditional Raptor knockout mice (Rap fl/fl ;Osx-cre, hereafter Rap Osx ) by mating Rap fl/fl mice with Osx-cre mice (Figure 3a). Western blot assay revealed the loss of Raptor protein in parietal bones from Rap Osx mice (Figure 3b). Similar to mTOR Osx mice, Rap Osx mice showed a slower growth pattern after birth (Figures 3c and d). The newborn Rap Osx mice also exhibited the CCD phenotype with clavicular hypoplasia and hypomineralization of the calvarium (Figure 3f), although Rap Osx mice had similar skeletal size to their WT littermates (Figure 3e). Hypocalcification of calvarial and clavicular bones was confirmed in 4-week-old Rap Osx mice by micro-CT and X-ray analysis (Figure 3g). Moreover, 4-week-old Rap Osx mice had an osteopenic phenotype with decreased BV/TV, Tb.N., Tb.Th. and Ct.Th. in the femur (Figures 4a-e). Further, 6-month-old Rap Osx mice showed a CCD phenotype with clavicular hypoplasia and hypomineralization of the calvarium (Supplementary Figure 1A). Micro-CT scanning confirmed calvarial hypoplasia (Supplementary Figure 1B) and showed decreased BV/TV, Tb.N., Tb.Th. and Ct.Th. in Rap Osx mice in comparison with WT mice (Supplementary Figure 1C-G). The significant difference between Rap Osx mice and Rap Osx/+ mice, together with the skeletal defects in 6-month-old Rap Osx mice, confirmed that it is the loss of Raptor in osteoblasts, not the presence of the Cre transgene, that is responsible for the bone defects in Rap Osx mice.
Primary osteoblast cultures from Rap Osx mice exhibited decreased osteoblast differentiation, determined by decreased ALP activity and mineralized matrix production (Figure 4f). Consistent with this, Rap Osx mice displayed decreased Col1α1 and Ocn mRNA expression in vivo (Figures 4i-k) as well as reduced Runx2 protein expression (Figures 4g and h). The similar bone phenotype between Rap Osx mice and mTOR Osx mice supported the hypothesis that mTOR drives skeletal development mainly via the mTORC1 complex. The decreased bone mass in both mTOR Osx and Rap Osx mice might arise through increased bone resorption. To address this, we analyzed femurs from 4-week-old mice for the presence of tartrate-resistant acid phosphatase (TRAP)-positive osteoclasts. In comparison with WT littermates, both mTOR Osx and Rap Osx mice displayed a modest decrease in TRAP-positive osteoclasts (Supplementary Figures 2A and B), indicating that the bone lesions in mTOR Osx or Rap Osx mice are attributable to impaired bone formation as opposed to decreased bone resorption. Further confirming the decrease in osteoblast activity, 4-week-old Rap Osx mice demonstrated a decreased bone formation rate (BFR) and mineral apposition rate (MAR) as determined by alizarin red and calcein labeling (Figures 4l and m). Collectively, these data support the hypothesis that mTOR/Raptor (mTORC1) is critical for osteoblast activity and anabolic bone formation.
S6K1 is a downstream factor of mTOR/Raptor in osteoblasts. S6K1 is the most important downstream regulator of mTORC1 and plays crucial roles in development and aging. To determine whether S6K1 could function downstream of mTOR/Raptor in osteoblasts, we analyzed the level of phospho-S6K1 in 7-day-old mice by immunohistochemistry. An antibody specific to phospho-S6K1 (T389) demonstrated robust signal in osteoblast-lining cells in subchondral trabecular bone and in adjacent osteocytes (Figures 5a and d). At the same time, both mTOR Osx and Rap Osx mice showed decreased S6K1 phosphorylation in comparison with their WT littermates (Figures 5a, b, d and e).
We also analyzed lysates from parietal bone of mTOR Osx and Rap Osx newborn mice. As shown in Figures 5c and f, although the expression of S6K1 is comparable between WT mice and mTOR Osx or Rap Osx mice, the level of phosphorylation of S6K1 in mTOR Osx or Rap Osx mice was dramatically decreased. To examine whether the reduction of S6K1 phosphorylation could be responsible for impaired osteoblast differentiation, the expression of constitutively active S6K1 (CAS6K1, T390E) was enforced in Raptor-deficient calvarial osteoblasts. CAS6K1 overexpression was determined by immunoblotting (Figure 5h) and the efficiency of CAS6K1 was confirmed by increased phosphorylation of ribosomal protein S6 (P-S6, S235/236). Furthermore, the impaired osteoblast differentiation of Rap Osx calvarial cells was significantly improved by CAS6K1 overexpression, as demonstrated by increased ALP activity and mineralized nodule formation (Figure 5g). Similarly, enforced expression of CAS6K1 rescued the decreased expression of Runx2. The mTOR-Raptor-S6K1 axis regulates Runx2 expression in osteoblasts. Both mTOR Osx and Rap Osx mice displayed a striking similarity to Runx2 +/ − mice. To exclude the possibility that the decreased Runx2 expression is due to the different cell populations, we cultured BMSCs from Raptor fl/fl mice and then infected the cells with adenovirus expressing GFP and CRE recombinase. As shown in Figure 6a, CRE adenovirus led to reduced expression of Raptor accompanied by decreased levels of Runx2 protein. Thus, the effect of Raptor on regulating Runx2 expression cannot be attributed to differences in cell population. Consistent with the lower Runx2 protein level, CRE adenovirus caused impaired osteoblast differentiation (Figure 6b). The change of Runx2 protein level was reflected in a dramatic decrease of Runx2 mRNA level by CRE adenovirus (Figure 6c), which was accompanied by a decrease of Runx2 downstream osteogenic genes such as Col1α1 (Figure 6d) and Ocn (Figure 6e). These data suggest that mTORC1 could regulate Runx2 expression at the transcript level. To further determine whether the impaired osteoblast differentiation induced by inactivation of mTORC1 was due to the reduction of Runx2 expression, we analyzed the effects of enforced Runx2 expression on Raptor-deficient osteoblasts. Calvarial osteoblastic cells from Rap Osx mice were infected with lentivirus expressing Runx2 (Lenti-Runx2) or GFP (Lenti-GFP). Western blot assay confirmed Runx2 overexpression in Rap Osx osteoblasts (Figure 6f). The decreased osteoblast differentiation in Rap Osx calvarial cells was rescued by Runx2 overexpression, as determined by ALP activity and mineralized nodule formation (Figure 6g). Consistent with this, the decreased expression of Runx2 downstream genes including Col1α1 and Ocn was rescued by Runx2 overexpression (Figures 6h and i). These results indicated that mTOR/Raptor-S6K1 signaling promoted skeletal development and osteoblast differentiation through regulation of Runx2 expression.
S6K1 regulates Runx2 expression via its enhancer. Next, we intended to investigate the mechanisms by which mTORC1 regulates Runx2 expression. First, we examined the effects of CAS6K1 on activity of the Runx2 promoter by an in vitro luciferase assay. As shown in Supplementary Figure 3A, CAS6K1 had no effect on activation of the Runx2 promoter. Kawane et al. demonstrated that the enhancer of Runx2 plays an important role in directing Runx2 expression in osteoblasts. 23 Consequently, we next examined the effects of CAS6K1 on the Runx2 enhancer. As shown in Figure 7a, the activity of the luciferase reporter driven by the 3x 89 bp Runx2 core enhancer (Runx2 enhancer) was promoted by CAS6K1 expression. Furthermore, we found that Runx2 enhancer activity decreased in Rap Osx calvarial cells compared with the WT group (Figure 7b), which suggested that mTORC1-S6K1 may regulate Runx2 expression through its enhancer. The Runx2 core enhancer is bound by Distal-less homeobox 5 (DLX5) and MEF2C, 23 so we examined the effects of CAS6K1 on DLX5- and MEF2C-induced Runx2 enhancer activity. As shown in Figure 7c, CAS6K1 could increase DLX5-induced Runx2 enhancer activity, but not MEF2C-induced activity (Supplementary Figure 3B). However, we were unable to detect any interaction between S6K1 and DLX5 by co-immunoprecipitation (coIP) experiments (Supplementary Figure 1C). This is consistent with the fact that DLX5 does not contain the S6K1 phosphorylation motif RxRxxS/T (where x could be any amino acid). Thus, S6K1 may regulate Runx2 enhancer activity by phosphorylating proteins other than DLX5. It has been reported that S6K1 promotes estrogen receptor α (ERα) activity by phosphorylating it on S167 in cancer cells 9,24 and ERα can regulate Runx2 expression in osteoblast progenitors. [25][26][27] We hypothesized that S6K1 may regulate Runx2 enhancer activity by phosphorylating ERα. First, we confirmed the protein interaction between S6K1 and ERα (Figure 7d). Next, we found that ERα could increase the activity of the Runx2 enhancer, and co-transfection of CAS6K1 was able to promote this effect (Figure 7e). We then examined whether mutation of the S6K1 phosphorylation motif renders ERα refractory to activation by S6K1. The S6K1 phosphorylation site of ERα was mutated by converting Ser 171 into alanine. 28 We found that, when compared with WT ERα, ERα-S171A was resistant to CAS6K1 effects on the activity of the Runx2 enhancer (Figure 7f). These data indicate that S6K1 regulates Runx2 enhancer activity by phosphorylation of ERα. To further test this hypothesis, we analyzed the phosphorylation of ERα (P-ERα, S167) in 7-day-old mTOR Osx and Rap Osx mice by immunohistochemistry. Interestingly, we found that DLX5 and ERα could synergistically increase the activity of the Runx2 enhancer and that CAS6K1 could further promote these effects (Figure 7i). As the 89 bp Runx2 core enhancer lacks an ERα binding motif, we hypothesized that ERα might augment Runx2 enhancer activity through interaction with DLX5. Indeed, ERα and DLX5 could interact when both were ectopically expressed in 293T cells (Figure 7j). Next, we confirmed this protein interaction between ERα and DLX5 in calvarial osteoblasts (Figure 7k). To further analyze whether the interaction of ERα and DLX5 was dependent on phosphorylation of ERα, WT and phosphorylation site mutant ERα (ERα-WT, ERα-S171E, ERα-S171A) were co-transfected with DLX5 in 293T cells.
As shown in Figure 7l, DLX5 could interact with all three ERα proteins, which suggested that the interaction of ERα and DLX5 was independent of ERα phosphorylation and excluded the possibility that mTORC1 regulates Runx2 enhancer activity by affecting the interaction of ERα and DLX5. Next, we analyzed the binding of ERα and DLX5 to the Runx2 enhancer by ChIP-qPCR in Rap Osx calvarial osteoblasts. As shown in Figures 7m and n, the binding of ERα and DLX5 to the Runx2 enhancer decreased in Rap Osx calvarial osteoblasts in comparison to the WT groups, which indicated that mTORC1-S6K1 signaling can promote the binding of the transcription factors ERα and DLX5 to the Runx2 enhancer. These data support the hypothesis that the mTOR-Raptor-S6K1 axis can regulate Runx2 expression via its enhancer, probably by phosphorylating ERα, which interacts with DLX5 and augments Runx2 enhancer activity. mTORC1 interacts genetically with Runx2 in vivo. Based on the in vitro results above, we next questioned whether mTORC1 is an upstream regulator of Runx2 in vivo. We postulated that if mTORC1 indeed regulates expression of Runx2, then reduced activity of mTORC1 in parallel with Runx2 haploinsufficiency in vivo should aggravate the bone defects in Runx2 +/ − mice. To test this hypothesis, we generated Rap Osx/+ Runx2 +/ − mutant mice by crossing Rap Osx/+ and Runx2 +/ − mice and analyzed skeletal preparations from newborn WT, Rap Osx/+ , Runx2 +/ − and Rap Osx/+ Runx2 +/ − mice. As shown in Figure 8a, the body weights of Runx2 +/ − and Rap Osx/+ Runx2 +/ − mice were slightly lower than those of WT and Rap Osx/+ mice, and the skeletons of Runx2 +/ − and Rap Osx/+ Runx2 +/ − mice were consistently smaller (Figure 8b). Runx2 +/ − mice displayed the previously reported CCD-like phenotype. Heterozygous deletion of Raptor resulted in minor skeletal defects in both the clavicle and calvarium (Figures 8c-e). However, Rap Osx/+ Runx2 +/ − mice showed far more severe bone defects than Runx2 +/ − mice, including a larger hypoplastic area of the cranial bones (Figures 8c and d) and barely visible clavicles (Figures 8c and e). Taken together, these results suggest a genetic relationship between mTORC1 and Runx2, and provide in vivo evidence that mTORC1 can regulate osteoblast function through Runx2.
Discussion
In the present study, we revealed the critical role of mTOR/Raptor signaling in osteoblasts and bone development through conditional knockout of mTOR and Raptor, respectively. Deletion of mTOR in preosteoblasts induced marked skeletal defects, including dwarfism with short limbs, impaired ossification of the cranial bones, hypoplasia of the clavicles and reduced bone mass, which support the conclusion that mTOR is essential for both endochondral and intramembranous ossification. Further, preosteoblast-specific loss of Raptor reproduced almost all skeletal phenotypes of mTOR Osx mice, suggesting that mTOR functions mainly through mTORC1 in osteoblasts and osteogenesis. On the other hand, other researchers have found that continuous mTORC1 activation led to bone defects. 18,19 These results suggested that mTOR signaling is regulated precisely during bone development, and either upregulation or downregulation of mTORC1 signaling may result in bone diseases. Although the precise balance point of mTORC1 signaling in osteoblasts will need to be investigated in further studies, the possible side effects on bone of changes in mTORC1 signaling caused by any agent which has been or is intended to be used as a treatment target in a range of diseases 11 should be taken into account.
MSCs can differentiate into multiple lineages including the osteoblast, chondrocyte and adipocyte lineages. While it has been demonstrated that mTOR signaling is essential for both chondrocyte 20,29 and adipocyte 30 differentiation, there is limited and controversial information available regarding the independent role of mTOR in osteoblasts and osteogenesis. 12 In this study, we revealed several lines of evidence showing that inactivation of mTORC1 inhibited osteoblast differentiation and bone formation. First, osteoblast-specific deletion of either mTOR or Raptor led to decreased expression of osteoblastic markers in vivo. Second, BFR was reduced in Rap Osx mice. Third, osteoblast differentiation of Rap Osx parietal cells was impaired. Taken together, we believe that physiological mTORC1 signaling is essential for osteoblast differentiation and bone formation.
There is limited information available about the mechanisms by which mTORC1 promotes osteoblast differentiation. It has been demonstrated that S6K1 can positively regulate the differentiation of both chondrocytes 29 and adipocytes. 10 In our current study, we provided evidence indicating that S6K1 is the major downstream regulator of mTORC1 in osteoblasts and that the mTOR/Raptor-S6K1 axis could promote osteoblast differentiation and osteogenesis.
Mutations of Runx2 result in CCD in humans, and Runx2 +/ − mice exhibited CCD-like phenotypes except for supernumerary teeth. [4][5][6] However, some CCD patients do not carry Runx2 mutations, and a decrease to 70% of wild-type Runx2 levels can result in CCD syndrome in mice. 31 These results indicate that CCD may result from other mechanisms that regulate Runx2 expression. Here, we provided evidence that mTOR/Raptor-S6K1 signaling promotes Runx2 expression. Preosteoblast-specific deletion of either mTOR or Raptor results in bone defects resembling the CCD phenotype and decreased Runx2 expression in vivo. Moreover, Raptor-deficient BMSCs and parietal cells displayed reduced Runx2 expression, and the impaired osteoblast differentiation in Rap Osx parietal cells can be rescued by Runx2 overexpression. The reduced Runx2 expression in Rap Osx parietal cells can be rescued by enforced CAS6K1 expression. Molecular experiments demonstrated that CAS6K1 can augment the activity of the Runx2 enhancer, but not the Runx2 promoter. Finally, heterozygous deletion of Raptor in osteoblasts aggravates the skeletal phenotypes of Runx2 +/ − mice, supporting a genetic link between mTORC1 signaling and Runx2. Taken together, these data suggest that physiological mTORC1 signaling is essential for Runx2 expression. Meanwhile, the bone defects induced by hyperactivation of mTORC1 in osteoblasts were also related to decreased Runx2 expression. 18,19 Our study clarifies the critical role of mTOR/Raptor signaling in osteoblasts and bone development and demonstrates that mTOR-Raptor-S6K1 can regulate Runx2 expression, providing new insights into skeletal dysplasias such as CCD.

Materials and Methods

Skeletal whole mount staining. Skeletal whole mount staining with Alcian blue and Alizarin red was performed as described previously. 32 Mice were killed with CO 2 , and all skin was carefully removed. Specimens were dehydrated in 95% alcohol for 24 h, followed by cartilage staining in Alcian blue solution for 42 h at 37°C. After staining, specimens were washed twice in 95% alcohol for 2 h, cleared in 1% KOH for 5 h and stained in Alizarin red solution for 1 h. They were then cleared through 20, 50, and 80% glycerine in 1% KOH, then stored in glycerine.
X-ray and micro-CT analysis. Four-week-old mice were anesthetized with chloral hydrate and subjected to X-ray scanning at 30 kV (Faxitron X-ray, Tucson, AZ, USA). Skulls of 4-week-old and 6-month-old WT and Rap Osx mice were used for micro-CT analysis (μCT80, SCANCO Medical AG, Bassersdorf, Switzerland) with a 10-μm voxel size. The femurs of 4-week-old mTOR Osx , mTOR Osx/+ , Rap Osx and Rap Osx/+ mice, 6-month-old Rap Osx mice and corresponding WT littermates were collected for micro-CT scanning with a 10-μm voxel size. One hundred slices (total 1 mm) were used to analyze trabecular microarchitecture parameters including bone volume fraction (BV/TV), trabecular thickness (Tb.Th.) and trabecular number (Tb.N.) following the instructions of the manufacturer. 33,34 Fifty slices from the middle of the femur were used to analyze cortical thickness (Ct.Th.). 35

Histological analysis. Femurs from 7-day-old and 4-week-old mice were fixed in 4% paraformaldehyde for 48 h followed by decalcification in 10% EDTA for 2-4 weeks. Specimens were embedded in paraffin then stained with hematoxylin-eosin and TRAP (Sigma, St. Louis, MO, USA) according to previously described methods. 36,37 Immunohistochemical staining was performed following a previously described protocol. 37 Sections were de-waxed and rehydrated. A solution of 3% H 2 O 2 was used to block the activity of endogenous peroxidase. Antigen retrieval was performed with protease K at 37°C for 15 min. Antibodies to Runx2 (1:200, Santa Cruz Biotechnology, Santa Cruz, CA, USA), P-S6K1 (T389, 1:200, Merck Millipore, Darmstadt, Germany) and P-ERα (S167, 1:200, ABclonal, Boston, MA, USA) were added and incubated overnight at 4°C. Corresponding biotinylated secondary antibodies were then added and incubated for 1 h at room temperature, followed by color development with an ABC kit (Vector Labs, Peterborough, UK). We counted the number of Runx2-, P-S6K1- and P-ERα-positive cells along the trabecular bone of the distal femur, excluding the area within 0.25 mm from the growth plate.
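To make the trabecular analysis concrete, the following is a minimal, hypothetical sketch of how a bone volume fraction could be computed from a segmented stack of micro-CT slices; the array shape, threshold, and random data below are illustrative assumptions, and the parameters reported in this study were obtained with the scanner manufacturer's software as described above.

```python
import numpy as np

def bone_volume_fraction(ct_stack, threshold):
    """Compute BV/TV for a stack of micro-CT slices.

    ct_stack  : 3D numpy array (slices x rows x cols) of attenuation values,
                restricted to the trabecular region of interest.
    threshold : attenuation value above which a voxel is counted as bone.
    """
    bone_mask = ct_stack >= threshold     # voxels classified as mineralized bone
    bone_voxels = bone_mask.sum()         # bone volume (BV), in voxels
    total_voxels = ct_stack.size          # total tissue volume (TV), in voxels
    return bone_voxels / total_voxels

# Hypothetical example: 100 slices of 512 x 512 voxels (10 um voxel size -> 1 mm).
rng = np.random.default_rng(0)
stack = rng.normal(loc=100.0, scale=30.0, size=(100, 512, 512))
print(f"BV/TV = {bone_volume_fraction(stack, threshold=160.0):.3f}")
```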
In situ hybridization was performed as previously described. 37 Briefly, DIG-labeled RNA probes were used to detect mRNA expression in femurs of 7-day-old mTOR Osx , Rap Osx and corresponding WT littermates. Probes used in this study: probes for mouse Col1α1 (nucleotides 4466-4783, NM_007742, subcloned in pBlueScript), probes for mouse Ocn (nucleotides 39-342, NM_007541, subcloned in pBlueScript). After hybridization, probes were visualized by anti-DIG biotin-conjugated antibody and an ABC kit (Vector Labs). Then, samples were counterstained with hematoxylin. We counted Col1α1-positive cells along trabecular bone of the distal femur excluding the area within 0.25 mm from the growth plate and Ocn-positive cells along cortical bone of the distal femur.
Cell culture. Four-week-old Raptor fl/fl mice were killed and the hindlimbs were collected. Bone marrow cells were washed out of the long bones and centrifuged at 500 × g for 10 min. The collected BMSCs were cultured in α-MEM with 10% fetal calf serum and 1% penicillin/streptomycin. After 14 days, BMSCs were reseeded at 2.5 × 10 5 /cm 2 . Twenty-four hours later, BMSCs were infected with adenovirus expressing either CRE recombinase or GFP at an MOI of 10. Then, BMSCs were cultured in osteogenic medium (α-MEM with 10% FBS and 1% penicillin/streptomycin, 100 nM dexamethasone, 50 μM L-ascorbic acid, and 10 mM β-glycerophosphate) until required.
Parietal bones of P5 mice (mTOR Osx , Rap Osx and WT) were digested in 1 mg/ml collagenase (Sigma) and 2 mg/ml Dispase II (Sigma) in α-MEM for 5 min, three times. The released calvarial cells were cultured in α-MEM with 10% FBS and 1% penicillin/streptomycin. After 7 days, calvarial cells were reseeded at 5 × 10 4 /cm 2 and cultured in osteogenic medium. For overexpression of CAS6K1 and Runx2, Rap Osx calvarial cells were infected with lentivirus expressing CAS6K1 or Runx2. At the same time, WT and Rap Osx calvarial cells infected with lentivirus expressing GFP were used as control groups. Then these cells were cultured in osteogenic medium.
Alkaline phosphatase staining and alizarin red staining. ALP assay was performed after 7 days of osteoblast differentiation according to the specification of the manufacturer (Beyotime Institute of Biotechnology, Shanghai, China). Mineralized nodule formation was detected by Alizarin red staining 14 days after osteoblast differentiation (Cyagen Biosciences, Santa Clara, CA, USA).
Co-immunoprecipitation and western blot. For western blot analysis of parietal bone, total proteins were obtained from P0 mice with SDS buffer (TaKara). For western blot analysis of BMSCs and calvarial cells, total proteins were obtained from BMSCs and calvarial cells after 7 days of differentiation in osteogenic medium. Proteins (60 μg of parietal bone protein or 30 μg of cells) were separated by 10% SDS-PAGE followed by western blotting according to a standard protocol. Antibodies used were: mTOR (Cell Signaling Technology, Danvers, MA, USA), Raptor (Cell Signaling Technology), P-S6K1 (T389, Merck Millipore), S6K1 (Cell Signaling Technology), P-S6 (S235/236, Cell Signaling Technology), S6 (Cell Signaling Technology), Runx2 (Santa Cruz Biotechnology), β-actin (Santa Cruz Biotechnology).
CoIP was performed following a method previously described. 37 Briefly, 293T cells were seeded into a 10 cm dish at a concentration of 3 × 10 6 cells/dish and allowed to settle overnight. At 48 h post-transfection with PEI, cells were lysed and whole cell lysates were used for immunoprecipitation by Flag or HA antibody (Sigma) at 4°C overnight. Western blot assay was performed with the indicated antibodies (anti-HA and anti-Flag, Sigma). For CoIP analysis of ERα and DLX5 in calvarial osteoblasts, whole cell lysates were incubated with IgG and anti-ERα (Sangon Biotech, Shanghai, China) at 4°C overnight. Western blot assay was performed with anti-DLX5 (Abcam, Cambridge, UK) and anti-ERα (Abcam).
Chromatin immunoprecipitation (ChIP) and qPCR. ChIP analysis in WT and Rap Osx calvarial osteoblasts was performed with an Enzymatic Chromatin Immunoprecipitation Kit (Cell Signaling Technology) following the instructions of the manufacturer. Briefly, calvarial osteoblasts were cross-linked with 1% formaldehyde for 10 min at room temperature, followed by quenching with glycine. Chromatin digestion was performed with micrococcal nuclease to obtain DNA fragments from 150 bp to 900 bp. Immunoprecipitation was performed with antibodies against DLX5 (Abcam) and ERα (Abcam), and IgG was used as a negative control. Precipitated DNA was detected by qPCR with specific primers. Primers for the Runx2 enhancer: F: 5′-CTGCTTTAGGTAGAGGGCTT-3′, R: 5′-AATCAGAGTGGAGTCTCAGC-3′.
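As a worked illustration of how ChIP-qPCR enrichment at the Runx2 enhancer can be quantified, the sketch below applies the common percent-input normalization; the 1% input fraction and all Ct values are hypothetical assumptions for illustration only, since the text does not specify the quantification scheme used.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input quantification for ChIP-qPCR.

    ct_ip          : Ct of the immunoprecipitated sample (DLX5, ERalpha or IgG).
    ct_input       : Ct of the input chromatin sample.
    input_fraction : fraction of chromatin saved as input (here 1%).
    """
    # Adjust the input Ct so it represents 100% of the starting chromatin.
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical Ct values for the Runx2 enhancer amplicon.
samples = [("IgG", 33.5), ("ERalpha (WT cells)", 28.2), ("ERalpha (Rap Osx cells)", 30.1)]
for antibody, ct in samples:
    print(f"{antibody:>24}: {percent_input(ct, ct_input=25.0):.3f} % of input")
```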
Statistical analysis. All quantitative data are presented as mean ± s.d. from at least three independent samples. Student's t-test was used for statistical evaluations of two-group comparisons. Statistical analysis with more than two groups was performed with one-way analysis of variance (ANOVA). P < 0.05 was considered statistically significant.
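A minimal sketch of the statistical tests described above, using SciPy; the genotype labels and BV/TV values are hypothetical illustration data, not measurements from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical BV/TV measurements (%) from n = 5 mice per genotype.
wt       = np.array([12.1, 11.8, 13.0, 12.5, 11.6])
rap_osx  = np.array([ 7.9,  8.4,  7.2,  8.8,  7.5])
mtor_osx = np.array([ 7.1,  6.8,  7.9,  7.4,  6.5])

# Two-group comparison: Student's t-test.
t, p = stats.ttest_ind(wt, rap_osx)
print(f"WT vs Rap Osx: t = {t:.2f}, P = {p:.4f}")

# More than two groups: one-way ANOVA.
f, p = stats.f_oneway(wt, rap_osx, mtor_osx)
print(f"One-way ANOVA: F = {f:.2f}, P = {p:.4f}")
# P < 0.05 is considered statistically significant, as stated above.
```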
Conflict of Interest
The authors declare no conflict of interest. | 2017-11-08T22:04:14.671Z | 2017-07-07T00:00:00.000 | {
"year": 2017,
"sha1": "a8f3c5f90e366396fcde1430d8ec9b31a40a752d",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/cdd2017110.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "1403dc210ddfeae75a2999bf15128cc9a890411b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
56129414 | pes2o/s2orc | v3-fos-license | What is the importance of climate model bias when projecting the impacts of climate change on land surface processes?
We present OH observations from Nitrogen, Aerosol Composition, and Halogens on a Tall Tower 2011 (NACHTT-11) held at the Boulder Atmospheric Observatory in Weld County, Colorado. Average OH levels at noon were ~2.7 × 10 6 molecules cm −3 at 2 m above ground level. Nitrous acid (HONO) photolysis was the dominant OH source (80.4%) during this campaign, while alkene ozonolysis (4.9%) and ozone photolysis (14.7%) were smaller contributions to OH production. To evaluate recycling sources of OH from HO 2 and RO 2 , an observationally constrained University of Washington Chemical Mechanism (UWCM) box model (version 2.1) was employed to simulate ambient OH levels over several scenarios. For the base run, not constrained by observed HONO, the model significantly underestimated OH by a factor of 5.3 in the morning (9:00-11:00) and by a factor of 3.2 in the afternoon (13:00-15:00). The results suggest that known chemistry cannot constrain HONO and, subsequently, OH during the observational period. When HONO is constrained in the model by observations (< 50 m), the discrepancy between observation and model simulation improves to a factor of 1.3 in the morning and a factor of 1.1 in the afternoon, within the 35% estimated instrumental uncertainty. However, the model produces both a morning and afternoon maximum in OH, in contrast to the observations, which show strong evidence for morning OH production but no distinct morning maximum. Two additional OH sources were also considered, although they do not improve the differences in modeled and measured temporal OH profiles. First, the impact of daytime HONO gradients near the ground surface (< 20 m) was evaluated. Strong HONO gradients were observed between 06:00 and 09:00 MST (mountain standard time), especially within 20 m of the surface. When constrained to HONO observed below 20 m (rather than 50 m), the model produced an even larger morning OH maximum, in contrast to the observations. Second, Cl atoms from ClNO 2 photolysis producing RO 2 from reaction with alkanes, while significant, produced steady state Cl atom levels (~10 3 atoms cm −3 ) that were too low to significantly perturb measured OH through reactions of organic peroxy radicals produced from Cl reactions with volatile organic compounds.
Introduction
Hydroxyl radicals (OH) maintain the oxidation capacity of the troposphere. The tropospheric OH level is determined by photolytic and recycling sources from the HO X -RO X radical pool [Heard and Pilling, 2003]. Levy [1971] postulated the main OH photolytic production pathway to be initiated by ozone photolysis, followed by reaction of the excited oxygen atom with water vapor:

O 3 + hν → O( 1 D) + O 2 (R1)

O( 1 D) + H 2 O → 2 OH (R2)

Statistical analysis of a long-term OH observation data set (5 years) from rural Southern Germany showed a linear correlation (r 2 = 0.885) between OH concentrations and ozone photolysis rates [Rohrer and Berresheim, 2006]. The correlation observed was greater than the correlation found between measured and box model calculated OH. Other long-term data sets such as those made in the marine boundary layer by Vaughan et al. [2012] have also demonstrated a strong correlation between observed OH and solar radiation.
In spite of these results, solar radiation alone is insufficient to estimate OH levels. Intensive field campaigns that rely on accurate OH for interpretation of photochemical data require deployment of a comprehensive instrumentation suite to constrain photochemical sources of OH [e.g., Kim et al., 2013]. For example, in the correlation plot between ozone photolysis rates and OH concentrations from Rohrer and Berresheim [2006], OH concentrations were observed in the wide range of 2-5 × 10 6 molecules cm −3 for a given J O3 (O 1 D) value (2 × 10 −5 s −1 ), indicating that OH concentrations depend on multiple variables aside from solar radiation. Furthermore, several recent studies have suggested that the variables governing OH concentrations remain poorly understood. For example, several OH field-observation studies conducted in high isoprene (2-methyl-1,3-butadiene, C 5 H 8 ) environments with moderate to low NO levels (100 parts per trillion (ppt) or less) have reported significant and systematic underestimation of observed OH levels by photochemical box models [Lelieveld et al., 2008;Hofzumahaus et al., 2009]. Although there is current debate regarding OH measurement uncertainty, it has been suggested that such model underestimation is caused by unknown OH recycling sources from peroxy radicals.
Recent research results also suggest uncertainties in constraining photolytic sources of OH. For example, HONO, an important photolytic source for OH via HONO + hν → OH + NO (R3), has been observed at higher levels than those that can be explained by our current knowledge of tropospheric chemistry [VandenBoer et al., 2013, and references therein].
This series of recent reports suggests that our current understanding of recycling and photolytic sources of OH is insufficient to constrain OH concentrations in the troposphere. These uncertainties limit the understanding of photochemical ozone and secondary organic aerosol production initiated by OH oxidation of trace gases in the troposphere.
The majority of HO x field investigations have taken place under summertime and/or warm conditions characterized by large solar actinic flux and high relative humidity, both of which affect the major photolytic sources for OH. The available OH observations show that our understanding of winter photochemistry is even more limited than it is in summer [Heard et al., 2004;Ren et al., 2006]. Heard et al. [2004] reported higher than expected OH levels at noon (1.5 × 10 6 molecules cm −3 ) during January. Using observations from the NACHTT campaign [Brown et al., 2013], we evaluate chemical sources and sinks of OH, especially the relative importance of photolytic and recycling sources of OH. We test our understanding of wintertime photochemistry using the University of Washington Chemical Mechanism (UWCM) to determine the influence of ambient HONO and chlorine atom concentrations on the model calculated OH.
The NACHTT campaign included measurements of the major HO x sources, including photolysis of ozone, HONO [VandenBoer et al., 2013] and ClNO 2 [Thornton et al., 2010]. It therefore provides an opportunity to investigate HO x abundance in an urban/suburban winter environment concurrently with a unique set of measurements to constrain radical sources during a season when unconventional chemical mechanisms are likely to play an important or even dominant role in oxidant formation.
Methods
An overview paper for the NACHTT-11 field campaign [Brown et al., 2013] has detailed information on the observation site, the deployed instrumentation, and the sampling strategy. The 3 weeks of observations (late February to mid-March) showed that (1) nighttime radical reservoir species such as ClNO 2 and HONO were consistently observed in urban air at night, (2) HONO was concentrated near the surface at night, and (3) C 2 -C 5 alkane species accounted for most of the calculated OH reactivity of the observed VOC species [Swarthout et al., 2013]. A chemical ionization mass spectrometer (CIMS) for OH observation was located in a ground trailer along with the measurement suite for VOCs (proton transfer reaction-mass spectrometry and gas chromatography-mass spectrometry systems) and an ozone analyzer. The OH inlet was located 2 m above ground level (AGL). The tower-borne observation data (vertical profiles of 3 m to 270 m AGL) used for the data analysis in this paper were filtered to include only elevations between 1 and 50 m AGL in the analysis of OH data. The elevator returned to surface level approximately once every 20 min during continuous vertical profiling and was parked at an elevation below 50 m otherwise. Table 1 summarizes analytical details about the instruments and observed parameters presented here.
The HONO quantification technique, deployed for the NACHTT campaign, is described in VandenBoer et al.
[2013]. The negative-ion proton-transfer chemical ionization mass spectrometer utilized CH 3 COO − to ionize species with a lower gas phase proton affinity than acetic acid. The analytical system was integrated on the tower platform for vertical profile sampling. The nominal sensitivity was ~10 Hz ppt −1 , and the lower limit of detection is 3.8 ppt (2σ) with 17% observational uncertainty.
A Chemical Ionization Mass Spectrometer for OH Measurements
Measurements of OH were made with an identical configuration as presented in Kim et al. [2013]. Ambient OH was chemically converted to H 2 34 SO 4 by injection of excess 34 SO 2 , followed by chemical ionization using nitrate ions (NO 3 − , R4) [Tanner and Eisele, 1995].
where m = 0 or 1 and n, p, and r are dependent upon water vapor concentrations.
The ion clusters from the above ion-neutral reaction are dissociated in the cluster dissociation chamber then analyzed by a quadrupole-channeltron unit. The instrument background was checked at 1 min intervals by injecting excess propane (99.99% by Matheson TRIGAS, Inc.) to chemically remove OH from the air sample.
Calibration was conducted at least 3 times a day following the approach described in Tanner and Eisele [1995], in which OH is generated by photolysis of water vapor at 184.9 nm so that the calibration OH concentration scales with [H 2 O], σ H2O, and φ 184.9nm , where σ H2O is the absorption cross section of H 2 O (7.2 × 10 −20 cm 2 photon −1 at 184.9 nm; Cantrell et al. [1997]) and φ 184.9nm is the photon flux (photon cm −2 ). The estimated total uncertainty in OH measured during the NACHTT-11 field campaign is 35%, including statistical errors in the calibration processes (3σ) [Mauldin et al., 2010]. The lower limit of detection is estimated to be 5 × 10 5 molecules cm −3 (2σ) for 10 min integration.
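The calibration arithmetic can be sketched as follows, assuming the standard water-photolysis relation in which the OH produced in the calibrator equals the water vapor number density times the H 2 O absorption cross section times the 184.9 nm photon fluence; the water vapor value and the fluence below are hypothetical placeholders rather than campaign values.

```python
# Sketch of the 184.9 nm water-photolysis calibration, assuming
#   [OH]_cal = [H2O] * sigma_H2O * phi_184.9nm
# where phi is the photon fluence seen by the calibration air.  All numerical
# inputs except sigma_H2O are hypothetical placeholders.

SIGMA_H2O = 7.2e-20        # cm^2 photon^-1 at 184.9 nm (Cantrell et al., 1997)

def oh_from_calibration(h2o_number_density, photon_fluence):
    """Return the OH number density (molecules cm^-3) produced in the calibrator."""
    return h2o_number_density * SIGMA_H2O * photon_fluence

h2o = 0.005 * 2.5e19       # molecules cm^-3 (~0.5% water vapor, placeholder)
fluence = 1.0e11           # photons cm^-2 (placeholder lamp fluence)
print(f"[OH]_cal = {oh_from_calibration(h2o, fluence):.2e} molecules cm^-3")
```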
There have been conflicting reports on interferences in OH measurements, especially by the Laser Induced Fluorescence (LIF) technique. The LIF technique for OH quantification has been more widely used than the CIMS technique [e.g., Heard and Pilling, 2003]. Mao et al. [2012] reported that a conventional background characterization by wavelength modulation with LIF results in measurements of significantly higher (1.5-2.5 times) OH levels compared to those obtained from a chemical removal background method in a forest environment rich in biogenic VOCs. Although measurements during NACHTT were not strongly influenced by biogenic VOCs or their oxidation products, we note that the CIMS-based instruments have commonly used the chemical removal background method. It should be noted, however, that the deployment of the CIMS OH observation technique has mostly been limited to pristine, low biogenic VOC environments such as Mauna Loa Observatory or polar regions [Heard and Pilling, 2003;Mauldin et al., 2010;Liao et al., 2011]. In any case, as constraining tropospheric oxidation capacity becomes a crucial research topic to understand regional air pollution and radiative forcers, such as ozone and secondary organic aerosols, respectively [e.g., Lu et al., 2013;Lelieveld et al., 2008], a comprehensive examination of observational and modeling capacity on tropospheric OH should be conducted in many different environments, including the winter season.
University of Washington Chemical Mechanism
The UWCM 2.1 is equipped with HO x (= OH + HO 2 )-RO x (organic peroxy and alkoxy radical)-NO x couplings as described in Wolfe and Thornton [2011]. The source code is open to the public and can be downloaded at https://sites.google.com/site/wolfegm/code-archive. Measurements of commonly encountered alkane, aromatic, and alkene species were made by GC-MS and PTR-MS (Table 1); their degradation chemistry was taken from the Master Chemical Mechanism (Saunders et al. [2003]) and implemented in the UWCM box model calculations. The chemical mechanisms of 27 VOC species (methane, ethane, propane, n-butane, i-butane, n-pentane, i-pentane, neopentane, n-hexane, n-heptane, n-octane, n-nonane, n-decane, ethene, propene, 1-butene, c-2-butene, t-3-butene, 1-pentene, c-2-pentene, t-2-pentene, 2-methyl-1-butene, 2-methyl-2-butene, 1-hexene, t-2-hexene, benzene, toluene, and formaldehyde) were incorporated into the UWCM model by including all reactions and intermediate products. These VOCs explained most of the trace gas OH reactivity in the model, as calculated from the observational data set (Figure 1). Measured oxygenated VOCs (OVOCs) contributed insignificant OH reactivity. Formaldehyde (CH 2 O) can be an important photolytic source for HO 2 and was constrained by the observed concentrations in the model calculations. Photolysis rates in the model were calculated using the scheme presented in Saunders et al. [2003]. We ran the model for 3 days with an identical constraint set to obtain a steady state OH daily variation. The 3 day calculation results are presented below.
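To give a sense of what an observationally constrained box-model calculation involves, the following is a deliberately simplified, hypothetical sketch: constrained quantities (HONO, O3, NO, total OH reactivity) are held fixed while OH and HO2 are taken to be in photostationary steady state at each time of day. It is not the UWCM code, and every numerical value is a placeholder rather than a campaign measurement.

```python
import numpy as np

# Hypothetical, highly simplified sketch of a constrained box-model OH calculation.
# Constrained quantities are placeholders; rate constants are ~298 K values.
K_OH_LOSS = 6.0            # s^-1, OH reactivity to CO + VOCs (constrained placeholder)
K_HO2_NO  = 8.1e-12        # cm^3 molec^-1 s^-1, HO2 + NO -> OH + NO2
NO        = 2.5e10         # molec cm^-3 (~1 ppb, placeholder constraint)
F_RECYCLE = 0.7            # assumed fraction of OH + VOC reactions that return HO2
HONO, O3  = 2.4e9, 1.0e12  # molec cm^-3 (placeholder constraints)

def diel_oh(hours):
    """Steady-state OH for each hour of day under the simplified radical budget."""
    sun = np.clip(np.sin(np.pi * (hours - 6.0) / 12.0), 0.0, None)
    j_hono = 1.3e-3 * sun            # s^-1, crude clear-sky diurnal shape
    j_o3_eff = 1.5e-7 * sun          # s^-1, J(O1D) times the O(1D)+H2O fraction
    p_primary = j_hono * HONO + 2.0 * j_o3_eff * O3
    # Coupled OH/HO2 steady state with HO2 lost only to NO:
    #   OH  = (P_primary + k_HO2+NO * [NO] * HO2) / k'_OH
    #   HO2 = F_RECYCLE * k'_OH * OH / (k_HO2+NO * [NO])
    # which collapses to a recycling amplification factor of 1 / (1 - F_RECYCLE).
    return p_primary / (K_OH_LOSS * (1.0 - F_RECYCLE))

hours = np.arange(0.0, 24.0, 0.5)
oh = diel_oh(hours)
ho2 = F_RECYCLE * K_OH_LOSS * oh / (K_HO2_NO * NO)
print(f"modelled noon OH  ~ {oh[hours == 12.0][0]:.2e} molecules cm^-3")
print(f"modelled noon HO2 ~ {ho2[hours == 12.0][0]:.2e} molecules cm^-3")
```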
Observations
The observed daily variations in OH, O 3 , J O3 (O 1 D), HONO, NO X , and OH reactivity from VOCs during the NACHTT campaign are shown in Figure 1. The noontime (average over 11:30 to 12:30 MST; mountain standard time) OH concentrations were, on average, ~2.7 × 10 6 molecules cm −3 . This observed value is higher (by a factor of 1.5 to 1.9) than the concentrations reported from the two previous winter OH observations (Table 2). One may attribute the difference to the fact that J O3 (O 1 D) during the NACHTT campaign was much higher (2 to 10 times) than the other two winter field campaigns. However, as summarized in Table 2, ozone photolysis was not a major contribution toward OH production for any of the three campaigns compared with the sum of HONO photolysis and alkene ozonolysis. The daily variations in the OH production rate from ozone photolysis, HONO photolysis, and alkene ozonolysis during the NACHTT campaign are shown in Figure 2. HONO photolysis is the dominant primary OH source during the daytime (~80% at noon). Alkene ozonolysis was less important to OH production during NACHTT than was observed in previous wintertime field measurements, where it was comparable to or greater than HONO photolysis. Alkene compounds were observed at typical wintertime U.S. urban background levels [Gilman et al., 2013], and the average OH production rate from ozonolysis of alkenes (11:00-15:00 MST) was 1.1 × 10 5 molecules cm −3 s −1 . This average is only 2% of the rate observed in Birmingham from 11:00 to 15:00 (5.4 × 10 6 molecules cm −3 s −1 ; Heard et al. [2004]). On the other hand, OH production from ozone photolysis (2.9 × 10 5 molecules cm −3 s −1 ) during the NACHTT campaign was nearly an order of magnitude higher than the rate observed in Birmingham (5 × 10 4 molecules cm −3 s −1 ). Lastly, the OH production rate from HONO photolysis (3.1 × 10 6 molecules cm −3 s −1 ) during the NACHTT campaign was the same as the rate observed in Birmingham (3.1 × 10 6 molecules cm −3 s −1 ). This major contribution of HONO photolysis to OH production is reflected in the observed asymmetrical OH daily variation toward morning (06:00-09:00 MST). When viewed relative to J O3 (O 1 D), OH concentrations in the morning were higher than those observed in the afternoon, due at least in part to the strong OH production from morning HONO photolysis (Figures 1 and 2) as our subsequent analyses indicate (vide infra).
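The primary production terms compared above follow directly from measured photolysis frequencies and concentrations; the sketch below shows the arithmetic with hypothetical noon-time inputs of roughly the magnitudes quoted in the text. The rate constants, OH yields, alkene levels, and the O(1D)+H2O fraction are assumptions for illustration only.

```python
# Primary OH production terms (molecules cm^-3 s^-1), illustrated with
# hypothetical noon-time values of roughly the magnitude reported in the text.

# HONO photolysis: P = J_HONO * [HONO]
J_HONO, HONO = 1.3e-3, 2.4e9                 # s^-1, molec cm^-3 (placeholders)
p_hono = J_HONO * HONO

# Ozone photolysis: P = 2 * J_O3(O1D) * [O3] * f_H2O, where f_H2O is the
# fraction of O(1D) that reacts with water rather than being quenched.
J_O1D, O3, F_H2O = 2.0e-5, 1.0e12, 0.007     # placeholders
p_o3 = 2.0 * J_O1D * O3 * F_H2O

# Alkene ozonolysis: P = sum_i( yield_i * k_i * [alkene_i] * [O3] ).
# Only two of the many measured alkenes are included here.
OZONOLYSIS = {"ethene": (1.6e-18, 0.16), "propene": (1.0e-17, 0.34)}  # k, OH yield
ALKENES = {"ethene": 2.5e10, "propene": 7.5e9}                        # molec cm^-3
p_alkene = sum(k * y * ALKENES[name] * O3 for name, (k, y) in OZONOLYSIS.items())

total = p_hono + p_o3 + p_alkene
for label, p in [("HONO photolysis", p_hono), ("O3 photolysis", p_o3),
                 ("alkene ozonolysis", p_alkene)]:
    print(f"{label:>18}: {p:.2e} molec cm^-3 s^-1 ({100 * p / total:.0f}%)")
```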
We can examine this radical source further by distinguishing between the nighttime source of HONO and the variety of proposed, but less certain, daytime sources of HONO (e.g., formation on photoexcited organic substrates [Ammann et al., 1998], from soil pore water [Su et al., 2011], or microbial processes [Oswald et al., 2013]). These daytime HONO processes have not been widely incorporated in tropospheric chemistry modeling studies. Due to its short photochemical lifetime, HONO that is present during daytime, as reported in numerous recent publications, implies a more rapid formation process than the one responsible for its nighttime buildup. Thus, we distinguish daytime production of HONO as chemically distinct from nighttime HONO. These two HONO sources were separated to compare their contribution to the OH formation calculated (Figure 3a). We assume that HONO at, or before, sunrise came solely from nighttime sources such as heterogeneous uptake of NO 2 to ground surfaces or direct emissions and designate it as nighttime HONO. Based on the nighttime HONO levels, the HONO decay after sunrise due to photolysis was calculated, and this loss process was used to estimate the relative contributions of nighttime and daytime HONO to OH production by comparing observed HONO to the calculated nighttime residual HONO as a function of time after sunrise. Radical production from photolysis of HONO was calculated along with the contribution from reaction of O( 1 D) and water ( Figure 3b). All calculations were done using diurnal averages of measurements made below 15 m on clear days while measurements of OH were made (17, 19-26, and 28 February 2011). We assumed that photolysis rate constants vary consistently with time on clear days. Clear days were defined using diurnally integrated J NO2 measurements. Only days that had integrated J NO2 within 20% of the sunniest day were included.
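A sketch of the attribution calculation described above: the HONO present at sunrise is decayed forward in time using the photolysis frequency alone, and any observed HONO in excess of that residual is attributed to daytime production. The half-hourly arrays below are hypothetical stand-ins for the clear-day campaign averages.

```python
import numpy as np

# Hypothetical half-hourly diurnal averages (index 0 corresponds to sunrise).
dt = 1800.0                                                              # s per step
j_hono = 1.0e-3 * np.clip(np.sin(np.pi * np.arange(20) / 20.0), 0, None)  # s^-1
hono_obs = 1.0e10 * np.exp(-np.arange(20) / 6.0) + 2.0e9                  # molec cm^-3

# Decay the sunrise (nighttime-accumulated) HONO by photolysis only.
hono_night = np.empty_like(hono_obs)
hono_night[0] = hono_obs[0]
for i in range(1, len(hono_obs)):
    hono_night[i] = hono_night[i - 1] * np.exp(-j_hono[i - 1] * dt)

# Split the HONO-derived OH source into nighttime-residual and daytime parts.
p_oh_night = j_hono * hono_night
p_oh_day = j_hono * np.clip(hono_obs - hono_night, 0, None)
frac_night = p_oh_night.sum() / (p_oh_night.sum() + p_oh_day.sum())
print(f"fraction of HONO-derived OH from nighttime HONO: {100 * frac_night:.0f}%")
```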
Nighttime HONO accounted for 7% of the total OH formed from HONO and was important only before 08:30 MST. For the first hour after sunrise, OH formed from nighttime HONO accounted for the majority of the HONO radical source. We compare the OH produced from HONO at NACHTT to another campaign where similar observations and calculations were undertaken. During May and June of 2010, measurements (at 10 m AGL) were made in Pasadena, CA, as part of the CalNex campaign [Young et al., 2012]. In Figure 3b, the diurnal integrated radical production from HONO is compared for both campaigns. It is clear that the magnitude of OH production from HONO is similar in both locations, despite very different environments and conditions. However, if we compare the contribution of the reaction of O( 1 D) with water (R2), we see a dramatic difference between the two campaigns. During CalNex, the dominant radical source was the reaction of O( 1 D) with water, which is common for urban summertime conditions [Alicke et al., 2003;Volkamer et al., 2010]. This radical source was approximately twice as important as the contribution from HONO photolysis. In contrast, during the winter conditions of NACHTT, HONO photolysis was the dominant radical source, contributing more than 15 times as many radicals as the reaction of O( 1 D) with water.
Lastly, to quantify OH from HONO formed by the termolecular reaction, OH + NO + M, a well-known HONO formation process, the net OH production rates were calculated from equation (6) and are shown in Figure 2b. The data set was filtered for HONO and NO observations below 50 m.

netP(OH) HONO photolysis = J HONO [HONO] − L(HONO) HONO+OH − L(OH) OH+NO (6)
where L(HONO) HONO+OH is a HONO and OH loss term from HONO + OH → H 2 O + NO 2 and L(OH) OH+NO is an OH loss term from OH + NO + M → HONO + M.
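A worked numerical illustration of equation (6), using hypothetical mid-day values; the HONO, OH, and NO levels are placeholders, and the rate constants are approximate 298 K literature values, not the campaign averages.

```python
# Net OH production from HONO, following equation (6); all concentrations are
# hypothetical mid-day placeholders and rate constants are approximate 298 K values.
J_HONO    = 1.3e-3        # s^-1
K_HONO_OH = 6.0e-12       # cm^3 molec^-1 s^-1, HONO + OH -> H2O + NO2
K_OH_NO_M = 8.0e-12       # cm^3 molec^-1 s^-1, effective OH + NO + M -> HONO + M

HONO = 1.0e9              # molec cm^-3 (placeholder)
OH   = 2.7e6              # molec cm^-3 (observed noon average)
NO   = 2.5e10             # molec cm^-3 (~1 ppb, placeholder)

gross        = J_HONO * HONO             # gross P(OH) from HONO photolysis
loss_hono_oh = K_HONO_OH * HONO * OH     # L(HONO)_HONO+OH
loss_oh_no   = K_OH_NO_M * OH * NO       # L(OH)_OH+NO (re-forms HONO)
net          = gross - loss_hono_oh - loss_oh_no

print(f"gross P(OH) = {gross:.2e} molec cm^-3 s^-1")
print(f"net   P(OH) = {net:.2e} molec cm^-3 s^-1")
```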
In the morning (06:00-09:00 MST), the net OH production from HONO (netP(OH) HONO photolysis ) was the dominant OH source. Between 11:00 and 15:00 MST, the net OH contribution from HONO is comparable to or less than ozone photolysis (netP(OH) HONO photolysis = 1.9 × 10 5 molecules cm −3 s −1 versus P(OH) ozone photolysis = 3.2 × 10 5 molecules cm −3 s −1 ). For comparison, Heard et al. [2004] report a winter OH source from HONO photolysis of 6.9 × 10 5 molecules cm −3 s −1 in Birmingham, England, in January-February 2000 over the same time period, more than 3 times the net source during NACHTT. However, this is still a significant contribution to OH photolytic production from HONO (~37%) during NACHTT because netP(OH) HONO photolysis is twice as high as the OH production rate from alkene ozonolysis (9.2 × 10 4 molecules cm −3 s −1 ).
Zero-Dimensional Box Modeling Results
The two most important processes maintaining OH levels in the troposphere are primary production and recycling production from peroxy radicals (HO 2 ) and organic peroxy radicals (RO 2 ). Henceforth, we will refer to all the direct production pathways of OH from photolytic sources (e.g., ozone and HONO) and alkene ozonolysis as primary sources to differentiate them from recycling processes. Since Levy [1971] first postulated the importance of tropospheric OH in maintaining oxidation capacity, the reaction between HO 2 and NO has been regarded as the main recycling mechanism for OH. During the NACHTT campaign, alkanes comprised the largest fraction of VOC reactivity to OH among the VOC classes observed by PTR-MS and GC-MS (Figure 1b). Peroxy radical photochemistry from alkane oxidation is relatively well documented [Atkinson et al., 2008], and so it would be expected that box-model calculations constrained by observations will reproduce observed OH levels in this environment. Figure 4 contains the observed OH diurnal variation (red trace) and four different model scenarios described in the caption. In addition, Table 3 summarizes 2 h averaged morning (9:00 to 11:00) and afternoon (13:00 to 15:00) OH concentrations from the observed and model calculated results. The base scenario (blue trace) is a UWCM run that has been observationally constrained as described in section 2.2, excluding HONO observations. The significant underestimation of OH in this model run suggests that gas-phase HONO formation processes included in the UWCM model cannot reproduce the observed HONO levels, resulting in a significantly lower contribution of HONO photolysis to primary OH production. A detailed discussion of HONO sources and sinks during the NACHTT campaign can be found in VandenBoer et al. [2013]. Simulated OH levels more closely match the observed OH levels when constrained by average measured HONO below 50 m AGL (black trace, Figure 4). The predicted OH, in general, agrees within the observational uncertainty (35%). However, the model results show two [OH] peaks, at 7:00 and 13:00, and suppression of OH in between, which was not seen in the observed daily variation.
There are at least two additional factors that likely influence the modeled OH, although neither can reconcile the different temporal profiles of modeled and measured OH. First, VandenBoer et al.
[2013] observed positive gradients in HONO concentrations near the ground surface, especially during the early morning (06:00-08:00 MST). Measurements of HONO below 50 m AGL were averaged for this work due to the periodic nature of HONO measurements made this close to the surface, as the instrument was investigating vertical profiles up to 250 m AGL on a near-continuous basis. Thus, HONO concentrations at 2 m AGL, where the OH inlet was located, are suspected to be significantly higher than the averaged HONO concentrations from the data set under 50 m AGL, based on the periodic surface level HONO observations, especially if there was a strong surface source of HONO. Such a source could come from surface-deposited HONO during the night, as VandenBoer et al.
[2013] hypothesized. Indeed, averaged diurnal variations of HONO below 20 m (down to 1 m) and below 50 m AGL (down to 1 m) show positive gradients approaching the ground especially during the early morning ( Figure 5).
The higher HONO concentrations observed within 20 m AGL were compared to the previous case and base case. Figure 4 shows that the average HONO measurements below 20 m AGL (black trace, filled triangles), when used as a model constraint, enhance OH concentrations, especially in the early morning (20% overestimation, see Table 2 and Figure 4). However, inclusion of this larger morning HONO increases morning OH, leading to an even larger estimate of OH near 8:30 than the previous case. This time period (9:00 to 11:00) coincides with the transition between increasing J NO2 and decreasing HONO levels (Figure 5). This period has been previously described as a maximum in HONO upward flux from the ground surface [Ren et al., 2011]. If this is also the case during NACHTT, OH concentrations near the surface (e.g., 2 m AGL where the OH instrument inlet was located) could be enhanced in the presence of an upward flux of HONO from the ground surface. Although the larger OH level from HONO photolysis is likely to be more realistic, and even though the actual HONO source at 2 m may be larger than the 20 m average, the inclusion of this source does not significantly improve the model to measurement comparison. This is contradictory, as the diurnally averaged OH appears to be strongly influenced by a morning source other than ozone photolysis (see Figure 1). The predicted HONO mixing ratio in the base case is below 100 ppt, which is ~3-5 times lower than observed mixing ratios, congruent with the findings of VandenBoer et al. [2013] that there is a surface source of HONO during the day at this site. Follow-up studies including vertical gradient measurements of OH near the surface are required to explore the potential role of HONO in maintaining near-surface oxidation capacity during the winter.
The second additional factor influencing radical generation during NACHTT is OH produced via organic peroxy radicals generated from reactions between atomic Cl and alkane compounds. Thornton et al. [2010] presented wintertime reactive chlorine observations from February 2009 in Boulder, Colorado, which is ~25 km west of the NACHTT field site. Observationally constrained atomic chlorine source estimations indicate up to a factor of 10 higher Cl radical production rates than previously estimated values, mostly from the Cl radical reservoir species ClNO 2 [Young et al., 2013]. During the NACHTT field campaign, a level of ClNO 2 comparable to that reported by Thornton et al. [2010] was observed. To estimate Cl atom number densities, we assumed that ClNO 2 photolysis was the only Cl atom source and Cl + VOCs as the only Cl atom sink. J ClNO2 was estimated from observed J NO2 and J O3 (O 1 D) using the empirical equation (7) developed in Young et al. [2012].
Alkane compounds can explain most of the OH reactivity with the VOCs measured during the NACHTT campaign (Figure 1). Due to the fast reaction rates of Cl atoms with alkane compounds, these reactions dominate Cl atom reactivity (Figure 6a). From these observations, we estimated the steady state Cl atom concentration as

[Cl] steady state = J ClNO2 [ClNO 2 ] / k′ Cl+VOCs

where k′ Cl+VOCs is the Cl reactivity to VOCs (s −1 ).
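A sketch of this steady-state estimate, assuming ClNO 2 photolysis is the only Cl source and reaction with the measured alkanes the only sink. The photolysis frequency, ClNO 2 mixing ratio, and alkane levels below are hypothetical placeholders chosen only to land in the right order of magnitude.

```python
# Steady-state Cl atoms: [Cl]_ss = J_ClNO2 * [ClNO2] / k'_Cl+VOCs.
# All inputs are hypothetical placeholders; rate constants are ~298 K values.
AIR = 2.5e19                                  # molec cm^-3

J_CLNO2 = 5.0e-5                              # s^-1 (placeholder winter value)
CLNO2 = 50e-12 * AIR                          # 50 ppt ClNO2 (placeholder)

# Cl + alkane rate constants (cm^3 molec^-1 s^-1) and assumed mixing ratios (ppb).
alkanes = {
    "ethane":   (5.9e-11, 10.0),
    "propane":  (1.4e-10,  5.0),
    "n-butane": (2.2e-10,  2.0),
}
k_prime = sum(k * ppb * 1e-9 * AIR for k, ppb in alkanes.values())   # s^-1

cl_ss = J_CLNO2 * CLNO2 / k_prime
print(f"k'_Cl+VOCs = {k_prime:.1f} s^-1, [Cl]_ss = {cl_ss:.1e} atoms cm^-3")
```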
The estimated [Cl] steady state is shown in Figure 6b. Estimated number densities of up to 10 3 atoms cm −3 are significantly lower than previous estimates of 10 4 to 10 5 atoms cm −3 , mostly over marine boundary layers [e.g., Kim et al., 2008]. The high levels of alkanes and their rapid reaction with Cl atoms are responsible for the low estimate of [Cl] steady state . We included the estimated [Cl] steady state as a scenario in the model calculations.
The low [Cl] steady state levels did not increase the modeled OH levels (Figure 4b). Therefore, the potential enhancement in OH production from organic peroxy radicals formed by alkane + Cl reactions can be considered negligible.
Summary
Alkene ozonolysis was responsible for only ~5% of primary OH production during NACHTT. To evaluate recycling sources for OH, we explored observationally constrained UWCM box model simulations. For the base scenario, without constraining observed HONO, the model results significantly underestimate observed OH levels. We found a 3 to 5 times increase in OH concentrations when the model calculation was constrained by the measured HONO (< 50 m). The OH temporal variation calculated by the model with observed HONO accounts for the observed OH temporal variation within the observational uncertainty. However, the observed-HONO-constrained (< 50 m) model calculation of OH systematically underpredicted the measurements (by up to 30%), particularly between 9:00 and 11:00. Two possibilities for the higher than expected OH levels in the morning have been discussed: (1) elevated HONO concentrations near the surface originating from HONO vertical gradients near the ground, based on daytime observations; and (2) reaction of Cl atoms with alkanes to produce higher amounts of organic peroxy radicals, a recycling OH source. We found that adopting HONO levels in the model from those averaged over 1-20 m height does not help to reconcile the observed discrepancy. The model calculation in this case actually significantly overpredicted the observed OH, especially in the morning. This suggests that the box model scheme does not properly simulate HONO photochemistry, especially in the early morning. Steady-state calculations of Cl atom levels produced insufficient excess organic peroxy radicals and also did not appreciably reconcile modeled and observed OH levels. One possible explanation is that HONO emitted from a ground surface source photolyzed rapidly enough to be a significant OH source without elevated HONO being observed. This is consistent with fine resolution HONO gradient measurements showing increases of roughly a factor of 2 near the ground surface [VandenBoer et al., 2013].
Most previous photochemistry observations have been conducted during the summer season. This is understandable because higher temperature and solar radiation regulate photochemical processes to produce secondary photochemical products such as ozone and secondary organic aerosols. This study reports high OH levels that may be attributed to uncertain or poorly characterized HONO sources that have not been fully incorporated in a conventional box model (e.g., UWCM). Although most secondary photochemical pollution problems occur in the summer season, they are also possible in the winter [Rappengluck et al., 2013;Schnell et al., 2009]. Thus, more thorough investigations of wintertime OH photochemistry should be conducted, especially considering that tropospheric oxidation capacity is an important control on short-lived radiative forcers, such as methane. | 2018-12-05T21:26:15.231Z | 2013-11-04T00:00:00.000 | {
"year": 2013,
"sha1": "3a82c59550b2256f2381346a17ac55b74953cf28",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/11/2601/2014/bg-11-2601-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cd5d9c7a0365d9fc8d81066bcde635a8238f1d71",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
238013662 | pes2o/s2orc | v3-fos-license | Primary Meningeal Melanocytoma Located in the Craniovertebral Junction: A Case Report and Literature Review
Primary meningeal melanocytoma is a rare benign tumor in the central nervous system (CNS), comprising less than 0.1% of all intracranial tumors. A 44-year-old man presented with occipital headache, nausea, and vomiting. Computed tomography (CT) and magnetic resonance imaging (MRI) showed a well-defined intradural extramedullary mass lesion at the craniovertebral junction (CVJ). Gross total removal was achieved, and the patient improved symptomatically. The pathologic findings were consistent with meningeal melanocytoma. No tumor recurrence was seen on follow-up MRI two years after surgery. Cases of primary meningeal melanocytoma located at the CVJ are rare. The preoperative differential diagnosis of meningeal melanocytoma from meningioma is sometimes difficult because of their similar appearance on CT and MRI. Complete surgical removal is curative for most cases. We present a case of gross total removal of a meningeal melanocytoma located in the CVJ with references to the literature.
Introduction
Primary meningeal melanocytoma is a benign central nervous system (CNS) tumor derived from melanocytes. The most common location is the posterior fossa or upper spinal cord, [1][2][3] and a lesion located in the craniovertebral junction (CVJ) is rare. 4) As preoperative differential diagnosis of meningeal melanocytoma from meningioma can be difficult, [5][6][7] histopathologic and immunohistochemical examinations are necessary to make a definite diagnosis. 4,8) The optimal treatment of meningeal melanocytoma is complete surgical removal. 8,9) Here, we report a rare case of meningeal melanocytoma located in the CVJ and present a review of the literature.
Case Report
A 44-year-old, previously healthy man presented with a one-month history of occipital headache, nausea, and vomiting. Magnetic resonance imaging (MRI) revealed a CVJ tumor, at which point he was referred to our hospital. For a few days immediately before admission, symptoms became more severe, but physical examination and laboratory tests were normal.
Imaging studies
Computed tomography (CT) showed a well-circumscribed intradural extramedullary mass lesion in the foramen magnum behind the bulbus medullae. There were no signs of calcification or hyperostosis of the adjacent bony structures. The mass was hyperintense on T1-weighted images but without clear contrast enhancement, and hypointense on T2-weighted images (Figs. 1A-1D). Cerebral angiography showed the lesion was avascular.
Operation and postoperative course
The patient underwent removal of the tumor via the midline suboccipital approach with C1 laminectomy. After a dural incision, a blackish-grey, well-circumscribed tumor, located in the foramen magnum behind the medulla oblongata, was found (Fig. 2A). Most of the tumor was located in the subarachnoid region but partially attached to the dorsal dura mater. After dissection from the surrounding tissue, gross total removal was achieved and the dural attachment was coagulated (Fig. 2B). The postoperative course was uneventful, and the patient improved symptomatically. No radiation therapy or chemotherapy was given after surgery. At a follow-up examination 2 years after surgery, MRI showed no evidence of tumor recurrence (Fig. 1E).
Histopathological findings
Histopathological examination revealed the tumor was multicellular and a relatively monomorphic population of spindle cells which formed a fascicled or nested growth pattern. There were variable amounts of pigment deposition consistent with melanin in the cytoplasm. There were few hemorrhages and necroses, but nuclear atypia, pleomorphism, or mitotic activity was not seen. Immunohistochemical study was positive for HMB-45 and S-100 protein.
Discussion
Primary meningeal melanocytoma is a rare benign tumor of the CNS. The prevalence rate is one per 10 million, and it accounts for less than 0.06-0.1% of all brain tumors. 2) The term melanocytoma was first introduced by Limas and Tio in 1972. 10) By ultrastructural studies, they determined this tumor was derived from melanocytes rather than meningothelial cells. The most common location is the posterior fossa or upper spinal cord because leptomeningeal melanocytes are most highly concentrated in these segments. 1,2) Although primary meningeal melanocytoma may occur at the base of the brain, the CVJ is a rare location. Only 13 cases including the present case have been reported so far (Table 1). 2,4,[7][8][9][10][11][12][13][14] Most of the patients have been male and aged from 25 to 71 years with a mean age of 49.9 years. The duration of symptoms ranged from several weeks to 6 years, and the predominant symptom was headache induced by the mass effect. Our patient also presented with headache and vomiting but did not develop obstructive hydrocephalus.
On CT scans, meningeal melanocytoma is characterized by well-circumscribed, isodense or slightly hyperdense extra-axial tumors that are homogeneously enhanced by the addition of contrast media. 15) On MRI, these tumors are typically hyperintense or isointense on T1-weighted images and hypointense on T2-weighted images. 15,16) With gadolinium enhancement, most of these lesions appear uniformly enhanced, but there are some cases in which contrast enhancement is not clearly evident because of strong T1 hyperintensity on the unenhanced images. 2) Our case also showed hyperintensity on T1-weighted MRI, and gadolinium enhancement was not clear. These findings are strongly suggestive of meningeal melanocytoma. However, preoperative differential diagnosis of meningeal melanocytoma from other meningeal tumors is often difficult because of the variable content of melanin pigment, the possible presence of hemorrhage, and their similar appearance on images. 5,6) Tumor calcification and hyperostosis of adjacent bony structures have rarely been described in meningeal melanocytoma, but the lack of these signs does not definitively rule out meningioma. 6) Histopathologic and immunohistochemical examination is necessary to make a definite diagnosis of meningeal melanocytoma. 4,8) The tumor cells appear as a well-circumscribed, encapsulated, dark brown to black nodular lesion due to the abundant melanin production. They may be firmly attached to the underlying meninges, as with meningioma. In general, the tumors are highly cellular and composed of monomorphic spindle or epithelioid cells arranged in whorls, sheets, bundles, or nests. Mitotic activity is usually low, and necrosis and hemorrhage are absent.
Immunohistochemical staining showed meningeal melanocytomas are positive for HMB-45 and S-100 protein, but negative for keratin, epithelial membrane antigen, glial fibrillary acidic protein, and neuron-specific enolase. 2,4,7,9,15) MIB-1/Ki-67 labeling index helps to differentiate melanocytoma from malignant melanoma, as they are low in melanocytoma. 1,2,9) In our case, the tumor was composed of monomorphic spindle cells, which were arranged in fascicles and nests, and there was abundant melanin in the cytoplasm. Immunohistochemical studies were positive for HMB-45 and S-100 protein. The MIB-1/Ki-67 labeling index was 4%. Therefore, the histopathological and immunohistochemical diagnosis was meningeal melanocytoma.
Meningeal melanocytoma is a slow-growing benign tumor, for which complete surgical removal is recommended. 2,[4][5][6]8,9) Gross total removal is generally possible because the majority of the tumors are located in the intradural extramedullary compartment. 16) Yang et al. 2) reported that meningeal melanocytoma has a favorable prognosis, and after complete excision, no tumor progression or focal recurrence was observed in their study. Of the 13 cases summarized, all patients except for one case diagnosed by autopsy underwent gross total removal. The follow-up period ranged from 3 to 96 months (mean 24 months), and no recurrence was observed. The fact that all tumors were located on the dorsal side of the CVJ seems to have contributed to the ease of total removal. On the other hand, in patients with primary CNS melanocytoma arising from other sites, relapse and malignant transformation have been reported; clinical follow-up could therefore be necessary, and adjuvant radiation therapy is advised in cases of incomplete removal or recurrence. 4,7,15,17) In conclusion, we report a rare case of meningeal melanocytoma located at the CVJ. Preoperative diagnosis is often difficult, and histopathologic and immunohistochemical examination is essential. The outcome should be good with complete removal, but regular follow-up is necessary because of the possibility of local recurrence. | 2021-08-27T17:20:50.364Z | 2021-06-25T00:00:00.000 | {
"year": 2021,
"sha1": "bcebe92027a7b0281e9bc1539d9345b48229fc10",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/nmccrj/8/1/8_cr.2020-0191/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5e1658f95ec446ad82d45e08dea926620fe4ff38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
88511869 | pes2o/s2orc | v3-fos-license | On a class of explicit Cauchy-Stieltjes transforms related to monotone stable and free Poisson laws
We consider a class of probability measures $\mu_{s,r}^{\alpha}$ which have explicit Cauchy-Stieltjes transforms. This class includes a symmetric beta distribution, a free Poisson law and some beta distributions as special cases. Also, we identify $\mu_{s,2}^{\alpha}$ as a free compound Poisson law with L\'{e}vy measure a monotone $\alpha$-stable law. This implies the free infinite divisibility of $\mu_{s,2}^{\alpha}$. Moreover, when symmetric or positive, $\mu_{s,2}^{\alpha}$ has a representation as the free multiplication of a free Poisson law and a monotone $\alpha$-stable law. We also investigate the free infinite divisibility of $\mu_{s,r}^{\alpha}$ for $r\neq2$. Special cases include the beta distributions $B(1-\frac{1}{r},1+\frac{1}{r})$ which are freely infinitely divisible if and only if $1\leq r\leq2$.
Introduction
In random matrix theory, a Marchenko-Pastur law describes the asymptotic behavior of the spectrum of the so-called Wishart matrices [12]. In free probability, a Marchenko-Pastur (or free Poisson) law plays the role that a Poisson distribution does in probability theory: it is the limiting distribution of $\left(\left(1-\frac{\lambda}{N}\right)\delta_0 + \frac{\lambda}{N}\delta_1\right)^{\boxplus N}$ when $N \to \infty$. For this reason it is called a free Poisson law in the context of free probability. On the other hand, an arcsine law appears in probability theory as the law of the proportion of the time during which a Wiener process is non-negative. In monotone probability, an arcsine law plays the role of a Gaussian law [14]. In particular, an arcsine law is a monotone stable law with stability index α = 2 [11].
Arizmendi et al. [2] found an interplay between Marchenko-Pastur and arcsine laws. They introduced a class FTA of freely infinitely divisible distributions whose Lévy measures are mixtures of a symmetric arcsine law. The building block of this class is a symmetric beta distribution $b_s$. The free Lévy measure of $b_s$ coincides with an arcsine law. Moreover, $b_s$ is equal to the free multiplicative convolution of an arcsine law with a Marchenko-Pastur law, and hence is freely infinitely divisible. Moreover, its Cauchy-Stieltjes transform (or Cauchy transform for short), denoted (1.1), can be calculated explicitly. This paper studies a class of Cauchy-Stieltjes (or Cauchy for short) transforms related to Marchenko-Pastur laws and monotone stable laws. We deform the Cauchy transform (1.1) to introduce a family of probability measures which includes the symmetric beta distribution $b_s$, Marchenko-Pastur and some other beta distributions as special cases. More explicitly, for $0 < \alpha \leq 2$, we define
$$G^{\alpha}_{s,r}(z) = -\frac{r^{1/\alpha}\left(1-\left(1-s\left(-\tfrac{1}{z}\right)^{\alpha}\right)^{1/r}\right)^{1/\alpha}}{s^{1/\alpha}}, \qquad r > 0,\ s \in \mathbb{C}\setminus\{0\}. \tag{1.2}$$
The branches of the powers have to be defined carefully and the precise definition is presented in Section 3. It can be shown that the function (1.2) defines the Cauchy transform of a probability measure $\mu^{\alpha}_{s,r}$ for $1 \leq r < \infty$ and $(\alpha, s)$ satisfying what we call an admissible condition. This condition is related to stable distributions.
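As a quick, hedged illustration (not part of the paper), the following Python sketch evaluates (1.2) using principal branches of the complex powers (the paper chooses the branches more carefully) for the admissible parameters α = 1, s = −1, r = 2, and checks two properties a Cauchy transform must have: values in the lower half-plane for arguments in the upper half-plane, and zG(z) → 1 at infinity. The test points are arbitrary.

```python
import numpy as np

# Hedged numerical sketch (not from the paper): evaluate the deformed Cauchy
# transform of (1.2) with principal branches and check, for the admissible
# case alpha = 1, s = -1, r = 2, that (i) Im G(z) < 0 on the upper half-plane
# and (ii) z*G(z) -> 1 as |z| -> infinity.

def G(z, alpha, s, r):
    inner = 1.0 - s * (-1.0 / z) ** alpha            # 1 - s(-1/z)^alpha
    return -(r * (1.0 - inner ** (1.0 / r)) / s) ** (1.0 / alpha)

alpha, s, r = 1.0, -1.0, 2.0
zs = np.array([0.3 + 0.2j, 1.0 + 1.0j, -2.0 + 0.5j, 5.0 + 0.1j])
print(np.all(G(zs, alpha, s, r).imag < 0))           # expected: True
for radius in (1e2, 1e4, 1e6):
    z = radius * np.exp(1j * np.pi / 3)              # a point high in the upper half-plane
    print(abs(z * G(z, alpha, s, r) - 1.0))          # expected: -> 0
```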
The reciprocal Cauchy transforms $F^{\alpha}_{s,r} = 1/G^{\alpha}_{s,r}$ satisfy the composition rule $F^{\alpha}_{s,r} \circ F^{\alpha}_{us,u} = F^{\alpha}_{us,ur}$.
We note that the same relation appears for probability measures introduced by Młotkowski [13]. This relation enables us to calculate the inverse map explicitly: the right inverse of the reciprocal Cauchy transform is given by $(F^{\alpha}_{s,r})^{-1} = F^{\alpha}_{s/r,1/r}$. The inverse map of the reciprocal Cauchy transform, which is hard to calculate in general, is crucial to investigate free infinite divisibility. Therefore, the explicit form of $(F^{\alpha}_{s,r})^{-1}$ is quite useful and we can prove the free infinite divisibility of $\mu^{\alpha}_{s,r}$ for some parameters. The probability measure $\mu^{\alpha}_{s,2}$ turns out to be a free compound Poisson distribution with Lévy measure a monotone $\alpha$-stable law $a^{\alpha}_{s/4}$. From Proposition 4 of [15], if symmetric or positive, $\mu^{\alpha}_{s,2}$ coincides with the free multiplicative convolution of a Marchenko-Pastur law $m$ and the monotone $\alpha$-stable distribution $a^{\alpha}_{s/4}$: $\mu^{\alpha}_{s,2} = m \boxtimes a^{\alpha}_{s/4}$. Moreover, $\mu^{\alpha}_{s,r}$ is freely infinitely divisible for other parameters, not only for $r = 2$. An interesting case of $\mu^{\alpha}_{s,r}$ is $\mu^{1}_{-1,r}$, which is a beta distribution with the density $\frac{r\sin(\pi/r)}{\pi}x^{-1/r}(1-x)^{1/r}$ on $(0,1)$. We prove that this is freely infinitely divisible if and only if $1 \leq r \leq 2$. We also mention that, while an arcsine law is not freely infinitely divisible, some monotone stable laws are. This fact was implicitly proved by Biane in a different context; see Corollary 4.5 of [9].
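A hedged numerical check (again not from the paper) of the composition rule and of the inverse relation just stated, using principal branches and the real-parameter beta case α = 1, s = −1 with u = 3, r = 2; the sample points are arbitrary.

```python
# Check F^alpha_{s,r} o F^alpha_{us,u} = F^alpha_{us,ur} and
# (F^alpha_{s,r})^{-1} = F^alpha_{s/r,1/r} numerically for alpha = 1, s = -1.

def F(z, alpha, s, r):
    inner = 1.0 - s * (-1.0 / z) ** alpha
    G = -(r * (1.0 - inner ** (1.0 / r)) / s) ** (1.0 / alpha)
    return 1.0 / G

alpha, s, u, r = 1.0, -1.0, 3.0, 2.0
for z in (0.7 + 0.9j, -1.3 + 0.4j, 4.0 + 2.0j):
    lhs = F(F(z, alpha, u * s, u), alpha, s, r)
    rhs = F(z, alpha, u * s, u * r)
    back = F(F(z, alpha, s, r), alpha, s / r, 1.0 / r)
    print(abs(lhs - rhs), abs(back - z))   # both expected to be near machine precision
```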
The Voiculescu transform and the R-transform
In this paper, C + and C − respectively denote the upper half-plane and the lower half-plane of C.
An additive free convolution $\mu \boxplus \nu$ of compactly supported probability measures $\mu$ and $\nu$ on $\mathbb{R}$ is the probability distribution of $X+Y$, where $X$ and $Y$ are self-adjoint free independent random variables with distributions $\mu$ and $\nu$, respectively [19]. This convolution was extended to all Borel probability measures in [8]. A probability measure $\mu$ on $\mathbb{R}$ is said to be $\boxplus$-infinitely divisible if for any $n \in \mathbb{N}$, there is $\mu_n$ such that $\mu = \mu_n^{\boxplus n}$. For a probability measure $\mu$ on $\mathbb{R}$, let us denote by $G_\mu$ the Cauchy transform and by $F_\mu$ its reciprocal: $G_\mu(z) = \int_{\mathbb{R}} \frac{\mu(dx)}{z-x}$ and $F_\mu(z) = \frac{1}{G_\mu(z)}$. Bercovici and Voiculescu [8] proved the existence of $\eta, \eta' > 0$ and $M, M' > 0$ such that $F_\mu$ is univalent in $\Gamma_{\eta,M} := \{z \in \mathbb{C}^+ : \operatorname{Im} z > M,\ |\operatorname{Im} z| > \eta |\operatorname{Re} z|\}$ and $F_\mu(\Gamma_{\eta,M})$ contains $\Gamma_{\eta',M'}$; hence the Voiculescu transform $\phi_\mu(z) := F_\mu^{-1}(z) - z$ is defined on $\Gamma_{\eta',M'}$, and the function $R_\mu(z) := z\,\phi_\mu(1/z)$ is called an R-transform. A probability measure $\mu$ is $\boxplus$-infinitely divisible if and only if $\phi_\mu$ is the restriction of an analytic map from $\mathbb{C}^+$ into $\mathbb{C}^- \cup \mathbb{R}$ [8]. This is also equivalent to the Lévy-Khintchine type representation suggested in [4], $R_\mu(z) = cz + az^2 + \int_{\mathbb{R}} \left(\frac{1}{1-xz} - 1 - xz\,\mathbf{1}_{[-1,1]}(x)\right)\nu(dx)$, for some $c \in \mathbb{R}$, $a \geq 0$ and a non-negative measure $\nu$ satisfying $\nu(\{0\}) = 0$ and $\int_{\mathbb{R}} \min\{1, x^2\}\,\nu(dx) < \infty$. We call $\nu$ the Lévy measure of $\mu$.
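As a hedged illustration of how the Cauchy transform encodes a measure (a standard fact, not a computation from the paper), the sketch below recovers a density by the Stieltjes inversion formula p(x) = −(1/π) lim_{y→0+} Im G(x+iy). It assumes the closed form G(z) = 2(1 − √(1 − 1/z)), which is what (1.2) gives at (α, s, r) = (1, −1, 2), and compares with the Beta(1/2, 3/2) density (2/π)√((1−x)/x) mentioned in the introduction.

```python
import numpy as np

# Stieltjes inversion sketch: p(x) = -(1/pi) * Im G(x + i*eps) for small eps.
# Assumed closed form: G(z) = 2*(1 - sqrt(1 - 1/z)), i.e. (1.2) at (1, -1, 2).

def G(z):
    return 2.0 * (1.0 - np.sqrt(1.0 - 1.0 / z))

x = np.linspace(0.05, 0.95, 7)
eps = 1e-8
recovered = -G(x + 1j * eps).imag / np.pi
exact = (2.0 / np.pi) * np.sqrt((1.0 - x) / x)       # Beta(1/2, 3/2) density
print(np.max(np.abs(recovered - exact)))             # small, -> 0 as eps -> 0
```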
The S-transform
Multiplicative free convolution $\boxtimes$ for probability measures on $[0,\infty)$ was investigated in [20,8]. This convolution corresponds to the probability distribution of $X^{1/2} Y X^{1/2}$, or equivalently $Y^{1/2} X Y^{1/2}$, where $X$ and $Y$ are positive free independent random variables. This convolution is characterized by S-transforms defined as follows. For a probability measure $\mu$ on $\mathbb{R}$, we let $\psi_\mu(z) := \int_{\mathbb{R}} \frac{zx}{1-zx}\,\mu(dx)$; $\psi_\mu$ coincides with a moment generating function if $\mu$ has finite moments of all orders. In [8], $\psi_\mu$ was proved to be univalent in the left half-plane $i\mathbb{C}^+$ for a probability measure $\mu$ on $[0,\infty)$ with $\mu(\{0\}) < 1$. Moreover, $\psi_\mu(i\mathbb{C}^+)$ contains the interval $(\mu(\{0\})-1, 0)$. Then a map $\chi_\mu : \psi_\mu(i\mathbb{C}^+) \to i\mathbb{C}^+$ is defined as the inverse of $\psi_\mu$, and the S-transform is defined as $S_\mu(z) := \frac{1+z}{z}\chi_\mu(z)$. Using the S-transform, $\mu \boxtimes \nu$ is characterized by the relation (2.5): $S_{\mu \boxtimes \nu}(z) = S_\mu(z) S_\nu(z)$, valid in a common domain including an interval of the form $(-\varepsilon, 0)$. More generally, a multiplicative convolution $\mu \boxtimes \nu$ can be defined if $\mu$ or $\nu$ is supported on $[0, \infty)$. While (2.5) is expected to hold also in this case, it is not known whether an S-transform can be defined for every probability measure. It was shown in [20] to hold for measures with bounded support and non-vanishing mean, while the bounded case when $\mu$ has vanishing mean was solved in [16]. For the unbounded case, as a partial solution, Arizmendi and Pérez-Abreu defined an S-transform of a symmetric probability measure as follows. For a symmetric distribution $\mu \neq \delta_0$, there is a unique probability distribution $\mu^2 \neq \delta_0$ on $[0, \infty)$ such that $\psi_\mu(z) = \psi_{\mu^2}(z^2)$ for $z \in \mathbb{C}^+$. Using a property of $\psi_{\mu^2}$, we can conclude that $\psi_\mu$ is univalent in $\mathbb{H} := \{z \in \mathbb{C}^+ : \operatorname{Im} z > |\operatorname{Re} z|\}$. Moreover, $\psi_\mu(\mathbb{H})$ contains the interval $(\mu(\{0\})-1, 0)$. Therefore, we can define $\chi_\mu = \psi_\mu^{-1} : \psi_\mu(\mathbb{H}) \to \mathbb{H}$ and $S_\mu(z) := \frac{1+z}{z}\chi_\mu(z)$. Then (2.5) still holds if $\mu$ or $\nu$ is symmetric and the other is supported on $[0,\infty)$.
Finally we recall the analogues of compound Poisson distributions, which will be important in this paper.
Definition 2.1. A probability measure µ is said to be free compound Poisson if R µ (z) = λψ ν (z) for a probability measure ν with ν({0}) = 0 and a λ ≥ 0. In this case, λν coincides with the Lévy measure of µ.
The Marchenko-Pastur law $m$ with mean one belongs to the class of free compound Poisson measures; the pair $(\lambda, \nu)$ is given by $(1, \delta_1)$. In terms of the S-transform, $m$ is also characterized by $S_m(z) = \frac{1}{z+1}$.
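As a hedged numerical cross-check (not from the paper), one can compute the S-transform of m directly from the definitions above and compare with 1/(z+1). The sketch assumes the standard density of the mean-one free Poisson law, √(x(4−x))/(2πx) on (0, 4], which the paper does not restate here.

```python
import numpy as np
from scipy import integrate, optimize

# S-transform of the mean-one Marchenko-Pastur law from the definitions:
#   psi(z) = int z*x/(1 - z*x) dm(x),  chi = psi^{-1},  S(w) = (1+w)/w * chi(w).
# Assumed density of m: sqrt(x*(4-x)) / (2*pi*x) on (0, 4].

def density(x):
    return np.sqrt(x * (4.0 - x)) / (2.0 * np.pi * x)

def psi(z):
    value, _ = integrate.quad(lambda x: z * x / (1.0 - z * x) * density(x), 0.0, 4.0)
    return value

for w in (-0.1, -0.3, -0.6):
    chi = optimize.brentq(lambda z: psi(z) - w, -50.0, -1e-12)
    print((1.0 + w) / w * chi, 1.0 / (1.0 + w))   # the two numbers should agree
```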
We note that G α s,r (z) can be expanded in a series regarding − 1 z α : for some complex coefficients c n (α, s, r) with c 0 = 1. In the second line, we used the formula − 1 where M > 0 is large enough depending on (η, s, r). Then we have the following.
Proof. We note that (−G α us,u (z)) α is equal to Also, we note that (1 + w) 1/r 1/u = (1 + w) 1/(ru) for small |w|. Then Under further conditions on (r, α, s), the function G α s,r is well-defined in C + with values in C − , and therefore defines a probability measure.
Assume that either of the following conditions, (1) or (2), on $(\alpha, s)$ is satisfied. Then $G^{\alpha}_{s,r}$ is the Cauchy transform of a probability measure, which we denote by $\mu^{\alpha}_{s,r}$. Moreover, $G^{\alpha}_{s,r}$ is univalent in $\mathbb{C}^+$. If $(\alpha, s)$ satisfies (1) or (2), it is said to be admissible.
Proof. Let r ≥ 1. We can immediately check that zG α s,r (z) → 1 as z → ∞, z ∈ C + , non tangentially. Therefore, what needs to be proved is that G α s,r analytically maps the upper half-plane to the lower half-plane.
In the case 1 < α ≤ 2, we draw similar pictures; see Fig. 5-8. In Fig. 8, the image of 1−(1−s(− 1 z ) α ) 1/r s is contained in the sector {z ∈ C : 0 < arg z < απ}. Therefore, the image of the map In each step described in the figures, a new univalent map is added, so that after all the steps, the map G α s,r is also univalent in C + .
The admissible condition is related to monotone stable distributions as mentioned in the next section.
(ii) We have µ α s,1 = δ 0 for any admissible (α, s). Therefore, the right inverse of F α s,r can be calculated as (F α s,r ) −1 = F α s/r,1/r from Theorem 3.1. (iii) From the relation (F α s,r ) −1 = F α s/r,1/r , we can conclude that G α s,r does not define a probability measure for 0 < r < 1 and admissible (α, s). The reason is as follows. If µ is a probability measure and not a point mass, then Im F µ (z) > Im z for any z ∈ C + ; see Corollary 5.3 of [8]. Hence Im F −1 µ (z) < Im z if z = F µ (w) and F µ is univalent around w. Therefore F −1 µ cannot be written as F ν for a probability measure ν on R. (iv) The measure µ α s,r satisfies self-similarity with respect to s as follows. If µ is a probability distribution of a random variable X, then let D c µ denote the distribution of cX. For c > 0, we have µ α cs,r = D c 1/α µ α s,r .
A relation to monotone stable and free Poisson laws
Let a α s be a monotone (strictly) α-stable distribution [11] characterized by where (α, s) satisfies the admissible condition. a 2 s is the centered arcsine law with variance s/2 and a 1 s is a Cauchy distribution or a delta measure. The following properties are valuable to note here. The proofs are as follows. Let s := re iθ , r > 0. From the Stieltjes inversion formula, the density p α s (x) of a α s is given by π(|x| 2α −2r|x| α cos(απ+θ)+r 2 ) 1/(2α) , x > 0, sin[ 1 α arg(|x| α +re i(π−θ) )] π(|x| 2α −2r|x| α cos θ+r 2 ) 1/(2α) , x < 0, and θ 2 = π−θ r . L 1 and L 2 are the same half lines as in Fig. 2. L 3 and L 4 are starting at 0. l 1 is tangent to L 1 at 1 since z 1/r is a conformal mapping. Moreover, it approaches L 3 asymptotically. l 2 is tangent to L 2 at 1 from the same reason and approaches L 4 asymptotically. Figure 7: The image of C + under the map z → (1 − s(− 1 z ) α ) 1/r . θ 1 and θ 2 are defined by θ 1 = π−θ r and θ 2 = π(α−1)+θ r . L 1 and L 2 are the same half lines as in Fig. 6. L 3 and L 4 are starting at 0. l 1 is tangent to L 1 at 1 and approaches L 3 asymptotically. l 2 is tangent to L 2 at 1 and approaches L 4 asymptotically. where arg z is defined in (C + ∪ R)\{0} so that it takes values in [0, π]. Now the properties (1) and (2) can be proved easily.
(1) Let us consider (α, s, r) = (1, i, 2). Then µ 1 i,2 is the free multiplicative convolution of the Marchenko-Pastur law and a symmetric Cauchy distribution. This is absolutely continuous with a strictly positive density on R written as We mention that this probability measure belongs to a class proposed in [10].
In fact, Biane considered only special values for arg s, but the same proof can be applied to the above result.
Finally, we note the S-transforms of µ α s,2 and a α s .
5 More on free infinite divisibility of $\mu^{\alpha}_{s,r}$
In the previous section, we proved that $\mu^{\alpha}_{s,r}$ is ⊞-infinitely divisible whenever $r = 2$. In this section we will determine infinite divisibility for $r \neq 2$. We found the general case too difficult to treat, so we only consider the problem for some parameters. The main results of this section are the following.
We also show that some beta distributions are ⊞-infinitely divisible, and some are not.
The case 1 ≤ r ≤ 2
To prove the free infinite divisibility of µ α s,r , we introduce a subclass of ⊞-infinitely divisible distributions.
Definition 5.1. A probability measure $\mu$ is said to be in the class UI if $F_\mu$ is univalent in $\mathbb{C}^+$ and, moreover, $F_\mu^{-1}$ has an analytic continuation from $F_\mu(\mathbb{C}^+)$ to $\mathbb{C}^+$ as a univalent function.
The following property was implicitly used in [6].
Remark 5.3. If µ is ⊞-infinitely divisible, then F µ is always univalent in C + . This can be proved for instance by using the so-called subordination functions. Let µ be ⊞-infinitely divisible and µ t = µ ⊞t be the probability measure corresponding to the Voiculescu transform tφ µ . For s ≤ t, an analytic function ω s,t : C + → C + exists so that it satisfies F µs • ω s,t = F µt . ω s,t is called a subordination function. The reader is referred also to Eq. (5.4) of [5], where the following replacements are required: µ by µ ⊞s and t by t/s. The relation F µs • ω s,t = F µt is equivalent to Moreover it is proved in Theorem 4.6 of [5] that Taking the limit s → 0 in (5.1), we get For instance, the normal law 1 √ 2π e −x 2 /2 dx is in UI from the result of [6]. Moreover, we can easily prove that Wigner's semicircle law, the Marchenko-Pastur law and the Cauchy distribution belong to UI.
UI is closed under the weak topology. This is proved as follows. The convergence of µ n implies the local uniform convergence of the Voiculescu transforms φ µn [8]. Since F −1 µn (z) = z + φ µn (z) converges locally uniformly, the limit function is univalent. Also F µn itself converges to a univalent function. Therefore the limit measure belongs to the class UI.
For the Lévy measure, the Voiculescu transform is It holds that Im 1 − 1 − 1 r(x+iy) r → 0 as y ց 0 if x > 1/r or x < 0 and that as y ց 0. After some more calculations, one can see where τ is the measure in (2.2). τ does not have an atom since lim yց0 iyφ µ (x + iy) = 0 for any x ∈ R. The Lévy measure ν 1 −1,r is equal to 1+x 2 x 2 τ as explained in Section 2. If s = Re iθ is not real, the support of µ 1 s,r is unbounded. The density for large |x| can be calculated as In particular, µ 1 s,r belongs to a class introduced in [10].
5.2
The case α = 1, r = 3
In Subsection 5.1, the free infinite divisibility of $\mu^{\alpha}_{s,r}$ was proved for some parameters in terms of the class UI. In Section 3, we succeeded in proving the free infinite divisibility of $\mu^{\alpha}_{s,2}$ since the Voiculescu transform had a quite explicit form. For other parameters, it is difficult to investigate the free infinite divisibility. A possible case is for $\alpha = 1$ and $r = 3$. In this case, the Voiculescu transform has a quite explicit form as in the case $r = 2$, and ⊞-infinite divisibility can be determined completely. Indeed, the Voiculescu transform is
$$\phi^{1}_{3s,3}(z) = \frac{-3s}{1-(1+s/z)^{3}} - z = \frac{-3sz^{2} - s^{2}z}{3z^{2} + 3zs + s^{2}}.$$
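A hedged symbolic check (the paper itself contains no code) that the two expressions for this Voiculescu transform, as reconstructed above, agree as rational functions:

```python
import sympy as sp

# Verify  -3s/(1 - (1 + s/z)**3) - z  ==  (-3*s*z**2 - s**2*z)/(3*z**2 + 3*z*s + s**2).
z, s = sp.symbols('z s')
lhs = -3*s / (1 - (1 + s/z)**3) - z
rhs = (-3*s*z**2 - s**2*z) / (3*z**2 + 3*z*s + s**2)
print(sp.simplify(lhs - rhs))   # expected output: 0
```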
In contrast to the case r = 2, infinite divisibility depends on the parameter s if r = 3.
Non infinite divisibility for 1 < α ≤ 2 and large r
We prove the following. | 2013-12-19T09:33:44.000Z | 2011-08-17T00:00:00.000 | {
"year": 2011,
"sha1": "3dd0678fd26aa8be9ad52f1568a4a0b3d08e5687",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.3150/12-bej473",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "3dd0678fd26aa8be9ad52f1568a4a0b3d08e5687",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
18415590 | pes2o/s2orc | v3-fos-license | Coronary dominance and prognosis in patients undergoing coronary computed tomographic angiography: results from the CONFIRM (COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter) registry
Aims Coronary computed tomographic angiography (CCTA) has become an important tool for non-invasive diagnosis of coronary artery disease (CAD). Coronary dominance can be assessed by CCTA; however, the predictive value of coronary dominance is controversially discussed. The aim of this study was to evaluate the prevalence and prognosis of coronary dominance in a large prospective, international multicentre cohort of patients undergoing CCTA. Methods and results The study population consisted of 6382 patients with or without CAD (47% females, 53% males, mean age 56.9 ± 12.3 years) who underwent CCTA and were followed over a period of 60 months. Right or left coronary dominance was determined. Right dominance was present in 91% (n = 5817) and left in 9% (n = 565) of the study population. At the end of follow-up, outcome in patients with obstructive CAD (>50% luminal stenosis) and right dominance was similar compared with patients with left dominance [hazard ratio (HR) 0.46, 95% CI 0.16–1.32, P = 0.15]. Furthermore, no differences were observed for the type of coronary dominance in patients with non-obstructive CAD (HR 0.95, 95% CI 0.41–2.21, P = 0.8962) or normal coronary arteries (HR 1.04, 95% CI 0.68–1.59, P = 0.9). Subgroup analysis in patients with left main disease revealed an elevated hazard of the combined endpoint for left dominance (HR 6.45, 95% CI 1.66–25.0, P = 0.007), but not for right dominance. Conclusion In our study population, survival after 5 years of follow-up did not differ significantly between patients with left or right coronary dominance. Thus, assessment of coronary vessel dominance by CCTA may not enhance risk stratification in patients with normal coronary arteries or obstructive CAD, but may add prognostic information for specific subpopulations.
Introduction
Coronary computed tomographic angiography (CCTA) has recently been introduced as a highly accurate 1 -4 and prognostically robust 5 -8 non-invasive imaging modality for the assessment of coronary artery disease (CAD). The CONFIRM (COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter) registry enrolled ≥20 000 patients from 12 centres across North America, Europe, and Asia with suspected CAD who underwent a ≥64-detector row CCTA, and is the first prospective database evaluating the prognostic role of CCTA. 9 Coronary artery dominance is determined according to the coronary artery that gives rise to the posterior descending artery. Right dominance is the most prevalent pattern of coronary circulation and is found in 72-90% of individuals, while the prevalence of left dominance is reported to be 8-33% and co-dominance has a population prevalence of 3-7%. 10 The relatively low prevalence of left dominance in the general population and the decreasing prevalence of a left dominant or co-dominant coronary system with age have raised the question of whether this variant may reflect a biological disadvantage relative to right dominance, and recent studies have hypothesized that left dominance may represent a less well-balanced circulation with more myocardium at risk in acute coronary syndromes (ACSs). 11 Indeed, a previous study of 27 289 patients undergoing cardiac catheterization for ACS demonstrated that left dominance was associated with an increased hazard of death during a 3.5-year follow-up, 12 and a US registry reported that left dominance and co-dominance were associated with increased in-hospital mortality in 207 926 patients undergoing percutaneous coronary intervention (PCI) for ACS. 13 However, this work has been based on conventional angiograms. Since it is often difficult to delineate the course of coronary arteries by angiography, which only provides a two-dimensional view of a three-dimensional structure, the present study analysed coronary dominance and outcome by multidetector CCTA, which not only provides information about the presence and degree of coronary stenosis but also allows the origin and course of coronary arteries to be seen in a three-dimensional display of anatomy, thereby permitting the determination of coronary artery variations. 14 -17 Although coronary vessel dominance is easily assessed on coronary CCTA, there is sparse information about the prognostic value of coronary vessel dominance in patients referred for CCTA. Therefore, the goal of the present study was to assess the prevalence and prognosis of coronary dominance in a large prospective, international multicentre cohort of patients undergoing CCTA.
Study design, patients, and outcome measures
This study represents 6382 patients from the CONFIRM registry. Briefly, CONFIRM enrolled consecutive adults >18 years of age between 2005 and 2009 who underwent ≥64-detector row CCTA for suspected CAD at 12 centres in six countries (Canada, Germany, Italy, Korea, Switzerland, and the USA). Details of the CONFIRM registry design and data elements have been published. 9,18 -20 Patients with no CAD, non-obstructive, obstructive, and severe obstructive CAD where coronary dominance had been assessed were included in the present analysis. Patients with a balanced coronary artery system were excluded from the analysis because of the low number of patients in this group. Cases with missing data on dominance were excluded from analysis; therefore, 6382 remaining individuals with and without CAD were included for the final analyses. The primary clinical endpoint of the study was a composite of all-cause mortality, non-fatal myocardial infarction (MI), and early and late coronary revascularizations. Non-fatal MI was defined as evidence of myocardial necrosis consistent with myocardial ischaemia, as detected by changes in cardiac biomarkers together with symptoms of ischaemia, ECG changes, or imaging evidence, according to the ESC/ACCF/AHA/WHF consensus document on the universal definition of non-fatal MI. 21 Notably, in the CONFIRM registry, post-CCTA treatment regimens were not mandated and our database did not include information on previous or post-CCTA functional testing results. The study complies with the Declaration of Helsinki, and patient consent or a waiver of informed consent (as per recommendations of each institutional review board) was obtained at each site in keeping with site-specific regulations.
Data acquisition, image reconstruction, and CCTA analysis
CCTA scanners used in the CONFIRM registry and data acquisition for CCTA have been described in detail previously. 9 Image interpretation was uniformly performed at each site according to the Society of Cardiovascular Computed Tomography guidelines 22 by at least one highly experienced imager who was level III equivalent and/or board certified in cardiovascular computed tomography. Dominance was determined independently at each participating site. The coronary artery system was classified as right dominant if the right coronary artery (RCA), as left dominant if the left circumflex coronary artery (LCx), or as co-dominant if the RCA and LCx gave rise to the posterior descending artery. Each site performed per-segment analysis for individual coronary artery segments by using a 16-segment model. CAD was defined as the presence of any plaque. Coronary atherosclerotic lesions were quantified for lumen diameter stenosis by visual estimation and graded as none (0% luminal stenosis), mild (1-49%), moderate (50-69%), or severe (>70%). A coronary lesion compromising the lumen by >50% was defined as obstructive. Vessels were classified into four arterial territories: left main artery (LM), left anterior descending artery (LAD), LCx, and RCA. Obstructive CAD in the diagonal branches, obtuse marginal branches, and posterolateral branches was considered as part of the LAD, LCx, and RCA system, respectively. The posterior descending artery was considered as part of the RCA or LCx system, depending on the coronary artery dominance. A >50% stenosis in the LM was considered obstructive in all models. Individuals manifesting obstructive CAD were further categorized as having one-, two-, and three-vessel disease or left main disease. For the purposes of the study analysis, a left main coronary stenosis of ≥50% was considered equivalent to three-vessel CAD.
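The grading and vessel-counting rules above can be written out explicitly; the following Python sketch is a hypothetical illustration (not the registry's analysis code), with the boundary at exactly 70% assigned to the severe category as an assumption:

```python
# Hedged illustration of the per-segment grading and vessel-level rules
# described above. Thresholds follow the text: none 0%, mild 1-49%,
# moderate 50-69%, severe >70%; >50% is "obstructive"; a left main stenosis
# >=50% is treated as three-vessel-disease equivalent.

def grade_stenosis(percent: float) -> str:
    if percent == 0:
        return "none"
    if percent < 50:
        return "mild"
    if percent < 70:
        return "moderate"
    return "severe"

def vessel_score(max_stenosis_by_vessel: dict) -> int:
    """Count diseased vessels (>50% stenosis); LM >=50% counts as three-vessel disease."""
    if max_stenosis_by_vessel.get("LM", 0) >= 50:
        return 3
    return sum(1 for v in ("LAD", "LCx", "RCA") if max_stenosis_by_vessel.get(v, 0) > 50)

print(grade_stenosis(55))                                # 'moderate'
print(vessel_score({"LAD": 80, "LCx": 40, "RCA": 0}))    # 1
print(vessel_score({"LM": 60, "LAD": 30}))               # 3
```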
Statistical analysis
SPSS version 12.0 and 17.0 (SPSS, Inc., Chicago, IL, USA) and SAS version 9.2 (SAS Institute, Cary, NC, USA) were used for all statistical analyses. Categorical variables are presented as frequencies and continuous variables as mean ± SD. Variables were compared with the χ2 statistic for categorical variables and by Student's unpaired t-test, Wilcoxon/Mann-Whitney non-parametric test, or median comparison test where appropriate for continuous variables. The Kaplan-Meier method with the log-rank test and a Cox proportional hazards analysis were used to compare cumulative event-free survival by dominance in patients without significant CAD on CCTA and in those with significant CAD on CCTA. The primary outcome variable was a composite endpoint of all-cause mortality, non-fatal MI, and revascularization. Multivariable analyses were calculated with the multivariable Cox regression model for prediction of the combined endpoint (with 95% confidence intervals). According to univariate significance and baseline differences between groups, risk factors such as age, male gender, hypertension, dyslipidaemia, diabetes, and smoking were included in the multivariate model. Furthermore, the prognostic value of severity of stenosis and significant stenosis location were determined for patients with a right dominant coronary artery system and patients with a left dominant coronary artery system. A two-tailed P-value <0.05 was considered statistically significant.
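Although the original analyses were run in SPSS and SAS, the same workflow can be sketched in Python with the lifelines package. The snippet below is a hypothetical illustration only: the file name, column names, and covariate coding are assumptions, not part of the registry.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical data frame: one row per patient with follow-up time (months),
# event indicator (composite endpoint), dominance flag, and risk factors.
df = pd.read_csv("confirm_subset.csv")   # assumed file name

# Kaplan-Meier estimates by coronary dominance and a log-rank comparison
km = KaplanMeierFitter()
for label, g in df.groupby("left_dominant"):
    km.fit(g["time"], g["event"], label=f"left_dominant={label}")
    print(label, km.median_survival_time_)
result = logrank_test(
    df.loc[df.left_dominant == 1, "time"], df.loc[df.left_dominant == 0, "time"],
    df.loc[df.left_dominant == 1, "event"], df.loc[df.left_dominant == 0, "event"],
)
print("log-rank P =", result.p_value)

# Multivariable Cox model: dominance adjusted for the risk factors named above
cols = ["time", "event", "left_dominant", "age", "male",
        "hypertension", "dyslipidaemia", "diabetes", "smoking"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs and P-values
```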
Study cohort
The CONFIRM registry screened 27 125 CCTA patients at 12 participating centres in six countries. Patients were followed for a median of 2.1 years (interquartile range 1.5-3.1 years). A total of 956 (3.5%) patients were lost to follow-up; for 20 743 patients, the coronary artery dominance pattern had not been evaluated for different reasons, including technical reasons, extensive atherosclerosis, presence of occluding thrombi with large filling defects distally, or prior CABG. Thus, the final study population comprised 6382 patients (47% females, 53% males, mean age 56.9 ± 12.3 years) with or without CAD, who remained for the present analysis and were included in the study. Table 1 depicts baseline characteristics of the patient population, categorized by coronary vessel dominance. Left coronary dominance (LCD) patients tended to have a higher BMI (27.8 ± 5.4 vs. 27.2 ± 5.3, P = 0.0288) and were more often male (62 vs. 38%, P < 0.0001) and less often asymptomatic (24 vs. 37%, P = 0.0003) than patients with right coronary dominance (RCD).
CCTA findings
Right dominance was present in 91% (n = 5817) and left dominance in 9% (n = 565) of the study population. Normal coronary arteries were found by CCTA in 3361 (53%) patients, non-obstructive CAD in 1787 (28%), obstructive CAD in 457 (7%), and severe obstructive CAD in 776 (12%; Table 2). Patients with left dominance tended to have a lower Agatston score than those with right dominance (420.0 in the right dominance group and 363.0 in the left dominance group, P < 0.0001, median comparison test; Table 2). In our study cohort, 648 (10%) patients had one-vessel disease, 351 (10%) had two-vessel disease, and 222 (3%) were diagnosed with three-vessel disease. The severity of CAD and stenosis location on CCTA differed significantly between patients with a left dominant and a right dominant coronary artery system: patients with left dominance tended to have more non-obstructive CAD (35 vs. 27%, P < 0.0001) and significant stenosis in the left anterior descending or circumflex artery (19 vs. 14%, P = 0.0067 and 10 vs. 7%, P = 0.0203, respectively), whereas patients with right dominance more often had normal coronary arteries (54 vs. 43%, P < 0.0001) or obstructive CAD in the RCA (10 vs. 5%, P < 0.0001; Table 2).
Event and survival rate
During a follow-up of 60 months, the composite endpoint occurred in 321 (5.0%) patients. All-cause mortality was reported in 100 (1.6%) patients, non-fatal MI occurred in 131 (2.1%), and 120 patients (1.9%) underwent revascularization. When comparing event-free survival during 5 years of follow-up in patients with normal coronary arteries according to coronary vessel dominance, survival rates for the cumulative incidence of all-cause mortality, non-fatal MI, and coronary revascularization did not significantly differ between patients with LCD or RCD (log-rank P = 0.14, Figure 1B), with low cumulative event rates of 1.7 and 0.9%, respectively. Similar results were obtained when a separate analysis for each endpoint in patients with normal coronary arteries was conducted (log-rank P = 0.41 for all-cause mortality, log-rank P = 0.13 for MI, and P = 0.73 for coronary revascularization, data not shown). Likewise, in patients with significant CAD (>50% stenosis), no significant difference was observed in event-free survival between left dominant and right dominant coronary artery systems, with cumulative event rates of 18.8 and 19.1% after 5 years of follow-up for a right- and left dominant coronary artery system, respectively (log-rank P = 0.84, Figure 1A). These results remained the same when a separate analysis for each endpoint in patients with significant CAD was conducted (log-rank P = 0.069 for all-cause mortality, log-rank P = 0.63 for MI, and P = 0.76 for coronary revascularization, data not shown) or when patients with obstructive CAD (stenosis 50-70%; log-rank P = 0.60, data not shown) or severe obstructive CAD (stenosis >70%; log-rank P = 0.92, data not shown) were analysed separately. When stratified for sex, patients with LCD and RCD showed similar survival rates for the incidence of all-cause mortality, non-fatal MI, and coronary revascularization (log-rank P = 0.72 for males and log-rank P = 0.3842 for females; Figure 2A and B).
Prognostic value of coronary dominance
Uni- and multivariable proportional hazards models confirmed that obstructive and severe obstructive CAD in both coronary variations were predictors of all-cause mortality, non-fatal MI, and revascularization, and had an incremental value over clinical variables (Table 3). When female and male patients were analysed separately, results remained the same (P < 0.0001 for female RCD patients with non-obstructive CAD and P < 0.0001 for male RCD patients with non-obstructive CAD, data not shown). We further assessed the difference in prognostic value between left and right coronary vessel dominance in patients with obstructive CAD for the composite endpoint of all-cause mortality, non-fatal MI, and coronary revascularization: Cox regression model analysis showed that the difference in the risk estimate of obstructive CAD between patients with a right dominant and those with a left dominant coronary artery system was not statistically significant (HR 1.04, 95% CI 0.68-1.59, P = 0.8461, right vs. left dominant, Table 4). Similarly, in patients with normal coronary arteries or non-obstructive CAD, no difference in the predictive value between the two coronary dominance patterns was found (HR 0.46, 95% CI 0.16-1.32, P = 0.1496 and HR 0.95, 95% CI 0.41-2.21, P = 0.8962, right vs. left dominant, respectively, Table 4).
Furthermore, significant CAD in one vessel was also identified as a predictor of the combined endpoint, with an HR of 16.92 (95% CI 5.5-52.1, P < 0.0001 vs. normal coronary arteries) in the left dominant system and an HR of 24.43 (95% CI 15.9-37.5, P < 0.0001 vs. normal coronary arteries) in the right dominant system. Consequently, in both uni- and multivariable models accounting for individual Framingham risk factors, the risk increased dose-dependently when more vessels were affected (data not shown).
Prognostic value of significant stenosis location
After stratification according to stenosis location, the cumulative event rate for LCD patients with significant LAD stenosis was 8% for non-fatal MI, 9% for coronary revascularization, and 8% for all-cause mortality (Figure 3A); for RCD patients, the corresponding rates of non-fatal MI, coronary revascularization, and all-cause mortality were 10, 12, and 4%, respectively (Figure 3B). A significant stenosis in the left coronary system (LAD and LCx) was observed in 1489 patients and was associated with an increased risk of the combined endpoint of all-cause mortality, non-fatal MI, and coronary revascularization for left dominance (HR 7.01 for LAD and 3.83 for LCx) as well as for right dominance (HR 10.12 for LAD and 8.29 for LCx, Table 5, lower panel).
However, significant left main disease was observed in 85 patients, and the presence of LM disease conferred an increased HR for the combined adverse event of 6.45 after multivariable adjustment (95% CI 1.66-25.0, P = 0.007) in patients with left dominance. In right dominance, however, LM disease was not significantly associated with the composite prognosis endpoint (HR 1.35, 95% CI 0.73-2.51, P = 0.3456 after adjustment for CAD and risk factors).
Discussion
In this prospective multicentre study, we systematically evaluated the prognostic value of coronary dominance assessed by CCTA in a large cohort of patients. When comparing event-free survival in patients with normal coronary arteries or obstructive CAD according to coronary vessel dominance, survival rates for the cumulative incidence of all-cause mortality, non-fatal MI, and coronary revascularization after 5 years of follow-up did not differ significantly between patients with LCD or RCD. In our study, right dominance was present in 91% and left dominance in 9%, which is not significantly different from values given in the literature, varying from 8.2 to 15% for left dominance and from 72 to 90% for right dominance. 10 -12,23,24 Left dominance was observed more often in males (62%) compared with females (38%), while previous retrospective studies indicate that there is no difference in coronary dominance with regard to gender. 23,25 -27 However, these differences may arise due to different selection of patients, e.g. the inclusion of low-to-intermediate risk patients at an advanced age in the present study.
In contrast to our findings, two previous retrospective angiographic studies using cardiac catheterization databases in patients with ACS have shown that left dominance was associated with modestly increased odds of death during a 3.5-year follow-up (HR 1.13; 1.00 -1.28) or in-hospital mortality (HR 1.19; 1.06-1.34) following PCI, respectively. 12,13 Nevertheless, those studies were retrospective analyses done on conventional angiograms and the study population consisting of high-risk ACS patients and patients with prior coronary artery bypass graft differed substantially from our study population. In a recent prospective study of 1425 patients referred for CCTA, non-fatal MI and all-cause mortality were increased (HR 3.15) in patients with left dominance during a 2-year follow-up period. 28 However, potential selection bias due to smaller patient numbers in this study cannot be excluded, and no differences in prognosis for different coronary dominance patterns were observed when coronary revascularization was included in the combined primary endpoint. Taken together, it seems that left dominance may have different prognostic values regarding short-and long-term mortality in patients with ACS compared with patients with stable CAD, thereby, emphasizing the importance of angiographic interventions in left dominance patients with ACS. However, prospective studies in patients with ACS are needed to confirm this.
At present, little is known about the prognostic value of stenosis location in relation to coronary vessel dominance, and only one recent study in 1425 patients referred for CCTA demonstrated that a stenosis in the left coronary system was associated with an increased risk of events, while a stenosis in the RCA did not significantly predict events. 28 Our analysis among subgroups with left main disease showed an elevated hazard of the combined endpoint for left dominance that was statistically significant, while a stenosis in the left main did not predict events in right dominance. This finding is consistent with previous observations in patients undergoing PCI for ACS. 13 Coronary vessel dominance influences the relative contribution of the different coronary arteries to the total left ventricular blood flow, 29 and in most individuals with LCD, the RCA is usually small and often fails to reach the acute margin of the heart. Thus, a proximal stenosis of the left coronary artery may result in more extensive ischaemia and worse consequences in a left dominant system than in a right dominant system. In addition, the potential to rapidly form collaterals might be diminished in patients with a left dominant coronary artery system because the RCA is not sufficient to perfuse the myocardium. 30 However, to date, the underlying pathophysiology has not been investigated and further research is needed to assess the effect modification by culprit lesion site or coronary collateral formation in patients with disease of the left main and the left coronary system.
The relationship between coronary vessel dominance and the extent of CAD remains uncertain as different studies showed opposing results. Indeed, one previous study has shown that LCD was associated with a higher incidence of atherosclerosis, 31 whereas others showed more extensive CAD in patients with a right dominant coronary artery system 12,23 or did not detect differences in the extent of CAD between LCD or RCD. 26,28 However, this discrepancy can most likely be explained by a potential selection bias due to small study populations, and the differences in modalities used for the assessment of CAD in these studies. In the present study, we observed a higher incidence of CAD (obstructive and nonobstructive) in left dominance patients, whereas the prevalence of normal coronary arteries was more frequent in right dominance. However, no difference in predisposition to three-vessel disease was seen between LCD or RCD which strongly supports the hypothesis that dominance pattern does not predict outcomes in patients with CAD.
Interestingly, in patients with non-obstructive CAD, a right dominance system was identified as a significant predictor of the combined endpoint, whereas left dominance did not predict any events in this subpopulation. The possibility that intermediate lesions may carry an increased risk in right dominant circulations is of particular importance since it would challenge the current paradigm of nonintervention for these non-obstructive lesions. However, there was no statistically significant difference in univariate analysis in this subgroup when right dominance was compared with left dominance. Yet, our study was likely statistically underpowered to detect effect modification between left and right dominance in this subgroup with non-obstructive CAD.
As with any study, certain design limitations are inherent. Of note is the low prevalence of left and co-dominant coronary circulation in the general population. While our study was sufficiently powered to detect an effect size in LCD, we did not include patients with co-dominant circulation in our analysis, since our study was underpowered to detect statistical effect modification in this subgroup. Secondly, as with any observational, open-label registry, potential heterogeneity between sites, interobserver and intersite variability in CCTA diagnosis, and different post-CCTA treatment patterns cannot be excluded. Thirdly, in the CONFIRM registry, CAD was defined using CCTA and not using invasive coronary angiography or other imaging modalities; therefore, the possibility of false-positive or false-negative CCTA findings exists despite the performance of CCTA by international experts. Finally, information regarding the coronary dominance pattern was not uniformly available for our study cohort, since not all CONFIRM sites collected this information. Thus, the final study comprised only 23.5% of the entire CONFIRM population and, as such, may have the potential for selection bias which may limit the generalizability of the data. However, our study population is the largest, presently available prospective CCTA cohort evaluating the predictive value of coronary dominance and may therefore provide solid data and good evidence regarding the prognostic information of coronary dominance.
In conclusion, our findings suggest that the assessment of coronary vessel dominance by CCTA may not enhance risk stratification beyond the assessment of the degree of stenosis in patients with normal coronary arteries or obstructive CAD referred for CCTA, but may add prognostic information for specific subpopulations such as patients with left main disease or non-obstructive CAD. | 2016-05-04T20:20:58.661Z | 2010-11-23T00:00:00.000 | {
"year": 2015,
"sha1": "61391729bbcfc1bac2771bcbd717931ae3e4df64",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ehjcimaging/article-pdf/16/8/853/7134589/jeu314.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a8a7b804c5c9008872aae7904e8ce1fbb91e070",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270086300 | pes2o/s2orc | v3-fos-license | Iron Plaque: A Shield against Soil Contamination and Key to Sustainable Agriculture
Highlights What are the main topics discussed? Description of the formation process of iron plaque. The factors affecting the formation of iron plaque were summarized. What are the implications of these discussions? Understanding the role of iron plaque in environmental processes. Insights into the interactions between iron plaque, plants, and microbes for environmental remediation. Abstract Soils play a dominant role in supporting the survival and growth of crops, and they are also extremely important for human health and food safety. At present, the contamination of soil by heavy metals remains a globally concerning environmental issue that needs to be resolved. In the environment, iron plaque, which naturally occurs on the root surface of wetland plants, is found to have an excellent ability to block the migration of heavy metals from soils to plants, and it can be further developed as an environmentally friendly strategy for soil remediation to ensure food security. Because of its large surface-to-volume porous structure, iron plaque exhibits a high binding affinity to heavy metals. Moreover, iron plaque can be seen as a reservoir that stores nutrients to support the growth of plants. In this review, the formation process of iron plaque, the ecological role that iron plaque plays in the environment, and the interactions between iron plaque, plants and microbes are summarized.
Introduction
In the past few decades, the contamination of soils by heavy metals (HMs) has raised worldwide concerns due to intensive human activities on the environment [1][2][3][4]. For instance, soils from the central area of the Yueliangbao gold tailings (located in central China) were found to be rich in Cu, Pb, Zn, Mn, Mo and Cd, the concentrations of which were much higher than those in soils from the surrounding regions [5]. A total of 12 metal pollutants, including As, Cr and Hg, etc., were detected in the sediments of the Fuyang river system in north China, potentially risking local ecological safety and human health based on analyses of the geo-accumulation index and Pearson's correlation [6]. In Florida (the United States), As and Pb concentrations in urban soils exceeded the local criteria for residential site soils of 2.1 mg kg−1 and 400 mg kg−1, respectively [7]. In the suburbs of Multan, a city in east-central Pakistan, the contents of Cd, Cu, Mn, Ni and Pb in Brassica rapa, which is commonly used as fodder for local animals, far exceeded the permissible limits that the World Health Organization (WHO) prescribed for B. rapa as animal fodder, resulting in high carcinogenic health risks to animals, as evidenced by the extremely high values of the total target health quotient (TTHQ), which ranged from 47.22 to 136.64 (a TTHQ > 1 is an indicator of carcinogenic foodstuffs, according to the US Environmental Protection Agency) [8].
Unlike organic pollutants, which generally consist of carbon chains, HMs cannot be decomposed or eliminated in soils via chemical and biological processes [9]. Thus, they are likely to be absorbed by plant roots from the soil. In plant tissues, an over-accumulation of HMs will interfere with various metabolic processes, such as damaging protein structures, replacing essential metals in biomolecules (e.g., pigments and enzymes), retarding cell division, and inactivating photosynthesis and respiration, thereby resulting in significant inhibition of growth and loss of yields [10][11][12][13][14][15]. Accordingly, to safeguard against toxic HMs, plants have developed several elaborate strategies in vivo, including compartmentalization of HMs in cell organelles, inactivation of HMs by chelation with organic ligands, and exclusion of HMs by using specific transporters and ion channels [16].
Apart from the above-mentioned in vivo strategies, another, in vitro one, i.e., IP, which naturally occurs on the surface of wetland plant roots, was found to be effective in blocking the uptake of HMs by roots from soils [17][18][19][20][21]. In fact, IP can be considered a respiratory by-product of plant roots grown in submerged soils [22]. In this case, underground roots are in water-logged conditions that usually lack gaseous oxygen. Through well-developed aerenchyma, abundant oxygen is transferred from the aboveground tissues to the roots, where it mostly acts as an electron acceptor in root cells. Meanwhile, some oxygen and reactive oxygen species (ROS) that are generally produced along the respiratory chains may interact with various Fe species from the soil to generate IP, which gradually envelops the surface of the roots. Because of its large surface-to-volume porous structure, IP often exhibits a high binding affinity to metal ions and hence can act as a sink for HMs and physically insulate HMs from the surface of the roots [23].
To date, it is widely acknowledged that IP plays a pivotal role in safeguarding hydrophytes against HM toxicity. Therefore, comprehending the formation process and environmental functions of IP holds great significance for soil remediation. This review focuses on elucidating the mechanism underlying IP formation and its role in impeding HM uptake by hydrophytes. Additionally, we discuss the effects of IP on soil properties and plant growth, as well as the interactions between plants, IP and microbes.
Characteristics of IP
Discovery of IP
The discovery of IP can be traced back to as early as the 1960s. Armstrong (1967) [24] found that in two plant species (Menyanthes trifoliata and Molinia coerulea), the root oxidizing activity was highest at the root apex and gradually diminished towards the root base. Concomitantly, iron oxide deposits accumulated substantially around the root apical region and, once more, gradually diminished towards the root base. However, it is worth noting that the apical region itself was commonly free of iron oxides because this region exhibited the greatest oxidizing activity, causing the oxidation of ferrous iron to occur at some distance from the root tip [24]. Later, Bacha and Hossner (1977) [25] demonstrated a positive correlation between the contents of iron precipitates formed on the roots of rice plants (Oryza sativa 'Brazos') and the initial concentrations of ferrous chloride added to soils. Moreover, they used scanning electron microscopy (SEM) and X-ray diffraction (XRD) to examine the morphology and mineral structure of the iron precipitates on the rice roots, showing that these iron precipitates corresponded to poorly crystalline lepidocrocite (γ-FeOOH). To date, numerous studies have shown that IP can be considered a natural armor that protects wetland plants (e.g., rice, reed and Typha) from HMs (Table 1), while also benefiting the plants by adsorbing nutrients from soils [26][27][28][29][30]. Furthermore, IP can help rush plants survive in strongly acidic soils, where the concentration of inorganic carbon is commonly lower than that of natural soils [27]. Specifically, it acts as a carbon sink to fix organic compounds exuded by rush roots, thereby allowing the rapid bacterial recycling of carbon back to the plants. As a result, IP enables rush roots to have access to relatively high concentrations of the carbon sources that are required for growth metabolism (e.g., photosynthesis) in low-carbon soils [27]. The authors of [46] proposed that when the oxidation of Fe(II) by O2 occurred, the formed iron oxide, FeOOH, was incipiently precipitated on the epidermal cell wall of rice roots. As the outermost cell wall decomposed, the FeOOH particles began to fill the cellular spaces to generate polyhedral casts [47]. It is now generally accepted that IP can be separated into two classes, i.e., amorphous and poorly crystalline IP, and changes in environmental conditions such as redox potential (Eh), pH and Fe(II) concentration may favor the transformation between them. For instance, the crystallinity of IP that occurred on the roots of Spartina alterniflora increased with the Fe(II) concentration in the soil [48]. Crystalline IP mainly consists of iron oxides [32,49,50]. In the natural environment, there are at least 16 iron oxide counterparts (Table 2), but in most cases, only ferrihydrite (Fe2O3·nH2O), goethite [α-FeO(OH)] and lepidocrocite [γ-FeO(OH)] are believed to be the major components of crystalline IP [32], while sometimes minor amounts of siderite (FeCO3) are also present in it [31].
The reddish-brown ferrihydrites, whether natural or synthetic, are poorly crystalline iron oxide-hydroxides [49]. According to their XRD patterns (the number of peaks in the XRD spectra), ferrihydrites can be classified into five types: 2-line, 3-line, 4-line, 5-line and 6-line, among which the 2-line and 6-line types are seen as the two extremes of crystal order for ferrihydrites and are more prevalent than the others in the environment [49]. As structural crystallinity increases from the former to the latter, the 2-line ferrihydrite shows two reflections while the 6-line one displays six to eight reflections in the XRD spectra. Environmental reaction conditions play a significant role in shaping the crystallinity of ferrihydrites. For instance, the crystalline lattice order decreases as the rate of Fe(III) hydrolysis increases and as the concentration of silicate or soil organic anions increases [51].
The yellow-brown goethite, occurring throughout the global ecosystem, is one of the most thermodynamically stable iron oxides at environmental temperatures. Structurally, goethite is characterized by double chains of Fe octahedra, which are formed by edge-sharing and oriented parallel to the crystallographic direction. Within each octahedral unit, the Fe(III) cation is octahedrally coordinated by three O2− anions and three OH− anions. Notably, the orthorhombic symmetry of goethite arises from the alternating arrangement of these double chains of Fe octahedra with double chains of vacant lattice sites [52]. In the idealized structure of goethite, the bond lengths between the Fe(III) cation and the surrounding oxygen atoms, denoted as d(Fe-O), exhibit two distinct values: 1.95 Å for three oxygen atoms and 2.09 Å for the remaining three oxygen atoms [53,54]. Moreover, the distinctive edge-sharing and double-chain arrangement of the Fe octahedra in goethite gives rise to three unique Fe-Fe distances, d(Fe-Fe), specifically 3.01 Å for two edge-sharing Fe atoms, 3.28 Å for another set of two edge-sharing Fe atoms and 3.46 Å for four Fe atoms sharing double corners [55].
Lepidocrocite (γ-FeO(OH)) is a naturally occurring iron oxide mineral that is widely distributed in the environment and plays a crucial role in the geochemical cycling of iron. Its structural hallmark is the presence of edge-sharing double chains of FeO6 octahedra running parallel to the crystallographic c-axis; these chains are held together by hydrogen bonding between the oxygen atoms of the octahedral units. In each octahedral unit, the Fe(III) cation is coordinated by five O2− anions and one OH− anion, with varying Fe-O bond lengths that reflect the different bonding environments of the oxygen atoms [49]. Lepidocrocite is known for its ability to adsorb various cations and anions from water, making it useful in natural water purification [56]. Additionally, lepidocrocite often coexists with goethite and other iron oxides in nature, and it also plays a critical role in the cycling of trace elements other than iron [57].
It should be noted that different types of iron oxides coexist during the natural synthesis of IP, and they interact and transform into one another, ultimately forming a reddish-brown film on the roots of plants.
Formation of IP
Iron is an abundant transition metal element in the environment. As shown in Figure 1, in the early stage, after redox reactions mediated by root exudates and microbes, the iron species (of different valence states) in the rhizosphere are transformed into soluble Fe(II). Then, O2 is supplied by radial oxygen loss (ROL) through the aeration tissues (aerenchyma), creating an oxygen-rich zone. In the middle and late stages, through the reaction 4Fe(II) + 10H2O + O2 → 4Fe(OH)3 + 8H+ [58-60], Fe(II) is transformed into iron oxide, which precipitates on the surface of the roots, thus forming IP and protecting plant roots from HMs via adsorption.
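As a minimal illustration of the stoichiometry implied by this reaction, the sketch below (Python; the chosen quantity of Fe(II) is hypothetical and purely illustrative) computes the O2 consumed, the Fe(OH)3 precipitated and the protons released per amount of Fe(II) oxidized. The proton release is consistent with the local acidification during plaque formation noted later in this review.

```python
# Minimal stoichiometry sketch for 4 Fe(II) + O2 + 10 H2O -> 4 Fe(OH)3 + 8 H+
# The chosen amount of Fe(II) is hypothetical and purely illustrative.

def plaque_stoichiometry(fe2_mmol: float) -> dict:
    """Return the O2 consumed, Fe(OH)3 formed and H+ released (all in mmol)
    when fe2_mmol of Fe(II) is fully oxidized according to the reaction above."""
    return {
        "O2_consumed_mmol": fe2_mmol / 4.0,      # 1 mol O2 per 4 mol Fe(II)
        "FeOH3_formed_mmol": fe2_mmol,           # 1 mol Fe(OH)3 per mol Fe(II)
        "H_plus_released_mmol": 2.0 * fe2_mmol,  # 8 mol H+ per 4 mol Fe(II)
    }

if __name__ == "__main__":
    # Example: 0.5 mmol Fe(II) oxidized in the rhizosphere (hypothetical figure)
    print(plaque_stoichiometry(0.5))
```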
Previous studies have shown that IP is more likely to occur in acidic pH environments [61,62]. After a series of redox reactions, iron ultimately precipitates on the root surface in the form of Fe2O3 or Fe(OH)3 [20]. Interestingly, the distribution of IP on the root surface is not uniform. IP is principally found in the elongation and root-hair zones of plant roots but is rarely observed in young lateral roots or newly formed roots [26,63]. This may result from the continuous growth of roots during plaque formation, where the older sections of the root (the root base) are exposed to plaque accumulation for more extended periods, thus fostering a more pronounced formation of IP [59].
A wide variety of plants have been shown to form IP, including underwater plants, emergent plants and terrestrial plants growing in aquatic environments. Representative examples are Oryza sativa, Camellia sinensis, Iris pseudacorus, Canna indica, Rhizophoraceae, Acorus gramineus L., Juncus bulbosus, Pistia stratiotes L. and Elodea canadensis [17,18,43,64]. The occurrence of IP is a naturally spontaneous phenomenon in the environment, influenced by a number of abiotic and biotic factors, such as soil properties, moisture levels and root oxygenation capacity (Figure 2) [65,66].
Effect of Abiotic Factors on the Formation of IP

Soil Properties
The physiochemical properties of soils, including texture, organic matter (OM), pH, redox potential (Eh) and elemental composition, exert significant and diverse influences on the formation of IP. Specifically, soil texture can influence IP formation by altering plant root growth, soil porosity and the transport of rhizosphere elements. Notably, soils with lower clay content tend to favor the deposition of iron plaque on root surfaces compared with soils with higher clay content, as demonstrated by Chen et al. (1980) [46].
OM is an important soil component, comprising a wide variety of multifunctional groups derived from the decomposition residues of plants and animals. The degradation of OM facilitates the development of anaerobic or hypoxic conditions in the rhizosphere, creating favorable circumstances for IP formation on roots. Moreover, OM strongly influences the adsorption and migration behaviors of metallic elements in soils [67]. For instance, OM can effectively chelate iron from its biogeochemical cycle, causing it to accumulate at higher levels in the form of Fe(II) within the rhizosphere, where it can be translocated in flooded environments [67].
The pH of soils significantly affects the concentrations of soluble Fe(II) and Mn(II) in soils, which are indispensable ingredients for IP aggregation via oxidation [62,68,69].Under acidic circumstances, substantial amounts of iron and manganese are present in the form of soluble Fe(II) and Mn(II) in soils, which are favorable to IP formation, whereas, under alkaline circumstances, iron and manganese primarily exist as metallic hydroxides, which are not in favor of their further oxidation.
The Eh value controls Fe(II) concentrations within the rhizosphere by influencing the diffusion rate of iron in soils [70]. Christensen et al. (1998) [71] found that in soil sediments with appropriate Eh values, reduced forms of iron and manganese readily diffused toward the root surface; the oxygen released by the roots then triggered the oxidation of these elements into oxides on the root surface. Yang et al. (2012) [72] reported that in the soils adjacent to the hydrophyte rhizosphere, the Eh value was less than +50 mV, which was insufficient to gather enough Fe(II) ions to be oxidized within the rhizosphere. By contrast, Masscheleyn et al. (1991) [73] suggested that oxygen released from the roots could increase Eh values to at least +100 mV within the rhizosphere, producing a sufficiently high Fe(II) concentration for the further development of IP on the roots. Non-metallic elements in the rhizosphere also affect IP formation. Selenium (Se) and arsenic (As) are ubiquitous metalloids in natural soils, usually occurring with other metals in the form of oxyanions (e.g., selenite, SeO3 2−, and arsenite, AsO3 3−) [74]. Se can strongly induce the oxidative stress response in plants, increasing the concentration of ROS in the roots and further stimulating the development of IP to inhibit the uptake of HMs [28,35,75]. Also, Se(0) was found to facilitate ROL in rice tissues, which is beneficial for IP formation on the roots [76]. Similarly, previous studies indicated that exposure of the roots of hydrophilic plants to As promoted the development of IP, again through an oxidative stress response generating large amounts of ROS, such as H2O2 and O2− [35,75]. In addition, phosphorus (P) not only stands as a critical nutrient for plant growth but also affects the biogeochemical cycles of iron in soil ecosystems [77-80]. The bioavailability of phosphorus in soils can affect the microbial community structure and activity, which play an important role in iron oxidation [81,82]. Furthermore, the addition of sulfide compounds to soils, e.g., hydrogen sulfide (H2S), can increase the root oxidation capacity of plants [59]. Rice plants grown in both soil pot and hydroponic settings supplemented with H2S in the range of 2.64 to 5.28 mM showed promoted IP formation; this concentration range of H2S also enhanced rice growth, including seedling vigor, root length and the dry weights of roots and shoots [83,84]. However, it should be noted that in freshwater sediments an excessive amount of H2S may not favor IP development on plant roots, because H2S chemically precipitates iron into the insoluble forms FeS and FeS2 [85,86].
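The quantitative conditions reported above (acidic pH, rhizosphere Eh raised to roughly +100 mV or more, and H2S supplementation in the 2.64-5.28 mM range) can be restated as a simple screening function. The sketch below is only a schematic restatement of those reported values, not a validated predictive model; the function and its parameter names are invented for illustration.

```python
# Schematic screen of abiotic conditions reported to favour IP formation.
# Thresholds are taken from the studies cited in the text (Eh >= ~+100 mV,
# acidic pH, H2S ~2.64-5.28 mM); the function itself is illustrative only.

def conditions_favour_ip(ph, eh_mv, h2s_mM=None):
    """Rough, illustrative check of whether the reported abiotic conditions favour IP."""
    acidic = ph < 7.0                    # acidic soils keep Fe(II)/Mn(II) soluble
    eh_ok = eh_mv >= 100.0               # Masscheleyn et al. (1991): >= +100 mV in rhizosphere
    h2s_ok = True
    if h2s_mM is not None:
        h2s_ok = 2.64 <= h2s_mM <= 5.28  # range reported to promote IP and rice growth
    return acidic and eh_ok and h2s_ok

# Example with hypothetical values: mildly acidic, root-oxygenated rhizosphere
print(conditions_favour_ip(ph=5.5, eh_mv=120.0, h2s_mM=3.0))  # True
```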
Irrigation Regime
The different irrigation regimes give rise to different consequences concerning the synthesis of IP on root surfaces [87].Excessively waterlogged soils will hinder the formation of IP.For example, continuous flooding has proven to significantly increase the population of Fe-reducing bacteria (FeRB), thus accelerating the reduction reaction of iron oxides around the root surface [21].Therefore, a rational irrigation regime is important for creating a favorable environment for the accumulation of IP on the root surface [87].Commonly, periodic flooding regimes can facilitate the conversion of IP from the amorphous form to the crystalline one [88][89][90][91].This is because, in comparison to continuous flooding, periodic flooding (intermittent wetting and drying cycles) speeds up water movement in soils, stimulating bacterial growth and increasing the oxygen bioavailability [92,93].Hence, under periodic flooding conditions, the amorphous iron was more likely to be transformed into the crystalline one within the rhizosphere soils with the abundant Fe-oxidizing bacteria (FeOB) and oxygen sources, thereby promoting the crystalline ratio of IP [94,95].
Other Factors
Planting density is another factor that influences soil condition and IP formation. Like the other factors discussed above, planting density can affect the Eh level of the soil: when plants are grown closer together, their roots release more oxygen into the soil, which both improves soil quality and encourages the formation of IP. Planting density should therefore be considered alongside the other influencing factors when aiming to promote soil health and IP formation, as supported by Christensen et al. (1998) [71] and Tripathi et al. (2014) [96].
The formation of IP is greatly augmented under combined HM stress compared with stress from individual HMs. A recent study by Shen et al. (2021) [97] demonstrated that the presence of multiple HMs significantly enhances IP formation at the apical, middle and basal regions of the root, with a gradual increase in formation over time. When mangrove plants are subjected to combined stress from Cu, Pb and Zn, they exhibit increased metal tolerance, which is associated with substantial thickening and increased lignification and suberization of the exodermis. This enhanced lignification and suberization effectively delays the penetration of metals into the roots, thereby aiding tolerance to heavy metals [98]. Furthermore, the deposition of additional lignin within the exodermis reduces the ROL emitted from mangrove roots [99]. The formation of IP on the root surfaces of mangrove plants is intricately intertwined with root ROL, as observed in earlier work [100] and by Dai et al. (2017) [81].
Biochar, a carbon-rich and porous substrate, can adsorb organic compounds and nutrients from the soil, thereby enhancing soil fertility [101]. This adsorption capacity may influence the concentration of iron ions in the soil, potentially affecting the formation of IP. Fortifying biochar with iron (referred to as DCB-Fe) significantly augments its specific surface area and enhances its surface functional groups, increasing its adsorptive capacity for HMs [48,102]. The incorporation of biochar into soils is considered an environmentally friendly strategy to mitigate soil contamination, enhance phytoremediation and reduce health-related hazards. The remediation efficiency of biochar in soils depends on various factors such as soil pH, HM content and porosity [15]. Biochar amendments were observed to increase pH and phosphorus levels in soil pore water, resulting in greater IP formation on the root surface; this was shown to reduce concentrations of Cd, Zn and Pb in rice shoots by up to 98%, 83% and 72%, respectively [103]. Similarly, laboratory experiments using nano-Fe3O4-modified biochar have demonstrated that its application promotes IP formation, thereby enhancing the root barrier against Cd [48].
Recent studies have shown that biochar derived from rice straw can be used to reduce Cd, Pb and Zn accumulation in rice shoots.However, it simultaneously increases As content.This increase in As content may be attributed to a decrease in soil pH, which promotes the conversion of As(V) to the more soluble and toxic As(III) form [103,104].
Effect of Biotic Factors on the Formation of IP
Apart from abiotic factors, it is generally acknowledged that numerous biological elements are capable of either directly or indirectly influencing the formation of IP.These influential factors span a wide spectrum, encompassing aspects such as microbial activities [76,[105][106][107], durations of root oxidation [100], the presence and nature of root exudates [108], enzymatic activity within the plant [24], the genotypic variety of the plant [109], as well as the specific cultivar and age of the plant [110].These factors collectively contribute to the ecological dynamics that govern IP formation on plant roots.
Expanding on this foundation, the species and genotypes of plants significantly influence the formation of IP, particularly through their impact on radial oxygen loss (ROL).Diverse hydrophytes, including Typha latifolia L., Phragmites communis L., and Oryza sativa L., exhibit varied capabilities in forming IP [111].For instance, in rice, variations in IP formation among different genotypes and varieties are attributed to disparities in oxygen secretion capacity.These differences critically affect the plants' ability to oxidize and precipitate iron around their roots, thereby influencing the extent and nature of IP formation [35,39,112,113].This highlights how specific biological traits of plants can interact with their environment to modulate their physiological responses and adapt to varying conditions.
Radial Oxygen Loss (ROL) Facilitated by Aeration Tissues (Aerenchyma)
Aerenchyma, a plant tissue featuring thin walls and sufficient intercellular spaces, serves as the primary conduit for oxygen transport from above ground to below ground, which is generally classified into two types: schizogenous and lysigenous aerenchyma [63,114].Schizogenous aerenchyma forms gas spaces through cell separation and differential cell expansion, while lysigenous aerenchyma results from the death and lysis of specific cells in cereal crops like rice [115], maize [116], wheat [117] and barley [118].
Aerenchyma plays a crucial role in the growth of wetland plants and in IP formation because of its interconnected intercellular spaces, which form an efficient ventilation system facilitating gas exchange [119]. This system enables the transfer of oxygen produced during photosynthesis to the roots, while also providing buoyancy and structural support to the plant [120]. For instance, hydrophytes such as rice use aerenchyma to transport captured oxygen to the roots for metabolic activities and distribute the remaining oxygen throughout the rhizosphere by pressurized ventilation or simple diffusion [28,121]. Beyond that, species such as Cyperus alternifolius L. subsp. flabelliformis, Myriophyllum spicatum L., Vallisneria spiralis L. and Juncus effusus L. develop aerenchyma to store air and release oxygen from their roots into the rhizosphere. This process leads to the transformation of hazardous dissolved substances into less toxic, insoluble or unabsorbed forms (Fe3+, FeOOH, Mn3+, NO3−) [122,123].
Under anaerobic conditions, aerenchyma provides a diffusion pathway that reduces the resistance of oxygen transport from the plant's above-ground parts to the flooded or oxygen-deficient roots, ensuring the metabolic needs of the roots and contributing to ROL [76,[124][125][126].
ROL stands as one of the most pivotal processes that trigger the formation of IP and oxidized root channels [79,112].ROL was demonstrated to exert a substantial influence on the pH, Eh and the balance between Fe(II) and Fe(III) in the rhizosphere [127].Through ROL, plants can effectively release or diffuse oxygen into the rhizosphere [128].Consequently, Fe(II) readily undergoes oxidation to Fe(III) and precipitates onto the root surface in the form of hydroxide or hydroxyl oxide, thus giving rise to Fe plaque [20,99,[129][130][131].ROL is regulated by oxidation-reduction reactions mediated by ROS, and different wetland plant species exhibit varying root porosity and ROL rates [13,132].Research has shown that rice genotypes with higher ROL rates have a more pronounced impact on pH, Eh and the balance between Fe(II) and Fe(III) in the rhizosphere.This results in the formation of a more extensive Fe plaque on the root surfaces compared to genotypes with lower ROL rates [127].This underscores the significant role of ROL in plaque formation.Furthermore, Bravin et al. (2008) [133] established that the ROL capacity of rice roots and the soil's buffering capacity are crucial factors affecting oxidation-reduction changes in the rhizosphere.
Hydrophyte Oxidative Systems
Hydrophyte roots possess a robust oxidation system capable of oxidizing metal ions present in the environment. This system, owing to its ability to form IP, protects the root zone from harmful substances. The oxidation system comprises root exudates and the enzymatic activities of plant roots [134]. Both can reduce Fe(III) to soluble Fe(II) in the rhizosphere, making Fe(II) available for subsequent IP formation [135].
Root exudates are essential components of the oxidative secretions released by plants, playing a critical role in the transformation and mobility of Fe and Mn [136]. Root exudates (organic acids, phytosiderophores, etc.) have also been documented to mitigate HM toxicity, including that of Al, Zn and Cd, through the exudation of glyoxylic, oxalic and formic acid [137-142]. For instance, the oxalate content in the roots was observed to increase upon treatment with Pb in Pb-resistant rice varieties, as demonstrated by Yang et al. (2000) [143], highlighting its potential for HM blocking. Furthermore, excess organic acids can be enzymatically decomposed into harmless CO2 and H2O, as reported by Ando et al. (1983) [144] and Emerson et al. (1999) [105].
Enzymatic activities play an important role in oxidizing Fe(II) [145]. The enzymatic antioxidant system maintains a delicate balance between the production and removal of ROS. ROS, comprising the superoxide anion (O2−), singlet oxygen (1O2), hydrogen peroxide (H2O2) and the hydroxyl radical (•OH), are prevalent in plant cells [146]. They play a vital role in cellular metabolism and signal transduction. However, excessive ROS production under stress (HMs, salt or abnormal temperature) causes oxidative stress and damage to biological molecules, leading to cellular dysfunction or death [147]. In this case, the activity of the enzymatic antioxidant system is accelerated: the superoxide dismutase (SOD) and catalase (CAT) activities involved in the elimination of ROS are activated [148], producing a large amount of O2, which helps create an oxidizing environment in the rhizosphere [147].
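The review does not spell out the underlying reactions; for clarity, the canonical dismutation reactions catalysed by SOD and CAT (standard textbook forms, added here as an assumption of what is meant, not taken from this review) are:

```latex
% Canonical ROS-scavenging reactions (textbook forms, added for clarity)
% Superoxide dismutase (SOD):
2\,\mathrm{O_2^{\cdot-}} + 2\,\mathrm{H^+} \;\xrightarrow{\text{SOD}}\; \mathrm{H_2O_2} + \mathrm{O_2}
% Catalase (CAT):
2\,\mathrm{H_2O_2} \;\xrightarrow{\text{CAT}}\; 2\,\mathrm{H_2O} + \mathrm{O_2}
```

Both steps regenerate molecular oxygen, which is consistent with the statement above that an activated antioxidant system contributes O2 to an oxidizing rhizosphere environment.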
Fe-Reducing and Fe-Oxidizing Bacteria
The oxidation of iron can occur through two distinct pathways in nature, chemically driven and biologically driven oxidation, and which pathway dominates is determined by the oxygen concentration: chemical oxidation prevails at high dissolved O2 (≥275 µM), whereas microbial activity dominates at low dissolved O2 (≤50 µM) [147,149-152]. Therefore, in an anaerobic or anoxic environment, biologically driven iron oxidation predominates in IP formation. Among all the microbes, iron-oxidizing bacteria (FeOB) and FeRB serve as the primary driving force in the vicinity of wetland plant roots.
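The dissolved-oxygen thresholds quoted above can be restated as a trivial classification. The sketch below merely encodes the two reported cut-offs and is not a mechanistic model; labelling intermediate concentrations as "mixed" is an assumption made here only to keep the function total.

```python
# Illustrative restatement of the reported O2 thresholds for Fe(II) oxidation pathways.
# >= 275 uM dissolved O2: chemically driven oxidation dominates
# <= 50 uM dissolved O2: biologically (microbially) driven oxidation dominates
# The "mixed" label for intermediate values is an assumption for illustration only.

def dominant_fe_oxidation_pathway(o2_uM: float) -> str:
    if o2_uM >= 275.0:
        return "chemical"
    if o2_uM <= 50.0:
        return "microbial (FeOB-driven)"
    return "mixed / transitional (assumed)"

print(dominant_fe_oxidation_pathway(30.0))   # 'microbial (FeOB-driven)'
print(dominant_fe_oxidation_pathway(300.0))  # 'chemical'
```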
FeOB can be classified into four types [152], namely acidophilic aerobic, neutrophilic microaerobic, anaerobic phototrophic and nitrate-reducing [153], which significantly affect the kinetics of Fe(II) oxidation and oxygen consumption at the anoxic interface around the roots [154]. The acidophilic aerobic and neutrophilic microaerobic Fe(II)-oxidizers contribute the most to IP formation [155]. These two types of FeOB function synergistically in wetland and flooded environments, imparting numerous advantageous effects and contributing to the formation of IP under anaerobic conditions. Neutrophilic microaerobic Fe(II)-oxidizers were initially discovered by Ehrenberg in 1836 and subsequently purified and isolated in the 20th century [156]. These bacteria have since emerged as crucial model organisms for investigating Fe(II) oxidation and associated environmental processes [157]. They are commonly found in neutral environments, including soil, the aerobic-anoxic interface of redox-stratified aquatic systems, plant rhizospheres, groundwater flow zones and deep-sea sediments. Under these neutral microaerophilic or anaerobic conditions, they utilize Fe(II) as an electron donor and O2 as an electron acceptor, while organic or inorganic carbon sources facilitate their growth [158]. Consequently, Fe(III) precipitates on the root surface in the form of FeOOH along with other elements. Acidophilic iron-oxidizing bacteria were first isolated by Colmer and colleagues in 1947 [159]. These bacteria typically inhabit acidic environments with a pH range of 1.0-4.0 [160], such as acid leachate, acid mine drainage (AMD), deep-sea hydrothermal vents and hot springs that are rich in iron, sulfur and other metallic elements. Within these acidic habitats, Fe(II) remains stable and bioavailable for microbial utilization, enabling iron-oxidizing microorganisms to outcompete oxygen-mediated abiotic oxidation processes for Fe(II). Consequently, they thrive by utilizing elemental S or Fe(II) as electron donors while employing O2, SO4 2− or NO3 − as electron acceptors [161]. Additionally, organic or inorganic carbon serves as their carbon source. In a study on reeds, the presence of acidophilic FeOB not only enhanced the formation of IP but also diminished the uptake of Fe and Mn by the reeds; it is postulated that FeOB facilitate IP formation in acidic environments, thereby indirectly impeding heavy metal absorption [162].
FeRB accounts for 12% of all rhizosphere bacteria and are dominant members in the rhizosphere microbial community, along with the FeOB [163].They utilize hydrogen (H 2 ) and acetic acid as electron donors to reduce Fe(III) to Fe(II) under anoxic conditions [164].This makes IP an ideal electron acceptor for FeRB [165], decreasing iron precipitation by influencing both Fe reduction and Eh in soil [47].Beyond that, the presence of FeRB in mangrove wetland sediments could potentially impact the phase transition of iron oxide.In a controlled climate chamber experiment conducted by Zhang et al. (2023) [166], it was observed that inoculation with FeRB strain Pseudomonas sp.SCSWA09 significantly decreased IP formation on the roots of Kandelia obovata seedlings, particularly reducing amorphous IP.This reduction can be attributed to the ability of FeRB to expedite the transformation from amorphous ferrous/ferric hydroxide into crystalline forms, suggesting their influence on IP generation and implying a potential acceleration of active iron cycling in the rhizosphere.
It is important to note that the formation of IP is regulated by a complex biological system involving interactions between biotic and abiotic factors. For instance, secretions from wetland plant roots (such as glucose, glycine, citrate and malate) are oxidized by microorganisms into carbon dioxide, which can affect the pH of the rhizosphere [167]. Additionally, an oxidation reaction occurs outside the cell wall and produces protons, thereby influencing the pH of the soil [168]. Similarly, Johnson-Green and Crowder (1991) [169] reported significant differences in Fe solution pH after exposure to axenic and non-axenic seedlings. This suggests a weak trend of competition between iron-oxidizing bacteria and chemical oxidation of Fe(II) at low pH levels. Under such conditions (pH < 4), abiotic Fe oxidation kinetics are relatively slow, but acidophilic FeOB such as Thiobacillus ferrooxidans may enhance Fe oxidation kinetics and contribute to IP formation [105]. Therefore, there could be interactions among plants, environmental substances and microbes during this process.
IP as an Armor for Metal Transfer in Plants
After decades of extensive research on IP, numerous significant discoveries have been made showing that the presence of IP effectively enhances plant resistance against HM toxicity in soil. In the case of rice, Greipsson and Crowder (1992) [38] observed that exposure to 0.5 mg·L−1 Cu(II), 2.0 mg·L−1 Ni(II) and a combination of Cu(II)+Ni(II) resulted in chlorosis and necrosis in non-IP rice plants, whereas IP rice plants exhibited no signs of toxicity throughout their growth period. In rice exposed to excessive Zn and Cu [170], IP positively affected the dry weight of shoots and roots and the length of leaves and roots, and it reduced the occurrence of chlorotic leaves under excessive Cu [96]. Moreover, under Cd stress, the concentration of Cd(II) in the root and bud as well as the transfer of Cd(II) from root to bud in rice with IP decreased by 34.1%, 36.0% and 20.1%, respectively, compared to rice without IP [17]. Arsenic (As) is a highly toxic and carcinogenic metalloid that can be readily absorbed by rice in significant quantities [171]. The presence of IP effectively inhibits As uptake by roots, thereby reducing its accumulation in brown rice [172]. More surprisingly, IP can also oxidize As(III) to the less toxic As(V), thus reducing the toxicity of As to plants [78,173]. It should be noted that when As is oxidized to arsenate by oxygen, IP reduces its absorption [96,113], which may be due to the different structures of the As species, resulting in different binding capabilities or modes of binding to IP [174]. Li et al. (2016) [66] selected three distinct types of paddy soils, denoted C, D and N, and artificially manipulated the effective concentrations of Pb by adding 0 mg/kg, 150 mg/kg and 300 mg/kg Pb(II) to soils C, D and N, respectively. Despite higher effective concentrations of Pb in soils D and N compared with soil C, rice plants exhibited significantly lower levels of Pb absorption in these soils. This phenomenon is attributed to a substantial IP coating on the surface of the rice roots, which effectively reduces the mobilization of Pb in both soil types D and N. These findings suggest that IP generally acts as a protective barrier against toxic metals while enhancing plant growth [175].
Currently, numerous studies have been conducted on the mechanism of IP blocking the absorption of HMs by plants [176].Chemically, most plant roots possess a negative charge, enabling them to adsorb positively charged HMs [177].The presence of IP physically obstructs the interaction between roots and positively charged HMs [178,179].Moreover, due to the abundant functional groups present in iron hydroxides, IP can effectively sequester metal(loid)s through adsorption and/or co-precipitation processes.Consequently, this may influence the availability of metal(loid)s in the rhizosphere and subsequently impact the uptake and accumulation of HMs by plants [180].
Physically, the adsorption mechanism of IP towards HMs is generally inferred by studying the natural minerals contained in IP. Wang et al. (2009) [181] chose goethite and magnetite (provided by Sinopharm Chemical Reagent Co., Ltd., Shanghai, China), among others, as representatives of metal (hydr)oxides commonly present in nature, and found that Cd was adsorbed on these different oxide minerals. Hochella et al. (1989) [182] found that the surface structure and nano-scale morphology of minerals play a key role in the dissolution and adsorption reactions between the surface and the soils. Iron isotopic exchange experiments show that ferrihydrite contains labile and non-labile site populations; the number of sites participating in the faster exchange process was reduced by adsorbing arsenate before the exchange experiment. The labile sites, examined with Mössbauer spectroscopy, were found to have different local environments; compared with sites that exchange more slowly, sites that exchange very quickly (within 20 min) had more distorted octahedral geometry. When bonded to adsorbed arsenate, the distortion of labile sites was slightly reduced. Adsorbed arsenate may decrease the degree of distortion around the octahedra by forming binuclear, bidentate bonds with the adjacent iron octahedra [183]. Arsenate adsorbs on ferrihydrite surfaces mainly as an inner-sphere bidentate (bridging) complex sharing apical oxygens of two adjacent edge-sharing Fe oxyhydroxyl octahedra. Monodentate complexes were also observed, accounting for about 30% of all As-Fe correlations [184]. Fuller et al. (1993) [185] analyzed the kinetics of arsenate adsorption and coprecipitation on ferrihydrite.
In adsorption experiments, a period of rapid (5 min) As(V) uptake from solution was followed by continued uptake for at least eight days, as As(V) diffused to adsorption sites on ferrihydrite surfaces within aggregates of colloidal particles. The time dependence of As(V) adsorption is well described by a general model for diffusion into a sphere if it is assumed that the subset of surface sites located near the exterior of aggregates quickly reaches adsorption equilibrium. In coprecipitation experiments, the initial As(V) uptake was significantly greater than in post-synthesis adsorption experiments because As(V) was coordinated by the surface sites before the process of crystallite growth and aggregation; therefore, the uptake rate was not limited by diffusion. After the initial adsorption, As(V) was slowly released from coprecipitates for at least one month, because crystallite growth led to desorption of As(V). In addition, numerous adsorption models have been developed, such as the diffusion layer model [186], the three-layer complexation model [187], the modified three-layer complexation model [188] and the metal (hydroxide) oxide surface reaction group affinity valence band theory [189]; these models approach the problem from different angles to explain the adsorption behavior of iron oxide minerals. IP, as a key product of the iron oxidation-reduction cycle, plays a significant role in transforming trace metals and organic matter in flooded soils, which contain high levels of iron ions [190].
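The "general model for diffusion into a sphere" mentioned above is usually written in Crank's series form for the fractional approach to equilibrium. The sketch below implements that textbook expression with hypothetical values of the apparent diffusion coefficient and aggregate radius (neither is reported in the text), purely to illustrate the shape of the slow, diffusion-limited component of uptake; the rapid initial uptake on exterior sites is treated as instantaneous in the cited model and is not reproduced here.

```python
import math

def fractional_uptake(t_s: float, D_m2_s: float, a_m: float, n_terms: int = 200) -> float:
    """Crank's series solution for diffusion into a sphere of radius a_m:
    M_t / M_inf = 1 - (6/pi^2) * sum_{n>=1} (1/n^2) * exp(-D * n^2 * pi^2 * t / a^2).
    D_m2_s and a_m are hypothetical here; the paper does not report fitted values."""
    s = sum(
        (1.0 / n**2) * math.exp(-D_m2_s * (n * math.pi) ** 2 * t_s / a_m**2)
        for n in range(1, n_terms + 1)
    )
    return 1.0 - (6.0 / math.pi**2) * s

# Hypothetical parameters: D = 1e-19 m^2/s, aggregate radius 0.5 um,
# evaluated at 5 min, 1 day and 8 days (the time scales mentioned in the text).
for t in (300.0, 86_400.0, 8 * 86_400.0):
    print(f"t = {t:>9.0f} s  ->  M_t/M_inf = {fractional_uptake(t, 1e-19, 5e-7):.3f}")
```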
The impact of IP on plant growth remains a hotly debated issue.Its influence varies depending on the heavy metal environment surrounding the plant.For instance, a study on water lobelia (Lobelia dortmanna L.) revealed that IP does not affect the root diameter [191].In research involving common bulrush (Typha latifolia), it was observed that the presence or absence of IP had no significant effect on the dry weight of roots and shoots of seedlings, whether in control conditions or in Zn and Cd solutions.However, roots were significantly shorter when IP was present [33].Similarly, Greipsson et al. (1994) [178] found that under Ni and Cu stress, rice roots were shorter when IP was present.Although IP does not always promote root growth, it does not necessarily imply a negative impact on overall plant growth.For example, Møller and Sand-Jensen's study showed that IP around the roots of Lobelia dortmanna L. creates an oxygen diffusion barrier [192], which can be beneficial in high-sediment environments by directing more oxygen to root meristems, thereby improving survival.Likewise, while IP resulted in shorter rice roots under Ni and Cu stress, it significantly enhanced rice shoot growth [176].This suggests that the formation of IP is an effective response of plants to various environmental stresses.
Moreover, IP influences the chemical behavior and bioavailability of nutrients [193][194][195], acting as a nutrient reservoir to store essential elements [196].IP on roots serves as an iron reserve, aiding plants in overcoming iron deficiency.For instance, in Medicago sativa under Cd stress, IP formation enhances photosynthesis efficiency and biomass production [62].IP enriches environmental phosphorus, thereby enhancing plant energy metabolism, nucleic acid biosynthesis, photosynthesis, enzyme activities, and the biogeochemical cycles of iron and manganese [80].Additionally, when Fe(OH) 3 is added to phosphorus-rich nutrient solutions, a significant increase in the P content of rice shoots is observed, correlating positively with the amount of IP attached to the roots [197].The adsorption function of IP mainly stems from its primary component, Fe(OH) 3 .Its amphoteric colloid properties and loose, porous structure provide a large surface area, facilitating the absorption of phosphorus and other elements [196,198,199].Furthermore, IP is suggested to contribute to increased nitrogen accumulation in tea plant roots and stimulate plasma membrane ATP enzyme activity [200].
IP also has an impact on the rhizosphere soil of plants.The formation and reductive dissolution of IP can significantly influence the rhizosphere's iron budget, affecting the mobilization of soil pollutants and nutrients [130].During its formation, there is a release of H + and the secretion of various organic acids such as malic acid, lactic acid, oxalic acid, citric acid and succinic acid into the rhizosphere [60,201].These processes lead to changes in the pH and Eh values of the rhizosphere soil, subsequently affecting the bioavailability and concentration of HMs [23,26,165].
IP also acts as a barrier against oxygen loss, enhancing oxygen supply to the root meristems.This, in turn, influences the composition and distribution of aerobic and anaerobic microorganisms in the soil [36].For example, during the iron oxidation process, the relative abundance of copper bacteria such as Maxilla, Pseudomonas, Rosella, Coleopomonas and Proteus increases, eventually becoming dominant [202].This suggests that the formation of IP leads to a more stable microbial community structure, aiding our understanding of the transformation of organic matter and HMs [203].Figure 3 shows a sketch of the interaction between plants, IP and microbes.
Ecological Role in Environmental Remediation
In today's era of rapid industrial and agricultural technological advancement, environmental pollution, such as heavy metals, has become a global concern.Waste materials from industrial and agricultural activities are discharged into natural environments through sewage and sludge, eventually entering agricultural soils and posing risks to human and environmental safety [204][205][206][207].
Among various soil remediation methods, bioremediation is favored for its cost-effectiveness and eco-friendliness. For instance, Pteris vittata L., known for its robust growth and high tolerance to HM toxicity, is used for biomonitoring and assessment of metal pollution in sediments [208]. As a crucial link between plants, microorganisms and soil, IP plays a significant role in the remediation of soil heavy metals. Wetland plants use their aeration tissues to transfer oxygen to their roots, while the low redox potential of sediments leads to the gradual accumulation of substances like Fe(II), Mn(II), H2S and CH4 [38]. These conditions create an ideal environment for microbial survival, and both the oxygenating ability of roots and microbial activity support the formation of IP in aquatic plant roots [209]. The IP attached to the root surface provides a large binding surface area for the absorption of metals and other elements, effectively remediating polluted soils [36].
Numerous studies have indicated that plants with adherent IP, which thrive with high biomass in saline water and possess deep root systems, can flourish in challenging environments and show strong metal accumulation capabilities [210], especially in their roots [211]. For example, Spartina alterniflora Loisel. has been identified as an effective species for remediation because of its ability to accumulate considerable amounts of certain HMs (Cd, Cr, Mn and Pb) in its above-ground parts [212]. Jia et al. (2018) [213] confirmed that the IP characteristics of wetland plants can regulate iron, manganese and phosphorus in agricultural drainage.
Apart from HMs, environmental pollutants that pose a threat to plant survival include persistent organic pollutants (POPs), microplastics (MPs) and emerging contaminants (ECs) such as waterborne antibiotics and sterol hormones [100].Under waterlogged conditions, certain steroidal hormones and waterborne antibiotics can accumulate on IP, with high adsorption sites and functional groups of IP facilitating their removal [214].Polycyclic aromatic hydrocarbons (PAHs) and polybrominated diphenyl ethers (PBDEs) are persistent organic pollutants commonly found in both industrial discharges and ecosystems [215].IP can serve as a physical barrier to impede the entry of contaminants into plants, contributing to the immobilization of PAHs and PBDEs [216].The formation of IP is a natural process that does not cause secondary pollution, making it a promising strategy for remediating soils contaminated with HMs.The plant-microbe-soil system is complex, and a more comprehensive study is needed to understand the interactions within polluted soils.Meng et al. (2024) [217] revealed for the first time the process of IP generating highly reactive hydroxyl radicals (•OH) and verified the role of the produced •OH in the oxidation and transformation of pollutants in the rhizosphere.In addition to pollutants, the produced •OH may also affect the redox cycling of elements and the composition of the rhizosphere microbial community [218], subsequently impacting the growth of rice.Correspondingly, rhizospheric FeOB and FeRB may significantly alter the composition of IP, thus affecting the generation of •OH by IP.In the future, the oxidative effects induced by •OH generated from IP should be incorporated into the framework of understanding IP's impact on rice plants.
Policy and Sustainability
Integrating IP research into environmental protection policy involves developing guidelines and strategies that utilize IP to mitigate soil and water pollution.Governments and environmental agencies can formulate policies that encourage the use of plants with high IP-forming capabilities in areas affected by heavy metal contamination.This approach can be part of a broader environmental restoration plan aimed at reducing the impact of industrial and agricultural pollutants on ecosystems.In addition, policies can provide funding and incentives for research into the mechanisms of IP formation, its impact on plant growth and soil health and methods to enhance IP formation in different plant species.Such research could lead to new agricultural practices and environmental remediation techniques.
Conclusions
Plants have evolved adaptive and versatile strategies to perceive and respond to fluctuations in element availability, optimizing their growth, development and reproduction under changing environmental conditions.These strategies encompass a range of mechanisms, from chelation and osmoregulation to antioxidant systems, including root secretions, cell walls, cell membranes and vacuolar compartmentalization.These factors significantly influence the mobility of heavy metals (HMs) and microbial activity [219][220][221].Increasingly, IP is being recognized as a microbial armor and nutrient treasury for plants.
The intricate plant-microbe-soil system poses significant challenges in comprehending the interactions within polluted soils.FeOB plays a pivotal role in the iron cycle, yet the metabolic pathways of these bacteria and their involvement in the iron cycle around roots remain enigmatic.It is imperative to investigate the ecological and environmental implications of IP to gain a deeper understanding of its significance.
In conclusion, a profound comprehension of the intricate interactions among plants, rhizosphere microorganisms and polluted soils is crucial for addressing the environmental impacts of soil pollution and developing effective remediation strategies.Key avenues for future research include exploring the metabolic pathways of FeOB and assessing the safety and practical significance of IP plants.
Figure 1. The formation process of IP. Through various oxidation-reduction processes occurring outside the root (shown in cross-section), a large amount of soluble Fe(II) is formed, which is readily oxidized by dissolved oxygen in soils. According to the equation 4Fe(II) + 10H2O + O2 → 4Fe(OH)3 + 8H+, this results in the rapid precipitation of iron oxide on the surface of roots.
Figure 2. Schematic diagram of abiotic and biotic factors influencing IP formation in the rhizosphere.
Figure 3. Interaction between plants, IP and microbes. In the rhizosphere environment, plants and microorganisms have their own strategies for influencing the pH and Eh of the soil. Meanwhile, microbes can influence plants by raising ROS levels in root cells.
Table 1. The species of HMs that were reported to be blocked by IP.
Table 2. The iron oxides and hydroxides.
Prevention of febrile neutropenia: use of prophylactic antibiotics
Febrile neutropenia (FN) causes significant morbidity and mortality in patients receiving cytotoxic chemotherapy and can lead to reduced chemotherapy dose intensity and increased overall treatment costs. Antibiotic prophylaxis reduces the incidence of FN. Recent research and meta-analyses confirm that prophylactic fluoroquinolones decrease FN and infection-related mortality in patients with acute leukaemia and those receiving high-dose chemotherapy. Fluoroquinolone prophylaxis also lowers the incidence of FN and all-cause mortality following the first cycle of myelosuppressive chemotherapy for solid tumours. Levofloxacin has been the agent studied most thoroughly in this context. Although there is no convincing evidence that colonisation of individuals with resistant organisms due to antibiotic prophylaxis increases FN or mortality, such concerns must be taken seriously and the use of prophylaxis should be limited responsibly for patients with the greatest chance of benefit. Fluoroquinolone prophylaxis is well tolerated and cost-effective and should be offered to patients receiving chemotherapy for haematological malignancies and high-dose chemotherapy for solid tumours in which prolonged (>7 days) neutropenia is expected. It should also be considered for those receiving chemotherapy for solid tumours and lymphomas during the first cycle of chemotherapy when grade 4 neutropenia is anticipated.
For many years, controversy has surrounded the use of prophylactic antibiotics following chemotherapy for malignant diseases. Although effective, the toxicity of the trimethoprim-sulfamethoxazole combination led to a decline in its use, and raised questions about the prophylaxis of febrile neutropenia (FN) in general, particularly as mortality from FN was diminishing. However, as discussed elsewhere in this supplement (Cameron, 2009; Krell and Jones, 2009; Kelly and Wheatley, 2009; Jones and Leonard, 2009), data have emerged highlighting a range of risks associated with FN, including the adverse effects of consequent chemotherapy dose reduction and delays, FN morbidity, and mortality rates reaching 4-6%, as well as the costs of managing the condition (Trueman, 2009; Van de Wetering et al, 2005). Furthermore, some of the newer, effective chemotherapy agents, such as docetaxel and vinorelbine, are especially prone to causing FN (Aapro et al, 2006).
The fluoroquinolones, introduced in the 1980s, have transformed this field, becoming the most commonly used prophylactic antibacterial agents in neutropenic patients because of their broad antimicrobial spectrum, preservation of anaerobic gut flora (Walker, 1999), systemic bactericidal activity (Reeves, 1986), good tolerability and lack of myelosuppression (Del Favero and Menichetti, 1993).
In the past 5 years, major randomised trials and meta-analyses have led to significant progress in our understanding of the efficacy of fluoroquinolone prophylaxis, and the categories of patients (and chemotherapy regimens) associated with the greatest risk of FN, and hence those most likely to benefit from prophylactic treatment. The use of granulocyte colony-stimulating factors, with or without antibiotics, in the prophylaxis of FN is described in this supplement by Kelly and Wheatley. Our review summarises the evidence supporting the use of prophylactic antibiotics following chemotherapy and highlights the situations in which the gains are likely to be greatest.
RECENT RESEARCH
Two large, investigator-led, randomised controlled trials published in 2005 provide firm evidence of the efficacy of fluoroquinolone prophylaxis in two distinct contexts: hospitalised patients expecting prolonged neutropenia, and patients receiving cyclical, mainly outpatient-based, chemotherapy causing neutropenia of a short duration.
Hospitalised patients expecting prolonged neutropenia

Bucaneve et al (2005) reported findings from a double-blind, placebo-controlled trial of 760 hospitalised adult patients in whom chemotherapy-induced neutropenia (below 1000 neutrophils per mm3) was expected to last longer than 7 days. The trial included patients receiving chemotherapy for acute leukaemia, lymphomas or solid tumours. They were randomised to receive oral levofloxacin (500 mg daily) or placebo from the start of chemotherapy until the resolution of neutropenia. Intention-to-treat analysis showed a lower incidence of fever in patients receiving levofloxacin compared with the placebo group (65 vs 85%, respectively, P = 0.001). Mortality was lower in the levofloxacin group but the study was not powered to prove this.
Cyclical, mainly outpatient-based, chemotherapy causing neutropenia of short duration

In the UK Significant (simple investigation in neutropenic individuals of the frequency of infection after chemotherapy +/- antibiotic in a number of tumours) Trial (Cullen et al, 2005), 1565 patients receiving cyclical, mainly outpatient chemotherapy for solid tumours (predominantly breast, lung and testicular) or lymphoma, who were at risk of temporary, severe neutropenia (below 500 neutrophils per mm3), were randomised to receive either levofloxacin (500 mg daily) or placebo for 7 days during the expected neutropenic period in up to six cycles of chemotherapy. A significant reduction in febrile episodes and hospitalisation for treatment of bacterial infection was documented in the levofloxacin group during all cycles of treatment. Thirty-day mortality was lower in the levofloxacin group (1.5%) compared with the placebo group (2.3%), but the difference did not reach statistical significance (Cullen et al, 2005; Leibovici et al, 2006). The Significant Trial was by far the largest study looking specifically at antibiotic prophylaxis of FN in patients with solid tumours and lymphoma receiving moderately myelosuppressive chemotherapy, and it resolved the efficacy question for this group of patients.
META-ANALYSES EXAMINING MORTALITY
As death from FN is relatively rare, meta-analyses are necessary to examine the effects of interventions on mortality. Gafter-Gvili et al (2005) undertook a meta-analysis of trials comparing prophylactic antibiotic therapy (fluoroquinolone-based and other regimens) with placebo or no intervention in patients receiving chemotherapy. They analysed 95 randomised controlled trials conducted between 1973 and 2004 involving 9283 patients. The primary outcome was all-cause mortality, and secondary outcomes included infection-related death, febrile episodes, bacteraemia, adverse events and emergence of bacterial resistance. The meta-analysis showed a statistically significant reduction in all-cause mortality of 34% in patients receiving prophylaxis compared with placebo or no intervention, and a 45% reduction in mortality in those receiving fluoroquinolones. Although the relative risk of death did not differ between haematological malignancies and solid tumours in this meta-analysis, the number of solid tumours was much smaller. Consequently, the meta-analysis has been updated (Leibovici et al, 2006) to include data from GIMEMA (Gruppo Italiano Malattie Ematologiche Maligne dell'Adulto) (Bucaneve et al, 2005) and the Significant Trial (Cullen et al, 2005). Among patients with acute leukaemia or those who had undergone bone marrow transplantation, the relative risk of death with fluoroquinolone prophylaxis was 0.67 (0.55-0.83), a one-third reduction compared with the control group, which did not receive prophylaxis. Among patients with solid tumours and lymphomas, fluoroquinolone prophylaxis had a significant impact on all-cause mortality during the first cycle of chemotherapy, with a relative risk of 0.48 (0.26-0.88), compared with controls.
VARIABLES THAT AFFECT FN RISK AND PROPHYLACTIC EFFICACY
The effect of cycle number on the risk of FN has been known but under-appreciated for some years. Studies in small-cell lung cancer and breast cancer have shown that the risk of FN is much greater following the first cycle of chemotherapy compared with later cycles (Holmes et al, 2002;Timmer-Bonte et al, 2005;Vogel et al, 2005). This finding has been confirmed in surveys of larger numbers of patients with lymphoma (Lyman and Delgado, 2003) and multiple tumour types (Crawford et al, 2004). There are several possible explanations for the first-cycle effect. For example, neutropenia that is not accurately predictable for a given patient may be severe in the first cycle, then reduced when subsequent cycles are subject to secondary modification, such as dose reduction. Alternatively, the cytoreductive effects of the first chemotherapy cycle may enable resolution of a cancer-related focus of infection (e.g., beyond an obstructed airway in a patient with lung cancer) or lead to an improvement in performance status.
Other variables that predict increased rate of FN are discussed elsewhere in this supplement (Kelly and Wheatley, 2009).
RATIONAL SELECTION OF PATIENTS FOR ANTIBACTERIAL PROPHYLAXIS
A second publication from the Significant Trial examined chemotherapy cycle effects and other variables that might predict increased efficacy of levofloxacin prophylaxis (Cullen et al, 2007). It showed that the incidence of FN was 8% in first cycles but only 3.3% per cycle thereafter. In addition, prophylaxis was more effective in first cycles (odds ratio 0.42, P<0.001) than in later cycles (odds ratio 0.78). However, FN in cycle 1 predicted a much higher risk of subsequent FN and a trend towards continued prophylactic efficacy in later cycles (Figure 1). Among the cancers studied, the rate of FN was greatest for testicular cancer (27.9%), followed by small-cell lung cancer (17.3%), and lowest for breast cancer (11.5%). Prophylactic efficacy was consistent despite differences in age, sex, performance status, treatment context (adjuvant or advanced) and disease type (except possibly non-Hodgkin's lymphoma).

Figure 1. FN rate per cycle and impact on later events in the Significant Trial (Cullen et al, 2007). FE, febrile episode.
In the light of the pressure to limit antibacterial use (for reasons discussed below), the data on cycle effects support the practice of offering prophylactic levofloxacin in the first cycle of myelosuppressive cancer chemotherapy, and in subsequent cycles only if there has been a fever in cycle 1. These data also show that prophylactic levofloxacin is effective regardless of the patient's age or performance status, or the type of solid tumour (Cullen et al, 2007).
CONCERNS ABOUT ANTIBIOTIC PROPHYLAXIS

Treatment cost
Bucaneve et al (2005) showed that five patients undergoing chemotherapy for cancer needed to be treated with oral levofloxacin to prevent one episode of FN. The average length of prophylaxis in the study was 14 days for patients receiving chemotherapy for solid tumour or lymphoma, and 27 days for patients with acute leukaemia. A 7-day course of levofloxacin in the UK costs £18.10 (BNF, 2009). It therefore costs only £181.00 and £349.07, respectively, to prevent an episode of FN in these two groups. The Significant Trial did not directly address the economic aspects of prophylaxis. However, analysis shows that 23 patients needed to be treated with levofloxacin to prevent one episode of FN (Table 1), and that the cost of prophylaxis for 23 patients for one cycle of chemotherapy is approximately £416.30 (Cullen et al, 2005). The cost of managing one episode of FN in the UK has been estimated as £4064.84 (Holmes et al, 2004), suggesting that antibiotic prophylaxis is cost-effective in these patient groups.
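The arithmetic behind these figures can be made explicit. The following is a minimal sketch (our own illustration, not from the original paper) that reproduces the quoted costs from the number needed to treat and the length of prophylaxis; the per-cycle figure for the Significant Trial is consistent with one 7-day course per cycle, which is assumed below.

```python
# Minimal sketch reproducing the cost arithmetic quoted above.
# Assumption: prophylaxis cost scales linearly with days of treatment.
COST_7_DAY_COURSE_GBP = 18.10  # 7-day course of levofloxacin (BNF, 2009)

def cost_to_prevent_one_episode(nnt, days_of_prophylaxis):
    """Cost of treating `nnt` patients for the given number of days each."""
    cost_per_patient = COST_7_DAY_COURSE_GBP * days_of_prophylaxis / 7.0
    return nnt * cost_per_patient

# Bucaneve et al (2005): NNT = 5; 14 days (solid tumour/lymphoma), 27 days (acute leukaemia)
print(round(cost_to_prevent_one_episode(5, 14), 2))   # 181.00
print(round(cost_to_prevent_one_episode(5, 27), 2))   # 349.07
# Significant Trial: NNT = 23, assuming one 7-day course per cycle
print(round(cost_to_prevent_one_episode(23, 7), 2))   # 416.30
# Each figure is well below the ~GBP 4064.84 estimated cost of managing one FN episode.
```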
Antibiotic resistance
The main concern over the use of prophylactic antibiotics remains the emergence of antibiotic resistance, and its implications both for the individual patient and at ward level.

[Figure 1: FN rate per cycle and impact on later events in the Significant Trial (Cullen et al, 2007). FE, febrile episode.]
There is no doubt that routine prophylactic use of antibiotics can cause colonisation of individual patients with resistant organisms, but the clinical relevance of this is unclear. Bucaneve et al (2005) observed a non-significant increase in the incidence of levofloxacin-resistant Gram-negative bacteraemia among patients receiving levofloxacin, but this did not affect outcomes such as infection-related morbidity or mortality. Gafter-Gvili et al (2005) found that the risk of developing fluoroquinolone resistance did not increase significantly secondary to prophylaxis, and that there was a low incidence of infections caused by resistant bacteria in patients who had received prophylaxis.
There have been several reports of the emergence of fluoroquinolone-resistant bacteria in units that practise fluoroquinolone-based prophylaxis (Razonable et al, 2002; Kern et al, 2005). However, there is no convincing evidence that patients have suffered adverse outcomes as a result. Kern et al (2005) found that after fluoroquinolone prophylaxis had been in use for 10 years, there was an increase in the number of cancer patients colonised or infected with fluoroquinolone-resistant Escherichia coli. The practice of prophylaxis was stopped for 6 months in the unit, and a significant increase in the incidence of Gram-negative bacteraemia was found in patients with cancer, accompanied by a decrease in the proportion of fluoroquinolone resistance in E. coli bacteraemia. After the resumption of prophylaxis, an increase in the proportion of in vitro fluoroquinolone resistance in E. coli bacteraemia was observed, but the incidence of all Gram-negative bacteraemia was reduced to pre-discontinuation levels. The authors suggest that the rate of resistance in their unit is a poor indicator of the potential clinical benefits associated with fluoroquinolone prophylaxis in patients with cancer.
NCCN GUIDELINES
In 2008, the US National Comprehensive Cancer Network published guidelines on the prevention and treatment of cancer-related infections (Segal et al, 2008). The guidelines recommend prophylactic fluoroquinolones for high-risk and intermediate-risk groups, which largely comprise patients receiving high-dose chemotherapy and those with haematological malignancy in whom the anticipated duration of neutropenia is longer than 7 days. For most patients with solid tumours undergoing standard outpatient cyclical chemotherapy, in which the anticipated duration of neutropenia is less than 7 days, prophylactic fluoroquinolones are not recommended, because of the risk of microbial resistance. However, even in the latter circumstances, we believe that when grade 4 neutropenia is expected (e.g., etoposide-containing regimens for testicular and small-cell lung cancers, and regimens containing docetaxel, vinorelbine or doxorubicin), in which the risk of FN is very high, fluoroquinolones should be considered, particularly in the first cycle.
CONCLUSION
There is now convincing evidence that antibiotic prophylaxis reduces the incidence of FN and mortality in patients receiving cytotoxic chemotherapy for acute leukaemia and for patients with solid tumours and lymphoma receiving high-dose chemotherapy (Segal et al, 2008). Therefore, we would argue that antibiotic prophylaxis should be offered routinely to these groups of patients.
Fluoroquinolone prophylaxis also significantly reduces FN in patients with solid tumours or lymphoma who are undergoing cyclical standard-dose myelosuppressive chemotherapy (Cullen et al, 2005). A significant impact on all-cause 30-day mortality was also shown in this group (Leibovici et al, 2006). We believe prophylaxis is indicated during the first cycle of chemotherapy in which there is an expectation of grade 4 neutropenia (below 500 neutrophils per mm³).
Fluoroquinolones are the most effective agents for prophylaxis of FN, and are cost-effective and well tolerated (Reeves, 1986; Del Favero and Menichetti, 1993; Walker, 1999). When choosing between the fluoroquinolones, clinicians should take into account the patterns of pathogens and resistance in their patient population, and remember that, compared with ciprofloxacin, levofloxacin has additional activity against Gram-positive organisms but less anti-pseudomonal activity (MacGowan et al, 1999; Montanari et al, 1999). Compliance is a major concern when considering oral prophylactic therapy, so once-daily levofloxacin may have an advantage in this regard.
The main concern relating to the prophylactic use of antibiotics remains the development of resistance. Although it is established that fluoroquinolone prophylaxis can result in increased fluoroquinolone resistance in treatment centres, there is little evidence of a resultant increase in FN or infection-related mortality (Bucaneve et al, 2005).
There are also important ethical concerns about withholding a proven treatment from current patients for the sake of an unquantified benefit to patients in the future.
Conflict of interest

M Cullen has received consulting fees from sanofi-aventis. S Baijal has declared no financial interests.

[Table 1: Levofloxacin vs placebo to prevent infection after chemotherapy in patients with solid tumours or lymphoma (Cullen et al, 2005).]
"year": 2009,
"sha1": "7e5a4ea8bed6d7730978ea0607937998d13845e6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6605270.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e5a4ea8bed6d7730978ea0607937998d13845e6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The transmission of financial shocks and leverage of financial institutions: An endogenous regime switching framework
We conduct a novel empirical analysis of the role of leverage of financial institutions for the transmission of financial shocks to the macroeconomy. For that purpose we develop an endogenous regime-switching structural vector autoregressive model with time-varying transition probabilities that depend on the state of the economy. We propose new identification techniques for regime switching models. Recently developed theoretical models emphasize the role of bank balance sheets for the build-up of financial instabilities and the amplification of financial shocks. We build a market-based measure of leverage of financial institutions employing institution-level data and find empirical evidence that real effects of financial shocks are amplified by the leverage of financial institutions in a financial-constraint regime. We also find evidence of heterogeneity in how depository financial institutions, global systemically important banks and selected nonbank financial institutions affect the transmission of shocks to the macroeconomy. Our results confirm the leverage ratio as a useful indicator from a policy perspective. JEL Classification: C11, C32, C53, C55, E44, G21
Introduction
Since the Global Financial Crisis (GFC) substantial progress has been made in understanding the interactions of financial constraints, financial market instabilities and the macroeconomy, and in incorporating those in standard macroeconomic models, but further work is needed.
In this paper, we add new empirical evidence on the role of leverage of financial institutions for the transmission of financial shocks to the macroeconomy. In addition, we develop an endogenous regime switching framework with new identification techniques to conduct our novel empirical analysis.
We contribute to the empirical literature by providing new evidence on the role of leverage of financial institutions for the transmission of financial shocks, and to the econometric literature by developing an endogenous regime-switching framework with new identification techniques. The motivation for our focus on leverage is threefold: First, recent literature on structural macroeconomic models emphasizes the role of bank balance sheets for the build-up of financial instabilities and the amplification of economic downturns. Second, leverage encompasses the entire balance sheet of the financial institution and therefore is a broad indicator for signaling financial vulnerabilities. Third, the leverage ratio is a regulatory tool complementary to the (risk-weighted) capital ratio.
We build a market-based measure of leverage of financial institutions, building on Adrian and Brunnermeier (2016) (see also Paul (2020)). We employ financial institution level data to construct a monthly measure of leverage as book assets over market equity. Two arguments suggest a focus on market leverage: First, market leverage developments can signal a situation where financial institutions need to deleverage quickly, for instance if debt is used to finance asset growth, as for broker-dealers (see Adrian and Shin (2014)), or if financial institutions rely primarily on short-term funding (see e.g. Adrian et al. (2011) and the related literature on maturity transformation). Second, market values of equity are more informative about financial institutions' losses compared with book values. Book equity values might not be a timely predictor of bank health. Because book values incorporate information on losses with a delay, financial institutions have time to adjust their book leverage in order to avoid hitting the regulatory limit. 2 Financial institutions (banks and nonbank financial institutions) might be more fragile than their book leverage levels make them appear. Furthermore, the market capitalization of a financial institution is a reflection of the market value of the equity holders' stake, and hence an assessment by market participants of the creditworthiness of the financial institution as a borrower.
Low market-to-book ratios suggest that the assessment of market participants is that financial institutions are more leveraged than their books suggest (see also Adrian et al. (2018)). We highlight the role of the financial fragility implied by market leverage for the transmission of financial shocks in our empirical model, which has been pointed to in a model estimated to match four facts about banks' leverage dynamics (see Begenau et al. (2021)).
In addition to our novel empirical analysis, we develop a regime switching vector autoregressive (RS-VAR) model with time-varying transition probabilities. We show how the Markov-switching structural vector autoregression model framework -proposed in Sims and Zha (2006) and Sims et al. (2008), and employed in the analysis of the transmission of financial crises in Hubrich and Tetlow (2015) -is extended in several dimensions. More precisely, we extend previous literature on Markov-Switching models in two important dimensions: 1.
We allow for time-varying probabilities in RS-VAR models. In regime switching models with time-varying probabilities - sometimes referred to as "endogenous switching" models - the probability of switching regime can vary over time depending on the state of the economy. 2. We propose new identification techniques for RS-VAR models, allowing a range of general, nonrecursive (over-)identification schemes of the structural shocks. Besides nonrecursive zero restrictions, these also include sign restrictions and narrative sign restrictions, thereby bringing the approaches suggested in Antolin-Diaz and Rubio-Ramirez (2018) and Arias, Rubio-Ramirez and Waggoner (2018) to the class of regime switching models. We also allow for different identification schemes in different regimes. We employ the recently developed Dynamic Striated Metropolis-Hastings sampler for high-dimensional models to estimate the posterior distribution of the model. This new framework allows us to address the economic questions raised above.

2 So far the information content of market equity about book losses has been mostly highlighted in the accounting literature, indicating that banks have flexibility in accounting for losses, consistent with evidence in Blattner et al. (2022). This flexibility in accounting for losses can be even more prominent for nonbank financial institutions that are part of our analysis.
The paper is structured as follows. Section 2 discusses the related theoretical and empirical literature, thereby motivating the models estimated in this paper. Section 3 presents our new methodological proposal, outlines the estimation and evaluation of the model and discusses the identification issues that arise for RS-VAR models. Section 4 contains the economic motivation, the data and model specification and the empirical results. Section 5 concludes.
Related economic literature and contribution of this paper
We complement previous empirical studies on financial constraints and economic dynamics by providing empirical evidence on the role of leverage of financial institutions for the transmission of financial shocks to the macroeconomy, with a particular focus on market-based leverage and on differences between the roles of bank leverage and nonbank financial institutions' leverage. We develop a new regime switching SVAR model framework for our empirical analysis that is motivated in part by recent theoretical structural model developments.
We discuss recent related developments in structural theoretical models and recent empirical evidence on nonlinearities in empirical models on the relation between financial constraints and the macroeconomy. We provide and discuss references on the role of leverage versus the capital ratio as well as the motivation for the focus on market leverage in the introduction as well as in the motivation of our empirical analysis in Section 4. A discussion of the relation of our methodological contribution to the literature can be found in Section 3.
Endogenous financial instabilities: Theoretical models
The theoretical literature has made progress recently in incorporating financial instability and associated nonlinearities into macroeconomic models. Structural models such as Kiyotaki and Moore (1997) as well as Brunnermeier and Sannikov (2014) illustrate how systemic risk might arise endogenously, determined by the choices of the model's decision makers. In Kiyotaki and Moore (1997) collateral constraints play a key role for the propagation and amplification of shocks, while in Brunnermeier and Sannikov (2014) the reduction in the volatility of output and asset prices leads to increased leverage of financial institutions. 3
Bank balance sheets and leverage of financial institutions
More recently, a number of authors introduced a more sophisticated financial sector into an otherwise standard macroeconomic model. Financial intermediaries' balance sheets have implications for the institutions' access to funds and liquidity that affect their lending activities and thereby economic activity. A fall in the value of a bank's tradable assets and a decline in loan quality can adversely affect the bank's capital. The fall in asset prices will affect lending activity via the collateral channel. Banks have been found to limit their deposit taking in response to a decline in net worth (Gertler and Kiyotaki, 2010, 2015; Gertler and Karadi, 2013).
The importance of the bank capital channel will also depend on the extent to which nonbank financial institutions can substitute for lending and liquidity provision by banks (see e.g. Durdu and Zhong (2019)).
Leverage of financial institutions is an important characteristic in the presence of large and abrupt asset price movements (see e.g. Gertler and Gilchrist (2018) on the role of leverage, and Adrian and Brunnermeier (2016) and Paul (2020) for more details on the mechanisms).
The leverage ratio of a bank is the ratio of total assets to shareholder equity. Bank leverage is an indicator for external financing opportunities by banks and for risk-taking by banks. It is a cyclical indicator and can amplify the transmission of shocks.
Asset prices affect the balance sheet and thereby affect leverage both in an accounting sense as well as via resulting changes in the agents' behavior (see Paul (2020)). Financial vulnerabilities build up in boom times, when banks enlarge their balance sheets and increase their leverage, relying more on debt as opposed to equity. In the lead-up to the GFC and Great Recession there was a pronounced rise in leverage in the banking sector. During the GFC stock prices fell dramatically, increasing the already high leverage of banks even further.
At the same time, banks' market value of assets in terms of share holdings was also shrinking, reducing the access of banks to external finance (see e.g. Ferrante (2019)). 4 At some point, banks had to sharply reduce the provision of loans to households and firms to obtain liquidity to avoid a bank run and insolvency (see e.g. Gertler et al. (2016) and Gertler et al. (2020)). Consequently, banks were more constrained in raising funds and therefore were providing fewer and lower-volume loans. 5 Additionally, when banks realized that this shock was not temporary, they deleveraged given a reduced risk tolerance, adding further constraints. 6 The discussion around Basel III and its implementation has aimed to address capital and leverage requirements as lessons from the GFC.

3 Other contributions that incorporate financial instabilities and nonlinearities include Mendoza (2010), He and Krishnamurthy (2019) and Boissay et al. (2016).
Financial constraints and economic dynamics: empirical evidence
Only a limited number of empirical contributions allow for nonlinearities in the relation between financial constraints and the macroeconomy, while a growing empirical literature has documented stylized facts on the role of financial factors in business cycles and for the development of a financial crisis. 7 Hubrich and Tetlow (2015) investigate whether a financial crisis is just a manifestation of amplified shocks or whether the transmission of shocks does actually change. They find empirical support for the hypothesis of a change in the transmission of financial shocks to the US macroeconomy in episodes of high stress. Hubrich et al. (2013) analyse the effects of financial shocks on the macroeconomy for EU and OECD countries and find evidence for nonlinearities and heterogeneity across countries in the transmission of financial shocks to the macroeconomy. Other studies also highlight empirical nonlinearities, arguing that transmission channels may operate differently depending on underlying conditions, e.g. on the credit-to-GDP gap (for instance Aikman et al. (2020)), or find different effects depending on the nature of the financial shocks, i.e. whether shocks represent easing or adverse financial conditions (see Barnichon et al. (2019)). Brunnermeier et al. (2019), employing a structural VAR identified with heteroscedasticity, also find significant output effects of financial stress shocks, measured as spread shocks, including a corporate bond (GZ) spread shock and an interbank lending spread shock as proxied by the 3-month Eurodollar rate over the 3-month Treasuries. A recent strand of literature studies the distribution of future real GDP growth as a function of current financial and economic conditions or bank capital using quantile regression. They find that the estimated lower quantiles of the distribution of future GDP growth exhibit strong variation as a function of current financial conditions (see for instance Adrian et al. (2019) and Boyarchenko et al. (2020)). 8 We complement that literature by taking a parametric approach.

4 Ferrante (2019), building on the framework of Gertler and Karadi (2011), extends a standard New Keynesian model to include a rich financial system in which financially constrained banks lend to firms and homeowners via defaultable long-term loans. In this model financial shocks affecting lending spreads can bring about a widespread recession that has at its core a deterioration in the equity of financial intermediaries and in their leverage capacity.

5 For a recent paper suggesting an endogenous regime switching DSGE model to analyse financial crises in Mexico, see Benigno et al. (2020).

6 Note that asset price driven cycles are more likely in market-based banking systems (IMF, 2009). The build-up of leverage is more likely in market-based systems due to the effective use of collateralization and sophisticated risk management and information-sharing strategies.

7 Credit rises in the run-up to financial crises (Schularick and Taylor, 2012) and recessions associated with financial crises are usually deeper than normal recessions, especially if they are preceded by a build-up of credit (Jordà et al., 2013).
The methodology
Most of the methodological literature focuses on models with constant probabilities of Markov switching. Following the seminal paper by Hamilton (1989), a number of contributions extended the basic Markov switching model and its estimation procedure suggested in that paper in different dimensions, see for instance Chauvet (1998), Kim and Nelson (1999), Frühwirth-Schnatter (2006), Sims and Zha (2006) and Sims et al. (2008).
Some papers propose a class of time-varying probability Markov switching regression models, including Filardo (1994), Diebold et al. (1994), Kim (2004), Kim et al. (2008) as well as Bazzi et al. (2017) and Chang et al. (2017). In these papers the probability of regime switching depends on certain variables of interest. They assume a functional form for the dependence of the probability on the state of the economy. Most papers employ a logistic function; sometimes a probit function is used (e.g. Kim et al. (2008)). Only a few recent papers allow for occasionally binding constraints in a VAR context that imply some endogeneity of regime switching; those include Mavroeidis (2021), Aruoba et al. (2021) and Hayashi and Koeda (2019). 9 In this paper, we propose a Regime-Switching Vector Autoregressive (RS-VAR) model with time-varying transition probabilities, building on and extending the framework presented in Sims et al. (2008).
We extend previous literature in several dimensions: 1. We allow for a time-varying transition matrix in a regime-switching structural VAR model; 2. we allow for a range of general, nonrecursive identification schemes, including sign restrictions and narrative sign restrictions that might be different in different regimes; 3. We highlight and discuss identification issues in Regime-switching Structural VAR models.
The Regime-Switching Model with time-varying transition matrix
For 1 ≤ t ≤ T, let y_t be an n-dimensional vector of endogenous variables, let z_t be a k-dimensional vector of exogenous variables, and let s_t^c and s_t^v be discrete latent variables with s_t^c ∈ {1, · · · , h_c} and s_t^v ∈ {1, · · · , h_v}. We propose a structural vector autoregression with time-varying transition matrix (RS-SVAR),

A_0(s_t^c) y_t = A_+(s_t^c) x_t + Ξ^{-1}(s_t^v) ε_t,    (1)

where the predetermined vector x_t is [y_{t−1}', · · · , y_{t−p}', z_t']' and is of dimension m = np + k. 10 The exogenous structural shocks ε_t are n-dimensional and assumed to be standard normal and independent of the regime processes s_t^c and s_t^v. The coefficient matrix A_0(s_t^c) is n × n and invertible, A_+(s_t^c) is n × m, and Ξ(s_t^v) is n × n and diagonal, with positive diagonal elements.
We call s_t^c the coefficient regime and s_t^v the variance regime. We define the overall regime process to be s_t = h_v(s_t^c − 1) + s_t^v, which can take on h = h_c h_v distinct values in {1, · · · , h}.
The coefficient and variance regime processes are assumed to be independent, though this condition could be relaxed. 11 We will denote the matrices of probabilities governing the transition of the processes s_t^c and s_t^v from time t to time t + 1 by P_{t+1|t}^c and P_{t+1|t}^v, respectively. The matrix P_{t+1|t}^c is h_c × h_c and the matrix P_{t+1|t}^v is h_v × h_v. The element in row i and column j of these matrices is the probability of transiting from regime j at time t to regime i at time t + 1. The elements of these matrices are all non-negative and the columns of each of these matrices must sum to one. In general, P_{t+1|t}^c and P_{t+1|t}^v can depend on the endogenous variables y_1, · · · , y_t, the exogenous variables z_1, · · · , z_t, and the matrices A_0(·), A_+(·), and Ξ(·). This implies that P_{t+1|t}^c and P_{t+1|t}^v could also depend on the exogenous shocks ε_1, · · · , ε_t. In our empirical examples, the transition matrices will depend only on y_{t−ℓ}, · · · , y_t, for some fixed non-negative value of ℓ.
In addition to the time-varying transition matrices, to fully specify the regime processes the initial probabilities must be specified. We denote these by p_0^c, which is an h_c-vector with non-negative elements that sum to one, and p_0^v, which is an h_v-vector with non-negative elements that sum to one.
The SVAR parameters of the model will be A_0(·), A_+(·), and Ξ(·). Both the transition matrices and the initial conditions can depend on the SVAR parameters and perhaps some additional vector of parameters that we will denote by q. The transition matrices, of course, could also depend on the endogenous and exogenous data as described above. We will compactly represent all the parameters by θ = (A_0(·), A_+(·), Ξ(·), q). For now, the only restrictions on the SVAR parameters are that the A_0(·) are invertible and the Ξ(·) are diagonal matrices with positive diagonal.
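To make the mechanics of Equation (1) concrete, the following is a minimal simulation sketch; it is our own illustration, not the authors' code. Given regime paths, parameter containers for A_0(·), A_+(·) and Ξ(·), and exogenous data, it solves Equation (1) for y_t each period. The function name, the zero initial conditions and the list-based parameter containers are illustrative assumptions.

```python
import numpy as np

def simulate_rs_svar(A0, Ap, Xi, s_c, s_v, z, p, rng):
    """Simulate y_1..y_T from Equation (1): A0[sc] y_t = Ap[sc] x_t + inv(Xi[sv]) eps_t,
    given regime paths s_c, s_v (0-based indices) and exogenous data z. A0, Ap, Xi
    are lists indexed by regime; x_t stacks p lags of y and z_t."""
    n = A0[0].shape[0]
    T = len(s_c)
    y = [np.zeros(n) for _ in range(p)]          # zero initial conditions (illustrative)
    out = []
    for t in range(T):
        x_t = np.concatenate([y[-j] for j in range(1, p + 1)] + [z[t]])
        eps = rng.standard_normal(n)             # structural shocks, iid N(0, I)
        rhs = Ap[s_c[t]] @ x_t + np.diag(1.0 / np.diag(Xi[s_v[t]])) @ eps
        y_t = np.linalg.solve(A0[s_c[t]], rhs)   # y_t = A0^{-1} (A+ x_t + Xi^{-1} eps_t)
        y.append(y_t)
        out.append(y_t)
    return np.array(out)
```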
Identification in RS-SVAR models
One of the contributions of this paper is to highlight identification issues and to propose new identification schemes for RS-SVARs. Constant parameter SVAR models with homoskedastic Gaussian shocks are not identified, but constant parameter models with heteroskedastic shocks are identified, at least up to the ordering of the equations and the sign of each equation, see Rigobon (2003). A similar result will hold for the RS-SVAR models we consider. For constant parameter structural VAR models with heteroskedastic shocks, the only identification issues are determining the ordering of the equations and the sign of each equation. For RS-SVAR models these identification issues are present, but there are also two additional identification issues.
Identification through heteroskedasticity
Before stating our identification result, we need a restriction to pin down the relationship between A_0(·), A_+(·), and Ξ(·). If D is any diagonal matrix with positive diagonal, then the system given by Equation (1) and the system D A_0(s_t^c) y_t = D A_+(s_t^c) x_t + D Ξ^{-1}(s_t^v) ε_t are observationally equivalent. 12 Thus a restriction is needed to force D to be the identity. In the literature, some authors have chosen the restriction Ξ(1) = I_n. While this certainly solves the identification issue, it makes the first variance regime special. We will use the restriction that, for each 1 ≤ j ≤ n, the j-th diagonal elements of Ξ(1), · · · , Ξ(h_v) sum to one. This restriction treats all the variance regimes symmetrically and works better with the usual priors imposed on the Ξ(·). With this restriction, we have the following result.
Proposition 1 Suppose the span of the predetermined data is all of R^m and the unconditional probability of being in each overall regime is non-zero for every t. If h_v > 1, then, for almost all parameter values, the RS-SVAR model given by Equation (1) is identified up to the ordering and sign of the equations and the ordering of the regimes.
Proof. See Appendix A.
The hypotheses of Proposition 1 are relatively mild. The first hypothesis is equivalent to the exogenous variables not being collinear and there being at least m observations. The second hypothesis says that all the regimes are accessible. If all the initial probabilities are positive or if all the elements of the transition matrices are non-zero, then this hypothesis will be satisfied. Even if neither of these is true, as long as the positions of the zeros in the initial probabilities and the non-zero elements of the transition matrix do not exactly match up, then the hypothesis will be satisfied. In the next section, we discuss the precise meaning of the statement that the model is "identified up to the ordering and sign of the equations and the ordering of the regimes."
Other Identification Issues in RS-VAR models
As we saw in the previous section, multiplication of the system given by Equation (1) by an invertible matrix can result in an observationally equivalent system. In this section, we discuss the need for three more restrictions, all arising from multiplication of the system given by Equation (1) by an invertible matrix. This will make precise what we mean by "identified up to the ordering and sign of the equations and the ordering of the regimes" (see Proposition 1).
If one were to permute the rows in Equation (1), then one would obtain an observationally equivalent system. More formally, if Q is a permutation matrix, then the system given by Equation (1) and the system Q'A_0(s_t^c) y_t = Q'A_+(s_t^c) x_t + (Q'Ξ^{-1}(s_t^v) Q)(Q'ε_t) are observationally equivalent. 13 So, one must have a restriction that picks a unique ordering of the rows out of the n! possible orderings. Since the rows in Equation (1) correspond to equations and each equation contains a single shock, ordering the rows is referred to as ordering the equations or ordering the shocks. This is also referred to as identifying, or naming, the equations or shocks. For instance, we could order the equations so that the financial shock always appears in the first equation. In this case we have identified or named the financial shock. If no such restriction is imposed, then we say the system given by Equation (1) is identified up to an ordering of the equations. The restrictions we will employ to order the equations will be discussed in Section 3.3.
If one were to multiply any equation in any coefficient regime in the system given by Equation (1) by minus one, then one would obtain an observationally equivalent system. More formally, if D(s_t^c) is a diagonal matrix with plus or minus ones along the diagonal, then the system given by Equation (1) and the system given by D(s_t^c) A_0(s_t^c) y_t = D(s_t^c) A_+(s_t^c) x_t + Ξ^{-1}(s_t^v) (D(s_t^c) ε_t) are observationally equivalent, since D(s_t^c) and Ξ^{-1}(s_t^v) are diagonal and hence commute, and D(s_t^c) ε_t is standard normal. For each coefficient regime, we will use a restriction from this class of restrictions. In particular, for each coefficient regime and equation, we will restrict the sign of that equation so that the impulse response of some variable at some horizon to a positive shock in that equation has a particular sign. In the above example, where the financial shock is ordered first, one could require that the contemporaneous response of output growth to a positive financial shock be negative. In addition, one could require that the contemporaneous response of the financial conditions index to a positive financial shock be positive. These would be consistent with a positive financial shock being detrimental to the outlook for growth. If no such conditions were imposed, then we say the system given by Equation (1) is identified up to sign.

13 If σ(·) is a permutation of (1, · · · , n), then Q = [e_{σ(1)}, · · · , e_{σ(n)}], where e_j is the j-th column of the n × n identity matrix, is the column permutation matrix associated with σ(·). Permutation matrices are orthogonal, and if A is any n × n matrix, then AQ permutes the columns of A by σ(·) and Q'A permutes the rows of A by σ(·). So, if Q is a permutation matrix and D is a diagonal matrix, then Q'DQ permutes the diagonal elements of D. Thus, Q'A_0(·) is invertible, Q'Ξ^{-1}(·)Q is a diagonal matrix with positive diagonal, and Q'ε_t is standard normal.
In addition to identifying (or ordering) the equations, one must also identify (or order) both the coefficient and variance regimes. If σ_c(·) is a permutation of (1, · · · , h_c) and σ_v(·) is a permutation of (1, · · · , h_v), then we can define new discrete latent variables by s̃_t^c = σ_c(s_t^c) and s̃_t^v = σ_v(s_t^v). If Q_c is the h_c × h_c column permutation matrix associated with σ_c(·) and Q_v is the h_v × h_v column permutation matrix associated with σ_v(·), then the transition matrix and initial probabilities for s̃_t^c are Q_c' P_{t+1|t}^c Q_c and Q_c' p_0^c, and the transition matrix and initial probabilities for s̃_t^v are Q_v' P_{t+1|t}^v Q_v and Q_v' p_0^v. Furthermore, the system given by Equation (1) is observationally equivalent to the system given by Ã_0(s̃_t^c) y_t = Ã_+(s̃_t^c) x_t + Ξ̃^{-1}(s̃_t^v) ε_t, where Ã_0(σ_c(k)) = A_0(k), Ã_+(σ_c(k)) = A_+(k) and Ξ̃(σ_v(k)) = Ξ(k). Thus restrictions are also needed to pick a unique ordering of the coefficient and variance regimes.
Sign Restrictions
Sign restrictions on the impulse responses have long been used to identify the shocks in the case of constant parameters with homoskedastic shocks, though this identification is only set identification. In the case of regime switching parameters with heteroskedastic shocks, sign restrictions on the impulse responses can be used to identify both the equations and regimes.
Even better, because there are only finitely many ways to identify the equations or regimes, in some cases sign restrictions can uniquely identify the equations or regimes, not just set identify. These ideas can best be explained by an example. For instance, suppose that one of the coefficient regimes is called the financial constraint regime and will be ordered first and that one of the shocks is the financial shock and will be ordered first. Note that the ordering of the equations must be the same across all coefficient regimes, so the financial shock would have to be ordered first in all the coefficient regimes. One could impose the restriction that the contemporaneous impulse response, conditional on being in the financial constraint regime, to a positive financial shock is positive for both the financial conditions index and leverage and negative for output growth and interest rates. For some parameter values, in no regime is there a shock whose impulse responses satisfy this pattern, so that parameter value would be rejected. In this sense, multiple sign restrictions on the impulse responses to a given shock in a given regime imply that the model is overidentified. 14 For other parameter values, there could be a unique regime and shock whose impulse response satisfies this pattern. In this case the sign restrictions uniquely determine the financial constraint regime and the financial shock, and this regime and equation could be ordered first if that were not already the case.
Finally, it could be the case that there are multiple regimes or shocks whose impulse responses satisfy this pattern. In this case either the financial constraint regime or the financial shock, or both, would not be uniquely determined. As the number of sign restrictions increases, one would expect the number of parameter values that are rejected to increase and the number of parameter values that do not uniquely determine both the regime and equation to decrease. If different sign restrictions on the impulse responses to the same shock across all the different regimes are imposed, then either the parameter will be rejected or the regimes will be uniquely determined. Similarly, if different sign restrictions on the impulse responses to all the different shocks, in any regime, are imposed, then either the parameter will be rejected or the equations will be uniquely determined. In this example, only one regime and one equation were being determined, though this idea could easily be extended to determining multiple regimes, i.e. partial identification, and equations, or all of the regimes and equations.

14 A single sign restriction on the impulse responses to a given shock in a given coefficient regime does not impose overidentifying restrictions because the sign of any equation in any coefficient regime could be changed. Alternatively, as we saw in Section 3.2.2, a single sign restriction on the impulse response to a given shock in a given regime could be thought of as a restriction determining the sign of the given equation in the given regime.
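A check of this kind can be sketched as follows; this is our own illustration, not the authors' code. For a given draw of (A_0, Ξ) in one regime, it computes the contemporaneous impulse responses and returns the shocks whose sign pattern matches, allowing for the free sign flip of each equation. The function name and the sign-encoding convention are assumptions.

```python
import numpy as np

def matching_shocks(A0_regime, Xi_regime, signs):
    """Return the columns (shocks) of the contemporaneous impulse response
    IRF0 = inv(A0) @ inv(Xi) whose signs match `signs` (+1/-1 per variable,
    0 = unrestricted), allowing for the free sign flip of each equation."""
    irf0 = np.linalg.solve(A0_regime, np.diag(1.0 / np.diag(Xi_regime)))
    hits = []
    for j in range(irf0.shape[1]):
        col = irf0[:, j]
        for flip in (1.0, -1.0):                 # sign of each equation is free
            if all(s == 0 or np.sign(flip * col[i]) == s
                   for i, s in enumerate(signs)):
                hits.append(j)
                break
    return hits

# Example pattern for the financial shock in the financial constraint regime:
# negative for output, inflation, interest rate; positive for FCI and leverage:
# signs = [-1, -1, -1, +1, +1]. If exactly one shock matches in exactly one
# regime, the draw uniquely identifies the financial shock; if none matches,
# the draw is rejected.
```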
Zero Restrictions
Zero restrictions on the contemporaneous and predetermined parameters have also long been used to identify the shocks in the case of constant parameters with homoskedastic shocks.
These ideas can also be used to identify the regimes and equations. Given a coefficient regime, if the pattern of zero restrictions on the contemporaneous and predetermined parameters in that regime is different from the pattern in all other coefficient regimes, then the given coefficient regime is uniquely determined.

Narrative information about historical episodes can also be used to order the regimes. In October of 2008, at the height of the Global financial crisis, there was a high probability that the economy was in what we will call the financial constraint regime, which we will order first. We can use this to uniquely determine the financial constraint regime. In our case, since there are only two coefficient regimes, if we can determine the financial constraint regime, then the other regime, which we call the normal regime, will also be uniquely determined. The first step is to define what we mean when we say that there is high probability that we are in coefficient regime k at time t. To do this one must choose a cutoff probability, and if the smoothed probability that we are in coefficient regime k at time t is greater than the cutoff probability, then we say that there is a high probability that we are in coefficient regime k at time t. If the cutoff probability is greater than or equal to 0.5, then there is either a unique coefficient regime that is of high probability at time t or no coefficient regime that is of high probability at time t. Among the parameter values for which there is a unique coefficient regime that is of high probability in October of 2008, we will order the coefficient regimes so that this regime is first and call the first regime the financial constraint regime. If one was unsure of the exact period in which the economy was in the financial constraint regime, then one could choose a window about October of 2008, and then say the financial constraint regime was the regime that was in high probability over most of this window. For our models, we found that neither the choice of window about October of 2008 nor the cutoff probability, within reason, affected the determination of the financial constraint regime.
Similar ideas could be used to identify the equations. In our model, we wish to uniquely determine what we will call the financial shock. Again, in October of 2008, there was a massive decline in industrial production, and it is our contention that the financial shock caused most of this decline. We must first define what we mean when we say that "shock k caused most of the decline in industrial production at time t". For each overall regime, one can compute the expected value of each time t shock, conditional on the overall regime at time t. One could then compute the expected contemporaneous impulse response of industrial production to each time t shock, conditional on the overall regime at time t. Finally, using the smoothed probabilities, one could then compute the expected contemporaneous impulse response of industrial production to each time t shock. The time t shock with the largest negative expected contemporaneous impulse response of industrial production caused most of the decline in industrial production at time t. Except in knife-edge cases, there is a unique shock that caused most of the decline in industrial production in October of 2008, and we will order the equations so that this shock is first and call the first shock the financial shock. As with ordering the regimes, if one was unsure of the exact period in which the financial shock was dominant, then one could choose a window about October of 2008, and then say the financial shock was the shock that caused the most cumulative decline in industrial production over this window.
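A hedged sketch of this computation (our own, with hypothetical function and argument names) is the following; it weighs the regime-conditional contemporaneous impulse responses and expected shocks by the smoothed regime probabilities and picks the shock with the most negative expected effect on industrial production.

```python
import numpy as np

def dominant_shock(irf0_by_regime, e_shock_by_regime, smoothed_probs_t, ip_index):
    """Pick the shock with the largest negative expected contemporaneous effect on
    industrial production at time t. irf0_by_regime[r]: n x n contemporaneous IRF
    in overall regime r; e_shock_by_regime[r]: expected time-t shocks given regime
    r; smoothed_probs_t: smoothed regime probabilities at time t."""
    n = irf0_by_regime[0].shape[0]
    effect = np.zeros(n)
    for r, pr in enumerate(smoothed_probs_t):
        # expected contribution of each shock to industrial production in regime r
        effect += pr * irf0_by_regime[r][ip_index, :] * e_shock_by_regime[r]
    return int(np.argmin(effect))   # most negative expected effect on IP
```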
Identifying the Variance Regimes
In some ways, identifying the variance regimes is more straightforward. The inverses of the diagonal elements of Ξ(·) directly scale the structural shocks, and thus have an economic interpretation. For instance, in our models we are interested in the financial shock, which we order first. So, we order the variance regimes, after we have ordered the coefficient regimes, so that the first diagonal elements of the Ξ(·) are in increasing order. This would imply that the first variance regime has the most variability, at least in terms of the effect of the financial shock. We should point out that under this ordering, in the first variance regime the impulse response to a financial shock will have the largest response for all variables at all horizons in all coefficient regimes.
The Transition Matrices and Initial Probabilities
For the variance regime we will use a constant transition matrix and for the coefficient regime we will have time-varying transition matrices of a particular functional form. Our methodology will certainly allow for both the coefficient and variance regimes to have time-varying transition matrices of completely general functional forms, but in the interest of parsimony, we restrict the variance regime to have a constant transition matrix and the diagonal elements of the coefficient regime transition matrix to be a logistic transformation of a linear function of the endogenous variables. The off-diagonal elements of the coefficient regime transition matrix will be a constant times one minus the diagonal element from the same column. For 1 ≤ i, j ≤ h_v, we will denote the constant elements in the variance regime transition matrix by q_{i,j}^v. Let p_{t+1|t}(i, j) denote the time-varying probability of switching from regime j at time t to regime i at time t + 1. We assume that the time-varying probability of staying in the j-th regime at time t + 1, given that we are in the j-th regime at time t, is of the form

p_{t+1|t}(j, j) = exp(γ̄_j + Σ_{k=0}^{ℓ} γ_{j,k}' y_{t−k}) / (1 + exp(γ̄_j + Σ_{k=0}^{ℓ} γ_{j,k}' y_{t−k})).    (2)

The scalars γ̄_j are the location parameters and the n-vectors γ_{j,k} are the slope parameters. In keeping with our desire to be parsimonious, in most of our examples ℓ = 1 and only a few of the elements of γ_{j,k} will be allowed to be non-zero. We will gather all of these parameters that are not restricted to zero into a vector that we will denote by γ.
In the case of only two coefficient regimes, the diagonal elements completely determine the transition matrix. If there are more than two coefficient regimes, then for i ≠ j,

p_{t+1|t}(i, j) = q_{i,j}^c (1 − p_{t+1|t}(j, j)),

where the q_{i,j}^c are non-negative constants such that Σ_{i=1}^{h_c} q_{i,j}^c = 1, under the convention that q_{j,j}^c = 0. Let q^c denote the vector containing all the q_{i,j}^c that are not restricted to be zero.
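Assembled together, the coefficient regime transition matrix can be computed as in the following sketch (our own illustration; the function and variable names are hypothetical). Each column j has a diagonal element given by Equation (2) and off-diagonal elements q_{i,j}^c times one minus that diagonal element.

```python
import numpy as np

def coefficient_transition_matrix(y_lags, gamma_bar, gamma, q_c):
    """Build the h_c x h_c time-varying transition matrix. y_lags: [y_t, ..., y_{t-l}];
    gamma_bar[j] and gamma[j][k]: location and slope parameters of Equation (2);
    q_c[i][j]: constants distributing the exit probability from regime j (each
    column sums to one, with q_c[j][j] = 0). Column j holds transitions out of j."""
    h_c = len(gamma_bar)
    P = np.zeros((h_c, h_c))
    for j in range(h_c):
        z = gamma_bar[j] + sum(g @ y for g, y in zip(gamma[j], y_lags))
        stay = 1.0 / (1.0 + np.exp(-z))          # logistic transform, Equation (2)
        P[j, j] = stay
        for i in range(h_c):
            if i != j:
                P[i, j] = q_c[i][j] * (1.0 - stay)
    return P                                      # columns sum to one by construction
```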
We will choose the initial probabilities in both the coefficient and variance regime processes so that all the regimes have equal probability. This choice is mandated by the fact that we want the initial probabilities to be invariant to permutations of either of the coefficient or variance regime. If this was not the case, then the initial probabilities would determine, at least partially, the ordering of the coefficient and variance regimes. Unless one was very sure about the initial regime, this would not likely result in a satisfactory condition for ordering the coefficient and variance regimes.
In the case of the variance regime, which has constant transition matrix, choosing the initial variance probabilities to be the ergodic probabilities would also be invariant to permutations of the variance regime. This would be a permissible choice and not increase the number of parameters. However, in most cases this would not deliver substantially different results.
There are no parameters controlling the initial probabilities and the parameters controlling the transition matrices are (q v , q c , γ). As stated before, we gather all of these parameters into a vector that we will denote by q.
The Priors
In this section we describe the priors that we will employ.
For each 1 ≤ k c ≤ h c , we will use the same Sims-Zha prior on each (A 0 (k c ), A + (k c )).
For the hyperparameters in these priors, we will follow the recommendations in Sims and Zha (1998) for monthly data.
We will use the uniform prior across the ξ_{k_v,j}. This can be easily implemented as a Dirichlet distribution over (ξ_{1,j}, · · · , ξ_{h_v,j}), for each 1 ≤ j ≤ n, with all the Dirichlet hyperparameters equal to one.
For the variance regime transition matrices, the parameters are q_{i,j}^v, for 1 ≤ i, j ≤ h_v. We will use a Dirichlet prior on (q_{1,j}^v, · · · , q_{h_v,j}^v), for each 1 ≤ j ≤ h_v. For the off-diagonal elements, we will choose the Dirichlet hyperparameters to all be equal to one. For the diagonal elements, we will choose the hyperparameter to match the desired duration of each variance regime, though we will assume that the duration is the same across all variance regimes.
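As a sketch of how such a hyperparameter could be chosen (our own illustration, not the authors' calibration): with the off-diagonal hyperparameters equal to one, the prior mean of the diagonal transition probability is a_jj / (a_jj + h_v − 1), and a desired expected duration d implies a persistence of 1 − 1/d; solving for a_jj gives the hyperparameter.

```python
# Assumption: off-diagonal Dirichlet hyperparameters equal one, so the prior
# mean of the diagonal element is a_jj / (a_jj + h_v - 1); expected regime
# duration is 1 / (1 - p_stay).
def diagonal_hyperparameter(duration, h_v):
    p_stay = 1.0 - 1.0 / duration                # persistence implied by duration
    return p_stay * (h_v - 1) / (1.0 - p_stay)   # solves a/(a + h_v - 1) = p_stay

print(diagonal_hyperparameter(duration=12, h_v=2))   # 11.0 for a one-year duration
```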
For the coefficient regime transition matrices, the parameters are (q^c, γ). We will use a Dirichlet prior on (q_{1,j}^c, · · · , q_{j−1,j}^c, q_{j+1,j}^c, · · · , q_{h_c,j}^c), for each 1 ≤ j ≤ h_c. We will choose the Dirichlet hyperparameters to all be equal to one, so the distribution will be uniform. For the γ, we will use independent normal distributions. We recommend standardizing all the variables controlling the diagonal elements of the coefficient regime transition matrices. In this case we find that using independent normal distributions works well. Alternatively, if one had prior opinions about the means and variances of the variables controlling the diagonal elements of the coefficient regime transition matrices, then these could be used to set the means and variances of the elements of γ.
Because we assume that the initial probabilities are all equal, there are no parameters associated with the initial probabilities.
The Posterior, Filtered and Smoothed Probabilities
In this section, we give formulas for the posterior, filtered and smoothed probabilities. For completeness, these formulas will be explicitly derived in Appendix B. To derive expressions for the posterior and filtered probabilities, we need the following assumption about the exogenous variables.
Assumption 1. For every t, p(z_{t+1} | S_{t+1}, Y_t, Z_t, θ) = p(z_{t+1} | Y_t, Z_t), where S_{t+1} denotes the path of regimes through time t + 1.

The idea is that p(z_{t+1} | Y_t, Z_t) is the true, but unknown, distribution of z_{t+1}, conditional on Y_t and Z_t, and knowing the path of regimes or the model parameters provides no additional information. Note that this assumption implies that the conditional distribution of z_t does not depend on the regimes. Under this assumption, if p(θ) is the prior, then the posterior is proportional to

p(θ) ∏_{t=1}^{T} Σ_{s_t} p(y_t | s_t, Y_{t−1}, Z_t, θ) p(s_t | Y_{t−1}, Z_{t−1}, θ).

In order to compute the posterior, we must be able to explicitly compute the conditional likelihood, p(y_t | s_t, Y_{t−1}, Z_t, θ), and the filtered probabilities, p(s_t | Y_{t−1}, Z_{t−1}, θ). For RS-SVAR models, the conditional likelihood is normal and easy to compute. Given the initial probabilities, the filtered probabilities can be recursively computed via the Hamilton filter. The recursive formulas are

p(s_t | Y_{t−1}, Z_{t−1}, θ) = Σ_{s_{t−1}} p(s_t | s_{t−1}, Y_{t−1}, Z_{t−1}, θ) p(s_{t−1} | Y_{t−1}, Z_{t−1}, θ),

p(s_t | Y_t, Z_t, θ) = p(y_t | s_t, Y_{t−1}, Z_t, θ) p(s_t | Y_{t−1}, Z_{t−1}, θ) / Σ_{s_t} p(y_t | s_t, Y_{t−1}, Z_t, θ) p(s_t | Y_{t−1}, Z_{t−1}, θ).

Often, one is more interested in the smoothed probabilities, p(s_t | Y_T, Z_T, θ). These can be computed using backward recursion in the Hamilton smoother. The formula for the backward recursion is

p(s_t | Y_T, Z_T, θ) = p(s_t | Y_t, Z_t, θ) Σ_{s_{t+1}} p(s_{t+1} | s_t, Y_t, Z_t, θ) p(s_{t+1} | Y_T, Z_T, θ) / p(s_{t+1} | Y_t, Z_t, θ).

Since p(s_T | Y_T, Z_T, θ) can be obtained from the last step of the Hamilton filter, we can start the backward recursion at s_T and then recursively compute the smoothed probabilities.
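The recursions above map directly into code. The following is a minimal sketch (our own illustration, not the authors' implementation), assuming the conditional likelihoods and the time-varying transition matrices have already been evaluated.

```python
import numpy as np

def hamilton_filter(lik, P, p0):
    """Hamilton filter. lik[t, s] = p(y_t | s_t = s, ...); P[t] is the transition
    matrix from t to t+1, with (i, j) element P(s_{t+1} = i | s_t = j); p0 holds
    the initial probabilities. Returns filtered and one-step-ahead predicted
    probabilities."""
    T, h = lik.shape
    filt = np.zeros((T, h))
    pred = np.zeros((T, h))
    prob = p0
    for t in range(T):
        pred[t] = prob
        joint = lik[t] * prob                 # p(y_t, s_t | past)
        filt[t] = joint / joint.sum()         # Bayes update: p(s_t | Y_t)
        prob = P[t] @ filt[t]                 # prediction: p(s_{t+1} | Y_t)
    return filt, pred

def hamilton_smoother(filt, pred, P):
    """Backward recursion for the smoothed probabilities p(s_t | Y_T)."""
    T, h = filt.shape
    smooth = np.zeros((T, h))
    smooth[-1] = filt[-1]
    for t in range(T - 2, -1, -1):
        # P[t] maps period t to t+1; its columns index the period-t regime
        smooth[t] = filt[t] * (P[t].T @ (smooth[t + 1] / pred[t + 1]))
    return smooth
```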
Estimation
Having efficient and accurate samplers for simulating the posterior distribution is crucial for the estimation of this class of models; as noted above, we employ the recently developed Dynamic Striated Metropolis-Hastings sampler for high-dimensional models.

Our empirical focus on leverage is motivated by the considerations set out in the introduction: recent structural models emphasize the role of bank balance sheets; leverage encompasses the entire balance sheet and is therefore a broad indicator for signaling financial vulnerabilities; and, third, we aim to contribute to the discussion regarding the usefulness of the leverage ratio from a financial stability policy perspective. 16 We empirically investigate the role of balance sheets of financial institutions for the amplification of financial shocks, differences in the transmission of financial shocks in different regimes, and the heterogeneity of financial institutions and implications for the persistence of financial constraint regimes. We build on Adrian and Brunnermeier (2016) and Paul (2019) and construct a novel market measure of leverage of financial institutions. We employ our proposed regime switching structural vector autoregressive (SVAR) model for this investigation. In our empirical analysis we use a market value, micro-data based measure of leverage of financial institutions, building on Adrian and Brunnermeier (2016). We employ the CRSP/Compustat merged database that covers a broad range of publicly listed depository and nondepository institutions, bank holding companies and nonbanks.
Data
Market leverage is constructed using market value equity, not the book value, i.e. based on the expected present discounted value of future cash flows of a financial institution, its creditors and its shareholders; in contrast, book values depend on specific accounting rules.
Therefore, the leverage measure with market equity that we are using here takes into account that before the global financial crisis the leverage of financial institutions only rose mildly, since as debt went up the market value of assets increased as well. During the crisis, asset prices collapsed, while financial institutions were not able to reduce their accumulated debt burden as quickly, leading to a sharp increase in leverage measured using the market value of equity. Note that our database includes all listed financial institutions, including a broad range of depository and non-depository credit institutions and a range of nonbank institutions, including security brokers and dealers. We use data from the Fundamentals Quarterly and Security Monthly of the CRSP/Compustat Merged database. We compute a novel monthly market leverage measure based on monthly market equity and quarterly interpolated series for book assets, or, alternatively, liabilities. Our measure builds on Adrian and Brunnermeier (2016), but goes beyond previous literature that uses linear interpolation to convert quarterly book values.

16 Note that the supplementary leverage ratio has been introduced in the US in 2014. The aim is to counterbalance the build-up of systemic risk by limiting the risk weights compression (downweighting of seemingly low risk investments) during booms and therefore add a more countercyclical measure than a risk weighted capital ratio. For a discussion, see e.g. Gambacorta and Karmakar (2018).

17 Under Basel III, the supplementary leverage ratio is introduced as a measure that treats all exposures equally, independent of any risk assessment. The non-risk weighted leverage ratio is intended to avoid that banks lever up their balance sheet by investing in assets that appear in low-risk categories.
We employ monthly call reports data for interpolation of the quarterly book assets and book liabilities. The source of the data used for monthly interpolation of the quarterly book assets and liabilities is a monthly survey of a sample of the commercial banks in the call reports. 18 We compute leverage as book assets over market equity as well as market equity plus book liabilities over market equity. 19 Recent literature has highlighted that book and market leverage diverge substantially during crises (see also Begenau et al. (2021)). We provide evidence that market leverage is a useful indicator for monitoring financial institutions since it reflects market developments in a timely way. The advantage of using financial institution level data is also that they allow us to analyze the economic implications of heterogeneity across financial institutions. 20 In particular, we focus on comparing model specifications with leverage of depository institutions, with leverage of Global Systemically Important Banks (GSIBs) and with leverage of a particular group of nonbank financial institutions, namely securities brokers and dealers.
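One simple way to implement the interpolation and the leverage measure is sketched below with pandas; this is our own illustration, and the authors' exact procedure may differ. It scales the quarter-end book value by the within-quarter profile of the monthly H.8 series and then forms leverage as (market equity + book liabilities) over market equity. All function and variable names are hypothetical, and the series are assumed to be month-end indexed.

```python
import pandas as pd

def interpolate_with_h8(book_q, h8_m):
    """Interpolate a quarterly book series to monthly frequency using the within-
    quarter profile of a related monthly H.8 series: each month gets the quarter's
    book value scaled by that month's H.8 level relative to the quarter-end month."""
    h8_qend = h8_m.resample("Q").last()                     # H.8 level at quarter end
    scale = h8_m / h8_qend.reindex(h8_m.index, method="bfill")
    book_m = book_q.reindex(h8_m.index, method="bfill")     # next quarter-end value
    return book_m * scale

def market_leverage(market_equity_m, book_liab_q, h8_liab_m):
    """Leverage = (market equity + interpolated book liabilities) / market equity."""
    liab_m = interpolate_with_h8(book_liab_q, h8_liab_m)
    return (market_equity_m + liab_m) / market_equity_m
```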
In addition to this novel monthly measure of market based leverage, we use US monthly data, seasonally adjusted, for a sample from 1988(12) to 2019(12). We include the following variables: output growth, measured in terms of growth in industrial production; core CPI inflation; the 2-year Treasury rate; and market leverage of financial institutions, or leverage of particular financial institutions such as Global Systemically Important Banks (GSIBs) and security brokers and dealers. Market leverage is proxied by book assets over market equity, as described above. We also include a broad financial conditions index comprising spreads, financial market volatility measures and other financial conditions indicators spanning a broad range of financial markets and financial intermediaries, since we are interested in investigating heterogeneity. It is published by the Federal Reserve Bank of Chicago.

18 The H.8 data from the Federal Reserve presents an estimate of weekly aggregate balance sheets (assets and liabilities) of commercial banks in the United States. The data are based on weekly reports from 875 commercial banks.

19 The latter is often referred to as being a more reliable measure of market leverage, since book liabilities are a better proxy for market liabilities than book assets are for market assets, and that is what we use for our main empirical analyses.

20 Note that the treatment of mergers and acquisitions in Compustat is as follows: when firms merge, the financial balance sheet items of the target firm get absorbed into the balance sheet items of the acquirer. Therefore, when the target firm's data series ends, the acquirer's data series reflects the target's financial balance sheet items. This provides the background for how structural changes in the course of the GFC will be reflected in this data set.
We chose this broad measure of financial conditions since we are interested in investigating and comparing the role of heterogeneity of leverage of financial institutions for economic outcomes.
Model specification
We employ our proposed regime switching vector autoregressive (RS-VAR) model with time-varying transition probabilities. We allow for two regimes in the VAR coefficients, which we label the 'financial constraint' regime and 'normal' times. We make the regime probability dependent on the financial variables in the model since we are primarily interested in the transmission of financial shocks. We also allow for two regimes for the variances that follow a Markov process as a way to model heteroskedasticity.
In our regime switching models with time-varying probabilities -that can also be referred to as "endogenous switching" models -the transition probability of being in one regime in the next period, given that we are in a particular regime in this period, can vary over time. We model the transition matrix to depend on the state of the economy, namely we make it dependent on the financial variables in our system. To identify the regimes and structural shocks, we employ sign restrictions and, alternatively, narrative sign restrictions, thereby extending the approaches suggested in Antolín-Díaz and Rubio-Ramírez (2018) and Arias et al. (2018) to regime switching models. We also allow for different identification schemes in different regimes. We explain the details in the next section.
Identifying Regimes and Shocks
Regime identification In Section 3.3, we discussed how to identify the regimes using narrative restrictions. In this section we give the specific details of how we assigned the regimes to either the financial constraint regime or the normal regime. For each regime, we counted the number of months between 2008(9) and 2009(8), inclusive, in which the probability of being in that regime was greater than 0.70. Whichever regime had the larger count was labeled the 'financial constraint' regime and ordered first. The other regime was labeled the 'normal' regime and ordered second. The assignment of the financial constraint regime was robust to varying the cutoff probability from 0.50 through 0.95 and to choosing a shorter period around the Global financial crisis, or even only using 2008(10). Put another way, the data was very informative and clear as to which regime the economy was in during the Global financial crisis.
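The labeling rule just described is straightforward to implement; the following is a minimal sketch (our own illustration, with hypothetical names).

```python
import numpy as np

def label_financial_constraint_regime(probs, dates, window, cutoff=0.70):
    """probs[t, r]: smoothed probability of coefficient regime r at month t.
    Count, for each regime, the months within `window` (e.g. 2008-09 to 2009-08,
    inclusive) with probability above `cutoff`; the regime with the larger count
    is labeled the financial constraint regime and ordered first."""
    in_window = np.array([window[0] <= d <= window[1] for d in dates])
    counts = (probs[in_window] > cutoff).sum(axis=0)
    return int(np.argmax(counts))       # index of the financial constraint regime
```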
Shock identification In addition to identifying the regimes, in our RS-VAR model we also need to identify the shocks. In particular, we want to identify the financial shock, which we will order first. We used sign restrictions on the impulse responses to achieve this. The contemporaneous response to a positive financial shock in the financial constraint regime was restricted to be negative for output, inflation and short-term interest rate, but positive for the financial conditions index and leverage. The contemporaneous response to a positive financial shock in the normal regime was restricted to be positive for the financial conditions index only. Overall, in about 20 percent of the draws these sign restrictions uniquely identified the financial shock.
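As a rough illustration of how such contemporaneous sign restrictions are checked in practice, the snippet below tests a candidate impact vector against the restrictions listed above (negative for output, inflation and the short rate, positive for the financial conditions index and leverage in the constraint regime; only the financial conditions index restricted in normal times). The variable ordering and function name are assumptions for exposition only.

```python
import numpy as np

# Variable order assumed here: [output, inflation, rate, leverage, FCI]
SIGNS_CONSTRAINT = np.array([-1, -1, -1, +1, +1])  # financial constraint regime
SIGNS_NORMAL     = np.array([ 0,  0,  0,  0, +1])  # only the FCI restricted in normal times

def satisfies_sign_restrictions(impact_col, signs):
    """impact_col: contemporaneous responses of all variables to the candidate
    financial shock. A zero in `signs` means that response is left unrestricted."""
    restricted = signs != 0
    return bool(np.all(np.sign(impact_col[restricted]) == signs[restricted]))
```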
Alternative shock identification We have carried out further empirical analyses using narrative restrictions as an alternative shock identification approach. This is a new class of restrictions developed for constant-parameter structural VAR models (see Antolín-Díaz and Rubio-Ramírez, 2018) that we extend to the regime switching structural VAR setup. The structural parameters are constrained around key historical events in such a way that the structural shocks and the historical decomposition agree with the narrative. These narrative sign restrictions combine the appeal of narratives with the advantages of sign restrictions. A small number of key historical events (not whole time series) are used for identification, thereby avoiding the measurement error in narrative time series that has affected earlier literature.
In our implementation we identify the financial shock as the one that explains most of the variation in output growth during the Global Financial Crisis. In addition, a considerably smaller set of sign restrictions is imposed than in the pure sign restriction approach: we only impose a positive contemporaneous response of the financial conditions index, a negative initial response of output growth, and a positive initial response of leverage.
We present the empirical results using standard sign restrictions as our baseline results in the next few sections, and show some key results for narrative sign restrictions in Section 4.8 and Appendix C.
Regime probabilities
In the following we use the smoothed probabilities as well as the time-varying transition probabilities, and their association with historical events, to interpret the different regimes. We use the specification with market leverage of GSIBs as our baseline specification and discuss those results first. We choose a model specification with endogenous regime switching driven by the three financial variables in our model, namely the 2-year Treasury rate, leverage and financial conditions. This choice is motivated by our interest in financial shock transmission. Figure 1 presents the smoothed probability of the first coefficient regime, allowing for both variance regimes, based on our endogenous regime switching specification. We present the estimate of the median of the posterior distribution. We interpret this regime as a "financial constraint" regime; it covers the end of the Savings & Loan crisis, the 1990/91 recession, the Russian debt default, the GFC and the related recession. Note that the filtered probabilities - which are particularly useful for determining the financial constraint regime in real time - are presented in Figure 2 and are very similar to the smoothed probabilities. We will use the impulse responses to a financial shock presented in the next section to shed light on the economic dynamics in this regime.
The transmission of financial shocks
An important contribution of our paper is to shed light on the role of leverage of financial institutions for the transmission of financial shocks to the real economy. Our model specification allows for two coefficient regimes that depend endogenously on the financial variables in the system and for two variance regimes. We now discuss the transmission of a financial shock starting with the financial constraint regime and then compare the results to normal times shock responses.
First, we investigate the impulse responses to a financial shock in our model when including leverage of the GSIB institutions under supervision of the Federal Reserve Board. Figure 4 presents the impulse responses to a one standard deviation financial shock in the financial constraint regime (for one particular variance regime), conditional on staying in the financial constraint regime. In other words, we assume that the economy stays in the financial constraint regime for 12 months after the shock has hit. The median response of the whole posterior distribution is displayed with 68% error bands.
The impulse responses show an output response that turns out to be significantly negative, large and protracted, as would have been expected in the financial constraint regime. We find that leverage of GSIBs initially increases due to a sharp decline in asset prices, and then starts declining as the GSIBs deleverage in response to a financial shock in financial constraint episodes. We interpret that as evidence that deleveraging can lead to amplification effects with adverse implications for the real economy. GSIBs deleverage by liquidating assets, for instance by carrying out (fire) sales of securities and/or by extending fewer loans while existing loans mature. This implies a reduction in overall credit supply. At the same time, it becomes more demanding to obtain external financing due to a decline in collateral value.
Figure 4: Impulse responses to a financial shock in the financial constraint regime, GSIBs. Note: IRFs conditional on regime; red line: median response; blue lines: lower and upper bounds of the 68 percent error bands; financial shock: one standard deviation shock to the financial conditions index; identification with contemporaneous sign restrictions; variables: output growth (IP), core inflation (CPI), 2-year Treasury yield, market leverage, financial conditions index (Chicago Fed).
Footnote 23: Note that the time-varying probability of staying in a regime can only be interpreted during the time when there is a high probability of being in that regime, corresponding to the smoothed probability depicted in the previous figure. Footnote 24: The capitalization of GSIBs is regularly monitored in the Financial Stability Report published by the Federal Reserve. They include Bank of America Corporation, The Bank of New York Mellon Corporation, Citigroup Inc., The Goldman Sachs Group, Inc., JPMorgan Chase & Co., Morgan Stanley, State Street Corporation, and Wells Fargo & Company.
Next, we compare the responses of the economy in the financial constraint regime and in normal times. Figure 5 shows the responses to a financial shock in normal times. We find that output growth shows a large negative response in normal times, but that response is non-persistent, in contrast to the financial constraint regime. Also, in contrast to the financial constraint regime, market leverage remains insignificant over the entire horizon.
Since these impulse responses provide an average response for the respective regime and are conditional on the regime, we shed more light on the role of leverage during the GFC using a counterfactual analysis in Section 4.6. Before that, we turn to investigating the role of the heterogeneity of financial institutions in terms of leverage for the transmission of financial shocks.
Figure 5: Impulse responses to a financial shock in the normal regime, GSIBs. Note: IRFs conditional on regime; red line: median response; blue lines: lower and upper bounds of the 68 percent error bands; financial shock: one standard deviation shock to the financial conditions index; identification with contemporaneous sign restrictions; variables: output growth (IP), core inflation (CPI), 2-year Treasury yield, market leverage, financial conditions index (Chicago Fed).
Heterogeneity of financial institutions: Leverage of Depository Institutions
Our second set of results is for depository financial institutions (including commercial banks, savings and loans, and credit unions) from our CRSP/Compustat database of listed institutions. Again, sign restrictions are only imposed contemporaneously on all endogenous variables in response to a positive financial shock in one of the regimes, and only on one shock in the other regime.
In comparison to the results for the GSIBs, our findings for depository institutions display a similar median output growth response for a given financial conditions tightening (Figure 6). However, the results do not display asymmetric responses in the tails of the posterior distribution as for the GSIBs, and hence - in contrast to the model with GSIBs leverage - negative growth outcomes appear not to be more likely than positive outcomes. We also find that, as for GSIBs (see Figure 4), market leverage initially increases significantly due to the sharp decline in asset prices and then starts declining gradually as financial institutions deleverage. The responses to a financial shock in normal times (Figure 7) show large but non-persistent output growth effects and no significant leverage response, similar to the responses we saw for GSIBs (Figure 5).
Figure 6: Impulse responses to a financial shock in the financial constraint regime, depository institutions. Note: IRFs conditional on regime; red line: median response; blue lines: lower and upper bounds of the 68 percent error bands; financial shock: one standard deviation shock to the financial conditions index; identification with contemporaneous sign restrictions; variables: output growth (IP), core inflation (CPI), 2-year Treasury yield, market leverage, financial conditions index (Chicago Fed).
Footnote 25: Depository institutions are financial institutions that receive money from depositors to lend out to borrowers, such as commercial banks and the other institutions listed here.
Heterogeneity: Securities brokers and dealers
Next we examine the role of heterogeneity by including leverage of security brokers and dealers in the model.
The financial constraint regime is again identified by sign restrictions imposed only contemporaneously on all endogenous variables in one of the regimes, as for the previous specifications. As pointed out in Aramonte et al. (2021), higher leverage does not need to correspond to larger balance sheets, but broker-dealers clearly used debt to finance asset growth (see Adrian and Shin (2014)). We find similar results for the real economy implications as before, with a protracted negative output growth response to a financial shock and with asymmetric responses in the tails of the posterior distribution, as for the GSIBs leverage specification. However, market leverage increases more on impact than for other financial institutions and then immediately declines due to deleveraging. We interpret that as reflecting that the dealers' willingness to take risk amplified the growth of dealer balance sheets going into the crisis, causing crisis losses and a subsequent sharp contraction of balance sheets post-crisis. The detrimental real effects of deleveraging thus appear across types of financial institutions, but outcomes might be more detrimental in response to financial shocks for GSIBs that are particularly highly leveraged. See Figure 11 for a system including depository financial institutions' leverage, where the lower bound of the probability of staying in the financial constraint regime does not go down below 0.90.
Market leverage and financial conditions
To illustrate that market leverage and financial conditions take distinct roles for the transmission of financial shocks, we have carried out a number of counterfactuals. We hold the financial conditions index constant as of October 2008 (to make it similar in terms of timing to our other experiment that focuses on the deleveraging process) and compute the counterfactual probability of staying in the financial constraint regime in the model with leverage of GSIBs.
The counterfactual probability of staying in the constraint regime does not decline, unlike in the case of holding leverage constant. We interpret this as evidence that the NFCI and leverage provide different characterizations of the financial conditions of the economy and have different implications for the propagation of shocks and the persistence of the constraint regime. It also illustrates that market prices are not the sole driver of our results, since they enter both the financial conditions index and leverage.
We have also estimated our model including leverage of GSIBs with the GZ spread instead of the broad financial conditions index, to illustrate that it is not the leverage measures or stock market variables related to financial institutions that are behind our results. We obtain very similar results: the counterfactual probability of staying in the constraint regime declines much more than the actual probability, confirming our result that holding leverage constant helps to prevent the deleveraging process that has adverse implications for the real economy.
Sensitivity: Narrative restrictions for shock identification
We have investigated the sensitivity of our results to imposing narrative restrictions for shock identification, extending the identification approach proposed by Rubio-Ramírez et al. (2018) to our regime switching model. We combine the narrative restrictions with sign restrictions and compare the results with our findings when using the standard sign restrictions presented above. We generally find that our results are rather robust when using narrative restrictions.
It is noteworthy that, when using narrative restrictions, we need fewer sign restrictions than with the standard sign restriction approach. The results for the RS-VAR including leverage of GSIBs are presented in Appendix C.
Conclusions
We conduct a novel empirical analysis on the role of leverage of financial institutions for the transmission of financial shocks to the macroeconomy. To that end we develop an endogenous regime-switching structural vector autoregressive model with time-varying transition probabilities. First, we allow for the transition probabilities to be dependent on the state of the economy, and thereby to be time-varying. Second, we propose new identification schemes for RS-VAR models, extending sign and narrative restrictions to the regime switching model class. To facilitate economic interpretation, we allow the identification restrictions to differ across regimes. One of our contributions is also to highlight a range of identification issues in the context of regime switching models.
Employing this new modelling framework we provide a novel empirical analysis of the role of market leverage of financial institutions for the transmission of financial shocks to the macroeconomy. We construct a new monthly market-based measure of leverage of financial institutions as book assets over market equity, building on Adrian and Brunnermeier (2016) by employing financial institution level data.
We contribute to the empirical literature in several ways. The motivation for our focus on leverage is threefold: First, recent literature on structural macroeconomic models emphasizes the role of bank balance sheets for the build-up of financial instabilities and the amplification of economic downturns. Second, leverage encompasses the entire balance sheet of the financial institution and is therefore a broad indicator for signaling financial vulnerabilities. Third, the leverage ratio is a regulatory tool complementary to the (risk-weighted) capital ratio.
We build a market-based measure of leverage of financial institutions, building on Adrian and Brunnermeier (2016) (see also Paul (2020)) and using institution-level balance sheet data. We employ financial institution level data to construct a monthly measure of leverage as book assets over market equity. Two arguments suggest a focus on market leverage: First, market leverage developments can signal a situation where financial institutions need to deleverage quickly - for instance, if debt is used to finance asset growth, as for broker-dealers (see Adrian and Shin (2014)), or if financial institutions rely primarily on short-term funding (see e.g. Adrian et al. (2011)); such institutions might be more fragile than their book leverage levels make them appear. Furthermore, the market capitalization of a financial institution is a reflection of the market value of the equity holders' stake, and hence an assessment by market participants of the creditworthiness of the financial institution as a borrower. Low market-to-book ratios suggest that the assessment of market participants is that financial institutions are more leveraged than their books suggest (see also Adrian et al. (2018)). Our empirical results highlight the following conclusions and implications: (1) Our empirical findings support the conclusion from theoretical macroeconomic models on the importance of bank balance sheets in financial constraint regimes versus normal times for the transmission of shocks to the macroeconomy. (2) Deleveraging of financial institutions can lead to procyclical financial amplification effects with adverse implications for the real economy.
Our empirical evidence indicates the importance of monitoring market-based leverage of financial institutions for financial stability and shows that market leverage provides timely information for monitoring. We highlight the role of the financial fragility implied by market leverage for the transmission of financial shocks in our empirical model, a role that has been pointed to in an estimated model framework (see Begenau et al. (2021)). We also provide evidence for a role of the heterogeneity of financial institutions' leverage in the detrimental real effects of deleveraging of financial institutions. We show the differences in the implications of leverage of depository institutions, GSIBs and nonbank financial institutions for economic outcomes and for the probability of persistence of the financial constraint regime. Our results suggest that deleveraging of GSIBs can have much more detrimental effects on macroeconomic outcomes than deleveraging of depository institutions in financial constraint regimes, with implications for the probability of staying in a financial constraint regime.
Footnote 27: So far, the information content of market equity about book losses has been mostly highlighted in the accounting literature, indicating that banks have flexibility in accounting for losses, consistent with evidence in Blattner et al. (2022). This flexibility in accounting for losses can be even more prominent for nonbank financial institutions that are part of our analysis.
Overall, our results confirm that leverage is a useful indicator from a financial stability perspective, and we highlight in particular the usefulness of market leverage. It appears that so far the information content of market equity about book losses has been mostly highlighted in the accounting literature, indicating that banks have flexibility in accounting for losses. One might argue that this flexibility in accounting for losses is even more prominent for nonbank financial institutions, highlighting the relevance of our analysis of the heterogeneity of financial institutions for regulatory considerations.
Our results on the role of market-based leverage raise the question of how to lower the related financial vulnerability. Begenau et al. (2021), for instance, argue that stricter accounting rules
A Proof of Proposition 1
The following lemma is key to proving Proposition 1. In both the statement of the lemma and its proof, it will always be assumed that 1 ≤ k ≤ r and 1 ≤ m ≤ s.
Lemma 1. Let $A_1, \cdots, A_r$ be invertible $n \times n$ matrices and let $D_1, \cdots, D_s$ be distinct $n \times n$ diagonal matrices such that the diagonal elements of each $D_m$ are distinct and $\sum_{m=1}^{s} D_m = I_n$. If $\tilde{A}_1, \cdots, \tilde{A}_r$ are $n \times n$ matrices, $\tilde{D}_1, \cdots, \tilde{D}_s$ are $n \times n$ diagonal matrices such that $\sum_{m=1}^{s} \tilde{D}_m = I_n$, $\pi(k, m)$ is a function that is a permutation of $\{1, \cdots, s\}$ for each $k$, and $A_k D_m A_k = \tilde{A}_k \tilde{D}_{\pi(k,m)} \tilde{A}_k$, then $\pi(k, m)$ is independent of $k$, $\tilde{A}_k = E_k Q A_k$ and $\tilde{D}_{\pi(k,m)} = Q D_m Q$, where $Q$ is a permutation matrix and $E_k$ is a diagonal matrix with plus or minus ones along the diagonal.
Proof. Since $\pi(k, m)$ is a permutation of $\{1, \cdots, s\}$ for each $k$, it follows that $D_m = \tilde{Q}_k \tilde{D}_{\pi(k,m)} \tilde{Q}_k$. Because the eigenvalues of a symmetric matrix are unique up to an ordering, the $D_m$ are distinct, and the diagonal elements of each $D_m$ are distinct, the permutation $\pi(k, m)$ does not depend on $k$. If $Q$ is the column permutation matrix associated with $\pi$, then for each $k$ there exists a diagonal matrix $E_k$, with plus or minus ones along the diagonal, such that $\tilde{Q}_k = Q E_k$.
Proof of Proposition 1. Throughout this proof, it will be assumed that $1 \leq k \leq h_c$ and $1 \leq m \leq h_v$. If some of the weights were zero, then the normal distributions corresponding to those zero weights would not be determined, though the weights themselves would be. Because the $V(k, m)$ are distinct, the conditional distribution of $y_t$ is a mixture of $h$ distinct normal distributions.
If the conditional probabilities $p((s^c_t, s^v_t) = (k, m) \mid x_t)$ were zero for all $x_t$, then the unconditional probabilities $p((s^c_t, s^v_t) = (k, m))$ would also be zero. So, by the hypotheses of Proposition 1, $p((s^c_t, s^v_t) = (k, m) \mid x_t)$ cannot be identically zero for all $t$. Because the weight associated with each $(k, m)$ is non-zero for some $t$, this implies that the normal distributions and their weights are uniquely determined by the conditional distributions of the $y_t$.
Note that Assumption 1 does not, in general, imply that $p(Z_t \mid \theta) = p(Z_t)$. If $p(\theta)$ is the prior, then the posterior is $p(y_t \mid s_t, Y_{t-1}, Z_t, \theta)\, p(s_t \mid Y_{t-1}, Z_{t-1}, \theta)$, where Equations (17) and (18) follow from Bayes' rule and Equation (19) follows by substituting the expression for the likelihood and canceling and rearranging terms. So, the posterior is proportional to $p(y_t \mid s_t, Y_{t-1}, Z_t, \theta)\, p(s_t \mid Y_{t-1}, Z_{t-1}, \theta)$.
The recursive formulas for the Hamilton filter are derived next.
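For readers who want a concrete reference point, a minimal sketch of the Hamilton filter recursion for a generic regime-switching model is given below. It assumes a fixed transition matrix and pre-computed conditional likelihoods per regime; it is an illustration of the standard recursion, not the paper's exact implementation.

```python
import numpy as np

def hamilton_filter(cond_lik, P):
    """cond_lik : (T, K) array, cond_lik[t, k] = p(y_t | s_t = k, Y_{t-1}, theta)
    P         : (K, K) transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i).
                For time-varying probabilities, replace P by P_t inside the loop.
    Returns filtered regime probabilities (T, K) and the log-likelihood."""
    T, K = cond_lik.shape
    filt = np.zeros((T, K))
    # Initialize with the ergodic (stationary) distribution of P
    eigval, eigvec = np.linalg.eig(P.T)
    prev = np.real(eigvec[:, np.argmax(np.real(eigval))])
    prev = prev / prev.sum()
    loglik = 0.0
    for t in range(T):
        pred = prev @ P                   # prediction: Pr(s_t = k | Y_{t-1})
        joint = pred * cond_lik[t]        # joint density of regime and data
        lik_t = joint.sum()
        filt[t] = joint / lik_t           # update: Pr(s_t = k | Y_t)
        loglik += np.log(lik_t)
        prev = filt[t]
    return filt, loglik
```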
C Impulse responses with narrative restrictions
The responses to a financial shock (median) show a protracted negative output response. Market leverage initially increases, then declines due to deleveraging. We arrive at the same conclusion as in our baseline specification for the GSIBs: deleveraging can lead to amplification effects with adverse implications for the real economy. In normal times, the responses to a financial shock (median) indicate a small, non-persistent negative output response and an insignificant market leverage response (Figure A.2), as with the standard sign restriction approach. | 2022-06-08T15:07:05.048Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "644915d43b79b2cae405931bea9df395f3742cac",
"oa_license": null,
"oa_url": "https://doi.org/10.17016/feds.2022.034",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9e0d1ecdd20078b2c110718b87d5b7a2e0980aea",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
234291885 | pes2o/s2orc | v3-fos-license | Study on Design of Concrete Box Girder of A Railway Swivel Cable-Stayed Bridge
The swivel cable-stayed bridge studied here crosses an existing railway. The recommended scheme for the main bridge is a (128 + 388 + 128) m steel-concrete hybrid girder swivel cable-stayed bridge. The stay cables are arranged in a fan-shaped central double cable plane, taking both mechanical performance and aesthetics into account, and the structure adopts a semi-floating system. A concrete swivel cable-stayed bridge is adopted as the comparison scheme: a three-span, (138 + 268 + 138) m prestressed concrete box girder with cables arranged in a central double cable plane and a consolidation system. Considering the operation and maintenance needs in the later stage of the bridge, the concrete swivel cable-stayed bridge can be preferred when its dead weight lies within the ideal range. To calculate the dead weight of the selected scheme, the author models the whole bridge with finite element software and computes its weight. The results show that the dead weight of the concrete swivel cable-stayed bridge is too large and exceeds the maximum bearing capacity of existing spherical hinges. To continue using the concrete swivel cable-stayed bridge scheme, its design must be optimized.
Introduction
With the rapid development of China's transportation industry, a large number of new passenger dedicated lines, intercity railways and urban expressways have been built. At the same time, with the rapid expansion of the main urban areas of some large and medium-sized cities, more urban bridges crossing railways are bound to appear. Bridges over railways are an important type of project frequently encountered in highway construction. The choice of bridge construction technology has a great impact on the quality of the bridge construction project, and a reasonable structural design can ensure the safety of the whole project. Moreover, the railway that the bridge crosses generally carries heavy traffic that cannot be interrupted casually. Therefore, how to carry out the normal construction of the bridge without affecting railway traffic is particularly important. Usually, the solution starts with the design scheme of the bridge. The completion of an urban bridge over a railway improves the coverage of the urban road network and the operating efficiency of the whole network, making an important contribution to the development of the city. At present, for urban bridges over railways, the single-tower concrete girder cable-stayed bridge constructed by the horizontal rotation (swivel) method is often chosen, mainly to reduce the impact on the operation of the existing railway under the bridge. This type of bridge has the characteristics of beautiful shape and flexible structure[1~12].
Methods
Step1: On the premise of ensuring feasibility, the project should also meet other requirements put forward by the owner as far as possible, strive for a beautiful appearance, and arrange the bridge scheme so as to minimize the impact on railway operation.
Step2: A planned urban road lies on the small-chainage side of the bridge, and the planned Station East Road lies on the large-chainage side. Therefore, a passage is reserved under the side spans of the bridge.
Step3: Collect and review the design schemes and construction technologies of existing bridges and, in combination with the project overview, select construction cases with reference value. Also collect the construction technology and experience of cable-stayed bridge swivels and, considering the current development of bridge swivel construction, determine the structure to be used for the bridge.
Step4: The bridge design scheme shall minimize the railway operation losses caused by the later operation and maintenance of the bridge, and shall not worsen the existing railway operating conditions.
Step5: According to the requirements of the railway department, the long-term development of the railway should be taken into account.
Recommended scheme
The recommended bridge is a (128 + 388 + 128) m steel-concrete hybrid girder swivel cable-stayed bridge. To reduce the swivel weight, the middle span is designed as a steel box girder. To keep the weight difference between the two ends of the spherical hinge as small as possible, the side spans are prestressed concrete box girders. As a relatively new structural type, the hybrid girder cable-stayed bridge has significant advantages in stress behavior and structural design, and it performs well in terms of self weight and spanning ability. However, during later operation a large amount of resources is needed for maintenance, and the maintenance requirements are high. Because the project spans several existing railways, regular maintenance and recoating after the bridge is put into operation will bring certain safety risks to railway operation.
Trial design of bridge scheme 1
Considering the operation and maintenance needs in the later stage of the bridge, a (138 + 268 + 138) m prestressed concrete box girder swivel cable-stayed bridge is proposed as a trial design. Compared with the steel-concrete hybrid girder cable-stayed bridge, the self-weight of the concrete girder cable-stayed bridge increases significantly, so the requirements on the spherical hinge become much higher. However, the concrete box girder has obvious advantages in later maintenance: the riding quality is relatively good, the maintenance cost is low, and the overall cost is economical.
Trial design of bridge scheme 2
Considering landscape design, the bridge tower adopts an H-shaped form. In scheme 2, a (128 + 388 + 128) m steel-concrete composite girder cable-stayed bridge erected by incremental launching (pushing) is used. The main girder is a twin side-box structure; the two side boxes are connected by large cross beams, and smaller beams are set between the large cross beams to increase stability. The deck of the main girder is made of precast concrete. Although the cost of this scheme is lower, temporary piers need to be set between the operating railway lines during construction, which would have a greater impact on railway operation. At the same time, the overall economy is only average and the construction period is long.
After the comparison and selection of schemes, the author selects the first scheme for finite element modeling and analysis.
Girder Selection
The main girder of the selected bridge is a prestressed concrete box girder with a W-shaped web section; the top plate width is 45 m, the bottom plate width is 20 m, and the box girder height at the centerline is 3.8 m. In the standard section, the top plate is 30 cm thick, the bottom plate 35 cm, the outer inclined webs 40 cm, and the inner inclined webs 30 cm. Diaphragms are provided at the tower-girder connections and at the beam ends. The diaphragm thickness differs between locations: 12 m at the main tower, 11 m at the connection between the secondary tower and the main girder, and 2 m at the beam ends. The concrete grade of the bridge is C55.
Modeling and foundation design of main tower
A single-column pylon is recommended for this bridge. The shape of the upper tower takes the traditional national musical instrument, the Sheng, as its design idea, evolving its form in combination with the characteristics of the bridge tower structure. However, in order to simplify the modeling process, an inverted Y-shaped main tower is adopted. The inverted Y-shaped main tower not only meets the design requirements of the main tower in the comparison scheme, but also meets the requirements in terms of stress and stiffness, and can resist large bending moments. The tower height above the bridge deck is 68 m, of which the vertical height of the left and right legs of the tower body is 50 m. The stay cable anchorage zone is set in the middle tower column, which adopts a hollow section. The 3D model of the bridge tower is shown in Figure 1. The left and right legs of the main tower extend upward and meet at the middle tower column; the vertical height of the tower body is 50 m and the height of the middle tower column is 18 m, all with hollow sections.
Self weight calculation of whole bridge model
Midas Civil is used to establish the model of the main girder and bridge tower, and the self weight of the whole bridge model is roughly estimated. If the dead weight can be controlled within 60,000 tons, the scheme can be considered feasible in terms of swivel weight. At present, the largest domestic swivel tonnage has reached 46,000 tons, and the swivel balance control of cable-stayed bridge structures is relatively mature; after a certain period of design and manufacturing development, a bridge swivel of 60,000 tons should be achievable. It can be seen from Figure 4 that the reaction force at the bottom of the bridge tower is 708,712.3 kN, which corresponds to a mass of about 71,000 tons; that is, when the bridge rotates, the load on the spherical hinge is about 71,000 tons.
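As a quick check of the reported figure, the force-to-mass conversion can be reproduced in a few lines; the gravitational acceleration used here (approximately 10 m/s², a common rounding in preliminary estimates) is an assumption, since the paper does not state the value it used.

```python
reaction_force_kN = 708712.3          # tower-base reaction from the FE model
g = 10.0                              # assumed gravitational acceleration, m/s^2

mass_tonnes = reaction_force_kN / g   # 1 kN / (m/s^2) = 1000 kg = 1 tonne
print(round(mass_tonnes))             # ~70,871 t, i.e. roughly 71,000 tons
```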
Development status of swivel ball joint
The swivel hinge is located at the bottom of the pier and allows the structure to rotate; it plays an irreplaceable role in the swivel of the bridge. The self weight of the bridge essentially acts on the swivel hinge, so the manufacture and bearing capacity of the swivel hinge must reach a high standard. Swivel hinges come in different structural forms. They are mainly classified into two types: spherical hinges, whose contact surface is approximately spherical, and plane hinges, whose contact surface is flat. According to the raw material, they can also be roughly divided into two categories: hinges made of concrete, called concrete hinges, and hinges made of steel, called steel hinges. When selecting a swivel hinge, it should be designed according to the needs of the bridge; each hinge should be tailored to its swivel bridge so that it suits the actual needs of the project as far as possible and ensures the stable and safe rotation of the bridge during the swivel process. Because of the large span and self weight of swivel bridges, the spherical hinge has the best bearing performance. Therefore, this paper focuses on the development status of the spherical hinge.
Concrete Swivel Ball Joint
Spherical swivel hinges can be classified according to the manufacturing material: a spherical hinge made of concrete is called a concrete hinge, and one made of steel is a steel spherical hinge. To ensure sufficient bearing capacity, the concrete spherical hinge is generally made of high-grade concrete, with a strength grade generally above C50. There are some differences between concrete and steel spherical hinges in structural design. The concrete spherical hinge is generally divided into an upper part and a lower part; the upper concave surface is called the "upper grinding plate". The upper concave surface and the lower convex ball must fit closely enough to meet the force and rotation requirements, so the surfaces are usually scraped to achieve a close fit. Before pouring and setting the concrete, the grinding plate and its center should be checked manually to ensure that the contact surfaces can run in well, and the friction during rotation should be minimized so that the rotation proceeds smoothly. The concrete spherical hinge has advantages in material availability and fabrication technology: especially in places where working conditions are poor and large-scale equipment cannot reach the site, a concrete spherical hinge can still be built by ordinary methods. Concrete spherical hinges are very common in existing swivel bridges, and based on these projects their technical parameters have certain standards: the static friction coefficient of a concrete spherical hinge is less than 0.05, and the dynamic friction coefficient is less than 0.03. Among these existing swivel bridges, the swivel self-weight has reached about 9,000 tons, which is very different from the weight of this project, so the use of a concrete spherical hinge is not considered.
Steel ball joint
The steel spherical hinge is also composed of two parts, but differs from the concrete spherical hinge. Its upper and lower parts are not solid concave and convex bodies; they are made of thick steel plate. After the steel plates are pressed into spherical surfaces on special equipment, the upper and lower spherical hinges are connected through a steel pivot shaft, and circular PTFE (polytetrafluoroethylene) mushroom heads and PTFE sliding plates are arranged at the contact surface of the spherical joint. Steel itself has great strength, so the bearing capacity is obviously better than that of the concrete hinge; therefore, the steel spherical hinge has been widely used in long-span swivel bridges. At present, the bearing capacity achieved with steel spherical hinges in bridge swivels has reached about 50,000 tons. In theory, through the design of the spherical hinge and the overall optimization of the bridge, the swivel of this bridge could be realized, so the construction risk is relatively small.
Conclusion
The calculation results show that the weight of the bridge in scheme 1 greatly exceeds the weight of existing swivel bridges in China. Because the weight estimation of the model is simplified, boundary conditions are also applied at the girder ends, that is, the supports at the beam ends carry part of the reaction force. Therefore, in the actual project, the load on the spherical hinge would be greater than 70,000 tons. If the prestressed concrete box girder is to be retained, it is necessary to optimize the design of the girder section and the dimensions of the bridge tower to reduce the weight of the structure. At the same time, the spherical hinge, as the core component of the swivel structure, must also meet high quality requirements. | 2021-05-11T00:06:10.481Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "785dbd8edf3d349b75afe247e65639a5f5c51b3e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/13/e3sconf_arfee2021_03026.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9dd8acaa3ac7ef567c445c5f45c10367ed8faad5",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
44050677 | pes2o/s2orc | v3-fos-license | CVaR Robust Mean-CVaR Portfolio Optimization
One of the most important problems faced by every investor is asset allocation. When making investment decisions, an investor has to search for an equilibrium between risk and return. Risk and return are uncertain parameters in portfolio optimization models and must be estimated to solve the problem; the estimation might lead to large errors in the final decision. One of the widely used and effective approaches for optimization with data uncertainty is robust optimization. In this paper, we present a new robust portfolio optimization technique for the mean-CVaR portfolio selection problem under estimation risk in the mean return. We additionally use CVaR, as a risk measure, to measure the estimation risk in the mean return. Moreover, to solve the model efficiently, we use the smoothing technique of Alexander et al. [1]. We compare the performance of the CVaR robust mean-CVaR model with robust mean-CVaR models using interval and ellipsoidal uncertainty sets. It is observed that the CVaR robust mean-CVaR portfolios are more diversified. Moreover, we study the impact of the confidence level on the conservatism level of a portfolio and on the maximum expected return of the portfolio.
Introduction
Portfolio optimization is one of the best known approaches to financial portfolio selection. The earliest technique to solve the portfolio selection problem was developed by Harry Markowitz in 1952. In his so-called mean-variance (MV) portfolio optimization model, the portfolio return is measured by the expected return of the portfolio, and the associated risk is measured by the variance of portfolio returns [1].
Variance as a risk measure has its weaknesses. Controlling the variance leads to low deviation from the expected return not only on the downside, but also on the upside [2]. Hence, alternative risk measures have been suggested to replace the variance, such as Value at Risk (VaR), which manages and controls risk in terms of percentiles of the loss distribution. Instead of regarding both the upside and downside of the expected return, VaR considers only the downside as risk and represents the predicted maximum loss at a specified confidence level (e.g., 95%) over a certain period of time (e.g., one day) [3][4][5].
VaR is a popular risk measure. However, VaR may have drawbacks and undesirable properties that limit its use [6][7][8], such as lack of subadditivity; that is, the VaR of a combination of two different investment portfolios may be greater than the sum of the individual VaRs. Also, VaR is nonconvex and nonsmooth and has multiple local minima, while we seek the global minimum [4,8,9]. So alternative risk measures were introduced, such as Conditional Value at Risk (CVaR) - the conditional expected value of the loss, under the condition that it exceeds the Value at Risk [3]. VaR asks "what is the maximum loss that we may realize?", whereas CVaR asks "how large do we expect the losses to be when the situation is undesirable?". Numerical experiments show that minimizing CVaR often leads to optimal solutions close to those of minimizing VaR, because VaR never exceeds CVaR [5]. CVaR has better properties than VaR: minimizing CVaR is a convex optimization problem, and thus it is easy to optimize [4], and it has been demonstrated that linear programming techniques can be used for the optimization of the CVaR risk measure [5,9].
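To make the two measures concrete, the short example below computes empirical VaR and CVaR from a sample of simulated losses. The sample-based definitions used here (the α-quantile of losses for VaR, and the mean of losses at or beyond that quantile for CVaR) are the standard empirical estimators; the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)   # simulated portfolio losses
alpha = 0.95

var_alpha = np.quantile(losses, alpha)                   # empirical VaR_95
cvar_alpha = losses[losses >= var_alpha].mean()          # empirical CVaR_95

print(f"VaR_95  = {var_alpha:.3f}")   # ~1.645 for standard normal losses
print(f"CVaR_95 = {cvar_alpha:.3f}")  # ~2.06, always >= VaR_95
```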
Risk and return are uncertain parameters in portfolio optimization models, and estimating them might lead to large errors in the final decision. To deal with such a situation, one of the widely used and effective approaches is the robust optimization technique. In this paper, we apply this technique to give the robust counterpart of the mean-CVaR portfolio selection problem under estimation risk in the mean return. Moreover, we use CVaR as the risk measure for this estimation risk in the mean return. The rest of the paper is organized as follows. In Section 2, we state the mean-CVaR portfolio selection problem. Then, because of the inevitable estimation error of the mean return of the assets, we present CVaR robust optimization in Section 3. To solve the model efficiently, we use the smoothing technique of Alexander et al. [10]. Finally, in Section 4, we compare the performance of the CVaR robust mean-CVaR model with robust mean-CVaR models using interval and ellipsoidal uncertainty sets on an example. We observe that the CVaR robust mean-CVaR portfolios are more diversified and that they are sensitive to the initial data used to generate each set of samples. Moreover, we demonstrate that the value of the confidence level affects the conservatism level, the diversification, and the maximum expected return of the resulting portfolios.
Mean-Conditional Value at Risk
Consider $n$ assets $S_1, \ldots, S_n$, $n \geq 2$, with random returns. Suppose $\mu_i$ denotes the expected return of asset $i$, and let $x_i$ be the proportion of holdings in the $i$th asset. The expected return of the resulting portfolio $x$ can be written as
$E[R(x)] = \mu_1 x_1 + \cdots + \mu_n x_n. \qquad (1)$
Also, we will assume that the set of feasible portfolios is a nonempty polyhedral set represented as $\Omega = \{x \mid Ax = b,\; Cx \geq d\}$, where $A$ is an $m \times n$ matrix, $b$ is an $m$-dimensional vector, $C$ is a $p \times n$ matrix, and $d$ is a $p$-dimensional vector [4]. In particular, one of the constraints in the set $\Omega$ is $\sum_{i=1}^{n} x_i = 1$. Let $f(x, y)$ denote the loss function when we choose the portfolio $x$ from the set of feasible portfolios and $y$ is the realization of the random events (the vector of the returns of the $n$ assets). We consider the portfolio return loss, that is, the negative of the portfolio return, which is a convex (linear) function of the portfolio variables $x$:
$f(x, y) = -[x_1 y_1 + \cdots + x_n y_n] = -x^{T} y. \qquad (2)$
We assume that the random vector $y$ has a probability density function denoted by $p(y)$. For a fixed decision vector $x$, the cumulative distribution function of the loss associated with that vector is computed as
$\Psi(x, \gamma) = \int_{f(x,y) \leq \gamma} p(y)\, dy. \qquad (3)$
Then, for a given confidence level $\alpha$, the $\alpha$-VaR associated with portfolio $x$ is represented as
$\mathrm{VaR}_{\alpha}(x) = \min \{\gamma \in \mathbb{R} : \Psi(x, \gamma) \geq \alpha\}. \qquad (4)$
Also, we define the $\alpha$-CVaR associated with portfolio $x$ as
$\mathrm{CVaR}_{\alpha}(x) = \frac{1}{1-\alpha} \int_{f(x,y) \geq \mathrm{VaR}_{\alpha}(x)} f(x, y)\, p(y)\, dy. \qquad (5)$
Theorem 1. We always have $\mathrm{CVaR}_{\alpha}(x) \geq \mathrm{VaR}_{\alpha}(x)$; that is, the CVaR of a portfolio is always at least as big as its VaR.
Consequently, portfolios with small CVaR also have small VaR. However, in general, minimizing CVaR and minimizing VaR are not equivalent.
Since the definition of CVaR explicitly involves the VaR function, it is difficult to work with and optimize this function directly. Instead, the following simpler auxiliary function is considered [5]:
$F_{\alpha}(x, \gamma) = \gamma + \frac{1}{1-\alpha} \int_{f(x,y) \geq \gamma} \big(f(x, y) - \gamma\big)\, p(y)\, dy \qquad (6)$
and/or
$F_{\alpha}(x, \gamma) = \gamma + \frac{1}{1-\alpha} \int_{y \in \mathbb{R}^{n}} \big(f(x, y) - \gamma\big)^{+} p(y)\, dy, \qquad (7)$
where $z^{+} = \max\{z, 0\}$. This function, considered as a function of $\gamma$, has the following important properties that make it useful for the computation of VaR and CVaR [4]: (1) $F_{\alpha}(x, \gamma)$ is a convex function of $\gamma$.
(2) $\mathrm{VaR}_{\alpha}(x)$ is a minimizer over $\gamma$ of $F_{\alpha}(x, \gamma)$.
(3) The minimum value over $\gamma$ of the function $F_{\alpha}(x, \gamma)$ is $\mathrm{CVaR}_{\alpha}(x)$.
As a consequence of the listed properties, we immediately deduce that, in order to minimize $\mathrm{CVaR}_{\alpha}(x)$ over $x$, we need to minimize the function $F_{\alpha}(x, \gamma)$ with respect to $x$ and $\gamma$ simultaneously:
$\min_{x \in \Omega} \mathrm{CVaR}_{\alpha}(x) = \min_{x \in \Omega,\, \gamma} F_{\alpha}(x, \gamma). \qquad (8)$
Consequently, we can optimize CVaR directly, without needing to compute VaR first. Since we assumed that the loss function $f(x, y)$ is a convex (linear) function of the portfolio variables $x$, $F_{\alpha}(x, \gamma)$ is also a convex (linear) function of $x$. In this case, provided the feasible portfolio set $\Omega$ is also convex, the optimization problems in (8) are convex optimization problems that can be solved using well-known optimization techniques for such problems.
Instead of using the density function $p(y)$ of the random events in formulation (7), which is often impossible or undesirable to compute, we can use a number of scenarios $y_t$ for $t = 1, \ldots, T$. In this case, we consider the following approximation to the function $F_{\alpha}(x, \gamma)$:
$\tilde{F}_{\alpha}(x, \gamma) = \gamma + \frac{1}{(1-\alpha)T} \sum_{t=1}^{T} \big(f(x, y_t) - \gamma\big)^{+}. \qquad (9)$
Now, in the problem $\min_{x \in \Omega} \mathrm{CVaR}_{\alpha}(x)$, we replace $F_{\alpha}(x, \gamma)$ with $\tilde{F}_{\alpha}(x, \gamma)$:
$\min_{x \in \Omega,\, \gamma} \; \gamma + \frac{1}{(1-\alpha)T} \sum_{t=1}^{T} \big(f(x, y_t) - \gamma\big)^{+}. \qquad (10)$
To solve this optimization problem, we introduce artificial variables $z_t$ to replace $(f(x, y_t) - \gamma)^{+}$. To do so, we add the constraints $z_t \geq 0$ and $z_t \geq f(x, y_t) - \gamma$ to the problem [5]:
$\min_{x, \gamma, z} \; \gamma + \frac{1}{(1-\alpha)T} \sum_{t=1}^{T} z_t \quad \text{s.t.} \quad z_t \geq f(x, y_t) - \gamma, \;\; z_t \geq 0, \;\; x \in \Omega. \qquad (11)$
It should be noted that risk managers often try to optimize the risk measure while the expected return is no less than a threshold value. In this case, we can represent the mean-CVaR model as follows:
$\min_{x, \gamma, z} \; \gamma + \frac{1}{(1-\alpha)T} \sum_{t=1}^{T} z_t \quad \text{s.t.} \quad \mu^{T} x \geq R, \;\; z_t \geq f(x, y_t) - \gamma, \;\; z_t \geq 0, \;\; x \in \Omega, \qquad (12)$
and/or
$\min_{x, \gamma, z} \; -\lambda\, \mu^{T} x + \gamma + \frac{1}{(1-\alpha)T} \sum_{t=1}^{T} z_t \quad \text{s.t.} \quad z_t \geq f(x, y_t) - \gamma, \;\; z_t \geq 0, \;\; x \in \Omega, \qquad (13)$
where the first constraint of problem (12) indicates that the expected return is no less than the target value $R$, and $\lambda \geq 0$ in problem (13) is a risk aversion parameter that balances expected return against $\mathrm{CVaR}_{\alpha}(x)$. It is important to note that there is an equivalence between $R$ and $\lambda$, so that problems (12) and (13) generate the same efficient frontiers. Since $f(x, y_t)$ is linear in $x$, all the expressions $z_t \geq f(x, y_t) - \gamma$ represent linear constraints, and therefore the problem is a linear programming problem that can be efficiently solved using the simplex or interior point methods.
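As a practical illustration of how the scenario-based mean-CVaR linear program described above can be solved, the sketch below uses the open-source cvxpy modeling package (an assumption for illustration; the paper itself reports using CVX and MATLAB's fmincon). The return target, long-only constraint, and scenario matrix are placeholder inputs.

```python
import cvxpy as cp
import numpy as np

def mean_cvar_portfolio(scenarios, mu, alpha=0.95, target_return=0.001):
    """scenarios: (T, n) matrix of simulated asset returns (one row per scenario)
    mu: (n,) vector of expected asset returns."""
    T, n = scenarios.shape
    x = cp.Variable(n)        # portfolio weights
    gamma = cp.Variable()     # auxiliary VaR-level variable
    z = cp.Variable(T)        # artificial variables for the positive parts

    losses = -scenarios @ x   # loss = negative portfolio return per scenario
    cvar = gamma + cp.sum(z) / ((1 - alpha) * T)
    constraints = [z >= 0,
                   z >= losses - gamma,
                   cp.sum(x) == 1,
                   x >= 0,                      # long-only, for illustration
                   mu @ x >= target_return]
    cp.Problem(cp.Minimize(cvar), constraints).solve()
    return x.value, cvar.value
```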
CVaR Robust Mean-CVaR Model
One of the uncertain parameters in the mean-CVaR model is the mean return vector $\mu$, and using estimates for this parameter leads to estimation risk in portfolio selection. In particular, small differences in the estimates of $\mu$ can create large changes in the composition of an optimal portfolio. One way to reduce the sensitivity of the mean-CVaR model to the parameter estimates is to use robust optimization to determine the optimal portfolio under the worst-case scenario in the uncertainty set of the expected return. To this end, we presented robust mean-CVaR models with interval and ellipsoidal uncertainty sets in previous studies, as given in formulations (14) and (15), where the uncertainty sets are described by a given center vector and an $n$-dimensional scaling matrix. Now, we present a robust mean-CVaR portfolio optimization problem in which the estimation risk in the mean return is measured by CVaR. The CVaR robust mean-CVaR model selects an optimal portfolio based on the tail of the mean loss distribution, and adjusting the confidence level according to the preference of the investor corresponds to adjusting the conservatism level under uncertainty of the mean return [11]. In this model, CVaR is used to measure the risk of the portfolio return, as before. In addition, when using the mean-CVaR model, we consider the uncertainty of the expected return, which can be regarded as estimation risk, and we also use CVaR to measure this estimation risk. (We refer to this measure as the CVaR of the mean loss, to differentiate it from the CVaR of the portfolio return discussed in Section 2, and we use an analogous auxiliary function associated with it.) Thus, considering problem (13), a CVaR robust mean-CVaR portfolio is determined as the solution of the corresponding optimization problem. For a portfolio of $n$ assets, we assume $\mu \in \mathbb{R}^{n}$ is the random vector of the expected returns of the assets, with a probability density function.
As before, we can consider an auxiliary function to simplify the computations: and use the following approximation to the function (, ): where 1 , . . ., are a collection of independent samples for based on its density function ().We can show that [5] min This problem has ( + + ) variables and ( + + ) constraints that is the number of -samples, is the number of assets, and is the number of -scenarios.When the number of -scenarios and -samples increase, the approximations are getting closer to the exact values.But the computational cost significantly increases and thus makes the method inefficient.
Instead of this method, we can more efficiently determine the robust mean- portfolios using the smoothing method suggested by Alexander et al. [10].Alexander presented the following function to approximate (, ): where () is defined as follows: For a given resolution parameter > 0, () is continuous differentiable and approximates the piecewise linear Table 3: Time required to compute maximum-return ( = 0) portfolios for LP and smoothing approach ( = 99%, = 0.005).
Scenarios (T)
Samples function max(, 0) [10].We can also use this function to approximate (, ) as follows: Using smoothing method, the robust mean- model is as follows: This formulation has () variables and () constraints.Thus, the number of variables and constraints does not change as the size of -samples () and -scenarios () increases.The efficiency of the smoothing approach is shown in Section 4.
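For concreteness, the sketch below shows the form of smoothing function commonly attributed to Alexander et al.: a continuously differentiable, piecewise-defined approximation of max(z, 0) controlled by a resolution parameter ε. The exact functional form used in the paper is not reproduced in the text above, so this quadratic-bridge version should be read as an assumed, representative choice.

```python
import numpy as np

def rho_eps(z, eps=0.005):
    """Smooth approximation of max(z, 0) with resolution parameter eps:
    equal to z for z >= eps, 0 for z <= -eps, and a quadratic bridge in between."""
    z = np.asarray(z, dtype=float)
    out = np.where(z >= eps, z, 0.0)
    mid = np.abs(z) < eps
    out = np.where(mid, (z + eps) ** 2 / (4.0 * eps), out)
    return out

# The approximation error is at most eps/4, attained at z = 0:
print(rho_eps(0.0), rho_eps(1.0), rho_eps(-1.0))   # 0.00125, 1.0, 0.0
```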
Numerical Results
In this section, we first compare the performance of the CVaR robust mean-CVaR model with robust mean-CVaR models using interval and ellipsoidal uncertainty sets on actual data. Then, we compare the time required to compute the robust portfolios using problems (22) and (26). The dataset used here contains returns for eight assets, whose expected returns [12] and covariance matrix are given in Tables 1 and 2. In addition, the computations are based on 10,000 $\mu$-samples generated by the Monte Carlo resampling (RS) method introduced in [13] and 96 $y$-scenarios obtained via computer simulation. It should further be noted that the computations are performed in MATLAB version 7.12 and run on a Core i5 CPU 2.40 GHz laptop with 4 GB of RAM. Problems are solved using CVX [14] and the function "fmincon" in the Optimization Toolbox of MATLAB.
Sensitivity to Initial Data.
To show the sensitivity of the robust portfolios to initial data, we repeat the RS sampling technique 100 times. Figures 1, 2, and 3 each display 100 robust actual frontiers (actual frontiers are obtained by applying the true parameters to the portfolio weights derived from their estimated values [15]) for confidence levels of 99%, 90%, and 75%, respectively. As can be seen from the figures, the CVaR robust mean-CVaR actual frontiers change with the initial data used to generate the samples. Also, these changes increase as the confidence level decreases. Thus, we can regard the confidence level as an estimation risk aversion parameter. With these qualities, an investor who is more averse to estimation risk will choose a larger confidence level, while an investor who is more tolerant of estimation risk may choose a smaller one.
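The resampling step can be illustrated with a simple parametric version: draw resampled mean-return vectors from a normal distribution centered at the point estimate, with covariance scaled by the sample length. This is only a generic stand-in for the RS method of [13], whose exact procedure is not described here; the inputs are placeholders.

```python
import numpy as np

def resample_mean_returns(mu_hat, cov, n_obs, n_draws=10_000, seed=0):
    """Draw plausible mean-return vectors around the point estimate mu_hat.
    cov / n_obs approximates the sampling covariance of the estimated mean."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mu_hat, cov / n_obs, size=n_draws)

# Example with placeholder inputs for an 8-asset problem
mu_hat = np.full(8, 0.01)
cov = 0.0004 * np.eye(8)
mu_samples = resample_mean_returns(mu_hat, cov, n_obs=120)   # shape (10000, 8)
```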
Portfolio Diversification.
As we know, diversification decreases risk [4]. Portfolio diversification means distributing the investment among the assets in the portfolio. We illustrate in the following that, compared with the robust mean-CVaR portfolios with interval and ellipsoidal uncertainty sets, the CVaR robust mean-CVaR portfolios are more diversified. In addition, the diversification of the CVaR robust mean-CVaR portfolios decreases as the confidence level decreases.
To do so, we compute the CVaR robust mean-CVaR portfolios (for confidence levels of 99%, 90%, and 75%) and the robust mean-CVaR portfolios with interval and ellipsoidal uncertainty sets for the 8-asset example. The composition graphs of the resulting optimal portfolios are presented in Figures 4 and 5 and the following figures. Considering these figures, as the expected return value increases from left to right, the assets allocated in the minimum-expected-return portfolios are gradually replaced by a composition of other assets. Observing the right-most end of each graph, we can conclude that the asset composition of the portfolio obtained from the CVaR robust mean-CVaR model with a 99% confidence level is more diversified than that obtained from the other models. In Figures 9, 10, and 11, the CVaR robust mean-CVaR actual frontiers for different confidence levels are compared with the robust mean-CVaR actual frontiers with interval and ellipsoidal uncertainty sets and with the mean-CVaR true efficient frontier. Since portfolios on the robust mean-CVaR actual frontiers with interval and ellipsoidal uncertainty sets are less diversified, they must accept more risk for a given level of expected return and also achieve a lower maximum expected return. Consequently, in the figures, their actual frontiers lie further to the right and lower than the other frontiers, which is one of the disadvantages of low diversification in a portfolio. From these frontiers, we also deduce that the maximum expected return and the associated return risk increase as the confidence level decreases. But in this case the variations in the compositions of the resulting maximum-return portfolios might be large, and so the exact solution will not always be achieved. Instead, the maximum expected return of the portfolio is low for a 99% confidence level, and the variations will be low. So, the probability of poor portfolio performance will be reduced when there is a large estimation risk in $\mu$; thus, the resulting robust portfolios will be rather conservative. Consequently, an investor who is more averse to estimation risk selects a larger confidence level and obtains a more diversified portfolio. This justifies that it is reasonable to regard the confidence level as an estimation risk aversion parameter.
We also compare the time required to compute robust portfolios using problems (22) and (26) with different numbers of assets and different numbers of $\mu$-samples and $y$-scenarios. The results are given in Table 3. As we see, the time required to compute robust portfolios via the two approaches differs significantly when the sample size and the number of assets increase. For example, the time required to solve the CVaR robust mean-CVaR problem by the two approaches differs only slightly for a problem with 8 assets, 5,000 samples, and 500 scenarios. But when the number of assets is more than 50, the number of scenarios is more than 500, and the sample size is more than 5,000, the differences become significant. A problem with 148 assets, 3,000 $y$-scenarios, and 25,000 $\mu$-samples is solved in less than 80 seconds using the smoothing technique, while with (22) it took over 185 seconds. These comparisons show that, as the number of scenarios and samples becomes larger, the smoothing approach is computationally more efficient for determining robust portfolios than the other approach.
Conclusions
Since risk and return are uncertain parameters in portfolio optimization, their estimation might lead to large errors in the final decision. One of the widely used and effective approaches for optimization with data uncertainty is robust optimization. In this paper, we have presented the robust counterpart of the mean-CVaR portfolio selection problem under estimation risk in the mean return. We have additionally used CVaR as the risk measure for the estimation risk in the mean return. To solve the model efficiently, the smoothing technique of Alexander et al. [10] is utilized. The performance of the CVaR robust mean-CVaR model is compared with robust mean-CVaR models for both interval and ellipsoidal uncertainty sets. Our experiments have verified that the CVaR robust mean-CVaR portfolios are more diversified. Applying the model to large and real data sets can be considered for future research.
Figure 4: Composition of the robust portfolio weights obtained with the interval uncertainty set.
| 2017-11-29T22:21:11.712Z | 2013-09-22T00:00:00.000 | {
"year": 2013,
"sha1": "e577d3d4a35dc36d58a7f8a2c4cb7341f80e13ef",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2013/570950.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "caebee095ec2897671a69b481b849dd2f4ae19d9",
"s2fieldsofstudy": [
"Business",
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119100390 | pes2o/s2orc | v3-fos-license |
The lateral shower age parameter as an estimator of chemical composition
We explore the feasibility of estimating primary cosmic ray composition at ultra high energies from the study of the lateral age parameter of Extensive Air Showers (EAS) at ground level. Using different types of lateral distribution functions, we fit the particle density of simulated EAS to find the lateral age parameter. We discuss the chemical composition by calculating the merit factor for each parameter distribution. The analysis considers three different primary particles (proton, iron and gamma), four different zenith angles (0°, 15°, 30° and 45°) and three primary energies (10^17.25 eV, 10^17.50 eV and 10^17.75 eV).
Introduction
The particle lateral distribution of extensive air showers (EAS) is the key quantity for cosmic ray ground observations, from which most shower observables are derived. An EAS is initiated by a high energy cosmic ray particle in the atmosphere, creating a multitude of secondary particles, which arrive nearly at the same time distributed over a large area perpendicular to the direction of the original particle. The disc of secondary particles may extend over several hundred meters from the shower axis, reaching its maximum density in the center of the disc, which is called the shower core. The density distribution of particles within the shower disc can be used to derive information on the primary particle. Due to the low rate of these events on the earth surface, EAS measurements on ground level are carried out using large arrays of individual detectors, which take samples of the shower disc at several locations [1,2].
One of the parameters commonly used to describe the form of the lateral density distribution is the lateral shower age parameter (LSAP) in the Nishimura-Kamata-Greisen (NKG)-function [3,4]. The name LSAP expresses the relation between the lateral shape of the electron distribution and the height of the shower maximum. Due to the statistical nature of the shower development, the height of the shower maximum is subject to strong fluctuations. Showers, which have started shallower in the atmosphere are called old and they are characterised by a large LSAP value. Young showers have started deeper in the atmosphere, which corresponds to a smaller value of the LSAP. Apart from fluctuations, the height of the shower maximum depends on energy and mass of the shower initiating primary [5]. Therefore, the LSAP is also sensitive to the mass of the primary [2].
In the following sections, using the concept of the LSAP, we present the study of the chemical composition. First, we show the LSAP calculated from fits of the simulated lateral density distribution using two types of functions: NKG [3,4] and Linsley [6]. Second, we find the distributions of the LSAP for each primary particle, zenith angle and energy used. Finally, by means of the merit factor between the distributions, we analyze the composition discrimination power of the LSAP.
Monte Carlo simulation
For this work we generated a library of extensive air showers using AIRES 2.8.4a [7] based on the hadronic model QGSJET-II-03 [8,9]. We set a relative thinning of 10 −6 , a weight factor of 0.2 and a ratio between the two weight factors (electromagnetic/hadronic) equal to 88.
For each energy, zenith angle and primary type a total of 600 showers were produced considering a uniform azimuthal distribution between 0 • and 360 • . Only gammas, electrons/positrons and muons with energies above 1.286 MeV, 264 keV and 55 MeV, respectively, have been taken into account.
Lateral shower age
The LSAP was introduced primarily to describe the development of the electromagnetic cascade. Nishimura, Kamata and Greisen found a function which relates the lateral distribution of shower particles to the shower age. It is called the NKG function and has the form given below, where ρ(R) is the particle density at distance R, N is the total number of shower secondaries, C is the normalization constant, R_0 is the Molière unit and s is the shower age [3,4].
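For reference, the standard NKG lateral distribution function can be written as follows; whether the 1/R_0^2 factor is kept explicit or absorbed into the normalization constant C is a convention that may differ from the authors' Eq. (1):

```latex
\rho(R) = \frac{N}{R_0^{2}}\, C(s)\,
          \left(\frac{R}{R_0}\right)^{s-2}
          \left(1 + \frac{R}{R_0}\right)^{s-4.5},
\qquad
C(s) = \frac{\Gamma(4.5 - s)}{2\pi\,\Gamma(s)\,\Gamma(4.5 - 2s)} .
```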
Since the 70's, many authors have pointed out that the NKG function with a single age parameter is not adequate to describe the lateral distribution at all distances, meaning that the lateral shower age varies with the axial distance [10]. Linsley proposed a two-parameter function, characterized by α and η [6], as a correction of the NKG function (Eq. (2)); its distribution is governed by α as R approaches 0 and by η as R approaches ∞.
For the NKG form and Eq. (2) to be effectively equal, s should satisfy a matching condition (Eq. (3)); the effective lateral age parameter therefore varies with the distance r as given in Eq. (6). A similar deduction can be applied using a different class of NKG function instead of Linsley's function. In Ref. [11] this NKG function is defined (Eq. (7)), with k the normalization constant and R_s = 5R_0 the scaling parameter. From Eq. (7) one obtains Eq. (8), with r = R/R_0 again. Following the same steps as above, that is, equating Eqs. (3) and (8), we get the effective lateral age parameter of Eq. (9). In order to obtain the value of the lateral shower age parameter, one can use Eq. (6) or Eq. (9), but that implies finding the parameters α and η for Eq. (6), or β for Eq. (9), respectively. These parameters can be found by fitting the lateral density distribution of particles on the ground with the respective LDF.
In Fig. 1 we show the fit of the lateral density distribution corresponding to iron as the primary particle, with an energy of the order of 10^17 eV; from this fit we obtained the values α = 1.46 and η = 3.91. With these values, and choosing a value for the distance r, one can find the lateral age from Eq. (6).
The fit of the lateral density distribution corresponding to proton as primary particle, with an energy of 10 17.50 eV and zenith angle of 15 • is shown in Fig. 2. In this case, the fit was performed with the NKG function given by the Eq. (7) and the value of β obtained is 2.1. Again, with that value and choosing one value for the distance r, one can find the lateral age from the Eq. (9).
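A minimal sketch of how such a fit could be carried out is given below. It fits a Linsley-type LDF of the form ρ(R) = A (R/R_0)^(−α) (1 + R/R_0)^(−(η−α)) to binned density data with SciPy; the functional form written here, the Molière radius value, and the synthetic data points are assumptions for illustration, not the exact expressions or data used in the paper.

```python
# Hypothetical fit of a Linsley-type LDF to lateral density data (illustration only).
import numpy as np
from scipy.optimize import curve_fit

R0 = 80.0  # assumed Moliere radius in metres

def linsley_ldf(R, A, alpha, eta):
    x = R / R0
    return A * x**(-alpha) * (1.0 + x)**(-(eta - alpha))

# Synthetic "measured" densities at several core distances (placeholder values).
R = np.array([50, 100, 200, 400, 600, 800, 1000], dtype=float)
rho = linsley_ldf(R, 30.0, 1.5, 3.9) * np.exp(np.random.default_rng(1).normal(0, 0.05, R.size))

popt, pcov = curve_fit(linsley_ldf, R, rho, p0=(10.0, 1.2, 3.5))
A_fit, alpha_fit, eta_fit = popt
print(f"alpha = {alpha_fit:.2f}, eta = {eta_fit:.2f}")
```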
Merit factor and chemical composition
In order to study the composition discrimination power of the LSAP, we need to evaluate it at a fixed distance from the shower core. Therefore, it is necessary to choose a distance in which the relative lateral density fluctuation is minimum [11,12]. We will choose a distance of 500 m. In Fig. 3 we show that ∼ 500 m is the distance with the least fluctuation.
Once we have established the best distance at which to evaluate the LSAP, using Eq. (6) and Eq. (9) we build the distribution of this parameter for each primary particle, energy and zenith angle. Figures 4 and 5 show the distributions of the lateral age parameter of 600 showers, with a primary energy of 10^17.50 eV, four zenith angles, and proton, iron and gamma as primary particles.
For each zenith angle, we calculate the merit factor between the distributions of two primaries A and B (see the sketch below). From Figs. 4 and 5 one can see that the merit factor does not depend significantly on the LDF used, but it does depend on the zenith angle (higher zenith angles correspond to lower merit factors). This means that, as the zenith angle increases, it becomes more difficult to determine the chemical composition of the primary particle. Finally, Fig. 6 shows the merit factor as a function of the zenith angle for the three energies considered in this work. It can be seen that the discrimination power between gamma and nuclei is much stronger than between proton and iron, but in any case MF > 2 for all combinations studied. It is also apparent from Fig. 6 that the merit factor is essentially independent of the energy.
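The sketch below assumes the usual definition of the merit factor, MF = |⟨A⟩ − ⟨B⟩| / sqrt(σ_A² + σ_B²), applied to two LSAP samples; the arrays are placeholders standing in for the 600 fitted age values per primary.

```python
# Merit factor between two LSAP distributions, assuming the standard definition.
import numpy as np

def merit_factor(a, b):
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

rng = np.random.default_rng(2)
s_proton = rng.normal(1.30, 0.05, 600)  # placeholder LSAP values at 500 m
s_iron   = rng.normal(1.42, 0.04, 600)
print(f"MF(proton, iron) = {merit_factor(s_proton, s_iron):.2f}")
```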
Conclusions
In this work we performed a study of the LSAP using EAS simulated at three energies (10^17.25 eV, 10^17.50 eV and 10^17.75 eV). Independently of the LDF fitting function used, the LSAP distributions show a clear separation between the light and heavy components. For this reason, the LSAP could be used to obtain a first estimate of the chemical composition of the primary particle. We observed that in all cases the merit factor decreases as the zenith angle increases, independently of the primary particle energy. Therefore, for inclined showers it is more difficult to discriminate between proton and iron using the LSAP.
Simulations of events with a ground array using water Cherenkov stations will be performed to determine how its
| 2013-09-13T18:36:44.000Z | 2013-09-01T00:00:00.000 | {
"year": 2013,
"sha1": "a701f3c606762045937a99566753701f46f351cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2365eb82a5d9729af6d6b665654390b314ca6c68",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261515701 | pes2o/s2orc | v3-fos-license |
Differential diagnoses of cavitary lung lesions on computed tomography: a pictorial essay
Cavitary lung lesions are frequent findings on imaging, with the most common sources being malignancies and infections. They have multiple etiologies and differential diagnoses, which can have overlapping imaging characteristics, posing a diagnostic difficulty. This article is an educational pictorial essay highlighting the pitfalls and differential diagnoses of lung cavities, and focusing on the typical imaging patterns, the clinical and biological contexts of each etiology, illustrated by images that were extracted from the images archiving system of our radiology department. The radiologist should be aware of all etiologies of cavitary lung lesions, including the less frequent ones, and be familiar with their imaging patterns and characteristics, which aids in establishing the diagnosis or, at the very least, narrowing down the evoked diagnoses.
Background
A lung cavity is defined as a lung lesion containing gas and surrounded by a wall of a variable thickness [1].
An excavation is defined as the appearance of a cavity, which can be inside of an opacity (Consolidation, mass, or nodule) [1].
Different mechanisms can lead to the formation of cavities; the most common one is loss of substance by necrosis inside a mass or nodule, which may be neoplastic, infectious, or ischemic. The necrotic material is evacuated totally or partially through a bronchus and replaced by air. Other mechanisms encompass a mechanical loss of substance without necrosis (e.g., trauma) and cystic malformation [1].

Computed tomography (CT) is a more accurate diagnostic tool than conventional radiography, allowing the detection and analysis of cavitary lesions even when they are small, and characterizing, more sensitively, the lesions and the presence or absence of associated lesions. Moreover, it also allows CT-guided biopsies for suspicious cases [2].
The purpose of this article is to review the pathologies that give rise to cavitary lung lesions that radiologists encounter in their daily practice.These have been categorized into two main groups: frequent etiologies and rare etiologies.
Imaging features to analyze in a cavity
The wall
• Thickness: Maximal wall thickness is a very informative characteristic: a thickness above 4 mm is considered thick, and a thickness below 4 mm is considered thin. Thin-walled cavities below 2 mm can be called cysts [3].
In solitary cavities, a thickness below 4 mm is highly suggestive of benignity (92% are benign), a thickness between 4 and 15 mm reflects a moderate risk of malignancy (49% are malignant), and a thickness over 15 mm is very suggestive of malignancy (95% are malignant).
• Margins: Can be smooth and regular, or irregular. Malignant lesions are associated with irregular inner and outer margins [4].
Content
The content could be totally air, or partially air (mixed; solid and or liquid content, in addition to air).
The liquid component could be due to blood, pus, bronchial secretions, or liquefied necrosis, causing a horizontal air-liquid level.
Other features to analyze
• Size: The size is especially valuable for follow-up and assessing treatment response.• Number: Single or multiple (e.g., metastases tend to be multiple).• Topography and distribution: Unilateral or bilateral, diffuse or predominant in a certain region (the periphery, the perihilum, the upper regions or the lower regions) (e.g., Tuberculosis typically affects the upper regions) [5].• Evolution: Comparing the current CT findings with the prior CT findings allows the distinction between: • An opacity that underwent excavation.
• An ancient cavity that filled with liquid (infection of the cavity).• An ancient cavity that filled with solid content (aspergillus colonization of a residual cavity, for example) [6].
Pseudo-cavities
Pulmonary emphysema It shouldn't be considered a cavity as it has no wall.On imaging, a pseudo-wall of 1mm thickness can be seen as a result of compressed and piled up parenchymal septa around the emphysema (Fig. 1).Vessels can be seen within the center of the emphysema [7].
Emphysema can be infected and manifest an air-fluid level (Fig. 2).
Cystic bronchiectasis
It is a dilated bronchus characterized with a saccular "pouch-like" ending.What distinguishes them from cavities is the presence of bronchial systematization.They align in a hilo-peripheral axis, or group together in the para-mediastinum area creating a "cluster of grapes" appearance (Fig. 3) [1,8].Honeycombing Honeycombing is a group of clustered small pulmonary cyst-like lesions, generally between 3 and 10 mm in diameter, arranged in multilayers, most commonly in the subpleural space (Fig. 4).This appearance is due to parenchymal destruction and retraction surrounded by fibrotic walls, as a consequence of fibrotic changes [9].This clarifies why they are categorized as pseudo-cavities.
Diaphragmatic hernia
Diaphragmatic hernia can be congenital or acquired (iatrogenic or traumatic), it occurs through a diaphragmatic defect or through a weakened diaphragmatic hiatus.
Herniated intestinal loops or stomach could look similar to cavitary lesions on axial slices, but on coronal and sagittal reconstructions, the continuity to the abdominal cavity through the diaphragm is easily seen along with the collar sign at the level of the diaphragm (Fig. 5).The diaphragmatic defect (discontinuity) is sometimes visible [10,11].
Extra-pulmonary cavities
Pleural empyema is a differential diagnosis for peripheral lung abscess.
A pleural empyema tends to be oval in shape, forms an obtuse angle with the chest wall, and has regular and fine margins.It compresses the adjacent parenchyma (including bronchi and vessels) and separates the visceral and parietal pleura (split pleural sign) (Fig. 6) [12].
Frequent etiologies Infectious
Tuberculosis Tuberculosis is a bacterial infection due to Mycobacterium Tuberculosis, in the clinical context of tuberculosis contagion, subacute or chronic cough, nocturnal sweats, and anorexia.
Tuberculous lung cavities are present in 45% of postprimary tuberculosis and is less frequent in primary infection.
Cavitation is due to caseous necrosis and is located most frequently in the apical and posterior segments of upper lobes, and the superior segments of lower lobes (Fowler).
It appears on CT as a lesion with total air content (it seldom has an air-liquid level), and with a thick irregular wall.
Pulmonary abscess A pulmonary abscess is a cavity filled with pus developed in a pneumonia site.It is associated with clinical and biological infectious signs.
It is often caused by Klebsiella Pneumoniae, Pseudomonas Aeruginosa, or anaerobic organisms.
Rupture of the abscess and the evacuation of pus through bronchi will cause an air-fluid level appearance.
It is characterized on CT with a thick irregular wall and inner margin, enhanced after contrast, often surrounded by consolidation causing poorly defined outer margins (Fig. 8).
An infected necrotic neoplasm is a differential diagnosis that should be recalled especially in older patients and smokers, as it can mimic an abscess [14].Septic emboli Septic emboli are due to hematogenous dissemination; the infectious starting point is generally a right endocarditis, a septic thrombophlebitis, or an infected venous catheter.It is also frequently seen in cases of substance addiction.
It manifests on CT as bilateral asymmetrical alveolar nodules, that can excavate and converge.They are more predominant in the periphery and at the bases.And they often (54% of cases) have a feeding vessel.Wedge-shaped peripheral opacities are also a common manifestation (Fig. 9) [15].
Hydatidosis Hydatidosis or Echinococcosis is a parasitic infection due to Echinococcus Granulosus that occurs due to Echinococcal eggs swallowing, which often happens in rural areas, and when there is contact with contaminated dogs.It is most often asymptomatic apart from complications.
Lung is the second most common site of infection after the liver.
In the early stages of the disease, it manifests on CT as an oval or round-shaped liquid collection, except when a complication occurs: • In case of fissure: It can manifest as a crescentic air collection topping the cyst.Or it manifests as air bubbles within the endocyst (sometimes it translates superinfection instead of fissure) (Fig. 10).
• In case of rupture: An image of floating membrane appears; an irregular air-fluid level with detached wall membranes giving the appearance of a "Waterlily" (Fig. 11).
Ring enhancement of the pericyst after contrast indicates infection or cyst-bronchial tree communication.The differential diagnosis of infected hydatid cysts are abscesses and neoplasms due to the high density content [16,17].Aspergillosis Aspergillosis is a fungal infection, caused by inhaling spores of the aspergillus species, which is a group of saprophytic fungi.
The patient could be asymptomatic, or present with thoracic pain, hemoptysis, and fever.
We distinguish three entities that could cause cavitary lesions: • Aspergilloma: It is an aspergillus colonization of a residual cavity (most commonly a tuberculosis residual cavity).
On CT it manifests as a cavity in an upper lobe or a Fowler segment, containing a well-defined oval or round mass (called fungus ball), dense, and mobile when the patient changes position.When the fungus ball is small, it makes the appearance of "Monod" sign.On the other hand, when the fungal ball almost entirely fills the cavity, only an air-crescent topping the mass is seen (Fig. 12).A CT scan of the chest can be performed in the prone position to demonstrate the mobility of the mass [6].
• Subacute invasive pulmonary aspergillosis: It develops in patients with chronic pulmonary diseases (like chronic obstructive pulmonary disease), and/or in patients with moderate immunosuppression such as diabetes, patients on corticosteroids, malnutrition, and advanced age.
On CT, it manifests as a progressive setting on weeks or months of a consolidation, usually found in an upper lobe, followed by excavation with an air-crescent sign (which reflects worsening of the disease).
The cavity can be filled with a fungus ball giving the appearance of "Monod" sign.Adjacent pleural thickening may occur [18].• Angio-invasive aspergillosis: It occurs in a patient with profound immunosuppression; recent history of neutropenia, allogeneic stem cell transplantation or solid organ transplantation, prolonged use of corticosteroids or use of immunosuppressants.It is a disseminated invasive aspergillosis, causing an occlusion of small arteries by mycelial filaments.It manifests on CT as solitary or multiple solid nodules or masses, which could be surrounded by ground glass (due to perilesional hemorrhage).Next, excavation of the lesion occurs, presenting as an aircrescent sign that tends to appear in the recovery phase of neutropenia.The peripheral ground glass, on the other hand, tends to resolve progressively with evolution.The reverse halo sign can also be seen, but it is a sign that is more suggestive of mucormycosis than aspergillosis [19][20][21].
Pneumocystosis Pneumocystosis is an opportunistic fungal infection due to Pneumocystis Jirovecii.
It is typically seen in patients with a CD4 lymphocytes count below 200 cells/μl, and it is the most frequent cause of cavities in HIV patients.
On CT, it presents as areas of ground glass (or sometimes consolidation) as the most common manifestation, reflecting alveolitis.It is often isolated and can be associated with reticulations or septal thickening, and with small cysts sitting within the ground glass.The ground glass opacities have a central distribution with relative peripheral sparing in 41% of cases, a mosaic pattern in 29%, and a diffuse distribution in 24% of cases (more common in non-infected HIV patients).
An upper lobe predominance has been described.
Fig. 12
Axial CT image of the chest in the lung window of a 59-year-old male, showing an aspergilloma in the right apex of the lung, manifesting as a cavitary lesion filled with a round solid mass, topped by air in a crescentic shape Fig. 13 Axial CT image of the chest in the lung window of a 67-year-old male, showing central areas of ground glass and crazy paving, with cysts within, in a patient with pneumocystosis.
Note an associated mild bilateral pleural effusion
The cysts correspond to necrosis; they are present in 10% of cases and are more frequent in HIV-infected patients than patients without HIV.They can be single or multiple, round or oval, with a thin wall, and are associated with a higher risk of pneumothorax (Fig. 13) [22,23].
Infectious pneumatocele It is due to obstruction of a bronchiole, which causes air trapping.
Infectious pneumatoceles have been described most frequently as a complication of aerogenous staphylococcal pneumonia in children.
It manifests as a cyst with total air content or air-liquid content, it can be of any size, round or oval, and it communicates with a bronchus.
It may occur early during the infection, or late after the infection (Fig. 14) [24,25].
Neoplastic
Primary tumors They occur more frequently in cigarette smokers.
When patients are symptomatic, they commonly present with hemoptysis, coughing, shortness of breath, chest pain and persistent infections.
Excavation of the tumoral mass or nodule happens approximately in 2 to 16% of cases, due to central necrosis.
Epidermoid carcinoma of the lung, also known as squamous cell carcinoma, is the histological type that most often excavates [26].
It manifests on CT as a cavity that has a thick irregular wall.The content can be totally aerial (Fig. 15), or mixed; solid, liquid (necrosis and blood or infection), or both [27].
Association with lymphadenopathies, metastatic lung nodules, pleural effusion or pleural thickening are often seen.
Metastases
They typically appear as multiple sharp spherical nodules. Excavation of metastases is not very frequent. It commonly happens when the primary tumor is an epidermoid carcinoma, and under chemotherapy (Fig. 16) [28,29].
Fig. 16 Sagittal reconstruction CT image of the chest in the lung window of a 63-year-old female, showing lung nodules, some of which are excavated, in a patient on chemotherapy for metastatic invasive ductal carcinoma of the breast
To note that a feeding artery can be seen in small metastases [30].
Immunologic (autoimmune)
Cavitary lesions of immunologic origin are easy to link to the disease when history of the disease is known, and when the clinical and biological contexts are suggestive.
Granulomatosis with polyangiitis It is a necrotizing granulomatous vasculitis of small and medium-sized arteries, that affects preferentially the lungs (90%), the ENT sphere, the trachea-bronchial tree, and kidneys).On imaging nodules are commonly multiple, bilateral and scattered, commonly ranging from 2 to 4cm in size, and they excavate in more than 25% of cases, especially the nodules above 2cm, giving the appearance of a cavity with a thick wall that becomes more and more thin with evolution (Fig. 17) [31,32].
Rheumatoid arthritis Rheumatoid nodules are not frequently seen in rheumatoid arthritis (10-22% of cases), and they reflect a good prognosis.
They tend to present as multiple nodules (≥ 4 nodules) that are peripherally located with smooth borders, and surrounded by small satellite nodules which can give the appearance of a subpleural rind of soft tissue due to coalescence.Cavitation is seen in 18% of cases (Fig. 18) [33].
Others
Langerhans cell histiocytosis Langerhans cell histiocytosis is a multisystemic disease of unknown etiology, characterized with an infiltration by specialized histiocytes called Langerhans cells [34].
There is evidence of causal relationship between cigarette smoking and pulmonary Langerhans cell histiocytosis.
Lung involvement in this disease is marked by the presence of bilateral lung centrilobular nodules and micronodules, primarily concentrated in the middle and upper regions.These nodules gradually progress and develop into excavations, initially resembling thickwalled cysts that progressively become thinner, indicating an advanced stage of the disease.During this late stage, the cysts frequently merge, resulting in an irregular and distinctive shape called "bizarre shape" (Fig. 19) [35].
Pulmonary infarction Pulmonary infarction complicates pulmonary embolism, and it manifests on CT as a peripheral consolidation, triangular in shape, with a large pleural base and an apex oriented toward the hilum, containing air lucencies (air bubbles more visible in the mediastinal window), with diminished enhancement of the parenchyma involved, it gives the appearance of a reversed halo sign (Fig. 20) [36].
Traumatic pneumatocele Traumatic pneumatocele is due to a laceration in the lung parenchyma with retraction of the parenchyma surrounding it.It is a round or oval cyst, single or multiple, filled with air, blood or both.It is usually surrounded by a contusion in the acute setting (Fig. 21) [37].
Rare etiologies Congenital
Congenital pulmonary airway malformation It is an abnormal bronchial proliferation during the embryonic life, with formation of communicating cysts.
Involvement is often unilateral in the lower region of the lung.They can have a purely air content, or air-liquid content.
Other studies suggested that cysts between 2.8 and 7.9 cm in size were likely to be type 1, that smaller cysts (< 2.8 cm) were more likely to be type 2, and that larger cysts (> 7.9 cm) were more likely to be type 4; that the incidence of pneumonia was higher in type 2 than in types 4 and 1; and that mediastinal shift and pneumothorax were significantly more common in type 4 [40].

Bronchogenic cyst
The mediastinal location is more frequent than the pulmonary location. An intrapulmonary bronchogenic cyst on CT is generally a round, well-circumscribed, unilocular or multilocular mass, usually located in a lower lobe. It can have liquid density, high density (due to proteinaceous content), or air density (Fig. 22) [41].
Pulmonary sequestration Pulmonary sequestration is a territory of the lung that corresponds to a segment or a lobe which does not communicate with the tracheobronchial tree nor with the pulmonary arteries (which permits the diagnosis on CT).It is supplied by the systemic circulation, most commonly, the descending aorta.
It is often located in the left lower lobe.We distinguish intra-lobar (ILPS) and extra-lobar pulmonary sequestration (ELPS): • ILPS: The most common.It presents later in childhood with recurrent infections, and its venous drainage is often towards the pulmonary veins.• ELPS: Is less frequent.It presents in the neonatal period with respiratory distress, and its venous drainage is toward the systemic veins into the right atrium.
ILPS can present on imaging as multiple cystic areas, cavitation, focal bronchiectasis, or areas of atelectasis.ELPS tends to appear as a well-defined solid mass [39,42].
Others
Sarcoidosis It is a systemic granulomatosis, affecting most commonly the lung parenchyma and/or mediastinohilar lymph nodes.It is asymptomatic in most cases, and when it is symptomatic, cough and dyspnea are the most usual symptoms.
On CT, it often exhibits multiple micronodules and nodules with peri-lymphatic and upper lobes predominance, which can merge forming pseudo-masses of fibrosis.
It can present with mediastinal and hilar lymphadenopathies that can calcify.
Excavation in sarcoidosis is rare and usually occurs in active severe sarcoidosis.Moreover, it is often located in the middle and upper lobes, and in the hilar and posterior regions, usually seen within an alveolar consolidation and fibrotic lesions (Fig. 23) [43].
Pneumoconiosis Silicosis and coal workers pneumoconiosis are characterized by the presence of small nodules distributed in the upper zones of the lung, with a posterior predominance.These nodules tend to coalesce and form massive fibrotic lesions, in which cavitation can occur.
To note that mediastino-hilar lymphadenopathies are associated, with a tendency to calcify in an eggshell [44].
Lymphangioleiomyomatosis (LAM)
It is a disease that affects women at reproductive age and is characterized by a smooth muscle proliferation in the lung parenchyma, which obstructs bronchioles, and causes air entrapment and cyst-like spaces.
The pathogeny of this disease is associated with tuberous sclerosis complex mutations (TSC1 or TSC2) and estrogenic influence.
We distinguish 2 types: The sporadic LAM, and LAM with tuberous sclerosis.It manifests on CT as multiple, bilateral, and diffuse round cysts of variable size (often small), with a thin wall (Fig. 24).Some interstitial changes such as reticulations and thickening of interlobular septa, and ground glass can also be seen.
It can be associated with renal angiomyolipomas or chylous effusions in tuberous sclerosis complex.
Pneumothorax is a possible complication.
The diagnosis generally requires a lung biopsy in cases where tuberous sclerosis complex is absent [45,46].
Conclusions
Cavitary lesions have multiple etiologies; considering the epidemiological context, and the clinico-biological context is crucial to orient the diagnosis.
The number, size, distribution, wall, content, evolution and the associated lesions are all essential to analyze.
Abscesses, tumors, and aspergillomas are frequent diagnoses, along with tuberculosis and hydatidosis in epidemic areas.
Multiple small cavities and nodules peripherally located, and with feeding vessels, in a context of sepsis should recall septic emboli.
Small cysts should suggest pneumatocele in a context of trauma or infection, and congenital diseases should be recalled in children.If they are diffuse, they should suggest histiocytosis in smoking young men, and LAM in childbearing age women, and pneumocystosis when associated with ground glass in immunosuppressed patients.
Systemic diseases can only be evoked if their clinico-biological context and when associated lesions are suggestive.
The suggested diagnosis or (narrowed diagnoses) will aid the multidisciplinary team in moving forward with treatment options when indicated, and with further diagnostic tools such as a CT-guided biopsy when needed.
Fig. 1 Axial CT image of the chest in the lung window of a 50-year-old male, showing multiple paraseptal and centrilobular emphysemas.Notice the absence of a wall.Only piled up septa around can be seen as pseudo-walls
Fig.3Axial CT image of the chest in the lung window of a 55-year-old male, showing bilateral cystic bronchiectasis in the lower lobes of the lung (with mucoid impactions in this case)
Fig. 6 Axial post-contrast CT image of the chest in the mediastinal window of a 26-year-old female, showing a right pleural empyema with air bubbles within, and an enhancement and thickening of the pleura
Fig.9 Sagittal reconstruction CT image of the chest in the lung window of an 18-year-old male, showing septic emboli manifesting as multiple peripheral nodules, some of which are excavated, and some of which are triangular in shape
Fig. 10 Axial CT image of the chest in the lung window, showing a right lung hydatid cyst with air bubbles within (due to fissure)
Fig. 14 Axial CT image of the chest in the lung window of a 41-year-old male, showing an area of consolidation in the right lower lobe associated with pneumatoceles in a context of staphylococcal pneumonia
Fig.15 Coronal reconstruction CT image of the chest in the lung window of a 70-year-old male, showing in the left superior lobe an excavated mass with a thick wall, irregular margins and a totally air content, proven to be an epidermoid carcinoma on pathology.To note bilateral pulmonary emphysema, more marked in the right superior lobe
Fig.17 Axial CT image of the chest in the lung window of a 56-year-old male, showing bilateral excavated nodules in a patient followed up for granulomatosis with polyangiitis
Fig.18 Axial CT image of the chest in the lung window of a 45-year-old female, showing confluent peripheral lung nodules, enclosing air (excavation), in a patient followed-up for rheumatoid arthritis
Fig. 19 A schema showing the evolution of a lung nodule in Langerhans cell histiocytosis
Fig. 23 Axial CT image of the chest in the lung window of a 62-year-old female, showing bilateral perihilar alveolar infiltrate, with excavation in the left, in a patient followed up for sarcoidosis
| 2023-09-05T13:52:14.722Z | 2023-09-04T00:00:00.000 | {
"year": 2023,
"sha1": "3328047d8dc6e8047bb71296e1eb30d279bbda3d",
"oa_license": "CCBY",
"oa_url": "https://ejrnm.springeropen.com/counter/pdf/10.1186/s43055-023-01098-7",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "37c7e34b24262bf657c8a87d1993ede0876584be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
260378766 | pes2o/s2orc | v3-fos-license |
Global Hierarchical Neural Networks using Hierarchical Softmax
This paper presents a framework in which hierarchical softmax is used to create a global hierarchical classifier. The approach is applicable for any classification task where there is a natural hierarchy among classes. We show empirical results on four text classification datasets. In all datasets the hierarchical softmax improved on the regular softmax used in a flat classifier in terms of macro-F1 and macro-recall. In three out of four datasets hierarchical softmax achieved a higher micro-accuracy and macro-precision.
INTRODUCTION
In machine learning, classification is a popular and well-studied problem. Recently neural networks have flourished in different classification tasks, especially when the number of training examples is large. In some cases the classes have a natural taxonomy, creating a hierarchy among them. Hierarchical classification tries to incorporate this hierarchy in the classifier, as apposed to flat classifiers, which need to discriminate between all classes at once.
There are two different types of hierarchical classifiers, local and global. The local classifier is a combination of flat classifiers, while the global classifier adapts the classifier internally. When large neural networks are trained, it is often unrealistic to train multiple networks, due to computation time, and to recreate the hierarchy of the classes by using multiple local classifiers. Therefore we use a global classifier. We do this by exchanging the softmax for a hierarchical softmax, such that any neural network can be modified into a hierarchical classifier. We show that this adjustment makes the network a truly global hierarchical classifier and that it can enhance performance in several classification tasks. The paper is structured as follows. In Section 2, previous work on hierarchical classifiers and hierarchical softmax is covered. Our proposal for the hierarchical softmax is presented in Section 3. Section 4 describes several datasets and Section 5 discusses the experimental setup. In Section 6 we compare the results of models with a regular softmax and with a hierarchical softmax on these datasets. Finally, in Section 7 we give our conclusions and proposals for future work.
PREVIOUS WORK ON HIERARCHICAL CLASSIFICATION
Most classification algorithms could be considered flat classifiers.
They distinguish between all classes at once. When there are a large number of classes, this can become difficult. Instead, hierarchical classification can be used. A hierarchical classifier tries to incorporate the hierarchical structure of the class taxonomy, when this is present. [25] proposed a survey on hierarchical classification and built a unifying framework for distinguishing methods. First the structure of the taxonomy is considered, which can be a Tree or a Directed Acyclical Graph (DAG). Working with trees is easier, because a node in a DAG can have more than one parent node. With regard to the classifier, this can be local (top-down) or global (big-bang).
The local hierarchical classifier is not a full hierarchical method on its own. Instead it is a group of flat classifiers that during training considers a subset of classes, based on where in the taxonomy the flat classifier is used [25]. A global classifier takes the hierarchy of the classes into a single model [2]. The advantages of a global classifier are a smaller model and class dependencies are automatically incorporated [25]. The global classifier also has the convenience that the number of parameters is far less than for the same classifier in a local hierarchy. More importantly, a misclassification at a certain level is unrecoverable in a local hierarchy, while in a global hierarchy this can be compensated.
Global Hierarchy
There are different types of global classifiers. First, there are global approaches based on the approach of [23] that use class clusters. An example specific to text mining is given in [13]. The second type of global classifiers are built on the multi-label classification. In this approach the non-leaf nodes are supplemented with the information of their parent nodes. Finally, the last type of a global classifier is a modification of a local classifier to incorporate the class hierarchy directly. Although harder to construct, the output of this last method might be easier to explain than the one from a local based approach [25]. During both training and testing probabilities of all classes can be assessed.
Hierarchical Softmax
Hierarchical softmax was first described by [4]. In the context of neural network language models, hierarchical softmax was first introduced in [20]. Other versions of hierarchical softmax are proposed in [18] and [19]. [5] used the specification from [20] in their FastText classifier. While most of these methods use a binary tree to speed up training and inference time, we try to exploit the natural hierarchy found in the taxonomy of classes for improving the performance. In this taxonomy a node can have more than two child nodes.
Hierarchical Text Classification
Hierarchical classification was first used for text classification by [12]. They used a local classifier per parent node for training, at each node selecting a subset of features relevant for that step in the classification process. A similar hierarchical structure with an SVM at every node was used by Kang et al. [10] for speech-act classification. Ono et al. [21] used a form of local classifier per level, where they tried the lowest level (leaf nodes) first. If the uncertainty was too high, they moved up in the hierarchical level.
METHODOLOGY
Hierarchical classification can be considered as a classification that takes the hierarchical structure of the taxonomy of classes into account, as opposed to a flat classifier, which only takes the final classes into account. By imposing the hierarchical structure, the model does not need to learn the separation between a large number of classes. It can now focus on classifying categories, or subclasses within a category. The taxonomy can be formalised as a tree or a DAG. We consider here the case where the taxonomy is a tree. A taxonomy represented by a Tree is easier to construct, as each child node only has one parent.
A local hierarchical neural network would be infeasible. The network has many parameters and having to learn a new neural network from scratch at every parent node would result in too many parameters, and considerably longer training and inference times. Therefore, we consider making a global classifier by using a hierarchical softmax. Hierarchical softmax easily extends the neural networks by replacing the regular softmax. In this section we discuss the general case of the global hierarchical classifier and the specific case for the hierarchical softmax.
Global Hierarchy
Global classifiers take advantage of the whole hierarchical structure of the classes at once [25]. Each node in this hierarchical structure is associated with the probability of the path from the root to that node. We illustrate this in Figure 1. If the node n_d is at depth d with parents n_1, . . . , n_{d-1}, the probability of arriving at this node is the product of the conditional probabilities along the path,

P(n_d) = ∏_{j=1}^{d} P(n_j | n_{j-1}),   (1)

where n_0 denotes the root.
Hierarchical Softmax
In order to calculate the conditional probabilities, the hierarchical softmax uses a softmax at every parent node. The softmax used to calculate the conditional probability of belonging to child node c, conditional on being in the parent node n, becomes

P(c | n) = exp(w_{n,c}^T h) / Σ_{k=1}^{C_n} exp(w_{n,k}^T h),   (2)

where w_{n,c} is the weight vector corresponding to parent node n and child node c. The weight vectors include the bias terms; h is therefore the last hidden state concatenated with a one, and provides the same input for each parent node, independent of the depth. The number of weight vectors C_n equals the number of child classes of parent node n.

Compared to a flat classifier with a regular softmax (which corresponds to a single parent node, the root), the total number of weights increases by (P - 1)(h_dim + 1), where P is the number of parent nodes, h_dim is the dimension of the hidden state, and one is added to account for all the additional bias terms. Although the total number of weights increases, it is considerably smaller than if we were to train a new neural network at every parent node, as is done in the local classifier per parent node.

Each weight vector now has a new purpose. In a flat classifier with the regular softmax, exp(w_c^T h) contributes the evidence for class c relative to all leaf nodes, Σ_{k=1}^{K} exp(w_k^T h), where K equals the number of classes, or leaf nodes. In the hierarchical softmax, the importance of node c, exp(w_{n,c}^T h), is instead compared within a subset of nodes, Σ_{k=1}^{C_n} exp(w_{n,k}^T h). This gives the hierarchical softmax the potential advantage of only having to discriminate within smaller subsets. In other words, the additional (P - 1)(h_dim + 1) parameters allow the weight vectors to specialise in discriminating within their respective subgroups.
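A minimal PyTorch sketch of a two-level version of this layer is given below. The hierarchy (3 categories with 4, 2 and 5 classes), the hidden dimension and the batch are placeholder values, and the code is an illustration of the idea rather than the authors' implementation. Training with NLLLoss on the concatenated leaf log-probabilities is equivalent to summing the cross-entropies of the parent choice and of the child choice along the correct path.

```python
# Sketch of a two-level hierarchical softmax head (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TwoLevelHierarchicalSoftmax(nn.Module):
    def __init__(self, hidden_dim, children_per_parent):
        super().__init__()
        self.parent_layer = nn.Linear(hidden_dim, len(children_per_parent))  # root -> categories
        self.child_layers = nn.ModuleList(
            [nn.Linear(hidden_dim, c) for c in children_per_parent]          # category -> classes
        )

    def forward(self, h):
        # Log-probability of every leaf: log P(parent) + log P(child | parent).
        log_parent = torch.log_softmax(self.parent_layer(h), dim=-1)
        leaf_logps = []
        for p, layer in enumerate(self.child_layers):
            log_child = torch.log_softmax(layer(h), dim=-1)
            leaf_logps.append(log_parent[:, p:p + 1] + log_child)
        return torch.cat(leaf_logps, dim=-1)   # (batch, total number of leaves)

# Example: 3 categories with 4, 2 and 5 classes, hidden state of size 150 (assumed).
head = TwoLevelHierarchicalSoftmax(150, [4, 2, 5])
h = torch.randn(8, 150)
loss = nn.NLLLoss()(head(h), torch.randint(0, 11, (8,)))
loss.backward()
```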
3.2.1 Training of the Hierarchical Softmax. In order to understand the training of a network with a hierarchical softmax component we need to calculate the gradients of the loss function with respect to the parameters, and ℎ. This will also show that the hierarchical softmax is truly a global classifier, as the whole network is updated based on the performance of all relevant parent nodes.
The loss function we use is the cross-entropy function. For observation i the loss is calculated as a function of the estimated class probabilities p_i(c):

L_i = - Σ_c y_{i,c} log p_i(c).   (6)

The indicator y_{i,c} is 1 if observation i belongs to class c, so the only element of the sum that remains is the negative log probability of the correct class. We then substitute (2) in the loss (6), which gives

L_i = - Σ_{n ∈ P_i} log P(c_n | n),   (7)

where P_i is the set of all parent nodes that lead to the correct class and c_n is the correct child of parent n. In (7) the log can be rearranged, making it easier to calculate the derivatives we are looking for:

∂L_i / ∂w_{n,k} = 1_{n ∈ P_i} ( P(k | n) - δ_{k,c_n} ) h,   (8)

∂L_i / ∂h = Σ_{n ∈ P_i} Σ_{k=1}^{C_n} ( P(k | n) - δ_{k,c_n} ) w_{n,k},   (9)

where the indicator function 1_{n ∈ P_i} equals one if n is in P_i and zero otherwise. Likewise, the Kronecker delta δ_{k,c_n} is defined as 1 if k = c_n, and 0 otherwise. The derivations of (8) and (9) are given in Appendix A.

Figure 1: Hierarchical structure of a two-level global classifier.
These gradients can be used in the Stochastic Gradient Descent algorithm. More importantly, (9) shows that the update of the hidden state (and therefore the rest of the network) is a combination of the performances across all child nodes that belong to the parent nodes that make up the path to the correct class. This shows that a neural network with a hierarchical softmax is truly a global hierarchical classifier.
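This locality of the gradients can be checked numerically with the sketch above: for a single example, only the shared parent layer and the child layer of the target's own parent should receive a non-zero gradient. The snippet below assumes the TwoLevelHierarchicalSoftmax head defined earlier, which is itself only an illustrative stand-in.

```python
# Check which child layers receive gradient for one example (continues the sketch above).
head.zero_grad()
h = torch.randn(1, 150, requires_grad=True)
target = torch.tensor([5])                     # leaf 5 belongs to the second parent here
nn.NLLLoss()(head(h), target).backward()
for p, layer in enumerate(head.child_layers):
    g = layer.weight.grad
    norm = 0.0 if g is None else g.norm().item()
    print(f"parent {p}: child-layer grad norm = {norm:.4f}")
```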
DATASETS
We consider four text classification datasets, in which we can find a hierarchical structure in the classes. In Appendix B we present the exact taxonomy used.
TREC
First, we consider the TREC 10 Question Answering Track Corpus [15], abbreviated as TREC. This dataset consists of 5952 questions (5452 train, 500 test), each belonging to one of the 50 classes. These classes are split up between 6 categories. Figure 2 shows that the distribution between categories is highly unbalanced. The TREC training set is highly unbalanced as well. The number of training observations per class ranges from 4 to 962.
20NewsGroups
The second dataset is the 20NewsGroups dataset [14]. We find 6 categories, on top of the 20 classes. In Figure 3 the distribution of the classes between the categories can be seen. This distribution is relatively balanced. This dataset contains 11293 training observations and 7527 test observations. The training set is relatively balanced. Most classes have between 500 and 600 observations. There is one outlying class with only 377 training observations.
Reuters-21578
As third and fourth dataset we use two configurations of the Reuters-21578 dataset [8]. These are Reuters-8 and Reuters-52.
Reuters-52.
The Reuters-52 dataset contains the 52 most frequent classes, also distributed between 4 categories. Figure 5 shows the distribution of classes among the categories is highly unbalanced. The Reuters-52 dataset contains 6532 training and 2568 test observations. This dataset is by definition more unbalanced than Reuters-8. The minimum number is only a single training observation.
EXPERIMENTS
In our experiments we employ an LSTM [9] with hierarchical softmax and compare the results with an LSTM with regular softmax.
LSTMs are a popular and well performing architecture for text classification, because of their ability to process sequential data [3]. Since the meaning of a word might also depend on the words that follow, we also consider the Bidirectional LSTM (BiLSTM) [6].
Hyperparameters
In order to determine the best hyperparameters we use k-fold cross-validation on the training set, with k = 4. The macro-F1 measure is used as the validation criterion. Besides the bidirectional component, different dimensions of the hidden state (h) are tried. The 300-dimensional GloVe [22] word embeddings, pretrained on the Wikipedia 2014 and Gigaword 5 (6B tokens) corpus, are used. Furthermore, we use early stopping (with a stopping criterion based on cross-validation) and dropout (50%) [28] to prevent overfitting. We train using the Adam optimiser [11] with a learning rate of 0.001 and a batch size of 10.
Evaluation metrics
We evaluate the performances based on four metrics: F1, precision, recall, and accuracy. Our main criterion is the F1 measure. The F1 is the harmonic mean of precision and recall; we value both and do not want a linear trade-off between them. Since we are dealing with unbalanced multi-class classification, the macro-F1 is used. We report macro-precision and macro-recall to give some insight into whether precision and recall differ and which might be higher or lower. Note that the macro-F1 is in general not the harmonic mean of the macro-precision and macro-recall; rather, the macro-F1 is the average of the harmonic means of precision and recall of the individual classes. The micro-accuracy is a popular measure for these datasets [7,17,26,27]. We include it so that the performance of our models can be compared with other papers.
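These macro-averaged scores can be computed as sketched below; the label arrays are placeholders, and scikit-learn is assumed only for illustration.

```python
# Macro-averaged evaluation metrics (illustrative; y_true and y_pred are placeholders).
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score

y_true = [0, 3, 2, 2, 1, 0, 3]
y_pred = [0, 3, 1, 2, 1, 0, 2]
print("macro-F1 :", f1_score(y_true, y_pred, average="macro"))
print("macro-P  :", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("macro-R  :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("accuracy :", accuracy_score(y_true, y_pred))
```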
RESULTS
In all datasets, the hierarchical softmax outperforms the regular softmax in terms of our main criteria, macro-F1. The results are tabulated in Tables 1-4. In this section we discuss the results in more detail.
We find that in all datasets the highest macro-F1 validation scores are obtained with a bidirectional LSTM. The optimal dimension of the hidden state for the regular and hierarchical softmax is the same in all datasets: in most, h = 150, while in R-52 the optimum is h = 100. In the TREC dataset the hierarchical softmax performs better than the regular softmax in terms of all performance measures. The precision is slightly better than the recall for both the hierarchical and the flat model. The accuracy is higher than the macro measures, due to the unbalanced classes. Furthermore, the accuracy shows that the hierarchical softmax did not perform as well as the state-of-the-art (SOTA) [17]. The hierarchical model also outperforms the flat model in the second dataset, 20NewsGroups. Here all measures are relatively close, indicating a good trade-off between false positives and false negatives. Since the classes are relatively more balanced, the micro-accuracy is, for both models, closer to the macro measures. The hierarchical model was closer to the state-of-the-art [26] than the flat model; however, it did not perform as well. In the Reuters-8 dataset the hierarchical softmax outperforms the regular softmax in all performance measures. All measures are relatively close, indicating a good trade-off between false positives and false negatives. Despite the large class imbalance, the micro-accuracy of the hierarchical model is relatively close to the macro measures. Furthermore, the hierarchical model comes very close to the state-of-the-art [27]. In terms of our main performance criterion, macro-F1, the hierarchical softmax outperforms the regular softmax in the Reuters-52 dataset. While recall seems to be the bottleneck, and the hierarchical softmax performs better on macro-recall, macro-precision is higher for the regular softmax. The larger difference between recall and precision indicates a worse trade-off in the flat model. With a difference of 0.2 percentage points, the micro-accuracy of the regular softmax is not much higher than that of the hierarchical softmax. The flat and hierarchical models have a larger gap between the micro-average and macro measures, due to the higher class imbalance. Finally, we note that both models come close to the state-of-the-art [7].
CONCLUSION
We conclude that the hierarchical softmax makes a good candidate for making a neural network a global hierarchical classifier. We show that it can improve performances of a recurrent network on four different text classification datasets, in terms of macro-F1 and macro-Recall. The performances on the different datasets show that the hierarchical softmax can handle different types of class taxonomies, balanced and unbalanced, in terms of both training observations per class, as well as child nodes per parent node.
With regard to the state-of-the-art, it is not our goal to improve the SOTA, instead we show that changing a regular softmax with a hierarchical softmax in a dataset with a natural hierarchy in the classes leads to an improvement. Although we did not improve the state-of-the-art, we do come close with the hierarchical softmax on a parsimonious model. We also note that the SOTA models are a different model for each dataset, while we consistently perform well on all datasets with the same model. Future work can study if the state-of-the-art might be improved if the hierarchical softmax is used in the respective model. Furthermore, we consider a two-level hierarchical taxonomy, by introducing one level of parent nodes in between the root and the leaves. In future work, the taxonomy could be extended with an additional hierarchical layer, i.e. by grouping parent nodes.
The hierarchy is currently determined based on the hierarchy in the class taxonomy. Alternatively the construction and evaluation of different hierarchical structures could be automated.
The performance of the hierarchical softmax depends on the probability estimates of the conditional probabilities of moving from a parent node to a child node. Better estimates might be obtained by using Bayesian neural networks, as their probability estimates are significantly better [1,16,24].
The hierarchical softmax is not only applicable for text classification. In theory it can replace a softmax in any classification task. It would also be interesting to see how this approach fares in other classification tasks, for example in image classification.
A DERIVATION OF THE DERIVATIVES
In this appendix we derive the derivatives (8) and (9). We start with the derivative with respect to the weights, followed by the derivative with respect to the hidden state.
In both derivations we use the derivative of the softmax estimate of the probability of child node c, P(c | n), with respect to the inner product of a weight vector and the hidden state, w_{n,k}^T h:

∂P(c | n) / ∂(w_{n,k}^T h) = P(c | n) ( δ_{c,k} - P(k | n) ).
A.1 Derivative with respect to the weights
We calculate the derivative of the cross-entropy loss of observation i with respect to a given weight vector w_{n,k}, to show that the weight updates are relatively straightforward. The derivative can be split up using the chain rule. In the first part we substitute (7), rearrange the summation of derivatives, and apply the chain rule; the derivative of the natural logarithm is trivial. In the derivative of the probability of the correct child node of a given parent node on the path to the correct class, with respect to the inner product w^T h, we have to consider two things: first, whether the parent node to which the weight vector belongs is the same as the parent node under consideration; and secondly, as in a regular softmax, whether the weight vector corresponds to the correct child node, i.e. whether k = c_n. The results of (15)-(17) are substituted in (14). The probabilities cancel out, and the -1 is brought inside the sum to rearrange -P(k | n). In the sum over n ∈ P_i we pass over all parent nodes that make up the path to the correct class. Since we only consider one parent node at a time, we check whether it is in P_i; the indicator function 1_{n ∈ P_i} is one if n is in P_i and zero otherwise.
The second part of (11) is trivial. Combining the two results of (21) and (22) gives the expression in (8). This result means we only update weights that belong to the parent nodes that make up the path from the root to the correct class. As in a regular softmax, the updates depend on whether we are updating the weight vector corresponding to the correct child node or an incorrect one.
A.2 Derivative with respect to the hidden state
The derivation of the cross-entropy loss of observation i with respect to the hidden state is meant to show that the network is updated using knowledge of the performance across the hierarchy of classes.
The first part of (26) is given in (15); the second part can be calculated as follows. We have to consider all child nodes of parent n, and therefore sum over them. Substituting the trivial derivatives of (28)-(29) into (27), and then combining (26), (15), and (31), yields the expression in (9). The update of the hidden state (and therefore the rest of the network) is a combination of the performances across all child nodes that belong to the parent nodes that make up the path to the correct class. This shows that a neural network with a hierarchical softmax is truly a global hierarchical classifier.
B HIERARCHICAL STRUCTURES
This Appendix discloses the hierarchical structures used as class taxonomies. Figures 2-5 cover TREC, 20NewsGroups, Reuters-8, and Reuters-52, respectively. The name of the dataset represents the root node, which is connected to the categories we consider. Below the categories are the classes that belong to the respective category.
| 2023-08-03T06:42:34.449Z | 2023-08-02T00:00:00.000 | {
"year": 2023,
"sha1": "0549a4fc96311c886ada9de0a567d659966472b6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0549a4fc96311c886ada9de0a567d659966472b6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
54746027 | pes2o/s2orc | v3-fos-license | Teachers in Film: Inspiration for Autonomous and Transformative Teaching or a Warning against It?
Can teachers promote a change? Are teachers free in their work? In the first part of this article I present a theoretical answer to these two questions and show the relation between them, based on the theories of two thinkers. Paulo Freire and Joseph Schwab teach us that teachers are autonomous and that they can lead processes of change in the school system as well as in society. Films about teachers can inspire teachers to dare to use the freedom they have in class and to be promoters of change. On the other hand, the same movies may also instill in teachers doubts regarding their own competence to create change and fears of the outcomes of such a change. In the second part of the article, I raise awareness of the various overt and elusive messages present in movies about teachers, I discuss several examples, and I suggest a few warnings and recommendations regarding how teacher education programs might approach and utilize films about teachers so as to support the teachers' self-image as autonomous and capable of social transformation.
Introduction
Can teachers promote a change -in their students, in the education system, and in society at large? Are teachers autonomous in their work? Indeed, these questions are interconnected. In the first part of this article, I present a theoretical answer to these two questions and show the relation between them, based on the theories of two thinkers: Paulo Freire (1921-1997) and Joseph Schwab (1909-1988).
Following the theoretical discussion, I address more practical questions: do teachers perceive of themselves -and does society perceive of them -as being capable of making a change? Are teachers thought of as free and autonomous? Are they encouraged to be so? I address these questions through the prism of movies about teachers, which provide a useful reflection on the image of teachers and teaching. In the second part of the article, I raise awareness of the various overt and elusive messages present in movies about teachers, I discuss several examples, and I suggest a few warnings and recommendations regarding how teacher education programs might approach and utilize films about teachers so as to support the teachers' self-image as autonomous and capable of social transformation.
Part I: Autonomous and Transformative Teaching
In this part I very briefly present Schwab's theory of autonomous teaching and Freire's theory of dialogical pedagogy. I then proceed to demonstrate the interrelation between the issues of autonomy in teachers' work and teachers' capability to promote change.
Autonomous Teaching
Do teachers have the freedom to decide about essential aspects of their work? Schwab's answer is yes, teachers are free; for him, this is both a descriptive statement and a prescriptive one.
First, Schwab [11] defines the work of teachers as a 'practical art.' Teaching is an art because it is an activity that does not fall under fixed rules, is constantly dynamic, and demands creativity; more specifically, it is a practical art because teaching situations are always concrete ones in which teachers must act and react to the complexity of circumstances.1 In order to further elaborate on the characteristics of a practical rather than a theoretical domain, Schwab [11] discusses the essential differences between theory and practice (pp. 107-110). A theory is narrow by definition; it looks at one aspect of the subject, observing it only from one point of view. On the other hand, a theory is abstract and general, because it ignores all the details of the concrete circumstances. In contrast, a practical situation is always complex, rich with specific details; moreover, it involves multiple aspects that may be seen from various vantage points. Therefore, different theories can support different aspects of a practical decision, but no single theory can ever cover the totality of the situation or offer a complete solution to a practical problem. Accordingly, Schwab [11] claims that good teachers must master a set of practical arts that include the ability to apply a general theory to a concrete situation and the "art of the eclectic," which enables one to intertwine various theories while acknowledging the incompleteness of each and every one of them (pp. 111-116). These ideas are presented in the first article, named "the practical"; they are further elaborated in three subsequent articles that were ultimately compiled into "the project of the practical" (1969 [11], 1971 [12], 1973 [13], 1983 [14]).

1 Schwab adopts here Dewey's term of a 'problematic situation' as the basic unit of human thinking [3].
The arts are based on a certain conception that recognizes knowledge as a human creation and not as a set of facts given to us from an objective source. Schwab's epistemology may be identified with the philosophical concession on the possibility of attaining an absolute truth, as well as with the acceptance of the subjective and interpretive nature of human knowledge ([12] pp. 495-498). Schwab shows this conception of knowledge to be connected to a key characteristic of the schoolteacher's mission: the ability to perceive every educational situation from different points of view simultaneously. This complexity in the teacher's decision-making process stresses the impossibility of controlling these decisions from the outside, by any set of rules or commands that exist prior to the concrete educational situation. On these grounds, Schwab shows that teachers are necessarily and unavoidably autonomous in their work: "Teachers will not and cannot simply be told what to do. Teachers are not assembly line operators and will not so behave" (Schwab [14] p. 245). Schwab clarifies that teachers' freedom does not necessarily entail a rebellious attitude or a nonconformist approach. Their freedom is simply forced on them by the constant need to interpret their general instructions and adapt them to the endlessly complex concrete situation in class. This is Schwab's descriptive statement.
The prescriptive statement that follows is that a teacher's freedom must be respected. Schwab calls for greater involvement of teachers in processes of pedagogical planning and decision making, such as the curricular committees that he suggests creating in every school (Schwab [14] pp. 244-250). Notably, he claims that curricular planning must always take into consideration four 'commonplaces' of education: the pupil, teacher, learning material, and social milieu ([13] pp. 502-503). Furthermore, Schwab stresses the importance of balancing these four aspects and encourages teamwork and proper communication between the representatives of the four aspects. Above all, he emphasizes the priority of teachers as curricular planners, explaining this priority as follows: There are two major reasons for this emphasis. First, the children of the school as learners: their behavior and misbehavior… what raises hopes, fears and despair in respect to learning… what they disdain, what they see as relevant to their present or future lives… are better known to no one but the teacher. It is he who tries to teach them. It is she who lives with them for the better part of the day and the better part of the year ([14] p. 245).
As such, teachers' involvement in decision making is appropriate because they know the pupils, who are placed at the center of pedagogical thinking. 2 Schwab then adds another justification for teachers' necessary autonomy, arguing that teachers' involvement in curricular planning…: …creates the only language in which knowledge adequate to an art can arise. Without such knowledge, teachers not only feel decisions as impositions, they find that intelligence cannot traverse the gap between the generality of merely expounded instructions and the particularities of teaching moments ( [14] p. 246).
Thus, teachers' involvement in decision making intensifies their ability, willingness, and motivation in applying the decisions. Enabling teachers to actively participate in designing the curriculum they teach yields better harmony between the general guidelines of curriculum and the multiple choices they make in their work. Respecting teachers' autonomy in formal and systematical ways suits the autonomy that they necessarily apply in class anyway.
Thomas Robby [9], who had studied and worked with Schwab, offers a retrospective view on Schwab's work in an article that examines Schwab's influence forty years after the publication of "Practical 1" [11]. Robby discusses the educational domains that were changed, and those that, in his opinion, were not sufficiently influenced. On the one hand, he suggests that "Schwab's critique legitimized many practical orientations not before possible. The field is definitely more pluralistic" ([9] p. 87). On the other hand, he admits: "There has been no systematic exploitation of Schwab's total vision for education, and now he is fading as the most over quoted but underused footnote in the research literature" ([9] p. 88). To conclude, he describes Schwab's attitude as an inspiration for educators to adopt a critical stance, not just as a theoretical view, but as a source of motivation for creating change: "Schwab rejects the "tragic view" of educational failures, regarding them as opportunities to do better. He believes in reform, while never ceasing to criticize when and where it falls short. Practical 1 is not a screed, but an invitation to dialogue, collaborate, and improve. This optimistic view is central to all his work and is how his thinking can help us embrace the ongoing life of education." ([9] p. 88).
Transformative Teaching
Paulo Freire encourages teachers to empower their students through dialogic pedagogy, thereby creating political change. First, he argues that education is always political, and can never be politically or morally neutral ( [6] p. 12). Evidently, curricular contents are chosen with political considerations and express the values and perspectives of the decision makers. Furthermore, Freire explains that even the methods of teaching and the behavior of teachers and pupils in class express and enforce certain political choices: a teacher who lectures and expects her pupils to quietly write down her exact words is promoting values of obedience and total acceptance of authoritative power without doubt or criticism. In contrast, the dialogic teacher that Freire prefers holds class discussions and invites the pupils to actively participate in dialogic learning.
Moreover, dialogic pedagogy includes critical reading of texts and critical analysis of the social and cultural conditions of the pupils' lives. The dialogic teacher may even encourage the pupils to participate in designing the course's curriculum and in choosing the subjects to be studied. This pedagogy is political, as it encourages pupils to be active participants and critical social and political transformers ([6] pp. 10-14). A central assumption underlying dialogic pedagogy is that learning is not and should not be merely the "passing" of knowledge, created elsewhere, from the teacher to the pupils. As opposed to the 'banking method' based on 'depositing' information in the pupils' minds (which is what Freire suggests happens in traditional schools), Freire stresses that the process of learning is connected to the process of research whereby knowledge evolves. Thus, he considers the class to be where knowledge is created and recreated, in a permanently dynamic process.
We may recognize here again the epistemological approach that denies the unity of knowledge as a static body of facts, containing the single truth about reality, but rather identifies knowledge as the dynamic flow of human interpretations. The teacher is thus defined as a researcher -or even, in Freire's definition, an artist -given the creativity teaching requires and the aesthetic and dramatic aspects of teaching ([6] pp. 115-116). Notably, the objection to authoritative truth leads to educational and political objections to authoritative pedagogy and authoritative politics.
Furthermore, Freire discusses 'situated pedagogy,' whereby pupils bring issues from their lives into the classroom, which are combined into the contents being studied ([6] pp. 103-114), enabling space for expression of pupils' attitudes and approaches. Indeed, Freire stresses the importance of acquiring the ability and habit of critically assessing pupils' present assumptions and views, rather than treating them necessarily as truths. Freire [7] uses the term 'conscientization' to describe the educational work through which hidden interests and agendas are exposed and people are led to critical consciousness of their social reality.
Notably, Freire objects to the popular understanding of his pedagogy as "open education" that gives up on academic efforts or social and moral commitment. On the contrary, he stresses the rigorous demand of dialogic pedagogy, which requires profound, active involvement in the issues inquired into or the texts read in class. Thus, dialogic pedagogy is transformative because it empowers pupils, providing them the motivation, courage and tools to transform their societies.
Interestingly even today, nearly fifty years after Freire's "classic" work about critical pedagogy [5], Freire is still read as a source of inspiration for educators who are committed to social change. The following are just a few examples of issues and notions that apply Freire's pedagogy as foundations for transformative education in various domains: eco-pedagogy [1], adult education [2], transforming communities through sports education [16], and empowering students through theater and through art [8].
Autonomous Teaching as Transformative
This brief presentation of Schwab and Freire's basic theories reveals the interesting relations between their messages. Primarily, we can recognize causal relations between the teacher's autonomy, which Schwab stresses, and the teacher's ability to promote change, which Freire emphasizes. The ability to perform dialogic pedagogy, as Freire presents it, depends on the teacher's freedom to make choices and not to be limited by a fixed curriculum. This is a very simple relation revealed by common sense: for example, a teacher who adopts Freire's 'situational pedagogy' and chooses to base her course curriculum on issues from her pupils' lives cannot, by definition, receive this curriculum in advance from the education system authorities.
Moreover, these two thinkers share parallel lines of thought and basic principles. First, both scholars accept similar epistemological assumptions regarding knowledge as dynamic, subjective, and interpretive. 3 Second, on the basis of these assumptions, both scholars characterize teaching as a creative and innovative activity. With this theory of knowledge and these conceptions of teaching in mind, we can justify teachers' autonomy as a necessary condition immanent to their role and their missions as promoters of change.
In other words, I suggest that autonomous and transformative teaching are both dependent on the idea that knowledge is dynamically produced in class and that teaching is a creative undertaking. I will explain this conclusion by negation: if the teacher's role is merely to transmit to passive pupils ready-made knowledge that was created elsewhere, then it is neither a role that requires autonomy nor one that permits social change. The traditional teacher passively follows a curriculum that was prepared for her by disciplinary experts, while nourishing in her students habits and behaviors that will secure the present social order and prevent change. If, on the contrary, teaching is an active and creative process in which teachers and pupils create knowledge jointly, then teachers must be free to improvise, and pupils are encouraged to change.

3 Nevertheless, neither of these scholars is a postmodernist or relativist concerning the ability and the importance of social values and educational goals. Rather, their writings reflect mutual commitment to humanist fundaments such as respect for individual autonomy and social responsibility.
Villacañas [17] stresses an aspect of Freire's work that clarifies another way in which Freire and Schwab are interrelated. He argues against the tendency of education scholars to emphasize the ethical component of Freire's project. Rather, he explains that educational goals like dialogue, equality, freedom, and tolerance should not be treated as abstract ethical values that are universally valid regardless of the social, political and economic circumstances in which people are situated. Instead, Villacañas suggests that according to Freire, these goals should be central to teachers' work on account of their educational efficiency vis-a-vis the specific pedagogical problems posed by circumstances. This interpretation of Freire's concept of teaching stresses the concrete, local, and practical aspects of teaching, much aligned with the characterization Schwab assigns to the profession.
Yet I note one additional way to conceive of the autonomy and transformative role of teachers as interrelated. Namely, only teachers who act as free agents in the schools and classes in which they teach can become role models for the pupils and can offer them a positive personal example of the courage and commitment that enables one to change the world.
Part II: Teachers in Film
Many movies present teachers as autonomous pioneers of transformation -indeed, almost magicians. The character of the teacher-heroine makes brave decisions and inspires change in her pupils, society, the education system, and the world at large. Watching films about teachers may inspire and encourage education students or beginner teachers; such movies enable emotional identification, expose difficulties, enhance reflection, and engage fruitful discussion. The films may serve as an excellent tool to enhance in teachers the conceptions of autonomous and transformative teaching as Freire and Schwab portray it.
Nonetheless, films about teachers also reflect and enforce society's dual attitude towards the school-teaching profession. On the one hand, teaching is presented as a social mission, a valuable choice worthy of moral appreciation. On the other hand, even the most optimistic films expose teachers' low status and the disrespect they receive from pupils, parents, the educational system itself, and the academic world. Moreover, many of the movies send ambivalent messages concerning teachers' image as independent promoters of change. Indeed, some films express conservative attitudes -both explicitly articulated by some of the characters, and as a hidden agenda delivered through motives in the plot and other means of cinematic expression. Therefore, an uncritical viewing might yield assimilation of negative attitudes regarding the teaching profession, which may substantially obstruct the process by which beginner teachers form their professional identity.
In this part, I exemplify the complex and ambivalent messages related in films about teachers by discussing a few select examples of motives that I find to be present in many movies. I classify the examples into three 'genres': 'pedagogic tragedies,' 'pedagogic tales and legends,' and 'pedagogical action movies.'
Pedagogic Tragedies
Many films present the teacher as a tragic hero: the person who turns to teaching not of her own free will but rather from lack of choice and pays a severe price for this decision. In some films (e.g., To Sir with Love [1]), the teacher did not desire this profession but was forced to teach after failing in other fields. Of course, this idea exists not only in films; it is reflected in the popular saying: "those who know -do, and those who don't know -teach." When films portray teachers as 'losers' who do not identify with their work, this message may be reinforced.
Moreover, such movies may yield additional possible influences. Watching the unmotivated teacher in the film might give the teachers in the audience an experience of catharsis -an opportunity to unload the frustration and ambivalent feelings they have about their work. Alternately, such films may provide an opportunity to consider the teacher's lack of motivation as a problem that challenges the viewer and calls upon her to react, to differentiate herself from the on-screen anti-hero, and to create a solution.
Nevertheless, the teacher's life is shown to be hard and miserable. Teachers in films are usually lonely people: single, separated (e.g., Educating Rita [19], Dead Poets Society [20]), or widowed (e.g., Good Will Hunting [22], La Lengua de las Mariposas [23]), and if they are not so at the beginning of the film, they become so at some point, when their devotion to the intensive work they do drives their partner away (e.g., Freedom Writers [27]). The teacher is symbolically presented as a monk or a nun, consistent with the historical tradition of schools in Europe that were run by monasteries. Moreover, alongside teachers' emotional and sexual seclusion comes economic humility. It is a well-known fact that teachers in many countries do not earn much; but some films accentuate this situation to the absurd, whereby teachers take on two other jobs in order to finance the 'hobby' of teaching (e.g., Freedom Writers [27]). Absurd or not, the phenomenon of teachers working another job so as to 'make ends meet' appears not only in fiction movies but also in recent American documentaries that expose teachers' hard work (American Teacher [29], Teach [30]).
As is generally the case in the tragedy genre, eventually someone dies. Many pedagogic tragedies present a teacher who attempts to perform with the students a dangerous process of pedagogical, moral, or political change, venturing against social norms and values. Some of these attempts end tragically when the teacher, a student, or both lose their lives as a result of this pedagogic attempt.
The well-known film Dead Poets Society [20] takes place at a prestigious boarding school in the United States in the 1950s. Mr. Keating, the literature teacher, encourages his students to rebel against the competitive and conservative institute to which they belong, to focus on the pleasures and experiences of the 'here and now,' and to fulfill their artistic and creative tendencies. In his literature classes, he invites them to rip out academic parts of their books, to bring poetry materials from their personal lives, and to stand on the tables in order to acquire new points of view on situations.
Mr. Keating reflects a perfect example of a teacher fulfilling Schwab's ideal of autonomy: changing ways of interpreting and applying the curriculum, recognizing personal needs and tendencies of the pupils, and reacting to them. He also reflects Freire's dialogic teacher who exercises situated pedagogy, invites the students' lives into the classroom, and enhances critical understanding of their circumstances.
Indeed, Mr. Keating creates a change in a few students. One of them chooses to participate in a theater group against his father's orders. The father reacts firmly and threatens to send the boy to a military boarding school, leading the boy to collapse under contradictory forces and commit suicide. In the suicide scene, the boy wears a wreath of thorns and takes the image of Jesus Christ, sacrificing himself on the altar of pedagogic reform.
After the boy's death, the question of responsibility for his death arises explicitly: the school administration blames the teacher, while the pupils are torn between the school's demand to blame him and the emotional connection and commitment they have to their hero. Indeed, the film's message is controversial. When watching it with education students and teachers, I encounter various responses. Some claim that the film presents the teacher as a hero offering reform within a conservative and repressive society that is not ready to accept it. This interpretation attributes responsibility for the boy's death to the father and the school, and not to the teacher, and conceives the boy's death as a sad but necessary price of the social change. This is a reasonable interpretation, but I believe the film to be a reactionary work that sends a severe warning to teachers: beware of making changes and of encouraging pupils to contradict norms because they may pay a high personal price for your 'experiments.' A recent film looks at a teacher who puts his students at risk from a different angle. Whiplash [31] shows a violent drum teacher who abuses his students and hurts them both physically and emotionally. This teacher believes that excellence comes out of suffering, and pushes his students to the edge, knowing that they might break, but promising them real artistic success if they don't. The issue of suicide is hinted at here too, as we hear of the teacher's former student who couldn't deal with his pressure and had killed himself. Unlike him, the present student almost breaks, but eventually finds inner forces to overcome the teacher's challenging attitude. As viewers, we are left wondering whether the personal and emotional price the student pays is worth it, and whether this sort of teaching is legitimate at all.
The Spanish film La Lengua de las Mariposas [23] takes place right before the Spanish Civil War. Don Gregorio, an elderly teacher, tries to educate his pupils with atheistic and humanistic values and to nourish in them love for nature and mankind. He exercises an unauthoritative pedagogy that focuses on humanistic subjects such as literature and fine arts. Don Gregorio's values and approaches are very different from the violent nationalistic and fascist approaches that evolve in the social and political milieu around his pupils. Moncho, a sensitive and bright child, is inspired by the teacher to explore nature and becomes familiar with birds and butterflies. In a symbolic scene, the teacher gives him an apple, and thus tempts him to doubt religion and social and political norms. Freire would probably approve of Don Gregorio's attempts to empower his students and encourage them to adopt social and cultural criticism.
Towards the end of the film, the political extremism intensifies. Conservative nationalist forces gain power and, in the final scene, remove the teacher to be executed. Moncho, the beloved pupil, joins the children of the villages in throwing stones at the truck that transports Don Gregorio and the other political prisoners while cursing the prisoners, for being "communists," "anarchists," and "traitors." Here too, the final scene is open to interpretation, because Moncho mixes into his shouting of political words the names of birds and butterflies that he had learnt from the teacher. Does this mean that he has internalized the teacher's values after all? Or perhaps he is merely confused by all these Latin words and is an innocent victim of adult conflicts that are forced upon him? Different viewers react differently to this scene, but one message people commonly find in it is that teachers cannot educate children towards ideological directions that differ from those of their societies' dominant forces; whoever tries to do so risks his life.
Indeed, other pedagogical tragedies addressing teachers' attempts to promote social change also end with the death of pupils, teachers, or both (e.g., La Journée de la jupe [28]). All these movies may deliver to beginning teachers a threatening message about their limited ability to promote moral and political changes and about the price that might be paid for seeking change. After watching movies of this genre with beginner teachers, it is important to hold an open discussion. These films may be helpful in analyzing the complexity of educational processes, but facilitators should raise the viewers' awareness regarding the elusive but present conservative approaches in these movies, in order to prevent a subconscious absorption of the fear of change. We must make sure that the movies do not discourage teachers from trying to educate towards values they believe in despite contradictory social norms -or perhaps even because of them.
Pedagogic Tales and Legends
In contrast, many pedagogical movies have a happy ending. A teacher may, for example, help a pupil to discover a talent and to develop it against all odds. In the film Billy Elliot [24], Billy, who grows up in a poor family in a British mining town, crosses cultural and social barriers to become a dancer -ultimately even the star of a London ballet troupe. In the ending scene, Billy performs Swan Lake. His swan costume elicits associations of The Ugly Duckling and implies that such a cultural transformation takes place only in fairy tales. The film not only questions whether such personal transformation is possible, but also highlights the barriers to social mobility and the impermeability of social and economic divides, casting doubt on whether a talented individual can overcome all these gaps to attain personal success.
Freire might make use of this film to discuss the teacher's mission in leading students in the journey out of their weakened status, but he may also critique the movie as achieving the opposite result: while pleasing the viewers with the personal success of one imaginary child in the movie, we are placated from raising the real questions about the social powers. According to this interpretation, the teacher may be seen as another tool used by conservative and capitalist powers to instill an illusion of change, while avoiding any real change that may be attainable only through a political revolution.
The French movie Les Choristes [25] offers another example of the miraculous change that can happen through art and music. Mr. Clement Mathieu is a supervisor at a boarding school for children and adolescents with disciplinary problems including disobedience and violence. Mathieu introduces the boys to classical music and founds a choir. As they sing like angels, their attitudes and behavior improve, as does the general quality of their lives. Morange, the introvert and misbehaving child, cannot read music at the beginning, but is discovered as a wonder voice and grows up to be a world-known conductor. In addition, a few miraculous processes occur in the film: the dark school building is set on fire, the oppressive institution is closed, and the mean headmaster is fired. On top of all this, one fantasy that probably crosses every teacher's mind at least once in her career -the fantasy to adopt a parentless pupil as her own son -is fulfilled. Indeed, as Mathieu leaves the school, he takes Pepino, the little orphan that waited every Saturday at the gate for his dead father to pick him up; Pepino gets a new father and the childless teacher wins a son.
The movie provides an enjoyable experience of fulfillment of pedagogical and personal fantasies, accompanied by wonderful music that opens our heart; but ultimately, we might be left with a disturbing feeling that these miracles will not occur in our own classrooms, because such things 'only happen in the movies.' Notably, the feeling of fantasy is not just a subjective interpretation. Through various hints, the movie declares itself to be nonrealistic. First, the film starts with a picture of an opening book -a clue that what we are watching is a tale.
Second, the soundtrack does not fit what we see. One scene shows a group of children, untrained in music, singing without instrumental accompaniment in an ordinary classroom. The scene's soundtrack, in contrast, is the professional singing of a choir, replete with an orchestra, which was surely recorded in a well-equipped studio or a concert hall with ideal acoustic conditions. A viewer who is aware of the cinematic symbolism would notice the 'fraud,' but even an innocent viewer would sense that the scene is somehow nonrealistic.
We can interpret this fantasy in various ways. A fantasy can offer a horizon that drives and motivates teachers and people in general to act; alternately, it can be an illusion -a fata morgana. We could interpret the difference between the pictures we see and the sounds we hear as the gap between the present situation in the classroom and the future vision of the teacher (whose achievement comprises his professional purpose). Under the latter interpretation, the movie can be considered an inspiration for transformative teaching. Under the former interpretation, it may discourage teachers with the notion that change is not really possible -especially in their poorly equipped classrooms with inadequate acoustics.
The unrealistic transformation that teachers perform in their students in the movies provides a cause for critique of the lack of authenticity and reliability of the plot, as emerges from a review [4] about the British movie, To Sir with Love: "Since he never gets beyond the introductory sentences, we have to accept a lot on faith. The pretense that these milky aphorisms will make gentlemen out of hoodlums may be what suburban audiences want to hear -but can even they believe it?... The music isn't bad but little else sounds genuine" (p. 50). Nevertheless, even movies that open with the declaration: "based on a true story" (i.e., Freedom Writers [27], the Ron Clark Story [26], McFarland [32]) do not really seem to impress the viewers as being realistic, because of the cinematic choices that must be made in order to fit a long educational process into 90 minutes. Moreover, educational processes are always complex, as Schwab had shown us; these processes are often simplified in movies so that they can be presented through a limited set of characters and events. Finally, moreover, teachers' transformative work tends to seem unrealistic in movies because certain aspects of the educational change are invisible to the eye, and movies must exaggerate them in order to make them visible on screen.
Here too, critical viewing is advisable. The discussion should first deal with revealing and consciously recognizing the sense of fantasy delivered by these movies. Freire's advice about active and interactive reading may help disclose the hidden oppressive effect of these cinematic texts. Then, we can progress by asking what parts of the educational success the films depict may actually be possible in real classrooms, with real pupils. In such discussions, I found that viewers indeed find motives to inspire them in their educational work, despite the clear differences between their real classrooms and those on the screen.
Pedagogical Action Movies
The group of movies about teachers that I call 'action movies' includes films that show difficulties and failures alongside achievements and successes. The basic plot of many of these movies is similar. The film usually opens with a difficult encounter between the teacher (who is full of good intentions) and a group of pupils (who are indifferent and bored, or wild and violent, and have no motivation to study and no respect for their teachers). Between the teacher and the class there is a wide gap in language and area of residence, and usually a difference in skin color. The teacher's attempts to make the pupils learn yield no results, until a breaking point at which the teacher realizes she must change her methods of teaching in order to 'get to' the pupils.
At this point the teacher deserts the books, curriculum, or examinations; changes the subjects learnt; and creates a different interaction with the pupils that is nonacademic and informal. Following this breaking point, her relationship with the students and the students' willingness to cooperate improves. By the end of the movie, a significant transformation has taken place in the students' attitudes, behavior, academic achievements, and even physical appearance.
This line of plot is repeated in so many movies: Freedom Writers [27], Dangerous Minds [21] ,The Ron Clark Story [26], and others.
The movie To Sir with Love [1] may be considered the pioneer of this genre. Here, the breaking point of the teacher, Mr. Thackeray, includes throwing the books to the garbage bin, deserting the curricular subject, and starting an open conversation with the class about "life, survival, rebellion, sex, marriage…" i.e., issues that are relevant to students and problems they are worried about in their daily struggles. In this conversation, the teacher also declares that from that moment on he is going to treat the pupils as adults and encourage them to take responsibility for their social reality: "it is your duty to change the world." This scene shows exactly what Freire would have expected the liberator teacher to do. Mr. Thackeray replaces frontal teaching with dialogic pedagogy and also practices 'situated pedagogy,' whereby real life is discussed and analyzed in class. Schwab too may have found this movie to encourage teachers to practice their autonomy in class and to apply discretion based on their pupils' reactions. On the other hand, Schwab may have had some reservations, as I present below.
Furthermore, in this movie, as in many others, the teacher's decision to desert the curriculum involves a resolve to leave the classroom physically and go outside. As a result of his new pedagogy, Mr. Thackeray's class decides to go to the museum, and the pupils discover that art and culture can be fun and are closer to them than they ever imagined. In other movies, the teacher takes them out to a restaurant (Freedom Writers [27]), to the amusement park, or to the pupils' homes (Dangerous Minds [21]).
Another motive that is repeated in these films is the conflict between the teacher and the school management; the teacher works without any support from colleagues or from the people in charge and makes changes at the risk of being fired, while the headmaster always objects to all the educational initiatives or projects that the teacher is trying to execute. Indeed, Schwab likely would have disapproved of the teacher's acting as a lonely knight, neglecting teamwork, and hurting the balance between different aspects of the educational process that Schwab highly regards. In addition, Schwab would have probably found these teachers' decisions to quit the curriculum altogether to be too radical and to greatly surpass the autonomous interpretation opportunities within the system. I agree with this critique and consider quitting the curriculum and leaving the school system in order to practice a meaningful education to be a problematic decision. As we saw, a few films repeat the motive of throwing away or destroying books; the formal curriculum is presented as irrelevant to the student's lives. The message we get is that teachers must quit teaching in order to start educating. I question the alleged dichotomy of 'teaching' and 'educating.' I find problematic the message that allegedly, meaningful educational work cannot be conducted through the help of books and through the disciplines currently existing within the curriculum.
Indeed, teachers should engage in a critical viewing of these films. The fields that we teach -mathematics, language, natural sciences, social sciences, history, literature, philosophy, and others -are rich with interesting issues and meaningful texts, which could be taught in a mode that is relevant, intriguing, and even mind-blowing. I seek to challenge the populist idea that in order to be relevant, the teacher must transcend the classroom or compromise the academic studies and give up on strict standards of intellectual work. Instead, I turn to Shulman's emphasis on teachers' special ability, which he defined as 'pedagogic content knowledge' [15]. While acknowledging this ability, Shulman responds to the popular misconception and says: "those who know -do, and those who know and understand -teach." As a philosophy teacher in high school, I indeed experience the possibility of teaching a theoretical discipline, which is taught only inside classrooms and through texts and books, with systematic and strict methods of thought. I find that I am able to instill independent thinking and a critical approach to life, and to treat issues that are relevant to the students' lives and may even promote transformation in their views and approaches.
In this regard, I find support in the writings of both scholars discussed above. As noted, Freire stressed that the dialogic pedagogy is not 'free education' and that it includes academic rigor, hard work, and systematic reading of texts ([6] pp. 4-6, 10-11). Moreover, Freire and other thinkers from the 'Radical Pedagogy' school advise teachers to work within public schools and to make small, gradual changes that encourage students to think critically about their social and political reality and thereby to yield cautious, local transformations -as opposed to a total revolution in the school system ([6] p. 180).
A similar emphasis may be found in Schwab's words when he explains his notion of teachers' autonomy as not necessarily a rebellious type of freedom, but rather a work of constant interpretation ( [13] p.247). Schwab also notes that processes of change in education, like in other practical fields such as medicine and law, can only be undertaken cautiously and gradually in order to maintain the system's ability to function, and the stability of the child's educational process ( [11] p. 112).
Thus, as opposed to the dramatic steps that the teachers in the movies take to yield a magnificent revolution, both Freire and Schwab promote a more modest pedagogy that recognizes the teacher's freedom to design the curriculum and enhances the students' ability to change from within classrooms, while respecting existing curricula (Schwab) and academic books (Freire).
To conclude, let us reflect again on 'pedagogic action movies' and look at the dual messages they transmit. On the apparent level, these movies present the teacher as a super-hero -a brave free thinker with unstoppable energy mobilized for the good of his pupils. This hero may be thought of as a source of inspiration and motivation for young teachers. On an elusive level, however, such films send the teacher a frightening message, saying that if you are not a 'super-man,' you better accept the norms of school, because any change will include a lonely battle and potential personal risks. These films' hidden message -a discouraging one for teachers -is that good teaching is a dangerous adventure, which cannot take place within the normal framework of the schoolteacher's work. A critical discussion may bring these messages to the surface to 'immunize' beginner teachers against any subconscious discouragement they may produce.
Conclusions
Freire and Schwab teach us that teachers are autonomous and that they can lead processes of change in the school system as well as in society. Indeed, films about teachers can inspire teachers to dare to use the freedom they have in class and to be promoters of change. On the other hand, the same movies may also instill in teachers doubts regarding their own competence to create change and fears of the outcomes of such a change. In order to utilize films about teaching as tools to address teachers' self-image, fears, and apprehensions, the facilitator should provide a guided and critical viewing that leaves room for a rich and complex interpretation: one that raises the teachers' awareness of the ambivalent messages present in society and expressed in movies about the teacher's role. Through such critical viewings and candid discussion, Freire and Schwab's empowering notions regarding teachers' autonomy and transformative roles can be vividly presented and can inspire teachers while also tackling the common stereotypes, misconceptions, unrealistic expectations, and potential conflicts that surround the teaching profession.
| 2019-05-09T13:07:16.645Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "33de6a9b1527971a02449ca28d3c52ec9d7369a2",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20160430/UJER9-19505981.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a4e4146812f9e073b0f2d86a121ab446920b7c17",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
245152812 | pes2o/s2orc | v3-fos-license | Conductive Stimuli-Responsive Coordination Network Linked with Bismuth for Chemiresistive Gas Sensing
This paper describes the design, synthesis, characterization, and performance of a novel semiconductive crystalline coordination network, synthesized using 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) ligands interconnected with bismuth ions, toward chemiresistive gas sensing. Bi(HHTP) exhibits two distinct structures upon hydration and dehydration of the pores within the network, Bi(HHTP)-α and Bi(HHTP)-β, respectively, both with unprecedented network topology (2,3-c and 3,4,4,5-c nodal net stoichiometry, respectively) and unique corrugated coordination geometries of HHTP molecules held together by bismuth ions, as revealed by a crystal structure resolved via microelectron diffraction (MicroED) (1.00 Å resolution). Good electrical conductivity (5.3 × 10–3 S·cm–1) promotes the utility of this material in the chemical sensing of gases (NH3 and NO) and volatile organic compounds (VOCs: acetone, ethanol, methanol, and isopropanol). The chemiresistive sensing of NO and NH3 using Bi(HHTP) exhibits limits of detection 0.15 and 0.29 parts per million (ppm), respectively, at low driving voltages (0.1–1.0 V) and operation at room temperature. This material is also capable of exhibiting unique and distinct responses to VOCs at ppm concentrations. Spectroscopic assessment via X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared spectroscopic methods (i.e., attenuated total reflectance-infrared spectroscopy (ATR-IR) and diffuse reflectance infrared Fourier transformed spectroscopy (DRIFTS)), suggests that the sensing mechanisms of Bi(HHTP) to VOCs, NO, and NH3 comprise a complex combination of steric, electronic, and protic properties of the targeted analytes.
■ INTRODUCTION
In today's densely inhabited society, there is an increasing need for the design and synthesis of new materials for low-power portable gas sensors with potential applications in monitoring atmospheric pollution, 1,2 home and work safety, 3,4 filtration of air for personal safety, 5 and breath diagnostics. 6 Full realization of these applications would significantly benefit from the design and fabrication of low-cost, low-power wireless gas sensors that do not rely on expensive equipment or trained technicians for analysis. 7 Nanomaterial-based chemiresistive sensors offer a unique approach toward this goal, with vast potential for addressing the increasing demand for portable sensors in environmental and healthcare applications. 8 Primary demonstrations of nanomaterial-based sensors, such as those fabricated from metal oxides, 9 carbon nanotubes (CNTs), 10 and synthetically modified graphene, 11 have confirmed the value of nanostructured materials in terms of high sensitivity, 12 low power consumption, 13 and rapid response time. 14 Yet specific limitations, such as ambiguity of sensing mechanisms, selectivity to analytes, and cost-effectiveness of device integration methods, limit the practical applications of nanomaterial-based sensors. 15 Crystalline conductive coordination polymers (CPs), such as metal−organic frameworks (MOFs) 16 and coordination networks (CNs), 17 offer a promising alternative as a new emerging class of materials with broad applicability in chemiresistive detection. 18−25 High conductivity and tunable surface chemistry, combined with modular porosity and high surface area for gas uptake, all accessible through bottom-up self-assembly, give this class of materials a set of unique attributes that are particularly well suited for applications in gas sensing. 18,19,23,26 Despite this promise, most conductive coordination polymers that have thus far been employed in chemical sensing have two significant shortcomings. First, they are based on two-dimensional (2D) lattices comprising first-row transition metals with square planar or octahedral coordination geometries around the metal site. 16,18,20,21,23−25,28−31 While these low-dimensional materials exhibit high sensitivity to small reactive gases and vapors, the reliance on 2D lattices fundamentally limits gains in selectivity that can be achieved through stereoelectronic tuning of a binding site with a more complex coordination geometry. To address this fundamental limitation, we reasoned that expanding beyond first-row transition metals to create conductive networks with complex topologies and new, unsaturated coordination environments may promote gains in selectivity through simultaneous tuning of steric and electronic attributes of intermolecular interactions of sensing materials with analytes. The use of bismuth ions within a coordination network can enable solutions to these limitations by allowing tailoring of multiple useful and functional properties, such as charge delocalization and a tunable coordination environment. Additionally, flexible coordination sites capable of undergoing analyte-induced changes within the coordination environment can provide room to investigate the contributions of structural features in relation to sensing within a well-ordered material.
Furthermore, the advantages of utilizing microED can help overcome the challenges associated with obtaining suitably large crystallites of 2D framework materials, where the lack of single-crystal diffraction studies in established framework systems conceals structural information and characterization studies. This limitation restricts the fundamental understanding of the interactions of host framework materials with guest analytes. 24
■ MOLECULAR DESIGN
The molecular design of the conductive coordination network capitalizes on several unique characteristics of bismuthcontaining compounds and materials and extends these characteristics to generate a new material with promising functionality. Currently, bismuth-based materials and coordination compounds have applications in healthcare, 32 photo-catalytic function, 33,34 radiation technology, 35 and gas adsorption and storage. 36 The unique flexible coordination sphere of bismuth, 37 Lewis acidity, 38 nontoxicity, 39 stability, 40 as well as the high affinity for soft and hard ligands, enable desirable structure−property relationships, 41 particularly when bismuth is used as a constituent within CPs. 42 Specifically, bismuth-based CPs 43 and porous metal−organic frameworks (MOFs) 44 have demonstrated valuable structure−property relationships, such as conductivity 43,45 and photocatalysis. 33 These properties are tunable through the strategic selection of constituent organic linkers in bismuth-containing CPs that can dictate the coordination environment around the bismuth metal node. 43 The unique nature of bismuth-based coordination networks allows for the tailoring of multiple useful and functional properties, such as charge delocalization, 45,46 band gap, and direction of assembly, or dimensionality through careful selection of organic ligands. 43 Several of these properties are highly desirable in the context of chemiresistive sensing. First, conductive CPs may be designed by selecting constituents that contain loosely held valence shell electrons and ligands that permit their efficient through-bond charge delocalization, 17,43 allowing the integration of the semiconductive material into amperometric devices for chemical sensing. This charge delocalization has been well documented within both bismuth oxide lattices 47 and bismuth-based metal−organic coordination networks. 45,46 Second, the flexible coordination geometry of bismuth provides control over dimensionality of the coordination network structure, 43 resulting in unique structure− property relationships through ligand modification strategies and through the choice of bismuth metal salt. Third, the structures of Bi(III)-containing compounds often present a vacant or flexible coordination site at the bismuth center, which may serve as an electron acceptor site. 40,43 The coordination environment around the bismuth ion may undergo further interaction with analytes, thereby enabling selective chemical detection of analytes with a three-dimensional (3D) coordination sphere of bismuth accompanied by electronic transduction of signal. Capitalizing on these advantages can provide a path to control these functional properties in selective chemical sensing.
Our molecular design is inspired by previously reported literature of bismuth-based semiconductive coordination networks interconnected with triphenylene-based ligands. 46 The precedent set by Li et al. utilized hexakis(alkylthio)triphenylene (alkyl: methyl, ethyl, and isopropyl) triphenylenes reacted with bismuth halides to produce semiconductive hybrid networks that featured flexible network dimensionalities and tunable electronic properties. 43 We reasoned that substituting the alkylthio substituents with hydroxy groups may promote similar coordination chemistry with bismuth ions while generating a material with good stability to water and air due to the robust nature of hydroxy-substituted triphenylenes and the strong nature of Bi−O bonds, 48 compared to their sulfur-substituted analogues. The 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) ligand exhibits a large π-conjugated system and threefold symmetry (Figure 1) and has previously been reported to form conductive metal−organic frameworks using first-row transition metals 21,23,25 and lanthanides. 49 A useful attribute of HHTP and HHTP-based MOFs is that they can undergo electron-transfer interactions that can be coupled to proton-transfer events. 50 This colocalized ability to interact with analyte protons and electrons using HHTP may provide an additional level of selectivity in sensing devices for protic guests. Despite the useful properties displayed by HHTP, CPs employing this ligand are unprecedented for metal complexes with bismuth. We aimed to achieve bottom-up assembly of a conductive CP that provides a three-dimensional (3D) ligand coordination environment around the metal center tailored for enhanced selectivity in response to specific gas-phase molecules. Thus, we subjected bismuth(III) acetate to aqueous reaction conditions and paired this node with a polyaromatic organic linker to observe a dark green microcrystalline powder. Bi(HHTP) exhibited distinct structural transformations upon dehydration and hydration of the pores within the network (here termed Bi(HHTP)-α and Bi(HHTP)-β, respectively), likely driven by hydrogen-bonding interactions with the oxo groups on the ligand, which induced changes in the coordination environment of both bismuth centers and unit cell parameters. This type of dynamic flexibility, such as the slipping and/or expansion of the layers, has been previously investigated in 2D HHTP-based MOFs using quantum mechanical calculations. 51

■ EXPERIMENTAL PROCEDURE

Synthesis and Characterization. We used hydrothermal synthesis that combined Bi(OAc) 3 and HHTP to produce Bi(HHTP) (Figure 1). Reaction optimization procedures carried out after powder X-ray diffractometry (pXRD) analysis revealed the presence of residual starting material Bi(OAc) 3 when Bi(HHTP) was synthesized using a 2:1 molar ratio of Bi(OAc) 3 and HHTP (see Figure S5). This residual starting material can be removed with a purification procedure (overnight stirring in H 2 O at 50 °C) followed by subsequent washes with ethyl acetate (see Section 1 in the Supporting Information (SI)), or a Soxhlet extraction technique using ethyl acetate (only effective using small-scale synthesis, see Section 1 of the SI). Residual Bi(OAc) 3 starting material can be avoided altogether through the use of a stoichiometric 1:1 molar ratio of Bi(OAc) 3 and HHTP (see Section 1.4 in the SI).
The resulting dark green/blue conductive, microcrystalline powder [Bi(HHTP)] was initially characterized using pXRD analysis (Figure 2), scanning and transmission electron microscopy (SEM and TEM, respectively), and elemental analysis. The experimental pXRD pattern of Bi(HHTP) exhibited a high-intensity peak in the low-angle range at 8.36°2θ. This peak corresponds to an interatomic distance of 10.6 Å and the (002) plane, which bisects the unit cell of Bi(HHTP) ( Figure 2).
Other major peaks appearing in the pXRD pattern included the (102̅), (200), (202̅), (202), and (321̅) planes, which were attributed to interatomic distances of 8.7, 8.0, 6.3, 5.9, and 3.3 Å using Bragg's law, respectively. The (002) and (202̅) planes intersected a section of one HHTP ligand when viewed along the crystallographic c-axis, while the (200) and the (102̅) planes intersected and ran parallel to the bismuth atoms (Figure 2b). The (321̅) crystalline plane runs parallel to the π−π stacking distance and corresponds to an interatomic distance of 3.3 Å. We attribute the slight offset of the (321̅) peak to the limited resolution of MicroED and to the highly disordered solvent present within the void space of the Bi(HHTP)-β structure, which could have affected the layering of π−π stacking planes. For higher-resolution crystal structure analysis, the Bi(HHTP) material was analyzed using a synchrotron light source at Argonne National Laboratory (Beamline 11-BM) (Figure 2a).
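The quoted d-spacings follow directly from the 2θ peak positions via Bragg's law. A minimal sketch is given below; it assumes a Cu Kα laboratory source (λ ≈ 1.5406 Å), which is not stated explicitly in this excerpt, and the function name is illustrative only.

```python
import math

WAVELENGTH_A = 1.5406  # Angstrom; assumed Cu K-alpha wavelength for the laboratory pXRD data

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_A):
    """Bragg's law, n*lambda = 2*d*sin(theta), solved for d with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# The (002) reflection reported at 8.36 degrees 2-theta
print(f"d(002) = {d_spacing(8.36):.1f} A")  # ~10.6 A, matching the reported interatomic distance
```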
■ RESULTS AND DISCUSSION
Morphological characterization of Bi(HHTP) via SEM analysis revealed rectangular-shaped crystallites of varying lengths (Figure 3a). TEM imaging, obtained after 1.0 mg of Bi(HHTP) was sonicated in acetone for 16 h and dropcast onto a carbon grid, provided visualization of rectangular, sheetlike materials (Figure S9). Further characterization of Bi(HHTP) using TEM analysis revealed the presence of a distinct crystallite with a length of ∼2 μm (Figure 3b). Selected area electron diffraction (SAED) analysis on this crystallite showed well-ordered diffraction spots in reciprocal space (Figure 3c), which we used as a complementary method of measuring interatomic distances along diffraction planes. The distances between the diffraction spots were calculated according to the equation derived from Bragg's law (eq S1). Two interatomic distances (3.3 and 6.3 Å) measured within the SAED nanocrystal were also present in the pXRD pattern (Figure 2). The 6.3 Å distance observed in the nanocrystal was slightly offset in the pXRD (6.6 Å) and likely corresponded to the (202̅) hkl plane, while the 3.3 Å interatomic distance corresponded to the (321̅) plane, which is parallel to π−π stacking distances (Figure 2b).
Although Bi(HHTP) displayed high crystallinity, efforts to grow a single crystal large enough for single-crystal X-ray diffraction (SCXRD) using methods such as slow evaporation, high pressure/temperature synthesis, and slow addition techniques were unsuccessful; thus, we focused our attention on microelectron diffraction (MicroED). 52 Although the MicroED method was popularized by structural biologists for the characterization of proteins, this technique has proven invaluable for the field of small-molecule characterization, 52 and even more recently, the characterization of both coordination networks and MOFs. 53 MicroED enabled the structural characterization of Bi(HHTP) and permitted the correlation of the hkl planes in this structure to the ones observed in the experimental pXRD spectrum ( Figure 2).
Analysis of Crystallographic Structure from MicroED. For MicroED analysis, electron diffraction data were collected using a Talos F200C transmission electron microscope equipped with a Thermo Fisher CetaD detector. To prepare sample grids (quantifoil or pure carbon TEM grids), a TEM grid was placed in a vial containing dry powder and gently shaken. Images were collected in a movie format as crystals were continuously rotated under a focused electron beam. Typical data collection was performed using a constant tilt rate of 0.3°/s between minimum and maximum tilt angles of −72° and +72° (see the SI for details). Structural characterization by MicroED revealed that Bi(HHTP) exhibited two distinct structural forms, Bi(HHTP)-α and Bi(HHTP)-β. Bi(HHTP)-α exhibited a monoclinic (α, γ = 90°, β = 94°) Bravais lattice with space group P2 1 /c and intricately connected layers (vide infra). Bi(HHTP)-β displayed different cell parameters (α, γ = 90°, β = 97°), occupied pores (likely water molecules from incomplete drying), and distinct coordination geometries, but the same space group, P2 1 /c (Figure S7). Pawley refinement was conducted using the crystallographic information file (cif) obtained from MicroED for Bi(HHTP)-α, which provided unit cell parameters and presented an R wp of 7.07% and an R p of 12.54% (see Section 3 in the SI).
Topological analysis performed using a ToposPro program package 54,55 and the Topological Types Database (TTD) collection of periodic networks was used to determine the network topology model in the coordination network (Section 2 in the Supporting Information). The topological description includes a simplification procedure (graph theory approach), which was used to describe the crystal net topology and designate a 2,3-C4 topological type net for Bi(HHTP)-α, which corresponds to this structure in its standard representation ( Figure S15). The cluster simplification procedure was also implemented to identify more complex building units of a structure and characterize their connection mode, where the fragments of Bi(HHTP)-α form infinite chains linked through Bi−O linkages ( Figure S19b) and exhibit rod packing with 2M4-1 topology and point symbol {4}. 55 The Bi(HHTP)-α sheet contains dimeric one-dimensional (1D) zigzagging chains of alternating nonplanar HHTP ligands that connect one 1D chain to another through the longest Bi 1 − O bond of 2.6 Å. These dimeric chains contain alternating uncoordinated semiquinone groups and stack in the crystallographic b-direction through π−π stacking interactions. Binding interactions present inside one-dimensional chains connecting HHTP constituents are approximately 4.1 Å long. Both Bi(HHTP)-α and -β adopt a herringbone-like packing motif, similar to HHTP (see Section 1 in the SI), 56 where bismuth ions cause distortions in the π−π stacking of the matrix through catechol bidentate chelation and slight rotation within the coordination sphere. Compared to bismuth-based MOFs made using carboxylate ligands, which exhibit Bi−O bond lengths ranging from 2.2 to 3.0 Å, 44 we observed a smaller array of bond lengths, 2.0−2.6 Å, commonly seen in bismuth catecholate coordination. The π−π stacking distance in Bi(HHTP)-α was measured at 3.3 Å, which matches the interatomic distance obtained from diffraction peaks in pXRD. Bi(HHTP)-α displays two coordination environments ( Figure 1c), distorted tetragonal pyramid (Bi 2 ) and distorted quadrilateral (Bi 1 ); the latter is similar to a dimeric bismuth(III) catecholate coordination complex involved in a five-coordination environment reported previously. 57 Bi(HHTP)-β exhibited two distinct bismuth coordination spheres with sixand five-coordinate environments; the former (Bi 2 ) contains an aqua ligand ( Figure 4d). Specifically, the coordination polyhedra of Bi 1 and Bi 2 contain a distorted pentagonal pyramid (CN = 5) and distorted one-capped octahedron (CN = 6), respectively.
We hypothesize that Bi(HHTP)-β hydrate was stabilized when water occupies the slitlike pores of the network ( Figure 4b), altering unit cell parameters and permitting further interaction of each oxygen heteroatom in HHTP to neighboring layers. After hydration, bismuth containing CN = 4 in Bi(HHTP)-α shifted from an eclipsed environment, with respect to other bismuth atoms in adjacent layers, to a staggered conformation due to oxygen now in proximity within the pores of Bi(HHTP)-β ( Figure 4). The presence of uncoordinated hydroxy groups facing inward within the pores (present in both structures) is likely further stabilized through hydrogen bonding (H-bonding) with the water molecules in Bi(HHTP)-β.
Additional Physical and Chemical Characterization. IR Analysis. Attenuated total reflectance infrared spectroscopy (ATR-IR) of Bi(HHTP) revealed the presence of vibrational bands ( Figure S21) at 1420 and 1157 cm −1 , which are characteristic of catechol vibrational modes. 58 Because the vibrational modes strongly depend on atomic masses, heavy bismuth ions should present vibrational bands at lower frequencies (500−100 cm −1 ). Thus, the appearance of new bands in this region may also be attributed to new Bi−O bond vibrational frequencies.
Surface Area Analysis. Structural characterization of the specific surface area of activated and degassed (at 85°C and 635 Torr for 24 h) Bi(HHTP) using Brunauer−Emmett−Teller (BET) analysis was performed using N 2 adsorption−desorption isotherms, collected at 77 K on a Micromeritics 3FLEX instrument. Preliminary results indicated a surface area of 26.8 m 2 g −1 (Figure S22). The low surface area measured from BET analysis using nitrogen (probe radius of 1.8 Å) is reasonable when compared to the solvent-accessible surface area calculated using Materials Studio software, where a probe radius of 1.2 Å gave a calculated surface area of 101.6 Å 2 and a free volume of 22.62 Å 3 (Figure S20).
Elemental Composition. Elemental microanalysis and inductively coupled plasma mass spectrometry (ICP-MS) confirmed the elemental composition of Bi(HHTP) (Table S2). The percent masses of carbon, hydrogen, and bismuth observed experimentally within the coordination network were 38.3, 1.51, and 33.1%, respectively. These values were closer to the theoretical mass percentages (39.0, 1.62, and 37.7%, respectively) based on the empirical formula of Bi(HHTP)-β ((C 36 H 12 O 12 )Bi 2 ·2H 2 O) than to those based on the empirical formula of Bi(HHTP)-α ((C 36 H 12 O 12 )Bi 2 ), whose theoretical mass percentages for carbon, hydrogen, and bismuth are 41.1, 1.51, and 39.4%, respectively. This comparison suggests the prevalence of the Bi(HHTP)-β structure within the sample, although the percent volume ratio of the two structures may fluctuate depending on drying conditions and can be further investigated using systematic thermal gravimetric analysis (TGA) or statistical MicroED techniques.
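The theoretical percentages quoted above follow from the two empirical formulas; the short sketch below redoes that calculation with standard atomic masses. Small differences from the values in the text may reflect rounding or the exact hydration level assumed.

```python
# Theoretical mass percentages for Bi(HHTP)-alpha and -beta (minimal sketch).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "Bi": 208.980}  # g/mol

def mass_percent(formula):
    """Return the mass percentage of each element in an empirical formula."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / total for el, n in formula.items()}

alpha = {"C": 36, "H": 12, "O": 12, "Bi": 2}            # (C36H12O12)Bi2
beta = {"C": 36, "H": 12 + 4, "O": 12 + 2, "Bi": 2}     # (C36H12O12)Bi2 . 2 H2O

for name, formula in (("Bi(HHTP)-alpha", alpha), ("Bi(HHTP)-beta", beta)):
    print(name, {el: round(pct, 1) for el, pct in mass_percent(formula).items()})
```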
Thermal Analysis. The thermal gravimetric analysis (TGA) profile of Bi(HHTP) revealed a total of ∼34% weight loss with the highest rate of decomposition occurring at 466°C ( Figure S23). There was an initial mass loss of ∼8% from 100 to 200°C, potentially due to the loss of volatile solvent molecules such as acetone or H 2 O, which is consistent with the presence of Bi(HHTP)-β or the hydration of the material. We observed a similar mass loss for Bi(OAc) 3 (38%) and a higher mass loss for the organic linker, HHTP (56%).
Analysis of the Oxidation State. X-ray photoelectron spectroscopy (XPS) enabled the analysis of bismuth in a low (3+) valence oxidation state through emission lines at binding energies of 160.1 and 165.3 eV, assigned to Bi 3+ 4f 7/2 and Bi 4f 5/2 (see Section 3 in the SI). 59 We were unable to fully deconvolute the region of the O 1s primary emission line present at 532 eV to assign C−O and C=O bonds, due to the likely presence of H 2 O both within the pores of the network and within the coordination sphere of Bi(HHTP)-β, creating uncertainty around the correct electronic state of the ligand. Based on the deconvoluted primary C 1s emission line (Figure S25b) and considering the presence of Bi 3+ , one possible oxidation state of the ligand that results in an overall neutral coordination network is a bis-semiquinone catechol state (sq, sq, cat) paired with Bi 3+ within the network (Figure S25d). The C 1s spectra were consistent with this oxidation state, as they present C−O, C=O, and C−OH bonds in a 2:2.6:1 ratio. Another possibility that renders a neutral framework is that bismuth atoms within the network are present in a mixture of Bi 3+ and Bi 2+ oxidation states. These two oxidation states of HHTP generate alternating (sq, sq, sq) and (sq, sq, cat) states (Figure S26b,c) and would generate a −2.5 overall charge on the ligand. This network structure would also create a radical ion on HHTP, which is plausibly what we observe in electron paramagnetic resonance (EPR) spectroscopy (Section 4 in the SI).
Electronic Properties. Conductivity measurements of Bi(HHTP) were performed using a four-point probe technique, which required 100 mg of material pressed into a 6 mm diameter pellet of 0.2 mm thickness. Bi(HHTP) showed a bulk conductivity of 5.3 × 10 −3 S·cm −1 (Section 3 in the SI, eq S2). Pellets of the precursors Bi(OAc) 3 and HHTP exhibited no measurable conductivity using a two-point probe digital multimeter (Extech EX430 series), which had a maximum resistance limit of measurement of 40 MΩ.
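For context, the bulk conductivity of a pressed pellet is obtained from the measured voltage-to-current ratio and the pellet geometry. The sketch below assumes the standard collinear four-point-probe thin-sample correction (π/ln 2) rather than the exact expression in eq S2 of the SI, and the voltage and current values are placeholders, not data from the paper.

```python
import math

def pellet_conductivity(voltage_V, current_A, thickness_cm):
    """Collinear four-point probe on a thin, laterally large pellet:
    sheet resistance R_s = (pi / ln 2) * V / I; conductivity sigma = 1 / (R_s * t).
    The exact geometry correction used in the paper is given by eq S2 of the SI."""
    sheet_resistance = (math.pi / math.log(2)) * voltage_V / current_A
    return 1.0 / (sheet_resistance * thickness_cm)

# Placeholder V/I values for a 0.2 mm (0.02 cm) thick pellet
sigma = pellet_conductivity(voltage_V=0.10, current_A=2.3e-5, thickness_cm=0.02)
print(f"sigma ~ {sigma:.1e} S/cm")
```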
To investigate the Arrhenius activation energy for electrical conductivity of Bi(HHTP), a two-point probe on a 50 mg pressed pellet was employed to collect the current change at different temperatures (25−110°C) with a linear sweep voltage from −2.0 to 2.0 V (Figure S24). The activation energy determined by this method was 425 meV. The optical band gap was determined by plotting the absorbance squared vs energy (eV) and estimated to be 1.61 eV based on the value of the absorption edge (Figure S28). Density functional theory (DFT) calculations were performed on the simulated structure of Bi(HHTP) using the Perdew−Burke−Ernzerhof (PBE) functional within the generalized gradient approximation (GGA) (Figure S29). The high-symmetry points in the first Brillouin zone demonstrated that the Dirac bands approached the Fermi level along the Y-A and E-C (crystallographic c) directions, where a low band gap of approximately 0.1 eV was observed for Bi(HHTP)-α and 0.08 eV for Bi(HHTP)-β. The partial density of states analysis showed that, compared with bismuth, the p orbitals from the C and O atoms contribute significantly to the Dirac bands.
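The activation energy quoted above corresponds to an Arrhenius treatment of the temperature-dependent conductivity. A minimal sketch of that fit is shown below; the conductivity values are synthetic, generated only to illustrate how Ea is extracted.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy_eV(temps_K, sigmas):
    """Arrhenius model sigma = sigma0 * exp(-Ea / (k_B * T)):
    a linear fit of ln(sigma) vs 1/T has slope -Ea / k_B."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(np.asarray(sigmas)), 1)
    return -slope * K_B_EV

# Synthetic conductivities generated for Ea = 0.425 eV (illustration only)
T = np.array([298.0, 323.0, 348.0, 373.0])
sigma = 1.0 * np.exp(-0.425 / (K_B_EV * T))
print(f"Ea ~ {activation_energy_eV(T, sigma) * 1000:.0f} meV")  # ~425 meV
```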
Chemiresistive Gas Sensing. We hypothesized that Bi(HHTP) would be a promising chemiresistive sensing material due to its flexible coordination sphere around the bismuth metal center, which may act as a potential binding site and accommodate gaseous probes, causing a direct perturbation of the charge transport within the semiconductive network. There is also the presence of free, uncoordinated hydroxy groups in both Bi(HHTP)-α and Bi(HHTP)-β that can promote H-bonding interactions in the vicinity of the bismuth atom. To characterize the fundamental ability of Bi(HHTP) to sense small reactive gases through electronic doping interactions, we examined the chemiresistive responses of Bi(HHTP) toward both oxidizing (NO) and reducing (NH 3 ) gaseous analytes. To further probe Bi(HHTP)'s capacity to detect analytes through a combination of electronic doping and H-bonding interactions, we also examined the response of Bi(HHTP) toward a range of H-bond donors (MeOH, EtOH, iPrOH) and H-bond acceptors (acetone).
To carry out the sensing procedure, we dropcast 10 μL of a Bi(HHTP) suspension (1−2 mg/mL in H 2 O) onto five devices containing interdigitated 10 μm gap gold electrodes, which generated devices with resistances in the ∼30 MΩ range (see the SI, Section 4 for details). Since the suspension of Bi(HHTP) used for device fabrication was sonicated in H 2 O and dried for 16 h in ambient air, we hypothesize that the Bi(HHTP)-β structure was the dominant form within the devices. Furthermore, due to the similar band gaps of the α and β structures (Figure S29) and their similar simulated XRD patterns (Figure 2a), we do not believe that the differences between the structures could lead to considerable differences in chemiresistive response. The devices were dried overnight in ambient air and then placed into an edge connector, wired to a breadboard and potentiostat (PalmSens) that applied a 1.0 V bias at room temperature. The devices were then enclosed in a Teflon chamber with gas inlet/outlet ports connected to Smart-Trak mass flow controllers delivering target concentrations of gases from premixed tanks purchased from AirGas (tanks of 10 000 ppm of NH 3 in N 2 and 10 000 ppm of NO in N 2 ). The concentrations of gaseous analytes were modified by adjusting flow rates (N 2 as the balance/purging gas). Generally, five devices at a time were exposed to each gas at different concentrations (5−1000 ppm) of the chosen analyte at a N 2 flow rate of 0.5 L/min and then purged with dry N 2 to examine Bi(HHTP)'s recovery.
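Because the bias is held constant, the conductance is proportional to the measured current, and the normalized response −ΔG/G0 can be computed directly from the current trace before and during an exposure. The sketch below uses placeholder current readings; none of the numbers are taken from the paper.

```python
import numpy as np

def chemiresistive_response(i_baseline, i_exposed, bias_V=1.0):
    """Normalized conductance change -dG/G0 (in %) at a fixed applied bias.
    With G = I / V the bias cancels, so the response reduces to a current ratio."""
    g0 = np.mean(i_baseline) / bias_V
    g = np.mean(i_exposed) / bias_V
    return -100.0 * (g - g0) / g0

# Placeholder current readings (A) before and during an exposure
baseline = [3.30e-8, 3.28e-8, 3.31e-8]
exposed = [2.15e-8, 2.13e-8, 2.16e-8]
print(f"-dG/G0 = {chemiresistive_response(baseline, exposed):.1f} %")  # positive: conductance dropped
```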
For volatile organic compound (VOC) sensing, a Kintek FlexStream gas generator was used to produce vapors of the analyte (EtOH, MeOH, acetone, or iPrOH), which were diluted in N 2 (4 L/min) to the desired concentration. Each organic vapor was calibrated before use in the generator by heating the internal permeation glass chamber/tube, loading a vial of the desired VOC inside the tube, and setting the span flow rate of N 2 at 4 L/min (see Section 4.7 in the Supporting Information). Notably, we observed that altering flow rates between analytes affects the response of the material, where higher flow rates are used to deliver lower concentrations; thus, we chose to keep the flow rate constant and vary the rate of evaporation of the analyte by controlling the temperature within the vapor generator to acquire concentration-dependent experiments (eq S10). In all sensing measurements, the devices were kept at ambient temperature.
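The delivered analyte concentration in such a generator is set by the analyte emission (evaporation) rate and the dilution flow; the exact relation used here is eq S10 of the SI. The sketch below shows the common ideal-gas approximation for a permeation-type source, and all numerical inputs are hypothetical.

```python
def ppm_from_emission(emission_ng_per_min, molar_mass_g_mol, flow_mL_per_min,
                      molar_volume_L=24.45):
    """Approximate analyte concentration (ppmv) from a permeation/evaporation source:
    ppm = emission * molar_volume / (molar_mass * flow), with the emission rate in ng/min,
    the total flow in mL/min, and the molar volume of an ideal gas at 25 C (24.45 L/mol).
    The exact relation used in the paper is eq S10 of the SI."""
    return emission_ng_per_min * molar_volume_L / (molar_mass_g_mol * flow_mL_per_min)

# Hypothetical example: EtOH (46.07 g/mol) evaporating at 2.0e6 ng/min into 4 L/min of N2
print(f"{ppm_from_emission(2.0e6, 46.07, 4000.0):.0f} ppm")
```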
Chemiresistive Sensing Response. Although many examples of MOF-based sensors exist, to the best of our knowledge, this report constitutes the first example of bismuth-based CP chemiresistive sensing. The favorable semiconductive nature of Bi(HHTP) facilitated its integration into devices through dropcasting to examine the chemiresistive response of Bi(HHTP) to the four VOCs (acetone, EtOH, MeOH, and iPrOH) and to 40, 20, 10, and 5 ppm of NO and NH 3 . Bi(HHTP) exhibited a decrease in conductivity to the […]. 60 We also examined the response of Bi(HHTP) to NO and NH 3 in the presence of humidity (5000 ppm of H 2 O, Figures S40 and S41). We observed a significant decrease in response in the presence of humidity (from −34.4 ± 3.2 to −19.9 ± 0.76% −ΔG/G o ) when sensing NO and a considerable increase in response for NH 3 (from 39.6 ± 7.0 to −81.8 ± 7.3% −ΔG/G o ) in the same concentration of H 2 O. These results may point to the importance of H-bonding in the sensing mechanism of NH 3 .
Bi(HHTP) devices exhibited unique chemiresistive responses toward VOCs that changed in the direction of normalized conductance depending on the analyte ( Figure 6). Both MeOH and acetone displayed an increase in normalized conductance (−ΔG/G o ) upon exposure, while EtOH and iPrOH demonstrated a decrease in normalized conductance (−ΔG/G o ) upon exposure to specific concentrations of the analyte. All exposures to the VOCs were observed to be reversible. To better understand the responses and H-bonding interactions of Bi(HHTP) with the four VOCs, we compared the pK a values, dipole moment, and dielectric constants of each compound (Table S5). The pK a values of the VOCs increase from MeOH to acetone. While EtOH and iPrOH have similar dipole moments (1.66D), MeOH and acetone have higher dipole moments. Other considerations include the dielectric constants (ε), which decrease down the line from methanol, ethanol, isopropanol, and all the way to the lowest value, acetone. The combination of these electronic and structural properties may explain the observations noted during sensing of VOCs. Furthermore, the presence of water molecules in the pores of Bi(HHTP)-β, as demonstrated by MicroED, may compete as host sites for H-bonding with VOCs. Thus, sensing responses to VOCs may have contributions from two competing mechanisms: one involving Lewis acid and base interactions, and another one involving Brønsted acid or Hbonding interactions with the surface of Bi(HHTP), which we further investigated using several spectroscopic techniques (vide infra). I−V curves of Bi(HHTP) during exposure to 1000 ppm of EtOH vapor suggested Ohmic contacts after saturation, excluding the possibility of Schottky barrier modulation mechanism during the sensing of VOCs ( Figure S39).
Limits of Detection.
To examine the limits of detection (LODs), we focused our attention on two representative biomarkers that are known to be common breath metabolites, 61 acetone and EtOH (vide infra). We varied the concentration of these VOCs by increasing the temperature of the chamber housing the analyte from 25 to 40°C and recorded three sequential exposures (Figure 6). Bi(HHTP) had an average response of 43.8 ± 7% to 670 ppm of acetone after averaging across three devices exposed for 5 min and recovered in N 2 for 5 min, sequentially. To 2094 ppm of EtOH, Bi(HHTP) had an average response of −28.5 ± 2%.
To determine the LODs in response to NO and NH 3 , we calculated the change in response of Bi(HHTP) upon 15 min of exposure toward NO at different concentrations (5−40 ppm) (for full calculation, see the SI, eqs S7−S9). The theoretical LODs, calculated based on the response after 15 min of exposure to either NO or NH 3 (5−40 ppm), were 0.15 and 0.29 ppm, respectively. These LOD values are comparable to M 3 (HXTP) 2 -based systems, 21,25−28 but do not exceed previously reported MPc-based 2D framework sensitivity to NO. 22 Here, however, Bi(HHTP) displays a unique reversibility to low concentrations of NO (and partial reversibility to concentrations above 20 ppm), as observed by sensing and pXRD experiments (Figures 7 and S53, respectively), that is not observed in either of these previous systems. These reversible sensing characteristics can be particularly advantageous for nanomaterial-based sensors that can be fabricated to withstand repeated exposures to NO for an enhanced long term durability. For VOCs, the LOD values were 41.2 ppm for acetone, 278 ppm for MeOH, 50.2 ppm for iPrOH, and 185 ppm for EtOH. These values are similar to other reported chemiresistive values for alcohol sensors fabricated from metal oxides or reduced graphene oxides. 62 Furthermore, the system we present allows for differentiation between analytes based on the direction of response using a single conductive network. These sensing responses to four VOCs using one conductive network have not been previously observed in chemiresistive sensing. Previously, an array of 2D MOFs was required to distinguish between similar analytes (e.g., MeOH and iPrOH). 19 The unique responses seen in Bi(HHTP) may arise from the interaction of these analytes within the bismuth coordination sphere, offering an exclusive advantage over 2D systems with lower-coordinate metal nodes.
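Theoretical LODs of this kind are usually obtained with the 3σ criterion, dividing three times the rms baseline noise by the slope of a linear response-versus-concentration fit; the full calculation used here is given in eqs S7−S9 of the SI. The calibration points and noise level in the sketch below are hypothetical.

```python
import numpy as np

def limit_of_detection(concentrations_ppm, responses_pct, baseline_noise_pct):
    """3-sigma LOD estimate: LOD = 3 * rms baseline noise / |slope| of the calibration line."""
    slope, _ = np.polyfit(concentrations_ppm, responses_pct, 1)
    return 3.0 * baseline_noise_pct / abs(slope)

# Hypothetical NO calibration (responses in % -dG/G0) and rms baseline noise
conc = np.array([5.0, 10.0, 20.0, 40.0])
resp = np.array([-6.0, -11.5, -21.0, -38.0])
print(f"LOD ~ {limit_of_detection(conc, resp, baseline_noise_pct=0.05):.2f} ppm")
```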
Studies of the Sensing Mechanism with NO and NH 3 Using MicroED, XPS, EPR, ATR-IR, and Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS). We first used MicroED to elucidate structural or electronic density changes in Bi(HHTP) induced upon exposure to NO and NH 3 (exposed for 1 h at high concentration, 10 000 ppm or 1% of analyte in N 2 ). MicroED confirmed that the coordination network maintained its crystallinity, network topology, and space group upon exposure to the gases ( Figures S51 and S52). Gas exposure did, however, induce a slight expansion in unit cell parameters (cell length α and angle β) for both structures of Bi(HHTP) ( Figure S51). We hypothesize that this change may have been induced by either occupation of the pores within the coordination network or through structural changes induced by analyte interaction with the host sites within the network. To confirm these structural changes induced by analyte exposure, we utilized pXRD analysis on samples before and after 1 h exposure to 10 000 ppm of NH 3 and NO (Figures S52 and S53, respectively). After NH 3 exposure, Bi(HHTP) exhibited a significant shift in the peak corresponding to the (321̅ ) plane. This plane runs parallel to π−π stacking layers, which suggests that NH 3 exposure may be increasing the distances between these planes. This change could result from NH 3 occupying the available void volumes within Bi(HHTP) and on the edge sites of the structure, causing the expansion and increase in spacing of these layers. After recovery in N 2 for 2 h, this shift did not return to its original position, consistent with our observations in sensing that NH 3 induces dosimetric response Bi(HHTP). For NO exposure, we observed a slight shift in the (002), (200), (202̅ ), and (321̅ ) planes. These peak shifts partially recover after a 2 h N 2 exposure, which is consistent with our observation in sensing that response to NO is partially reversible at concentrations above 20 ppm. These slight deviations in peak position could also indicate NO occupying the available volume within the pores of Bi(HHTP), which is feasible considering the bond length of N−O (1.15 Å), causing increases in distances between Bragg planes.
To gain deeper insight into the changes to the surface chemistry, oxidation states of constituents of Bi(HHTP), and material−analyte interactions, we used X-ray photoelectron spectroscopy (XPS), electron paramagnetic resonance (EPR), DRIFTS, and ATR-IR spectroscopy. XPS was used to confirm the elemental composition of Bi(HHTP), as well as identify chemical shifts typically associated with changes in the population of electronic states. EPR allowed the observation of the effective analyte binding on the location and population of unpaired spins and/or changes in the oxidation state of metal and ligand constituents within the bulk material. 24 In turn, IR techniques provided complementary details regarding the nature of the material−analyte interactions based on changes in the vibrational modes of the participating species.
XPS comparative analysis (carried out at 10 −9 Torr) was used to analyze the composition of Bi(HHTP) in its pristine state and after exposure to NO and NH 3 . First, a pristine sample of Bi(HHTP) was purged for 1 h with N 2 , while another batch was saturated with NO or NH 3 (1%, 10 000 ppm) for 1 h and sealed (left for over 24 h as samples were shipped out for analysis, see the SI for details). High-resolution deconvoluted spectra of the C 1s emission line after NO dosing revealed an increase in the peak area assigned to the C−O---Bi binding energy and a decrease in the peak area corresponding to the C=O---Bi binding energy (Figure S47b), which supports the hypothesis that the interaction is occurring within the network, causing a shift in the chemical environment near the semiquinone/catecholate region. Although not further oxidized, the deconvoluted region of Bi 4f 7/2 and Bi 4f 5/2 in the NO-dosed Bi(HHTP) displayed a slight shift toward lower binding energies (Figure 7d). This shift may be attributed to electron density transferring from the ligand or bismuth node to NO, causing higher conductivities, less charging, and thus lower binding energies of less tightly bound emitted electrons. We consider the observations from the XPS analysis to be applicable only to the irreversible chemiresistive response to NO. For NH 3 exposure, the C 1s region displayed a slight increase in the area corresponding to the C=O---Bi bond (Figure S47c) and in the region corresponding to the C−O---Bi bond. We also observed the presence of a new N 1s peak corresponding to nitrogen from NH 3 adsorbed within the network (Figure S47f). Taken together, the XPS data point to significant electronic perturbations near the bismuth metal node, possibly at the catechol region of the ligand, after NO/NH 3 analyte exposure.
To complement the understanding of material−analyte interactions by XPS and EPR, we employed DRIFTS. Difference spectra were collected upon exposure to 10 000 ppm of each gaseous analyte. After exposure to NO, the presence of negative-going bands at 1255, 860, and 800 cm −1 were attributed to the alteration of bismuth-catechol bonding and supported additional spectroscopic data acquired through ATR-IR ( Figure S45) of NO interacting at or near the bismuth center, possibly resulting in oxidative damage to the network. After exposure of pristine Bi(HHTP) to 10 000 ppm NH 3 followed by purging with N 2 , positive bands remained at 1250 and 1565 cm −1 , which suggested possible chemisorbed NH 3 species interacting with Lewis Acid Site (LAS) within the network. Exposure to NH 3 caused the appearance of negative going ν(OH) and δ(HOH) bands indicating interactions with or removal of water within the network. Furthermore, we observed varying degrees of reversibility for Bi(HHTP) toward these gaseous analytes; which was quantified by the recovery to the background absorbance after exposure to an analyte and purge with N 2 gas ( Figures S59 and S60). Bi(HHTP) demonstrated moderate reversibility toward NH 3 and no reversibility toward NO at this concentration. This concentration-dependent reversibility for NO-related DRIFTS experiments observed with 10 000 ppm may reflect a different mechanism of sensing and/or active sites at high ppm concentrations of NO compared to low ppm concentrations of NO used for chemiresistive measurements. This possibility appears to be consistent with chemiresistive measurements, which showed decreasing reversibility of response with increasing concentrations of NO. Interestingly, NH 3 DRIFTS experiments demonstrated negative-going bands corresponding to either dehydration of the network or disruption of Hbonding within the network; this negative-going water-related response could be related to a decrease in conductivity for NH 3 sensing experiments, which would be commensurate with electron donation (i.e., NH 3 adsorbing to LAS and Brønsted acid sites [BAS]) onto a p-type semiconductor.
EPR spectra were collected at room temperature in the solid state. EPR analysis of the pristine Bi(HHTP) material displayed a broad absorbance band with low intensity centered at g = 2.000, which indicated that unpaired electron density resided primarily in a ligand-centered orbital or possibly on adsorbed oxygen molecules. A slight increase in the intensity of the resonant absorbance was observed when the sample was exposed to NO (10 000 ppm for 1 h, Figure S48). This increase in absorbance was also observed for NH 3 -exposed Bi(HHTP) (10 000 ppm for 1 h, Figure S48). The exposure to NH 3 also resulted in a shift of the g-value to g = 1.991. This result suggests that NH 3 induced a slight change in the coordination sphere around the EPR-active center, consistent with what is observed at the bismuth site in XPS.
To summarize, XPS and both methods of the infrared analysis indicated that exposure of Bi(HHTP) to NO and NH 3 yielded a significant variation in the electronic state of the ligand and bismuth node. Although the bismuth center was not formally oxidized beyond its pristine state, a shift of the Bi 4f 7/2 and the Bi 4f 5/2 emission lines by XPS analysis indicated a change in electron density surrounding the bismuth node. Due to the strong binding of the analyte NH 3 within the network, we were able to observe the presence of a N 1s peak in the XPS spectrum ( Figure S46f). For NO and NH 3 , we also observed possible LAS and hydrogen-bonding interactions that were likely accompanied by charge-transfer interactions with the network. DRIFTS experiments for VOC analytes revealed LAS and hydrogen-bonding interactions, with possible protonation/ dehydration events occurring within the network. Furthermore, in our DRIFTS experiments, we observed a strong general correlation between negative/positive going water bands for all VOCs and the direction of chemiresistive response. This observation again may point to the importance of H-bonding interactions (either through BAS interactions or change in structural conformations) when considering the mechanism of sensing.
Mechanistic Studies with VOCs Using DRIFTS. Because the VOC analytes in this work showed highly reversible interactions with Bi(HHTP), ex situ analysis by MicroED, XPS, and EPR proved less informative in this context. As such, we turned our attention to the in situ characterization of host− guest interactions between analytes and the coordination network using DRIFTS. This method enabled in situ IR analysis of the solid-state material, while simultaneously permitting analyte exposure, aiding in the elucidation of host−guest interactions ( Figures S55−S58). Gas delivery for in situ DRIFTS analysis was handled with a custom-made manifold allowing delivery of vacuum, gas analytes, VOCs, and pure N 2 to purge samples. We observed varying degrees of reversibility for Bi(HHTP) toward VOC analytes; this reversibility was quantified by the recovery to the background absorbance after exposure to an analyte and purge with vacuum. Difference experiments revealed four distinct spectroscopic signatures of VOCs interacting with the network. First, exposure to acetone and EtOH produced negative-going Bi(HHTP) bands within the fingerprint region of the IR spectrum, whereas MeOH and iPrOH did not. Second, negative-going bands corresponding to either dehydration of the network or disruption of hydrogen bonding within the network resulted from exposure to acetone and MeOH. These bands were present in the characteristic water regions (3000 and 1600 cm −1 ). Third, all of the VOCs were characterized to interact with the network at LAS, most likely at available bismuth sites. Fourth, the background absorbance of Bi-(HHTP) demonstrated high reversibility toward iPrOH and EtOH, moderate reversibility toward MeOH, and partial reversibility toward acetone. These experiments demonstrated that both steric properties of the VOCs (e.g., MeOH versus EtOH) as well as the protic nature of the VOCs (e.g., iPrOH versus acetone) played significant roles in guiding the host− guest interactions at the network interface. We hypothesized that exposure to both MeOH and acetone would result in the depletion of charge carriers (holes) through either electron transfer, H-bonding, or proton-coupled electron-transfer interactions. Bi(HHTP)-β contained water both within the pores and within the coordination sphere of the bismuth nodes; thus, another possible explanation for the observed sensing responses may be two VOCs interacting through different mechanisms of H-bonding to water molecules and displacing their positions within the pores, triggering a structural change that promotes the mobility of charge carriers within the network. Future studies in transistor device architectures may help clarify the details of material−VOC interactions.
■ CONCLUSIONS
This report constitutes the first demonstration of a bismuth-based coordination polymer toward chemiresistive sensing. To the best of our knowledge, Bi(HHTP) is among the first HHTP-based network structures solved using electron diffraction techniques. 63 Bi(HHTP) consists of polyaromatic HHTP ligands interconnected with bismuth metal nodes and exhibits an unprecedented network topology with intricately connected layers, along with good electrical conductivity (5.3 × 10 −3 S·cm −1 ) when compared to other HHTP-based 2D MOFs. 25 Bi(HHTP) can be synthesized under environmentally friendly aqueous conditions at room temperature using a nontoxic metal and relatively inexpensive starting materials. Compared to other reported bismuth-based MOFs, which are commonly linked using polyaromatic carboxylate linkers and secondary building units and exhibit larger pore apertures, Bi(HHTP) adopts a herringbone-like packing (similar to HHTP packing) with slit-shaped pores.
We demonstrate the utility of this material toward chemical sensing of NO and NH 3 with limits of detection of 0.15 and 0.29 ppm, respectively, low driving voltages (0.1−1.0 V), and operation at room temperature. The LOD values for NO and NH 3 are comparable to those reported using first-row transition-metal HHTP-based 2D MOF sensors 21,23 and rival those of 2D MOFs made using layer-by-layer liquid-phase epitaxial techniques. 27 Bi(HHTP) is not as sensitive as MPc-based 2D MOFs in response to NO (Table S4). 18,20,22 What is particularly noteworthy is that Bi(HHTP) has a unique, promising selective and reversible response toward NO at concentrations of 20 ppm and below. Although reversible NO binding has been demonstrated in other MOF systems, 64 it has not been observed in chemiresistive sensing using conductive coordination networks. Current limitations of Bi(HHTP) in the context of chemiresistive sensing center on the limited control over the spatial orientation of the material on device surfaces and over the thickness of the film; both may be resolved in the future through further optimization. We also demonstrate the utility of Bi(HHTP) toward sensing four structurally analogous VOCs (acetone, MeOH, EtOH, and iPrOH), which elicit unique and reversible responses.
This work opens the door to developing a new class of semiconductive crystalline materials using high Z-effective nodes with the ability to accommodate high coordination numbers and adaptable coordination environments. This flexible coordination sphere can permit the examination of structure−property relationships of bismuth-based coordination networks with other symmetrical polyaromatic linkers bearing different heteroatoms. Our work demonstrates that harnessing electronic doping combined with the possibility of H-bonding interactions can lead to unique responses to structurally analogous analytes with similar functional groups (e.g., alcohols) and differences of one hydrogen atom (e.g., EtOH and acetone). Furthermore, advancing the development of these materials can enable a new class of sensors with ambient operating temperatures, low driving voltages in devices, and enhanced selectivity toward specific analytes for optimized performance. | 2021-12-16T17:41:30.708Z | 2021-12-13T00:00:00.000 | {
"year": 2021,
"sha1": "e752751413a1cb114c38407e89a20e36ee6cdfa0",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9201806",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "d94bda46aee561d659938e959821b436530b9966",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255440584 | pes2o/s2orc | v3-fos-license | Improved In Situ Characterization of Electrochemical Interfaces Using Metasurface‐Driven Surface‐Enhanced IR Absorption Spectroscopy
Electrocatalysis plays a crucial role in realizing the transition toward a zero‐carbon future, driving research directions from green hydrogen generation to carbon dioxide reduction. Surface‐enhanced infrared absorption spectroscopy (SEIRAS) is a suitable method for investigating electrocatalytic processes because it can monitor with chemical specificity the mechanisms of the reactions. However, it remains difficult to detect many relevant aspects of electrochemical reactions such as short‐lived intermediates. Herein, an integrated nanophotonic‐electrochemical SEIRAS platform is developed and experimentally realized for the in situ investigation of molecular signal traces emerging during electrochemical experiments. A platinum nano‐slot metasurface featuring strongly enhanced electromagnetic near fields is implemented and spectrally targets the weak vibrational mode of the adsorbed carbon monoxide at ≈2033 cm−1. The metasurface‐driven resonances can be tuned over a broad range in the mid‐infrared spectrum and provide high molecular sensitivity. Compared to conventional unstructured platinum films, this nanophotonic‐electrochemical platform delivers a 27‐fold improvement of the experimentally detected characteristic absorption signals, enabling the detection of new species with weak signals, fast conversions, or low surface concentrations. By providing a deeper understanding of catalytic reactions, the nanophotonic‐electrochemical platform is anticipated to open exciting perspectives for electrochemical SEIRAS, surface‐enhanced Raman spectroscopy, and other fields of chemistry such as photoelectrocatalysis.
Introduction
Electrochemical reactions underpin many technologies ubiquitous for a future carbon-zero world, such as green-hydrogen generation for long-term sustainable energy storage [1] and CO 2 degradation to combat the current trends of climate change. [2] Unfortunately, in general, the monitoring, and therefore understanding, of many electrochemical reactions remains a challenge. In particular, resolving the electrochemical CO 2 reduction reaction (CO 2 RR) with high efficiency, selectivity, and sensitivity remains an issue, [3] especially due to the competition with the hydrogen evolution reaction at high current densities. [4] During the CO 2 RR to desired carbon products, a compulsory step to the key intermediate CO is still not fully understood and requires further investigation. [5] For the detection and characterization of molecules, optical spectroscopy, mass spectrometry, chromatography, and fluorescence microscopy are often used. [6] Provided that analyte concentrations are high enough, optical spectroscopy methods, such as infrared (IR) or Raman spectroscopy, are highly advantageous because they allow for the retrieval of the spectral fingerprint of molecules via the detection of their rotational or vibrational modes. Unfortunately, during electrochemical reactions, most adsorbed intermediates occur in low concentrations, which limits their detection with conventional spectroscopic techniques. [7,8] Surface-enhanced IR absorption spectroscopy (SEIRAS) is a technique derived from conventional IR spectroscopy, based on the enhancement of the local electromagnetic (EM) near fields. To increase the sensitivity of SEIRAS during electrochemical reactions, a rough metal surface has typically been chosen to enhance the local electromagnetic near fields. [9] Rough and highly disordered metallic nm-sized edges coming from perforations and extrusions in the metallic film locally confine and enhance the EM fields. Unfortunately, this approach is random, does not allow for spectral tailoring of plasmonic hotspots, and consequently generates a relatively weak EM near-field enhancement. Even after improvements in the sensitivity of SEIRAS using an attenuated total internal reflection (ATR) geometry, [7,8] the characterization of CO adsorption on catalysts is still hampered by weak signal traces. [10][11][12] We overcome the challenge of detecting weak signal traces by taking inspiration from other fields of nanophotonics. In biomolecular sensing, a plethora of alternatives are used to improve molecular detection using controlled and tuneable EM near-field enhancement via the excitation of resonances through tailored system parameters on the nanoscale. Examples are plasmonic nanoparticles, non-plasmonic nanogap dimers, [13] metasurfaces based on plasmonics [14] or exotic phenomena like quasi-bound states in the continuum, [15] waveguides, [16] or 2D-integrated [17] platforms, among others. [18] Plasmonic-based sensors have become the method of choice in label-free detection of biomolecules. They can be used either as 1) refractive index sensors or 2) by coupling the resonances to the molecular modes and analysing the perturbation of the intensity either in reflection or transmission, [19] termed perturbed intensity sensing here.
In fact, some recent progress has been made to integrate plasmonic structures for refractive index sensing with electrochemistry. [19][20][21] There are also recent examples of plasmonic structures for perturbed intensity sensing for surface enhanced Raman spectroscopy used to monitor electrochemical reactions [22] or to study the mechanism of an electrocatalytic reaction. [23] Literature of plasmonic imaging provides other examples of electrochemical reactions of single nanoparticles, [24] plasmonics-supported and electrochemical monitoring of molecular interactions focused on fluorescence and confocal microscopy, [25,26] and plasmon-accelerated electrochemical reactions. [27,28] However, to the best of our knowledge, the integration of nanostructured metasurfaces for perturbed intensity sensing in SEIRAS has never been shown in combination with electrochemistry.
Here, we detect in situ the CO vibrational mode at 2033 cm −1 emerging during the electrochemical conversion of CO into CO 2 using a platinum nano-slot metasurface on a CaF 2 substrate (Figure 1a) by coupling its resonance to the molecular vibrational mode and analyzing the perturbation of the intensity in reflection. We investigated the vibrational mode of linearly adsorbed CO on platinum (CO linear ) at 2033 cm −1 because it is the most intense vibrational mode of CO on platinum. [29][30][31][32][33] The material of choice was platinum as it could fulfill all requirements, namely to function as a working electrode, support strong metasurface-driven resonances, and adsorb CO on its surface. [34] Moreover, Pt is a catalytic material for many reactions, making this platform very useful not only for the CO oxidation reaction but also for other reactions. The decision on the inverse structure (i.e., the slots), was made to preserve a connected metallic film that can carry electrical current. The nano-slots feature nanorod-like resonances as predicted by Babinet's principle. Babinet's principle predicts that if a structure features a resonance in transmission (reflection) under a certain polarization of the incident light its inverse structure will feature a similar resonance in reflection (transmission) under a 90° change in polarization as long as their geometrical parameters are the same. [35] Nano-slots can be tuned to enhance the electric and magnetic near-fields due to the excitation of a magnetic dipole aligned parallel to the long axis of the slot. [35] Consequently, a strong extended hot spot of the electric field is compressed inside the slot. Therefore, compared to resonant rod-type antennas, the inverse counterparts have been shown to feature superior detection of molecular signal traces due to linearly instead of exponentially decaying EM near-fields and single hot spot being more extended. [35] The slots can only be excited with transverse electric (TE) or s-polarized light. [35] We perform SEIRAS in an ATR geometry to further improve the sensing performance while maintaining free accessibility of the electrode surface for reactants and products, and to minimize the contribution of the electrolyte to the IR spectrum. [7,8] We confirm the detection of adsorbed CO via the observation of the typical Stark shift and resolve a so far scarcely studied [36][37][38] effect due to the decrease of the CO coverage on the surface of platinum during the electrochemical oxidation. Furthermore, the presence of a second peak at 2086 cm −1 on the spectral location of the linear vibrational mode could be attributed to the effect of the crystal orientation. Finally, we establish a methodology for designing similar nanophotonic-electrochemical platforms.
Numerical Design of Catalytic Nano-Slot Metasurface
We start the implementation of our electrochemical sensing platform with the numerical design of the chosen nano-slot metasurface geometry. The structure consists of a unit cell composed of a single slot in an otherwise connected platinum film submerged in water on CaF 2 (Figure 1b). Notably, we model adsorbed CO by including an artificially created material covering the inside walls parallel to the long axis of the slot. The choice of the unit cell parameters was guided by Huck et al. [35] and modified in accordance with fabrication constraints. Huck et al. [35] optimized a gold nano-slot metasurface in the mid-IR for normal-incidence illumination in air for high quality factors (Q-factors) and electric near fields. The Q-factor relates the initial energy stored in a resonator to the energy dissipated in one radian of the cycle of oscillation. [39] On the basis of our simulations, the nano-slot metasurface achieves a resonance with a modulation in the absorbance of over 82% and a Q-factor of ca. 6.3 (see Experimental Section for details on the Q-factor calculation). Furthermore, the metasurface numerically exhibits an electric near-field intensity enhancement |E/E 0 | 2 of 560. This value can be increased in future experiments by decreasing the width of the slots [35] but was limited here due to fabrication constraints. The maximum electric near-field enhancement occurs inside the slots close to the faces parallel to the long axis of the slot (Figure 1c), with the electric field pointing orthogonally to them. The parameters of the unit cell of the nano-slot metasurface are defined in Figure 1b, where p x and p y are the unit cell lengths in x and y. l, w, and h are respectively the length, width, and height of the slot, and t is the thickness of the molecular layer used to model adsorbed CO. The gap g between two slots in x is g = p x − l.
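The Q-factor quoted above is the resonance frequency divided by its full width at half maximum; the exact extraction procedure is described in the Experimental Section. The rough numerical sketch below reads the width off a made-up Lorentzian dip whose parameters only mimic the simulated resonance.

```python
import numpy as np

def q_factor(wavenumbers_cm, reflectance, baseline=1.0):
    """Rough Q estimate from a reflectance dip: Q = nu_res / FWHM, with the width
    read off at half depth relative to an off-resonance baseline."""
    r = np.asarray(reflectance)
    nu = np.asarray(wavenumbers_cm)
    i_min = np.argmin(r)
    half_depth_level = (baseline + r[i_min]) / 2.0
    inside = np.where(r <= half_depth_level)[0]
    fwhm = abs(nu[inside[-1]] - nu[inside[0]])
    return nu[i_min] / fwhm

# Made-up Lorentzian-like dip centred at 2033 cm-1 (depth 0.82, HWHM 160 cm-1)
nu = np.linspace(1500.0, 2600.0, 2000)
refl = 1.0 - 0.82 / (1.0 + ((nu - 2033.0) / 160.0) ** 2)
print(f"Q ~ {q_factor(nu, refl):.1f}")  # ~6.4 here, close to the simulated Q of ca. 6.3
```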
Huck et al. [35] found that the highest Q-factor and electric near-field enhancement occur when w is small, p y = λ res /2, and g = λ res /2, where λ res is the central wavelength of the resonance. However, to satisfy the experimental conditions the nano-slot metasurface was simulated in water instead of air, and for an angle of incidence θ = 72°. Under these conditions, tuning the resonance to 2033 cm −1 (≈4.92 µm) leads to the appearance of a Rayleigh anomaly (RA) such that λ RA > λ res , where λ RA is the central wavelength of the RA. The RA is a phenomenon associated with light diffracted parallel to the surface of a periodic structure. [40] When λ RA > λ res , the resonance lifetime and electric near-field enhancement are strongly reduced. [41] Consequently, a metasurface where λ RA > λ res will exhibit poor sensing performance. For this reason, g was reduced to 220 nm to push the resonance on the evanescent side of the RA (Figure 1d).
[Figure 1: a) ATR-SEIRAS configuration with TE-polarized light at azimuthal angle φ = 0° and polar angle θ = 72° with respect to the Pt film (xy-plane); b) Pt-on-CaF 2 nano-slot unit cell with two CO model layers (l × h × t) parallel to the long edges of the slot, omitting the 1 nm Ti adhesion layer used in fabrication; geometrical parameters for (c)−(e): h = 30 nm, w = 200 nm, l = 1380 nm, p y = 1400 nm, p x = 1600 nm; c) electric near-field intensity map (maximum 560, no CO layer); d) simulated reflectance with and without the CO model layer (t = 5 nm), including the RA; e) differential absorbance without CO and with a 2.6 Å thick CO layer.]
In coupled-resonator systems, the excitation efficiency of a resonator is significantly dependent on the ratio of its losses to external radiation γ e (i.e., light scattering) and intrinsic material absorption γ i , which strongly depends on the system design and the parameters chosen. [42] When γ e ∼ γ i the system is critically coupled and the second oscillator will lead to a dip in the absorption cross section. SEIRAS performance can be maximized by utilizing a system that is close to the critical-coupling condition. [42,43] Here, the nano-slot metasurface is near the critical-coupling condition with γ e /γ i = 1.2. Thus, when the resonance overlaps with the vibrational mode of adsorbed CO at 2033 cm −1 , the coupling between the two resonators leads to a small peak in the reflectance spectrum (Figure 1d), where t was set to 5 nm as an example to visualize the modulation of the resonance. The differential absorbance, defined as log(I 0 /I), where I and I 0 are the reflectance measured with and without a CO model molecular layer, respectively (Figure 1c), was used to extract the signal traces of adsorbed CO. To obtain a more realistic expectation for the differential absorbance resulting from adsorbed CO on the nano-slot metasurface, the CO layer was modeled with a thickness of 2.6 Å (Figure 1e) in agreement with the literature. [44]
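The differential absorbance defined above is just the base-10 logarithm of the ratio of two reflectance spectra; a minimal sketch with toy numbers is shown below. In the experiments that follow, the reference spectrum corresponds to the CO-free (e.g., Ar-saturated) interface.

```python
import numpy as np

def differential_absorbance(reflectance_reference, reflectance_sample):
    """Differential absorbance log10(I0 / I), where I0 is the reference reflectance
    (no adsorbed CO) and I is the reflectance with the analyte present."""
    i0 = np.asarray(reflectance_reference, dtype=float)
    i = np.asarray(reflectance_sample, dtype=float)
    return np.log10(i0 / i)

# Toy spectra: a small dip appears at the middle wavenumber only when CO is present
i0 = np.array([0.300, 0.295, 0.290])
i = np.array([0.300, 0.292, 0.290])
print(differential_absorbance(i0, i))  # positive band only where the analyte absorbs
```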
Metasurface Characterization
First, the effect of the metasurface-driven resonance position on the coupling with the vibrational mode of CO linear is studied in ATR mode using a focal plane array detector (Figure 2a). To test our nanophotonic-electrochemical platform, we first tuned the resonance position to match the vibrational mode of CO linear in 0.5 m K 2 CO 3 saturated with carbon monoxide. Then, we detuned the resonance to the blue and red spectral regions by decreasing and increasing the slot length l by 200 nm from 1.33 µm, respectively (Figure 2b). There is a good fit between the numerically and experimentally obtained resonance positions, with a discrepancy of less than 40 cm −1 . As predicted by the simulations, a dip is observed in the resonance attributed to the vibrational mode of CO linear . The Q-factor of the experimentally measured metasurface-driven resonance matching this mode (Figure 2b) is 2.9, which is slightly lower than the simulated value. The smaller experimental Q-factor is due to fabrication imperfections compared to the ideal numerical model. [45] Following these results, slots with a length of ca. 1.33 µm were found to match the vibrational mode of CO linear . According to the literature, [29][30][31][32][33][46] this mode should be spectrally located between 2020 and 2080 cm −1 . On the basis of our experiments, CO linear is located at 2033 cm −1 .
The differential absorbance highlights a more intense and well-defined CO linear signal for the sample which has the best spectral overlap (Figure 2c). Two peaks can be observed at 2033 and 2086 cm −1 . The CO signals have a Fano-type line shape due to the narrow discrete nature of the vibrational mode of CO linear interfering with the broad spectral line of the metasurface-driven resonance. [47] The redshifted sample yields a highly asymmetric CO signal due to the strongly off-resonance coupling between the resonance and the vibrational mode of CO linear . [48] In addition, the redshifted sample presents a strong peak ≈1843 cm −1 , which is attributed to a second configuration of adsorption, the CO bridge (CO bridge ). [29,30,32,46] The scanning electron microscopy images show good quality of the fabricated nanostructures (Figure 2d). For the next part of this work, the electrochemical behavior of the sample with matching spectral overlap of its resonance with the vibrational mode of CO linear is studied.
CO Adsorption at Open Circuit Potential
Here, we follow in situ the CO adsorption during the saturation of an electrolyte at open circuit potential (OCP) and characterize the CO adsorption by performing SEIRAS concurrently with electrochemical cyclic voltammetry. The transition from the argon-saturated (Ar sat ) to the CO-saturated (CO sat ) electrolyte is accompanied by a shift of the OCP due to a change of the equilibrium determining redox reaction (Figure 3a). At the equilibrium potential of the Ar saturated electrolyte (ca. 1000 mV RHE ) CO is oxidized and the OCP drops towards negative values where CO adsorbs on the Pt surface.
The SEIRAS measurements were taken in 0.5 m K 2 CO 3 with and without CO using s-polarized light (Figure 3b).
[Figure 3: electrochemical and spectroscopic response of the nanophotonic platform at the OCP during the transition from an Ar-saturated to a CO-saturated electrolyte. a) Evolution of the OCP during the transition from Ar sat (≈1000 mV RHE ) to CO sat (≈260 mV RHE ); b) FTIR spectra of the Pt nano-slot metasurface/electrolyte interface in Ar sat and CO sat electrolyte, with a heat map of the integrated resonance area between 2600 and 1800 cm −1 collected by a 64 × 64 detector array; c) evolution of the differential absorbance CO linear peaks during CO bubbling; d) comparison of CO linear signals in CO sat electrolyte after 80 min of bubbling for a pure Pt layer (p-polarized light) and for the nanophotonic-electrochemical platform (s-polarized light).]
Looking at the differential absorbance (Figure 3c), a distortion of the baseline appears at 2460 s (725 mV RHE ). Then, after ca. 2800 s (430 mV RHE ) two clearly distinguishable CO linear peaks emerge. These peaks become more discernible with time as the coverage of adsorbed CO increases. As the intensity of the peaks stabilizes the maximum coverage of CO is reached.
The CO signal obtained with the nano-slot metasurface compared to that obtained with a pure platinum layer (30 nm) at the OCP is increased by an estimated factor of 27 (Figure 3d). Both samples have been evaporated simultaneously. This gives both systems the same material properties such as surface roughness. For this reason, the 27-fold difference between the signals obtained with the two systems can be directly linked to the metasurfaces-driven enhancement provided by nanostructuring the surface of the working electrode.
For adsorbed CO to interact with incident light, the orientation of the transition dipole moment of the CO vibrational mode relative to the electric field component needs to be nonzero. [49] Consequently, only the (interior) side walls parallel to the long axis of the slots can be considered active representing a ratio of active to total surface of 3.6% compared to a smooth platinum layer. This leads to an experimentally determined local signal enhancement of above 700.
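As a quick consistency check on the numbers above (an illustrative back-of-the-envelope estimate, not a value taken from the Supporting Information), dividing the measured overall enhancement by the active-area fraction recovers the quoted local enhancement:
\[ \mathrm{local\ enhancement} \approx \frac{27}{0.036} \approx 750, \]
which is consistent with the stated value of above 700.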
The second peak at 2086 cm −1 was only observed using the nano-slot metasurface. The most likely explanation is that the higher resolution achieved with the nano-slot metasurface allows for the deconvolution of this peak from the background, which was not possible in previous architectures based on a continuous Pt film. According to the literature, several possibilities exist. The first assumption is that CO could adsorb on different crystal orientations with different binding energies. [46,50] As reported by A. Cuesta et al., [50] CO adsorbed on Pt(111) single crystals was found at ≈2070 cm −1 , [36,50,51] while CO adsorbed on Pt(100) electrodes was detected between 2027 cm −1 [52,53] and 2050 cm −1 . [50,54] These two values are in good agreement with the ones observed here (2086 and 2033 cm −1 ). Another possibility is the adsorption of CO on terraces (higher frequency band at 2086 cm −1 ), steps, and defects (lower frequency band at 2033 cm −1 ). [29,30,55]
CO Oxidation on Platinum
The behavior of the nano-slot metasurface was evaluated during the electrochemical oxidation of carbon monoxide using electrochemical cyclic voltammetry. The anodic scan in CO-saturated electrolyte presents an initial state with a low current (Figure 4a, black line). When an applied potential of ≈550 mV RHE is reached, the current density plateaus at around +25 µA cm −2 , which is attributed to CO oxidation. In the corresponding reaction scheme, [56,57] * is a free adsorption site on platinum, and CO ad and OH ad correspond to adsorbed CO and OH on Pt, respectively (a hedged reconstruction of the commonly cited reaction steps is given at the end of this subsection). At ca.
1150 mV RHE the current density starts to decrease. The origin of this decrease is still debated in the literature. One explanation attributes the decreasing current density to competing adsorption of CO and OH on the Pt surface at higher potentials. [58] Another possibility discussed is that the formation of a thin oxide or hydroxide Pt layer prevents the oxidation of CO. The latter assumption is supported by the reduction dip (from 860 to 620 mV RHE ) of platinum in argon-saturated electrolyte (Figure 4a,b). The behavior of the cathodic scan is similar, except that the onset of CO oxidation is shifted to more negative potentials resulting in a hysteresis. Moreover, a shift in the onset of the hydrogen evolution reaction in Ar sat and CO sat electrolyte is observed, highlighting the poisoning behavior of adsorbed CO on the platinum surface. [59] Similarly to our Fourier-transform infrared (FTIR) measurements in CO sat electrolyte under OCP (Figure 3c), two CO linear peaks were also found during the electrochemical potential sweeps (Figure 4c,d). There is a spectral shift during the anodic and cathodic scan (between 50 and 550 mV RHE ) which is attributed to either a higher π -back-donation from the metal to CO [38,60] and/or to the Stark effect. The Stark effect results from the interactions between the surface electric field and the dipole moment of the adsorbates. [60][61][62] During the anodic scan (Figure 4e), the most intense peak shows a blue shift of 53 cm −1 V −1 in agreement with the literature. [36][37][38]61,63] The second peak shows a blueshift of 33 cm −1 V −1 . Between 650 and 750 mV RHE a redshift is observed which is not well documented in the literature. [37,55,60,64] The redshift is attributed to a decrease of the CO coverage due to its oxidation into CO 2, decreasing the dipole-dipole interactions. [55,65] The observation of the coverage effect was possible here due to the high resolution reached with the nano-slot metasurface. It was not resolved with a continuous platinum film. At higher anodic potentials, the CO linear peaks disappeared due to CO oxidation. Since CO only oxidizes at the Pt surface the electrolyte remains saturated with CO highlighting that there is no contribution of dissolved CO to the IR signal. During the anodic scan, there is a slight increase in the area of the first peak (≈2033 cm −1 ), while the area of the second peak slightly decreases. This behavior could be explained by a surface migration of adsorbed CO to a more stable position. [31,37,38,52] Alternatively, the reconstruction or roughening of the Pt surface with electrical polarization [66,67] could lead to a modification of the surface microstructure and CO adsorption energy. [68] Looking at spectra obtained during the cathodic scan (Figure 4f), the second peak (≈2086 cm −1 ) almost disappeared. This supports the assumption that the cause is the platinum surface modification at high applied potentials. At high cathodic potentials (150 to 50 mV RHE ) a decrease of the CO peak is observed and attributed to the hydrogen evolution reaction, [63] indicating that the adsorption of hydrogen displaces adsorbed CO.
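The reaction equations referenced above appear not to have survived extraction. As an illustration, the Langmuir-Hinshelwood steps commonly written for CO electrooxidation on Pt (the mechanism for which references such as [56,57] are typically cited) take the following form; the exact notation used by the authors may differ:
\[ \mathrm{H_2O + {*} \longrightarrow OH_{ad} + H^+ + e^-} \]
\[ \mathrm{CO_{ad} + OH_{ad} \longrightarrow CO_2 + H^+ + e^- + 2\,{*}} \]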
Conclusion
To the best of our knowledge, we have developed the first hybrid nanophotonic-electrochemical platform for SEIRAS based on a platinum nano-slot metasurface. The resonance of the metasurface was numerically modeled giving a maximum electric near-field intensity enhancement of 560. The resonance was tuned to couple with and enhance the CO vibrational mode at 2033 cm −1 . The principle behind the sensing improvement due to the electric near-field enhancement was tested by fabricating on-resonance and detuned metasurfaces and carefully analyzing the resonance. The numerical simulations and SEIRAS experimental results were in good agreement. Two peaks were resolved for CO linear which could be attributed to adsorption of CO on Pt(111) and Pt(100). The vibrational mode of CO linear was best observed with a spectrally overlapping resonance leading to an experimental signal improvement of more than 27-fold over a conventionally used platinum film. During the electrochemical oxidation of CO, a classic Stark effect was observed. Moreover, thanks to the high resolution provided by the nano-slot metasurface, a redshift of the vibrational mode of CO linear was observed, linked to a decrease of the coverage of adsorbed CO due to its oxidation. We anticipate our proof-of-concept nanophotonic-electrochemical platform for SEIRAS to guide new system designs and material combinations suitable to characterize different electrochemical interfaces, reaction products, and short-lived intermediates.
Experimental Section
Numerical Simulations: The simulations were performed in CST Studio Suite 2021 using the finite-element frequency-domain Maxwell solver. CaF 2 was simulated using a refractive index, n, of 1.4, the surrounding medium as water with n = 1.33 and platinum using the data given by Rakić et al. [69] The inside walls perpendicular to the electric field were covered with a model material to represent the vibrational mode of CO linear at ≈2033 cm −1 (see Supporting Information for more details). The titanium adhesion layer was not simulated as including it did not lead to substantial spectral shifts in the resonance position. An impedance-matched open port with a perfectly matched layer introduced linearly polarized light at an angle of 72° across the CaF 2 layer towards the nano-slots. At 72° the light was internally reflected at the CaF 2 -Pt interface. Therefore, the boundary opposite the open port was set as perfect electric conductor. The unit cell was defined and then simulated as an infinite periodic array via Floquet boundaries. A field monitor was placed at the center of the slot in the xy-plane. The highest field enhancement is found slightly above the apex of the slots. The value of the highest field enhancement of the system was evaluated within the volume of the numerical model. To extract the Q-factor and coupling ratio γ e /γ i , the simulated resonance was fitted in reflectance ( Figure 1d, blue curve) using temporal coupled mode theory according to Hu et al. [70] Metasurface Fabrication: CaF 2 was selected as the substrate due to its transparent nature in the mid-IR spectral range, low solubility, and high chemical stability. The measurements shown in Figures 3 and 4 used metasurface arrays with at least 2200 by 2700 unit cells resulting in a pattern area of approximately 13.3 mm 2 , which ensured that there are more than enough unit cells for the measured resonance to correspond to the mode of the infinite periodic array used for the numerical simulations. After sample cleaning (acetone bath in an ultrasonic cleaner followed by oxygen plasma cleaning) the substrate was spincoated first with an adhesion promoter (Surpass 4000), then with a layer of negative tone photoresist (ma-N 2403) which was baked at 100 °C for 60 s, and finally with a conducting layer (ESpacer 300Z). The metasurface patterns were created by defining the unit cell and reproducing it in the x and y-directions. Then, the patterns were written via electron-beam lithography (Raith Eline Plus) with an acceleration voltage of 30 kV and an aperture of 20 µm. The exposed resist was developed in ma-D 525 for 70 s at room temperature. The patterned surface was then coated with a titanium adhesion layer (1 nm at 0.4 Å s −1 ) and a platinum film (30 nm at 2 Å s −1 ) using electron-beam evaporation (PRO Line PVD 75, Lesker). Finally, an overnight lift-off in mr-REM 700 concluded the top-down fabrication process. A pure 30 nm thick platinum film on 1 nm titanium on CaF 2 functioned as a reference for the in situ SEIRAS measurements.
In Situ SEIRAS and Electrochemical Measurements: SEIRAS was performed using a Vertex 80 coupled with an IMAC chamber from Bruker. Each sample was mounted on a VeeMax III (purged with nitrogen) from PIKE Technologies in attenuated total internal reflection (ATR) mode with a light polarizer, an electrochemical Jackfish cell, and a CaF 2 prism beveled at 72°. A classical three-electrode system was used with a Saturated Calomel Electrode (E = 0.244 V SHE ), a platinum wire as counter electrode, and the platinum sample as working electrode. The IMAC chamber is equipped with a focal plane array detector composed of 64 × 64 MCT-detectors (a total of 4096 detectors), which allows mapping of the studied sample. Each detector collects its own spectrum, and the slot-covered (active) area is then identified by integrating each spectrum between 1600 and 2800 cm −1 (Figure 3b, inset). Finally, an average can be determined using the spectra of the detectors that probed the resonance. A baseline correction is applied to this average, as well as a Savitzky-Golay filter to smooth the data.
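A minimal sketch of this per-pixel processing chain is given below, assuming the 64 × 64 spectra are stored in a NumPy array and the wavenumber axis is ascending; the function name, the thresholding rule, the linear baseline, and the filter window are illustrative assumptions rather than the authors' exact parameters.

import numpy as np
from scipy.signal import savgol_filter

def average_resonant_pixels(spectra, wn, band=(1600.0, 2800.0)):
    # spectra: array of shape (64, 64, n_points); wn: ascending wavenumber axis in cm^-1
    sel = (wn >= band[0]) & (wn <= band[1])
    area = np.trapz(spectra[:, :, sel], wn[sel], axis=-1)   # integrated area per detector pixel
    mask = area > area.mean() + area.std()                  # assumed rule for picking pixels over the slot array
    avg = spectra[mask].mean(axis=0)                        # average spectrum of the resonant pixels
    baseline = np.polyval(np.polyfit(wn, avg, 1), wn)       # crude linear baseline correction (illustrative)
    return savgol_filter(avg - baseline, window_length=11, polyorder=3)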
For the characterization of the resonance, its position was determined using three samples composed of arrays with different nanostructure sizes. For each sample, the resonance was measured in K 2 CO 3 (0.5m, pH 11.9) electrolyte saturated with Ar and then saturated with CO. Prior to the first characterization, a cyclic voltammogram (20 mV s −1 ) was recorded in order to confirm the cleanliness of the electrode surface. Then, an initial background was acquired using p-polarized light and the Fano resonance was characterized using s-polarized light. Each spectrum was recorded with a resolution of 4 cm −1 and the final mapping results from a collection of 32 scans. The enhancement of the nano-slot metasurface is obtained by comparison of the vibrational mode of CO linear on a pure Pt layer (30 nm) without nanostructures.
The adsorption of CO during the transition from Ar sat to CO sat electrolyte (0.5m K 2 CO 3 ) was studied using the nano-slot metasurface with the best overlapping resonance with the vibrational mode of CO linear . Cleaning and background acquisition protocols were the same as described above. Carbon monoxide slowly flowed into the electrochemical cell and spectra were acquired regularly during the transition from Ar sat to CO sat electrolyte at the OCP.
The oxidation of CO during potential sweeps was investigated after 2 h of CO bubbling. A cyclic voltammogram, with a slow scan rate (0.25 mV s −1 ), from the OCP to + 1700 mV RHE and back to −100 mV RHE was performed and a spectrum was acquired every 100 mV.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2023-03-22T15:20:29.058Z | 2023-03-20T00:00:00.000 | {
"year": 2023,
"sha1": "47d6a3c6a45304dd99f751e06e6ef1aac7608250",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adfm.202300411",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "d05aec90ff72117e2d235277da3dc6154730ba3e",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
86323160 | pes2o/s2orc | v3-fos-license | Digestible lysine levels in diets for laying Japanese quails
The objective of this study was to estimate the digestible lysine requirement of Japanese quails in the egg-laying phase. A total of 336 female Japanese quails (Coturnix coturnix japonica) of average initial age of 207 days were distributed in a completely randomized experimental design, composed of 6 treatments (lysine levels) with 7 replicates and 8 birds per experimental unit, with duration of 84 days. Experimental diets were formulated from a basal diet, with corn and soybean meal, with 2.800 kcal ME/kg and 203.70 g/kg crude protein, showing levels of 9.50; 10.00; 10.50; 11.00; 11.50; and 12.00 g/kg digestible lysine; diets remained isoprotein and isocaloric. The following variables were studied: feed intake (FI); lysine intake (LI); egg production per bird per day (EPBD); egg production per bird housed (EPBH); production of marketable eggs (PME); egg weight (EW); egg mass (EM); utilization efficiency of lysine for egg mass production (UELEM); feed conversion per mass (FCEM); feed conversion per dozen eggs (FCDZ); bird availability (BA); percentages of yolk (Y), albumen (A) and shell (S); specific egg weight (SW); nitrogen ingested (NI); nitrogen excreted (NE); and nitrogen balance (NB). Significant effect was only observed for LI, EW, EM, UELEM, FCEM, Y, A and SW. The digestible lysine level estimated in diets for laying Japanese quails is 11.20 g digestible lysine/kg diet, corresponding to an average daily intake of 272.23 mg
Introduction
Quail raising has stood out in the aviculture sector, especially for egg production, for being extremely attractive and profitable for the Brazilian agribusiness, which makes it a good option, be it for small or big farmers.
Advancements in the knowledge of the nutritional requirements of birds at their many phases have constantly improved diet quality: first in the sense of reaching maximum production, and then in the search for the lowest feed cost and the most efficient conversion of feed into eggs (Ceccantini & Yuri, 2008). Thus, greater knowledge of protein metabolism in birds, together with the commercial production of amino acids, has enabled the use of the ideal protein concept in the formulation of diets.
This concept can be defined, theoretically, as the exact balance of the amino acids in the diet capable of meeting, without excess or deficiency, the requirements of all the essential amino acids for production and maintenance of birds, expressing them as percentage in relation to the lysine which is adopted as reference amino acid.
Lysine is the second limiting amino acid in diets for birds; its use, in lower or excessive levels, regarding the nutritional requirement of this nutrient in birds, may bring metabolic damages, which could compromise bird performance (Kidd & Kerr, 1998).
For many decades, studies on the utilization of lysine, based on the concept of ideal protein in the diets of birds, have been developed, because of the great applicability, ease of utilization in the formulation of diets and low costs of the acquisition of L-lysine-HCl; however, in quail raising, these studies are recent.
Estimating the digestible lysine requirement for Japanese quails at laying, Pinto et al. (2003) suggested the level of 11.17 g digestible lysine/kg of diet for diets containing 195.60 g crude protein (CP)/kg of diet. Rodrigues et al. (2007) evaluated the digestible lysine nutritional requirements in diets for Japanese quails in the laying phase and concluded that the digestible lysine requirement was 10.30 g/kg of the diet. Assessing the digestible lysine nutritional requirement in diets for Japanese quails in the laying phase containing 195.0 g CP/kg of diet, Demuner et al. (2009a) concluded that the digestible lysine requirement estimated was 10.90 g/kg of diet.
The objective with this research was to estimate the digestible lysine level in diets for Japanese quails during the egg-laying phase.
Material and Methods
A total of 336 female quails of the Japanese subspecies Coturnix coturnix japonica of 207 days of age with initial body weight of 179.82±0.73 g were distributed in a completely randomized experimental design composed of six treatments (lysine levels), with seven replicates and eight birds per experimental unit. The experiment lasted 84 days. Birds were housed in galvanized wire cages equipped with nipple drinkers and trough feeders, at an animal density of 106 cm²/bird per experimental unit.
The lighting program was of 16 daily hours and maximum and minimum temperatures were measured once daily at 8h00; relative air humidity of the facility was measured twice daily, at 8h00 and 16h00, with maximum minimum thermometers and dry and wet bulb thermometers, placed at the center of the shed, at the height of birds.
Water and feed were supplied ad libitum. Feed was supplied twice daily, aiming at avoiding waste. Collection and counting of eggs were performed every day, in the morning.
The digestible lysine levels utilized in the formulation of diets were based on studies with broilers, since there are not yet sufficient studies determining the amino acid digestibility of feedstuffs for quails. Thus, the composition, nutritional values and digestibility values of the ingredients utilized in the formulation of diets were taken from Rostagno et al. (2005).
The following performance variables were evaluated: feed intake (g/bird.day), lysine intake (mg/bird.day), egg production per bird per day (%), egg production per bird housed (%), egg weight (g), egg mass (g/bird.day), utilization efficiency of lysine for egg mass production (egg mass/digestible lysine intake, expressed in grams of mass produced per gram of digestible lysine intake), feed conversion per egg mass (kg of diet/kg of eggs) and per dozen eggs (kg of diet/egg dz), bird availability (total dead birds - total live birds × 100), nitrogen ingested (g), nitrogen excreted (g) and nitrogen balance (g).
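As an illustration of how the derived variables above follow from the primary measurements, a minimal sketch is given below; the function and argument names are hypothetical, and the dozen-egg conversion assumes all produced eggs are counted.

def performance_metrics(feed_g_bird_day, lysine_g_per_kg_diet, egg_prod_pct, egg_weight_g):
    # digestible lysine intake: g feed/day x g lysine/kg feed equals mg lysine/day
    lysine_intake_mg = feed_g_bird_day * lysine_g_per_kg_diet
    egg_mass = egg_weight_g * egg_prod_pct / 100.0                      # g of egg per bird per day
    fcem = feed_g_bird_day / egg_mass                                   # kg of diet / kg of eggs (units cancel)
    fcdz = (feed_g_bird_day / 1000.0) / (egg_prod_pct / 100.0 / 12.0)   # kg of diet / dozen eggs
    uelem = egg_mass / (lysine_intake_mg / 1000.0)                      # g egg mass / g digestible lysine intake
    return {"LI": lysine_intake_mg, "EM": egg_mass, "FCEM": fcem, "FCDZ": fcdz, "UELEM": uelem}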
After weighing, eggs were identified and cracked. The yolk of each egg was weighed, and its shell was washed and dried in the air, for determination of its weight; albumen weight was calculated as the difference between egg weight and the sum of yolk and eggshell weights.
Specific egg weight was determined through immersion of all intact eggs collected into NaCl solutions with densities varying from 1.055 to 1.090 g/cm³, with 0.005 g/cm³ intervals and evaluated for density or specific egg weight, by the Archimedes principle (Thompson & Hamilton, 1982;Yannakopoulos & Tserveni-Gousi, 1986).
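The flotation procedure above amounts to assigning each egg the density of the least dense NaCl solution in which it floats. A small illustrative sketch follows (function and variable names are assumptions, not part of the original protocol):

import numpy as np

densities = np.arange(1.055, 1.0901, 0.005)   # g/cm3, from 1.055 to 1.090 in 0.005 steps

def specific_gravity(floats_in):
    # floats_in: booleans, one per solution, in ascending order of density
    for rho, floated in zip(densities, floats_in):
        if floated:
            return rho        # first (least dense) solution in which the egg floats
    return None               # egg sank in all solutions (specific gravity above 1.090)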
For the estimate of nitrogen balance, at the end of the experimental period, in four randomly chosen replicates of each treatment, eight birds were housed in galvanized wire cages on a battery pattern, provided with trough feeders and drinkers, on galvanized metal sheet and PVC, respectively, and galvanized metal sheet tray coated with plastic, for the collection of excreta.Birds were subjected to an experimental adaptation period of three days, and right after, excreta collection started, twice a day, for three consecutive days; excreta were stored in freezer.After collection period, the material was weighed, homogenized and samples were taken and dried in an oven.Feed intake in the collection period was recorded and experimental diets corresponding to each experimental unit were sampled for further laboratory analyses.
Analyses of dry matter and total nitrogen in the experimental diets and collected excreta were performed according to the methodology described by Silva & Queiroz (2002). Nitrogen balance was calculated as the difference between the amounts of nitrogen ingested and excreted by the quails.
The data were analyzed on software SAEG (Sistema para Análises Estatísticas e Genéticas, version 9.1), developed at Universidade Federal de Viçosa, 2007, by means of procedures for variance and regression analyses. The study adopted α = 0.05.
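Where a quadratic effect was detected, requirement estimates such as the 11.20 g/kg level reported below correspond to the vertex of the fitted second-order polynomial. A minimal sketch of that calculation, with the treatment means left as hypothetical inputs, is:

import numpy as np

levels = np.array([9.5, 10.0, 10.5, 11.0, 11.5, 12.0])   # g digestible lysine / kg of diet

def quadratic_optimum(levels, response_means):
    a, b, c = np.polyfit(levels, response_means, 2)       # fits response = a*x**2 + b*x + c
    return -b / (2.0 * a)                                 # vertex: level that maximizes (or minimizes) the response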
Results and Discussion
The maximum average temperature reached was 30.56±3.2 ºC, and the minimum was 20.03±0.92 ºC. The average relative air humidity was 80.4±2.6% in the morning and 69.9±6.4% in the afternoon. In the adult phase, the thermal comfort range or thermoneutral zone of quails is between 18 and 22 ºC and the relative air humidity, between 65 and 70% (Oliveira, 2004). Thus, according to the values recorded for average air temperature and relative air humidity, throughout the experiment, quails underwent periods of heat stress.
The digestible lysine levels did not affect (P>0.05) feed intake (Table 2); these results are similar to those found by Ribeiro et al. (2003) and Demuner et al. (2009a), who worked with Japanese quails in the laying phase. On the other hand, the results found do not corroborate those obtained by Rodrigues et al. (2007), who, evaluating five levels (8.80; 9.60; 10.40; 11.20; and 12.00 g) of digestible lysine/kg of diet for Japanese quails in the egg-laying phase, observed quadratic effect on feed intake. These authors explained that the increase in digestible lysine levels in the diet elevated feed intake. Linear increase (P<0.05) was verified in digestible lysine intake as its concentration in the diet rose (Table 2); among the levels studied (9.50 -12.00 g digestible lysine/kg of diet), every 0.50 g digestible lysine/kg of diet increased digestible lysine intake by 12.98 mg. These results are similar to those found by Pinto et al. (2003) and Rodrigues et al. (2007), who found linear increase of 25.8 and 21.7 mg in lysine intake for every 1.00 g digestible lysine/kg of diet increase, respectively.
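A quick plausibility check on the 12.98 mg step (our inference, not a value reported by the authors): lysine intake equals feed intake multiplied by the dietary lysine concentration, so each 0.50 g/kg increment adds 0.50 mg of lysine per gram of feed consumed,
\[ \Delta LI \approx FI \times 0.50\ \mathrm{mg\,g^{-1}}, \]
and a step of 12.98 mg therefore implies an average feed intake of roughly 26 g/bird per day, a typical value for laying Japanese quails.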
Egg production per bird per day and per bird housed and production of marketable eggs were not affected (P>0.05) by the digestible lysine levels in the diets (Table 2).These results are similar to those found by Demuner et al. (2009 a,b), who evaluated the nutritional requirements of digestible lysine for Japanese quails at the laying phase.However, the values achieved are not in accordance with those presented by Rodrigues et al. (2007), who defined the level of 10.30 g digestible lysine/kg of diet, which resulted in higher percentages in the egg production of Japanese quails.
The digestible lysine levels utilized in experimental diets showed quadratic effect (P<0.05) for egg weight; the level of 11.20 g digestible lysine/kg of diet promoted the highest egg weight (Table 2).The results obtained in the present study are in agreement with those found by Oliveira et al. (1999), Pinto et al. (2003), Ribeiro et al. (2003) and Demuner et al. (2009b), who verified higher results for egg weight from Japanese quails as they increased the lysine level in the diet.For their part, Garcia et al. (2005), Rodrigues et al. (2007) and Demuner et al. (2009a) did not verify changes in egg weight of Japanese quails resulting from the increase in the lysine level in the diet.
Egg mass also varied (P<0.05) with digestible lysine levels; it increased quadratically up to the estimated level of 11.20 g digestible lysine/kg of diet (Table 2).The result achieved is consistent with those observed by Pinto et al. (2003), where the level of 11.17 g digestible lysine/kg of diet was the one which maximized egg mass.Contrarily, Ribeiro et al. (2003), Rodrigues et al. (2007) and Demuner et al. (2009a) did not verify effect of digestible lysine levels on the egg mass of Japanese quails.Since egg production did not vary between the treatments, the response pattern of egg mass is directly linked to the egg weight results.
In accordance with the results obtained for digestible lysine intake (Table 2), utilization efficiency of digestible lysine for egg mass production reduced linearly (P<0.05) as the concentration of digestible lysine in the diet increased.Taking only the extreme digestible lysine levels analyzed (9.50 and 12.00 g digestible lysine/kg of diet), the intake of one gram digestible lysine resulted in respective production of 42.0 and 34.6 g egg mass.Brumano (2008), who worked with white-egg-laying hens in the period from 42 to 58 weeks of age, observed quadratic effect of the digestible methionine + cistine to lysine utilization efficiency on total egg production.
Feed conversion per egg mass varied quadratically (P<0.05) with increase in lysine levels; it increased up to the estimated level of 11.20 g digestible lysine/kg of diet (Table 2).Corroborating this result, Pinto et al. (2003) and Demuner et al. (2009a) also verified positive influence of digestible lysine on feed conversion per egg mass of Japanese quails in the egg-laying phase; the best responses were obtained at levels 10.50 and 10.90 g digestible lysine/kg of diet, respectively.NM -natural matter; CV -coefficient of variation; FI -feed intake; LI -digestible lysine intake; EPBD -egg production per bird per day; EPBH -egg production per bird housed; PME -production of marketable eggs; EW -egg weight; EM -egg mass; UELEM -utilization efficiency of digestible lysine for egg mass production; FCEM -feed conversion per egg mass; FCDZ -feed conversion per egg dozen; BA -bird availability.
No effect of digestible lysine levels was verified (P>0.05) on feed conversion per dozen eggs (Table 2).Likewise, Oliveira et al. (1999), Ribeiro et al. (2003), Rodrigues et al. (2007) and Demuner et al. (2009b) did not find effect of digestible lysine levels on the same variable.The fact that feed intake and egg production per bird per day or per bird housed did not vary by treatment explains the results obtained for feed conversion per dozen eggs.
Bird availability was not affected (P>0.05) by digestible lysine levels in the diets (Table 2), still presenting a mortality rate in the experimental period of 7.6%, corresponding to the weekly mortality of 0.63%.Although bird availability was not altered in between treatments, the average weekly mortality value of 0.63% in this study is considered high for the standards of this species.Analyzing data from 26 Japanese quail commercial raising broods, Oliveira (2007) found weekly mortality of 0.49%.A possible explanation for this higher mortality rate could be the temperature and air humidity effect, which were above the values of the thermal comfort range, which might have contributed to the discomfort of birds.
For yolk, quadratic effect (P<0.05) was observed in relation to the digestible lysine levels in the diets (Table 3).Reis et al. (2006), working with the total lysine nutritional requirement of European quails at egg-laying, assessing the levels of 8.50; 9.50; 10.50; 11.50; and 12.50 g digestible lysine/kg of diet, verified linear increase for yolk as the lysine levels in the diet increased.Different results were observed by Ribeiro (2003) and Rodrigues et al. (2007), who did not find any effect of digestible lysine levels on the yolk weight of Japanese quail eggs.
Quadratic effect (P<0.05) of digestible lysine levels was observed on albumen (Table 3).These results are in accordance with those found by Cupertino (2006), who, assessing the digestible lysine nutritional requirement of laying hens from 54 to 70 weeks of age, obtained increasing linear effect of digestible lysine levels on the quantity of egg albumen.On the other hand, the results obtained by Ribeiro et al. (2003), Reis et al. (2006) and Rodrigues et al. (2007) did not show effect of lysine levels on albumen in quail eggs.
No effect (P>0.05) of digestible lysine levels on eggshell weight or on the percentages of yolk, albumen and shell was observed. The results corroborate those found by Ribeiro et al. (2003) and Rodrigues et al. (2007).
There was quadratic effect (P<0.05) of digestible lysine levels on specific egg weight (Table 3).The level of 10.90 g digestible lysine/kg of diet resulted in 1.072 g/cm³, enabling the occurrence of eggs with lower shell quality as compared with other levels of lysine studied.However, in studies conducted by Rodrigues et al. (2007), the digestible lysine levels did not have effect on specific egg weight of Japanese quails.Nevertheless, even showing a 0.19% variation between the highest specific egg weight (1.074 g/cm³) and the lowest specific egg weight (1.072 g/cm³), which could result in eggs with thinner shell, we can observe that there was no interference with eggshell quality, which can be confirmed by the production of marketable eggs, which, in absolute values, presented the greatest percentage (98.64%),close to the level estimated for the highest egg weight (11.20 g digestible lysine/kg of diet).
Likewise, according to the equation obtained, the level of 11.20 g of lysine/kg of diet resulted in higher egg mass, obtained from the higher egg weight and a high egg production per bird per day, which possibly could have contributed to a worse specific egg weight, but not interfering with production of marketable eggs, which was also kept high.
The level of 11.20 g digestible lysine/kg of diet increased egg weight, egg mass, feed conversion per egg mass, yolk weight and albumen weight, which demonstrates that this level promoted satisfactory results on Japanese quail performance and egg quality.
Table 1 -
Composition of experimental diets (g/kg as fed)
Table 2 -
Influence of digestible lysine level on performance variables in Japanese laying quails
Table 3 -
Digestible lysine levels on weight and percentage of yolk, albumen, shell and specific egg weight (SW) of Japanese laying quails NM -natural matter; CV -coefficient of variation.
Table 4 -
Digestible lysine levels on the values of nitrogen ingested, nitrogen excreted and nitrogen balance of laying Japanese quails NM -natural matter; CV -coefficient of variation. | 2019-03-30T13:03:38.025Z | 2013-07-01T00:00:00.000 | {
"year": 2013,
"sha1": "998b4ac21b8cdbda46f6b2ffd58a5bf399089c53",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbz/a/NM8L389cwvxDmdxKLnk36tK/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "998b4ac21b8cdbda46f6b2ffd58a5bf399089c53",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
5042799 | pes2o/s2orc | v3-fos-license | Surface-anchored poly(acryloyl-L(D)-valine) with enhanced chirality-selective effect on cellular uptake of gold nanoparticles
Chirality is one of the ubiquitous phenomena in biological systems. The left handed (L-) amino acids and right handed (D-) sugars are normally found in proteins, and in RNAs and DNAs, respectively. The effect of chiral surfaces at the nanoscale on cellular uptake has, however, not been explored. This study reveals for the first time the molecular chirality on gold nanoparticles (AuNPs) functions as a direct regulator for cellular uptake. Monolayers of 2-mercaptoacetyl-L(D)-valine (L(D)-MAV) and poly(acryloyl-L(D)-valine (L(D)-PAV) chiral molecules were formed on AuNPs surface, respectively. The internalized amount of PAV-AuNPs was several times larger than that of MAV-AuNPs by A549 and HepG2 cells, regardless of the chirality difference. However, the D-PAV-AuNPs were internalized with significantly larger amount than the L-PAV-AuNPs. This chirality-dependent uptake effect is likely attributed to the preferable interaction between the L-phospholipid-based cell membrane and the D-enantiomers.
Results and Discussion
Characterization of AuNPs. The synthesized small-molecule (MAV) and polymeric (PAV) ligands were each grafted onto water-dispersible AuNPs by simple co-incubation. Transmission electron microscopy (TEM) showed that both the MAV-AuNPs (Fig. 3a,b) and PAV-AuNPs (Fig. 3c,d) were approximately spherical in shape and narrowly distributed (Figure S1).
The density of MAV and PAV molecules on the AuNPs, as calculated from TGA (Figure S2), was 2.4 molecules/nm 2 and 0.8 molecules/nm 2 , respectively. However, the density of AV units on the PAV-AuNPs (16.2 units/nm 2 ) was significantly larger than that on the MAV-AuNPs (2.4 units/nm 2 ). The CD signal of PAV-AuNPs was significantly stronger than that of MAV-AuNPs because of the larger number of AV units on the PAV-AuNPs (Fig. 3e). Importantly, the L-PAV-AuNPs and D-PAV-AuNPs (or the L-MAV-AuNPs and D-MAV-AuNPs) showed essentially mirror-image CD spectra in the region of 190 to 300 nm (Fig. 3e). Although the free L- and D-MAV molecules showed mirror CD spectra, their CD direction was reversed after immobilization onto the AuNPs (Fig. 3f). Such reversal of CD signals after small molecule adsorption onto NP surface has been reported previously 24,25 . The more negative surface zeta potential of PAV-AuNPs in water (− 36 mV) compared with that of the MAV-AuNPs (~ −26 mV) is consistent with the larger number of carboxyl groups (more AV units) on the PAV-AuNPs (Table 1). Moreover, the hydration diameter of PAV-AuNPs (~25 nm) was significantly smaller than that of MAV-AuNPs (~35 nm), revealing that the MAV-AuNPs were aggregated to a larger degree in water. Furthermore, the AuNPs stability was monitored by measuring the AuNPs surface plasmon resonance (SPR), a parameter that is extremely sensitive to nanoparticle size and interparticle spacing 26 . A slight red-shift (4 nm) was observed in the SPR peaks of PAV-AuNPs compared to that of the citrate-AuNPs, without significant peak broadening (Figure S3a). By contrast, the MAV-AuNPs underwent a red shift of 13 nm, accompanied by slight peak broadening, which is indicative of slight aggregation (Figure S3a). Compared with the small molecules, the sterically bulky polymer chains can cap the gold cores more effectively to enhance the colloidal stability of NPs, leading to better dispersion in medium 14 .
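A small consistency check on these grafting densities (an inference from the numbers above, not a value stated by the authors): dividing the AV-unit density by the chain density gives the apparent number of repeat units per grafted PAV chain,
\[ \frac{16.2\ \mathrm{AV\ units\ nm^{-2}}}{0.8\ \mathrm{chains\ nm^{-2}}} \approx 20\ \mathrm{AV\ units\ per\ chain}, \]
assuming both densities refer to the same particle surface area.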
The stability of the NPs in 10% FBS/DMEM, in which cells were cultured, was further characterized. The SPR peak of PAV-AuNPs was kept at 521 nm without change, whereas the SPR peak of MAV-AuNPs was slightly blue-shifted (9 nm) ( Figure S3b,c). The diameter change was further examined by TEM. The MAV-AuNPs were better dispersed after adsorption of proteins by incubation in FBS containing medium ( Figure S4a,b,e,f), whereas the dispersion of PAV-AuNPs was not affected by the medium significantly. These results are well consistent with the SPR results ( Figure S3b,c). Moreover, the diameters of the MAV-AuNPs and PAV-AuNPs before and after protein adsorption showed no significant difference in a dry state, which is consistent with previous observation 27 . Hence, the hydrodynamic diameters of the MAV-AuNPs and PAV-AuNPs being incubated in 10% FBS/ DMEM were measured by DLS (Table 1). In 10% FBS/DMEM, the hydrodynamic diameters of MAV-AuNPs and PAV-AuNPs were both 38 nm without significant difference, and their surface charge became less negative (Table 1) due to the adsorption of serum proteins including albumin, fibrinogen, and immunoglobulin etc. Therefore, both MAV-AuNPs and PAV-AuNPs possess excellent colloidal stability in 10% FBS/DMEM, and their surface charge and hydrodynamic diameters had no significant difference.
With these characterizations, one can conclude that the L-PAV-AuNPs and D-PAV-AuNPs, or the L-MAV-AuNPs and D-MAV-AuNPs have identical physicochemical properties except of the reverse molecular chirality, respectively. The PAV-capped AuNPs show significantly enhanced chirality that those MAV-capped ones. Cellular uptake. Next, the cellular uptake of these chiral molecules-capped NPs was investigated by using A549 cells and HepG2 cells. First, MTT assay was performed to evaluate the cytoviability, which reflects the cell metabolic activity based on the ability of mitochondrial succinate/tetrazolium reductase system in living cells. According to literatures and our experiences, the highest cellular loading of NPs such as thiol acids-protected AuNPs and poly(lactic-co-glycolic acid) (PLGA NPs) could be achieved within 24 h 28,29 , and some cationic NPs are rapidly internalized within a few hours 30,31 . After the cells were treated with PAV-AuNPs at an Au concentration of 40 μg/mL, no significant cytotoxicity was found for the A549 and the HepG2 cells ( Figure S5a,b). When the Au concentration of PAV-AuNPs increased to 80 μ g/mL, the cytoviability was still above 80% though cell toxicity was significant for the A549 cells (p < 0.01) ( Figure S5a). No cytotoxicity was observed for the HepG2 cells ( Figure S5b). These results suggest that the AuNPs represent low toxicity to cells. Therefore, 50 μg/mL of AuNPs (MAV-AuNPs or PAV-AuNPs) was used to study the cellular uptake, which showed no significant cell toxicity for both A549 and HepG2 cells. The cellular uptake amount of AuNPs (including minor amount of cell-surface attached) was quantified by ICP-MS with high detection sensitivity. Figure 4a shows that the internalized amount of PAV-AuNPs was significantly larger than that of MAV-AuNPs (p < 0.01) regardless of the chirality, whereas the amount of internalized L-MAV-AuNPs and D-MAV-AuNPs by A549 cells had no significant difference (p > 0.05). The non-influence of small chiral molecules reveals the very weak chirality effects, which is consistent with previous finding that the uptake of L-and D-peptides has no significant difference 32,33 . By sharp contrast, the AuNPs capped with PAV of a larger number of AV repeating unit per area did show the chirality-dependent uptake by A549 cells: the internalized amount of D-PAV-AuNPs was nearly 2.6 times higher than that of the L-PAV-AuNPs (Fig. 4a). The internalized amount of MAV-AuNPs and PAV-AuNPs by HepG2 cells had the similar trend to the A549 cells ( Figure S6a). Moreover, the relatively larger uptake of D-PAV-AuNPs over L-PAV-AuNPs (p < 0.01) was observed at different NPs concentration (10-100 μg·mL −1 ) ( Figure S7), although the absolute amount for both types of NPs increased along with the increase of NPs concentration. These results suggest that the chiral effects on cellular uptake can be enhanced by the polymerization of chiral monomers, and the PAV polymers with stronger ellipticity than their smaller counterparts enhance the chirality-dependent interaction with cells, leading to significantly higher uptake of the PAV-AuNPs than MAV-AuNPs.
The chirality-dependent NP uptake is likely tied with the feature of cell membrane. One possibility is that a larger number of targeted receptors exist on the A549 and HepG2 cells (the typical cancer cells) and specifically interact with the D-enantiomers than the L-enantiomers. To testify this hypothesis, A549 cells and HepG2 cells were pre-treated with D-valine for 1 h before incubated with the PAV-AuNPs to block the possible receptors on the cells, which specifically interact with D-enantiomers. Figures 4b and S6b show that the pre-treatment with D-valine impaired the NP cellular uptake to some extent, but the level was insignificant (p > 0.05) regardless of the surface chirality. The polymer structure may play a role in the interaction of PAV-AuNPs and cells. Thus, A549 cells and HepG2 cells were pre-treated with D-PAV for 1 h before incubated with the PAV-AuNPs. Figures 4b and S6b show that the internalized amount of AuNPs was largely decreased (p < 0.01) regardless of the surface chirality. These results suggest that the larger uptake of the D-enantiomers is likely attributed to the specific receptors or biomolecules in A549 or HepG2 cells that can recognize the right-handed polymers rather than the small counterparts. In fact, the investigation on the interaction between surface chiral molecules (e.g. peptides with different chiral amino acid groups) and cells has not found chirality-dependent cell receptors so far 32,[34][35][36] . Hence, some other factors or biomolecules in the cells may take the role on the chirality-dependent cellular uptake.
Upon contact with biological fluid, the NPs are rapidly covered by biomolecules, especially proteins, forming a biomolecular corona that effectively screens the bare NP surface 2 . The surface-adsorbed proteins can interact with the receptors of cell membrane, and thereby influence the following cellular uptake. Wang et al. have found that surface chirality could influence the protein adsorption 16 . Zhou et al. found that serum proteins play an important role in mediating the selective adhesion of cells on chiral surfaces 3 . To clarify this point, the internalization experiments were further conducted in a serum-free medium. As shown in Figures S3c and S4, the L-PAV-AuNPs and D-PAV-AuNPs in serum-free medium showed well dispersion too, ruling out the possible influence of NP aggregation on cellular uptake 37 . Figures 4c and S6c show that the cells still ingested much more D-PAV-AuNPs than L-PAV-AuNPs (p < 0.01), and the internalized amount was intensively enhanced than that in the serum-containing medium. The reduction of NPs uptake in serum is likely attributed to the decrease of the surface energy of NPs and the nonspecific interactions between NPs and cell membrane as a result of protein adsorption 38 . Figure S8 shows that L-PAV-AuNPs adsorbed significantly larger amount of serum proteins and albumin than D-PAV-AuNPs. When being incubated under the same conditions, for example, in cell culture medium containing 10% FBS or merely albumin, the largely adsorbed proteins are reported to enhance cellular uptake of NPs [39][40][41] . However, the results here are on the contrary. Nonetheless, these results substantiate the conclusion that the chirality-dependent cellular uptake of PAV-AuNPs is mainly governed by their surface-chirality property, rather than the surface protein corona. It is known that the extracellular substances can be transported into cells through several different pathways such as transmembrane diffusion, phagocytosis, and receptor-mediated or nonspecific endocytosis 29,42,43 . To ascertain the uptake mechanisms of the PAV-AuNPs, some special inhibitors were used to pretreat the cells before co-culture. All the inhibitors presented no significant cytotoxicity at the concentration used ( Figure S5c,d). Figures 4d and S6d show that the uptake efficiency of the A549 cells and HepG2 cells was significantly blocked (~31% decrease) by addition of 100 μM NaN 3 , revealing the energy-dependent process. Moreover, cellular uptake of both types of PAV-AuNPs was largely blocked by amantadine-HCl that could prevent budding of clathrin-coated pits (p < 0.05, p > 0.01) (Fig. 4d), and especially by the amiloride-HCl due to the disturbing of Na + /H + channels (p < 0.01). Therefore, internalization of the PAV-AuNPs by the A549 cells should be mediated by macropinocytosis and clathrin-mediated endocytosis mechanisms, and macropinocytosis takes the major role for both PAV-AuNPs regardless of their surface chirality. For the HepG2 cells, the NP uptake was only significantly inhibited by amiloride-HCl (p < 0.01), regardless of the surface chirality ( Figure S6d).
Interactions of surface chiral molecules and lipids. The above cellular uptake results suggest that the chirality-dependent endocytosis should be governed by the very initial steps during the uptake, e.g. NP adhesion to the cell membrane and interaction with the membrane phospholipids 44 . However, the adhesion of NP to the cell membrane is difficult to disentangle in the presence of simultaneous internalization. Thus, L-lecithin molecules, a type of phospholipids and composed of phosphoric acid with choline, glycerol or other fatty acids, were used as a model to study the interactions of chiral PAV molecules and lipids by isothermal titration calorimetry (ITC) 45 . As shown in Fig. 5a,b, the complexion of L-lecithin with L-PAV-AuNPs and D-PAV-AuNPs was consistently exothermic throughout the titration process. Of all the heat profiles, both the complexion of L-lecithin with L-PAV-AuNP and D-PAV-AuNPs could be satisfactorily fitted by a single set of binding sites available model 46 , and best-fit parameters were calculated by using nonlinear least-squares fitting. The results ( Table 2) show that the interaction of L-and D-PAV-AuNPs and L-lecithin featured a favorable enthalpy change (Δ H < 0), which was offsetted partially by an unfavorable entropy loss (Δ S < 0), resulting in an overall negative free energy change (Δ G < 0). Moreover, the complex stability constant K value for the D-PAV-AuNPs (9.86 × 10 3 M −1 ) was about 2.4 folds higher than that for the L-PAV-AuNPs (4.04 × 10 3 M −1 ), revealing a remarkably stronger affinity for the D-PAV-AuNPs to L-lecithin. The maximum ratios of L-lecithin to L-PAV-AuNPs and D-PAV-AuNPs were about 32.1 and 115, respectively. These results indicate that the L-lecithin molecules prefer to interact with D-PAV-AuNPs. In order to further mimic the interaction of surface chiral PAV molecules and cell membrane, a model study was performed by using the surface chiral molecules-immobilized gold-covered electrode and lipid bilayers, whose interaction was monitored by QCM-D. The bilayers composed of L-lecithin were prepared by surface-mediated vesicle fusion 47 . The hydration diameter of vesicles was about 100 nm (number-average). QCM-D can be used to monitor the kinetics of adsorption of small molecules, proteins, or in our case, the interaction with lipid vesicles and chiral PAV molecules grafted on the gold-covered electrode. The lipid vesicles interacted rapidly with the L-PAV (Fig. 5c), whereas rather slowly with the D-PAV (Fig. 5d). As the frequency shift measured by QCM-D is related to the adsorbed mass, more vesicles were attached onto the D-PAV surface than onto the L-PAV surface eventually (Fig. 5c,d). Hence, the L-lecithin-based vesicles prefer to interact with the D-PAV than with the L-PAV. With these results, one can conclude that the left-handed phospholipids-based cell membrane has chiral selective interactions with molecules or NPs. The cell membrane interacts more strongly with the D-PAV-AuNPs than the L-PAV-AuNPs (Fig. 6), leading to larger uptake amount of D-PAV-AuNPs by A549 and HepG2 cells.
To further assess the chirality-dependent or independent cellular distribution of PAV-AuNPs, CLSM and TEM (Fig. 7) characterizations were performed. The PAV-AuNPs (red color) were mainly distributed in the cytoplasm, and no signal was detected in the nuclei of A549 cells regardless of the chirality of PAV (Fig. 7). There were more D-PAV-AuNPs (Fig. 7A, Right) than L-PAV-AuNPs (Fig. 7A, Left) in the cytoplasm, and were closer around the cell nuclei (Fig. 7A). TEM observation shows that almost all the L-PAV-AuNPs were located in lysosomes (Fig. 7B, Left). Most of the D-PAV-AuNPs were aggregated in lysosomes too, but some escaped from lysosomes and entered into the cytoplasm as a consequence of larger internalization (Fig. 7B, Right). For HepG2 cells, the L-PAV-AuNPs and D-PAV-AuNPs were mainly distributed in lysosomes regardless of chirality ( Figure S7). The cell slice images obtained by TEM here cannot be used to determine the cellular uptake amount of the NPs because the thickness and layers of the cell slice are different. Table 2
Conclusions
This work provides new insights into cellular uptake being triggered by chiral polymers-capped NPs, demonstrating the chirality-dependent uptake amount and intracellular distribution. Two groups of AuNPs modified with MAV and PAV of different chirality were successfully prepared. These two groups of particles had a diameter about 16 nm and narrow size distribution in a dry state. The density of MAV and PAV being grafted on AuNPs was 2.4 molecules/ nm 2 and 0.8 molecules/nm 2 , respectively. Compared to MAV, the PAV grafting endowed the AuNPs with enhanced optical activity due to a larger number of AV units. All the chiral molecules-capped AuNPs possessed good colloidal stability in culture medium, and showed similar physicochemical properties except of reversed ellipticity. While the small chiral molecules-capped L-and D-MAV-AuNPs did not show significant difference in terms of uptake amount by A549 cells and HepG2 cells, the chiral polymers-capped PAV-AuNPs did show chirality-dependent uptake behaviors, and the D-PAV-AuNPs were internalized with significantly larger amount than the L-PAV-AuNPs. The chirality-dependent cellular uptake is likely attributed to the chiral selective interaction between the cell membrane and the chiral PAV on the NPs, as inferred from the fact that the L-phospholipid-based vesicles prefer to interact with the D-PAV molecules. Identification of this chirality-dependent cellular uptake of NPs provides a new idea that chiral effect can act as a novel strategy for designing bio-interface materials and may open a new avenue for further development of AuNPs for biomedical applications.
Synthesis and characterization of poly(acryloyl-L(D)-valine).
Schematic illustration of synthesis of poly(acryloyl-L(D)-valine) (L(D)-PAV). * Represents chiral center. Polymerization was carried out in a 10 mL dry Schlenk flask equipped with a magnetic stirrer. Methyl 2-(butylthiocarbonothioylthio)propanoate (50.4 mg), azodiisobutyronitrile (AIBN, 3.4 mg) and acryloyl valine (1.37 g) were dissolved in 5 mL N,N-dimethylformamide (DMF). The mixture was deoxygenated by purging with nitrogen for 20 min, and then heated at 70 °C for 4 h. The reaction was stopped by exposure to air. The mixture was precipitated in excess diethyl ether, and then separated by centrifugation. The dissolution and precipitation cycle was repeated 3 times. The polymers were dried under high vacuum for 48 h at room temperature to give a yellow solid product, which was characterized by gel permeation chromatography (GPC, eluent tetrahydrofuran, Waters 1515 Isocratic HPLC).
Synthesis and characterization of 2-mercaptoacetyl-L(D)-valine.
Synthesis of 2-mercaptoacetyl-L(D)-valine (L(D)-MAV). 2-Mercaptoacetyl-L(D)-valine molecules were synthesized using a previously reported method with modification 16 . First, the chloracetyl-L(D)-valine was synthesized by using reagents of chloracetylchloride and L(D)-valine with the similar method mentioned in Supporting Information (Synthesis and characterization of acryloyl-L(D)-valine monomers). 1.1 mL triethylamine and 1.1 mL thioacetic acid were mixed within an ice bath, into which 1 g chloracetyl-L(D)-valine dissolved in 18 mL dichloromethane was added slowly under stirring and nitrogen bubbling. This solution was stirred overnight. After washing 3 times with water, the solvent was removed by vacuum evaporation. The solid product was purified by chromatography on silica gel with ethyl acetate/hexane (v/v = 30:70) as the eluent. The obtained 2-(thioacetyl)acetyl-L(D)-valine (50 mg) and several drops of HCl solution (1 M) were mixed in 10 mL methanol for 2 h to yield 2-mercaptoacetyl-L(D)-valine, which was characterized by a Bruker Esquire 3000 plus ion trap mass spectrometer (Brucker-Franzen Analytik GmbH, Bremen, Germany).
To clarify the uptake mechanism, the energy dependence of cell-NP interaction was assessed by treatment with 0.1% (w/v) sodium azide (NaN 3 ) 49 . Different pharmacological inhibitors, including 2 mM amiloride-HCl (Amilo), 1 mM amantadine-HCl (Aman), 100 μM genistein (Ge), and 10 μg·mL −1 cytochalasin D (CytD) were also used to treat the cells for 1 h before incubation with the PAV-AuNPs, respectively. Then the cells were treated with PAV-AuNPs for another 4 h.
Synthesis of AuNPs
To further investigate the chirality-dependent uptake mechanism, the cells were pretreated with 1 mg/mL D-valine and D-PAV for 1 h before incubation with the PAV-AuNPs, respectively. Then the cells were treated with PAV-AuNPs (containing 1 mg/mL D-valine or D-PAV) for another 24 h. The D-PAV used here to pre-treat the cells was polymerized by using the general free radical polymerization (no thioester bond. M w : 18743 Da; polydispersity: 1.7) to avoid the possible ligand exchange (the thioester bond can bind to the gold surface too).
Isothermal titration calorimetry (ITC) measurement. The isothermal titration calorimetry measurements were performed by using a thermostated and fully computer-operated isothermal calorimetry (VP-ITC, GE, USA) instrument. All microcalorimetric titrations between L-lecithin and PAV-AuNPs were performed in aqueous solution (water, pH 7.0) at atmospheric pressure and 298.15 K. Each solution was degassed and thermostated by a ThermoVac accessory before the titration experiment. The titration experiment was involved of 30 injections of L-lecithin (titrant, 5 μL per injection from a 10 mM stock L-lecithin solution) at 5 min interval into the sample cell (1439 μL) which contained the 5.6 × 10 −5 mM PAV-AuNPs (L-PAV-AuNPs or D-PAV-AuNPs) solution. The heat of L-lecithin dilution in the water alone was subtracted from the titration data for each experiment. The data were analyzed to determine the binding stoichiometry (N), complex stability constant (K), standard molar reaction enthalpy (Δ H) and other thermodynamic parameters of the reaction by using the supplied Origin 7.0 software. One set of binding sites model was used to fit the data as reported previously 45,50 . The reported thermodynamic parameters were an average of duplicate experiments. QCM-D. Liposomes composed of lecithin (Aladdin Company) were prepared from a lipid solution in chloroform (total lipid amount: 15 mg). The solvent was removed under a stream of nitrogen and the resulting lipid film was placed under vacuum at least for 4 h. The dried lipid film was then hydrated in PBS to yield a final concentration of 1000 μM. To form small unilamellar vesicles (SUVs), the lipid solution was sonicated according to the protocols described in literature by using a ultrasonicator (MISONIX Ultrasonic liquid Processors) 44 . The hydration diameter of the SUVs was measured by DLS with a high performance particle analyzer (Zetasizer Nano, Malvern) equipped with a 633 nm wavelength laser. The scattering intensity was recorded at a 173° angle in kilo counts per second.
The interaction with liposomes and chiral PAV molecules was measured by Quartz Crystal Microbalance with dissipation (QCM-D, Q-Sense E4, Sweden) under a flow condition. The PAV molecules were grafted on gold-coated piezoelectric crystals via the strong thiocarbonylthio-Au bond. Briefly, the crystals were incubated in 5 mg/mL PAV molecules (in ethanol) at 37 °C for at least 4 h. Then the crystals were washed with ethanol and water 5 times, respectively. Intracellular distribution. For confocal laser scanning microscopy (CLSM, Leica TCS SP5) measurement, the cells were seeded on a glass bottom cell culture dish (diameter, 20 mm) at a density of 5 × 10 3 cells/cm 2 and allowed to attach for 24 h. Then, the cells were treated with L(D)-PAV-AuNPs (Au, 50 μg/mL) for another 24 h, before 5 washes with PBS were applied to remove free L(D)-PAV-AuNPs. The cells were fixed with 0.4% paraformaldehyde at 37 °C overnight, and washed with PBS 3 times. The cells were further treated in 0.5% (v/v) Triton X-100/PBS for 10 min at 37 °C to increase the permeability of cell membrane. After being blocked with 1% BSA/PBS at 37 °C for 2 h, the samples were stained with 4′ ,6-diamidino-2-phenylindole (DAPI, Sigma, 1:50) and FITC-phallotoxins (Invitrogen, 1:100) at 37 °C for 30 min, following with 5 washes with PBS.
For TEM cell section analysis, the cells were seeded on a 6-well plate at a density of 5 × 10 4 cells/cm 2 . After cultured for 24 h, the cells were further incubated with the L(D)-PAV-AuNPs (Au concentration 50 μg/mL) for 24 h. The cells were then washed 5 times with PBS, trypsinized, centrifuged, and fixed with 2.5% glutaraldehyde at 4 °C for 2 h. After 3 washes with PBS (10 mM, pH 7.4), the samples were fixed with 1% perosmic oxide for 2 h at 4 °C. After being washed in water, the samples were dehydrated in a series of ethanol solutions with increased concentrations, embedded, and sliced with a thickness of ~50-70 nm.
Statistical Analysis. The experimental data are expressed as mean ± standard deviation, and significant differences between groups were analyzed by one-way analysis of variance (ANOVA) (for two groups) and two-way ANOVA (for more than two groups) in the Origin software. Statistical significance was set at p < 0.05 and p < 0.01. | 2018-04-03T00:16:09.510Z | 2016-08-17T00:00:00.000 | {
"year": 2016,
"sha1": "4d86ff922abc9040c04bc79224ad1c51dfc2b174",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep31595.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d86ff922abc9040c04bc79224ad1c51dfc2b174",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
16859067 | pes2o/s2orc | v3-fos-license | The Mass Function of Dark Matter Haloes in a Cosmological Model with a Running Primordial Power Spectrum
We present the first study of the Jenkins et al. (J01) mass functions and an estimate of the corresponding largest virialized dark halos in the Universe for a variety of dark-energy cosmological models with a running spectral index. Compared with the PL-CDM model, the RSI-CDM model raises the mass abundance of small-mass dark halos at lower redshifts, while the effect is not apparent for massive halos. This discrepancy grows as redshift decreases, and at higher redshift the RSI-CDM model suppresses the mass abundance on all halo-mass scales. As for the largest mass of virialized halos, the spatially flat $\Lambda$CDM models yield more massive virialized objects than the other models for both the PL-CDM and RSI-CDM power spectra, and the RSI-CDM model enhances the mass of the largest virialized halos in all models considered in this paper. We may therefore be able to distinguish the PL-CDM and RSI-CDM models by the largest virialized halos in future surveys of clusters of galaxies.
Introduction
The central problem in modern cosmology is the formation of large-scale structures in the universe. In the standard picture of hierarchical structure formation, dark matter dominates the universe, and a wide variety of observed structures, such as galaxies, groups and clusters of galaxies, have formed by the gravitational growth of Gaussian primordial density fluctuations. Due to self-gravitational instability, the fluctuations of dark matter have collapsed and virialized into objects, the so-called 'dark matter halos' or 'dark halos'. The larger halos are generally considered to have formed via the merger of smaller ones that collapsed first. The distribution of mass in the gravitationally collapsed structures, such as galaxies and groups (or clusters) of galaxies, which is usually called the mass or multiplicity function, has been determined by observation.
As the observational data relevant to these issues improve, the need for accurate theoretical predictions increases. By far the most widely used analytic formulae for halo mass functions are based on extensions of the theoretical framework first sketched by Press & Schechter (1974). The Press-Schechter (PS) theory did not draw much attention until 1988, when the first relatively large N-body simulations revealed good agreement with it. The mystery of the 'fudge factor' of 2 in PS theory was solved by treating the 'cloud-in-cloud' problem in a rigorous way (Peacock & Heavens 1990; Bond et al. 1991). The reliability of the PS formula has been tested with N-body simulations by several authors, and it turns out that the PS formula indeed provides an overall satisfactory description of the mass function of virialized objects. Unfortunately, none of these derivations is sufficiently rigorous for the resulting formulae to be considered accurate beyond the regime where they have been tested against N-body simulations. Although the analytical framework of the PS model has been greatly refined and extended in recent years, in particular to allow predictions for the merger histories of dark matter halos (Bond et al. 1991), it is well known that the PS mass function, while qualitatively correct, disagrees in detail with the results of N-body simulations. Specifically, the PS formula overestimates the abundance of halos near the characteristic mass and underestimates the abundance in the high-mass tail. In order to overcome this discrepancy, Jenkins et al. (2001) proposed an analytic mass function which fits their numerical multiplicity function.
In particular, a power spectrum of primordial fluctuations, P p (k), should be assumed in advance in the calculation of the mass function. Inflationary models predict approximately scale-invariant power spectra for primordial density (scalar metric) fluctuations, P p (k) ∝ k n with index n = 1 (Guth & Pi 1982; Bardeen et al. 1983). The combination of the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data with other finer-scale cosmic microwave background (CMB) experiments (Cosmic Background Imager [CBI], Arcminute Cosmology Bolometer Array Receiver [ACBAR]) and two observations of large-scale structure (the Anglo-Australian Telescope Two-Degree Field Galaxy Redshift Survey [2dFGRS] and the Lyman α forest) favours a ΛCDM cosmological model with a running index of the primordial power spectrum (RSI-ΛCDM), while the WMAP data alone still suggest a best-fit standard power-law ΛCDM model with spectral index n ≈ 1 (PL-ΛCDM) (Peiris et al. 2003). However, there still exist intriguing discrepancies between theoretical predictions and observations on both the largest and smallest scales. While the emergence of a running spectral index may alleviate problems on small scales, a possible discrepancy remains on the largest angular scales. It is particularly noted that the running spectral index model significantly suppresses the power amplitude of fluctuations on small scales (Yoshida et al. 2003). This implies a reduction of the amount of substructure within galactic halos (Zentner & Bullock 2002). Yoshida et al. (2003) studied early structure formation in an RSI-ΛCDM universe using high-resolution cosmological N-body/hydrodynamic simulations. They showed that the reduced small-scale power in the RSI-ΛCDM model causes a considerable delay in the formation epoch of low-mass minihalos (∼ 10⁶ M⊙) compared with the PL-ΛCDM model, although early structure still forms hierarchically in the RSI-ΛCDM model. Thus the running index probably affects the abundance of dark halos formed in the evolution of the universe.
Among the virialized structures, galaxy clusters are extremely useful to cosmology because they may be studied in detail as individual objects, and in particular they are the largest virialized structures in the universe at present. The mass of a typical rich cluster is approximately 10¹⁵ h⁻¹ M⊙, which is quite similar to the average mass within a sphere of 8 h⁻¹ Mpc radius in the unperturbed universe. However, a theoretical estimate of the mass of the largest collapsed object in the RSI-ΛCDM cosmological framework has not yet been presented. Therefore, we calculate the mass function of collapsed objects using the J01 mass function and present the first calculation of the largest virialized object in the Universe in an RSI-ΛCDM model, in order to explore the effect of a running spectral index of primordial fluctuations on structure formation.
The remainder of this paper is organized as follows. We describe the mass function of dark halos in Section 2. The largest virialized dark halos in the universe are presented in Section 3. The conclusion and discussion are given in Section 4.
Mass Function of Dark Halos
In the standard hierarchical theory of structure formation, the comoving number density of virialized dark halos per unit mass M at redshift z can be expressed as n(M, z) = dN/dM = ρ₀ f(M, z)/M, where ρ₀ is the mean mass density of the universe today and, instead of the PS formula, in this letter the mass function f(M, z) takes the form of the empirical fit to high-resolution simulations of Jenkins et al. (2001), whose multiplicity function is f(σ) = 0.315 exp(−|ln σ⁻¹ + 0.61|^3.8). Here σ(M, z) = σ(M)D(z), and D(z) = e(Ω(z))/[e(Ω_m)(1 + z)] is the linear growth function of density perturbations (Carroll et al. 1992), in which e(x) = 2.5x/(1/70 + 209x/140 − x²/140 + x^(4/7)) and Ω(z) = Ω_m(1 + z)³/E²(z). The present variance of the fluctuations within a sphere containing a mass M can be expressed as σ²(M) = (1/2π²) ∫₀^∞ P(k) W²(k r_M) k² dk, where W(x) = 3(sin x − x cos x)/x³ is the top-hat window function in Fourier space and r_M = (3M/4πρ₀)^(1/3). The power spectrum of CDM density fluctuations is P(k) = P_p(k)T²(k), where the matter transfer function T(k) is given by Eisenstein & Hu (1999) and P_p(k) is the primordial power spectrum of density fluctuations. The scale-invariant primordial power spectrum in the PL-ΛCDM model is given by P_p(k) = A k^(n_s) with index n_s = 1, and that in the RSI-ΛCDM model is assumed to be P_p(k) = P(k₀)(k/k₀)^(n_s(k)), where the index n_s(k) is a function of scale characterized by a constant running dn_s/d ln k. The pivot scale k₀ = 0.05 h Mpc⁻¹, n_s(k₀) = 0.93, and dn_s/d ln k = −0.03 are the best-fit values to the combined data of the recent CMB experiments and two other large-scale structure observations. For both the PL-ΛCDM and RSI-ΛCDM models, the amplitudes of the primordial power spectrum, A and P(k₀), are normalized to σ₈ = σ(r_M = 8 h⁻¹ Mpc), which is the rms mass fluctuation when the present universe is smoothed using a window function on a scale of 8 h⁻¹ Mpc. In this section, we assume spatially flat ΛCDM models characterized by the matter density parameter Ω_m and the vacuum energy density parameter Ω_Λ. For both the PL-ΛCDM and RSI-ΛCDM models, we take the cosmological parameters to be the new results from WMAP: Hubble constant h = 0.71, Ω_m = 0.27, σ₈ = 0.84 (Spergel et al. 2003).
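As a concrete illustration of the two ingredients defined above, the Python sketch below evaluates a running spectral index and the top-hat variance σ²(M) by direct numerical integration. It is only a schematic example under simplifying assumptions: the running is parametrized as n_s(k) = n_s(k₀) + (dn_s/d ln k) ln(k/k₀) (some analyses include an extra factor of 1/2 in the running term), the transfer function T(k) is replaced by a placeholder set to unity instead of the Eisenstein & Hu (1999) fit, and the amplitude is arbitrary rather than fixed by σ₈ = 0.84.

```python
import numpy as np
from scipy.integrate import quad

# Best-fit running-index parameters quoted in the text
k0, ns0, dns_dlnk = 0.05, 0.93, -0.03    # k0 in h/Mpc
rho0 = 2.775e11 * 0.27                    # mean matter density for Omega_m = 0.27 (h^2 Msun/Mpc^3)

def ns(k):
    """Simple linear running: n_s(k) = n_s(k0) + (dn_s/dlnk) ln(k/k0)."""
    return ns0 + dns_dlnk * np.log(k / k0)

def P_cdm(k, running=True, A=1.0):
    """P(k) = P_p(k) T^2(k); T(k) = 1 is a placeholder for the Eisenstein & Hu (1999) fit."""
    T = 1.0
    Pp = A * k**ns(k) if running else A * k   # power-law case has n = 1
    return Pp * T**2

def window(x):
    """Top-hat window in Fourier space, W(x) = 3(sin x - x cos x)/x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma2(M, running=True):
    """sigma^2(M) = (1/2pi^2) Int P(k) W^2(k r_M) k^2 dk, r_M = (3M/4 pi rho0)^(1/3)."""
    rM = (3.0 * M / (4.0 * np.pi * rho0))**(1.0 / 3.0)
    integrand = lambda k: P_cdm(k, running) * window(k * rM)**2 * k**2
    val, _ = quad(integrand, 1e-4, 1e2, limit=200)
    return val / (2.0 * np.pi**2)

print(sigma2(1e15), sigma2(1e15, running=False))   # unnormalized variances at 1e15 Msun/h
```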
The mass function of dark halos directly involves the calculation of the primordial power of density fluctuations. In order to explore the difference between the two kinds of primordial power spectrum, we first calculate the mass function of dark matter halos over a wide range of redshifts, which is plotted in Fig. 1. Note that there is a slight difference between the PL-CDM and RSI-CDM models at lower redshifts. Compared with the PL-CDM model, the RSI-CDM model raises the mass abundance of small-mass dark halos at lower redshifts, while the effect is not apparent for massive halos. In particular, this discrepancy grows as redshift decreases. Similar to the result of Yoshida et al. (2003), the RSI-ΛCDM model suppresses the mass abundance on all halo-mass scales at higher redshift. According to the hierarchical theory of structure formation, there are fewer high-mass halos at higher redshift, and the higher-mass halos are formed by the mergers of lower-mass halos at a relatively late stage. As pointed out previously, the RSI model suppresses the power spectrum on small scales, so this leads to a considerable delay in the formation of low-mass halos rather than high-mass halos. Therefore, compared with the PL-CDM model, the mass function is uniformly lower for the RSI model at the higher redshift of z = 6.
The Largest Virialized Dark Halos In the Universe
Based on the theoretical expression above, we can easily obtain the total number N of virialized objects with mass larger than M, N(> M) = ∫₀^∞ dz (dV/dz) ∫_M^∞ n(M′, z) dM′, where dV is the comoving volume element for the Friedmann-Robertson-Walker metric and dV/dz takes the form dV/dz = 4π (c/H₀)³ D_a² (1 + z)²/E(z), in which D_a = d_A H₀/c, d_A is the angular diameter distance and f = ∫₀^z dz′/E(z′). It is obvious from Eq. (3) that the total number N decreases as the mass M increases. Setting N = 1, we can finally obtain the largest mass M_MAX of virialized objects. In this section, we consider three cold dark matter (CDM) models, i.e. the standard CDM (SCDM) model, spatially flat ΛCDM models, and an open CDM (OCDM) model, for both the PL and RSI power spectra; the model parameters are listed in Table 1. We then calculate the largest virial mass M_MAX in a variety of cosmological models for both the PL and RSI power spectrum models, the results of which are presented in Table 2. From Table 2 we can see that different cosmological models may yield different virial masses for the largest virialized halos. The spatially flat ΛCDM models yield more massive virialized objects than the other models for both the PL and RSI power spectral models. Therefore, different cosmological models can be distinguished by the largest mass of virialized halos. Due to the cumulative effect of the integration over volume (or redshift) over the whole space of the universe, the predicted virial mass is slightly greater than the observed typical one. In addition, we also notice that the RSI-CDM model enhances the mass of the largest virialized halos for all of the models considered here.
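A schematic numerical route to M_MAX is sketched below: integrate the mass function over mass and comoving volume to obtain N(>M), then find the mass at which N(>M) = 1 with a root finder. The mass function n(M, z) used here is a placeholder callable (in the actual calculation it is the J01 fit of Section 2), and a flat-universe distance is assumed, so the numbers produced by this sketch are not the values quoted in Table 2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c_over_H0 = 2997.9 / 0.71   # Hubble distance in Mpc for h = 0.71

def E(z, Om=0.27, OL=0.73):
    """Dimensionless Hubble function for a flat LCDM model."""
    return np.sqrt(Om * (1 + z)**3 + OL)

def n_halo(M, z):
    """Placeholder comoving mass function dN/dM; the real one is the J01 fit of Section 2."""
    return 1e-5 * (M / 1e14)**-2 * np.exp(-(M / 1e15) * (1 + z)) / M

def dV_dz(z):
    """Comoving volume element dV/dz = 4 pi (c/H0)^3 Da^2 (1+z)^2 / E(z)."""
    f = quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]   # dimensionless comoving distance
    Da = f / (1 + z)                               # flat-universe Da = dA H0 / c
    return 4 * np.pi * c_over_H0**3 * Da**2 * (1 + z)**2 / E(z)

def N_above(M, zmax=5.0):
    """Total number of halos more massive than M out to zmax."""
    inner = lambda z: quad(lambda lnMp: n_halo(np.exp(lnMp), z) * np.exp(lnMp),
                           np.log(M), np.log(1e17))[0]
    return quad(lambda z: dV_dz(z) * inner(z), 1e-3, zmax, limit=100)[0]

# Largest virialized mass: the root of N(>M) = 1
M_max = brentq(lambda lgM: N_above(10**lgM) - 1.0, 14.0, 16.5)
print(f"M_MAX ~ 10^{M_max:.2f} Msun/h (placeholder mass function)")
```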
Conclusions and Discussion
Motivated by the new result on the index of the primordial power spectrum from a combination of WMAP data with other finer-scale CMB experiments and other large-scale structure observations, we present the first study of the J01 mass functions and the corresponding largest virialized dark halos in the Universe for a variety of dark-energy cosmological models with a running spectral index. It is well known that structures in the universe form hierarchically in standard CDM models. The most massive structures form rather late in the universe. It is also noted that there is a slight difference between the mass abundances of the PL-CDM and RSI-CDM models at lower redshifts. Compared with the PL-CDM model, the RSI-CDM model raises the mass abundance of small-mass dark halos at lower redshifts, while the effect is not apparent for massive halos. In particular, this discrepancy grows as redshift decreases, and the RSI-ΛCDM model suppresses the mass abundance on all halo-mass scales at higher redshift. As for the largest mass of virialized halos, the spatially flat ΛCDM models yield more massive virialized objects than the other models for both the PL-CDM and RSI-CDM power spectral models. Therefore, different cosmological models can be distinguished by the largest mass of virialized halos for both the PL-CDM and RSI-CDM models. In addition, we also notice that the RSI-CDM model enhances the mass of the largest virialized halos for all of the models considered here. We may therefore be able to distinguish the PL-CDM and RSI-CDM models by the largest virialized halos in future surveys of clusters of galaxies. The obtained largest virialized object can thus be regarded as a complement to the observations of the CMB, SN Ia and large-scale structure in future cosmological observations. Yoshida et al. (2003) found that, although the hierarchical formation mechanism does not work as well in the RSI-ΛCDM model as in the PL-ΛCDM model and it is also not clear that PS theory can be applied to the RSI-ΛCDM model, the mass function measured in high-resolution cosmological N-body/hydrodynamic simulations overall matches the PS mass function for both the RSI-ΛCDM and PL-ΛCDM models. In addition, because the running spectral index model predicts significantly lower power of density fluctuations on small scales than the standard PL-ΛCDM model (Yoshida et al. 2003), it should also attract considerable attention in studies of strong lensing (Chen 2003a,b, 2004a; Zhang 2004) and weak lensing by large-scale structure (Ishak et al. 2004), especially of skewness (Pen et al. 2003; Zhang et al. 2003; Zhang & Pen 2005), which characterizes the non-Gaussian properties of the κ field in the nonlinear regime.
"year": 2005,
"sha1": "d147bb73e4c3727830de126afdf5ba9e7d60e0f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/0504223",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c56f1278df30a54b3767eaf650ca31946d0d210f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247533633 | pes2o/s2orc | v3-fos-license | Application of Baricitinib in Dermatology
Abstract There are four JAK subtypes: JAK1, JAK2, JAK3, and tyrosine kinase 2 (TYK2). Small-molecule Janus tyrosine kinase (JAK) inhibitors can inhibit a variety of pro-inflammatory cytokines. Baricitinib is a first-generation JAK1/2 inhibitor targeting the ATP-binding site of JAK, which blocks the intracellular transmission of cytokine signals through the JAK-STAT pathway. Thus far, it has been approved for the treatment of rheumatoid arthritis (RA); however, an increasing number of studies have suggested that baricitinib can be used to treat dermatological diseases such as atopic dermatitis (AD), psoriasis, vitiligo, and alopecia areata. Baricitinib can be a new choice for the treatment of dermatological diseases that cannot be treated with conventional drugs. We reviewed the application, efficacy, side effects, precautions, limitations and prospects of baricitinib in atopic dermatitis, psoriasis, vitiligo and alopecia areata (AA) over the past 5 years, including clinical trials and case reports. Among these, the application in the field of alopecia areata is the most encouraging, and we review its mechanism in detail.
Introduction
Baricitinib (Olumiant™) is a small-molecule, reversible competitive inhibitor of the Janus kinase (JAK) family. 1 JAKs are intracellular tyrosine kinases linked to the intracellular domains of many cytokine receptors. 2 Type 1 and type 2 cytokine families interact with specific JAK subtypes for signal transduction. When cytokines bind to their corresponding receptors, signal transduction occurs through the JAK/signal transducer and activator of transcription (STAT) pathway. JAKs mediate the phosphorylation of specific receptor tyrosine residues, which serve as docking sites for STAT proteins and other signalling molecules; the recruited STAT proteins are then phosphorylated on a single tyrosine residue. Activated STAT proteins separate from the receptor, dimerise, translocate to the nucleus, and combine with gamma-activated site elements to regulate gene transcription. 3 There are four JAK subtypes: JAK1, JAK2, JAK3, and tyrosine kinase 2 (TYK2). Baricitinib mainly acts on the JAK1 and JAK2 subtypes and has weak potency against the other subtypes. 4 Baricitinib targets the adenosine triphosphate (ATP)-binding site of JAK, 5 thereby blocking the intracellular signal transduction mediated by STAT proteins. The molecular formula of baricitinib is C16H17N7O2S (Figure 1), with a molecular weight of 371.42 g/mol. On 13 February 2017 the European Union approved baricitinib for sale in the local market for treatment of rheumatoid arthritis (RA), 1 and in 2018, it entered the United States market for treatment of RA. 6 Subsequently, it was approved in China for patients with moderate to severe rheumatoid arthritis unresponsive to treatment with other traditional DMARDs (disease-modifying anti-rheumatic drugs). Notably, baricitinib can also be used to treat atopic dermatitis (AD), systemic lupus erythematosus, and other diseases, and it has shown an effect in the treatment of interferon (IFN)-related diseases, 8 diabetic nephropathy, 9 and refractory juvenile dermatomyositis. 10
Application of Baricitinib in Dermatology
Baricitinib has been widely used in dermatology as a new molecular-targeted therapy. Increasing evidence suggests that baricitinib is effective against AD, alopecia areata (AA), psoriasis, and vitiligo (Table 1). Many inflammatory dermatoses are driven by inflammatory mediators that rely on JAK/STAT signals, and the use of JAK inhibitors has become a new strategy for the treatment of diseases for which conventional drugs have not been effective. 11

Atopic Dermatitis

AD is one of the most common chronic inflammatory skin diseases. Elevated inflammatory cytokines in AD-affected skin include Th2 (interleukin [IL]-4, IL-13, IL-31, IL-5), Th22 (IL-22), Th1, and thymic stromal lymphopoietin (TSLP). 12 These cytokines are associated with increased signalling through all four JAKs. IL-4 and IL-13 bind to IL-4 (either the α or γ chain) and IL-13 (α1) receptors to induce JAK1 and JAK3, respectively, resulting in the activation of STAT6. IL-5 binds to the IL-5 receptor (β chain), thus inducing the expression of JAK1 and JAK2, resulting in the activation of STAT1, STAT2, and STAT5. In addition, TSLP binds to the α unit of the IL-7 heterodimer and TSLP receptor and induces the activation of JAK1 and JAK2, thus activating STAT5. 13 Studies have shown that 4 mg of baricitinib combined with glucocorticoids significantly improves the signs and symptoms of moderate to severe AD in adults, with rapid effects and good safety. 14
Psoriasis
Psoriasis is a chronic inflammatory skin disease characterised by a high rate of keratinocyte division. The increased proliferation of keratinocytes is caused by high levels of inflammatory cytokines. The IL-23/Th17 axis plays a key role in psoriasis pathogenesis. 15 The IL-23 receptor relies on the heterodimer of JAK2 and TYK2 for signal transduction, which highlights the role of JAKs in the pathogenesis of psoriasis and the therapeutic potential of JAK inhibitors. 16 An animal experiment showed that local treatment with baricitinib inhibited the expression of inflammatory markers upregulated by 12-O-tetradecanoylphorbol-13-acetate. The injection of baricitinib into mouse ears significantly reduced ear swelling, leukocyte infiltration, epidermal cell proliferation, and dermal angiogenesis. Moreover, baricitinib significantly decreased the phosphorylation of STAT3 and STAT1 and the expression of inflammatory cytokines. 17 A double-blind controlled study showed that a significantly higher proportion of patients with moderate to severe psoriasis achieved a 75% reduction in the psoriasis area and severity index (PASI-75) after 12 weeks of treatment with 8 mg or 10 mg baricitinib compared with patients taking a placebo. The majority of patients reached PASI-75 after the first 12 weeks of baricitinib treatment and maintained this response for the following 12 weeks. 18 Of note, there is a case report of a patient with rheumatoid arthritis who developed reverse psoriasis while being treated with baricitinib. This may be due to the baricitinib-induced enhancement of IL6, IL8 (C-X-C motif ligand [CXCL]-8), and IL36 gamma gene expression. 19
Vitiligo
The IFN-γ-related chemokine CXCL-10 is involved in the pathogenesis of vitiligo, and IFN-γ signalling is mediated by the JAK/STAT pathway, especially through JAK1 and JAK2. JAK inhibitors can block this pathway, thereby blocking the effects of IFN and CXCL-10. Interestingly, a patient with rheumatoid arthritis and vitiligo showed a reduction in the area of vitiligo lesions after treatment with baricitinib. 20,21

Alopecia Areata

Etiology, Pathogenesis, and Advanced Treatments

AA is a polygenic autoimmune disease characterised by temporary scarless alopecia with follicular preservation, affecting nearly 2% of the general population at some point in their lives. AA is divided into four subtypes: ophiasis, sisaipho, sudden greying, and diffuse AA. Many inflammatory cells, such as CD8+ T cells, mast cells, and natural killer (NK) cells, have been observed in AA tissues. These inflammatory cells attack the growing hair and cause hair loss. 22 The primary treatment for patients with small lesions includes the use of topical glucocorticoids, topical injection of glucocorticoids, contact immunotherapy, and topical use of minoxidil. Systemic use of glucocorticoids and immunosuppressants can be recommended for patients with advanced AA or those displaying large lesions. 23 However, traditional treatments have limited effects in patients with AA totalis or universalis. Therefore, molecular-targeted drugs have emerged.
The Mechanism of Action of Baricitinib

AA is an autoimmune disease caused by disturbances in follicular immune privilege, a state of protection of certain sites of the body which prevents an inflammatory immune response when exposed to antigens. 24 Low expression of major histocompatibility complex (MHC) class I and II molecules in the normal population exempts hair follicles from autoimmune attack. 25 In some conditions, such as cancer, infection, or stress, this immune privilege is weakened through a decline in transforming growth factor β1 (TGFβ1), insulin-like growth factor 1 (IGF-1), and α-melanocyte-stimulating hormone (α-MSH), and T lymphocytes, NK cells, and other inflammatory cells, 26 in particular CD8+NKG2D+ T lymphocytes, congregate around hair follicles. 22 These cells can produce large amounts of IFN-γ, resulting in upregulation of MHC class I expression in the human follicle epithelium (Figure 2). Consequently, the immune privilege of follicles is compromised. 27 CD8+NKG2D+ T cells play an important role in the genetic development of AA. In 2010, a genome-wide association study (GWAS) found that the cytomegalovirus UL16-binding protein (ULBP) gene cluster on chromosome 6q25.1, which encodes the activating ligands of the natural killer cell receptor NKG2D, is strongly associated with AA. During the active phase of this disease, expression of the ULBP gene cluster in the diseased scalp of AA patients, particularly in the hair follicle dermal sheath, was significantly upregulated. 28 IFN-γ acts on the IFN-γ receptor in hair follicle epithelial cells, which relies on the JAK/STAT signalling pathway. Simultaneously, CD8+ T cells produce IFN-γ, which signals through JAK1 and JAK2 to enhance IL-15 production. After binding to the IL-15a receptor (a chaperone protein), IL-15 binds to the surface of CD8+ T cells and activates IFN-γ production through JAK1 and JAK3 signalling. 22 This causes the hair follicles to break down and massive numbers of lymphocytes to attack the epithelial cells of the hair follicles, causing hair loss. IFN-γ is considered the main immune factor that causes AA. Each cytokine receptor is linked to two parallel JAK isomers that exist as homodimers or heterodimers. 4 In the presence of baricitinib, when the IFN-γ receptor receives its signal, the JAK enzymes located in the intracellular part cannot be phosphorylated, so they cannot conduct intracellular signal transduction (Figure 3). Therefore, the IFN-γ signalling pathway, which controls the immune response, is blocked.
Clinical Research
A Phase II randomized controlled study in which baricitinib was used to treat adult AA showed that 33.3% and 51.9% of patients with AA had Severity of Alopecia Tool (SALT) scores of <20 at 36 weeks after oral administration of 2 mg and 4 mg doses, respectively, and baricitinib was well tolerated. 29 One case report of a patient with AA, who lost all of her hair after a local injection of triamcinolone acetonide, showed that 97% of the scalp hair, eyebrows, and eyelashes regrew after receiving 2 mg baricitinib treatment. This patient received treatment for 13 months without any adverse effects. 30 In another study, the initial dose of baricitinib was 7 mg/day. After six months, the dose was changed to 7 mg in the morning and 4 mg in the evening, and the dose of oral corticosteroids was gradually reduced to 3 mg/day for patients with AA. After nine months, the hair of the patients recovered completely. In follow-up animal experiments, mice treated with baricitinib showed a significant reduction in inflammation, decreased CD8+ T cell infiltration, and decreased expression of MHC class I and II. The results of an IFN gene expression assay showed that IFN gene expression returned to normal after using baricitinib, 31 which again demonstrates the role of baricitinib in modulating MHC class I and II expression through the IFN pathway.
Perspective
Baricitinib is a first-generation JAK1/2 inhibitor. JAKs are attached to the intracellular ends of cytokine receptors located in the cell membrane and control the signal transduction of many cytokines, such as the IL-6, IL-10, IL-3, and IL-5 families. 2 Each cytokine receptor is linked to two parallel isomers of JAK that exist as homodimers or heterodimers. When cytokines bind to their receptors, JAK phosphorylation occurs, which leads to the phosphorylation of STAT proteins in cells. Subsequently, these proteins are transported to the nucleus to act directly on the cell's DNA and ultimately regulate gene expression. 2 Signal transmission is directed from the outside to the inside of the cell. Chronic inflammatory diseases associated with upstream cytokine disorders can be treated with baricitinib. In addition, long-term treatment with low-dose baricitinib was well tolerated, and only a few patients reported serious adverse reactions and complications after baricitinib treatment. Baricitinib is currently approved for the treatment of rheumatoid arthritis; however, numerous animal and clinical trials have confirmed its role in other chronic inflammatory diseases. AA is an immune disorder, and traditional treatments are inadequate for some AA patients with a large shedding area. Baricitinib has shown good potential for the treatment of intractable AA. First-generation JAK inhibitors have the side effect of immunosuppression. Studies on JAK-deficient mice suggest that offspring without JAK are not viable and that JAK3 knockout (KO) mice present a strong reduction in T and B cell numbers. Mutations in the JAK3 gene manifest as severe combined immunodeficiency syndrome (SCID) in humans. 32 Some clinical studies have confirmed that the application of JAK inhibitors can have side effects, such as infection, malignancy, and major adverse cardiovascular events (MACEs). A long-term study of 3770 patients with rheumatoid arthritis receiving baricitinib treatment showed that the standardized incidence ratios (IRs) of severe infection, herpes zoster, and MACEs were 2.6, 3.0, and 0.5, respectively. The IR of malignant tumours was 0.6 in the first 48 weeks, and remained stable thereafter (IR 1.0). 33 Clinical disease screening of patients before the application of JAK inhibitors and continuous monitoring during application are essential. 34 Some experts believe that more targeted second-generation JAK inhibitors selective for a single subtype, or topical application, show better effects and fewer adverse reactions. 35,36
Statement
This study was exempt from ethics review requirements, as approved by Beijing Chaoyang Hospital. | 2022-03-19T15:20:51.540Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "d2fea51cef4ea06e5659cb8eeba53c0896f2f7f3",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4ee8dcc752b445ee0617afd2f0ee964c93f853fa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2991315 | pes2o/s2orc | v3-fos-license | Prevalence of human pegivirus-1 and sequence variability of its E2 glycoprotein estimated from screening donors of fetal stem cell-containing material
Background Human pegivirus-1 (HPgV-1) is a member of the Flaviviridae family whose genomic organization and mode of cellular entry are similar to those of hepatitis C virus (HCV). The E2 glycoprotein of HPgV-1 is the principal mediator of the virus-cell interaction and as such harbors most of HPgV-1's antigenic determinants. HPgV-1 persists in blood cell precursors, which are increasingly used for cell therapy. Methods We studied HPgV-1 prevalence in a large cohort of females donating fetal tissues for clinical use. PCR was used for screening and estimation of viral load in viremic plasma and fetal samples. Sequence analysis was performed for portions of the 5′-untranslated and E2 regions of HPgV-1 purified from donor plasmas. Sequencing was followed by phylogenetic analysis. Results HPgV-1 was revealed in 13.7% of plasmas, 5.0% of fetal tissues and 5.4% of chorions, exceeding the prevalence of HCV in these types of samples. Transmission of HPgV-1 occurred in 25.8% of traceable mother-chorion-fetal tissue triads. For HPgV-1-positive donors, a high viral load in plasma appears to be a prerequisite for transmission. However, about one third of infected fetal samples acquired infection from non-viremic individuals. Sequencing of the 5′-untranslated region placed most HPgV-1 samples in genotype 2a. At the same time, a portion of the E2 sequence provided much weaker support for this grouping, apparently due to a higher variability. Polymorphisms were detected in important structural and antigenic motifs of E2. Conclusion HPgV-1 is efficiently transmitted to the fetus at early embryonic stages. A high variability in E2 may pose a risk of generation of pathogenic subtypes. Although HPgV-1 is considered benign and is no longer mandatorily tested for in blood banks, the virus may have adverse effects at target niches if delivered with an infected graft upon cell transplantation. This argues for the necessity of HPgV-1 testing of cell samples intended for clinical use.
Background
Human pegivirus was identified in 1995-1996 in a search for the aetiological agent of hepatitis in patients who tested negative for known hepatitis viruses [1,2]. The virus was initially named GBV-C/HGV. However, this name was later abandoned after numerous unsuccessful attempts to establish a reliable association with liver disease. The virus was assigned a new name, "pegivirus" [3], which now designates the corresponding genus [4]. The genomic organization of pegivirus is similar to that of HCV, featuring a 9.4-kb RNA genome (single-stranded, positive-sense), 5′- and 3′-untranslated regions (UTRs) and an open reading frame encoding a polyprotein subsequently cleaved by proteases to produce functional and structural proteins. The most studied of these are the E1 and E2 glycoproteins, which facilitate viral entry and harbor the majority of antigenic determinants [5]. In contrast to HCV, pegivirus does not seem to have an ORF encoding a distinguishable core protein [6]. The pegivirus genome contains less variability than that of HCV, particularly in the E2 region [7,8]. Nevertheless, phylogenetic studies were able to outline six genotypes [9][10][11], with another one suggested recently [12].
HPgV-1 gained increased attention after the demonstration that coinfection of HIV patients with HPgV-1 results in higher CD4+ counts and increased survival, and thus could be regarded as a favorable prognostic factor [13]. HPgV-1 genotypes appear to perform unequally in the inhibition of HIV cell entry [14,15]. Numerous studies have suggested a leading role for E2 in the HPgV-1-HIV antagonism [16]. In particular, as a membrane-targeting protein, E2 can perturb HIV gag assembly on the plasma membrane [17]. Of interest, membrane fusion and HIV interaction rely on motifs that also shape E2 immunoreactivity [18][19][20].
HPgV-1 can establish persistent human infection and is found in one to 19% of healthy donors depending on their socioeconomic status and lifestyle [5]. Mother-to-infant transmission occurs more frequently for HPgV-1 than HCV, and mother's viremia appears to be the determining factor [21][22][23]. Because of the lack of evident disease association [24], pegivirus has not been included in the list of infectious agents whose screening is mandatory for blood banks [25]. However, HPgV-1 was suggested to be linked to non-Hodgkin's lymphoma, perhaps by affecting immune regulation [26]. Interestingly, HPgV-1 replication sites were shown to reside in the bone marrow, spleen [27,28] and peripheral blood mononuclear cells [29]. In view of this, a hematopoietic stem cell precursor was hypothesized to be the primary target of HPgV-1 infections [30]. However, no stem cell infection by HPgV-1 has yet been reported in culture and clinics.
Hematopoietic stem cells (HSC) are considered to be the principal curative component in many cell therapy applications [31]. At a fetal age of less than 12 weeks, the liver and spleen contain a high amount of HSC [32] and hence offer some advantages for regenerative cell medicine [32][33][34][35][36]. Here we report data on HPgV-1 prevalence from the routine pathogen screening implemented at our Center. As a supplement to previous reports on mother-to-infant transmission [21][22][23], we focus on the establishment of HPgV-1 infection at early fetal stages. To estimate the genetic variation of HPgV-1 in donors of fetal stem cell-containing material, we sequenced the 5′-UTR and E2 regions and noted a high variability in the latter. Possible consequences of HPgV-1 delivery to the sites targeted by stem-cell therapy are discussed.
Samples
Plasma and fetal tissue samples were harvested after elective termination of pregnancy at the site of surgery. A vacuum-assisted procedure was used. Each donor signed informed consent for research and clinical use of donated samples. The activity of the Emcell Cell Therapy Center is covered by State licenses, ethics approval forms and certificates that can be found at www.emcell.com. The Center maintains a large collection of stem cell-containing samples for clinical needs. Screening results from a subset of these samples served as the basis of this study.
Fetal samples were processed as described earlier [33]. Briefly, aborted fetuses of 6-12 weeks of age were transported in a sterile transport medium made of Dulbecco's modified Eagle's medium (DMEM) without L-glutamine, with gentamicin (100 mg/ml; Thermo Fisher Scientific, Waltham, MA, USA). Fetuses were washed three times in Hank's balanced salt solution (HBSS) without calcium and magnesium (Sigma-Aldrich, St. Louis, MO, USA) and divided into organs. Chorionic connective tissue was separated from chorionic villi and then processed as a separate sample, hereafter referred to as "chorion". Whole fetal organs and chorions were washed again three times. The efficiency of this washing procedure for removing surface-bound microbial agents was demonstrated earlier [37, 38]. Fetal organs and chorions were homogenized mechanically in HBSS without calcium and magnesium. The cell suspension was filtered through a 100-μm filter (Becton-Dickinson, Franklin Lakes, NJ, USA) and cryopreserved in the presence of 5% dimethyl sulfoxide (DMSO; Sigma-Aldrich) in HBSS with the use of an ICE Cube 14 freezer (Sy-LAB Geräte GmbH, Neupurkersdorf, Austria). Samples were stored in liquid nitrogen.
Donor's blood was collected in a BD Vacutainer® Barricor™ Plasma Blood Collection Tube (Becton-Dickinson) and, no later than 4 h after collection, centrifuged at 800 rpm for 10 min. The collected plasma was in most cases immediately passed to screening for infectious agents, among which were HPgV-1 and HCV. In parallel, an aliquot of plasma was frozen in liquid nitrogen and re-used only once, in case additional tests were required.
Polymerase chain reaction (PCR) for screening and sequencing

Viral RNA was isolated using a Nucleospin Virus Dx kit (Macherey-Nagel, Duren, Germany) from an aliquot of fetal cell suspension containing about 10⁴-10⁵ cells in 300 μl of HBSS, or from 300 μl of plasma. Routine screening of samples was performed using Amplisens HGV and HCV PCR kits (Interlabservice, Moscow, Russia), in which reverse transcription (RT) and amplification are combined in a single step. To exclude cross-contamination, batches of 10 to 20 clinical samples were tested alongside a mock control sample that passed through the RNA preparation step. PCR results were detected by TaqMan probe fluorescence in a CFX96 real-time PCR system (Bio-Rad, Hercules, CA, USA). The amplification threshold Cq was determined as the PCR cycle at which the amplification kinetics exceeds the level of 50 relative fluorescence units (RFU). A two-tailed Student's t-test was used to estimate the significance of differences between mean Cq values. Viral load was estimated using a standard curve of the dependence of Cq on the log concentration of viral RNA. To plot this curve, we ran RT-PCR with dilutions of positive control samples of known HCV or HPgV-1 RNA concentration (copies/ml) supplied with the kits. Dilutions and calculations were done as per the manufacturer's manual.
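The standard-curve step described above amounts to a linear fit of Cq against log10 of the known RNA concentration, which is then inverted to convert a sample's Cq into copies/ml. The Python sketch below illustrates the idea; the dilution series and Cq values are invented placeholders, not the calibration data supplied with the kits.

```python
import numpy as np

# Hypothetical calibration series: known concentrations (copies/ml) and measured Cq
std_conc = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_cq = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

# Linear standard curve: Cq = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(std_conc), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency implied by the slope

def viral_load(cq):
    """Invert the standard curve to estimate viral RNA concentration in copies/ml."""
    return 10 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"Cq 21.0 -> {viral_load(21.0):.2e} copies/ml; Cq 18.4 -> {viral_load(18.4):.2e} copies/ml")
```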
Nucleotide sequence analysis
The quality of sequences was visually inspected and ambiguous bases were corrected using the Chromas trace viewer (Technelysium, South Brisbane, Australia). MEGA5.2 freeware [40] was used for sequence alignment (the ClustalW algorithm with default parameters), estimating evolutionary distances and inferring phylogenetic trees (the Neighbor-Joining algorithm, the Tamura-Nei model including both transitions and transversions, partial deletion of positions with 80% sequence coverage). The credibility of grouping was assessed by bootstrapping, a statistical method based on repeated resampling of each nucleotide position in the alignment and calculating the probability of obtaining the same grouping. In our analysis, 1000 bootstrap replicates were used. The following sequences were used as references for alignment (denoted by their GenBank accession numbers): U36380 for genotype 1, D90600 and AF104403 for genotype 2a, U63715 for genotype 2b, AB003288 for genotype 3, AB018667 for genotype 4, AY949771 for genotype 5, and AB003292 for genotype 6 [9, 39, 41].
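The distance-based workflow outlined above (alignment, pairwise distances, Neighbor-Joining tree, bootstrap support) can also be reproduced programmatically. The Python sketch below uses Biopython purely as an illustration: the actual analysis was done in MEGA5.2 with the Tamura-Nei model, which Biopython's DistanceCalculator does not provide, so a simple identity distance is used here as a stand-in, and "alignment.fasta" is a hypothetical input file of the aligned sample and reference sequences.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Hypothetical multiple alignment of sample and reference sequences (e.g. from ClustalW)
alignment = AlignIO.read("alignment.fasta", "fasta")

# Pairwise distances (identity model as a stand-in for Tamura-Nei) and an NJ tree
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")
nj_tree = constructor.build_tree(alignment)

# Bootstrap support: resample alignment columns 1000 times, then take the majority-rule consensus
consensus_tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)

Phylo.draw_ascii(nj_tree)
print(consensus_tree)
```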
HPgV-1 prevalence in plasma and fetal samples
We took advantage of a large dataset accumulated in the course of routine pathogen screening of plasma and stem cell-containing suspensions, which is part of the standard operational procedures implemented at our Center [33,38]. In this work, we focus only on suspensions derived from human fetal tissues (in each case collected after elective termination of pregnancy). The term "fetal tissues" means a cell suspension consisting of the liver and some other tissues, depending on preparation conditions and clinical needs. Chorions constitute another subgroup of samples. When analyzed as a single category, chorions and fetal tissues will hereafter be referred to as "fetal material". In addition, donor's plasma was routinely collected for diagnostic purposes and tested for pathogens along with fetal samples using commercial real-time PCR kits.
The screening revealed HPgV-1 RNA in 13.7% of plasma samples (Fig. 1a), while the prevalence of HPgV-1 in fetal samples was much lower (5.0% and 5.4% of tissues and chorions, respectively). HPgV-1 was detected more frequently than HCV, which showed a prevalence of only 3.1% in plasmas and 0.3% in fetal samples. Other routinely tested viruses (HIV, HBV, type 1/2 herpes simplex virus, Epstein-Barr virus, parvovirus B19) were each found in less than 0.01% of samples (data not shown). For comparison, we also provide the results of HPgV-1 testing in patients referred to our Center: HPgV-1 and HCV were detected in 4.7% and 0% of the patient plasmas, respectively (n = 147).
We next estimated HPgV-1 and HCV load in donor plasmas from the kinetics of PCR amplification. The test kits that we use for routine screening have similar sensitivities (500 and 380 copies per ml of plasma for HPgV-1 and HCV, respectively). The PCR efficiency is close to 100%, as declared by the manufacturer and noted in our runs (data not shown). Hence a comparison of PCR template quantities between samples can be drawn from the difference in their Cq, the threshold cycle at which the kinetics of amplification passes to the exponential stage (Fig. 1b). The lower the Cq, the higher the viral RNA content in the sample. For donor plasmas, the mean Cq was lower in HPgV-1-detecting PCR (Fig. 1c). The mean viral load, determined from the mean Cq (see Materials and Methods), was 3.14 × 10⁶ and 1.57 × 10⁴ copies/ml for HPgV-1 and HCV, respectively. These data suggest that, in the studied population, HPgV-1 can reach higher blood titers than HCV.
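With near-100% efficiency, each PCR cycle doubles the template, so a difference of ΔCq corresponds to roughly a 2^ΔCq fold difference in starting RNA. A minimal check of the figures quoted above (assuming ideal doubling):

```python
import math

# With 100% PCR efficiency, the fold difference in template equals 2 ** delta_Cq.
# The mean loads quoted in the text differ by about 200-fold, which corresponds to
# a Cq separation of roughly log2(200) ~ 7.6 cycles between the two assays.
hpgv_load, hcv_load = 3.14e6, 1.57e4
fold = hpgv_load / hcv_load
print(f"fold difference = {fold:.0f}, equivalent delta_Cq = {math.log2(fold):.1f} cycles")
```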
Dependence of HPgV-1 prevalence in fetal material on donor's viremia
To further classify HPgV-1 occurrence in preparations used for cell transplantation, we created a dataset of triads consisting of donor's plasma and samples of the donated chorion and fetal tissues. Eighty-nine triads comprising HPgV-1-positive plasma were analyzed (Fig. 2a). Pegiviral RNA was detected in donated fetal samples in 43.7% of these triads. Chorions and tissues were HPgV-1-positive in 16.9% and 11.2% of all triads, respectively, whereas both types of fetal samples contained pegiviral RNA in 14.6%. The higher HPgV-1 detection rate in chorions shows that the chorion has a higher capacity to acquire HPgV-1, which could be expected from the anatomy of embryo-maternal contact.
It should be noted that the stringency of our sample preparation procedure and the use of mock controls (see Materials and Methods) make contamination of harvested samples with HPgV-1-infected blood unlikely. The observed difference in HPgV-1 prevalence between chorions and tissues further disproves the possibility of lab contamination, which would otherwise produce uniform data invariant to the sample type.
Next we asked whether the odds of finding HPgV-1 in donated tissues depend on the viral level in donor's plasma. We examined Cq values of PCR from HPgV-1-positive plasma samples. For triads with non-infected and infected cognate tissues, the mean Cq was 21.0 and 18.4, respectively (Fig. 2b). This corresponds to viral loads of 1.27 × 10⁶ and 1.27 × 10⁷ copies/ml, respectively. Therefore, a higher viral load in plasma increases the chance of finding the virus in fetal samples. In addition to fetal samples which acquired HPgV-1 from viremic donors (infected tissues; Fig. 2b), there was an interesting subset of 15 triads where plasma was pegivirus-negative and at least one fetal sample was nevertheless infected. In this subset, infection was detected in six chorions and eight fetal tissue samples. To give a numerical estimate of HPgV-1 vertical transmission, we took a rather conservative approach and considered only triads in which the virus was found either in fetal tissues only or in both fetal tissues and chorion. Triads in which only the chorion was infected were not included because of the possibility of an admixture of maternal blood. Thus we scored 23 of 89 triads as transmitting, which gives a mother-to-fetus transmission rate of 25.8%.
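The 25.8% figure is a point estimate from 23 transmitting triads out of 89; one quick way to attach an uncertainty to such a proportion (not part of the original analysis, shown here only as an illustration) is a Wilson score interval:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(23, 89)
print(f"transmission rate 23/89 = {23/89:.1%}, 95% CI ~ {lo:.1%} - {hi:.1%}")
```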
Nucleotide sequence variability of 5′-UTR and E2 regions

RNA-containing viruses are known to have the potential to generate multiple subtypes (quasispecies), providing rich material for selection of those able to evade host defense mechanisms and cause disease. In pegivirus, variability in the 5′-UTR and E2 regions contributes significantly to the net genotypic diversity [9,10,12]. We analyzed portions of these regions by sequencing the corresponding PCR products generated from viremic donors. A genotype 2a sequence was used as a reference for the sequence nomenclature. Position 1 is the ATG start codon of the putative polyprotein [6], which corresponds to position 555 in the reference strain D90600. Consequently, the 5′-UTR primer set [39] targets the fragment of −425 to −184 nucleotides (nt), and the E2 set [9] targets the 998-1680 fragment. For 5′-UTR and E2, we obtained 29 and 10 sequences and built alignments covering −383 to −189 nt and 1035 to 1613 nt, respectively.
The mean pairwise evolutionary distance between references was larger for E2 than 5′-UTR sequences by 0.035 units (Table 1). This agrees with a higher capacity of E2 to support phylogenetic relationships suggested earlier [9]. For samples from HPgV-1-positive donors, we detected an even larger difference in the mean pairwise distances between E2 and 5′-UTR alignments (0.074 units). Accordingly, the mean distance for E2 between samples (0.115) approaches the distance between genotype references (0.140). This might reflect an accelerated accumulation of polymorphisms in the E2 region of HPgV-1 in our samples.
To genotype HPgV-1 in viremic donors, we performed the 5′-UTR phylogenetic analysis of the references and 28 of our HPgV-1 isolates (Fig. 3). In most of these samples (25 of 28), the virus was found to belong to genotype 2a. Genotype 2b was assigned in two samples, and one sample was placed in the genotype 3 group. Note that bootstrap values for the revealed clusters were quite high (82% for genotype 2 and 99% for genotype 3), whereas deeper genotyping (2a and 2b) was more weakly supported by bootstrapping (43%). Sequences in the genotype 2a subcluster were very similar (evolutionary distances below 0.017) and could not be classified further reliably (Fig. 3, inset).
It was reported earlier [9,10,39] that the portions of E2 and 5′-UTR sequenced in this work are sufficient to reproduce the genotypes delineated using the full-length sequences. Hence we then performed the phylogenetic analysis for the E2 region. Nine of ten E2 sequences were placed to the genotype 2 cluster featuring bootstrap support of only 52%. Inclusion of more references (namely, U44402 and U45966 for genotype 2a and D87709 for genotype 3) did not change the tree topology (data not shown). The genotype 2 cluster contained samples for which both E2 and 5′-UTR sequences were available (bold in Fig. 3). However these samples did not exhibit as strong adherence to genotype 2 in E2 analysis as they did in 5′-UTR analysis. Indeed, the bootstrap values for the genotype 2 clusters were different between the two analyses (82% and 52%, respectively). Moreover, in the E2 tree, branch bifurcation points for samples and references lie in the same range of distances (0.038-0.075; except for isolate v43). This is in contrast to the 5′-UTR tree where bifurcation of references occurs at larger distances than that of samples. It appears that the reliability of E2-based genotyping has been compromised, further arguing for a higher variability of the E2 region of HPgV-1 in donors of fetal material.
Sequencing of the 5′-UTR and E2 regions in fetal samples was not successful. The yield of sequencing-grade cDNA was extremely low, probably due to a low amount of HPgV-1 and/or RNA degradation. We were able to sequence the 5′-UTR in only four samples (v34, v35, v45 and y21) and E2 in two samples (v31 and v35). In the portions where alignment could be performed, the sequences were identical to those representing HPgV-1 in the cognate maternal plasma (data not shown).
An elevated variability of the sequenced portion of E2 at the amino acid level
The E2 glycoprotein carries a series of functional motifs involved in membrane fusion and interaction with components of the immune system [5]. We analyzed the alignment of translations of the nucleotide sequence region from 1080 to 1567 nt, which corresponds to the polyprotein region of amino acids 361 to 537 (positions 157 to 333 in the E2 glycoprotein [20]). The profile of Shannon entropy, which is one of the ways of visualizing protein sequence variability, contains four peaks reaching the value of 1 (Fig. 4), indicative of strong variability. A high prevalence of non-synonymous nucleotide substitutions (dn/ds ratio) was detected at five positions that coincide with the entropy peaks. A shift towards higher dn/ds is usually interpreted as an indication of positive selection acting at the analyzed positions.
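Column-wise Shannon entropy of a protein alignment is straightforward to compute: for each position, the frequencies of the residues observed in that column are plugged into H = −Σ p_i log2 p_i. The sketch below uses a toy alignment, not the actual E2 data, to illustrate the calculation behind profiles such as the one in Fig. 4.

```python
import math
from collections import Counter

def column_entropy(column, ignore_gaps=True):
    """Shannon entropy H = -sum p_i log2 p_i of one alignment column."""
    residues = [r for r in column if not (ignore_gaps and r == "-")]
    counts = Counter(residues)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_profile(aligned_seqs):
    """Per-position entropy profile of an aligned set of equal-length sequences."""
    return [column_entropy(col) for col in zip(*aligned_seqs)]

# Toy alignment: position 4 is fully variable, the rest are conserved
toy = ["MKTAL", "MKTGL", "MKTSL", "MKTCL"]
print([round(h, 2) for h in entropy_profile(toy)])   # [0.0, 0.0, 0.0, 2.0, 0.0]
```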
Interestingly, the variability peaks are located in putative secondary structure motifs such as extended strands and alpha helices of the polypeptide chain. These elements, in their turn, appear to be associated with a series of antigenic determinants. Examples of this kind are (i) the epitopes for B-cells and CD38+ T-cells predicted by computational analysis [45] and (ii) the epitope for neutralizing murine anti-E2 monoclonal antibodies revealed in an earlier search for immunodominant antigenic sites [18]. Furthermore, the region between two alpha helices (270-300) harbors a membrane fusion peptide suggested to play a key role in viral entry [46] and a largely overlapping putative HIV-inhibiting peptide [19]. Other polymorphism-tagged extended strands also appear to be linked to functional elements such as a putative glycosylation site (position 197) [5] and a palmitoylation motif (167-175) [45]. Although no role has yet been specified for these elements in the HPgV-1 lifecycle, glycosylation and palmitoylation are known to play a role in altering the dynamics of protein-membrane interactions [47].
We next questioned what sorts of amino acid substitutions are prevalent in the eight selected variable motifs (Fig. 5). Three polymorphisms associated with secondary structure elements add or eliminate a phosphorylation site (positions 183, 204 and 228). Addition of a novel phosphorylation site (although with a low reliability of prediction) is "attempted" at position 260. This change seems to be supported by selection, as argued by a peak in the dn/ds profile (Fig. 4). On the other hand, there were two homotypic substitutions (producing no change in the overall amino acid property), Ser236Thr and Ala274Val. These might be considered as evidence of the functional and structural indispensability of the corresponding motifs. Remarkably, substitutions at 183, 204, 228, 260 and 274 also occur in genotype references, suggesting common driving forces of molecular evolution. In contrast, there were substitutions absent from the sequences of the selected genotype references. Among these substitutions, noteworthy are Glu287Lys, Gly298Glu and Arg299Pro/Gly, which drastically change the amino acid charge. It is remarkable that these substitutions fall in antigenic epitopes, further implying an evolutionary search for novel immunity-evading variants.

Table 1 (caption). Sequences of 5′-UTR and E2 regions were analyzed (N, number of sequences). Pairwise distances were estimated separately for two groups of sequences (reference genotypes and samples) with the use of the Tamura-Nei model. Means of distances within each group are given with corresponding standard deviations (S.D.). Differences between the means (bold) for references and isolates are significant at the p < 0.001 level (Student's t-test).
Discussion
Most previous studies on HPgV-1 vertical transmission were based on the detection of the virus in new-born infants [22,23,48]. Here we shifted the focus to early-age fetuses and found a high level of pegivirus prevalence (Fig. 1) and transmission (Fig. 2). Our target cohort had an elevated proportion of individuals carrying the virus in plasma (13.7%; Fig. 1a) as compared to two other groups: customers of a commercial medical lab ("The DNA Lab", Kyiv, Ukraine) and our Center's patients (8.7% and 4.0%, respectively). Interestingly, this order of decreasing percentages might correlate with the socioeconomic differences in the populations these cohorts likely originate from. It seems logical to assume that women donating aborted fetal material can afford much less medical care than patients receiving expensive stem cell-based treatment at our Center. This agrees with the worldwide prevalence profile for HPgV-1 showing elevated levels in developing countries [5]. The occurrence of HPgV-1 was higher in chorions than fetal tissues (Fig. 2a), suggesting gradual transmission. It remains unclear whether HPgV-1 RNA detected by PCR represents a true intracellular infection or results from an uptake of maternal blood during earlier embryofetal stages. Also, it is impossible to show that we are dealing with an infection-competent virus because of the lack of a cell culture system suitable for in vitro infection studies. A considerable proportion of infected fetal samples derived from non-viremic donors (Fig. 2c). This presumes the residence of HPgV-1 along the reproductive route, supporting the possibility of infecting the baby during delivery [23,49]. HPgV-1 persistence at vaginal surfaces seems further plausible given a high frequency of its sexual transmission [50]. Therefore, we conclude that the fetus may acquire the virus not only via blood. Nevertheless, the blood level remains the main prognostic factor for HPgV-1 transmission (Fig. 2b), agreeing with the results of earlier perinatal screens [21,23].

Fig. 3 (caption). Phylogenetic trees inferred from 5′-UTR and E2 sequences. The Neighbor-Joining method was applied to the alignment of sequences of HPgV-1 samples and references: U36380 for genotype 1, D90600 and AF104403 for genotype 2a (2a-1 and 2a-2), U63715 for genotype 2b, AB003288 for genotype 3, AB018667 for genotype 4, AY949771 for genotype 5, and AB003292 for genotype 6. Bootstrap support values higher than 40% are given next to the branches. Bold: samples for which both 5′-UTR and E2 sequences are available. Some sequences were collapsed into groups denoted by triangles. The genotype 2a-2 group is enlarged in the inset on the right. Scale: evolutionary distances represented as the number of base substitutions per site.
HPgV-1 in most donors belongs to genotype 2 (Fig. 3), which is predominant in Europe [10]. While genotyping by phylogenetic analysis of the 5′-UTR sequence alignment was quite reliable, sequences of a portion of E2 barely support their grouping with the genotype 2 references. This could be due to accelerated evolution in E2, suggested by the analysis of inter-sample and inter-genotypic distances (Table 1). Polymorphisms were found in E2 motifs possessing important structural and functional elements that play a key role in cell binding, HIV inhibition (Figs. 4 and 5) and formation of the antigenic landscape. The latter is especially important given the attempts of pegivirus to evade immune pressure [51]. However, HPgV-1 seems to balance between the need for a broader E2 antigenic diversification and constraints imposed by the necessity to preserve important functions. This may explain the seeming paradox that pegivirus, reported to be less variable than HCV [7], reaches higher blood titers and rates of vertical transmission (Figs. 1 and 2 and previous reports [5]). It was suggested that antibody escape is not the first priority for pegivirus [52]. We speculate that, while HCV tends to accumulate as much E2 variability as possible, the pegiviral strategy may focus more on the preservation of E2 natural functions, resulting in a broader tropism, particularly towards hematopoietic and immune cells. Indeed, the primary replication sites of HPgV-1 were suggested to localize not in the liver but in the bone marrow and spleen [28]. Establishment of infection in these tissues may require a higher fidelity of functions involved in cell binding, thereby limiting the range of variability in the corresponding motifs.
The hypothetical affinity of HPgV-1 for hematopoietic precursors [30] raises the possibility that contaminated fetal cell suspensions (Fig. 1) may carry this virus within fetal HSC. Therefore, the accidental use of HPgV-1-contaminated cells for transplantation may pose a risk of transporting the virus to the HSC homing sites. Given the inhibiting effect of pegivirus on proliferation (demonstrated recently for differentiating immune cells [51]), the efficacy of HPgV-1-infected stem cells in replacement therapy may be reduced. The consequences may be further aggravated in immunosuppressed patients. Furthermore, we note a higher rate of HPgV-1 detection in comparison to other viruses included in the mandatory test panel (HCV, HIV, HBV, herpes simplex virus, Epstein-Barr virus, parvovirus B19). Therefore, we argue that it would be rather premature to abandon pegivirus PCR testing of samples designed for stem cell therapy, despite the fact that such a test is not mandatory for blood banks. On the other hand, it looks encouraging to employ pegivirus, which is benign and capable of reaching high titers, as a vector for HSC-mediated delivery of corrected genetic alleles. This could form the basis of widely discussed strategies for correcting genetic disorders [31,53], provided all the effects of HPgV-1 in targeted niches are properly evaluated.
Conclusions
HPgV-1 displays a higher prevalence in donors of fetal stem cell-containing material than HCV. HPgV-1 vertical transmission is quite frequent and could be detected at early development stages. Donor's blood is the main, but not the only, source of fetal infection. Most HPgV-1 isolates belong to genotype 2, as could be determined by sequence analysis of the 5′-UTR. Sequencing of the E2 glycoprotein in a smaller subset of samples placed them in genotype 2 as well, but with a lower reliability. This could result from accelerated accumulation of variability in E2 exceeding the extent optimal for reliable genotyping. Polymorphisms often occur in E2 motifs bearing structural, functional and immunogenic importance. An ongoing selection for better-fit variants could be implied. This makes it difficult to predict the effect of HPgV-1 persistence in host cells, particularly HSC, believed to be the primary site of HPgV-1 replication. Given the wide therapeutic use of fetal stem cells (represented largely by HSC), we advocate the necessity of HPgV-1 testing of fetal material and donors thereof.
Fig. 4 … are shown. References are the same as in Fig. 3. Positions are as per D90600 (2a-1). Dots represent identity to the consensus. Dash: sequence is not available. Predicted secondary structure elements and epitopes are shown in the "Motif" row. Phosphorylation potential is given in likelihood units. Values above the threshold (0.5) predict a high probability of phosphorylation at the site.
Acknowledgments
We thank Ihor Dubrovskyi, CEO of "The DNA Laboratory", for the data on pegivirus prevalence. We also thank Dr. Maria Obolenska (Institute of Molecular Biology and Genetics, Kyiv, Ukraine) for kindly allowing us to use her lab's facilities for some DNA manipulations.
Funding
This work was funded by Emcell.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Authors' contributions YV designed the study, summarized screening data, performed PCR for sequencing, analyzed sequencing results, and wrote the manuscript; IK, the main expert on HPgV-1, performed routine PCR for screening; KK performed routine PCR for screening; KS prepared fetal samples. All the authors thoroughly read and approved the manuscript.
Ethics approval and consent to participate Each woman donating fetal material gave informed consent and signed an appropriate form. The research and clinical practice at our Center are performed according to the Law of Ukraine "On transplantation of organs and other anatomical materials to the person", the Law of Ukraine "On licensing types of economic activity", under the License of the Ministry of Health of Ukraine for Medical practice, the License for the activities of the bank of umbilical cord blood, other human tissues and cells issued by the State Service of Ukraine on prevention of HIV-infection/AIDS and other publicly dangerous diseases (No. 222-VIII, 02.03.2015). According to the licensing conditions, an ethical approval is required and has been granted by the Bureau for ethical policy and patient rights of Emcell.
Consent for publication N/A. | 2017-09-02T08:07:36.945Z | 2017-08-31T00:00:00.000 | {
"year": 2017,
"sha1": "dacde46d239020c5e612c1a9b78c0760b58a304c",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-017-0837-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dacde46d239020c5e612c1a9b78c0760b58a304c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237467752 | pes2o/s2orc | v3-fos-license | An Ominous Cause of Headache in a Teenager
We present the case of an adolescent male who presented to the emergency department with headache and vomiting. We discuss the differential diagnosis and the need to maintain a high index of suspicion to avoid missing ominous causes of headache. In this case, the patient had a pineoblastoma, detected on a noncontrast CT scan. The CT scan was done as part of the emergency department workup to evaluate headache accompanied by vomiting in this otherwise healthy teenager.
Introduction
The pineal gland is a part of the endocrine system located in the brain. It is responsible for melatonin secretion. Pineal parenchymal tumors are tumors of the pineal gland and include pineocytomas, papillary tumors, pineal parenchymal tumors of intermediate differentiation, and pineoblastomas [1]. Pineoblastomas are most commonly diagnosed in children and young adults [2]. On average, the age of onset is 12.6 years [3]. Adult cases comprise less than 10% of overall cases, and adult pineoblastoma cases have been found to require different treatments than pediatric cases [4]. There is a male predominance, particularly among the pediatric population [5].
Patients with pineoblastoma often present with nonspecific headache and vomiting from elevated intracranial pressure, resulting from a buildup of cerebrospinal fluid [6]. The most common symptoms that present in pediatric patients with pineoblastoma are headache and vomiting, as well as weakness, unsteady gait, dizziness, and diplopia [5]. More rarely, patients present with Parinaud syndrome, consisting of double vision, fever, and challenges in speaking [7]. Variants in the DICER1 gene have been identified as a risk factor for pineoblastoma, and patients with RB1 mutations have worse outcomes than those lacking the mutations [8]. Age is also considered a risk factor as these tumors are more prevalent in children and adolescents. This case report will focus on an adolescent male who presented to the emergency department and was diagnosed with a pineoblastoma.
Case Presentation
A 16-year-old boy with no significant past medical history presented to the emergency department with headache, neck and back pain, and stiffness for two to three days, accompanied by six episodes of vomiting on the day of presentation. He was taking acetaminophen and ibuprofen without significant relief. He denied fever, upper respiratory symptoms, chest pain, shortness of breath, abdominal pain, dizziness, or weakness. His vaccines were up-to-date and he denied sick contacts or drug use.
On examination, his vital signs were unremarkable apart from an elevated blood pressure of 151/70 mmHg. He was alert and oriented. His pupils were equal and reactive bilaterally with some photophobia. He had decreased extension and flexion of his neck, and both midline and paraspinal cervical tenderness. He had equal strength and sensation bilaterally. The remainder of the examination was unremarkable.
The patient was empirically given ceftriaxone and dexamethasone due to concern for meningitis. Noncontrast brain CT was obtained in preparation for a lumbar puncture to evaluate for meningitis and demonstrated hydrocephalus with transependymal cerebrospinal fluid migration (Figure 1).
FIGURE 1: CT scan demonstrating pineoblastoma (arrow).
CT: computed tomography
An obstructive hyperdense lesion was noted at the level of the third ventricle, likely related to the pineal gland. The patient was transferred to a pediatric hospital where an MRI revealed a pineoblastoma. A ventriculoperitoneal shunt was placed, the tumor was resected, and the patient was started on radiation therapy.
Discussion
The clinicians on this case conducted a thorough medical history and physical examination. The presence of neck pain, stiffness, and vomiting frequency were causes of concern. The patient was empirically covered for meningitis as a precaution. Performing a noncontrast brain CT during prelumbar puncture protocol was critical to detecting the pineoblastoma. Lumbar puncture to test for meningitis was not performed due to the detection of the mass.
While pineoblastomas are more prevalent in children compared to adults, they remain rare, and the presentation is common to other conditions. This patient exhibited the two most common symptoms, headache and vomiting, along with others associated with the disease. The patient's age was also a risk factor for pineoblastoma. However, these symptoms and risk factors are by no means indicative of the presence of a pineoblastoma.
Differential diagnoses in an adolescent male with headache and vomiting symptoms include meningitis, migraine, vertebral or carotid artery dissection, subarachnoid hemorrhage, sleep deprivation, stress, or substance abuse-related conditions such as cannabinoid hyperemesis syndrome ( Figure 2).
FIGURE 2: Causes of acute headache in adolescents.
The patient was treated for meningitis as a precaution, though his current vaccination status and lack of fever were suggestive of an alternate etiology. Migraines can also present as headache and vomiting. Approximately 28% of adolescents have migraines, albeit with a female predominance [9]. Repeated vomiting and nausea are also symptoms of cannabinoid hyperemesis syndrome, associated with regular cannabis usage, which has become more prevalent of late [10], and many patients routinely deny drug use. In the United States, by age 16, 29.6% of teens have used marijuana and 14.4% are current users [11].
Because of the frequency of drug use denial and the popularity of marijuana among this age group, it can be tempting to dismiss these symptoms in teens as due to this syndrome. The relative prevalence of these other conditions compared to a pineoblastoma can make it easy to misdiagnose a patient presenting with vomiting and headache. However, a misdiagnosis would lead to late detection of the tumor, which can impact the prognosis.
Numerous factors have been found to influence pineoblastoma prognosis. Patients whose disease has not disseminated at the time of diagnosis have a higher two-year survival rate than those whose disease has disseminated at the time of diagnosis. Other factors that influence prognosis include aggressive surgical resection, chemotherapy, and X-ray therapy, with the most effective being a combination of the three [4]. Studies are inconclusive regarding the impact of tumor size on prognosis. In one study, tumor size greater than 30 mm was associated with a poorer prognosis than tumor size less than 30 mm, though this was not found to be statistically significant [12]. Another study found that measurement of tumor at diagnosis did not impact overall survival [5]. Further research is needed to confirm the impact of these factors. However, the impact of disease dissemination and the possible impact of tumor size on prognosis highlights the importance of an early diagnosis.
Conclusions
This case could easily have been missed. The patient's symptoms of headache and vomiting could have been attributed to other, more common, and relatively benign conditions. In the case of this patient, such a misdiagnosis could have greatly delayed tumor diagnosis and treatment, which could have negatively impacted his prognosis. As with many case reports, this one reminds us of the importance of performing a careful history and physical examination and maintaining a high index of suspicion for potentially ominous causes of headache.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-09-11T05:25:42.440Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "9ad413a1acb5c55e64c2c915cd4a6ba5ebe31ac0",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/67166-an-ominous-cause-of-headache-in-a-teenager.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ad413a1acb5c55e64c2c915cd4a6ba5ebe31ac0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264967009 | pes2o/s2orc | v3-fos-license | Local Group HVCs: Status of the Evidence
The evidence for locating the High Velocity Clouds in the Local Group is summarized and evaluated. Recent measurements of the H$\alpha$ surface brightness and metallicity of a number of HVCs appear to be fatal to the Galactic fountain as a significant contributor to the HVC phenomenon, but not to the existence of the fountain itself. Observations of extragalactic analogues to HVCs remain the {\it sine qua non} for deciding whether the Local Group hypothesis is viable, but the constraints based on existing surveys appear to be rather weak. MgII quasar absorption lines restrict how many HVC analogues exist at intermediate redshift, depending on where these lines originate. It is concluded that the evidence remains ambiguous, none of the main hypotheses is fully consistent with all of the data, and the Local Group hypothesis remains a viable explanation for the HVC phenomenon.
Introduction
Several years ago, the old hypothesis that the High Velocity Clouds (HVCs) are members of the Local Group was revived by Blitz et al. (1999) in a modern cosmological context. The revival of this idea generated some interest in the community because the most formidable objections to it were obviated by the introduction of dark matter, and because the HVCs then played an important role in galaxy formation and evolution. If the idea were right, the HVCs would be imbued with cosmological significance, and could be studied in some detail because they are quite close at hand. Most important, the Blitz et al. (1999) study made a number of specific predictions that could be tested with observations that are relatively straightforward, and it could be learned, in principle, whether the idea is right or wrong in short order. Some of these tests were carried out, and it turned out that the results, rather than clarifying the issue, added complications, and the nature of the HVCs remains murky. Some of these will be reviewed here to give a flavor of where things stand as of this writing. Space limitations preclude reviewing all of the relevant observational material, but the two other most commonly discussed hypotheses in the recent literature will also be addressed: the Galactic fountain (Shapiro and Field 1976), and stripping from the Magellanic clouds and other dwarf galaxies in the Local Group.
It is worth noting that some authors point out that the HVCs are likely to be a composite phenomenon, with more than one origin, and the discussion of the HVCs ought to take this diversity into account. This view begs the issue on two counts. First, whether or not the HVCs have a single origin, there is likely to be one dominant origin for the HVCs responsible for either most of the mass or most of the individual catalogued clouds, and the application of Occam's razor demands that we know what that dominant origin is. Perhaps more important, if the clouds are of Local Group origin, then they almost surely have cosmological significance, and they must have counterparts throughout the Universe. Thus the real question is whether the HVCs play a significant role (past or present) in galaxy formation and evolution, or whether they are simply curiosities related to the Milky Way alone.
The Modern Local Group Hypothesis
Of the various pieces of evidence for and against the numerous hypotheses proposed to explain the dominant origin of the HVCs, most are rather weak and ambiguous (e.g. Wakker & van Woerden 1997), and each hypothesis is probably more notable for its weaknesses rather than for its strengths. The modern Local Group hypothesis, however, has the advantage of deriving from a simple dynamical argument that explains the most fundamental aspects of the available data. Using the relatively complete HVC catalogue of Wakker and van Woerden (1991), Blitz et al. (1999) showed that a single compact hypothesis could explain both the spatial and velocity distribution simultaneously of HVCs over the entire sky. If the Local Group hypothesis turns out to be incorrect, the competing ideas will have to reproduce both the distribution on the sky and the kinematics of the clouds in a straightforward manner, something none of them has been able to do so far.
The model identifies the HVCs with the earliest structures to form in the Universe, and they are thus necessarily dark matter dominated. Using only the gravity of the Milky Way and M31 (with minor modifications), the model reproduces the observed spatial concentrations on the sky, the shape of the envelope of observed velocities, the amplitude of the distribution and the preponderance of higher absolute velocities in the northern hemisphere. Because the model is so simple, it was possible to make several predictions related to the Hα surface brightness of the clouds, metallicities, and the detectability of extragalactic analogues, and to contrast these with the Galactic fountain model, the model favored by most writers on the subject during the 1990s.
The Hα Test
In the late 1990s, Hα had been detected toward the largest of the HVCs, and toward the Magellanic Stream (MS - Weiner & Williams 1996; Tufte, Reynolds & Haffner 1998; Bland-Hawthorn et al. 1998). The largest HVCs are close to the Milky Way in all models, and have a mean distance of about 10 - 20 kpc in the LG hypothesis, consistent with direct distance determinations along two lines of sight (Danly, Albert & Kuntz 1993; van Woerden et al. 1998). There was some debate initially about whether the source of excitation was photoionization from escaping UV radiation from the Galactic plane, or shock heating by passage of the clouds through a tenuous Galactic halo. In the Local Group hypothesis, most of the HVCs are at distances larger than 50 kpc and should exhibit Hα surface brightnesses smaller than the weakest of the Hα detections regardless of the excitation mechanism. Local Group HVCs will have either lower incident ionizing radiation than the detections for the MS, or they will impinge on lower density halo gas, but in either case, the Hα emitted from the HVCs should have lower surface brightness.
Several sets of observations were carried out, but the most extensive to be published to date are those of Weiner, Vogel and Williams (2001, this volume) who observed HVCs in the southern hemisphere. It had been expected that most of the HVC detections would either be considerably fainter than the measurements of the MS, if the HVCs are Local Group objects, or much brighter, if they are part of a Galactic fountain. By calibrating the measured surface brightness to a cloud or clouds of known distance, absolute distances to the clouds can be determined by tying the calibration to either a model of the ionizing radiation escaping from the Galaxy or to estimates of the halo density if the HVCs are shock heated.
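To make the calibration idea concrete, the toy sketch below ranks clouds by distance under the deliberately crude assumption that the Hα surface brightness simply dilutes as the inverse square of the distance from the ionizing source; real models of the escaping Galactic radiation field (and any shock contribution) are more involved, and the calibrator distance and intensities used here are purely illustrative.

```python
# Toy illustration of the calibration logic only: assume the H-alpha intensity
# of a photoionized cloud scales as 1/d^2 from the ionizing source, and scale
# distances off a single calibrator cloud. All numbers here are invented.
import math

d_calibrator_kpc = 10.0    # calibrator cloud with an independently known distance (assumed)
I_calibrator_mR = 200.0    # its H-alpha intensity in milli-Rayleighs (assumed)

def crude_distance_kpc(I_mR: float) -> float:
    """Distance implied by simple inverse-square scaling of the calibrator."""
    return d_calibrator_kpc * math.sqrt(I_calibrator_mR / I_mR)

for intensity in (200.0, 50.0, 8.0):   # fainter emission implies a larger distance
    print(f"I = {intensity:6.1f} mR  ->  d ~ {crude_distance_kpc(intensity):5.1f} kpc")
```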
Judging from Figure 2 of Weiner et al., the situation is much more complex. It had been hoped that the MS, with its known distance, would serve as a calibration for the Hα observations, but it turned out that observed fluxes toward the MS vary by two orders of magnitude, making the stream all but useless for calibration purposes. Second, two HVCs, complexes A and M, have measured distances smaller than the MS clouds, yet their Hα surface brightness is fainter than many of the MS lines of sight. Using a conservative photoionization model and using clouds A and M as calibrators, Weiner et al. conclude that the fainter HVCs they observed have distances inconsistent with an origin in a Galactic fountain. They point out, however, that most of the MS detections are not consistent with the ionization model they use.
More puzzling still is the detection by Weiner et al. of the HI associated with Sculptor, a dSph galaxy with a large cloud of HI at a distance of 80 kpc. This galaxy should be a relatively good calibrator for the Hα measurements because the field-of-view covers a large fraction of the cloud, and because the cloud has a well-determined distance. Many of the Hα detections in Weiner et al. are factors of 2 - 5 below the measured surface brightness of Sculptor, suggesting that these clouds have distances of 100 - 200 kpc! Unfortunately, the interpretation of the Hα measurements seems to be fraught with uncertainty, and instead of giving the clean result that had been hoped for, the measurements have raised numerous questions in their own right. So far, measurements have only been made toward southern hemisphere HVCs, which according to the Local Group hypothesis, should be closer than average, and some of them may be contaminated by debris from the MS. Another test would be to observe the small HVCs with high negative velocities within a radian of M31, which should have large distances from the MW and well-determined distances from M31. Where the projected distance from M31 is large, the measured Hα fluxes should be lower, on average, than those observed in the southern hemisphere.
The Metallicity Test
An unambiguous prediction of the Local Group hypothesis is that the HVCs should have substantially subsolar metallicities. But how low is appropriate? If the HVCs are truly primordial, their metallicities should be zero, but it has been difficult to find any intergalactic gas with zero metallicity, especially for gas with column densities like that of the HVCs: the Lyman limit systems. Blitz et al. (1999) suggest values < 0.1 - 0.3 solar based on various measurements of intergalactic gas. HVCs originating in a Galactic fountain should have metallicities that are at least solar because most come from regions interior to the Solar distance and because they are accelerated by supernovae, stellar winds from O stars, etc. from which they should get higher than solar metallicities. In order to achieve high velocities relative to the LSR, these clouds must not become too well mixed with halo gas, and if this is the case, the high metallicities will be maintained. Gibson (2001, this volume) summarizes most of the metallicity data published to date (including some of his own unpublished data), and shows that most of the HVCs do indeed typically have metallicities of 0.3 or less. Many of them do not have measurements of the ionized component, and may have metallicities lower than the tabulated value. The highest metallicity cloud in Gibson's list is probably different from all other catalogued HVCs: its original HI detection is not confirmed in the Leiden-Dwingeloo HI survey.
The most straightforward conclusion from the metallicity data is that the measurements are inconsistent with the Galactic fountain model, seemingly fatal to it. Even though the number of good measurements is small, no bona fide HVC has either solar or supersolar metallicity! Combined with the results from the Hα test, it seems that the Galactic fountain model can be ruled out as being a significant contributor to the HVC phenomenon. This does not mean, of course, that the Galactic fountain doesn't exist. Indeed, Blitz et al. give other evidence for the existence of the Galactic fountain, but it does not, apparently, play an important role in the HVC phenomenon. The model, it seems, has to be fundamentally altered to fit the data.
Nevertheless, the metallicity data give values somewhat higher than what might be naively expected from the Local Group hypothesis. Furthermore, all but one of the HVCs that have been measured are in the southern hemisphere where there may be some confusion with MS gas; the interpretation of the origin of the metallicities is therefore not unambiguous one way or the other, as Gibson (this volume) points out. As is true for the Hα test, it would be useful to have some of the small, high negative velocity clouds near M31 measured, but finding suitable background sources is difficult.
The Extragalactic Analogue Test
In the absence of direct distance measurements, which probe only the nearest clouds, the most direct test of the Local Group hypothesis is to find analogues in other groups similar to the Local Group. The number density of HVC analogues ought to be related to environment, and it is likely that in rich clusters, for example, the HVC analogues have been, for the most part, accreted. In these systems HVC analogues would still be expected to occupy the outer reaches of a cluster, but perhaps with rather low surface filling fraction.
Estimates of the detectability of HVC analogues require knowledge of the size and column density (or mass) of the local HVCs. Blitz et al. provided an estimate based on an assumed median distance of 1 Mpc and no correction for beam convolution, which could not be made from the Wakker & van Woerden (1991) data. The size and mass estimates have some flexibility however; the median distance can be as close as 500 kpc without producing difficulties for the dynamical modeling (but requires a larger ratio of dark matter to baryons), and beam smearing is clearly important for the small clouds that make up the majority of the sample (e.g. Wakker & van Woerden 1991; Braun & Burton 1999). Assuming a distance of 700 kpc and an estimate of the effect of beam smearing from the data of Hartmann & Burton (1997), a typical HVC has a diameter of about 14 kpc, a typical HI mass of about 5 × 10⁶ M⊙, and a similar ensemble of HVCs in a distant group has a surface filling fraction on the sky of about 1%.
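The quoted numbers can be checked with elementary geometry and the standard 21-cm mass relation; in the sketch below the angular diameter and integrated flux are illustrative assumptions chosen so that the assumed 700 kpc distance reproduces roughly the 14 kpc diameter and 5 × 10⁶ M⊙ quoted above.

```python
# Back-of-the-envelope check of the quoted size and mass. The angular diameter
# and integrated 21-cm flux below are illustrative assumptions, not measurements.
import math

d_kpc = 700.0                                  # assumed median HVC distance
theta_deg = 1.15                               # assumed beam-corrected angular diameter
diameter_kpc = d_kpc * math.radians(theta_deg)
print(f"physical diameter ~ {diameter_kpc:.0f} kpc")          # ~14 kpc

# Standard 21-cm relation: M_HI = 2.36e5 * d_Mpc^2 * S   (S in Jy km/s)
S_jy_kms = 43.0                                # assumed integrated flux of a typical HVC
m_hi_msun = 2.36e5 * (d_kpc / 1000.0) ** 2 * S_jy_kms
print(f"HI mass ~ {m_hi_msun:.1e} Msun")                      # ~5e6 Msun
```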
Even with the earlier, larger size and mass estimates, direct detection of HVC analogues by either emission line or absorption line experiments would be difficult, as pointed out by Blitz et al., because of the low surface filling fraction, and because of beam dilution, except for relatively nearby groups. Nevertheless, several sensitive HI surveys have been made, and the most sensitive of these, the Arecibo HI Sky Survey (AHISS; Zwaan et al. 1997), failed to detect any HVC analogues (Zwaan & Briggs 2000). These authors argued that they should have detected 70 HVCs around groups and about 250 HVCs around galaxies in their survey based on the sizes and masses given by Blitz et al. (1999). They concluded that if the HVCs are indeed related to the Milky Way, they must have distances < 200 kpc; their conclusions cast considerable doubt on the Local Group hypothesis.
The Zwaan & Briggs result was, however, recently reevaluated by Braun & Burton (2001), who find several fundamental flaws. First, the assumed noise of the AHISS was found to be somewhat underestimated and the sensitivity somewhat overestimated. Second, most of the groups and individual galaxies Zwaan & Briggs considered are too far away and the covering fraction of HVC analogues is too small for the AHISS non-detections to place significant limits on the number of HVC analogues in these systems. Third, Zwaan & Briggs included any field galaxy that fell within 1 Mpc of the AHISS survey strip in their analysis, but Braun & Burton (2001) argue reasonably that that distance is too high by about an order of magnitude. When Braun & Burton (2001) make the necessary corrections, they find only one group, the NGC 628 group, within a distance range that could put significant constraints on the number of extragalactic HVC analogues. Yet the number of clouds in that group that could be present and still be consistent with the Zwaan & Briggs non-detections is comparable to the number expected to be in the Local Group from the catalogue of Wakker & van Woerden (1991)! Braun & Burton (2001) find a similar result when they considered individual field galaxies. Thus, the non-detections in the AHISS survey do not place useful constraints on HVC analogues in other systems. The reason for the difference from the Zwaan & Briggs analysis is that Braun & Burton (2001) show that only in the nearest galaxies and groups are the mass limits sufficiently sensitive, and in these groups the survey samples only a small fraction of the relevant projected area of the sky. Burton & Braun (2001) go further and examine all of the relevant HI surveys, and find that none of those published to date places a limit on the number of HVC analogues in other systems that is inconsistent with a generalization of the Local Group hypothesis.
To overcome some of the difficulties with using the AHISS, Zwaan (2001) did a targeted survey of six galaxy groups with the Arecibo telescope. By doing Monte Carlo simulations of HVC analogues within these groups, he concluded that between 6 and 28 sources should have been detected, depending on the assumptions, if the number of HVC analogues in each group is 100. Zwaan did detect several sources, but argued that none are HVC analogues. However, in none of the groups is the fractional surface area covered by the observations larger than 0.005. Based on a surface filling fraction of 0.01 in each group, the total number of detections expected in Zwaan's data is two, just the number of HI clouds he detects that are not associated with a galaxy. These detections may be analogues of the nearby complexes A, C and M. In any event, the number of extragalactic HVC analogues is still poorly constrained by the observations. Yet, even if none of the HI surveys to date can rule out that HVC analogues are seen in other groups, shouldn't it be possible to find the largest such systems in some other groups? Why hasn't the upper end of the HVC luminosity function been detected in other systems? Part of the answer is that the HVC luminosity function cannot be determined from the data at hand because individual HVC distances are not known. On the other hand, at least one HVC analogue has now been identified in the nearby Universe, with a mass of 1.7 × 10⁷ M⊙, a diameter of 15 kpc at an estimated distance of 3.2 Mpc, and no stars to a limiting µ(B) ∼ 27 mag arcsec⁻² (Kilborn et al. 2000). The distance, based on a heliocentric velocity of 450 km s⁻¹ and the assumption that it is in Hubble flow, is rather uncertain. Thus, at least one HVC analogue of mass and size within the range expected, and incidentally requiring a large amount of dark matter to be stable, has been identified, but where are the others? Blitz et al. (1999) catalogued a number of high mass HVC analogues from the literature, but because of their proximity to massive galaxies, it cannot be certain that these are not tidal features, though in most cases, they do not have tidal morphologies.
One solution is to look at the Sculptor group, the group nearest the Milky Way at a distance of about 1.5 Mpc. Many of the higher mass clouds should have been detected in the HIPASS Survey, and in the southern extension of the Leiden-Dwingeloo HI survey (Arnal et al. 2000). However, the Sculptor group is situated behind the Magellanic Stream and is confused in velocity at many positions with it. Nevertheless, the velocity dispersion of the group is much larger than that of the HI in the MS and should be separable from it. A tentative confirmation of an increased HI velocity dispersion in the direction of Sculptor was published by Putman (2000), but the data have not yet been analyzed in detail. The HIPASS survey has not been corrected for stray Galactic radiation in the sidelobes and so is useful only at velocities beyond those of the Galactic emission. With its larger beam, the Villa Elisa HI survey has a somewhat lower mass sensitivity than HIPASS, but it can be corrected for sidelobe contamination and will be a good test. Not finding HVC analogues in the Sculptor group, if these surveys are as sensitive as is claimed, would likely prove fatal for the Local Group hypothesis.
The MgII Test
If HVC analogues populate groups of galaxies like the Local Group, they should occasionally be seen in quasar absorption lines, since these lines of sight are sensitive to much lower HI column densities than 21-cm emission lines. Recently, Charlton, Churchill & Rigby (2000) examined the statistics of moderate redshift MgII and Lyman limit absorbers in QSO absorption lines as a probe to see what sort of contribution might come from HVC analogues.
Stripped to its essentials, the argument made by Charlton et al. is as follows. Strong MgII absorbers are found in 58 systems toward 51 quasars in a survey by Steidel, Dickinson & Persson (1994). However, all but 3 of the 58 absorbers have identified galaxies with a coincident redshift within 40h⁻¹ kpc of the quasar, so presumably all but 5% of the strong MgII absorbers are from the galaxies, which are generally normal and bright (L ≥ 0.1L*). This leaves only a small contribution possible from a population of HVC analogues. However, the covering fraction of HVC analogues, per galaxy group, is equal to $(N_{\mathrm{HVC}}/N_{\mathrm{gal}}) \times (R_{\mathrm{HVC}}/R_{\mathrm{gal}})^2$, where $N_{\mathrm{HVC}}$ and $N_{\mathrm{gal}}$ are the number of HVCs and galaxies within a particular group, and where $R_{\mathrm{HVC}}$ and $R_{\mathrm{gal}}$ are the mean radii of HVCs and galaxies in the groups. For the values of these quantities of 300, 4, 7.5 kpc and 40 kpc respectively, the HVC covering fraction is about 2.5 times that of galaxies. Thus either the surface filling fraction of MgII absorbing gas is << 1, or the Local Group hypothesis overpredicts the number of MgII absorbers. Similar arguments are made about weak MgII absorbers, and Lyman Limit systems.
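The arithmetic behind the quoted factor of roughly 2.5 can be written out explicitly; the snippet below simply evaluates the covering-fraction ratio with the values given in the text.

```python
# The covering-fraction comparison from the paragraph above, evaluated directly
# with the quoted values (300 HVCs and 4 bright galaxies per group, mean radii
# of 7.5 kpc and 40 kpc respectively).
n_hvc, n_gal = 300, 4
r_hvc_kpc, r_gal_kpc = 7.5, 40.0

ratio = (n_hvc / n_gal) * (r_hvc_kpc / r_gal_kpc) ** 2
print(f"HVC covering fraction / galaxy covering fraction ~ {ratio:.1f}")   # ~2.6
```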
This simple, persuasive argument has at least one serious weakness. The criterion for positional coincidence in the Steidel et al. (1994) survey is that the galaxy be within 40h⁻¹ kpc, or about 60 kpc, and the required velocity coincidence is several hundred km s⁻¹, limited primarily by the precision of the galaxy redshifts. The impact parameter is, however, much larger than a typical HI radius even for a large galaxy. Furthermore, Dickinson & Steidel (1996) find that the absorbers are consistent with a spherical distribution in the galaxies, rather than a disk-like distribution. Thus it may be that a substantial fraction of the MgII absorbers are due not to the galaxy itself, but to HVCs along the line of sight, either close to the galaxies as an HVC is being accreted or in the intergroup gas of a parent galaxy. If, for example, this is the case in about half the galaxies, then the Charlton et al. constraint is considerably softened, and the statistics of the MgII absorbers, rather than providing a strong constraint against the Local Group hypothesis, could instead provide important support for it.
Other Distance Indicators
The availability of direct distance determinations remains disappointingly sparse. However, Braun & Burton (2000) and Brüns, Kerp, & Pagels (2001) have suggested a new way to determine the distances to the HVCs. The basic idea is that in some cases, both the column density and angular diameter of an HVC are well measured quantities; aperture synthesis observations sometimes make it possible to identify dense clumps in the HVCs. If it is possible to determine the density of the clumps independently, one can solve directly for the distance. Braun & Burton (2000) have used this technique to estimate the distances to several clouds, which are typically in the range of several hundred kpc, strengthening the Local Group hypothesis.
Both Braun & Burton (2000) and Brüns et al. have found clumps with linewidths so narrow that it is possible to get an upper limit to the kinetic temperature of the clumps. If the depth of the clump is comparable to its dimensions on the sky, then its density and therefore internal pressure depends only on its distance. If the external pressure is known, then the distance to the clump can be found under the assumption of pressure equilibrium. The difficult part is getting a measure of the external pressure. Braun & Burton (2000) use an extension of a model of Wolfire et al. (1995), but it is unclear whether the model is applicable to the very low pressures of the intergalactic medium. The derived distances are therefore highly model dependent. Brüns et al. take a different tack and assume that the clumps are virialized, an unjustified assumption for deriving the distance. If the assumptions in both of these models are correct, then the distance determinations are probably sound. However, because the estimates are so dependent on the assumptions, they cannot be used as primary distance discriminants for the HVCs.
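The logic of this estimate can be sketched in a few lines; every number below (column density, temperature limit, angular size, external pressure) is an invented illustration, and the result is only meant to show how a distance of order a hundred kpc falls out of the pressure-balance assumption.

```python
# Sketch of the clump-distance argument with purely illustrative numbers: take
# the clump depth equal to its projected size (theta * d), equate the internal
# pressure n*T to an assumed external pressure, and solve for the distance.
import math

N_HI = 1e19                              # clump HI column density [cm^-2] (assumed)
T_K = 100.0                              # kinetic-temperature upper limit from the linewidth [K] (assumed)
theta_rad = math.radians(1.0 / 60.0)     # clump angular size of 1 arcmin (assumed)
P_over_k = 10.0                          # assumed external pressure P/k [cm^-3 K], model dependent

# n = N_HI / (theta * d),   n * T = P/k   =>   d = N_HI * T / (theta * P/k)
d_cm = N_HI * T_K / (theta_rad * P_over_k)
print(f"implied distance ~ {d_cm / 3.086e21:.0f} kpc")   # ~110 kpc for these inputs
```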
Summary Evaluation
Of the four tests, two, the metallicity test and the Hα test, appear to rule out the Galactic fountain as contributing significantly to the HVC phenomenon. This leaves the Local Group hypothesis, and either tidal or ram pressure stripping of gas from Local Group galaxies, as the main contenders for being the dominant origin of the HVCs. It could be that some of the HVCs are debris from the MS, but clouds that are stripped from the Magellanic Clouds should lie on a great circle with the MS, and most of the HVCs do not. No other galaxies in the Local Group have been identified as potential progenitors for these clouds, because, if the HVCs are not self-gravitating, they must be rather short lived. Thus although some authors have suggested that the HVCs might be tidal remnants (e.g. Wakker et al. 1999), there is neither kinematic nor dynamical evidence to support this idea.
Searches for extragalactic HVC analogues have not placed strong constraints on the number of HVCs with mean diameters of 15 kpc and mean masses of 5 × 10⁶ M⊙. However, the larger mean size and mass originally estimated by Blitz et al. (1999) is difficult to sustain in view of the results of the HI surveys of Zwaan & Briggs (2000) and Zwaan (2001), if the number of HVCs is as large as that implied in the catalogue of Wakker & van Woerden (1991). However, it is also possible that many objects identified as tidal features in groups of galaxies, and even near field galaxies, are actually extragalactic HVC analogues and do not have their origins in galaxies. These may plausibly be extragalactic analogues of complexes A, C and H, the HVCs closest to the Milky Way and probably in the process of being accreted. After all, if complex C were viewed from, say, M81 with telescopes comparable to what we have been using, its long stringy morphology in close proximity to the MW would be very suggestive of a tidal feature.
The dynamical evidence remains the best evidence for the Local Group hypothesis, and although the various tests are not in obvious contradiction to it, neither do they provide strong confirmation. Rather, the Hα observations remain puzzling, and the MgII absorbers provide an important constraint only if the gas associated with these systems is much more extended and is distributed spherically, both of which are very different from the gas seen in spiral galaxies at zero redshift. Although the metallicities measured so far fall within the range predicted by Blitz et al. (1999), they do remain uncomfortably high, higher than typical metallicities measured in LG dwarf spheroidal galaxies, for example. Nevertheless, if the Local Group hypothesis turns out to be incorrect, it will be challenging for an alternate hypothesis to produce a good simple explanation for both the kinematic data and the spatial distribution of the HVCs, which has been where other ideas have always been the weakest. | 2019-04-14T01:33:01.992Z | 2001-05-08T00:00:00.000 | {
"year": 2001,
"sha1": "fc8b14ac23711d253fe8381a21e590eff96a0169",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "952207514259c7cfb5a084573e7cd92822acfe6d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249583602 | pes2o/s2orc | v3-fos-license | Use of AI Voice Authentication Technology Instead of Traditional Keypads in Security Devices
Traditional keypads and text-based passwords are vulnerable to scams and hacks, leading to enormous levels of embezzlement and fraud, apart from various other threats to data security. AI-based voice authentication holds unparalleled value for data protection, security, and privacy, by providing an effective alternative to traditional password-based protection. This paper reports the findings of a limited literature review that forms the basis for further research towards enhancing the reliability and security of AI voice authentication. Based on the findings of the review of existing literature, this paper proposes that integration of the blockchain technology with AI voice authentication can significantly enhance data security, from mobile devices to the security of big agencies and banks. The key processes in implementing an AI voice authentication system are proposed as a conceptual model, to facilitate further research for implementation.
Introduction
Amidst ongoing cyber-attacks, privacy issues, data breaches, and security issues, AI (artificial intelligence) is gaining unprecedented value. AI's contribution to facial recognition, content creation, and voice recognition alone has changed the dynamics of the whole web. We are observing a massive adoption of technologies like Alexa, Siri, Amazon Echo, and Google Home. These technologies are changing the dynamics of web search, and the shift is from typing text to voice recognition. In 1964-1965, Woody Bledsoe, Helen Chan Wolf, and Charles Bisson were the first group of people who took the initiative in automated facial recognition technology. Nowadays, all fields, including data security, surveillance systems, mobile and app development, e-commerce, and trading, are shifting toward facial recognition technology because of automation and better security [1].
According to the annual report of the FBI's Internet Crime Complaint Center (IC3), crimes involving cyber-attacks and malicious cyber activities are steadily increasing over the years. In 2021, IC3 received 847,376 complaints involving cybercrimes, a 7% increase from the year 2020. The potential losses from these complaints were estimated to be over $6.9 billion [2]. Even before the world at large was using technology for financial gain, people on the other side of the law were already using it for that purpose. That is one of the main reasons we must make sure our financial system remains secure.
In 2021, from January to September, almost 281 million people were affected by ongoing data leaks and data breaches, according to the data provided by the Identity Theft Resource Center (ITRC) [3]. In the first half of 2021 alone, scammers and criminals were able to steal a total of £754 million, which was 30% more than the amount scammed in the corresponding period in 2020. Most of these scams were carried out through APP (Authorized Push Payment) fraud, but banks were able to prevent about £760 million in losses through advanced security systems [4]. A recent IBM study suggests that almost one fifth of data breaches occur due to compromised credentials. The report also notes that, as of 2021, roughly 25% of the surveyed organizations had fully deployed AI-based security systems, 40% had partially deployed them, and 35% had not, and organizations with fully deployed security AI saved an average of $3.81 million per breach compared with those without [5].
The use of AI-based neural networks in spam detection, zombie detection, malware classification, denial-of-service (DoS) detection, computer worm detection, and forensic investigations is unparalleled. In the realm of AI, an Artificial Neural Network (ANN) is a computational mechanism that simulates the functional and structural features of biological neural networks, and ANNs were shown to be at least 20.5 times faster in detecting DoS attacks. Another approach, based on intelligent agent applications, uses automated agents that communicate, cooperate, and share data in such a collaborative manner that they can detect a wide range of attack behaviors. Almost all cybercrimes and frauds follow similar patterns, and with the identification of such patterns, a far more resilient system may be built by combining blockchain, AI, and cybercrime pattern alert mechanisms [6].
Integrating AI voice authentication with blockchain technology can not only ensure privacy but also help prevent data breaches. Passwords entered by hand and knowledge-based authentication are traceable and vulnerable to hacking. Issues like forgetting passwords, the use of the same password, time consumption, and locked accounts are all annoying problems people face in current verification procedures [7]. However, the most pertinent question is: how can blockchain technology be adopted at scale before it can be integrated with AI to create security systems? A large scale can be achieved with some adjustments. Currently, the system is capable of retaining all financial transactions, but privacy is very critical. Blockchain technology can achieve more than anyone ever imagined, by establishing a security key and a way to manage it. It can be a step toward creative destruction. For this to happen, all current users must agree on a user agreement and rights guide under which they will be part of the blockchain system. Creating rights is the only way to encourage people to create a supervisory authority in the current blockchain system. This is because blockchain technology can permeate all industries. While it is not an easy process, it is critical to deal with tax evasion, cyber fraud, and money laundering occurring on a blockchain under the privacy umbrella. It simply means that security and supervisory duties can only be coordinated if everyone is in agreement. A working group has already been formed by the Digital ID & Authentication Council of Canada [8].
This paper proposes a conceptual model for implementing an AI voice authentication system integrated with the blockchain technology, based on a limited and targeted narrative review of the existing literature on blockchain and AI voice authentication. This paper does not present a systematic review of the literature [9] and is limited to exploring the current level of knowledge in the relevant fields, as it informs the author's ongoing research towards creating a robust system for implementation. Using the findings from this narrative approach to the literature review [10], a conceptual model for implementing a blockchain-based AI voice authentication system is developed and presented.
Literature on AI Voice Authentication
Traditional authentication systems suffer from various drawbacks, including having to remember many different login ids and passwords, the inability to authenticate when the user's hands are not free, as when driving a vehicle, and the possibility of duplication and fraudulent misuse [11]. These drawbacks can be overcome by adopting the human voice for user authentication. There are many algorithms gaining maturity for human recognition and authentication based on the voice [11]. Besides, voice is the only biometric feature which is capable of being stored, compared, and authenticated remotely, either through a phone or through the internet [12].
Today, the use of AI voice authentication technology in security devices extends beyond just technological innovation. With blockchain integration and AI voice verification, the user remains exposed mainly to APP (Authorized Push Payment) fraud. AI voice recognition systems like Siri and Google Home are expected to exceed 8 billion in use by 2023. Artificial intelligence technology will replace traditional keypads and typing technology, and the parameters on which it will be implemented are security, authentication, and privacy.
AI-based voice authentication is difficult to integrate with all security devices and systems to ensure data privacy. A first step can be taken with those devices where data compromise can be costly. Scams involving ATMs are common worldwide. Scammers employ various techniques, including the Lebanese loop, card skimming, and cash trapping. In this case, ATM owners need to stop the unauthorized deployment of these types of malware on their ATMs. This can be solved by ensuring that only authorized users can run the code, see it, and use it to withdraw cash. The ATM PC core BIOS must be protected with a code that can't be hacked or manipulated by anyone [13].
Smart cities will utilize the growing Internet of Things (IoT) technology to process and manage modern cities in the future. Applications and systems such as Google Business, cloud computing, geographic information systems, and big data will create the roots of modern urbanization. In simple terms, computer applications and digital systems are set to be integrated across all fields, from the commerce sector to the health sector. But this digital system is not protected and is vulnerable to attacks by hackers and cybercriminals. For such cities to become a part of the future, two things are required: the first is active surveillance by AI-integrated bots and human management teams to detect frauds and anomalous activity, and the second is the incorporation of a public blockchain, supervised by a regulatory authority under rights and duties agreements [14].
Glowacki and Piotrowski [15] suggested a new architecture for voice identity distribution, to prevent any possible "unauthorized subscriber impersonation and unauthorized voice message edition". This architecture uses the data hiding technique to provide voice authentication. Another study, conducted on the users of Google's wearable glasses, found that it was possible to achieve user authentication with near perfect accuracy (99% detection rate and 0.5% false alarm rate after only an average of 3.5 user events) by combining a set of touch behavioral features and voice features [16]. Panda [17] presented an algorithm that allows voice authentication after analyzing the user voice from varied environmental conditions. Such advancements in the technology are increasing the reliability of voice authentication [18].
In addition to ATMs, there have been breaches at vaults, security agencies, and all types of businesses. A voice authentication system based on artificial intelligence will eliminate all these traditional hacks and scams. One can use voice authentication to create a bank vault that opens only when the cash delivery van arrives with a person whose voice acts as a password. There are a lot of possibilities and prospects for this technology as far as innovation is concerned. The vulnerabilities associated with traditional keypads will be eliminated in all fields and dimensions. Currently, it is not possible to increase the privacy of these security devices. However, with the integration of AI-based voice verification and blockchain technology, a highly secure system can be created. Through blockchain, we will be able to create a protected private system that cannot be compromised without both end-user keys. It will be difficult for cyber scammers to compromise a system protected by AI voice verification. It will replace the traditional keypads and improve the privacy and security of the system.
Blockchain technology can be used as a fraud prevention tool [19]. AI can be used to create a system where no transaction can take place until the user's identity is verified. A blockchain cannot do everything, but it can be an asset when it comes to ballot-stuffing, Sybil, and continuous attacks. Use of blockchain can become the differentiating factor for applications from various domains, in terms of their security and privacy [20] [21]. By integrating AI with public rights and policy, it may be developed to a point where it can detect fraudsters and recognize the patterns of their tactics. It could create an alarm system to alert the security system that something is amiss [22].
Most transactions, communications, and business are now conducted through handheld devices. Almost 2 billion people use mobile apps to pay their bills, and millions of users are being added to these numbers daily [23]. However, the majority of these mobile verification systems rely on keypad passwords, pattern locks, or even face recognition. The time has come to update mobile security to a level where it is nearly impossible to crack. The current systems can be consolidated with a blockchain-based AI voice authentication security system. This will eliminate the risk of password theft, scams, and other vulnerabilities. The only way for someone to bypass the security system is when the user unlocks it with voice authentication.
A Conceptual Model for Integration of Blockchain with AI Voice Authentication
As the review of existing literature showed, integrating with blockchain can significantly enhance the effectiveness of AI voice authentication in minimizing threats to data security. Blockchain, however, is not accessible to all users unless they are part of a single transaction. With a surveillance body, a rights and obligations agreement, and privacy protection, we can take blockchain-based AI security systems to a new level, where they can be integrated with different systems across cities and countries.
Creating an AI voice authentication system that integrates blockchain is complex. The first step should be to develop a separate voice authentication system based on AI and build a blockchain-based matching database. This paper proposes a conceptual model on how to create and implement an artificial intelligence system.
Highlighting Voice Enrolment
Each of us has a unique voice, to be extracted and recorded by an AI algorithm.
There are two major aspects of voice authentication.
Physiological Features
In this category, all physiological features like tone, pitch, and volume are analyzed and enrolled by AI.
Behavioral Features
This includes features like accents, regional dialects, and idiosyncrasies [24]. These voice features are detected by the sensor module of the AI voice authentication system. This sensor module can be a microphone, headphone, or any device that can capture voice and break it into different features [25].
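As a rough illustration of what the sensor and feature extraction stages might compute, the sketch below derives a simple fixed-length voiceprint from one recording, assuming the open-source librosa library. The file name, sampling rate, MFCC count, and pitch range are illustrative assumptions rather than recommended settings for a production voice-biometric front end.

```python
# A rough sketch of the sensor + feature extraction stages, assuming librosa.
import numpy as np
import librosa

def extract_features(wav_path: str) -> np.ndarray:
    """Return a fixed-length feature vector for one recorded utterance."""
    signal, sr = librosa.load(wav_path, sr=16000)        # sensor module output

    # Physiological cues: spectral envelope (MFCCs) and fundamental frequency.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    f0, _, _ = librosa.pyin(signal, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                               # drop unvoiced frames

    # Summarize frame-level features into a single vector per recording.
    pitch_mean = f0.mean() if f0.size else 0.0
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), [pitch_mean]])
```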
AI is now at a point where it can understand such diversified patterns, and the connected blockchain can act as a locker. Medical researchers are currently using decentralized blockchain databases in which AI-based machine learning algorithms can detect patterns and symptoms among patients through different types of imaging. A system was created to give doctors information about what is happening inside a patient based on images and collected medical data. The system they created involved:
1) Developing an artificial intelligence algorithm based on secure medical data obtained via smart contracts.
2) Using a distributed network of blockchains to train a global model using localized deep learning of CT scans, ECGs, EEGs, and other medical tests [26].
A comparable design could use AI to verify a speaker's identity from voice (and possibly facial) features, incorporated with a blockchain backend. If a method for detecting diseases can be developed with the help of a few medical tests, the creation of an AI-protected blockchain security system will not be impossible.
AI voice enrolment uses recordings of the voice. Each recording is sent to a biometric engine, where multiple templates are combined to create a voiceprint used for identification. Whenever the user speaks, a specific key is generated based on the match of the voice. This key releases the corresponding key on the other end, ensuring the security system is unlocked. Multifactor AI voice authentication will replace traditional keypads and password codes, along with their vulnerabilities [27].
In this system, there are five basic modules: the sensor module, feature extraction module, feature matching module, database module and decision-making module (see Figure 1). As a result of combining these modules, a blockchain-based voice authentication database can be created step-by-step [25].
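A schematic rendering of these five modules is sketched below, reusing the hypothetical extract_features() helper from the earlier snippet. The in-memory dictionary standing in for the database module, the cosine-similarity matcher, and the 0.85 decision threshold are all assumptions chosen for illustration, not parameters proposed by this paper.

```python
# Schematic of the five modules named above (sensor + feature extraction are
# delegated to the earlier extract_features() sketch).
import numpy as np

class VoiceAuthenticator:
    def __init__(self, threshold: float = 0.85):
        self.templates = {}            # database module (stand-in for the ledger)
        self.threshold = threshold     # decision-making parameter

    def enroll(self, user_id: str, wav_paths: list) -> None:
        """Feature extraction module: merge several recordings into one template."""
        vectors = [extract_features(p) for p in wav_paths]
        self.templates[user_id] = np.mean(vectors, axis=0)

    def verify(self, user_id: str, wav_path: str) -> bool:
        """Feature matching + decision-making modules (cosine similarity)."""
        probe = extract_features(wav_path)
        template = self.templates[user_id]
        score = float(np.dot(probe, template) /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        return score >= self.threshold
```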
Speech Biometrics/Voice Biometrics
Voice biometrics is the science of identifying a specific person based on their voice. This identity verification is not as easy as it sounds: more than 70 body parts are involved in producing a unique voice, and the AI voice authentication system also analyses pitch, volume, language, and style. When a person's voice is recorded, a biometric engine generates a voice template by merging multiple voice recordings. AI voice biometrics includes tone, volume, pitch, language, and many other factors that combine to produce the most accurate template for identification. The machine learning mechanism chooses the most suitable recordings to create a template that can cover all possible input from the user. The feature extraction module is based on extracting sound clips and creating the most relevant template for identification. Compared to speech biometrics, keypad biometrics has no such multifactor association that can provide multiple layers of security for the user. Due to the integration of the blockchain, the system is difficult to breach unless the user opens it with their own voice [26].
Figure 1. Creation of a blockchain-based voice template for authentication.
Voice as Password and Automatic Speech Recognition
The voice template must be recorded and stored in a blockchain-enabled database. Against this template, the accuracy of the user's voice input is verified. Put simply, AI matches the input voice with the template in the blockchain database (Figure 1); once the two voices match, the security system automatically unlocks. The template contains features that the AI matches to ensure that the security system remains unbreakable. To verify authenticity, the artificial intelligence analyses the voice input on its own against the available template; if the user is genuine, the system unlocks. In short, the AI verifies the voice authentication by matching the template available in the database against the input voice to unlock the system.
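For illustration only, the matching and decision modules described above might be sketched as follows; the cosine-similarity measure and the 0.85 threshold are assumptions and would have to be tuned in a real system.

```python
import numpy as np

def verify(live_wav_path, voiceprint, threshold=0.85):
    """Return True if the live utterance matches the stored voiceprint."""
    probe = extract_features(live_wav_path)          # illustrative helper from the earlier sketch
    probe = probe / np.linalg.norm(probe)
    similarity = float(np.dot(probe, voiceprint))    # cosine similarity of unit vectors
    return similarity >= threshold                   # True -> the unlock key may be released
```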
Blockchain Integration
A blockchain-based security system means that the system cannot be unlocked until the one-side key is verified through voice authentication by the AI (Figure 2). Since blockchains are immutable, once a security system based on such a verification procedure is created, it cannot be altered. An AI voice authentication system that integrates blockchain will ensure the security system is not compromised until the key is released by voice verification. The important question is how it works: using machine learning mechanisms, the AI verifies the voice, releasing the key and unlocking the blockchain-based security system [28]. Blockchain also needs a stable organization that can oversee it at the state level so that fraudulent activities can be traced back to their roots, which is only possible by integrating rights and duties with the users.
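The following deliberately simplified sketch illustrates the "verify, then release the key" flow; a plain hash-chained list stands in for the blockchain, and the key handling is a toy assumption rather than a description of any real blockchain platform.

```python
import hashlib, json, time

chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]   # toy stand-in for a ledger

def append_block(data):
    prev_hash = hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
    chain.append({"index": len(chain), "prev": prev_hash, "time": time.time(), "data": data})

def unlock(live_wav_path, voiceprint, secret_key):
    """Release the unlock key only after the voice check succeeds; log the attempt."""
    if verify(live_wav_path, voiceprint):             # voice check from the sketch above
        append_block({"event": "unlock", "result": "granted"})
        return secret_key
    append_block({"event": "unlock", "result": "denied"})
    return None
```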
Voice authentication will ensure that only the authorized user can unlock the database, whether it protects an ATM security code, a house security code, or an agency security code. No one else can unlock it, apart from authorized push payments (APP). Old keypads will be eliminated here because of two major factors. The first factor is security assurance: AI voice authentication ensures more security than keypad codes because it leaves no loopholes for breaches. The second factor is blockchain integration, which makes sure that the other end stays safe and sealed until the voice is verified. Blockchain could be merged with traditional keypad passwords, but that would not eliminate the scam and hacking threats.
Adopting a blockchain database security system would not be beneficial in such a case. By contrast, an AI voice authentication system with a blockchain back end eliminates all such concerns, giving users more than one reason to adopt this technological innovation.
Discussion
Blockchain-based voice authentication holds incredible potential, and nearly any field can benefit from it. There are many ways to implement such security systems, from securing an ATM to securing a secret intelligence system. A high-security prison requires that only the warden and officers pass through cell areas. Even elite prisons are occasionally subject to breakouts, but these can be prevented by using AI voice authentication lock technology based on blockchain. The risk posed by hackers, and the risk of clever criminals getting an edge over the system, would be eliminated.
The security of government agencies, secret services, and government associations can all be strengthened with the help of an AI voice authentication system. These days it is common for many orders to require a code or approval from a higher-up before the process can begin. Voice authentication and verification can make these orders more secure. Traditionally, security orders have been transmitted using Morse code to verify that they come from authorized sources. This can, however, be replaced with an AI-based voice verification system, eliminating the associated security and privacy risks. Morse codes, keypad codes and typed messages can all be replaced by this new technology.
Data and document security are among the major concerns nowadays. Documents shared between two parties should not be accessible to a third party. In today's age of innovation, a country's secret documents are nothing less than an asset. Whether they are locked in a briefcase with a code or protected by digital locks, these documents can easily be compromised. With the integration of a voice authentication system based on artificial intelligence, sensitive documents can be made more secure.
Conclusion
Looking at the current pace of technological development, the traditional keypad will eventually be replaced by AI voice technology, which is the next technological step in privacy and security. The mass adoption of voice assistants like Siri, Alexa, and Google Assistant is already happening, and the same will soon happen with security systems. As far as blockchain secrecy and decentralization are concerned, however, open questions remain. Without a model of rights and duties, there is no way to utilize public information to trace perpetrators and criminals unless AI is integrated in a manner that allows it to do so on its own. Further research will be required to develop robust systems for integrating blockchain technology with AI voice authentication and to create protocols and controls acceptable to all global stakeholders. | 2022-06-12T15:14:17.708Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "11335fb890daf44b4df776b89985946626182813",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=117753",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8b4f1877041e9b78db304700859b229d5fc4fee5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
42013889 | pes2o/s2orc | v3-fos-license | Incidence of Ventilator-Associated Pneumonia in Critically Ill Children Undergoing Mechanical Ventilation in Pediatric Intensive Care Unit
Background: Among hospital-acquired infections (HAIs) in children, ventilator-associated pneumonia (VAP) is the most common after bloodstream infection (BSI). VAP can prolong the length of ventilation and hospitalization, increase the mortality rate, and directly change a patient's outcome in the Pediatric Intensive Care Unit (PICU). Objectives: Research on VAP in children is limited, especially in Iran; therefore, identifying the incidence and mortality rate of VAP has both clinical and epidemiological implications. Materials and Methods: Mechanically ventilated pediatric patients were assessed for development of VAP during the hospital course on the basis of clinical, laboratory and imaging criteria. We matched the VAP group with a control group for assessment of VAP-related mortality in critically ill ventilated children. Results: VAP developed in 22.9% of critically ill children undergoing mechanical ventilation. Early VAP and late VAP were found in 19.3% and 8.4% of the ventilated children, respectively. Among the known VAP risk factors that were investigated, immunodeficiency was significantly more frequent in the VAP group (p = 0.014). No significant differences were found between the two groups regarding use of corticosteroids, antibiotics, pH-modifying agents (such as ranitidine or pantoprazole), presence of a nasogastric tube, or total or partial parenteral nutrition. A substantial number of patients in the VAP group had more than four risk factors for development of VAP, compared to those without VAP (p = 0.087). The mortality rate was not statistically different between the VAP and control groups (p = 0.477). Conclusion: VAP is still one of the major causes of mortality in PICUs. Altered immune status was found to be a significant risk factor for acquiring VAP. The occurrence of VAP was also highest in the first week after admission to the PICU.
Background
Given the high incidence of healthcare-associated infection, especially in resource-limited countries, infection-control practices and surveillance systems play an important role in improving patients' safety, and decreasing the effect of life-threatening adverse events on healthcare systems. Healthcare-associated infections are usually underestimated in such countries [1]. Incidence of VAP ranged from 3% to more than 50% of ventilated PICU patients in different studies [2][3][4]. VAP incidence varies based on settings and geographical distribution [5]. Other important factors that may influence the reported rate include: study methodology [6,7], definition criteria (microbiological criteria versus non-microbiological criteria) [2,8], use of VAP prevention bundle programs [9], and medication practice in different PICUs [10].
Although some epidemiological studies have been carried out in neonatal intensive care units (NICU) in Iran [11], to the best of our knowledge only one published study has investigated the incidence of VAP in Iranian children in the PICU [5,12].
The primary aim of this study is to describe the incidence of VAP, and the secondary aim is to determine the effect of VAP on mortality, and to determine risk factors for VAP in the Mofid Children's Hospital PICU.
Patients and Methods
In this cross-sectional study, ventilated children were assessed regularly in the PICU of a tertiary teaching center in Tehran (the capital city of Iran) over 12 months in 2013-2014. The PICU throughout the study at Mofid Children's Hospital was a 12-bed multidisciplinary care unit, with approximately 600 admissions annually. It is a mixed medical/surgical PICU wherein all patients are co-managed by pediatric pulmonologists and pediatric critical care physicians.
Any patient who needed respiratory support with mechanical ventilation was recruited to the study. The designed questionnaire was used to obtain demographic data and VAP assessment, which included risk factors and diagnostic criteria. Each patient was assessed by a single examiner (AA) within 24 hours of intubation (baseline), after 48 hours, and after 7 days of intubation (if still ventilated). Patients were evaluated for Centers for Disease Control and Prevention (CDC) criteria for VAP during the second and third assessments. [13][14][15].
Each patient with VAP fulfilled the imaging, laboratory and clinical criteria. Microbiological confirmation was not applied for diagnosis of ventilator-associated pneumonia based on CDC criteria [13]. Consecutive patients who met the VAP criteria were approached and recruited. We classified our cases as early-onset and late-onset VAP to determine the influence of timing of onset of pneumonia, which may have a possible effect on mortality attributable to VAP [16]. Among them, those who fulfilled VAP criteria during the first week of intubation (after 48 h of intubation) were considered to have early-onset VAP. Diagnosis of late-onset VAP was made for those who fulfilled VAP criteria after 7 days of intubation [17]. All patients were followed up until transfer to the ward for the mortality outcome. We matched the VAP group with a control group for assessment of VAP-related mortality in critically ill ventilated children.
Categorical data were reported as frequencies and percentages, and continuous data were reported as mean and standard deviation. The chi-square test was used to compare categorical data. This study was approved by the review board of the pediatric infections research center.
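As a small illustration of the chi-square comparison mentioned above, the following Python snippet uses an invented 2 × 2 table (immunodeficiency status by VAP status); the counts are placeholders and are not the study data.

```python
from scipy.stats import chi2_contingency

table = [[10, 9],    # immunodeficient: VAP, no VAP (placeholder counts)
         [ 9, 55]]   # immunocompetent: VAP, no VAP (placeholder counts)
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```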
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was also approved by the Ethical Committee of the Pediatric Infections Research Center (PIRC) review board (number: 1392-1-91-11078-13443) in Shahid Beheshti University of Medical Science. Informed parental consent was obtained for all cases included in the study.
Results
Of 83 ventilated critically ill children in our PICU, 44 cases (53%) were male and 39 cases (47%) were female. The mean age of the patients was 29 months (the youngest was 1 month and the oldest was 12 years).
The incidence of VAP was 19/83 (22.9%) in mechanically ventilated patients. The three most common co-morbidities among patients with VAP were bacterial pneumonia, aspiration pneumonia and chronic heart failure (CHF), respectively. VAP developed in nearly all (94.7%) of the cases during the first week after admission to the PICU.
Early VAP was diagnosed in 16 patients (19.3%). Late VAP was diagnosed in 7 patients (8.4%). Among those with early VAP, late VAP was also detected in four cases. There was no significant difference in VAP-related mortality between those with and without VAP (P = 0.601). Furthermore, there was no significant difference between early-onset and late-onset VAP in terms of VAP-related mortality (P = 0.533). Demographics, risk factors and mortality of children with and without VAP are summarized in Table 1. Since simple logistic regression was significant only for immune status, multiple logistic regression was not performed [18].
Discussion
In this cross-sectional descriptive study of mechanically ventilated PICU patients, a control group was included to investigate the risk factors for acquisition of VAP and the associated clinical outcome at Mofid Children's Hospital in Iran. The estimated incidence of VAP in our PICU was 22.9%, with a high mortality rate (47.4%). Among several VAP risk factors, only altered immune status was associated with a higher risk of VAP. On the other hand, given the large proportion of high-risk cases prone to VAP (more than half of the cases), more intensive VAP prevention strategies should have been considered in our PICU.
The incidence of VAP differs greatly based on setting and location among critically ill children in the PICU [5]. Overall, the reported prevalence is about 3% to 27% [13,19]. Although VAP is considered the second most common hospital-acquired infection in the PICU, after bloodstream infections (BSI) [13,14,20], the reported incidence is higher in some studies [21,22]. El-Kholy et al. reported that VAP was the most commonly identified device-associated nosocomial infection (90%) among 490 pediatric patients [23]. Awasthi et al. revealed that VAP developed in 36.2% of children requiring mechanical ventilation in India [24]. Another recent national multicenter study on nosocomial infections in Spain reported a very low incidence of VAP (1.3%) among children undergoing mechanical ventilation in the PICU [25]. Our estimated incidence is in line with another study conducted on Iranian children, which reported the incidence of VAP to be 27% [12].
Less is known about the risk factors for VAP in critically ill children in the PICU. Previous studies reported that VAP mostly occurred in children who were ventilated for more than 4 days [24] or re-intubated [12]. Among the risk factors investigated in our study, only altered immune status was statistically significant in those who developed VAP. It should be mentioned that the presence of a nasogastric tube, concurrent treatment with antibiotics, and pH-modifying agents such as ranitidine or pantoprazole were seen in nearly all children in both the VAP and control groups. Based on our results, the probability of a VAP event is greater during the first week of mechanical ventilation, which agrees with other reports [26,27]. Prolonged hospitalization prior to the onset of mechanical ventilation has been suggested by some researchers as a probably underappreciated risk factor for development of VAP, although we did not find it to be a significant risk factor [28].
We found that VAP has a high in-hospital mortality. The estimated mortality rate is about 5-14%, based on the limited available reports on VAP in children [5], but may be as high as 50%. VAP is considered to be an important risk factor for increasing mortality in infants and children in PICUs [14,29].
Our estimated mortality rate was 47.4%, which was extraordinarily high compared to many other reports [30,31]. High VAP-related mortality in our study could be attributed to younger age, co-morbidities, and lack of standard VAP Prevention Guidelines.
Numerous reports support the application of VAP prevention programs to decrease the likelihood of VAP among ventilated patients [32].
The small sample size may influence the statistical power of both risk factors and outcomes in this study. Also, the mortality in patients with no VAP was equally high (40.6%); hence, the high mortality cannot be attributed solely to VAP. It is likely that the overall mortality in our unit is high.
Conclusions
VAP is still one of the major causes of mortality in PICU. It was found that altered immune status is a major risk factor for acquiring VAP. Incidence of VAP was high in the first week after admission to the PICU. The results of this study emphasize the importance of applying early VAP prevention strategies in the PICU to reduce mortality. Further and larger prospective case-control studies are needed to evaluate the risk factors and outcomes of VAP. | 2018-04-03T06:03:43.930Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "ff5bb37f69cff262f92e9fac999b9df50227be0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/4/7/56/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff5bb37f69cff262f92e9fac999b9df50227be0c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233695177 | pes2o/s2orc | v3-fos-license | EOT20: A global ocean tide model from multi-mission satellite altimetry
EOT20 is the latest in a series of empirical ocean tide (EOT) models derived using residual tidal analysis of multi-mission satellite altimetry at DGFI-TUM. The amplitudes and phases of seventeen tidal constituents are provided on a global 0.125-degree grid based on empirical analysis of seven satellite altimetry missions and four extended missions. The EOT20 model shows significant improvements compared to the previous iteration of the global model (EOT11a) throughout the ocean, particularly in the coastal and shelf regions, due to the inclusion of more recent satellite altimetry data as well as more missions, the use of the updated FES2014 tidal model as a reference to estimate residual signals, the inclusion of the ALES retracker and improved coastal representation. In the validation of EOT20 using tide gauges and ocean bottom pressure data, these improvements in the model compared to EOT11a are highlighted, with the root-square sum (RSS) of the eight major tidal constituents improving by ∼3 cm for the entire global ocean and the major improvement in RSS (∼3.5 cm) occurring in the coastal region. Concerning the other global ocean tidal models, EOT20 shows an improvement of ∼0.2 cm in RSS compared to the closest model (FES2014) in the global ocean. Variance reduction analysis was conducted comparing the results of EOT20 with FES2014 and EOT11a using the Jason-2, Jason-3 and SARAL satellite altimetry missions. From this analysis, EOT20 showed a variance reduction for all three satellite altimetry missions, with the biggest improvement in variance occurring in the coastal region. These significant improvements, particularly in the coastal region, provide encouragement for the use of the EOT20 model as a tidal correction for satellite altimetry in sea-level research. All ocean and load tide data from the model can be freely accessed at https://doi.org/10.17882/79489 (Hart-Davis et al., 2021).
Table 1. The satellite altimeter data used in this study, obtained from OpenADB at DGFI-TUM. The corrections listed in Table 2 are applied to all these missions. Most missions are retracked using the ALES retracker, marked by †, with TOPEX and ERS using ocean ranges as provided in SGDR datasets. (Columns: Mission, Cycles.)
2 Residual Tidal Analysis of Satellite Altimetry
The development of EOT20 focused on improving tidal estimations in the coastal region, which has historically been a difficult region in which to accurately estimate tides. EOT20 follows a similar scheme to the former model, EOT11a, consisting of three major steps: the creation of an SLA product including the correction of a reference ocean tide model; the estimation of the residual tides based on this SLA product; and the combination of the reference model with the residual tides to form a new global ocean tide model. These three steps summarize the creation of EOT20 and are expanded in the following sections.
The Altimetry SLA product
The tidal analysis is based on the analysis of SLA derived from satellite altimetry missions (Table 1) obtained from the Open Altimeter Database (OpenADB, https://openadb.dgfi.tum.de, Schwatke et al., 2014). These missions are selected as they provide extended time-series along similar altimetry tracks, with the Jason missions being a follow-on from TOPEX/Poseidon and Envisat a follow-on of the ERS missions, thus providing appropriate data for the estimation of tidal signals. The SLA from these altimetry missions is calculated according to that described in Andersen and Scharroo (2011):
SLA = H − R − MSS − h_geo,
where H is the orbital height of the satellite, R the range, MSS the mean sea surface and h_geo is the sum of the geophysical corrections (as listed in Table 2).
Table 2. Models used for the geophysical corrections applied to the altimetry data.
Correction | Model | Reference
Sea state bias (ALES) | ALES | Passaro et al. (2018)
ERS sea state bias | REAPER | Brockley et al. (2017)
TOPEX sea state bias | TOPEX | Chambers et al. (2003)
Inverse barometer before 2017 | DAC-ERA | Carrere et al. (2016)
Inverse barometer from 2017 | DAC | Carrère et al. (2011)
Wet troposphere | GPD+ | Fernandes and Lázaro (2016)
Dry troposphere | VMF3 | Landskron and Böhm (2018)
Ionosphere | NIC09 | Scharroo and Smith (2010)
Ocean and load tide | FES2014 | Lyard et al. (2020)
Solid earth and pole tide | IERS 2010 | Petit and Luzum (2010)
Mean sea surface | DTU18MSS | Andersen et al. (2016)
Radial error | MMXO17 | Bosch et al. (2014)
The same corrections are used for each satellite altimetry mission to allow for consistency, with the only differences occurring in the sea state bias correction. The ALES retracker (Passaro et al., 2014) is applied to the Jason missions and the ENVISAT mission based on data availability at the time of running the model, with the other altimetry missions using the REAPER (Brockley et al., 2017) and TOPEX sea state bias corrections (Chambers et al., 2003). This discrepancy in the chosen retracker is designed to benefit from the ability of the ALES retracker to obtain data closer to the coast, which Piccioni et al. (2019b) showed had positive effects on the accuracy of the EOT tide model for the major tidal constituents compared to using the other retracked data. Therefore, depending on the retracker that is used, a coastal flag is implemented into the model that limits the distance to the coast. For missions using the REAPER and TOPEX retrackers, a coastal flag is implemented that restricts the use of SLA data up to 7 km from the coastline. For missions using the ALES retracker, however, this distance to the coast is decreased to 3 km (Passaro et al., 2020). An additional flag is also added limiting the absolute value of sea level anomalies to ±2.5 m (Savcenko and Bosch, 2012). The altimetry data are further adjusted to account for radial errors estimated in the cross-calibration of the SLA data using the multi-mission crossover analysis approach presented in Bosch et al. (2014).
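A minimal sketch of the along-track SLA computation described above is given below; it assumes the orbital height, retracked range, mean sea surface and the individual geophysical corrections of Table 2 are already available as NumPy arrays, and the variable names are illustrative.

```python
import numpy as np

def sea_level_anomaly(H, R, mss, corrections):
    """SLA = H - R - MSS - sum of geophysical corrections (all along-track arrays)."""
    h_geo = np.sum(np.stack(list(corrections.values())), axis=0)
    return H - R - mss - h_geo

# corrections might be a dict such as
# {"wet_tropo": ..., "dry_tropo": ..., "iono": ..., "pole_tide": ..., "ssb": ...}
```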
As shown in Table 2, the ocean and load tide correction for all missions is the FES2014 oceanic tide model. This is one of the major changes from the previous version of the global EOT model, EOT11a, which used one of the previous versions of the FES model, FES2004. The results of Lyard et al. (2020) showed considerable improvements in FES2014, particularly in the coastal and shelf regions. These improvements are largely driven by the improved efficiency of data assimilation and accuracy of hydrodynamic solutions. It is, therefore, anticipated that some of the improvements made between the versions of EOT will be due to the improvement in the reference model.
Once all these corrections are applied, the SLA can be estimated for all eleven altimetry datasets, which are then gridded onto a triangular grid based on the techniques presented in Piccioni et al. (2019b). Once collected, the data is then weighted using a Gaussian function based on the distance to the grid point. The use of data from multiple satellite tracks for each node provides a long SLA time series, which is important in reducing the aliasing effect and in decorrelating tidal signals with alias periods close to each other (Savcenko and Bosch, 2012). These issues occur due to the low temporal resolution obtained from satellite altimetry (e.g. the Jason missions only sample the same position once every 9.915 days), resulting in tides not being properly estimated. The alias periods for the major tidal constituents for the Jason and the ERS orbits are presented in Smith (1999). The use of nodes with data from multiple altimetry missions, therefore, creates a long enough time series to improve the temporal resolution and reduce possible aliasing effects in the tidal estimations.
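The Gaussian distance weighting applied when SLA observations are gathered around a grid node could look like the following sketch; the e-folding scale is an assumed placeholder, not the value used in EOT20.

```python
import numpy as np

def gaussian_weights(distances_km, sigma_km=100.0):
    """Down-weight observations with distance from the grid node."""
    return np.exp(-0.5 * (np.asarray(distances_km) / sigma_km) ** 2)
```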
Residual Tidal Analysis
From the weighted SLA, residual tidal analysis is performed using weighted least-squares and Variance Component Estimation (VCE) for each grid point of the model. The least-squares approach is applied to the harmonic formula to derive the amplitudes and phases of single tidal constituents from the SLA observations. In EOT20, the seventeen tidal constituents considered and computed are: 2N2, J1, K1, K2, M2, M4, MF, MM, N2, O1, P1, Q1, S1, S2, SA, SSA and T2. The weighted least-squares analysis follows a standard procedure, solving the following normal equations for each grid point (Piccioni et al., 2019b):
x = (A^T W A)^(-1) A^T W l,
with l being the vector of SLA values, A the design matrix, W the diagonal matrix of weights, and x the vector of unknowns.
The unknowns of vector x are: the in-phase and quadrature coefficients of the tidal constituents being considered; the sea level trend; and the constant values defined as the mean sea level from each specific mission at each node (Piccioni et al., 2019b).
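A simplified sketch of the weighted least-squares harmonic analysis is shown below: the design matrix holds a constant, a trend and an in-phase/quadrature pair per tidal frequency, and the weighted normal equations are solved per grid node. The frequencies and weights are placeholders, and the per-mission constants and nodal corrections used in the real model are omitted for brevity.

```python
import numpy as np

def harmonic_fit(t, sla, weights, omegas):
    """t: time in days; sla: anomalies; weights: per observation; omegas: rad/day."""
    cols = [np.ones_like(t), t]                       # mean sea level and trend
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]        # in-phase / quadrature pair
    A = np.column_stack(cols)
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ sla)   # x = (A^T W A)^-1 A^T W l
```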
The VCE is implemented to allow for the combination of datasets from multiple satellite missions and allows for appropriate weighting of the missions based on their variances to provide a more accurate estimation. The VCE method has been utilised in a variety of applications and it was introduced into the previous global model, EOT11a (Savcenko and Bosch, 2012), which followed the formulation detailed in Teunissen and Amiri-Simkooei (2008) and Eicker (2008). The VCE is calculated iteratively, as both the unknowns and the variances, σ, are initially unknown. The formulation is as follows:
x = N_x^(-1) N_y,
with N_x and N_y equal to the weighted sums over the individual missions, N_x = Σ_i σ_i^(-2) A_i^T P_i A_i and N_y = Σ_i σ_i^(-2) A_i^T P_i l_i. The variances are iteratively calculated by
σ_i^2 = Ω_i / r_i,
where r_i is the partial redundancy and Ω_i = v^T P_bb v, v being the vector of residuals and P_bb the dispersion matrix of measurements (Savcenko and Bosch, 2012).
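For illustration, a textbook two-group variance component estimation loop is sketched below; it follows the general Helmert-type scheme rather than the exact EOT20 implementation, and all inputs are placeholders.

```python
import numpy as np

def vce_two_groups(A1, l1, A2, l2, n_iter=10):
    """Iteratively estimate unknowns and per-group variance components."""
    s1 = s2 = 1.0                                     # initial variance components
    for _ in range(n_iter):
        N = A1.T @ A1 / s1 + A2.T @ A2 / s2           # combined normal matrix
        b = A1.T @ l1 / s1 + A2.T @ l2 / s2
        x = np.linalg.solve(N, b)
        v1, v2 = l1 - A1 @ x, l2 - A2 @ x             # residuals per group
        Ninv = np.linalg.inv(N)
        r1 = len(l1) - np.trace(Ninv @ (A1.T @ A1)) / s1   # partial redundancies
        r2 = len(l2) - np.trace(Ninv @ (A2.T @ A2)) / s2
        s1, s2 = (v1 @ v1) / r1, (v2 @ v2) / r2       # sigma_i^2 = Omega_i / r_i
    return x, s1, s2
```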
Following the residual analysis, significant residual signals were obtained for all of the tidal constituents. For the M2 and N2 tides (Figure 1), for example, the residual amplitudes can exceed 2 cm, with the largest residual tides being seen in the coastal region. Relatively high residual tides are also seen in the western boundary currents, such as the Agulhas Current and the Gulf Stream. The tides observed are the residual elastic tides that consist of both the ocean and the load tides. Therefore, additional analysis has been done to separate these two components. There are several techniques described that make this possible (e.g., Francis and Mazzega, 1990), with EOT using the method presented in Cartwright and Ray (1991). This method involves using the complex elastic ocean tide admittance decomposed in complex spherical harmonics, as described by Savcenko and Bosch (2012). The spherical harmonic admittances of the load tides are then obtained from the elastic admittances through the degree-dependent factor β_n = α_n / (1 + α_n), with α_n = 3/(2n+1) · (ρ_w/ρ_e) · h_n. The Love numbers, h_n, were taken from Farrell (1972), with ρ_w and ρ_e being the densities of the ocean and the earth. After synthesis of the load tides, the residual ocean tides were computed as the difference between the elastic tide and the load tide.
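The degree-dependent factor quoted above can be coded directly; the density values below are assumed typical values for seawater and the solid Earth, and the Love number argument would be taken from the Farrell (1972) tabulation rather than a placeholder.

```python
def beta_factor(n, h_n, rho_w=1025.0, rho_e=5517.0):
    """beta_n = alpha_n / (1 + alpha_n), with alpha_n = 3/(2n+1) * (rho_w/rho_e) * h_n."""
    alpha_n = 3.0 / (2 * n + 1) * (rho_w / rho_e) * h_n
    return alpha_n / (1.0 + alpha_n)
```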
Model Formation
Once the ocean and load tide residuals are produced, the full tidal signal is restored by adding the residuals to the FES2014 tidal atlas. The residuals are interpolated onto a 0.125° resolution grid, with the FES2014 model interpolated onto the same grid resolution. The outputting of the data onto a regular grid is simply done to allow for an easy combination with the FES2014 model as well as to be more user friendly. The north-south extent of the model spans 66°N to 66°S, with the model defaulting to the FES2014 tides in the higher latitudes. This extent is chosen due to the limited altimetry data beyond this latitudinal band and the difficulty in modelling the tides in the polar regions. Dedicated studies of the Arctic region, such as that of Cancet et al. (2019), demonstrate the complexity of modelling ocean tides in the polar regions and emphasize their importance for satellite altimetry. Future iterations of the EOT model will tackle the estimation of tides in the higher latitudes. A land-sea mask was added to the model based on the GMT tool that uses the GSHHG coastline database (Wessel and Smith, 1996), which is a high-resolution database containing information about coastlines as well as lake and river boundaries. These data have a mean point separation of 178 meters, which has been interpolated to a 0.125° resolution for use in the EOT20 model.
In complex coastal regions, such as regions with islands or in semi-enclosed bays, properly defining the coastlines becomes extremely valuable when validating the model against in-situ tide gauges. This is largely a result of artifacts forming when estimating tides in regions where the coastline has not been properly defined. For example, the Cook Strait between the two islands of New Zealand provides a unique coastal structure that shows a sharp change in the amplitude of the major tides (e.g. M2, N2, S2 and K2, as shown in Walters et al. (2001)) and, therefore, requires a more accurate coastline definition. Preliminary studies of EOT20 (not shown) demonstrated that for tide gauges within the Cook Strait the root square sum (RSS) difference between the model and tide gauges was reduced by 0.2 cm for the eight major tidal constituents when applying a more accurate land-sea mask. An overall reduction in RSS is seen throughout the ocean when using an accurate land-sea mask.
The EOT20 model follows the framework of the EOT11a model when estimating the tide via residual analysis. However, significant changes and additions have been made to EOT20 with the objective of improving coastal estimations. These changes are in the reference tide model used in the residual analysis; the use of more recent developments in coastal altimetry (e.g. the development of the ALES retracker (Passaro et al., 2014)); the increased coverage of satellite altimetry based on the launching of further missions (e.g. Jason-3); the application of an accurate land-sea mask to the model output data; and the use of a triangular grid for the residual analysis. These additions all combine to optimise the estimation of ocean tides in the EOT20 model.
Tide Gauge Comparison
Since the 1800s, tide gauges have been used to study the ocean tides and the variation in sea level. Over the years, more and more tide gauges have been installed around the world resulting in a vast array. This comprehensive record of tide gauges can 180 be used to evaluate the changes in sea level over time as well as better understand the ocean tides. Tide gauges, therefore, provide a suitable source of data in the validation of ocean tide models, particularly in the coastal region. There are limitations particularly in the distribution of tide gauges, with certain regions containing a vast number of tide gauges (e.g. in Northern Europe) and some regions containing little to no data (e.g. the Mozambique Channel). Furthermore, tide gauges are mostly restricted to the coastal region and, therefore, do not provide sufficient observations of the open ocean region. With that in 185 mind, Ray (2013) estimated tidal constants from bottom pressure stations in the open ocean regions which has been used to compare and assess the accuracy of global ocean tide models (Stammer et al., 2014). These data are combined with coastal and shelf data from Stammer et al. (2014) as well as the TICON dataset (Piccioni et al., 2019a) to create a comprehensive dataset of tidal constants (shown in Figure 4) to evaluate the accuracy of the EOT20 model throughout the global ocean. Stammer et al. (2014), the TICON dataset is also divided into three regions with the coast being defined as any tide gauges found shallower than 10 m, the shelf defined as being between 10 m to 100 m depth and open ocean being anything deeper than 100 m. This is done to assess how the model performs in the coastal region, a historically difficult region to model accurately. Several major ocean tide models are also compared to the same tide gauges in order to act as reference to the ability of the EOT20 model. The models used are EOT11a (Savcenko and Bosch, 2012), FES2014 (Lyard et al., 2020), GOT4.8 (Ray,195 2013) and DTU16 (Cheng and Andersen, 2017). To provide suitable comparisons, duplicate tide gauges were removed and restrictions were implemented based on the model characteristics (i.e. only tide gauges between 66 • S and 66 • N were used).
This results in 1,226 tide gauges and bottom pressure sensors being available for validation of the models. It should be noted that 230 of the tide gauges used in this study are assimilated into the FES2014 model. The root-mean-square (RMS) and rootsquare-sum (RSS) between models and gauges were estimated following the techniques described in Stammer et al. (2014) for 200 the eight major tidal constituents (M2, N2, S2, K2, K1, O1, P1 and Q1) which are commonly available from the tide models used.
The comparison between EOT11a and EOT20, shows a significant improvement in the EOT20 model for the full dataset (Table 3). This is consistent for all of the tidal constituents, with a major improvement seen in the M2 tide (1.5 cm) and the S2 tide (0.9 cm). For all of the regions (Figure 5), EOT20 continues to show improvements compared to EOT11a particularly 205 in the coastal region with a mean RSS reduction of 2.5 cm. In the coastal region, EOT20 shows a reduced RMS for all the tidal constants with large reductions occurring again for the M2 (2 cm) and S2 tide ( This suggests that the adjustments and additions made to the EOT model, such as the incorporation of the ALES retracker 210 in the estimation of the SLA, produce substantial differences to the performance of the model in the coastal region without harming the performances in other regions. EOT20 also shows a reduced RSS when compared to the other global models, particularly compared to the reference model, FES2014. The largest improvement comes in the M2 tidal constituent while the results for the remaining tidal constituents are quite consistent between FES2014 and EOT20. In the coastal region, FES2014 and EOT20 both show significant improvements to the other models, being approximately 1 cm better than the closest model 215 in this region ( Figure 5). In the shelf and open ocean regions, all the models generally show similar results to one another with FES2014, EOT20 and DTU16 only varying by a few millimeters. Therefore, the better performance of EOT20 seen in Table 3 can mostly be put down to the results seen in the coastal region.
This is further highlighted in the TICON dataset which contains significantly more coastal tide gauges compared to the other two regions. Again, EOT20 shows a substantial reduction in RMS for the M2 tidal constituent of 2.3 mm compared to the next 220 best model (FES2014). For the remaining tidal constituents, EOT20 and FES2014 never vary by more than 1 mm in terms of RMS values. This improvement compared to FES2014 is mainly seen in the coastal region, which is in line with previous regional studies of EOT done using FES2014 as the reference tide model and the ALES retracker (Piccioni et al., 2018).
In the shelf region, the reduction of RMS in the M2 tide from EOT20 is still seen compared to FES2014 but reduces to 1 mm. The RSS of EOT20 is 0.2 mm higher than that of FES2014 while DTU16 further reduces the RSS by 0.2 mm. This is 225 dataset specific however, with EOT20 performing the best by 0.05 mm in the TICON shelf data and DTU16 performing the The constituents not included in the previous analysis, are compared to the FES2014 model and the TICON tide gauge dataset (presented in Figure A3). Only the TICON tide gauge dataset is used based on the availability of appropriate tidal constituents for the analysis. For eight of the nine tidal constituents, the two tide models show similar results to one another.
The SA, solar-annual, tidal constituent shows the largest improvement between FES2014 and EOT20 of 1.9 cm but for both 235 models this is the poorest estimated tide. This constituent is estimated by FES2014 based on free hydrodynamic solutions and does not contain any data assimilation (Lyard et al., 2020). Here, the EOT20 model utilises the extended time-series of altimetry data to make a more accurate estimation of the tide based on the residual analysis, thus providing somewhat of an improvement compared to FES2014. However, it is clear that the solutions of EOT20 are still imperfect due to the poorer performance of the reference tide model and due to the temporal aliasing of this long-period constituent. It should be noted that 240 the assessment of the models using in-situ tide gauges themselves would benefit from additional high-quality extended timeseries in order to more accurately estimate long-period constituents. The other free hydrodynamic tidal solutions estimated similarly in FES2014 (MM, MF and SSA) show smaller errors when compared to tide gauges and the differences between the RMS of the two models are significantly reduced.
The S1 tidal constituent is the relatively worst performing tidal constituent from the EOT20 model with an increased RMS that is used in the creation of the SLA product which may leak into the estimation of the ocean tides (Ray, 2020). The ionospheric correction used in EOT20 is aimed at optimising the performance of the tide model in the coastal region, however, this may be negatively impacting the estimation of certain tidal constituents, like the S1 tide. Furthermore, Ray and Egbert (2004) discuss the impact that geophysical corrections (mainly inverse barometer and dry troposphere) have on the estimations 250 of the S1 tide from altimetry data. A future study of the EOT model will investigate the use of different geophysical corrections to optimise the estimation of ocean tides with particular focus on the S1 tidal constituent.
The results of the tide gauge and ocean bottom pressure analysis suggest rather encouraging results from the EOT20 model. for all three satellite altimetry missions. The red line represents FES2014 -EOT20, while the blue line represents EOT11a -EOT20.
Sea Level Variance Reduction Analysis
In order to further assess the models ability, sea level variance reductions of three satellite altimetry missions were assessed and are presented. As seen in Figure 4, tide gauges and ocean bottom pressure do not provide full coverage of the open ocean so comparing the sea level variances of ocean tide models provides a suitable assessment of the performances of the models.
260
The missions chosen are Jason-2 and Jason-3 which are used in the residual tide analysis as well as SARAL which is not used in the analysis. A few steps are required in order to estimate sea level variance reduction. First, the along-track SLA is estimated using the corrections listed in Table 2 with the only differences being in the ocean and load tide correction. For this correction, two tide models (EOT11a and FES2014) were used to be compared to EOT20. The SLA for each cycle of all three missions was then estimated and then gridded onto a four-degree grid. Once done, the variance of each of the SLA products 265 was estimated (Savcenko and Bosch, 2012). Figure 6, presents the results of the scaled SLA variance differences between the three tide models. For the Jason-2 mission, which is the mission with the most cycles, the SLA variance differences between all tide models are very similar to one another with EOT20 showing an overall mean-variance reduction of 0.54 mm and 0.26 mm when compared to EOT11a and FES2014 respectively. The largest discrepancy is around 60 • to 66 • south, where EOT20 shows a lower SLA variance compared to 270 EOT11a and FES2014. When looking at how the SLA variance differences change based on the distance to coast for Jason-2 ( Figure 7, top), it can be seen that EOT20 shows the largest reduction of variance in the coastal region. This is particularly the case when looking at the differences between EOT11a, with EOT20 reducing the variance by approximately 0.4 cm in the first 100 km from the coast. As they move further from the coast, the difference between the two models begins to reduce and converge towards zero. The variance difference between FES2014 and EOT20 show similar results. Closer to the coast, EOT20
275
shows a reduced variance compared to that of FES2014 with differences exceeding 1 mm but as they move further from the coast the difference between the two models converges towards zero. Like with the EOT11a model, FES2014 begins to show a reduced variance compared to EOT20 800 km from the coast.
For the Jason-3 mission, a reduction in SLA variance can be seen from the EOT20 model, with The discrepancies between the models again being very small ( Figure 6). The mean-variance reduction of EOT20 is 0.92 mm and 0.89 mm when compared 280 to EOT11a and FES2014 respectively. The variance reduction can be seen throughout the ocean, with larger reductions in the coastal region (Figure 7, middle). Like in the Jason-2 mission, the variance differences decreases further away from the coast.
Although the variance reduction diminishes further from the coast, unlike in the other two missions EOT20 shows continued variance reduction throughout the ocean.
The SARAL mission presents differing results from those seen in the Jason missions. It should be noted that SARAL has 285 considerably less cycles and has a different orbit compared to the Jason missions. However, the results still provide valuable insights into the performances of the models. When looking at the scaled variance differences, the results become a bit more variable between the models with EOT20 showing reductions in variance in regions such as the Indian Ocean and the North Atlantic Ocean but showing increased variance in regions such as the South Atlantic Ocean and the South Pacific Ocean.
Overall, EOT20 shows a mean reduction of variance compared to EOT11a of 1.29 mm despite EOT11a outperforming the 290 model in certain regions. The mean-variance reduction of EOT20 compared to FES2014 is 0.35 mm, however, there are regions where FES2014 shows better performances, particularly in the South Atlantic. Again, the overall reduction in variance is largely driven by the models' performance closer to the coast (Figure 7, bottom) with reductions compared to EOT11a and FES2014 exceeding 3 mm and 2 mm respectively closer to the coast, while these differences reduce towards zero further away from the coast. In this study, an updated version of a global ocean tide model, EOT20, is presented. Model developments were aimed at updating the previous model, EOT11a, with a focus on improving the coastal estimations of ocean tides by utilising recent developments in coastal altimetry, particularly the use of the ALES retracker and sea state bias correction. In the residual analysis, SLA data is gridded into a triangular grid aimed at increasing the efficiency of the model and thus better-describing 300 tides in the coastal and higher latitudinal regions. A further update was in the use of a newer version of the reference model (FES2014) for the residual analysis performed to create the EOT20 model which showed significant improvements to the previous reference model used (Lyard et al., 2020).
To evaluate the performance of the EOT20 model, validation against in-situ observations and through sea level variance analysis was done. First, the models performance was compared with tide gauges and ocean bottom pressure sensors for 305 the eight major tidal constituents. The results suggested that EOT20 showed significant improvements compared to EOT11a throughout the global ocean, with major improvements being seen in the coastal region. Furthermore, when compared to other global ocean tide models, EOT20 showed the lowest overall error for all eight tidal constituents with a major improvement being seen in the M2 tide. This positive performance was largely driven by the improved accuracy of the model compared to observations in the coastal region. In the shelf and open ocean regions, EOT20 was on par with the best tide models 310 in these regions, DTU16 and FES2014. The additional tidal constituents provide valuable data for the creation of the tidal correction used for satellite altimetry. The results of these additions show positive results compared to the FES2014 model but improvements can still be made in determining some of these tides, particularly the S1 tidal constituent. Further investigations will be done at DGFI-TUM into the estimation of additional minor tidal constituents as well as the optimization of the current estimations. The sea level variance analysis continued to show positive results for EOT20. EOT20 reduced the mean variance 315 compared to both FES2014 and EOT11a for all three satellite altimetry missions studied. Again, the largest reason for the improvement was seen in the coastal region with EOT20 showing similar results compared to the other models in the open ocean regions. These results of the new EOT20 model suggest that it will serve as a useful tidal correction for satellite altimetry.
Errors resulting from tide models are considered to be one of the main limiting factors for temporal gravity field determination and the derivation of mass transport processes (Koop and Rummel, 2007;Pail et al., 2016). In the creation of EOT20, 320 a first look into the uncertainties was done but due to the unavailability of uncertainty estimations from the FES2014 model used as the reference model these uncertainties are incomplete and, therefore, are not presented. This is a topic of discussion and future development that will be assessed in future studies.
As the fields of coastal altimetry and ocean tides develop, the ideas and methods of improving the EOT model continue to grow. A clear next step for the EOT model is to assess its ability to estimate tides in higher latitudes by including more satellite 325 missions (e.g. Cryosat-2) and to introduce further data such as synthetic-aperture radar altimetry from Sentinel-3. Furthermore, more recent developments in the estimation of internal tide models (Carrere et al., 2021) suggest that improvements may be made to the estimation of ocean tides from residual analysis when the internal tidal correction is applied to the SLA data. These potential avenues of improvement will be addressed in future iterations of the EOT model. | 2021-05-05T00:08:05.652Z | 2021-08-10T00:00:00.000 | {
"year": 2021,
"sha1": "6a68a38ecda9b57ae0f9ec5e04283e35f2b1e703",
"oa_license": "CCBY",
"oa_url": "https://essd.copernicus.org/articles/13/3869/2021/essd-13-3869-2021.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aa900bbdad27b7948fd6712c0adf22772498c918",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
224895316 | pes2o/s2orc | v3-fos-license | Is Employment Relations Towards Deregulation and Institutional Convergence Across the Globe?
Many academics speak of deregulation and institutional convergence in employment relations across the globe, whether in advanced or emerging economies. To give the benefit of the doubt: is this really the case in all countries in the world, or are these merely the representative stories? The objective of this paper is therefore to explore the recent context of industrial and labor relations in Nepal. To this end, the authors consider the recent Labor Act of 2017 as the fundamental basis for examining employment relations against the growing deregulation and institutional convergence in the neo-liberal contexts of post-globalization, highly competitive markets, export-oriented business, rapid digital technological advancement and increased labor migration abroad. The Labor Act of 2017 shows that employment relations remain built around the system approach, with collective bargaining, collective dispute settlement and the provision of arbitration, mediation or third-party settlement, to name a few. This indicates that the employment relations of deregulation and institutional convergence in the advanced and emerging economies, including Asia, are only some of the representative cases, in that the factors influencing such employment relations vary across countries even though there is increased competition, digitalization, labor migration and knowledge-intensive work. On the contrary, within the local and national context or institutional labor framework, the actors and the path dependence of overall employment relations do matter in the study of changing employment relations. Thus, the Labor Act of 2017 in Nepal is considered as a case study to acknowledge employment relations against the changing global one. Further, this story would contribute to the revitalization of industrial and labor relations and to the literature on divergence of employment relations.
Introduction
Bojindra Prasad Tulachan, Ph.D., Assistant Professor, Department of Global Culture Industry Management, Calvin University, Yongin, Republic of Korea.
Balabhadra Rai , Ph.D., Assistant Professor, Department of Global Culture Industry Management, Calvin University, Yongin, Republic of Korea.
The transformation of industrial relations initially began in the advanced economies of the West (Erickson & Kuruvilla, 1998; Jürgens, Klinzing, & Turner, 1993; Kochan, Katz, & McKersie, 2018), spread to recently developed countries in Asia (Kuruvilla & Erickson, 2002), and the trend is now under way in emerging countries as well. Fundamentally, the transformation of industrial relations is towards deregulation and institutional convergence, away from the traditional system approach (Baccaro & Howell, 2011; Jefferys, 2019), to the extent that Hyman (2018) is worried about the future of industrial relations in Europe. However, it is a puzzle that Nepalese industrial relations is essentially still following the traditional industrial relations system. The recent Labor Act of 2017 covers collective bargaining, collective dispute settlement and the provision of arbitration, mediation or a third-party system (Labor Act, 2017), which reflects that Nepalese industrial relations still maintains traditional industrial relations from a system perspective.
Many academics support the thesis of deregulation and institutional convergence of industrial relations. However, very little is known about the industrial relations of Nepal from the perspective of this growing deregulation and institutional convergence. The Labor Act of 2017 of Nepal clearly shows that Nepalese industrial relations runs against the mainstream literature on deregulation and institutional convergence of industrial relations (Tulachan & Felver, 2019). In the neo-liberal economies of the advanced, recently advanced and emerging countries, the evidence in the literature supports the thesis of deregulation and institutional convergence across the globe (Kinderman, 2019). In this, scholars have failed to make case studies of the least developed countries; rather, these countries have been portrayed in similar directions by means of representative studies within regional terrains. This trend in industrial relations scholarship has left behind the most salient part of industrial relations, that of the least developed countries. Thus, the objective of this paper is to unveil the untold story of Nepalese industrial relations, which runs against the recent mainstream trend in employment relations in the literature. In doing so, this paper collects the latest literature on industrial and labor relations regarding deregulation and institutional convergence in the advanced, recently advanced and emerging economies, and how such deregulation and institutional convergence have been pursued in recent years forms a major part of the investigation. The paper also explores the increase in non-standard employment, such as part-time, seasonal and contract work, from the perspective of the standard employment system of industrial relations, together with the role of the gig economy in the neo-liberal context of the global economy (McDonough, 2017; Mai, 2017). To that end, massive efforts have been made in one way or another to reduce such highly precarious jobs (Jaehrling, Wagner, & Weinkopf, 2016; Grimshaw, Johnson, Rubery, & Keizer, 2016). With this background, the odd course of Nepalese industrial relations, with its permanent employment system and provisions for collective bargaining, collective dispute settlement and arbitration, mediation or third-party settlement, is the focus, as it is unlikely to follow the deregulation and institutional convergence story of the mainstream industrial and labor relations literature.
The paper is divided into four segments. Following introduction, the second segment concentrates on review of literatures with focal variables of deregulations and institutional convergence of industrial relations in advanced, recently advanced, and emerging economies. In this part, the paper further projects what has made such deregulations and institutional convergence in such a rapid scale. Then, it asks as why there is hardly any deregulations and institutional convergence as such compared to other countries in the world. In the third segment, the paper makes a comprehensive case study as why and what has made Nepal to reintroduce the Labor Act of 2017 with the provision of collective bargaining, collective dispute settlement and the provision of arbitration, mediation or third-party system. Finally, the paper makes a conclusion with future research directions of case studies of the least developed economies around studies of deregulations and institutional convergence of industrial and labor relations.
Review of Literature
Since the 1980s and 1990s, there have been significant changes in industrial relations, and most traditional industrial relations systems have been undergoing transformation. The transformation of industrial relations started in the advanced economies (Hyman & Ferner, 1998; Erickson & Kuruvilla, 1998; Kochan et al., 2018) because of technological advancement, increased competition, global and liberal economies, expanding factor markets and the trend towards mass production of goods and services. The drivers of change were similar in the recently developed economies, most importantly in the Asia-Pacific region (Bamber & Leggett, 2001; Kuruvilla & Erickson, 2002), and the trend eventually reached East Asia and South Asia. A major driving force was the economic crisis and recession of the 1980s in the United States. Earlier, the Taft-Hartley Act had been introduced as a counterweight to the Wagner Act in response to mounting strikes and strong trade unions. The oil shocks of the 1970s further influenced the transformation of industrial relations in the West (Kuruvilla & Erickson, 2002).
With the increasing trend towards transformation of industrial relations, most countries have deregulated the traditional system approach to industrial relations. The provisions for collective bargaining, collective dispute settlement and arbitration, mediation or third-party settlement have given way to individual bargaining, with no further provision for arbitration or third-party settlement in employment relations. In fact, standard employment relations have shifted towards highly non-standard ones, not only in advanced economies but also in emerging economies in various parts of the world. Thus, precarious and vulnerable jobs have increased, and it remains under debate whether this is good for the economy and where it is leading in matters of employment.
Deregulation or Institutional Convergence? A Case Study of Nepal
Industrial relations in Nepal began after World War II. The labor movement of 1947 paved the way for the promotion of industrial relations. However, the movement could not become fully successful because of the highhandedness of the employers and the strong support they received from the government (Tulachan & Felver, 2019). The government of the period was a kind of oligarchy that did not want to open up a labor-friendly environment or allow the institutionalization of industrial relations. In fact, workers and trade unions had to collaborate with the political parties to bring down the autocratic regime (Tulachan, 2019).
As workers were becoming united and the tussle continued, King Mahendra staged a royal coup d'état in 1960 and banned trade unions until 1990 (Acharya & Bhattarai, 2012). Despite the official ban, however, workers and trade unions continued to operate underground and grew stronger. From the 1980s onwards, the political environment became more liberal as a wave of democracy appeared in South Asia. This democratic wave did not leave Nepal, with its single-party monarchical system, untouched. Most influential was the success of Indian independence in 1947, which encouraged Nepalese workers and trade unions to believe that they could reclaim their stolen labor rights and establish institutional labor frameworks. Once democracy was successfully introduced in Nepal in 1990, workers and trade unions grew massively (Acharya & Bhattarai, 2012). The Labor Act of 1992 and the Trade Union Act of 1992 were milestones in the establishment of labor rights in Nepal (Dahal, 2002). The labor framework followed the system framework of collective bargaining, collective dispute settlement and the provision of arbitration, mediation or third-party settlement (Labor Act, 1992). In practice, however, the labor framework was rather favourable to workers and trade unions. This is because, when the trade unions were banned from 1960 to 1990, they worked more closely with the political parties for their power and survival (Tulachan, 2019). They likewise gave strong support to their mother political parties in democratic elections. Thus, they held enough power to influence the mother parties and the government in times of need.
The social movement of 2006 was another critical juncture in Nepalese politics (Bhandari, 2014), because the Maoists, who had been in insurgency from 1996 to 2005, entered mainstream politics (Upreti, 2008). With that, a new constitution was promulgated and Nepal became the Republic of Nepal with seven states. In 2017, the new Labor Act was introduced in Nepal, again in the manner of the system approach. This clearly marks that the system approach is still in practice in Nepal. The Labor Act of 2017 clearly adopts collective bargaining, collective dispute settlement and the provision of arbitration, mediation or third-party settlement (Labor Act, 2017). This indicates that standard employment relations are being maintained, in contrast to the rise of non-standard employment relations in other parts of the world. Representative papers on South Asia reflect a similar direction of deregulation and institutional convergence in industrial relations. This puzzle has pushed the authors to write this paper, as it makes sense, amidst the growing deregulation and institutional convergence literature, to cite the case of Nepal as one that has substantially maintained traditional industrial and labor relations.
Conclusions
This paper concludes that Nepal is on an odd course of industrial and labor relations, whereas most advanced and emerging countries are moving in the direction of deregulation and institutional convergence of employment relations. The journey is from standard employment relations to non-standard employment of contractual, seasonal, daily, and hourly work; thus, the gig economy has grown in most parts of the world in the context of neo-liberal economies. However, the scenario of employment relations has long been quite different in Nepal. The Labor Act of 2017 is the clearest epitome of the practice of the system approach of collective bargaining, collective dispute settlement and the provision of arbitration, mediation or third-party settlement. The case study of Nepal would contribute to the revitalization of industrial and labor relations scholarship and to the study of divergence in employment relations, explaining why Nepal is on the high road of the system approach against the tide of decline of industrial and labor relations in different parts of the world. | 2020-10-19T18:09:22.821Z | 2020-09-28T00:00:00.000 | {
"year": 2020,
"sha1": "8cf91562b69f53e6ce679baa9a6b349995cb6bfc",
"oa_license": null,
"oa_url": "https://doi.org/10.17265/1548-6591/2020.05.001",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "42bbe679e4635a4752b135719043fd833edc1380",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
211169860 | pes2o/s2orc | v3-fos-license | Impact of sarcomatoid differentiation and rhabdoid differentiation on prognosis for renal cell carcinoma with vena caval tumour thrombus treated surgically
Background Sarcomatoid differentiation in renal cell carcinoma (RCC) with vena caval tumour thrombus has been shown to be associated with aggressive behaviours and poor prognosis; however, evidence of the impact of rhabdoid differentiation on prognosis is lacking. This study evaluated the impact of sarcomatoid differentiation and rhabdoid differentiation on oncological outcomes for RCC with vena caval tumour thrombus treated surgically. Methods We retrospectively analysed patients treated surgically for RCC with vena caval tumour thrombus at our institute from Jan 2015 to Nov 2018. Prognostic variables were evaluated for associations with progression-free survival (PFS) and cancer-specific survival (CSS) by Kaplan–Meier survival analysis and log-rank test. Univariate and multivariate analyses were performed to determine independent prognostic variables. Results We identified 125 patients with RCC and vena caval tumour thrombus, including 17 (13.6%) with sarcomatoid differentiation alone, 8 (6.4%) with rhabdoid differentiation alone and 3 (2.4%) with both sarcomatoid and rhabdoid differentiation. Compared to pure RCC, patients with sarcomatoid differentiation but not rhabdoid differentiation have worse PFS (p = 0.018 and p = 0.095, respectively). The univariate and multivariate analyses both showed sarcomatoid differentiation as a significant predictor of PFS. Compared to pure RCC, patients with sarcomatoid differentiation (p = 0.002) and rhabdoid differentiation (p = 0.001) both had significantly worse CSS. The univariate analysis showed sarcomatoid differentiation, rhabdoid differentiation, metastasis and blood transfusion as significant predictors of CSS (All, p < 0.05). In the multivariate analysis, sarcomatoid differentiation (HR 3.90, p = 0.008), rhabdoid differentiation (HR 3.01, p = 0.042), metastasis (HR 3.87, p = 0.004) and blood transfusion (HR 1.34, p = 0.041) all remained independent predictors of CSS. Conclusions Sarcomatoid differentiation and rhabdoid differentiation are both independent predictors of poor prognosis in RCC with vena caval tumour thrombus treated surgically.
Background
Renal cell carcinoma (RCC) is the most common kidney tumour, comprising an estimated 2.2% of all new cancer diagnoses with 403,262 new cases and 175,098 deaths in 2018 [1]. Overall 4-10% of patients with RCC present with venous tumour thrombus [2]. Sarcomatoid differentiation in RCC is characterized histologically by a dedifferentiated growth pattern of epithelial neoplasm into malignant spindle-shaped mesenchymal cells [3]. Sarcomatoid differentiation can arise in any histologic subtype of RCC; thus, it is no longer considered a distinct histologic subtype [4]. Approximately 5% of all RCCs and up to 15% of stage IV cases contain sarcomatoid differentiation [5,6]. Previous studies suggested that sarcomatoid differentiation is associated with aggressive behaviours, poor response to targeted therapy and worse prognosis [6][7][8]. However, the impact of sarcomatoid differentiation on prognosis for RCC with vena caval tumour thrombus treated surgically has not been studied extensively.
Rhabdoid differentiation in RCC is characterized by "sheets and clusters of variably cohesive, large epithelioid cells with vesicular nuclei, prominent nucleoli and large paranuclear intracytoplasmic inclusions" [9]. It is present in approximately 5% of all RCCs and 27% of grade 4 RCCs [9,10]. Rhabdoid differentiation in RCC is considered a predictor of poor prognosis, similar to sarcomatoid differentiation. Therefore, the World Health Organization International Society of Urological Pathology (WHO/ISUP) grading system formally classifies RCC with either sarcomatoid differentiation or rhabdoid differentiation as grade 4 [11]. The impact of rhabdoid differentiation in RCC on prognosis has been studied to some extent, but the available reports have inconsistent conclusions. Furthermore, there is little evidence on the prognostic role of rhabdoid differentiation in RCC with vena caval tumour thrombus treated surgically.
Therefore, this report describes the survival outcomes of a consecutive series of patients treated surgically for RCC with vena caval tumour thrombus and our evaluation of the impact of sarcomatoid differentiation and rhabdoid differentiation on survival outcomes.
Patients
After receiving approval from the Peking University Third Hospital Medical Science Research Ethics Committee, we retrospectively analysed the data of patients treated with nephrectomy and thrombectomy for RCC with vena caval tumour thrombus at our institute from Jan 2015 to Nov 2018. Among the 131 patients pathologically diagnosed with RCC with vena caval tumour thrombus, 6 were excluded from the study: 1 with metachronous vena caval tumour thrombus, 2 with two-stage operation for RCC with vena caval tumour thrombus and 3 with incomplete follow-up data. Thus, 125 patients were included in our study. None of the patients underwent neoadjuvant therapy before surgery. Comprehensive clinical and pathological data was collected for each patient, including age, gender, tumour size, thrombus level, blood transfusion, TNM stage, histologic subtype, Fuhrman grade, tumour necrosis, sarcomatoid differentiation, rhabdoid differentiation and adjuvant target therapy of tyrosine kinase inhibitors.
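As a minimal sketch of how such a per-patient record could be organised for analysis, the variables listed above might be encoded as follows; the field names, types and the optional outcome fields are assumptions for illustration only, not the study's actual database schema.

# Hypothetical sketch of one patient record with the variables listed above;
# field names, types and categories are assumptions, not the study's database.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    age: int
    gender: str                        # "male" / "female"
    tumour_size_cm: float              # largest diameter on CT or MRI
    thrombus_level: int                # Mayo classification I-IV (1-4)
    blood_transfusion_cc: float
    t_stage: str                       # AJCC 8th edition TNM
    n_stage: str
    m_stage: str
    histologic_subtype: str            # 2016 WHO classification
    fuhrman_grade: int                 # 1-4
    tumour_necrosis: bool
    sarcomatoid_differentiation: bool
    rhabdoid_differentiation: bool
    adjuvant_tki_therapy: bool         # adjuvant tyrosine kinase inhibitors
    pfs_months: Optional[float] = None # outcomes from the follow-up section
    progressed: Optional[bool] = None
    css_months: Optional[float] = None
    cancer_death: Optional[bool] = None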
Clinical and pathological evaluation
Tumour size was collected as the largest diameter reported in computed tomography or magnetic resonance imaging examination. The level of tumour thrombus was assigned using the Mayo classification [12]. TNM stage was determined according to the 8th edition American Joint Committee on Cancer TNM classification [13]. Histologic subtype was assigned based on the 2016 WHO classification of renal tumour [14]. The tumour grade was determined following the Fuhrman system. A commonly accepted definition of sarcomatoid differentiation and rhabdoid differentiation morphology was used [3,9]. One urological pathologist reviewed the pathologic specimens.
Surgical procedures
First, nephrectomy was performed following routine procedures, and lymph node dissection was performed for patients suspected to have lymph node metastasis based on enhanced CT or PET/CT results. Second, the inferior vena cava (IVC) and contralateral renal vein were isolated and blocked as follows: (a) For Mayo I tumour thrombus, the IVC tumour thrombus was squeezed back into the renal vein using the milking technique, and the IVC was partially blocked with vessel forceps. (b) For Mayo II tumour thrombus, several short hepatic veins and lumbar veins were ligated to expose the retrohepatic segment of the IVC, and the contralateral renal vein and distal and proximal IVC were blocked with rubber bands. (c) For Mayo III tumour thrombus, the liver was mobilized to expose the hepatic portal vein before blocking the IVC. (d) For Mayo IV tumour thrombus without entrance to the atrium, the milking technique and Foley catheter-assisted technique could be used to downgrade the tumour thrombus to level III. (e) For Mayo IV tumour thrombus into the atrium, thoracoabdominal midline incision and cardiopulmonary bypass were commonly necessary. Next, the junction of the renal vein and IVC was curvilinearly incised, and the tumour thrombus was pulled out once confirmed to be completely isolated. Finally, the IVC was sutured continuously after flushing the lumen with heparin saline.
Follow-up
Follow-up was executed every 3 months for the first 2 years and semi-annually thereafter and included physical examination, laboratory tests and chest and abdomenpelvis scans. Follow-up information was obtained through review of outpatient records and telephone calls. Progression-free survival (PFS) was calculated from the date of surgery to radiological evidence of tumour progression, death from any cause or the last follow-up. Cancer specific survival (CSS) was calculated from the date of surgery to death from RCC or the last follow-up.
Statistical analysis
Normally distributed continuous variables were reported as means and standard deviations. Non-normally distributed continuous variables were reported as medians and interquartile ranges. The Student's t test and Mann-Whitney U test were applied to compare continuous variables. The Chi-square test was applied to compare categorical variables. The Kaplan-Meier method with log-rank test was used for survival analysis and comparisons. Univariate and multivariate Cox proportional hazard models were performed to identify independent predictors associated with PFS and CSS. All statistical analyses were conducted with SPSS Statistics 22.0 (IBM Corp, Armonk, NY, USA). Two-tailed tests were used for all comparisons, and p < 0.05 was considered statistically significant.
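The authors performed these analyses in SPSS Statistics 22.0; as an illustrative sketch only, the equivalent Kaplan-Meier, log-rank and Cox proportional hazards steps could be reproduced in Python with the lifelines package. The file name and column names used below (pfs_months, progressed, css_months, cancer_death and the covariate columns) are hypothetical, not the study's actual variable names.

# Illustrative sketch, not the authors' SPSS workflow: Kaplan-Meier estimation,
# a log-rank comparison and a multivariate Cox model as described above.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("rcc_thrombus_cohort.csv")  # hypothetical cohort file

# Kaplan-Meier PFS estimates stratified by sarcomatoid differentiation
for label, group in df.groupby("sarcomatoid"):
    km = KaplanMeierFitter()
    km.fit(group["pfs_months"], event_observed=group["progressed"],
           label=f"sarcomatoid={label}")
    print(label, "median PFS (months):", km.median_survival_time_)

# Log-rank test between the two strata
sarc = df[df["sarcomatoid"] == 1]
pure = df[df["sarcomatoid"] == 0]
lr = logrank_test(sarc["pfs_months"], pure["pfs_months"],
                  event_observed_A=sarc["progressed"],
                  event_observed_B=pure["progressed"])
print("log-rank p-value:", lr.p_value)

# Multivariate Cox proportional hazards model for CSS
covariates = ["css_months", "cancer_death", "sarcomatoid", "rhabdoid",
              "metastasis", "transfusion"]
cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="css_months", event_col="cancer_death")
cph.print_summary()  # hazard ratios, 95% CIs and p-values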
Baseline characteristics
A total of 125 patients treated surgically for RCC with vena caval tumour thrombus were included in our study. Thrombus levels were Mayo I, II, III, and IV in 38, 49, 25 and 13 patients, respectively. Among those patients, 17 (13.6%) had sarcomatoid differentiation alone, 8 (6.4%) had rhabdoid differentiation alone and 3 (2.4%) had both sarcomatoid and rhabdoid differentiation. The patients' clinicopathological demographics are outlined in Table 1 and are stratified by the presence of sarcomatoid and/or rhabdoid differentiation. There was no significant difference in gender, age, thrombus level, histological subtype, T stage, nodal status or adjuvant target therapy between the patients with sarcomatoid and/or rhabdoid differentiation and the patients with pure RCC. RCC with sarcomatoid and/or rhabdoid differentiation tended to have a higher incidence of synchronous metastasis than pure RCC, but this difference was not significant (39.3 vs 21.6%, p = 0.060). However, RCC with sarcomatoid and/or rhabdoid differentiation more frequently had larger tumour size (median 8.5 vs 10.4 cm, p = 0.012) and higher blood transfusion (median 1600 vs 400 cc, p = 0.038) than pure RCC. Similarly, RCC with sarcomatoid and/or rhabdoid differentiation more frequently displayed high-grade disease (84.6 vs 59.6%, p = 0.018) and tumour necrosis (71.4 vs 45.4%, p = 0.015).
Discussion
RCC with sarcomatoid differentiation was first reported as a distinct histologic subtype termed sarcomatoid RCC in 1968 [15]. Subsequent studies confirmed that sarcomatoid RCC can occur in all subtypes of RCC, but it does have a higher incidence in clear cell RCC [4,6,16]. Sarcomatoid differentiation is currently considered a rare histologic variant that predicts aggressive behaviour and poor prognosis. According to previous studies, RCC with sarcomatoid differentiation more frequently has larger tumour size, higher risk of necrosis and higher tumour stage and grade [6,7,16], which is consistent with the results of our study. In our cohort of RCC with vena caval tumour thrombus treated surgically, the presence of sarcomatoid differentiation in RCC was found to be an independent predictor for PFS and CSS after adjusting for other known prognostic factors. The association of sarcomatoid differentiation with poor oncologic outcomes has been consistently confirmed by many previous studies [5,6,16,17]. Using the Surveillance, Epidemiology, and End Results-Medicare database, Trudeau et al. [7] identified one of the largest RCC cohorts including 234 RCCs with sarcomatoid differentiation. The results of that study showed that RCC with sarcomatoid differentiation has a worse 5-year CSS compared to pure clear cell RCC (14% vs 67%).
RCC with sarcomatoid differentiation is classified as grade 4 by the WHO/ISUP grading system [11]. However, grade 4 RCC with sarcomatoid differentiation has significantly worse CSS than grade 4 RCC without differentiation [10,18]. We believe that the equivalence of sarcomatoid differentiation and grade 4 classification in RCC may underestimate the prognostic value of sarcomatoid differentiation. Furthermore, Adibi et al. [19] found that the percentage of sarcomatoid differentiation (PSD) was a prognostic factor for overall survival in RCC. Zhang et al. [20] suggested that PSD was an independent predictor of prognosis. However, the prognostic value of PSD in patients with RCC is still under debate [21,22]. Our study failed to include PSD in the multivariate analysis model due to insufficient data. Insufficient pathologic material can result in incomplete and inaccurate assessment of PSD in retrospective studies. This may be one explanation for the conflicting conclusions of the above studies.
Rhabdoid differentiation, which can arise in any histologic subtype of RCC, including clear cell, papillary, chromophobe and unclassified RCC, may be a prognostic variation of RCC, similar to sarcomatoid differentiation. However, rhabdoid differentiation has not been studied as thoroughly as sarcomatoid differentiation. Gökden et al. [9] reported an incidence of rhabdoid differentiation of 4.7% and revealed associations between rhabdoid differentiation and increased grade and stage for the first time. Delahunt et al. [11] reviewed previous studies that reported survivals ranging from 8 to 31 months. In a more recent study of grade 4 RCC, 45 [18]. To our knowledge, our study is the first to evaluate the prognostic impact of sarcomatoid differentiation and rhabdoid differentiation in RCC with vena caval tumour thrombus. We identified an incidence of rhabdoid differentiation of 8.8%, and our results supported the above hypothesis that rhabdoid differentiation in RCC is associated with adverse prognostic factors. Furthermore, we confirmed that rhabdoid differentiation in RCC is a predictor of CSS independent from sarcomatoid differentiation, thrombus level and other prognostic variables. In a prior cohort of 49 clear cell RCCs with rhabdoid differentiation, the presence of rhabdoid differentiation was shown to be an independent predictor of poor prognosis, which is consistent with the results of our study [23]. In contrast, a study of grade 4 RCC showed that rhabdoid differentiation alone was not associated with worse CSS [18]. To our knowledge, the study by Zhang et al. [10] with 111 cases and a 2-year survival of 46% is the largest reported to date; it demonstrated that RCC with rhabdoid differentiation confers an increased risk of death compared to grade 3 RCC. However, the multivariate subgroup analysis of grade 4 RCC revealed that rhabdoid differentiation was not associated with CSS. The existing studies have consistently supported the incorporation of sarcomatoid differentiation and/or rhabdoid differentiation into grade 4 RCC to improve outcome prediction. However, we suggest that it is inappropriate to treat sarcomatoid differentiation and rhabdoid differentiation equally when evaluating the prognosis of RCC.
Furthermore, we found histologic subtype to be a significant predictor of PFS and CSS, in keeping with previous studies [24,25]. However, studies of grade 4 RCC accounting for sarcomatoid differentiation and/or rhabdoid differentiation failed to show an independent association between histologic subtype and prognosis [18,20]. Since the cohorts only included grade 4 RCC, some non-clear cell RCCs that could not be graded using the existing Fuhrman classification system may have been excluded, generating selection bias. The small sample for the sarcomatoid differentiation and rhabdoid differentiation cohorts may have been insufficient to detect differences in prognosis between the histologic subtypes. In our study, potential predictors, such as thrombus level and tumour necrosis, were not significantly associated with PFS or CSS. In contrast, most recent studies have supported the impact of thrombus level on oncologic outcomes in RCC with vena caval tumour thrombus [17,24,26], although some results are conflicting [27]. Currently, the prognostic significance of tumour necrosis is less certain. A study based on 3017 cases of clear cell RCC showed that the WHO/ISUP grading system achieves a better predictive ability for prognosis when the presence of tumour necrosis is taken into account [28]. However, in several studies, tumour necrosis was not shown to be an independent predictor of oncologic outcomes [18,23,26]. Compared to the cytokine therapy era, the targeted therapy era has achieved significant improvement in survival in RCC. However, RCC with sarcomatoid differentiation has been shown to have a poor response to targeted therapy [4,29], and this may influence prognosis and survival. Our study has some limitations, including its retrospective nature, single-centre experience and relatively shorter follow-up compared to previous studies. Although multivariate analyses were used to identify independent predictors of PFS and CSS, it is possible that unmeasured differences existed considering the small sample of our study. In addition, we did not perform lymph node dissection routinely for all patients, which may have reduced the reliability of our results regarding the prognostic impact of lymph node involvement. Finally, there was some heterogeneity in treatment after surgery, which can result in different oncologic outcomes.
Conclusion
Our study shows that sarcomatoid differentiation and rhabdoid differentiation are associated with worse CSS in patients with RCC and vena caval tumour thrombus treated surgically. Furthermore, RCC with sarcomatoid differentiation was an independent predictor for worse PFS. Blood transfusion was an important predictor of early perioperative mortality.
Additional file 1: Table S1. Univariate and multivariate Cox proportional hazard regression analyses of PFS. Table S2. Univariate and multivariate logistic regression analyses of perioperative mortality within 90 days. | 2020-02-20T00:37:47.081Z | 2020-02-18T00:00:00.000 | {
"year": 2020,
"sha1": "de36a4b09a1ab23beca3c3b6ea7aef4436f14d4a",
"oa_license": "CCBY",
"oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/s12894-020-0584-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de36a4b09a1ab23beca3c3b6ea7aef4436f14d4a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202287143 | pes2o/s2orc | v3-fos-license | Slow Religion: Literary Journalism as a Tool for Interreligious Dialogue
Intercultural and interfaith dialogue is one of the challenges faced by society. In a world marked by globalisation, digitisation, and migratory movements, the media is the agora for people of different faiths and beliefs. At the same time, the media is adapting to the online space. In this context, narrative journalism emerges, breaking the rules of technological immediacy and opting for a slow model based on the tradition of non-fiction journalism. With slow, background-based reporting and literary techniques, narrative journalism tells stories with all their aspects, giving voices to their protagonists. Is this genre a space in which to encounter the Other? Could narrative journalism be a tool for understanding? These are the questions that this research aims to investigate through the content analysis of 75 articles published in Jot Down, Gatopardo, and The New Yorker, along with 38 in-depth interviews with journalists associated with them.
Introduction
Narration is one of the seven forms of dialogue between people (Merdjanova and Brodeur 2011). Stories allow individuals to explain the reality in which they live, with all its complexities (Payne 2002). From an anthropological point of view, journalism and religions share a similar function (Didion 1979;Sharlet 2014).
The social function of journalism is to improve democracy (Schudson 2011; Tocqueville 1835), a democracy in which societies coexist peacefully and have their fundamental rights guaranteed. In fact, information and knowledge have become key factors for social and economic development (Bell 1973). The past decade has been a turning point for the three social environments represented by the publications analysed in this research. Two key factors shaped an era, a society, and the media that emerges from it: the economic crisis of 2008 and the consolidation of the digital and global era (Herrscher 2014). Both phenomena have forced journalism to reinvent itself (Greenberg 2018;Rosenberg 2018;Schudson 2011), to look for new forms of power and, at the same time, fulfil its social function while also generating sufficient benefits for journalists to earn a living from their profession (Albalad 2018;Benton 2018;Sabaté et al. 2018b;Berning 2011).
Multiple business models have emerged to seek this balance. Meanwhile, the world media map has multiplied (Albalad and Rodríguez 2012), with the emergence of new publications that have contributed to the existence of a competition for exclusivity, for being the first to cover a story, achieve a greater audience, and continue to feed the advertising model that, in many cases, has stopped working (Neveu 2016). This so-called fast journalism (Le Masurier 2015; Greenberg 2015) manifests the rhythm of a liquid modernity (Ray 2007) in which knowledge needs somewhere to fit (Durham Peters 2018). The concept of liquid modernity refers here to the term coined by Zygmunt Bauman (2013). In 2012, Boynton redefined and adapted the genre to the contemporary era, baptising it "new new journalism". At present, and because it is identified with a social movement that opposes postmodern immediacy (Mattelart and Mattelart 1997), the term "slow journalism" has appeared (Barranquero-Carretero 2013). In Latin America, all of the aforementioned are considered "crónica" (feature), a name that, according to Caparrós (2015), already carries implicit connotations of its characteristic temporality. For the author, "a crónica is very specifically an always failed attempt to capture the fugitive character of the time in which one lives" (Caparrós 2015). However, he himself decided that this concept is too ambiguous and overused, and that a word he considered more audacious, "lacrónica", best describes the genre.
Although the rise of narrative journalism in the contemporary era can be traced back to New York in the 1960s (Sharlet 2014;Weingarten 2013), through authors such as Tom Wolfe, Jane Grant, Jimmy Breslin, or Gay Talese, the first work to be considered narrative journalism is A Journal of the Plague Year by Daniel Defoe, published in 1722 (Herrscher 2012;Chillón 1999). However, other authors such as Sims (1996) or Bak and Reynolds (2011) detailed certain references previous to the aforementioned date. Another example is the case of Dingemanse and de Graaf (2011), who spoke of the Dutch pamphlets of the 1600s as a tributary of narrative journalism, or Albalad (2018), who put forth the Chronicles of the Indies as a still older antecedent of the genre. For Puerta (2011), the origin could be placed in the Book of Genesis, and even in Mesopotamia or in the discovery of the Epic of Gilgamesh. Other influences of narrative journalism are the realistic novels of Zola, Balzac, and Dickens (Sharlet 2014;Herrscher 2012) and Shakespeare's plays (Albalad 2018;Herrscher 2012). Whitman and Thoreau are considered the architects of North American narrative journalism, which was developed during the US Civil War in Walden Pond. Whitman sought his references in what he considered to be the best gathered experiences of humanity: The Old and New Testaments, Homer, Aeschylus, or Plato (Sharlet 2014).
Authors who analysed the historical evolution of narrative journalism include Bak and Reynolds (2011), Chillón (1999), Herrscher (2012) and Albalad (2018). Except for the first two, the rest are from the school of thought that studies narrative journalism from the Ibero-American point of view. To this group, we can add Angulo (2013), who was dedicated to the analysis of the gaze and immersion in this genre; Palau (2018), who examined it in its various applications on specific issues, such as migration; Puerta (2019), who studied the work of Alberto Salcedo Ramos, as well as Palau and Naranjo (2018), who compared the genre in Spain and Latin America. These authors revolve around the Gabriel García Márquez Ibero-American Foundation for New Journalism. As part of it, Albalad and Rodríguez (2012) dedicated themselves to the study of digital narrative journalism.
In the English-speaking context, the International Association for Literary Journalism Studies focusses on what it calls literary journalism and on its digitisation. This group is made up of authors such as Sims (1996), Hartsock (2000), Berning (2011), or Weingarten (2013). Jacobson et al. (2015) wrote about the digital resources of slow journalism. Neveu (2016) focussed on the business of digital narrative journalism, while authors such as Wilentz (2014) studied the figure of the digital narrative journalist and their skills. Le Cam et al. (2019), Cohen (2018), Sherwood and O'Donnell (2018), and Johnston and Wallace (2016) studied the working conditions of journalists following digitisation and agreed on journalism being an identity rather than a profession. In the field of religion, Díez Bosch (2013) analysed the profile of journalists who specialised in religion, specifically in Catholicism. The author pointed out that knowledge is what makes journalists consider themselves specialised in this subject, regardless of the publication for which they write. Carroggio (2009), La Porte (2012), Arasa and Milán (2010), Eilers (2006), Wilsey (2006), and Kairu (2003) also dealt with this profile. Cohen (2012) did so in the case of the coverage of Judaism.
Within the English-speaking field, this analysis also focusses on authors who have analysed some aspect of the publications that are part of the sample. This is the case of Yagoda (2000), Kunkel (1995), or Thurber (1957), who are the authors with the most material produced specifically about The New Yorker.
Although focussed on narrative journalism and its digital aspect, this study does not exclude authors who focus on digital journalism. In 2001, Communication et langages published two articles that identified the characteristics of cyber journalism, by Cotte (2001), Jeanne-Perrier (2001), and Masip et al. (2010), which coincided with those written by Micó (2006). Specifically, the author (Micó 2006) detailed the characteristics of the style of digital journalism as well as its properties. For Díaz Noci and Salaverría (2003), digital text is deeper rather than long, but they affirmed that depth should not influence comprehension. Larrondo (2009) highlighted hypertextuality as the most outstanding feature in the construction of digital discourse and pointed out that reporting is the most flexible genre for adapting to digital journalism. Herrscher (2012), Chillón (1999), and Vivaldi (1999) also focussed on this genre as being the most relevant in narrative journalism. Berning (2011) reiterated that reporting is the most malleable genre for the digital space, and also studied hypertextuality. For the author, narrative journalism was already hypertextual before the digital era, since detailed narration and scene by scene description (Wolfe 1973) are already links that lead to other dimensions of the narration. The risk that Herrscher (2014) saw in digital hypertexts is that the reader can lose the narrative thread. Rost (2006), Deuze (2011), or Pavlik (2001) have studied other phenomena linked to digitisation, such as the interactive process or participation, which are outputs that Benton (2018) saw as applicable to narrative journalism.
Based on this genre, this study also expands on its link with existing literature on the mediatisation of religion and interfaith dialogue. The definition of mediatisation used is that established by Hjarvard (2011). The concept of mediatisation itself captures the spread of technologically-based media in society and how these media are shaping different social domains. In this sense, the urgency of the term deep mediatisation is also remarkable, describing a new and intensified stage of mediatisation caused by the wave of digitisation (Hepp et al. 2018). The mediatisation of religion (Hjarvard 2011) defines the process in which media represents the main source of information about religious issues and in which, at the same time, religious information and experiences become moulded according to the demands of popular media genres (Lövheim and Lynch 2011). Hjarvard's (2011) theory argues that contemporary religion is mediated through secular and autonomous media institutions and is shaped according to the logics of those media. For White (2007), Sumiala (2006), Lövheim and Linderman (2005), the reciprocity between the media and religion is evident, since both spaces influence each other. It is also explained in this way by Hoover and Lundby (1997), Sumiala et al. (2006), or Zito (2008. For Hoover and Clark (2002), the paradox is that people practice religion and speak of the sacred in an openly secular and inexorably commercial media context. The media determines religious experience and defines the sacred, as well as the lines between "us" and "them" (Knott and Poole 2013;Couldry 2003;Couldry 2000). Lövheim (2019) focussed on the role of identity and gender determined by this mediatisation of religion. Candidatu et al. (2019) analysed it in the diaspora situation of young migrants.
In this study, the concept of dialogue is taken into account. Merdjanova and Brodeur (2011) pointed out that narration is one of the seven synonyms of dialogue; that is, it is one of the forms of verbal exchange between humans, together with conversation, discussion, deliberation, debate, interview, and panel. For Braybrooke (1992), one of the ground rules of intercultural and interreligious/interfaith dialogue is that it takes time, because it implies trust, continuity, and patience, which are conditions common to the development of narrative journalism. At the same time, dialogue is one of the characteristics that Wolfe (1973) specified as a norm for a text to be considered narrative journalism. On the other hand, Eilers (1994) analysed the theological dimensions of communication, especially Revelation, which he treated as a form of dialogue. Abu-Nimer and Smith (2016) affirmed that interreligious and intercultural education are not a single curricular item; they need to become an "integral part of formal and informal educational institutions".
In this situation, the technique of storytelling (Salmon 2008) appears as a space for the expression of one's own identity and shows the effectiveness of what in psychology is called narrative therapy (Payne 2002). An experience that proves this hypothesis is the existence of initiatives such as MALA (Muslim American Leadership Alliance), which gives space to young American Muslims to explain their experiences, calling for the empathy of people of their same profile, but also that of people with different profiles. Thus, narrative emerges as dialogue (Merdjanova and Brodeur 2011). According to Hartsock (2000), good storytelling involves the reader, activates their neural circuits, and helps to captivate them. Salmon (2008) warned of the risk of telling stories about current events. According to him, the art of storytelling can become the art of manipulation. This is discussed by Zito (2008) and Sharlet (2014), who made clear that every fact is both real and imaginary from the moment it passes through the lenses of perception and imagination of a journalist's memory. For Sharlet (2014), "the literary journalist needs to be loyal only to the facts as best as he or she can perceive them". Buxó (2015) highlighted the importance of taking into account the symbolic function of language. For this reason, Sharlet (2014) emphasised that "narrative journalism is not the product of a technique but the documentation of a tension between fact and art". Sharlet (2014) agreed with Sims (1996) in that it deals with the art of facts, art versus anti-art, belles-lettres versus the five Ws, literary piety versus ruthless journalism.
Maybe the distinction is this: Fiction's first move is imagination, non-fiction's is perception. But the story, the motive and doubt, everything we believe-what's that? Imagination? Or perception? Art? Or information? D'Agata achieves paradoxical precision when he half-jokingly proposes a broader possibility: the genre known sometimes as something else. (Sharlet 2014). Narrative journalism emerges as a possible call to understanding and empathy (Griswold 2018;Salcedo Ramos 2018), and arouses emotions (Salmon 2008) that contrast with journalistic information. Could this effect be achieved by means of a collection of data? Do readers simply want to receive information, or do they want to feel an experience? (D'Agata 2009). The description of the genre using the techniques that Wolfe (1973) and Sims (1996) specified gives an answer to this question. The former speaks of the use of the first and third person, scene-by-scene construction, dialogue, and exhaustive detail. Sims (1996) referred to the same, calling it structure, rigor, voice, and responsibility. However, he adds immersion and symbolic realities (Buxó 2015), the equivalent of Wolfe's (1973) attention to detail, elements that are presented as small truths and metaphors of daily life explained with literary techniques that allow for them to be converted into stories (Sharlet 2014).
For Didion (1979), from an anthropological perspective, religions can be considered stories that society tells in order to live. The dilemma raised by Sharlet (2014), whereby the only essential truth of narrative journalism is the perfect representation of reality, is emphasised in this aspect. The dilemma is the same as that of religions, which makes this genre the most appropriate to document them (Sharlet 2014). The author pointed out that understanding religions is key to understanding narrative journalism, because both explain stories and share the same paradox, the same dilemma; so, according to him, the problems inherent in talking about religions are linked to the development of narrative journalism. They share essential reality, the impossibility of representing reality, at the same time as the desire to explain it to improve the world.
Among all the diverse approaches to narrative journalism, there are some authors that better suit the data and research questions that this investigation sets out. The conditions that Tom Wolfe (1973) outlined for considering a text narrative journalism (which are scene-by-scene construction, realistic dialogue, status details, and an interior point of view) are reflected in the analysed texts and highlighted in the interviews carried out in this study. The evolution that narrative journalism has had according to the results obtained by this research is aligned with Sims (1996), Herrscher (2014), and Albalad (2018).
Berning's analysis of digital narrative journalism is described by the practice of interviewed journalists and, again, in the analysed texts. The research also takes the perspective of Jeff Sharlet (2014), linking narrative journalism and religion, and highlighting the symbiotic role they have with each other.
Materials and Methods
The methodology for developing this research is based on in-depth interviews (Voutsina 2018;Elliott 2005;Johnson 2002) and content analysis (Van Dijk 2013). These methodology authors were chosen for several reasons that aim to contribute to the rigour and scientific approach of the presented research. First of all, the four mentioned authors have a vast and consolidated set of publications about the mentioned techniques in social research, so they present them in different backgrounds and contexts and obtaining diverse kind of results depending on the objective of each research. For instance, Voutsina (2018) and Johnson (2002) thought about different types of in-depth interviewing and the different possibilities of results that the researcher can obtain according to several factors. Specifically, Voutsina (2018) focussed on semi-structured interviews, which are the kind of interview carried out in this research. Elliott (2005) has been chosen because of the innovation of his approach. The author used narrative as a tool to explore the boundaries between qualitative and quantitative social research. Van Dijk (2013) is a referent by his specific analysis of the news discourse. So, these are authors that help us fit our method on their contributions and remain aware of the pros and contras of each technique. The publications considered are also from different moments during the last two decades, so the evolution that these techniques may have had has also been taken into account. These are also techniques and authors that have been used in similar research and by authors investigating in similar fields. The chosen authors are also featured for highlighting the ethical aspects of its methodology and taking them diligently into account.
The in-depth interviews were conducted with 37 professionals and experts in narrative journalism, and linked to the publications that are part of the sample. All of them accepted being quoted and mentioned in this research. As mentioned in the introduction, this research includes the journalistic approach of the issue, so future stages of it would include evidence from audiences and people engaged in religious traditions and religious leaders. The total number of interviews is 38, because Robert Boynton was interviewed twice. The in-depth interviews have made it possible to provide context and have helped understand the attitudes and motivations of the subjects (Voutsina 2018; Elliott 2005; Johnson 2002). The choice of interviewees is based on their career, experience, and links to the publications studied. In addition, people in different positions and from different generations and political views have been interviewed in order to obtain a global view on the subject, while obtaining results based on the symmetry of criteria and gender balance. The interviews were carried out in person (23), by telephone (5), by video conference (8), and by email (2). These conversations took place in four countries: Spain, the United States, Mexico, and Canada. The researchers travelled from Spain to the United States and Canada.
The witness and arguments expressed by experts and professors who are working in other institutions are used here to support and complement the opinions of those who express the vision of the analysed media. It has also been considered that external views from people working in prestigious institutions would enrich and make the research more critical.
The in-depth interviews have been useful for this analysis to confirm, check, and contrast the results obtained in the content analysis. The different explanations, opinions, and witnesses helped the team understand and give context to data, make it richer, and also clarify the differences and similarities among the media analysed. In this sense, and according to Voutsina (2018), in-depth interviews help collect in-depth data and approach the global data reflexively; with this technique, the discourse benefits from added nuances. The contact with the lived experience is also an added value of this technique (Johnson 2002).
The content analysis was developed over 75 articles in two research phases. The first was made up of the analysis of 45 articles, 15 of each of the three publications selected for the sample. Each of them comes from a different section of each publication and deals with different themes. This first phase served to gather global results in the first instance, as well as test the questionnaire created to carry out the analysis. This questionnaire is made up of four parts: identification, form, content, and audience. The first places the piece according to its section, author, and title. In the second part, the elements, report, and structure of each piece of news are taken into account, while also considering the presence and appearance in both digital form and on paper. The fields that are specified correspond to the elements that are to be analysed: subtitles, multimedia complements, images, positioning and extension strategies, manifested in the number of scrolls and pages on paper. Regarding the content, the questionnaire delves into narration. For this reason, the subject and tense are identified. The audience is treated based on the interactions with each of the pieces on the social networks on which they have been published. The use of content analysis (Van Dijk 2013) as a technique for this research is supported by previous studies on narrative and digital journalism, such as those by Jacobson et al. (2015), and Domingo and Heinonen (2008). Authors such as Gillespie (2015), Guo (2014), or Larssen and Hornmoen (2013) also use this technique.
Once the first part of the content analysis was completed, the second and more specific part was devoted to the study on the coverage of religions in the media analysed. The same evaluation sheet was used, although it was extended with a new section titled "Religion". It includes nine new fields that analyse the pieces in order to answer specific questions about religion in the publications. In this sense, what is detected is: the faith to which each story refers, the role of religion in the piece, the tone used to treat it, the presence or absence of leaders, the presence or absence of quotes by these leaders, and the existence of informative and substantive elements on the faith in question. The questionnaire also studies if the piece promotes prejudices or helps eliminate them. This last part of the questionnaire has been applied following the example of media analysis carried out by the World Association for Christian Communication (2017) in its research on the coverage of migration in Europe, projects in which the authors of this article have collaborated.
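As a minimal sketch of how the evaluation sheet described above could be encoded for analysis, the structure below paraphrases its identification, form, content and audience parts together with the religion fields; the names and types are assumptions for illustration, not the researchers' actual instrument.

# Hypothetical encoding of the article evaluation sheet; field names paraphrase
# the questionnaire described in the text and are assumptions only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReligionCoding:
    faith: str                          # faith the story refers to
    role_of_religion: str               # role religion plays in the piece
    tone: str                           # tone used to treat it
    leaders_present: bool
    leaders_quoted: bool
    substantive_elements: bool          # informative background on the faith
    promotes_prejudice: Optional[bool] = None  # or helps eliminate prejudice

@dataclass
class ArticleCoding:
    publication: str                    # Jot Down, Gatopardo or The New Yorker
    section: str
    author: str
    title: str
    scrolls: int                        # digital extension
    pages_on_paper: int
    multimedia: List[str] = field(default_factory=list)
    interactions: int = 0               # social-network interactions
    religion: Optional[ReligionCoding] = None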
In this second part, 30 articles were evaluated, 10 from each of the publications analysed. The selection of these was carried out with a basic criterion: the appearance or coverage of religions. None of the magazines analysed has a section dedicated to religion; therefore, the articles studied were located and selected using the search engines of the magazines' digital versions through the keyword "religion". The criterion of currency prevailed; for this reason, the 10 most recent articles from each of the magazines were chosen at the time of making the selection, in April 2019.
Thus, taking into account the first and second phase of content analysis for this research, a total of 75 articles were analysed, 25 from each of the publications that make up the sample: The New Yorker, Gatopardo, and Jot Down.
With the techniques carried out, the research introduces the issue of the relationship between literary journalism and religions, contextualises it, and shows how the content of this analysed genre takes into account religions-all from the journalistic approach, having analysed the content and interviewed professionals in the field. As mentioned, the possible interfaith function that literary journalism can have may be confirmed with further evidence than journalistic ones. The research decided to first study the journalistic agents of the research to better know the role and presence of religion in the literary journalism.
Results and Discussion
Narrative journalism is faithful to the norms of traditional literary journalism, even though it does not fulfil the characteristics established by digital journalism (Restrepo 2019; Sabaté et al. 2018a). One of the main challenges of this investigation is to discover if narrative journalism about religion differs from these characteristics, as well as to unravel its particularities. Firstly, it is worth noting the analysis carried out on articles in general from the various sections of the three magazines. There were a total of 45 (15 from Jot Down, 15 from Gatopardo, and 15 from The New Yorker).
As shown in Figure 1, all the elements of narrative journalism are present in a high degree: above 40% in all the articles analysed. In this case, the least used element is dialogue. By focusing on the articles in which religion is present, most also fit into the four categories that Wolfe (1973) considered necessary for narrative journalism: scene-by-scene construction, realistic dialogue, status details, and an interior point of view (Sims 1996;Sharlet 2014). In all the media analysed, these characteristics consistently appear in a percentage higher than 50% in articles on religion, as shown in Figure 2.
Comparing both series of data indicates that, in general, both the global articles and those dealing with religion fulfil, to a high extent, Wolfe's (1973) conditions of narrative journalism, as reiterated by Sims (1996). Focusing on each of the characteristics, status details are 100% present in the articles on religion, and 95% in the global articles. Regarding scene-by-scene construction, its percentage of usage is also higher in articles on religion (76%) than in global articles. Regarding the interior point of view, it is used more often in global articles (80%) than in those that address religion (76%), although the difference is minimal. The use of dialogue is higher in articles on religion (56%) than in articles on global issues (42%). This format, based on the Socratic method, incorporates the need for the reader to receive the content in a didactic way. It is based on the idea that through dialogue and encounter with the Other, people learn about this Other (Abu-Nimer and Alabbadi 2017; Merdjanova and Brodeur 2011). For Volf and McAnnally-Linz (2016), "When encounters with others go well, we become more ourselves. As people and communities, we are not created to have hermetically sealed identities." According to Merdjanova and Brodeur (2011), in addition, narration is one of the forms of human dialogue and one of the ways in which people construct their beliefs and identities. It is significant that texts on religions use the technique of dialogue more than other types of texts. It is about offering the content about religions in an understandable way, one which through the stories appeals to the presumptions that the audience may have about a specific religious group (Restrepo 2019; Griswold 2018).
This research has also focussed on which faiths are addressed most in the narrative journalism media analysed. They are displayed in Figure 3. Among the total of articles analysed, Catholicism is featured most, followed by Protestantism and Islam. It is a trend that is consistent with the figures of religious self-identification indicated by data found at a global level. This research also evaluated this phenomenon (Figure 4).
In Jot Down, the main faith detected is Catholicism, followed by Islam and Christianity in general. When comparing these results with demographic figures, it can be seen that 67.7% of Spaniards consider themselves Catholic (CIS 2018), and that the second most popular faith in communities in Spain is Protestantism, followed by Islam (Observatory of the Religious Pluralism in Spain 2019). Thus, the most covered faiths in Jot Down are also the most numerous in number of followers in Spain.
In Gatopardo (Figure 5), 60% of the pieces address Catholicism, 10% address Islam, another 10% address Hinduism, while 20% address other religions. In this last category, several stories are considered that address specific beliefs and spiritualities in some parts of the country. According to the National Survey on Religious Beliefs and Practices in Mexico (ENCREER/RIFREM website 2016), in the Central American country, 85% of individuals identify as Catholic, 8% identify as Protestant Christian, and 0.1% identify as other religions. In this case, the results of the articles are again proportional to the representation that these confessions have in the country of the publication analysed.
In The New Yorker (Figure 6), Protestantism is the predominant faith (60%) among the topics in which religion is present. It is followed in a much smaller percentage (10%).
In all three cases, the representation of religious faiths in the articles coincides with their presence in each of the countries of which the journals are native. The presence of religions corresponds to the national reality of each publication. This fact allows the research to detect a kind of social conscience of these narrative journalism magazines in regard to religion, and is an element that allows the investigation to sense that they may become a tool for interfaith dialogue, as they proportionally represent the faiths that are present in their surroundings. As The New Yorker, Jot Down, and Gatopardo covered these several confessions proportionally, their readers may be aware of their existence and reality, so they could become more informed, and thus obtain more knowledge about confessions that could be unknown to them and avoid prejudices. It is about knowledge that is predicated on promoting understanding (Ahmed 2018). According to Abu-Nimer and Smith (2016), "a constructive contact with those who are different from 'us' requires having intercultural and interreligious competences as integral life skills in this increasingly interconnected world." In the cases of Gatopardo and The New Yorker, it is also worth mentioning the presence of the category of other religions, which includes the possibility that their reader base may be familiar with realities that are not as popular as other religions present in the media. This is in fact one of the particularities of this type of publication: the presence of topics that do not usually get coverage in general media (Reynolds 2018; Díaz Caviedes 2014; Guerriero 2014). The term "general media" is here understood as media that covers the traditional subjects (politics, society, sports, culture) globally and looks for a wide reach, generally nowadays combining information from press agencies and pieces written by their journalists.
As The New Yorker, Jot Down, and Gatopardo covered these several confessions proportionally, their readers may be aware of their existence and reality, so they could become more informed, and thus obtain more knowledge about confessions that could be unknown for them and avoid prejudices. It is about knowledge that is predicated on promoting understanding (Ahmed 2018). According to Abu-Nimer and Smith (2016), "a constructive contact with those who are different from 'us' requires having intercultural and interreligious competences as integral like skills in this increasingly interconnected world. In the cases of Gatopardo and The New Yorker, it is also worth mentioning the presence of the category of other religions, which includes the possibility that their reader base may be familiar with realities that are not as popular as other religions present in the media. This is in fact one of the particularities of this type of publication: the presence of topics that do not usually get coverage in general media (Reynolds 2018;Díaz Caviedes 2018;Guerriero 2014). The term "general media" is here understood as media that covers the traditional subjects (politics, society, sports, culture) globally and looks for a wide reach, generally nowadays combining information from press agencies and pieces written by their journalists. Narrative journalism media does not present this structure. They open new spaces for representations of topics overlooked elsewhere with several techniques: free sections (not following the traditional distribution), free timing (giving journalists the time that a subject need, that could be months or years), and reporting and practising investigative journalists, not to cover the same topics that mass media covers or cover it from a new and unexpected approach (Guerriero 2014;Herrscher 2014).
This study also questions how topics on religion are covered following the guidelines of media monitoring analysis in accordance with the methodology of the World Association for Christian Communication (2017). This entity measures the rigor of media coverage of certain issues, based on the extent to which people's freedom of expression and the different beliefs that appear in the media are respected. Among other points, it focusses on three main aspects: the presence of the people that are spoken about (in this case, religious leaders or people associated with religions), the number of quotes they publish, and the amount of background information on the subject (in this case about the religion itself). Taking this example, Figure 7 shows the appearance of each of these elements in the stories analysed.
Generally, the three publications studied include the presence of religious leaders, statements or quotes by them, and background on the faith that is addressed in each story (a background that is linked to the high degree of status details that has been detected in the pieces about religion; see Figure 2). Thus, in a global way, another factor may be determined that could highlight the hypothesis that narrative journalism may be a possible space to contribute to interfaith dialogue, seeing as it uses the most representative aspects that are considered for respecting the freedom of expression of the people spoken of in each story (World Association for Christian Communication 2017).
When examining these results publication by publication, there are differences in some aspects. The presence of leaders is more evident in The New Yorker and in Gatopardo than in Jot Down, which is a publication that makes greater use of the study and analysis of background information when covering issues of religion. In relation to this aspect, there is a much smaller number of quotes in Jot Down (present in 20% of the articles studied) than in Gatopardo (70%) and The New Yorker (80%). In fact, overall, The New Yorker is the publication, among those analysed, that takes these elements into account the most.
On the coverage of religions, and also following part of the analysis methodology of the World Association for Christian Communication (2017), this study has also taken into account how the articles studied address the stereotypes related to some faiths (Figure 8).
Overall, the results show that narrative journalism articles may possibly contribute to dismantling stereotypes; 63% of the pieces examined do so. However, 23% promote them, while 10% are considered neutral. The high use of the elements of narrative journalism in these articles could be related to their ability to challenge prejudices. At the same time, following this hypothesis, and as the research will introduce in the following points, the role of the journalist could also be linked with the role of the facilitator, in the sense of becoming a kind of mediator. Nevertheless, the existence of pieces that can promote stereotypes even when using narrative journalism leads to a reflection on the education, deontology, and professional practice of the people involved in narrative journalism. Future research considering audiences' readings may be able to better confirm this aspect.
The Narrative Journalist that Covers Religion
Curiosity (Restrepo 2019; Conover 2018; Villanueva Chang 2017), perspective, resistance (Lee Anderson 2018; Guerriero 2014; Yagoda 2000), and perfection in form (Lobo 2018; Sims 2018; Collins 2018) are the main characteristics of a narrative journalist (Sabaté et al. 2018a). These relate directly to features that correspond to the demands of narrative journalism set forth by Wolfe (1973) and Sims (1996). Sherwood and O'Donnell (2018) spoke of identity journalism. MacFarquhar (2018), Blumenkranz (2018), Banaszynski (2018), Bowden (2018), and Guerriero (2014) argued that this profile is very specialised, that it requires both learned and innate skills, and that it is developed by a "chosen few". Weingarten (2013) and Yagoda (2000) highlighted this genre along the lines of what Albalad (2018) called "caviar journalism". With digitisation, the possibility of being considered for publication in this type of media has grown (Greenberg 2018; Díaz Caviedes 2018), although key elements for learning the trade have been lost, such as face-to-face contact between veteran professionals and students (Banaszynski 2018).
For Griswold (2018) and Sharlet (2018), there is a distinction between narrative journalists and narrative journalists who cover religion: the ability to put aside one's beliefs and listen to the Other. This is one of the types of dialogue highlighted by Eck (1987)-the dialogue of life, which opens up the possibilities of visiting, participating, and sharing experiences with different local communities. Sharlet (2014) admitted that "as a writer, I practice participant observation, so, with as clear-as-can-be disclaimers-'Look, I do not really share your beliefs . . . '-I've often joined in". For Griswold (2018), "it is about suspending one's point of view in order to encounter the Other". According to the author: In covering religion, the skills are the same as covering any ideology. So, as a reporter, one has to be able to suspend one's point of view in order to encounter people who are really different. People who believe different things, who believe that certain people are going to hell, who may have different political views that for them are not political, they are religious. It is important as a reporter to be able to sit down and listen to all those people at great length, without passing judgement or feeling threatened by differences. (Griswold 2018).
A sensibility is detected here beyond the characteristics that define narrative journalists. This sensibility leads the research to introduce the relation between the narrative journalist with the figure of the dialogue facilitator. According to Abu-Nimer and Alabbadi (2017), the profile of a dialogue facilitator could be comparable to the guide of a journey, and specify that "no one can walk the path for another person, but a guide can make the journey meaningful and enjoyable, despite the challenges and rocky areas on the trail". For them, the facilitator does not direct, but makes the process of understanding possible. Abu-Nimer and Alabbadi (2017) also pointed out that a facilitator is impartial, although aware that the reality they interpret is based on their own subjectivity. Therefore, they are at a certain distance from the actors and design the way for them to understand each other effectively.
Following this description and taking into account the definition of the narrative journalist given by some authors (Sabaté et al. 2018a) and interviewees (Griswold 2018; Sharlet 2018), the role of the narrative journalist who covers issues of religion may be compared to the role of a facilitator. With curiosity, a gaze of their own, excellent writing form, and resistance (Restrepo 2019; Boynton 2018; Guerriero 2014), this type of professional is also able to put their beliefs, convictions, and presumptions on hold, and position themselves face-to-face with an Other who is different (Griswold 2018; Lee Anderson 2018; Lévinas 1985; Buber 1923; Stein 1916), in order to listen to them, understand them, and make themselves understood.
The importance of the use of the first person in this type of narrative journalism is detected in this aspect. This technique, which is ground-breaking in the face of traditional journalism, defines narrative journalism (Reynolds 2018;Sims 2018). For Sharlet (2014), when covering religions, this aspect becomes relevant, because according to him, literary journalism deals with perceptions, and a narrative journalist must be faithful to facts to the extent that they perceive them. In religion, "things unseen" are often documented; therefore, the demand for transparency in the process is even higher. In this regard, the research must take into account the symbolic function of language and the role that it plays in the production and reception of this type of texts (Buxó 2015). The author (Sharlet 2014) gave Whitman as an example, and outlined how this writer explained his method transparently and in the first person. Schultz (2018) defended this demand to explain to the reader how the facts have been established. For Clover (2018), the use of the first person creates the author's own style. However, according to Lobo (2018), this element is a clear distinction between narrative journalism in Latin America and North America, seeing as in the United States, the first person is more normalised in narrative journalism texts. It is also detected in Gatopardo and more in the two American magazines than in Jot Down. In this sense, Gay Talese (2019) was sceptical towards the idea that the emergence of digital space contributes to journalists' transparency.
I practice the journalism of 'showing up'. It demands that the journalist deal with people face to face. Not Skype, no emailing back and forth-no, you must be there. You must see the person you are interviewing. You must also ask the same question a few times, to be sure the answer you are getting is the full and accurate one. (Gay Talese 2019).
A Digitally Non-Digital Journalism
Talese's (2019) scepticism is not exceptional, and shows the relationship that this genre has with the digital world. The knowledge of digital tools is not enumerated among the capabilities of narrative journalists in any of the interviews. In fact, one of the main characteristics of digital narrative journalism is the non-fulfilment of digital journalism's rules of style and writing (Sabaté et al. 2018b;Albalad 2018). According to Micó (2006), digital writing should have: updated data, universal information, simultaneity, interactivity, multimedia, hypertext, and versatility.
"We want to tell stories through an approach that has never been addressed before, even they take longer time", affirmed Leila Guerriero (2014). In this sense, Marcela Vargas (2014) said that "the spirit of Gatopardo is not 'breaking news'". Global information is present in this type of journalism, which ends up dealing with major issues (Restrepo 2019; Salcedo Ramos 2018; Guerriero 2014). On the other hand, the media outlets in which narrative journalism appears are not mainly interactive. They use some means of contact with the public, such as social media (Restrepo 2019;Díaz Caviedes 2018;Foguet 2014), which measure the temperature of the evolution of topics. However, the audience does not intervene in the production of the texts (MacFarquhar 2018;Stayner 2018;Foguet 2014;Ruiz Parra 2014). In relation to the use of multimedia resources, both global analysis and analysis of articles on religion show that narrative journalism media do not fully exploit the possibilities of the online environment (MacFarquhar 2018;Ratliff 2018;Fernández 2014;Jonás 2014;Vargas 2014;Berning 2011). Of the 30 articles on religion analysed, only one uses multimedia elements. In this sense, the consideration of the concept of "immersion" (Conover 2018;Angulo 2013) related to the effect of multimedia elements appears as a debate. Despite all the interviewed experts considering the text to be the main way for the audience to be immersed in the story, younger generations of professionals consider multimedia elements a useful complement for the text. According to Monica Račić (2018), "multimedia elements have to be present to give information that the text itself does not offer and that helps audience to better understand the story". For Roberto Herrscher (2014), "immersion can be only achieved by audience imagination when reading". This research has also looked at the positioning elements that have been used in the articles. These elements have been located in 23 of the 30 articles. However, the variety of these elements is limited. There is the use of bold text (in 11 articles), internal links (in seven articles), and links to related articles (in five articles). Therefore, digital positioning is taken into account in a subtle way.
Finally, when studying versatility, it can be seen that in this type of journalism, it is mostly present in digital format, even though it is heavily influenced by the layout, structure, and format of the paper. Digital narrative journalism texts are proof of a certain "paperisation" of the internet (Albalad 2018;Foguet 2014). However, they are versatile in a specific aspect: length. Eliza Griswold (2018) said, "I write longer than my editor would like, but the fact of not having a limit to tell the story is something hugely enjoyable".
It should be noted, before analysing this aspect, that Micó (2006) also detailed the style conditions of digital journalism: accuracy, clarity, conciseness, density, precision, simplicity, naturalness, originality, brevity, variety, appeal, colour, sonority, detail, and propriety. Narrative journalism is accurate, dense, precise, and original; it has colour, sound, detail, and propriety. However, it is not concise, simple, or brief. Narrative journalism seeks literary excellence (Kormann 2018; Lobo 2018; Guerriero 2014; Herrscher 2014) and does not set any limitations that may interfere with the achievement of this goal. In this way, it develops a type of journalism that has also been called long-form, one that does not follow any canon except for that which each story requires (Račić 2018; Herrscher 2014). This is displayed in the following figures (Figures 9-11) on the length of the articles that cover religion.
Figure 10. Length of articles on religion studied in Gatopardo.
Similar to Larrondo (2009), Díaz Noci and Salaverría (2003) emphasised that a digital text is deep rather than long, referring to the hypertextual depth of digital articles. In this respect, the will of narrative journalism to break with what is established (Restrepo 2019; Reynolds 2018; Sims 2018) is denoted once again, seeing as it is longer than it is hypertextually deep.
This unlimited length also indicates how temporal flexibility is managed in these media (Burstein 2018; Vargas 2014). For Del Campo Guilarte (2006), the productivity of technologies cannot replace human slowness and imperfections. Precisely, slow journalism is the name given to this genre, which does not prioritise immediacy of publication and grants each topic the time it requires (Restrepo 2019; Rotella 2018; Guerriero 2014; Herrscher 2014). A prior agreement is detected with a type of reader who prefers to wait to receive a product (Sabaté et al. 2018b) that provides all the elements for understanding a story. According to Rubén Díaz Caviedes (2018), "these conditions are a privilege for a journalist".
This research questions to what extent this digital disloyalty influences whether narrative journalism may be introduced as a possible tool for intercultural and interreligious dialogue. It stems from the need of some communities linked to specific faiths to get out of the bubble that digital space can represent. Although Castells (1996) pointed out that new media technologies can contribute to the construction of networks between social groups, the creation of online communities (Dawson and Cowan 2004) linked to religion is, in some cases, at a stage prior to maturity (Díez Bosch et al. 2018); thus, digital dialogue between different communities is still a distant reality (Díez Bosch et al. 2018; Leurs and Ponzanesi 2018). One of the reasons for this is that many communities still do not consider the internet a space (Spadaro 2014), but rather a tool or an instrument of communication, not considering the further possibilities it has. They are in the stage of "religion online", and not yet in the stage that Helland (2000) defined as "online religion". He distinguished communities that act with unrestricted freedom and a high level of interactivity (online religion) versus those who seem to provide only religious information and not interaction (religion online).
Dialogue requires conditions that are more linked to slow journalism than to digital immediacy, seeing as dialogue requires time and implies continuity, patience, and building trust (Braybrooke 1992). According to Abu-Nimer and Alabbadi (2017) and Merdjanova and Brodeur (2011), dialogue demands a safe space for participants to be able to overcome their assumptions and question their own previous perceptions and prejudices. Dialogue also requires a facilitator to guide it. According to the interview with Jeff Sharlet (2018), digital space appears here as a channel that allows elements of dialogue to have greater reach, but it is not digital dynamism that is going to foster it. It is here that narrative journalism may be seen as a safe space (Abu-Nimer and Smith 2016) in which both the dynamics and content may become adequate for creating dialogue. In fact, dialogue helps to differentiate between the person and the subject, to see the individual within a large group that can be perceived as an adversary (Abu-Nimer and Alabbadi 2017; Merdjanova and Brodeur 2011). This is what narrative journalism aims to do: it talks about big issues based on individual stories (Guerriero 2014; Herrscher 2014), distinguishes people from concepts, and calls upon the reader to understand a specific reality from a different point of view (Díaz Caviedes 2018) that makes them reconsider their previous ideas (Guerriero 2014; Herrscher 2014). For Eliza Griswold (2018), "the key is explaining how complex people are, how complex humanity is in a way that hopefully makes it possible for people to consider the way what they thought about others before they read". For Berning (2011), digitisation gives journalists more resources to make this possible, allowing the audience to check and explore all the elements of a story.
In this context and taking these elements detected into account, the narrative journalist profile could be related to the role of the facilitator. These figures guide and mediate a process of dialogue that they have been a part of, suspending their own beliefs and actively listening to the Other (Sharlet 2018;Griswold 2018;Abu-Nimer and Alabbadi 2017;Merdjanova and Brodeur 2011), leaving their comfort zone and inviting readers to also leave theirs. It is in this zone that dialogue could begin and allow people to put themselves in the place of the Other and understand them.
A narrative journalist could be able to make use of literary art to carry out their task, which could be related to that of the facilitator; this art, sown by precedents (Abu-Nimer and Alabbadi 2017; Merdjanova and Brodeur 2011), may reinforce the empathising effect of the stories. Examples of analysed articles showing these conditions are "La gente piensa que el obispo no es católico" (Gatopardo, authored by Emiliano Ruiz Parra (2019)) or "The renegade nuns who took on a pipeline" (The New Yorker, authored by Eliza Griswold (2019)).
To confirm this relationship between the two professional figures, and the effect that this genre may have on interfaith dialogue, future research may consider the approach of the audience, of people involved with several religious traditions, and also of religious leaders. The aim of this investigation has been to introduce this possible relationship, put it in its context, and study in depth the coverage and presence of religions in this kind of media.
Conclusions
This research introduces the possibility that narrative journalism could become a tool for interfaith dialogue. The results obtained with the chosen methodology, content analysis (Van Dijk 2013) and in-depth interviews (Voutsina 2018; Elliott 2005; Johnson 2002), indicate that, with the present data, the main hypothesis is supported by evidence from the in-depth interviews and from the content analysis carried out. Results show that narrative journalism covers the different religious realities of its surroundings in a proportional and representative way, detailing how the social presence of some faiths in different geographical contexts is proportional to the appearance of these faiths in the corresponding publications. This representation is reinforced by the fulfilment of the rights of freedom of expression and communication of these religious communities (World Association for Christian Communication 2017). In narrative journalism publications, religion is not spoken about without first talking with religion, that is, with the protagonists of the topics that are covered. The results of the content analysis show a high presence of these agents in the stories. In addition, the high level of detail required by narrative journalism (Sims 1996; Wolfe 1973) gives the background an outstanding presence and effect in the articles on each faith. This background completes the information that the protagonists give and allows a better understanding of the different faiths. For the most part, all these reasons lead narrative journalism articles that cover religious topics to create conditions that have the potential to challenge religious stereotypes. However, the present investigation takes a very specific approach that shows the dynamics of literary journalism covering religion from the journalistic point of view. Future related research may also consider evidence related to religious leaders and the audience.
In the same sense, the way that narrative journalists practice their profession is also a factor that could point to narrative journalism as a possible tool for interreligious dialogue. This study has detected that a narrative journalist's abilities, processes, and knowledge could be related to those practiced by dialogue facilitators. They appear as the key figure in a process of understanding. They experience this process in each story they cover, leaving aside their prejudices (Griswold 2018; Sharlet 2018) and listening actively: the two key actions of dialogue according to Abu-Nimer and Alabbadi (2017).
Finally, the research points out how narrative journalism and dialogue require a slow rhythm that is detached from the speed of the online space (Braybrooke 1992). In this sense, the digital disloyalty of narrative journalism adapts to the rhythm and dynamics that dialogue requires, since this genre appears as a safe space for understanding in the midst of postmodern acceleration (Durham Peters 2018). Thus, digital space is simply a platform that can increase the reach of dialogue, but due to its rhythm, it does not contribute to it taking place. The main contribution could be made by narrative journalism, with its characteristics, and by narrative journalists, through the practice of their profession. They try to get the audience out of their comfort zone (Abu-Nimer and Alabbadi 2017), to go beyond their prejudices (Restrepo 2019; Griswold 2018; Sharlet 2018), and position themselves in this awkward space in which dialogue could take place (Merdjanova and Brodeur 2011). Future research may show, in this sense, the effect that this genre has on audience and the approach from people engaged with religions and from religious leaders in considering it a possible element to contribute to interfaith dialogue. It is about society feeling addressed in the encounter with the Other (Volf and McAnnally-Linz 2016;Torralba 2011;Lévinas 1985), based on the narration and revelation of stories (Eilers 1994). At a time of mass migration (United Nations Refugee Agency 2018; Ares 2017) and the rise of fake news (Quandt et al. 2019) and hate speech (Parekh 2019;Gagliardone et al. 2015), the tools that promote and contribute to this encounter (Volf 2015), such as narrative journalism, might be a guarantee for the future. | 2019-09-11T07:07:15.417Z | 2019-08-18T00:00:00.000 | {
"year": 2019,
"sha1": "f916a474eaefb78d68d4fdc91aa79ce4794c8ac6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1444/10/8/485/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bd1b58410fa160d380813fc06f367a8f569570a3",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
118603381 | pes2o/s2orc | v3-fos-license | Topological defect solutions for a system of three scalar fields
In this paper, we study topological defect structures formed by three non-linear scalar fields. By using the modified Adomian decomposition method (MADM) and the Adomian decomposition method (ADM), we find the solutions of the three scalar fields. We then compare the obtained results with each other and with the numerical solution. Also, we consider the static case and plot ϕ(x), χ(x), and ρ(x) for different values of the parameter r.
Introduction
Recently, a large body of literature has considered nonlinear equations in the form of ordinary differential equations (ODEs) and partial differential equations (PDEs). These nonlinear equations are used in different branches of physics, engineering, and the other sciences. One application is defect structures, which play an important role in cosmology and high energy physics and can be of topological or non-topological nature [1,2,3]. Kink-like and lump-like structures are topological and non-topological, respectively, and can be described by real scalar fields in 1 + 1 space-time dimensions under the action of nonlinear interactions. One of the main applications of topological defects is in cosmology, especially for the formation of structure in the early Universe, because topological defects act as carriers of attractive gravitational force [4,5]. So far, the defect structure solutions have been investigated by the orbit method [6,7]. In the present work, we focus attention on three coupled real scalar fields which support topological or kink-like defects [8]. For example, a single real scalar field supports just a single defect, a tanh-like kink, while the double sine-Gordon model may support two different defects, a large and a small kink [8]. In other words, a system containing two or more real scalar fields gives rise to at least two other classes of systems: those that support defects that engender internal structure and those that support junctions of defects [6,7,9,10]. Also, we know that a regular hexagonal network of defects can be described by two and three real scalar fields [11,12,13,14].
Also, three-field solutions of the Einstein equations describing black holes with a cosmic string were discussed in [15]. In general, these examples give us motivation to study three scalar fields and to make the present investigation as general as possible.
In most natural problems and models we encounter nonlinear equations, which can be solved by using different methods, such as the Variational Iteration Method (VIM) [16], the Modified Variational Iteration Method (MVIM) [17,18], the Homotopy Perturbation Method (HPM) [19,20,21,22,23], the Adomian Decomposition Method (ADM) [24,25], the Modified Adomian Decomposition Method (MADM) [26], and so on. Here we investigate the last three of them, comparing the results to show the accuracy of these methods and to obtain a reliable solution for this physical problem.
The present work is organized as follows. In section 2 we study topological defects described by three scalar fields. Afterward, we present the general form of the ADM and MADM methods in section 3. Then, we apply the ADM and MADM methods to the three coupled scalar fields and obtain the corresponding solutions for the fields φ(x), χ(x) and ρ(x) in section 4. In section 5, the results are shown in a table and the fields φ(x), χ(x) and ρ(x) are drawn in terms of position for three different values of the parameter r. Finally, we conclude in section 6 that these solutions are capable of describing the topological defect.
Three scalar fields system
We start with the following Lagrangian density [1],
$$\mathcal{L} = \frac{1}{2}\,\partial_{\alpha}\phi\,\partial^{\alpha}\phi + \frac{1}{2}\,\partial_{\alpha}\chi\,\partial^{\alpha}\chi + \frac{1}{2}\,\partial_{\alpha}\rho\,\partial^{\alpha}\rho - U(\phi, \chi, \rho),$$
where $x^{\alpha}$ and $x_{\alpha}$ are such that $x^{0} = x_{0} = t$, $x^{1} = -x_{1} = x$, and $U(\phi, \chi, \rho)$ is the potential, which is a nonlinear function of the three fields. The variation of the Lagrangian density with respect to the fields leads to the Euler-Lagrange equations
$$\ddot{\phi} - \frac{\partial^{2}\phi}{\partial x^{2}} + \frac{\partial U}{\partial \phi} = 0, \qquad \ddot{\chi} - \frac{\partial^{2}\chi}{\partial x^{2}} + \frac{\partial U}{\partial \chi} = 0, \qquad \ddot{\rho} - \frac{\partial^{2}\rho}{\partial x^{2}} + \frac{\partial U}{\partial \rho} = 0,$$
where an overdot denotes the derivative with respect to time. If the fields are constant, we get $\partial U/\partial\phi = \partial U/\partial\chi = \partial U/\partial\rho = 0$. However, for static field configurations (i.e. $\phi = \phi(x)$, $\chi = \chi(x)$, $\rho = \rho(x)$) we have
$$\frac{d^{2}\phi}{dx^{2}} = \frac{\partial U}{\partial \phi}, \qquad \frac{d^{2}\chi}{dx^{2}} = \frac{\partial U}{\partial \chi}, \qquad \frac{d^{2}\rho}{dx^{2}} = \frac{\partial U}{\partial \rho},$$
so the equations of motion are a system of three non-linear differential equations. The energy density associated with these configurations can be written as
$$\varepsilon(x) = \frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2} + \frac{1}{2}\left(\frac{d\chi}{dx}\right)^{2} + \frac{1}{2}\left(\frac{d\rho}{dx}\right)^{2} + U(\phi, \chi, \rho).$$
We note that, for the existence of soliton solutions, the fields must tend to minima of the potential, with vanishing spatial derivatives, as $x \to \pm\infty$, so that the total energy is finite. Let us now consider the models described by three scalar fields given in [6,7,8,9]; inserting the corresponding potential into the static equations above yields the explicit equations of motion for this system. In the next section, we give a brief review of the ADM and MADM methods.
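Before moving on, a quick symbolic check may help make the static equation above concrete. The sketch below (in Python with SymPy) verifies that the tanh-like kink mentioned in the introduction solves the single-field static equation d²φ/dx² = ∂U/∂φ for the textbook potential U(φ) = ½(1 − φ²)²; this potential is used here purely for illustration and is an assumption, not the three-field model of [6,7,8,9].

import sympy as sp

x, p = sp.symbols('x p', real=True)

U = sp.Rational(1, 2) * (1 - p**2)**2    # illustrative single-field potential (assumption)
phi = sp.tanh(x)                         # tanh-like kink profile

# Static field equation: d^2 phi / dx^2 = dU/dphi
lhs = sp.diff(phi, x, 2)
rhs = sp.diff(U, p).subs(p, phi)
print(sp.simplify(lhs - rhs))            # prints 0, so the kink satisfies the equation

# Energy density (1/2)(dphi/dx)^2 + U(phi) and the (finite) total energy of the kink
eps = sp.Rational(1, 2) * sp.diff(phi, x)**2 + U.subs(p, phi)
print(sp.integrate(eps, (x, -sp.oo, sp.oo)))   # 4/3

The same finite-energy requirement is what the boundary conditions stated above impose on the three-field configurations.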
A brief review of ADM and MADM methods
Here we explain the ADM and MADM methods separately below.
Fundamentals of Adomian decomposition method
We start with a general nonlinear differential equation in the following form
$$Lu + Ru + Nu = g,$$
where the linear term is represented by $Lu$ and $L$ is a linear operator that is easily invertible. We choose $L$ as the highest-ordered derivative, and $R$ is the remainder of the linear operator, namely a term consisting only of $u$ with a coefficient (constant or variable). The nonlinear term is represented by $Nu$, and $L^{-1}$ is defined as $n$-fold integration for $L = d^{n}/dt^{n}$. For example, for $L = d^{2}/dt^{2}$ we can write
$$L^{-1}(\cdot) = \int_{0}^{t}\!\int_{0}^{t}(\cdot)\,dt\,dt,$$
in which case one can obtain $u$ as
$$u = f - L^{-1}(Ru) - L^{-1}(Nu),$$
where $f$ collects the terms arising from the initial conditions and from $L^{-1}g$. In the assumed decomposition $u = \sum_{n=0}^{\infty} u_{n}$, the nonlinear term is expanded as $Nu = \sum_{n=0}^{\infty} A_{n}$, where the $A_{n}$ are called Adomian polynomials and depend only on the components $u_{0}, u_{1}, \ldots, u_{n}$; they make a rapidly convergent series (any nonlinearity can be written in terms of the $A_{n}$, and $Nu$ need not even be analytic). The Adomian polynomials for a one-variable function $f(u)$ are generated by the following formula:
$$A_{n} = \frac{1}{n!}\frac{d^{n}}{d\lambda^{n}}\left[f\!\left(\sum_{i=0}^{\infty}\lambda^{i}u_{i}\right)\right]_{\lambda = 0}.$$
We write here the first five Adomian polynomials for convenience,
$$A_{0} = f(u_{0}), \qquad A_{1} = u_{1}f'(u_{0}), \qquad A_{2} = u_{2}f'(u_{0}) + \frac{1}{2!}u_{1}^{2}f''(u_{0}),$$
$$A_{3} = u_{3}f'(u_{0}) + u_{1}u_{2}f''(u_{0}) + \frac{1}{3!}u_{1}^{3}f'''(u_{0}),$$
$$A_{4} = u_{4}f'(u_{0}) + \left(u_{1}u_{3} + \frac{1}{2!}u_{2}^{2}\right)f''(u_{0}) + \frac{1}{2!}u_{1}^{2}u_{2}f'''(u_{0}) + \frac{1}{4!}u_{1}^{4}f^{(4)}(u_{0}).$$
The Adomian polynomials for a two-variable function $f(u(x), v(x))$ follow from the analogous formula
$$A_{n} = \frac{1}{n!}\frac{d^{n}}{d\lambda^{n}}\left[f\!\left(\sum_{i=0}^{\infty}\lambda^{i}u_{i}, \sum_{i=0}^{\infty}\lambda^{i}v_{i}\right)\right]_{\lambda = 0},$$
where $f_{\mu,\nu}(u_{0}, v_{0}) = \frac{\partial^{\mu+\nu}}{\partial u^{\mu}\partial v^{\nu}} f(u(x), v(x))\big|_{x=0}$ denotes the partial derivatives appearing in the expansion. We can write the practical solution in the $n$-term approximation [24] as
$$\varphi_{n} = \sum_{i=0}^{n-1} u_{i}, \qquad u = \lim_{n\to\infty}\varphi_{n},$$
where $u_{0}, u_{1}, u_{2}, \ldots$ are determined by the following recursive relation, as mentioned above,
$$u_{0} = f, \qquad u_{k+1} = -L^{-1}(Ru_{k}) - L^{-1}(A_{k}), \quad k \geq 0.$$
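To make the recursion concrete, the following sketch (Python with SymPy) builds the Adomian polynomials directly from the defining formula and applies the ADM recursion to the toy initial-value problem u' = u² with u(0) = 1. This toy problem is chosen for illustration only and is not one of the field equations of this work; its exact solution 1/(1 − t) is recovered term by term.

import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_polynomials(f, u_comps):
    # A_k = (1/k!) d^k/dlam^k [ f(sum_i lam^i u_i) ] evaluated at lam = 0
    u_lam = sum(lam**i * u_comps[i] for i in range(len(u_comps)))
    return [sp.diff(f(u_lam), lam, k).subs(lam, 0) / sp.factorial(k)
            for k in range(len(u_comps))]

# Toy problem u' = u^2, u(0) = 1: here L = d/dt, R = 0, N(u) = u^2 and f = u(0) = 1.
N = lambda u: u**2
u = [sp.Integer(1)]                          # u_0 = f
for k in range(5):
    A = adomian_polynomials(N, u)            # A_0, ..., A_k from the current components
    u.append(sp.integrate(A[k], (t, 0, t)))  # u_{k+1} = L^{-1}(A_k)

print(u)       # [1, t, t**2, t**3, t**4, t**5]
print(sum(u))  # partial sum 1 + t + ... + t**5, approaching the exact solution 1/(1 - t)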
Fundamentals of modified Adomian decomposition method
In the ADM, the zeroth component $u_{0}(x)$ is usually identified with the function $f(x)$ defined in Eq. (17). It is obvious that the Adomian method converts the differential equation into easily computable components. The closed form of the solution $u(x)$, if it exists, can be obtained immediately because of the rapid convergence presented by the method. The modified decomposition method suggests that the function $f(x)$ defined above in Eq. (17) be decomposed into two parts, namely $f_{0}(x)$ and $f_{1}(x)$, so that $f(x) = f_{0}(x) + f_{1}(x)$. The proper choice of the parts $f_{0}(x)$ and $f_{1}(x)$ depends mainly on a trial basis. In view of this decomposition of $f(x)$, a slight variation only on the components $u_{0}(x)$ and $u_{1}(x)$ should be introduced. The proposed variation is that only the part $f_{0}(x)$ be assigned to the zeroth component $u_{0}(x)$, whereas the remaining part $f_{1}(x)$ be combined with the other terms given in $u_{1}(x)$ to define it. In view of this assumption, we formulate the following recursive relation for the modified decomposition method [26]:
$$u_{0} = f_{0}, \qquad u_{1} = f_{1} - L^{-1}(Ru_{0}) - L^{-1}(A_{0}), \qquad u_{k+2} = -L^{-1}(Ru_{k+1}) - L^{-1}(A_{k+1}), \quad k \geq 0.$$
The success of the modified method depends mainly on the proper choice of the parts $f_{0}(x)$ and $f_{1}(x)$. We have been unable to establish any criterion to judge what forms of $f_{0}(x)$ and $f_{1}(x)$ can be used to yield the acceleration demanded. It appears that trials are the only criteria that can be applied so far.
"year": 2015,
"sha1": "aa0dc458f8e82ced79d07c5ec3cc0d11ddf4237a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/622/1/012046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "50a00da03fcf7c35367220f259e7780bce6486f8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
267492537 | pes2o/s2orc | v3-fos-license | Radiotherapy for Advanced Hodgkin Lymphoma with Initial Bulk: A Combined Analysis of Two Randomized Trials
Purpose The role of consolidative radiation therapy (RT) in patients with advanced Hodgkin lymphoma with initial bulk is unclear. GITIL/FIL HD0607 and FIL HD0801, 2 randomized controlled trials with similar design and methodologies, did not identify a benefit to consolidative RT after a metabolic complete response to 6 cycles of doxorubicin, bleomycin, vinblastine and dacarbazine. However, their limited sample sizes reduced statistical power to detect a small but clinically meaningful benefit to RT. Methods and Materials In a secondary analysis of these 2 phase 3 trials, reconstructed patient data were used to compare outcomes for early and complete responders randomized to no RT or RT to the site(s) of initial bulk. Estimates of progression-free survival (PFS) in the intent-to-treat (ITT) and per-protocol (PP) analyses were generated using the combined data and compared between groups using the log-rank test. Results A total of 412 patients were included in the ITT analysis, and 373 patients were included in the PP analysis. Median age was 30 to 32 years, 42% of patients were stage IIB, and 73% of bulky sites were located in the mediastinum. For the no RT versus RT groups, 5-year ITT PFS estimates were 90.1% versus 90.1%, respectively (P = .81). Five-year PP PFS rates were 90.9% versus 92.9%, respectively (P = .31). There was no observed difference between no RT and RT groups in subgroups according to size of bulky disease: 5 to 7 cm (P = .78), 7 to 10 cm (P = .25), and >10 cm (P = .69). Conclusions In this combined analysis of 2 randomized phase 3 clinical trials, consolidative RT to initial sites of bulky nodal involvement was not associated with a PFS benefit in patients with advanced Hodgkin lymphoma in metabolic complete response after 2 and 6 cycles of doxorubicin, bleomycin, vinblastine and dacarbazine.
Introduction
Radiation therapy (RT) plays an integral role in the management of early stage Hodgkin lymphoma (HL), but its role in advanced disease is uncertain. 1,2,5 The GITIL and FIL cooperative groups executed 2 randomized trials for adults with advanced HL to examine treatment intensification in patients with residual positron emission tomography (PET)-positive disease after 2 cycles of doxorubicin, bleomycin, vinblastine and dacarbazine (ABVD). 6,7 Both trials also randomized patients with bulky disease (nodal mass >5 cm) at diagnosis that achieved a metabolic complete response (CR) after 2 and 6 cycles of ABVD to RT or observation. 8,9 Although these studies attempted to address the utility of RT in the setting of bulky disease, each study suffered from limitations including absence of a predefined statistical design 8 and small, underpowered sample size. 9 The similarities between the 2 studies' design, treatment era, inclusion criteria, chemotherapy regimen, use of PET imaging, RT dose, and prespecified endpoints provide a unique opportunity to combine data and further examine the efficacy of RT in the setting of bulky disease. Through a combined analysis of reconstructed patient data from these published trials, we aimed to investigate the role of consolidative RT in patients with bulky advanced HL who experience metabolic CR after chemotherapy.
Methods and Materials
HD0607 and HD0801 were specifically selected for a combined analysis due to their similar study design, patient population, inclusion/exclusion criteria, treatment, and endpoint of interest definitions. A systematic review was performed to evaluate for other contributing trials and yielded no additional findings (Appendix E1 and Fig. E1). Both trials enrolled adults with advanced HL (stage IIB-IV) who had at least 1 site of bulky disease (largest dimension ≥5 cm) measured on transverse sections of the computed tomography (CT) imaging and achieved metabolic CR by PET/CT after 2 and 6 cycles of ABVD. The Consolidated Standards of Reporting Trials diagram for the combined analysis is displayed in Fig. E2.
RT procedures were overall consistent between studies, with sites of initial bulk specified to be treated with approximately 30 Gy using conventional fractionation. RT detail is provided in Table E1. The primary endpoint of this combined analysis was PFS, defined in both studies as time from registration/randomization to the date of progression/relapse or death from any cause.
Reconstructed patient data were extracted from the published figures using a robust, 2-stage, iterative extraction process based on the Kaplan-Meier estimation method (Appendix E1). 10 This established methodology is used to analyze time-to-event data when specific patient-level data are either not available or when specific data points are not consistently reported across multiple studies. 10,11 Reconstructed outcome estimates and Kaplan-Meier plots for each trial are demonstrated in Table E2 and Figs. E3 and E4.
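The published two-stage algorithm is not reproduced here, but the basic idea behind this kind of reconstruction can be sketched as follows: digitized points read off a published Kaplan-Meier curve are converted into approximate per-interval event counts. The Python sketch below assumes, for simplicity, that no censoring occurs within the digitized intervals, and the curve coordinates and sample size are made up for illustration; they are not data from HD0607 or HD0801.

def events_from_km(points, n_at_risk_start):
    # points: list of (time, survival_probability) read off a published curve, starting at (0, 1.0)
    # returns a list of (time, estimated_events) per digitized interval,
    # assuming no censoring within intervals (a strong simplification)
    events = []
    n_at_risk = n_at_risk_start
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if s0 <= 0:
            break
        # Kaplan-Meier step: s1 = s0 * (1 - d / n)  =>  d = n * (1 - s1 / s0)
        d = round(n_at_risk * (1 - s1 / s0))
        events.append((t1, d))
        n_at_risk -= d
    return events

# Hypothetical digitized curve for one arm (times in months)
curve = [(0, 1.00), (12, 0.96), (24, 0.93), (36, 0.92), (48, 0.91), (60, 0.90)]
print(events_from_km(curve, n_at_risk_start=200))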
Descriptive analyses were performed on categorical data where appropriate to summarize the combined cohort. Survival outcomes in the combined analysis were estimated using the Kaplan-Meier method. The log-rank test was used to test for differences between groups. Statistical analyses were performed using R version 4.2.
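The trial analyses were carried out in R 4.2; as a rough, non-authoritative illustration of the comparison itself, the sketch below uses Python's lifelines package on a hypothetical file of reconstructed patient-level data (the file name and column names are assumptions, not part of the study) to estimate Kaplan-Meier curves for the RT and no-RT groups and compare them with a log-rank test.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical reconstructed data: time (months), event (1 = progression/death, 0 = censored), arm
df = pd.read_csv("reconstructed_ipd.csv")
rt = df[df["arm"] == "RT"]
no_rt = df[df["arm"] == "noRT"]

km = KaplanMeierFitter()
for name, grp in (("RT", rt), ("no RT", no_rt)):
    km.fit(grp["time"], grp["event"], label=name)
    print(name, "5-year PFS:", float(km.survival_function_at_times(60).iloc[0]))

# Log-rank test for a difference in PFS between the pooled groups
res = logrank_test(rt["time"], no_rt["time"], rt["event"], no_rt["event"])
print("log-rank p-value:", res.p_value)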
Results
Baseline patient and disease characteristics of both trials that have been previously reported are summarized in Table E3. Of all 445 bulky sites (patients on HD0607 could have >1 bulky site), 73% were in the mediastinum and 27% were located at sites other than the mediastinum. The maximum size of the bulky lesion(s) was evenly distributed across the 3 size categories (5-7 cm, 7-10 cm, 10+ cm).
A total of 373 patients were included in the per-protocol analysis: 191 in the no RT group and 182 in the RT group (Fig. 2). Five-year PFS rates in the no RT versus RT groups were 90.9% (95% CI, 86.9-95.1) versus 92.9% (89.1-96.9), respectively (P = .31).
Both studies reported the size of largest bulky disease for all 412 patients randomized but only reported PFS outcomes based on size using the per-protocol cohort (n = 373). Combined 5-year PFS estimates between treatment groups stratified by size of largest bulky disease are summarized in Table E4. There were no PFS differences between the no RT versus RT groups by size category, nor was there an association between size and PFS in patients who did not receive RT (Fig. E5; P = .59).
Discussion
In this unique combined analysis of 2 randomized trials, we failed to demonstrate an association between consolidative RT and PFS in patients with bulky advanced HL. These findings are particularly important considering the controversy surrounding the role of RT for this patient population, where current data are limited, and its routine use is difficult to justify. We performed a systematic review to rule out the existence of other randomized trials reporting outcomes analogous to those included in this combined analysis and found no additional studies. Though both studies produced high-quality randomized evidence, each is individually underpowered, and one was not powered by a defined statistical design "mainly because of a lack of published comparators." 8(p2) Each individual trial on its own would not be expected to provide adequate statistical power to detect a small improvement in PFS. However, trials that enroll the same population and randomize patients to similar treatment groups offer an opportunity for combined analysis. 12 The evidence supporting the use of RT in HL is heterogeneous and difficult to interpret in the context of modern therapy. Its use has been associated with improved PFS in prior non-ABVD-based trials without PET response imaging. 1,13 Considering the current era of treatment for advanced HL with ABVD guided by PET/CT imaging, it is unclear whether the results of this trial with older chemotherapy and without functional imaging remain valid. Our findings are more consistent with Alliance/CALGB 50801, a phase 2 trial of PET-adaptive therapy for patients with bulky, early stage HL who were treated with 6 cycles of ABVD alone without RT in the setting of metabolic CR. 14 The RATHL trial did not allow consolidative RT for advanced HL patients in metabolic CR and reported similar rates of 3-year PFS. 15 RATHL was identified but not included for combined analysis because of the lack of PET/CT imaging after completion of chemotherapy. A combined analysis of patterns of failure in HD0607 and HD0801 was not possible given inconsistency in reporting, but failure within sites of initial bulk was rare. These data suggest that local control at the bulky site(s) would only reduce the risk of progression by a small amount; an adequately powered trial to clarify this benefit may not be feasible. Even a small difference in PFS may not provide clinical benefit with modern targeted salvage systemic therapies. 16,17 This study is limited by the lack of individual patient-level data not amenable to reconstruction, precluding sensitivity analyses of predictors of recurrence and limiting our ability to control for factors such as International Prognostic Score, site of bulk, or RT dose. Thus, these findings should be interpreted with caution.
Conclusion
In conclusion, this combined analysis of 2 randomized trials was unable to detect a PFS benefit to consolidative RT in patients with advanced HL who present with initial sites of bulky disease and have achieved metabolic CR after 2 and 6 cycles of ABVD.
Figure 2. Progression-free survival in the combined per-protocol analysis.
Figure 1. Progression-free survival in the combined intent-to-treat analysis.
"year": 2024,
"sha1": "241176427b13c97ed5800c86beda5e85518363d8",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8f8832e9e5a64aef78e52276c9418af1a091353",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258477757 | pes2o/s2orc | v3-fos-license | Triple Band Dual Sense Circularly Polarized Slot Antenna for S and C Band Applications
This paper proposes a microstrip-fed simple square slot patch antenna, which produces a triple band and is circularly polarized. The designed antenna consists of an L-shaped patch radiator in which the lower part of the L is modified to a circle instead of a rectangle, and two rectangular strips are inserted from the opposite corners of the ground plane. Two small rectangular slits have also been used in the design to generate the triple band and widen the bandwidth. The antenna has been fabricated and measured, and the simulated and measured results show good agreement. The measured impedance bandwidths (IBWs) are 44.06% (2.3-3.6 GHz) and 73.68% (4.8-10.4 GHz), and the axial ratio bandwidths (ARBWs) are 37.29% (2.4-3.5 GHz), 13.6% (4.8-5.5 GHz), and 32.35% (5.7-7.9 GHz) in the lower, middle, and upper band, respectively.
INTRODUCTION
Different types of multiband antennas have been developed as a result of the spike in demand for devices that operate at numerous frequencies. Slot antennas have gained popularity among these because of their many benefits, including their low profile, simplicity in fabrication and mounting, and broad operating spectrum. Numerous multiband slot antenna designs exist due to these characteristics [1]. Circularly Polarized (CP) antennas and terminals offer an added benefit for mobile wireless devices because CP signals have superior propagation characteristics in multipath situations compared with linearly polarized ones. The majority of traditional techniques for creating CP slot antennas include adding strips, stubs, or cutting slits through the ground plane [2]. In [3] and [4], annular slot antenna structures are presented which generate dual bands and triple bands, respectively. In [4], the proposed antenna is designed using an L-shaped feed and two nonconcentric annular slot configurations, but the triple bands generated have low axial ratio bandwidth. In [5], the antenna consists of two layers: an electromagnetically coupled low-band patch present in the bottom layer and a probe-fed dual-band patch on the top layer. In [6], in order to achieve CP, a T-shaped slit is cut in the ground plane, and a rectangular parasitic stub is added to the antenna radiator. In [7], a slot antenna loaded with metallic strips and a split-ring resonator (SRR) is proposed, in which the CP bands are generated when the SRR and copper strips are excited by the microstrip feed. The coplanar waveguide (CPW)-fed antenna structure is proposed in [8] and [9]. In [8], a rectangular patch with two unequal rectangular strips is connected by a CPW feedline, and an inverted L-shaped stub is placed in the ground plane to create CP modes. In [10], an antenna is designed in which a U-shaped radiator, rotated by 45°, is placed at the top, an I-shaped strip is connected to it, and an inverted L-shaped strip is then connected to the end of the I-shaped strip in order to generate different CP modes. In [11], the antenna is designed by using a hexagonal slot with L-shaped slits; three slits are added to the hexagonal slot for generating CP modes at different frequencies. [12] presents a structure in which circular polarization is achieved by placing a semi-circular radiating patch with a slit in it along with a modified structure in the ground plane. Similar to [7], Ref. [13] has a compact antenna structure that is loaded with a D-shaped complementary split ring resonator, which shows dual-band polarization. Likewise, in [14], a printed inverted-F antenna loaded with a rectangular complementary split ring resonator (CSRR) is designed and studied based on the various antenna parameters. A dual-band dual-sense CP antenna consisting of U- and L-shaped patches is designed for implementation in WLAN and WiMAX application areas [15]. Similarly, in [16], a U-shaped patch is considered along with an L-shaped parasitic patch, which results in the generation of a dual-band CP antenna.
To overcome those complexities, a simple square slot antenna is designed and fabricated in which an L-shaped radiator is placed, where a circle is introduced in the lower part of the radiator instead of a rectangle; the prototype is a modification of the structure in [17]. The simulated results show that it achieves an Impedance Bandwidth (IBW) of 2.4-3.5 GHz (37.29%) and 5-10 GHz (66.66%) in the lower and upper bands, respectively, whereas the axial ratio falls under the 3 dB range over 2.5-3.5 GHz (33.33%), 5-5.5 GHz (9.52%), and 5.7-7.6 GHz (28.57%) in the lower, middle, and upper bands, respectively. Considering the resonating frequency to be 3 GHz, some articles are compared with the proposed structure regarding their antenna size, IBWs, and Axial Ratio Bandwidths (ARBWs), which are listed in Table 1. The antenna parameters are calculated using some prefixed design equations, which are listed below [18].
The expression for the effective permittivity ε_reff is given by:

ε_reff = (ε_r + 1)/2 + (ε_r − 1)/2 · [1 + 12h/W]^(−1/2)    (1)

The change in length due to fringing is given by:

ΔL/h = 0.412 · (ε_reff + 0.3)(W/h + 0.264) / [(ε_reff − 0.258)(W/h + 0.8)]    (2)

The effective length (L_eff) becomes:

L_eff = L + 2ΔL    (3)

The effective length, for a resonance frequency f_0, is given by:

L_eff = c / (2 f_0 √ε_reff)    (4)

Width (W) is given by:

W = (c / 2f_0) · √(2/(ε_r + 1))    (5)
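As a quick numerical illustration only (not part of the paper's design flow, in which the final geometry is tuned by full-wave simulation), the short sketch below evaluates the design relations listed above for the substrate parameters used here (FR4, ε_r = 4.4, thickness 1 mm) at an assumed 3 GHz design frequency; the resulting dimensions are first-cut estimates for a conventional rectangular patch, not the dimensions of the proposed slot antenna.

```python
# Sketch: evaluate the standard transmission-line-model patch equations (1)-(5)
# for FR4 (eps_r = 4.4, h = 1 mm) at f0 = 3 GHz. Illustrative values only.
import math

c = 3e8          # speed of light (m/s)
f0 = 3e9         # assumed design frequency: 3 GHz
eps_r = 4.4      # FR4 relative permittivity
h = 1e-3         # substrate thickness: 1 mm

# Patch width, Eq. (5)
W = (c / (2 * f0)) * math.sqrt(2.0 / (eps_r + 1.0))

# Effective permittivity, Eq. (1)
eps_reff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Fringing-field length extension, Eq. (2)
dL = 0.412 * h * ((eps_reff + 0.3) * (W / h + 0.264)) / ((eps_reff - 0.258) * (W / h + 0.8))

# Effective and physical lengths, Eqs. (4) and (3)
L_eff = c / (2 * f0 * math.sqrt(eps_reff))
L = L_eff - 2 * dL

print(f"W = {W * 1e3:.2f} mm, eps_reff = {eps_reff:.3f}, "
      f"dL = {dL * 1e3:.3f} mm, L = {L * 1e3:.2f} mm")
```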
ANTENNA DESIGN AND GEOMETRY
The structural shape of the proposed antenna is shown in Figure 1. The overall area covered by the proposed antenna is 60 × 60 mm². A 50 Ω microstrip feed line is used to feed the antenna, which is fabricated on an FR4 substrate having a thickness of 1 mm and relative permittivity of 4.4. A modified L-shaped radiator is printed on the upper side of the substrate, whose role is to generate the resonance frequency bands, whereas a modified ground plane structure is placed on the bottom side of the substrate. The patch consists of a rectangle (S5 × S6), and the lower part of the rectangle is merged with a circular stub of radius 3.3 mm.
Table 2. Parametric values of the designed antenna.
DESIGN PROCEDURE
The evolution process of the designed antenna, from Ant 1 to the final proposed antenna, is explained step by step. Ant 1 is a simple structure having a square slot cut in the ground plane with the dimensions mentioned in Table 2 and is fed by a microstrip feed line having a rectangular patch. In Ant 2, the microstrip feedline is shifted toward the right side of the patch, and the two rectangular strips S1 × S2 and S3 × S4 are placed in the opposite corners of the ground plane. In Ant 3, a circle of radius 3.3 mm is placed at the lower end of the patch to give it an L shape in which, instead of a rectangle, a circle is placed. In Ant 4, a small slit of size 3 × 2 mm² is placed on the rectangle S1 × S2.
Ant 5 is the final proposed antenna, in which again a small slit of size 1 × 3 mm² is placed on the rectangle S3 × S4.
RESULTS AND DISCUSSION
The simulation work is carried out with the help of Ansys HFSS ver. 21. To understand the design procedure, the antenna evolution steps are clearly shown in Figure 2 and explained in the design procedure section. To visualize the performance of the desired antenna, the impedance bandwidth and axial ratio values of Ant 1-5 are plotted and shown in Figure 3. Figure 3(a) shows the impedance bandwidth versus frequency plot, and Figure 3(b) shows the axial ratio versus frequency plot. From both plots, we can observe that Ant 1 is linearly polarized, as the axial ratio does not fall under the 3 dB range, and the impedance bandwidth ranges from 6.2 to 7.6 GHz. In Ant 2, the feedline is shifted towards the right side, and then two rectangular strips of asymmetric size are placed in the opposite corners of the ground plane. Here the antenna is also linearly polarized, as the axial ratio does not fall under the 3 dB range, and its impedance bandwidth lies from 6.3 to 7.4 GHz. Ant 3 comes up with a structure having a circle placed at the lower end of the rectangular patch. Its impedance bandwidth lies from 2.5 to 3.5 GHz and again from 5.1 to 10 GHz. The axial ratio of Ant 3 lies within the impedance bandwidth range, going from 2.6 to 3.5 GHz and again from 6.3 to 6.7 GHz, which shows that the antenna is circularly polarized. In Ant 4, a small slit is cut in the S1 × S2 rectangle; the impedance bandwidth ranges from 2.5 to 3.5 GHz and again from 5.1 to 10.1 GHz, while the axial ratio also falls in this range, lying from 2.6 to 3.4 GHz and again from 6.1 to 6.7 GHz. Finally, Ant 5 is the proposed antenna, in which again a small slit is cut in the S3 × S4 rectangle; the impedance bandwidth goes from 2.4 to 3.5 GHz and again from 5 to 10 GHz, and the axial ratio bandwidth also falls within the impedance bandwidth range, lying in 2.5-3.5 GHz, 5-5.5 GHz, and 5.7-7.6 GHz. Hence, as the axial ratio falls within the impedance bandwidth range, we can clearly state that our designed antenna is circularly polarized.
PARAMETRIC ANALYSIS
The proposed antenna achieves circular polarization when two rectangular strips of different sizes are introduced in the ground plane along with a circular stub placed in the patch. Results have also been observed by changing the slot width and varying the size of the two slits. Here six parameters, rectangle S1 × S2, rectangle S3 × S4, circle radius (r), slot width (Sw), and two slits (a × b) and (c × d), are considered for the parametric analysis and have been analyzed with the help of Ansys HFSS by varying one parameter at a time while keeping the other parameters constant.
Effects due to Rectangle S3 and S4
In Figure 5, the effects of rectangle S3 × S4 on the antenna have been studied. From Figure 5(a), we can observe that all four values of S3 and S4 give almost the same results, but a noticeable change is found in Figure 5(b), which shows the axial ratio graph. By considering both plots, we find that S3 = 11 mm and S4 = 20 mm give better results than the other values.
Effects by Circular Stub Radius (r)
Figure 6 shows the effect on the antenna due to the variation of the circle's radius. Here three values of radius, r = 3.2, 3.3, and 3.4 mm, are taken into account, among which 3.3 mm is chosen for the design as it shows good results in terms of both impedance and axial ratio bandwidth.
Effects in the Variation of Slot Width (Sw)
In Figure 7, the variation of slot width in terms of antenna design is shown. Here three values of Sw, namely 0.5, 1, and 1.5 mm, are taken for the analysis, among which Sw = 1 mm shows the best results in terms of both impedance bandwidth and axial ratio.
The radiation patterns of the proposed antenna are shown in Figure 11. For the three frequencies, 3 GHz, 5.2 GHz, and 6.5 GHz, both XZ and YZ planes are represented, in which both simulated and measured values are noted in the graph, and it can be observed that they are very similar to each other. Further, it can be stated that the antenna is Right Hand Circularly Polarized (RHCP) in the lower band at 3 GHz and Left Hand Circularly Polarized (LHCP) at both 5.2 GHz and 6.5 GHz in the middle and upper bands, respectively.
Figure 12 represents the graph of antenna gain for both the simulated and measured values over the frequency range from 2 GHz to 8 GHz, within which the axial ratio bands fall.
The proposed antenna is simulated using ANSYS HFSS software, and for its practical use, the antenna prototype is fabricated on an FR4 substrate and is measured inside an anechoic chamber.
The patch is merged with a circular stub of radius 3.3 mm and is fed by a microstrip feed line. Two rectangular strips (S1 × S2 and S3 × S4) are placed in the opposite corners of the structure in the ground plane to generate CP waves. Two rectangular slits are also present in the structure. One slit (a × b) is cut in the rectangular strip S3 × S4, and the second slit (c × d) is cut in the rectangular strip S1 × S2 to generate the triple band as well as widen the bandwidth. The proposed antenna is compact in size, and its overall dimension is 60 × 60 × 1 mm³; it is simulated with the help of Ansys HFSS and is also fabricated and measured. The desired parametric values are mentioned in Table 2.
Figure 4 shows the plot of impedance bandwidth and axial ratio when varying S1 and S2 over different values. Here four values of S1 and S2 are considered, among which S1 = 8 mm and S2 = 20 mm are taken into account. From Figure 4(a) we can see that the S11 parameters are almost the same for all 4 values of S1 and S2, but the difference can be noticed in the axial ratio graph, for which the rectangle of 8 × 20 mm² is considered for the design.
Figure 10 represents the plot of simulated and measured values of impedance bandwidths and axial ratio bandwidths of the designed antenna. From Figure 10(a) we can observe that the simulated impedance bandwidth covers 2.4-3.5 GHz in the lower band and 5-10 GHz in the upper band, whereas the measured impedance bandwidth covers 2.3-3.6 GHz and 4.8-10.4 GHz in the lower and upper bands, respectively. Similarly, from Figure 10(b) we can say that the simulated AR lies in 2.5-3.5 GHz, 5-5.5 GHz, and 5.7-7.6 GHz in the lower band, middle band, and upper band, respectively, whereas the measured results show that the AR lies in 2.4-3.5 GHz, 4.8-5.5 GHz, and 5.7-7.9 GHz in the lower, middle, and upper bands, respectively. We can observe that the simulated and measured values hold a good agreement between them.
Figure 9. Effects of c and d on antenna performances: (a) S11, (b) axial ratio.
Figure 13 represents the top and bottom views of the fabricated structure.
Figure 12. Simulated and measured gain of the designed antenna within the AR band.
Table 1. Comparison with existing antennas. | 2023-05-04T15:14:25.654Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "eb05a3e0a8369f5bd62bad35018657d7e4a31281",
"oa_license": null,
"oa_url": "https://www.jpier.org/ac_api/download.php?id=23022101",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0068fff21376aa9a473bf479dc9b44351f6d1bf",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
259287490 | pes2o/s2orc | v3-fos-license | Enumerative Theory for the Tsetlin Library
The Tsetlin library is a well-studied Markov chain on the symmetric group $S_n$. It has stationary distribution $\pi(\sigma)$ the Luce model, a nonuniform distribution on $S_n$, which appears in psychology, horse race betting, and tournament poker. Simple enumerative questions, such as ``what is the distribution of the top $k$ cards?'' or ``what is the distribution of the bottom $k$ cards?'' are long open. We settle these questions and draw attention to a host of parallel questions on the extension to the chambers of a hyperplane arrangement.
In Section 2, a host of applied problems are shown to give rise to the Luce model. These include the celebrated Tsetlin library, a Markov chain on S n , described as: "at each step, choose a card labeled i with probability proportional to θ i and move it to the top" -π(σ) is the stationary distribution of this Markov chain.
It is natural to ask basic enumerative questions: pick σ from π(σ). The recent paper by Ben-Hamou, Peres and Salez [6] couples sampling with and without replacement so that tail and concentration bounds, derived for partial sums when sampling with replacement, are seen to apply "as is" to sampling without replacement.
A final item; throughout, we have assumed that the weights θ i are fixed and known. It is also natural to consider random weights. For a full development, see [39].
Section 5 develops the connections of the Tsetlin library to the Bidigare-Hanlon-Rockmore (BHR) walk on the chambers of a real hyperplane arrangement. Understanding the stationary distributions of these Markov chains is almost completely open. Section 2 begins with a review of enumerative group theory. These questions make sense for continuous groups. Georgia Benkart made fundamental contributions here through her work on decomposing tensor products.
Enumerative Group Theory
Let G be a finite group. A classical question is "pick g ∈ G at random. What does it look like?" For example, if G = S_n:
• What is the distribution of F(g), the number of fixed points of g?
• How many cycles are typical?
• What is the expected length of the longest cycle?
• What about the length of the longest increasing subsequence of g?
• What about the descent structure of g?
• How many inversions are typical?
All of these questions have classical answers (references below).
For G = GL n (q), parallel questions involve the conjugacy class structure of a random g ∈ G.
For a splendid development (for finite groups of Lie type), see Fulman [26], which has full references to the results above. The recent survey of Diaconis and Simper [22] brings this up to date. It focuses on enumeration by double cosets H \ G/K.
The questions above make sense for continuous groups, where they become "random matrix theory." For example, when G = O_n (the real orthogonal group), one may study the eigenvalues of g ∈ G under Haar measure by studying the moments of traces, ∫_{O_n} (Tr(g))^k dg.
Patently this asks for the number of times the trivial representation appears in the kth tensor power of the usual n-dimensional representation of O n . See [19] for details. Georgia Benkart did extensive work on decomposing tensor powers of representations of classical (and more general) groups. She worked on this with many students and coauthors. Her monograph with Britten and Lemire [7] is a convenient reference. Most of this work can be translated into probabilistic limit theorems. We started to do this with Georgia during MSRI 2018, but got sidetracked into doing a parallel problem working over fields of prime characteristic in joint work with Benkart-Diaconis-Liebeck-Tiep [8].
Most all of the above is enumeration under the uniform distribution. A recent trend in enumerative (probabilistic) group theory is enumeration under natural non-uniform distributions. For example, on S n , • The Ewens measure π θ (σ) = Z −1 (θ)θ C(σ) . Here, θ is a fixed positive real number, C(σ) is the number of cycles of σ, and Z −1 (θ) is a simple normalizing constant. The Ewens measure originated in biology, but has blossomed into a large set of applications. See Crane [17].
• The Mallows measure π_θ(σ) = Z^{−1}(θ) θ^{I(σ)}, where I(σ) is the number of inversions of σ. This was originally studied for taste-testing experiments but has again had a huge development.
• More generally, if G is a finite group and S ⊆ G is a symmetric generating set, let ℓ(g) be the length function and define P_θ(g) = Z^{−1}(θ) θ^{ℓ(g)}. The Ewens and Mallows models are special cases with G = S_n and S = {all transpositions} and S = {all adjacent transpositions}, respectively.
Most of the questions studied above under the uniform distribution have been fully worked out under Ewens and Mallows measures. See the survey by Diaconis and Simper [22] for pointers to a large literature.
The above can be amplified to "permutons" [30] and "theons" [16]. It shows that enumeration under non-uniform distributions is an emerging and lively subject. We turn next to the main subject of the present paper.
The Luce model
This section gives several applications where the Luce model appears.
Psychology
In psychophysics experiments, a panel of subjects are asked to rank things, such as: • Here are seven shades of red; rank them in order of brightness.
• Here are five tones; rank them from high to low.
• The same type of task occurs in taste-testing experiments. Rank these five brands of chocolate chip cookies (or wines, etc.) in order of preference.
This generates a collection of rankings (permutations) and one tries to draw conclusions.
Patently, rankings vary stochastically; if the same person is asked the same question at a later time, we expect the answers to vary slightly.
Duncan Luce introduced the model (1) via the simple idea that each item has a true weight (say, θ_i) and the model (1) induces natural variability (which can then be compared with observed data).
Indeed, he did more, crafting a simple set of axioms for pairwise comparison and showing that any consistent ranking distribution has to follow (1) for some choice of θ i . This story is well and clearly told in [32] and [33].
We would be remiss in not pointing to the widespread dissatisfaction over the "independence of irrelevant alternatives" axiom in Luce's derivation. The long Wikipedia article on "irrelevance of alternatives" chronicles experiments and theory disputing this, not only for Luce but in Arrow's paradox and several related developments. Amos Tversky's "elimination by aspects (EBA)" model is a well-liked alternative.
Exponential formulation
Luce's work followed fifty years of effort to model such rankings. Early work of Thurstone and Spearman postulated "true weights" θ 1 , . . . , θ n for the ordered values and supposed people perceived θ i + ε i , 1 ≤ i ≤ n with ε i independent normal N (0, σ 2 ). They then reported the ordering of these perturbed values.
Yellott [44] noticed that if in fact the ε_i had an extreme value distribution, with distribution function e^{−e^{−x/r}}, −∞ < x < ∞, then the associated Thurstonian ranking model is exactly the Luce model! It is elementary that if the random variable Y has an exponential distribution (P(Y > x) = e^{−x}), then log Y has an extreme value distribution. This gives the following theorem (used in Section 4):

Theorem 2.1. For 1 ≤ i ≤ n, let X_i be independent exponential random variables on [0, ∞) with density θ_i e^{−xθ_i} (so X_i = Y_i/θ_i with Y_i the standard exponential). Then the chance of the event X_1 < X_2 < · · · < X_n is ∏_{i=1}^{n} θ_i/(θ_i + θ_{i+1} + · · · + θ_n), and more generally the ordering of the X_i from smallest to largest follows the Luce model (1).

Proof. Consider the event X_1 < X_2 < · · · < X_n. The chance of this is

θ_1 · · · θ_n / [θ_n (θ_n + θ_{n−1})(θ_n + θ_{n−1} + θ_{n−2}) · · · (θ_n + · · · + θ_1)],

which is indeed equal to ∏_{i=1}^{n} θ_i/(θ_i + · · · + θ_n); the same calculation applies to any ordering of the indices. Thus, the order statistics follow the Luce model (1).
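A short Monte Carlo sketch (ours, not from the paper; the weights are arbitrary illustrative values) makes Theorem 2.1 concrete by comparing the empirical frequency of the increasing ordering with the product formula.

```python
# Sketch: Monte Carlo check of Theorem 2.1 with arbitrary illustrative weights.
import random

theta = [2.0, 1.0, 0.5, 3.0]          # illustrative weights theta_1, ..., theta_n
n = len(theta)
trials = 200_000

# Empirical probability of the event X_1 < X_2 < ... < X_n
hits = 0
for _ in range(trials):
    x = [random.expovariate(t) for t in theta]   # X_i ~ Exp(rate theta_i)
    if all(x[i] < x[i + 1] for i in range(n - 1)):
        hits += 1
empirical = hits / trials

# Luce model probability: prod_i theta_i / (theta_i + theta_{i+1} + ... + theta_n)
luce = 1.0
for i in range(n):
    luce *= theta[i] / sum(theta[i:])

print(f"empirical {empirical:.4f}  vs  Luce formula {luce:.4f}")
```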
For an application of the exponential representation to survey sampling, see Gordon [27].
Tsetlin library
The great algebraist Tsetlin was forced to work in a library science institute. While there, he postulated (and solved) the following problem: Consider n library books arranged in order 1, 2, . . . , n. Suppose book i has popularity θ_i. During the day, patrons come and pick up the book labeled i with probability θ_i/w_n and, after perusing, replace it at the left end of the row. This is a Markov chain on S_n, and Tsetlin [43] showed that it has (1) as its stationary distribution.
The same model has been repeatedly rediscovered; in computer science, the books are discs in deep storage. When a disc is called for, it is replaced on the front of the queue to cut down on future search costs. See Dobrow and Fill [23].
The model (and its stationary distribution) appear in genetics as the GEM (Griffiths-Engen-McCloskey) distribution [24].
Over the years, a host of properties of the Tsetlin chain have been derived. For example, Phatarfod [38] found a simple formula for the eigenvalues and Diaconis [20] found sharp rates of convergence to stationarity (including a cutoff) for a wide class of weights. See further Nestoridi [35]. All of this is now subsumed under "hyperplane walks"; see Section 5.
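For readers who want to experiment, here is a small illustrative sketch (ours; the weights are arbitrary, not from any of the cited applications) of the move-to-front dynamics; it compares the empirical long-run frequency of one arrangement with the Luce formula (1).

```python
# Sketch: Tsetlin library (move-to-front) chain for small n, with arbitrary weights,
# comparing the empirical long-run frequency of one arrangement with the Luce formula.
import random
from collections import Counter

theta = [3.0, 2.0, 1.0]                    # popularity weights for books 1..3 (illustrative)
books = list(range(1, len(theta) + 1))
total = sum(theta)

state = books[:]                           # initial order 1, 2, ..., n (leftmost = "top")
counts = Counter()
steps, burn_in = 300_000, 10_000
for t in range(steps):
    i = random.choices(books, weights=theta)[0]   # patron requests book i w.p. theta_i / total
    state.remove(i)
    state.insert(0, i)                            # replace it at the left end (move to front)
    if t >= burn_in:
        counts[tuple(state)] += 1

def luce(sigma):
    """Stationary probability of arrangement sigma under the Luce model (1)."""
    p, remaining = 1.0, total
    for book in sigma:
        p *= theta[book - 1] / remaining
        remaining -= theta[book - 1]
    return p

sigma = (2, 1, 3)
print("empirical:", counts[sigma] / (steps - burn_in), " Luce:", luce(sigma))
```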
• More generally, π(σ) is monotone decreasing in the weak Bruhat order on permutations.
Order statistics and a natural choice of weights
Many questions in probability and mathematical statistics can be reduced to the study of the order statistics of uniform random variables on [0, 1] by using the simple fact that, if X is a real random variable with continuous distribution function F, then F(X) is uniform on [0, 1]. This implies that standard goodness of fit tests (e.g., Kolmogorov-Smirnov) have distributions that are universal under the null hypothesis (they do not depend on F). If Y is uniform on [0, 1], then − log Y is standard exponential as above, so order statistics of independent exponentials are a mainstream object of study. A marvelous introduction to this set of ideas is in Chapter 3 of [25], with Ronald Pyke's articles on spacings [37] providing deeper results.
With this background, let Y_1, Y_2, . . . , Y_n be independent standard exponentials on (0, ∞). Denote the order statistics by Y_(1) < Y_(2) < · · · < Y_(n). The following property is easy to prove [25].

Theorem 2.3. With the above notation, the spacings Y_(i) − Y_(i−1), 1 ≤ i ≤ n (with Y_(0) = 0), are independent exponential random variables with distributions Y_(i) − Y_(i−1) =_d E_i/(n − i + 1), where E_1, . . . , E_n are independent standard exponentials (density e^{−x} on (0, ∞)).
It follows from our Luce calculations that the chance that the smallest spacing is the first one, Y_(1) − Y_(0), is n/(n + (n − 1) + · · · + 1), that the chance that the smallest spacing is the second one is (n − 1)/(n + (n − 1) + · · · + 1), and so on. Specifically, the ordering of the spacings, from smallest to largest, is given by the Luce model (1) with θ_i = n − i + 1. This classical fact is due to Sukhatme [42]. We will call these Sukhatme weights in the following discussion.
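A quick simulation sketch (ours; parameters illustrative) of Sukhatme's result: the normalized spacings (n − i + 1)(Y_(i) − Y_(i−1)) should each behave like a standard exponential, so their sample means should all be close to 1.

```python
# Sketch: numerical check that (n - i + 1) * (Y_(i) - Y_(i-1)) behaves like a
# standard exponential for each i (illustrative parameters).
import random

n, reps = 10, 50_000
sums = [0.0] * n
for _ in range(reps):
    y = sorted(random.expovariate(1.0) for _ in range(n))
    prev = 0.0
    for i in range(n):
        sums[i] += (n - i) * (y[i] - prev)   # normalized spacing; 0-indexed factor is n - i
        prev = y[i]

means = [s / reps for s in sums]
print("sample means of normalized spacings (all should be near 1):")
print([round(m, 3) for m in means])
```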
Application to poker and the ICM (iterated card model)
In tournament poker (e.g., the World Series of Poker), suppose there are n players at the final table with player i having θ i dollars. It is current practice among top players to assume that the order of the players, as they are eliminated, follows the Luce model (with the player having the largest θ i least likely to be eliminated; thus most likely to win all the money), and so on. This is called the ICM (iterated card model) and is used as a basis for splitting the total capital and for calculating chances as the game progresses. For careful details and references, see Diaconis-Ethier [21], which disputes the model.
Applications to horse racing
In horse racing, players can bet on a horse to win, place (come in second), or show (come in third). The "crowd" does a good job of determining the chances of each of the n horses running to come in first. Call the amount bet on horse i just before closing, θ i . However, the crowd does a poor job of judging the chance of a horse showing. Often, there is sufficient disparity between the crowd's bet and the true odds that money can be made (perhaps one race in four). This is despite the track's rake being 17% of the total. A group of successful bettors uses the θ i 's and the Luce model to evaluate the chance of placing. For details, see Hausch, Lo and Ziemba [29] or Harville [28].
With this list of applications, we trust we have sufficient motivation to ask "what does the distribution (1), π(σ), look like?"
The top k cards
Throughout this section, without loss of generality, assume θ_1 + · · · + θ_n = 1. For θ and k fixed, let P denote the measure induced on the top k cards by the Luce measure, and let Q denote the law of k i.i.d. picks from {θ_i} (sampling with replacement). The measure P is cumbersome to compute directly. On the other hand, the Luce measure is just sampling from an urn without replacement. If {θ_i} are "not too wild" and k is small, then sampling with or without replacement should be "about the same." This is made precise in two metrics.
In Theorem 3.2, {θ i } form a triangular array, but again, this is suppressed in the notation. The remarks below point to non-asymptotic versions.
Proof of Theorem 3.1. From the definitions, where the maximum is over all σ 1 , . . . , σ k distinct (because, if they are not distinct, then we The right-hand side is maximized for σ 1 , . . . , σ k−1 with the largest weights.
Proof of Theorem 3.2. A preparatory observation is useful: This is just the chance that there are two or more balls in the same box if k balls are dropped independently into n boxes, the chance of box i being θ i . This non-uniform version of the classical birthday problem has been well-studied. If X ij is 1 or 0 as balls i, j are dropped into the same box and .
The exponent for the right-hand side of Theorem 3.1 is Simple asymptotics show that for k = c √ n, c > 0, this is and k ≪ √ n suffices for product measure. To be a useful approximation to the first k-coordinates of the Luce measure with k = c √ n, giving a similar approximation in total variation.
is small. To see that this condition is needed, take θ 1 = 1 2 , θ i = 1 2(n−1) for 2 ≤ i ≤ n. For k = 2, This does not tend to zero when n is large. The two-sided bounds for log(1 − x) show Theorem 3.1 is sharp in this sense for general k. Example 3.5. As discussed above, if the infinity distance tends to zero, then total variation tends to zero. Here is a choice of weights θ i so that the total variation convergence holds for the joint distribution of the first k coordinates of the Luce model to i.i.d. is close, but not in infinity distance.
Fix k, 1 ≤ k ≤ n and let θ i = k −7/4 for i ≤ k and Remark 3.6. (a) Our proof of Theorem 3.2 used the Poisson approximation for the non-uniform version of the birthday problem. There are other possible limits which can be used to bound P − Q TV . See [14].
(b) It is easy to see that where e k is the kth elementary symmetric function. From here, Muirhead's theorem shows P − Q TV is a Schur-concave function of θ 1 , . . . , θ n , smallest when θ i = 1 n .
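To make the comparison of P and Q concrete, here is a small illustrative computation (ours; the weights are arbitrary): for small n and k one can enumerate all length-k sequences exactly and compute the total-variation distance directly.

```python
# Sketch: exact total-variation distance between the top-k marginal of the Luce
# model (sampling without replacement) and i.i.d. sampling with replacement.
from itertools import product

theta = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]   # illustrative weights summing to 1
k = 2

def p_without(seq):
    """Luce probability of drawing the labels in `seq` as the top k cards."""
    prob, remaining = 1.0, 1.0
    for i in seq:
        prob *= theta[i] / remaining
        remaining -= theta[i]
    return prob

def q_with(seq):
    """Probability of `seq` under i.i.d. sampling with replacement."""
    prob = 1.0
    for i in seq:
        prob *= theta[i]
    return prob

labels = range(len(theta))
tv = 0.0
for seq in product(labels, repeat=k):
    p = p_without(seq) if len(set(seq)) == k else 0.0   # repeats impossible without replacement
    tv += abs(p - q_with(seq))
print("||P - Q||_TV =", 0.5 * tv)
```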
Introduction
For naturally occurring weights, the bottom k cards behave very differently from the top k cards.
To illustrate by example, consider the Sukhatme weights of Section 2.2.4. The results of Section 3 show that, for large n, σ_1/n has a limiting β(1, 2) distribution.
Using Theorems 3.1 and 3.2, the same holds for σ i /n for fixed i ≪ √ n. Of course, large numbers have higher probabilities, but all values in {1, 2, . . . , n} occur.
In contrast, consider the value of bottom card σ n . Intuitively, this should be small since the high numbers have higher weights. We were surprised to find P(σ n = 1) ∼ 0.516 . . .
In fact, using a result that follows, we computed the full limiting distribution of σ_n. The section below sets up its own notation from first principles.
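As a numerical check (ours, not from the paper), the value 0.516 can be recovered from the exponential representation: assuming the Sukhatme weights θ_i = i used in Section 4.2, the bottom card equals 1 exactly when X_1 is the largest of the X_i with X_i ~ Exp(rate i), and the change of variables q = e^{−x} reduces this probability to a one-dimensional integral that simple quadrature can evaluate.

```python
# Sketch: numerical check of P(bottom card = 1) ~ 0.516 for weights theta_i = i.
# P(X_1 is the largest) = E prod_{i>=2}(1 - e^{-i X_1}) with X_1 ~ Exp(1);
# substituting q = e^{-x} gives the integral of prod_{k>=2}(1 - q^k) over [0, 1].

def integrand(q, terms=2000):
    prod, qk = 1.0, q
    for _ in range(terms - 1):
        qk *= q                 # q^k for k = 2, 3, ...
        prod *= (1.0 - qk)
        if qk < 1e-16:
            break
    return prod

# Composite Simpson's rule on [0, 1]
m = 2000                        # number of subintervals (even)
h = 1.0 / m
total = integrand(0.0) + integrand(1.0)
for j in range(1, m):
    total += (4 if j % 2 else 2) * integrand(j * h)
print("P(bottom card = 1) ≈", total * h / 3)   # expect approximately 0.516
```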
Main result
Let N denote the set of positive integers and let N N be the set of all maps from N into N. Consider the topology of pointwise convergence on N N . This topology is naturally metrizable with a complete separable metric, and so we can talk about convergence of probability measures on this space. Now suppose that for each n, σ n is a random element of the symmetric group S n . We can extend σ n to a random element of N N , by defining σ n (i) = i for i > n. Proposition 4.1. Let σ n be as above. Then σ n converges in law as n → ∞ if and only if for each k, the random vector (σ n (1), . . . , σ n (k)) converges in law as n → ∞.
Proof. Since the coordinate maps on N N are continuous in the topology of pointwise convergence, one direction is clear.
For the other direction, suppose that for each k, (σ_n(1), . . . , σ_n(k)) converges in law as n → ∞. Notice that for any sequence of positive integers a_1, a_2, . . . , the set

K = {τ ∈ N^N : τ(i) ≤ a_i for all i}     (4)

is a compact subset of N^N, since any infinite sequence in this set has a convergent subsequence by a diagonal argument. Take any ε > 0. By the given condition, σ_n(i) converges in law as n → ∞ for each i. In particular, {σ_n(i)}_{n≥1} is a tight family, and so there is some number a_i such that for each n, P(σ_n(i) > a_i) ≤ 2^{−i} ε.
Therefore if K denotes the set defined in (4) above, then for each n, P(σ_n ∉ K) ≤ Σ_{i≥1} P(σ_n(i) > a_i) ≤ ε. This proves that {σ_n}_{n≥1} is a tight family of random variables on N^N. Therefore the proof will be complete if we can show that any probability measure on N^N is determined by its finite dimensional distributions. But this is an easy consequence of Dynkin's π-λ theorem.
The above proposition implies, for instance, that if σ n is a uniform random element of S n , then σ n does not converge in law on N N , because σ n (1) does not converge in law.
Let 0 < θ 1 ≤ θ 2 ≤ · · · be a non-decreasing infinite sequence of positive real numbers. For each n, consider the Luce model on S n with parameters θ 1 , . . . , θ n . Let σ n be the reverse of a random permutation drawn from this model. That is, σ n (1) is the last ball that was drawn and σ n (n) is the first. As we know from prior discussions, an equivalent definition is the following. Let X 1 , X 2 , . . . be an infinite sequence of independent random variables, where X i has exponential distribution with mean 1/θ i . Then σ n ∈ S n is the permutation such that X σ n (1) > X σ n (2) > · · · > X σ n (n) .
Theorem 4.2.
Let σ_n be as above. For each x ≥ 0, let

f(x) = Σ_{i=1}^{∞} e^{−θ_i x},

where we allow f(x) to be ∞ if the sum diverges. Let

x_0 = inf{x ≥ 0 : f(x) < ∞},

with the convention that the infimum of the empty set is ∞. Then σ_n converges in law as n → ∞ if and only if x_0 < ∞ and f(x_0) = ∞. Moreover, if this condition holds, then the limiting finite dimensional probability mass functions are given by the following formula: For any k and any distinct positive integers a_1, . . . , a_k,

lim_{n→∞} P(σ_n(1) = a_1, . . . , σ_n(k) = a_k) = E[ 1{X_{a_1} > X_{a_2} > · · · > X_{a_k}} ∏_{j ∉ {a_1, ..., a_k}} (1 − e^{−θ_j X_{a_k}}) ].

Before proving the theorem, let us work out some simple examples. Suppose that θ_i = i for each i. This corresponds to the Luce model with the Sukhatme weights. Then clearly f(x) < ∞ for all x > 0, and hence x_0 = 0. Also, clearly, f(0) = ∞. Therefore in this case σ_n converges in law as n → ∞. Moreover, by the formula displayed above,

lim_{n→∞} P(σ_n(1) = a) = ∫_0^∞ a e^{−ax} ∏_{j ≥ 1, j ≠ a} (1 − e^{−jx}) dx.

On the other hand, for the case of uniform random permutations, θ_i = 1 for all i. In this case, f(x) = ∞ for all x, and hence x_0 = ∞. Thus, the theorem implies that σ_n does not converge in law (which we know already).
Strangely, σ_n does not converge in law if θ_i = log(i + 1) + 2 log log(i + 1). To see this, note that in this case f(x) = Σ_{i=1}^{∞} (i + 1)^{−x} (log(i + 1))^{−2x}, so that x_0 = 1 while f(x_0) = Σ_{i=1}^{∞} 1/((i + 1)(log(i + 1))²) < ∞, which violates the second criterion required for convergence. This shows that we cannot determine convergence purely by inspecting the rate of growth of θ_i. The criterion is more subtle than that.
What happens if the tightness criterion does not hold? In this case, the formula for the limit of P(σ n (1) = a 1 , . . . , σ n (k) = a k ) remains valid, but it may not represent a probability mass function, i.e., the sum over all a 1 , . . . , a k may be strictly less than 1.
Proof of Theorem 4.2. Take any k ≥ 1 and distinct positive integers a 1 , . . . , a k . Take n ≥ max 1≤i≤k a i . Let E n be the event {σ n (1) = a 1 , . . . , σ n (k) = a k }. Then By the dominated convergence theorem, this gives lim n→∞ P(E n ) = Thus, we have shown that for any k and distinct positive integers a 1 , . . . , a k , lim n→∞ P(σ n (1) = a 1 , . . . , σ n (k) = a k ) exists, and also found the desired formula for the limit. However If this is nonzero, then there is at least one x > 0 for which But this implies that Thus, x 0 < ∞. Next, we show that f (x 0 ) = ∞. Suppose not. Then x 0 > 0, since f (0) = ∞. Fix a positive integer a. For each n ≥ a, and let A n be the event {σ n (1) ≤ a}. Let F n be the event {max i≤n X i ≤ x 0 }. Take any x ∈ (0, x 0 ) and let G n be the event {max i≤n X i ≤ x}. Then P(A n ) ≤ P(A n ∩ (F n \ G n )) + P((F n \ G n ) c ) = P(A n ∩ (F n \ G n )) + P(F c n ∪ G n ) ≤ P(A n ∩ (F n \ G n )) + P(G n ) + P(F c n ).
If the event A n ∩ (F n \ G n ) happens, then max i≤n X i belongs to the interval (x, x 0 ], and one of X 1 , . . . , X a is the maximum among X 1 , . . . , X n . Thus, in particular, one of X 1 , . . . , X a is in (x, x 0 ]. Plugging this into the above inequality, we get (1 − e −θ i x ) = 0. Thus, taking n → ∞ on both sides, we get Now notice that the definition of A n does not involve x. So we can take x ր x 0 on the right, which makes the first term vanish and leaves the rest as it is. Thus, But the assumed finiteness of f (x 0 ) implies that the product on the right is strictly positive. Thus, we get an upper bound on lim n→∞ P(A n ) which is less than 1. But observe that this upper bound does not depend on a. This contradicts the tightness of σ n (1), thereby completing the proof of one direction of the theorem.
Next, suppose that x 0 < ∞ and f (x 0 ) = ∞. We consider two cases. First, suppose that x 0 = 0. Then f (x) < ∞ for each x > 0. But Therefore by the Borel-Cantelli lemma, X i → 0 almost surely as i → ∞. Now take any i and integers n and a bigger than i. Then the event σ n (i) ≥ a implies that max j≥a X j > min {X 1 , . . . , X i } , because otherwise the ith largest value among (X j ) n j=1 cannot be one of (X j ) j≥a . Thus, But the right side is a function of only a (and not n), and tends to zero as a → ∞ because X j → 0 almost surely as j → ∞. This proves tightness of {σ n (i)} n≥1 when x 0 = 0. Next, consider the case x 0 > 0. For convenience, let us define the partial sums Take i, n and a as before. Let x be a real number bigger than x 0 , to be chosen later. The event σ n (i) ≥ a implies that at least one of the following two events must happen: (a) There are less than i elements of (X j ) n j=1 that are bigger than x, or (b) X j > x for some j ≥ a. This gives
Now note that for any
By the inequality 1 − x ≤ e −x , we have g n (x) ≤ e − f n (x) . Thus, Let m be the largest integer such that θ m ≤ 1/(x − x 0 ). Suppose that n ≥ m. Then But m → ∞ as x ց x 0 , and f (x 0 ) = ∞ by assumption. Thus, the above inequality shows that given any L > 0, we can first choose x sufficiently close to x 0 , and then choose n 0 sufficiently large, such that for all n ≥ n 0 , f n (x) ≥ L. Now take any ε > 0 and find L so large that for all y ≥ L, e −y (1 + y + y 2 + · · · + y i−1 ) Choose x and then n 0 as in the previous paragraph corresponding to this L. Then find a so large that which exists since f (x) < ∞. For this choice of a, the above steps show that P(σ n (i) ≥ a) ≤ ε for all n ≥ n 0 . This proves tightness of {σ n (i)} n≥1 when x 0 > 0, completing the proof of the theorem.
Introduction
The Tsetlin library has seen vast generalizations in the past twenty years. In this section, we explain walks on the chambers of a hyperplane arrangement due to Bidigare-Hanlon-Rockmore [9] and Brown-Diaconis [12]. The Tsetlin library is a (very) special case of the braid arrangement. These Markov chains have a fairly complete theory (simple forms for the eigenvalues and good rates of convergence to stationarity). But the description of the stationary distribution, the analog of the Luce model, is indirect, involving a weighted sampling without replacement scheme. Thus the problem of understanding these stationary distributions explicitly, parallel to the questions answered above for the Tsetlin library, remains largely open.
Hyperplane walks
We work in R^d. Let A = {H_1, H_2, . . . , H_k} be a finite collection of affine hyperplanes (translates of codimension one subspaces). These divide R^d into
• chambers (points not on any H_i); let C be the chambers, and
• faces (points on some H_i and on one side or another of the others); let F be the faces.
A key notion is the projection of a chamber onto a face (Tits projection). For C ∈ C and F ∈ F, PROJ C → F is the unique chamber adjacent to F and closest to C (in the sense of crossing the fewest number of H_i's). In the above figure, PROJ C → F = C′.
With these definitions, we are ready to walk. Choose face weights {w_F}_{F∈F} with w_F ≥ 0 and ∑_{F∈F} w_F = 1. Define a Markov chain κ(C, C′) on chambers via: from C, choose F ∈ F with probability w_F and move to PROJ C → F.

Example 5.1 (Boolean arrangement). Take A to be the coordinate hyperplanes {x_i = 0}, 1 ≤ i ≤ d, so that the chambers are the 2^d orthants, labeled by sign vectors in {±1}^d. With a natural choice of face weights, the walk becomes "pick a coordinate at random and replace it with ±1 chosen uniformly." This is the celebrated Ehrenfest urn model of statistical physics. Dozens of natural specializations of these Boolean walks are spelled out in [12].
Example 5.2 (Braid arrangement). Take A = {H_{ij} : 1 ≤ i < j ≤ d} with H_{ij} = {x ∈ R^d : x_i = x_j}; the chambers are points in R^d with no equal coordinates. It follows that the relative order is fixed within a chamber, so chambers can be labeled by permutations. The faces are indexed by "block ordered set partitions": coordinates within a block are equal and all coordinates in the first block are smaller than the coordinates in the second block, and so on.
For the projection, suppose the chamber labeled π is thought of as a deck of cards in arrangement π (with π(i) the label of the card at position i). Suppose d = 5 and the face is F = 1 3/2/4 5. Remove cards labeled 1 and 3 from π (keeping them in their same relative order) and place them on top; then remove the card labeled 2 and place it under cards 1, 3. Finally, remove cards labeled 4, 5 and place them at the bottom of the five card deck. This is PROJ π → 1 3/2/4 5.
The Tsetlin library arises from the choice of face weights w_F = θ_i on the two-block faces F = i / {rest}, 1 ≤ i ≤ n, with all other faces getting weight zero. That is the walk on S_n with "choose label i with probability θ_i and move this card to the top."
Riffle shuffling arises from putting weight 2^{−d} on each of the 2^d faces of the form S / S^c, for S ⊆ {1, . . . , d} (the block ordered set partitions with at most two blocks).
Another way to say this -label each of d cards in the current deck with a fair coin flip, remove all cards labeled "heads" keeping them in their same relative order, and place them on top. This is exactly "inverse riffle shuffling," the inverse of the Gilbert-Shannon-Reeds model studied by Bayer-Diaconis [5].
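A minimal sketch of this description (ours; illustrative, and the function name is our own): flip a fair coin for each of the d cards, pull the "heads" cards to the top preserving relative order, and repeat. Each step is one inverse Gilbert-Shannon-Reeds shuffle.

```python
# Sketch: one step of inverse riffle shuffling, i.e. the face projection described above.
import random

def inverse_riffle_step(deck):
    """Label each card H/T with a fair coin; move the H cards to the top, keeping
    the relative order within the two groups (inverse GSR shuffle)."""
    labels = [random.random() < 0.5 for _ in deck]          # True = "heads"
    heads = [card for card, h in zip(deck, labels) if h]
    tails = [card for card, h in zip(deck, labels) if not h]
    return heads + tails

deck = list(range(1, 9))       # cards 1..8 in order
for _ in range(3):             # three inverse riffle steps
    deck = inverse_riffle_step(deck)
    print(deck)
```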
There are hundreds of other hyperplane arrangements where the chambers are labeled by natural combinatorial objects, and there are choices of face weights so that the walk is a natural object to study. Indeed, any finite reflection group leads to a hyperplane arrangement with H_v being the hyperplane orthogonal to the vector v determining the reflection. Any finite graph leads to a "graphical arrangement." For a wonderful exposition, see Stanley [41].
As said, the Markov chains κ(C, C ′ ) admit a complete theory with known eigenvalues and rates of convergence. We will not spell this out here; see [12], but turn to the main object of interest -the stationary distribution.
Let A be a general arrangement with chosen face weights {w_F}_{F∈F} and κ(C, C′) the associated Markov chain on C, the chambers of the arrangement. π(C) ≥ 0 and ∑_C π(C) = 1 is stationary for κ if ∑_C π(C)κ(C, C′) = π(C′); thus π can be thought of as a left eigenvector with eigenvalue 1. When does a unique such π exist? This π is the analog of the Luce model and becomes the Luce model for the braid arrangement as above. The following result gives a "weighted sampling without replacement characterization" of π(C).

Theorem 5.4 (Brown-Diaconis). Suppose {w_F} are separating. The following algorithm generates a pick from π(C):
• place all {w_F} in an urn.
• draw them out, without replacement, with probability proportional to size (relative to what is left).
• from any starting chamber C (the choice does not matter), project on F |F | , then on F |F |−1 , and so on until F 1 . The resulting chamber is exactly distributed as π(C).
Of course, for the Tsetlin library, this is just the Luce measure on permutations. The following subsection delineates the few examples where something can be said about π.
Understanding π
Suppose a group of orthogonal transformations acts transitively on A preserving κ(C, C ′ ). Then, π(C) is uniform over C (supposing separability). Examples include riffle shuffles, the Ehrenfest urn, and "random to top" (the Tsetlin library with θ i = 1 n , 1 ≤ i ≤ n). For more on this, see [35].
Simple features of π can sometimes be calculated directly. See Pike [36] and its references.
Aside from the present paper, the only other examples that have been carefully studied are in the following graph coloring problems.
Graph coloring
Let G be a connected and undirected simple graph. Let X be the set of 2-colorings (say by ±) of the vertex set of G. Define a Markov chain on X by:
• from x ∈ X,
• pick an edge e ∈ G uniformly at random,
• change the two endpoints of e in x to be both + or both −, each with probability 1/2.
Thus "neighbors are inspired to match, at random times." This is a close cousin of standard particle systems such as the voter model. All the theory works. The process is a hyperplane walk for the Boolean arrangement of dimension D, where D denotes the number of edges in the graph G. All eigenvalues and rates of convergence are easily available.
The only thing open is "what can be said about the stationary distribution?" To understand the question, suppose the graph is an n-point path. The distribution π is far from uniform. All + or all − have chance 1/2 of staying, but + − + − · · · is impossible. Of course, π(x) is invariant under switching + and −. It is easy to show that, under π, the process is a 1-dependent point process (see [10]). This means various central limit theorems are available.
How much more likely is "all +" than "many alternations"? This problem was carefully studied in a difficult paper by Chung and Graham [18] (see also [13]). They show, under π, that all + (or all −) has chance of order C/2^n, but many alternations have chance of order C′/n!. Very nice systems of recursive differential equations appear.
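For intuition, here is an illustrative simulation sketch (ours; the path length and run length are arbitrary) of the edge-flipping dynamics on a short path. It tallies how often the all-plus configuration is visited; the strictly alternating configuration is also tracked and, as noted above, it can never appear once the chain has moved.

```python
# Sketch: edge-flipping walk on the 2-colorings of an n-point path (illustrative).
import random

n = 8
edges = [(i, i + 1) for i in range(n - 1)]
state = [random.choice([+1, -1]) for _ in range(n)]

all_plus = tuple([+1] * n)
alternating = tuple(+1 if i % 2 == 0 else -1 for i in range(n))

count_plus = count_alt = 0
steps, burn_in = 500_000, 10_000
for t in range(steps):
    i, j = random.choice(edges)              # pick an edge uniformly
    s = random.choice([+1, -1])              # make both endpoints + or -, each w.p. 1/2
    state[i] = state[j] = s
    if t >= burn_in:
        cur = tuple(state)
        count_plus += (cur == all_plus)
        count_alt += (cur == alternating)

denom = steps - burn_in
print("pi(all +) estimate:", count_plus / denom)        # order C / 2^n
print("pi(alternating) estimate:", count_alt / denom)   # 0: unreachable after the first step
```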
The point is, even in the simplest case, understanding the stationary distribution leads to interesting mathematics. We offer the present paper in this spirit.
Semigroups and beyond
The past ten years have shown yet broader generalization of the Tsetlin library. Kenneth Brown extended it to idempotent semigroups (allowing walks on the chambers of a building) [11].
Ben Steinberg, working with many coauthors, extended further in the semigroup direction. A convenient reference is the book-length treatment [34].
In another direction, a sweeping generalization of much of modern algebra based on hyperplane and semigroup walks has been developed by Aguiar and Mahajan [1,2,3]. The three large volumes contain hundreds of fresh examples.
In none of these developments is the stationary measure understood. | 2023-06-30T06:43:00.224Z | 2023-06-28T00:00:00.000 | {
"year": 2023,
"sha1": "26972918e5a75987a21afa70cac7cbe78e993186",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "26972918e5a75987a21afa70cac7cbe78e993186",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
52183578 | pes2o/s2orc | v3-fos-license | In Vivo Anticancer Activity of a Nontoxic Inert Phenolato Titanium Complex: High Efficacy on Solid Tumors Alone and Combined with Platinum Drugs
Abstract Due to the toxicity of platinum compounds used in the clinic as anticancer chemotherapies, titanium serves as a safe and attractive alternative. Lately, we introduced a new family of Ti complexes based on readily available phenolato ligands, demonstrating incredibly high hydrolytic stability, with the lead compound phenolaTi demonstrating wide cytotoxic activity toward the NCI‐60 panel of human cancer cell lines, with an average GI50 value of 4.7±2 μm. Herein, we evaluated in vivo: a) the safety, and b) the growth inhibitory capacity (efficacy) of this compound. PhenolaTi was found to be effective in vivo against colon (CT‐26) and lung (LLC‐1) murine cell lines in syngeneic hosts and toward a human colon cancer (HT‐29) cell line in immune‐deficient (Nude) mice, with an efficacy similar to that of known chemotherapy. Notably, no clinical signs of toxicity were observed in the treated mice, namely, no effect on body weight, spleen weight or kidney function, unlike the effects observed with the positive control Pt drugs. Studies of combinations of phenolaTi and Pt drugs provided evidence that similar efficacy with decreased toxicity may be achieved, which is highly valuable for medicinal applications.
Introduction
Cisplatin and its derivatives have been established as significant chemotherapeutic drugs used in the clinic for a variety of cancers. [1][2][3][4] Cisplatin is commonly used in ovarian and lung cancers, mostly in combination with other drugs. [5][6][7][8][9][10][11] Oxaliplatin, a derivative of cisplatin, is often used in colon cancer, especially in combination with fluorouracil. [12][13][14] However, development of resistance to these drugs and the toxicity of the Pt ion led to a search for other metal-based drugs. Among the transition metals tested, two titanium-based complexes, budotitane and titanocene dichloride, reached clinical trials, but failed due to rapid hydrolysis and the formation of undefined aggregates. [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] To overcome these obstacles, our laboratory developed a new family of titanium(IV) complexes based on phenolato ligands. [30][31][32][33][34][35][36][37][38][39][40][41][42] In particular, diaminobis(phenolato)-bis(alkoxo)Ti(IV) (phenolaTi, Figure 1) demonstrated remarkable stability in aqueous media and an extended shelf life, along with enhanced in vitro cytotoxicity toward various cancer cell types. [43] In previous studies, the phenolaTi complex also displayed synergistic or additive characteristics when combined in vitro with cisplatin or oxaliplatin, [44] and an antitumorigenic effect when tested in vivo in mice inoculated with lymphoma growing as ascites. [43] Furthermore, evaluating this complex on the NCI-60 panel of human cancer cell lines (by the Developmental Therapeutics Program (DTP) of the US National Cancer Institute (NCI)) demonstrated significant cytotoxicity (with an average GI50 value of 4.7 ± 2 μM) toward all cell lines tested, particularly colon and lung. Of added significance were our findings that phenolaTi is also active in vitro against cisplatin-resistant, as well as MDR1 (ABCB1) drug-resistant cells, suggesting a distinct mechanism of action. [43] Here, we expand our findings to include the in vivo effect of phenolaTi on different solid tumors,
of both murine and human origin, and compare these findings, in terms of toxicity and efficacy, with those of the two commonly used platinum-based drugs relevant to the tested cancer types: cisplatin and oxaliplatin.
In vitro cytotoxicity
In vitro cytotoxicity of phenolaTi was tested previously toward several cell lines, including human HT-29 colon cancer cells. [43,44] In an effort to increase solubility of the complex in aqueous media, nanoparticles of the phenolaTi complex were obtained as previously described by a rapid conversion of a volatile oil-in-water microemulsion into a dry powder composed of nanoparticles. [36] The in vitro effect of this emulsion on murine colon CT-26 and lung LLC-1 cancer cell lines is demonstrated in Figure 2. Previous studies also gave evidence that the nanoparticle formulation does not significantly impact the cytotoxicity and is itself inactive. [36,38]

In vivo toxicity

Balb/c mice were subjected to PBS (control), phenolaTi (1.6 mg kg−1, the highest concentration soluble without formulation), cisplatin (5 mg kg−1), or oxaliplatin (5 mg kg−1) every other day for four weeks. Whereas mice treated with cisplatin or oxaliplatin demonstrated a variety of deleterious effects, including decreased body weight and grooming, culminating in diminished survival (Figure 3), mice treated with phenolaTi survived the chemotherapeutic challenge and did not demonstrate any of these symptoms. Notably, although increasing the phenolaTi concentration to 40 mg kg−1 in formulation still did not bring about any signs of toxicity in the treated mice (after five injections; treatment frequency: every other day), increasing it further to 80 mg kg−1 caused sudden mortality after the third injection (treatment frequency: every other day).
As cisplatin induces nephrotoxicity, we sought to test whether phenolaTi has a similar effect on the kidney. Interestingly, 72 h following the injection of the drugs (20 mg kg−1, i.p.), marked increases in urine excretion-to-water consumption ratio (Figure 4A), BUN levels (Figure 4B), urinary albumin-to-creatinine ratio (Figure 4C), urine albumin levels (Figure 4D), and urinary KIM-1 (Figure 4E) were observed in C57BL/6 mice treated with cisplatin, compared with vehicle-treated control animals. In addition, a significant reduction in CCr (Figure 4F) was also documented in cisplatin-treated mice. None of these changes were found in phenolaTi-treated mice. Taken together, these findings suggest that phenolaTi is practically nontoxic in mice when compared with both cisplatin and oxaliplatin, which encouraged further efficacy studies.

Figure 4. Nephrotoxicity assessment in male C57BL/6 mice 72 h following a single i.p. injection of vehicle, cisplatin (20 mg kg−1) or phenolaTi (20 mg kg−1). Note that only cisplatin resulted in increased urine excretion-to-water consumption ratio (A), blood urea nitrogen (BUN; B), albumin-to-creatinine ratio (ACR; C), urine albumin levels (D), and urinary kidney injury marker 1 (KIM-1; E), as well as reduced creatinine clearance (F). Data are the mean ± SEM in 4-5 animals per group. *P < 0.05 relative to vehicle-treated group; #P < 0.05 relative to cisplatin-treated group.
In vivo efficacy
A series of tumor growth inhibition studies of phenolaTi were carried out. PhenolaTi was first tested in comparison with cisplatin on Balb/c and C57BL/6 mice inoculated s.c. with CT-26 and LLC-1 cells, respectively (Figure 5). PhenolaTi (0.5-5 mg kg−1) was employed both directly and in nanoparticle form, marked as "phenolaTi F," to increase solubility and enable higher doses, whereby the dose mentioned is the dose of the active agent in the formulated compound. Generally, treatment started immediately following detection of tumors, and the animals were grouped uniformly. Whereas both drugs demonstrated a similar in vivo efficacy in both models, phenolaTi demonstrated no decrease in body weight relative to cisplatin. Interestingly, the presence of the formulation only slightly impacted the efficacy, whereby increasing the dose of active drug in formulation did not increase the efficacy of phenolaTi. Additionally, in another experiment using phenolaTi, phenolaTi F, and cisplatin on Nude mice inoculated with cisplatin-resistant A2780-CP human ovarian cancer cells, a markedly decreased efficacy was observed for the cisplatin-treated relative to the phenolato-treated groups (Figure S1, Supporting Information).
In an additional set of experiments, using the above-mentioned mouse strains and cancer cell lines, the effect of a combination of phenolaTi and cisplatin was addressed. PhenolaTi and cisplatin were compared with regard to tumor growth inhibition (tumor volume and weight) as well as toxicity (body and spleen weight) (Figure 6). In both models, the combinations showed enhanced efficacy. Notably, the spleen weights of mice treated with cisplatin were lower than those of the control group, whereas the spleens of those treated with phenolaTi and phenolaTi F remained similar to those of the control group. Furthermore, within the timeframes used (up to 17 and 15 days post-inoculation for Figures 6A and 6B, respectively), phenolaTi did not enhance the toxic outcomes of cisplatin treatment, while again demonstrating antitumor efficacy.
The studies were expanded to include a human cancer model. Findings similar to the above were observed with immune-deficient Nude mice, inoculated s.c. with human HT-29 colon adenocarcinoma cells, that were subjected to phenolaTi, phenolaTi F, cisplatin, and oxaliplatin, the latter commonly used in the clinic for colon cancer. Mice were subjected to i.p. injections of the drugs every day or every other day. All drugs (or combinations thereof) were antitumorigenic, with the combination of phenolaTi with cisplatin showing increased efficacy. In one experiment (Figure 7A), the side effects of cisplatin were even somewhat diminished when combined with phenolaTi, whereby the body weights remained similar to those in the control group. Thus, phenolaTi consistently showed no side effects relative to the marked toxicity demonstrated by cisplatin alone, although decreased effects also developed with oxaliplatin. Nevertheless, somewhat reduced efficacy was observed for oxaliplatin and its combinations.
Nephrotoxicity test
The Nude mice inoculated with HT-29 cells and treated with phenolaTi or cisplatin (Figure 7B) underwent evaluation for chronic renal dysfunction by morphological damage to the kidney. Histological examination revealed necrosis, protein casts, vacuolization and desquamation of renal tubular epithelial cells in the cisplatin-treated mice. PhenolaTi at 5 mg kg−1 did not cause tubular damage as determined by PAS staining of the kidney (Figure 8), indicating no nephrotoxicity induced by the novel drug.
Discussion
This is the first study to demonstrate that both phenolaTi and its formulated version phenolaTi F effectively impair solid tumor development in both immune-competent and immune-deficient mice injected with murine and human cancer cell lines, respectively, including cisplatin-resistant ovarian carcinoma cells. Of added significance is the lack of apparent toxicity that distinguishes the Ti drug from the commonly used Pt-based chemotherapeutics. No body weight loss or spleen weight changes were detected in phenolaTi-treated animals, as well as no hair loss, grooming changes or any behavioral changes. Moreover, because cisplatin is known to be nephrotoxic, various parameters relating to kidney function were evaluated and none were impaired by the phenolaTi complex. Therefore, the phenolaTi titanium complex is an attractive candidate for anticancer chemotherapy.
Combination therapy is a common methodology, as combining drugs may achieve a desired effect with reduced doses of each drug, thereby reducing side effects. In addition, multiple mechanisms of action can overcome drug resistance. [45][46][47][48] In all combinations studied herein, no antagonistic behaviors were detected, implying unrelated mechanisms, as also supported by previous NCI-60 results. [43] Moreover, in some experiments the combined drugs achieved better efficacy than each drug alone, whereas the side effects of cisplatin remained similar to when the drug was administered alone; therefore, combining phenolaTi with a decreased concentration of cisplatin gave similar efficacy, but with reduced Pt-generated side effects.
In the present study, in order to find an optimal dose for treatment, especially considering the lack of toxicity of phenolaTi, various concentrations of phenolaTi were examined. Interestingly, a clear dose-response was not detected, which may imply that alternative formulations should be evaluated. As the formulation degradation in the animal is presently unknown, it is possible that the active material is released before arriving at its biological target, and due to the limited solubility, is only partially effective. Because the efficacy of phenolaTi at all concentrations used was high (mostly TGI > 50%), and also similar to that of cisplatin, it is also possible that the efficacy recorded is the highest achievable under the experimental conditions.
Conclusions
The phenolaTi complex is an effective anticancer drug as established on several murine and human solid tumor models and is nontoxic at a range of highly effective doses. Taken together with the ability to circumvent drug resistance, this complex is an attractive novel anticancer drug. Further preclinical studies with alternative formulations should specifically establish the therapeutic window and pharmacokinetics of the drug, to enable its subsequent evaluation in clinical settings.
Mice: Balb/c, C57BL/6, and immune-deficient (Nude) female mice (5-6 weeks old) were obtained from Harlan (Israel) and held in an SPF facility (AAALAC accreditation #1285). Mice were treated in accordance with NIH guidelines and with approval by the institutional committee for ethics in animal experimentation.
Growth inhibition assay: Cytotoxicity was measured on CT-26 colon cells and LLC-1 lung cells using the MTT assay as previously described. [49] Approximately 0.6 × 10^6 cells in medium were seeded into a 96-well plate and allowed to attach for a day. The cells were subsequently treated with the tested reagent at 10 different concentrations. Doses for the control Pt-based drugs were selected based on the literature and toxicity limitations. [50][51][52][53] After a standard incubation of 3 days, MTT (0.1 mg in 20 µL RPMI) was added and the cells were incubated for an additional 3 h. After the incubation period, the MTT solution was removed and the cells were dissolved in 200 µL isopropanol. The absorbance at 550 nm was measured by a Spark 10M Multimode Microplate Reader spectrophotometer (Tecan Group Ltd., Männedorf, Switzerland). Relative IC50 values were determined by nonlinear regression with a variable-slope (four-parameter) model using GraphPad Prism 5.04 software, with error values based on the standard deviation of at least 3 × 3 repetitions (three separate measurements conducted on three different days, giving nine repeats altogether). Cytotoxicity measurements on HT-29 human colon cancer cells were published previously. [44]

In vivo studies: For tumor growth inhibition experiments, 5-6-week-old Balb/c, C57BL/6, or immune-deficient (Nude) mice were inoculated subcutaneously (s.c.) with 1 × 10^6 CT-26 colon cancer cells, 5 × 10^5 LLC-1 lung cancer cells, or 5 × 10^6 HT-29 human colon adenocarcinoma cells, respectively. Tumors manifested within 4-10 days post-inoculation, when mice were randomized into groups with similar average tumor dimensions. The mice were then treated 3 to 5 times weekly with the tested drug by intraperitoneal (i.p.) injections. Control groups received phosphate-buffered saline (PBS) or microemulsion solution in PBS devoid of the active drug. For all models, tumor volume (length × width^2 × 0.52) was assessed by caliper measurements every 2 to 4 days. Mice were euthanized once the tumors reached the ethical limit of 15 mm length or if the animals displayed health indicators that met the ethical criteria for sacrifice. Tumor growth inhibition (TGI) was defined as the difference in size between the mean of the control group and the mean of the treated group, expressed as a percentage of the mean of the control group: % TGI = [1 − (mean_drug-treated / mean_control)] × 100. A regimen of an agent that produces at least 50% TGI is generally classified as potentially therapeutically active.
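As a rough illustration of the two formulas quoted above (caliper-based volume and %TGI), the following minimal Python sketch applies them to invented placeholder numbers; these values are not data from this study.

    # Caliper-based tumor volume and %TGI, following the formulas quoted above.
    # All numeric values below are invented placeholders, not study data.
    def tumor_volume(length_mm, width_mm):
        return length_mm * width_mm ** 2 * 0.52   # volume = length * width^2 * 0.52

    control = [tumor_volume(14, 10), tumor_volume(15, 11), tumor_volume(13, 10)]
    treated = [tumor_volume(9, 7), tumor_volume(10, 7), tumor_volume(8, 6)]

    mean_control = sum(control) / len(control)
    mean_treated = sum(treated) / len(treated)

    # %TGI = [1 - (mean_treated / mean_control)] * 100
    tgi = (1.0 - mean_treated / mean_control) * 100.0
    print(f"TGI = {tgi:.1f}% (a regimen with TGI >= 50% is considered potentially active)")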
Histological examination for tubular damage: Following euthanasia, kidneys were removed and fixed with 10% formalin; renal tissues were sectioned (3 µm) and stained with periodic acid-Schiff (PAS) reagents for histological examination. Tubular damage in PAS-stained sections was examined by microscopy (200× magnification) as described earlier. [54]

Nephrotoxicity test: Eight- to 10-week-old male C57BL/6 mice were euthanized 72 h after a single i.p. injection (20 mg kg−1) of cis-diammineplatinum(II) dichloride (cisplatin), phenolaTi, or PBS as a vehicle control. Urine was collected before euthanasia using mouse metabolic cages (CCS2000 Chiller System, Hatteras Instruments, NC, USA). Blood was collected under deep anesthesia by retro-orbital bleeding, and serum and urine levels of creatinine as well as serum urea levels were measured using a Cobas C-111 chemistry analyzer (Roche, Switzerland). Blood urea nitrogen (BUN) was calculated from serum urea levels (BUN mg dL−1 = [urea] mM × 2.801). Creatinine clearance (CCr) was calculated using urine and serum creatinine levels (CCr mL h−1 = [urine creatinine] mg dL−1 × urine volume / ([serum creatinine] mg dL−1 × 24 h)).

[Figure legend fragment: ... and body weight (right). For combinations, the added drugs were applied at the same concentration each as when applied alone; "1/2 cisplatin" or "1/2 oxaliplatin" refers to half the concentration of the Pt drug as applied alone. Data are the mean ± SEM for 5-10 animals per group. *P < 0.0001 relative to control; **P < 0.01 relative to control; #P < 0.0001 relative to the cisplatin-treated group.]

Statistical analysis: Two-way ANOVA with Bonferroni multiple comparisons test was performed for the tumor volume and body weight changes over time using GraphPad Prism (version 5.04 for Windows, GraphPad Software, San Diego, CA, USA; www.graphpad.com). Statistical significance was determined at the level of P < 0.05. One-way ANOVA with Bonferroni multiple comparisons test was performed for the final tumor and spleen weights; results are available in the Supporting Information (Table S1).
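For clarity, the BUN and creatinine-clearance conversions quoted above can be written out as a small sketch; the input values are placeholders chosen only for illustration, not measurements from this study.

    # BUN (mg/dL) from serum urea (mM): BUN = [urea] * 2.801
    def bun_mg_dl(serum_urea_mm):
        return serum_urea_mm * 2.801

    # Creatinine clearance (mL/h) over a 24 h urine collection:
    # CCr = urine_creatinine (mg/dL) * urine_volume (mL) / (serum_creatinine (mg/dL) * 24 h)
    def ccr_ml_per_h(urine_creatinine, urine_volume_ml, serum_creatinine):
        return urine_creatinine * urine_volume_ml / (serum_creatinine * 24.0)

    # Placeholder values for illustration only
    print(bun_mg_dl(8.0))                 # serum urea of 8 mM
    print(ccr_ml_per_h(45.0, 1.2, 0.2))   # hypothetical urine/serum creatinine values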
"year": 2018,
"sha1": "668cc9f7ac8c13100c2531a96244068f2244d5ee",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cmdc.201800551",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "668cc9f7ac8c13100c2531a96244068f2244d5ee",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Energy-efficient Algorithms for Ultrascale Systems
© The Authors 2017. This paper is published with open access at SuperFri.org.

The chances to reach Exascale or Ultrascale Computing are strongly connected with the problem of the energy consumption for processing applications. For physical and economical reasons, the energy consumption has to be reduced significantly to make Ultrascale Computing possible. The research efforts towards energy-saving mechanisms of the hardware have already made energy-aware hardware systems available. However, to achieve a strong energy reduction, hardware mechanisms must be complemented with new energy-efficient software that can exploit them so that the foreseen energy savings actually result. In the software area, there also exists a multitude of research approaches towards energy saving, often concentrating either on the system software level or the application organization level, reflecting the expertise of the corresponding research group. The challenge of reducing the energy consumption dramatically to make Ultrascale Computing possible is so ambitious that a concerted action combining research efforts through all the software levels seems reasonable. In this article, we discuss the current research efforts and results related to energy efficiency in the diverse areas of software. We conclude with open problems and questions concerning energy-related techniques with an emphasis on the application or algorithmic side.
Introduction
The performance of high-end HPC systems has been increased roughly by a factor of 1000 in each of the last two decades. With the world's most powerful systems already well past the Petaflop/s level in 2014, a projection of this trend leads to the prediction that by 2022, Exascale computing will be possible. However, progress towards this goal is threatened by energy issues because, based on the current technology, systems with Exascale performance would use excessive amounts of energy (e.g., Tianhe-2, a 33 PFlops system, needs about 18 MW). Moreover, due to physical constraints, the performance of processing elements can no longer be assumed to follow Moore's Law. Accordingly, because of physical constraints and environmental issues, power and energy consumption are considered to be one of the largest challenges for Exascale systems. The US DOE Exascale Initiative has set a target of 20 MW for the power consumption of an Exascale system. To achieve 1 ExaFLOP using 20 MW, the average energy cost per flop must be limited to 20 picojoules (20 pJ/flop), including all costs for memory accesses and communication [108]. However, the supercomputers on the current Top500 list need between 300 and 8000 pJ/flop.
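The 20 pJ/flop figure follows directly from the power budget, as the following back-of-the-envelope check illustrates:

    # Energy budget per floating-point operation for an Exascale system:
    # 20 MW of power sustaining 10^18 flop/s leaves 20 pJ for each flop,
    # including memory accesses and communication.
    power_budget_w = 20e6        # 20 MW
    performance_flops = 1e18     # 1 ExaFLOP/s
    joule_per_flop = power_budget_w / performance_flops
    print(joule_per_flop * 1e12, "pJ per flop")   # -> 20.0 pJ per flop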
Consequently, reducing the energy consumption for computing has become an increasingly important research topic in recent years, with the research community following two main research directions. The first direction is concerned with power-aware and thermal-aware hardware design, including low-power techniques on all levels, i.e., the circuit and logic level, the processor, the memory, and the interconnects. The second research direction is based on the development of power-aware software for the entire software stack, including operating systems, compilers, applications, and algorithms. This second direction is the topic of this survey article, in which we summarize important contributions towards energy reduction that can be provided by the system software or the programming model and discuss how these contributions can be used for the construction of energy-efficient algorithms and applications. An important step towards a systematic development of energy-efficient algorithms is the energy-oriented investigation of benchmark programs. As an example, the energy characteristics of benchmark programs such as SPEC CPU and PARSEC are investigated and algorithmic techniques for energy saving are considered. The emphasis of our investigation is on large-scale complex computing systems, which will be referred to as Ultrascale or Exascale systems in the following.
The rest of the article is structured as follows: Section 1 gives a brief overview of the hardware mechanisms that can be used to reduce energy consumption. Section 2 deals with system support for energy efficiency and presents some energy metrics as well as novel energy measurement and power management techniques. Section 3 studies how the programming model and the software development process can support the construction of energy-efficient algorithms and applications. Section 4 considers the energy consumption of algorithms and discusses algorithmic techniques to enhance energy awareness at the programming level. The final section concludes the article with a discussion of important research directions that are crucial for reaching energy efficiency in algorithms.
Hardware mechanisms for energy saving
Nowadays, computers include different power management techniques which support the reduction of energy consumption.Examples are dynamic voltage frequency scaling (DVFS), clock gating, and power gating.Moreover, the usage of special instructions and specialized coprocessors can also help to reduce energy consumption.
DVFS [4] can reduce the clock frequency and voltage level of different components of the compute node (processors, DRAM memories, etc.) at the expense of some performance degradation.Currently, DVFS is broadly supported by low-power and high performance processors provided by different manufacturers under different names (e.g.SpeedStep in Intel processors and PowerNow or Cool 'n' Quiet in AMD processors).There are three factors that need to be considered when DVFS is applied: (a) the dynamic power, which has a quadratic relationship with frequency-voltage scaling; (b) the static power, which increases exponentially with the voltage; and (c) the performance, which has a linear relationship with the frequency.Because of its negative performance impact, DVFS may only be effective for non CPU-bounded applications, see Section 4.1 for more details.
Clock Gating [97] reduces the power consumption by disabling the clock in those parts of the circuit that are idle or, like in the case of flip-flops, maintain a steady state that does not need to be refreshed.The power used to drive the clock signal can represent more than a half of the overall power consumption.Therefore, clock gating can potentially achieve a significant energy reduction.This technique can be controlled both at hardware and software level.Hardware-level approaches typically provide a finer granularity, allowing also to disable components inside a functional block.Software-level approaches are usually applied at entire functional blocks, but they allow more elaborated energy-saving policies.
Power gating [96] is a more aggressive approach in which a functional block is disconnected from the power supply, powering off all its components.Nowadays, existing processors contain clock gating logic managed by a power reduction policy for almost every functional block.For some components clock gating is used in combination with power gating features.Given that the entire functional unit is disconnected, power gating achieves a better power reduction than clock gating.However, given that the functional unit state is erased, it is necessary to provide mechanisms for saving and restoring the states of the functional units, which increases the complexity and complicates resource utilization when applying power gating to active components that need to preserve their state.
The use of special instructions can also help to reduce the energy consumption for computeintensive applications.Examples are the SIMD vector instructions provided by the AVX (advanced vector extensions) instructions for the x86 architecture or the AES (advanced encryption standard) instructions to support encryption and decryption.Those instructions lead to an effective use of the corresponding transistors, thus reducing the energy consumption per operation [71].
Similarly, the use of specialized coprocessors or accelerators, such as GPU (Graphics Processing Unit), MIC (Many Integrated Cores) or FPGA (Field Programmable Gate Array), can also lead to a smaller energy consumption compared to general purpose CPUs.As an example, the NVIDIA "Fermi" generation of GPUs requires about 200 picojoules of energy to execute one instruction, which is 10x less than for the most efficient x86 CPU.
System support for energy efficiency
In order to obtain the benefits offered by an Ultrascale or Exascale system, it will be increasingly important to provide system services for an effective management of the system resources on behalf of the applications.Those services can be offered to the applications through the programming environment or through specialized libraries, but they should be as transparent to the user as possible to support application porting and sustainability.As energy is a cross-layer issue, several aspects of the system software and the operating system should be involved in energy efficiency resource management, but it is also paramount to provide metrics and facilities to monitor and express energy at the processor and system level.
Resource management
Currently, power requirements are driving the co-design of HPC systems, which in turn sets the course for a radical change in how to express the need for increasingly scarce resources, as well as how to manage them.Knowing that Ultrascale and Exascale systems will inevitably rely on a high-level heterogeneity of resources and new HPC usage challenges (such as providing performance hand-in-hand with energy efficiency), they need to become more and more self-aware with respect to performance, energy and resilience [36].New usages, like many-task computing paradigms, will force the system to host, schedule, and load balance millions of heterogeneous tasks.Existing research provides analytical studies quantifying and comparing expected performance of new solutions proposed.
Another approach is to use layered solutions, such as the use of algorithm-specific checkpointing combined with system-level checkpointing [19], or to use imperfect fault predictors [10].Following this trend, decentralized approaches for a multi-objective, energy-aware resource management will be a likely replacement for centralized approaches when these do not scale up.Gossip-based [65] and hierarchical approaches [124] are examples that have been proposed for load balancing.However, the scale to which they have been evaluated and the complexity of their balancing requirements is far from what is expected for Exascale.
Energy metrics
In order to properly evaluate a specific system property, it is necessary to define corresponding metrics. With regard to energy, the main basic metric is usually the unit of work or amount of heat transferred, measured in Joule (J), while the power, i.e., the amount of energy transferred per unit of time, is measured in Watt (W).
In the computing system context, several initiatives related to energy measurement and management have been started, mostly grouped under the umbrella of Green IT. Some of them focus on distributed systems, aiming at identifying specific metrics for assessing energy efficiency in these systems. A good example is GreenGrid, which is "an association of IT professionals seeking to dramatically raise the energy efficiency of datacenters through a series of short-term and long-term proposals" [104]. They propose to use two main metrics for evaluating energy efficiency in datacenters: Power Usage Effectiveness (PUE) and Datacenter Infrastructure Efficiency (DCiE) [11,16]. PUE is defined as the ratio of the total facility energy to the IT systems energy, PUE = E_total facility / E_IT systems, while DCiE is specified as its reciprocal, DCiE = E_IT systems / E_total facility. The energy for the total facility is the overall amount of energy consumed by the whole data center, including IT systems and facilities. The IT systems energy is the energy consumed by just the IT equipment, such as processing, storage, and network components for data management and processing. The facilities include all other subsystems, such as UPS and power management systems, cooling systems, lighting systems, etc. Other interesting initiatives in the direction towards widely used metrics and, possibly, standards are Energy Star [110] and SPECpower [64]. Energy Star specifies specific rules, provides a rating for energy efficiency, called the Energy Star score, and is based on SPECpower. SPECpower is mainly a benchmark for evaluating the energy efficiency of server-class compute equipment. Several Performance-per-Power metrics have been proposed which report the ratio between a given performance metric (such as response time, throughput, utilization, delay, bandwidth, etc.) and the energy consumed for obtaining such a performance. An example is the metric transactions per second per Watt (TPS/Watt), using throughput as the performance metric.
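A minimal sketch of the two datacenter metrics, using invented energy readings rather than real measurements, could look as follows:

    # PUE = total facility energy / IT equipment energy
    # DCiE = IT equipment energy / total facility energy = 1 / PUE
    def pue(total_facility_mwh, it_equipment_mwh):
        return total_facility_mwh / it_equipment_mwh

    def dcie(total_facility_mwh, it_equipment_mwh):
        return it_equipment_mwh / total_facility_mwh

    # Hypothetical monthly readings: 1500 MWh total, 1000 MWh for the IT equipment
    print(pue(1500.0, 1000.0))    # 1.5
    print(dcie(1500.0, 1000.0))   # ~0.67, i.e. 67% of the energy reaches the IT load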
For the particular characteristics of Exascale platforms, specific energy efficiency metrics are not yet specified and a metric that is able to take performance, scalability, as well as energy efficiency into account still needs to be introduced.
Energy measurement techniques
A major challenge for energy measurement and monitoring is their use on heterogeneous platforms through a standard access monitoring interface.Standardized monitoring interfaces for energy and resource utilization are necessary to support local and global control decisions and should be able to handle the diversity of hardware devices, such as GPUs, embedded CPUs, and nonvolatile low-power memory and storage.An example for a standardized access to performance counters is the PAPI interface, which currently can be used on a large number of platforms including the Intel Core i7 architecture, NVIDIA GPUs, the Intel Xeon Phi and IBM Blue Gene/Q systems [78].
For CPU power monitoring, one approach consists in finding the relationship between the power consumption and the utilization level.The utilization level is computed from different workloads that stress different components of the system (CPU, memory, I/O, etc.).In the literature [31,81] it has been shown that the power consumption and the utilization level are related linearly, regardless of the type of workload and the configuration of the processor, e.g. in terms of operational frequency or the number of active cores.
As an alternative, the CPU performance can be indirectly modeled by means of hardware counters that capture different hardware events, such as the number of cache accesses or the number of instructions issued [98].Performance monitoring counters do not require program modifications or an intrusion into the hardware structure and they can accurately reflect the activity levels of the processor or the memory subsystem.An example of this modeling technique is given in [66], where the event-based power prediction is enhanced by using the correlation of the power consumption with the change in core die temperature and the ambient temperature.Recent Intel CPU architectures include the Running Average Power Limit (RAPL) energy sensors to measure the power consumption of different components, including the CPU and the memory controller.The use of these counters is an efficient and low overhead alternative to measure the power of a system using specialized power meters [45].Energy modeling approaches and a comparison with measured energy values are discussed in [88].
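On recent Linux systems, the RAPL counters mentioned above are typically exposed through the powercap sysfs interface; the following sketch, which assumes such a node with readable counters and a single package domain at intel-rapl:0, estimates the package energy consumed by a code region:

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0"   # package 0; the path may differ per system

    def read_uj(path):
        with open(path) as f:
            return int(f.read().strip())

    def measure_energy(workload):
        """Return (energy in joules, elapsed seconds) for one package, wrap-corrected."""
        max_range = read_uj(f"{RAPL}/max_energy_range_uj")
        e0, t0 = read_uj(f"{RAPL}/energy_uj"), time.time()
        workload()
        e1, t1 = read_uj(f"{RAPL}/energy_uj"), time.time()
        delta = e1 - e0 if e1 >= e0 else e1 + max_range - e0   # handle counter wrap-around
        return delta / 1e6, t1 - t0

    # Example: energy of a simple compute loop (illustrative workload only)
    energy_j, secs = measure_energy(lambda: sum(i * i for i in range(10**7)))
    print(f"{energy_j:.2f} J over {secs:.2f} s -> {energy_j / secs:.1f} W average")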
Power management techniques
The Advanced Configuration and Power Interface (ACPI) [26] is an open standard for device power management co-developed by Hewlett-Packard, Intel, Microsoft, Phoenix, and Toshiba.It specifies different global and device energy states, which range from fully operational to completely powered off, and provides an interface to manage and monitor the power of the infrastructure components.ACPI can be accessed by the user with the aid of user-defined policies, such as specifying an application power level, or by the operating system, which applies power policies based on the platform load, such as switching the components to a low power state after a time of inactivity.
There are also advanced tools that provide support for a real-time power management of the infrastructure components, including servers, storage, network, and cooling equipment.Examples are the Intel Datacenter Manager [28], the IBM Systems Director Active Energy Manager [27], and the HP Power Advisor [50].They provide a single cross-platform view, can be used at multiple hierarchy levels, and support different energy policies, such as power capping, power saving and generation, and the analysis of power history data logs.In addition, most of these tools are fully integrated in the infrastructure management software, allowing it to perform energy-aware tasks, such as workload scheduling.
Several approaches address the improvement of the system energy efficiency.An example is given in [44], where DVFS is used to control the CPU power based on different policies which are applied considering the number of executed instructions, the memory traffic, and the consumer power of the processor.Memscale [33] applies dynamic frequency scaling to the complete out-of-chip memory subsystem (memory controller, memory channel, and DRAM device), as well as dynamic voltage scaling to the memory controller.It includes a control algorithm that minimizes the overall system energy based on performance counter monitoring.This work was extended [32] to multiple memory devices and controllers.[62] presents an energy model for the execution of a parallel conjugate gradient method split between the CPU and the GPU.The approach considers the CPU, GPU, and RAM energy consumption and uses the information to perform an energyaware workload distribution minimizing the execution time.A more global approach is followed in [24], where a runtime optimization technique is presented for improving energy efficiency in processors, disks, and networks.
The effectiveness of DVFS is restricted by the range of the minimum and the maximum voltages at which the transistors can operate.Moreover, DVFS is difficult to apply when workloads of different characteristics are executed.To overcome these problems, the idea of complementing DVFS with power gating has been proposed.[75] introduces PGCapping, a system that integrates power gating with DVFS for chip multiprocessors.[1] presents a gating-aware scheduler and a power gating scheme for GPGPU execution units that achieve significant energy saving in simulations.
When considering large computing infrastructures, the power proportionality arises, besides the energy efficiency, as a crucial concept.Power-proportionality means that the system's energy usage is proportional to its workload.In this way, the machine would consume no power in the idle state and would gradually increase the power consumption as the workload increases.An Exascale architecture should be both energy efficient and power proportional.However, existing systems are far from fulfilling this requirement.Consequently, it is necessary to develop new hardware and software tools that help to achieve it [38].Examples for such tools are described in [105] and [6].The first one shows a power-proportional distributed storage system for data centers that powers down servers according to the load level and considering the performance degradation, availability and data consistency.The second one presents a distributed filesystem based on the Hadoop DFS.It provides power proportionality minimizing the number of active nodes, including power-proportional capabilities for failures such as minimizing the number of nodes that need to be restored when there is a failure of the filesystem.[47] describes a solution to provide energy proportionality for networks by dynamically adapting the energy consumption of a network through traffic patterns analysis and by finding minimum power network subsets.A survey of techniques that aim to improve the energy efficiency of computing and network resources is given in [80], covering techniques that operate both on parallel and distributed system levels.
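A simple way to make the notion of power proportionality concrete is a linear utilization model; the sketch below, with assumed idle and peak power values for a typical server, compares such a machine against the ideal proportional profile:

    # Linear power model: P(u) = P_idle + (P_peak - P_idle) * u, with utilization u in [0, 1].
    # An ideally power-proportional machine would instead consume P_ideal(u) = P_peak * u.
    P_IDLE, P_PEAK = 120.0, 300.0    # Watts, assumed values, not measurements

    def power(u):
        return P_IDLE + (P_PEAK - P_IDLE) * u

    def ideal_power(u):
        return P_PEAK * u

    for u in (0.0, 0.25, 0.5, 1.0):
        print(f"u={u:.2f}: model {power(u):6.1f} W vs. proportional {ideal_power(u):6.1f} W")
    # The gap is largest at low utilization, which is why powering down or
    # consolidating lightly loaded nodes matters so much at large scale.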
Monitoring and Benchmarking
With specific regard to Exascale platforms, there are three main challenges for energy efficiency metrics and monitoring: (1) scalability, (2) standard access monitoring methods, and (3) its application on heterogeneous platforms [53].Monitoring everything produces extremely large trace files making their analysis prohibitive.Alternatives are statistical models [83], time series approaches [67], and data filtering with a distributed analysis that produces small trace files with a small runtime overhead [60,84].
At node level, it is crucial to find the relationship between the power consumption and the utilization level computed, which seems to be linear [31,81].As discussed above, one possibility is to use hardware counters to model the CPU performance [98] and Intel RAPL to measure the CPU and memory controller power consumption [45,66].At the whole compute infrastructure level, power proportionality arises as a crucial concept [39,70].Even if the current hardware components are not power-proportional, we can see in the literature examples of system wide [47,105] and system specific models to achieve power-proportionality.In any case, standardized monitoring interfaces for energy and resource utilization are needed to handle the diversity of hardware and support local and global control decisions based on well-known and accepted metrics, see Section 2.3.
The energy metrics collected at node and system level must be provided to the operating system and the system software to optimize important energy-consuming operations in extremescale systems.One of these operations is data movement, as it is recognized that today data movement and storage uses more power than computation in many HPC usages.As an example, [37] indicates explicitly that managing data movement may be an energy-efficiency technique.
Coupled to monitoring frameworks, benchmarking provides useful and complete tools for the proper evaluation of distributed systems.Many stable benchmarking suites are available for HPC systems, such as the NAS Parallel Benchmarks (NPB) [13] and LINPACK [35], which for example is used for the performance evaluation and comparison of the Top500 list entries, see www.top500.org.There are also some interesting attempts towards standards in benchmarking.The most authoritative ones are the Standard Performance Evaluation Corp (SPEC) [101] and TPC [109].The Standard Performance Evaluation Corp (SPEC) has developed solutions that can be adopted in distributed and cloud environments, such as SPECvirt, SPEC SOA, and SPECweb.With specific regard to energy, SPEC define the SPECpower ssj2008 benchmark [64], considering performance and energy efficiency altogether.TPC is a non-profit corporation defining transaction processing and database benchmarks through verifiable TPC performance data to the industry.The TPC benchmarks can be considered as application-level benchmarks in distributed environments and they are a basis for the evaluation of the actual performance offered by standard transactional software on the top of (physical or virtual) machines.
Programming models and software development
An important aspect for the development of energy-aware applications is the use of suitable programming models.This is the main topic of this section, along with a coverage of energyaware scheduling algorithms and software development approaches.
Hierarchical programming models
Applications for Exascale computing are expected to incorporate multiple programming models. For example, a single application might incorporate components that are based on MPI and other components that are based on other paradigms. The particular combination of programming models may differ over time (e.g., different execution phases of the application) or space (e.g., some of the nodes run MPI, and others run shared-memory libraries). It is widely believed that to cope with these models, Exascale systems will require support for hierarchical programming models, which may include more than two levels of today's models (such as MPI + OpenMP) [42]. In Exascale systems, hierarchies with a higher number of levels and a larger degree of parallelism will coexist with more heterogeneous hardware, making load balancing and communication reduction a critical task. Those features can be addressed through functional portability and performance portability. Even though functional portability can be achieved due to standardized environments such as MPI or OpenCL, performance portability is often a crucial issue, as the required abstractions are still not present in the current HPC code generation tools. Performance portability for future systems might require a durable abstraction expressed in programming models that do not exist for HPC code generation so far [58].
Examples for existing hierarchical programming models are the TwoL [85] and the Tlib [86] approaches, which are both defined on top of MPI and allow a flexible and hierarchical grouping of processes into groups each of which can execute multi-processor tasks (M-tasks).The Mtasks are the basic execution units and each M-task can be executed by an arbitrary number of processing cores.In the TwoL approach, the M-tasks can be combined using a coordination language, which allows the specification of input-output and control dependences between Mtasks.M-tasks without a dependence between them can be executed in parallel on disjoint groups of processors.The runtime system can select a suitable number of processing cores for each M-task and can decide which of the M-tasks are executed in parallel.If the internal M-task communication is based on collective MPI operations, it is often advantageous to execute M-tasks in parallel as this reduces the communication overhead.This approach can also be used to enable an energy-efficient execution of M-task programs [87], since the runtime system can perform the mapping of M-tasks to cores based on an energy minimization instead of a performance maximization goal.It is also possible to provide different implementations for M-tasks, such as a standard MPI implementation, a GPU implementation and a specialized implementation for MIC processors, and select the most energy-efficient implementation at runtime, depending on the hardware resources available.To support such an energy-efficient mapping, it is important that the runtime system has access to suitable monitoring facilities (see Section 2.5) or can use suitable energy metrics (see Section 2.2).
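The runtime selection of the most energy-efficient M-task implementation described above can be sketched as follows; the variant names, energy estimates, and selection hook are hypothetical and only illustrate the decision logic, not the actual TwoL or Tlib interfaces.

    # Hypothetical sketch: pick the implementation variant of an M-task that the
    # runtime system expects to consume the least energy on the available resources.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Variant:
        name: str
        run: Callable[[], None]
        estimated_energy_j: float    # from a model or earlier measurements

    def select_and_run(variants: Dict[str, Variant]) -> str:
        best = min(variants.values(), key=lambda v: v.estimated_energy_j)
        best.run()
        return best.name

    variants = {
        "mpi_cpu": Variant("mpi_cpu", lambda: None, estimated_energy_j=420.0),
        "gpu":     Variant("gpu",     lambda: None, estimated_energy_j=260.0),
        "mic":     Variant("mic",     lambda: None, estimated_energy_j=310.0),
    }
    print("selected:", select_and_run(variants))   # -> gpu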
The M-task model can also be used to support performance portability, since the same M-task program can be executed on different hardware platforms and the runtime system is responsible for the appropriate mapping to the hardware resources.For different hardware platforms, the runtime system can select different mappings and different M-tasks could be executed in parallel, if this results in a faster or more energy-efficient execution.
Many task approaches
The ever-increasing performance of supercomputer systems is enabling the emergence of new problem-solving methods that require an efficient execution of many concurrent and interacting tasks, usually integrating data analysis and visualization, to maximise the productivity on Exascale systems [37].Hence, Exascale systems will need new problem-solving approaches beyond hierarchical models.
One of the most promising candidate approaches is the many-task programming model, with the workflow model currently being the most widely used many task-like technique.An example of these tools is Swift/T, a description language and runtime system that supports the dynamic creation and execution of workflows with varying granularity on high-component-count platforms.The Swift/T system [117] provides an asynchronous dynamic load balancer (ADLB), which dynamically distributes the tasks among the nodes [119].The problem is that communication and synchronization for shared global resources (as files) could degrade performance in case of the absence of data locality.Current research has shown that emerging high-speed networks outperform physical disk solutions, which reduces the relevance of disk locality [7].Thus, most solutions provided for ultrascale will be based on the intensive usage of RAM and NVRAM memory near the processors.However, existing software engineering methods and models do not provide a mechanism to express energy aspects in applications and they still rely on system services that are not energy-aware.
Energy-aware scheduling algorithms
In order to cope with energy saving while considering the particularities of Exascale systems, i.e. various levels of heterogeneity, fault tolerance, strong energy consumption constraints, it is mandatory to move towards an energy-aware resource management [22], including scheduling algorithms that are able to handle various levels of heterogeneity and the diversity of available resources [73].
Power-aware scheduling algorithms for homogeneous systems are already available for more than one decade [46,51,72].Popular approaches commonly use DVFS to reduce the power consumption of processing elements during idle times and during slack times of non-critical jobs [115].Other approaches even power off the entire computing node with only a small impact on the resulting makespan [76].
In many HPC usage scenarios, data movements consume more power than computations do, so that reducing data movement can be considered an energy-efficiency technique [37].Therefore, energy-aware scheduling algorithms should guide the system to schedule computation jobs to the nodes containing the required data, thus avoiding costly data movement and considering the trade-offs between data locality and load balance.While traditional task clustering algorithms reduce the makespan by zeroing edges of high communication costs, a Power Aware Task Clustering (PATC) algorithm has recently been proposed [115] that guides the edge zeroing process with the objective of reducing the power consumption.The initial experiments were performed on homogeneous small clusters (100 PEs), where promising results have been obtained, specifically yielding up to 39% energy saving, which is more than double compared to 16% obtained on EADUS and TEBUS algorithms [122] that do not use DVFS.Energy-aware algorithms have also been developed and tested against heterogeneous clusters.The EETCS (Efficient-Energy based Task Clustering Scheduling) algorithm [69] significantly reduces the power consumption by shrinking the communication energy consumption when allocating parallel tasks to heterogeneous computing nodes.Another example is RADS (Resource-Aware Scheduling Algorithm with Duplication) [79], which saves up to 15% resource power consumption compared to similar algorithms.
Current scheduling and load balancing mechanisms are using meta-heuristics to solve the multi-criteria optimization problem taking into account the overload of the system and the incoming task requirements.Traditional multi-objective optimization algorithms, including population based metaheuristics aiming to estimate Pareto optimal sets, require an adaptation in order to be effective in the case of ultrascale dynamic optimization.In [22] a two-stage approach is proposed: First, a list of preliminary schedules resulting from a static multi-criteria optimization method is computed at design time.Then the schedules are adapted, using low cost operations, according to the particular requirements of the running applications and the characteristics of the available resources.However, the approach has not been tested in the context of large scale dynamic scheduling.Another aspect to be considered is the exploration of the relationship between tasks and computing resources and the proper usage of data location [14].Existing scheduling techniques for Exascale rely on various combinatorial optimization algorithms.For example, in [103] a new approach is proposed for simultaneously reducing the energy consumption while maximizing system performance.The method consists in computing the Pareto front of optimal solutions to the bi-objective problem of minimizing energy and makespan for a bag of tasks allocated to a set of heterogeneous compute resources.
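For the bi-objective view mentioned above, a schedule is Pareto-optimal if no other schedule is better in both makespan and energy; a small sketch of extracting the Pareto front from candidate schedules, using invented numbers, is given below.

    # Each candidate schedule is characterized by (makespan in s, energy in J); both
    # objectives are minimized. A candidate is dominated if another candidate is no
    # worse in both objectives and strictly better in at least one.
    def pareto_front(candidates):
        front = []
        for c in candidates:
            dominated = any(
                o[0] <= c[0] and o[1] <= c[1] and o != c for o in candidates
            )
            if not dominated:
                front.append(c)
        return sorted(front)

    schedules = [(120, 9.0e5), (150, 6.5e5), (110, 1.2e6), (150, 7.0e5), (200, 6.4e5)]
    print(pareto_front(schedules))
    # -> [(110, 1200000.0), (120, 900000.0), (150, 650000.0), (200, 640000.0)]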
The ultrascale dimension, the heterogeneous architecture of current parallel systems, and the need to re-schedule due to system faults have not been taken into consideration yet, especially not together with energy awareness.The task scheduler needs to support locality-awareness and be capable of supporting function shipping and data shipping as interchangeable alternatives.For this purpose, all data movement operations need to be abstracted as asynchronous tasks whose completion can trigger additional computation tasks and data movements.Moreover, the current slow meta-heuristic based mechanism should be redesigned to ensure a real-time reaction especially in the case of re-scheduling.A set of strategies, such as minimal energy consumption with deadline matching in scheduling mechanism assuming no faults, or energy aware rescheduling in the case of faults without time limits, should be defined as working conditions for the resource management system.
Energy-aware software development process
In a complex and highly distributed context, energy awareness should be applied at any level, both hardware and software, and within them.It needs to be addressed at different layers and services adopting a holistic approach.With regard to software, energy efficiency and optimization could be implemented and enforced at several levels: (a) at low level, through specific scheduling algorithms; (b) at code level, by optimizing programs and compilers and also by adopting specific, e.g.hierarchical, programming models and design patterns; and (c) at higher levels, in the software development process.In the latter case, the goal is to design the overall software architecture taking into account energy aspects and metrics, thus also considering a possible deployment in an Ultrascale infrastructure for the overall software.This approach comes from software performance engineering [99,100], which is a systematic, quantitative technique to construct software systems that meet performance objectives.It includes performance requirements and goals into a software development process, a technique also known as performance-driven development [68,74,77].As in the test-driven development [15], the performance-driven development is an iterative process composed of development and performance evaluation phases at each cycle.
The idea of an energy-aware software development process, which aims at enabling and taking into account energy efficiency and other important deployment properties and requirements at the early stages of the software lifecycle, is not new in literature but quite unexplored, especially in large scale parallel and distributed contexts.The first attempt in such a direction is green software engineering [21,61] and development [2,95].All of those approaches mainly suggest adopting a green, sustainable software development process taking into account energy properties, but so far just provide some suggestions and guidelines for this purpose, mainly at lower levels, e.g.code, programming models, or design patterns.A slightly more concrete solution is discussed in [106] where a reference model for sustainable software development, called GreenRM, is defined according to the ISO/IEC 14001 environmental requirements.But also in this case a model mainly containing only some guidelines is defined.Therefore, addressing energy, green and sustainability issues in the software development process is still an open problem.
Energy-efficient algorithms
As stated in the introduction, a huge reduction in the average energy cost per flop is required for Exascale systems [108].There have been large efforts on the hardware side which aim at a reduction of the energy consumption, including new memory systems and new processor technologies with power management, see Section 1.However, while these techniques can help to significantly reduce the energy consumption of unloaded systems, their contribution to the energy consumption of loaded systems is quite limited.Most of the efforts for reducing the energy consumption of loaded systems are directed towards an efficient control of the power management techniques according to the system load, but the contribution of these techniques may not be sufficient to reach the 20 MW target for Exascale systems.
A major problem in current approaches is that the algorithms or the applications being executed have no direct interaction with the hardware system to express or control energy needs.Such an interaction is needed to bring energy-awareness to the application level and to support a goal-directed use of algorithmic changes or transformations of the application code.In this section, we give an overview of the most important aspects for the energy awareness of algorithms, including the energy characteristics of algorithms, the effect of algorithmic changes and transformations on the resulting energy consumption, as well as adaptivity approaches used to cope with the increasing heterogeneity of HPC systems resulting from the integration of accelerators such as GPU, MIC or FPGAs.Finally, we show some specific examples for energyefficient algorithms from different areas.
Energy characteristics of algorithms
Hardware mechanisms introduced during the last years to reduce the overall energy consumption of processors (see Section 1) will also play an important role for future Ultrascale systems.Thus, it is important to study the influence of these techniques on algorithms and applications.In particular, it has to be investigated whether these techniques can be employed to reduce the energy consumption of algorithms and which specific characteristics of algorithms have an effect on the resulting energy consumption.If the influencing factors are known and can be captured quantitatively, this information can be used to tune applications towards a smaller energy consumption by applying suitable algorithmic transformation techniques.
The energy consumption E of an algorithm can be described by the power consumption P of the execution resources employed and by integrating P over the execution time of the algorithm: E = ∫_{t0}^{tmax} P(t) dt. Typically, the power consumption varies during the execution time of the application, depending on the specific execution situation of the application and the resulting usage of the different execution resources. The variations of the power consumption during the execution time can be measured in detail with specialized power meters and power acquisition systems [90] (see Section 2.2), but hardware counters can be used as well (e.g., the Intel RAPL interface). However, the specific interaction of computation and power consumption is complex [90], and it is challenging to predict which algorithmic properties lead to which amount of power consumption at a specific point in the execution time.
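Given a sampled power trace, the integral above is usually approximated numerically; a minimal trapezoidal sketch over assumed samples:

    # Approximate E = integral of P(t) dt from a discretely sampled power trace
    # using the trapezoidal rule; timestamps in seconds, power in Watts.
    def energy_from_trace(timestamps, power_samples):
        energy = 0.0
        for i in range(1, len(timestamps)):
            dt = timestamps[i] - timestamps[i - 1]
            energy += 0.5 * (power_samples[i] + power_samples[i - 1]) * dt
        return energy   # Joules

    # Hypothetical 1 Hz trace of a short run
    ts = [0, 1, 2, 3, 4]
    pw = [95.0, 140.0, 150.0, 145.0, 100.0]
    print(energy_from_trace(ts, pw), "J")   # -> 532.5 J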
The power consumption of processors comprises a dynamic and a static part [56]. The dynamic power consumption P_dyn is related to the switching activity of the processor during execution and can be expected to be smaller during processor idle periods. The static power consumption P_stat captures the leakage power, which becomes more important for processors with smaller transistor sizes, and it is present even if there is no switching activity of the transistors. It has been stated that in 2014, 25%-40% of the total power consumption in server chips was caused by leakage power [48]. For DVFS processors, the dynamic power consumption increases significantly with the operational frequency f, and often a dependence P_dyn(f) = γ · f^α with 2.5 ≤ α ≤ 3 is assumed, where γ is a suitable parameter. The dependence of the static power consumption P_stat on f is typically quite small; it is often neglected and P_stat is assumed to be constant [56].
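Under the stated model, the energy of a fixed workload executed at frequency f can be explored numerically; the sketch below assumes a workload whose execution time scales as W/f and uses purely illustrative parameter values for γ, α and P_stat.

    # E(f) = (P_stat + gamma * f**alpha) * (W / f): static plus dynamic power,
    # multiplied by an execution time that is inversely proportional to f.
    # All parameter values below are illustrative assumptions, not measurements.
    P_STAT = 25.0      # W, static (leakage) power
    GAMMA = 6.0        # W / GHz^alpha
    ALPHA = 2.7        # exponent of the dynamic power term
    W = 100.0          # "work" such that execution time = W / f seconds at f GHz

    def energy(f_ghz):
        runtime = W / f_ghz
        return (P_STAT + GAMMA * f_ghz ** ALPHA) * runtime

    freqs = [1.0 + 0.1 * i for i in range(21)]       # 1.0 ... 3.0 GHz
    best = min(freqs, key=energy)
    print(f"energy-optimal frequency ~ {best:.1f} GHz, E = {energy(best):.0f} J")

With these toy parameters the minimum lies well below the maximum frequency, which mirrors the qualitative observation for the SPEC benchmarks discussed next.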
The average power consumption of algorithms increases with the operational frequency.Fig. 1 shows the dependence of the energy and the power consumption on the frequency for the SPEC CPU2006 floating-point benchmarks, which consist of real (sequential) programs from different application areas, (see [48] and [90] for more details).It can be observed that for most of the programs a frequency between 2.0 and 2.5 GHz leads to the smallest energy consumption.It can also be observed that different SPEC programs lead to different amounts of power consumption, which shows that there is a dependence of the power consumption on the features of the application.This effect is even larger for parallel applications, as those included in the PARSEC benchmarks that contain parallel programs from different application areas, see [17].Fig. 2 shows the average energy and power consumption of the PARSEC benchmarks for different frequencies.As shown, the variation of the power consumption is much larger than for the SPEC benchmarks.Fig. 2 also shows that the difference between the largest and the smallest average power consumption for the different applications is more than 100% (see [89] for details).It can be concluded that parallel execution adds significant variations to the power consumptions observed.
The observation that the power consumption may be quite different for different algorithms and applications leads to the question which algorithmic properties have an influence on the resulting power consumption.For parallel applications, the speedup obtained plays a role and it can be observed that applications with a larger speedup tend to have a larger power consumption than applications with a smaller speedup [90].This can be explained by the fact that applications with a smaller speedup typically include more idle times during which some parts of the processing cores can be powered down, thus reducing the average power consumption.However, there are other influences that will be discussed in more detail in the next subsection.
Algorithmic techniques towards energy awareness
There are some efforts to explore the energy effects of specific programming techniques for selected algorithms, mainly from the area of linear algebra [5], with the goal of advancing towards an energy optimization of algorithms.Seminal articles in the literature demonstrate that a huge number of technical applications can be decomposed into up to 7 or 13 "Dwarfs" [9], which are a small set of common kernels with a tremendous impact on a huge number of computing-intensive applications and libraries.Thus, it seems advisable to concentrate on those kernels.
Systematic approaches that investigate the energy effects of algorithmic changes and transformations are very rare.Some recent results show that standard techniques used for performance optimization, such as tiling, have only a minor effect on the energy consumption [41], since loading and storing data to the on-chip caches constitute the largest contribution to the dynamic energy consumption.Therefore, alternative techniques, such as register tiling [91], seem to be more promising for the energy optimization of algorithms than standard techniques used for performance optimization.Currently, it is not feasible to think of a single solution for the energy optimization of algorithms, as the energy behavior of the algorithms is closely related to specific architectures.
Several approaches model the energy consumption of application programs on CPUs or GPUs [23].These models usually distinguish between the dynamic and the static power consumption, but they do not take algorithmic properties of the application into consideration.There are also some approaches that model the energy consumption of individual algorithms by considering the operations performed [59], however these approaches are difficult to transfer to other algorithms and they require a significant effort for the analysis at the algorithmic level.Another attempt in finding a relation between properties of the algorithms and the resulting energy consumption and execution time is described in [25], but the results are only presented at the level of micro-benchmarks.So far, there is no broad investigation that determines which algorithmic properties have which effect on the energy consumption for a specific architecture.Thus, there is a need to develop algorithm-specific energy models and mechanisms to express the energy behavior of the algorithms on the underlying system.A survey of power and efficiency issues for numerical linear algebra methods [102] identifies several major techniques for energy savings, e.g.profiling, trading off performance, static and dynamic saving, and concludes that the current techniques are application-specific and difficult to generalize.The impact of different CPU workloads on power consumption and energy efficiency is studied in [111], showing that different workloads can lead to significant differences in energy efficiency.
In addition, the architecture of different HPC and Exascale systems is expected to be quite heterogeneous and rapidly developing [37], as they might include specialized niche market devices, such as GPUs, MIC and FPGA accelerators.This perspective constitutes a major challenge for the system software, comprising the operating system, runtime system, I/O system, and interfaces to the external environment, since the system software is responsible for an effective use of the hardware resources.However, algorithmic properties of an application also play an increasingly important role and it is required that the programmer uses the right programming techniques for the specific architecture of a given HPC system.This places a large burden on the programmer to tune her or his applications towards a better performance.Since this is often quite time-consuming, autotuning approaches [114] and efforts towards Self-Adapting Numerical Software (SANS) [34] have been proposed.Those aspects will be considered in more detail in the next subsection.
Autotuning approaches towards energy efficiency
Autotuning software is able to optimize its own execution parameters with respect to a specific objective function, which was usually the execution time, but might as well be the energy consumption.The methods for autotuning are diverse, including model-based parameter optimization, or an optimization based on candidate sets generated by the autotuning software.Autotuning based on a set of equivalent candidate implementations for an algorithm considers different candidate implementations using different programming techniques for the formulations of the algorithm, which, for example, may differ in their loop structure by applying loop transformations such as loop fusion, loop interchange, loop tiling, or loop unrolling.Moreover, different parameters for the loop transformation, such as block sizes for tiling or unrolling factors, can be used.The idea of the autotuning approaches is to automatically select one of the candidate implementations for a specific HPC architecture to reach a given optimization goal, such as minimal execution time or minimal energy consumption.The selection can be made both offline or online.
Offline autotuning performs the autotuning procedure at software installation time.In this scenario, the installation of the autotuning software or library can take a significant amount of time due to an extensive evaluation of the different candidate implementations using runtime tests or energy measurements.However, at runtime, the best implementation variant selected during the installation is directly used, with little or no overhead.Offline autotuning can be applied if there is no significant dependence of the runtime of the implementation variants on characteristics of the specific input.A number of offline autotuning libraries aiming at performance optimization already exist for decades: ATLAS [116] and PHiPAC [18] for dense matrix computations; OSKI [113] and SPARSITY [52] for sparse matrix computations; or FFTW [40] for fast Fourier transformations.Offline frameworks, such as PERI [118], SPIRAL [82] and Green [12], allow the programmer to setup an application to be autotuned for a given microarchitecture.If supported by a model-based approach [121], the installation time overhead can be reduced.Model-based approaches use an analytical model of the execution platform and the algorithm to be executed, and select a set of implementation variants and parameter values which are then tested at installation time, which may reduce the number of variants to be tested significantly.
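The core loop of such an offline, measurement-based autotuner reduces to evaluating each candidate implementation under the chosen objective and remembering the winner; the sketch below is hypothetical — the candidate names are placeholders and the energy objective would rely on a counter such as RAPL (see above) rather than the wall clock.

    import time

    # Hypothetical offline autotuning pass: run every candidate at installation
    # time and persist the variant with the minimal cost under the chosen objective.
    def measure(candidate, objective="time"):
        t0 = time.time()
        candidate()                      # run the candidate implementation
        elapsed = time.time() - t0
        if objective == "time":
            return elapsed
        # For an energy objective, an energy counter (e.g. RAPL) would be read
        # before and after the call instead of measuring the wall clock.
        raise NotImplementedError

    def autotune(candidates, objective="time"):
        scores = {name: measure(fn, objective) for name, fn in candidates.items()}
        return min(scores, key=scores.get), scores

    candidates = {
        "tiled_16":  lambda: sum(i for i in range(10**6)),
        "tiled_64":  lambda: sum(i for i in range(10**6)),
        "unrolled4": lambda: sum(i for i in range(10**6)),
    }
    best, scores = autotune(candidates)
    print("selected variant:", best)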
Besides the overall execution time of a specific algorithm, additional optimization goals, such as energy consumption or computing costs, need to be considered by auto-tuners.Therefore, more sophisticated methods capable of exploiting and identifying the trade-offs among these goals are required, like those presented in [43] where the authors present and discuss results of applying a multi-objective search-based auto-tuner to optimize for three conflicting criteria: execution time, energy consumption, and resource usage.Offline autotuning approaches for energy usage vs. performance degradation in scientific applications are discussed in [107], where the authors conduct several experiments in which the tuning is performed with respect to software level performance-related tunables, such as cache tiling factors and loop un-rolling factors, as well as for the processor clock frequency.[63] presents an energy-oriented autotuning for the ATLAS library.
If the execution time of the implementation variants depends on characteristics of the specific input, offline autotuning has to be replaced by online autotuning, where applications are able to monitor and automatically tune themselves to optimize a particular objective (execution time, energy consumption, etc.), as in the case shown for ordinary differential equations in [55].Online autotuning can especially be used successfully for time-stepping methods.In this case, the time steps can be performed with different implementation variants and parameter values until the best implementation variant is found.Then this implementation variant is used for the remaining time steps, as shown in [62].A model-based pre-selection phase can be used to reduce the number of implementation variants that need to be tested at runtime.For ordinary differential equations, this approach has been applied successfully [55], and it has been shown that the autotuning overhead at runtime is not too large.An automated online performance tuning approach for general applications is provided by the Active Harmony automated runtime system [29], which allows runtime switching of algorithms and tuning of libraries and application parameters to improve the resulting performance on a given hardware platform.The system uses a server which uses a Nelder-Mead method to search through a potentially large parameter space.The server sends a parameter selection to a client, which then measures the resulting performance and sends the corresponding information back to the server.This procedure is repeated until a good parameter selection has been found.
Another example of online autotuning is PowerDial [49], which converts static configuration parameters that already exist in a program into dynamic knobs that can be tuned at runtime, with the goal of trading quality-of-service guarantees for meeting performance and power-usage goals. The system uses an online learning stage to construct a linear model of the configuration space, which can subsequently be tuned using a linear control system. In the SiblingRivalry model [8], the available cores are divided in half and two identical requests are processed in parallel, one on each half: one half runs a known program configuration, while the other half runs an experimental configuration chosen by a self-adapting evolutionary algorithm. The faster configuration (known or experimental) is always kept and the other one is terminated. The authors show that, over time, this model allows programs to adapt to changing dynamic environments and often to outperform the original algorithm running on the entire system.
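The following sketch is only loosely inspired by the dynamic-knob idea and does not reproduce PowerDial's actual mechanism: a single quality parameter is halved or doubled by a simple feedback loop so that each iteration stays within a time budget, trading quality of service for performance. The knob, the placeholder workload, and the thresholds are assumptions made for illustration.

```cpp
// Simplified sketch of a runtime "dynamic knob": an accuracy parameter is
// lowered or raised by a feedback loop so that each iteration meets a time
// budget. Loosely inspired by the dynamic-knob concept; not PowerDial's API.
#include <chrono>
#include <cmath>

struct Knob { int samples = 1024; };    // hypothetical quality parameter

void process(const Knob& k) {           // placeholder workload, cost grows with samples
    volatile double acc = 0.0;
    for (int i = 0; i < k.samples * 1000; ++i) acc += std::sin(i);
}

void control_loop(int iterations, double budget_s) {
    Knob knob;
    for (int i = 0; i < iterations; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        process(knob);
        auto t1 = std::chrono::steady_clock::now();
        double elapsed = std::chrono::duration<double>(t1 - t0).count();
        if (elapsed > budget_s && knob.samples > 64)
            knob.samples /= 2;          // over budget: reduce quality
        else if (elapsed < 0.5 * budget_s)
            knob.samples *= 2;          // ample slack: restore quality
    }
}
```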
As mentioned before, most existing autotuning approaches treat the execution time as the main objective function. However, the resulting energy consumption can also be used directly as the optimization goal of an autotuning approach. This can be based either on energy measurements using hardware counters, as provided for example by the Intel RAPL interface (see Section 2.5), or on a model of the energy consumption of the algorithm (see [62] for more information).
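On Linux systems that expose RAPL through the powercap framework, such measurements can be obtained by reading the cumulative energy counters from sysfs around the code region of interest, as in the following sketch. The exact path (here the package domain intel-rapl:0), required permissions, and availability depend on the system, and counter wrap-around is ignored for brevity.

```cpp
// Minimal sketch of measuring the energy of a code region via the Linux
// powercap sysfs interface to Intel RAPL. Path and availability are
// system-dependent; counter wrap-around is not handled here.
#include <fstream>
#include <iostream>

long long read_energy_uj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    long long uj = 0;
    f >> uj;                       // cumulative package energy in microjoules
    return uj;
}

int main() {
    long long before = read_energy_uj();
    // ... code region to be tuned or measured ...
    volatile double acc = 0.0;
    for (long i = 0; i < 100000000L; ++i) acc += 1e-9 * i;
    long long after = read_energy_uj();
    std::cout << "energy: " << (after - before) / 1e6 << " J\n";
}
```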
Examples of energy-efficient algorithms
Examples of energy-efficient algorithms can be found in the area of graph algorithms. In [20], the authors propose a new algorithm for the min-cut/max-flow problem on a graph. It is based on augmenting paths and builds two search trees, one from the source and one from the sink, which are reused to avoid rebuilding them from scratch. Experimental comparisons show that the algorithm is faster and reduces the energy usage for typical functions in vision. Another example is [94], which proposes a large-scale energy-efficient graph traversal. More recently, the initiative "EDGAR: Energy-efficient Data and Graph Algorithms Research" of the Berkeley Labs has been started to design new parallel algorithms that reduce the communication costs of data and graph analysis algorithms at Exascale, aiming at a reduction of both execution time and energy consumption. An important observation in this context is that the power required to transmit data in a network also depends on the length of the wire in traditional copper networks, i.e., data exchanges between neighboring nodes in a network require less energy than exchanges between non-neighboring nodes. The energy consumption of different MPI collective communication operations has been investigated in [112], showing that the size of the execution platform plays an important role. A quantitative analysis of the energy costs of data movements between different levels of a memory hierarchy (main memory, L3, L2 and L1 cache) has been reported in [57]. The analysis is based on a set of micro-benchmarks that continuously access data stored in a given level of the memory hierarchy and measure the resulting energy consumption. An experimental evaluation covers several benchmarks, including the NAS parallel benchmark suite and applications from the Exascale Co-Design centers. The results show that, in current systems, scientific applications spend between 18% and 40% of their total dynamic energy in moving data and between 19% and 36% in stalled cycles. The energy consumption of different data access patterns in PGAS (Partitioned Global Address Space) models has been investigated in [54].
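The micro-benchmarks described for [57] confine a working set to a single memory level. The following sketch captures that idea in simplified form: working sets roughly sized for L1, L2, L3 and main memory are traversed repeatedly, and each run could be wrapped with an energy reader such as the RAPL sketch above. The sizes are illustrative and would have to be adapted to the actual cache hierarchy; this is not the benchmark suite used in [57].

```cpp
// Sketch of a working-set micro-benchmark: repeatedly traverse arrays sized
// to stay within L1, L2, L3 or main memory, so that the cost of accessing
// each memory level can be measured separately (e.g. by surrounding each
// run with energy readings). Sizes below are illustrative.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

double traverse(std::size_t bytes, int repetitions) {
    std::vector<double> data(bytes / sizeof(double), 1.0);
    double sum = 0.0;
    for (int r = 0; r < repetitions; ++r)
        sum += std::accumulate(data.begin(), data.end(), 0.0);  // streaming reads
    return sum;
}

int main() {
    // Working sets roughly targeting L1, L2, L3 and DRAM on a typical CPU.
    for (std::size_t bytes : {16u << 10, 256u << 10, 8u << 20, 256u << 20})
        std::cout << bytes << " B  checksum " << traverse(bytes, 10) << "\n";
}
```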
Sorting algorithms are among the most important fundamental algorithms in computer science, and many applications depend on efficient sorting techniques. Energy efficiency also plays an important role here, and using energy-efficient sorting algorithms could help reduce the overall energy consumption significantly. The energy consumption of basic sorting algorithms such as odd-even sort, shellsort and quicksort has been investigated in [123], showing that quicksort leads to the smallest energy consumption and that the choice of a suitable recursion depth for quicksort may have a large influence on the energy consumption. JouleSort [92] is an external-sort benchmark for evaluating the energy efficiency of a wide range of computer systems, from clusters to handhelds. The energy consumption of vector and matrix operations as well as sorting and graph algorithms is investigated in [93], showing that the energy consumption depends on the memory parallelism that the algorithms exhibit for a given data layout.
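A typical tunable of this kind is the partition size below which quicksort stops recursing and hands over to insertion sort, which indirectly limits the recursion depth; the exact knob studied in [123] may differ. The sketch below exposes this cutoff as a parameter that an autotuner could sweep while measuring time or energy; the default value is illustrative.

```cpp
// Sketch of quicksort with a tunable cutoff: partitions smaller than
// `cutoff` are finished with insertion sort. The cutoff is the kind of
// parameter an autotuner could sweep; the default of 32 is illustrative.
#include <algorithm>
#include <vector>

void insertion_sort(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; ++i)
        for (int j = i; j > lo && a[j - 1] > a[j]; --j)
            std::swap(a[j - 1], a[j]);
}

void quicksort(std::vector<int>& a, int lo, int hi, int cutoff = 32) {
    while (hi - lo + 1 > cutoff) {
        int pivot = a[lo + (hi - lo) / 2];
        int i = lo, j = hi;
        while (i <= j) {                      // Hoare-style partition
            while (a[i] < pivot) ++i;
            while (a[j] > pivot) --j;
            if (i <= j) std::swap(a[i++], a[j--]);
        }
        quicksort(a, lo, j, cutoff);          // recurse on the left part
        lo = i;                               // iterate on the right part
    }
    insertion_sort(a, lo, hi);                // small ranges: cheap final pass
}
```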
Other examples of energy-efficient algorithms can be found in thread scheduling [30], financial applications [3], and big data applications [120]. These research efforts use memoization as a technique to avoid repeated computation by caching previous results, thus achieving better energy efficiency during application execution.
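In its simplest form, memoization caches the results of a pure but expensive function so that repeated inputs are answered from the cache instead of being recomputed, saving both time and energy. The sketch below illustrates this with a placeholder kernel.

```cpp
// Minimal sketch of memoization: results of a pure, expensive function are
// cached so that repeated inputs do not trigger recomputation.
#include <cmath>
#include <unordered_map>

double expensive_kernel(int n) {             // placeholder for costly work
    double acc = 0.0;
    for (int i = 1; i <= n * 1000; ++i) acc += std::sqrt(static_cast<double>(i));
    return acc;
}

double memoized_kernel(int n) {
    static std::unordered_map<int, double> cache;
    auto it = cache.find(n);
    if (it != cache.end()) return it->second;  // cache hit: reuse previous result
    double result = expensive_kernel(n);
    cache.emplace(n, result);
    return result;
}
```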
Discussion
The preceding state-of-the-art analysis of energy-aware programming has shown that a multitude of research directions and results already exist in many areas of computing. From this situation, we can derive a number of open problems that must be solved for successful energy-aware programming. As energy is a cross-layer issue, we argue that a holistic energy-aware approach is needed, which requires the development of interacting interfaces between the different software and hardware layers. Such an approach will allow researchers to investigate several directions of the ETP4HPC agenda. Three of these directions are addressed below: new energy-aware algorithms for Exascale, software engineering for extreme parallelism, and energy-aware system support for managing extreme-scale systems.
New energy-aware algorithms for Exascale: Advancing the state of the art at the algorithmic level requires incorporating energy awareness at the algorithm/application level. One way of achieving this is to introduce interacting interfaces between the different hardware and software layers, combined with algorithm-specific mathematical energy models. We argue that this will enable a dynamic adjustment of the computation and communication characteristics of algorithms and applications, with the goal of achieving a perceptible reduction of the overall energy consumption. Such a layered approach with interacting interfaces will also allow a direct interaction between the power-management control and the algorithm or application being executed. With the aid of annotations, applications may provide a parameterized energy model that can be exploited to articulate a policy for managing trade-offs on different system architectures. A general goal is that future energy-aware algorithms should be evaluated not only in terms of FLOPs but also in terms of the energy cost of their operations.
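As an illustration of what such an annotation-provided, parameterized energy model could look like, consider the following form; the symbols and the decomposition are assumptions made for exposition rather than a model taken from a cited work.

```latex
% Illustrative parameterized energy model of the kind an application might
% expose through annotations (symbols are assumptions, not from a cited work).
E(f, p) \;=\; \underbrace{P_{\mathrm{static}} \cdot T(f, p)}_{\text{idle/leakage energy}}
\;+\; \underbrace{\sum_{\mathrm{op}} n_{\mathrm{op}}(p) \cdot e_{\mathrm{op}}(f)}_{\text{dynamic energy of operations}}
```

Here f denotes the operating frequency, p the problem and parallelism parameters, T(f, p) the predicted runtime, n_op(p) the counts of operations of each kind (floating-point operations, memory transfers, messages), and e_op(f) their per-operation energy; a runtime system could then choose f and p so as to balance the two terms under a given policy.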
Software engineering for extreme parallelism: To hide the complexity of developing algorithms and applications for Exascale systems, we propose to develop a high-level language environment supporting energy-aware software development. This language environment should be intuitive and easy to handle for application programmers from diverse application areas, which can be achieved by using a human-like language or a descriptive or graphical annotation approach. To increase acceptance and usability, it is important that such a language environment allows a seamless integration of different programming models, accompanied by support for a hierarchical development of all necessary Exascale system coordination, control and monitoring functions in a reasonably human-understandable way. It should provide energy consumption indicators that system designers and developers can rely on during software development, so that they can reduce the energy footprint of the resulting program code. Considering the heterogeneity of Exascale systems, a high-level software development process is needed that allows a seamless integration of multiple energy-aware programming models beyond the state of the art. We propose a research agenda in this field targeted towards abstract hierarchical programming models and optimized many-task programming models. The first direction will allow the annotation of power and energy consumption information by defining energy patterns and constraints in the hierarchical programming model. Based on this abstract model, one can build a general hierarchical optimization technique for collective communication algorithms, such as MPI operations, which is not platform-specific but addresses the scale of the HPC platform. The second direction should evolve existing programming models to enable locality-based optimizations through the intensive use of RAM and NVRAM memory near the processors, thus avoiding data movements, along with energy-aware scheduling that guides the system to schedule computation jobs on the nodes containing the required data, taking into account the trade-offs between data locality and load balance.
Energy-aware system support for managing extreme-scale systems: The cross-layer nature of energy can be addressed by providing system mechanisms that support energy efficiency in extreme-scale systems. The first research topic is the design of metrics and tools for exporting energy features, at node and system level, to the applications through (approximate) energy monitoring and management services. These services will be provided to the upper levels of the hierarchy to allow optimizations in runtime resources, libraries and applications. The second topic should investigate energy-efficient data access and communication models relying on a better exploitation of data locality and layout, and supporting the development of cross-layer locality-aware I/O software. Equally promising and complementary to the previous topics, researchers should look into energy profiling at component and application level in order to dynamically redirect the workload to those components that can yield the maximum throughput. Ultimately, it should be possible to predict the energy consumption of particular code segments. This information can be used to enable a dynamic provisioning of resources and to manage new important resources, such as power and data motion, through an energy-aware scheduler and dispatcher and an energy-aware load balancer that is conscious of system energy, node energy, and data-locality needs. Last but not least, we need to elaborate novel energy-aware models, APIs and tools to automatically map applications onto heterogeneous architectures, aiming to optimize the performance-to-energy ratio.
Figure 2. PARSEC benchmarks executed with eight threads on an Intel Core i7 Haswell processor: energy consumption (left) and power consumption (right) for varying frequencies [89].
"year": 2015,
"sha1": "9e7970752e161832c7da1083e6c13522b6cdb8ce",
"oa_license": "CCBY",
"oa_url": "https://superfri.org/index.php/superfri/article/download/41/132",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9e7970752e161832c7da1083e6c13522b6cdb8ce",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.