Exploring Science Communication Effectiveness in the U.S. Federal Government Research Process: A Case Study with the U.S. Livestock Producers’ Antimicrobial Use Research
Several U.S. federal government agencies collect and disseminate scientific data on a national scale to provide insights for agricultural trade, research, consumer health, and policy. Occasionally, such data have the potential to advance conversations and actions around critical and controversial issues in the broad agricultural system. Such government studies provide evidence for others to discuss, further interpret, and act upon, but to do so, they must be communicated well. When the research intersects with contentious socio-political issues, successful communication depends not only on tactics but, as this study illuminates, also on the quality of relationships between research producers, study participants, and end-users. USDA’s Animal and Plant Health Inspection Service (APHIS) conducted first-of-its-kind national studies on cattle and swine producers’ use of antimicrobials. The use of antimicrobials in animal agriculture is considered a critical and controversial issue pertaining to antimicrobial resistance. In recognition of the anticipated wide-ranging interest in these studies, APHIS sought to understand stakeholders’ perceptions and experiences of the federal government research process and products with the aim of improving its science communication and relations. This study reports on findings from in-depth interviews with 14 stakeholders involved in the antimicrobial use studies to make recommendations for improving communication and relations between the agency and its stakeholders. From this research, we draw implications that are transferable to numerous types of government science communication efforts within agricultural sectors.
Introduction
Several federal agencies collect and disseminate scientific data on a national scale to provide insights for agricultural trade, research, consumer health, and policy. These data have potential to provide insights to advance conversations and actions around critical and controversial issues in the broad agricultural system. Such studies provide evidence for others to discuss, further interpret, and act upon, but to do so, they must be communicated well.
Monitoring animal health on a national level is an important service provided by the United States Department of Agriculture (USDA) National Animal Health Monitoring System (NAHMS). This program, led by USDA's Animal and Plant Health Inspection Service (APHIS), synthesizes input from other government agencies, livestock producers, academics, and industry professionals to conduct national research studies that provide valuable animal health information to animal agriculture stakeholders. These studies, conducted at regular intervals, ensure accurate information is reported to track animal health trends that affect animal welfare, agricultural trade, research, consumer health, and policy (USDA APHIS, 2010). Until recently, one aspect these studies had yet to fully capture was the use of antimicrobials, which include antibiotics (see footnote 1), in animal agriculture production practices. The rise in infectious microorganisms resistant to antimicrobials, termed antimicrobial resistance (AMR), is considered one of the most pressing and serious threats to animal and human health (World Health Organization, 2018). However, in the U.S., data tracking and characterizing antimicrobial use in both humans and animals are insufficient to take well-informed actions (World Health Organization, 2018). Sales figures on antibiotics have been one indicator of use, but such data are of limited value for understanding implications of use (Charles, 2016). Thus, in December 2014, the USDA released an Antimicrobial Resistance Action Plan (USDA APHIS, 2015). Part of this plan included measuring livestock producers' use of antimicrobials, which is what the NAHMS 2017 antimicrobial use (AMU) studies set out to do. These studies aimed to provide data (not discussion or recommendations) to advance understanding of AMR and to inform others in designing AMU best practices in livestock production.
The NAHMS 2017 AMU studies coincidentally came on the heels of the new Food and Drug Administration (FDA) Veterinary Feed Directive regulations and guidance aimed at limiting medically important antimicrobials to disease treatment, control, and prevention and requiring additional veterinary oversight and documentation (U.S. FDA, 2017). These regulatory actions were cautiously lauded by human health advocates with some reservations as to whether the actions go far enough (see PEW, 2016) and accepted by animal pharmaceutical companies and most producers as having a neutral or positive impact (Farm Foundation, 2016). Beef producers indicated they are willing to share their management practices but do not believe their operations contribute to AMR (Lee et al., 2017).
Consumers have varying degrees of acceptance of AMU in animal agriculture. Consumers and the general public are more keyed in on antibiotics, and as such, even those within the industry are more likely to focus mainly on antibiotics as well. A consumer survey showed 22% ranked antibiotics among their top three food safety concerns, and 74% supported using antibiotics in sick animals with caveats that there is a sufficient withdrawal period (47%) or that they never enter the food system (27%) (International Food Information Council Foundation, 2016). In sum, while human health advocates push for more government oversight of AMU in animal agriculture, livestock producers do not believe their use to be a major issue in AMR. Combined with consumers' ambivalence toward AMU in animal agriculture, the complexity of AMU and AMR in both the human and animal health realms continues to be a point of contention. With the variety of stakeholders interested in this kind of data, the complexity of AMU and AMR, and the socio-political controversy surrounding antimicrobials in animal agriculture, successfully communicating the NAHMS AMU studies and other future studies like them is of critical importance. This study, therefore, provides an examination of the science communication process and products through the lens of key stakeholders for the NAHMS AMU studies.
Footnote 1: "Antimicrobials are products that kill microorganisms or keep them from multiplying (reproducing) or growing. They can be either naturally occurring or synthetic (manmade) and are most commonly used to prevent, control, or treat diseases and infections caused by microorganisms. Antibiotics are antimicrobials that can kill bacteria or inhibit their growth or reproduction. All antibiotics are antimicrobials, but not all antimicrobials are antibiotics." (American Veterinary Medical Association, 2020, para. 5, 7)
Literature Review
The AMU study reports consist of descriptive numerical data on swine and cattle producers' use of antimicrobials that were collected via confidential surveys. They include data on the types of antimicrobials used, duration of use, how antimicrobials were administered, and site demographics (USDA, 2019a; USDA, 2019b). Given the scientific nature of the AMU study and the contentious nature of the topic itself, literature on communicating scientific uncertainty is most relevant. Communicating aspects related to scientific uncertainty is particularly important when the science deals with a controversial issue like antimicrobial use and resistance. Communicating uncertainty is related to perceptions of transparency in the process, which can foster trust (Johnson & Slovic, 1995) and help decision makers weigh risks (Joslyn & LeClerc, 2012). With controversial issues, a rich body of literature illustrates how some stakeholders and individuals will leverage scientific uncertainty "to protect their economic interests or ideological preferences" (National Academies of Sciences, Engineering, and Medicine, 2017, p. 61). Such efforts can derail scientific understanding, but more light is being brought to such strategies and to the existence of media "echo chambers" among the public. Druckman (2015) contends that explaining how studies were developed, data collected, and conclusions reached can build audiences' credibility perceptions about the science and increase their interest in scientific discovery. To decrease the uncertainty surrounding AMU data and facilitate relationships based on trust and transparency, two conceptual frameworks and literature on relational quality served as a foundation for this study. Specifically, Knowledge Transfer and Exchange (KTE), Knowledge Translation and Transfer (KTT), and organizational-stakeholder relationship quality and trust guided this research, with an aim to provide initial recommendations for communicating the study.
Knowledge Transfer and Exchange
KTE is a research information dissemination process between research producers and users (Smith, 2014). This framework emerged as communication research evolved to show that knowledge exchange is much more than a one-way effort. KTE holds that partnerships in research efforts that include research producers, decision-makers, stakeholders, and research end-users are extremely effective in increasing the likelihood of research evidence being used in practice or policy (Lavis et al., 2003). KTE advocates including these partnerships early in the research process in order to overcome barriers that may arise. The idea is that researchers and stakeholders will be on the same page throughout the research process, gaining a better understanding of expectations and viewing research as a process instead of simply a product (Smith, 2014). Trust among all parties is a cornerstone of this model and develops through many positive shared experiences between groups. Knowledge transfer efforts succeed through ongoing communication that includes and addresses industry or community issues, which are ultimately incorporated into decision-making (Smith, 2014).
Knowledge Translation and Transfer
Another framework that takes knowledge transfer a step further is Knowledge Translation and Transfer (KTT). This framework is similar to KTE but accelerates the "transformation of knowledge into use through synthesis, exchange, dissemination, dialogue, collaboration, and brokering among researchers and research users" (Ontario, 2016, para. 1). The framework is used successfully by the Ontario Ministry of Agriculture, Food and Rural Affairs to reach stakeholders in Canadian agriculture and accelerate the transfer of knowledge through research. Ontario (2016) illustrates how KTT focuses on a stakeholder needs-based approach and takes into account demand for research topics. This framework breaks its efforts into three groups: program, policy, and commercialization research. Program research is designed to improve a program through audiences of that particular value chain (e.g., animal agriculture), while policy research attempts to answer important questions needed for policy development. Commercialization research applies to the development or improvement of a commercial product (such as antimicrobials) and targets organizations that bridge gaps between research and the industry market. Planning a successful KTT effort also emphasizes collaboration, partnerships, and clearly designed audience channels. KTT introduces the idea of a "knowledge broker" who helps share and disseminate research among research producers and users. This person (or group) designs and decides appropriate information dissemination channels that ultimately build trust between the organization and stakeholders (Ontario, 2016). The Science of Science Communication National Research Agenda summarizes what is currently known about communicating science issues (National Academies of Sciences, Engineering, and Medicine, 2017). The agenda also raises the notion of knowledge "brokering" and the use of "boundary organizations" to help facilitate the flow of information, echoing KTT.
Relationship Quality and Organization-Stakeholder Trust
As the KTE and KTT frameworks illustrate, effective relationships with key stakeholders throughout the science generation and dissemination process can lead to greater impacts. Relationship quality between an organization and its stakeholders consists of six dimensions (Hon & Grunig, 1999):
Control mutuality: the degree to which parties agree on who has rightful power to influence one another;
Trust: one party's level of confidence in and willingness to open oneself to the other;
Satisfaction: the extent to which one party feels favorably toward the other because positive expectations about the relationship are reinforced;
Commitment: the extent to which one party believes and feels the relationship is worth spending energy to maintain and promote;
Exchange relationship: both parties provide benefits because each has received (or expects to receive) benefits from the other; and
Communal relationship: both parties provide benefits to the other because of concern for the other's welfare, even when they get nothing in return. (p. 19-20)
Although trust appears as a mere singular component of relationship quality, arguably, each of these dimensions interplays with the others to shape trust. The concept of trust can be separated into two main types: relational trust and calculative trust (Earle, 2010). Earle synthesized trust literature relevant to risk management to further define its nature. Relational trust, the more resilient of the two, is defined by perceived intentions of the other and shared values. Calculative trust, which is more fragile, is determined by actions and competence. Earle found the majority of research examining the relative importance of the two types shows relational trust to be more important. "Knowing whether the intentions of the other are good or bad (relative to oneself) is more important than knowing what the other can do" (Earle, 2010, p. 542).
The dimensions of relationship quality offer more concrete guidance as to what affects broader feelings of trust between an organization and stakeholders. Although organizational and scientific process transparency is often touted as another key influence on trust, science communication studies have only shown that transparency is demanded; it may not actually increase positive attitudes (Abrams, Zimbres, & Carr, 2015) or trust (Frewer et al., 2002). Fairness of a process or its outcomes, however, is closely related to trust in science communication (National Academies of Sciences, Engineering, and Medicine, 2017), and fairness seems directly related to the dimensions of relationship quality. That is, if both an organization and a stakeholder feel the relationship is fair, other dimensions of relationship quality are likely to be favorable.
A 2018 consumer survey showed federal agencies are held responsible for food safety more than any other entity; however, trust in those agencies is low (Center for Food Integrity, 2018). This aligns with research on environmental science communication (Brewer & Ley, 2013) and Pew Research (2017) polling showing that about 82% of the general public has little trust in the federal government. This statistic illustrates the importance of equipping other, more trusted entities to serve as knowledge brokers (referencing back to the KTT framework), though those entities may have their own trust issues as well. However, working with knowledge brokers does not absolve federal agencies of making attempts to improve trust among their other publics.
Summary
KTE, KTT, and relationship quality and organizational-stakeholder trust provided a framework for creating a study that undertakes a holistic inquiry into the nature of government-sponsored science communication as not just a product but a process. This literature review provides a lens through which to formulate recommendations to better understand and improve the science communication process for government research activities.
Purpose and Objectives
The APHIS NAHMS studies, particularly the AMU study, provided a case study through which to explore agricultural stakeholders' perceptions and experiences of the federal government research process and product (i.e., the report) and to formulate recommendations to improve communication and relations between a federal agency and its stakeholders. The KTE and KTT frameworks recommend identifying and involving stakeholders throughout the research process to increase the likelihood that research will be used. Specifically, questions such as what knowledge should be transferred, and how, are essential to answer (Lavis et al., 2003). Thus, the first study objective was to describe AMU report stakeholders' preferences for scientific livestock industry information for eventual use in their communication efforts. Additionally, both the KTE and KTT frameworks illustrate the importance of strong, trusting relationships between the research organization and its stakeholders. Therefore, the second objective was to explore NAHMS stakeholders' perceptions of APHIS and the NAHMS studies (the AMU study, in particular) to characterize organizational-stakeholder relationship quality and identify opportunities to improve relations. This research has implications for the numerous types of government science communication efforts within agricultural sectors.
Methods
In-depth interviews are a common method used in phenomenological approaches within qualitative inquiry. One of the key benefits of in-depth interviews is they allow for participants' experiences and perceptions to be explored in their own terms without abstract measures and for these descriptions to be further probed and clarified (Marshall & Rossman, 2014). When it comes to selecting and sampling interview participants, purposive sampling is best suited to identifying individuals who have had experiences relating to the phenomenon to be researched (Robinson, 2013). This was a necessary approach since this study sought to explore experiences pertaining to a particular federal government unit and its research. The interview guide was developed by the first two authors based on the theoretical framework and study objectives. It included questions about how stakeholders receive and seek out scientific livestock industry information and the demands they face for using and re-packaging that information for different audiences. The questioning then focused on their experiences in relationships (current and past) with federal agencies, particularly APHIS; how their own stakeholders view APHIS and the NAHMS studies; barriers to building better relationships; and recommendations for improving relations.
Participants were 14 purposively sampled representatives from national livestock industry groups (n = 5); university extension specialists in animal health (n = 4); national advocacy groups (n = 3) whose stances, respectively, supported AMU as practiced, opposed it, or were more neutral; and agricultural journalists, one from a pork outlet and another from a cattle outlet (n = 2). (Note: Additional details cannot be provided without potentially compromising participants' identities.) All interviewees were responsible for some aspect of communication on behalf of their organization. The research team and representatives from USDA-APHIS deemed these stakeholders to be illustrative of knowledge brokers or boundary organizations who can help facilitate the flow of research information in agriculture. (Note: the USDA-APHIS representatives only provided general input on the types of stakeholders and were not privy to the identities of the individuals selected.) Some of the organizations represented had existing relationships with USDA APHIS to support the development of NAHMS studies. All but one extension specialist were aware of NAHMS studies, though not always by name. Participants were assured confidentiality, and we clearly delineated our roles as independent researchers to protect the study's integrity and participants' feelings of trust. Interviews were conducted via online audio conferencing software and lasted about 35 to 60 minutes. The interviewer was accompanied by an assistant who took notes and corroborated key points in an initial debriefing about the discussion that took place. Interviews were transcribed by a transcription services company; the interviewer and the lead author then coded for key ideas, topical markers, examples, and themes relevant to the objectives using thematic analysis within one month of completing the interviews (Rubin & Rubin, 2013).
Objective 1: Communication Patterns and Preferences
Participants were asked questions about how they receive and seek out scientific livestock industry information and the demands they face for using and re-packaging that information for different audiences. Their job demands for communicating scientific livestock industry information ranged from needing descriptive, highly technical scientific information, to needing concise, nontechnical data summation that could be easily transferred to other audiences.
Two themes emerged related to this objective. The first centered on participants' recommendations and preferences for using scientific information to communicate with their stakeholders, while the second centered on how the study itself needed to change before they would consider using it or sharing it with others.
Theme 1.1: Improve the Variety, Usability, and Frequency of Communication
In general, for more immediate or likely use of the NAHMS study reports, interviewees said they would like the information in more formats than the PDF report. However, they all said the PDF reports should still be available for deeper investigation or corroboration of any highlighted information. This is especially important for groups that rely on in-depth information as well as shorter summaries, such as agricultural media. Agricultural journalists keep tabs on broader industry information while also reporting on more timely issues: We watch everywhere from raw data on the market, like the hogs and pigs report, to APHIS, anything that would do with regulations, and proposed laws, legislation; anything that would change the way the hog producer raises pigs. We rely heavily on USDA's information, whether it's raw data, it's information about regulations, whether they're changing rules or if it's just information they collect.
Journalists from agricultural media outlets commented on the need for press releases for current research, as well as fact sheets that provide a summary of findings, with important pieces of information highlighted. Any visuals or infographics are also helpful because such features attract more readers and aid audience understanding.
From there, I think the more relatable data to the audience is a must. If the information is practical and easy to understand, then it's a lot faster to turn the story. It just depends on the story I'm trying to develop. But if I want to do a quick online story, and they're trying to get a news release out that's quick and timely today, and it's basically a summary of that 80-page report, then I just want good quotes that I can build a story around. A good link to that 80 page that I can go hunt if I want to go hunt more information. But no, I don't want the 80-page report just handed to me in an email.
Within this theme, a subtheme emerged on the need to address the website usability of USDA and NAHMS. Almost all interviewees commented on the usability of the USDA website in general. Many recommended that the APHIS NAHMS page be more prominent and include more program information, going back to communicating why certain research is important to all stakeholders. Those somewhat familiar with the website described how its organization of content was not entirely intuitive. Usability-wise, having to download or open PDF files to know what information is contained within them was described as "frustrating" by all interviewee types. Many emphasized the value of executive summaries and/or commented on the importance of highlighting the types of information audiences could find in the PDF files. One industry group discussed having the report information on PowerPoint slides that are easily found and downloadable from the website: When I give presentations, I'll use USDA websites a lot. …we have a lot of members that do presentations and things like that, either in meetings or to student events or to the public. It might be useful if they could put some of the charts and data on PowerPoint slides, so they could be used obviously with permission, a reference for NAHMS.
An issue for those less familiar with NAHMS study reports (as for one livestock extension agent) is that the reports do not often come up in general search engine results unless keywords are specific enough. Besides the individual less familiar with the reports, others mentioned they had to remember to go directly to the website to find the information.
A consensus among those interviewed was that a "subscribe button" for NAHMS reports would be beneficial for upcoming research, as well as research that is being disseminated to target audiences. One swine industry journalist discussed how a "subscribe button" would help them in linking a NAHMS study report directly to an article they may write, which would help to increase credibility and help with research dissemination to producers: Also, we use the information because we do get consumers through our site, but if we're using that kind of information, they see USDA as a good source. We would like to use those kinds of sources to not only build messages that consumers can find about pork, but also giving education to the hog producers so when they're having conversations outside of their business, they can use that data too. And they know how to get to it faster if it's on our site.
Besides improvements to media channel variety and usability, many interviewees described how more interpersonal communication between organizational stakeholders and USDA APHIS representatives would help improve how government studies such as those done by NAHMS are received. Their recommendations included more conference research presentations, involvement in industry conferences as possible keynote speakers, writing research summaries for publication in industry media, and being involved in educational outreach panels. These types of interpersonal or in-person communication activities would also help bridge the gap between extension and USDA APHIS. One industry group representative went as far as to say that those suggestions were just "the tip of the iceberg" when it came to possible engagements with stakeholders. Some interviewees said an increase in USDA APHIS presence at events like these would help to increase trust and could potentially bring in more study endorsements in the future. An agricultural journalist mentioned that these face-to-face interactions would improve both stakeholder and producer perceptions of USDA APHIS: They see the staff as individuals doing the right thing for the industry.
And the more open communication USDA APHIS can be when they're at events, which I see this all the time. When they're at events and they're just being a person, and are having those conversations about the industry, they're well received. I think it's two ways and I see the producers really value their input, but I think the producers as farmers by nature, they're always gonna say, "It's good that you're going down this road with regulations or we understand you had to propose this or work on this rule but you're forgetting this." They're always gonna say, "don't forget the practical side of what that rule is." The more conversations and people being people. That all goes on I think the more the trust goes up with the producer.
Throughout this line of questioning on ways to improve communication, participants revealed how the AMU studies needed to potentially change before they would consider using it or sharing it with others, leading to the second theme in Objective 1.
Theme 1.2: Issues with NAHMS Studies May Impede Willingness to Share/Use Reports
The second theme centered on stakeholders' desires for greater involvement in NAHMS studies' development, particularly in AMU data collection. This theme emerged within advocacy and industry group data. An interesting finding from a large advocacy group was its frustration with USDA APHIS, which it perceived as biased toward large agricultural production. That perception was attributed to the group's lack of involvement in the NAHMS study development. In terms of the NAHMS AMU studies, this advocacy group also expressed frustration with the survey: We were upset because the surveys lumped together treatment, control, and prevention. So, the question is, where do you draw the line with some of the other uses? The groups I work with really don't think you should use antibiotics for prevention where there's no signs of illness. We just think that's an inappropriate use. And so, if the reports don't capture that part of the use, and we have evidence from other countries that this could be 70% of the antimicrobial use, then we're missing data that we really need to have a conversation about how antibiotics are used in food animals. So, we're unhappy with not having that type of distinction.
Along with advocacy groups, industry organizations also felt that this distinction between treatment, control, and prevention was needed in NAHMS studies that deal with antibiotic usage. Other groups' representatives mentioned specific questions they would want addressed to make the study more useful: In terms of what APHIS and USDA does in terms of antibiotic collection around food animals, our primary concern is to try to understand what are the reasons for most of the use? Why are we using antibiotics? For what specific purposes? And the next layer, if it would be possible for APHIS to do, is try to think then what are the contributing factors? What are the risk factors for higher use versus lower use? And if the studies could ever get that kind of information, that would be helpful for us.
These issues seemed to negatively influence these industry and advocacy group participants' likelihood of disseminating the AMU studies' results. Participants said more time was needed for discussion to find common ground and establish industry trust. Related to this broader theme of a desire to be more involved, those in extension said agents are well-equipped to help promote the AMU and other NAHMS studies among livestock producers in their areas, which could help increase participation, particularly among smaller-sized farms.
Outside of looking at the broader picture of AMR, all participants agreed that the timeliness of the NAHMS studies was an issue, especially in the case of AMU. Although the majority were aware of the reasons why such studies take a long time to develop, conduct, analyze, and report, the gap of years between data collection and reporting is hard to contextualize. One recommendation from a neutral advocacy group was to focus on smaller data samples that could be disseminated more quickly than the longer studies. This could help bridge gaps in the data and could potentially make results more actionable for industry professionals to disseminate to producers.
Objective 2: Exploring Organizational-Stakeholder Relationship Quality
To address this objective, questions focused on experiences in relationships (current and past) with federal agencies, particularly APHIS; how their own stakeholders view APHIS; barriers to building better relationships; and recommendations for improving relations. Five distinct themes emerged.
Theme 2.1: Swine Industry Has Good Relationship Quality with APHIS
Swine industry groups were extremely favorable toward USDA APHIS, describing the relationship as "outstanding," "very close," and "very trustworthy." All swine industry stakeholders agreed that the relationship had improved in the past few years, especially in terms of making personal contacts for the organization. Most also felt their target audiences, ranging from policymakers to hog producers, had a high level of trust in USDA APHIS. This is a promising point, as industry organizations act as a liaison between USDA and producer segments, especially in terms of antibiotic usage as it relates to NAHMS studies. One organization representative described their thoughts: I think everybody's on board with reducing antibiotic use. They've come to that conclusion the same way I think the human side did, as they're seeing not all antibiotics are working effectively so there were some issues with the resistance. They're also seeing an economic drain by excess using of antimicrobials. What they really want to see is, they want to see you reporting actual use versus 'I bought some and it's sitting on my shelf, doesn't mean I gave it to the pig. I stocked up my cabinet.' They want to see practical data because that's what the consumers are asking for. They also want to see real vested money, whether it's private or public research.
In terms of recommendations for improving relationships between swine industry groups and USDA APHIS, one included an opportunity to engage producers indirectly through state-specific organizations. Since APHIS must remain objective and not necessarily offer information geared toward informing production decisions, industry groups are needed as an intermediary information source, making them crucial in the process of disseminating government research. Swine industry group interviewees recommended that APHIS form relationships with individual state swine production groups, not just nationally recognized organizations. They felt that building rapport with individual state swine organizations, especially in those states with a large hog population, has the potential to increase both promotion and dissemination of NAHMS studies.
Theme 2.2: Cattle Industry Has Weaker Relationship Quality with APHIS
Participants connected with the cattle industry described their relationships with APHIS as fraught with more frustration than those in the swine industry described. This frustration stemmed from the AMU studies in particular and the perception that APHIS did not offer enough time or due diligence for conversations with industry representatives to further shape the survey. They described cattle producers, who strongly value privacy and are sensitive to any talk or action that resembles government intrusion on their cattle business, as a key influence on them.
Cattle industry group participants mentioned that many cattle producers in their target audience already distrust government entities, but that distrust had been exacerbated in recent years through a phenomenon one cattle industry journalist described as "The Trump Effect." Other interviewees connected with the cattle industry also mentioned the political climate surrounding President Donald Trump, which they described as empowering cattle producers' distrust in the federal government because weakening federal government power was a platform of Trump's campaign. This general aversion to the government creates challenges for industry groups in any communication efforts of APHIS or any groups using USDA information in producer outreach efforts.
A representative from an agricultural media outlet focused primarily on cattle producers noted that even journalists' trust in USDA APHIS often falters due to the inability to reach a representative to find out more information on an industry issue. One representative noted frustration in wanting to get the facts straight on an issue, but not being able to successfully reach a USDA APHIS representative.
Theme 2.3: Both Industries Desire More Front-End and Back-End Involvement in NAHMS Studies
Both industries desired to see an increase in industry involvement in the NAHMS survey design early and late in the process. One swine industry group participant stated: One of the big problems that we have with communication or surveys in particular is you can go so far as to design the survey, but nobody ever talks about how you're going to report the results or what's going to happen with the results. I think that's a big gap that we tried to express to the USDA, that the reporting of those results is as important or more important than actually conducting the survey. We have to understand what happens with those results.
Other swine industry group participants echoed similar thoughts, explaining that they appreciated being involved in the survey design but want to be further involved and informed on the dissemination of the study results. Cattle industry group participants had similar thoughts on improving the relationship with USDA APHIS through continual involvement with the NAHMS survey design: "Room for improvement would be on the front end of projects in the development stage before things are finalized by APHIS and sent to OMB. It would be useful for us to be involved in that development process because if we have major concerns, I think it improves both organizations to get those out at the beginning and work through them, versus after the process is nearly completed, then asked for info. I think the only thing that could be improved is involving stakeholders in the process very early in the game, understanding that those conversations would certainly be held confidential." A criticism from industry groups that dealt with the AMU surveys' development was that some felt industry opinions were dismissed in early stages of the process. These organizations felt more time was needed for discussion to find common ground and establish industry trust. Although, overall, industry organizations perceive USDA APHIS as a credible partner, many feel that greater transparency is needed.
Theme 2.4: Interpersonal and Proactive Communication Could Bolster Relationship Quality
In discussing relationship quality specifically, interviewees again brought up the importance of including the "human factor," meaning many members of these groups respond well to face-to-face interaction and "putting a face to a name." Besides conferences, interviewees' other recommendations included biannual meetings with industry stakeholders of all species in Washington, D.C. Many groups, both cattle and swine, noted many key officials travel to D.C. regularly or have satellite or main headquarters located there. These groups also offered that this type of meeting could be covered in their budgets, possibly making events such as biannual meetings easier to achieve within a limited governmental budget through industry funding. Again, these groups noted the importance of USDA APHIS to the industry and want to be active participants in bridging the gap between organizations.
Interviewees also said another way to increase trust and credibility with producer-facing organizations is for USDA APHIS to communicate why the research is important to stakeholders and to be more proactive in its efforts. One industry organization put it this way: "It's important for producers to know why the research is being done and why it affects them. These studies should address the intent to improve sustainability, animal health production methods, and have science to back this all up." Advocacy group interviewees noted that organizations need to be more proactive about reaching out to USDA APHIS, as knowledge creation is a constant flow between stakeholders and not just APHIS' responsibility. These interviewees also discussed the importance of reaching a broader set of audiences with the NAHMS study reports and returned to how explaining the "why" of the studies would help attract those audiences.
Theme 2.5: Untapped and Valuable Partners for NAHMS Studies
Broadening the types of stakeholders involved in shaping and communicating NAHMS studies was a key theme brought forward primarily by interviewees representing extension and by an interviewee from the advocacy group that was more neutral on the issue of AMU in livestock production. Industry groups touched on this same idea when they described the importance of state-level stakeholder groups. To summarize, the interviewee from the more neutral group on AMU said: I think there is a problem that maybe the agency hasn't been very proactive about trying to get a broader set of stakeholders in terms of, to provide input when they're developing surveys, or even making sure we see the results when they come out. I think all of those things the agency probably could do a better job of.
Interviewees with extension described their relationship with APHIS as weak but saw potential for strengthening relations in the future. Interviewees representing extension said they view USDA APHIS as a trustworthy and credible source of information but feel extension is overlooked as a stakeholder for NAHMS studies. They described how extension could act as an information intermediary, much as industry organizations do, in disseminating important research information.
These extension interviewees echoed industry group recommendations on the importance of making state-specific relationships. They suggested extension personnel be a point of contact to such ends. Gaining trust and credibility with local, county, and state experts could help USDA APHIS with all aspects of NAHMS studies, as extension groups can help foster relationships and positive perceptions among producers. When asked the best way to contact a state or county extension officer, most replied that a phone call directly to the agent would guarantee a response. Emails often get lost or forgotten, as many extension agents must navigate a job role that encompasses many responsibilities.
Discussion and Implications
The first set of conclusions and recommendations can be drawn from stakeholders' preferences for communicating federal scientific data on animal health and management practices. In order for research to be used and implemented, organizations should determine what type of information is valuable to the end-user and how that information will eventually be used (Lavis et al., 2003). According to our results, key stakeholders would prefer NAHMS to offer study findings in multiple formats and in ways that improve access and usability, increase interpersonal communication efforts, and ensure stakeholder input on the types of data that would be valuable is included or addressed in some way. All in all, stakeholders want a more complete communication package that includes press releases customized for different types of media entities (general public, animal health media, commodity-focused media), an e-mail newsletter people can subscribe to, shorter and more visual reports of key findings, infographics, and presentation slides with visuals and summaries. NAHMS can help improve its relationship with stakeholders by providing communication materials that benefit the end-user (Hon & Grunig, 1999). However, Lavis et al. (2003) noted that communication materials such as websites and newsletters are beneficial but do not replace the power of continuous interaction with stakeholders. This was echoed in our findings when stakeholders highlighted the importance of interpersonal communication and suggested NAHMS staff participate in webinars and presentations at relevant industry conferences or educational events hosted by agribusiness companies. Such tactics would not only reach a wider array of stakeholders, such as advocacy organizations, journalists, and extension agents, but, as our findings suggest, would also increase the likelihood that government research would be used, shared, and interpreted into best practices and policy. These results will help NAHMS, and other similar organizations, implement the KTT framework by revealing how end-users prefer the reports to be disseminated (Ontario, 2016). Because government data are typically not interpreted or formulated into practical recommendations, the research findings need to reach those knowledge brokers in a meaningful way in order to have the potential to create impact.
Beyond acting on those concrete recommendations in the findings to improve communication of the AMU and future NAHMS studies reports, there are many opportunities for relationship-building that will ultimately increase the dissemination of NAHMS research information and improve the science communication process. These findings hold more important implications for science communication of this nature.
Except for the journalists, all interviewees expressed the importance of their involvement on the front and back ends of NAHMS studies to ensure data contribute well to the global AMU/AMR conversation. They expect to see their influence in the final study, especially with measures on controversial production practices like AMU. These desires align with the KTE/KTT frameworks, which describe the importance of continued stakeholder input in the research process (Smith, 2014). Additionally, including stakeholders in the process could improve the relational quality dimension of commitment between NAHMS and its stakeholders. With stakeholders involved in the production of the NAHMS report, they may feel obliged to promote the research to their networks (Hon & Grunig, 1999). In science communication, the notion of the science generators as experts and the public as information insufficient (i.e., the knowledge-deficit model) is a particularly alluring idea for policy design because the solution is straightforward and fits within the existing infrastructure of the education system (Simis, Madden, Cacciatore, & Yeo, 2016). "Trust in scientific institutions, deference to scientific authority, and values, including religiosity and political ideology, represent murkier waters for building policy. These factors cannot be targeted through simple curriculum reform or exposure to new information" (Simis et al., 2016, p. 409). Community-based research has been recommended as a solution to help address distrust in scientific institutions (Simis et al., 2016).
Before government-mandated agricultural research can move to an entirely community-based model, there are communication approaches that would enhance effectiveness in the aspects of the research in which these agencies do involve communities. In the invitation process, federal agencies should set clear expectations for stakeholders' participation and boundaries to protect the study's credibility. The boundaries will help develop control mutuality between the agency and participants, where the agency holds primary influence in an effort to provide objective results (Hon & Grunig, 1999). Inviting stakeholders demonstrates trust, and their participation illustrates their trust in the agency conducting the research. Once the study is finalized, following up with those participating stakeholders to explain the decisions made may help support relationship quality. They seem to want to feel heard, but some also expect to see their influence in the final study that is developed. Based on the analysis of this study's findings through the lens of the concepts of organizational-stakeholder relationship quality (Hon & Grunig, 1999) and trust (Earle, 2010), we advise framing any follow-up communication around the following points:
1. Identify ways in which stakeholders did influence the study.
2. Express satisfaction with their contributions but, when necessary, highlight the various stakeholder relationships the research sponsor must consider.
3. Highlight commitments made and future commitment to the relationships.
4. Point out the benefits of their insight and the benefits received (ultimately, the research and reports and the impact those had).
5. Reiterate the "why" of the study in terms of concern for their welfare.
6. Remind them of the boundaries of influence that are needed to ensure the study is credible and robust to accusations of bias by demonstrating inclusiveness and balancing of viewpoints.
Recall that calculative trust, which is based on the organization's actions and competence, is more fragile than relational trust (Earle, 2010). When calculative trust seems to be broken, bolstering or reminding external stakeholders of relational trust is important, since it is more stable. To bolster relational trust, reinforce shared values and the research sponsor's intentions in communication with stakeholders.
Findings showed rebuilding relations and trust with cattle industry organizations is a felt need among the stakeholders interviewed, along with continuing to strengthen swine industry relationships. This could have been unique to this specific case study with the AMU survey. Still, lessons can be drawn and applied to cases where other research sponsors face a need to repair relations in order to continue more successfully with future science communication efforts. With the cattle industry, targeted efforts should be made to highlight ways in which the AMU studies were developed with control mutuality and a communal relationship in mind (Hon & Grunig, 1999). Although those in the cattle industry may currently feel dissatisfied with their relationship with APHIS, using communication to highlight the give and take and the concern for the industry's welfare, along with reigniting and demonstrating APHIS' commitment to the relationship, could be key to disseminating the AMU report and to future studies (Hon & Grunig, 1999). Additionally, outreach to state-level cattle organizations is likely necessary for the AMU studies, since current relationships with national organizations will need time and effort to build.
Because of fractured relationships with the cattle industry, those stakeholders may use select data to protect market interests and preferred ideologies and to sow doubt about the credibility of the study (National Academies, 2017). Arguably, animal activist and certain consumer advocacy groups may do the same. In fact, any government data are subject to weaponization in pursuit of advocacy groups' agendas. Federal agencies should ensure good communication with journalists and those in extension to help inoculate against such attacks. Scientists and media are trusted more when science intersects controversy (Brewer & Ley, 2013). Notably, however, if those groups feel they were a part of the study development process, they would be less likely to engage in attempts to discredit the research and may be more likely to help share it (Smith, 2014).
Finally, we note that extension and state-based farmer and rancher groups are untapped partners for this federal agency. Not only can extension serve as a knowledge broker, a pivotal role in the KTE/KTT framework, but if pursued, extension would help enhance and strengthen stakeholder relationships, since many agents have closer connections with state agricultural commodity organizations. Therefore, we recommend focusing on extension to help reach the state-based groups that were also mentioned in the findings. Extension agents prefer to receive notice when studies are launched or reports are ready, since they may not routinely check a federal agency's website or may find the website difficult to navigate. Offering a listserv for email notifications, hosting a webinar, and/or presenting at a conference are ways to reach those in extension. They place high value on phone calls. Although calling extension agents would certainly take more time than sending out mass emails or hosting events, the payoff is likely to be higher than for other channels of communication, according to those we interviewed in this study.
Limitations and Future Research
This study's focus on NAHMS studies, and their AMU studies in particular, as a case study through which to explore a science communication process and its products means some caution should be taken in considering the transferability of these results to other contexts. Conducting future qualitative research with small farm/ranch-focused organizations, niche producer-focused organizations, and independent veterinarians could reveal richer findings this study was unable to capture. To better assess the details of design and usability of the science communication materials these interviewees said they prefer, conducting studies in which these stakeholders could evaluate materials directly is a logical next step.
"year": 2020,
"sha1": "216a509eb6fbbe7b0175370a18b84ba2a33f7c2c",
"oa_license": "CCBYNCSA",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=2343&context=jac",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f25b6919ed87524486a2638452d996caca7831f1",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Political Science"
],
"extfieldsofstudy": []
} |
Dataset of the binding kinetic rate constants of anti-PCSK9 antibodies obtained using the Biacore T100, ProteOn XPR36, Octet RED384, and IBIS MX96 biosensor platforms
Here we provide data from a head-to-head comparison study using four biosensor platforms: GE Healthcare's Biacore T100, Bio-Rad's ProteOn XPR36, ForteBio's Octet RED384, and Wasatch Microfluidics' IBIS MX96. We used these instruments to analyze the binding interactions of a panel of ten high-affinity monoclonal antibodies with their antigen, human proprotein convertase subtilisin kexin type 9 (PCSK9). For each instrument, binding curves obtained at multiple densities of surface antibodies were fit to the 1:1 Langmuir kinetic model, and the association and dissociation rate constants and corresponding affinity constants were calculated. The data supplied in this article accompany the research article entitled, "Comparison of biosensor platforms in the evaluation of high affinity antibody–antigen binding kinetics" (Yang et al., 2016) [1], which further discusses the strengths and weaknesses of each biosensor platform with an emphasis on data consistency, comparability, and operational efficiency.
Subject area: Biosensors
More specific subject area: Binding kinetic analysis
Type of data: Antibodies were captured or immobilized onto the sensor surface at multiple densities, and the antigen (human PCSK9) was flowed over the antibody surface at titrating concentrations.
Experimental features: Characterization of kinetic rate and equilibrium binding constants obtained from binding curves (10-min or 500-s association period and 45-min dissociation period) using the 1:1 Langmuir kinetic model.
Data source location: Department of Immune Modulation and Biotherapeutics Discovery, Boehringer Ingelheim Pharmaceuticals, Inc., Ridgefield, CT 06877
Data accessibility: Data are within this article
Value of the data
The data enable the head-to-head comparison of four instruments' performance regarding data consistency and comparability.
The data can help new biosensor users determine the best instrument for their research purposes and maximize the value of their investment.
The data provide additional insights for current biosensor users regarding the systematic factors that influence data reliability.
Table 1 contains the calculated antibody surface binding activity for the antigen obtained using three instruments (ProteOn XPR36, Biacore T100, and Octet RED384), and Tables 2 and 3 contain the association (ka), dissociation (kd), and equilibrium (KD) binding constants obtained by fitting the binding curves recorded using the same three instruments and the IBIS MX96 to the 1:1 Langmuir kinetic model. Table 4 contains the final kinetic rates and binding constants obtained from global analysis of the binding curves from multiple antibody-coated surfaces, and Table 5 contains the ratio of the instrumental limits provided by the manufacturers to the experimentally determined rate constants.
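The rate constants above come from fitting sensorgrams to the 1:1 Langmuir kinetic model, for which KD = kd/ka. The sketch below is a minimal illustration of such a fit for a single association/dissociation cycle; it is not the vendor evaluation software used in the study, and the function names (model_1to1, fit_sensorgram), starting guesses, and synthetic data are illustrative assumptions. The study itself fit curves globally across multiple antibody densities and analyte concentrations.

```python
# Minimal sketch of fitting one sensorgram to the 1:1 Langmuir kinetic model.
# Illustrative only; names, starting guesses, and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

T_ASSOC = 600.0  # 10-min association phase (s), matching the protocols described here

def model_1to1(t, ka, kd, Rmax, C):
    """Response R(t) for a 1:1 interaction at analyte concentration C (M).

    Association (t <= T_ASSOC): R(t) = Req * (1 - exp(-(ka*C + kd)*t)),
    with Req = Rmax * ka*C / (ka*C + kd).
    Dissociation (t > T_ASSOC): R(t) = R(T_ASSOC) * exp(-kd*(t - T_ASSOC)).
    """
    kobs = ka * C + kd
    req = Rmax * ka * C / kobs
    r_assoc = req * (1.0 - np.exp(-kobs * np.minimum(t, T_ASSOC)))
    r_end = req * (1.0 - np.exp(-kobs * T_ASSOC))
    r_dissoc = r_end * np.exp(-kd * (t - T_ASSOC))
    return np.where(t <= T_ASSOC, r_assoc, r_dissoc)

def fit_sensorgram(t, response, conc):
    """Fit ka (1/M/s), kd (1/s), and Rmax (RU) for one analyte concentration."""
    fit_fn = lambda t, ka, kd, Rmax: model_1to1(t, ka, kd, Rmax, conc)
    p0 = (1e5, 1e-4, 100.0)  # rough starting guesses
    popt, _ = curve_fit(fit_fn, t, response, p0=p0, maxfev=20000)
    ka, kd, Rmax = popt
    return {"ka": ka, "kd": kd, "KD": kd / ka, "Rmax": Rmax}

# Example with synthetic data: 25 nM analyte, true ka = 5e5 1/M/s, kd = 5e-5 1/s
t = np.linspace(0, 600 + 2700, 3301)  # 10-min association + 45-min dissociation
truth = model_1to1(t, 5e5, 5e-5, 120.0, 25e-9)
noisy = truth + np.random.normal(0, 0.5, t.size)
print(fit_sensorgram(t, noisy, 25e-9))  # recovered KD should be near 1e-10 M
```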
Proteins and antibodies
To prepare the antigen, suspended HEK293-6E cells were transfected with a plasmid encoding C-terminally 6-His-tagged human PCSK9, using the TransIT-PRO system (Mirus Bio LLC). After a 4-day incubation, the medium was harvested, and the 6-His-tagged PCSK9 was purified using a Ni-NTA His-Bind Superflow column (Novagen), according to the manufacturer's instructions.
To prepare 20 different monoclonal antibodies (mAbs) against human PCSK9, CHO cells were transfected with plasmid DNAs containing heavy chain and light chain cassettes, using Freestyle CHO Expression Medium containing 8 mM Glutamax (Invitrogen). After a 7-day incubation, the medium was harvested, and antibodies were purified using an ÄKTA affinity chromatography system with MabSelect SuRe resin (GE Healthcare), by standard procedures [2]. The purified mAbs were formulated in 60 mM sodium acetate (pH 5.0), and their concentration was determined from the absorbance at 280 nm, using a NanoDrop™ 8000 Spectrophotometer (Thermo Fisher Scientific) and an extinction coefficient of 1.36 [2]. The antibodies were determined to be ≥95% monomers by size exclusion ultra-performance liquid chromatography (UPLC) (ACQUITY, Waters Corporation).
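As a quick illustration of the concentration calculation in the preceding paragraph, the sketch below applies the Beer-Lambert relation with the stated extinction coefficient of 1.36 (mg/ml)^-1 cm^-1. The 1-cm equivalent path length and the example absorbance value are assumptions for illustration, not values reported in the study.

```python
# Beer-Lambert estimate of mAb concentration from an A280 reading.
# Assumes a 1-cm equivalent path length; the A280 value below is illustrative.
def mab_concentration_mg_per_ml(a280, extinction=1.36, path_cm=1.0):
    return a280 / (extinction * path_cm)

print(mab_concentration_mg_per_ml(0.68))  # 0.68 / 1.36 = 0.5 mg/ml
```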
Biacore T100 kinetic measurements
Protein A/G was immobilized onto the four individual flow cells in the CM5 sensor chip by a standard coupling protocol. First, the carboxyl groups on the sensor surface were activated by injecting 200 mM EDC/50 mM NHS. Next, protein A/G (30 μg/ml in sodium acetate [pH 4.5]) was injected over the activated surface, to which it became covalently attached by its primary amines. The excess reactive esters were then blocked with 1 M ethanolamine. Each step was performed with a 7-min injection and a 5 μl/min flow rate. The flow rate was increased to 10 μl/min for the subsequent antibody-capturing step. Each mAb, prepared at 0.063 μg/ml in HBS-EP running buffer (10 mM HEPES [pH 7.4], 150 mM NaCl, 3 mM EDTA, and 0.005% v/v polysorbate P20), was serially injected onto flow cells 2, 3, and 4 for 220 s, 110 s, and 55 s, respectively. Flow cell 1 was left empty to provide a reference surface. Human PCSK9 samples at different concentrations, and a buffer blank for baseline subtraction, were sequentially injected, with a regeneration step inserted between each cycle. The protein A/G surface was regenerated with two 18-s pulses of glycine (pH 1.5) at 50 μl/min. The binding interactions were monitored over a 10-min association period and a 45-min dissociation period (running buffer only), at 30 μl/min.

Table 1 (caption): Antibody binding activities for human PCSK9. The slope and R² values were obtained from the linear regression fit of the experimental Rmax vs. RL correlation curves (see Fig. 4 in Ref. [1]). The % activity was the ratio of the experimentally obtained slope to the theoretical Rmax/RL value, as defined by the known molar mass of the antigen and antibody (see Materials and methods).

Table 3 (caption): Kinetic rates and equilibrium binding constants obtained by fitting data generated for individual antibody surfaces in the IBIS MX96 (amine-coupled and Fc-captured arrays, by mAb ID and array spot) to the 1:1 interaction model (see Fig. 10 in Ref. [1]).
ProteOn XPR36 kinetic measurements
To immobilize protein A/G onto the GLM sensor chip imprinted with 6 crisscrossing flow channels, a procedure similar to the above-described coupling method was used, except that 100 mM sulfo-NHS/400 mM EDC was used in the activation step. The activation, immobilization, and deactivation steps were each performed for 5 min with 6 parallel injections in the horizontal direction at 30 μl/min. The protein A/G-bound surfaces were then conditioned with three 18-s pulses of glycine (pH 1.5) at 100 μl/min in the horizontal and vertical directions. Two different mAbs, each prepared at 0.25 μg/ml, 0.125 μg/ml, and 0.063 μg/ml in PBS-T-EDTA running buffer (PBS [pH 7.4], 0.005% Tween-20, and 3 mM EDTA), were then injected in parallel in the vertical direction for 160 s at 25 μl/min. One mAb was injected into channels 1-3, and the other into channels 4-6. The orientation of the sensor chip was then switched, and a buffer blank was injected for 60 s. The antigen-binding kinetics of the mAbs in the six samples was measured by simultaneously injecting five human PCSK9 samples at different concentrations in running buffer and a buffer blank. Three different series of human PCSK9 concentration ranges (100-6.25 nM, 25-1.56 nM, and 5-0.313 nM), prepared by 2-fold serial dilution, were used. The binding was monitored over a 10-min association period and a 45-min dissociation period (running buffer only) at 40 μl/min. After each binding cycle, the protein A/G surface was regenerated with two 18-s pulses of glycine (pH 1.5) at 100 μl/min in the horizontal and vertical directions.
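The same 2-fold dilution scheme recurs in each of the platform protocols. The small helper below is an illustrative sketch, not anything used by the authors; it reproduces the three ProteOn concentration series quoted above (100-6.25 nM, 25-1.56 nM, and 5-0.313 nM), each of which fills five analyte channels plus a buffer blank.

```python
# Illustrative helper: generate a 2-fold serial dilution series (nM).
def serial_dilution(top_nM: float, n_points: int, factor: float = 2.0) -> list[float]:
    return [top_nM / factor ** i for i in range(n_points)]

for top in (100.0, 25.0, 5.0):
    series = serial_dilution(top, n_points=5)
    print(f"{top:g} nM series:", [round(c, 3) for c in series])
# 100 nM series: [100.0, 50.0, 25.0, 12.5, 6.25]
# 25 nM series:  [25.0, 12.5, 6.25, 3.125, 1.562]
# 5 nM series:   [5.0, 2.5, 1.25, 0.625, 0.312]
```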
Octet RED384 kinetic measurements
Each mAb was prepared at 20 μg/ml, 10 μg/ml, and 5 μg/ml in 1× KB running buffer (PBS [pH 7.4], 0.02% Tween-20, 0.1% albumin, and 0.05% sodium azide) and dispensed into a 384-well tilted-bottom microplate (90 μl per well). Eight vertical wells were used for each concentration. A second 384-well microplate contained human PCSK9 at 7 different concentrations (100-1.56 nM, in 2-fold serial dilutions), the glycine [pH 1.5] regeneration solution, and 1× KB buffer for baseline stabilization. Both plates were agitated at 1000 rpm during the entire experiment. Sixteen AHC (anti-human Fc capture) sensor tips were used for a group of 2 mAbs (8 sensors each) per binding cycle. Before the binding measurements, the sensor tips were pre-hydrated in 1× KB for 5 min, followed by 3 pre-conditioning cycles consisting of 15-s dips in glycine (pH 1.5) alternating with 15-s dips in 1× KB. The sensor tips were then transferred to the mAb-containing wells for a 200-s loading step. After a 60-s baseline dip in 1× KB, the binding kinetics were measured by dipping the mAb-coated sensors into the wells containing human PCSK9 at various concentrations. The binding interactions were monitored over a 500-s association period, followed by a 30-min dissociation period, in which the sensors were dipped into new wells containing 1× KB buffer only. The AHC sensor tips were regenerated with two 18-s dips in glycine (pH 1.5) between each binding cycle.
IBIS MX96 kinetic measurements

Multi-cycle kinetics with amine-coupled antibody arrays

For multi-array printing with the CFM, two 96-well microplates were prepared. The sample source plate contained 8 vertical wells of each mAb in sodium acetate (pH 5.0) at concentrations ranging from 20 μg/ml to 0.16 μg/ml in 2-fold serial dilutions, and the reagent plate contained freshly prepared 400 mM EDC/100 mM sulfo-NHS. The COOH-G SensEye chip in the CFM was primed with sodium acetate (pH 5.0) running buffer, the sensor surface was then activated with the EDC/sulfo-NHS for 5 min, and the mAbs were then directly immobilized onto the activated surface. In the immobilization step, the mAb samples in the top half of the source plate were delivered to the sensor using 48 micro-channels, by which the mAbs were cycled across the activated surface bidirectionally for 10 min. The procedure was repeated for the remaining mAb samples in the bottom half of the source plate, generating a 10 × 8 array of mAb spots on the sensor surface. Two vertical columns of buffer-containing wells served as reference samples. The printed sensor chip was then inserted into the MX96 instrument and primed with the system running buffer (PBS [pH 7.4], 0.01% Tween-20). The surfaces were then quenched with 1 M ethanolamine for 5 min. For the binding measurements, human PCSK9 at 9 different concentrations (0.39-100 nM in 2-fold serial dilutions) in running buffer was cycled across the mAb array surface. Each sequentially injected sample was monitored for a 10-min association period followed by a 45-min dissociation period, at a flow rate of 40 μl/min. The amine-coupled mAb surfaces were regenerated between each binding cycle with glycine (pH 2.0 or pH 2.5). These regeneration conditions were determined in a preliminary experiment.
Single-cycle kinetics with Fc-captured antibody arrays
A SensEye COOH-G chip was inserted into the MX96 instrument and primed with sodium acetate (pH 5.0). The sensor surface was activated by injecting 400 mM EDC/100 mM sulfo-NHS for 5 min, and then 50 μg/ml protein A/G in sodium acetate (pH 5.0) was cycled bidirectionally across the activated surface for 5 min. The sensor chip with the immobilized protein A/G surface was then removed from the MX96 and inserted into the CFM printer, which had been loaded with a 96-well mAb source plate. The same plate layout and mAb concentrations described above for the amine-coupled antibody array were used, but the mAbs were prepared in a system running buffer consisting of 0.01% Tween-20 in PBS. The mAb samples were captured by cycling them across the protein A/G surface for 10 min, then the sensor chip was inserted back into the MX96 instrument and primed with the system running buffer. The binding was measured by cycling human PCSK9 prepared at 7 different concentrations (1.56-100 nM in 2-fold serial dilutions) in the system running buffer. The cycling of each sequentially injected sample was monitored as described in the previous section. No regeneration was performed between sample injections.

The binding sensorgrams were all collected at 25°C. Before the curve-fitting analysis, the acquired data were processed as follows. The Biacore T100 data were double-referenced using reference flow cell 1 and the subtraction of a preceding buffer blank with BiaEvaluation (v.4.1), and the ProteOn XPR36 data were double-referenced using channel inter-spots and subtraction of a parallel in-line buffer blank by the integrated ProteOn Manager software (v.3.1.0.6). The Octet RED384 data were referenced by subtracting a parallel buffer blank, and the baseline was aligned with the y-axis and smoothed by a Savitzky-Golay filter in the data analysis software (v.9.0.0.4). The IBIS MX96 data underwent inter-spot reference subtraction, followed by y-axis alignment using the IBIS SPRint software (v.6.15.2.1). The calibrated data were then exported to Scrubber (v.2.0c) for cropping, aligning, and buffer referencing.
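The referencing and smoothing operations named above are performed inside each vendor's software. Purely as a hedged sketch of what those steps amount to, the following Python fragment applies double referencing and Savitzky-Golay smoothing to sensorgrams stored as NumPy arrays; the traces, window length, and polynomial order are illustrative assumptions, not the packages' actual settings.

```python
# Minimal sketch of sensorgram pre-processing (not the vendor implementations).
import numpy as np
from scipy.signal import savgol_filter

def double_reference(active, reference, blank_active, blank_reference):
    """Subtract the reference-surface signal, then subtract the blank (buffer) cycle."""
    return (np.asarray(active) - np.asarray(reference)) - \
           (np.asarray(blank_active) - np.asarray(blank_reference))

def smooth(signal, window: int = 11, polyorder: int = 3):
    """Savitzky-Golay smoothing of a referenced sensorgram."""
    return savgol_filter(np.asarray(signal), window_length=window, polyorder=polyorder)

# Example with made-up 1-Hz traces:
t = np.arange(0, 60.0)
rng = np.random.default_rng(1)
raw = 50 * (1 - np.exp(-0.05 * t)) + rng.normal(0, 0.3, t.size)
ref, blank, blank_ref = (rng.normal(0, 0.3, t.size) for _ in range(3))
clean = smooth(double_reference(raw, ref, blank, blank_ref))
```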
The processed binding curves from the four instruments were all fitted to the Langmuir model for a 1:1 binding stoichiometry. In the Biacore T100, "single mode" was used for the fitting of data from individual mAb surfaces, and "batch mode" with "local" Rmax was used for the global fitting of data from multiple mAb surfaces. In the ProteOn XPR36, the "grouped" and "global" modes and the "local" Rmax were used for the fitting of data from single vs. multiple surfaces. In the Octet RED384, "Rmax linked" was used for the group fitting of data from sensors coated with the same mAb concentration, and "Rmax unlinked by sensor" was used for the global fitting of data from sensors coated with multiple mAb concentrations. Both the multi-cycle and single-cycle kinetic data from the IBIS MX96 were analyzed using Scrubber (v.2.0c). The kd was first determined by fitting the data in the absence of ka; the ka was then determined keeping the kd fixed. The fit was then further refined by floating the kd. For single-cycle kinetic data, the injection start time was set as a floating parameter, and the association profiles were fit back to a theoretical baseline origin. In all of the analyses, ka is the association rate constant for the antibody-antigen binding reaction, kd is the dissociation rate constant for the antibody-antigen complex, and KD is the equilibrium dissociation constant defined by kd/ka. The fitting accuracy was described by Chi² (Biacore T100, ProteOn XPR36, and IBIS MX96) or X² (Octet RED384), a parameter representing how well the observed results resemble those calculated from the model used to analyze the data.
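The vendor-specific fitting modes listed above are proprietary, but the underlying 1:1 Langmuir model itself is simple to express. The sketch below is offered only as an illustration: the rate constants, analyte concentration, phase lengths, and noise are invented, and SciPy stands in for the instrument software. It fits a single synthetic association/dissociation curve and reports ka, kd, and KD = kd/ka.

```python
# Illustrative 1:1 Langmuir fit of one sensorgram (all numbers are synthetic).
import numpy as np
from scipy.optimize import curve_fit

def langmuir_1to1(t, ka, kd, Rmax, C, t_assoc):
    """Piecewise 1:1 binding curve: association up to t_assoc, then dissociation."""
    kobs = ka * C + kd
    Req = Rmax * ka * C / kobs
    R_assoc = Req * (1.0 - np.exp(-kobs * t))
    R_end = Req * (1.0 - np.exp(-kobs * t_assoc))
    R_dissoc = R_end * np.exp(-kd * (t - t_assoc))
    return np.where(t <= t_assoc, R_assoc, R_dissoc)

def fit_sensorgram(t, R, C, t_assoc=600.0):
    """Least-squares fit of one curve; returns (ka, kd, KD). Starting guesses are generic."""
    model = lambda t, ka, kd, Rmax: langmuir_1to1(t, ka, kd, Rmax, C, t_assoc)
    p0 = (1e5, 1e-4, float(np.max(R)))            # ka (1/M/s), kd (1/s), Rmax (RU)
    (ka, kd, Rmax), _ = curve_fit(model, t, R, p0=p0, maxfev=20000)
    return ka, kd, kd / ka

# Synthetic example at 25 nM analyte (10-min association, 45-min dissociation):
t = np.linspace(0.0, 3300.0, 1000)
rng = np.random.default_rng(0)
R_obs = langmuir_1to1(t, 5e5, 3e-5, 120.0, 25e-9, 600.0) + rng.normal(0, 0.5, t.size)
ka, kd, KD = fit_sensorgram(t, R_obs, C=25e-9)
print(f"ka = {ka:.2e} 1/(M*s), kd = {kd:.2e} 1/s, KD = {KD:.2e} M")
```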
Ligand surface activity
The binding activity of surface-bound mAbs toward human PCSK9, called the % ligand activity, was calculated using the following equation:

% ligand activity = (experimental Rmax / theoretical Rmax) × 100

where the theoretical Rmax (the binding capacity of the surface) was determined as follows:

theoretical Rmax = (MW analyte / MW ligand) × RL × SM

where MW was the molecular weight of the ligand (mAb, 150 kDa) and analyte (human PCSK9, 72.8 kDa), RL (ligand response) was the amount of immobilized ligand in response units (RU), and SM was the stoichiometry defined by the number of binding sites on the ligand. Rearranging the equation provides the calculation for the ligand density to aim for in the experiments:

RL = (Rmax × MW ligand) / (MW analyte × SM)

For the kinetic binding measurements, Rmax was set at 50-200 RU. | 2018-04-03T00:55:41.426Z | 2016-07-27T00:00:00.000 | {
"year": 2016,
"sha1": "8d530cc138ae2a012f1f14d835fc72bce3f6160a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2016.07.044",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fefcfbfe6ab9f6276e1421bf69a1de12ed40fadc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
17028946 | pes2o/s2orc | v3-fos-license | Effect of Commercially Available Egg Cures on the Survival of Juvenile Salmonids
There is some concern that incidental consumption of eggs cured with commercially available cures for the purpose of sport fishing causes mortality in juvenile salmon. We evaluated this by feeding juvenile spring Chinook (Oncorhynchus tshawytscha) and steelhead (O. mykiss) with eggs cured with one of five commercially available cures. We observed significant levels of mortality in both pre-smolts and smolts. Depending on the experiment, 2, 3, or 4 of the cures were associated with mortality. Mortality tended to be higher in the smolts than in the parr, but there was no clear species effect. The majority of mortality occurred within the first 10 d of feeding. Removal of sodium sulfite from the cure significantly reduced the level of mortality. Soaking the eggs prior to feeding did not reduce mortality. We observed a clear relationship between the amount of cured egg consumed each day and the survival time. We conclude that consumption of eggs cured with sodium sulfite has the potential to cause mortality in juvenile steelhead and Chinook salmon in the wild.
Introduction
Anglers in the Pacific Northwest and the Midwest states of the U.S. often use eggs that are cured with a mix of preservatives, dyes, salts, and sugars as bait for trout and salmon. These eggs may either be purchased pre-cured or anglers can purchase the cures or cure ingredients and prepare their own eggs. Although anglers are typically targeting adult fish, juvenile salmonids are known to consume cured eggs incidentally (Scott Amerman: guide and cured egg manufacturer, Pers. Comm.). In the Pacific Northwest states, anglers typically cure the full ovary and cut it into smaller pieces which are attached to a hook. Conversely, anglers in the Midwest of the US (Great Lakes) often use net bags (spawn sacks) to contain the eggs, though there is likely still some level of exposure to smolts from chumming and/or splitting of the bags (Jay Wesley, Lake Michigan Basin Coordinator, Michigan Department of Natural Resources, Pers. Comm.). The level of exposure experienced by individual juvenile fish is unknown, but in areas of heavy fishing pressure it is potentially quite high. Factors such as the number of anglers fishing cured eggs, the time spent fishing in the home range or vicinity of an individual fish, the appetite of the fish, and the density of fish in an area could all affect the level of exposure.
Despite the potential for exposure, there has been little consideration for the effects of cured eggs on the health of juvenile salmonids. It has generally been assumed that the consumption of cured eggs has little effect on juveniles. However, at least some of the ingredients used in these cures are known to have adverse effects on vertebrates. For example, the majority of commercially available cures contain sulfite compounds, typically sodium sulfite.
Sulfites and sulfite radicals, the intermediate products of sulfite metabolism, cause damage to nucleic acids, proteins, and lipids in mammals [1][2][3][4] and have been shown to cause mortality in teleosts [5,6]. Another commonly used group of compounds, nitrites, also have a number of negative effects, including decreased growth in teleosts [7]. Water borne exposure to sodium nitrite, the most commonly used form of nitrite, has also been shown to cause mortality in salmonids [8,9].
Given the concerns about the viability of many wild populations of salmonids, it is important to minimize the risks encountered by these fish wherever possible. Our objective was to determine whether ingestion of eggs cured with commercially available cures affected the survival of juvenile salmonids. We used a variety of approaches, including feeding trials and oral administration trials, to determine whether these cures had the potential to cause mortality in juvenile salmonids. In addition, we assessed whether mortality associated with consumption of cured eggs was caused by sodium sulfite.
Methods
Ethics Statement: Animals were treated in accordance with the principles and procedures of the Laboratory of Animal Resources Center (LARC) at OSU. All manipulations in this manuscript were approved by the LARC prior to experimentation (permit number 3818).
Fish
We used juvenile spring Chinook salmon, Oncorhynchus tshawytscha (North Santiam River stock), and steelhead trout, Oncorhynchus mykiss (Alsea River stock). All fish were held under ambient photoperiod in 1500 L circular tanks (the two species were held separately) at Oregon State University's (OSU) Fish Performance and Genetics Laboratory. Pathogen free, flow through water (~12 °C) was supplied from a well. The stock fish were fed twice daily with semi-moist pellet (Bio-Oregon, Skretting Inc., Vancouver, Canada). All fish were apparently healthy throughout the trials and we had no incidence of disease. All experiments were conducted between November 2008 and June 2010. A summary of the experiments is given in Table 1.
Cured egg preparation
We stripped fresh eggs from spring Chinook collected at the Oregon Department of Fish and Wildlife's (ODFW) Clackamas Fish Hatchery and steelhead collected at the Alsea Fish Hatchery. The spring Chinook eggs were contained within a skein whereas the steelhead eggs were ovulated, and so were loose. The eggs were frozen in vacuum sealed bags within 2 h of collection and kept frozen until use. We purchased four commercially available cures from local stores and prepared the eggs as follows. We thawed sufficient eggs for 5 d and stripped the spring Chinook eggs from the skeins prior to curing. The eggs were then divided among a negative control and four treatment groups. The eggs for the four treatment groups were cured using the commercially available cures (cures 1-4), following the manufacturers' instructions.
Typically, this involved adding the cure at a ratio of between 1:9 and 1:32, mixing the eggs thoroughly, and leaving them to cure for 12-24 h before use. In addition, we tested another brand of cure, sold in premixed form (cure 5), during some of the experiments. The cured and uncured eggs were stored at 4 °C prior to use. During each experiment, we prepared a fresh batch of cured eggs every 4-5 d to ensure that they remained fresh.
Effect of consuming cured eggs on the survival of juvenile salmon
We evaluated the effects of the cured eggs on two lifestages (parr and smolt) and in two species (O. mykiss and O. tshawytscha). We did not specifically test for smoltification during this experiment. However, the experiments were timed to coincide with the period of release and migration to the ocean of these stocks.
We collected juvenile steelhead or Chinook from the stock tank using a dipnet. The fish were then randomly assigned to one of 18 (pre-smolt) or 21 (smolt) tanks (336 L; N = 55/tank). The experimental tanks were supplied with flow through well water (~12 °C). We measured the total biomass in each tank prior to stocking. The fish were acclimated to the experimental tanks for 4 d and fed the same pellet diet as in the stock tanks. Following this, the tanks were randomly divided into 6 (pre-smolt) or 7 (smolt) groups (3 tanks per group) consisting of a control (fed standard pellet feed), a negative control (fed uncured salmon eggs), or treatment groups that were fed salmon eggs cured with 1 of 4 or 5 commercially available cures. The fish were fed daily (~09:00) at 1.5% bodyweight (BW)/d for 23 d. We scored the feeding behavior of the fish in each tank on a scale of 1-4 based on the level of interest and the time taken to consume all the food: 1 = all food consumed within 10 s, 2 = all food consumed within 10-20 s, 3 = some food consumed within 2 min, 4 = no food consumed within 2 min. The feeders were not told which cures were being fed. However, there were obvious differences in the color and/or texture of the different cures. We monitored mortality and morbidity daily and adjusted the feed amount accordingly. We performed a post-mortem analysis of each fish that died, recording length, weight, sex, amount of food in the gut (broken and unbroken eggs), and general notes on tissue breakdown, lesions etc. We also measured the weight and length of survivors at the end of the experiment.
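For readers who want to see the ration arithmetic, the toy calculation below applies the 1.5% BW/d feeding rate from the text; the per-fish mass is an invented placeholder, since individual weights are not reported here.

```python
# Toy example of the daily ration at 1.5% of tank biomass (fish mass assumed).
FEED_RATE = 0.015                    # 1.5% bodyweight per day, from the text
N_FISH, ASSUMED_MASS_G = 55, 20.0    # 55 fish/tank from the text; 20 g is an assumption

biomass_g = N_FISH * ASSUMED_MASS_G
print(f"Initial ration: {biomass_g * FEED_RATE:.1f} g/d")    # 16.5 g/d
biomass_g -= 2 * ASSUMED_MASS_G                              # adjust after 2 mortalities
print(f"Adjusted ration: {biomass_g * FEED_RATE:.1f} g/d")   # 15.9 g/d
```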
Effect of sodium sulfite on the survival of juvenile spring Chinook
At the completion of the experiments described above, we entered into an agreement with the cure manufacturers to obtain their cure formula. In exchange, ODFW agreed not to disclose the names or formulas of the companies. Therefore, we refer to the cures as cure 1-5 throughout this manuscript. We subsequently obtained a list of ingredients from three cure manufacturers (cures 3-5). These three cures contained various amounts of: salt, sugar, sodium sulfite, calcium propionate, sodium nitrite, potassium sorbate, dyes, and jello. The amounts and combinations of chemicals used in each of the cures varied, though the majority of the cure consisted of salt, sugar, and sodium sulfite. The remaining ingredients each contributed less than 2% by weight to the cure. We were only provided anecdotal information regarding the ingredients in cure 1, though we do know that the cure contains sodium sulfite and sodium nitrite. Based on toxicity data for each of the known ingredients and the concentration at which each ingredient is used, we hypothesized that the most likely cause of mortality was exposure to sodium sulfite. To test this hypothesis, we asked the manufacturer of cure 1 to provide 2 additional cures, one that did not have sodium sulfite and one that did not have sodium sulfite or sodium nitrite. In addition, we prepared two cures (with and without Na 2 SO 3 ) using ingredients that were provided by the manufacturer of cure 5. We chose these two cures as they had caused the highest mortality in the previous tests. We used two experimental approaches to determine the effect. In one set of experiments, the animals were fed as described above. In a second set of experiments the cure was administered directly into the gut via an oral injection. The second approach was used to ensure a known amount of the cure was consumed by all individuals.
We evaluated the two forms (with and without sodium sulfite) of cure 1 and cure 5 separately. In each test, juvenile Chinook were collected from the stock tank using a dipnet, randomly assigned to 1 of 6 tanks (336 L; N = 30 fish/tank), and acclimated as described above. The tanks were randomly divided into 3 (cure 1) or 2 (cure 5) treatment groups as follows. Cure 1: a group fed salmon eggs cured with the complete cure, a group fed the same cure without sodium sulfite, and a group fed the same cure without sodium sulfite and sodium nitrite (2 tanks/group). Cure 5: one group fed salmon eggs cured with the complete cure and a second group fed the same cure without sodium sulfite (3 tanks/group). In both trials the fish were fed at 1.5% BW/d for a period of 10 d. We monitored mortality and morbidity as described above. This experiment was conducted in September and October, 2009.
In addition to the feeding studies, we also directly administered cures 1 and 5 (both with and without sodium sulfite) using a syringe to ensure a known amount of cure was consumed. Juvenile Chinook were divided into 2 groups for each cure type (cure 1 and 5): 1) complete cure or 2) cure without Na2SO3. The fish were marked using a unique fin clip for each group, stocked into a single experimental tank (336 L, N = 20 fish/tank), and acclimated as above. We did not replicate this experiment because of LARC restrictions on the use of live animals. We prepared the cure by crushing the salmon eggs and separating the contents from the chorion using a strainer. The lipid was then divided into two subsamples. One of the subsamples was cured by adding the full cure at the appropriate ratio. A second subsample was cured using the modified cure (without sodium sulfite). We used approximately 1.5 times the recommended amount of cure to obtain a more concentrated solution that could be administered in a lower volume. The lipid/cure mixtures were then thoroughly mixed, divided into aliquots, and stored frozen. On each day, the fish were anesthetized in a weak solution of MS-222 (25 mg/L buffered with 62.5 mg/L NaHCO3) and fed 0.5 ml (approximately equivalent to 1.5 eggs/d) of the frozen lipid/cure mixture using a tuberculin syringe that was inserted through the esophagus into the stomach. We checked for correct placement of the syringe in 5 individuals prior to beginning the experiment. We monitored mortality and morbidity as described previously. This experiment was conducted in October 2009.
The relationship between the number of cured eggs consumed, the consumption of alternative food, and the survival time in juvenile spring Chinook

We evaluated the effect of consuming the equivalent of 3, 6, or 10 eggs with or without pellets each day for 10 d. We collected juvenile spring Chinook from the stock tank and randomly assigned them to one of eight treatment groups (N = 10/treatment group) as follows: 1) control (C: uncured eggs only), 2) low dose (LD: equivalent to 3 eggs/d), 3) medium dose (MD: equivalent to 6 eggs/d), and 4) high dose (HD: equivalent to 10 eggs/d). Treatment groups 5-8 consisted of the same categories (control and low, medium, and high dose). However, these fish also received 0.28 g pellet/d (Bio-Oregon, Longview, WA, USA). The volume of egg lipid was reduced for these treatment groups to ensure the total energy content was similar between treatment groups 2-4 and 5-8. The fish in each treatment group were divided randomly into two tanks (N = 5 fish/treatment group/tank; N = 40 fish/tank). The fish in each treatment group were identified by a unique fin mark.
The lipid cure mixture was prepared as described above using the appropriate concentration of cure for the low, medium, and high dose groups based on the manufacturers' recommended rate of application and the average weight of an uncured steelhead egg (0.5 g). We prepared three stock solutions (low, medium, and high) for each cure. Each stock solution was divided into 100 aliquots (10 fish/10 d) which were loaded into syringes and stored frozen until they were used.
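To make the dose arithmetic concrete, the following hypothetical calculation converts the egg-equivalents into grams of cure per daily dose, using the 0.5 g average egg weight stated above; the 1:15 cure-to-egg ratio is purely an assumed value inside the 1:9-1:32 range reported earlier, not the actual recommended rate for these cures.

```python
# Hypothetical dose calculation (not the authors' recipe).
AVG_EGG_G = 0.5                 # average uncured steelhead egg weight, from the text
CURE_TO_EGG_RATIO = 1.0 / 15.0  # assumed ratio, for illustration only

def cure_per_daily_dose_g(eggs_per_day: int) -> float:
    """Grams of cure corresponding to an N-egg-equivalent daily dose."""
    return eggs_per_day * AVG_EGG_G * CURE_TO_EGG_RATIO

for label, n in (("low", 3), ("medium", 6), ("high", 10)):
    print(f"{label}: {cure_per_daily_dose_g(n):.2f} g of cure per daily dose")
```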
Following acclimation (described above) we administered the contents of a single syringe to each fish daily for 10 d as described in the previous experiment.
This experiment was conducted twice. We tested the effect of cure 5 during the first experiment and cure 4 during the second experiment. These represent cures associated with high (cure 5) and low (cure 4) levels of mortality in the previous experiments. This experiment was conducted in May 2010.
Effect of intra-buccal administration of the cure
To determine whether the mortality observed in previous experiments was caused by inadvertent exposure of the cure to the gills in a confined space (the experimental tanks), we administered the cure directly onto the gills. The lipid mixtures were prepared as follows. We obtained egg lipid by crushing the eggs. Half of this lipid extract was cured with 1.5 times the recommended amount of cure 1 and divided into 10 aliquots (~60 ml/aliquot). The remaining uncured lipid was also divided into 10 aliquots. All 20 aliquots were frozen until use. Forty spring Chinook parr were divided amongst two tanks (336 L) and allowed to acclimate for 4 d. Following acclimation we began the treatments once daily for 10 d. Each day we thawed two of the aliquots (one cured and one uncured) and pre-loaded 40 1-ml tuberculin syringes with 0.5 ml of the lipid mixture (20 cured and 20 uncured). This volume is equivalent to approximately 1 egg (by weight) or 1.5 eggs (by concentration). The fish were then anaesthetized in a weak solution of MS-222 (25 mg/L MS-222 buffered with 62.5 mg/L NaHCO3) and the cure was syringed into the rear of the buccal cavity, anterior of the esophagus. We monitored mortality and morbidity daily. This experiment was conducted in November 2009.
Effect of soaking on the toxicity of the cured eggs
To determine whether the mortality observed in the feeding trials was an artifact of the feeding protocol, we soaked the eggs prior to feeding. This was intended to simulate the soaking that might occur in a typical fishing scenario. We collected juvenile spring Chinook from the stock tank and randomly assigned them to 1 of 12 experimental tanks (N = 30/tank). The tanks were then divided into four groups consisting of: 1) a control group that was fed unsoaked eggs and groups that were fed eggs soaked for 2) 30 s, 3) 1 min, or 4) 10 min prior to feeding. We placed the eggs in a black cotton mesh pouch prior to feeding. For those groups that received pre-soaked eggs the pouches were dipped into the tank and gently agitated to encourage ''milking'' of the eggs (washing the cure off the surface). After 1 min soaking, we generally did not observe any of the cure washing off the eggs, suggesting that the majority of surface cure had been removed. At the end of the allotted time the pouch was opened and the eggs fed as normal. All feeding was conducted between 0900 and 1000 hours. We collected uneaten eggs in the afternoon to determine whether there were any differences in the amount of food eaten after soaking. This experiment was conducted in November 2009.
Analysis
Proportion data were arcsine square root transformed. We tested normality of the data using a Kolmogorov-Smirnov test. Data that failed the test of normality were compared using Kruskal-Wallis One Way Analysis of Variance on Ranks. Data that passed the test of normality were compared using ANOVA. Differences among the treatment groups were analyzed using Tukey's post test (comparison of treatment group with the control). We evaluated the effect of removing sodium sulfite on mortality during feeding trials using ANOVA or Student's t-test (cure 1 and cure 5, respectively). We tested for the effect of tank and presence or absence of pellet feed on survival time using a Cox Stratified model. Fish that survived to the end of the experiment were assigned a censored survival time of 10 d. We then tested for the effect of dose (number of eggs consumed) on the survival time using a Gehan-Breslow Kaplan-Meier survival analysis. Differences among the different groups (control, low, medium, and high) were compared post hoc with a pairwise multiple comparison Holm-Sidak test. Differences that had a P value of <0.05 were considered significant.
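As a rough sketch of the mortality comparison described above, the snippet below transforms per-tank mortality proportions, checks normality, and then applies ANOVA or Kruskal-Wallis with SciPy. The original analyses were run in other software, and the example proportions are invented.

```python
# Illustrative mortality comparison (invented per-tank proportions).
import numpy as np
from scipy import stats

def compare_mortality(groups, alpha=0.05):
    """Arcsine-square-root transform proportions, test normality, then run
    one-way ANOVA (normal) or Kruskal-Wallis (non-normal).
    A Tukey-style post test could follow, e.g. statsmodels' pairwise_tukeyhsd."""
    transformed = {name: np.arcsin(np.sqrt(np.asarray(p))) for name, p in groups.items()}
    pooled = np.concatenate(list(transformed.values()))
    z = (pooled - pooled.mean()) / pooled.std(ddof=1)
    if stats.kstest(z, "norm").pvalue > alpha:          # data look normal
        return "one-way ANOVA", stats.f_oneway(*transformed.values()).pvalue
    return "Kruskal-Wallis", stats.kruskal(*transformed.values()).pvalue

example = {"control": [0.00, 0.00, 0.02],
           "cure_1":  [0.15, 0.10, 0.22],
           "cure_5":  [0.30, 0.25, 0.18]}
print(compare_mortality(example))
```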
Effect of cured egg mixtures on survival of juvenile salmon
In general, the fish fed well during the experiments, though there was some variation in feeding behavior among the treatment groups. We observed very little mortality in both the control and negative control groups. Only one fish from these two groups died during the experiments (from a total of 1320 individuals). This fish had no eggs in its stomach and we were unable to determine the cause of death. In all four trials we observed significant differences in mortality among the groups (P < 0.001, ANOVA). Consumption of cure 1 was associated with a significant increase in mortality (P < 0.05, Tukey's pairwise comparison) in steelhead parr and smolts (Fig. 1). Similarly, consumption of cures 2, 3, and 5 was associated with a significant increase in mortality (P < 0.05, Tukey's pairwise comparison) in steelhead smolts, but not parr (Fig. 1). Spring Chinook parr that consumed cures 1 and 3 had significantly greater mortality than the groups fed pellets (P < 0.05, Tukey's pairwise comparison) (Fig. 2). Similarly, consumption of cures 1, 3, and 5 was associated with a significant increase in mortality (P < 0.05, Tukey's pairwise comparison) in the spring Chinook smolts (Fig. 2).
In general, the majority of fish died within the first 10 d of feeding. In several instances, fish died after 1 or 2 feedings (Fig. 3). The number of eggs in the stomach of dead fish ranged from 1 to 46 ( Table 2). Smolts of both species tended to have more eggs in their stomachs at the time of death than parr. We were unable to confirm whether all fish consumed eggs on any given day. We noted significant tissue degradation in the muscle of the body wall surrounding the gut cavity and of the internal organs in several fish. We collected samples of liver, stomach, intestine, heart, and spleen tissue from surviving fish for histology but found no evidence of tissue damage in these fish.
Effect of sodium sulfite on the survival of juvenile spring Chinook
The presence of sodium sulfite (or sodium nitrite) did not have a significant effect on mortality in the fish fed cure 1 (P = 0.123, ANOVA). Mean mortality was 11.66%, 1.11%, and 0% in the groups fed the full cure, the cure without sodium sulfite, or the cure without sodium sulfite and sodium nitrite, respectively (Fig. 4). However, the removal of sodium sulfite was associated with a significant (P = 0.001, t-test) decrease in the level of mortality in fish fed cure 5 (mean mortality: 13.33 ± 3.33% and 0% for fish fed cures with and without sodium sulfite, respectively).
The removal of sodium sulfite was also associated with a significant decrease in mortality in both experiments where the cure was directly administered via syringe. For cure 1, mortality was 30% in the group given the full cure and 0% in the group given the cure without Na 2 SO 3 (P = 0.020, Fisher's Exact Test). Similarly, for those fish fed cure 5, mortality was 35% in the group given the full cure and 0% in the group given the cure without Na 2 SO 3 (P = 0.008, Fisher's Exact Test).
The relationship between the number of cured eggs consumed, the consumption of alternative food, and the survival time in juvenile spring Chinook

The concurrent consumption of pellets had no effect on the survival times in either experiment (Cure 1: P = 0.497; Cure 4: P = 0.286). Similarly, we found no evidence for a tank effect in either experiment (Cure 1: P = 0.889; Cure 4: P = 0.091). The survival time was significantly correlated with the number of eggs consumed in both experiments (Cure 1: P < 0.001; Cure 4: P < 0.001; Fig. 5). All fish that consumed the equivalent of 10 eggs cured with cure 1 died during the first day. This was significantly faster than the groups consuming the equivalent of 6 eggs (100% mortality within 3 d) and 3 eggs (100% mortality within 5 d). The fish consuming cure 4 survived longer than those consuming cure 1. We observed 100% mortality within 7 d for the fish consuming the equivalent of 10 eggs cured with cure 4. This was significantly faster than the groups consuming the equivalent of 6 eggs (95% mortality within 10 d) and 3 eggs (60% mortality within 10 d). There was no difference in survival time between the HD and MD groups that were administered cure 4.
Effect of intra-buccal administration of the cure
There was no mortality in either treatment group.
Effect of soaking on the toxicity of the cured eggs
Feeding behavior was similar among all treatment groups and all groups consumed the majority of the eggs on each occasion. Soaking the eggs prior to feeding had no effect on mortality (P = 0.691, ANOVA). Mean mortality (±S.E.) was 4.44 ± 1.11, 3.33 ± 1.92, 6.67 ± 3.33, and 7.78 ± 2.94% in the control, 30 s soak, 1 min soak, and 10 min soak groups, respectively (Fig. 6).
Discussion
We showed that consumption of eggs cured with some commercially available cures causes mortality in juvenile salmonids. To our knowledge, this is the first such report of this effect. A study conducted in the 1980s evaluated the effect of borax cured eggs and noted a significant decrease in growth and higher levels of cortisol, but no mortality (Bouck et al., unpublished data). The toxic effect in our study appears to be associated with the presence of sodium sulfite in the cures. Rinsing the eggs prior to feeding, as may occur whilst angling, had no effect on mortality. Therefore, we would not expect that eggs (containing sulfites) that are used in rivers and lakes would be any less toxic when used in a typical manner. Interestingly, the overall level of mortality was considerably lower in the experiment evaluating the effect of soaking than in the other experiments. We suspect that this may have been caused by oxidation of the sodium sulfite during storage as the consistency of the cure tended to change over time, becoming darker and moister. Regardless, that we observed any mortality after soaking the eggs for 10 min suggests that it is not possible to eliminate the toxic effect in this way. Our data suggest there is a positive correlation between the concentration of sodium sulfite in the egg and mortality. There was a significant difference in the survival times of fish fed the equivalent number of eggs cured with mixtures containing a high (>50% by weight) and low (<20% by weight) concentration of sodium sulfite. Our intention was to determine whether the number of eggs needed to cause mortality was within the range of what a wild juvenile might consume. Thus, we did not attempt to quantify the actual dose of sodium sulfite in the mixes that were administered. We initially attempted to quantify the number of eggs required to cause mortality by feeding individual fish a known number of eggs. However, the experiment was unsuccessful as Chinook generally do not feed well when held alone in a tank. Interestingly though, we noted a rapid decrease in "appetite" when the individually housed fish were fed cured eggs. This effect was reversed by switching to uncured eggs. This is consistent with the observations of Rankin [10] showing learned aversion to food that is toxic and suggests that the juvenile salmon may learn to avoid such foods following an initial exposure. However, it is not clear whether this would apply in situations where individuals are competing for food as we did not observe a decrease in "appetite" when fish were housed in groups.
Interestingly, we did not observe any mortality in fish that were fed cure 4 for 23 d. We maintained daily records of feeding behavior and this group consistently scored lowest for appetite. In addition, a large number of uneaten eggs were removed from the tanks at the end of the experiment. Taken together, these observations suggest that, when given a choice, few fish consumed the eggs cured with cure 4 during the feeding trials. We suspect this explains the low level of mortality as we did observe high levels of mortality when cure 4 was administered directly into the gut using a syringe.
We conclude that exposure via the gut alone is sufficient to cause death in juvenile salmonids. Exposure via the gills did not cause any mortality over a 10 d period. We made no attempt to determine the cause of death though the available literature suggests a number of possible pathways, including toxicity to the central nervous system [11,12], disruption of enzyme activity [13,14,15,16], or oxidative damage [17]. Changes in enzyme activity are unlikely to explain the rapid nature of mortality (<6 h) we observed in several animals following ingestion of cured eggs. However, it is possible that such changes may affect long term fitness. In several instances, we observed degradation of the internal organs and musculature of the gut cavity in the fish that died. In addition, we observed external lesions on ~5 fish. In the latter individuals, the body wall was apparently "dissolved". This is consistent with observations in other vertebrates that have shown tissue degradation in the gut. We speculate that the degradation was caused by the formation of sulphurous acid. Sulphurous acid is formed by the equilibrium reaction of sulphur dioxide (an intermediary in the breakdown of sodium sulfite) and water. Taken together, these results suggest that the fish may have died from multiple causes, including tissue breakdown, neurotoxicity, inhibition of enzyme activity, or oxidative stress. Within the body, sulfite is oxidized to the sulfate ion by sulfite oxidase. We hypothesize that the variability in susceptibility among species and lifestages was caused, in part, by differences in the expression of this enzyme. Likewise, upregulation of sulfite oxidase may explain the decrease in mortality after ~10 d exposure. Interestingly, in some instances (~30) we observed mortality after a single feeding of the cured eggs. Of these, 4 fish had a single egg in their gut and 13 had between 2-5 eggs in their guts. This suggests that some fish are particularly sensitive to the negative effects of some cures. Furthermore, it is not unrealistic to expect that wild juveniles would consume 1-5 eggs one or more times during their residence in freshwater.
In addition to sodium sulfite, some cures may also contain other sulfites such as sodium metabisulfite or sodium bisulfite. The available toxicity data (http://www.pesticideinfo.org/Search_Chemicals.jsp) suggest that other forms of sulfite may be equally toxic to fish. For example, the average LC50 for sodium sulfite is 660 mg/L in the western mosquitofish (Gambusia affinis) whereas the average LC50 for sodium bisulfite is 240 mg/L (static waterborne exposure). We were not able to find any data for salmonids. Interestingly, sodium nitrite, another ingredient in some cures, appears to be more toxic (LC50: 2.47 mg/L in Chinook salmon) than either of the sulfites. Given the available data, we would urge caution in using any form of sulfite or nitrite without further testing to determine the effects on juvenile fish.
Because the effect appears to be due to the breakdown of chemicals contained within the cured eggs while in the gut, a number of factors may influence the effect in the wild. These include water temperature and water hardness as well as the presence of other food types in the gut. Temperature is likely to have an effect on both the rate of the chemical reactions and the activity of enzymes, such as sulfite oxidase, which break down sulfites in the body. We found no evidence that the presence of other food in the gut affected the toxicity of the cured eggs. However, we did not test the full range of foods that might be consumed by salmonids, nor did we look at whether prior consumption of alternative foods had any benefit.
In summary, we showed that some commercially available cures killed juvenile salmonids in a laboratory setting. There appears to be a dose dependent effect that is not ameliorated by pre-soaking the eggs prior to feeding. Surprisingly, consumption of relatively few eggs (1-5) was sufficient to cause mortality in some individuals. We have no data to suggest that this does, or does not, represent a problem at the population level. We would further caution that it is not realistic to transfer the rates of mortality observed in this study into the wild. We cannot rule out the possibility that the levels were exacerbated by stress associated with the experimental procedures, particularly with the oral administration studies. Regardless, we believe it is likely that a proportion of juvenile salmonids that consume eggs cured with sulfites will suffer mortality in the wild. Given that the most dominant fish in a population tend to monopolize food resources [18,19,20], it is reasonable to speculate that the more dominant fish could be more prone to mortality. Given the risk, we recommend that anglers take steps to minimize this risk. These may include the use of spawn sacs, using cures that do not contain sulfites, and/or avoiding discarding unused baits into the river. In addition, because sodium sulfite was not commonly used in cures prior to 1980 and can likely be replaced by other mold inhibitors, such as borax, we suggest management agencies and manufacturers consider approaches that would minimize incidental take of wild juvenile salmonids when fishing with cured eggs. These might include eliminating the use of sodium sulfite or establishing a level of acceptable risk that is consistent with other forms of indirect fishing related mortality (e.g. hooking mortality). We urge caution when using this approach as we have no data on additive effects or the effects of other cure ingredients. It is likely that other components of some cures are as toxic (e.g., sodium metabisulfite) but currently have little or no effect as they are present at lower levels. Any increase in the concentrations of these compounds as a result of decreasing the concentration of sodium sulfite may result in a similar problem. Similarly, there are no data on the sublethal effects of sulfite exposure in juvenile salmon.

Figure 4. Effect of sodium sulfite on mortality in juvenile spring Chinook. The fish were fed (10 d, 1.5% BW/d) eggs that were cured with the full cure (FC), the same cure without sodium sulfite (-Na sulfite), or the same cure without sodium sulfite or sodium nitrite (-Na sulfite -Na nitrite). We evaluated the effect in two cures (cure 1 and 5), however cure 5 does not contain sodium nitrite. Each bar represents the mean % mortality in 2 (cure 1) or 3 (cure 5) replicate tanks (N = 30 fish/tank). 0 represents no mortality in a tank. Groups sharing a line above the bar are significantly different (P < 0.05). doi:10.1371/journal.pone.0021406.g004

Figure 5. Relationship between the number of eggs consumed and the time to mortality. Survival time of juvenile spring Chinook fed the equivalent of 3 (low), 6 (medium), or 10 (high) eggs that were cured with cure 5 (A) or cure 4 (B). The cured egg mixture was administered into the stomach daily using a syringe. Control fish were given uncured eggs. The experiment was terminated after 10 d. Each group consisted of 20 fish. doi:10.1371/journal.pone.0021406.g005
A number of studies have shown that exposure to sulfites may have sublethal effects in other vertebrates [1][2][3][4][21,22], including alteration of mineral balance [23] and damage to renal cell membranes [24]. We have no data to suggest that ingestion of sulfites alters the ability of juvenile salmon to osmoregulate or maintain proper ion balance. However, given that smolts may be exposed to sulfites prior to ocean entry this should be given some consideration. | 2017-04-10T23:11:13.395Z | 2011-06-27T00:00:00.000 | {
"year": 2011,
"sha1": "fbaf523f9313c26f630dc2d5920557cee0d1bbd7",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0021406&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbaf523f9313c26f630dc2d5920557cee0d1bbd7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53564490 | pes2o/s2orc | v3-fos-license | Four-octyl itaconate activates Keap1-Nrf2 signaling to protect neuronal cells from hydrogen peroxide
Background Four-octyl itaconate (OI), the itaconate’s cell-permeable derivative, can activate Nrf2 signaling via alkylation of Keap1 at its cysteine residues. The current study tested the potential neuroprotective function of OI in hydrogen peroxide (H2O2)-treated neuronal cells. Methods SH-SY5Y neuronal cells and epigenetically de-repressed (by TSA treatment) primary murine neurons were treated with OI and/or H2O2. Nrf2 pathway genes were examined by Western blotting assay and real-time quantitative PCR analysis. Neuronal cell death was tested by the LDH and trypan blue staining assays. Apoptosis was tested by TUNEL and Annexin V assays. Results In SH-SY5Y neuronal cells and primary murine neurons, OI activated Nrf2 signaling, causing Keap1-Nrf2 disassociation, Nrf2 protein stabilization and nuclear translocation, as well as expression of Nrf2-regulated genes (HO1, NQO1 and GCLC) and ninjurin2 (Ninj2). Functional studies showed that OI attenuated H2O2-induced reactive oxygen species (ROS) production, lipid peroxidation and DNA damage as well as neuronal cell death and apoptosis. shRNA-mediated knockdown, or CRISPR/Cas9-induced knockout of Nrf2 almost abolished OI-induced neuroprotection against H2O2. Keap1 is the primary target of OI. Keap1 knockout by CRISPR/Cas9 method mimicked and abolished OI-induced actions in SH-SY5Y cells. Introduction of a Cys151S mutant Keap1 in SH-SY5Y cells reversed OI-induced Nrf2 activation and anti-H2O2 neuroprotection. Conclusions OI activates Keap1-Nrf2 signaling to protect SH-SY5Y cells and epigenetically de-repressed primary neurons from H2O2 in vitro. Electronic supplementary material The online version of this article (10.1186/s12964-018-0294-2) contains supplementary material, which is available to authorized users.
Very recent studies confirmed itaconate as a novel and potent Nrf2 activator [14,15]. Itaconate directly alkylates Keap1 at its cysteine residues, causing Nrf2-Keap1 disassociation and Nrf2 activation [14]. The cell-permeable itaconate derivative, 4-octyl itaconate (OI), is shown to efficiently promote Nrf2 activation [14]. The potential effect of OI in H 2 O 2 -treated neuronal cells is tested in the present study. Our results show that OI protects neuronal cells from H 2 O 2 via activation of Nrf2 signaling.
Cell culture
Human neuronal SH-SY5Y cells (from Dr. Gao [16]) were cultured in DMEM plus 10% fetal bovine serum (FBS). Before H 2 O 2 treatment, SH-SY5Y cells were cultured for 5 days with 10 μM retinoic acid (RA) in DMEM plus 10% FBS, 2 mM glutamine, and necessary antibiotics, followed by another 5 days culture in serum-free DMEM with BDNF (brain-derived neurotrophic factor, 50 ng/mL) and glutamine (2 mM) and antibiotics (P/S). As described [16], the primary murine neurons were prepared from E13-E15 embryos of C57 mouse. Neurons were dissociated, counted, and plated in poly-lysine-coated 48-well plates at a density of 1 × 10 5 cells/well in neurobasal medium plus 2% B27, 500 μM L-glutamine, 20 ng/mL trichostatin A (TSA) and antibiotics (P/S). At day-10 (DIV), over 98% of cells were neurons. The protocol of the study was approved by the Ethics Committee of all authors institutions.
Cell viability assay
Cells were seeded onto the 96-well plates (2.5 × 10⁴ cells/cm²). Following the indicated treatment, cell viability was tested by the CCK-8 assay kit via the recommended procedure. The CCK-8's optical density (OD) value at 550 nm was recorded.
Trypan blue staining assay
Following treatment of cells, trypan blue was added to stain the "dead" cells. Cell death percentage was calculated via an automated cell counter (Merck Millipore, Shanghai, China).
LDH assay
The death of neurons was examined by calculating lactate dehydrogenase (LDH) release to the medium, using a simple two-step LDH enzymatic reaction kit (Takara, Tokyo, Japan). LDH content in the medium was normalized to total LDH (medium LDH plus cellular LDH).
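The normalization described above reduces to a one-line calculation, shown here only as an illustration with invented absorbance values.

```python
# Percent LDH release = medium LDH / (medium LDH + cellular LDH) x 100.
def percent_ldh_release(medium_ldh: float, cellular_ldh: float) -> float:
    return 100.0 * medium_ldh / (medium_ldh + cellular_ldh)

print(percent_ldh_release(0.42, 1.26))  # invented readings -> 25.0
```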
ROS assay
As previously described [17], cellular ROS content was tested by using the carboxy-H2DCFDA dye. Following treatment, cells were stained with 10 μM of carboxy-H2-DCFDA for 20 min. DCF fluorescence was measured under 485 nm excitation and 525 nm emission using the Fluoroskan Ascent FL machine (Thermo Scientific, Shanghai, China).
Superoxide assay
The cellular superoxide level was examined using a superoxide assay kit (Beyotime, Wuxi, China) according to the manufacturer's instructions. Briefly, neuronal cells were cultured onto six-well plates at a density of 3 × 10 5 cells/well. Following the indicated treatment, the superoxide detection reagent (100 μL/well) was added for 20 min at room temperature. The absorbance, reflecting the superoxide assay, was recorded at 450 nm.
Lipid peroxidation assay
Following treatment, thiobarbituric acid reactive substances (TBAR) activity was tested to quantify cellular lipid peroxidation level, using the previously-described protocol [17,18].
DNA damage assay
The detailed protocol of analyzing DNA damage level was described previously [17,18]. Briefly, after the indicated treatment, cells were washed and tested by FACS assay to quantify p-γ-H2AX percentage, which reflects DNA damage intensity [19]. The p-γ-H2AX percentage was recorded.
Real-time quantitative PCR analysis

As previously described [17], total RNA was extracted using the Trizol reagent. For each condition, five hundred ng of DNA-free total RNA was utilized to perform the reverse transcription with the 2-step RT-PCR kit (Takara Bio, Japan) [20]. Quantitative real-time PCR (qPCR) was performed using the 7500HT Fast Real-Time PCR system (Applied Biosystems). The data presented were normalized to GAPDH transcripts. mRNA primers of human GCLC, Nrf2, HO1, NQO1 and GAPDH were described previously [21]. mRNA primers of murine GCLC, Nrf2, HO1, NQO1 and GAPDH were described in the other study [22]. mRNA primers for Ninj2, forward, 5'-ATGCGGCTGAAGGCGGTGCTG-3′ and reverse, 5'-TGGCTGCGTTGTTGAGCTGGTTG-3′, were synthesized by Genechem (Shanghai, China).
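The paper states that qPCR data were normalized to GAPDH but does not spell out the formula; one common approach is the 2^-ΔΔCt method, sketched below with invented Ct values purely for illustration.

```python
# Hedged illustration of GAPDH normalization via the 2^-ddCt method
# (whether the authors used exactly this calculation is not stated).
def relative_expression(ct_target: float, ct_gapdh: float,
                        ct_target_ctrl: float, ct_gapdh_ctrl: float) -> float:
    d_ct_sample = ct_target - ct_gapdh              # normalize treated sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize control sample to GAPDH
    return 2.0 ** -(d_ct_sample - d_ct_control)

# e.g. HO1 in OI-treated vs. untreated cells (Ct values invented):
print(relative_expression(22.1, 17.0, 25.3, 17.1))  # ~8.6-fold induction
```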
Western blotting assay
After treatment, cells/neurons were lysed in SDS lysis buffer [16]. The detailed protocol of the Western blotting assay was described previously [23]. Each lane in the SDS-PAGE gel was loaded with the exact same amount of quantified protein lysate (30 μg per sample). The same set of lysate samples was run in sister gels to test different proteins when necessary. Nuclear proteins were extracted via the nuclear extraction kit (Sigma, Shanghai, China) by high-speed centrifugation. For data quantification, each band was quantified via the ImageJ software (NIH).
Immunoprecipitation assay
A total of 600 μg protein lysates per sample were pre-cleared with IgA/G beads (Sigma, Shanghai, China). Endogenous Keap1 was precipitated with anti-Keap1 antibody and protein IgA/G beads (IP). The Keap1-Nrf2 immuno-complex was subjected to Western blotting analysis.
ssDNA ELISA assay of apoptosis
Following treatment, the cellular content of single strand DNA (ssDNA), a characteristic marker of cell apoptosis, was tested via the ApoStrand™ ELISA apoptosis detection kit (BIOMOL International, Plymouth Meeting, PA). The ssDNA ELISA OD at 450 nm was recorded.
TUNEL assay
Cells were seeded at 2.5 × 10 4 cells/cm 2 . Following treatment, cells were stained with TUNEL (10 μM) for 20 min. Cells with positive nuclear TUNEL staining were labeled as apoptotic cells. TUNEL ratio (TUNEL/DAPI× 100%) was recorded from a total of 500 cells from ten random views (1 × 100, Zeiss) for each treatment.
Annexin V assay
Cells with the applied treatment were harvested, washed, and incubated with Annexin V and propidium iodide (PI) dyes (10 μg/mL, Beyotime Biotechnology, Wuxi, China). Afterwards, cells were analyzed by fluorescent-activated cell sorting (FACS) on the FACSCalibur machine (BD Biosciences). Annexin V ratio was recorded.
Nrf2 knockout
The lentiCRISPR-GFP-Nrf2-puro KO construct, a gift from Dr. Li [24], was introduced to SH-SY5Y cells via transfection. FACS assay was then performed to sort the GFP-positive cells. Single cells were cultured onto 96-well plate to generate the monoclonal cells. Stable cells were further selected by puromycin. Nrf2 knockout was confirmed by Western blotting assay.
Keap1 knockout
The Keap1 CRISPR/Cas9 KO Plasmid was purchased from Santa Cruz Biotech (sc-400190-KO-2). The construct was transfected to HEK-293 cells with the lentivirus packaging plasmids, psPAX2 and pMD2.G (provided by Genechem, Shanghai, China) using Lipofectamine 2000 reagent. The lentivirus was harvested at day-3, added to SH-SY5Y cells in the presence of polybrene. Puromycin (1.0 μg/mL) was then included to select stable cells. Keap1 knockout in the stable cells was confirmed by Western blotting assay.
Keap1 mutation
The in vitro site-directed mutagenesis system (Genechem, Shanghai, China) was applied to generate Cys151S mutant Keap1 vector [25] (GFP-tagged). The construct was sub-cloned into the GV248 lentiviral vector, added to SH-SY5Y cells. Stable cells were selected by puromycin. Expression of the Cys151S Keap1 in stable cells was verified by Western blotting assay.
Statistical analysis
For each experiment, n = 5 (five replicated wells/dishes). Experiments were repeated three to four times. Data of all repeated experiments were pooled together to calculate mean ± standard deviation (SD). Data were analyzed by one-way ANOVA followed by a Scheffe's f-test via SPSS 18.0 software (SPSS Inc., Chicago, IL). A two-tailed unpaired t-test (Excel 2017) was applied to test significance between two treatment groups. Significance was chosen as P < 0.05.
Similar experiments were performed in the primary murine neurons. The basal and inducible Nrf2 activities in neurons are extremely low due to epigenetic repression of the Nrf2 gene promoter [26]. Therefore, the primary neurons were cultured in TSA (a histone deacetylase inhibitor)-containing medium for 10 days (DIV10). As shown, OI (25 μM) induced Nrf2-dependent gene (HO1, NQO1 and GCLC) expression (Fig. 1e and f), Nrf2 protein stabilization (Fig. 1f) and nuclear translocation (Fig. 1g). Keap1-Nrf2 association was disrupted by OI as well, as Keap1-bound Nrf2 was largely decreased (Fig. 1h). These results indicate that OI activates Nrf2 signaling in the neuronal cells. Notably, Keap1 expression was unchanged by OI (Fig. 1b, d, f and h). A time-dependent response to OI was tested. Results showed that significant Nrf2 protein stabilization as well as HO1 and GCLC protein expression were detected as early as 1 h following OI treatment in primary neurons (Additional file 1: Figure S1). Importantly, without TSA presence, basal and inducible (by OI) Nrf2 activities in DIV10 neurons were negligible (Additional file 1: Figure S1). Keap1 levels were unchanged (Additional file 1: Figure S1).
Ninjurin2 (Ninj2), a homolog of ninjurin1 (Ninj1), is a homophilic cellular adhesion molecule [27]. Ninj2 is expressed in neurons to promote neurite outgrowth [27]. Our results show that after OI (10-50 μM) treatment Ninj2 mRNA and protein levels were significantly elevated in SH-SY5Y cells ( Fig. 1a and b) and in primary neurons ( Fig. 1e and f ). These results imply that Ninj2 can be induced following Nrf2 activation by OI in neuronal cells.
In the primary murine neurons, pretreatment with OI (25 μM) alleviated H 2 O 2 -induced cell death (LDH medium release, Fig. 2i) and apoptosis (TUNEL ratio increase, Fig. 2j). Additionally, H 2 O 2 -induced cleavages of PARP, caspase-3 and caspase-9 were attenuated as well by OI (25 μM) (Fig. 2k). These results further confirmed the neuroprotective function of OI against H 2 O 2 . OI alone was ineffective in the neuronal cells (Fig. 2a-j).
Fig. 1 Four-octyl itaconate activates Nrf2 signaling in neuronal cells. SH-SY5Y cells (a-d) or the primary murine neurons (e-h) were treated with the applied concentration of 4-octyl itaconate (OI) for the indicated time; mRNA expression of Nrf2-regulated genes and Ninj2 was tested by qPCR assay (a and e); expression of listed proteins in total cellular lysates (b and f) and nuclear lysates (c and g) was tested by Western blotting assays. Keap1-Nrf2 association was detected by co-immunoprecipitation assays (d and h). Expression of listed proteins was quantified and normalized to the loading control (b, c, f and g). Keap1-bound Nrf2 was quantified as well (d and h). Lamin-B1 was tested as a marker of nuclear protein (c and g).
Nrf2 activation mediates 4-octyl itaconate-induced neuronal cell protection against H 2 O 2
To study the link between Nrf2 activation and OI-induced neuroprotection, an shRNA strategy was employed to silence Nrf2. As described, lentiviruses encoding Nrf2 shRNA ("shNrf2-1" or "shNrf2-2", with non-overlapping sequences) were added to proliferating SH-SY5Y cells. After selection by puromycin, stable cells with Nrf2 shRNA were established. Results show that Nrf2 mRNA and protein levels were significantly downregulated in Nrf2 shRNA-expressing stable cells, with or without OI treatment (Fig. 4a). Basal and OI-induced mRNA expression of HO1 and NQO1 was largely inhibited by the Nrf2 shRNA (Fig. 4b).
OI-induced Nrf2 protein stabilization as well as HO1 and NQO1 protein expression were also significantly attenuated by the Nrf2 shRNA (Fig. 4c). GCLC and Ninj2 mRNA and protein expression in response to OI were blocked as well by the Nrf2 shRNA in SH-SY5Y cells (Data not shown). Importantly, in Nrf2-silenced SH-SY5Y cells, OI was unable to inhibit H 2 O 2 -induced viability reduction (Fig. 4d) and cell apoptosis (Fig. 4e). Therefore, OI was ineffective against H 2 O 2 when Nrf2 was silenced ( Fig. 4d and e). Notably, the Nrf2-silenced SH-SY5Y cells were more vulnerable to H 2 O 2 , showing intensified cell death and apoptosis (as compared to control cells, Fig. 4d and e). Nrf2 shRNA alone did not affect SH-SY5Y cell death/apoptosis (Data not shown).
Keap1 knockout or Cys151S mutation abolishes 4-octyl itaconate-induced neuroprotection against H 2 O 2
It has been shown that OI alkylates Keap1, causing Keap1-Nrf2 disassociation, Nrf2 stabilization and activation [14]. If Keap1 is the primary target of OI, Keap1 depletion should abolish OI-induced actions in neuronal cells. To test this hypothesis, CRISPR/Cas9 gene editing method was again employed. As described, the lentiviral CRISPR/Cas9 Keap1 KO vector was transfected to SH-SY5Y cells. Via selection, stable cells with the construct were established ("Keap1-KO" cells). By performing the Western blotting assay, we confirmed that Keap1 protein was completely depleted in the stable cells (Fig. 5a), where Nrf2 protein level was significantly elevated (Fig. 5a). HO1, GCLC and Ninj2 protein expression were significantly increased as well in Keap1-KO cells (Fig. 5a), where HO1 and Ninj2 mRNA levels were significantly higher (Fig. 5b).
The above results suggest that Keap1 should be the primary target of OI. Itaconate directly alkylates Keap1 at cysteine-151 (Cys151) and other cysteine residues, which is essential for Nrf2 release and activation. Thus, a Cys151S mutant Keap1 [14] vector was transfected into SH-SY5Y cells. Stable cells were again established via puromycin selection. Western blotting results confirmed the expression of the mutant Keap1 (GFP-tagged) in the stable cells ["Keap1 (c151s)", Fig. 5f]. Significantly, OI-induced Nrf2 stabilization as well as HO1, NQO1, GCLC and Ninj2 protein expression were almost completely blocked in Keap1-mutant cells (Fig. 5f). Importantly, OI-induced SH-SY5Y cytoprotection against H 2 O 2 was significantly inhibited in cells with Cys151S-mutant Keap1 (Fig. 5g and h). As compared to the vector control cells, Keap1-mutant cells were more sensitive to H 2 O 2 -induced damage (Fig. 5g and h). These results indicate that Keap1 alkylation and subsequent Nrf2 activation could be the primary mechanism of OI-induced neuroprotection against H 2 O 2 .
Discussion
Oxidative stress-induced neuronal cell injury contributes significantly to the pathogenesis of neurodegenerative diseases [4,32,33]. Among various therapeutic strategies, one promising method is to boost the endogenous defense mechanisms (i.e. Nrf2 signaling) against oxidative stress through pharmacological intake of small compounds [11][12][13]. Activated Nrf2 separates from Keap1, enters the cell nucleus, and binds to ARE to promote transcription and expression of multiple antioxidant enzymes and detoxifying genes, thereby inhibiting neuronal oxidative injury [11][12][13]. Many Nrf2 activators have been proven to be strong radical scavengers, but often have severe adverse effects and poor bioavailability [34,35]. Recently, research attention has focused on searching for novel Nrf2 activators that can scavenge free radicals and efficiently protect neuronal cells from oxidative injury [34,35].
(See figure on previous page.) Fig. 4 Nrf2 activation mediates 4-octyl itaconate-induced neuronal cell protection against H 2 O 2 . SH-SY5Y cells (a-e) or the primary murine neurons (i-k), with the applied Nrf2 shRNA or the scramble control shRNA ("shC"), were either untreated or treated with 4-octyl itaconate (OI); mRNA and protein expression of listed genes are shown (a-c, and i). Cells were pretreated for 30 min with OI (25 μM), followed by stimulation with H 2 O 2 (300 μM) for the indicated time; cell viability (CCK-8 OD, d), cell death (LDH release, j) and apoptosis (TUNEL ratio increase, e and k) were tested. Stable SH-SY5Y cells, with the CRISPR/Cas9-Nrf2 KO construct ("Nrf2-KO") or the CRISPR/Cas9 control construct ("Cas9-c"), were treated with 4-octyl itaconate (OI); listed proteins are shown (f). Cells were pretreated for 30 min with OI (25 μM), followed by stimulation with H 2 O 2 (300 μM) for the indicated time; cell viability (g) and apoptosis (h) were tested. Expression of listed proteins was quantified and normalized to the loading control (c, f and i). "shNrf2 (m)" stands for murine Nrf2 shRNA (i-k). Bars stand for mean ± standard deviation (S.D., n = 5). # P < 0.05 vs. "shC" cells (a, b, d and e). # P < 0.05 (g, h, j and k).
In the current study, we show that OI, the cell-permeable derivative of itaconate [14], activated Nrf2 signaling in SH-SY5Y cells and primary murine neurons. OI induced Keap1-Nrf2 disassociation, Nrf2 protein stabilization and nuclear translocation, leading to expression of multiple known Nrf2 target genes. Importantly, OI pretreatment potently attenuated H 2 O 2 -induced ROS production, oxidative stress, lipid peroxidation and DNA damage. As a result, H 2 O 2 -induced neuronal cell death and apoptosis were significantly attenuated. These results show that activation of Nrf2 by OI should be a fine strategy to protect neurons/ neuronal cells from oxidative stress.
Ninjurin2 (Ninj2) is a homolog of Ninj1 [27]. It is an adhesion molecule expressed in neurons [27], and its functions in neurons are not fully understood. In the current study, we show that basal Ninj2 expression is low in SH-SY5Y cells and murine neurons. OI significantly elevated Ninj2 mRNA and protein expression. Importantly, OI-induced Ninj2 expression was almost completely blocked by Nrf2 shRNA/knockout or Keap1 mutation. Furthermore, Keap1 knockout induced Ninj2 expression in SH-SY5Y cells. These results suggest that Ninj2 could be a novel Nrf2-regulated gene that can be induced by OI in neuronal cells. Our results provide novel molecular insights that may help explain the established link between Ninj2 polymorphisms and ischemic stroke [36][37][38]. It will be interesting to further explore the underlying mechanism of OI-induced Ninj2 expression, as well as the possible antioxidant and neuroprotective functions of Ninj2.
Nrf2-Keap1 is the primary target of OI. Nrf2 knockdown by targeted-shRNA or CRISPR/Cas9 Nrf2 KO almost abolished OI-induced neuronal cell protection against H 2 O 2 . Further, OI was ineffective in Keap1-KO cells where Nrf2 is over-activated. OI alkylates Keap1 to block Keap1-Nrf2 association [14]. This shall lead to robust and sustained Nrf2 activation. Indeed, we show that ectopic overexpression of a Cys151S mutant Keap1 in SH-SY5Y cells reversed OI-induced Nrf2 activation and anti-H 2 O 2 neuroprotection. These genetic evidence suggest that Keap1-Nrf2 should be the primary target of OI in neuronal cells. | 2018-11-17T01:21:47.429Z | 2018-11-15T00:00:00.000 | {
"year": 2018,
"sha1": "13831520222f9928f2d69a7e6a9b819943b3abb0",
"oa_license": "CCBY",
"oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/s12964-018-0294-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13831520222f9928f2d69a7e6a9b819943b3abb0",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
102319081 | pes2o/s2orc | v3-fos-license | Torsional Resonance Noise Reduction by Motor Torque Phase Adjustment
Hybrid Vehicles (HEVs) and Electric Vehicles (EVs) are among the solutions for improving cruising efficiency and reducing CO2 emissions through regenerative braking and operating the engine at its optimum condition. To minimize the size of the driving units, it is necessary to decrease the motor size while increasing the maximum torque. For these purposes, permanent-magnet motors have become common in recent years. The motor driving systems of an HEV or EV must operate over variable speed ranges of up to 1:5, and over this large range silent and smooth operation is desirable and important. The skew method is often adopted to ensure smooth starting and stopping and silent operation by reducing the torque ripple, but it can itself cause motor noise and vibration. This paper describes the mechanism of torsional resonance noise and vibration arising from the skew method and a countermeasure to reduce them.
Introduction
Hybrid Vehicles (HEVs) and Electric Vehicles (EVs) are among the solutions for improving cruising efficiency and reducing CO2 emissions through regenerative braking and operating the engine at its optimum condition. To minimize the size of the driving units, it is necessary to decrease the motor size while increasing the maximum torque. For these purposes, permanent-magnet motors have become common in recent years [1]. The permanent magnet (PM) motor driving systems of an EV or HEV must operate over variable speed ranges of up to 1:5, and for this large range silent and smooth operation is desirable and important [2]. The skew method is often adopted to ensure smooth starting and stopping and silent operation of EVs and HEVs by reducing the torque ripple. There are two types of skew method: the stator skew, which shifts the stator slot angle by one half of the slot pitch angle along the motor axial direction, and the rotor skew, which shifts the rotor pole angle by one half of the slot pitch angle along the motor axial direction. In the rotor skew method of an embedded permanent magnet type, a discontinuous (stepped) pole angle change is commonly adopted because it is easy to manufacture. The authors have been developing a new motor, the Permanent magnet Reluctance Motor (PRM), which largely employs the reluctance torque by changing the magnet position and magnetic circuit design to satisfy specifications [8], [9], [10], [11]. However, the driving system still produced noise and vibration in some operating conditions, even after removing the noise caused by the resonance between the core natural frequency and the sub-harmonic electromagnetic force and the noise caused by the circulating current driven by eccentric rotor motion [13], [14].
PRM Motor
The flux-weakening method for constant-power operation enlarges the operating range of various Permanent Magnet (PM) motors (Fig. 1) [6], [7]. But even for an Interior Permanent Magnet motor (IPM), the constant-power operating speed range without a voltage booster circuit is only about 1:3, as shown in Fig. 2 [1], [2], [11]. This also results in poor efficiency because of the increased flux-weakening current and iron losses in the high-speed region [8], [9]. In addition, there is still a possibility of breakdown of the inverter capacitors and/or power devices due to excess voltage if flux-weakening control is lost. The PRM was developed to resolve those defects of the IPM by largely employing the reluctance torque through changes to the magnet position and magnetic circuit design. Increasing the reluctance torque allows the amount of permanent magnet material, and hence the back EMF, to be reduced. This allows a large variable speed range of over 1:5, a smaller flux-weakening current, and higher efficiency in the high-speed operating region [1], [2], [8], [11], [12].
Low speed Generating operation Noise
The PRM for hybrid SUV use has 8 rotor poles, 48 slots, and two star-connected stator circuits wired in a skip-pole arrangement to maximize its torque and output power while considering motor efficiency, noise, and producibility [13], [14]. When it delivered the required torque in the driving system during generating operation, it made noise in the low rotating speed range from 3,000 rpm to 4,000 rpm. This low-speed noise was measured and a Fast Fourier Transform (FFT) analysis was made to clarify how the noise arises. Fig. 6 shows the FFT analysis result, in arbitrary units, at the 48th order of rotating speed. In the current design there are two noisy regions: around 3,000 rpm to 4,000 rpm, shown as broken circle A in Fig. 6, and around 7,000 rpm. A noise peak was expected around 7,000 rpm because of the resonance between the core natural frequency and the electromagnetic force of the K=0 mode, but such a large noise around 3,000 rpm to 4,000 rpm was unexpected. For mass production, this noise had to be reduced without degrading the motor performance.
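As an illustration of the 48th-order analysis described above, the following sketch extracts the amplitude of the 48th rotational order from a noise recording at a fixed speed; the sampling rate, signal amplitudes, and noise level are assumptions, not the measured SUV data.

```python
# Minimal sketch of the order analysis: read the amplitude of the 48th
# rotational order (48 x shaft frequency) from a synthetic noise signal
# recorded at a fixed rotating speed.
import numpy as np

def order_amplitude(signal, fs, rpm, order=48):
    """Amplitude of the given rotational order (order x shaft frequency)."""
    n = len(signal)
    spectrum = 2.0 / n * np.abs(np.fft.rfft(signal))   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f_target = order * rpm / 60.0                       # 48th-order frequency in Hz
    return np.interp(f_target, freqs, spectrum)

fs = 51200.0                                            # assumed sampling rate [Hz]
t = np.arange(0.0, 1.0, 1.0 / fs)
for rpm in (3000, 3500, 7000):
    f48 = 48 * rpm / 60.0
    # synthetic microphone signal: a 48th-order tone plus broadband noise
    sig = 0.1 * np.sin(2 * np.pi * f48 * t) + 0.02 * np.random.randn(t.size)
    print(rpm, "rpm ->", round(order_amplitude(sig, fs, rpm), 3), "(arb. unit)")
```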
Noise and core natural frequency measurements
Noise and the core natural vibration frequency were measured to identify the root cause of the low-speed generating operation noise around 3,000 rpm to 4,000 rpm. Fig. 7 shows the core natural frequency in the radial direction of the PRM. The natural frequency and vibration mode of the stator core were measured in a free-support condition: the core with the stator coil sat on a rubber sheet and was struck in the tangential direction by an impulse hammer. Fig. 8 and Fig. 9 show the measured stator natural frequency and its mode.
Rotor skew and electromagnetic force
Rotor skew is adopted in the current motor design to ensure silent and smooth operation by reducing torque ripple. The original skew in the current motor is a half-slot skew.
It is designed to suppress the fundamental order of the stator slot torque ripple. A schematic drawing of one pole position is shown in Fig. 10.
Figure 10: Schematic drawing of a half-slot skew. The slot-ripple torque element at every stator slot in the upper half of the rotor is opposite to the one in the lower half, so the torque ripples compensate each other on the rotor axis. Their reactions on the stator, however, are expected to be uniform over the tangential direction and opposite in the upper and lower parts of the stator. This higher-order force can excite torsional vibration around 3,725 Hz.
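The compensation argument above can be illustrated numerically: with a half-slot skew, the 48th-order ripple contributions of the upper and lower rotor halves are shifted by half a slot pitch (180 degrees of the ripple harmonic), so they cancel on the shaft while their difference, which acts on the stator, remains. The sketch below uses unit ripple amplitudes purely as an assumption.

```python
# Sketch of the half-slot skew compensation: the two rotor halves produce
# slot-ripple torques in antiphase, which cancel on the shaft but leave an
# equal-and-opposite (torsional) forcing on the upper and lower stator halves.
import numpy as np

slots = 48
theta = np.linspace(0.0, 2 * np.pi / slots, 200)       # one slot pitch of rotation
ripple_upper = 1.0 * np.sin(slots * theta)              # upper rotor half (assumed amplitude)
ripple_lower = 1.0 * np.sin(slots * theta + np.pi)      # lower half, shifted by half a slot

net_shaft_torque_ripple = ripple_upper + ripple_lower   # ~0 everywhere
torsional_excitation = ripple_upper - ripple_lower      # twice the single-half ripple

print("max |shaft ripple|     :", np.max(np.abs(net_shaft_torque_ripple)))
print("max |torsional forcing|:", np.max(np.abs(torsional_excitation)))
```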
Rotor skew optimization
The root cause is supposed to be the resonance between the axial torsional natural frequency and the electromagnetic force. This phenomenon can be avoided if the rotor skew phase is changed and optimized. Two skew schemes were selected that keep the torque ripple compensation and the required maximum torque of 210 Nm: the 4-step V skew and the Zig-Zag skew, shown schematically in Fig. 11. The low-speed noise was measured and an FFT analysis was made for both skew-type motors. Fig. 12 shows the FFT analysis result of the 4-step V skew motor's noise, in arbitrary units, at the 48th order of rotating speed. The noisy region around 3,000 rpm to 4,000 rpm is largely suppressed, as expected; shifting the axial distribution of the electromagnetic force away from the axial torsional natural frequency mode is very effective. Fig. 13 shows the FFT analysis result of the Zig-Zag skew motor's noise, in arbitrary units, at the 48th order of rotating speed. The noisy region around 3,000 rpm to 4,000 rpm is suppressed significantly, and the noise around 9,000 rpm becomes quieter compared to the current half-slot skew and the 4-step V skew, suggesting that it is effective for a higher axial torsional natural frequency mode. The 4-step V skew, however, is more effective for the noise around 3,000 rpm to 4,000 rpm, and it was adopted for the final advanced motor design.
CONCLUSIONS
Noise, vibration frequency, and vibration mode were measured to identify the root cause of the noise in the low rotating speed range from 3,000 rpm to 4,000 rpm during generating operation. Through these measurements and rotor skew phase optimizations, the root cause was identified as the resonance between the axial torsional natural frequency and the electromagnetic force of the slot ripple. The current design has a conventional half-slot skewed permanent magnet rotor; the rotor skew phase was changed to a 4-step V skew. After adoption of this countermeasure, the large plateau of noise around 3,000 rpm to 4,000 rpm disappeared, demonstrating the effectiveness of the countermeasure. This method is applied to a large-output, large variable speed range permanent magnet reluctance motor (PRM) for the final advanced motor design.
After this large improvement in noise, the PRM is now applied in a hybrid SUV, a passenger car, and a hybrid truck [2], [3].
Figure 1: Rotor configurations of PM motors.
Figure 2: Performance of motors at variable speed.
Figure 3: Typical cross section of the PRM and its variants. Both variants create a magnetic flux flow controller that increases the difference between the d-axis and q-axis reactances, either by cutting air holes inside the rotor or by setting a groove on the outside of the rotor.
Figure 6: 48th-harmonic noise versus rotating speed of the original-design PRM for the SUV.
Figure 7: Stator core natural frequency in the radial direction of the PRM.
Figure 12: 48th-harmonic noise versus rotating speed of the 4-step V skew PRM.
Figure 13: 48th harmonics Noise of rotating speed of the Zig-Zag skew PRM | 2019-02-13T05:48:42.798Z | 2012-06-29T00:00:00.000 | {
"year": 2012,
"sha1": "b3f10f1c1bbc793cbd50e181b73fae2de1bc2ac5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2032-6653/5/2/527/pdf?version=1526564610",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "424792276790901e8ce41f45b035cc09114b2b0e",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
42255020 | pes2o/s2orc | v3-fos-license | Observation of femtosecond molecular dynamics via pump–probe gas phase x-ray scattering
We describe a gas-phase x-ray scattering experiment capable of capturing molecular motions with atomic spatial resolution and femtosecond time resolution. X-ray free electron lasers can deliver intense x-ray pulses of ultrashort duration, making them suitable to study ultrafast chemical reaction dynamics in an ultraviolet pump, x-ray probe scheme. A cell diffractometer balances sample flow with gas density and laser focusing conditions to provide adequate scattering vector resolution with high signal intensity and near-uniform excitation probability. Images from a pixel-array x-ray detector, spatially and electronically calibrated, allow for detection of scattering intensity changes below 1%. First experiments on the ring-opening reaction of 1,3-cyclohexadiene to form 1, 3, 5-hexatriene show a rapid initial reaction on an 80 fs time scale.
Introduction
The study of molecules as they interact and transform, a dominant quest of chemistry for over a century, has led to a profound understanding of the nature of chemical reactions and to advanced tools used to create molecules for a myriad of applications [1][2][3]. Chemical reactions are accompanied by changes in molecular structure that pose experimental challenges related to the scale of the processes: atomic motions occur on the order of femtoseconds and over distances measured in ångströms.
To date, most time-resolved studies of chemical reaction dynamics employ spectroscopic techniques. In probing the energies and populations of excited states, common methods are able to follow the flow of energy through molecular systems [4][5][6][7][8][9]. Yet with few exceptions [10][11][12][13][14], they remain incapable of determining molecular structures. Scattering techniques, which are widely used to probe chemical structures in static systems, can be extended to perform on ultrafast timescales as well, thereby connecting spectroscopic information describing the energy flow within molecules to a structural description of the nuclear motions.
Electron scattering, the most commonly employed method for gaseous systems, has recently been extended to the ultrafast time regime with great success [15][16][17][18][19][20][21]. The short wavelength of electrons offers superior spatial resolution in the real space of molecular structure. Challenges to the time resolution stem from the velocity difference of the electrons compared to the excitation laser pulse that initiates the reaction in a pump-probe experiment, the mutual spacecharge repulsion of electrons within a single bunch, and the timing jitter from the initial electron speeds near the cathode. Creative solutions to these issues have been advanced: relativistic electrons approach the speed of light, so that the velocity spread might not limit the time resolution [22]; recognizing that space-charge broadened pulses have almost perfect chirp, they can be compressed [21,23]; or, minimizing the number of electrons per pulse can avoid space-charge broadening in the first place [24]. Taking advantage of these advances, compressed electron pulses have been used to investigate the ultrafast dynamics of gold [25], to investigate the molecular dynamics of diarylethene [26], and to produce single-shot scattering patterns of aluminum [20] and gold foils [21]. For gaseous samples, while promising early studies illustrate the potential of the technique [27,28], progress has been hampered by the small sample densities, the unfavorable scaling of scattering signals at large angles and the extended sizes of interaction regions required to reach adequate signal levels.
In the gas phase, the internal dynamics of molecular motions can be isolated as reactions proceed without the interference of nearby molecules. Therefore, gas-phase molecular dynamics are an important source of reference data and fundamental studies that compare experiments to detailed theoretical calculations. The advent of x-ray free electron lasers (XFELs) that deliver ultrashort x-ray pulses synchronized to pulsed lasers [29] presents an opportunity to observe scattering patterns even for low density gases of small organic molecules. By sidestepping many of the challenges posed by electron scattering, ultrafast x-ray scattering can examine the chemical reaction dynamics of isolated molecules.
The total elastic scattering cross-section for 1,3-cyclohexadiene, a well-known model system for chemical dynamics studies, is 7.8×10 −24 cm 2 [30] for 20 keV x-rays. Depending on the scattering vector, this is about a factor of 10 5 smaller than comparable electron scattering crosssections. To make up for the small cross sections, it is necessary to use an x-ray source with a very high photon flux. This is now possible through the advent of fourth generation XFELs: the linac coherent light source (LCLS) is capable of generating tunable x-rays with pulse lengths down to 2 fs and up to 10 12 photons per pulse [31]. LCLS already has enabled novel structural studies with single x-ray pulses on macromolecules such as mimivirus particles [32] and proteins [33][34][35], which require a high photon flux to image structures before they are destroyed, as well as ultrafast temporal studies of nucleobase thymine via Auger spectroscopy [36].
X-ray elastic scattering maps molecular structure in amorphous (non-crystalline) samples by measuring the atom-atom pair distributions. As x-rays pass through matter some of them scatter, resulting in a momentum transfer q that is related to the scattering angle 2θ as $q = (4\pi/\lambda)\sin\theta$, where λ is the x-ray wavelength. In the independent-atom model, the rotationally averaged elastic scattering is defined as the sum of atomic and molecular contributions [37], $I(q) = I_0\,[\sum_i f_i^2(q) + \sum_i \sum_{j \neq i} f_i(q) f_j(q)\,\sin(q r_{ij})/(q r_{ij})]$ (1), with well-known elastic scattering atomic form factors f i (q) [38]. Here, the r ij are the inter-nuclear distances and I 0 is the intensity of the incident x-ray. For molecular samples deliberately aligned or preferentially selected by a polarized excitation laser, the scattering signals additionally depend on the relative orientations of the detection vector and the laser geometry [39,40]. The time-evolving structures of molecules undergoing a chemical reaction can be followed by measuring the scattering pattern as a function of delay time between an excitation pulse and the x-ray probe pulse. In 1,3-cyclohexadiene, the concept of a well-defined structure throughout the ring-opening is justified, since the molecule travels ballistically down the excited state surfaces [41,42], a point to which we will return later. The necessity of balancing opposing demands, in particular a high signal-to-noise ratio and an absence of spurious background signals, required the development of a novel diffractometer and associated experimental protocol. We present here the design of such an apparatus and discuss the methods critical to the calibration and analysis needed to produce ultrafast molecular movies using x-rays. The apparatus was successfully implemented to study the time-resolved ring-opening reaction of 1,3-cyclohexadiene [43,44], a system for which detailed insights from the scattering experiments are discussed.
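As a concrete illustration of equation (1), the sketch below evaluates the rotationally averaged independent-atom scattering for a rigid molecule. The atomic form factors are crudely approximated here by the atomic numbers (in practice the tabulated form factors of [38] would be used), and the geometry is a generic two-atom example, not 1,3-cyclohexadiene.

```python
# A minimal sketch of the independent-atom (Debye) scattering model of
# equation (1) for a rigid, rotationally averaged molecule. Form factors are
# approximated by f_i(q) ~ Z_i, which is a crude assumption.
import numpy as np

def debye_intensity(q, atoms, coords):
    """I(q)/I0 = sum_i f_i^2 + sum_{i != j} f_i f_j sin(q r_ij)/(q r_ij)."""
    f = np.array([float(Z) for Z in atoms])             # placeholder form factors
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    I = np.full_like(q, np.sum(f ** 2))                 # atomic (self) term
    for i in range(len(atoms)):
        for j in range(len(atoms)):
            if i == j:
                continue
            x = q * r[i, j]
            I += f[i] * f[j] * np.sinc(x / np.pi)        # np.sinc(y) = sin(pi*y)/(pi*y)
    return I

# Example: a generic diatomic of two Z = 6 atoms separated by 1.4 angstrom.
q = np.linspace(0.1, 4.0, 50)                            # scattering vector [1/angstrom]
coords = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
print(debye_intensity(q, atoms=[6, 6], coords=coords)[:5])
```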
The pump-probe scheme
To measure time-evolving molecular structures, a pumpprobe scheme is used where a laser pulse initiates the reaction and a variably delayed x-ray probe pulse maps the molecular structure at different delay times. Assuming that the optical excitation leads to deterministic dynamics, a movie is created by collaging the individual structural snapshots.
In our experimental implementation, we paired an ultraviolet excitation pulse (267 nm, 65 fs, 100 μm FWHM focus) with a variably timed x-ray probe pulse, each operating at 120 Hz. The pulses are coupled together 18 cm upstream through a mirror oriented at 45°such that they arrive at the sample region collinearly (figure 1). Scattering patterns were taken with both fundamental (8.3 keV, 0.1494 nm, 30 fs, 10 12 photons/pulse, 30 μm FWHM focus) and 3ω (20.1 keV, 0.0617 nm, 30 fs, 10 10 photons/pulse, 30 μm FWHM focus) wavelengths of the LCLS x-ray source. For a diffractometer with fixed angular limits, the higher-energy 3ω x-ray photon provides a larger range in q-space, but at the cost of 100 times fewer photons per x-ray pulse.
2.1.1. Determination of temporal overlap. While the exact time zero is determined as part of the data analysis, an experimental measurement of time zero is important so that data is acquired in the proper time range. With the chemical reactions of interest often proceeding within less than 1 ps (10 −12 s), it is imperative that the proper time window is set before conducting an experiment.
At the LCLS XPP hutch [29,45], where these experiments were carried out, coarse timing between the laser and x-rays was achieved by inserting a metal-semiconductor-metal (MSM) diode downstream of the interaction region. These diodes have a fast time response for both the x-rays and ultraviolet laser pulses. Waveforms from the detector were measured on a remotely controlled 18 GHz oscilloscope, thus allowing coarse temporal overlap at the 10 ps level. More precise timing (<250 fs) was found using a bismuth (1, 1, 1) crystal inserted near the interaction point [46]. A similar MSM diode, positioned at the Bragg condition of bismuth at 8.3 keV (or 20.1 keV), was used to monitor the reflection of the optical laser off the bismuth target. As the relative time delay, $\Delta t = t_{\mathrm{X\text{-}ray}} - t_{\mathrm{UV}}$, crossed from positive to negative, the optical reflection of the bismuth is altered, revealing the machine-limited jitter of the temporal overlap at the interaction region in our scattering cell.
2.1.2. Jitter correction. The timing of the x-ray and optical lasers at LCLS is RF-controlled [47], but due to electrical noise there is an unpredictable shot-to-shot jitter of >250 fs . For experiments where femtosecond time resolution is important, this may be longer than the dynamics of interest. To determine the relative arrival times of the UV and x-ray photon pulses more accurately, a spectral encoding time-tool was employed that reduces this uncertainty to less than about 10 fs [48]. Operating on a fraction of optical laser light picked off by a beamsplitter upstream from the diffractometer, white light continuum (WLC) is generated in a sapphire substrate. The WLC is spectrally chirped and transmitted through a thin film of silicon nitride (Si 3 N 4 ). When the film absorbs the arriving x-ray photon, the material undergoes an index of refraction change for the optical pulse, creating a change in the transmitted spectrum and a clear measure of the relative arrival of the optical and x-ray pulses. In this way, the time-delay for each collected frame of scattering pattern can be corrected, allowing images to be sorted into time-bins as small as 25 fs.
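The per-shot sorting described above amounts to adding the time-tool reading to the nominal stage delay and histogramming the result into 25 fs bins; the sketch below illustrates this bookkeeping with made-up delays and jitter values.

```python
# Sketch of per-shot time sorting: correct the nominal delay of each frame by
# the spectral-encoding time-tool reading, then group frames into 25 fs bins.
# The arrays below are illustrative assumptions, not experimental data.
import numpy as np

nominal_delay = np.array([100.0, 100.0, 200.0, 200.0, 300.0])    # fs, set by delay stage
timetool_jitter = np.array([-180.0, 40.0, 260.0, -90.0, 10.0])   # fs, measured per shot

true_delay = nominal_delay + timetool_jitter
bin_width = 25.0                                                  # fs
bin_index = np.floor(true_delay / bin_width).astype(int)

for b in np.unique(bin_index):
    frames = np.where(bin_index == b)[0]
    print(f"bin [{b * bin_width:.0f}, {(b + 1) * bin_width:.0f}) fs -> frames {frames}")
```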
The diffractometer
High demands are placed on the design and construction of the diffractometer cell: any unwanted scattering of the phenomenally intense x-ray beam by apparatus components such as windows, apertures, beam blocks, x-ray beamline components, or even passage through air, can easily overwhelm the weak scattering signals from the low-density target gas. Moreover, since the scattering signals change upon optical excitation by a mere 1%, any background signal could produce shot noise large enough to mask the pump-probe signals.
To address these challenges, we developed a windowless scattering cell where the low-pressure sample gas streams into a vacuum through a small orifice through which both the laser and the x-ray pulses enter. Here we define 'windowless' in terms of the upstream side of the cell, as it is important to ensure that the primary x-ray beam does not scatter onto the detector from sources other than the sample itself. Importantly, the cell is designed such that any x-rays scattered by the entrance aperture are blocked from directly hitting the detector. Similarly, x-rays scattered from propagation of the primary x-ray beam through air cannot reach the detector pixels.
In order to maximize the scattering signal, one wishes to maximize the sample pressure. However, large pressures of strongly absorbing samples cause attenuation of the laser, limiting the amount of sample. Consequently, a compromise must be struck as is discussed in more detail below. In our experiments using the x-ray fundamental, the sample region pressure was regulated to a constant 3-4 Torr (∼10 17 molecules cm −3 ), which implies for the selected x-ray diameters and interaction lengths a total of ∼8×10 12 molecules in the interaction region. For the 3ω x-ray experiments, where 100 times fewer photons were available, we used sample pressures up to 40-50 Torr. The latter conditions were not used for pump-probe experiments. In practice, the signals at each end of the range are weak and their utility limited by noise, which is a function of the number of scatterers and thus path length of gas exposed at each angle.
Gas turnover.
Because of the windowless design of the cell, there is a constant flow of gas. This has the advantage that, at any time, the sample probed in the cell does not contain a significant number of previously exposed molecules. The gas sample is continuously replenished by a feed line perpendicular to the incoming beams while the molecules in the sample region continuously escape into the upstream vacuum chamber through the 200 μm aperture. The resulting average distance traveled between pump-probe pairs in our experiment is thus found to be 2 mm. Since this is significantly larger than the width of the interaction region, which is 30 μm in diameter, the proportion of molecules probed multiple times in the experiment is negligible.
2.2.3. Scattering from apparatus components. Since gasphase hydrocarbons are weak scatterers, it is vital to ensure that scattering of x-rays from the primary beam by any other source is eliminated. This also includes solid apparatus structures such as apertures and mirrors upstream of the diffractometer. The UV and x-ray photons are coupled into the diffractometer's sample region using a 200 μm aperture, which separates the sample region from the vacuum-chamber that houses the in-coupling mirror. This aperture blocks transmission of x-rays scattered upstream of the in-coupling mirror and any residual gas, while permitting the transmission of the primary beam. Scattering of the primary x-ray beam by this aperture itself is blocked by a much larger-diameter aperture (upper scatter limit) within the sample region as well as by a beam-blocking washer (lower scatter limit) upstream from the beryllium window (figure 3). By judiciously selecting aperture sizes it is possible to largely prevent x-rays scattered from the 200 μm aperture from reaching the detector.
We note that x-rays scattered from the exit window, made from a thin piece of sapphire (figure 3), are prevented from reaching the detector by a lead shield. The same shield prevents x-rays scattered by air from reaching the detector. Finally we note that a 500 μm thick beryllium window ( figure 3) is not in the path of the primary x-ray beam. While this window is traversed by x-rays scattered from the gas sample, the secondary scattering of those x-rays is negligible.
2.2.4. Focal parameters. In order to limit multi-photon processes that might place the molecule on highly excited surfaces or even ionize the molecules, the UV excitation laser was only weakly focused onto the sample such that it causes less than-but not much less than-10% excitation. Given the dual-aperture design described above, where the upstream region of the sample volume scatters onto low-q regions of the detector and the downstream region of the sample volume scatters onto high-q regions of the detector, the laser pulse may experience significant attenuation by the scattering molecules. To maintain the fraction of excited molecules along the length of probed molecules near 10%, we balanced the attenuation with the focal parameters of the excitation pulse.
The absorption of the UV pump beam follows the Beer-Lambert law, $I = I_0 \cdot 10^{-\sigma N l}$, where I 0 is the initial UV intensity, I is the intensity remaining after the beam propagates through the path length l, N is the number of scatterers in the path, and σ is the absorption cross section of the sample. Over the path length, strong absorbers significantly attenuate the intensity of the UV beam, which could result in a decreased fraction of excited scatterers toward the end of the path. To counteract this effect, the UV beam was focused at the downstream end of the sample cell, so that its beam diameter decreases as it traverses the sample cell. Fits of the theoretical calculations to the experimental data validated the results of this calculation.
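The balancing act between absorption and focusing can be sketched as follows: the transmitted UV intensity is computed from the Beer-Lambert law (written here in the base-10 form used above) while the beam radius shrinks toward the downstream focus, and the resulting on-axis fluence sets the local excitation fraction. All numbers (cross section, density, beam radii, the ~10% normalization) are illustrative assumptions rather than the values used in the experiment.

```python
# Sketch of the excitation-fraction bookkeeping: Beer-Lambert attenuation of
# the UV pulse along the cell, partially compensated by focusing the beam at
# the downstream end. Every numerical value is an illustrative assumption.
import numpy as np

sigma = 4.0e-18          # assumed absorption cross section [cm^2]
n = 1.2e17               # assumed number density at a few Torr [molecules/cm^3]
z = np.linspace(0.0, 1.35, 100)                     # path through the cell [cm]

transmission = 10.0 ** (-sigma * n * z)             # Beer-Lambert, base 10
# Focus placed at the downstream end: beam radius decreases along the path.
w = np.interp(z, [0.0, 1.35], [140e-4, 100e-4])     # assumed beam radius [cm]
fluence = transmission / (np.pi * w ** 2)           # relative on-axis fluence

excitation = fluence / fluence[0] * 0.10            # normalized to ~10% at the entrance
print(f"excitation fraction along the cell: {excitation.min():.3f} - {excitation.max():.3f}")
```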
Detection of scattered photons
The pattern of scattered x-ray photons is collected by a 2.3 megapixel Cornell-SLAC Pixel Array Detector (CSPAD) [49,50] positioned approximately 4 cm from the scattering region. The scattering patterns were radially averaged to produce one-dimensional patterns, since the structural data is encoded in a nearly radially-symmetric signal. In order to interpret the results of the scattering experiments, a careful calibration of the instrumentation was necessary. Particularly important is the determination of the distance and the position of the CSPAD relative to the scattering cell.
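For reference, the radial averaging step can be implemented by binning detector pixels according to their distance from the calibrated image center, as in the sketch below; the image, center, and bin count are placeholders rather than the actual CSPAD geometry.

```python
# Sketch of radial averaging: bin pixels of a 2D detector image by their
# distance from the image center and average the intensity per bin.
import numpy as np

def radial_average(image, center, n_bins=200, mask=None):
    """Return (bin centers in pixels, mean intensity per bin)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    if mask is None:
        mask = np.ones_like(image, dtype=bool)
    r_max = r[mask].max()
    bins = np.linspace(0.0, r_max, n_bins + 1)
    which = np.digitize(r[mask], bins) - 1
    sums = np.bincount(which, weights=image[mask], minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    with np.errstate(invalid="ignore"):
        profile = sums[:n_bins] / counts[:n_bins]
    return 0.5 * (bins[:-1] + bins[1:]), profile

img = np.random.poisson(5.0, size=(512, 512)).astype(float)   # stand-in image
r_px, I_r = radial_average(img, center=(256.0, 256.0))
print(I_r[:5])
```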
2.3.1. Instrument calibration. Correction of gain differences between pixels, and calibration of the diffractometer geometry, were performed with scattering patterns collected from an atomic scatterer, Xenon. To determine a precise position of internal components, such as the vented screw and apertures, the difference between the collected and theoretical patterns was minimized. The Xenon pattern provides a correction factor, g, as a function of q; in order to compare experimental to theoretical scattering patterns, this factor was applied to all experimental scattering patterns.
2.3.2. Centering of the detector. Since the CSPAD is perpendicular to the primary x-ray beam, accurate radial and angular integration of the scattering images requires calibration of the center of the image on the detector [51]. Since the transverse positional stability of the x-ray beam is better than 10 μm [31], a single determination of the center position is sufficient for all frames. To find the optimal Cartesian coordinates of the center of the image on the detector, the difference signal between pumped and unpumped 1,3-cyclohexadiene was examined and coordinates of the detector center were chosen that produce the largest intensity difference between the radial pattern's local maxima and local minima near 1.9 Å −1 and 3.0 Å −1 , respectively (figure 4). The difference pattern was smoothed to eliminate local minima due to noise. This was the only instance of data smoothing; no smoothed data was used for any purpose other than finding the detector center.
Calibration of detector distance.
To ensure that the conversion from pixel coordinates to momentum transfer is properly performed, the distance between the diffractometer and the detector has to be determined. For this, sulfur hexafluoride (SF 6 ) was used as a model molecular scatterer, since its structure is well-known, including interatomic vibrational terms at room temperature [52], which modify the molecular scattering portion of equation (1) according to [53,54] by multiplying each $\sin(q r_{ij})/(q r_{ij})$ term by $\exp(-l_m^2 q^2/2)$, where $l_m$ is the mean vibrational amplitude of the associated interatomic distance.
The detector distance was found using a least-squares minimization of the percent difference between the radiallyaveraged theoretical and experimental patterns after the instrument function and center calibration corrections have been applied (figure 5). With this calibration, the detector distance was found to be 38.90 mm from the beryllium window in 8.3 keV experiments and 32.71 mm in 20.1 keV experiments, as shown in figure 5.
Photon counting.
Raw images from the CSPAD are processed using a combination of techniques. First the dark image (also known as a 'pedestal'), collected during each data set, is subtracted on a per-pixel basis. Subsequently, the 'common mode' noise is removed from each tile individually using either of two techniques. In the first technique, used with 20.1 keV experiments, the mean value of all pixels with a value within 30 ADU above or below the pedestal is treated as the common mode noise value and subtracted from the value read from all other pixels on that tile. In the second technique, used with 8.3 keV experiments, the common mode value was calculated from a series of unbonded pixels on each ASIC, which experience common mode noise but are physically unable to provide a reading for the detection of a photon.
Because random and common-mode fluctuations over a large number of pixels risk dominating the number of photons scattered onto the detector, a hybrid photon-counting method was used wherein a lower limit was set for values to be recorded as non-zero. This number was set as 110 ADU for 20.1 keV photon experiments where the value for single photon detection was approximately 150 ADU. For the 8.3 keV photon experiments, when the value for a single photon was only 30 ADU, the lower threshold for photon counting was 2σ where the distribution of values in dark frames for each pixel was modeled as a Gaussian distribution. Above this threshold, the value of each pixel was retained (not converted into a number of photons) so that inter-pixel charge sharing of a single photon event would not be lost.
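A minimal sketch of the per-tile correction chain (pedestal subtraction, common-mode removal estimated from pixels that saw no photon, and the lower counting threshold) is given below; the ADU values follow the 20.1 keV case quoted above, but the tile data themselves are synthetic.

```python
# Sketch of the per-tile correction chain: dark (pedestal) subtraction,
# common-mode removal from near-zero pixels, and a lower threshold below
# which pixel values are set to zero (hybrid photon counting).
import numpy as np

def correct_tile(raw, pedestal, common_mode_window=30.0, threshold=110.0):
    tile = raw - pedestal                               # per-pixel dark subtraction
    near_zero = np.abs(tile) <= common_mode_window      # pixels that saw no photon
    if near_zero.any():
        tile = tile - tile[near_zero].mean()            # remove common-mode offset
    tile[tile < threshold] = 0.0                        # hybrid photon-counting cut
    return tile

rng = np.random.default_rng(1)
pedestal = rng.normal(1300.0, 5.0, size=(185, 388))     # one CSPAD tile (assumed values)
raw = pedestal + rng.normal(8.0, 6.0, size=pedestal.shape)   # noise + common mode
raw[50, 100] += 150.0                                   # one 20.1 keV photon (~150 ADU)
corrected = correct_tile(raw, pedestal)
print("photon pixels found:", int((corrected > 0).sum()))
```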
Several masking filters were applied to images to remove pixels that did not respond properly to photon absorption. In 20.1 keV experiments, pixels with extreme pedestal values (less than 1000 ADU or greater than 2000 ADU) were masked. This was only necessary for CSPAD version 1.3 and was therefore not used for 8.3 keV experiments, which were performed after CSPAD version 1.5 upgrades [55]. A mask was also made to ignore pixels with high photon counts, determined by the number of counts during vacuum-only exposures. These counts come from non-vapor sources and therefore the corresponding pixels are not suitable for lowerintensity gas scattering. Finally, a mask of 'dead' pixels, defined as pixels that never recorded photons during any exposure to gas samples of CHD, was applied.
Experiments using 20.1 keV x-ray photons also experienced larger gain differences (pixel-dependent responses in ADU per photon) because they also used the older, less consistent CSPAD version 1.3, so a gain map was made to normalize the single-photon peak of each pixel to 150 ADU using Xenon exposures. This was not necessary in 8.3 keV experiments, which used a more consistent CSPAD version 1.5.
Data treatment
Before the experimental scattering patterns are compared to theoretical patterns, the data is corrected for many effects unrelated to the molecular dynamics. These are discussed below.
2.4.1. Theoretical scattering patterns. In certain cases, it is necessary to produce absolute scattering patterns of gases, particularly of unreactive species such as xenon and sulfur hexafluoride. These heavier gases require more advanced modeling of their scattering intensities, because a dispersion correction has to be applied and the x-ray polarization has to be considered. Both terms are wavelength-dependent. The dispersion correction for forward scattering has both a real and an imaginary part, which transforms the atomic scattering factors as $f(q) \to f(q) + f' + \mathrm{i}f''$. The necessary factors can be approximated from tabulated values at similar x-ray wavelengths [38,56].
Because the x-ray source is horizontally polarized, photons are preferentially scattered in the vertical direction. Scattering images are corrected by a factor of $(\sin^2\varphi + \cos^2\varphi\,\cos^2 2\theta)$, where 2θ is the scattering angle and φ is the azimuthal angle as measured from the plane of polarization [57].
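Applied per pixel, the correction amounts to dividing the measured intensity by this factor; the sketch below does this on a small grid of scattering and azimuthal angles, which are placeholders for the real detector geometry.

```python
# Sketch of the polarization correction for a horizontally polarized beam:
# divide each pixel's intensity by P = sin^2(phi) + cos^2(phi) * cos^2(2*theta),
# with phi measured from the plane of polarization.
import numpy as np

def polarization_factor(two_theta, phi):
    return np.sin(phi) ** 2 + np.cos(phi) ** 2 * np.cos(two_theta) ** 2

two_theta = np.deg2rad(np.linspace(1.0, 60.0, 5))      # placeholder scattering angles (2*theta)
phi = np.deg2rad(np.array([0.0, 45.0, 90.0]))          # placeholder azimuthal angles
TT, PHI = np.meshgrid(two_theta, phi, indexing="ij")
P = polarization_factor(TT, PHI)

measured = np.ones_like(P)                             # stand-in intensities
corrected = measured / P
print(np.round(P, 3))
```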
2.4.2. Difference scattering patterns. The changing scattering patterns in time-resolved experiments are best represented by difference patterns between the pattern of the ground-state molecule, derived either from patterns collected before time zero or from frames where the UV pulse has been blocked, and the instantaneous scattering pattern, taken at a specific delay time after excitation. The difference patterns are particularly advantageous in cases where recording an instrumental calibration with an atomic scatterer such as Xenon would be cumbersome, since they factor out a number of effects, including pixel-dependent gain differences. It is moreover convenient to express the difference pattern as a percentage, $\Delta I\%(t,q) = 100\,\Delta I(t,q,\gamma)/I_{\mathrm{off}}(q)$, where γ is the fraction of excited molecules and $\Delta I(t,q,\gamma) = I_{\mathrm{on}}(t,q,\gamma) - I_{\mathrm{off}}(q)$ is the 'laser on'-'laser off' difference signal. In using percentage differences, many signal contributions that do not change upon UV excitation cancel out, such as the magnitude of scattering intensity as a function of q (a result of the diffractometer's internal shape) and the atomic scattering signal. This results in scattering patterns that are independent of many experimental complications, and emphasizes the changing molecular scattering signal by eliminating the unchanging atomic scattering. Additionally, this representation improves the visualization of pattern elements at large q-values, where absolute differences would show only small changes.
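The percentage-difference construction can be illustrated with synthetic one-dimensional patterns: with an assumed 10% excitation fraction and a few-percent change in the excited-state pattern, the resulting signal stays below 1%, consistent with the sensitivity quoted in the abstract.

```python
# Sketch of the percentage-difference representation: ('laser on' - 'laser off')
# divided by 'laser off', in percent. The curves are synthetic stand-ins.
import numpy as np

q = np.linspace(0.5, 4.5, 100)
I_off = 1000.0 * np.exp(-0.6 * q) + 50.0                   # stand-in static pattern
gamma = 0.10                                                # assumed excitation fraction
I_exc = I_off * (1.0 + 0.08 * np.sin(2.0 * q))              # stand-in excited-state pattern
I_on = gamma * I_exc + (1.0 - gamma) * I_off

pct_diff = 100.0 * (I_on - I_off) / I_off
print(f"max |percent difference| = {np.abs(pct_diff).max():.2f} %")   # well below 1 %
```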
Data acquisition.
Datasets were collected in runs of approximately 1000 frames per time point, including frames with neither x-ray nor UV laser exposure ('dark'), frames with x-ray exposure only, and frames with both UV and x-ray exposure. To ensure consistency in x-ray performance, frames were collected in a 17:17:1 ratio of UV-on: UV-off: dark frames while time points were collected in randomized order, minimizing the effect of systematic drifts in x-ray or UV performance.
Scaling.
Since the number of x-ray photons per pulse can vary up to 100% shot-to-shot [25], the scattering patterns have to be carefully scaled. The most accurate method found to determine the relative number of photons in each shot is to integrate the number of photons collected by the detector. Consequently, the scattering patterns were scaled by the detector's total integrated intensity after radial averaging. Although one might expect that time-evolving structures might exhibit different sums in the observed scattering region, the low percentage of excited molecules suggests that this fluctuation is minimal. Moreover, for our data analysis we used the same scaling method in computing the expected theoretical patterns, so that any such errors should have no impact on the derived molecular structures.
2.4.5. Attenuation. Since the scattered x-ray photons travel through several components prior to the detector, the images are adjusted to account for attenuation by these materials. In all scattering patterns, beryllium is the largest but most consistent attenuator of scattered photons. At a thickness of 500 μm, this window transmits 91.8% of 8.3 keV photons and 98.3% of 20.1 keV photons [58]. However, since the scattered photons do not traverse the window perpendicularly, photons scattered to large q-values experience a longer path length through beryllium (up to 1366 μm). Consequently, a qdependent scaling factor has to be applied to the scattering intensity. A similar factor has to be applied to account for the attenuation by the air between the scattering cell and the detector. The attenuation by air at standard conditions for the relevant path lengths (between 4 and 12 cm) is 4.0%-11.6% for 8.3 keV x-rays and 0.3%-1.0 % for 20.1 keV x-rays [58].
Within the chamber itself, the attenuation depends on the sample gas, through which scattered x-ray photons traverse up to 22.5 mm. This gives a significant effect in Xenon, where the 8.3 keV x-rays are attenuated by up to 4.5% at 10 Torr and 293 K, whereas the 20.1 keV photons are only attenuated by up to 0.4% under the same conditions. In SF 6 the effect is small, with attenuations of up to 2.3% for 8.3 keV photons at 45 Torr and 293 K but only up to 0.17% of 20.1 keV photons [58]. The attenuation of scattered x-ray photons by 1,3cyclohexadiene was negligible at the pressure of our studies, 3-4 Torr [58].
2.4.6. Detector planarity. To eliminate another source of instrument-specific effects on the scattering patterns, the scattering angle dependence of the distance of the detector from the sample cell is accounted for. The scattering intensity decays as $1/R^2$, where R is the distance between the scattering center and the detector. In the far-field limit this factor is uniform for every point on the detector. However, in our experiment, where the detector is less than 40 mm from the sample cell, R ranges from 60 to 120 mm. To account for this, the recorded intensity after radial integration is multiplied by R 2 to normalize intensities across the detector.
Similarly, the scattering patterns were corrected to replace the nominal pixel size (110 μm×110 μm) with an 'effective' pixel size, defined as the projection of the pixel area normal to the scattering center for that pixel.
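Both geometric corrections can be combined per pixel, as in the sketch below: the recorded intensity is multiplied by R² (relative to the beam-center distance) and divided by the cosine of the incidence angle that defines the effective (projected) pixel area. The detector dimensions and distance used here are placeholders, not the CSPAD calibration.

```python
# Sketch of the flat-detector geometry corrections: undo the 1/R^2 decay and
# account for the projected ("effective") pixel area seen from the sample.
import numpy as np

pixel = 110e-6                                   # nominal pixel pitch [m]
D = 0.039                                        # assumed sample-to-detector distance [m]
yy, xx = np.indices((512, 512))
x = (xx - 256) * pixel
y = (yy - 256) * pixel
R = np.sqrt(x ** 2 + y ** 2 + D ** 2)            # scattering center -> pixel distance

intensity = np.ones((512, 512))                  # stand-in recorded intensities
corrected = intensity * (R / D) ** 2             # undo 1/R^2 relative to the beam center
corrected /= (D / R)                             # effective (projected) pixel area factor
print(f"correction at the detector corner: x{corrected.max():.2f}")
```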
Matching theoretical predictions to experimental observation
In order to analyze the scattering data, we use an optimization procedure to match theoretically predicted scattering patterns to the observed time-resolved signal. Inversion procedures that yield nuclear density distributions exist for diatomic [59,60] and polyatomic [60,61] molecules, but the limited q-range in the present measurements renders these approaches infeasible. Instead, we use state-of-the-art ab initio nonadiabatic quantum molecular dynamics simulations [43,44] to restrict the available configuration space in an accurate manner. A large number of trajectories, each representing a feasible reaction path for the CHD ring-opening reaction, are used to fit the observed experimental signal, $I(t,q) = \sum_k w_k^2 I_k(t,q)$ (7), where $I(t,q)$ is the predicted signal, the $w_k^2$ are the weights for the ensemble of trajectories, with $\sum_k w_k^2 = 1$, and $I_k(t,q)$ is the scattering signal for each trajectory, calculated using the Debye formula in equation (1) with additional corrections for inelastic scattering. The approximations inherent in this approach, and further details regarding the simulations and the optimization procedure, are discussed elsewhere [43,44,62,63]. We focus here on how the signal in equation (7) is processed for comparison to the experimental signal. As seen in section 2.4.2, the experimental signal is represented by a percentage 'laser on'-'laser off' signal. If the 'laser on' signal is $I_{\mathrm{on}}(t,q,\gamma) = \gamma I_{\mathrm{exc}}(t,q) + (1-\gamma) I_{\mathrm{off}}(q)$, where γ is the excitation fraction, $I_{\mathrm{exc}}(t,q)$ the signal from the excited molecules, and $I_{\mathrm{off}}(q)$ is the 'laser off' signal, then the difference signal becomes $\Delta I(t,q,\gamma) = \gamma\,[I_{\mathrm{exc}}(t,q) - I_{\mathrm{off}}(q)]$. However, this expression is only valid if the x-ray pulse intensity is the same for both the 'on' and the 'off' signals. In the XFEL measurements, the intensity of the x-ray pulses fluctuates significantly, and we therefore scale the 'on' signal with respect to the total detected intensity of the 'off' signal, by the ratio $Q_{\mathrm{off}}/Q_{\mathrm{on}}(t,\gamma)$, where $Q_{\mathrm{off}}$ is the integrated intensity on the detector for the 'off' signal, and $Q_{\mathrm{on}}(t,\gamma)$ is the integrated intensity for the 'on' signal. It is straightforward to modify the difference signal in equation (9) to account for a q-dependent excitation fraction, in the case of a long interaction region in the experiment. Finally, since the experimental signal tapers off for larger values of q, it is better to use a percentage difference signal, $\Delta I\%(t,q) = 100\,\Delta I(t,q,\gamma)/I_{\mathrm{off}}(q)$. The predicted signal must also be convoluted to match the duration of the pump and probe pulses, and, finally, the theoretical signal is time-shifted relative to the experimental signal to optimize the fit to the experiment. The optimization procedure therefore scans the trajectory weights $w_k^2$, the excitation fraction γ, and the exact time zero within the experimental limits.
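A stripped-down version of the optimization is sketched below: non-negative weights w_k², constrained to sum to one, are fitted so that the weighted combination of (synthetic) trajectory signals reproduces a target pattern at a single time point. The excitation fraction, time shift, and pulse-duration convolution of the full procedure are omitted, and the trajectory signals are stand-ins, not the ab initio simulations of [43,44].

```python
# Sketch of the trajectory-weight fit: find w_k^2 (summing to one) such that
# sum_k w_k^2 * I_k(q) matches a target difference pattern at one time point.
import numpy as np
from scipy.optimize import minimize

q = np.linspace(0.5, 4.5, 60)
I_k = np.stack([np.sin(k * q) / (k * q) for k in (1, 2, 3)])   # stand-in trajectory signals
target = 0.6 * I_k[0] + 0.4 * I_k[2]                            # synthetic "experiment"

def residual(x):
    w2 = x ** 2
    w2 = w2 / w2.sum()                     # enforce sum_k w_k^2 = 1
    model = w2 @ I_k
    return np.sum((model - target) ** 2)

res = minimize(residual, x0=np.ones(3) / np.sqrt(3.0), method="Nelder-Mead")
w2 = res.x ** 2 / np.sum(res.x ** 2)
print("fitted weights w_k^2:", np.round(w2, 3))
```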
Results and discussion
The low signal level associated with gas-phase x-ray scattering experiments was overcome by higher pressure, longer path length, and use of the hybrid photon counting method. Careful consideration and implementation of all these techniques has made it possible to produce high quality scattering patterns with a very small number of scattering molecules. With a long column of reacting molecules, loss of resolution arises from the number of different scattering angles visible to each point on the detector. The upper and lower scatter limits reduce this loss of resolution, however, by limiting the length of gas exposed that is visible to each pixel. Although the overall length of scattering gas is 13.5 mm, in this doubleaperture design only 5-6 mm is visible to any point on the detector, giving a resolution of around 0.25 Å −1 and 0.50 Å −1 over all regions for 8.3 keV and 20.1 keV x-ray photons respectively ( figure 6). To ensure that this does not introduce errors into the theoretically-fit models, simulated scattering patterns are convoluted by the range of scattering vector visible to each nominal q value.
After radial averaging, the scattering patterns are corrected by the number of visible scatterers as a function of scattering vector, q. At 3.7 Torr, the majority of the detector is exposed to 2×10 12 molecules (figure 6).
The use of two separate laser systems for the pump and probe pulses produced some variability in their co-timing. Beyond the determination of time zero, high time resolution was achieved through the use of a spectral time-tool [48]. Shown in figure 2 are the time-delays determined for four sets of nominal (intended) time points, showing that data can be collected with a time resolution that exceeds the jitter of the LCLS source. The background scattering levels were minimized by the use of apertures, which were designed such that scattering of the primary x-ray beam by any source upstream of the sample gas would be attenuated by a thick metallic surface and scattering from any source downstream of the sample gas would be attenuated by a lead foil positioned prior to the detector (figure 3). The result of this was very low-noise CHD diffraction patterns, as shown in figure 7.
Summary
We demonstrated a windowless cell diffractometer for ultrafast gas phase x-ray scattering experiments that can be used for a range of hard x-ray wavelengths. This diffractometer offers several benefits over molecular beams or nozzle setups, including a higher number of scatterers (to overcome the low scattering cross-section of small hydrocarbons) and the necessary apertures to prevent upstream scattering from reaching the detector.
This experimental design has been demonstrated to provide adequate scattering patterns for low-pressure gases with sufficient resolution to perform time-resolved dynamical studies, and is intended to work for a large number of molecular systems. It has successfully generated patterns for 1,3-cyclohexadiene, sulfur hexafluoride, and xenon at pressures ranging from 3 to 50 Torr, and we anticipate this design to be useful for a number of other molecules in future studies.
Future studies will utilize a simplified diffractometer geometry wherein the sample cell will exhibit a reduced reaction path length, thereby relaxing the high focusing parameter requirements of a long path. The simplification of the diffractometer may also permit scattering at much larger angles and therefore obtain a wider range of q, increasing the experimental resolution and possibly eliminating the technique's substantial reliance on sophisticated quantum calculations. Such experiments would be possible in the CXI hutch at LCLS. Future experiments could also improve the spatial resolution of the molecular target by employing molecular alignment techniques. | 2017-10-24T02:05:29.081Z | 2016-01-08T00:00:00.000 | {
"year": 2016,
"sha1": "f1c9f36b3c4a7a598ab0fcf214999eb9dc5b9d7c",
"oa_license": "CCBY",
"oa_url": "https://www.pure.ed.ac.uk/ws/files/25001766/Budarz_JPB_2016.pdf",
"oa_status": "GREEN",
"pdf_src": "IOP",
"pdf_hash": "a9eb89c9def51d62bd37215c9c7461ccb18255db",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245108016 | pes2o/s2orc | v3-fos-license | Interpreting the Manning Roughness Coefficient in Overland Flow Simulations with Coupled Hydrological-Hydraulic Distributed Models
There is still little experience of the effect of the Manning roughness coefficient in coupled hydrological-hydraulic distributed models based on the solution of the Shallow Water Equations (SWE), where the Manning coefficient affects not only channel flow in the basin hydrographic network but also rainfall-runoff processes on the hillslopes. In this kind of model, roughness takes the role of the time of concentration in classic conceptual or aggregated modelling methods, such as the unit hydrograph method. Three different approaches were used to adjust the Manning roughness coefficient so that the results fit other methodologies or field observations: by comparing the resulting time of concentration with classic formulas, by comparing the runoff hydrographs with those obtained with aggregated models, and by comparing the runoff water volumes with observations. A wide dispersion of roughness coefficients was observed, with values generally much higher than those commonly used in open channel flow hydraulics.
Introduction
The estimation of overland flow is one of the most relevant processes when assessing the hydrological response of a basin. Although hydrological processes are extremely complex and still not fully understood, their evaluation is essential to characterize the flow produced by a basin under rainfall events. The characterisation of overland flow is a crucial aspect of flood hazard management, floods being among the most catastrophic natural hazards and causing the greatest material damage and loss of life all over the world [1][2][3]. The flow characteristics are key inputs for specific risk analyses as well as for integration into flood risk early warning systems [4][5][6][7].
Numerical modelling is a widely used technique that allows studying this kind of complex phenomena for which, in general, no analytical solutions exist. Hydrological modelling has traditionally been performed by means of so-called aggregated models [7,8], which rely on the simplification of the study area as a single unit (generally the basin) or as the contribution (or "aggregation") of the different parts (subbasins) constituting it, all linked through a routing method. These models are usually based on conceptual or empirical formulations to describe several physical processes related to precipitation, infiltration, vegetation interception, overland storage, surface runoff, etc., and normally represent the basin, or the subbasins, as homogeneous units [9].
In this aggregated approach, one of the fundamental parameters needed for the transformation of rainfall into runoff is the time of concentration (t_c). A common practice at present consists of first performing a hydrologic simulation with an aggregated model and subsequently using a hydraulic model to characterize the flow. This coupled model technique usually implies solving the equations of mass and momentum conservation in two dimensions, i.e., the two-dimensional shallow water equations (2D-SWE). In these, the hydrological processes (rainfall-runoff transformation and losses) are usually considered as source terms of the mass conservation equation, and their contribution to momentum is considered only in a few cases.
Both aggregated and coupled hydrological-hydraulic distributed models depend on a large number of parameters (e.g., soil and land characteristics, vegetation, topography, atmospheric conditions, etc.) [34][35][36][37], but they also depend on the practitioner's expertise. Nevertheless, coupled distributed models constitute a conceptual change with respect to aggregated models, as they are based on a physical approach. This means that the routing of overland flow depends on the terrain roughness and is represented in the model with hydraulic equations, commonly the Manning equation. Thus, the time of concentration (t_c) and the lag time (t_lag) are no longer part of the modelling process; the Manning roughness coefficient becomes the only parameter, apart from the geometry, that conditions the runoff propagation.
For hydraulic applications, the Manning roughness coefficient has been widely studied; numerous publications and guidelines for its estimation in natural or artificial channels exist [38][39][40]. However, in coupled hydrological-hydraulic distributed modelling, the value of the roughness coefficient must incorporate not only the material roughness itself, but also all the sub-grid geometric features that affect friction, or energy dissipation, and thus the water depth and flow velocity [38]. It is worth mentioning that the flow patterns of overland flow in hydrological models are totally different from those of the standard uniform steady open channel flow for which the Manning equation was originally introduced. In hydrological applications, water depths can be of a few millimetres or less. Thus, the roughness coefficient values for the extremely shallow overland flows characteristic of hydrological applications can by no means be those in the aforementioned guidelines, or at least those must be questioned and verified. For overland flows, such as those on hillsides, the coefficient values tend to be higher than the common ones used in river or stream flows [31,41,42]. Moreover, in coupled hydrological-hydraulic distributed models, the terrain roughness is a fundamental parameter that governs water propagation. Together with the geometry discretisation, it takes the role of determining the basin response, as the time of concentration does in aggregated hydrological models.
This work aims to provide insight into the role of the Manning roughness coefficient in overland flow simulations by means of two-dimensional coupled hydrological-hydraulic distributed numerical models and, thus, to assist in the construction of more accurate distributed rainfall-runoff models. For that purpose, three different strategies were used to characterize the hydrological response of a basin. The first one considers a set of four small basins where the time of concentration was estimated using empirical formulas and the Manning coefficient in the distributed model was adjusted so that the S-shaped hydrograph fits the time of concentration. In the second case, the hydrograph obtained by means of traditional aggregated hydrological modelling of a basin was used to calibrate the terrain roughness in a 2D-SWE-based distributed model. Finally, the last strategy consisted of calibrating the roughness coefficients of a monitored basin to adjust the computed results to four observed rainfall events.
Materials and Methods
As already introduced in the previous section, two different modelling approaches were used in the present work: (1) traditional aggregated hydrological modelling and (2) coupled hydrologic-hydraulic distributed modelling. The two techniques are briefly presented in the following sections.
Aggregated Hydrological Modelling
Case Study 2, which consists of a 1016 km2 basin located in Mexico, was first analysed with an aggregated hydrological modelling approach using the HEC-HMS software [43]. A standard methodology, based on the local administration guidelines, was applied to discretize the basin. The unit hydrograph (UH) methodology was used for the rainfall-runoff transformation, based on the dimensionless unit hydrograph of the Soil Conservation Service [16,44]. The time of concentration (t_c) was estimated by means of the Kirpich formula, and the t_lag parameter was derived from it. Precipitation losses were accounted for using the SCS-CN method [44,45], also referred to as the NRCS-CN method [46] after the U.S. Soil Conservation Service was renamed the Natural Resources Conservation Service. All parameters of the numerical model are detailed in the Supplementary Material.
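As a minimal, hedged sketch of the two aggregated-model ingredients just mentioned, the snippet below evaluates the Kirpich time of concentration (standard SI form) and the SCS-CN runoff depth; the constants are the commonly published ones, and the main channel length used in the example is a hypothetical value rather than the study's data.

```python
def kirpich_tc_hours(length_m: float, slope_m_per_m: float) -> float:
    """Kirpich time of concentration, SI form:
    t_c [min] = 0.0195 * L^0.77 * S^-0.385, with L in metres and S in m/m."""
    tc_minutes = 0.0195 * length_m**0.77 * slope_m_per_m**-0.385
    return tc_minutes / 60.0

def scs_cn_runoff_mm(rainfall_mm: float, curve_number: float,
                     initial_abstraction_ratio: float = 0.2) -> float:
    """SCS-CN direct runoff depth (mm) for a storm depth P (mm)."""
    s = 25400.0 / curve_number - 254.0       # potential maximum retention (mm)
    ia = initial_abstraction_ratio * s       # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: CN = 76.7 and mean slope 0.75% as reported for the basin;
# the 45 km channel length and the 120 mm storm are hypothetical.
tc = kirpich_tc_hours(length_m=45_000, slope_m_per_m=0.0075)
t_lag = 0.6 * tc                             # lag time as commonly used with the SCS UH
runoff = scs_cn_runoff_mm(rainfall_mm=120.0, curve_number=76.7)
print(f"t_c = {tc:.1f} h, t_lag = {t_lag:.1f} h, runoff = {runoff:.1f} mm")
```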
Coupled Hydrological-Hydraulic Distributed Modelling

Iber solves the 2D Shallow Water Equations (2D-SWE) using a conservative scheme based on the Finite Volume Method (FVM) on a structured and/or unstructured mesh of triangles and/or quadrilaterals. It uses the Roe scheme [60], i.e., the Godunov method together with the Roe approximate Riemann solver [33].
The shear stress terms due to friction are generally incorporated in the SWE via the concept of the friction slope. This term expresses the contribution to the momentum change of the energy dissipation produced by flow-boundary interactions, by sub-grid obstructions, and by the losses due to flow turbulence if, as is generally done in coupled hydrological-hydraulic distributed modelling, no turbulence model is used. The friction slope is calculated using the Manning formula, which, in 2D, results in the following expressions for the X and Y directions:

$$S_{f,x} = \frac{n^2\, U_x\, |U|}{h^{4/3}}, \qquad S_{f,y} = \frac{n^2\, U_y\, |U|}{h^{4/3}} \quad (1)$$

where n is the Manning roughness coefficient; U_x and U_y are the two components of the depth-averaged velocity vector in the X and Y directions, respectively; |U| is the modulus of the flow velocity; and h is the depth. The DHD numerical scheme [25], developed ad hoc for hydrological modelling purposes, was used in all simulations with Iber. Briefly, this scheme merges the hydrostatic pressure gradient with the bed slope in a single term that depends on the free surface gradient. With this approach, when the free surface is horizontal, an exact balance between the bed slope and the hydrostatic pressure gradient is obtained naturally. As a result, more stable, efficient, and faster simulations (up to 1.5 times) are achieved as compared with other traditional FVM schemes [25,58,59].
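The sketch below illustrates how these friction slopes can be evaluated per cell in a 2D-SWE code, including a simple wet-dry threshold; the array-based layout and the handling of dry cells are assumptions for illustration and do not reproduce Iber's internal implementation.

```python
import numpy as np

def manning_friction_slopes(ux, uy, h, n, h_min=1e-4):
    """Friction slopes S_fx, S_fy from the 2D Manning formula.

    ux, uy : depth-averaged velocity components (m/s), per cell
    h      : water depth (m), per cell
    n      : Manning coefficient (s/m^(1/3)), scalar or per cell
    h_min  : wet-dry threshold below which a cell is treated as dry
    """
    speed = np.hypot(ux, uy)                       # |U|
    wet = h > h_min
    h_eff = np.where(wet, h, np.inf)               # dry cells -> zero friction slope
    sfx = n**2 * ux * speed / h_eff**(4.0 / 3.0)
    sfy = n**2 * uy * speed / h_eff**(4.0 / 3.0)
    return np.where(wet, sfx, 0.0), np.where(wet, sfy, 0.0)

# Example: millimetric overland flow vs. river-like depth with the same n
sfx_thin, _ = manning_friction_slopes(np.array([0.1]), np.array([0.0]),
                                      np.array([0.002]), n=0.10)
sfx_deep, _ = manning_friction_slopes(np.array([0.1]), np.array([0.0]),
                                      np.array([1.0]), n=0.10)
print(sfx_thin, sfx_deep)   # friction is orders of magnitude larger for the thin flow
```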
Two additional options were also used in the simulations. The first removes local depressions generated along the flow path during the discretisation process [61]. This tool is based on the technique proposed by Jenson and Domingue [62], in which each depression is refilled considering the lowest elevation of its neighbouring cells. The application of this methodology is convenient even if the Digital Terrain Model (DTM) was previously treated with Geographical Information System (GIS) software, due to two factors: (1) common GIS tools obtain the flow paths based on the connections between cells through their edges defined by the vertices, while the FVM uses the cell elevations to compute the flow; and (2) the topographical discretisation in the numerical model can use elements larger than the DTM raster cell size and, in doing so, new depressions might be generated. Thus, this option ensures a proper definition of the flow path for the FVM while avoiding spurious depressions. The second option handles the transition from wet to dry conditions, and vice versa: Iber implements a wet-dry method by defining a water depth threshold (ε_wd) below which a cell is considered to be dry. Low values of ε_wd, such as the 0.0001 m used herein in all cases, guarantee mass conservation and improve the rainfall-runoff transformation as well as the overland flow propagation [63].
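As a rough illustration of the depression-removal idea (raising a cell towards the lowest elevation of its neighbours), a heavily simplified sketch is given below; it only removes single-cell pits, whereas the Jenson and Domingue [62] algorithm referenced above also resolves multi-cell depressions.

```python
import numpy as np

def fill_single_cell_pits(z, max_iter=1000):
    """Very simplified depression filling on a raster DTM: raise any interior cell
    that is lower than all four of its neighbours up to the lowest neighbour
    elevation. Border cells are left untouched."""
    z = np.asarray(z, dtype=float).copy()
    for _ in range(max_iter):
        lowest_nb = np.minimum.reduce([np.roll(z, 1, 0), np.roll(z, -1, 0),
                                       np.roll(z, 1, 1), np.roll(z, -1, 1)])
        pits = z < lowest_nb
        pits[0, :] = pits[-1, :] = pits[:, 0] = pits[:, -1] = False
        if not pits.any():
            break
        z[pits] = lowest_nb[pits]
    return z
```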
Other particular parameters, as well as the numerical discretisation, are detailed in the description of each case study and in the Supplementary Material.
Case Studies
The role of the Manning roughness coefficient in the numerical modelling of overland flow with coupled hydrological-hydraulic distributed numerical models was addressed by focusing on the numerical strategy rather than on the particularities of specific case studies around the world. To that end, well-documented studies that analyse the hydrological response of a basin from different points of view were used.
In Case Study 1, theoretical aspects related to the definition of the time of concentration and the role of the bottom roughness on the hydrological response were analysed. In Case Study 2, a distributed numerical model was calibrated by varying the roughness coefficient to fit the results to the hydrograph resulting from an aggregated model forced with synthetic hyetographs based on historical data. For that purpose, the parameters involved in the aggregated model were determined following the local administration recommendations. Finally, in Case Study 3, four different well-documented rainfall events in a monitored and regulated basin were used to force a distributed model aiming to analyse the role of the roughness coefficient in the overland propagation. The roughness coefficient was calibrated to adjust the simulated water elevation to the observed one during each event. A total of six basins, with different topographic and hydrologic characteristics, were analysed ( Figure 1).
Case Study 1: Adjustment of the Roughness Coefficient Based on the Time of Concentration
In Case Study 1, the hydrological behaviour of four basins presented by Grimaldi et al. [10] is analysed. The four basins, located in the USA, are: Brazos basin (Cow Bayu), San Antonio (Escondido Creek), Trinity basin (North Creek), and Brazos basin (North Elm Creek). Table 1 summarizes their main geometric characteristics. In Brazos (Cow Bayu) and San Antonio there are large urban areas, and both basins are regulated by a reservoir located in their lower parts. The Trinity basin is a semi-rural area with five reservoirs that play a significant role in its hydrological behaviour. Finally, the largest basin, Brazos (North Elm Creek), is located in a rural area with two reservoirs (see Figure S1). In this case study, a distributed coupled model was built for each basin using Iber. Details of the parameters and structure of the numerical models are described in the Supplementary Material. No aggregated hydrological modelling was used.
The numerical models were operated with constant rainfall intensities of infinite duration, analysing the output hydrographs (S-shaped). The required time for the hydrograph to stabilize to constant discharge, i.e., the time for the basin to achieve a stationary state, should coincide with the time of concentration if the WMO definition [14] is adopted. For the adjustment of the roughness coefficients based on t c , the six common expressions used by Grimaldi et al. [10], plus the Témez [17] formula commonly used in Spain, were selected. At this point, the question of ambiguity on the concept and formulas to estimate t c arises again as pointed out by Beven [12] and commented upon in the introduction. If the variable used to compare methodologies is the time the S-curves need to stabilize, but the formulas used to estimate the t c are empirically derived using the SCS-UH method or other similar means, there is a conceptual discrepancy as explained by Beven [12].
The different formulas used to estimate t c together with the results are presented in Table 2. It also shows the minimum, maximum, and mean values of the time of equilibrium that are going to be compared with the results of the distributed method.
Case Study 2: Adjustment of the Roughness Coefficient Based on the Peak Time and Discharge from Aggregated Hydrological Models
Case Study 2 corresponds to the Mexican basin called Marquelia, located in the Guerrero region (Figure 1). The basin precipitation is monitored by eight rain gauge stations within or near it, and there is a hydrometric station at the outlet (see Figure S3a). The area of the Marquelia basin is 1016 km2 and its mean slope is 0.75%. Land uses are characterized by large extensions of secondary vegetation (48.51%) and forested areas (24.23%). Other areas are covered by rivers (10.70%), pasturelands (10.60%), and rainforest agriculture (5.42%). According to the World Reference Base for Soil Resources [67], the basin edaphology is mainly composed of large areas of regosol (71%) and other soil types such as phaeozem (11.3%) and cambisol (9.6%). From the land use classification, a Curve Number (CN) of 76.7 was estimated as an area-weighted average value over the whole basin.
Historical rainfall data were used to generate synthetic hyetographs associated with a 50-year return period (T50). Alternate block synthetic hyetographs [16] for a T50 event were calculated for each rain gauge station using the Intensity-Duration-Frequency (IDF) curves [68] and the time of concentration of 12.5 h resulting from the application of the Kirpich formula. Historical data of annual maximum precipitation were corrected, as proposed by Campos-Aranda [69], to account for the effect of using a fixed observation time interval [70] and for the non-uniformity of rainfall over the whole area of the basin.
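A minimal sketch of the alternate block construction of a design hyetograph from an IDF relationship is shown below; the IDF expression used in the example is a generic placeholder, not the fitted curves of reference [68].

```python
import numpy as np

def alternating_block_hyetograph(idf_intensity, total_duration_h, dt_h):
    """Alternate block design hyetograph from an IDF relationship.

    idf_intensity    : callable, duration (h) -> average intensity (mm/h)
    total_duration_h : storm duration (h), e.g. the time of concentration
    dt_h             : block size (h)
    Returns block intensities (mm/h) ordered with the peak near the centre.
    """
    n = int(round(total_duration_h / dt_h))
    durations = dt_h * np.arange(1, n + 1)
    cum_depth = np.array([idf_intensity(d) * d for d in durations])    # mm
    increments = np.diff(np.concatenate(([0.0], cum_depth)))           # mm per block
    blocks = np.sort(increments)[::-1]                                  # descending
    ordered = np.empty(n)
    centre = (n - 1) // 2
    for k, depth in enumerate(blocks):        # alternate right/left of the centre
        offset = (k + 1) // 2
        idx = centre + offset if k % 2 else centre - offset
        ordered[idx] = depth
    return ordered / dt_h                                               # mm/h

# Hypothetical IDF of the form i = a / (d + b)^c (not the study's fitted curve)
hyeto = alternating_block_hyetograph(lambda d: 60.0 / (d + 0.4) ** 0.75,
                                     total_duration_h=12.5, dt_h=0.5)
```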
On one hand, the output hydrograph was first calculated with aggregated modelling using HEC-HMS. The basin was discretized into seven subbasins. In each one, t_c was estimated with the Kirpich formula and t_lag was taken as 0.6 t_c. The subbasins were connected by three river reaches where the flow was propagated using the Muskingum routing method [16].
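For reference, a minimal sketch of the Muskingum routing step mentioned above is given below; the K, χ (here x) and time-step values in the example are placeholders, not the ones calibrated in the study.

```python
import numpy as np

def muskingum_route(inflow, k_h, x, dt_h):
    """Route an inflow hydrograph through a reach with the Muskingum method.

    inflow : inflow discharges (m^3/s) at a fixed time step
    k_h    : storage constant K (h); x : weighting factor (0 <= x <= 0.5)
    dt_h   : time step (h)
    """
    inflow = np.asarray(inflow, dtype=float)
    d = 2.0 * k_h * (1.0 - x) + dt_h
    c0 = (dt_h - 2.0 * k_h * x) / d
    c1 = (dt_h + 2.0 * k_h * x) / d
    c2 = (2.0 * k_h * (1.0 - x) - dt_h) / d      # c0 + c1 + c2 = 1
    outflow = np.empty_like(inflow)
    outflow[0] = inflow[0]                        # assume an initial steady state
    for t in range(1, inflow.size):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

# Placeholder example: triangular inflow routed with K = 3 h, X = 0.2, dt = 1 h
q_in = np.concatenate([np.linspace(10, 300, 8), np.linspace(300, 10, 20)])
q_out = muskingum_route(q_in, k_h=3.0, x=0.2, dt_h=1.0)
```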
On the other hand, Iber was used for coupled hydrological-hydraulic distributed modelling. In this case, the terrain roughness was calibrated to adjust the hydrological behaviour (peak time and peak discharge) to that of the aggregated model. Six different Manning coefficient (n) values related to the land use distribution of the basin (see Figure S3b) were used. All information regarding the topography, land uses, and precipitation data is presented in the Supplementary Material.
Case Study 3: Adjustment of the Roughness Coefficient Based on Observed Storm Events
Case Study 3 focuses on La Muga basin (Spain). Four well-documented rainfall events [58,59,71] were used to assess the hydrological response of the basin by varying the roughness coefficient values in the distributed model.
The study area consists of the upper part of La Muga basin, which has an extension of 181.2 km2 and a reservoir at the outlet with 61 hm3 of storage capacity. The weather of the basin is characterized by a wide variability of the rainfall regime, influenced by marine conditions with small thermal and rainfall variations [72]. Extreme weather conditions, such as heavy rain events with high precipitation intensities and water accumulations concentrated in a few days or hours, and water scarcity associated with long dry-weather periods, are typical of the area [73,74].
A land use analysis [75] revealed that the basin is mainly covered by large forest extensions (92.5%), while a terrain characteristics analysis indicated a low permeability and underground storage capacity [72]. Thus, the hydrological response of the basin can be characterized with a unique land use and soil texture.
In the same way as for Case Study 2, the NRCS-CN method was applied to transform rainfall into runoff. Previous studies [58,59,71] allowed determining the CN value as a function of each studied rainfall event and of the antecedent soil moisture conditions evaluated with remote sensing [76,77]. The resulting CN values varied from 50 to 81. The four events were labelled with the starting date (year/month/day) and the duration in days as follows: 20130304_3d, 20131115_4d, 20141128_2d, and 20150320_6d. Table 3 summarizes the cumulated rainfall for each event, the maximum registered rainfall intensity, and the CN value. A more extended description of these events is given in the Supplementary Material. Based on the uniformity of land uses and soil characteristics in the basin, a unique Manning coefficient value was used. The model calibration was performed by comparing the calculated and observed water elevation at the Boadella reservoir dam, located at the basin's lower end (see Figure S5).
Results

Case Study 1
Four distributed hydrological models were built, one for each of the four basins. The domain was discretized with a regular calculation mesh generated directly from the DTM data, with quadrilateral elements of approximately 9.5 m side length. A constant rainfall intensity of 25 mm/h was assigned to the whole domain. In this case, no infiltration losses were considered; thus, the rainfall intensity was considered to be effective precipitation. The value of the Manning coefficient (n) was varied from 0.01 to 0.2 s/m1/3, in intervals of 0.01 s/m1/3. The purpose was to obtain the best approximation of the minimum, maximum, and mean times of concentration (t_c) calculated with the empirical formulas (Table 2). The maximum discharge generated with the aforementioned rainfall intensity is, in this case, a constant discharge. For the estimation of t_c, the discharge was considered constant when its variation between two consecutive output time steps was lower than 0.1% of the discharge. Output time steps of 60 s were used. Figure 2 shows the simulated hydrograph in each basin for the selected Manning coefficients, together with their values. All of them are consistent with the S-curve shape characteristic of the rational method [16,78], stabilizing at the same value obtained with the rational method, i.e., the product of the effective rain intensity and the basin area (see Table S1). For the Brazos (Cow Bayu) (Figure 2a) and San Antonio (Figure 2b) basins, a very low value of the Manning coefficient (0.01 s/m1/3) was needed for the S-curve to fit the minimum t_c, which, in this case, corresponds to the Kirpich formula. In the Trinity (Figure 2c) and Brazos (North Elm Creek) (Figure 2d) basins, it was not possible to reproduce the minimum t_c even without considering any friction, since the hydrograph needed more time than the minimum t_c to stabilize. For this reason, they are not plotted in Figure 2. This suggests that the Kirpich formula, the one that provides such minimum values, is quite unrealistic for these basins, as water propagation is slower even with no friction. This is in agreement with the assertion of Michailidi et al. [11] that "the Kirpich formula is a special case of a very general expression that is valid under very limited conditions" and that this formula was obtained for basins where channel flow was predominant.
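The stabilisation criterion used here (discharge variation below 0.1% between consecutive 60 s output steps) can be illustrated with the short sketch below; the synthetic S-curve is an assumed stand-in for the simulated discharge series, not Iber output.

```python
import numpy as np

def time_of_equilibrium(time_s, discharge, rel_tol=0.001):
    """Return the first time at which the discharge can be considered constant,
    i.e. its variation between two consecutive output steps falls below rel_tol
    (0.1 % by default) of the discharge itself."""
    q = np.asarray(discharge, dtype=float)
    rel_change = np.abs(np.diff(q)) / np.maximum(q[1:], 1e-12)
    stable = np.where(rel_change < rel_tol)[0]
    return time_s[stable[0] + 1] if stable.size else None

# Synthetic S-curve: Q(t) = Q_max * (1 - exp(-t/tau)), 60 s output steps
t = np.arange(0, 20 * 3600, 60)
q = 7.0 * (1.0 - np.exp(-t / 5400.0))
print(time_of_equilibrium(t, q) / 3600.0, "h")
```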
The particular shape of each S-curve depends on the geometry and hydrological behaviour of each basin. The shape of the basin and the river network play an important role in the hydrograph definition, and so does the drainage area. For the analysed basins, the smaller ones (Figure 2a,b) follow a more standard S-shaped hydrograph; in contrast, in the larger ones (Figure 2c,d), the contribution of each part of the basin is more clearly appreciated and the shape of the hydrographs differs more from a smooth S-shape. In particular, the hydrographs of the Trinity basin (Figure 2c) show some flatter regions (around 2 h for n = 0.05 s/m1/3 and between 3 and 5 h for n = 0.14 s/m1/3) after an initial rapid discharge increase, which can be attributed to the effect of small basins having reached their maximum discharges.
Numerical results show a wide range of the Manning coefficient, from values lower than 0.01 up to 0.2 s/m1/3, in order to achieve basin responses similar to those obtained with common time of concentration formulas and with an aggregated model. As expected, a low value of the Manning coefficient provides a faster response, while high values result in a slower runoff advance. It is worth mentioning that a unique roughness coefficient value was used over the whole basin; thus, a more detailed roughness discretisation would result in higher values on hillsides and areas outside the hydrographic network, and lower ones in rivers and floodplains.
Case Study 2
The hydrological analysis of the Marquelia basin was performed first with an aggregated approach using HEC-HMS and the UH method, and then with a distributed model using Iber. The first model, built following the existing recommendations in Mexico, was taken as the reference for adjusting the roughness coefficients in the second one. In the aggregated model, the domain was discretised into seven subbasins and three reaches. The time of concentration of each subbasin, t_c,i, was determined with the Kirpich formula. The Muskingum method was used as the routing method [16], and its parameters, K and χ, were estimated following the recommendations of Fuentes et al. [79]. A more extended description is given in the Supplementary Material.
On the other hand, for the distributed approach, an irregular calculation mesh made of triangular elements with sides between 100 and 500 m was built. Smaller elements were used in the river network and larger ones on the hillsides, aiming to balance computational time and results accuracy. The topographical data used was the 15 m cell size DTM provided by the Mexican National Institute of Statistics and Geography, INEGI [80]. The Manning coefficients associated with the spatial distribution of the land use map (Figure S3b) were calibrated. Additional information on the model discretisation is given in the Supplementary Material.
The values of the Manning roughness coefficient that provide the best adjustment in terms of peak discharge are shown in Table 4. They range from 0.069 s/m1/3, corresponding to urban settlements, to 0.207 s/m1/3, corresponding to forested areas. All these values widely exceed the traditionally recommended values for hydraulic computations in artificial or natural channels and floodplain areas [38][39][40][81][82][83].

Table 4. Manning roughness coefficient according to the land use discretisation shown in Figure S3b that provides the best adjustment in the calibration process.

Figure 3a plots the hydrographs resulting from the aggregated approach (dashed line) and the distributed approach (continuous line). The peak discharge and the hydrograph volume differ, in absolute value, by less than 0.2% and 3%, respectively, both being lower in the distributed approach. The aggregated model provides a faster hydrological response, with two partial peak discharges at around 8.5 h and 11.5 h, while the maximum peak discharge is produced 1.6 h after the one obtained with the distributed model. The distributed model also produces a partial peak at around 8.5 h but with a lower discharge. Despite the whole basin being split into seven subbasins and three river reaches, the hydrological response of this semi-aggregated model does not represent the basin response as accurately as the distributed one. In this case, the shape of the hydrograph is clearly influenced by the rainfall-runoff method (UH). This is especially noticeable in the generation of a partial peak during the first hours and in the fast decay of the recession curve after the maximum peak discharge, the discharge becoming zero after 27 h.
On the other hand, the distributed numerical modelling based on the 2D-SWE solution, which allows considering the full topographical and land use characteristics of the basin and subbasins, provides more accurate results for the hydrograph shape. A sensitivity analysis was carried out by considering variations of ±20% in the Manning roughness coefficients (Table 4). Figure 3b also presents the resulting hydrographs of this analysis (dotted and dashed lines): the peak discharge increases and the time of the peak decreases when the roughness coefficient decreases, and vice versa. It is worth noting that the peak discharge varies from -9.5% (+20% of the n values) to +11.9% (-20% of the n values), while the time of the peak and the hydrograph volume vary by ±8.8% and ±0.5%, respectively. Thus, variations in the roughness coefficient translate directly into the time at which the peak discharge is produced, but the value of the peak discharge does not increase or decrease proportionally with this variation.
Case Study 3
The model of La Muga basin was based on a spatial discretisation with triangular mesh elements of variable size, finer in rivers (20 m) and coarser on hillslopes (200 m), constructed from a high-resolution DTM of 2 × 2 m cell size [84]. As the normal river discharge is small in comparison with the bankfull discharge, the DTM, obtained from a LiDAR flight, is a good approximation of the riverbed geometry.
The model was operated with variations on the Manning roughness coefficient (n), from 0.02 to 0.16 s/m 1/3 with steps of 0.02 s/m 1/3 , resulting in different hydrological responses. The computed water elevation in the reservoir at the basin outlet was compared with the field observations along the events and at the end of each one. Figure 4 shows the simulated water elevation for the four rainfall events analysed.
Figure 4a shows the results of the 20130304_3d event, in which the rainfall is concentrated during day 2 and the first half of day 3. The simulated water elevation at the end of the event is similar to the observed one for all scenarios, with relative errors below 1.5%, and n = 0.1 s/m1/3 achieves the best adjustment (0.005% error). For this event, significant differences in the arrival time of the flood can be observed, with a time gap of around 12 h between the hydrographs corresponding to the minimum and maximum n values. The evolution of the water elevation during the second half of day 2 clearly shows faster hydrological responses for low values of n. Figure 4b shows the results of the simulations for the 20131115_3d event. The simulations fit the observations well in terms of final water resources (volume of water in the reservoir), with relative errors below 0.2%. In this case, where rainfall intensities around 50 mm/h were recorded in the middle of day 2 and at the beginning of day 3, the evolution of the water elevation was not very sensitive to variations of the Manning coefficient, especially for n values greater than 0.06 s/m1/3. The 20141128_2d event (Figure 4c) shows the worst adjustment of all events in reproducing the evolution of the water elevation but, by contrast, it also achieves a good performance in terms of water resources at the end of the event (relative error below 0.6%). In this case, the three registered peaks of rainfall intensity generated an almost constant increment of the water resources stored in the reservoir; however, the numerical model did not reproduce this trend, probably because the rainfall distribution was considered homogeneous over the whole basin in the model, unlike the real case [59]. Only for Manning coefficients between 0.06 and 0.12 s/m1/3 does the water elevation at the end of the event show a good adjustment, with relative errors below 0.16%. Finally, in the 20150320_6d event shown in Figure 4d, which lasted 6 days, although more than 80% of the rainfall was concentrated during day 2, the numerical model reproduced the water resources volume at the end of the event well, with relative errors below 0.09%. In this case, the evolution of the water elevation was similar to the observations, except for a slightly sharper increase of the water elevation between days 2 and 3. This is probably because the rainfall registered at the gauge station overestimated the real precipitation, as pointed out by Sanz-Ramos [59].
The results show that the dependence of the arrival time of the water front on the Manning coefficient is more evident for low n values (below 0.08 s/m1/3). In general, the hydrological response of the model is more sensitive to low values of n than to high ones, especially from the point of view of water resources. As expected, low values of n produce larger overland flow discharges in shorter times.

No remarkable differences were observed in the model results for values of n greater than 0.08 s/m1/3. The precision of the results supports the assumption of a unique Manning roughness coefficient value for the whole catchment, which, in this case, was appropriate for hydrological purposes. On the other hand, for values of n in the lower range, oscillations in the water surface elevation appear.
Several indicators were used to assess the agreement between the observations and the simulated results: the root-mean-square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R2). Table 5 summarizes the performance of the model for the four rainfall events in terms of these indicators. A good adjustment is seen for the 20130304_3d and 20131115_3d events, with low values of MAE and RMSE and high values of R2. The performance for the 20141128_2d event is, in general, the poorest: the simulated water resources are clearly underestimated during almost the whole event. In contrast with the other events, as the hydrological response is slower in comparison with the observations, the indicators show that the best adjustments are obtained for low values of n. Finally, the 20150320_6d event presents the worst R2 values. A point-by-point analysis of the results (RMSE and MAE) reveals that the model shows, in general, a good adjustment, especially for values of n greater than 0.06 s/m1/3. It is worth noting that the influence of the rainfall antecedents (arrival time of the water front) and of the spatial distribution of the precipitation (unexpected increase of the water elevation at the end of the episode) can condition the previous statistics. Nevertheless, from the previous considerations, it can be asserted that the performance of the model is, in general, good enough for the purpose of the simulations.
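For completeness, a minimal sketch of the three goodness-of-fit indicators is given below; note that R2 is computed here as one minus the ratio of residual to total variance, which is one common convention and may differ slightly from the exact definition used in the study.

```python
import numpy as np

def fit_metrics(observed, simulated):
    """RMSE, MAE and coefficient of determination R^2 between two series."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    err = sim - obs
    rmse = float(np.sqrt(np.mean(err**2)))
    mae = float(np.mean(np.abs(err)))
    r2 = 1.0 - float(np.sum(err**2)) / float(np.sum((obs - obs.mean())**2))
    return rmse, mae, r2

# Placeholder series (not the reservoir data): observed vs simulated water levels
obs = np.array([105.2, 105.4, 105.9, 106.3, 106.4])
sim = np.array([105.1, 105.5, 105.7, 106.2, 106.5])
print(fit_metrics(obs, sim))
```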
On the roughness coefficient values and their effect on the hydrological response
In free surface hydraulic numerical modelling, the shear terms due to friction express the contribution to the momentum change of the energy dissipation produced not only by flow-boundary interactions, but also by sub-grid turbulence if, as is generally done, no turbulence model is used. Since the effect of turbulence is generally neglected for hydrological purposes, only the friction of the fluid with the terrain represents the energy dissipation. That is, in 2D-SWE-based coupled hydrological-hydraulic distributed numerical modelling, the bottom roughness conditions not only the flow routing in the river network, but also the overland flow in the areas where the rainfall-runoff process takes place (hillsides).
Hence, the Manning roughness coefficient can be considered a "property" of the terrain material over which the overland flow is propagated. In 2D-SWE-based models, anisotropic properties of the shear stress terms (see Equation (1)) can be due not only to the velocity field but also to the Manning coefficient [85][86][87]. Thus, the Manning coefficient could be considered as a vector to represent the anisotropic properties of the terrain (Equation (2)), where n_x and n_y are the two components of the Manning coefficient vector in the X and Y directions, and |n| is its modulus.
However, there is no strong evidence that Manning anisotropy needs to be considered in hydrological studies, because the overland flow usually propagates in a single dominant direction, especially over the hillsides where the rainfall-runoff process is generated. By contrast, the time-space variability of the Manning coefficient, including its dependence on the main hydraulic variables (depth and/or velocity), should in general be considered, and coupled hydrological-hydraulic distributed modelling is the only numerical strategy that can deal with this.
On the other hand, despite the fact that in distributed numerical modelling the topographical discretisation of the domain is more accurate than in aggregated numerical modelling, the hydraulic behaviour of the water at very low depths (millimetric scale or lower) is not properly reproduced by the solution of the SWE. In this situation, the flow scale is of the same order of magnitude as the terrain particle scale, and the fluid tends to move more slowly than at larger, river-like flow scales. Thus, in order to avoid too-fast hydrological responses, the bottom roughness must be increased with respect to the values used in common hydraulic situations. This explains why high values of the bottom roughness, i.e., of the Manning coefficient (n), are required in Case Study 2 and Case Study 3 to obtain a good performance of the numerical model from a hydrological point of view. In contrast, very low values of n are needed in Case Study 1 for an accurate adjustment to the time of concentration obtained with the Kirpich formula. An n value of 0.01 s/m1/3 was needed to reach a similar t_c in the Brazos and San Antonio basins, whereas in the other basins this value was even lower. This is in line with various authors' discussions on the appropriateness of the Kirpich formula in situations other than those for which it was obtained [10][11][12].
The generally high values of the roughness coefficient obtained in the three case studies are far from the values extensively used for hydraulic purposes in flood routing [40], but are within the range of values generally found in the literature for hydrological purposes [26,30,31,42,[88][89][90][91], especially for hillslopes and floodplains, where the rainfall-runoff process predominates.
On the role of the domain discretisation in the basin hydrological response

In general, coarse meshes provide faster simulations but poorer resolution and accuracy. In contrast, finer meshes provide more accurate results while the computational effort increases. Thus, practitioners must deal with achieving a proper balance to obtain good results with a reasonable computational effort.
Improvements in topographical and rainfall data acquisition and the increase in computational capacity, in particular that related to Graphics Processing Unit (GPU) computing [89,[92][93][94], are continuously advancing numerical modelling, especially in hydrology. This new framework pushes modellers to generate increasingly detailed coupled hydrological-hydraulic distributed numerical models, even using, if necessary, the full information of the DTM as the computation mesh (as shown, for example, in Case Study 1) and distributed rainfall data. However, the mesh resolution affects the hydrological response of the model (peak discharge), and the rainfall intensity affects the value of the time of concentration of the basin.
To illustrate these last two facts, a double sensitivity analysis was carried out on the Marquelia basin. The results show that the peak discharge increases as the number of elements of the numerical model increases (Figure 5a). An asymptotic trend towards 7000 m3/s is observed for calculation meshes with more than 1.9 million elements. This increment of the peak discharge is probably due to the better representation of the topography in general, and to the better definition of the river channel in particular, which improve the representation of the flow propagation. This finest domain discretisation (1.9 M elements) represents approximately 19 elements per ha, which is still far from the values of flood studies, where more than 1000 elements per ha is quite common [95]. On the other hand, the response time of a basin depends not only on the Manning coefficient but also on the rainfall intensity [11]. Higher rainfall intensities imply shorter hydrological responses, i.e., the time of concentration becomes smaller. In this case, t_c was estimated as the time from the end of the excess rainfall to the inflection point of the hydrograph where the recession curve begins [96]. Using the same model discretisation as for Case Study 2 and varying the return period of the rainfall hyetograph from 2 to 10,000 years, it can be observed that the time of concentration tends to a constant value of around 15 h for the statistically less frequent events (Figure 5b). This is aligned with Grimaldi et al. [10], who suggest that t_c becomes quasi-invariant with respect to the rainfall intensity for flood events with a high return period, and who also pointed out the dependence of the time of concentration on the cell size of the DTM when using the NRCS method. Thus, the use of coupled hydrological-hydraulic distributed models, where the time of concentration is no longer part of the modelling process, might reduce the uncertainties related to the rainfall-runoff processes.
On the numerical approach

Currently, there are three different strategies to carry out an overland flow analysis: conceptual, aggregated, and distributed. Conceptual and aggregated approaches are the simplest and are based on the main basin characteristics, such as the extent, the main river length and slope, and the soil characteristics used to estimate the rainfall-runoff threshold, and they summarize all hydrological processes into the estimation of the peak discharge associated with a rainfall intensity (e.g., the Rational Method).
Aligned with the evolution of the state of the art in applied hydrology, aggregated models integrate the previous conceptual models into numerical models. They are tools based on the physics of the problem that, by integrating empirical formulas for overland processes, first allowed simulating rainfall events, groundwater interactions, flow routing, etc. These models are still widely used because they are fast, robust, and conceptually simple. However, as with other numerical tools, they must be calibrated to achieve good results. One of the main parameters to calibrate is the time of concentration (t_c), which can be estimated with empirical formulas from different authors. Additionally, the practitioner's expertise is crucial to determine the number of subbasins and to define the rainfall-runoff and routing methods; thus, in all of these choices, the error derived from selecting one parameter or another propagates to the final result: the hydrograph.
Finally, the more recently developed distributed models, and in particular the coupled hydrological-hydraulic 2D-SWE-based tools, provide more detailed hydrologic as well as hydraulic results. In this case, the practitioner's expertise may be somewhat less relevant because there are fewer parameters to calibrate, in fact only the terrain roughness. The domain discretisation is also important to achieve suitable results. It plays a role similar to the definition of subbasins in aggregated approaches but, in this case, the modeller only has to decide the mesh element size (minimum, maximum, or mean) and not how many subbasins or reaches the domain must be divided into.
As shown throughout this work, the value of the terrain roughness in the 2D-SWE-based distributed approach is usually out of the range used for hydraulic purposes, as overland flow tends to move more slowly than flow propagated along a river. Additionally, a distributed approach allows considering the terrain properties and characteristics element by element.
Conclusions
The bottom roughness, evaluated here via the Manning roughness coefficient, is one of the main calibration parameters for hydrological modelling based on coupled hydrological-hydraulic distributed numerical tools. The mesh resolution also plays an important role in the overland flow propagation, especially in the definition of the peak discharge associated with a rainfall event.
Due to the ambiguities in the definition of the time of concentration and the high dispersion of the values obtained with different formulae, the adjustment of the Manning coefficients based on the time of concentration is uncertain and of little practical interest. The Manning coefficients that have to be used in coupled hydrological-hydraulic modelling in order to reach times of concentration similar to those obtained by empirical formulas show an extremely high dispersion. Moreover, they are out of the common range of values used for hydraulic purposes. In particular, it was shown that for the Kirpich formula the required value of the roughness coefficient is close to or lower than 0.01 s/m1/3, which would correspond to the hydraulic value for a very smooth material, very different from those existing in natural basins.
The global picture of the presented results shows that higher values of the Manning coefficient than those commonly used for hydrodynamic purposes are required when computing the overland flow of a basin using coupled hydrological-hydraulic distributed numerical tools. This is because the roughness coefficient must integrate the momentum change due to the energy dissipation produced not only by flow-boundary interactions, but also by sub-grid turbulence, in the river network and in the areas where the rainfall-runoff is mainly generated (hillsides).
In coupled hydrological-hydraulic distributed models, the roughness coefficient has a clear effect on the arrival time of the water front, playing a role similar to that of the lag time in conceptual and aggregated models. However, there is no exact relation between roughness values and commonly used hydrological parameters, which means that trying to estimate the roughness coefficient by comparing distributed and aggregated model results is not advisable. From the authors' point of view, there is still little experience, though it is increasing rapidly, in the verification of coupled hydrological-hydraulic distributed models; thus, the recommendation is to use this approach in cases with enough data for model calibration and validation, while experience with this kind of model continues to grow and more recommendations and guidelines eventually appear.

Supplementary Materials: Example of the levelling process of the terrain in the area that represents a dam in the Trinity basin, calculation mesh of Iber before (a) and after (b) the levelling process; Figure S3: Characterization of Case Study 2: (a) topographical map of the basin, influence area of the closest rain gauge stations (green stars) according to a Thiessen polygon discretization (black lines) and location of the hydrometric station (yellow pentagon); (b) map of land uses and location of the outlet (red dot), georeferenced in UTM Zone 14N; (c) map of soil edaphology; and (d) map of Curve Number according to the SCS-CN method; Figure S4: Case Study 2, synthetic hyetographs corresponding to a 50-year return period for the eight rainfall stations closest to the Marquelia basin; Figure S5: Case Study 3, topographical map of the upper part of La Muga basin and location of the gauge station (yellow pentagon) and the outlet (red dot), referenced in UTM Zone 31N; Figure S6: Case Study 3, 5-minute resolution hyetographs of rainfall intensity (grey bars) and outlet discharge from the reservoir (black line); Table S1: Maximum discharge (in m3/s) for the four basins of Case Study 1 estimated by the Rational Method for a constant rainfall intensity of 25 mm/h; Table S2: Characteristics of the discretization of the Marquelia basin and parameters of the aggregated numerical approach; Table S3: Spatial characteristics of the land uses of the Marquelia basin and Manning roughness coefficients providing the best fit to the 50-year return period synthetic hydrograph.
"year": 2021,
"sha1": "bde5e71865184808fd465f5afbeb73cde5bb982a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/13/23/3433/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1fbf6ae0652a411b522279bfa758649aac517fe8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Implementation of Zakat Mal Management Based on Law Number 23 Year 2011 in Institution Amil Zakat Muhammadiyah, Medan City
Abstract

This study aims to identify the problems faced and the strategies carried out by the Lazis Muhammadiyah management in Medan, which are divided into two categories, namely internal and external. The effort being made is to raise collective awareness to channel zakat al-mal through Lazis Muhammadiyah so that it is managed properly, transparently, and evenly, directly reaching the community, so that the objectives of the zakat law are realized. Zakat management based on Law Number 23 of 2011 states that the management of zakat aims to increase the effectiveness and efficiency of services in managing zakat and to increase the benefits of zakat in order to create social welfare and alleviate poverty. In the effort to realize the function and role of zakat in the welfare of society, the zakat management law issued by the government is, in principle, intended to facilitate, motivate, and affirm the management of zakat carried out by the Amil Zakat Agency or the Amil Zakat Institute.

I. Introduction

Indonesia is a country with a majority Muslim population, indeed the largest Muslim population in the world, even though the Republic of Indonesia is not an Islamic state. In line with this, the participation of the Indonesian Muslim community through zakat has a great opportunity to help realize the goals of the Republic of Indonesia, as set forth in the preamble to the Constitution of the Republic of Indonesia, which includes "advancing public welfare and educating the nation's life".

On this basis, it is hoped that zakat can become a system that is structurally capable of overcoming the problem of poverty and encouraging the development of the people's economy and the nation's economy. The ethical values of zakat should also be explored and developed, such as poverty alleviation and economic empowerment. Appreciating the values of zakat will have an impact on thinking about how to manage economic resources more rationally and efficiently, so that the social and economic community aspired to by Islam and the ideals of the Indonesian state can be achieved optimally.

In an effort to alleviate poverty, President Susilo Bambang Yudhoyono hoped that policy synergy between the central government and regional governments would be carried out by involving the private sector and the wider community. It is in this context that the researcher points to the existence of the Amil Zakat Muhammadiyah Institute (LAZMU).

The participation of Indonesian Muslims in poverty alleviation is considered very strategic because, apart from sociological arguments, it is also a religious command. In the development of Islam in Indonesia, one of the influential institutions is zakat. Zakat is part of Islamic doctrine, being the fourth pillar of Islam alongside the shahada, prayer, and fasting. Therefore, it is strongly presumed that the implementation of zakat among Muslims has been carried out in this archipelago for as long as they have existed there and is seen as part of the practice of Islam. In connection with the implementation of zakat in the Republic of Indonesia, Uswatun Hasanah stated that Muslims, who are the majority of the population in Indonesia, have long implemented zakat institutions. It is further stated that the implementation of zakat, in addition to being a religious command, is also an effort to realize social justice in the economic field.
Therefore, the management of zakat was deemed necessary to be regulated by law in order to realize the vision and mission of zakat and the ideals of the country. The Indonesian government, as the executive, passed a law on the management of zakat in 1999. This law became positive law that accommodates Muslims' awareness of their rights and obligations towards their religion and their social relationship through zakat.
Definition of Zakat
In terms of language, the word zakat is a verbal noun (mashdar) from the word zakka, which means blessing, cleanliness, and growth. According to Ibn Manzur in Lisan al-'Arab, the word zakat linguistically means growth, blessing, and praise, and all these senses are used in the Koran and Hadith. According to Wahidi, the word zakka means to increase and to grow, so it can be said of a plant that it is zakka, that is, growing. The word zakka also carries the sense of being clean, and if someone is described with the quality of zakat, it means good; such a person possesses an abundance of good qualities.
Zakat is called a blessing because, by paying zakat, wealth is not reduced but multiplied, so that it makes the property grow like a grain that grows seven ears, with a hundred grains in every ear, because of the gifts and blessings given by Allah SWT to a muzakki, as stated in the word of Allah SWT in Surah al-Baqarah verse 261.

Meaning: "The parable of those who spend their wealth in the way of Allah is like a grain that grows seven ears, with a hundred grains in every ear. Allah multiplies (the reward) for whom He wills, and Allah is Vast (in His giving), All-Knowing." (QS. Al-Baqarah verse 261).
Legal Basis for Zakat Mal
Zakat was first made obligatory in the month of Shawwal in the second year of the Hijrah: after zakat fitrah was made obligatory in the month of Ramadan, zakat mal (zakat on wealth) was then required. Zakat is legally fardhu 'ain (individually obligatory) for every Muslim for whom its terms and conditions are fulfilled. The command to pay zakat in the Koran often uses the terms alms, donation, and zakat, which in everyday usage refer to wealth from which certain rights must be paid out.
Among the basic sources of zakat law is Surah At-Taubah verse 103:
Meaning: "Take Zakat from their property, with that zakat you clean and purify them and pray for them. In fact, your prayer (becomes) peace of mind for them and Allah is All-Hearing All-Knowing." (Surah at-Taubah, verse 103).
According to Ibn 'Umar, what is meant by the above rights are the portions that must be set aside from our assets for the groups who need them: Meaning: "Pay zakat on your wealth." (HR. Turmuzi). The assets at the time of the Prophet Muhammad on which zakat had to be paid, and which have a prescribed nisab (minimum threshold) and haul (holding period), include the following:
a. Gold and Silver
Gold and silver are among the types of zakat mal required by Allah SWT, as Allah SWT says:
Meaning: "O you who believe, Verily most of the pious Jews and Christian monks really eat people's property with vanity and they hinder (humans) from the Way of Allah and those who keep gold and silver and do not spend them in the way of Allah, So tell them, (that they will get) a painful punishment." (Surah At-Taubah verse 34).
Gold Nishab and Zakat Levels
Ibnu Munzir said that the scholars have reached ijma' (consensus) that if there are 20 mitsqal of gold, whose value is 200 dirhams, zakat is obligatory. Strictly speaking, the nishab of gold is 20 mitsqal. Most jurists say that the gold nisab is 20 mitsqal regardless of its price. This was the opinion of Abu Hanifah, Malik, Asy-Syafi'i, and Ahmad.
Ibn Hazm reported, through the chain of Ibn Hazim, from Ali, that the Prophet SAW said: Meaning: "You owe nothing on gold until you possess 20 dinars. If you have 20 dinars and a full year has passed over them, then the zakat due is half a dinar, and whatever exceeds that is calculated proportionally." From the above hadith, the scholars stipulate that the nisab of gold is 20 dinars of pure gold, equivalent to 85 grams of 24-karat gold, and the zakat due is one-fortieth, i.e., 2.5%.
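As a simple illustrative calculation with hypothetical figures (not taken from the source): a person who has held 100 grams of pure gold for a full lunar year exceeds the 85-gram nisab, so the zakat due is 2.5% of the holding, that is, 100 g × 0.025 = 2.5 g of gold or its monetary equivalent; a holding below 85 grams would incur no zakat under this rule.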
Silver Nisab and Zakat
The scholars agreed in determining the nisab of silver based on the hadith narrated by Bukhari from Abu Sa'id from the Prophet: Meaning: "There is no zakat on silver of less than 5 auqiyah." Since 1 auqiyah = 40 dirhams, 5 auqiyah = 5 × 40 = 200 dirhams, and the zakat due is 2.5%, i.e., 5 dirhams.
b. Zakat on Plants and Fruits (Agricultural Products)
Zakat on crops is zakat collected from agricultural products, fruits, and plants. The agricultural zakat that is collected is levied only on the produce, while zakat on trade, gold and silver, and livestock is calculated including both capital and yield. This is the difference between zakat on agricultural products and zakat on trade, gold, silver, and livestock. The legal basis for zakat on crops (agriculture) is stated in the Koran in Surah al-An'am: 141.
Meaning: "And it is He who makes gardens that are upright and not uplifting, date palms, plants of various fruits, olives and pomegranates that are similar (in shape and color) and not the same (taste), eat of their fruit (which all kinds of things) if he bears fruit, and fulfill his rights on the day of reaping the results (by giving to the poor), and don't overdo it. Indeed, Allah does not like people who are extravagant." (Surah al-An'am: 141).
c. Cattle
Regarding livestock, the types were determined by the Prophet and, after his death, by his companions, namely camels, cows (oxen), and goats, because the traditions only mention the obligation of zakat on the animals listed above.
In Yusuf Qardawi's terminology, livestock means animals that are useful to humans. The Arabs call them "al-an'am": camels, cows (including buffaloes), goats, and sheep, which are mentioned in the Koran as livestock used for human needs, for example their strength for carrying loads, their use as mounts, their milk, their meat for food, and their hides. It is therefore fitting that Allah asks their owners to be grateful for the blessings He has bestowed on them.
Camel Zakat
There is no zakat on camels numbering fewer than five head, whether male or female. For this, the authors prepared the following table.
Zakat on Cows and Buffalo
Buffaloes are classed together with cows by ijma'; as quoted by Ibn Mundzir, the two types of livestock may be combined. Zakat on cows and buffaloes is obligatory on the basis of hadith and consensus. However, the scholars differed over the threshold at which zakat on cows and buffaloes becomes due. Some scholars hold that there is no zakat on fewer than 50 head: at 50 cows the zakat is one cow, and at 100 cows the zakat is two cows. Other scholars, including Imam Malik, Asy-Syafi'i, and Ahmad, hold that there is no zakat on cattle until they number 30 head. For more details about nisab and zakat levels for cows and buffaloes, see the following table:
No. | Nisab | Level of Zakat
1 | 30-39 cows | 1 tabi' (a male or female calf aged 1 year, entering its 2nd year)
2 | 40-59 cows | 1 musinnah (a female cow aged 2 years, entering its 3rd year)
3 | 60-69 cows | 2 tabi' aged 1 year, entering their 2nd year
4 | 70-79 cows | 1 musinnah entering its 3rd year and 1 tabi' in its 2nd year
5 | 80-89 cows | 2 musinnah, each having reached its 3rd year
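As a rough illustration of how the brackets in the table work, here is a minimal Python sketch of a lookup over the rows shown. Only the brackets reproduced above are encoded; herds of 90 head or more fall outside this table, so the sketch simply reports them as out of scope.

```python
# Illustrative lookup over the cow/buffalo zakat brackets listed in the table above.

COW_ZAKAT_BRACKETS = [
    (30, 39, "1 tabi' (male or female, 1 year old, entering its 2nd year)"),
    (40, 59, "1 musinnah (female, 2 years old, entering its 3rd year)"),
    (60, 69, "2 tabi'"),
    (70, 79, "1 musinnah and 1 tabi'"),
    (80, 89, "2 musinnah"),
]

def cow_zakat(herd_size: int) -> str:
    """Return the zakat due for a herd, per the brackets reproduced above."""
    if herd_size < 30:
        return "no zakat due (below the 30-head threshold of Malik, Asy-Syafi'i and Ahmad)"
    for low, high, due in COW_ZAKAT_BRACKETS:
        if low <= herd_size <= high:
            return due
    return "outside the brackets reproduced in this table"

if __name__ == "__main__":
    print(cow_zakat(45))  # 1 musinnah (female, 2 years old, entering its 3rd year)
    print(cow_zakat(75))  # 1 musinnah and 1 tabi'
```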
Zakat Goats and Sheep
Goats and sheep must have zakat paid on them when they reach their nisab; this is based on hadith and consensus. Zakat on this type begins at 40 goats or sheep, and there is no obligation below this number. More details can be seen in the following table. In the author's opinion, animal husbandry has also progressed in the modern era, so zakat on livestock is not limited to the animals mentioned above but applies to all livestock, the essential point being that the proceeds have reached one nisab. For example, a large-scale broiler chicken breeder can earn more than goat, cattle, or camel breeders who are already subject to compulsory zakat because they reach their nisab. Broiler chicken breeders whose income reaches the nisab should therefore also pay zakat.
d. Trade Assets/Commercial Assets
Trade assets are goods prepared for sale, such as animals, clothing, jewelry, and so on. The basis for the view that trade goods are subject to zakat is as follows: "O you who believe, spend (in the way of Allah) a portion of the good things you have earned and a portion of what We bring out of the earth for you. Do not choose the bad things to spend from, when you yourselves would not take them except with closed eyes. And know that Allah is Rich, Praiseworthy." (Surah Al-Baqarah: 267).
e. Mining Goods
The jurists differed in interpreting mining goods: according to the Hanafiyah, mining goods are relics of the ancients, while according to the majority of scholars the mining goods on which zakat must be paid are gold and silver. Regarding this, it is stated in the word of Allah in Surah Al-Baqarah, verse 267.
This means: "O you who believe, spend (in the way of Allah) a portion of the good things you have earned and a portion of what We bring out of the earth for you. And do not choose the bad things to spend from for your living, when you yourselves would not take them except with closed eyes. And know that Allah is Rich, the Most Praiseworthy." (Surah Al-Baqarah: 267).
f. Investment in Buildings, Factories, Vehicles, Equipment and Others
The textual basis for zakat on investment returns is Surah Al-Baqarah, verse 267: "O you who believe, spend (in the way of Allah) a portion of the good things you have earned and a portion of what We bring out of the earth for you. Do not choose the bad things to spend from, when you yourselves would not take them except with closed eyes. And know that Allah is Rich, Praiseworthy." (Surah Al-Baqarah: 267).
g. Zakat on Earnings and Professions
The income that stands out most in this millennial era is that earned from newer jobs and professions such as doctors, architects, lawyers, programmers, and a myriad of other occupations. Some of the salaf scholars, as well as contemporary scholars, hold that such income is subject to zakat. Zakat on earnings and professions can therefore be levied even though, at the time of the Prophet, there were no provisions specifying that it was obligatory or what percentage had to be paid; analogy (qiyas) can be used instead.
Contemporary scholars such as Yusuf Qardhawi hold that earnings and professional income can be subject to zakat once a year has passed and the amount has reached one nisab. If we adhere to the opinion of Abu Hanifah, Abu Yusuf, and Muhammad that the nisab need not be maintained throughout the year, but only needs to be fully reached at both ends of the year regardless of what happens in between, we can conclude that under this interpretation it is possible to oblige zakat on earnings each year, because such income rarely stops during the year and generally reaches the nisab at both ends of it. On this basis, earnings can be treated as a source of zakat, given the presence of an illat (legal cause) that the fiqh scholars consider valid, and of a nisab, which is the basis for the zakat obligation.
h. Zakat on Shares and Bonds
The modern era recognizes a form of wealth created by advances in industry and commerce throughout the world, known as stocks and bonds. Stocks and bonds are paper securities traded in specialized markets (securities exchanges); a tax is imposed on the income that continually flows from them, and some even want the tax to be levied on the stock itself, on the grounds that it is a tax on property.
The nisab of zakat on shares is set by analogy with zakat on gold and silver, namely the value of 85 grams of pure gold; the zakat to be paid is 2.5%, with a haul of one year. The nisab of zakat on bonds is likewise tied to zakat on gold and silver, that is, the value of 85 grams of pure gold, with a zakat rate of 2.5% and a period of one year.
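As a small illustration of the rule just stated for shares and bonds, the sketch below checks the gold-value nisab (85 grams of pure gold) and applies the 2.5% rate after a one-year holding period. The gold price input is a placeholder, not a figure taken from the text.

```python
# Illustrative sketch of zakat on shares and bonds as described above:
# nisab = market value of 85 g of pure gold, rate = 2.5%, after a one-year haul.

GOLD_NISAB_GRAMS = 85.0
ZAKAT_RATE = 0.025

def securities_zakat(portfolio_value: float, gold_price_per_gram: float, full_year_passed: bool) -> float:
    """Zakat due on a shares/bonds portfolio valued in the same currency as the gold price."""
    nisab_value = GOLD_NISAB_GRAMS * gold_price_per_gram
    if not full_year_passed or portfolio_value < nisab_value:
        return 0.0
    return portfolio_value * ZAKAT_RATE

if __name__ == "__main__":
    # The gold price here is purely illustrative.
    print(securities_zakat(portfolio_value=50_000.0, gold_price_per_gram=60.0, full_year_passed=True))  # 1250.0
```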
Zakat Management According to Law no. 38 of 1999
In Law Number 38 of 1999 concerning zakat management, there are several important points to note, as follows: a. Zakat management is the activity of planning, implementing, and supervising the collection, distribution, and utilization of zakat.
b. Zakat is property that must be set aside by a Muslim, or by an entity owned by a Muslim, once the mandatory zakat provisions are met in accordance with Islamic law, to be given to those entitled to receive it.
c. Every Indonesian citizen who is Muslim and able, or an entity owned by a Muslim, is obliged to pay zakat.
d. Zakat consists of zakat fitrah and zakat on assets subject to zakat, namely: (a) gold, silver, and money; (b) trade and companies; (c) agricultural, plantation, and fishery products; (d) mining and livestock products; (e) income and services; (f) rikaz.
e. Zakat management is carried out by the Amil Zakat Agency, which is formed by the government and composed of community and government elements at each level: BAZNAS (the National Amil Zakat Agency), the Provincial Amil Zakat Board, the Regency/City Amil Zakat Board, and the District Amil Zakat Board.
f. The government is obliged to provide protection, guidance, and services to muzakki, mustahik, and amil zakat.
g. Amil Zakat Institutions (LAZ), which are formed and managed by the community through various Islamic mass organizations, foundations, and other institutions, are confirmed, fostered, and protected by the government.
h. The Amil Zakat Agency referred to in Article 6 and the Amil Zakat Institution referred to in Article 7 have the main task of collecting, distributing, and utilizing zakat in accordance with religious provisions.
i. The proceeds of zakat collection are used for mustahik in accordance with religious provisions; their utilization is based on the priority scale of mustahik needs and may be directed to productive enterprises.
j. The management of zakat also covers the management of infaq, alms, grants, wills, inheritance, and kafarat. An officer who, through negligence, fails to record or incorrectly records zakat, infaq, alms, grant, will, inheritance, or kafarat assets as referred to in Articles 8, 12, and 13 of this law is punishable by imprisonment for up to three months and/or a fine of up to Rp 30,000,000 (thirty million rupiah).
Management of Zakat in accordance with Law No. 23 of 2011
In terms of zakat management, Law Number 23 of 2011 introduces no fundamental changes. It can therefore be regarded as complementing the previous law, Law Number 38 of 1999, so its substance and content do not differ greatly from that earlier law.
Law Number 23 of 2011 explains that zakat management is the activity of planning, implementing, and coordinating the collection, distribution, and utilization of zakat. Zakat is an asset that must be paid by a Muslim, or by a business entity owned by a Muslim, to those entitled to receive it in accordance with Islamic law, and it includes zakat mal and zakat fitrah. The authors summarize the contents of Law Number 23 of 2011 as follows:
1. Zakat management is based on: Islamic sharia, trust, benefit, justice, legal certainty, integration, and accountability.
2. The objectives of zakat management are to improve the effectiveness and efficiency of services in managing zakat, and to increase the benefits of zakat for social welfare and poverty alleviation.
3. Zakat mal is zakat on assets owned by a muzakki, whether an individual or a business entity. Zakat mal covers: gold, silver, and other precious metals; money and other securities; trade; agriculture, plantations, and forestry; animal husbandry and fisheries; mining; industry; income and services; and rikaz.
4. The conditions for assets subject to zakat are: fully owned, halal, productive (developing), reaching one nisab, exceeding ordinary needs, free of debt, and having completed the haul (one year).
Zakat Management Institution
In carrying out their duties and functions, BAZNAS, Provincial BAZNAS, and Regency/City BAZNAS can form Zakat Collection Units (UPZ) in government agencies, state-owned enterprises, region-owned enterprises, private companies, and representative offices of the Republic of Indonesia abroad, and can establish UPZ at the sub-district and village levels (or their equivalents) and in other places. Further provisions regarding the organization and work procedures of Provincial BAZNAS and Regency/City BAZNAS are regulated in a Government Regulation.
Regency/City National Amil Zakat Board
For the implementation of zakat management at the Regency/City level, a Regency/City BAZNAS is formed. The Regency/City BAZNAS organization consists of an Advisory Council, a Supervisory Commission, and an Implementing Body. The Implementing Body consists of a Chairman, a Secretary, a Head of the Collection section, a Head of the Distribution section, a Head of Utilization, and a Head of Development. The Advisory Council consists of a Chairman, a Secretary, and two members. The Regency/City BAZNAS management is drawn from scholars, professional staff, Muslim community leaders, and government representatives, and is assisted by a secretariat in carrying out its duties.
The Regency/City BAZNAS Implementing Body is responsible for: first, carrying out the administrative and technical tasks of collecting, distributing, and utilizing zakat; second, collecting and processing the data required to prepare plans for the collection, distribution, and utilization of zakat; third, organizing guidance in the field of collection, distribution, and utilization of zakat; and fourth, carrying out research and development, communication, information, and education tasks in the field of collection, distribution, and utilization of zakat.
Management of Zakat Mal at LAZISMU Medan City
The management of zakat mal at Lazis Muhammadiyah Medan City is, in substance, in accordance with the mandate of Law No. 23 of 2011 concerning Zakat Management. Zakat management here means the activities of planning, organizing, implementing, and monitoring the collection, distribution, and utilization of zakat.
Lazis Muhammadiyah Medan City is a zakat management institution, as can be seen from the data it released for the period 2019 to March 2020, which list the implementation of its zakat collection. Zakat management therefore does not revolve only around collection and distribution; more important is the realization of planning, organizing, implementing, and controlling the collection, distribution, and utilization of zakat at LAZIS Muhammadiyah Medan City.
According to Islamic teachings, zakat should be collected by the state, or by an institution mandated by the state, acting on behalf of the government as the representative of the poor in order to obtain the rights of the poor that lie within the wealth of the rich. Management under the authority of an agency established by the state will be far more effective in carrying out its functions and in building the welfare of the people, which is the goal of zakat itself, than zakat collected and distributed by institutions that operate independently without any coordination with one another.
The Muhammadiyah Central Leadership Guidelines on Lazismu, Article 4, paragraph (1), concerning its principles, state: "Islamic sharia, meaning that in carrying out its duties and functions it must be guided by Islamic law, from the procedures for recruiting employees to the procedures for distributing ZISKA funds." Under this provision, Lazis Muhammadiyah's management of zakat clearly rests on the principles of Islamic law. This was further affirmed by the Minister of Religion of the Republic of Indonesia in Decree of the Minister of Religion No. 73 of 2016, which recognizes LAZISMU as a zakat institution.
IV. Conclusion
Under Law Number 23 of 2011, zakat management aims to increase the effectiveness and efficiency of services in managing zakat and to increase the benefits of zakat for social welfare and poverty alleviation. In the effort to realize the function and role of zakat in the welfare of society, the zakat management law issued by the government is, in principle, intended to facilitate, motivate, and affirm the management of zakat carried out by the Amil Zakat Agency or the Amil Zakat Institution.
The problems faced and the strategies carried out by the Lazis Muhammadiyah management in Medan are of two kinds, internal and external. What is being done is an effort to raise collective awareness to channel zakat al-mal through Lazis Muhammadiyah so that it is managed properly, transparently, and equitably, directly reaching the community, so that the objectives of the zakat law are realized.
North-South International Education Partnerships: Two Canadian Projects with Tanzania
The following is a review of two Canadian-Tanzanian international partnerships working in Tanzania within the education sector. Project TEMBO (Tanzania Education and Micro-Business Opportunity) supports the development of formal and non-formal education for girls and women in collaboration with other local and international non-governmental organizations. The Huron University College/University of Dar es Salaam project is strengthening post-secondary educational opportunities in collaboration with civil society organizations and local government. Both projects are focused on literacy in the broadest sense to achieve critical skills in civic engagement, poverty reduction, problem solving, decision-making and reducing gender imbalances, and as such are in line with the United Nations’ Millennium Development Goals (MDGs). Achieving improved access to information and educational opportunities for Tanzanians that support poverty reduction are the shared objectives of these two projects. This article will review the potential of partnership and participatory engagement of communities in strengthening educational outcomes in both formal and nonformal education settings.
Introduction
Tanzania is an East African country which has abundant natural resources and a young and growing population of 43 million who still strive to achieve full employment and decent living conditions 40 years after independence from Britain.Since independence, Tanzania has struggled to improve the literacy and education levels of its people.The first President Julius Nyerere, a teacher himself and referred to with great respect as Mwalimu, father of the nation, made his pursuit of literacy for all a fundamental pillar of Tanzanian independence and nation building in the 1960s.At the time of Nyerere's birth, in 1921, only 2 % of Tanzanians attended schools (Ishumi & Maliyamkono, 1995).This article will examine two projects working with partners in Tanzania and Canada to address some of the ongoing challenges facing primary, secondary and post-secondary education.Project TEMBO (Tanzania Education and Micro-Business Opportunity) is a small NGO whose mission is "to provide opportunities for the girls from Longido and Kimokouwa to succeed in secondary school, teacher training school and/or vocational school; and to provide opportunities for women in Longido and Kimokouwa to succeed in micro-business initiatives."(www.projectembo.org).A particular initiative, TEC (TEMBO English Camp) supports girl students to facilitate greater success in the transition from Swahili language primary schools to the English language secondary schools within the national public education system.In addition to supporting and promoting secondary school and vocational education for girls, TEMBO is also engaged in micro-business projects and literacy classes with the women in the two rural, primarily Maasai villages where the Project is based.These 'informal' or 'non-formal' education (NFE) initiatives help to support the schooling and post-secondary ambitions of the Maasai girls and women.The Maasai people are a traditional, pastoral and nomadic culture living on either side of the Kenyan-Tanzanian border.Today, while many retain their traditional ways of life, they are also striving to provide education for their children and achieve greater economic prosperity through more diverse economic enterprises.
Haki Shiriki Katika Sera "Building Civil Society Capacity for Poverty Reduction" is a Huron University College/University of Dar es Salaam (HUC/UDSM) partnership to strengthen ties between the university and the civil sector while increasing the relevance of the curriculum in the Tanzanian context.Curriculum renewal and the subsequent development of an M.A. and Ph.D. program in Civil Studies are the core objectives of the project.Faculty at the university engaged in curriculum renewal and instructional pedagogical training work with the new graduate program while the M.A. or Ph.D. students and faculty supervisors develop field work in collaboration with local government, civil society groups and communities within the UDSM Institute of Development Studies.Canadian faculty and librarians have engaged in partnership activities to support curriculum renewal, access to information resources and pedagogical training to support capacity building in the institution.
Can these international partnerships bridge existing gaps in the educational institutions in Tanzania?This article examines the evidence of success in building two partnerships that support educational transformation within the public education system.It highlights the use of non-formal educational opportunities in the community to achieve goals in both secondary and post-secondary educational outcomes
Educational Gaps
Despite the best intentions of the independence movement and the first president to place educational reform and literacy at the centre of the independence agenda, there has been a failure to create the infrastructure needed to achieve universal primary education. This has led to a history of non-government schools filling gaps and a multiplicity of actors in the educational environment, which has not helped to improve literacy rates or educational outcomes. A recent sharp drop in the transition rate from primary to secondary school, from a recorded 67.5% in 2006 to 49% in 2009, is of particular concern (Tanzanian Ministry, 2010).
President Nyerere was one of the few educated Tanzanians at independence; he not only earned a college degree but received a scholarship for graduate studies in Scotland at the University of Edinburgh from 1949-52. The colonial attitudes of the time did not support education of the African individual. Unfortunately, the educational institutions created before and after independence reflected many of the biased colonial attitudes towards Africans and African culture. The curriculum was not well matched to local needs, which alienated students and parents (Wanjira, 2007). This is particularly true in Maasai communities, where formal education has not been a priority (King, 1972). Traditional knowledge and customs have been marginalized, seen as irrelevant and as examples of unscientific, irrational, backward attitudes. British colonial officials instilled these prejudices, as exemplified in the following statement from a colonial report: "Such in brief are the peoples for whose welfare we are responsible in British Tropical Africa. They have a fascination of their own, for we are dealing with the child races of the world, and learning at first hand the habits and customs of primitive man… from the hardly human 'Bushman' and the lowest type of cannibal to the organized despotism and barbaric display of a Negro Kingdom like that of Buganda, or to the educated native community, a few at least of whose members boast a training in the English universities and Medical Schools…" (Cited in Ishumi and Maliyamkono, 1995, p. 48).
The first three years of independence saw the widespread expansion of schools, the end of racial segregation, the development of teacher training, elimination of school fees and an overhaul of testing and examination to reduce waiting times for secondary school entrance.By 1967, Nyerere's government introduced an innovation that reflected his nationalist goals of creating a more equitable and economically sustainable society: Education for Self-Reliance (ESR).In Nyerere's own words: "Our education must therefore inculcate a sense of commitment to the total community, and help the pupils to accept the values appropriate to our kind of future, not appropriate to our colonial past" (Cited in Ishumi and Maliyamkono, 1995, p.51).
The exponential growth in the education sector from the first decades of independence to the present has not led to the achievement of universal literacy. The introduction of school fees to improve school finances in the 1980s and 1990s adversely affected Tanzanian enrollment numbers. Recently, enrollment has once again been increasing according to the UNDP Millennium Development Goals Mid-way Report 2000-2008 (UNDP, 2008), but low literacy, the difficulty of finding teachers for rural schools, and the reported underfunding of the education sector continue to produce a lower level of literacy than is to be desired.
Contextual Background
Project TEMBO is an example of an international NGO responding to educational gaps with a local innovation outside the public education system that supports learning the language of instruction in secondary schools (English) while engaging learners with critical thinking and active learning techniques not usually associated with Tanzanian public schools.The role of consultation with local staff and the creation of governance structures that include families reflects a participatory approach to curriculum development that supports local needs.
Haki Shiriki Katika Sera is a partnership between universities that has led a curriculum renewal process which engaged civil society organizations at the national level to identify needs for graduate programs that strengthen public engagement by identifying research agendas and training graduates in civil society research.
Jane Kenway and colleagues, in their book: Globalizing the Research Imagination (2008) urge us to: interrogate our research practices, challenge dominant global research paradigms, and think more deeply about the global politics of knowledge.In keeping with this call to question issues of power and the tendency to practice 'globalization from on high' along with the 'intellectual colonization' that frequently accompanies North/South 'partnerships', these two projects have involved university faculty and librarians, educators and community leaders in a community based participatory process to identify how curriculum can support educational success within a framework of active learning and critical problem solving that is relevant to local needs.The next section reviews selected literature related to NGOs and international partnerships in educational development initiatives, followed by discussion and analysis of our two Canadian projects currently at work in Tanzania.
Review of Literature
International NGOs, particularly in African countries, are proliferating as the UN Millennium Development Goals (MDGs) actively encourage such partnerships in order to achieve results in lowering extreme poverty. Education is central to these goals, as is the empowerment of girls and women (Buvinic et al., 2008; Tembon & Fort, 2008; Kabeer, 1999, 2005). Some critics (Moyo, 2009; Nutt, 2012) allege that state responsibilities are being downloaded to NGOs, contributing to a "culture of aid-dependency" (Moyo, 2009, p. 37), while others (Appadurai, 2006, pp. x-xi, 131) contend that NGOs can seize (and re-fashion) the global agenda at a grassroots level: "globalization from below".
Others' experiences with international partnerships in education initiatives, as discussed below, inform our own work in this regard.The review of literature focuses in particular on the work of other NGOs working in developing countries through various forms of education programs, both formal and informal with an aim to address issues of poverty reduction and the empowerment of girls and women.Non-formal education (NFE) has been identified as an alternative legitimate form of education, which utilizes locally qualified people in development efforts and incorporates them into educational programs run at the grassroots level in the community (Jones, 1997).One advantage of NFE is that it is more cost-effective than formal education because people move through courses and programs at a faster rate than students in the formal system, and students are able to utilize practical knowledge and skills immediately.Jones ' (1997) comparative study involving eight islands in the Pacific and the Caribbean is particularly relevant to the work of Project TEMBO and the micro-business opportunities offered to women along with NFE, given that in this study the women formed the majority of participants in NFE.Jones felt that it was important to assess how NFE training imparted skills that could bring about individual change and enhance employment prospects for women.NFE can be used to validate women's knowledge, their traditional skills and life experiences.Moreover, the critical consciousness-raising in women empowers them to bring about change in their own communities and the wider society.A consensus existed among the participants that NFE programs effectively addressed their educational, social, economic and political needs.The empowerment factor of NFE is to be underscored since it encouraged the women to engage in community activities and to pursue their economic dream in the form of small businesses.Women also recognized the important role they played in development and expressed the need to collaborate with men, and involve young women in raising awareness; after all they would be the future leaders.The recommendations suggest the necessity for women, particularly those in rural, isolated communities, to come together more often in order to exchange views relevant to their needs, development issues and to participate in national and regional plans of action.
In his article: "Popular Education in Nongovernmental Organizations: Education for Social Mobilization?"Magendzo (1990) distinguishes government sponsored educational systems in Chile from the education embarked on by NGOs referred to as "popular education".The general goals of popular education are based on the idea of building a participatory and democratic society, attempting to establish links among individuals, groups, and community based organizations in order to overcome social fragmentation (Bengoa, 1988, cited in Magendzo, 1990).Popular education, unlike other forms of non-formal education, explicitly uses a radical method that calls into question the authoritarian practices and the mechanical transmission of knowledge characterized by traditional pedagogy.It stresses dialogue, group learning, and values the participants' experiences as the foundation for further learning and knowledge.
The NGO, Interdisciplinary Program for Research in Education, offered an alternative set of criteria for evaluating the state's programs and decisions.According to Magendzo, "Education for social mobilization has been understood as synonymous with popular education….It is concerned with empowering citizens to play a crucial role in constructing a just, democratic society." (1990, p.50)One outstanding limitation noted in this study relates to the fact that the participants did not collaborate in developing a body of popular educators in the community.
A more recent examination of what constitutes community participation and the impact various forms of community participation have on school access and quality is a qualitative study in Southern Ethiopia conducted by Swift-Morgan (2006).Ethiopia shares many similar traits with Tanzania as a relatively poor, independent African country, still striving for universal primary education for its largely rural population.The NGO World Learning facilitated the study in eight rural communities using in-depth interviews and focus group discussions with different education stakeholders and communities.Most schools in the study received support from World Learning or from other NGOs.The World Bank (Swift-Morgan, 2006) described participation as "a process through which the stakeholders influence and share control over development initiatives and the decisions and resources which affect them" (p.2).Community participation involves a locally-driven approach to development reforms.
The literature indicates an overwhelming consensus that community participation is important for expansion, improvement of schooling and access and quality of education.The relationship between community involvement and increased school efficiency and student learning is based on the premise that in traditional society, the community is the provider of children's education.The limitation in this model is that the expansion and quality improvement of education in many developing countries has stalled due to the state's failure to reach marginalized populations.
A study of NGO programs in rural Mali (Solomon et al, 2008) suggests that the innovativeness, flexibility, and propensity for promoting participatory practices by NGOs are to be lauded.But one limitation is that they rely on external funding and this dependency curtails project sustainability.They also lack coordination with the wider systems of community and government organizations and partners whether in education, health care or other service provisions.In order to successfully implement a program in a developing context, it is important to draw on diverse local perspectives, promote broad-based participation, and provide culturally appropriate ways inclusive of community members and in particular, women.
A UNICEF program tailored for use in East African formal and non-formal educational settings has been successfully implemented because it addresses issues in a culturally relevant way.To address the issues associated with "The rights of the child", UNICEF launched a program called the Sara Communication Initiative, (www.unicef.org/lifeskills.index_8020;Russon, 2000).The program is divided into eight lesson topics which are presented in comic book form.The topics include child labour, sexual abuse, Female Genital Mutilation (F.G.M.), sexually transmitted diseases, the right to food, clean water, a safe home and education.Each topic is followed by interactive activities designed to reinforce the lesson topics.
Some of the existing literature critiques the work of NGOs in developing countries as a persistent colonial linkage characterizing the relationship between the 'givers' and 'receivers'.The moral terms being set in these relationships and how virtue is deployed as a means of exercising power are analyzed by Mindry (2001).She explores the complex and troubling relationship "that constitutes some women as benevolent providers and others as worthy or deserving recipients of development and empowerment" (p.1189).Robinson-Pant ( 2004) while recognizing the strong links made between women's education and poverty eradication strategies urges a critical analysis of the values underlying educational change and transformative agendas in developing countries.Her ethnographic study in Nepal questions what counts as 'women's education' (p.474).Robinson-Pant's critique extends to the role of the World Bank "whose economic rationale has continued to dominate policy strategies….The nature of schooling for girls has rarely been questioned….donorshave been so preoccupied with getting more girls enrolled in school, they have often ignored the role of the school in perpetuating traditional inequalities… [and] only recently has the education of adult women been given more considered attention" (Ibid.) While many international NGOs are staffed by well-meaning paid and unpaid, trained and untrained individuals from the global North, there is little coordination or oversight among them, and questions and tensions emerge as to their aims, activities and long-term outcomes, as evidenced in the articles reviewed above.Relations with indigenous communities; local and national governments; language and other cultural barriers and the imposition of 'foreign' ways of teaching and learning are in need of greater critical reflection.In his book: Fear of Small Numbers, Appadurai refers to "grassroots globalization" or "globalization from below" (2006: x -xi; 131).He argues that activist NGOs both seize and shape the global agenda with respect to human rights, gender, poverty, environmental issues, and disease.Appadurai speaks of these "transnational activist networks" as ranging from: relatively local and regional in scope and sometimes truly global in their reach and impact.At the upper ends they are vast, well-funded, and widely known networks that have become mega-organizations.At the other end, they are small and fluid, bare networks, working quietly, often invisibly but also across national and other lines.(p.132) He remarks that the study of these networks has sparked recent interest as they represent "new forms of international bargaining", an expansion of the study of social movements, and the identification of a "third space outside of market and state" (Ibid.p.132).
Project TEMBO
TEMBO structure, history and organization
TEMBO (Tanzania Education and Micro-Business Opportunity) is a Canadian-based NGO, founded in 2004, with volunteer Boards based in both Canada and Tanzania. Local staff are hired and trained to implement the projects within the two villages, while volunteer directors (both Canadian and Tanzanian) and committees involving parents and other villagers provide guidance. The TEC staff includes: the TEC Coordinator (a member of TEMBO Trust staff), three teachers from Canada, three volunteers from Canada, three matrons (working on a shift basis over a 24-hour period), and three cooks (local Tanzanian women filled the positions of matrons and cooks).
Project Objectives. TEMBO's mission is to educate and empower girls and women in northern Tanzania in two villages: Longido and Kimokouwa. In response to an expressed need of TEMBO-sponsored girls to improve their English in order to be more successful in secondary school, the project developed TEMBO English Camp (TEC) in collaboration with local staff and Tanzanian trustees based in Longido. Designed by two Canadian teachers with input from local staff and participants, the program runs for three weeks each June. The objectives of the program are to provide:
• formal English language instruction for three hours per day;
• a range of informal activities to support language instruction and critical thinking;
• expansion of vocabulary development and life skills;
• an alternative teaching method;
• learning experiences which supplement the academic curriculum; and
• safe and secure accommodations and healthy, nutritious meals for the three-week period.
Project Methods. One of the key ways TEMBO's mission is accomplished is through an education sponsorship program focusing on secondary school, teacher training, and vocational training. Girls belonging to the Maasai tribe are the main recipients of education sponsorships. Traditionally, girls are left behind when it comes to receiving education opportunities. In Maasai culture, girls typically marry at the end of primary school and begin having babies soon after. As women, they have few, if any, rights. Their lives center on taking care of the home, the children, and the men. Increasingly, more girls want to attend secondary school and become educated Maasai living in the modern world (Phillips & Pashotan Bhavnagri, 2002).
Project Challenges. As discussed earlier, the Tanzanian education system suffers from being under-resourced at all levels. Tanzania met the challenge of a lack of educational opportunities with a rise in the number of schools and teachers, particularly at the secondary level. In the past most schools existed in the private sector: in the 1990s there were only 365 public and private secondary schools, and over 30% of the students and 60% of the schools were not funded by the government (UNESCO, 2010; Ishumi & Maliyamkono, 1995, p. 54). Subsequently, with the state emphasis on increasing educational opportunities, "there has been an increase in total enrolment. The rapid increase of enrolment has been a result of a well-orchestrated government initiative of constructing at least one secondary school for each Ward all over the country." By 2008 the total enrollment at the secondary level was 1,164,250 students in Forms 1-4 across 3,485 schools (World Bank, 2010). Despite this increase, there continues to be a deficit in publicly funded educational spaces for school-aged children.
Students who are fortunate enough to qualify for secondary school face a number of challenges.First, most students must find individuals or organizations to sponsor them.Once they begin secondary school, the language of instruction is English and, for the Maasai, this is their third language.Swahili is the language of instruction in primary school.In rural areas such as Longido and Kimokouwa, the secondary school teachers delivering the subject material are Tanzanians who are under-qualified and insufficiently trained to teach English, making comprehension extremely difficult.Because of this, many students receive very poor or failing grades.Over the course of seven years that TEMBO has provided education sponsorships, the local TEMBO staff has realized that the girls do not do well because they do not understand what they are being taught.Quite simply, in order to succeed in secondary school they must speak and understand English better.
Pedagogical Support for Active Learning and Critical Thinking
Volunteer teachers created both formal and informal lessons to help the girls gain confidence in their ability to speak and understand English, and supplemented these lessons with enjoyable evening activities which require participation in English. The Canadian teachers worked on a schedule that provides for formal and informal instruction with activities and topics appropriate to the level and cultural orientation of the participants. Instructional topics give special attention to themes and units of study from the Tanzanian Secondary School Syllabus/Curriculum. As well as comprehensive lesson plans, the teachers prepared workbooks at three levels for each of the instructional groups. The workbooks include lesson follow-up assignments, supplementary activities, song sheets, and a vocabulary section. The goals of TEC are as follows:
• To offer a range of informal activities to support language instruction, critical thinking, expansion of vocabulary development and life skills. The informal activities proved popular with the girls. The girls are organized into six teams, allowing for easy formation of groups as well as giving them a chance to work with others and make new friendships across forms or villages. The activities provide an opportunity to use the English language as they play and explore a range of activities: arts and crafts, sports, a community walk, and a camera club (taking pictures of their community: churches, schools, health clinics, market day, and village activities). During the informal activities, the girls gain an opportunity to work together to complete projects. Readily available materials and supplies gave the girls the opportunity to develop, or further develop, the understanding that the materials are available for all to share. Modeling the importance of talking in English about difficulties, problem solving and finding a solution through dialogue and conversation became an important part of the informal curriculum.
• To expose the students to an alternative teaching method. Generally speaking, teachers in Tanzania use the rote method to teach. No doubt there are numerous reasons for this; however, the intention of TEC is to use an interactive teaching approach which engages the learner. This method is a challenge for the girls but an important experience for them to have in order to develop divergent thinking skills, problem solve, and learn that there is more than one way to do things. Equally important is class size. The intention of TEC is to have smaller numbers of students per teacher. During the 2011 camp, 42 students shared three teachers and three teaching assistants, a ratio of 7 to 1, compared to class sizes of 60 students in the local schools. According to Action Aid, in Tanzania there is on average one teacher for every 56 students, while UNESCO recommends at least one teacher for every 40 pupils in order to provide a good learning environment (Sumra, 2006).
• To provide a learning experience which supplements their academic curriculum. During the planning stage of the development of the lessons for TEC, teachers refer to Tanzanian texts for teaching ideas from the national curriculum. The objectives behind all the activities include the development of specific vocabulary and basic linguistic concepts essential to cognitive and language development and academic success. These basic concepts are: colours, letters, numbers/counting, sizes, comparisons, shapes, direction/position, self/social awareness, texture/material, quantity, and time/sequence. A working knowledge of these basic concepts is quickly assessed in conversations with the students. Most of the beginning students lack both the receptive and expressive use of these skills, so during their first three-week TEC program the lessons concentrate on developing a good working knowledge of these basic concepts. Then, during the two follow-up programs, they move beyond the basic skills to the higher cognitive and language development skills described in Bloom's Taxonomy of Learning, i.e. explaining, comparing, categorizing, and predicting.
Partnership Challenges and Successes
A key component for the success of the TEC Program centers around the nature and delivery of the goals.Tanzanian TEMBO staff and Canadian Volunteer teachers partnered to create both formal and informal lessons to include not just academic goals to help the girls gain confidence in their ability to speak and understand English; but the girls interact in a safe welcoming learning environment in which they can also develop leadership abilities in a single-sex setting.Teachers supplement lessons with enjoyable activities throughout the day and evening requiring participation in English and social development opportunities.Reports from the girls and their regular classroom teachers confirm that the girls improved in their ability to understand what is being taught in the classroom.They also demonstrate much more confidence especially in speaking English.The TEC creators and facilitators believe that by continuing this yearly three week program, the girls will progressively enjoy measurable success that will lead to passing National Examinations and promote their confidence as they become women who can and will undertake leadership roles in their community in the future.
Structure, history and organization
Haki Shiriki Katika Sera "Building Civil Society Partnership for Poverty Reduction" is a collaborative project bringing Canadian faculty, academic librarians and students from Huron University College to work with Tanzanian faculty academic librarians and graduate students at the Institute of Development Studies at the University of Dar es Salaam.The project team worked together for over five years to develop the project proposal engaging with local government officials, civil society organizations and international funders interested in civil society strengthening.The strong relationships between academics in Tanzania and local NGOs underpin the collaborative nature of the new curriculum developed to strengthen the ability of educational institutions to produce graduates that are ready to participate in meaningful community development led by locals.
From the capital city to the community
The project is primarily devoted to strengthening post-secondary education of graduate students in civil society and addressing governance issues with a mainstreaming of gender analysis to support the building of capacity for educational achievement and access to information.Canadian and Tanzanian librarians and Library Technicians are providing increased access to online databases for graduate students in the newly re-furbished graduate student resource centre.Canadian and Tanzanian students working in remote rural villages of Monduli District are using their keen interest in civil society for the benefit of the local government through facilitating access to government programs with an innovative piloting of Village Information Officers using mobile phones.And an international conference on open access to information for 2012 is being organized by UDSM faculty and two local NGOs.
Students in the Civil Society Studies program can tailor their field work to local community needs and build their expertise in relevant areas that lead to employment in civil society organizations which struggle to find educated and experienced local staff.Students at the post-secondary level in Tanzania often do not have access to relevant literature related to Africa.The new curriculum in civil society studies allows for the collection of data in collaboration with local NGO efforts and can now include more data and reports from local e-governance efforts in several Tanzanian Ministries.
The new curriculum is supported by library resources which are increasingly online and include the growing population of open access resources coming from Africa and the global south.The university is enabling the students to learn more about civil society with an emphasis on providing a reference reading room.Hardware and databases available in the graduate resource centre focus on relevant sources and encourage the use and publication of articles on Tanzania and Africa in online journals licensed for free public use.
The Masters and PhD level students now in the civil society program at the Institute for Development Studies are actively engaged in producing data, articles and theses in areas such as health education provision, information needs of remote rural villagers to improve good governance and access to local media for women.The UDSM/HUC collaboration has involved students in the development of pilot civil society projects to respond to needs identified in base line studies carried out by Canadian and Tanzanian students.
Two theses are currently in progress in Maasai communities which highlight the need for further strengthening of local engagement to bridge the gap between government and local communities.The first base line study highlights the lack of access to basic education and healthcare.Focus groups identified the lack of support for women seeking better farming techniques, as well as for men striving to improve livestock husbandry.The use of local media to transmit information related to health issues, educational opportunities and exam information particularly for women who have no other source of information due to illiteracy, has been proven successful but a lack of local radio and TV leaves much of rural Tanzania outside media contact.The second thesis highlights the successful examples of community radio and its positive impacts.The Monduli District in which this graduate student has been working recently invited a local station to collaborate with the public relations officer after the UDSM/HUC project supported a local media workshop familiarizing officials with radio broadcasting.
Context of the Project
This project is designed to work with the government's own initiatives.The National Strategy for Growth and Reduction of Poverty -MKUKUTA in Swahili -received approval in February 2005.The government prepared a framework for the 5 year implementation period from 2005-2010.The government commitment to implement the NSGRP -MKUKUTA stimulated an enabling environment for the civil society sector to operate and to engage for social transformation.Local and international civil society organizations are mobilized to enhance community participation and resource contribution in development activities and also participate in reviewing development strategies including MKUKUTA programs and projects; engage in dialogue with Government and Development partners to consolidate and present community views; stimulate debate, and raise understanding of the policy push and implementation in this arena.Non-state actors are fundamental to the Poverty Reduction Strategy (PRS) and the MKUKUTA.One of the stated challenges is to achieve participation which goes beyond popularization and dissemination and engage the grassroots in two-way communication.This highlights the need for dialogue at all levels, including civil sector organizations.
With its focus on the post-secondary education sector Haki Shiriki Katika Sera actively created new programs and graduates fluent in civil sector issues with relevant knowledge and background.At the same time, Tanzanian faculty have tried to reach out to areas underresourced in multiple aspects.In one pilot study Canadian faculty and students have worked alongside Tanzanian faculty and students with Maasai villagers.Data collected from communities in baseline studies led to close cooperation with local government Community Development officials and a concerted effort to achieve better linkages between rural citizens and services in Monduli Town.The most significant outcome of this partnership between villages, local government and academics has been the pilot use of Village Information Officers who can call local Ministry officials about all aspects of government services from vaccinations, to sexual assault, wildlife poaching and veterinary services using mobile phones supplied by the project.This pilot has been supported by graduate students from the new Civil Society Program who have carried out field work and supported training efforts.The interplay between the curriculum, thesis research and civil society partnerships has created a dynamic partnership between the university, community and local government that supports development.
Project objectives
The outcomes of the project are mirrored in this pilot, which strengthens civic engagement and governance through capacity building, information access, outreach for rural villages, and support for citizen dialogue on the constitution. The project purpose is being achieved through four components:
• Capacity development, to strengthen the capacity of UDSM to offer gender-equitable programming on civil society and poverty reduction;
• Information resources, to support access to gender-inclusive information on civil society and poverty reduction;
• Outreach, to strengthen the capacity to create and sustain poverty reduction initiatives, as the vehicle for meeting identified needs for access to better information, analysis and training; and
• Policy dialogue, to strengthen capacity to participate in policy dialogue on poverty reduction with communities, governments and donors, including the integration of gender equality perspectives in all policy interventions.
The purpose of the project has been achieved through these four components with the support of academics, local officials and volunteer villagers.The conference on open knowledge, democracy and access to information in 2012 will serve to complement the constitutional dialogue already underway in the media by emphasizing the voices of those who are making accessible a variety of information and data for academic and civil sector uses.
Project Methods
Haki Shiriki Katika Sera embodies the principles of participatory development strategies which are recognized for their value in building sustainable transformation (Brown and Tandon, 2008 p. 231).The partnership between the University of Dar es Salaam and Huron University College was co-led by two professors.The development of opportunities to do field work in rural and remote communities came from in-depth interviews and meetings with local government officials, traditional leadership and village meetings with both women and men to discuss priorities for change.
Both Canadian and Tanzanian undergraduate and graduate students participated in field work and training which has led to the completion of a Masters and a PhD thesis directly related to the project.At the same time the project has supported local government initiatives through consultation on the use of Village Information Officers and District Public Relations Officers and demonstrations of community radio programming for civil society.
Educational Infrastructure and Resources
To support learning in civil society and poverty reduction a curriculum renewal process was initiated in Tanzania at UDSM.This led to a new program focused on civil society and a refurbished graduate resource centre.The courses and materials support learning in the field with civil society agencies and local government as well as village leadership.The faculty lecturers in the Department of International Development Studies (IDS) at UDSM were also supported in transforming classroom experience with new teaching resources and new pedagogical approaches to build critical thinking and strengthen local content.Local civil society agencies have been consulted and students given opportunities to do work within agencies in Dar Es Salaam.At the same time the approach to lecture based learning has been transformed through peer discussions that emphasize successful engagement strategies in the classroom through alternatives which include problem-based learning and small group activities.A peer model used in Canada at Western University, the Instructional Skills Workshop, was offered to Tanzanian lecturers and nine faculty have been trained including two trainers for future workshops at UDSM.
Partnership Challenges and Successes
What does institutional collaboration between North and South offer for social and educational transformation? This partnership is an example of success in three areas and has had a significant impact owing to its participatory approach. First, the capacity of both UDSM and Huron was enhanced by the collaboration, allowing teaching on both campuses and curriculum development to benefit from the combined talents of multiple faculty and students. This created opportunities to improve UDSM programs, resources and educational opportunities in the new civil society graduate program.
Second, the evolution of relationships between UDSM and civil society organizations in Dar es Salaam has allowed the university to play a role in providing short workshops on funding, research and policy writing that benefit CSOs. Canadian and Tanzanian students have benefited from stronger links and positions within CSOs to complete field work relevant to local issues.
Third, the development of the Village Information Officer benefited multiple stakeholders, allowing new programs and services to flow from Mondul District to the ward level with the benefit of insights gained through village input. Students facilitated training and support for the Information Officers while gathering data for fieldwork. Local officials participated in training on the use of radio and developed a role for the Public Relations Officer to work alongside the local radio station. Consequently, the challenge of being responsible for developments among so many stakeholders is apparent. The sustainability of continued dialogue and service provision to rural and remote villages remains a major challenge.
Analysis
Both projects are focused on literacy in the broadest sense to achieve critical skills in civic engagement, poverty reduction, problem solving, decision-making and reducing gender imbalances, and as such are in line with the United Nations' Millennium Development Goals (MDGs). Achieving improved access to information and educational opportunities for Tanzanians that support poverty reduction is the shared objective of these two projects. Appadurai (2006) urges further study of these efforts at capacity building, partnership development, and transnational activism: "We need to watch them, for the coming crisis of the nation-state may lie not in the dark cellularities of terror but in the utopian cellularities of these other new transnational organizational forms … Here lies a vital resource… to the strained relationship between peace and equity in the world we inhabit" (p. 137).

The disparate approaches of these two projects can be linked through the educational philosophy of Paulo Freire (1970). Freire expounds on the difference between a 'banking system of education' and a 'problem-solving approach'. Rather than assuming each student is an empty slate ready to be filled with knowledge, we encounter Tanzanians as individuals who desire the skills needed for assessing situations, assembling strategies for progress and reducing barriers to the successful completion of their own goals. Access to information and the ability to use logical problem-solving based on knowledge and previous experience are fundamental to their advancement. In dealing with adolescents, university graduate students or Maasai villagers, these projects emphasize the right to participate in local community development and national affairs, and the right to participate in policy dialogue and problem-solving at the local level. Whether by strengthening university program options in civil society studies, by offering young girls the experience of using critical thinking and higher cognitive skills in problem-solving, or by encouraging villagers to ask more questions about local government services, the underlying assumption is that the individual has the right and responsibility to play an active and engaged role as citizen actor. To achieve meaningful advancement, many such small steps must be taken on the road to an "Education for Critical Consciousness" (Freire, 1973). In the conclusion to that 1973 book, Freire states:

"My preoccupation throughout this essay has been to illuminate the principles and the basic aspects of an education which will be 'the practice of freedom'. […] This undertaking requires something basic from any one of the Subjects participating in it - that they ask themselves if they really believe in the people, in ordinary people…. If they are really capable of communing with them… If they are incapable [of this] they will at best be cold technicians. They will probably be technocrats, or even good reformers. But they will never be educators who will carry out radical transformations." (p. 164)

This is a good reminder to all educators working with NGOs from the global North who seek to work in true partnership with educators and students from the global South. Nyerere's original education for social reconstruction failed for a variety of reasons more aptly studied elsewhere, but it still holds true that Tanzanians require the critical skills that the problem-solving approach to education can provide. Canadian NGOs, working in collaboration and partnership with local communities and educators, need to continually reassess the goals and outcomes of their programs and the nature of their involvement to ensure that twenty-first-century "participatory" collaborations do not inadvertently reinvent the colonial relations of the past or old-style 'banking education'. Lessons learned from other NGOs with respect to building greater community ownership, drawing on diverse local perspectives, and promoting broad-based participation inclusive of all community members, particularly marginalized girls and women, encourage a more social-justice-oriented "globalization from below" (Appadurai, 2006). From a sense of international community to local grassroots projects in Tanzania, these two projects are in many respects examples of how the principles of Nyerere's Education for Self-Reliance are still valuable today: "Our education must therefore inculcate a sense of commitment to the total community, and help the pupils to accept values appropriate to our kind of future, not appropriate to our colonial past" (Nyerere, cited in Ishumi & Maliyamkono, 1995, p. 51).
The success of the new graduate Civil Studies program in attracting students has led to opportunities for field work in rural areas and with civil society organizations in the capital. The graduate resource centre refurbishment has supported new graduate students in completing their research and in seeking out ways to work with civil society organizations on mutually beneficial data collection. The civil society sector has benefited from student interns who have developed directories of local and national civil society organizations and local government services for citizens and volunteer information officers. | 2018-12-07T20:46:26.791Z | 2013-01-08T00:00:00.000 | {
"year": 2013,
"sha1": "75cb54b009df61a240776684cee689ec81bd677a",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.lib.uwo.ca/index.php/cie-eci/article/download/9211/7397",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4a4dfc9bee0620449f3f2e2fcb577d58dd19ab36",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Geography",
"Political Science"
]
} |
14847843 | pes2o/s2orc | v3-fos-license | A scientific correlation between dystemprament in Unani medicine and diseases: a systematic review
Background: Temperament, or mizaj, refers to the balance of four humors that differs among individuals and, as a result, informs specific therapy for their diseases. Objective: The aim of this study was to review the scientific correlation between temperaments in Unani medicine and diseases. Methods: This study was carried out from March 2015 to February 2016. A computerized search of published articles was performed using the PubMed, Google Scholar, Scopus, Web of Science, and Medline databases as well as local and regional resources, covering 1983 to 2014. The search terms used were temperament, dystemperament, diseases, mizaj, and sue mizaj. Additional sources were identified through cross-referencing. Results: The results of this study indicated a relationship between dystemperament and the incidence of some diseases, such as muscle diseases, skin diseases, asthma, palpitation, bipolar disorder, hemodialysis, hysteria, hypertension, sinusitis, aging, diabetes, and diarrhea. However, further studies are needed to prove the role of dystemperament in the incidence of other diseases. Conclusion: The findings support a relationship between dystemperament and the incidence of these diseases. These results may help patients and physicians move the humors toward equilibrium to avoid disease. Further studies are required to discover the relationship between dystemperament and other diseases.
Background
Unani medicine is based upon the theory of humors, which presupposes the presence of four humors in the body: dam (blood), balgham (phlegm), safra (yellow bile), and sawda (black bile) (1,2). The mizaj (temperaments) of individuals are accordingly described by the words sanguine, phlegmatic, choleric, and melancholic, based on the preponderance of the respective humor (3). In Unani medicine, the human body is considered to comprise the following components: temperament (mizaj), humor, spirit (ruh), organs (a'da), and function (4). The ruh is responsible for the contraction and relaxation of muscle fiber, or muscular activity (5). Besides, there is a residual power of self-preservation or adjustment endowed by nature, called Tabi'at (vis medicatrix naturae) (6). Hadm udwi consists of the chemical changes that sustain life and the functioning of cells, for which certain nutrients and substrates must enter the cells and wastes and synthesized products must leave them. Protein synthesis under the influence of genes (DNA) falls under this function (7,8). A condition in which the muscles of an organ or the whole body become flaccid is called istirkha and huzal, which may be caused by nutritional deficiencies (9). The signs and symptoms of Duchenne muscular dystrophy (DMD) are related to istirkha and huzal. DMD is the most common and severe form of muscular dystrophy seen in males (9). Zakaria Razi (865-925 AD) critically assessed, in his book Al-Havi (10), all the knowledge on istirkha available at that time. The use of medicinal herbs and herbal medicines is an age-old tradition, and recent progress in modern therapeutics has stimulated the use of natural products worldwide for diverse ailments and diseases (11-25).
Statement of problem
As Unani therapy depends upon equilibrium, any change in mizaj disturbs the equilibrium and thereby threatens life (21,22). Each of these humors was associated with, and classified by, the following qualities: hot and moist, cold and moist, hot and dry, and cold and dry (21). Galen first introduced the concept of temperament, derived from the Latin word "temperare," which means "to mix" (23). It is said that each person possesses a dominant plus a subdominant temperament (21). Kant stated that knowing what sort of person has a disease is more important than knowing what sort of disease a person suffers from (23). Among the temperaments, the sanguine and melancholic were emotional types, and the phlegmatic and bilious types were action types (24). In children, the same temperament labels are referred to as aggressive, fearful, apathetic, and impulsive (25).
Objective of research
The aim of this study was to review the results of published articles regarding the role of temperament disturbance in the development of diseases.
Design
A computerized search of published articles was performed using the PubMed, Google Scholar, Scopus, Web of Science, and Medline databases as well as local and regional references. The search terms used were "temperament, dystemperament, mizaj, sue mizaj, diseases." Additional sources were identified through cross-referencing.
Inclusion and exclusion criteria
Articles were included if they were published between 1983 and 2014 and their full text was available in English. Included articles consisted of clinical trial, in vitro, in vivo, review, or meta-analysis studies. All relevant studies were evaluated on the basis of their titles and abstracts. Exclusion criteria were abstract-only publications, articles not available within the time frame of the study, articles that did not match our inclusion criteria (e.g., those in languages other than English), irrelevant studies, and studies on prevention of disease and temperament from an epidemiological perspective.
Quality assessment
The sampling method, existence of valid instruments and presence of inclusion and exclusion criteria were checked. The search results identified 25 articles; however, 16 studies were excluded from the review, leaving nine articles for further analysis.
Clinical disorder
The results of this study indicated that a notable number and percentage of patients with a given clinical disorder share the same dominant and subdominant temperament.
Muscle diseases
Unani physicians describe different types of istirkha: rabw istirkha'i (shortness of breath due to paresis of the respiratory muscles); laqwa istirkha'i (deviation of the face due to flaccidity of the facial muscles); istirkha' al-hanjara (flaccidity of the laryngeal muscles due to the infiltration of fluids into them; in this condition the movement of the larynx is stopped); istirkha' al-mi'da (a condition in which the tonicity of the stomach muscles is lost, due to retention of fluid in the stomach, severe vomiting, or diarrhea); istirkha' al-maq'ad (flaccidity of the anal canal due to paresis or paralysis of the sphincter muscles, which leads to involuntary expulsion of stool and gas from the anus); istirkha' al-mathana (flaccidity/paresis of the muscles attached to the urinary bladder, with or without injury of the bladder, followed by incontinence of urine); juhuz istirkha'i (protrusion of the eyeball due to paresis of the eye muscles and the ligaments attached to it); istirkha' al-jafn (drooping of the upper eyelids due to paralysis of the eye muscles or, congenitally, due to a defect of the eye muscles); istirkha' al-lisan (a condition in which the tongue becomes flaccid, salivation increases, and the patient has difficulty speaking); istirkha' al-litha (flaccidity of the gums, in which they become spongy); istirkha' al-safan (flaccidity of the scrotum); istirkha' al-qadib (flaccidity of the penis) (23). Etiology (asbab): X-linked recessive disorder, obstruction in asab (nerves), excess rutoobat in azlat (26,27).
Skin diseases
According to Zakaria Razi, bahaq is a common dermal disease characterized by hypopigmentation and hyperpigmentation (1). Akbar Arzani classified bahaq into two types: (a) bahaq abyaz, a mild hypopigmentation that appears suddenly and disappears quickly after the topical use of detergent drugs (4, 5); and (b) bahaq aswad, a black discoloration of the skin featuring the appearance of wheat-shell scales (5, 29).
Palpitation
In traditional medicine, palpitation was classified according to its causes, for example as hot, cold, moist, or dry, or as combinations of these dystemperaments (31).
Bipolar disorder
In this special issue dedicated to Falret and the French contributions to the concept of cyclicity in manic-depressive illness, we begin with a historical overview of the development of the concept of cyclicity and its fundamental significance in manic-depressive illness, and we underscore how the concept fell into neglect only to reemerge in recent years. We then look at the intimate relationship between mania and depression. The hypothesis of the primacy of mania is discussed. The thesis is presented, supported by the examination of 100 consecutive index manias, that in most cases mania is triggered by external factors acting upon hyperthymic patients, determining an exogenous cyclicity. On the other hand, in BPII patients the temperamental mood instability (cyclothymia) is an inherent and decisive factor in determining the cyclic autonomous course of the disorder. Finally, a new distinction of bipolar disorders, based on premorbid temperament and course of the illness, is considered (25).
Hemodialysis
A study correlating the characteristics and mental health of hemodialysis patients with different temperaments, conducted to provide a psychological basis for nursing care, concluded that anxiety, fear, paranoia, and mental disease traits correlated positively with the EPQ personality scores. There were negative correlations between the scores on each SCL-90 element, except the somatization (body) and hostility (antagonism) elements, and the scores on the cover-up (lie) scale. The SCL-90 element scores in the patient groups with choleric and depressive temperaments were higher than those in the patient groups with sanguine and phlegmatic (mucous) temperaments. The study concluded that patients on hemodialysis all have mental health problems to varying degrees, and that the mental health of the choleric and depressive temperamental groups was worse than that of the sanguine and phlegmatic groups. Thus, attention should be paid to the influence of different temperamental characteristics on the disease, and mental nursing care should be tailored to patients with different characteristics (32).
Hysteria
Hippocrates was the first to state that disease is due to an imbalance of humors (33). In addition, mental disorders were traditionally sorted into three types: mania, melancholia, and hysteria (5). In the book Firdausul Hikmah by Rabban Tabbari, mental disorders were classified into 13 types, e.g., sa'ra, waswasah, hizyan, fasad-e-khayal, fasad-e-aq'l, nisyan, bedaari, dawi, duwar, etc. (34). The same number of mental diseases was mentioned by Razi in his book Kitabul Fakhir (35). Hysteria is similar to epilepsy and unconsciousness. Its origin is in the uterus, but it involves the heart and brain. It is mentioned in the Unani classical literature with clinical pictures and detailed treatments. Nomenclature: A. Because the patient feels as if an air ball arises from her abdomen or pelvis and obstructs the pharynx, it is called "bao-gola" in Hindi. B. Because the patient feels as if she has been throttled, and because in the past the disease was associated with uterine disorders, it has been named "ikhtanak-ur-rehem." C. The word hysteria is derived from the Greek "hystera," which means uterus. The pathogenesis of general diseases in Unani medicine has been ascribed to three factors: temperament, structure, and continuity of tissues. Abnormalities of these factors are called altered temperament, altered structure, and discontinuity of tissues, respectively (8,9). Shaikh-ur-Rayees Abu Ali Hussain bin Abdullah bin Sina (980-1037 A.D.) described hysteria as similar to epilepsy and syncope. Its origin is in the uterus, but it involves the heart and brain. It occurs due to amenorrhea and retention of semen, and it occurs in pre-pubertal girls, multigravidae, and lactating mothers. Due to retention of semen, the temperament of women is diverted toward boroodat (coldness). Due to this coldness, the viscosity of semen and menstrual blood increases; hence they remain in the uterus. When this material remains in the uterus for a long duration, it becomes toxic (sammi madha), which leads to tashanuj and pressure in the vessels. Due to retention, the uterus becomes diverted to one side. Sometimes toxic vapors reach up to the brain and heart, which results in unconsciousness and syncope (26). Abu Bakar bin Mohammed Zakaria Razi (865-925 A.D.), an erudite writer of his time, stated that he once observed a woman who had aborted and was unconscious; the pulse was zaheef and sageer, and sometimes the pulse could not be identified. On another occasion he saw a woman who was conscious and not in a syncopal condition, but breathlessness was present. On yet another occasion he saw a woman with an epileptic condition. All these observations suggest that hysteria has many types. Sometimes a condition occurs in which it is difficult to identify whether the woman is alive or dead. Razi stated that hysteria occurs due to amenorrhea and retention of semen; in most cases, it occurs due to retention of semen. It occurs in virgin females because their sexual desire is not fulfilled. If froth comes out after an attack, it is a good sign (27). According to Shareef Shafuddin Ismael Jurjani (died 1140 A.D.) and Mohammed Kabeerud din (1894-1976 A.D.), hysteria is similar to epilepsy and syncope. Its origin is in the uterus, but it involves the heart and brain. Its causes are sometimes amenorrhea and accumulation of blood, or an increase in secretions or their retention. It generally happens in unmarried females or in patients who are habituated to coitus.
So when menses or secretions are blocked, the temperament is diverted toward coldness, which is the usual course, although sometimes conversion occurs toward putrefaction and hotness (hararat). Thus the toxicity is either cold (barid) or hot (haar); it appears in the uterus and also affects the heart and brain (36). There are two ways in which this toxic substance (sammi madha) can affect the brain and heart. First, the uterus is affected: it gets displaced to one side, and due to this displacement the heart and brain are affected. Second, when the toxic substance (sammi madha) becomes faasid in the uterus, toxic vapors reach the heart and brain and cause disease. If the toxic substance (sammi madha) goes toward the brain, it causes epilepsy, spasms, or paralysis. Hakim Mohammad Hassan Qarshi stated that the disease is more common in young, sophisticated females, especially when the nervous system is congenitally deformed or when they show an inherited susceptibility to this disease. Along with this, complaints of amenorrhea, puerperium, leucorrhoea, and displacement of the uterus are common. According to pioneers of Unani medicine, this disease also occasionally arises as a result of retention and putrefaction of seminal fluid along with severe constipation, impaired digestion, flatulence, anger, stress, anxiety, fears, sorrows, etc., and some exaggerated psychological factors. Sudden shocks are predisposing factors, and prolonged insomnia and excessive fatigue are also underlying causes (37). Maseehul Mulk Hafiz Hakeem Ajmal Khan (1894-1976 A.D.) noted that the origin of hysteria is in the uterus and called it ikhtinaq-ur-rehm. He stated that it occurs in rich females of delicate temperament and in females between the ages of 12 and 40 years living in cities. Amenorrhea and dysmenorrhea can also cause this disease. He further stated that chronic constipation, nafakh-e-shikam, distress, sorrow, anxiety, fear, and anger can all cause the disease (31). "Ancient medicine has called this disease hysteria, derived from the name of the uterus, because it is an organ given to women by nature in order that they be able to become pregnant" (25). Improvement is ideally possible through refraining from sexual intercourse and frequent virginity. Patients with hysteria should be treated with care and caution. These diseases can best be prevented by hot baths, massages, and exercise (25,32). Shareef Shafuddin Ismael Jurjani and Mohammed Kabeerud din mentioned the following clinical picture: it starts with paroxysms of morbid fascination (imagination), darkness before the eyes, tinnitus, pain below the umbilicus, loss of appetite, difficulty in respiration, palpitation, fatigue, weakness in the legs, and a change in color. The eyes become watery, and as the attack approaches, suffocation and palpitation start. Uncontrolled movements occur in the mouth, lips, and face; the teeth start making noise; the voice gets choked. Breathing becomes feeble. The patient feels as if something is going up from her pubic symphysis. She does not talk but understands whatever is said to her; then she becomes unconscious, and there is loss of sensation (28,29). Abu Bakar bin Mohammed Zakaria Razi described the following clinical picture of hysteria: respiration ceases; the pulse becomes weak; the teeth start making noise; uncontrolled movement overtakes half of the body (27). Maseehul Mulk Hafiz Hakeem Ajmal Khan mentioned the following signs and symptoms. The disease occurs with fits.
Fits can last a few minutes, a few hours, or, in some cases, two to four days, and they mostly occur during menstrual periods. First, the patient feels pain in the hip, watering of the eyes, and headache; she becomes weak and lethargic, and darkness appears in front of the eyes. After some time, a ball (gola) seems to arise from the patient's stomach toward the throat and obstruct it; the patient tries to swallow it, and asphyxia occurs. Rigidity occurs in the throat, belching occurs, the frequency of micturition increases, the heart beat increases, and the patient starts shouting and crying or laughing loudly, then faints and falls to the ground. Spasms occur in her limbs, respiration increases, and the limbs become cold. Sometimes the patient pulls her hair and sometimes tears her clothes. She strikes her hand on the wall and moves her fingers toward the throat again and again, which indicates a sign of obstruction. When the attack starts to subside, the patient gasps, shivers and startles, and sometimes lies calmly. At last she smiles, the fit ends, and urine is passed in larger quantity (21). Hakim Mohammad Hassan Qarshi mentioned the following clinical features, which commonly begin with seizures and vary according to their severity. Most commonly, the patient feels mild pain on the left side of the pelvis, after which the sense of an air ball arising from the stomach and obstructing the pharynx is felt; this compels the patient to attempt its elimination, for which she swallows repeatedly. This causes asphyxia, resulting in syncope. In mild cases, the syncope is not serious; therefore the patient recovers soon. Although the patient succeeds in recovering, she suffers from unbearable fatigue, headache, nuchal rigidity, flatulence, impaired digestion, palpitation, etc. Urinary incontinence and depressive moods are also common. If the condition is severe, the patient gives shrill cries or laughs madly, and once the sensation of the air ball reaching the pharynx is felt, she soon falls to the ground. The patient beats her chest and bends her head backward while extending the neck upward. Sometimes a spasmodic condition of the limbs is also reported, and the patient tries to move her body forward or backward or even tends to fold herself. Stiffness accompanies this: in some cases the whole body is stiff, or stiffening of only one limb is seen. Up and down movements are also evident. The patient beats her upper limbs here and there and blinks the eyes; there is ballooning of the nostrils and compression of the lower jaw without any distortion of the face. A few patients also pull their hair, tear their clothes, bang on the wall, and try to chase the surrounding individuals (38). The patient takes long and deep breaths while rubbing her neck frequently; there is coldness of both limbs. The duration of seizures ranges from a few minutes to a few hours, or even two to three days. The recurrence of the seizures depends upon the intensity and severity of the disease. Once the seizures subside, the patient becomes breathless, trembles on being touched, and becomes anxious. The patient is calm and quiet and laughs wholeheartedly or cries loudly. The patient sleeps after vomiting. Many times, a few patients show feigned symptoms in between the seizures, and the senses are dulled.
The patient is often stubborn and irritable and gives a long account of her clinical features that exceeds reality. She assumes them to be very critical and pretends to have seizures and pain at different sites in the body. She mostly complains of hemiplegia, although this presentation is rare. The patient complains of incomplete symptoms, such as an inability to walk, although she is capable of standing without any support and of walking. Moreover, there is no involuntary passage of urine or stool and no facial or lingual paralysis; she moves only by crawling. In such individuals, the left side of the body is paralyzed. The subject is hypersensitive and complains of hoarseness of voice; nausea, hiccups, flatulence, palpitations, and spasmodic cough are common (30).
Hypertension
In one study investigating the relationship between certain diseases and the temperament of individuals, 15 patients were shown to have a dominant/subdominant sanguineous temperament (39). In another open pilot study, conducted by second-year medical students at the Nelson Mandela School of Medicine at the University of Natal, a positive association was found between temperament and Type II diabetes mellitus. In this study involving 77 confirmed diabetics, 89% of the patients had a dominant or subdominant sanguineous temperament. Of course, the questionnaire was not fully standardized, and the limited sample size is a limitation of this study (40).
Sinusitis
In the Unani system of medicine, sinusitis can be termed iltehab tajaweefe anaf and can be haad (acute) or muzmin (chronic) according to the duration of illness, and haar (hot) or baarid (cold) according to the predominant humor (28). It is a major health care problem that affects a large population, mainly in the lower age groups. Given the often unconvincing treatment outcomes and troublesome side effects of contemporary medicine, the Unani system of medicine offers an effective treatment for this disease based on its holistic approach. In this system of medicine, various regimes of regimental therapy are used for primary and secondary prevention of sinusitis, along with evaluation of the mizaj (temperament) of the person. Mizaj is one of the fundamental concepts of Unani medicine and serves as a primary tool for diagnostic and therapeutic purposes. Determination of mizaj is an important tool for prophylaxis of sinusitis. Screening is one of the procedures by which disease can be diagnosed at an early stage so that steps can be taken in time to reduce morbidity and mortality; blood pressure measurement for hypertension and the Pap smear for cancer of the cervix are two examples of how screening helps to combat the fatal outcomes of disease. In view of the above, the present study, entitled "Demographic Study of Sinusitis in Patients Visiting Govt. Unani Hospital Srinagar and AYUSH Centres in Kashmir," was designed. The objective of the study was to evaluate the prevalence rate of sinusitis and to determine the distribution of the disease with respect to the mizaj of the patients attending the Govt. Unani Hospital/AYUSH Centres in Kashmir, irrespective of the treatment the patient was seeking (41).
Aging
According to the classical Unani literature, organismal ageing is the result of two opposing processes: 1) tahleel-e-rutoobat by hararat-e-ghareezia to maintain the organism in a functional state; and 2) inadequate compensation of tahleel by quwwat-e-hazima, which maintains balance or homeostasis. This imbalance causes a decrease in rutoobat-e-ghareezia and hararat-e-ghareezia and thus changes the mizaj to barid yabis (cold and dry). The gradual increase in baroodat with age weakens the quwa and slows the af'al of the body, because the quwa require hararat for af'al. No quwwat is immortal; each decreases gradually with age. Tabiyat, the supreme power of the body, is also a quwwat, which weakens with advancing age and alters all functions of the body. At the molecular level, it is accepted that the aging process is the result of entropy-driven decay, mutations and oxidative stress, resulting in generalized impairment of functions, loss of the adaptive response to stress, immunosenescence, and an increased risk of age-related problems such as decreased power, vision, memory, and locomotion or immobility, exertional dyspnea, depression, gastrointestinal distress, changes in the skin, heart disease, cardiovascular diseases, respiratory diseases, and musculoskeletal diseases (42).
Diabetes
According to Unani medicine, ziabetus shakri is a disease in which consumed water is passed out through the kidneys immediately after intake. It is similar to zalqul medawal ama (irritable bowel syndrome), in which food passes rapidly through the stomach and intestine without proper digestion (43). In this disease, the patient feels excessive thirst, drinks plenty of water, and passes all the water consumed without any metabolic change (44). The term "diabetes" was first coined by Aretaeus of Cappadocia (81-133 A.D.). Later, the word mellitus (honey sweet) was added by Thomas Willis (Britain) in 1675 after he rediscovered the sweetness of the urine and blood of patients (43). The Unani philosophy of disease causation is based on mizaji (temperamental) and saakhti (structural) deviation. Any imbalance of mizaj and saakht (structure) results in disease. In this disease, the mizaj (temperament) of the kidneys becomes haar (hot), so they absorb water from the blood circulation and send it to the urinary bladder immediately, owing to weakness of the quwat-e-masika (retentive power). It has also been described that the kidneys attract the watery substance of blood, whereas the urinary bladder does not attract anything. Thus, the kidneys attract water from the circulation, liver, stomach, and intestines; because of this, the patients feel immoderate thirst (polydipsia) (43).
Diarrhea
Galen (131-201 A.D.) defined diabetes as "diarrhea urinosa" (diarrhea of urine) and "dipsakos" (the thirsty disease). He described it as a disease specific to the kidneys, caused by weakness of their retentive ability, and as he came across only two cases of diabetes, he termed it a rare disease. He believed that the urine of diabetic patients was the unchanged drink itself, which may have accounted for its different aromas (45). The Chinese and Japanese literature described a disease with sweet urine that attracted dogs and insects; such patients were more prone to develop boils and tuberculosis (38). During the fifth and sixth centuries, the sweet taste of urine in polyuric patients was also described in the Sanskrit (Indian) literature by Susruta, Charaka, and Vaghbata, and the disease was named "madhumeha." They described that the urine of these patients tasted like honey (madhu), was sticky to the touch, and strongly attracted ants (46).
Conclusions
The results of this study indicate a relationship between dystemperament and the incidence of some diseases, such as muscle diseases, skin diseases, asthma, palpitation, bipolar disorder, hemodialysis, hysteria, hypertension, sinusitis, aging, diabetes, and diarrhea. These results may help both patients and physicians move the humors toward equilibrium to avoid disease. Further studies are required to discover the relationship between dystemperament and other diseases. | 2018-04-03T01:44:06.886Z | 2016-11-01T00:00:00.000 | {
"year": 2016,
"sha1": "41a4f24575a568d50e42428b61d63e38da8c60d6",
"oa_license": "CCBYNCND",
"oa_url": "http://www.ephysician.ir/2016/3240.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "41a4f24575a568d50e42428b61d63e38da8c60d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207949299 | pes2o/s2orc | v3-fos-license | Comparison of cytotoxicities and anti-allergic effects of topical ocular dual-action anti-allergic agents
Background: To investigate the cytotoxicities of the topical ocular dual-action anti-allergic agents (alcaftadine 0.25%, bepotastine besilate 1.5%, and olopatadine HCL 0.1%) on human corneal epithelial cells (HCECs) and their anti-allergic effects on cultured conjunctival epithelial cells. Methods: A methylthiazolyltetrazolium (MTT)-based colorimetric assay was used to assess cytotoxicities using HCECs at concentrations of 10, 20 or 30% for exposure durations of 30 min, 1 h, 2 h, 12 h or 24 h. Cellular morphologies were evaluated by inverted phase-contrast and electron microscopy. Wound widths were measured 2 h, 18 h, or 24 h after confluent HCEC monolayers were scratched. Real-time PCR was used to quantify anti-allergic effects on cultured human conjunctival cells, in which allergic reactions were induced by treatment with Aspergillus antigen. Results: Cell viabilities decreased in time- and concentration-dependent manners. Cells detached from the dishes and showed microvilli loss, cytoplasmic vacuoles, and nuclear condensation when exposed to the anti-allergic agents; alcaftadine was found to be the least cytotoxic. Alcaftadine-treated HCEC monolayers showed the best wound healing, followed by bepotastine and olopatadine (p < 0.0001). All agents significantly reduced the gene expressions of allergic cytokines (IL-5, IL-25, eotaxin, thymus and activation-regulated chemokine, and thymic stromal lymphopoietin), and alcaftadine had the greatest effect (p < 0.0001 in all cases). Conclusions: Alcaftadine appears to have fewer side effects and better therapeutic effects than the other two anti-allergic agents tested. It may be more beneficial to use less toxic agents for patients with ocular surface risk factors or presumed symptoms of toxicity.
Background
The prevalence rates of allergic diseases have been increasing due to hereditary factors, environmental pollution, increased allergen levels, and changes in life patterns, including diet [1,2]. Approximately 6-30% of individuals suffer from allergic conjunctivitis (AC), and 30-70% of them have accompanying allergic diseases [3,4]. Even though AC is not a life-threatening disease, its chronic, recurrent tendency considerably affects patients' quality of life [5].
The fundamental treatment for AC, as for other allergic diseases, is to avoid the allergens that cause hypersensitivity reactions. However, it is difficult not only to identify the causative allergens accurately but also to avoid a known allergen completely if it is easily encountered in daily life. For these reasons, pharmacotherapy has been used to provide symptom relief and treatment in AC.
The clinical manifestations of AC, such as itching, hyperemia, chemosis, and eyelid swelling, are the result of mast cell degranulation and the release of inflammatory chemical mediators (especially histamine), which are initiated by crosslinking between permeated allergen and sensitized IgE on the mast cell surface [6]. Therefore, these pathologic immune reactions have been considered the main targets for pharmacotherapy, and they can be controlled by antihistamine agents and mast cell stabilizers. Olopatadine was the first approved dual-action topical agent; two other dual-action agents have since been developed, and dual-action agents have become the general trend in treating AC. Although dual-action agents reduce dosage and dosing frequency owing to their rapid onset and long-lasting therapeutic effect, long periods of use can damage ocular surface cells [7-9]. An impaired epithelial barrier may allow allergens to infiltrate easily and exacerbate the disease. Therefore, reliable safety as well as therapeutic efficacy is required of anti-allergic agents.
We chose three topical ocular dual-action anti-allergic agents for this study; alcaftadine 0.25% (Lastacaft®, Allergan, Inc., Irvine, CA, USA) and bepotastine besilate 1.5% (Talion®, Dong-A ST, Seoul, Korea), which were introduced recently, and olopatadine HCL 0.1% (Pataday®, Alcon, Fribourg, Switzerland), a traditionally and widely used agent. The aim of this study was to investigate the cytotoxicities of these agents on cultured human corneal epithelial cells and their anti-allergic effects on cultured human conjunctival epithelial cells in vitro.
Cell lines
This study was performed according to the tenets of the Declaration of Helsinki. The SV-40-transfected human corneal epithelial cell line (HCE-T) was obtained from the American Type Culture Collection (ATCC-CRL-11515; Manassas, VA, USA), and was grown to 80% confluency in keratinocyte serum-free medium (KSFM) containing 0.05 mg/ml bovine pituitary extract and 5 ng/mL epidermal growth factor in collagen-coated plates. Before treatment, the cells underwent epidermal growth factor starvation overnight, as previously described [10].
Morphologic assay
HCECs were exposed to the three anti-allergic agents at a concentration of 10% for 24 h and photographed under an inverted phase-contrast light microscope. For transmission electron microscopy, cells that had been grown to confluence in 24-well plates were incubated in DMEM containing 10% concentrations of the three anti-allergic agents or phosphate buffer (control) for 4 or 8 h under 5% CO2 at 37°C. After rinsing with PBS, cells were incubated at 37°C for 24 h, fixed with 2.5% glutaraldehyde in 0.1 mol/L phosphate buffer (pH 7.4) for 12 h, and postfixed with 0.1% osmium tetroxide for 2 h. After rinsing with 0.1 mol/L phosphate buffer and dehydrating in a graded ethanol series, specimens were embedded in an Epon 812 mixture. Ultrathin sections (60~80 nm) were then stained with uranyl acetate and lead citrate and examined under a transmission electron microscope (JEOL 1200EX; JEOL Ltd., Tokyo, Japan).
Scratch wound healing assay
A scratch-wound assay was used to compare the effects of alcaftadine, bepotastine and olopatadine on corneal epithelial wound healing. HCECs were cultured to confluent monolayers on eight-well chamber slides coated with collagen I (10 mg/cm2; Auspep, Parkville, VIC, Australia) and then scratched with a 100 μl pipette tip. Cells were then washed with fresh medium to remove detached cells and incubated in medium in the presence of 10% concentrations of the three anti-allergic agents for 2, 18, or 24 h. To ensure that wounds in similar areas were compared, multiple positioning marks were made at the center of the denuded surfaces with a needle, and mean distances between wound edges were measured. Twenty-four hours after wounding, monolayers were fixed, and wound areas in the marked fields of view were imaged. Mean distances between the original and migrated wound edges of three separate samples per treatment were determined using an image analysis system (ImageJ 1.33o; available by ftp at zippy.nimh.nih.gov/ or at http://rsb.info.nih.gov/nih-imageJ; developed by Wayne Rasband, National Institutes of Health, Bethesda, MD, USA), and percentage wound closures in response to the three anti-allergic agents were compared. The experiment was repeated 5 times.
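Once the edge-to-edge distances have been exported from ImageJ, the percentage wound closure compared in this assay reduces to a simple calculation. The sketch below illustrates only that final step; the agent names are taken from the study, but the width values, the unit, and the function name are illustrative assumptions rather than the authors' actual measurements or pipeline.

```python
# Sketch of the percentage wound closure calculation described above.
# Assumes wound widths (in micrometres) were measured in ImageJ at 0 h and
# 24 h for each marked field; all values here are illustrative only.

def percent_closure(width_0h: float, width_24h: float) -> float:
    """Percentage of the original wound width that has closed after 24 h."""
    if width_0h <= 0:
        raise ValueError("initial wound width must be positive")
    return (width_0h - width_24h) / width_0h * 100.0

# Hypothetical mean widths (µm) at wounding and 24 h later, one pair per group.
measurements = {
    "control":     (500.0, 120.0),
    "alcaftadine": (500.0, 180.0),
    "bepotastine": (500.0, 320.0),
    "olopatadine": (500.0, 470.0),
}

for agent, (w0, w24) in measurements.items():
    print(f"{agent:12s} closure: {percent_closure(w0, w24):5.1f}%")
```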
Analysis of electrolyte compositions, pH values, and osmolarities of the eye solutions
The electrolyte compositions of the three anti-allergic agents were assessed using an LX-20 (Beckman Coulter, Fullerton, CA, USA). pH and osmolarity were measured using a Metrohm 780 (Metrohm, Zofingen, Switzerland) and a Micro-Sample Osmometer (Fiske Associates, Norwood, MA, USA), respectively.
Conjunctival provocation test (CPT)
Cultured conjunctival epithelial cells were seeded with or without 10% concentrations of the anti-allergic agents and incubated for 2 h at 37°C under 5% CO2. Conjunctival cells were subsequently treated with or without 1 mg/ml Aspergillus fumigatus allergen extract (Jubilant Hollister-Stier, Kirkland, Quebec, Canada) for 1 h. Cells were then collected, lysed, and treated with Tri-RNA reagent (Favorgen, Taiwan) to extract mRNA, according to the manufacturer's instructions. Total extracted RNA was used to generate cDNA using oligo-dT, dNTPs, RNasin® ribonuclease inhibitor, and M-MLV reverse transcriptase (Promega, Madison, Wisconsin, USA). To quantify cytokine gene expression, cDNA samples were amplified in AMPOGENE® qPCR Green Mix Lo-ROX (Enzo Life Sciences, Farmingdale, NY, USA). The primer pairs used for RT-PCR are shown in Table 1. The experiment was repeated 5 times.
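The article reports relative cytokine gene expression from these qPCR runs but does not spell out its normalization scheme. A common choice for data of this kind is the 2^(-ΔΔCt) method, and the sketch below shows only that generic calculation; the reference gene (GAPDH), the calibrator choice, and all Ct values are assumptions made for illustration, not the study's actual analysis.

```python
# Illustrative 2^(-ΔΔCt) relative-expression calculation for qPCR data.
# Assumes a housekeeping reference gene (e.g. GAPDH) and an untreated,
# allergen-provoked calibrator sample. All Ct values are fabricated.

def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene relative to the calibrator sample."""
    delta_ct_sample = ct_target - ct_ref              # normalize to reference gene
    delta_ct_calibrator = ct_target_cal - ct_ref_cal
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: allergen-provoked cells pretreated with an agent
# versus allergen-provoked, untreated cells (calibrator).
il5_treated, gapdh_treated = 27.8, 18.1
il5_calibrator, gapdh_calibrator = 24.9, 18.0

print(f"IL-5 fold change vs. calibrator: "
      f"{fold_change(il5_treated, gapdh_treated, il5_calibrator, gapdh_calibrator):.2f}")
```

A fold change below 1 in this toy example corresponds to the kind of reduced cytokine expression reported for the pretreated cells.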
Statistical analysis
Statistical significance was determined by ANOVA followed by Tukey's post hoc analysis (Prism; GraphPad Software, La Jolla, CA, USA). Statistical significance was accepted for p values < 0.05.
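An equivalent one-way ANOVA with Tukey's post hoc comparisons can be run with standard scientific Python libraries. Since the original analysis was performed in GraphPad Prism, the snippet below is only an analogous sketch, and the group names and viability values in it are fabricated for demonstration.

```python
# Sketch of a one-way ANOVA with Tukey's post hoc test, analogous to the
# GraphPad Prism analysis described above. Viability values are fabricated.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

viability = {
    "control":     [100.0, 98.5, 101.2, 99.4, 100.8],
    "alcaftadine": [88.1, 85.7, 90.3, 87.2, 86.9],
    "bepotastine": [72.4, 70.1, 69.8, 74.0, 71.5],
    "olopatadine": [68.9, 66.2, 70.4, 65.8, 67.7],
}

# Omnibus test: does at least one group mean differ?
f_stat, p_value = f_oneway(*viability.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Pairwise comparisons with family-wise error control at alpha = 0.05.
values = np.concatenate(list(viability.values()))
groups = np.repeat(list(viability.keys()), [len(v) for v in viability.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```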
Results
HCEC viabilities after exposure to the three anti-allergic agents at different dilutions and exposure times are shown in Fig. 1. At a concentration of 30%, the decline in viability relative to the control was smallest for alcaftadine, and alcaftadine showed significantly higher viability than bepotastine or olopatadine at exposure times up to 2 h. At a concentration of 20%, viabilities with bepotastine and olopatadine were significantly lower than the control after 30 min, whereas alcaftadine had no significant effect at exposure times up to 2 h. At exposure times of ≥12 h, comparisons between agents were meaningless because viabilities were extremely low.
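The viability percentages compared above are conventionally obtained by normalizing background-corrected MTT absorbance in treated wells to the untreated control. The article does not state its exact formula, so the sketch below shows only that standard calculation; the absorbance readings, blank value, and function name are invented for illustration.

```python
# Standard MTT viability calculation: mean background-corrected absorbance of
# treated wells expressed as a percentage of the untreated control.
# The article does not give its exact formula; values here are invented.
def percent_viability(treated_abs, control_abs, blank_abs=0.05):
    """Mean viability of treated wells relative to control, in percent."""
    treated = sum(a - blank_abs for a in treated_abs) / len(treated_abs)
    control = sum(a - blank_abs for a in control_abs) / len(control_abs)
    return treated / control * 100.0

control = [0.82, 0.85, 0.80, 0.84]            # hypothetical OD570 readings
treated_30pct = [0.61, 0.58, 0.63, 0.60]      # hypothetical treated wells
print(f"viability: {percent_viability(treated_30pct, control):.1f}%")
```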
Phase-contrast microscopy revealed that many epithelial cells were densely arrayed in the control culture media (Fig. 2a), whereas in the presence of any of the three agents HCECs progressively detached from the dishes (Fig. 2b-d), although alcaftadine-exposed cells were less detached and more densely arrayed than cells exposed to bepotastine or olopatadine (Fig. 2b).
Electron microscopy showed that HCECs exposed to alcaftadine, bepotastine, or olopatadine demonstrated more cytoplasmic bleb formation and loss of microvilli (Fig. 3b-d) than control cells (Fig. 3a). Whereas cells exposed to alcaftadine showed minimal changes, cells exposed to bepotastine or olopatadine showed more and larger cytoplasmic vacuoles and nuclear chromatin condensation along nuclear peripheries (Fig. 3c, d).
The measured values of electrolytes, pH, osmolarity, and preservative are shown in Table 2. The concentrations of Na+ and Cl− in bepotastine were lower, and those in olopatadine higher, than the ideal ranges, whereas the concentration of K+ in all three agents was lower than the ideal range; bepotastine had the lowest value. Alcaftadine had the highest Cl− level and was more acidic than the other agents. The osmolarities of all agents were within normal limits. Benzalkonium chloride (BAC) was the preservative used in all agents, and its concentration in olopatadine was twice as high as in alcaftadine or bepotastine. All three anti-allergic agents significantly reduced the gene expressions of allergic cytokines induced by Aspergillus allergen provocation in conjunctival cells, except for eotaxin induction by olopatadine. Alcaftadine had the greatest effect on the reduction of all cytokine gene expressions examined (Fig. 5).
Discussion
Dual-action anti-allergic agents are widely and commonly used for AC that is not severe enough to require steroids or immunomodulators. The aim of the present study was to compare the cytotoxicities and anti-allergic effects of the commercially available topical dual-action anti-allergic agents alcaftadine 0.25%, bepotastine besilate 1.5%, and olopatadine HCL 0.1%.
Fig. 1 The viabilities of human corneal epithelial cells (HCECs) evaluated by MTT assay. Cell viability was found to be time and concentration dependent and to be significantly reduced after 12 h exposure to all anti-allergic agents. At a concentration of 30%, bepotastine- and olopatadine-treated HCECs were significantly less viable than alcaftadine-treated HCECs at exposure times up to 2 h. Survival rates are provided as means ± SDs.
Fig. 2 Inverted phase-contrast micrographs of human corneal epithelial cells (HCECs) exposed to 10% anti-allergic agents (bar length 50 μm, original magnification × 200). Many epithelial cells were visible in control culture media (a). HCECs were less detached from dishes after treatment with alcaftadine (b) than after treatment with bepotastine (c) or olopatadine (d).
The MTT assay revealed that cell viabilities decreased with exposure time and agent concentration. Bepotastine and olopatadine induced significantly lower viabilities than the control even after 30 min at concentrations of 20 and 30%, whereas alcaftadine induced significantly lower viabilities after 2 h at a concentration of 20% and after 1 h at a concentration of 30%. In addition, cell viability in the presence of alcaftadine was significantly higher than in the presence of bepotastine or olopatadine at a concentration of 30%. However, after treatment for more than 12 h, including 24, 48 and 72 h (data not shown for 48 and 72 h), all agents proved toxic to HCECs, which agrees with other studies [15,16].
A comparative study of olopatadine and alcaftadine on murine conjunctival epithelial cells concluded that alcaftadine had a protective effect on epithelial tight junction protein expression [17]. This property could explain why, in our study, alcaftadine-treated cells detached to a lesser extent than bepotastine- or olopatadine-treated cells. Cytoplasmic blebbing, chromatin clumping and margination, and loss of microvilli are evidence of cellular damage caused by chemical, mechanical or hypoxic injury [18]. Alcaftadine exposure resulted in less severe cellular changes than bepotastine or olopatadine exposure.
Abnormal electrolyte composition, pH, and osmolarity can damage cellular functions [19]. In the present study, most measured electrolyte values were beyond the ideal ranges. Abnormal electrolyte composition not only augments agent toxicity, due to changes in cell membrane permeability, but can also cause other types of cell damage. On the other hand, the osmolarity and pH of the agents probably did not affect toxicity because they were similar and within normal ranges. Preservatives are necessary to prevent ocular infection by prohibiting microorganism proliferation, but they can damage the ocular surface [20-22]. All three agents examined contained BAC, the most commonly used ophthalmic preservative. BAC causes surface-active molecules to bind to the cellular epithelium and rapidly intercalate into the bilaminar membrane; thus, BAC can disrupt the precorneal tear film and damage the ocular surface [16, 23-25]. BAC concentrations in topical ocular solutions typically range between 0.004 and 0.025% [20]. Recently, it was reported that even at the lowest concentration tested, 0.001%, BAC caused significant loss of cellular metabolic activity at exposure times as short as 1 min [26]. Although we found that all three agents had BAC levels in the recommended concentration range, olopatadine had twice as much BAC as alcaftadine or bepotastine. This higher level of BAC could partly explain the lower viability of olopatadine-treated HCECs. These results are consistent with those of previous studies, in which cell viabilities were found to be affected more by anti-allergic drugs containing BAC [16,21,23]. Similarly, BAC-containing drugs used to treat other chronic ocular pathologies, like glaucoma, have been reported to be more cytotoxic than preservative-free preparations [24,25].
Fig. 3 Transmission electron micrographs of human corneal epithelial cells (HCECs) (bar length 2 μm, original magnification × 3000-4000). HCECs were exposed to culture media (a) and 10% diluted solutions of the anti-allergic agents alcaftadine (b), bepotastine (c), and olopatadine (d). Normal corneal epithelial cells (a) showed microvilli, homogeneous cytoplasm, and intact cell and nuclear membranes. Cells exposed to the anti-allergic agents (b, c and d) exhibited damage to plasma membranes, loss of microvilli (black arrowheads), increased and enlarged vacuoles (white arrows), and nuclear chromatin condensation (white arrowheads). Olopatadine-treated cells showed more and larger vacuoles and condensed nuclear remnants along the nuclear peripheries.
Damage, abrasions, or wounding of epithelial cells can be caused by various insults, including ophthalmic agents, which can impair healing [27]. Healing involves a series of events that includes the proliferation and migration of cells to seal wounds [28,29]. In the present study, alcaftadine-treated cell layers showed the best wound healing, followed by bepotastine-treated cell layers. Olopatadine-treated cell layers showed the least wound healing; in fact, the wound gaps hardly changed. Interestingly, olopatadine has been reported to inhibit monocyte migration by binding to S100A12 protein, which is involved in inflammation [30]. It is generally considered that a well-maintained healing capacity helps to minimize the harmful effects resulting from a damaged ocular surface barrier.
Fig. 4 The closure of human corneal epithelial cell (HCEC) wounds in response to 10% anti-allergic agents (bar length 200 μm, original magnification × 4). Migration was assessed 2, 18 or 24 h after scratching confluent HCECs in the presence or absence of anti-allergic agents. Micrographs show wound widths immediately after and 24 h after wounding in the absence of any agent (a, e) or in the presence of 10% alcaftadine (b, f), bepotastine (c, g) or olopatadine (d, h). The effects of anti-allergic agents are expressed as percentage reductions in average wound widths. Results are expressed as means ± SDs of percentage wound widths (defined as average widths at 10 positions). Wound widths were significantly narrower for cells exposed to alcaftadine than for cells exposed to bepotastine or olopatadine.
To the best of our knowledge, this is the first study to use a CPT based on an Aspergillus allergen stimulus to investigate the anti-allergic effect of these agents on cultured conjunctival epithelial cells in vitro. We measured the gene expression of five cytokines related to the allergic reaction cascade, that is, interleukin (IL)-5 for eosinophil activation, IL-25 for Th2 response maintenance, eotaxin for eosinophil recruitment, thymus and activation-regulated chemokine (TARC) for Th2 cell migration, and thymic stromal lymphopoietin (TSLP) for dendritic cell differentiation to prime Th2 cells [31-35]. The results obtained showed that all three agents reduced gene expressions. In particular, alcaftadine was superior in attenuating the gene expressions of all five cytokines, the levels of which were similar to those in untreated cells. Furthermore, alcaftadine has a 10 times stronger effect on H1 and H2 receptors than olopatadine and an affinity for the H4 receptor, which olopatadine does not possess [36-38]. Because the Th2 cell-driven allergic response is caused by H1 and H4 receptor activation, these results are reasonable and consistent with those of previous studies [39-41]. The lower cytokine gene expression observed indicates that alcaftadine, bepotastine, and olopatadine have strong anti-allergic effects. However, the CPT has a limitation in that it involves the exposure of cultured conjunctival cells to the agents before they are sensitized, which differs from real-life situations, because anti-allergic agents are usually given to already sensitized patients in the clinic.
The most obvious limitation of the present study is its in vitro design. However, although in vitro results do not always reflect in vivo effects, we believe that our findings provide a valuable guide with respect to optimal clinical usage. In particular, in patients with decreased tear clearance or insufficient tear volume, such as the elderly and those with nasolacrimal duct obstruction or dry eye syndrome, anti-allergic agents administered to the ocular surface may cause cytotoxic effects [42-45]. Therefore, if these risk factors are anticipated or a symptom suggestive of toxicity is encountered, it might be more beneficial to use a less toxic agent.
Conclusion
In summary, our in vitro results indicate that alcaftadine has fewer side effects and better therapeutic effects than bepotastine or olopatadine. Although these effects may not correspond to the actual response to eye drops in patients, we believe our results provide useful guidance to clinicians and better treatment for patients. Well-designed in vivo studies should follow. | 2019-11-13T01:40:44.214Z | 2019-11-08T00:00:00.000 | {
"year": 2019,
"sha1": "46a965fd644d4463c3b229ef9f193922fb60f2ee",
"oa_license": "CCBY",
"oa_url": "https://bmcophthalmol.biomedcentral.com/track/pdf/10.1186/s12886-019-1228-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "46a965fd644d4463c3b229ef9f193922fb60f2ee",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241451651 | pes2o/s2orc | v3-fos-license | The risk factors for post-polypectomy bleeding and establishment of a risk-scoring model for small colorectal polyps (<1.5cm) in an ambulatory surgery center: a retrospective analysis
Background: As one of the most common complications of colonoscopy, the risk factors of post-polypectomy bleeding (PPB) have rarely been explored in an ambulatory surgery unit. We aim to develop a risk-scoring model to predict the risk of PPB for small colorectal polyps (<1.5cm) in an ambulatory surgery unit. Methods: The patients with single small colorectal polyps (<1.5cm) who underwent endoscopic polypectomy in the Ambulatory Surgery Center of our hospital between January 2014 and June 2017 were included and retrospectively reviewed. We analyzed patients' clinical characteristics, morphological and pathological characteristics of polyps, polypectomy techniques, and the occurrence of PPB. Risk factors of PPB were identified with a multivariable logistic regression model. In addition, a risk-scoring system was developed and validated eventually. Results: Among the 771 patients enrolled, 26 (3.4%) patients suffered PPB. Male gender, elderly age (≥ 60 years), use of hot biopsy forceps as the polypectomy technique, adenoma on histopathology, comorbid hypertension, use of anticoagulant or antiplatelet agents, and early excessive activities significantly increased the risk of PPB (P<0.05), as indicated by the results of multivariable logistic regression analysis. The area under the ROC curve (AUC) in the model group (0.890) and validation group (0.924) indicated that the risk-scoring model could predict the occurrence of PPB effectively. Conclusions: This risk-scoring method may help to predict the risk of PPB for small colorectal polyps, fit well in the Ambulatory Surgery Center, and provide a new approach to help reduce the incidence of hemorrhage after colorectal polypectomy. Trial registration: This study was retrospectively registered and approved by the Ethics Committee of West China Hospital of Sichuan University (IRB number: ChiCTR1800020201).
Background
Colorectal polypectomy is currently considered an effective strategy to reduce the incidence of colorectal cancer. Colonoscopic polypectomy is routinely believed to be a safe procedure; however, post-polypectomy bleeding (PPB), which is one of the most frequent complications after endoscopic operations, may cause serious problems and adverse consequences.
Ambulatory surgery provides high-quality and efficient care for a wide variety of surgical procedures.
During the last decades, ambulatory surgery has grown rapidly and now accounts for the majority of operations performed in endoscopic therapy in China. On the other hand, the safety of day surgery must be emphasized. Thus, the safety of colonoscopic polypectomy in the Day Surgery Unit should be taken into account when discussing cost-effective economy and rapid recovery.
Previous studies mostly involved multiple or large colorectal polyps among inpatients. The patients who underwent polypectomy in the Day Surgery Unit were characterized by smaller polyps (≤1.5 cm), a younger population, and limited complications. They were encouraged to resume appropriate activities and diets after the polypectomy as soon as possible. Consequently, it is essential to balance safety and efficiency by investigating the risk factors of PPB in the day surgery setting. Furthermore, it would be valuable to establish a risk-scoring model for ambulatory surgery to predict the occurrence of PPB, as little research has described this before.
Patients
The records of 2,744 patients who presented with colorectal polyps and underwent an endoscopic colorectal polypectomy in the Day Surgery Unit of West China Hospital, Sichuan University from January 2014 to June 2017 (total 42 months) were reviewed and analyzed. Inclusion criteria were (1) patients with a single colorectal polyp, (2) a polyp size ≤ 15mm, and (3) aged between 14 and 80 years old. In addition, (4) all patients were required to have an American Society of Anesthesiologists (ASA) score of less than 3. Patients with (i) multiple colorectal polyps, (ii) a laterally spreading tumor (LST), (iii) a history of inflammatory bowel disease (IBD), or (iv) hemorrhagic disease were excluded. We also excluded (v) cases of carcinoma that were pathologically confirmed after polypectomy, as well as (vi) patients with incomplete clinical data. Consequently, 771 patients were ultimately enrolled. We divided the patients into bleeding and non-bleeding groups according to the occurrence of PPB. A total of another 198 patients with colorectal polyps were included in the Day Surgery Unit from July 2017 to December 2017 as a validation cohort. The study flow is shown in Figure 1. Complete medical records of the patient-related characteristics, polyp-related characteristics, and polypectomy techniques, as well as the use of prophylactic clips during the endoscopic procedure, were collected. This study was approved by the Ethics Committee of West China Hospital of Sichuan University (IRB number: ChiCTR1800020201).
Endoscopic Colorectal Polypectomy
Written consent was obtained before the operation. Patients taking anticoagulant or antiplatelet agents, such as aspirin, warfarin or clopidogrel, were required to discontinue them at least 5 days before the operation. Endoscopic colorectal polypectomy was performed with electronic endoscopes (JIF-H260Z; Olympus Optical Co, Ltd, Tokyo, Japan) by experienced endoscopists who had performed at least 500 endoscopic polypectomies. We routinely applied argon plasma coagulation (APC) (ERBE Co, Ltd, Germany) and hot biopsy forceps (HBF) (Stericlin Co, Ltd, Germany) for diminutive polyps (d ≤ 5mm) and small polyps (d ≤ 10mm), while larger sessile polyps (10mm < d ≤ 15mm) were resected by endoscopic mucosal resection (EMR) and pedunculated polyps were removed by snares (SAS-1-S; COOK Co, Ltd, US). Hemostatic clips were applied when bleeding occurred during the operation or to prevent delayed hemorrhage.
Post-polypectomy Bleeding
In our study, PPB was confirmed by the presence of hematochezia, while melena or hemorrhoidal bleeding was excluded. We defined early PPB (EPPB) as hemorrhage within 24 hours of the colorectal polypectomy, and delayed PPB (DPPB) as hemorrhage occurring from 24 hours to 4 weeks after the endoscopic operation. Follow-up telephone calls within 4 weeks were conducted regularly on the 2nd, 7th, 14th, and 28th days after discharge from the hospital.
Patient-related Factors
We collected the patients' demographic characteristics, including gender and age. In addition, the use of antithrombotic agents, history of smoking and alcohol consumption, and postoperative activities and diet were compared between the two groups. Smoking was defined as a continuous or cumulative smoking habit for 6 months or more in one's lifetime. Alcohol consumption referred to drinking 10 grams per day on average. Comorbidities of hypertension, diabetes mellitus, cerebrovascular disease, coronary heart disease, hyperlipidemia, chronic obstructive pulmonary disease (COPD), and rheumatoid diseases were also reviewed. Improper postoperative activities referred to intense exercise or heavy physical activity within 2 weeks after the endoscopic operation. An inappropriate diet was defined as starting oral feeding within 6 hours or having spicy or greasy food within 1 week after the operation.
Polyp-related Factors
The size, location, gross morphology and histopathology of the colorectal polyps were carefully documented. An open biopsy forceps of 6mm was used as the standard to measure polyp size. Polyp locations comprised the ascending colon (cecum included), transverse colon (hepatic flexure and splenic flexure included), descending colon, sigmoid colon, and rectum. The morphology of the polyp was categorized into four types (Yamada I, Yamada II, Yamada III, and Yamada IV) according to the criteria of the Japanese Yamada Classification. Polyps were classified histopathologically as adenomatous (tubular, tubulovillous, and villous) or hyperplastic, inflammatory, and others (hamartomatous, retentional, etc.).
Statistical Analysis
All statistical analyses were performed with SPSS software version 24.0 (SPSS Institute, Chicago, IL).
Categorical variables were compared with the Fisher exact test or the χ2 test. Continuous variables were compared with either the unpaired Student t test or the Mann-Whitney U test. The odds ratios for delayed post-polypectomy hemorrhage were calculated by unconditional logistic regression. In addition, multivariable logistic regression analysis was performed to identify independent variables associated with PPB. The score-based prediction rule was generated from the new logistic regression equations by using a regression coefficient-based scoring method []. The total score for each patient represented the sum of the scores for each independent risk factor. Calibration was evaluated with the Hosmer-Lemeshow (H-L) goodness-of-fit test. To evaluate the predictive performance of the scoring model, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) were adopted. An AUC of 1.0 indicated perfect concordance, while an AUC of 0.5 indicated no relationship. Meanwhile, external validation of the model was performed by measuring the discriminatory ability with the AUC.
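To illustrate the regression coefficient-based scoring method and the AUC evaluation described above, the following Python sketch fits a logistic regression to simulated binary risk factors, converts the coefficients into integer points, and measures the discrimination of the resulting score; the simulated data, the point-assignment rule, and the factor list are assumptions for demonstration only and do not reproduce the study's actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 771
# Hypothetical binary risk factors (e.g., male sex, age >= 60, HBF, adenoma,
# hypertension, anticoagulant/antiplatelet use, early excessive activity)
X = rng.integers(0, 2, size=(n, 7))
true_logit = -4.0 + X @ np.array([0.8, 0.9, 1.1, 0.7, 0.8, 1.2, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)  # simulated PPB outcomes

model = LogisticRegression().fit(X, y)

# Regression coefficient-based scoring: scale each coefficient by the smallest
# absolute coefficient and round to the nearest integer to obtain points per factor.
coefs = model.coef_.ravel()
points = np.round(coefs / np.abs(coefs).min()).astype(int)
risk_score = X @ points  # total score for each patient

print("points per factor:", points)
print("AUC of the integer risk score:", round(roc_auc_score(y, risk_score), 3))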
Baseline Characteristics
During the study period, a total of 771 single-polyp patients underwent colorectal polypectomy, with 771 polyps being completely removed. The baseline characteristics of the patients are shown in Table 1. Overall, PPB occurred in 26 patients (3.4%), while DPPB developed in 23 patients. PPB appeared to take place on day 7 (5, 10) after the polypectomy, and 15 (57.7%) patients received endoscopic hemostatic procedures. Only one patient underwent a blood transfusion, and none required selective arterial embolization or surgery. The results showed that gender, age, early postoperative activity, and hypertension were statistically different between the bleeding and non-bleeding groups (P<0.05).
Risk Factors of PPB
As revealed in the univariate analyses, the gender and age of the patients, complication of hypertension, history of alcohol consumption, and early postoperative activity differed significantly (P<0.05) between the bleeding and non-bleeding groups. In the multivariate logistic regression analysis, male gender, age older than 60 years, histopathology of adenomatous polyps, HBF as the polypectomy technique, complication of hypertension, use of anticoagulant/antiplatelet agents, and early excessive postoperative activities appeared to be independent risk factors associated with PPB, whereas location, size, morphology, application of prophylactic clips, other comorbidities, history of smoking or alcohol, and inappropriate diet were not statistically significant. The risk score showed a significantly increasing trend of risk from the low- to high-risk groups. The predictive accuracy of the risk score for PPB was 0.890 (95%CI, 0.806-0.960) as measured by the AUC (Figure 2). In addition, our prediction model calibrated well with the Hosmer-Lemeshow goodness-of-fit test (χ2 = 1.030, P = 0.794).
External Validation of the Risk Scoring Model
A total of another 198 patients with colorectal polyps were included in the Day Surgery Unit from July 2017 to December 2017 as an external validation cohort. The percentage of PPB risk at each calculated score was determined, and the AUC was 0.924 in the validation dataset (Figure 2).
Discussion
The current study of a large cohort established a novel, simple-to-use risk-scoring system for colorectal PPB, which comprises various aspects of clinical features. Ours is one of the largest cohorts to investigate the frequent but serious adverse event of PPB in the Ambulatory Surgery Center. We have focused on risk stratification based on these significant clinical risk factors and established a predictive model that can be applied to the safety of polypectomy in the Ambulatory Surgery Center.
We found that advanced age and hypertension in patients were independent risk factors for PPB. In recent years, although patients with colorectal polyps have tended to be younger, PPB has continued to occur in elderly patients because of poor vascular compliance [,,]. In addition, the various cardiovascular and cerebrovascular diseases associated with advanced age and hypertension might bring about an impaired blood vessel wall and abnormal coagulation function [].
Based on the findings of previous studies, large size was a significant polyp-related factor that has been unequivocally proven to increase the risk of delayed bleeding. Buddingh KT [3] reported that the risk increased by 13% for every 1mm increase in polyp diameter (OR:1.13, 95%CI 1.05-1.20, P<0.001). However, we did not come to a similar conclusion, as we believe that a limited colorectal polyp size of 15mm would not play such a vital role. The significant associations between colorectal adenomatous polyps and postoperative bleeding in our study, in particular, have been previously corroborated [].
Adenomatous polyps, as reported by Uno Y [], had more blood vessels exposed upon removal of the polyp, which was probably related to PPB.
Moreover, we also demonstrated that HBF carried an obviously higher risk of PPB compared with EMR or snare excision. In addition, the placement of prophylactic clips had no benefit in reducing the incidence of PPB in light of our findings.
According to the risk stratification of endoscopic procedures, endoscopic polypectomy is defined as a high-risk procedure. Moreover, for patients receiving anticoagulation or antiplatelet therapy, the risk of haemorrhage sharply increases. The challenge is to weigh the benefits against the risks of thromboembolism and PPB. We found that the use of anticoagulant or antiplatelet drugs led to a higher risk of bleeding, which was consistent with several reported studies [,,]. In particular, bridge anticoagulation is necessary in patients with high thromboembolic risks who are undergoing polypectomy [,]. The recent BRIDGE trial showed that bridging anticoagulation is associated with a significantly higher risk of hemorrhage []. Regrettably, we did not classify the categories of antithrombotic drugs, such as warfarin, aspirin or clopidogrel. In the future, we should pay close attention to the potential risk of PPB in patients receiving antithrombotic therapy, even though these drugs have been strictly discontinued and restarted. We consider it necessary to delay discharge or extend the follow-up time to ensure the safety of these special patients who undergo colorectal polypectomy in the ambulatory surgery unit.
We also demonstrated that intense exercise or heavy physical activity within 2 weeks after polypectomy was associated with delayed bleeding. In clinical practice, we emphasize the importance of post-polypectomy recovery at home, especially the guidance of discharge instructions within 7 days. We advocate an efficient and safe hospital clinical pathway for day surgery; meanwhile, greater emphasis needs to be placed on standardized discharge criteria and high-quality discharge follow-up based on clinical risk assessment.
Although our study provides a reliable risk-scoring model of PPB, it has several limitations. An inherent potential bias was inevitable as the study is retrospective, despite partial data being collected prospectively. Furthermore, bridge anticoagulation and the time to restart antithrombotic therapy should be considered as significant variables associated with PPB for a more complete risk-scoring system.
Conclusions
In summary, this was the first study to clarify the risk factors of PPB in the specific population of ambulatory surgery. We aimed to promote the development and growth of high-quality ambulatory surgery. To this end, this risk-scoring model has established a new link between ambulatory surgery and endoscopic therapy. The significance of this study is that a predictive model was developed, which could provide more valuable clinical information for making better decisions about revising the access criteria and follow-up after discharge. Furthermore, the high efficiency, remarkable safety and cost reduction of ambulatory surgery have been improving the access of the general population to endoscopic treatment of colorectal polyps. " †" : Mann-Whitney U-test was performed; "*" : The difference was statistically significant. | 2019-10-17T09:08:44.422Z | 2019-10-09T00:00:00.000 | {
"year": 2019,
"sha1": "0f97472a9b80aa3723de27163983ffc5086375dc",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-6456/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "81c68afa5994cc5d9a079100c4b20a0639537fbc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
251829056 | pes2o/s2orc | v3-fos-license | SGLT2 Inhibitor Empagliflozin Modulates Ion Channels in Adult Zebrafish Heart
Empagliflozin, an inhibitor of sodium-glucose co-transporter 2 (iSGLT2), improves cardiovascular outcomes in patients with and without diabetes and possesses an antiarrhythmic activity. However, the mechanisms of these protective effects have not been fully elucidated. This study aimed to explore the impact of empagliflozin on ion channel activity and electrophysiological characteristics in the ventricular myocardium. The main cardiac ionic currents (INa, ICaL, ICaT, IKr, IKs) and action potentials (APs) were studied in zebrafish. Whole-cell currents were measured using the patch clamp method in the isolated ventricular cardiomyocytes. The conventional sharp glass microelectrode technique was applied for the recording of APs from the ventricular myocardium of the excised heart. Empagliflozin pretreatment compared to the control group enhanced potassium IKr step current density in the range of testing potentials from 0 to +30 mV, IKr tail current density in the range of testing potentials from +10 to +70 mV, and IKs current density in the range of testing potentials from −10 to +20 mV. Moreover, in the ventricular myocardium, empagliflozin pretreatment shortened AP duration APD as shown by reduced APD50 and APD90. Empagliflozin had no influence on sodium (INa) and L- and T-type calcium currents (ICaL and ICaT) in zebrafish ventricular cardiomyocytes. Thus, we conclude that empagliflozin increases the rapid and slow components of delayed rectifier K+ current (IKr and IKs). This mechanism could be favorable for cardiac protection.
Introduction
Despite the great advances in medicine, sudden cardiac death remains a leading cause of mortality and is responsible for more than 60% of all deaths from cardiovascular diseases [1]. Sodium-glucose cotransporter 2 inhibitors (isSGLT2) reduce hospitalizations and death from heart failure (HF) [2][3][4]. The underlying mechanisms of their beneficial effects are being intensively studied.
Recent experimental data provided evidence that the isSGLT2 empagliflozin, canagliflozin, and dapagliflozin slowed the progression of heart failure in normoglycemic animals [5][6][7][8], and their effectiveness was comparable with ACE inhibitors [9]. The results of the DAPA-HF and EMPEROR-Reduced randomized clinical trials demonstrated the beneficial effects of isSGLT2 in non-diabetic patients with heart failure [10,11]. Moreover, isSGLT2 exert cardiorenal protection that coincides with antiarrhythmic effects [12][13][14]. For instance, dapagliflozin decreased the incidence of reported episodes of atrial fibrillation and atrial flutter adverse events in high-risk patients with type 2 diabetes mellitus [12]. Moreover, a recent meta-analysis of 34 randomized trials with 63,166 patients demonstrated that isSGLT2 are associated with significantly reduced risks of incident atrial arrhythmias and sudden cardiac death in patients with T2DM [15].
Cardioprotective effects of isSGLT2 in the context of anti-arrhythmias may be attributed to reduced fibrosis and decreased left ventricular hypertrophy [16] as well as to reduced sympathetic activity [17,18]. Recently, several important cellular mechanisms of action of isSGLT2 have been identified, e.g., anti-oxidative and improvement of the cardiac metabolome [19], increased energy production from glucose, ketone bodies, and fatty acid oxidation [20,21], enhanced mitochondrial respiratory capacity [22], anti-inflammation [23], and anti-proteolysis [24]. These mechanisms may contribute to fibroblast activation-related electrical remodeling or the function of different cardiac ion channels.
Therefore, this study aimed to investigate the influence of empagliflozin on the main ionic currents in the cardiomyocytes and the action potential (AP) profile in the ventricular myocardium of an isolated heart.
Results
In our study, we used the zebrafish (Danio rerio), a tropical freshwater teleost. Isolated zebrafish ventricular cardiomyocytes appear rod-shaped and quite narrow compared to those of mammals ( Figure 1A), as shown previously [32,33]. Phalloidin conjugated with Alexa Fluor 488 was used to visualize the sarcomeric organization of actin, which is clearly seen in the cross-striations ( Figure 1B).
To elucidate whether empagliflozin has effects on cardiac electrical activity, we performed experiments to record the main ionic currents in freshly isolated cardiomyocytes from zebrafish. The concentration range of empagliflozin was used based on the plasma levels observed clinically [34]. Cardiomyocyte viability was assessed using the MTT assay, the results of which are presented in Figure 1C. No significant differences in cell viability were determined between the control and empagliflozin-treated groups.
As shown in Figures 2-4, the extended incubation of ventricular cardiomyocytes for 2 h in the presence of empagliflozin at the concentration of 5 µM had no effect on I Na , I CaL , and I CaT . Analysis of the current-voltage characteristics of I Na , I CaL , and I CaT and the parameters of the voltage-dependence of activation and inactivation revealed no significant differences between the control and empagliflozin-treated groups. Table 1 summarizes the biophysical characteristics of examined ionic currents.
However, in contrast to the aforementioned currents, functional analysis of the rapid component of delayed rectifier potassium current I Kr exhibited a significant increase in the amplitude of the step and tail currents. As shown in Figure 5 and Table 1, I Kr step current density in the range of testing potentials from 0 to +30 mV and I Kr tail current density in the range of testing potentials from +10 to +70 mV in cardiomyocytes after empagliflozin pretreatment was significantly enhanced compared to the control group. Analysis of the slow component of delayed rectifier potassium current I Ks revealed a significant increase in the amplitude in the range of testing potentials from −10 to +20 mV in the empagliflozin-treated group (Figure 6). The effect of empagliflozin on the I Kr tail and I Ks current density had a concentration-dependent manner in the range from 0.2 to 5 µM with a half maximal effective concentration (EC 50 ) of 0.56 µM and 0.76 µM, respectively.
Figure 5. Concentration-response curve for the empagliflozin effect on the I Kr tail current density at +10 mV. The solid line shows a least-squares fit to the Hill function.
Figure 6. Concentration-response curve for the empagliflozin effect on the I Ks current density at +20 mV. The solid line shows a least-squares fit to the Hill function.
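The EC50 values quoted above come from least-squares fits of a Hill function to the concentration-response data. A minimal SciPy sketch of such a fit is given below; the concentration points and response values are hypothetical and serve only to illustrate the procedure, not to reproduce the reported fits.

import numpy as np
from scipy.optimize import curve_fit

def hill(c, emax, ec50, n_h):
    # Hill equation: effect as a function of concentration c
    return emax * c**n_h / (ec50**n_h + c**n_h)

# Hypothetical empagliflozin concentrations (µM) and normalized current-density increases
conc = np.array([0.2, 0.5, 1.0, 2.0, 5.0])
effect = np.array([0.18, 0.45, 0.68, 0.85, 0.97])

popt, _ = curve_fit(hill, conc, effect, p0=[1.0, 0.5, 1.0], maxfev=10000)
emax, ec50, n_h = popt
print(f"Emax = {emax:.2f}, EC50 = {ec50:.2f} µM, Hill coefficient = {n_h:.2f}")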
Since IKr is one of the major repolarizing currents in zebrafish hearts and an increase in the IKr current density can result in a change in AP duration, our next step was to explore the AP profile in the ventricular myocardium of an isolated heart. We found, as expected, that the perfusion of the excised heart for 2 h with 5 μM empagliflozin-containing buffer solution significantly reduced APD50 and APD90, but did not affect other AP parameters (Figure 7 and Table 2).
Discussion
Recently, SGLT2 inhibitors have come into the focus of research due to the cardioprotective effects demonstrated by the EMPA-REG OUTCOME [2]. Dramatic cardiac benefits include the reduced rate of death from cardiovascular causes and the reduction in heart failure hospitalization in people with and without diabetes [10,35]. Despite a growing number of studies investigating the cardiovascular effects of empagliflozin, the underlying mechanisms are still not fully understood and research results are sometimes contradictory [29,36].
The present study was designed to determine the potential effects of empagliflozin on the main ionic currents responsible for AP generation in the model organism, the zebrafish. Zebrafish are widely used as an adequate vertebrate model of human cardiac function due to the presence of a similar set of ionic currents responsible for the AP in cardiomyocytes. For example, while the use of small mammals such as mice and rats can be limited due to the low expression of delayed rectifier potassium current [37], an advantage of using zebrafish is the presence of both rapid and slow components of this current [38]. Zebrafish have similarities with humans in resting membrane potential, AP amplitude, and shape, in particular in the presence of a clear plateau phase [39]. A further similarity with mammals is the involvement of I Na in the AP upstroke and I CaL in the plateau phase. However, a fast phase-1 repolarization is not present in the zebrafish AP because of the absence of the transient outward current I To .
A number of studies have already shown restorative effects of empagliflozin on the late Na + current elevated by various insults [25,26], and on the disturbed L-type Ca 2+ and Na + /Ca 2+ exchanger (NCX) currents in the ventricular myocytes of diabetic rats [27]. There are controversial data regarding the inhibition of Na + /H + exchanger activity by empagliflozin [29][30][31]. In this study, we demonstrate for the first time that empagliflozin affects the rapid and slow components of delayed rectifier potassium current, I Kr and I Ks . A significant increase in the amplitude of the I Kr step and tail currents and of the I Ks current was revealed in empagliflozin-treated cardiomyocytes. However, we failed to detect any effects on I Na , I CaL , and I CaT .
In the human heart, the I Kr current is conducted by the hERG channel, also known as K V 11.1 or KCNH2 [40]. I Kr is essential for proper electrical activity in the heart. I Kr is crucially important for determining the cardiac AP duration due to effective control of repolarization [41]. Reduction in I Kr produced by either direct channel block or inhibition of trafficking results in prolonged AP duration that is linked to an increased risk for Torsade de Pointes and, as a consequence, sudden cardiac death [42]. The pore-forming subunit of the I Ks channel, known as KvLQT1 or Kv7.1, is encoded by the KCNQ1 gene [40]. Reduction in current densities due to loss-of-function KCNQ1 mutations or a reduction in repolarization reserve during β-adrenergic stimulation is thought to underlie long QT syndrome phenotypes with increasing susceptibility to arrhythmia [43]. Thus, drugs that increase hERG or KvLQT1 activity might have potential antiarrhythmic effects.
Our experiments in the recording of APs from ventricular myocardium have shown a significant reduction in APD50 and APD90 in empagliflozin-treated zebrafish hearts. These data, as expected, show good agreement with those of I K outward current enhancement.
It has been reported that the expression of the hERG channel is significantly downregulated in diabetic hearts due to high-glucose-induced inhibition of channel trafficking, and this downregulation is a critical contributor to the slowing of repolarization [44]. Studies performed using various animal models have reported a decrease in I Kr and I Ks current along with a prolonged QT interval in diabetic dog and rabbit hearts [45]. It is also well established that a reduction in the expression of K V channels in hypertrophied and failing myocardium can result in AP prolongation, which is known to be a pro-arrhythmogenic substrate [46]. Thus, the obtained results allow speculation that the gain of function effect of empagliflozin on I K outward current might be considered as a mechanism of cardioprotective action of the drug. However, when trying to extrapolate the results of our research to humans, the temperature sensitivity of delayed rectifier potassium current should be taken into account [47,48]. The recordings of ionic currents were performed at +28 • C, within the range of physiological temperatures for zebrafish. Moreover, it is noteworthy that I Kr is produced predominantly by a channel encoded by the zebrafish ortholog to the mammalian KCNH6 gene [49]. These facts should determine objectives of the further study on the effects of empagliflozin.
It should be noted that in some previous studies SGLT2 was not detected in cardiomyocytes and the heart [50][51][52]. On the other hand, Kwong-Man Ng and coworkers reported SGLT2 expression in hiPSC-derived cardiomyocytes and human heart tissue, and they showed that high-glucose culture significantly increased SGLT1 and SGLT2 expression in cardiomyocytes [53]. Therefore, the pathway of the empagliflozin effect remains a question to be solved. Molecular modeling of empagliflozin docking has shown that the drug has binding affinities to a region in Na V 1.5 that is a binding site for known sodium channel inhibitors [25]. However, an indirect effect, through the activation of signaling cascades, seems more likely. Empagliflozin has been shown to induce vasodilation in the rabbit aorta by activating protein kinase G (PKG) and K V channels [54]. Empagliflozin reduces the activity of Ca 2+ /calmodulin-dependent kinase II (CaMKII) in mouse and human failing ventricular myocytes [28]. This makes it possible to assume the presence of signaling pathways mediating empagliflozin effects on I K .
In conclusion, the results of this study revealed the following key observations: (1) empagliflozin increases I Kr and I Ks currents and has no effects on I Na , I CaL , and I CaT in zebrafish ventricular cardiomyocytes, and (2) empagliflozin shortens AP duration in the ventricular myocardium. Summing up the aforementioned, we suppose that the cardioprotective effect of the SGLT2 inhibitor may be attributed to its upregulating effect on the I K outward current.
Isolation of Ventricular Cardiomyocytes
All animal handling was performed in accordance with the Helsinki convention.
One-year-old wild-type zebrafish were used in the experiments.
Ventricular cardiomyocytes were obtained from the heart by enzymatic dissociation. The fish were killed by decapitation, and the heart was rapidly excised. A cannula (a blunted 32-gauge syringe needle) was introduced through the aortic bulb of the isolated heart for retrograde perfusion for 10-15 min with a Ca 2+ -free solution of the following composition (in mM): 100 NaCl, 10 KCl, 1.2 KH 2 PO 4 , 4 MgSO 4 , 10 HEPES, 50 taurine, 20 glucose, pH 6.9 (adjusted with KOH at room temperature). Then the heart was perfused for 25-30 min with the same solution containing proteolytic enzymes: 0.7 mg/mL collagenase type IA (Sigma-Aldrich, St. Louis, MO, USA), 0.6 mg/mL trypsin type IX (Sigma-Aldrich, St. Louis, MO, USA), and 1 mg/mL bovine serum albumin (DIA-M, Moscow, Russia). All perfusion was carried out at room temperature; trypsin was used only to obtain cells for recording of Na + current. After the end of perfusion, the atrium was removed and the ventricular myocardium was dissociated mechanically (by cutting with surgical scissors and pipetting) to isolate individual cells. Cardiomyocytes were stored in the Ca 2+ -free solution at +4 °C for no more than 8 h.
Cell Viability Assay
Cell viability was determined using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) (Sigma-Aldrich, St. Louis, MO, USA). Cardiomyocytes were plated in 96-well plates at 1000 cells per well. Cells were exposed to empagliflozin in the concentration range from 0.2 to 5 µM for 2 h at +28 °C. All groups were run in triplicate in three independent experiments. After the treatment, MTT solution was added to a final concentration of 0.5 mg/mL for incubation at +28 °C for 4 h. Then DMSO was added to dissolve the formazan crystals. Finally, the absorbance was measured with a microplate reader CLARIOstar ® Plus (BMG LABTECH, Ortenberg, Germany) at 570 nm. The absorbance reading at 630 nm was used as a reference and was subtracted from the 570-nm absorbance reading. The percentage of living cells was calculated using the following formula: % of living cells = 100% × [(sample absorbance − blank absorbance)/(control absorbance − blank absorbance)], (1) where blank absorbance is the absorbance in wells with the buffer solution without cells and control absorbance is the absorbance in wells with untreated cells.
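Equation (1) can be applied directly to the background-corrected plate-reader readings. The short Python sketch below demonstrates the calculation for one hypothetical empagliflozin-treated group measured in triplicate; the absorbance values are illustrative only.

# Hypothetical background-corrected absorbances (A570 - A630) from one experiment
blank_abs = 0.05                   # wells with buffer solution and no cells
control_abs = 0.82                 # untreated cardiomyocytes
sample_abs = [0.80, 0.79, 0.81]    # empagliflozin-treated wells (triplicate)

viability = [100.0 * (a - blank_abs) / (control_abs - blank_abs) for a in sample_abs]
mean_viability = sum(viability) / len(viability)
print(f"mean viability: {mean_viability:.1f}%")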
Actin Fluorescent Staining
Cardiomyocytes plated on gelatin-coated coverslips were fixed with 4% paraformaldehyde in PBS for 15 min. After the fixation, cells were washed with PBS three times for 5 min and then permeabilized with 0.05% Triton X-100 in PBS for 5 min at room temperature. Subsequently, the cells were again washed with PBS twice for 5 min. To visualize actin filaments, the cells were incubated with phalloidin conjugated with Alexa Fluor 488 (Thermo Fisher Scientific, Waltham, MA, USA) for 40 min at room temperature and analyzed under a fluorescence microscope Axio Observer Z1 (Carl Zeiss, Oberkochen, Germany) after counterstaining of nuclei with 4′,6-diamidino-2-phenylindole (DAPI). Images were obtained at a magnification of ×63.
Recording of Ionic Currents
The whole-cell voltage clamp recordings of ionic currents were performed in the freshly isolated ventricular myocytes at +28 °C, which is the standard temperature for zebrafish maintenance in the lab [55] and within the range of physiological temperatures for zebrafish populations reported in the wild [56]. Each control or empagliflozin-treated group consisted of 13-19 cardiomyocytes, 3-5 cells per fish. Data acquisition was performed with an Axopatch 200B amplifier and Clampfit software, version 10.3 (Molecular Devices, San Jose, CA, USA). The ionic currents were acquired at 20-50 kHz and low-pass filtered at 5 kHz using the analog-to-digital interface Digidata 1440A acquisition system (Molecular Devices, San Jose, CA, USA). All pulse protocols were applied more than 5 min after membrane rupture. Patch pipettes of 2.5-3.5 MΩ resistance were pulled from borosilicate glass B150-110-10 (Sutter Instrument, Novato, CA, USA) with a P-1000 puller (Sutter Instrument, Novato, CA, USA). The pipette and cell capacitances and the access resistance were completely compensated. The series resistance was compensated by 85-90%. Na + current I Na was recorded in a bath solution containing (in mM): 150 NaCl, 3 KCl, 1.8 CaCl 2 , 1.2 MgCl 2 , 10 HEPES, 10 glucose, pH 7.6 (adjusted with NaOH at room temperature). The pipette solution contained (in mM): 5 NaCl, 130 CsCl, 1 MgCl 2 , 5 EGTA, 5 HEPES, 5 MgATP, pH 7.2 (adjusted with CsOH). Ca 2+ and K + currents, I Ca and I Kr , were blocked with 10 µM nifedipine and 2 µM E-4031 (Tocris, Bristol, UK) added to the external solution.
I Na was elicited from the holding potential of −120 mV with 40 ms depolarizing voltage steps from −80 to +60 mV in 5 mV increments at a frequency of 1 Hz. The current density, I Na normalized to the cell membrane capacitance, was plotted against the voltage steps. Sodium conductance G Na was determined using the equation G Na = I Na /(V − V rev ), where V is the voltage step and V rev is the reversal potential of I Na calculated by a linear extrapolation of peak I Na in the range of depolarization potentials from +10 to +40 mV. The voltage dependence of steady-state activation of I Na was estimated by normalized G Na /G Na max plotted against the voltage steps and fitted by the Boltzmann equation G Na /G Na max = 1/(1 + exp((V 1/2 − V)/k)), where G Na max is the maximal sodium conductance, V 1/2 is the potential of half-maximal activation of I Na , and k is the curve slope factor. The voltage dependence of steady-state inactivation of I Na was estimated using a double-step protocol with 40 ms testing steps to −20 mV following a conditioning 500 ms prepulse ranging from −120 mV to 0 mV in 5 mV increments. Normalized I Na /I Na max elicited by the testing steps was plotted against the voltage of the conditioning prepulse and fitted by the Boltzmann equation.
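A minimal sketch of the conductance calculation and the Boltzmann fit of steady-state activation described above is shown below; the peak-current values, the reversal potential, and the use of SciPy are illustrative assumptions rather than the study's actual data or software.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    # Steady-state activation: G/Gmax = 1 / (1 + exp((V1/2 - V) / k))
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Hypothetical peak INa densities (pA/pF) at selected test potentials (mV)
v_steps = np.array([-60.0, -50.0, -40.0, -30.0, -20.0, -10.0, 0.0])
i_na = np.array([-0.3, -1.6, -9.5, -35.0, -52.9, -49.1, -39.9])
v_rev = 40.0                         # assumed reversal potential (mV)

g_na = i_na / (v_steps - v_rev)      # GNa = INa / (V - Vrev)
g_norm = g_na / g_na.max()           # normalize to the maximal conductance

popt, _ = curve_fit(boltzmann, v_steps, g_norm, p0=[-35.0, 5.0])
print(f"V1/2 = {popt[0]:.1f} mV, slope factor k = {popt[1]:.1f} mV")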
The time to peak was analyzed as a measure of activation kinetics. Time constants of inactivation were obtained by fitting the decaying phase of the current trace with a biexponential equation, I(t) = A fast × exp(−t/τ fast ) + A slow × exp(−t/τ slow ), where A fast and A slow are the fractions of the fast and slow inactivating components, respectively, and τ fast and τ slow are their time constants. Late I Na current was measured 100 ms after the I Na peak at −35 mV, and the data are presented as a percentage of the peak I Na current. I Ca was recorded in a bath solution containing (in mM): 130 NaCl, 5 CsCl, 2 CaCl 2 , 1 MgCl 2 , 5 Na-pyruvate, 10 HEPES, 10 glucose, pH 7.4 (adjusted with NaOH at room temperature). The pipette solution contained (in mM): 130 CsCl, 1 MgCl 2 , 0.345 CaCl 2 , 5 EGTA, 10 HEPES, 5 MgATP, 15 TEA-Cl, pH 7.2 (adjusted with CsOH). I Na and I Kr were blocked with 2 µM tetrodotoxin (TTX) and 2 µM E-4031 (Tocris, Bristol, UK) added to the external solution.
The total I Ca , including I CaT and I CaL , was elicited from the holding potential of −90 mV with 300 ms depolarizing voltage steps from −70 to +20 mV in 10 mV increment. I CaL was recorded at depolarization in the range from −40 to +40 mV following the 300 ms step of depolarization up to −50 mV. I CaT was obtained as the difference current between these two protocols.
Delayed rectifier potassium current I K was recorded in a bath solution containing (in mM): 150 NaCl, 5.4 KCl, 1.8 CaCl 2 , 1.2 MgCl 2 , 10 HEPES, 10 glucose, pH 7.6 (adjusted with NaOH at room temperature). The pipette solution contained (in mM): 140 KCl, 1 MgCl 2 , 5 EGTA, 10 HEPES, 4 MgATP, 0.03 Na 2 GTP, pH 7.2 (adjusted with KOH). I Na , I Ca , and I K1 were blocked with 2 µM TTX, 10 µM nifedipine (Tocris, Bristol, UK), and 2 mM BaCl 2 added to the external solution. I K was elicited by a double-pulse protocol from a holding potential of −80 mV. An initial 2 s depolarization from −50 to +70 mV in 10 mV step increments was followed by a 2 s repolarization to −40 mV. The rapid component of delayed rectifier potassium current, I Kr , was measured as the E-4031-sensitive current; the peak amplitudes of the step and tail components of this E-4031-sensitive (difference) current were used to assess I Kr . The slow component of delayed rectifier potassium current, I Ks , was obtained as the outward step current remaining in the presence of E-4031.
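Because I Kr is defined operationally as the E-4031-sensitive component, its analysis reduces to a point-by-point subtraction of the current recorded in the presence of E-4031 from the current recorded without it. The sketch below shows this subtraction on hypothetical step-current samples at a single test potential; the arrays and values are placeholders for real traces.

import numpy as np

# Hypothetical step-current samples (pA/pF) at one test potential
i_total = np.array([4.8, 5.1, 5.3, 5.4, 5.4])        # before E-4031 (IKr plus E-4031-insensitive current)
i_with_e4031 = np.array([1.1, 1.2, 1.2, 1.3, 1.3])   # after E-4031 (E-4031-insensitive current, e.g. IKs)

i_kr = i_total - i_with_e4031                          # E-4031-sensitive difference current
print("peak IKr step current density:", i_kr.max(), "pA/pF")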
Recording of Action Potentials
APs were recorded from the ex vivo heart after cutting off the pacemaker area of the heart (sinoatrial junction). The excised heart preparation, consisting of the ventricle and a part of the atrium, was pinned to the bottom of a Sylgard-coated chamber and continuously perfused at +28 °C with an oxygenated solution containing (in mM): 150 NaCl, 5.4 KCl, 1.8 CaCl 2 , 1.2 MgCl 2 , 10 HEPES, 10 glucose, pH 7.4 (adjusted with NaOH). Continuous pacing at a frequency of 2 Hz was performed. After an hour of equilibration under experimental conditions, the recording of electrical activity was started. The conventional sharp glass microelectrode technique was used for intracellular recording of APs in the ventricular myocardium. Microelectrodes of 20-40 MΩ resistance were filled with 3 M KCl and connected to a high-input-impedance amplifier, model 1600 (A-M Systems, Sequim, WA, USA). The signal was digitized and recorded using PowerGraph 3.3 (DI-Soft, Moscow, Russia) and analyzed using Mini Analysis 3.0 software (Synaptosoft, Fort Lee, NJ, USA). AP duration at 20%, 50%, and 90% of repolarization (APD20, APD50, and APD90, respectively), AP amplitude, and AP upstroke velocity (dV/dt) were determined during offline analysis. The drug was added to the perfusion solution from concentrated stock solutions to yield the final concentration.
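APD values are read from each digitized action potential as the time at which the membrane potential has repolarized by a given fraction of the AP amplitude. The following sketch measures APD50 and APD90 on a synthetic trace, taking the AP peak as the reference time for simplicity; the sampling rate, trace shape, and reference-point convention are illustrative assumptions and not the exact Mini Analysis procedure.

import numpy as np

def apd(t_ms, v_mv, fraction):
    # AP duration at a given repolarization fraction (e.g., 0.9 for APD90),
    # measured from the AP peak to the first sample at or below the target level.
    rest = v_mv[0]                                   # resting potential before the upstroke
    peak_idx = int(np.argmax(v_mv))
    level = v_mv[peak_idx] - fraction * (v_mv[peak_idx] - rest)
    after_peak = np.where(v_mv[peak_idx:] <= level)[0]
    return t_ms[peak_idx + after_peak[0]] - t_ms[peak_idx]

# Synthetic AP trace sampled at 1 kHz: upstroke at 10 ms, then exponential repolarization
t = np.arange(0.0, 500.0, 1.0)                       # time in ms
v = np.full_like(t, -80.0)                           # resting potential (mV)
v[10:] = -80.0 + 110.0 * np.exp(-(t[10:] - 10.0) / 150.0)

print(f"APD50 = {apd(t, v, 0.5):.0f} ms, APD90 = {apd(t, v, 0.9):.0f} ms")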
Empagliflozin Treatment
To elucidate the effects of empagliflozin on the main ionic currents in freshly isolated zebrafish cardiomyocytes, cells were incubated for 2 h in the presence of various concentrations of empagliflozin. To determine the effects of empagliflozin on the AP parameters, the isolated heart was perfused for 2 h with an empagliflozin-containing oxygenated buffer solution. Incubation and perfusion were carried out at +28 °C. Subsequently, the recordings of ionic currents or APs were performed in the presence of empagliflozin in the buffer solutions.
Statistical Analysis
Data are presented as mean values ± standard errors (SEM). After checking the normality of the distribution of data obtained by ionic currents recording, statistical comparisons were made using Student's t-test. AP parameters measured in zebrafish heart before and after application of 5 µM empagliflozin were compared using paired Student's t-test. Results with p < 0.05 were considered to be statistically significant. Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
AP: Action potential
APD20, APD50, and APD90: Action potential duration at 20%, 50%, and 90% of repolarization, respectively
hERG: Human ether-a-go-go related gene
hiPSC: Human induced pluripotent stem cell
I CaL: L-type calcium current
I CaT: T-type calcium current
I Na: Sodium current
I Kr: Rapid component of delayed rectifier potassium current
I Ks: Slow component of delayed rectifier potassium current
K V: Voltage-gated potassium channel
SGLT2: Sodium-glucose co-transporter 2
i(s)SGLT2: Sodium-glucose cotransporter type 2 inhibitor(s) | 2022-08-26T15:14:15.733Z | 2022-08-23T00:00:00.000 | {
"year": 2022,
"sha1": "b33dbbd53714bcef8f2e0d9de1829c55bc3d67c9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/17/9559/pdf?version=1661336787",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39a9faff880aeb221bbd010d9cf7ecfb134e8fa4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
247272135 | pes2o/s2orc | v3-fos-license | Validity of Electronic Device-Based Application for Visual Acuity Examination: A Systematic Review
In recent years, advances in internet and communication technology have enabled the proliferation of digital medical devices with innovations in the form of health applications, including those for visual acuity examination. However, the validity of these applications remains unclear. The limited mobility and health services during the COVID-19 pandemic underscore the urgent need to conduct research that validates these electronic device-based applications. Thus, this study aims to critically analyze whether electronic device-based applications are able to provide a valid and high-quality visual acuity examination. A systematic review was conducted through a search of studies on PubMed, MEDLINE, Springer, and the Cochrane Library using specific keywords. After the studies were selected through inclusion and exclusion criteria, extraction was carried out. Publications from 2011 to the end of 2021 were reviewed, yielding 1409 studies, of which 19 were included. The results showed low systematic bias for distance visual acuity testing with electronic device-based applications compared to standard reference tests, with a mean difference of -0.08 to 0.10 logMAR. The validity of the near visual acuity examination with the applications shows better results than the distance examination, which is marked by a smaller 95% limits of agreement range. The results of the analysis of Bland-Altman plots in all the studies reviewed showed that the accuracy of the examination results tended to increase in patients who had better visual acuity. In practice, the use of electronic device-based applications for visual acuity examination can increase the work effectiveness of medical personnel and the proliferation of digital medical devices. It can also be one of the breakthroughs in the field of remote medical services and support the implementation of telemedicine policies. INDEX TERMS Application, electronic device, visual acuity, systematic review
systems allowed early detection of true clinical changes in visual acuity in each patient [4].
Visual acuity is a measure of the eye's ability to clearly distinguish the shape and detail of objects at a certain distance [5]. Visual acuity examination is done by comparing a person's visual acuity with the standard normal person which usually begins with an examination using an optotype. Optotypes are marks of different sizes that are placed systematically on a visual acuity chart. The optotype is usually a number, letter, or symbol as an instrument to test visual acuity [6]. Conventionally, symbols have been printed on cards or graphics that are mounted on walls and presented to patients for examination [5].
Human visual acuity can change due to many eye problems; therefore, visual acuity examinations need to be carried out to help detect various eye disorders. Eye disorders such as visual impairment are health problems that have profound effects on quality of life, educational attainment, and economic productivity [7,8].
World Health Organization (WHO) estimates that 2.2 billion people in the world have near or distance vision problems. The most common causes of visual impairment worldwide are uncorrected refractive errors (48.99%), followed by cataracts (25.81%) and age-related macular degeneration (4.1%). Meanwhile, the most common causes of blindness were cataracts (34.47%), followed by uncorrected refractive errors (20.26%), and glaucoma (8.30%). More than 75% of visual impairments are actually preventable [9]. Nationally, the results of the 2014-2016 Rapid Assessment of Avoidable Blindness (RABB) survey in 15 provinces showed that the blindness rate in Indonesia reached 900,000 people. The main cause of blindness and visual impairment in the population aged over 50 years in Indonesia is untreated cataracts with a proportion of 77.7% [10]. This shows that there are still many cases that are not corrected or even undetected. This fact shows the importance of increasing the affordability of visual acuity examinations to assist in the early detection of visual impairment.
During the COVID-19 pandemic, social distancing, quarantine, and restrictions on face-to-face interactions were enforced to prevent and break the spread of the SARS-CoV-2 virus. This poses a challenge in providing eye care to patients, as eye examinations require the examiner to be in close contact with the patient. In fact, the first case of COVID-19 was reported by an ophthalmologist at the Wuhan Central Hospital, who also died from the new virus [11,12].
Various types of visual acuity testing applications are available and can be downloaded easily on the internet. However, the validity of these applications remains unclear. Meanwhile, the limited mobility and health service during the COVID-19 pandemic underscore the urgent need to conduct research that validates these electronic device-based applications. Thus, this study aims to critically analyze whether the electronic device-based application is able to provide a valid and high-quality visual acuity examination.
A. SEARCH STRATEGY
This systematic review was conducted based on the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guidelines. We performed a comprehensive search of the PubMed, MEDLINE, Springer, and Cochrane Library databases from January 2011, updated to the end of 2021, using the following keywords: "application", "electronic device", and "visual acuity". Boolean operators (AND, OR, NOT) and truncation (*) were applied to broaden and narrow the search results. We also used Medical Subject Headings (MeSH) terms in the search strategy. However, the search language was limited to English and Bahasa Indonesia.
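For illustration, a hypothetical query combining these elements might take the form ("visual acuity") AND (application* OR "electronic device*"), with the corresponding MeSH terms added where a database supports them; the exact strings entered in each database are not reproduced here.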
B. ELIGIBILITY CRITERIA
Inclusion criteria were set to filter the results as follows: (1) diagnostic test, observational study, or clinical trial, and (2) investigating the validity of visual acuity examination performed by electronic device-based application. It is worth mentioning that different study designs were incorporated into this review including those with one or more index tests and with any reference method that investigated visual acuity test in the general population. Conversely, the exclusion criteria defined included: (1) irrelevant topics, (2) not having index test as comparison, (3) unknown and/or inappropriate study types and settings, (4) incompatible language, and (5) irretrievable full-text articles.
C. DATA EXTRACTION AND RISK OF BIAS
The following data from articles were extracted, including author and year of publication, study design and location, sample size, index test, reference test, and outcome measures such as mean difference, sensitivity, specificity, and any other reported outcome. The quality of included studies was assessed using the Joanna Briggs Institute (JBI) checklist with >50% cut-off. Risk of bias assessment was conducted by the reviewers collaboratively and discrepancies were resolved by consensus between reviewers.
A. STUDY SELECTION
A total of 1409 studies were initially identified. After removing 324 duplicates, 1085 results were screened based on title and abstract, out of which 132 full texts were identified to be examined (Figure 1). Finally, 113 studies were excluded due to not meeting the inclusion criteria. In total, 19 articles were included in this review. The quality assessment of all studies using the JBI checklist showed a low risk of bias.
IV. DISCUSSION
The results showed that 18 of 19 studies stated that visual acuity examination with electronic device-based applications gave valid results. The overall identification results also show a small mean difference between digital applications and standard reference tests in assessing distance visual acuity, which indicates low systematic bias. The mean difference ranged from -0.08 to 0.10 logMAR. The majority of the 95% limits of agreement ranges for the distance visual acuity examination results were quite wide, which indicates variability.
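The mean differences and 95% limits of agreement summarized here follow the standard Bland-Altman calculation (mean of the paired differences ± 1.96 times their standard deviation). A minimal Python sketch on hypothetical paired logMAR measurements is shown below; the values are invented and do not correspond to any included study.

import numpy as np

# Hypothetical paired distance visual acuity results (logMAR): application vs. standard chart
app_va = np.array([0.00, 0.10, 0.22, 0.30, 0.48, 0.70, 1.00])
chart_va = np.array([0.02, 0.08, 0.20, 0.34, 0.40, 0.78, 0.92])

diff = app_va - chart_va
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
print(f"mean difference = {mean_diff:.3f} logMAR, 95% LoA = [{loa_low:.3f}, {loa_high:.3f}]")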
The study by Satgunam et al. (2021) stated that the Smart Optometry application was not comparable to the reduced Snellen chart, but was declared valid because it only differed by 2 logMAR lines, which means it is still clinically acceptable. This difference is not a problem when digital applications are used for screening to detect visual impairments associated with decreased visual acuity, even though age and refractive errors affect measurements in Smart Optometry applications [13].
Several studies have attempted to link the use of electronic device-based applications to clinical practice. One study stated that the repeatability of the Eye Chart application needs to be investigated before it is integrated into clinical practice, even though the study results were reported to be valid [16]. Correspondingly, the study by Perera et al. (2015) reported valid results but noted that further research is still needed for clinical use [28]. Different findings were reported in a previous study by Gounder et al. (2014), who stated that the Eye Snellen application can be used to measure visual acuity reliably in clinical settings on all measures of visual acuity [30]. More specifically, one application, Eye Chart Pro, was reported to be reliable for testing when Snellen visual acuity was better than 20/200, or 0.1 in decimal notation [31]. The results of the Bland-Altman plot analyses in all the studies reviewed showed that the accuracy of the examination results tended to increase in patients who had better visual acuity.
In contrast to the results of other studies, one study stated that the Eye Hand Book application was invalid. The application provided overestimated near visual acuity results compared to a conventional near card, by an average of 0.11 logMAR, except for the standard measurement result of 20/20. This means that patients tend to perform better on examinations, so eye disorders may go undetected. Overestimated results can result in delays in treatment. That study suspected that the main factor in the discrepancy between the application and the standard reference test is the contrast ratio. The contrast ratio of a cleanly printed Snellen or ETDRS chart is below 33:1, while the iPhone 5, the digital device used in that study, has a contrast ratio of 1151:1 [27]. Other studies have also reported that measurements of visual acuity can be overestimated with increasing contrast and lighting levels [32]. However, the validity of the near vision test in this systematic review overall shows better results than the distance visual acuity assessment, which is characterized by a smaller 95% limits of agreement range. In daily use in clinics, examination time is critical for efficiency. This is the reason why the Snellen chart is the optotype for routine examination in clinical practice. The results of the application quality analysis on the operability component show that the Peek Acuity application, which uses the ETDRS Tumbling E chart optotype, has an average examination time 5 seconds faster than the conventional ETDRS Tumbling E chart [24]. Correspondingly, the REST application also recorded an average examination time 2.8 seconds faster than the standard reference test [26]. These results support digital applications for routine use. Nevertheless, previous research reported that there is a slight delay between the presentation of the text and the start of timing. During this delay, the subject may begin to read the first line of the text. The result would be an underestimate of the reading time and, consequently, an overestimate of reading speed. This possibility is supported by a recent study that compared stopwatch versus automated timing in a computer-based reading test [32].
In addition, visual acuity examination by application is also affected by the type of electronic device. Recent studies have reported few differences in test time between paper and screens [33,34]. In another study, it remains debated whether reading is better on paper or on an LCD [35]. Furthermore, previous research found that visual acuity examinations on iPads are particularly susceptible to glare; utilizing an anti-glare coating can be a solution [36]. However, some of the information accompanying the validity conclusions in the 18 studies shows that the feasibility of digital applications is currently still limited to early detection, with the potential to be used as an initial examination in remote medical services.
Visual acuity examination with electronic device-based digital applications is of great importance for the development of remote services. This is why various studies recommend the use of digital applications even though there are slight differences between the examination results of the applications and the standard references. Conventional examination in hospitals requires the patient to physically come to the clinic. Difficulties may be faced by patients who live far away, for example people living in rural areas, elderly patients, and patients who are unable to travel [37]. Remote examination can also reduce costs and speed up early detection [38]. Moreover, a smart mobile application to monitor visual function in patients with diabetic retinopathy and age-related macular degeneration already exists and is being investigated [39]. With the increasing flow of digitization, the portability of these applications further supports the quality of digital visual acuity examination applications.
The availability and increasing use of electronic devices, especially smartphones and tablets, further emphasize the potential for digital applications to identify the most common causes of visual impairment in Indonesia and worldwide, including uncorrected refractive errors. Studies report that half of the visually impaired population actually has a decrease in visual acuity that could be prevented or corrected with glasses or contact lenses [40].
In practice, the use of electronic device-based applications for visual acuity examination can increase the effectiveness of medical personnel and promote the spread of digital medical devices. The results of this systematic review may also represent a breakthrough in the field of remote medical services and support the implementation of telemedicine policies.
As with other studies, this review has several limitations. Because the topic is novel, the available studies are limited; as a result, the study designs and validity parameters of the included studies varied. However, all included studies used an electronic device-based application as the index test and a conventional visual acuity examination as the comparator.
V. CONCLUSION
In conclusion, the use of electronic device-based applications provides valid results for early detection in visual acuity examinations. This systematic review also found that electronic device-based visual acuity examination performed better for near-range assessments than for distance assessments. The Bland-Altman analyses of all included studies further showed that the accuracy of the examination results tended to increase in patients with better visual acuity.
Further research on the repeatability of visual acuity examination with electronic device-based applications is required to support the validity conclusion. In addition, it is necessary to conduct research that examines the potential for remote medical services.
"year": 2022,
"sha1": "2f0b8ac9e6fcbbbad0aefaa0bbab6f339815bed1",
"oa_license": "CCBYSA",
"oa_url": "http://ijeeemi.poltekkesdepkes-sby.ac.id/index.php/ijeeemi/article/download/191/76",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4cecf1823669316b49becb9efac5e6fe2890bc14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Myocarditis development after COVID-19 vaccination in an immunodeficient case
Dear Editor, It is thought that the coronavirus disease 2019 (COVID-19), caused by the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), can be controlled globally by the administration of vaccines. Although COVID-19 primarily affects the respiratory system, systemic effects are also seen in the clinic. Cardiac complications of the disease caused by SARS-CoV-2, in particular, can be life-threatening [1]. The COVID-19 mortality rate due to cardiovascular problems is estimated at 10.5% [2].
In this article, an immunodeficient case who presented with chest pain after the third dose of the Pfizer-BioNTech® mRNA vaccine against SARS-CoV-2 is discussed.
A seventeen-year-old male patient was diagnosed with common variable immunodeficiency (CVID) and followed up in the pediatric immunology division. He presented with chest pain three days after the third dose of the Pfizer-BioNTech® vaccine. On physical examination, no organomegaly was detected, his lung sounds were normal, and the rest of the examination was unremarkable at that time. Routine biochemistry and hemogram values were within normal limits. His thorax CT had shown normal findings in the last year. During this time he was receiving intravenous immunoglobulin (IVIG) therapy at a dose of 0.5 g/kg/month. Infections had been very rare in the last few years; he suffered only a couple of uncomplicated upper respiratory tract infections. During this period his serum immunoglobulin (Ig) levels were as follows: IgG 1560 mg/dl, IgA 6.0 mg/dl, and IgM 66 mg/dl. The troponin level was 20,657 ng/L, myoglobin was 429 mcg/L, and C-reactive protein (CRP) was measured as 40 mg/L. The patient's echocardiography (ECHO) was unremarkable, and the first electrocardiogram (ECG) was normal. He was hospitalized for follow-up. T-wave negativity was observed in leads V5 and V6 on the control ECG (Fig. 1). T1 mapping on cardiac magnetic resonance imaging (MRI) was normal. The patient, whose complaints lessened and whose troponin level decreased to 43 ng/L, was discharged. He had no chest pain at the outpatient clinic follow-up. At the last follow-up, the troponin level was measured as 8 ng/L.
Myocarditis is an inflammatory injury of myocardial tissue without ischemia. The most common cause is a viral infection [3]. It is usually a mild illness characterized by chest pain, shortness of breath, or tachycardia [4]. Left ventricular dysfunction is associated with a poor prognosis when arrhythmia and heart failure complications develop [3]. The pathogenesis of SARS-CoV-2-related heart disease has not yet been elucidated. The most common mechanisms are cytokine storm and angiotensin-converting enzyme-2 (ACE-2) mediated heart damage due to SARS-CoV-2 infection [3,5].
Kuntz et al. [6] considered the presence of at least two of the following criteria as definitive myocarditis in post-vaccination patients with elevated cardiac enzymes or evidence of myocardial inflammation: dyspnea, palpitations, or chest pain; and ECG abnormalities or left ventricular dysfunction. Accordingly, our patient can be considered to have had myocarditis because he meets these conditions.
The increase in troponin that occurs during SARS-CoV-2 infection is an indicator of hyperinflammation. It may indicate myocardial damage and acute myocarditis due to SARS-CoV-2 [5]. Similarly, some myocarditis cases have been reported in the literature after COVID-19 vaccination. The peculiarity of our case is that the patient had a diagnosis of CVID and that myocarditis occurred after the third, repeated dose of the mRNA vaccine.
According to data from the Israeli Ministry of Health, between December 2020 and May 2021, 5.4 million people were vaccinated with Pfizer-BioNTech; 27 cases of myocarditis were reported after the first dose in 5.4 million people, and 121 cases after the second dose in 5 million people. Most were male patients between the ages of 16 and 19. The mean hospital stay was four days, and 95% were reported as mild cases [4].
Montgomery et al. [7] reported, in a retrospective case series, that myocarditis developed in 23 male patients within 4 days of vaccination among 2.8 million administered mRNA vaccine doses. All patients were given supportive treatment and all improved. All had chest pain and high troponin levels. Symptoms developed 12-96 h after the mRNA vaccine, after the first dose in 3 patients and after the second dose in the others. The median age was 25. Abnormal ECG findings, namely ST elevation and T-wave inversions, were present in 83% of the cases. Echocardiograms of the 19 cases with abnormal ECGs were normal. Coronary artery imaging was performed in 16 of them by computed tomography or cardiac catheterization; none had coronary artery disease. Kim et al. [8] reported 4 cases of myocarditis in the first five days after mRNA vaccination. All cases had chest pain and elevated myocardial biomarkers and were confirmed by magnetic resonance imaging (MRI) specific for myocarditis. Castiello et al. [5] emphasized in a systematic review that an increase in troponin is an indicator of myocardial damage; they also reported that the echocardiography results of 7 cases diagnosed with COVID-19 and possible myocarditis were normal. Tschöpe et al. [3] suggested endomyocardial biopsy for cases that cannot be detected by cardiac MRI. Although cardiac MRI was normal in our patient, he had chest pain, elevated myocardial biomarkers, and ECG changes 3 days after vaccination, similar to the literature.
Hagin et al. [9] administered the second dose of the Pfizer-BioNTech COVID-19 vaccine to 26 patients with immunodeficiency. Adverse events were pain at the injection site in 9 cases, fever in 3 cases, and axillary lymphadenopathy in 1 patient; no long-term side effects were observed in any of them. Göschl et al. [10] administered two doses of vaccine to 26 patients with immunodeficiency and reported no adverse events other than fever and local reactions. Arroyo et al. [11] administered two doses of mRNA vaccine (11 BioNTech and 6 mRNA-1273) to 17 CVID patients and two doses of the viral vector vaccine ChAdOx1 to 1 patient. There were no severe side effects after the vaccines. Fifty percent of the CVID patients developed a cellular immune response after the first dose, and the response rate after the second dose was 83%, although cellular and humoral vaccine responses were significantly lower than in healthy volunteers. Cases were observed for 3 months after vaccination, and no serious adverse events were reported. Similarly, other studies in the literature administered 2 doses of COVID-19 vaccine to CVID patients and reported positive vaccine responses and no serious adverse effects [9,12]. Likewise, in our own limited experience with 20 patients, both the inactivated-virus Sinovac® and the mRNA BioNTech® vaccines appeared safe and reliable in our CVID patients [13]. Although we cannot fully explain why myocarditis developed only in this patient and not in our other CVID patients, we think it was an incidental rather than a causal finding (complication).
Immunocompromised patients are at high risk for SARS-CoV-2 infection. Vaccination is the most crucial measure to combat the epidemic and reduce the disease burden. COVID-19 is recognized as a cause of heart damage. Myocarditis and other adverse events developing after COVID-19 vaccination should be carefully monitored. Clinicians should be informed about adverse events after immunization and should be able to provide symptomatic treatment and follow-up. Such side effects should not diminish confidence in the value of vaccines.
Declaration of Competing Interest
None.
"year": 2023,
"sha1": "ce63a3e0cb41199d845c05f97e7f8027c35a44d3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0d8e864693a74b491124a130802631831667a7db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Inhibition of H1N1 by Picochlorum sp. 122 via AKT and p53 signaling pathways
Abstract Influenza viruses pose a severe threat to global health; they cause annual epidemics and occasionally pandemics. However, the number of anti-influenza therapeutic agents is very limited. Polysaccharides extracted from Picochlorum sp. (PPE), a seaweed polysaccharide, have exhibited antiviral activity and are expected to be useful for influenza treatment. In our research, the capability of PPE to inhibit H1N1 infection was demonstrated in MDCK cells. PPE protected MDCK cells from H1N1 infection and inhibited nuclear fragmentation and chromatin condensation. PPE evidently inhibited the generation of reactive oxygen species in MDCK cells. Mechanistic studies revealed that PPE protected MDCK cells against H1N1-induced apoptosis by stimulating the AKT signaling pathway and suppressing the p-p53 signaling pathway. In conclusion, PPE is a prospective antiviral agent against H1N1 influenza.
in an attempt to control these influenza outbreaks, while the available influenza vaccines show limited effect against infection (Feng et al., 2014; Hu et al., 2022; Shang et al., 2021). Algal products can synthesize many important compounds, for example polysaccharides (algae-based polymers), lipids, and organic pigments, which have attracted considerable attention in industrial applications (Schutter et al., 2010). Besides, algae are used as food supplements, aquaculture ingredients, animal feed, and soil biofertilizers (Li et al., 2013; Lian et al., 2018; Nie et al., 2016). Seaweed polysaccharides serve many purposes in clinical practice, including anti-inflammatory activity, anticancer activity, wound dressings, drug delivery, tissue engineering, immunomodulation, antibacterial effects, and anticoagulant activity, which are attributed to their rich sources, high safety, strong biological activity, and low side effects (de Jesus Raposo et al., 2015; Kholiya et al., 2020; Okimura et al., 2020). Polysaccharides have shown the capability to inhibit viral infection in several studies; they mainly prevent the virus from attaching to the cell surface, so that cells avoid viral invasion, and inhibit viral proliferation (Abu-Galiyun et al., 2019; Liang et al., 2021; Sharma et al., 2021). The source of PPE is as previously described (Guo et al., 2021; Xie et al., 2017; Xu et al., 2021; Yao et al., 2022); however, there are few reports on its antiviral activity. The mechanism of antiviral drugs against H1N1 infection has attracted increasing attention from researchers. Li reported that functionalized selenium nanoparticles decorated with amantadine inhibit H1N1-induced apoptosis via the ROS-mediated AKT signaling pathway. Wang found that selenium nanoparticles decorated with β-thujaplicin could inhibit apoptosis induced by H1N1 influenza virus via ROS-mediated p53 and AKT signaling pathways. Mou discovered that EGCG induces β-defensin 3 against H1N1 through the MAPK signaling pathway (Ha et al., 2022; Mou et al., 2020; Zhu et al., 2022). In this research, the anti-H1N1 function of PPE was demonstrated, and an apoptosis mechanism involving ROS-mediated signaling pathways was discovered. Our results showed that PPE inhibited host cell apoptosis induced by H1N1 through the AKT, p53, and PARP signaling pathways.
| Cell viability
The MTT assay was used to test the cytotoxicity of PPE, as previously described. MDCK cells were seeded at 5 × 10⁴ cells/well and then cultured for 24 h. After that, H1N1 virus was added at a titer of 100 TCID₅₀ for 2 h, after which the virus remaining outside the cells was removed.
TABLE 1 Titer of H1N1
The cells were treated with H1N1 for 24 h and then rinsed twice with PBS, and 15 μl of MTT solution (5 mg/ml) was added to each well. After 4 h, the formazan crystals formed were dissolved in dimethyl sulfoxide (DMSO, 150 μl/well) and the absorbance was detected at 570 nm.
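As an illustration of how such absorbance readings are typically converted into a viability figure, the sketch below assumes the common blank-corrected ratio of treated to control absorbance; the optical-density values and the exact formula used by the authors are assumptions, not taken from the paper.

```python
# Illustrative MTT viability calculation from OD570 readings (dummy values).
import numpy as np

od_blank = 0.05                                   # medium + MTT, no cells
od_control = np.array([1.21, 1.18, 1.25])         # untreated control wells
od_treated = np.array([0.85, 0.88, 0.80])         # H1N1-infected, PPE-treated wells

viability = (od_treated.mean() - od_blank) / (od_control.mean() - od_blank) * 100
print(f"cell viability = {viability:.1f} %")
```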
| Annexin-V/PI double staining assay
MDCK cells were cultured and harvested as described previously. Briefly, the collected cells were suspended and then centrifuged. The supernatant was removed and the cells were resuspended. Finally, the cells were stained with the kits for 10-20 min in the dark and analyzed by flow cytometry; CellQuest software was used to analyze the data.
| The assessment of reactive oxygen species (ROS)
ROS in MDCK cells were detected as previously described. In brief, after H1N1-infected MDCK cells had been exposed to PPE for 24 h, the cells were stained with the kits, and the results were assessed with a fluorescence microscope and a fluorescence plate reader.
| Test of mitochondrial membrane potential (△Ψm)
The JC-1 probe was used to detect mitochondrial membrane potential (ΔΨm). H1N1-infected MDCK cells were stained with JC-1 in the dark for 20 min after exposure to PPE for 24 h, and then analyzed by flow cytometry.
| TUNEL and DAPI staining
The effects of PPE on H1N1-induced DNA fragmentation were detected with the kits following the protocol. Briefly, MDCK cells were labeled by TUNEL and nuclei were stained with DAPI. The stained cells were then observed with a fluorescence microscope (Nikon Eclipse 80i).
| Statistical analysis
All data are presented as mean ± SD. Differences between two groups were evaluated using a two-tailed Student's t-test, and one-way analysis of variance was used for multiple-group comparisons.
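A minimal sketch of these tests, assuming standard SciPy routines rather than the authors' actual analysis software, is shown below; the group values are placeholders.

```python
# Two-tailed Student's t-test and one-way ANOVA on dummy viability data.
from scipy import stats

control = [0.92, 0.88, 0.95]                       # mock-infected group
infected = [0.61, 0.58, 0.66]                      # H1N1-infected group
ppe_low, ppe_high = [0.70, 0.74, 0.69], [0.83, 0.86, 0.81]

t_stat, p_two = stats.ttest_ind(control, infected)             # two-tailed by default
f_stat, p_anova = stats.f_oneway(control, infected, ppe_low, ppe_high)
print(f"t-test p = {p_two:.4f}; one-way ANOVA p = {p_anova:.4f}")
```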
| In vitro antiviral effect of PPE
Cell viability was tested by MTT assay to assess the antiviral effect of PPE. As shown in Figure 1a, PPE exhibited low cytotoxicity against MDCK cells. In Figure 1b
| Effects of PPE on mitochondrial function of H1N1-infected MDCK cells
Reactive oxygen species (ROS) are proven to be a significant regulator of cell apoptosis induced by chemotherapy drugs, and the mitochondrion is an important organelle producing intracellular ROS. Overproduction of ROS can reduce the ATP synthesized by mitochondria and result in mitochondrial dysfunction, which further leads to cell apoptosis (Li et al., 2011). As shown in Figure 2a, H1N1
| Effects of PPE on apoptosis in H1N1-infected MDCK cells
To explore whether apoptosis was induced in H1N1-infected MDCK cells and to assess the effects of PPE, further experiments were performed. As shown in Figure 3a, there is an obvious sub-G1 peak, and higher PPE concentrations showed a greater capability to inhibit apoptosis.
Significant differences were not observed in cell cycle distribution.
DNA fragmentation is one of the hallmarks of cell apoptosis (Moosavi et al., 2018). The apoptosis induced by H1N1 was further confirmed by TUNEL and DAPI assays. As shown in Figure 3b
| Effects of PPE on the early and late apoptosis of H1N1-infected MDCK cells
Annexin-V/PI double staining is a more effective way to examine cell apoptosis (Yu et al., 2020). As shown in Figure 4a, H1N1 infection resulted in a significant elevation of green and red fluorescence in MDCK cells, indicating that H1N1 induced early and late apoptosis in the infected cells. After treatment with PPE, the double fluorescence showed an obvious decrease.

The results indicated that PPE suppressed viral activity in a dose-dependent manner and exhibited great antiviral capability to prevent MDCK cells from undergoing apoptosis in vitro.
| The innate immune response modulation of PPE to H1N1 infection
In addition, PPE exerted anti-inflammatory activity. As shown in Figure 5, MDCK cells showed a basal secretion of cytokines; H1N1 infection induced a strong innate immune response in the host cells, including TNF-α, IL-2, IL-17F, IL-4, IL-17A, and IL-1β, whereas hosts treated with PPE showed significant suppression of this innate immune response.
| Effects of PPE on apoptotic signaling pathways activated by ROS
The overproduction of ROS can damage DNA by regulating apoptosis signaling pathways. As shown in Figure 6a and b, PPE regulated the AKT and p-p53 signaling pathways.
| CONCLUSIONS
In our research, the antiviral activity and mechanisms of PPE were explored by different methods. The research demonstrated that PPE obviously restrained H1N1 proliferation and reduced apoptosis of MDCK cells. The mechanistic studies revealed that PPE limited H1N1-induced apoptosis of host cells through the AKT and p53 signaling pathways, and suppressed the innate immune response of MDCK cells induced by H1N1. Our findings reveal that PPE could be a prospective drug to treat H1N1 infection.
CONFLICT OF INTEREST
The authors report no conflicts of interest in this work.
DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
"year": 2022,
"sha1": "206956d1a12ecf7fae41c6f3ad305622caa0a4ff",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ddd584fbb0a6f7258ef35f51fd0a25b0433320e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Time-resolved photoluminescence studies of perovskite chalcogenides
Chalcogenides in the perovskite and related crystal structures ("chalcogenide perovskites" for brevity) may be useful for future optoelectronic and energy-conversion technologies inasmuch as they have good excited-state, ambipolar transport properties. In recent years, several studies have suggested that semiconductors in the Ba-Zr-S system have slow non-radiative recombination rates. Here, we present a time-resolved photoluminescence (TRPL) study of excited-state carrier mobility and recombination rates in the perovskite-structured material BaZrS3, and the related Ruddlesden-Popper phase Ba3Zr2S7. We measure state-of-the-art single crystal samples, to identify properties free from the influence of secondary phases and random grain boundaries. We model and fit the data using a semiconductor physics simulation, to enable more direct determination of key material parameters than is possible with empirical data modeling. We find that both materials have Shockley-Read-Hall recombination lifetimes on the order of 50 ns and excited-state diffusion lengths on the order of 5 μm at room temperature, which bodes well for ambipolar device performance in optoelectronic technologies including thin-film solar cells.
Introduction
Understanding excited-state charge transport properties and recombination rates is central to semiconductor materials selection and device design for optoelectronic and energy-conversion technologies. Time-resolved photoluminescence (TRPL) is a technique that can probe excited-state recombination dynamics, including effective carrier lifetimes. However, the inferred lifetimes typically correspond to a host of radiative and non-radiative recombination processes taking place simultaneously and varying spatially through the probed material. Advanced modeling and fitting of transient data can be useful to distinguish the rates of various recombination processes, especially between bulk and surface effects. [1][2][3] In this paper we present TRPL data measured on chalcogenide perovskites in the Ba-Zr-S system, and a data analysis method that enables accurate study of recombination processes. Chalcogenide perovskites are a family of promising semiconductors for optoelectronics. [4][5][6][7][8][9] These chalcogenide materials have direct band gap (Eg) tunable from the visible to near-infrared (NIR), strong optical absorption and luminescence, and reports of slow non-radiative recombination rates. 4-6,10-13 They feature inexpensive and non-toxic elements, combined with thermal stability up to at least 550 °C. 14,15 We have recently demonstrated synthesis of large-area, atomically-smooth, epitaxial thin films of BaZrS3 by pulsed laser deposition (PLD) and molecular beam epitaxy (MBE). 16,17 Here we study single-crystal samples of the perovskite-structured material BaZrS3, and the related Ruddlesden-Popper phase Ba3Zr2S7. 18 BaZrS3 is a semiconductor with direct band gap of energy Eg = 1.9 eV. 17 Ba3Zr2S7 is an indirect-band gap semiconductor with Eg = 1.25 eV, and strong NIR absorption due to a direct, allowed transition near 1.3 eV. 6 We measure TRPL data for varying temperature and illumination conditions. We model and fit TRPL data using a semiconductor physics simulation, using the program PC-1D. 19 PC-1D allows parametrization of recombination processes including bulk non-radiative Shockley-Read-Hall (SRH), surface, and Auger. We incorporate the models simulated by PC-1D into a MATLAB routine to perform global, nonlinear, least-squares fitting on the measured TRPL data sets. We find that both materials have bulk SRH recombination lifetimes (τSRH) on the order of 50 ns, and excited-state diffusion lengths on the order of 5 μm, which bode well for device performance in optoelectronic technologies including thin-film solar cells.
Experimental methods
We synthesized crystals using the flux method, according to previously-reported procedures. 18 We ground and mixed 1 g of BaCl2 powder (Alfa Aesar, 99.998%) together with 0.5 g of stoichiometric mixtures of precursor powders (BaS, Zr, and S), and loaded the resulting powder into a quartz tube, as for powder synthesis. For BaZrS3 crystal growth we heated to 1050 °C at a rate of 1.6 °C min⁻¹, held at 1050 °C for 100 h, cooled to 800 °C at a rate of 0.1 °C min⁻¹, and then cooled to room temperature in an uncontrolled manner by shutting off the furnace. To make Ba3Zr2S7 crystals we heated to 1050 °C at a rate of 0.3 °C min⁻¹, held at 1050 °C for 40 h, cooled to 400 °C at a rate of 1 °C min⁻¹, and then cooled to room temperature uncontrolled. We washed the obtained samples repeatedly with deionized water and isopropyl alcohol to remove excess flux before drying in airflow. The crystal dimensions are on the order of ~100 μm. 18 For the experiments reported here, these are effectively infinitely thick, and the rear surface plays no role. In our models and fits we fix the thickness at 100 μm.
We perform steady-state PL and TRPL measurements using pulsed laser diodes (PicoQuant, 450 nm and 705 nm), focused on the samples using an optical microscope equipped with a cryogenic stage. The spot size for TRPL measurements was approximately 2.2 μm (Gaussian FWHM, determined using an Air Force target), and the pulse FWHM was approximately 100 ps. The repetition rate was 5 MHz and 2.5 MHz for BaZrS3 and Ba3Zr2S7 samples, respectively. The measurement spot on the sample was kept constant by aligning to camera images of unique marks. We control the pump intensity using a continuously-variable attenuator. The emitted light is analyzed using a spectrometer and CCD for steady-state spectra, or directed to an avalanche photodiode for TRPL.
The spectrometer and the Si CCD have wavelength-dependent efficiency. To account for these, we convolve these instrument response curves to create a spectral correction curve, which we then use to scale the measured data. We use the resulting spectrally-corrected data for further analysis.
Data analysis methods
To model the TRPL data (Fig. 1), we assume an absorber layer with an SRH lifetime τSRH (the low-injection limit) and ambipolar mobility μ (related to ambipolar diffusivity Da via the Einstein relation). We model surface recombination via surface recombination velocity (S), and Auger recombination via an Auger coefficient (B). τSRH and S are injection-dependent according to typical semiconductor statistics. 19,20 We assume that excited-state charge transport during TRPL experiments is best-modeled as ambipolar, because of the high injection levels reached, and because the chalcogenide perovskites have very low carrier concentrations at equilibrium in the dark.
The model is refined using a nonlinear, least-squares fitting routine in MATLAB. We refine the model globally across multiple data sets, collected at a fixed temperature but with varying illumination wavelength and pump fluence, to more accurately distinguish the different recombination rates. The physical semiconductor model is implemented in the software PC-1D, which considers generation, recombination, drift, and diffusion along one spatial dimension. 19 PC-1D calculates spatial profiles of charge carriers at discrete time steps. We integrate the PL emission for each spatial profile at each time step to produce a model of TRPL. All software configuration settings are set to 'PC1D5', to avoid the implementation of empirical models not applicable to chalcogenide perovskites. In the PC1D input file, we use an effective mass ratio (majority to minority carriers) of 2 for both BaZrS3 and Ba3Zr2S7. In our models and fits we fix the thickness at 100 μm. We also assume that the front and back surface recombination rates are equal. We approximate the unknown majority carrier concentration and intrinsic carrier density using values typical for silicon, e.g., Na = 10¹⁶ cm⁻³, ni(300 K) = 10¹⁰ cm⁻³; numerical studies demonstrate that our results here are insensitive to these approximations within very wide ranges covering all reasonable values.

Fig. 1 Flow chart of TRPL data analysis. Multiple data sets are acquired at a given temperature with varying pump fluence and/or wavelength. The data are modeled using a generation-recombination-drift-diffusion solver in one spatial dimension, and the model is refined using global least-squares minimization of all data sets (indexed by k) with customizable parameter sets (pk). The results are best-estimates of properties such as surface recombination velocity (S), non-radiative recombination lifetime (τSRH), and ambipolar diffusivity (Da) that are directly relevant to optoelectronic performance.
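The global-fitting idea can be sketched as follows. This is not the MATLAB/PC-1D code used in this work: `simulate_trpl` is a stand-in for the drift-diffusion solver (a bare exponential here, so the scaffold runs), and all numbers are placeholders.

```python
# Schematic global least-squares fit of several TRPL data sets with shared parameters.
import numpy as np
from scipy.optimize import least_squares

def simulate_trpl(t_ns, tau_srh_ns, fluence):
    """Placeholder for the 1D generation-recombination-drift-diffusion model.
    A real implementation would also take S and the ambipolar mobility and
    integrate carrier depth profiles at each time step."""
    return fluence * np.exp(-t_ns / tau_srh_ns)

def residuals(params, datasets):
    (tau_srh_ns,) = params                       # parameter shared by all data sets
    return np.concatenate(
        [simulate_trpl(t, tau_srh_ns, fluence) - counts
         for t, counts, fluence in datasets])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 400.0, 200)                 # ns
datasets = [
    (t, 1.0 * np.exp(-t / 55.0) + 0.010 * rng.standard_normal(t.size), 1.0),
    (t, 0.5 * np.exp(-t / 55.0) + 0.005 * rng.standard_normal(t.size), 0.5),
]
fit = least_squares(residuals, x0=[30.0], args=(datasets,))
print(f"global best-fit tau_SRH = {fit.x[0]:.1f} ns")
```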
Results: steady-state photoluminescence
The steady-state PL spectra contain information on the band-to-band emission, defect level emission, and other recombination processes. We present in Fig. 2a and b the results of steady-state PL measurements on Ba3Zr2S7 at temperatures ranging from 78 to 275 K. The PL peak position, peak shape, and intensity all vary with temperature. The data presented in Fig. 2a are raw data, as-measured. In Fig. 2b we present the data after applying a spectral correction, which accounts for the wavelength-dependent detection efficiency of our equipment. After this correction, it is clear that the PL emission develops a two-peak structure that depends on temperature. At lower temperature, the dominant peak is at 1011 nm (1.227 eV). As the temperature is raised, the high-energy shoulder develops into a distinct peak at approximately 997 nm (1.244 eV). We hypothesize that this is the direct band-to-band transition, placed at 1.28 eV in our earlier work based on PL at room temperature. 6 The peak separation is approximately 17 meV. Not enough is known about the excitonic properties and the defect chemistry of chalcogenide perovskites to identify the origin of this peak splitting. However, we note that 17 meV is comparable to the free exciton binding energy of halide perovskites, which have similar band gap and large dielectric susceptibility to the chalcogenide perovskites. 8,21 Therefore, it seems possible that the principal, lower-energy peak represents radiative recombination of free excitons. It is clear from the data, and from the small energy scale, that the two emissive states are in rapid thermal communication at room temperature.
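As a quick arithmetic check of the quoted peak positions, using the standard conversion E(eV) ≈ 1239.84/λ(nm):

```python
# Photon energies of the two PL peaks and their separation.
for wavelength_nm in (1011, 997):
    print(f"{wavelength_nm} nm -> {1239.84 / wavelength_nm:.4f} eV")
print(f"separation ≈ {(1239.84 / 997 - 1239.84 / 1011) * 1e3:.0f} meV")
```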
In Fig. 2c and d we present further analysis of the spectrally-corrected PL data. We analyze the data presented in Fig. 2a and b as well as data measured on a second spot on the sample. In Fig. 2c, we plot the PL peak positions, determined by fitting a single Gaussian curve to the spectrally-corrected data at each temperature and spot. There is an offset of approximately 28 meV between the data measured on the two separate spots, apparently resulting from sample heterogeneity. The negative slope of all data suggests that Eg increases with increasing temperature. Conventional, tetrahedrally-coordinated, sp3-bonded semiconductors have negative band gap temperature coefficients: Eg decreases with increasing temperature. A positive temperature coefficient is observed in less conventional and more ionic semiconducting compounds, including binary lead chalcogenides and halide perovskites. 22-24 dEg/dT > 0 is favorable for solar cells, which typically experience higher temperatures during operation (e.g., in desert locations) than during early-stage laboratory testing.
In Fig. 2d we plot the integrated PL signal vs. inverse temperature. We fit the PL intensity (I) to an Arrhenius model of a single non-radiative recombination process, using the expression 25 I(T) = I0/[1 + a exp(−EA/kBT)], where EA is the activation energy of the non-radiative recombination process described by the lifetime τ0, and a = τR/τ0, where τR is the radiative recombination lifetime, which is assumed temperature-independent. We estimate EA = 130.0 ± 19.0 meV and 306 ± 1 meV for the two measurement spots. EA determined in this way likely represents an average of multiple, temperature-dependent recombination processes, rather than a single process. The spatial heterogeneity in the temperature-dependence, expressed by the substantial difference in EA, likely derives from varying surface conditions; Ba3Zr2S7 crystal surfaces are faceted and we measure the samples as-grown, without post-growth polishing or surface passivation. 18 These differences notwithstanding, from the data and analysis we conclude that the dominant thermally-activated recombination process (or processes) in our Ba3Zr2S7 crystals have activation energy well above 100 meV. When we apply an Arrhenius model to the Gaussian-best-fit intensities of the two peaks in each spectrum, we find that the thermal activation energies are There are multiple scenarios that could produce a thermally-activated rate of non-radiative recombination. Carrier capture by recombination centers could be phonon-assisted, resulting in an activation energy characteristic of the most relevant phonon mode. 26 The temperature-dependence may also result from shallow, non-radiative traps that temporarily capture photo-generated carriers. 27 In this case, the activation energy does not represent the recombination process itself, but rather the energy for trap emission to the relevant band. Unfortunately, not enough is known about the defect chemistry of chalcogenide perovskites to distinguish between these scenarios. We further discuss below the possible influence of shallow traps on the TRPL results.
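A minimal sketch of such an Arrhenius fit, assuming the single-barrier quenching form written above and using synthetic temperatures and intensities rather than the measured data, could look like this:

```python
# Fit of integrated PL intensity vs. temperature to I(T) = I0 / (1 + a*exp(-EA/(kB*T))).
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(T, I0, a, EA):
    """Single-barrier thermal quenching model."""
    return I0 / (1.0 + a * np.exp(-EA / (KB * T)))

T = np.array([78, 100, 125, 150, 175, 200, 225, 250, 275], dtype=float)  # K
rng = np.random.default_rng(1)
I = arrhenius(T, 1.0, 1e4, 0.15) * (1 + 0.01 * rng.standard_normal(T.size))

popt, pcov = curve_fit(arrhenius, T, I, p0=[1.0, 1e3, 0.1])
print(f"E_A = {popt[2] * 1e3:.0f} ± {np.sqrt(pcov[2, 2]) * 1e3:.0f} meV")
```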
Results: time-resolved photoluminescence
We present representative TRPL data and best-fit models in Fig. 3, for BaZrS3 and Ba3Zr2S7 crystals. In Fig. 3a, we show results for BaZrS3 at room temperature, measured with 450 nm illumination and a 505 nm long-pass filter. The inset shows the steady-state PL spectrum, with a peak consistent with Eg = 1.9 eV. The feature near 500 nm (cut off by the long-pass filter) also contributes to the measured TRPL data, which is not wavelength-resolved. This feature is of unknown origin, but fortunately it makes only a minor contribution to the integrated PL intensity. From the data we estimate τSRH = 55.2 ± 28.1 ns, S = 1.87 × 10⁴ ± 1.14 × 10⁴ cm s⁻¹, and μ = 146.2 ± 525.6 cm² V⁻¹ s⁻¹. The large uncertainty in the mobility highlights the value of global fitting across multiple data sets with varying experimental parameters, to more precisely estimate material parameters. In Fig. 3b, we show a global fit of three TRPL data sets measured on a Ba3Zr2S7 crystal at 200 K with varying levels of pump fluence. From the data we estimate τSRH = 112.9 ± 1.5 ns, S = 3.69 × 10⁴ ± 2.9 × 10² cm s⁻¹, and μ = 2607.47 ± 85.5 cm² V⁻¹ s⁻¹. The global-fit routine results in much-reduced fractional uncertainty in the best-fit estimate of μ.

Fig. 4 TRPL spectra measured on a Ba3Zr2S7 crystal for temperature between 78 and 300 K, measured with pump wavelength 705 nm and pump fluence of 3.85 mJ cm⁻² (1.37 × 10¹⁶ ph cm⁻²) per pulse. The TRPL data (points) and best-fit models (lines) are normalized to accentuate the rise in decay rate with increasing temperature.
We next discuss temperature-dependent results on Ba3Zr2S7. In Fig. 4, we show normalized TRPL spectra and the best-fit models for temperature from 78 to 300 K (the data is normalized here for visualization, but the fitting routines were run on the raw data). It is apparent that the decay rate increases with increasing temperature. In Fig. 5 and Table 1 we present the details of the data analysis. We do not find a notable temperature-dependence for the mobility within the temperature range from 78 to 300 K, which suggests that the mobility is limited by defect scattering throughout this range. The two non-radiative recombination processes, SRH and surface recombination, both accelerate with increasing temperature. The temperature-dependence of the surface recombination process (S) is far more pronounced than that of the bulk process (τSRH). This, combined with the large and position-dependent activation energy for non-radiative recombination determined from static PL (Fig. 2d), suggests that the temperature-dependence is dominated by processes at the sample surface, further discussed below.
Discussion
Based on the TRPL analysis for BaZrS3 and Ba3Zr2S7, we can obtain best-estimates of semiconductor parameters relevant for optoelectronic and energy-conversion device performance. For thin-film solar cells, the excited-state diffusion length (LD) is a critical parameter, as it describes the ability of a material to support a photocurrent and photovoltage even in the absence of a depletion region. We estimate the diffusion length as LD = √(Da τSRH); we find at 300 K that LD = 4.6 ± 6.2 μm for BaZrS3 and LD = 7.1 ± 2.0 μm for Ba3Zr2S7. These are promising results because they are larger than a typical absorber thickness for a thin-film solar cell, and because they were measured on crystals with no attempt at point defect control or surface passivation.

Fig. 6 SRH lifetime (τSRH) and solar cell figure of merit FPV = aLD vs. energy conversion efficiency for various photovoltaic materials. BaZrS3 and Ba3Zr2S7 are plotted as dashed lines corresponding to the best-estimates of our TRPL data analysis; no solar cells of either material have yet been reported. In (a), we only show data for which τSRH and device measurements were performed on samples that were synthesized in the same laboratory and using as close to the same procedure as is reasonably possible; in (b), we additionally require that D and a are reported on comparable samples, with the exception of silicon and GaAs for which we use tabulated values. Data are adapted from ref. 28 and 30-32.

It is interesting to compare our results to more well-established photovoltaic (PV) absorber materials. In Fig. 6a, we locate our measured room-temperature values of τSRH for BaZrS3 and Ba3Zr2S7 on a plot of τSRH vs. solar cell efficiency (η) for a number of PV materials. 28 In each case, we represent τSRH and η reported by the same research groups, and measured on as close to the same material as possible. A large τSRH is a necessary prerequisite for high performance, and the data for BaZrS3 and Ba3Zr2S7 are comparable to the best-performing, established materials: CdTe, CIGS, and lead halide perovskites. We note that, for a particular device, the effective lifetime (i.e., not τSRH) is what determines the Fermi level splitting and the photovoltage, and ultimately the performance. The effective lifetime depends on thickness and interfaces, in addition to the bulk properties. However, as more parameters are included in any given figure of merit, the quantity and diversity of published data becomes more limited. In making Fig. 6a, we choose to focus on a bulk property relevant to materials selection, as appropriate for chalcogenide perovskites at this early stage of development.
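As a consistency check of the quoted diffusion lengths (our own arithmetic, not code from the paper), applying the Einstein relation Da = μ·kBT/q to the room-temperature BaZrS3 best-fit values quoted above reproduces the ~4.6 μm figure:

```python
# Diffusion length from ambipolar mobility and SRH lifetime at 300 K.
import math

KT_OVER_Q = 0.02585                               # thermal voltage at 300 K, V

def diffusion_length_um(mu_cm2_per_Vs, tau_s):
    D = mu_cm2_per_Vs * KT_OVER_Q                 # ambipolar diffusivity, cm^2/s
    return math.sqrt(D * tau_s) * 1e4             # cm -> micrometers

# BaZrS3 best-fit values: mu ≈ 146.2 cm^2 V^-1 s^-1, tau_SRH ≈ 55.2 ns
print(f"L_D ≈ {diffusion_length_um(146.2, 55.2e-9):.1f} um")   # ~4.6 um
```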
In Fig. 6b we plot η vs. a solar cell figure-of-merit, FPV = aLD, where a is an optical absorption coefficient. 28 For each material, we calculate FPV using a as measured at the knee in the curve of log10(a(E)). For BaZrS3 and Ba3Zr2S7, we use values 4940 cm⁻¹ (at 636 nm) and 4240 cm⁻¹ (at 861 nm), respectively. FPV is a comparison of length scales: the depth required to absorb light (1/a) compared to the distance that excited-state charge carriers can diffuse before non-radiative recombination (LD). FPV > 1 is a requirement for high-performance solar cells. We find that FPV ≈ 10 for both BaZrS3 and Ba3Zr2S7, placing these chalcogenide perovskites among the very best candidates for thin-film solar cells. It remains to be seen whether these results, measured on microscopic single-crystals, bear out in thin films and in solar cell devices.
Our analysis enables us to disentangle the contributions of surface and bulk recombination in TRPL spectra. However, the recombination parameters determined in this way (τSRH and S) likely are effective parameters, representing multiple simultaneous processes, that could not be distinguished without a more complex model and more, complementary experiments. We highlight in particular the possible effect of shallow traps. Shallow traps that momentarily capture excited carriers can forestall radiative recombination, making the rate of non-radiative recombination appear slower than it actually is. Several approaches are available to assess the effect of shallow traps on TRPL data, including varying the injection level and the temperature. 27 Our temperature-dependent results (Fig. 2c and 5) suggest that surface processes have the strongest temperature dependence, and it is quite possible that the activation energy modeled in Fig. 2d represents thermal emission from near-surface traps. The bulk SRH recombination times τSRH determined here may also be artificially elongated due to trapping effects, although the data suggest that such effects are less prominent in the bulk than at the surface. Our injection-dependent measurements (Fig. 3b) also do not indicate an effect of carrier trapping on τSRH. Trap-assisted slowing down of TRPL decay is related to trap-assisted persistent photoconductivity. 29 We suggest for future work that a comparison of photoconductive and photoluminescent transients under varying injection and temperature, combined with a more full-fledged model, could enable estimation of trap occupancy and the quasi-Fermi level splitting.
In the context of Fig. 6a, we care about τSRH inasmuch as it predicts solar cell performance. A high concentration of traps with a large thermal emission energy, well above kBT, will greatly suppress the quasi-Fermi level splitting, the open-circuit voltage, and device performance. However, shallow traps that remain in rapid thermal communication with the nearby band edge will have a lesser effect, likely manifesting as a slight suppression in carrier mobility and diffusion length. We expect that future work making and testing chalcogenide perovskite solar cells will quantify these effects.
Our estimates for Auger recombination are likely not reliable, as evidenced by the large best-fit uncertainty ranges. These uncertainties originate from the possible contribution of extraneous signal to the TRPL data at very short times.
Our estimates of LD are larger than the illumination spot size in our experiments. This represents a deficiency in the modeling, and likely introduces inaccuracy in the results. For instance, a measurement spot size smaller than the diffusion length may artificially reduce the inferred lifetime (i.e., accelerate the apparent decay to equilibrium) because diffusion of excess carriers out of the measurement spot is an additional, unaccounted-for mechanism that locally reduces the excess carrier concentration. In a preliminary attempt to estimate these effects, we repeated measurements on the same spot with objectives of varying magnification, to vary the spot size. We do observe differences in the decay at short time, but no significant change in the long-time decay kinetics. We leave for future work a full study including modeling in two dimensions.
Conclusion
We use steady-state and time-resolved photoluminescence to study chalcogenide perovskites in the Ba-Zr-S system. We demonstrate a data analysis workflow for TRPL that allows a direct extraction of the material parameters that describe excited-state charge carrier dynamics, using a semiconductor physics model embedded in a global nonlinear-least-squares regression routine. We show that BaZrS3 and Ba3Zr2S7 have bulk Shockley-Read-Hall lifetime (τSRH) on the order of 50 ns, and ambipolar diffusion length (LD) on the order of 5 μm. These results suggest that chalcogenide perovskites may support very high-performance solar cells. Our results were obtained by measurements on microscopic single-crystals, and it remains to be seen what properties and performance can be measured in thin films and solar cells.
Conflicts of interest
There are no conflicts to declare.
"year": 2022,
"sha1": "f80e282ffc1949060534f9ee19cc8005f52d49b1",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/fd/d2fd00047d",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a6dbeec845dacb8551140964fcff23f52c185e32",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
NV center magnetometry up to 130 GPa as if at ambient pressure
Engineering a layer of nitrogen-vacancy (NV) centers on the tip of a diamond anvil creates a multipurpose quantum sensor array for high-pressure measurements, especially for probing magnetic and superconducting properties of materials. Expanding this concept above 100 GPa appears to be a substantial challenge. We observe that deviatoric stress on the anvil tip sets a limit at 40-50 GPa for practical magnetic measurements based on optically detected magnetic resonance (ODMR) of NV centers under pressure. We show that this limit can be circumvented up to at least 130 GPa by machining a micropillar on the anvil tip to create a quasi-hydrostatic stress environment for the NV centers. This is quantified using the pressure dependence of the diamond Raman shift, the NV ODMR dependence on applied magnetic field, and the NV photoluminescence spectral shift. This paves the way for direct and reliable detection of the Meissner effect in superconductors above 100 GPa, such as superhydrides.
Introduction. The diamond anvil cell (DAC) is routinely used to synthesize compounds under megabar (100 GPa) pressures, exhibiting novel phenomena and remarkable properties. Recent examples, such as the observation of metallic hydrogen [1], superconductivity close to ambient temperature in superhydrides [2][3][4], or superionic water ice [5], still lack the detailed magnetic or transport measurements needed for their definite proof and clear understanding. In particular, magnetic measurements remain challenging at megabar pressures because they are mainly based on flux detection by inductive coils and must thus extract the signal of the few-micrometer samples from the much larger magnetic background signal of the bulky DAC apparatus. This constraint can be circumvented by implementing in the DAC sensing methods that exploit the magnetic sensitivity of nitrogen-vacancy (NV) centers in diamond [6][7][8][9]. This method offers tabletop optical microscopy instrumentation, mapping of the magnetic field in the sample chamber with micrometer spatial resolution, and the absence of any sensitivity decrease with sample size down to the micrometer scale. Another key feature is the easy combination with synchrotron X-ray characterizations to correlate the magnetic or superconducting properties with a well-defined crystallographic structure [10]. Yet, the extension of this technique to extreme pressures remains a challenge [11]. We investigate here how the existence of deviatoric stress in the diamond anvil sets effective limits to the magnetic response of NV centers localized at the anvil tip to maximize sample proximity [6,7]. We then propose and implement a method that overcomes that limit and keeps the full NV quantum sensing capabilities at pressures above 100 GPa.
Experimental configuration. The negatively charged NV center is a point defect of diamond that emits visible photoluminescence (PL) by absorbing green photons and re-emitting red photons (at ambient pressure), with an electronic spin s = 1 in the ground and excited states. In the absence of external magnetic and stress fields, the ms = ±1 spin sublevels of the ground state are degenerate and separated by D = 2.87 GHz from the ms = 0 sublevel (Fig. 1a). Spin-dependent PL arises from a spin-selective difference in the non-radiative coupling to metastable singlet states, which also induces optical pumping into the ms = 0 state under green illumination [12]. The energy difference between the sublevels of the ground state can then be read out from the change of the NV luminescence intensity upon scanning the frequency of an additional microwave excitation. Dips in the PL intensity indicate that the excitation microwave frequency is resonant with a transition between two sublevels, leading to optically detected magnetic resonance (ODMR), which can be easily implemented by optically addressing the NV centers through the diamond anvil [6]. Here we use the same experimental configuration as in Ref. [6], keeping two crucial characteristics: 1) the NV centers are integrated in the DAC device by mounting an ultra-pure type-IIas, Almax-Boehler-design, [100]-cut diamond anvil with a dense ensemble of NV centers (typically 10⁴ NV/µm²) implanted at about 10 nm beneath the anvil surface using a nitrogen focused ion beam (FIB) [13] (Fig. 1b); 2) the microwave excitation is applied using an external single-turn coil above the rhenium gasket of the DAC. The metallic gasket is machined with a slit, filled with an epoxy-glue mixture ensuring sample confinement and DAC mechanical stability, that re-distributes the induced currents in the metal, leading to a focusing and amplification of the microwave flux in the sample chamber, similarly to a Lenz lens [14] (Fig. 1c). Upon pressure increase, the PL excitation wavelength was decreased to match the blueshift of the NV absorption spectrum [11] by using continuous-wave (cw) lasers at successive wavelengths of 532, 488, 457 and 405 nm. A customized confocal optical microscope was used to collect the PL. A static vector magnetic field was applied on the DAC using three Helmholtz coil pairs with an amplitude ranging between 0 and 10 mT. The magnetic field was aligned along the DAC axis with an accuracy of ±0.5°. This orientation corresponds to the diamond [100] crystal axis, for which all NV centers have equivalent responses to stress and magnetic field. Pressure in the DAC was measured using the calibrated diamond Raman phonon mode at the anvil tip [15].
Stress effect on the NV magnetic response. We performed cw-ODMR experiments on the NV centers under pressures ranging from 10 GPa to 70 GPa. At each pressure point, we collected the ODMR spectrum of the NV ensemble under varying amplitude of the applied magnetic field. The data are shown in Fig. 1d. Four effects of stress on the ODMR signals are observed. First, the zero-field center frequency D = 2.87 GHz increases almost linearly, with a slope of 9.6 MHz/GPa, to a value D + δ, where δ is the pressure-induced variation. Second, a splitting ∆σ appears between the transition lines in the absence of an external magnetic field. This splitting increases almost linearly with pressure, with a slope of 3.9 MHz/GPa, and originates in the deviatoric stress at the anvil culet. Consequently, at a given pressure, the quasi-linear evolution of the Zeeman splitting due to the applied magnetic field can only be recovered above a compensating magnetic field amplitude that increases with pressure. This detrimental influence of stress hence weakens the NV magnetic sensitivity. Furthermore, the larger bias magnetic field required is not aligned with a given NV axis here, in order to overlap the responses from all NV orientations, and thus mixes the sublevels of the ground state. This mixing perturbs the optically induced spin polarization and quenches the PL [17]. Third, the shape of the ODMR spectra differs from the conventional symmetrical pair of peaks. The contrast of the low-frequency branch becomes gradually smaller than that of the high-frequency branch. After vanishing at a pressure around 40 GPa, a slightly positive contrast reappears (increase of PL at resonance) above 50 GPa under a high enough magnetic field. Finally, the overall observed ODMR contrast decreases severely under pressure.
In the diamond lattice under mechanical stress (or, equivalently, strain), the Hamiltonian describing the NV center ground state is modified by a spin-mechanical interaction [18,19] related to the stress tensor σ. The stress tensor must exhibit the cylindrical symmetry of the anvil. At the anvil tip, the stress components parallel (σ∥) and perpendicular (σ⊥) to the surface differ. Due to continuity of the normal stress component, σ⊥ is equal to the experimental pressure P in the DAC chamber. The tangential component, σ∥, is reduced by a factor α compared to σ⊥. Using a simplified model of a semi-infinite anvil with a flat face and a circularly symmetric distribution of pressure applied to this face, the α parameter was estimated to be about 0.6 [20]. Neglecting off-diagonal shear stress components, the stress tensor then reads as:

σ = diag(σ∥, σ∥, σ⊥) = P diag(α, α, 1). (1)

Using this stress tensor, the diagonalization of the NV ground state Hamiltonian yields modified spin resonance frequencies, which can be approximated to first order as:

ν± = D + δ ± ∆, (2)

where δ is the spectral shift due to compression, and ∆ = √(∆σ² + ∆B²) is the quadratic sum of the splittings respectively induced by the stress and by the magnetic field (see Supplementary Material for the full expression). Since eq. (2) is exact only for low off-axis magnetic field, a full numerical diagonalization was used to accurately fit the measured resonance frequencies, as shown by the green dashed lines in Fig. 1d. Only two parameters, α and P, are hence needed to predict the magnetic field response under stress. We obtained a value α = 0.56 that is essentially constant with pressure, quantifying a deviatoric stress close to the 0.6 value given in Ref. [20].
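A minimal numerical sketch of such a diagonalization is given below. It is not the authors' code: the deviatoric stress is folded into an effective axial shift δ and an effective transverse splitting E = ∆σ/2 acting on the spin-1 ground state, which is an assumed simplification, and the example numbers (pressure, slopes, field) are illustrative only.

```python
# Numerical diagonalization of a spin-1 NV ground-state Hamiltonian under
# an effective stress (delta, E) and a magnetic field given in the NV frame.
import numpy as np

# Spin-1 operators in the {|+1>, |0>, |-1>} basis (units of hbar)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

D0 = 2870.0        # zero-field splitting at ambient pressure, MHz
GAMMA_NV = 28.0e3  # NV gyromagnetic ratio, MHz/T

def odmr_frequencies(delta_mhz, e_mhz, b_nv_frame_tesla):
    """ms = 0 -> ±1 transition frequencies (MHz); B expressed in the NV frame."""
    bx, by, bz = b_nv_frame_tesla
    H = ((D0 + delta_mhz) * (Sz @ Sz)
         + e_mhz * (Sx @ Sx - Sy @ Sy)
         + GAMMA_NV * (bx * Sx + by * Sy + bz * Sz))
    levels = np.sort(np.linalg.eigvalsh(H))
    return levels[1] - levels[0], levels[2] - levels[0]

# Example: flat culet at 40 GPa (slopes quoted above), 5 mT along the DAC axis;
# for a [100]-cut anvil every NV axis makes ~54.7 deg with that axis.
P = 40.0
delta = 9.6 * P          # MHz, shift of the zero-field center frequency
E = 0.5 * 3.9 * P        # MHz, assumed: half the zero-field splitting Delta_sigma
theta = np.arccos(1 / np.sqrt(3))
B = 5e-3 * np.array([np.sin(theta), 0.0, np.cos(theta)])
f_low, f_high = odmr_frequencies(delta, E, B)
print(f"resonances: {f_low:.0f} MHz and {f_high:.0f} MHz")
```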
Deviatoric stress thus introduces major modifications to the NV behavior as the anisotropic compression of the diamond host lattice distorts the C3v symmetry of the NV center. Here we quantified changes within the NV ground triplet states, but the stress dependence of the singlet states and the excited triplet states remains unexplored and is difficult to assess. As a hypothesis, we attribute the observed modification and ultimate loss of ODMR contrast to the effect of deviatoric stress on these levels involved in the contrast mechanism [21]. This hypothesis is corroborated by recent results obtained on microdiamonds compressed quasi-hydrostatically inside the sample chamber of a DAC, for which the ODMR signal could be conserved up to 140 GPa [22]. These results converge toward a possible circumventing strategy by ensuring hydrostatic compression of the NV centers.
Restoring hydrostaticity with diamond microstructuring. A strategy to mitigate deviatoric stress can be implemented by microstructuring the diamond anvil culet. A successful geometry is presented in Fig. 2a. A pillar, 7 µm in diameter and surrounded by a 2 µm deep trench, was FIB-machined on an NV-implanted diamond anvil culet. The pillar surface is thus disconnected from the anvil surface subjected to the deviatoric stress induced by anvil cupping tension [23,24]. This also allows the pressure-transmitting medium (PTM) to fill the trench and immerse the pillar in a stress field close to hydrostatic conditions. The pillar is then equivalent to a diamond microdisk that would be integrated in the sample chamber of the DAC, but it ensures perfect reproducibility and removes any interface with the diamond culet, thereby optimizing PL measurements. As seen below, this design is also very robust and can withstand extreme pressures.
The hydrostaticity of the stress exerted on the diamond under pressure can be tested by measuring the Raman frequency of the diamond optical phonon. Under hydrostatic conditions, the dependence of the Raman frequency on the diamond volume follows a Grüneisen relation with parameter γ = 0.97(1), whereas the frequency shift is smaller under deviatoric stress [16]. As seen in Fig. 2c, the Raman spectra measured at the diamond anvil culet on the micropillar and away from it differ. In both cases, the broad asymmetric peak is associated with the stress distribution within the optically probed thickness of the anvil, and its high-frequency edge is used to estimate the pressure [15]. At the micropillar, a well-separated peak appears with a higher frequency shift. The pressure evolution of its center wavenumber perfectly matches the value obtained for diamond under hydrostatic pressure [16], as shown in Fig. 2d. This indicates that the tip of the micropillar hosting part of the NV center layer is close to hydrostatic pressure.
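For reference, the Grüneisen relation invoked above can be restated compactly (this is our paraphrase under the standard definition of the mode Grüneisen parameter, not a quotation from Ref. [16]):

ω(V)/ω₀ = (V₀/V)^γ,   with   γ = −∂ln ω / ∂ln V = 0.97(1),

so that, under hydrostatic conditions, the Raman shift follows directly from the compressed diamond volume given by the equation of state, while a smaller measured shift at the same pressure signals deviatoric stress.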
Accordingly, the PL spectrum of the NV layer in the micropillar shows a pressure-induced blue shift (Fig. 2b) that can be quantified with the zero-phonon line (ZPL) [11]. While the NV ZPL dependence on pressure is not linear, its evolution becomes linear when plotted versus the compressed diamond volume estimated using the diamond equation of state [25]. A linear fit gives a slope of −769 ± 4 meV/(cm³·mol⁻¹). A similar measurement performed on an unmodified diamond anvil yields a weaker slope of −434 ± 2 meV/(cm³·mol⁻¹). This significant difference in the pressure dependence of the ZPL is another indication of the reduction in deviatoric stress achieved by the microstructuring.
ODMR measurements were also performed for the NV centers hosted in the micropillar. As shown in Fig. 3, which presents the pressure evolution up to 130 GPa, most of the detrimental effects previously observed and attributed to deviatoric stress are now suppressed. The spectra consistently show a negative contrast that remains almost constant up to at least 100 GPa. Further increasing the pressure up to 130 GPa (where the experiment was stopped by one of the anvils breaking), a slight decrease of the contrast was observed, which is attributed to a degraded efficiency of the microwave excitation at frequencies higher than 4 GHz. The magnetic field response also remains unchanged across the whole tested pressure range. The ODMR spectra exhibit a very low zero-field splitting slope ∆σ of 0.29 ± 0.03 MHz/GPa with increasing pressure, and a shift of the zero-field center frequency D + δ of 13.42 ± 0.14 MHz/GPa. As shown in Fig. 4, these values differ significantly from those measured for NV centers in standard anvils, and they were consistent across four experimental runs performed on different anvils, with pillars machined using either a FIB or a femtosecond laser. Applying the model described above for the spin-mechanical interaction, the evolution of the ODMR eigenfrequencies versus the applied magnetic field was well fitted using an anisotropy parameter α ≈ 0.95 that stays constant within the pressure range tested (Fig. 3a). Although only α = 1 would indicate perfect hydrostaticity, this result gives an independent confirmation of the almost hydrostatic pressure applied to the NV centers in the micropillar. Consequently, the microstructuring strategy enables efficient magnetic field sensing at pressures higher than 100 GPa, with a sensitivity improved by orders of magnitude compared to the use of a standard anvil with a flat tip (see Supplementary Material).
Conclusion. Microstructuring of diamond anvils, implemented here by machining a micropillar on the culet, provides quasi-hydrostatic conditions for NV centers implanted in the anvil up to 100 GPa and above. With this design, NV magnetic sensing can be performed under such extreme pressures as if at ambient pressure. This work opens the way to sensitive and spatially resolved magnetic measurements in the constrained environment of the DAC, which should now be used for a convincing observation of the Meissner effect in superhydrides.
We are grateful to Olivier Marie and Grégoire Le Caruyer for machining of the diamond culets, to Florent Occelli for assistance in DAC preparation, and to Dorothée Colson and Anne Forget for annealing the diamond anvils after nitrogen implantation. This work has received funding from the EMPIR program co-financed by the Participating States and the European Union's Horizon 2020 research and innovation program (20IND05 QADeT), from the Agence Nationale de la Recherche under the project SADAHPT and the ESR/EquipEx+ program (grant number ANR-21-ESRE-0031), and from the Paris Île-de-France Région in the framework of DIM SIRTEQ. JFR acknowledges support from Institut Universitaire de France.
Long-term effects of mild traumatic brain injury on cognitive performance
Although a proportion of individuals report chronic cognitive difficulties after mild traumatic brain injury (mTBI), results from behavioral testing have been inconsistent. In fact, the variability inherent to the mTBI population may be masking subtle cognitive deficits. We hypothesized that this variability could be reduced by accounting for post-concussion syndrome (PCS) in the sample. Thirty-six participants with mTBI (>1 year post-injury) and 36 non-head injured controls performed information processing speed (Paced Visual Serial Addition Task, PVSAT) and working memory (n-Back) tasks. Both groups were split by PCS diagnosis (4 groups, all n = 18), with categorization of controls based on symptom report. Participants with mTBI and persistent PCS had significantly greater error rates on both the n-Back and PVSAT, at every difficulty level except 0-Back (used as a test of performance validity). There was no difference between any of the other groups. Therefore, a cognitive deficit can be observed in mTBI participants, even 1 year after injury. Correlations between cognitive performance and symptoms were only observed for mTBI participants, with worse performance correlating with lower sleep quality, in addition to a medium effect size association (falling short of statistical significance) with higher PCS symptoms, post-traumatic stress disorder (PTSD), and anxiety. These results suggest that the reduction in cognitive performance is not due to greater symptom report itself, but is associated to some extent with the initial injury. Furthermore, the results validate the utility of our participant grouping, and demonstrate its potential to reduce the variability observed in previous studies.
A variety of different aspects of cognitive performance have been investigated in the long-term after mTBI, using a number of different tasks. More importantly, tasks assessing the same cognitive function have varied in their difficulty, possibly leading to the inconsistent results. A challenging cognitive task may be required to observe the subtle long-term alterations in participants with mTBI (Segalowitz et al., 2001;Chen et al., 2004). Of particular utility in this regard are tasks that can be parametrically increased in difficulty (Braver et al., 1997;Pare et al., 2009), enabling an investigation of the effects of enhancing cognitive load.
Two tasks that can be parameterized in this way are the n-Back (assessing working memory) and the Paced Visual Serial Addition Task [PVSAT, assessing information processing speed (Fos et al., 2000)]. Both of these tasks have been used previously in mTBI research [n-Back: (McAllister et al., 1999, 2001; Wang et al., 2006; Catale et al., 2009); PVSAT: (Cicerone and Azulay, 2002; Vanderploeg et al., 2005; O'Jile et al., 2006; Wang et al., 2006; Mayer et al., 2009; Brenner et al., 2010b)], with the paced auditory serial addition task specifically created to investigate cognitive difficulties after TBI. However, few of the previous studies have used a range of difficulties within the PVSAT to assess cognition.
In addition, sampling of a mTBI population is challenging, as there is inherent heterogeneity between individuals (Shum et al., 2011), with differing severity of injury and subsequent outcome. One way of reducing variability is to split the mTBI sample by post-concussion syndrome (PCS) diagnosis (WHO, 1992), as has been argued previously (Hartlage et al., 2001;Cicerone and Azulay, 2002;Wang et al., 2006). Studies that have split their mTBI sample by PCS diagnosis have been relatively more consistent in their findings of cognitive deficit (Cicerone and Azulay, 2002;Kumar et al., 2005;Sterr et al., 2006;Wang et al., 2006;Chen et al., 2007;Ptito et al., 2007;Johansson et al., 2009).
There is some debate as to whether persistent PCS (>3 months) is due to biological factors arising from neural damage or to a psychological response to the mTBI (Mittenberg et al., 1992; Bailey et al., 2006; Mulhern and McMillan, 2006). It has been shown that subjective symptom report does not relate to objective symptoms (Nolin et al., 2006; Spencer et al., 2010). This has led some to suggest that PCS is not specific to mTBI (Sroufe et al., 2010), a finding we recently confirmed on a larger sample of 350 participants (Dean et al., 2012). However, the use of adequate control populations can help alleviate some of the problems associated with the non-specificity of PCS. Previous studies have used specific clinical populations such as those with posttraumatic stress disorder (PTSD), chronic pain, and patients with equivalent injuries to the body, sparing the head (Bell et al., 1999; Vanderploeg et al., 2009; Meares et al., 2011). It is also possible to control for post-concussion-like symptoms in healthy participants by splitting this group by PCS in a similar way to those with mTBI. Healthy control participants with levels of symptoms that would result in a PCS diagnosis can then be compared to those mTBI participants with PCS. Cognitive differences between these two groups may then be attributed to the report of PCS after mTBI, and not the symptoms alone. Furthermore, if PCS is induced to some extent by damage at the time of injury, then it can be expected that those mTBI participants with greater symptoms will perform worse on cognitive tasks, whereas there will be no correlation between performance and symptoms in control participants.
Based on the considerations above, the present study investigates working memory and information processing speed in participants a year or more post-mTBI compared to a non-head-injured control population. Both populations were assessed for PCS symptoms and split into those with and without ongoing PCS to form four participant groups: mTBI + PCS, mTBI − PCS, Control + PCS, and Control − PCS. Control participants are labeled as having PCS when they meet all the DSM-IV criteria (APA, 1994), with the exception of previous head injury.
These four groups were used to test the hypothesis that only participants who report persistent PCS after mTBI will show a cognitive deficit. In contrast, head-injured individuals who report no on-going PCS symptoms, and those without prior head injury (regardless of extent of post-concussion symptoms) are likely to have no evidence of cognitive dysfunction. Furthermore, the cognitive deficit in mTBI participants with PCS will become more apparent as the difficulty of the task is parametrically increased.
Recruitment
The study specifically aimed to recruit persons who had not sought medical attention following their mTBI, as a large number of those who sustain a mTBI go unreported in traditional hospital- and emergency department-based research (Segalowitz and Lawson, 1995; NCIPC, 2003; Bazarian et al., 2005). Consequently, participants were recruited from a database generated by a previous study (Dean et al., 2012), which used an online survey aimed at the general public. This survey was open to both those with and without head injury, and recorded demographic information, comprehensive details about any prior head injury (in order to determine whether any injury met the diagnostic criteria for mTBI), and questionnaires on PCS and co-variables as detailed below. Those reporting any form of head injury in the survey were subsequently screened for mTBI according to ICD-10 criteria. The study protocol was given a favorable opinion by the University of Surrey Ethics Committee. Written informed consent was obtained prior to participation.
Diagnosis
We determined mTBI using ICD-10 criteria (Holm et al., 2005). According to these criteria, participants must report one or more of the following: dizziness or confusion; loss of consciousness ≤30 min; post-traumatic amnesia <24 h. A case history was taken which included a description of the injury, the date of injury, any other head injuries as well as general health and lifestyle information. Only participants at least a year post-mTBI, with no report of litigation, major invasive head injury, chronic pain, or other neurological conditions were contacted to take part in the study. Control participants were selected as those who did not report any prior head injury.
We diagnosed PCS using the modified DSM-IV criteria specified by Mittenberg and Strauman (2000), which requires report of three or more of the following symptoms subsequent to head trauma: (1) headache, (2) vertigo or dizziness, (3) irritability or aggression on little or no provocation, (4) anxiety, depression, or affective instability, (5) becoming fatigued easily, (6) disordered sleep, (7) changes in personality, and (8) apathy or lack of spontaneity. The extent of PCS was measured using the Rivermead Post-Concussion symptoms Questionnaire [RPQ; (King et al., 1995)] and Rivermead Post-Concussion symptoms Questionnaire for Controls [RPQ-C; (Sterr et al., 2006;Dean et al., 2012)]. PCS diagnosis was achieved in the same way for control participants as mTBI participants, with the exception that controls had no "history of head trauma." The majority of control participants did not attribute their symptoms to any specific cause, with a few (n = 5) attributing them to generalized stress or anxiety.
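To make the diagnostic cut-off concrete, the following sketch implements the "three or more of eight symptoms" rule described above. The symptom field names are hypothetical placeholders, and this is an illustration of the rule rather than the scoring procedure actually used with the RPQ data.

```python
# Hypothetical sketch of the modified DSM-IV PCS rule: three or more of the
# eight listed symptoms must be endorsed (controls additionally lack head trauma).
PCS_SYMPTOMS = [
    "headache", "vertigo_or_dizziness", "irritability",
    "anxiety_depression_or_affective_instability", "easy_fatigue",
    "disordered_sleep", "personality_change", "apathy",
]

def has_pcs(reported_symptoms):
    """Return True if three or more of the eight PCS symptoms are endorsed."""
    count = sum(1 for s in PCS_SYMPTOMS if s in reported_symptoms)
    return count >= 3

# Example: endorsing headache, poor sleep, and fatigue meets the cut-off.
print(has_pcs({"headache", "disordered_sleep", "easy_fatigue"}))  # True
```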
Study groups
Once diagnosed, selected participants were then asked to take part in computer-based tasks of memory and mental agility. Four groups with 18 participants each were included in this study (for demographics, see Table 1). The groups were:

• mTBI + PCS: Participants who suffered a mTBI and have persistent PCS
• mTBI − PCS: Participants with mTBI but no current PCS (this does not preclude them having had acute PCS symptoms that have recovered)
• Control + PCS: Participants with PCS, but no history of brain injury
• Control − PCS: Participants with no history of brain injury and no PCS
COGNITIVE TASKS
Participants were presented with two behavioral tasks: the n-Back and the PVSAT. Both tasks looked identical: single digit numbers between 1 and 9 inclusive were presented on the screen one at a time, with 60 of these stimuli (including 20 randomly ordered target stimuli) per block. There was a total of 12 blocks for each task, with 3 randomly ordered repetitions of the 4 levels of difficulty. The order of presentation (n-Back/PVSAT) was counterbalanced across participants. The keys M and C on a standard keyboard were counterbalanced as target and non-target response buttons across the participants.
n-Back
There were four conditions: 0-Back, 1-Back, 2-Back, and 3-Back. The numbers were presented every 3 s. Participants were asked to press the target button when the number on screen matched the number presented one position previously (1-Back), two positions previously (2-Back), or three positions previously (3-Back). For every other number that did not match, participants were asked to press the non-target button. In the fourth condition (0-Back), a random number between 1 and 9 was designated as the target at the beginning of the block. Performance on the 0-Back condition should be near ceiling for all participant groups, and it was therefore used as a test of performance validity.
PVSAT
There were four conditions (2.5 s, 2 s, 1.5 s, and 1 s PVSAT), defined by the inter-stimulus interval (ISI). Each of the four ISIs was presented with each of the three target numbers. Participants were required to add the number on screen to the previously presented number. At the beginning of each block they were given a target number of 9, 10, or 11. If the addition equalled the target number, a "correct" response was required. An "incorrect" response was required for every other addition.
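To make the two target rules concrete, the sketch below flags targets for each trial in both tasks. It is a hypothetical illustration of the rules as described, not the stimulus-presentation code used in the study.

```python
# Hypothetical sketch of the target definitions for the two tasks.

def nback_is_target(sequence, i, n):
    """n-Back: the current digit is a target if it matches the digit n positions back."""
    return i >= n and sequence[i] == sequence[i - n]

def pvsat_is_target(sequence, i, target_sum):
    """PVSAT: the current digit is a target if it plus the previous digit equals the target sum."""
    return i >= 1 and sequence[i] + sequence[i - 1] == target_sum

digits = [3, 7, 3, 6, 4, 6]
print([nback_is_target(digits, i, 2) for i in range(len(digits))])   # 2-Back targets
print([pvsat_is_target(digits, i, 10) for i in range(len(digits))])  # target sum of 10
```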
DATA ANALYSIS
A series of One-Way ANOVAs was carried out on the questionnaire and demographic data (Table 1), with a between-subjects factor of GROUP and post-hoc Bonferroni-corrected comparisons. Paired-samples t-tests were performed for each of the groups to assess the difference between KSS Pre and Post. Gender differences were assessed using a χ² test. The cognitive tasks were analyzed using two separate mixed-model ANOVAs with a within-subjects factor of DIFFICULTY LEVEL (3-, 2-, 1-, 0-Back or 1, 1.5, 2, 2.5 s) and a between-subjects factor of GROUP (mTBI + PCS, mTBI − PCS, Control + PCS, Control − PCS), with post-hoc Bonferroni-corrected comparisons as appropriate. Subsequent to this, a series of One-Way ANOVAs was performed for each difficulty level within each task.
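For the follow-up One-Way ANOVAs at a single difficulty level, a minimal sketch of the group comparison might look as follows. The error-rate values are simulated placeholders, and the mixed-model ANOVAs themselves would require a dedicated repeated-measures routine rather than this simple between-groups test.

```python
import numpy as np
from scipy import stats

# Hypothetical error rates (one value per participant, n = 18 per group)
# for a single difficulty level, e.g. the 2-Back condition.
rng = np.random.default_rng(0)
mtbi_pcs    = rng.normal(0.25, 0.08, 18)
mtbi_no_pcs = rng.normal(0.12, 0.05, 18)
ctrl_pcs    = rng.normal(0.12, 0.05, 18)
ctrl_no_pcs = rng.normal(0.11, 0.05, 18)

# One-Way ANOVA with between-subjects factor GROUP at this difficulty level.
f_stat, p_value = stats.f_oneway(mtbi_pcs, mtbi_no_pcs, ctrl_pcs, ctrl_no_pcs)
print(f"F(3, 68) = {f_stat:.2f}, p = {p_value:.4f}")
```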
In order to investigate the contribution of post-concussion symptoms and its co-variables to cognitive performance after head injury, a series of Spearman's Rho (ρ) correlations were performed. The sample was split into those with mTBI (n = 36) and those without (n = 36), and average error rates across conditions were calculated as a measure of global performance (n-Back average did not include 0-Back). Only those co-variables which significantly differed between groups were used in the analysis. Correction for multiple comparisons was used, with a modified threshold p-value of 0.002.
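A minimal sketch of the correlation step, with hypothetical per-participant values (the actual co-variables entered depended on which measures differed between groups), could be:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical values for the 36 mTBI participants.
mean_error_rate = rng.uniform(0.0, 0.4, 36)   # average error rate across conditions
psqi_score      = rng.integers(0, 15, 36)     # nocturnal sleep quality (higher = worse)

# Spearman's Rho between global performance and one co-variable,
# judged against the modified threshold used for multiple comparisons.
rho, p = stats.spearmanr(mean_error_rate, psqi_score)
ALPHA_CORRECTED = 0.002
print(f"rho = {rho:.2f}, p = {p:.3f}, significant = {p < ALPHA_CORRECTED}")
```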
DEMOGRAPHICS AND QUESTIONNAIRE MEASURES
There was no significant difference between the groups on any of the demographic data (age, gender, IQ). However, significant group differences were found on the questionnaire measures. Bonferroni-adjusted pairwise comparisons revealed no difference on any questionnaire measure between mTBI + PCS and Control + PCS participants, suggesting that their subjective symptom report was similar. This was also true for the comparison between mTBI − PCS and Control − PCS participants. The observed group differences were caused by higher symptom report in the groups with high PCS symptoms compared to those with low PCS symptoms (Table 1), as expected.
In detail, higher RPQ and CFQ symptom report was observed in mTBI + PCS and Control + PCS participants compared to mTBI − PCS and Control − PCS (RPQ: all p < 0.001; CFQ: all p < 0.01), with the exception of the comparison of CFQ score between Control + PCS and mTBI − PCS participants (p = 1.0). Anxiety and depression scores were higher only in Control + PCS participants compared to both mTBI − PCS (anxiety: p = 0.005; depression; p = 0.004) and Control − PCS (anxiety: p = 0.005; depression: p = 0.003). Nocturnal sleep quality was lower in mTBI + PCS participants compared to both mTBI − PCS (p < 0.001) and Control − PCS (p = 0.018) participants. Lastly, mTBI + PCS participants reported a greater number of PTSD symptoms compared to mTBI − PCS participants (p = 0.009).
Control + PCS participants had borderline abnormal levels of depression, but anxiety within normal bounds. Mean PSQI scores for mTBI + PCS and Control + PCS participants were indicative of poor nocturnal sleep. However, the two groups without PCS had borderline scores, suggesting a generally poor level of nocturnal sleep in the sample. All groups reported greater sleep propensity after the task compared to the beginning [KSS Post − Pre: mTBI + PCS: t(18) = 2.3, p = 0.036; mTBI − PCS: t(18) = 2.6, p = 0.020; Control + PCS: t(18) = 3.4, p = 0.003; Control − PCS: t(18) = 3.6, p = 0.002], suggesting that the task was uniformly tiring.
Error rates
A main effect of GROUP was seen for both the n-Back [F(3, 68) = 8.3, p < 0.001] and PVSAT [F(3, 68) = 9.8, p < 0.001] tasks, together with an interaction between GROUP and DIFFICULTY LEVEL for the n-Back only [F(7, 150)]. Bonferroni-adjusted pairwise comparisons revealed that participants with mTBI and PCS produced significantly greater error rates than all other groups (see Figure 1).
Reaction time
For both tasks, the main effect of GROUP and the GROUP × DIFFICULTY LEVEL interaction were not statistically significant (Figure 2). Reaction times did, however, vary with task difficulty: participants responded more slowly on the n-Back and faster on the PVSAT as task difficulty increased (all comparisons: p < 0.001).
CORRELATION BETWEEN SUBJECTIVE SYMPTOM REPORT AND OBJECTIVE COGNITIVE PERFORMANCE
Correlation between greater PCS symptom report and poorer task performance was not statistically significant (after multiple comparison correction) in mTBI participants for the PVSAT task (Rho = 0.35, p = 0.02, Table 2), nor the n-Back task (Rho = 0.43, p = 0.004). Although the n-Back association (p = 0.004) fell short of the significance threshold (p = 0.002), it represents a medium size effect according to Cohen's (Cohen, 1988) interpretation criteria, along with the PVSAT association. No significant correlations with cognitive performance were observed in control participants for any co-variable. However, there was a significant correlation between lower sleep quality (PSQI) and poorer performance on the PVSAT task (Rho = 0.62, p < 0.001) for mTBI participants. There were medium size effects for correlations between poor PVSAT performance and higher anxiety (Rho = 0.44, p = 0.004) and poor n-Back performance and high post-traumatic stress symptoms (Rho = 0.43, p = 0.004), though these associations fell short of the significance threshold.
PRINCIPAL FINDINGS
This study demonstrated working memory and information processing speed impairments in participants with mTBI and persistent (>1 year post-injury) PCS. Cognitive performance was similar in mTBI participants without PCS and all non-head-injured participants. Critically, this is despite the Control + PCS group displaying comparable subjective report of post-concussion symptoms, cognitive failures, depression, anxiety, and sleep quality to the mTBI + PCS participants. This suggests that the cognitive deficit seen in the mTBI + PCS group is not a result of high PCS symptom report per se, nor a result of the co-variables associated with PCS, but is perhaps due to the combination of ongoing PCS symptom report and the initial injury. Therefore, PCS symptoms may have a differential cause, with the mechanisms leading to PCS after mTBI distinct from those contributing to the PCS symptoms seen in the general population.
Although there are some studies on cognitive performance after mTBI that have taken PCS into consideration (Chan, 2001;Wang et al., 2006;Ptito et al., 2007), there are none to our knowledge that have controlled for PCS in both mTBI and control participants. The latter allows a tentative dissociation of the effect of PCS symptom report subsequent to mTBI on cognitive performance from the influence of post-concussion-like co-variables observed in non-head injured populations.
COGNITIVE TASKS
Participants in the mTBI + PCS group were impaired on both the n-Back (working memory) and the PVSAT task (information processing speed). In contrast to our hypothesis, participants in the mTBI + PCS group were impaired on even the least cognitively demanding working memory (1-, 2-, 3-Back; Figure 1A) and information processing speed conditions (2.5, 2, 1.5, 1 s PVSAT, Figure 1B). It was assumed that the cognitive deficit would be relatively subtle, and only become apparent when task difficulty is high.
However, many previous studies have not accounted for PCS diagnosis, potentially masking cognitive impairments in a proportion of participants with mTBI. This certainly seems to be the case if the current dataset is re-analyzed without taking account of PCS (Figure 3; 2 groups: mTBI, n = 36; Control, n = 36).
Whilst there is still an overall effect of GROUP for both the n-Back [F (1, 70) = 4.6, p = 0.036] and PVSAT [F (1, 70) = 4.7, p = 0.034], there is no interaction between GROUP and DIFFICULTY LEVEL, and the GROUP difference is only significant for the 2-Back [F (1, 70) = 5.4, p = 0.023] and 1 s PVSAT [F (1, 70) = 4.5, p = 0.037]. Therefore, not taking PCS into account leads to the more subtle results we expected, with only the more difficult levels differentiating between groups. These results suggest that accounting for PCS diagnosis may help reduce the variability inherent to mTBI, and create more consistent results in future research.
An important aspect of the results was that all participant groups performed near ceiling on the 0-Back condition, and there was no significant difference in error rate. This condition was used as an indication of performance validity, and the result suggests this is unlikely to have significantly contributed to the differences observed for working memory and information processing speed. However, the 0-Back is not a standardized measure of effort testing, such as the Test of Memory Malingering (TOMM; Tombaugh, 1996) and Victoria Symptom Validity Test (VSVT; Slick et al., 1997), or even tests with embedded effort sensitive measures such as the Wechsler Adult Intelligence Scale (WAIS; Iverson and Tulsky, 2003) or the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Silverberg et al., 2007). As such, it is possible that this test may not be able to detect poor effort in the symptomatic group. However, participants had no overt incentive for poor effort, as they had been screened for any litigation and on-going chronic pain. Previous studies have suggested that participants without overt incentives for poor effort only fail standardized effort tests in a small proportion of cases (Kemp et al., 2008;Pella et al., 2012). This could be due to there being no difference in effort in these groups, due to the difference being so slight that it is not detectable, or due to the standardized tests not being suitable for detecting effort in this population. Performance on this task could also be influenced by iatrogenic factors, such as expectation of symptoms after injury or diagnosis, leading to differences in effort. However, participants did not know whether they were in the group with or without PCS, and without such categorization participants are less likely to be influenced by iatrogenic factors in relation to PCS. Participants could be influenced by expectation of symptoms after mTBI, but both mTBI groups would be equally influenced. Therefore, if there is an effect of poor effort in this study which is not detected by the 0-Back, then it is likely to be small, and unlikely to be the sole cause of the large deficit observed in cognitive performance.
The cognitive deficit seen in those participants with persistent PCS after mTBI may be due to a variety of underlying changes after injury. One putative mechanism which has begun to be explored is a disruption in connectivity in the default mode network (DMN; Mayer et al., 2011, 2012; Bonnelle et al., 2012; Sandrone, 2012; Sandrone and Bacigaluppi, 2012), which will need to be explored further in this participant grouping.
EFFECT OF PCS AND CO-VARIABLES ON COGNITION
The hypothesis of this study was that those participants who report persistent PCS after mTBI would have greater cognitive deficit than participants who report no long-term symptoms after mTBI. Therefore, the data was investigated to see whether increased PCS symptoms would correlate with worse cognitive performance. In addition, as PCS symptom report is influenced by other factors, such as depression, anxiety, fatigue, and posttraumatic stress, it was considered important to explore whether these co-variables correlated with cognitive performance.
PCS symptoms
There was no significant correlation between performance and PCS symptom report for either task. However, there was a medium effect size association (but one which fell short of the significance threshold) between greater PCS symptom report and poorer n-Back performance in mTBI participants (Rho = 0.43, p = 0.004, Table 2), with no comparable association in control participants. There was also a medium effect size association for PVSAT performance and PCS symptoms in mTBI participants (Rho = 0.35, p = 0.02). Although these findings do not lend definitive support to a link between PCS symptoms and cognitive performance, the overall pattern of the results suggests PCS symptoms in mTBI participants may have a stronger link to cognitive performance compared to control participants. This supports the hypothesis that the mechanisms leading to PCS after mTBI are distinct from those contributing to the PCS symptoms seen in the general population.
When reporting PCS symptoms using the RPQ, participants with mTBI are attributing the symptoms to the injury, whereas control subjects are not asked to make a specific attribution (Dean et al., 2012). It is therefore possible that an attribution bias is influencing the results, with a greater level of concern over the chronic cognitive effects of the injury causing participants with mTBI and persistent PCS to perform worse on the tasks. An attribution bias of this sort is likely to influence performance for all the cognitive tasks, as well as report of everyday cognitive failures (CFQ score). However, participants performed equally well on the 0-Back task, and CFQ scores were equivalent in the mTBI + PCS and Control + PCS groups. An attribution bias may still be influencing the results to some extent, but not enough to explain the substantial differences seen in the working memory and information processing tasks whilst the sustained attention task (0-Back) is performed almost faultlessly. The influence of an attribution bias may be investigated further in follow-up studies currently being analyzed, which use functional neuroimaging to examine underlying neural activity during this task.
Sleep quality
Night-time sleep quality (PSQI) was significantly worse in mTBI + PCS participants compared to both mTBI − PCS and Control − PCS, despite all groups having scores above or close to the threshold indicating a poor sleeper (Buysse et al., 1989). Sleep propensity (KSS) and sleepiness during the day (ESS) did not differ between the groups. It seems that although mTBI + PCS participants have poorer sleep, they do not report feeling sleepier during the daytime. However, there was a correlation between poor PVSAT performance and poor sleep quality in mTBI participants (Rho = 0.62, p < 0.001, Table 2), with no comparable association in controls. This indicates that the poor sleep quality of some mTBI participants may be having an effect on aspects of their daytime functioning, even if there is no difference in reported daytime sleepiness and sleep propensity. Anecdotal evidence suggests that participants may revert to responding to all stimuli as non-targets when they felt under time pressure. This may also help to explain why there was no significant correlation between n-Back performance and sleep quality in mTBI participants (Rho = 0.31, p = 0.05).
Previous studies have investigated the role of sleep in the short and long-term after mTBI (Ayalon et al., 2007;Schreiber et al., 2008;Chaput et al., 2009), and the present study confirms the association between poor sleep and long-term outcome from mTBI. Sleep could be a risk factor for poor outcome from mTBI, with poor sleep prior to injury undermining subsequent recovery from PCS symptoms. Alternatively, the mTBI itself could trigger sleep problems in previously good sleepers, which in turn may prevent full recovery. In both cases sleep management programs provided after the initial injury could be a relatively simple treatment option to reduce long-term consequences of mTBI.
Post-traumatic stress disorder
PTSD (Bryant et al., 2009) is elevated in mTBI participants with PCS in comparison to mTBI participants without PCS. It was not possible to assess PTSD in non-injured control participants. Therefore, we are unable to rule out the effect of PTSD on cognition, especially as the correlation between higher IES-R score and worse performance on the n-Back showed a medium effect size association (falling short of statistical significance; Rho = 0.43, p = 0.004, Table 2). Previous studies have used a control group that has sustained an injury to another part of the body without concurrent head injury (Bryant et al., 2009; Vanderploeg et al., 2009; Brenner et al., 2010a). Future studies will require a similar control group to investigate the effect of PTSD on cognitive performance in this paradigm.
Depression and anxiety
Depression and anxiety have the potential to affect both PCS symptom report and cognitive performance (Suhr and Gunstad, 2002; Iverson, 2006; Moore et al., 2006). There was no significant correlation between cognitive performance and depression in either experimental group. This is despite previous research suggesting a strong association between depression, PCS symptoms, and cognitive functioning (Suhr and Gunstad, 2002; Iverson, 2006; Sheline et al., 2010). The lack of any such effect here could be due to a difference in the sample tested (the majority of studies recruit from hospitals, whereas this study recruited from a non-hospital sample) or the depression scale used [this study used the HADS (Bjelland et al., 2002), whereas the Beck depression inventory (Beck et al., 1961) is sometimes used elsewhere]. However, there was a medium effect size association (falling short of statistical significance) between increased anxiety and worse PVSAT performance in mTBI participants (Rho = 0.44, p = 0.004). High anxiety in participants with mTBI could be related to the symptom of hypochondriacal concern as detailed in the ICD-10 criteria for PCS (WHO, 1992) [but not the DSM-IV criteria (APA, 1994)]. Another possibility is that those with high anxiety may also have lower sleep quality, and it is this combination that is affecting PVSAT performance. This is an intriguing possibility, especially as there is a significant correlation between sleep quality and performance on the same task (PVSAT). Furthermore, participants in the mTBI + PCS and mTBI − PCS groups exhibited statistically different sleep quality, but no difference in anxiety levels. This requires further research, although the cognitive deficits seen in the mTBI + PCS group cannot be explained purely by increased anxiety levels as Control + PCS participants reported similar levels. Overall, it seems that the influence of anxiety on cognitive performance in this sample may be slight, and there is no tangible evidence of the influence of depression.
CONCLUSION
This study investigated the long-term (>1 year) effects of mTBI on cognition, taking into account PCS in mTBI participants and PCS-like symptoms in control participants. Working memory and information processing speed were significantly impaired in mTBI participants with persistent PCS compared to mTBI participants without PCS and all non-head-injured participants. Correlations between cognitive performance and symptoms were only observed for mTBI participants, with worse performance correlating with lower sleep quality, in addition to medium effect size associations (falling short of statistical significance) with higher PCS symptoms, PTSD, and anxiety.
The use of a control group with similar post-concussion symptoms to the participants with mTBI and PCS enabled us to distinguish to a certain extent the influence of confounders such as general cognitive failures, depression, anxiety, sleep quality, and sleepiness from the effect of the brain injury. These results suggest that the reduction in cognitive performance is not due to greater symptom report itself, but is associated to some extent with the initial injury. Furthermore, the results validate the utility of our participant grouping, and demonstrate its potential to reduce the variability observed in previous studies. However, the influence of sleep quality, and to a certain extent PTSD and anxiety, on cognitive performance requires further investigation. A longitudinal study using this protocol would be useful to elucidate the changes over time in these groups. Furthermore, some of the limitations inherent to meta-analyses of cognitive symptoms after mTBI (Iverson, 2010; Rohling et al., 2011) may be alleviated using these participant groupings.
Nuclear organization of splicing snRNPs during differentiation of murine erythroleukemia cells in vitro.
Murine erythroleukemia (MEL) cells are erythroid progenitors that can be induced to undergo terminal erythroid differentiation in culture. We have used MEL cells here as a model system to study the nuclear organization of splicing snRNPs during the physiological changes in gene expression which accompany differentiation. In uninduced MEL cells, snRNPs are widely distributed throughout the nucleoplasm and show an elevated concentration in coiled bodies. Within the first two days after induction of terminal erythroid differentiation, the pattern of gene expression changes, erythroid-specific transcription is activated and transcription of many other genes is repressed. During this early stage splicing snRNPs remain widely distributed through the nucleoplasm and continue to associate with coiled bodies. At later stages of differentiation (four to six days), when total transcription levels have greatly decreased, splicing snRNPs are redistributed. By six days postinduction snRNPs were concentrated in large clusters of interchromatin granules and no longer associated with coiled bodies. At the end-point of erythroid differentiation, just before enucleation, we observe a dramatic segregation of splicing snRNPs from the condensed chromatin. Analysis by EM shows that the snRNPs are packaged into a membrane-associated structure at the nuclear periphery which we term the "SCIM" domain (i.e., SnRNP Clusters Inside a Membrane).
In eukaryotic cells, extensive posttranscriptional processing of primary transcripts (pre-RNA) is essential for the expression of functional mRNA, tRNA, and rRNA products. These processing events include base and sugar modifications, exo- and endonucleolytic cleavage, formation of a 3' terminus of polyadenosine residues (poly A tail), and removal of intervening sequences (splicing). As normally only mature mRNA, tRNA, and rRNA is transported to the cytoplasm, export of RNA from the nucleus must also be a finely regulated process. While extensive progress has been made in recent years in studies of the mechanism of RNA processing reactions in vitro, by comparison much less is known about how these events take place in vivo. There are many interesting questions still to be answered concerning how the various steps connected with the processing and export of RNA are spatially organized within the nucleus and coordinated with transcription.
The major subunits of spliceosomes (i.e., the complexes that catalyze the splicing of nuclear pre-mRNAs) correspond to the U1, U2, U4/U6, and U5 small nuclear ribonucleoprotein particles (snRNPs) (reviewed by Lührmann et al., 1990; Lamond, 1993). The use of autoimmune antisera which recognize protein antigens associated with splicing snRNPs has proven extremely valuable for snRNP localization studies (Lerner and Steitz, 1979; Lerner et al., 1981). More recently, the availability of antibodies raised against purified protein components of both snRNPs, and other protein components of the RNA processing machinery, has added significantly to the range of immunoreagents available for localization studies on splicing and polyadenylation factors (Fu and Maniatis, 1990; Takagaki et al., 1990; Zamore and Green, 1991; Zhang et al., 1992). The organization of splicing snRNPs in situ has largely been studied using indirect immunofluorescence techniques (Northway and Tan, 1972; Deng et al., 1981; Reuter et al., 1984; Spector, 1984; Nymann et al., 1986; Verheijen et al., 1986; Habets et al., 1989) and by immunoelectron microscopy (Fakan et al., 1984; Puvion et al., 1984; Spector, 1990; Visa et al., 1993). The RNA components of splicing snRNPs have been localized by hybridization with antisense oligonucleotide probes made of 2'-O-alkyl RNA (Carmo-Fonseca et al., 1991a,b, 1992; Huang and Spector, 1992; Matera and Ward, 1993). This can allow the RNA and protein components of individual snRNPs to be detected in parallel by double labeling using a combination of antisense and antibody probes. Individual pre-mRNAs have also been analyzed by in situ hybridization methods. For example, certain transcripts have been identified forming tracks within the nucleus (Lawrence et al., 1989; Huang and Spector, 1991; Xing et al., 1993). These tracks may be reflecting intranuclear transport or export pathways used by mRNAs.
Many studies using the types of reagents described above have demonstrated that most of the individual components of the RNA processing machinery are concentrated in the nucleus, consistent with the biochemical evidence that major RNA processing events take place before nuclear export. Within the nucleus, splicing snRNPs show a complex localization pattern and associate with several distinct substructures, including coiled bodies and interchromatin granules (reviewed by Lamond and Carmo-Fonseca, 1993a,b). This raises the possibility that specific steps connected with snRNP function, assembly or transport may take place within dedicated nuclear compartments. EM studies have shown that in Drosophila cells snRNPs can bind to, and splice, nascent pre-mRNAs (Beyer and Osheim, 1988). Recent studies involving the in situ colocalization of pre-mRNA, snRNPs, and protein factors required for RNA processing are also starting to give an insight into the spatial distribution of these events in mammalian cells (Carter et al., 1993). However, since most of these studies have been conducted with developmentally stationary cells, any alterations in the distribution of the splicing machinery resulting from changes in the pattern or level of transcription during cellular differentiation still remain to be ascertained. Murine erythroleukaemia (MEL) cells provide a powerful model system with which to study the nuclear organization of RNA processing reactions. MEL cells are erythroid progenitor cells that have been transformed by the Friend virus complex and can be maintained in culture indefinitely (Friend et al., 1971; Singer et al., 1974). MEL cells can be induced with various chemical agents, such as DMSO, to undergo a program of terminal erythroid differentiation which closely mimics the natural process in vivo. During the course of this differentiation process the MEL cells cease to divide and total transcription decreases such that it is virtually zero by 6 d postinduction (Sherton and Kabat, 1976; Orkin and Swerdlow, 1977; Patel and Lodish, 1984). During the initial 2-4 d of this process, the expression of erythroid-specific genes is induced, in particular those encoding the α- and β-globins, while most other loci are repressed. Thus, MEL cells offer an opportunity to study the nuclear organization of splicing and other RNA processing factors during a physiological differentiation process that results in a dramatic alteration in the transcriptional status of the cell.
In this study we report a detailed analysis of snRNP organization in the nuclei of uninduced MEL cells and during the course of terminal erythroid differentiation. We observe that the nuclear distribution of splicing snRNPs, and their association with specific subnuclear compartments, changes during differentiation in a transcription-dependent fashion.
Cell Culture
The APRT-MEL cell line C88 was maintained and induced to undergo erythroid differentiation for up to 6 d in the presence of 2% (vol/vol) DMSO as previously described (Antoniou, 1991). Experiments involving α-amanitin (Sigma Chemical Co., St. Louis, MO) were performed with log-phase growing cultures (0.5–1 × 10⁶ cells/ml). Cells were incubated with 50 µg/ml of α-amanitin for 5 h.
In Situ Hybridization and Immunofluorescence
Analysis of cells by in situ hybridization and immunofluorescence was essentially as previously described. Briefly, 1–2 × 10⁶ MEL cells were washed twice in PBS and resuspended in 200 µl of the same buffer. A 15-µl aliquot of this cell suspension was then spread over a 1-cm diam coverslip which had been coated with polylysine (Sigma Chemical Co.) by brushing with a 1 mg/ml solution in water and air drying. The cells were allowed to adhere to the coverslip for 1 min and then fixed for 10 min with 3.7% paraformaldehyde in CSK buffer at room temperature as previously described. Excess cells, which had not adhered to the coverslip, were removed by periodic swirling of the fixative (1–2-min intervals). The fixed cells were then permeabilized with either 0.5% Triton X-100 in CSK buffer or 0.2% SDS. Alternatively, cells were pre-extracted with 0.5% Triton X-100 for 3 min on ice before the paraformaldehyde fixation step.
The antisense 2'-O-allyloligoribonucleotide probes specific for 28S rRNA and U2 snRNA have been described previously. The following mouse mAbs were used: anti-B″ protein ("4G3") (Habets et al., 1989), anti-70K protein (Billings et al., 1982), anti-Sm ("Y12") (Pettersson et al., 1984), and anti-SC-35 splicing factor (Fu and Maniatis, 1990). Polyclonal rabbit antibodies to recombinant p80 coilin, anti-peptide antibodies to lamins (Djabali et al., 1991), and affinity-purified goat anti-mouse hemoglobin antibodies (Kirkegaard and Perry Laboratories Inc., Gaithersburg, MD) were also used. Detection of hybridization sites and immunofluorescence were performed as described. Samples were examined using both the EMBL compact confocal microscope (CCM) (Stelzer et al., 1992) and the Zeiss Laser Scanning Microscope LSM 310 (Carl Zeiss, Oberkochen, Germany). The EMBL CCM was operated with excitation wavelengths of 488 nm (fluorescein fluorescence) and 529 nm (Texas red fluorescence) selected from an argon-ion laser. The LSM was operated with wavelengths of 488 and 543 nm, selected from an argon-ion and from a helium-neon laser, respectively. For double-labeling experiments each fluorochrome was independently recorded. Pseudo-colored images of both signals were generated and superimposed. Images were photographed on Fujichrome 100 (Fuji Photo Film Co., Ltd., Tokyo, Japan) or Kodak Tmax 100 film (Eastman Kodak Co., Rochester, NY) using a Polaroid Freeze Frame Recorder.
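As a schematic illustration of the two-channel overlay described above (this stands in for the microscope software actually used; the arrays and dimensions are hypothetical), pseudo-coloring and superimposing the independently recorded channels can be sketched as:

```python
import numpy as np

# Hypothetical single-channel grayscale images of the same field of view,
# e.g. one recorded for fluorescein and one for Texas red.
fluorescein = np.random.rand(256, 256)
texas_red   = np.random.rand(256, 256)

# Pseudo-color each channel and superimpose: red for Texas red, green for fluorescein.
# Structures labeled by both probes appear yellow in the composite,
# as in the double-labeling figures.
composite = np.zeros((256, 256, 3))
composite[..., 0] = texas_red      # red channel
composite[..., 1] = fluorescein    # green channel
print(composite.shape, composite.max() <= 1.0)
```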
Electron Microscopy
MEL cells were fixed in suspension in 1.6% glutaraldehyde in 0.1 M Sörensen phosphate buffer, pH 7.3. During the 1-h fixation, the cells were centrifuged at 1,000 or 5,000 g for 5 min. The resulting pellets were dehydrated in increasing concentrations of ethanol and embedded in LR White resin (The London Resin Co., Hampshire, UK). Ultra-thin sections were labeled with the mAb 3C5 and a secondary antibody corresponding to anti-mouse IgM coupled to 10-nm gold particles (Janssen Life Sciences Products, Beerse, Belgium). Staining was performed using the EDTA regressive method, which preferentially reveals RNA-containing structures (Bernhard, 1969). Conventional staining consisted of uranyl acetate and lead citrate. Samples were analyzed using an electron microscope (100CXII; JEOL USA, Peabody, MA).
Analysis of RNA by S1 Nuclease Protection
Total RNA was extracted from MEL cells by selective precipitation in the presence of 3 M LiCl and 6 M urea as denaturant as previously described (Antoniou, 1991). Assay for selected RNA species was by S1-nuclease protection using end-labeled DNA probes as previously described (Antoniou et al., 1993). The 5' histone H4 probe is a 460-bp EcoRI-TthlH fragment giving a 180-nucleotide (nt) S1-protected product. The βmajor globin probe is a 700-bp HindIII-NcoI fragment which protects 96 nt from the 5' region of exon II of the mRNA. The 5' U6 probe is a 326-bp EaeI-Fold fragment obtained from the Xenopus U6 snRNA gene (Krol et al., 1987). This gives an 87-nt full-length S1-protected product from the equivalent mouse sequence. A single base mismatch at +8 between the Xenopus and mouse U6 snRNAs also gives a lower level of a 79-nt product. Reaction products were resolved on 6% polyacrylamide gels in the presence of 6 M urea as denaturant and autoradiographed with Fuji medical x-ray film, with an intensifying screen, at −70°C.
Erythroid Differentiation of MEL Cells
The efficient induction of terminal erythroid differentiation upon treatment of MEL cells with medium containing 2% DMSO has been reported in many previous studies (reviewed by Antoniou, 1991). Nonetheless, a series of control experiments was carried out to confirm that we were obtaining a similarly high efficiency of induction with the MEL cell line used in the present study. First, a goat anti-mouse hemoglobin antibody was used to detect the levels of globin protein expressed by the uninduced MEL cells as compared with cells at six days postinduction (Fig. 1, A and B). The anti-mouse hemoglobin antibody showed only very weak fluorescence in the uninduced cells, with <5% of the cells showing evidence of spontaneous induction (Fig. 1 A). In contrast, almost all the cells (>95%) showed strong cytoplasmic fluorescence when analyzed using the same antibody at 6-d postinduction (Fig. 1 B). Second, EM was performed to compare the morphology of the uninduced MEL cells with cells at 6-d postinduction (Fig. 1, C and D). This reveals a marked change in the appearance of the nuclei in the induced cells, which are filled with condensed chromatin (Fig. 1, C and D; note dark nuclear staining in D).
A third series of control experiments were also carried out to measure the levels of three separate RNAs in uninduced MEL cells and at two, 4-and 6-d postinduction of terminal erythroid differentiation (Fig. 2). The levels of historic H4 and/3maj globin mRNAs and of U6 snRNA were determined at each stage using a nuclease S1 protection assay. This demonstrates that/3maj globin mRNA is not detected in the uninduced MEL cells, but is detected in cells at two days post induction and is present at high levels in cells at 4-and 6-d postinduction. In contrast, the histone H4 mRNA is detected readily in the uninduced MEL cells but the H4 mRNA level drops significantly during the course of differentiation. The U6 snRNA, which is a metabolically stable RNA species, is readily detected both before and after induction.
In summary, the combination of fluorescence and electron microscopy and RNA mapping data shown above confirms that the MEL cell line used in this study shows a similar response to DMSO induction as previously reported. Specifically, we have shown that, upon induction, the great majority of MEL cells in our cultures undergo the typical pattern of terminal erythroid differentiation, including activation of transcription of globin genes and a parallel transcriptional repression of other loci. The data further show that at the late stages of differentiation the MEL cell nuclei accumulate condensed chromatin, consistent with the previous observations that nascent transcription has ceased at this stage.

Figure 2. Effect of differentiation on RNA levels in MEL cells. Total RNA was extracted from uninduced (UI) MEL cells and from cells stimulated to undergo terminal erythroid differentiation in the presence of 2% DMSO for 2, 4, and 6 d. The levels of histone H4 mRNA (H4), βmajor-globin mRNA (βmaj), and U6 snRNA in each RNA preparation were analyzed by an S1-nuclease protection assay using end-labeled DNA probes as described in Materials and Methods. The sizes of the S1-nuclease protected fragments are 180 nucleotides (nt) for H4, 96 nt for βmaj, and 87 nt for U6 RNAs, respectively. The samples analyzed for H4 and βmaj sequences contained 2 µg total RNA per reaction. Those analyzed for U6 snRNA contained 10 µg total RNA. An equivalent amount of Escherichia coli tRNA was used as a negative control (N) for each series of reactions. Size markers (M) are a HinfI digest of pBR322.
Localization of snRNPs in MEL Cells
The distribution of splicing snRNPs in MEL cells was analyzed in uninduced cells and at 2-and 6-d postinduction of terminal erythroid differentiation (Fig. 3). The morphology of the MEL cells, as seen by phase contrast microscopy, is shown for the field of cells at each stage (Fig. 3, A-C). In the same cells, splicing snRNPs were detected using an anti-Sm mAb, which recognizes common snRNP proteins ( Fig. 3, D-F). This shows that the snRNPs are concentrated in the nuclei of MEL cells, both before and after induction. In the uninduced cells, the snRNPs are widely distributed throughout the nucleoplasm, excluding nucleoli (Fig. 3 D). A broadly similar pattern is observed in cells at 2-d postinduction, though here a slightly more punctate staining is evident ( Fig. 3 E). At 6-d postinduction, however, a significant change in the nuclear snRNP staining is seen in most cells (Fig. 3 F). At this late stage of differentiation, most of the snRNP is present in large clumps and much less widespread nucleoplasmic staining is visible.
We also examined the cells for the presence of coiled bodies at each of these stages (Fig. 3, G–I). The coiled bodies were detected using an anti-p80 coilin rabbit antiserum. Coiled bodies were seen in >95% of the cells, both before and after induction of terminal erythroid differentiation. Coiled bodies were detected in all the cells shown in Fig. 3 (G–I), but not all of them are visible in this figure because they occur in different focal planes in the separate cells in the field. A more detailed analysis of the relationship between the snRNP and coiled body staining patterns is presented in Fig. 5.
To obtain a higher resolution picture of nuclear snRNP distribution throughout MEL cell differentiation, the U1 and U2 snRNPs were analyzed individually (Fig. 4). They were detected by indirect immunofiuorescence using U1 and U2specific mAbs (Fig. 4, E-L). In addition, the U2 snRNA was independently localized by in situ hybridization using a complementary antisense 2'-O-alkyloligoribonucleotide (Fig. 4, A-D). This allows a comparison of the distribution pattern of the RNA and protein components of U2 snRNP. The distribution of each snRNP was examined in uninduced MEL cells and at 2-, 4-, and 6-d postinduction of terminal erythroid differentiation. Micrographs of representative, single cells are presented to reveal the nuclear staining pattern in more detail.
In the uninduced cells, both anti-U1 and anti-U2 snRNP probes show a widespread nucleoplasmic fluorescence, excluding nucleoli (Fig. 4, A, E, and I). This confirms the results obtained using the anti-Sm mAb (Fig. 3 D). For U2 snRNP, both the antibody and antisense probes reveal an identical labeling pattern, confirming that the assembled snRNP is being detected (Fig. 4, cf. A and E). Both probes also show that U2 snRNP is present in higher concentration in several discrete foci, as well as being widely distributed throughout the nucleoplasm (Fig. 4, A and E, arrowheads).
However, U1 snRNP in comparison shows a predominantly widespread nucleoplasmic distribution without the striking concentration in loci (Fig. 4 I). This picture of the nuclear distributions of splicing snRNPs in MEL cells is in close agreement with previous double-labeling studies on the U1 and U2 snRNPs in HeLa cells (Carmo-Fonseca et al., 1991a,b, 1992; Matera and Ward, 1993) and with in situ hybridization analysis of U1 and U2 snRNAs at the EM level (Visa et al., 1993).

Figure 3. The effect of differentiation on the distribution of splicing snRNPs in MEL cells. Indirect immunofluorescence was performed with a monoclonal anti-Sm antibody (D, E, and F) to detect splicing snRNPs in uninduced MEL cells (D) and at 2 (E) and 6 (F) d after terminal erythroid differentiation was induced by treatment with DMSO. A, B, and C represent the same fields of cells as seen by phase contrast microscopy. An anti-p80 coilin rabbit serum was also used to detect coiled bodies in uninduced MEL cells (G) and in cells at 2 (H) and 6 (I) d after induction to differentiate. Note that, due to the thickness of the nuclei, the number of coiled bodies observed in the single focal plane shown is not representative of the total number of coiled bodies per nucleus. Bar, 10 µm.

Figure 4 (legend fragment). The distribution of U1 snRNP was analyzed using a mAb directed against the U1 snRNP-specific 70-kD protein (I-L). Bar, 5 µm.
After induction of terminal erythroid differentiation, changes are evident in the pattern of nuclear staining for both U1 and U2 snRNPs, particularly at the late stage (Fig. 4, B-D, F-H, and J-L). At 2- and 4-d postinduction, both the antisense and antibody probes still show some of the U2 snRNP concentrated in discrete foci (Fig. 4, B, C, F, and G, arrowheads). However, at later stages postinduction, particularly at the 6-d stage, both U1 and U2 snRNPs show a more prominently clumped pattern and appear to stain less of the nucleoplasm as compared with uninduced cells (Fig. 4, compare A, E, and I with D, H, and L). The antibody and antisense probes for U2 snRNP give identical staining patterns at each time point postinduction, confirming that a genuine rearrangement in the localization of U2 snRNP is taking place during differentiation, rather than a dissociation and rearrangement of U2 snRNP protein independent of U2 snRNA (Fig. 4, cf. B and F, C and G, and D and H).
Association of Splicing snRNPs with Coiled Bodies during Differentiation
The bright foci of concentrated snRNP staining in MEL cells have a similar appearance to the snRNP foci detected in other cell lines, which were shown to be coiled bodies (Andrade et al., 1991; Carmo-Fonseca et al., 1992; Huang and Spector, 1992). To determine whether the snRNP foci in MEL cells also correspond to coiled bodies, double-labeling experiments were performed using anti-p80 coilin antibodies to detect coiled bodies and an anti-Sm mAb to detect splicing snRNPs (Fig. 5). The individual staining patterns of the anti-Sm and anti-p80 coilin antibodies in uninduced MEL cells were recorded in the confocal laser scanning fluorescence microscope (Fig. 5, A and B). An overlay of the separate images (Fig. 5 C) demonstrates that the bright snRNP foci correspond to the coiled bodies labeled by the anti-p80 coilin antibody (Fig. 5, cf. A-C, arrowheads; note yellow foci in C due to the superimposition of red and green staining).
A similar double-labeling analysis was carried out to assess the association of splicing snRNPs with coiled bodies in MEL cells at 2-, 4-, and 6-d postinduction of terminal erythroid differentiation (Fig. 5, D-F). In this case the figure shows only the overlays from the double labeling and not the separate anti-Sm and anti-p80 coilin staining at each stage. At 2 d (Fig. 5 D) and 4 d (Fig. 5 E) postinduction yellow foci are detected. This demonstrates that splicing snRNPs continue to associate with coiled bodies after differentiation is induced (cf. Fig. 3, G-I). However, at 6-d postinduction (Fig. 5 F), the foci are red and the snRNP staining is concentrated in large clumps that are not stained by the anti-p80 coilin antibody. This indicates that snRNPs no longer associate with coiled bodies at the late stages of terminal erythroid differentiation. This corresponds to the stage when transcription has ceased and chromatin is condensing (Sherton and Kabat, 1976; Orkin and Swerdlow, 1977; Patel and Lodish, 1984; see also the electron micrographs in Fig. 1 of this study).

Figure 5. snRNP association with coiled bodies in MEL cells. Double-labeling experiments were performed to assess whether splicing snRNPs are present in coiled bodies in MEL cells. snRNPs were detected using a monoclonal anti-Sm antibody (green) and coiled bodies using an anti-p80 coilin antibody (red) in uninduced MEL cells (A-C), and in cells at 2 (D), 4 (E), and 6 (F) d after terminal erythroid differentiation was induced by treatment with DMSO. The association of snRNPs with coiled bodies was also examined in uninduced cells after transcription was blocked by treatment with α-amanitin (G-I). Overlays of the anti-Sm and anti-coilin staining patterns are shown in C, D, E, F, and I. In uninduced cells, and at 2 and 4 d postinduction, both antibodies colocalize in coiled bodies (yellow staining indicated by arrowheads in C, D, and E). At 6-d postinduction, and in uninduced cells treated with α-amanitin, the Sm antigens are not detected in coiled bodies (red staining indicated by arrowheads in F and I). Bar, 5 µm.
The change in snRNP distribution seen in MEL cells at the late stages of differentiation, including the loss of snRNPs from coiled bodies, is similar to the change in snRNP distribution in HeLa cells when they are treated with inhibitors of RNA polymerase II. Therefore, to test whether the reduced level of transcription at the late stages of differentiation may be responsible, at least in part, for the change in snRNP distribution, uninduced MEL cells were analyzed after treatment with the RNA polymerase II inhibitor α-amanitin (Fig. 5, G-I). Staining with anti-Sm antibodies (Fig. 5 G) shows that treatment of MEL cells with α-amanitin also results in a redistribution of snRNPs, which become concentrated in large clumps and are no longer dispersed throughout the nucleoplasm (Fig. 5, cf. A and G). Double-labeling with anti-p80 coilin antibodies shows that coiled bodies can still be detected (Fig. 5 H).

Figure 6. The mAb 3C5 specifically labels interchromatin granule clusters in MEL cells. Ultra-thin sections of uninduced MEL cells were immunogold labeled with the mAb 3C5 and stained by the EDTA regressive method of Bernhard (Bernhard, 1969). EM shows that the gold label is specifically concentrated over clusters of interchromatin granules (A-D). Bar, 0.5 µm.
An overlay of the images in Fig. 5 (G and H) shows that these coiled bodies are not stained by the anti-Sm antibodies, indicating that treatment with α-amanitin has resulted in an exodus of snRNPs from coiled bodies (Fig. 5 I, arrowheads; note red coiled bodies). However, the coiled bodies are observed intimately juxtaposed beside snRNP clusters. This organization was frequently observed in MEL cells and has also been seen in HeLa cells following treatment with α-amanitin. This suggests that snRNPs move from the coiled bodies into the large clusters when transcription is halted. As the redistribution of snRNPs observed in MEL cells following α-amanitin treatment is similar to that observed at late time points after induction to differentiate, the reduced levels of transcription in both cases may be largely responsible for the changes in snRNP localization.
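As a purely illustrative aside, not part of the original analysis (colocalization in Fig. 5 was assessed visually on confocal overlays), the short Python sketch below shows one way the overlap that appears as yellow foci could be scored numerically: the fraction of coilin-positive (red) pixels that are also Sm-positive (green). The function name, synthetic arrays, and thresholds are hypothetical.

import numpy as np

def colocalization_fraction(green, red, green_thresh=0.5, red_thresh=0.5):
    # Fraction of red-positive (coilin) pixels that are also green-positive (Sm).
    # green, red: 2-D arrays of the two fluorescence channels scaled to [0, 1].
    # The 0.5 thresholds are arbitrary illustrative values, not calibrated ones.
    green_mask = green >= green_thresh   # snRNP (anti-Sm) signal
    red_mask = red >= red_thresh         # coiled-body (anti-p80 coilin) signal
    if red_mask.sum() == 0:
        return 0.0
    # Pixels positive in both channels are the ones that appear yellow in an RGB overlay.
    overlap = np.logical_and(green_mask, red_mask)
    return overlap.sum() / red_mask.sum()

# Toy field: one focus where both signals coincide and one coilin-only focus.
green = np.zeros((8, 8))
red = np.zeros((8, 8))
green[2:4, 2:4] = 1.0   # snRNP focus
red[2:4, 2:4] = 1.0     # coiled body at the same position (colocalized)
red[6:8, 6:8] = 1.0     # coiled body with no snRNP signal
print(colocalization_fraction(green, red))   # prints 0.5

In such a scheme, the uninduced and 2- and 4-d overlays described above would give a high fraction, whereas the 6-d and α-amanitin-treated overlays would give a value near zero.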
SnRNPs Concentrate in Clusters of Interchromatin Granules at Late Stages of Differentiation
In interphase cells, the speckled, or punctate, nuclear structures containing splicing snRNPs arise from the association of snRNPs with clusters of interchromatin granules as well as with coiled bodies (reviewed by Lamond and Carmo-Fonseca, 1993b). Since the large clumps of snRNP observed at late stages of differentiation are not coiled bodies, we examined whether they might correspond instead to clusters of interchromatin granules. To test this we used mAb 3C5, which was previously shown to predominantly stain interchromatin granules in mammalian cells (Turner and Franchi, 1987). First, we confirmed by immunoelectron microscopy that mAb 3C5 also stains interchromatin granules in MEL cells (Fig. 6). The interchromatin granules were revealed by the EDTA regressive method of Bernhard (Bernhard, 1969). Analysis of ultra-thin sections immunogold labeled with mAb 3C5 shows that the staining is highly specific for interchromatin granules.
Having confirmed its specificity in MEL cells, mAb 3C5 was then used to double-label MEL cells for snRNPs and interchromatin granules (Fig. 7). Splicing snRNPs were detected using an anti-Sm mAb (Fig. 7, A and B). Double labeling with two mouse mAbs was possible in this case since mAb 3C5 is an IgM and the anti-Sm mAb (Y12) is an IgG. The anti-Sm staining again shows the striking rearrangement of splicing snRNPs from a widespread nucleoplasmic distribution in uninduced MEL cells to a restricted pattern of clumps at 6-d postinduction of terminal erythroid differentiation (Fig. 7, A and B). In contrast, with mAb 3C5 a similar punctate pattern of nucleoplasmic staining is observed both in uninduced cells and at 6-d postinduction, although the structures stained can appear larger in cells at late stages postinduction (Fig. 7, C and D). However, at 6-d postinduction the anti-Sm and 3C5 mAbs now colocalize in the same punctate structures (Fig. 7, cf. B and D). This contrasts with the uninduced cells, where the snRNP staining is widespread in the nucleoplasm and clearly not confined to the clusters of interchromatin granules, i.e., the punctate structures stained by mAb 3C5 (Fig. 7, cf. A and C).
In summary, the data show that the large clumps of snRNP staining that accumulate in the nuclei of MEL cells at late stages of terminal erythroid differentiation correspond to clusters of interchromatin granules. Since only a subset of the splicing snRNPs are associated with interchromatin granule clusters in uninduced cells, this indicates that snRNPs move out of other nuclear structures (including coiled bodies) and into clusters of interchromatin granules at late stages postinduction.
Localization of Splicing Factor SC-35 during MEL Cell Differentiation
Recent studies have established that a mAb, termed SC-35, recognizes a non-snRNP protein required for pre-mRNA splicing in mammalian splicing extracts (Fu and Maniatis, 1990). This antibody gives a punctate staining pattern in HeLa cells, and EM analysis showed that it labels both clusters of interchromatin granules and perichromatin fibrils. It does not, however, label coiled bodies (reviewed by Lamond and Carmo-Fonseca, 1993a). The SC-35 mAb also stains the nuclei of uninduced MEL cells in a punctate pattern (Fig. 8 A). A similar punctate staining pattern is seen with mAb SC-35 at 6-d postinduction of terminal erythroid differentiation (Fig. 8 B). The major difference between the SC-35 staining patterns before and after induction is that fewer but larger structures are often labeled at late stages postinduction. The mAb SC-35 labels the same structures in MEL cells as does mAb 3C5, both before and after induction (data not shown, cf. Figs. 7 and 8). This is consistent with previous observations comparing the labeling patterns of mAbs SC-35 and 3C5 in HeLa cells. The data, therefore, indicate that mAb SC-35 strongly stains clusters of interchromatin granules in MEL cells, both before and after induction of erythroid differentiation. This is in agreement with the earlier data on SC-35 localization arising from EM studies.

Figure 8. The mAb SC-35 gives a punctate nucleoplasmic staining pattern in both uninduced MEL cells (A) and at 6 d after terminal erythroid differentiation was induced by DMSO treatment (B). In B the punctate structures appear larger and fewer in number than in the transcriptionally active, uninduced cells shown in A. Bar, 5 µm.
Splicing snRNPs Are Segregated into Membrane Associated Domains prior to Enucleation
The final stage of erythroid differentiation undergone by MEL cells is marked by a global condensation of chromatin, followed by enucleation (Volloch and Housman, 1982; Patel and Lodish, 1987). We therefore analyzed cells at this extreme terminal stage, using a combination of fluorescence and electron microscopy, to determine what happened to snRNPs and other nuclear antigens (Figs. 9 and 10).
The appearance of MEL cells at the extreme terminal stage postinduction is shown by phase contrast microscopy (Fig. 9 A). For analysis by confocal fluorescence microscopy, splicing snRNPs were labeled using three separate probes: an anti-Sm mAb (Fig. 9 B), a U2 snRNP-specific mAb (Fig. 9 E), and an antisense probe specific for U2 snRNA (Fig. 9 D). All three probes reveal that the splicing snRNPs are located predominantly in a large aggregate at the periphery of the condensed chromatin (Fig. 9, B, D, and E, arrows). In addition, a lower level of staining is often observed around the nuclear periphery (Fig. 9, B, D, and E, arrowheads). Double labeling the cells stained with the anti-Sm monoclonal (Fig. 9 B) with mAb 3C5 (Fig. 9 C) shows that the same large structure at the nuclear periphery is labeled by both antibodies. This suggests that it contains a super cluster of interchromatin granules. In contrast, cells labeled with anti-p80 coilin antibodies show more widespread staining around the nuclear periphery without the striking concentration in a single structure (Fig. 9 F). At this stage there is no longer any sign of discrete coiled bodies.

Figure 9. The distribution of snRNPs at terminal enucleation of MEL cells. The terminal stage of MEL cell differentiation is marked by a global condensation of chromatin, as seen by phase contrast in A (see also Fig. 1 D). Splicing snRNPs were localized at this terminal stage of erythroid differentiation using three separate probes: an Sm-specific mAb (B), a U2 snRNA-specific antisense probe (D), and a mAb specific for the U2 snRNP protein B" (E). All three probes show that the snRNPs are predominantly localized within a single large aggregate at the periphery of the condensed chromatin (B, D, and E, arrows) with a lesser amount of staining around the rest of the periphery (B, D, and E, arrowheads). The large snRNP aggregate is also stained by the mAb 3C5 (C), as shown by double labeling (compare B and C, arrows). In contrast, the coiled body antigen, p80 coilin, is not confined to the snRNP aggregate but rather shows more widespread staining around the periphery of the condensed chromatin (F). Bar, 5 µm.

Figure 10. EM of differentiating MEL cells at the terminal stage just before enucleation. Ultrathin sections of terminally differentiated MEL cells were stained by conventional methods and examined in the electron microscope. The nuclei are full of condensed chromatin and show regions, mainly at the nuclear periphery, where nonchromatin components appear to have been segregated (arrows). Note the partial loss of membrane from the nuclear periphery and the presence of membrane enclosed domains from which condensed chromatin is excluded. Bar, 1 µm.
The structure of MEL cell nuclei at the extreme terminal stage postinduction was also examined in the electron microscope (Fig. 10). In all cases the nuclei are densely packed with condensed chromatin. The prominent snRNP-containing structures at the periphery of the condensed chromatin are clearly visible. Surprisingly, these structures are surrounded by a membrane. This is most likely derived from the nuclear envelope, which is largely absent from the nuclear periphery in the vicinity of the snRNP structures. Therefore, after chromatin condensation, it appears that the splicing snRNPs (and probably other nuclear antigens) migrate to the nuclear periphery, associate with the nuclear envelope, and form a discrete, membrane enclosed domain. We propose the name "SCIM" domain (i.e., snRNP Clusters Inside a Membrane) to describe this novel structure.
Discussion
In this study we have analyzed the distribution of splicing snRNPs in the nuclei of MEL cells, both before and after induction of terminal erythroid differentiation in vitro. The data show that splicing snRNPs are not dramatically reorganized within the first two days after induction to differentiate, during which time a major switch in the pattern of gene expression takes place, including the repression of previously active loci and the activation of erythroid-specific transcription. In both uninduced MEL cells, and at early stages following induction, splicing snRNPs are widely distributed throughout the nucleoplasm and show enhanced concentrations in bright foci that correspond to coiled bodies. However, four to six days after MEL cells are induced to differentiate, this widespread nucleoplasmic snRNP staining markedly changes. At the late stage of differentiation, when transcription has ceased, splicing snRNPs no longer associate with coiled bodies and concentrate instead in large aggregates that correspond to clusters of interchromatin granules.
A major conclusion from the present study is that splicing snRNPs can be induced to change their distribution following a differentiation process that alters the pattern and level of gene expression. However, the major changes in snRNP localization are observed only after a global decrease in the level of transcription at late stages of differentiation and not at the earlier stages postinduction when a profound alteration takes place in the pattern of genes being transcribed. The observation that the number of coiled bodies per cell decreases from an average of three to four in uninduced cells to an average of less than two at 6-d postinduction (data not shown) also suggests that the association of splicing snRNPs with this structure is more dependent on the total level, rather than the qualitative pattern, of ongoing transcription. The absence of a clear change in snRNP distribution immediately after induction was somewhat unexpected and has interesting implications for the spatial organization of active genes in the nucleus of MEL cells and their relationship with splicing factors. The data may be explained by three alternative models of nuclear organization: (a) in MEL cells there is no special relationship between gene organization and accessibility to splicing snRNPs, i.e., snRNPs can readily access any newly transcribed gene wherever it is located in the nucleoplasm; (b) a major reorganization of chromatin occurs following the induction of MEL cells to differentiate, to bring previously inactive loci (such as globins) to specific subnuclear locations where snRNPs can interact with them; or (c) in uninduced MEL cells, the loci which are only transcribed following differentiation are already assembled into "active" sites together with genes that are already being transcribed and processed. Studies are now underway to try and distinguish between these alternative possibilities by comparing the spatial organization of specific genes, relative to splicing snRNPs, before and after MEL cell differentiation.
The snRNP staining pattern in MEL cells markedly rearranges when transcription halts, i.e., at late times after induction of differentiation or in undifferentiated cells treated with the transcription inhibitor α-amanitin. While changes in snRNP staining have been observed previously in cells treated with drugs, or after heat shock (Carmo-Fonseca et al., 1991a,b, 1992; Spector et al., 1991), we provide here a clear demonstration that snRNP localization is also affected by the physiological changes accompanying differentiation. MEL cell differentiation results in a change in the steady state level of snRNPs associated with distinct subnuclear compartments. For example, the association of snRNPs with coiled bodies is a prominent feature in uninduced MEL cells, but is lost at late stages of differentiation when transcription has ceased. These data are consistent with the view that the interaction of snRNPs with coiled bodies is of functional significance for some aspect of their role in the pathway of pre-mRNA processing or transport (reviewed by Lamond and Carmo-Fonseca, 1993a). This does not imply that coiled bodies must be direct sites of pre-mRNA splicing in MEL cells. The coiled bodies may instead play other roles required for the participation of snRNPs in the splicing pathway, e.g., snRNP or spliceosome assembly or disassembly, intron degradation, or some aspect of intranuclear snRNP transport connected with gene expression. However, the data argue against coiled bodies being storage sites for inactive snRNPs, since snRNPs no longer associate with coiled bodies at the late stage of differentiation when the proportion of inactive snRNPs in the nucleus must increase.
Splicing snRNPs concentrate specifically in large clusters of interchromatin granules at late stages of differentiation, or when transcription is inhibited by drug treatment. A subset of snRNPs also associate with interchromatin granule clusters when transcription and splicing are taking place and this contributes to the punctate staining pattern frequently observed with anti-snRNP antibodies. The present data indicate that snRNPs can move between coiled bodies and interchromatin granule clusters in MEL cell nuclei. It is possible that snRNPs engaged in the various steps associated with pre-mRNA maturation are normally in flux between separate subnuclear locations, including coiled bodies and interchromatin granules (reviewed by Lamond and Carmo-Fonseca, 1993a). Such a dynamic movement of snRNPs could take place as nascent transcripts are recognized, spliceosomes assembled, mRNA formed and exported to the cytoplasm, and introns degraded and the snRNPs recycled for further rounds of splicing activity. Separate (though possibly overlapping) activities may take place in coiled bodies and interchromatin granules, respectively. Identifying which activities occur in each compartment is an important goal for future studies. However, it should again be noted that neither structure need be the actual site of splicing. Since there is a very strong widespread nucleoplasmic staining in MEL cells, splicing may take place on nascent transcripts in the interchromatin space outside of either interchromatin granules or coiled bodies. This would be consistent with EM data showing that splicing can take place on nascent transcripts of Drosophila pre-mRNA (Beyer and Osheim, 1988).
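The flux idea sketched in the preceding paragraph can be illustrated with a deliberately simple, hypothetical three-compartment rate model written in Python. None of the rate constants below come from this study, and the model is not proposed by the authors; the only point is to show how reducing a single transcription-dependent exchange rate shifts the steady-state snRNP pool out of coiled bodies and into interchromatin granule clusters, qualitatively mirroring the behavior described above.

# Toy flux model with three pools: nucleoplasm (n), coiled bodies (cb),
# and interchromatin granule clusters (igc). All rate constants are invented
# for illustration and have no experimental basis in this paper.
def steady_state(transcription=1.0, steps=100000, dt=0.01):
    n, cb, igc = 1.0, 0.0, 0.0            # start with the whole snRNP pool in the nucleoplasm
    k_n_cb = 0.5 * transcription          # entry into coiled bodies scales with transcription
    k_cb_n = 0.2                          # return from coiled bodies to the nucleoplasm
    k_n_igc = 0.3                         # entry into granule clusters
    k_igc_n = 0.4 * transcription         # release from granule clusters scales with transcription
    for _ in range(steps):                # simple forward-Euler integration to steady state
        flux_cb = k_n_cb * n - k_cb_n * cb
        flux_igc = k_n_igc * n - k_igc_n * igc
        n -= (flux_cb + flux_igc) * dt
        cb += flux_cb * dt
        igc += flux_igc * dt
    return round(n, 2), round(cb, 2), round(igc, 2)

print("active transcription:", steady_state(1.0))    # most snRNP in the nucleoplasm and coiled bodies
print("transcription halted:", steady_state(0.05))   # snRNP accumulates in granule clusters

Under this toy parameterization the model reproduces only the direction of the change, not its magnitude or mechanism; distinguishing among real mechanisms requires the kind of localization experiments described in this paper.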
Since after MEL cells are induced to differentiate individual genes are switched on and expressed at high levels, while the overall number of different transcripts being synthesized decreases, this provides a useful model system for studying the localization of specific pre-mRNAs at different stages of the processing pathway. MEL clones which can be induced to express wild-type and mutant β-globin transcripts at high levels (Collis et al., 1990; Antoniou and Grosveld, 1990) may help to identify subnuclear sites at which splicing and transport events take place. In this regard, the present analysis of snRNP localization represents an important preliminary study which should aid the interpretation of future analyses of mRNA localization.
At the extreme terminal stage of erythroid differentiation we have observed that splicing snRNPs, the coiled body protein p80 coilin, and antigens stained by mAb 3C5 are all excluded from the condensed, transcriptionally inactive chromatin before enucleation. The snRNPs (but not coilin) predominantly cluster in a membrane bound structure at the nuclear periphery. We have termed this novel structure a "SCIM" domain. Splicing snRNPs and other nuclear antigens are also displaced from chromosomes when they condense during mitosis (Verheijen et al., 1986; Leser et al., 1989). However, during mitosis snRNPs do not form the large membrane bound domains we observe before enucleation. Rather, they are mostly widely distributed throughout the mitotic cytoplasm. This may reflect the fact that, unlike the events preceding enucleation, chromatin condensation and other changes in nuclear structure occurring during mitosis are reversible phenomena.
It is not clear whether the snRNPs, and any other nuclear antigens in the SCIM domain, are actually expelled from the MEL cells during enucleation. Although we do not detect snRNP antigens in mature erythrocytes (unpublished observations), we cannot exclude that the SCIM domain is retained and subsequently degraded within the erythrocyte after the condensed chromatin is expelled. Given the significant degree of breakdown of the nuclear envelope we have observed before enucleation (see Fig. 10), it is apparent that loss of the "nucleus" from MEL cells does not correspond to the ejection of an entirely membrane enclosed organelle comparable to the interphase nucleus. Our data raise the possibility that "enucleation" may actually correspond to the ejection from the cell of an aggregate of condensed chromatin, from which many other components of the interphase nucleus have already been segregated. In this case the formation of the SCIM domain may play an important role in the mechanism that results in mammalian erythrocytes losing their nuclei. It will now be interesting to determine whether an analogous SCIM domain is formed at the terminal stage of avian erythroid differentiation, since nuclei are not lost from avian erythrocytes.
Whatever role it may play in the enucleation process, formation of the SCIM domain represents a striking example of a molecular sorting process involving splicing snRNPs. It also raises a number of interesting questions regarding the mechanism of the sorting process itself and, importantly, the mechanism whereby a similar effect is prevented from occurring before the extreme terminal stage of erythroid differentiation. These observations underline the inherent complexity of the mammalian nucleus and illustrate the potential for regulatory mechanisms that determine the association and structural organization of nuclear components.
Serine Proteases of Parasitic Helminths
Serine proteases form one of the most important families of enzymes and perform significant functions in a broad range of biological processes, such as intra- and extracellular protein metabolism, digestion, blood coagulation, regulation of development, and fertilization. A number of serine proteases have been identified in parasitic helminths that have putative roles in parasite development and nutrition, host tissues and cell invasion, anticoagulation, and immune evasion. In this review, we described the serine proteases that have been identified in parasitic helminths, including nematodes (Trichinella spiralis, T. pseudospiralis, Trichuris muris, Anisakis simplex, Ascaris suum, Onchocerca volvulus, O. lienalis, Brugia malayi, Ancylostoma caninum, and Steinernema carpocapsae), cestodes (Spirometra mansoni, Echinococcus granulosus, and Schistocephalus solidus), and trematodes (Fasciola hepatica, F. gigantica, and Schistosoma mansoni). Moreover, the possible biological functions of these serine proteases in the endogenous biological phenomena of these parasites and in the host-parasite interaction were also discussed.
INTRODUCTION
Proteases are enzymes that are widely distributed in nature and are found ubiquitously in eukaryotes, prokaryotes, and viruses. They catalyze proteolytic reactions, i.e., the hydrolysis of peptide bonds. Based on the key chemical groups in their active sites, proteases are typically categorized into 4 major classes: serine, metallo-, cysteine, and aspartic proteases. Of these classes, serine proteases are receiving increasing attention due to their diverse array of functions. They are involved in various aspects of physiological progression, such as digestion, apoptosis, signal transduction, blood coagulation, and wound healing through proteolytic cascade action [1]. Aside from their roles in the physiology of organisms, they also play crucial roles in the pathogenesis of a number of diseases, such as cardiopulmonary disease and emphysema [2].
Parasitic helminths are among the most important pathogens worldwide and are classified into nematodes (roundworms), trematodes (flukes), and cestodes (tapeworms). Humans are constantly threatened by infections with these pathogens, which cause a wide variety of infectious diseases. During the process of infection, proteases derived from parasites are thought to be significant factors for successfully establishing infection. Experimental evidence has shown that serine proteases are involved in a wide variety of events in the life cycle of helminths. The vast majority of serine proteases are digestive proteases involved in metabolic food processing or host tissue penetration. Additional serine proteases that are involved in reproduction, evasion of the host immune system, and development have also been characterized [3]. The essential roles of parasite serine proteases and their diverse activities make them attractive targets for the development of novel immunotherapeutic, chemotherapeutic, and serodiagnostic agents for the next generation of antiparasite interventions. The molecular and biochemical characterization of the serine proteases derived from these parasites is therefore central to the understanding of helminth-host interplay and the successful control of helminth infections. In this review, we summarized what is known of helminth serine proteases and their putative functions.
SERINE PROTEASES AND ENZYME MECHANISMS
Serine proteases are named because of the presence of a nucleophilic serine residue at the active site. The serine residue plays important roles in mediating protein hydrolysis. Most members of the serine proteases contain 3 essential residues at their active sites: a serine (Ser), a histidine (His), and an aspartate (Asp). Although these 3 residues are not contiguous in the linear protein sequence, they are close to each other in the active 3-dimensional conformation. Chymotrypsin, which is a major serine protease, is found in helminths. Chymotrypsin can be divided into 3 main subfamilies based on its substrate specificity: trypsin-like, chymotrypsin-like, and elastase-like. The proteases in these 3 subfamilies share a similar tertiary structure, but their substrate cleavage specificities differ: trypsin-like, in which cleavage of amide substrates follows Arg or Lys at the P1 position (Fig. 1A); chymotrypsin-like, in which cleavage occurs following 1 of the hydrophobic amino acids at P1 (Fig. 1B); and elastase-like, in which cleavage follows an Ala at P1 (Fig. 1C). These enzymes are usually synthesized as inactive precursor zymogens, which are converted to the smaller activated enzymes by cleavage processes involving a conformational change. Conformational change is necessary for hydrolytic activity. There are 4 steps involved in the chymotrypsin catalysis mechanism: substrate binding, nucleophilic attack, protonation, and deacylation. A variety of structural features are responsible for the catalytic effectiveness of these enzymes [4] (Fig. 2A-D). Subtilisin is another family of serine proteases and was also first found in prokaryotes. Subtilisin has a completely different protein structure, but it has the same catalytic residues and shares the same catalytic mechanism that utilizes a catalytic triad.

Figure 1. The pattern and characterization of the binding pocket responsible for specificity of serine proteases. (A) Trypsin specificity is due to a negatively charged aspartic acid (Asp) located in the base of the binding pocket. Thus, it specifically cleaves peptide bonds of positively charged residues, i.e., lysine (Lys) and arginine (Arg). (B) Chymotrypsin specificity is due to a deep hydrophobic pocket containing serine (Ser) and glycine (Gly). This contributes to specific cleavage of peptide bonds of large hydrophobic residues, i.e., phenylalanine (Phe), tryptophan (Trp), and tyrosine (Tyr). (C) Elastase has a much smaller binding pocket than trypsin or chymotrypsin and prefers to cleave peptides of small, neutral residues, such as alanine (Ala), glycine (Gly), and valine (Val).
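To make the P1 rules just described concrete, the minimal Python sketch below predicts candidate cleavage positions in a peptide from the residue immediately N-terminal to the scissile bond. It is only an illustration of the specificity rules summarized in Fig. 1, not a tool used in any of the studies reviewed here; the residue sets and the example peptide are simplified assumptions, and real specificity also depends on neighboring subsites.

# Simplified one-letter P1 residue sets for each specificity class (after the Fig. 1 rules).
P1_PREFERENCE = {
    "trypsin-like":      {"K", "R"},        # cleaves after Lys or Arg
    "chymotrypsin-like": {"F", "W", "Y"},   # cleaves after large hydrophobic residues
    "elastase-like":     {"A", "G", "V"},   # cleaves after small, neutral residues
}

def predicted_cleavage_sites(sequence, protease):
    # Return 1-based positions of P1 residues after which cleavage is predicted.
    # Only the P1 rule is encoded; context effects (e.g., Pro at P1', which
    # usually blocks cleavage in real enzymes) are ignored.
    allowed = P1_PREFERENCE[protease]
    return [i + 1 for i, residue in enumerate(sequence[:-1]) if residue in allowed]

peptide = "GASKFLRAWV"   # hypothetical substrate
for protease in P1_PREFERENCE:
    print(protease, predicted_cleavage_sites(peptide, protease))
# Expected output: trypsin-like [4, 7], chymotrypsin-like [5, 9], elastase-like [1, 2, 8]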
HELMINTH SERINE PROTEASES
A summary of the serine proteases identified in parasitic nematodes, cestodes, and trematodes is presented in Tables 1 and 2, and their individual characteristics are described as follows.
Serine proteases from Trichinella
Trichinella is an intracellular nematode that infects a wide variety of animals. The complete life cycle of the parasite is completed in a single host via the invasion of intestinal epithelial and skeletal muscle cells. A recent study of proteases throughout the life cycle of Trichinella spiralis found that excretion-secretion (ES) products and crude extracts of muscle stage larvae show substantial serine protease activity against structural proteins, whereas newborn larvae and adult worms principally degrade hematic proteins. This stage-specific proteolytic activity contributes to the breakdown of both mechanical and humoral barriers within the host during parasite infection. These serine proteases are targets of the antibody response, which can inhibit the protease activity and possibly contribute to the impairment of the parasite in a sensitized host [5,6].
During the invasion of epithelial cells, the larvae released several glycoproteins that bear the highly antigenic sugar moiety, tyvelose (3,6-dideoxyarabinohexose). Monoclonal antibodies against tyvelose protect against infection, which indicates that tyvelose-bearing glycoproteins play key roles in intestinal epithelium invasion and niche establishment. With the aim of investigating these glycoproteins at the molecular level, Romaris et al. [7] first isolated the glycoproteins by affinity chromatography using monoclonal antibodies (mAbs). De novo peptide sequencing combined with cDNA library screening identified these glycoproteins as serine proteases (TspSP-1). Western blot analysis and immunohistochemistry indicated that these glycoproteins are muscle larvae (ML) stage specific and are synthesized in α stichocytes. Furthermore, the inhibition of epithelial cell invasion and migration by mAbs against TspSP-1 indicated that TspSP-1 could play an important role in degrading cytoplasmic or intercellular proteins, thereby facilitating the movement of the larvae [7]. Subsequently, Nagano et al. [8] also isolated a serine protease, named Ts23-2, from a cDNA library of T. spiralis muscle larvae. The Ts23-2 gene is only transcribed after the completion of cyst formation. The protease activity of the recombinant catalytic domain was confirmed using synthetic peptide substrates, indicating that it is a plasmin-like protease [8].
Recently, another member of this subfamily, named TspSP-1.2, was characterized. The anti-serum against TspSP-1.2 can partially prevent the larval invasion of intestinal epithelial cells. Furthermore, the recombinant TspSP-1.2 protein induced a partial protective immunity in mice. These results indicated that TspSP-1.2 contributes to the larval invasion of host intestinal epithelial cells and could be a potential vaccine candidate against T. spiralis infection [9]. A similar protein (TppSP-1) from Trichinella pseudospiralis muscle larvae was identified by Cwiklinski et al. [10]. Analysis of the deduced amino acid sequence found that the histidine residue of the catalytic triad in TspSP-1 was replaced with an arginine residue in TppSP-1. This could lead to the loss of proteolytic activity, and its role in the T. pseudospiralis-host interaction needs further research [10]. Trap et al. [11] identified another putative serine protease by screening a library from the T. spiralis adult-newborn larvae mixed stage with a radioisotope-labelled DNA probe. TsSerP contains 2 trypsin-like serine protease domains flanking a hydrophilic domain. Northern blot analysis of the expression profile for TsSerP genes demonstrated that it was expressed in all life cycle stages of the parasite. Western blot analysis using soluble and ES antigens found that it was not detected in ES products. Immunolocalization showed that TsSerP is expressed in the peripheral regions and the esophagus of T. spiralis muscle larvae and adult worms. Thus, TsSerP may be involved in the parasite's moulting process and digestive function [11].
Liu et al. [12] identified a newborn larval stage-specific serine protease gene (NBL1) via a subtractive cDNA library of T. spiralis newborn larvae. It includes 2 regions, a catalytic domain and a C-terminal domain. Epitope mapping using truncated variants of rNBL1 indicated that the C-terminal part of NBL1 is the main immunodominant region. NBL1 showed encouraging potential in the early detection of Trichinella infection and protective immunity against T. spiralis infection in pigs [13]. Based on the high immunogenicity of the C-terminal domain, we hypothesized that during the newborn larval invasion of the host, it may divert the immune response away from the functional regions of NBL1 to contribute to host invasion.
The multiple serine proteases identified at different stages of T. spiralis indicated the existence of a superfamily of serine proteases in T. spiralis. Each member of serine protease families may have different functions in parasite infection, which depends on stage-specific expression, location, and the presence of a regulation domain of the serine protease. Further studies are required to fully understand their functions in parasitehost interplay.
Serine proteases from Trichuris muris
T. muris is a parasitic nematode of mice in which an infective larva invades host intestinal mucosa and develops into an adult worm. The anterior portion of an adult worm embeds within a syncytial tunnel derived from host cecal epithelium. There are 2 major serine peptidases with specific activity for collagen-like molecules in the ES antigens of T. muris adult worms. Interestingly, the activity of both serine peptidases was not observed in worm extract, which suggests that the enzymes are present in the adult worm as inactive precursors or inactivated by endogenous peptidase inhibitors. The ability for degradation of the basement membrane by live worms suggested that these peptidases could be involved in the invasive process. These peptidases also could play important roles in the production and subsequent maintenance of the parasites' syncytial habitat. Moreover, authors speculated that they could contribute to the pathology of trichuriasis by disrupting the integrity of epithelial cell membranes [14].
The intestinal mucus barrier plays a significant role in the expulsion of gastrointestinal nematodes. The mucin is pivotal during the formation of the mucus layer. A recent study found that serine proteases secreted by T. muris can degrade the major intestinal mucin Muc2 and depolymerize the mucin network. Thus, it was suggested that serine proteases secreted by T. muris could contribute to the modification of the parasitic niche to prevent clearance from the host or to facilitate efficient mating and egg laying [15]. Given its essential role in the intestinal stage, this serine protease could be an attractive drug target and should be characterized at the molecular level.
Serine proteases from Anisakis simplex
Anisakiasis is a gastrointestinal tract disease, which is caused by the consumption of raw or undercooked seafood that contains larvae of the nematode A. simplex. After ingestion, A. simplex larvae penetrate the mucosa, submucosa, and muscularis of the host stomach and intestine and may migrate to the omentum, liver, pancreas, or gall bladder. Sakanari and McKerrow [16] found that the secretion of infective larvae contains trypsin-like serine proteases that degrade connective tissue extracellular matrix (ECM). Judy et al. [17] isolated 4 serine protease genes of A. simplex using degenerate oligonucleotide probes based on the consensus regions of mammalian serine proteases. One of these genes is 67% identical to the rat trypsin II gene. Alignment of these 2 genes revealed that the intron-exon junctions are conserved between nematodes and rats, confirming the structural and functional similarity of the 2 genes. Thus, serine proteases of infective larvae may be involved in the degradation or digestion of host tissue [17].
Meanwhile, Stephen et al. [18] purified 2 serine proteases from the infective larvae of A. simplex using different affinity chromatography approaches. The 26 kDa-protease is similar to the trypsin of the domestic pig, Sus scrofa. The second serine protease was similar to the extracellular serine protease of the pathogenic bacterium Dichelobacter nodosus, which can degrade elastin, keratin, and collagen. The mechanism involved in the tissue destruction caused by the 2 serine proteases is far from being decrypted and needs further research [18].
Serine proteases from Ascaris suum
A. suum, also known as the large roundworm of pigs, is an intestinal nematode. During the process of A. suum sperm activation, sperm differentiates from immature spermatids into mature and motile spermatozoa. A recent study of sperm activation indicated that a serine protease is responsible for A. suum sperm activation. This serine protease was purified and identified by de novo sequencing. The purified protease showed strong activity in sperm activation, and it can be inhibited by the serine protease inhibitor PMSF. Finally, the full-length cDNA, named As-TRY-5, was cloned by RACE-PCR using degenerate primers based on the peptide sequence. Sequence comparisons indicated that As-TRY-5 shares a high degree of homology with trypsin-like serine proteases of eukaryotes. A further study using a serine protease inhibitor (As-SRP-1) that is released by the activated sperm indicated that during the spermatogenesis process, the activity of As-TRY-5 is regulated by this serine protease inhibitor. This could be significant during postcopulatory sexual selection [19].
Serine proteases from filarial worms
Onchocerca volvulus is an important filarial nematode that causes subcutaneous filariasis of humans and affects the eyes and skin. The infective larvae, male worms, and microfilariae migrate through the host tissue. A proteolytic activity study indicated that there is a 40 kDa neutral elastase in ES products of microfilariae, which can degrade components of the dermal extracellular matrix, collagen type IV, fibronectin, and laminin but cannot degrade intact immunoglobulins. Based on this proteolytic activity, the authors suggested that the elastase of microfilariae plays an important role in the degradation of elastic fibres of the host tissue. Moreover, the elastase proteolytic activity is also present in males, but absent in ES products of females. This is correlated with the different behavior of adult worms; the adult males are able to migrate from 1 nodule to another, whereas adult females only reside in nodules [20]. In addition, a stage-specific 43-kDa serine elastase was also found in infective larvae of Onchocerca lienalis. The specific elastase of L3 larvae most likely contributes to L3 migration from the blackfly bite site to different tissues of the body where the adults will develop and form nodules [21]. Blisterase is a subtilisin-like serine protease and plays important roles in nematode biology, including cuticle production and maintenance, neural signalling, and nematode development. Thus, it is a potential drug target for controlling parasite infection. Catherine et al. [22] isolated blisterase from a cDNA library of the infective larvae (L3) of O. volvulus. A fragment of blisterase was cloned and expressed in insect cells with maximal activity at a neutral pH. However, the roles of the blisterase in the O. volvulus-host interaction remain unknown [22].
Complement plays multiple roles in both innate and adaptive immunity, such as mediating the adherence of myeloid cells to the parasite, subsequently killing the parasite, and directing cellular recruitment. Rees-Roberts et al. [23] reported that secreted products of Brugia malayi microfilariae can cleave C5a and completely abolish C5a-mediated chemotaxis of human peripheral blood granulocytes. This cleavage was blocked by a serine protease inhibitor, indicating that 1 or more types of serine proteases are responsible for the cleavage of C5a. It has been speculated that serine proteases from B. malayi may suppress the immune system and induce immune tolerance, hindering parasite elimination [23].
Serine proteases from Ancylostoma caninum
Hookworm disease results from infection by a hematophagous nematode of the genus Ancylostoma that lives in the small intestine of the host. More than 1 billion people worldwide are currently infected with these parasites. Hookworms cause anemia by extracting host blood from lacerated capillaries in the mucosa of the small intestine over an extended period of time. Peter et al. [24] found a 36 kDa elastolytic enzyme with anticlotting properties in ES products of third-stage infective filariform larvae of Ancylostoma caninum. This elastolytic enzyme interferes with fibrin clot formation and promotes fibrin clot dissolution. The protease can degrade fibrinogen into 5 smaller polypeptides with anticoagulant properties. In addition, this protease can convert plasminogen to a mini-plasminogen-like molecule. This molecule is analogous to leukocyte elastase and could be related to the specific antihemostatic mechanism of the hookworm. According to these results, the authors hypothesized that the parasite uses this enzyme to feed on the villous capillaries by preventing the blood from clotting. Thus, this protease is a potential target for chemotherapeutic or immunological intervention [24].
Serine proteases from Steinernema carpocapsae
S. carpocapsae is a parasitic nematode of insects and is used as a biological control agent to kill several insect pests and vectors. The infective juvenile can enter the host by mouth or anus and invade the hemocoelium. Invasion has been described as a key factor in nematode virulence and is mediated by proteases. Recently, 2 novel serine protease cDNAs (sc-sp-1 and sc-sp-3) from the parasitic stage were isolated by degenerate RT-PCR based on conserved motifs near the catalytic histidine of serine proteases. Analysis of the sc-sp-1 expression time frame showed that sc-sp-1 is expressed stage-specifically during the parasitic stages. It is mainly expressed by L3 nematodes in the midgut of the insect, where the nematodes prepare to invade the insect hemocoelium. Further, analysis of the influence of insect tissue on sc-sp-1 expression showed that different tissues of the insect can induce the expression of sc-sp-1 at different times. This could contribute to the parasite's ability to sense insect tissues at different time points. The peritrophic membrane of the gut wall and the basal lamina are major barriers of host tissue invasion. The study showed that sc-sp-1 was highly efficient at destroying the peritrophic membranes and caused epithelial cell detachment from the basal lamina. Thus, the function of sc-sp-1 could be the invasion of the hemocoelium through the disruption of the midgut barrier [25]. Sc-sp-3 is a multifunctional chymotrypsin-like protease. It not only shares similar biochemical characteristics with sc-sp-1 but also induces caspase-dependent apoptosis in Sf9 insect cells [26]. Recently, a stage-specific elastase-like serine protease gene (Sc-ELA) was isolated from the parasitic stage by the suppression subtractive hybridization method. Sequence comparison and evolutionary marker analysis revealed that Sc-ELA is a member of the elastase serine protease family with potential degradative, developmental, and fibrinolytic activities [27].
In addition to these serine proteases, Balasubramanian et al. [28,29] purified 2 insect immune depression-related serine proteases from the ES products of infective-stage S. carpocapsae.
Melanotic encapsulation, which is formed by the deposition of multiple layers of hemocytes and/or melanin, is an important insect defence mechanism against parasites. The trypsin-like serine protease and chymotrypsin-like serine protease can prevent melanotic encapsulation by suppressing prophenoloxidase activity or by disrupting the insect hemocyte F-actin cytoskeleton. Although this experimental evidence did not fully elucidate the exact biological roles of the serine proteases during host immune suppression, it contributes to the understanding of the pathogenesis strategy used by S. carpocapsae. Further biochemical and molecular characterization of Sc-Trypsin and Sc-chymotrypsin is required for a complete delineation of their possible functions in helping parasites to infect and survive within the host [28,29].
CESTODE SERINE PROTEASES
Cestodes reside in the digestive tract of their host as adults. However, the larvae are involved in tissue invasion and can migrate into some visceral organs and the central nervous system, causing a range of serious diseases such as sparganosis, echinococcosis, and neurocysticercosis. Some serine proteases involved in host tissue invasion and immune evasion have been characterized in cestodes.
Sparganosis caused by the plerocercoid larvae of Spirometra mansoni usually results from ingesting contaminated food or water. The parasite can migrate to any part of the body, but it usually resides in the skin where it develops into a nodule. Kong et al. [30] purified 3 neutral serine proteases from the extracts of the plerocercoids. Analysis of proteolytic activities showed that 2 trypsin-like proteases of 198 and 104 kDa have collagenolytic activities; however, the 36 kDa chymotrypsin-like serine protease prefers to cleave human recombinant interferon-γ and bovine myelin basic protein. In addition, all purified proteins elicited strong antibody responses in infected patients, suggesting that they could be potential antigens in serologic diagnosis of human sparganosis [30].
Cystic echinococcosis (CE), caused by Echinococcus granulosus, has public health importance not only in areas of endemicity but also in countries or regions where the migration of infected people and exchange of livestock occur. Antigen 5 (Ag5) is a major secreted component of the larvae of E. granulosus. It has been used as a diagnostic antigen for detection of echinococcosis in humans for many years. To characterize its biological function, Lorenzo et al. [31] isolated the Ag5 gene by RT-PCR on the basis of the amino acid sequences of tryptic fragments. Analysis of the nucleotide sequence indicated that Ag5 is synthesized as a single polypeptide chain, which afterwards is processed into 2 subunits. The 22-kDa subunit contains a highly conserved glycosaminoglycan-binding motif. This motif may help confine Ag5 to the host tissue surrounding the parasite. The amino acid sequence of the 38 kDa subunit shows high similarity to serine proteases of the trypsin family, specifically to the neutral proteases of mast cells. However, the catalytic serine residue is replaced by threonine. The biochemical characterization of Ag5 showed that neither proteolytic activity nor binding to protease inhibitors could be found in native purified Ag5. This intriguing feature of Ag5 suggests that it could possess a highly specific substrate or a specific activation step to carry out a new biological function [31].
Furthermore, immunolabelling with specific antibodies against rAg5 showed that Ag5 is strongly expressed in the tegument of the protoscolex and the embryonic membrane of the egg as well as on the surface of the oncosphere. Meanwhile, it is also weakly expressed in the tegument of the adult. Nevertheless, the roles of Ag5 remain unknown, but the expression in all stages of the life cycle confirms that Ag5 is a potential antigen for use in diagnosis and vaccine development in both intermediate and definitive hosts [32].
In a recent study, a trypsin-like serine protease of Taenia solium cysticercus termed TsAg5 was identified. It is the first serine protease gene characterized from T. solium so far, and it is highly homologous to E. granulosus antigen Ag5. Western blot analysis showed that TsAg5 can be detected in the cyst fluid and ES antigens of the cysticercus. The recombinant trypsinlike domain of TsAg5 showed trypsin-like activity and can be inhibited with chymostatin. Furthermore, evaluation of the diagnostic potential of this domain in detecting human cysticercosis by immunoblot assay showed that the trypsin-like domain was moderately sensitive and specific for neurocysticercosis [33].
Schistocephalus solidus is a tapeworm that infects fish. A 23.5-kDa chymotrypsin-like serine protease with collagenolytic activity was identified in the extracts of procercoids. However, it was absent in plerocercoids and adults. The specific expression of the chymotrypsin-like serine protease in procercoids may be necessary for procercoid invasion via the penetration of the host's intestinal wall [34].
Although increasingly fruitful reports on serine proteases suggest their importance in cestode infections, they still remain to be extensively characterized and assessed for their therapeutic values.
TREMATODE SERINE PROTEASES
Serine proteases from the liver fluke
Fasciola hepatica and F. gigantica are the parasites that cause liver fluke disease (fascioliasis). It is not only an important human disease but it also affects cattle and sheep. Infection causes worldwide economic losses of approximately 2 billion dollars per year. Carmona et al. [35] purified a secreted dipeptidylpeptidase (DPP) from F. hepatica by gel-filtration and ion-exchange chromatography. It was found to be a serine protease that is expressed in newly excysted juvenile, immature, and mature flukes. The authors suggested that the parasite DPP may function in the digestion of host macromolecules into peptides. These peptides could be absorbed as nutrients by the parasite's intestine, which could benefit the parasite [35]. In a recent study, a 60-kDa neutral serine protease designated as serine PIc was separated from F. gigantica. A study of its biological characteristics found that the activity and stability of the serine protease depended on divalent cations and temperature. Enzyme activity assays indicated that proteolytic activity increased with the development of F. gigantica, which suggests that it has a very important physiological role, but the precise function remains unknown [36].
Serine proteases from schistosomes
Schistosomiasis is a serious human disease in the tropics, which affects millions of people. Infection of humans by Schistosoma mansoni begins following the invasion of intact skin by the cercariae. The penetration of the skin is facilitated by secretions from the acetabular and head glands. Disruption of these potential mechanisms by specific drugs/vaccines may provide therapeutic benefits. Thus far, a number of studies have confirmed that cercarial elastase (SmCE), which has a chymotrypsin-like activity, is a major histolytic protease involved in skin invasion. Northern blot analysis indicated that it is stage-specific and is only expressed in cercariae. Further anatomical localization of SmCE mRNA in tissue sections of developing larvae showed that it is only synthesized in acetabular gland cells of developing cercariae. This further indicated that SmCE is regulated within a limited developmental frame in a specialized cell [37]. A subsequent protease activity assay indicated that a serine protease with "trypsin-like" activities from secretions of cercariae could also be involved in host invasion [38]. To evaluate the relative roles of these 2 serine proteases in larval invasion, both serine proteases were analyzed by Southern blot, genomic PCR, and immunolocalization. These results demonstrated that only a single SmCE with activity against macromolecular substrates is responsible for human skin invasion, and the serine protease with trypsin-like activities is a contaminant from the intermediate host snail [39].
To date, 8 isoforms of S. mansoni stage-specific elastases have been identified based on amino acid and promoter sequence homology. In addition, investigation of SmCE ortholog genes in the related species Schistosoma haematobium and S. douthitti found that multiple CE isoforms exist in both species [40]. By contrast, in another schistosome species, Schistosoma japonicum, no SmCE ortholog was identified. This result indicates that the expansion of the cercarial elastase genes is limited to the human-specific schistosomes [41,42]. To further explore the roles of SmCE gene expansion in S. mansoni, James et al. [43] investigated the transcript and protein expression patterns and substrate preferences of the expanded SmCE gene family. The results revealed that these SmCE isoforms are similarly expressed throughout the S. mansoni life cycle and have largely similar substrate specificities. Based on these results, the authors suggested that the majority of protease isoforms share a conserved function for a common pool of substrates [43]. Thus, the expansion of the SmCE gene family is functionally redundant and represents a direct increase in the amount of protease produced. In addition, activity-based profiling showed that SmCE activity was also present in 6-week daughter sporocysts, suggesting that 1 or more of the SmCE isoforms could have a novel role in the intermediate host [43].
Proteases have long been hypothesized to aid parasites in evading the host immune response by degrading immune effectors or modulating the cellular immune response. Studies have shown a direct correlation between levels of specific IgE and protective immunity against schistosomes in humans [44,45]. Analysis of the immune evasion ability of the parasite showed that extracts from the cercarial and schistosomular stages of S. mansoni can cleave human, mouse, and rat IgE, but not human IgA1, IgA2, or IgG1. This cleavage can be inhibited by serine protease inhibitors, indicating that during the establishment of mature infections an elastase-like serine protease helps the parasite to evade IgE-mediated protective reactions [46]. A recent study using highly purified SmCE found that SmCE cleaves IgE at solvent-exposed interdomain regions of the IgE-Fc. This cleavage sequence is also present in numerous key molecules involved in regulating immunity, including FcγRI, IL-2, IL-10R, IL-12R, and TLR3. Thus, additional studies are required to identify further genuine substrates for SmCE, which will help us to fully understand the roles of SmCE in immune evasion [47].
To confirm such a role in vivo would require analysis of the immune clearance of parasites in a living host following chemical or genetic knockout of the protease. Because the degradation of immune effectors is vital for parasite immune evasion, the inhibition of this degradation pathway offers a valid approach for developing novel chemotherapeutic agents. Thus, further investigation to determine serine protease biological function and the evaluation of its potential as a drug target are needed.
CONCLUSIONS AND PERSPECTIVES
Worm diseases remain major neglected diseases of humanity in many regions, especially in developing countries. Because of emerging drug resistance and the inability of current drugs to prevent reinfection, the identification of novel chemotherapeutic agents and vaccines for protection from helminth pathogens is a public health priority. The challenge of developing new therapies involves several steps, the first of which is to identify and characterize potential targets of drug or vaccine treatments. This review presents evidence that serine proteases not only have important functions in the regulation of endogenous physiological processes of parasitic helminths, but are also actively involved in host-parasite interactions. These findings will undoubtedly make serine proteases an exciting field of helminth research. The newly available genome sequences of some helminths, combined with large EST libraries, should greatly facilitate future work in this area; for example, bioinformatic analysis of genome sequence datasets along with proteomic and microarray data will accelerate the identification of more serine proteases. Useful tools to characterize helminth serine protease functions such as developmental regulation, fertilization, invasion of host tissues, and immune evasion include crystallography for the determination of 3D structures, RNA interference for the silencing of gene expression, and monoclonal antibodies for the inhibition of protease activity, as well as colocalization studies to validate an association between a particular serine protease and a putative substrate. Such comprehensive analysis not only expands the growing knowledge base regarding helminth serine proteases but also provides a platform for the exploration of their biological functions and potential as targets of effective chemotherapeutic or immunological treatments. Anthelmintic drugs found via high-throughput screening for small-molecule inhibitors of some of the critical serine proteases involved in the host-parasite interplay will become a practical reality in the near future. Moreover, anthelmintic drug discovery needs to take into account not only the target enzymes within the parasite but also similar enzymes in the host, because inhibition of host enzymes can result in toxicity to the host. Thus, the increasing information that is becoming available on the serine proteases of humans is also beneficial to parasitologists. In addition, substrate identification will undoubtedly yield insight into many different areas of helminth biology. The fine specificity of the relationships between serine proteases and their substrate proteins could provide a new molecular paradigm for understanding host-parasite co-evolution. | 2017-06-19T23:10:09.504Z | 2015-02-01T00:00:00.000 | {
"year": 2015,
"sha1": "b7451da60beb040de8cfa7ef4b2878ead9350cc9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3347/kjp.2015.53.1.1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7451da60beb040de8cfa7ef4b2878ead9350cc9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247793446 | pes2o/s2orc | v3-fos-license | Malnutrition outweighs the effect of the obesity paradox
Abstract Background High body mass index (BMI) is paradoxically associated with better outcome in patients with heart failure (HF). The effects of malnutrition on this phenomenon across the whole spectrum of HF have not yet been studied. Methods In this observational study, patients were classified by guideline diagnostic criteria into one of three heart failure subtypes: reduced (HFrEF), mildly reduced (HFmrEF), and preserved ejection fraction (HFpEF). Data were retrieved from the Viennese‐community healthcare provider network between 2010 and 2020. The relationship between BMI, nutritional status reflected by the prognostic nutritional index (PNI), and survival was assessed. Patients were classified by the presence (PNI < 45) or absence (PNI ≥ 45) of malnutrition. Results Of the 11 995 patients enrolled, 6916 (58%) were classified as HFpEF, 2809 (23%) as HFmrEF, and 2270 (19%) as HFrEF. Median age was 70 years (IQR 61–77), and 67% of patients were men. During a median follow‐up time of 44 months (IQR 19–76), 3718 (31%) of patients died. After adjustment for potential confounders, BMI per IQR increase was independently associated with better survival (adj. hazard ratio [HR]: 0.91 [CI 0.86–0.97], P = 0.005); this association remained significant after additional adjustment for HF type (adj. HR: 0.92 [CI 0.86–0.98], P = 0.011). PNI was available in 10 005 patients and was lowest in HFrEF patients. PNI was independently associated with improved survival (adj. HR: 0.96 [CI 0.95–0.97], P < 0.001); additional adjustment for HF type yielded similar results (adj. HR: 0.96 [CI 0.96–0.97], P < 0.001). Although obese patients experienced a 30% risk reduction, malnutrition at least doubled the risk of death, with 1.8‐ to 2.5‐fold higher hazards for patients with poor nutritional status compared with normal‐weight, well‐nourished patients. Conclusions The obesity paradox seems to be an inherent characteristic of HF regardless of phenotype and nutritional status. Yet malnutrition significantly changes the trajectory of outcome compared with BMI alone: obese patients with malnutrition have a considerably worse outcome than their well‐nourished counterparts, outweighing the protective effects of high BMI alone. In this context, routine recommendations towards weight loss in patients with obesity and HF should generally be made with caution, and the focus should be shifted to nutritional status.
Introduction
Obesity as a well-established risk factor is known to greatly increase the risk for the development of cardiovascular disease including heart failure (HF), making obesity the world's leading preventable risk factor for early death. 1 Paradoxically, once a patient develops HF, a high body mass index (BMI) seems to confer a survival advantage compared with leaner individuals, a phenomenon commonly referred to as the 'obesity paradox'. 2 Several explanations have been put forth to explain this association. To date, it is not entirely clear whether this is a true phenomenon or a consequence of methodological limitations such as confounding or reverse causation. Earlier appearance of symptoms in obese individuals on the one hand and disease-associated weight loss, smoking status, and muscle wasting on the other have been identified as possible reasons for the obesity paradox. 3,4 Although often ignored, malnutrition is highly prevalent among patients with HF and is associated with poor prognosis, prolonged hospital stays, and poor quality of life, especially at advanced disease stages. [5][6][7] Numerous studies have outlined the importance of nutritional assessment in clinical practice, especially in target groups at risk such as HF. Importantly, malnutrition is not only common in underweight/lean individuals but also in those who are overweight, obese, or even morbidly obese. 8 Emerging data imply a close link between malnutrition and markers of systemic inflammation, generating the hypothesis that increasing adiposity may be protective against the malnutrition-inflammation-cachexia complex that is characteristic of advanced stages of chronic HF. 5 In recent years, several nutritional screening tools have been advocated to assess nutritional status in HF patients. 8 The Prognostic Nutritional Index (PNI) provides a simple and objective tool that has been widely used for evaluating nutritional status. [8][9][10][11] Previous studies investigating the impact of obesity were largely performed in patients with HF with reduced ejection fraction (HFrEF) and with preserved ejection fraction (HFpEF) or in chronic HF regardless of ejection fraction. 2 The relationship between outcome and BMI in patients with HF and mildly reduced ejection fraction (HFmrEF) is less clear and has rarely been addressed. 12 Therefore, this study aims (i) to explore the relationship between nutritional status and BMI across the whole spectrum of HF and (ii) to investigate the impact of nutritional status, reflected by PNI, on the obesity paradox across HF phenotypes.
Study population
A single centre, retrospective observational study design has been followed. Patients with chronic HF were enrolled between 2010 and 2020 at the Medical University of Vienna. Detailed study selection criteria have been described before. 13 Briefly, medical health records and echocardiography database were used to identify patients with HFpEF, HFmrEF, and HFrEF following algorithms that comply with the current guideline diagnostic criteria. 14 This database includes inpatients and outpatients from the Medical University of Vienna. In accordance with the current guidelines, the following algorithms were applied to identify patients with the respective HF phenotype: In patients with mildly reduced or preserved left ventricular function (ejection fraction above 40%), at least one of the following criteria were required: structural heart disease as identified by left atrial enlargement and/or left ventricular hypertrophy, or diastolic dysfunction. Moreover, the presence of elevated N-terminal pro brain-type natriuretic peptide (NT-proBNP) values (>125 pg/mL) and symptoms as well as signs of HF were mandatory for study inclusion. HF with a significant reduction in left ventricular systolic function, that is, left ventricular ejection fraction <40%, was designated as HFrEF.
Echocardiographic exams with missing values of interest and patients with primary valve disease were excluded from the study. Likewise, patients without signs and symptoms of HF, with NT-proBNP values below 125 pg/mL and without measurement of height and weight, were excluded from the analysis. The final study group consisted of 11 995 individuals stratified according to the HF phenotypes.
Targeted keyword search and designated coding from the International Statistical Classification of Diseases (ICD) and Related Health Problems allowed the collection of medical history and laboratory parameters from the electronic local health record database. The process for assigning diagnostic codes is standardized at the Medical University of Vienna. At the time of patient discharge or outpatient presentation, healthcare professionals assign ICD-codes based on newly diagnosed diseases and the patient's medical history. Routine laboratory parameters were analysed from venous blood samples according to the local laboratory's standard procedure. The study was approved by the institutional ethics review board of the Medical University of Vienna (IRBNr: 2137) with a waiver for informed consent.
Prognostic nutritional index
Albumin levels and total lymphocyte counts were routinely measured during outpatient visits or on admission. Nutritional status was assessed by the PNI according to the following formula: PNI = albumin (g/L) + 5 × total lymphocyte count (10⁹/L). Lower PNI scores indicate worse nutritional status. We used an established cut-off score of 45 to stratify patients into two groups: low PNI (<45) vs. high PNI (≥45). 11,15,16 As a quality control measure, we investigated the optimal threshold for our study cohort using the Youden index. Concordantly, the optimal threshold value of the PNI score to identify individuals at risk was 46.5.
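To make the index concrete, here is a minimal sketch of the PNI calculation and the dichotomization used above; the column names and the pandas-based workflow are illustrative assumptions, not the authors' code.

```python
import pandas as pd

def prognostic_nutritional_index(albumin_g_per_l, lymphocytes_1e9_per_l):
    """PNI = albumin (g/L) + 5 x total lymphocyte count (10^9/L); lower = worse nutrition."""
    return albumin_g_per_l + 5.0 * lymphocytes_1e9_per_l

# Hypothetical patient table with routine laboratory values
patients = pd.DataFrame({
    "albumin_g_per_l": [38.0, 44.5, 31.2],
    "lymphocytes_1e9_per_l": [1.1, 2.0, 0.8],
})
patients["pni"] = prognostic_nutritional_index(
    patients["albumin_g_per_l"], patients["lymphocytes_1e9_per_l"]
)
# Established cut-off used in the study: PNI < 45 marks malnutrition
patients["malnourished"] = patients["pni"] < 45.0
print(patients)
```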
Endpoints and follow-up
The primary endpoint was defined as all-cause mortality and was assessed via record linkage with the Austrian Death Registry.
Echocardiographic assessment
Standard transthoracic echocardiographic (2D, Doppler) examinations were performed using commercially available equipment (Vivid E7 and E9, GE Healthcare, Chicago, IL and Acuson S2000, Siemens, Berlin, Germany) according to the current guidelines. 17 Cardiac morphology was assessed in standard four- and two-chamber views. Left ventricular systolic function was graded according to ejection fraction cut-offs. According to the local laboratory standard, left ventricular ejection fraction cut-offs were ≥50% for HFpEF, 40-49% for HFmrEF, and <40% for HFrEF. Semiquantitative assessment of right heart function was performed by experienced readers using multiple acoustic windows and graded as normal, mild, mild-to-moderate, moderate, moderate-to-severe, and severe. Valvular regurgitation was quantified using an integrated approach and graded as none, mild, mild-to-moderate, moderate, moderate-to-severe, and severe. Systolic pulmonary artery pressures were calculated by adding the peak tricuspid regurgitation systolic gradient to the estimated central venous pressure.
Statistical analysis
Continuous data are expressed as median and interquartile range (IQR) and categorical data as count and percentages.
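The remainder of the statistical analysis section is truncated in this extraction. Purely as a hedged illustration of the kind of adjusted Cox proportional hazards model reported in the Results (BMI scaled per IQR, adjusted for confounders and HF type), a sketch using the lifelines package follows; the file name, column names, and confounder set are assumptions rather than the authors' actual pipeline.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per patient
# follow_up_months: time to death or censoring; died: event indicator (1 = death)
df = pd.read_csv("hf_cohort.csv")  # assumed file containing the columns used below

# Scale BMI per interquartile range so the hazard ratio matches the "per IQR increase" reporting
iqr = df["bmi"].quantile(0.75) - df["bmi"].quantile(0.25)
df["bmi_per_iqr"] = df["bmi"] / iqr

# Assumed confounder set plus dummy-coded HF type
covariates = ["bmi_per_iqr", "age", "sex_male", "nt_probnp_log", "hf_type_mref", "hf_type_ref"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["follow_up_months", "died"]],
        duration_col="follow_up_months", event_col="died")
cph.print_summary()  # adjusted hazard ratios with 95% confidence intervals
```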
[Figure 1. The obesity paradox across the spectrum of heart failure (HF): distribution of body mass index (BMI) (left), hazard ratios (HR) for all-cause mortality with 95% confidence intervals (CI) according to BMI strata (middle), and restricted spline curves examining the association of BMI and outcome (right), shown for heart failure with preserved ejection fraction (HFpEF), heart failure with mildly reduced ejection fraction (HFmrEF), and heart failure with reduced ejection fraction (HFrEF).]
Table 1 shows the baseline characteristics of the overall cohort and for the respective HF subtypes.
Impact of body mass index on outcome for the overall cohort and across the heart failure spectrum
During a median follow-up time of 44 months (IQR 19-76), a total of 3718 (31%) deaths were observed. Our results demonstrate a near U-shaped association between BMI and mortality for the overall cohort of HF, with an inverse relationship in individuals with a BMI < 35 kg/m 2 (Supporting Information, Figure S1). BMI was consistently associated with a lower risk for all-cause mortality in both the HF phenotypes and various subgroups. The Kaplan-Meier estimates for overall survival at 4 years differed significantly between the HF groups, with worse survival in the HFrEF population (74.2% for HFpEF, 73.5% for HFmrEF, and 66.4% for HFrEF; log-rank P < 0.001). Cubic spline modelling and the relative hazards for BMI strata and outcome for the respective HF phenotypes are depicted in Figure 1. The association of increasing BMI values and favourable outcome was consistent across all HF phenotypes. Compared with patients with BMI levels between 22.5 and 24.9 kg/m 2 (reference group), patients in the lower BMI category (<22.5 kg/m 2 ) had significantly worse outcome, while individuals in higher BMI categories (>25 kg/m 2 ) had a significant survival advantage independent of HF phenotype. Figure 4 illustrates the relationship between BMI and outcome depending on malnutrition status for the overall cohort and HF phenotypes. Although a higher BMI was associated with generally favourable outcome in both PNI groups separately (Supporting Information, Table S1), the hazard for patients with poor nutritional status (low PNI) was 1.8- to 2.5-fold higher compared with the reference group of normal-weight, high-PNI patients. Kaplan-Meier analysis confirmed the survival advantage in obese patients with high PNI, whereas the outcome became less favourable with decreasing BMI and especially low PNI for all HF phenotypes.
Main findings
This study reinforces the evidence of an obesity paradox across HF regardless of sex, ischaemic aetiology of HF and especially HF phenotype with similar findings for HFpEF, HFmrEF, and HFrEF in a comprehensive cohort of HF patients. Our data underscore the importance of nutritional status in relation to the obesity paradox. While increasing BMI is indeed associated with better outcomes in both normal and malnourished patients, obesity alone cannot outbalance malnutrition, with worse prognosis observed in obese but malnourished patients compared with normal weight patients with normal nutritional index.
Whether a low PNI already reflects malnutrition within the scope of a proinflammatory predisposition marking a more advanced state of disease, or is a modifiable factor, remains to be investigated. The results, however, encourage clinicians to further assess nutritional status in otherwise obese HF patients in order to identify malnourished individuals with a surprisingly poor prognosis, which seems counter-intuitive.
Obesity paradox in heart failure
An inverse association between outcome and BMI has been repeatedly demonstrated in individuals with HFrEF and HFpEF. 2 To the best of our knowledge, there is only one study reporting the presence of the obesity paradox in HFmrEF with 947 individuals. 12 With 11 995 patients enrolled (58% HFpEF, 23% HFmrEF, and 19% HFrEF), the current report presents the largest study so far confirming the obesity paradox irrespective of HF phenotype.
The presence of the obesity paradox in HF should clearly not be seen as a promotion of obesity in the general population or in individuals without cardiovascular disease; however, recommendations for patients with established disease are not clear. Indeed, assessment of BMI alone may not capture the whole picture of metabolic health as it poorly reflects body composition and metabolic capacity or their trajectories. 18 Therefore, caution should be exercised when recommending weight loss in patients based on BMI alone. A number of hypotheses have been put forward to decipher the inverse association of BMI and outcome. The resilient protection of high BMI in patients with HF may be explained by greater metabolic reserve, reduced sympathetic activity, attenuated response to neuroendocrine stimuli, and a less catabolic state. 4,19 In addition, adipose tissue may attenuate inflammatory responses through synthesis of beneficial adipokines. 20 The endotoxin/lipid hypothesis suggests that higher circulating lipoproteins in obese patients may enhance endotoxin-scavenging activity, resulting in lower proinflammatory cytokine production. 21 In light of these considerations, excess body weight is thought to counteract the catabolic effects of HF and thus provide a metabolic cushion to mitigate disease progression.
However, because data on the obesity paradox mostly emerged from observational studies, bias underlying the paradoxical association, such as confounding or reverse causation, needs to be considered. This wide term includes confounding by pre-existing weight loss or other predictors of low body weight (e.g., stage and grade of disease, malnutrition, and smoking status), which in turn increase the risk of adverse outcome. 22 Irrespective of that, one may also argue that lower BMI remains a surrogate for advanced disease stage.
Malnutrition in heart failure
Malnutrition is common in patients at an advanced disease stage and carries a poor prognosis. 23 It is defined as a metabolic state resulting from a chronic imbalance between anabolism and catabolism leading to loss of appetite, malabsorption, inflammation, muscle wasting, and cachexia. In HF, chronic congestion accompanied by gastrointestinal congestion leads to decreased nutrient intake, further driving muscle wasting. 23,24 Disturbed gut perfusion and impaired microcirculation of the intestine result in local oedema, abnormal mucosal permeability, and increased endotoxin absorption, further promoting a proinflammatory milieu. 25 The consequent inflammation is considered to be a key driver of cardiac cachexia, representing the hallmark of end-stage chronic HF. 26 To date, there are no clear assessment criteria, universally accepted definitions, or standardized methods for determining nutritional status in patients with HF. PNI, calculated with serum albumin concentration and total lymphocyte count, presents an easy and objective screening tool to detect cardiometabolic derangements in HF that may allow early detection of both malabsorption and inflammatory disturbances. Numerous reports have demonstrated that albumin is a strong predictor of outcome across the spectrum of HF and provides comparable prognostic information to simple or multidimensional malnutrition tools. 27,28 However, as albumin concentration is known to be affected by several non-nutritional factors such as hydration state, liver dysfunction, capillary permeability, nephrotic syndrome, infection, and malignancies, the use of albumin alone may not provide a comprehensive and accurate reflection of nutritional status. Lymphocyte count constitutes another determining factor of the PNI score. Nutritional deprivation is commonly associated with impaired immune response leading to lymphocyte depletion. 29 Previous reports have shown that total lymphocyte count correlates with various established nutritional assessment tools. 30 Therefore, combining serum albumin levels and the lymphocyte count to create the PNI may be useful as a screening tool for patients at risk of malnutrition who may benefit from a more detailed nutritional assessment. Earlier reports have shown that low PNI is independently associated with poor outcome in patients with either HFrEF or HFpEF and in chronic HF regardless of ejection fraction. [8][9][10][11] To our knowledge, there are no previous studies investigating PNI specifically in patients with HFmrEF. Clearly, our data demonstrate that a substantial proportion of patients with HF, especially HFrEF, are at great risk of malnutrition. Notably, nearly one in two patients with HFrEF had signs of malnutrition. This finding appears consistent with earlier studies reporting a prevalence as high as 69% in some HF populations. 9 Moreover, the present study demonstrates that nutritional status assessed by PNI is an independent predictor of mortality across the spectrum of HF, now also embracing HFmrEF.
The impact of malnutrition on the obesity paradox
Obese patients in the general population are recommended to lose weight; however, these recommendations become uncertain for individuals with obesity and concomitant HF. 31 Importantly, BMI alone will not distinguish between metabolically healthy and metabolically unhealthy individuals. Although our data confirm the observation of protective effects of BMI on outcome in individuals with HF, a closer look incorporating signs of malnutrition revealed that increased body weight cannot reverse the negative impact of malnutrition. While obese patients experience a 30% risk reduction, malnutrition at least doubles the risk for death.
Patients with low BMI and poor nutritional status represent a group with diminishing metabolic reserve and at great risk for cardiac cachexia, or with cardiac cachexia already established. Undoubtedly, this group of patients is at extremely high risk for adverse outcome and should be closely monitored in daily routine practice. The data of this report, however, also imply that nutritional assessment is essential in obese patients, because once signs of malnutrition become apparent, the risk of fatal events increases dramatically even though obesity is suggestive of better outcomes based on the obesity paradox.
Limitations
The results of our study should be interpreted in light of its limitations. First, based on the retrospective observational study design, the risk of bias and residual confounding cannot be completely ruled out, although we attempted to adjust for the confounding factors. The observational nature of this report allows us to demonstrate associations, but no inferences can be made about causal relationships. Second, medical history data were derived from the health care provider information system, using codes from the International Statistical Classification of Diseases and Related Health Problems, which could have led to a misclassification and/or underestimation of comorbid conditions. Third, left ventricular ejection fraction could not be measured quantitatively in all patients using biplane Simpson's method, due to the limitations inherent to the method such as poor image quality, dyssynchrony, regional wall motion abnormalities, and foreshortening. Fourth, we did not evaluate measures of frailty, which may be useful to explain the relationship between being underweight and all-cause mortality especially in the elderly age group.
Conclusion
The obesity paradox applies for the whole spectrum of HF irrespective of phenotype, that is, HFrEF, HFmrEF, and HFpEF.
Nonetheless, the prognosis in obese patients varies greatly depending on the status of malnutrition, which underlines the importance of additional nutritional assessment in lean, but especially in obese patients for individual patient risk stratification.
Funding
This work was supported by the grant from the Austrian Science Fund (KLI 700-B30). | 2022-03-31T06:22:57.305Z | 2022-03-29T00:00:00.000 | {
"year": 2022,
"sha1": "212b096242fcbd0233846bf67748e52f6eb3eb1b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcsm.12980",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "f094ddd54237d7bec5d283ab17ae07b77a26e00e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225476802 | pes2o/s2orc | v3-fos-license | Determinants of Well-being Status of Rice Farmers in Nasarawa State, Nigeria
This paper examined the determinants of the well-being status of rice farmers in Nasarawa State, Nigeria. One hundred and eighty-one rice farmers from Karu, Kokona and Doma local government areas were chosen for the study by a multi-stage sampling procedure. Data were analyzed using frequencies, means and a logit regression model. Rice farmers were satisfied with their well-being status, with satisfaction in five out of seven indicators that defined general well-being. However, indicators of well-being status like safety and future security were not satisfactory. Age, sex, rice yield, income and extension contact had significant and positive influence on the well-being status of the rice farmers. Government and relevant agricultural stakeholders should focus on these key influencing factors in view of improving the well-being status of rice farmers in the study area.
Introduction
Rice (Oryza sativa L.) is an important food security crop in Nigeria. In the producing areas, rice provides employment for more than 80 percent of the inhabitants along the value chain from cultivation to consumption (Ayedun and Adeniyi, 2019). According to Ajibola, Adeniji, Olaleye and Ojo (2017), Nigeria is the largest producer of rice in West Africa, producing over 46% of the region's total production.
Rice as a staple crop generates more income for Nigeria's farmers in comparison to some cash crops in the country (Izuchukwu, 2019). However, the low productivity of rice farmers is a growing concern in Nigeria due to the use of agricultural equipment with low technological capacity that does not support large-scale production (Osabuohien, Okorie and Osabohien, 2018). Given the economic importance of rice to the well-being of the citizens of Nigeria, boosting its production has been accorded high priority by the government in recent times.
The well-being of the citizens of a country is the quality of life of the individual. It is a state in which every individual realizes his or her own potential, can cope with the normal stresses of life, can work productively and fruitfully, and is able to contribute to his or her community. According to Yakubu and Aidoo (2015), objective and relational well-being, which form core well-being, capture household income and other aspects such as knowledge, life expectancy, assets and food security. Subjective well-being, as an end in life which evaluates people's satisfaction with their life situations, is emerging as a complement to the more traditional and material ways of measuring well-being status.
Well-being is also influenced by the environment and disposition, safety and security, physical and mental health, relationships and social networks, access to goods and services, and the fairness of the society an individual live, to name just a few of the contributing factors. Similarly, well-being may differ from individual to individual due to differences in their socio-economic characteristics (Gamage, Kuruppuge and Nedelea, 2016).
The farmers' well-being is a dynamic process which is influenced by both qualitative and quantitative parameters (Kumar, Narasimha and Lakshminarayan, 2017). Understanding the well-being of farmers is especially complex as, in addition to the usual factors that influence well-being, farming is associated with several occupation specific factors that can challenge well-being, such as climate change, limited capital, inadequate technologies and limited access to market. The maintenance of well-being is critical in enabling farmers to succeed in their personal and professional lives.
Many factors can negatively impact the well-being of farmers, in particular poor health (Schirmer, Mylek and Yabsley, 2015). It is often argued that farmers experience poorer mental health and well-being than non-farmers. Studies examining this have, however, produced inconsistent results, most likely because not all farmers are the same, and different studies have looked at different groups of farmers. Nevertheless, the results of several studies suggest that at least some groups of farmers have poorer mental and physical well-being than non-farmers (Yazd, Wheeler and Zuo, 2019).
A large number of rice farmers in Nasarawa State operate at the subsistence level. The question now is what determines the well-being status of rice farmers in Nasarawa State and this is the research gap this study sought to close. Specifically, this study analyzed the well-being status of the rice farmers and examined the factors that influence the well-being status of rice farmers in Nasarawa State, Nigeria.
Methodology
This study was carried out in Nasarawa State, Nigeria. The State lies between Latitude 7º 45' and 9º 25' North of the equator and between Longitude 7º 51' and 9º 37' East of the Greenwich meridian. The major agricultural production activities in the State include rice, cassava, millet, yam, sorghum, sesame and maize cultivation while livestock reared include goat, cattle and sheep.
The study population comprised rice farmers registered as contact farmers and household heads with the Nasarawa State Agricultural Development Programme (ADP). A list of the rice farmers was obtained from the Nasarawa State ADP for this study. A multi-stage sampling procedure was used to select the rice farmers. The first stage was a purposive selection of one local government area (LGA) predominant in rice production from each agricultural zone in the State. The second stage was a purposive selection of five villages from each of the three LGAs, also due to their predominance in rice cultivation, giving a total of 15 villages for the research work. The third stage involved a random selection of 10% of the rice farmers from the selected villages, giving 181 rice farmers from the total population of 1,812 rice farmers. Data were collected using structured questionnaires and interview schedules. Data were presented using frequencies, percentages, means and logit regression.
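As an illustrative sketch of the logit regression step described above (not the authors' actual analysis code), the binary well-being satisfaction status could be modelled with statsmodels in Python; the file and variable names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey table: one row per rice farmer
df = pd.read_csv("rice_farmers_survey.csv")  # assumed file with the columns used below

# Binary outcome: 1 if the farmer's well-being index meets the satisfaction threshold
y = df["wellbeing_satisfied"]

# Candidate determinants reported in the paper: age, sex, rice yield, income, extension contact
X = df[["age", "sex_male", "rice_yield_kg", "income_naira", "extension_contacts"]]
X = sm.add_constant(X)  # add intercept term

logit_model = sm.Logit(y, X).fit()
print(logit_model.summary())  # coefficients, standard errors and significance levels
```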
Well-Being Status of Rice Farmers
The results in Table 2 indicate that rice farmers in Nasarawa State were satisfied with their standard of living (x̄=5.84), health status (x̄=6.23), life achievement (x̄=5.54), personal relationships (x̄=6.48) and feeling part of the community (x̄=7.23), while indicators like safety (x̄=4.92) and future security (x̄=4.76) fell below the satisfaction level of the index scale. The implication of this result is that the rice farmers were not satisfied with their safety and security in their pursuit of a satisfactory well-being status. The current worsening security situation being experienced in some parts of North Central Nigeria, caused largely by land disputes, ethnic and pastoralist-farmer clashes as well as banditry, has dampened the rice farmers' efforts and their belief in the attainment of secure lives and properties. Rice farmers no longer feel safe going to the farm for farming activities geared towards the attainment of food security and well-being. This result agrees with the findings of Mercy Corps (2015) and the International Crisis Group (2017) that violent conflicts involving herders from northern Nigeria and sedentary agrarian communities of North-Central Nigeria have become common occurrences and have escalated in recent years, threatening the country's security and stability. Therefore, the more the farming communities become troubled by violence, the more difficult it becomes for the rice farmers to achieve a sustainable and satisfactory well-being status.
Determinants of Well-Being Status of Rice Farmers
Table 3 identifies the factors influencing the well-being status of rice farmers as age, sex, rice yield, extension contact and income. The coefficient for age was positive and significant at the 1% level of probability. This implies that the older the rice farmers, the greater the responsibility for attaining a satisfactory well-being status. The coefficient for sex was positive and significant at the 5% level of probability. This shows that the more male household heads there are, the higher the probability of achieving a satisfactory well-being status. This result aligns with the findings of Ajibola et al. (2017) in Kwara State, Nigeria, where the majority of rice farmers were male. Rice yield was also positive and significant at 1%, meaning that the more rice farmers increase their rice yield, the better they are able to realize a satisfactory well-being status. The coefficient obtained for extension contact was positive and significant at 5%. This indicates that the more contact rice farmers have with extension agents, the higher the probability of learning and acquiring, through professional teaching, the best approaches for addressing well-being issues. This is consistent with the findings of Msuta and Urassa (2015) that extension agents' facilitation helps farmers gain access to knowledge, resources, and services that enhance their yield and well-being. The coefficient for income was also found to be positive and significant at the 5% level of probability.
This shows that as the income of the rice farmers increases, the tendency to achieve a satisfactory well-being status also increases. Income plays a pivotal role in meeting basic household needs, be it in terms of health, accommodation, transportation, safety, life achievement and other well-being indicators that are very necessary to keep farm households going in the journey of life. The greater the income, the smoother the journey of life and the more sustainable the well-being a farm household achieves. This result corresponds with that of Kumar et al. (2017), who stated that income was one of the major reasons why the majority of the farmers fell under the medium to high category of well-being status.
Conclusion and Recommendations
Factors such as age, sex, rice yield, extension contact and income were significant and had positive influence on the well-being status of the rice farmers. Government and relevant agricultural stakeholders need to intensify efforts on interventions that will continue to improve the yield and income of the rice farmers as well as their access to extension services. Effective conflict resolution mechanisms and sustainable security apparatus that will help guarantee the safety of lives and properties of the rice farmers should also be put in place in order to achieve a more satisfactory well-being status. | 2020-08-20T10:06:06.284Z | 2020-08-13T00:00:00.000 | {
"year": 2020,
"sha1": "76ace9b9f1a50c79bdb7f75b342e8c69379ca749",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/jae/article/download/198655/187324",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0f674bb6d383bb4bdbd60f2a0fb0243cdd0f33e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Business"
]
} |
267625055 | pes2o/s2orc | v3-fos-license | Analysis of Trapped Small‐Scale Internal Gravity Waves Automatically Detected in Satellite Imagery
In water vapor‐sensitive satellite imagery, small‐scale wave‐like perturbations of brightness temperature can be attributed to the presence of trapped internal waves in the troposphere. We present a method for detecting these local perturbations with wavelengths of about 10 km and apply it to imagery from the Advanced Baseline Imager on board the geostationary satellite GOES‐16. The algorithm allows us to analyze 4 years of sub‐hourly data in the southern part of the tropical eastern Pacific, where only a relatively low amount of medium and high clouds obscures the scene. By combining a measure of wave activity/trapping with ERA5 reanalysis data, we connect the occurrence of trapping with the presence of an increased upper‐tropospheric wind shear. This connection is more evident during December, January and February, when upper‐tropospheric jets are more likely. Our work supports existing case and model studies and is a step forward in the statistical and automated analysis of trapped small‐scale internal waves in the atmosphere.
& Krämer, 2013). In addition to the effect of temperature variability, the spatial confinement of ice crystals caused by TSIW motion has been investigated in Podglajen et al. (2018). (c) For example, the studies of Moustaoui et al. (2010) and Heller et al. (2017) suggest that TSIW generated by mountains can contribute to a nonzero vertical net flux of trace gases, including moisture, and irreversible mixing in the upper troposphere-lower stratosphere. (d) Uhlenbrock et al. (2007) have been able to link severe turbulence reported by pilots to the presence of horizontally trapped TSIW generated by mountains. These results have been extended in Wimmers et al. (2018) by further observational cases. It is known that trapped internal waves can further amplify a turbulent environment (Lane et al., 2012; Trier & Sharman, 2018).
With respect to the above phenomena (a)-(d), the aim of the present work is to identify the primary physical cause of TSIW trapping in the real atmosphere. Since inference of local causality is inconclusive or even impossible with current observations, we limit ourselves to the question of which tropospheric background state increases the likelihood of TSIW trapping. The main advance of the present work is the combined large-scale analysis of satellite observations and reanalysis data, which overcomes case-by-case studies (e.g., Feltz et al., 2009; Uhlenbrock et al., 2007; Wimmers et al., 2018), and thus provides a test of results obtained by existing observational and model studies. We algorithmically link TSIW trapping signatures in water vapor-sensitive satellite imagery with reanalysis data over 4 consecutive years within a domain size of about 10⁷ km² over the eastern tropical Pacific. This allows us, based on existing theory, to attribute the increased occurrence of signatures to an increase in absolute horizontal wind shear |(∂_z u, ∂_z v)| around a pressure level of 325 hPa, an altitude of about 9 km. The increase in absolute horizontal wind shear is due to jets aloft, inducing a reflection layer that is vertically bounded by the relatively more invariant but vertically decreasing buoyancy frequency. Since upper-tropospheric jets over the eastern tropical Pacific are more common during the boreal winter in December, January and February (DJF), there is also a season of increased TSIW trapping.
A well-established condition for vertical trapping of internal waves in the framework of linear theory is l² < k² (Nappo, 2012), where k is the horizontal wavenumber of an internal wave and l is the Scorer parameter. The Scorer parameter l, derived from the Taylor-Goldstein equation describing linear wave motion, is defined by

l² = N²/(u − c)² − (∂²u/∂z²)/(u − c) − 1/(4H_s²),   (1)

where u is the horizontal wind in the direction of the internal wave propagation, c the ground-based phase velocity of the internal wave, and H_s the density scale height. Using the notation in Stephan (2020, Section 3a), this condition can be equivalently expressed through the "critical wavelength" λ* = 2π ⋅ l⁻¹, which indicates the reflection of internal waves when λ < λ*, where λ is the horizontal wavelength of the internal wave. The singularity of l², namely u = c, is called a critical level. In linear theory, internal waves dissipate when they reach a critical level.
While linear theory predicts wave energy absorption at critical levels, it is known that for a small Richardson number Ri ≔ N²/|(∂_z u, ∂_z v)|², waves are also reflected in the vicinity of critical levels (Teixeira, 2014). The Richardson number Ri quantifies the stability of the flow as affected by the stratification, that is, the buoyancy frequency N, and the wind shear |(∂_z u, ∂_z v)|. While for Ri ≫ 1 the processes are in line with linear theory, this changes if Ri < 1, resulting in substantial wave reflection (Teixeira, 2014). Especially abruptly changing horizontal wind resulting in a sudden decrease of Ri is known to induce reflection layers (Teixeira & Argaín, 2020; Teixeira et al., 2005). However, Ri is in general large in the lower troposphere, as typically |(∂_z u, ∂_z v)| < 10 m s⁻¹ km⁻¹ and N > 1 ⋅ 10⁻² s⁻¹, that is, Ri > 1. Therefore, as discussed earlier, linear theory predicts well the filtering of TSIW in the lower troposphere via Scorer parameter trapping and critical level dissipation.
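For illustration only, the following is a simplified sketch of how the critical wavelength and the Richardson number could be estimated from a single wind and stability profile; it keeps only the leading N²/(u − c)² term of Equation 1, assumes a phase speed c, and uses hypothetical profile values, so it is a toy version of the quantities discussed above rather than the authors' procedure.

```python
import numpy as np

def scorer_and_richardson(z_m, u_ms, v_ms, n2_s2, c_ms=0.0):
    """Estimate critical wavelength (2*pi/l) and Richardson number on a vertical profile.

    z_m: altitude [m]; u_ms, v_ms: horizontal wind [m/s]; n2_s2: squared buoyancy frequency [1/s^2].
    Simplification: only the leading N^2/(u - c)^2 term of the Scorer parameter is kept,
    and the ground-based phase speed c is assumed, since it cannot be derived from the imagery.
    """
    du_dz = np.gradient(u_ms, z_m)
    dv_dz = np.gradient(v_ms, z_m)
    shear2 = du_dz**2 + dv_dz**2                        # |(du/dz, dv/dz)|^2
    ri = n2_s2 / np.maximum(shear2, 1e-12)              # Richardson number
    l2 = n2_s2 / np.maximum((u_ms - c_ms)**2, 1e-6)     # leading term of Equation 1
    lambda_crit_km = 2.0 * np.pi / np.sqrt(np.maximum(l2, 1e-12)) / 1e3
    return lambda_crit_km, ri

# Hypothetical profile (e.g., interpolated from reanalysis pressure levels)
z = np.linspace(3_500.0, 13_700.0, 30)
u = 5.0 + 25.0 * np.exp(-((z - 11_000.0) / 1_500.0) ** 2)   # idealized upper-level jet
v = np.zeros_like(z)
n2 = np.full_like(z, 1.2e-4)

lambda_crit, ri = scorer_and_richardson(z, u, v, n2)
print(ri.min(), lambda_crit.min())  # small Ri / small critical wavelength favour reflection
```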
In the upper troposphere, in the absence of critical layers, an increase of the denominator (u − c)² in Equation 1 can be interpreted as existing absolute wind shear |(∂_z u, ∂_z v)|². Hence, the decrease of l² is relatively well captured by Ri in this case, due to the relatively small contribution of the terms apart from N² ⋅ (u − c)⁻² in Equation 1 (Stephan, 2020). Note that Ri does not fully represent the effect of wind shear on wave trapping, as the sign of the wind shear relative to the direction of wave propagation is important. However, since we are unable to determine the wave phase velocity accurately, Ri is a useful proxy. Therefore, for upper-tropospheric wave reflection leading to an increased likelihood of TSIW trapping, both theories, linear and nonlinear, can be considered in terms of Ri, without assuming phase velocities of internal waves. Determining the phase velocity is difficult with the temporal resolution of the data used, especially for small-scale waves such as those considered here. This limitation in determining the phase velocity makes it challenging to identify the source of the waves. We considered whether mountains in South America could be the source of the waves detected over the eastern tropical Pacific, instead of our leading hypothesis that they are likely generated by convection. Our approach was to look at numerous additional representative cases, and to repeat our following analysis with eastern and western subdomains to determine whether the eastern domain has a higher occurrence of TSIW trapping. For the latter, we did not see such a pattern, nor did any of the cases suggest that these waves are generated by topography.
To study TSIW trapping in the real atmosphere, we need observations that provide corresponding wave signatures, which previous studies have done, for example, using true-color satellite imagery and searching for wave-like cloud patterns. In the absence of clouds, one can utilize water vapor-sensitive satellite imagery, that is, top-of-the-atmosphere radiance measurements within wavelength bands that are mostly sensitive to the energy emitted by water vapor, approximately 6-7 μm (Schmit et al., 2018). These water vapor bands "[…] sense the mean temperature of a variable-depth layer of moisture -a layer whose altitude and depth can vary, depending on both the temperature and moisture profile of the atmospheric column, as well as the satellite viewing angle" (Schmit et al., 2018).
Since TSIW trapping results in vertically coherent vertical velocity perturbations (Lane & Clark, 2002), signatures of TSIW trapping in water vapor-sensitive satellite imagery can be associated with local oscillations of brightness temperature perturbations, as the waves locally modulate the vertical structure of temperature and humidity, that is, relative humidity. To the best of our knowledge, the two studies of Uhlenbrock et al. (2007) and Feltz et al. (2009) are the first to have examined water vapor bands in the context of tropospheric internal waves, focusing on mountain waves. In Lyapustin et al. (2014), mountain waves have been studied using a satellite-derived product of total column water vapor. In particular, the study of Feltz et al. (2009) supports the connection between TSIW trapping and local oscillations of brightness temperature perturbations by comparing synthetic satellite imagery with high-resolution model data forced by a real-case mountain wave event. However, detecting these oscillations on spatial scales much larger than their typical wavelengths can be challenging because the corresponding perturbations are small compared to the overall brightness temperature variability (Wimmers et al., 2018). Building on the work of Feltz et al. (2009) and Wimmers et al. (2018), we here analyze imagery from the Advanced Baseline Imager (ABI) instrument on board the geostationary satellite GOES-16, whose nadir is on South America (75.2°W) and thus covers the eastern tropical Pacific, among other regions.
For an analysis over large temporal and spatial scales, it is crucial to be able to automatically detect TSIW trapping signatures in satellite imagery due to the volume of data. We are aware of only a few studies that address (semi-)automatic detection in this context: Wimmers et al. (2018) have applied high-pass filtering, that is, removing trends/frequencies in the brightness temperature field that are larger than the wavelengths of interest, to allow easier detection. However, they have not applied this method for a climatological study. Jann (2017) has proposed a method that uses an extended wavelet analysis that tests given angles and wavelengths, which can be a computationally expensive task depending on the number of angles, wavelengths and test locations. In Hindley et al. (2016) and Wright et al. (2017), internal waves in the stratosphere have been analyzed using a Stockwell transform method. A major advantage in analyzing stratospheric internal waves is their dominant imprint in satellite imagery, which is not the case for tropospheric ones.
We apply a local Fourier analysis to water vapor-sensitive satellite imagery to reveal the underlying frequencies.
This not only allows us to measure local oscillations of brightness temperature perturbations, but also allows us to analyze all the available imagery more efficiently and still obtain qualitatively good results.One could also use established algorithms for pattern recognition from supervised machine learning.However, this involves a training phase where we lack well-studied training data.In this work, we take advantage of the simplicity of the pattern and detect it using a more direct method.
While we always use the term "trapping" throughout the remainder of this manuscript, our analysis does not exclude the possibility that we are measuring large amplitude waves that may not be trapped.However, the waves detected by our methods are most likely related to some form of trapping or wave filtering, and thus our term "trapping" is appropriate.
We have structured the manuscript as follows: We introduce the chosen data in Section 2 and explain how we process it in Section 3, along with an example case. In Section 4, we present the large-scale environment of the reanalysis data as well as the distribution of TSIW trapping signatures. In addition, we address the intermittency of signatures on sub-seasonal scales. Finally, we conclude our results in Section 5.
Water Vapor-Sensitive Satellite Imagery
We use water vapor-sensitive satellite imagery from the ABI on board the geostationary satellite GOES-16. Among the 16 bands of ABI, there are three bands at central wavelengths of 6.2 μm, 6.9 μm, and 7.3 μm that are mostly sensitive to water vapor in the upper, middle, and lower troposphere, respectively (Schmit et al., 2017). Exemplary weighting functions can be found in Schmit et al. (2017, Figure 2). The satellite has been operating since the end of 2017 and the dataset is freely accessible at GOES-R Calibration Working Group and GOES-R Series Program (2017). We use all available data from 2018-01-01 to 2021-12-31, a period of exactly 4 years.
Except for a few negligible gaps, ABI creates "full disk" images every 10-15 min, with a resolution of about 2 km at the satellite's nadir (75.2°W). As mentioned previously, this temporal resolution is insufficient to allow estimates of wave phase velocities for the scales of interest. An important issue we have to consider with ABI data is the satellite instrument measurement failures that result in longitudinal striping noise (Gunshor et al., 2020). These artifacts cause strong local shifts in the brightness temperature field that can be strongly misinterpreted depending on the analysis. In Section 3.1, we explain how we handle these artifacts.
Tropospheric Background State
For the tropospheric background state, we use the fifth generation reanalysis ERA5 by the European Centre for Medium-Range Weather Forecasts (Hersbach et al., 2020). ERA5 is, for example, freely accessible via the C3S Climate Data Store. We use the dataset of hourly profiles as a function of pressure levels P interpolated from its native model onto a horizontal grid with a resolution of 0.25° in longitude and latitude and a vertical resolution between 25 and 50 hPa in the altitude range of interest (Hersbach et al., 2018). While ERA5 is also available on model levels, providing a much denser vertical resolution, this dataset is provided horizontally on spherical harmonics, which need to be translated into geographic coordinates. Since we use multiple hourly variables over a period of 4 years, this dataset is not manageable in terms of both storage and computation time. We are particularly interested in the (squared) buoyancy frequency N² and horizontal wind (u, v). Since N² is not provided by ERA5 directly, we calculate it using the adiabatic index, and the altitude z is inferred from ERA5 geopotential divided by g = 9.81 m s⁻², the mean acceleration due to gravity. While we are computing the dry buoyancy frequency, it is known that moisture can have an impact on the buoyancy frequency (Durran & Klemp, 1982). We expect that this does not have a substantial impact on our findings, since we are focusing on clear-sky conditions in the upper troposphere.
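Since the exact formula is garbled in this extraction, the following shows only one common way to obtain the dry squared buoyancy frequency from ERA5 pressure-level temperature and geopotential via the potential temperature; the variable and coordinate names and the use of xarray are assumptions, not the authors' implementation.

```python
import xarray as xr

G = 9.81          # mean acceleration due to gravity [m s-2]
KAPPA = 0.2854    # adiabatic index R_d / c_p for dry air
P0 = 1000.0       # reference pressure [hPa]

def squared_buoyancy_frequency(ds: xr.Dataset) -> xr.DataArray:
    """Dry N^2 = (g / theta) * d(theta)/dz on ERA5 pressure levels.

    Assumes ds has 't' (temperature, K) and 'z' (geopotential, m^2 s-2) with a
    'level' coordinate in hPa, as in the ERA5 pressure-level product.
    """
    theta = ds["t"] * (P0 / ds["level"]) ** KAPPA   # potential temperature
    z = ds["z"] / G                                 # geometric height from geopotential
    # chain rule: d(theta)/dz = (d(theta)/dlevel) / (dz/dlevel)
    dtheta_dz = theta.differentiate("level") / z.differentiate("level")
    return (G / theta) * dtheta_dz

# Usage sketch: n2 = squared_buoyancy_frequency(xr.open_dataset("era5_profiles.nc"))
```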
Domain
One limiting factor in our analysis is medium/high-level clouds, which, like striping noise, cause strong local shifts in brightness temperature or simply obscure the scene in the troposphere. Figure 1 shows the spatial ERA5 low, medium, and high cloud cover mean, vertically separated at approximately 2 and 6 km with respect to the "standard atmosphere", for the seasons DJF, MAM, JJA, and SON around the satellite's nadir for our chosen time period. While ERA5 does not directly assimilate any cloud observations, it represents the monthly mean reasonably well (Yao et al., 2020). Since the water vapor bands peak above approximately 2 km, low clouds rarely influence the brightness temperatures. For medium and high clouds, we can see a clear separation between land and ocean, and over the tropical eastern Pacific, a separation between north and south of the equator. Hence, we chose to study the southern tropical eastern Pacific as shown in Figure 1. The domain was chosen primarily with regard to computational limits and the resolution of the satellite. Moreover, the domain is not subject to a clear seasonal cycle in medium/high-level cloud cover that might affect our conclusions. The satellite's resolution median is approximately 2.2 km, with a maximum of 2.7 km.
Methods
Independently of the satellite imagery, we cluster daily mean profiles of ERA5 variables within the chosen domain and time period (Section 2) to characterize the large-scale environment and to identify periods when TSIW trapping is theoretically more likely. We use ERA5 profiles between 675 hPa (∼3.5 km) and 162.5 hPa (∼13.7 km). The vertical range was purposely chosen to be above the boundary layer and below the tropopause.
Our clustering in horizontal space and time is based on applying a k-means algorithm to the horizontal wind. This algorithm computes k groups of profile positions and time points whose corresponding horizontal wind profiles are most "similar" to each other. We manually set k = 3, which is sufficient to separate the upper troposphere into clusters with distinct features that are robust to increasing k. The corresponding results are presented in Section 4.1, in particular Figure 4.
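A minimal sketch of this clustering step follows, assuming the daily mean wind profiles have already been stacked into arrays; the file names, the array layout, and the use of scikit-learn are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: daily mean wind profiles for every grid point and day,
# shaped (n_samples, n_levels) for u and v separately, then concatenated.
u_profiles = np.load("era5_daily_u_profiles.npy")   # assumed file
v_profiles = np.load("era5_daily_v_profiles.npy")   # assumed file
features = np.hstack([u_profiles, v_profiles])      # each sample: full (u, v) profile

# k = 3 clusters of "similar" horizontal wind profiles, as described above
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_                  # cluster index per profile position/time
centroids = kmeans.cluster_centers_      # mean (u, v) profile of each cluster
print(np.bincount(labels))               # cluster sizes
```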
Together with the satellite imagery, we are able to link the hourly ERA5 profiles of horizontal wind and buoyancy frequency with a TSIW trapping measure. The TSIW trapping measure results from detecting TSIW trapping signatures in the available imagery using local Fourier analysis. We explain this method in more detail in Section 3.1 and present the corresponding results in Sections 4.2 and 4.3, where we first consider the TSIW trapping measure independently of ERA5 to summarize the large-scale distribution.
Detecting TSIW Trapping Signatures
We identify wave-like signals in brightness temperature based on three criteria: First, TSIW trapping signatures are local oscillations of brightness temperature. Second, these signature oscillations are small compared to the large-scale variability. Third, quasiperiodic patterns in an image result in a highly non-uniform frequency space of that image. Hence, our overall strategy consists of the following steps: (1) computing brightness temperature perturbations, (2) subdividing the domain into small overlapping "tiles", (3) independently analyzing the spatial frequency space of each tile, (4) discarding tiles whose spatial frequency space is distorted and therefore not interpretable. Let us explain these steps in detail:
(1) Instead of looking at the original brightness temperatures, we analyze their small-scale perturbations by applying a Gaussian filter on each satellite image, as proposed by Wimmers et al. (2018). The Gaussian filter convolves the brightness temperatures with a 2-dimensional Gaussian function of a specified standard deviation. The resulting smoothed brightness temperatures can be subtracted from the original brightness temperatures to obtain the small-scale perturbations. The Gaussian filter's standard deviation is 2 pixels, which corresponds to approximately 5 km, and thus removes perturbations at wavelengths longer than typical TSIW.
(2) We subdivide the satellite data, an equally spaced 2-dimensional array of brightness temperature in the satellite's scan angles in the x- and y-directions, with half-overlapping tiles that decompose the 2-dimensional space into a smaller "tile grid". Each tile is a subarray of 64 pixels in x and y, a spatial length of about 60-80 km. Since TSIW in our domain have horizontal wavelengths of about 4-16 km, these tiles are small enough that TSIW dominate the tile and large enough that a sufficient number of oscillations is contained in order to classify them as waves.
(3) On each perturbation tile, we apply the 2-dimensional discrete Fourier transform, which transforms the original values in their spatial domain representation into the frequency domain representation given by complex values called Fourier coefficients. The Fourier coefficients encode the amplitude and phase of each 2-dimensional frequency. We are interested in the amplitudes, that is, the absolute values F of the Fourier coefficients. Quantifying the non-uniformity of F gives us a measure for TSIW trapping. Before that, we apply the following steps to F to minimize misinterpretations:
• Applying a high-pass filter to F, that is, ignoring small frequencies, in line with the expected TSIW wavelengths, to minimize the influence of abruptly changing brightness temperatures, for example, associated with edges of clouds, which are not removed by the Gaussian filter.
• Replacing F with the absolute difference from the mean of F to minimize the impact of background noise, that is, a uniform distribution of frequencies, which could distort the signal of the overlying quasiperiodic pattern.
In Appendix A, we explain in detail how one can define a non-uniformity measure for the processed F values based on measuring the difference between probability distributions. The resulting TSIW trapping measure will be discussed in Section 3.2 using an example case and applied to the entire data in Sections 4.2 and 4.3.
(4) In Section 2, we have already pointed out that abruptly changing brightness temperatures due to striping noise or medium/high clouds in the satellite data strongly affect the above analysis by distorting the frequency space. Therefore, we ignore tiles whose dominant frequency is 0 in the x- or y-direction. The assumption is that natural wavy data rarely possess frequencies with perfect spatial alignment.
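The Python sketch below illustrates steps (1)-(4) under several assumptions: the array names, the high-pass cutoff, and the final non-uniformity score are placeholders rather than the authors' exact choices (their measure is defined in Appendix A).

```python
# Hedged sketch of steps (1)-(4); tile size and filter width follow the text,
# while the cutoff and the final score are illustrative stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

TILE = 64            # tile edge in pixels (about 60-80 km according to the text)
SIGMA = 2.0          # Gaussian standard deviation in pixels (~5 km), step (1)
K_MIN = 1.0 / 16.0   # high-pass cutoff in cycles per pixel (tuning choice)

def perturbations(bt):
    """Step (1): small-scale brightness temperature perturbations."""
    return bt - gaussian_filter(bt, sigma=SIGMA)

def tile_scores(pert):
    """Steps (2)-(4): half-overlapping 64x64 tiles, processed 2-D spectra,
    a placeholder non-uniformity score, and the zero-frequency filter."""
    freqs = np.fft.fftshift(np.fft.fftfreq(TILE))
    ky, kx = np.meshgrid(freqs, freqs, indexing="ij")
    low = np.hypot(kx, ky) < K_MIN
    ny, nx = pert.shape
    scores = {}
    for j in range(0, ny - TILE + 1, TILE // 2):
        for i in range(0, nx - TILE + 1, TILE // 2):
            F = np.abs(np.fft.fftshift(np.fft.fft2(pert[j:j + TILE, i:i + TILE])))
            F[low] = 0.0                          # high-pass filter on the amplitudes
            F = np.abs(F - F[~low].mean())        # distance from the mean amplitude
            F[low] = 0.0                          # keep masked bins out of the argmax
            jm, im = np.unravel_index(np.argmax(F), F.shape)
            if np.isclose(kx[jm, im], 0.0) or np.isclose(ky[jm, im], 0.0):
                continue                          # step (4): discard distorted tiles
            # Placeholder for the Appendix A non-uniformity measure.
            scores[(j, i)] = F.max() / (F[~low].mean() + 1e-12)
    return scores
```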
The fact that the waves commonly appear as quasiperiodic and quasi-2-dimensional facilitates this method. Using a Fourier analysis in (3), we naturally obtain the dominant wavelength and wave direction from the largest value of the processed F values. While the wavelength does not provide any new information, since we have optimized our method for a specific wavelength range, the wave angle is independent of our chosen scales and can be used as a sanity check when linking the TSIW trapping measure with ERA5. Based on the model studies of Balaji and Clark (1988) and Hauf and Clark (1989), one would expect the wave direction, that is, the direction perpendicular to the wave front, to become linearly correlated with the wind shear direction at a certain altitude for an increasing TSIW trapping measure threshold.
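Continuing the sketch above, the dominant wavelength and wave direction could be read off the strongest processed Fourier coefficient roughly as follows; the 2.2 km pixel size is taken from Section 2, and the per-tile arrays F, kx, and ky are reused from the previous sketch.

```python
# Sketch: dominant wavelength and wave direction of one processed tile spectrum.
import numpy as np

def dominant_wave(F, kx, ky, pixel_km=2.2):
    """F: processed amplitudes; kx, ky: frequencies in cycles per pixel."""
    jm, im = np.unravel_index(np.argmax(F), F.shape)
    k = np.hypot(kx[jm, im], ky[jm, im])                  # cycles per pixel
    wavelength_km = pixel_km / k
    # direction perpendicular to the wave front, measured from the zonal axis
    angle_deg = np.degrees(np.arctan2(ky[jm, im], kx[jm, im]))
    return wavelength_km, angle_deg
```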
Figure 2 shows the daily mean of available and utilized satellite images per hour, smoothed by a Gaussian filter with a temporal standard deviation of 1 day to exclude unimportant anomalies. It is expected that a considerable amount of data is filtered using our criteria in (4), since in the absence of brightness temperature perturbations, a frequency of 0 is still likely. The shift in available data in January, February, and March is due to a change in the frequency of image production: starting around April 2019, ABI has produced an image every 10 min (6 ⋅ 3 water vapor-sensitive satellite images per hour) instead of every 15 min (4 ⋅ 3 water vapor-sensitive satellite images per hour). Since we only have hourly ERA5 profiles, we only use the hourly maximum of the TSIW trapping measure for each tile for our analyses in Section 4. This also addresses the small seasonal biases in available and utilized satellite data (Figure 2), since we do not expect that the differences in hourly satellite data will strongly affect the hourly maximum. We link each hourly ERA5 profile with its closest tile grid position and consider the maximum of the TSIW trapping measure within the following hour.
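A hedged sketch of this linking step is given below, assuming the tile results and ERA5 coordinates are available as flat NumPy arrays; the names and data layout are ours, not the authors'.

```python
# Sketch: maximum trapping measure near one ERA5 grid point in the following hour.
import numpy as np

def hourly_max_measure(tile_lat, tile_lon, tile_time, tile_measure,
                       era5_lat, era5_lon, era5_time):
    d2 = (tile_lat - era5_lat) ** 2 + (tile_lon - era5_lon) ** 2
    at_closest = np.isclose(d2, d2.min())       # tile grid positions repeat in time
    in_hour = (tile_time >= era5_time) & (tile_time < era5_time + np.timedelta64(1, "h"))
    hits = tile_measure[at_closest & in_hour]
    return hits.max() if hits.size else np.nan
```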
Example Case
Before presenting the results for the entire chosen time period and domain, we look at two different sub-regions of the domain in detail, on a day with one of the largest TSIW trapping occurrences. The two scenes, A and B, are depicted in Figure 3 (A: a-g, B: h-n) and have been selected according to the clear distinction in TSIW trapping signatures.
In contrast to scene B, scene A has strong signatures in the water vapor-sensitive image, ABI's "upper-level water vapor" band at a wavelength of about 6.2 μm. There are also clear imprints on the boundary layer clouds for scene A in ABI's "clean longwave window" band at a wavelength of about 10.3 μm, a band where one is able to clearly see boundary layer clouds in the absence of high-level clouds.
Figures 3c and 3j each show an example tile whose position is indicated in Figures 3b and 3i, respectively, along with the normalized frequency space in Figures 3d and 3k, respectively. While scene A's frequencies are concentrated around approximately 0.2 pixel⁻¹, which corresponds to a wavelength of about 9 km, B's frequencies are relatively uniformly distributed, resulting in a smaller TSIW trapping measure value (see Figures 3e and 3l).
Based on experiments, a good TSIW trapping measure threshold is between 0.08 and 0.10, as can be clearly seen in Figures 3e and 3l, for example.
Our TSIW trapping measure is only one possible approach to an automated analysis of TSIW trapping signatures. While we have carefully optimized this measure to capture TSIW signatures as well as possible in the wavelength range of interest, there are of course other processes that can induce a locally wavy brightness temperature field, as already discussed in Jann (2017). Our approach has been to iteratively improve the measure with different cases, while manually sanity checking the results with cloud imprints in true-color satellite imagery.
It should be noted that clear signatures can and do occur in the absence of shallow clouds, which one might not infer from our example case. This is not the purpose of the chosen scenes. We want to emphasize that while we have a clear distinction between the two regions in terms of our TSIW trapping measure, this is not the case for the mean ERA5 buoyancy frequency and horizontal wind for each scene shown in Figures 3f and 3g, 3m and 3n.
As there are only subtle differences in the profiles, in this case the chosen ERA5 profiles alone cannot explain why we see clear signatures in scene A compared to scene B. However, this does not rule out a statistical relationship in general. For both scenes, based on the theory discussed in Section 1, it is likely that the westerly wind shear in the lower troposphere filters TSIW by trapping smaller wavelengths, while the strong easterly wind shear combined with a small buoyancy frequency in the upper troposphere creates a nonlinear reflection layer. Both processes increase the likelihood for upper-tropospheric TSIW trapping.
The main point of this example case is that it is difficult to explain individual TSIW trapping occurrences with the available data, and therefore we must rely on statistical relationships linking these occurrences to background variables. A detailed quantitative analysis would require knowledge that is not available. In general, for example, it is unclear whether a weaker signature is actually due to a smaller wave amplitude or rather the result of the satellite's resolution or viewing angle, or of the large-scale variability in temperature and moisture that affects the brightness temperature field. Since TSIW signatures are barely resolved in ABI imagery, certain wavelengths are simply better detected by ABI, as pointed out in Jann (2017, Figure 5). Regarding the ERA5 data, we expect it to be smoother than reality, possibly with local biases due to limitations in data assimilation. We therefore focus on the statistical relationship between TSIW and ERA5 over the entire domain and do not attempt to explain local events.
Results
In Sections 4.1 and 4.2, we look at the large-scale distribution of the background state and the TSIW trapping signatures, respectively. We analyze their joint distribution in Section 4.3 to derive our main result, summarized in Figure 10.
Large-Scale Environment
As discussed in Section 3, we analyze the large-scale distribution of the background state by clustering ERA5 profiles, specifically by applying a (k = 3)-means clustering to the horizontal wind. The resulting clustering is given in Figure 4, which shows the temporal relative distribution as well as the mean profiles of horizontal wind, squared buoyancy frequency, and absolute wind shear for each of the three clusters. The standard deviation bands around the cluster mean profiles help to understand at which altitude levels the clustering has resulted in well-defined features and at which there is too much variance. However, since the general scaling of absolute wind speed is larger in the upper troposphere than in the middle troposphere, the k-means algorithm is more sensitive to the upper troposphere as all levels are treated equally. The upper-tropospheric horizontal wind exhibits distinct regimes, while the upper-tropospheric buoyancy frequency is relatively invariant throughout the year. Applying k-means only to the buoyancy frequency shows that its variability is not subject to any seasonal cycle, which can already be expected from Figure 4. Therefore, the large-scale variability of the Richardson number reduces to the absolute wind shear, which in contrast is subject to a clear seasonal cycle.
During DJF, our domain is dominated by upper-tropospheric easterly jets, with a distinct peak in each month, as confirmed by the temporal distribution in Figure 4 (green cluster). During the rest of the year, the horizontal wind is relatively weak and has no clear direction, which also results in weaker absolute wind shear. Irrespective of the number of clusters, the domain primarily separates into two regimes, DJF and the rest of the year. This leads to two important observations: First, small values of Ri are more likely in the upper troposphere throughout the year due to N decreasing vertically. Second, small values of Ri in the upper troposphere are more likely in DJF due to an intermittent increase in |(∂ z u, ∂ z v)|.
Large-Scale Trapping
As described in Section 3.1, for each available satellite image within the chosen domain, we compute our TSIW trapping measure along the defined tile grid, skipping tiles that do not meet our criteria. Calculating the daily mean fraction of tiles that exceed a certain TSIW trapping threshold results in Figure 5. The daily mean was smoothed using a Gaussian filter with a temporal standard deviation of 1 day to exclude unimportant anomalies. Signatures become visibly notable for values larger than 0.06-0.08, and a clear signature can be assumed at values larger than 0.08-0.10, as discussed in Section 3.2. With an increasing TSIW trapping measure threshold, we see a clear trend of increased signature coverage during DJF, as expected by the results in Section 4.1. Moreover, we also see the distinct peaks in each month we noticed for the horizontal wind.
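This aggregation might look roughly as follows; the data layout is an assumption, and SciPy's one-dimensional Gaussian smoothing mirrors the 1-day temporal filter described above.

```python
# Sketch of the Figure 5-style aggregation; `daily_measures` is assumed to be a
# list with one array of valid tile measures per day.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_daily_fraction(daily_measures, threshold):
    fraction = np.array([np.mean(day > threshold) for day in daily_measures])
    return gaussian_filter1d(fraction, sigma=1.0)   # temporal standard deviation of 1 day
```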
Our domain has a size of approximately 10⁷ km². The temporal variability for thresholds between 0.04 and 0.06 is primarily due to missing satellite images or our tile filtering (see Figure 2). The number of signature occurrences decreases exponentially with an increasing threshold, but for a threshold of 0.10, we still have days when at least 1% of the domain is covered. Besides the limitations discussed in Section 3.2, the scarcity of TSIW trapping signatures is another limiting factor in the analysis that we aim to address with enough data. It is unclear to what extent Figure 5 reflects the true frequency of TSIW trapping in our domain, but a seasonal bias due to data filtering (Figure 2) or due to medium/high-level clouds (Figure 1) is unlikely. Furthermore, the clear correlation between the horizontal wind and TSIW trapping signatures during DJF gives us confidence in our detection method.
The distribution of TSIW trapping signatures is subject to daily variability, and we could not detect a diurnal cycle. The signature distribution consists of sparse short periods, on the order of 1-2 days, of large-scale signature coverage up to 10% for thresholds between 0.08 and 0.10, rather than consistent signature occurrences with a smaller coverage on longer time scales. With ERA5, we were not able to explain variability on periods shorter than 2 days or on smaller domains. In Section 4.3, we address the sub-seasonal variability and the associated relation between an increased upper-tropospheric wind shear and an increased likelihood of TSIW trapping.
Sub-Seasonal Variability
We have seen in Sections 4.1 and 4.2 that during DJF, the increased upper-tropospheric wind shear increases the likelihood of large-scale upper-tropospheric TSIW trapping in contrast to the rest of the year. In this section, we address the reason for the increased likelihood on sub-seasonal time scales. We have already seen in Section 4.2 that there is daily variability in TSIW trapping, and ask if this can be attributed to an intermittent upper-tropospheric wind shear.
To address correlations beyond large-scale trends, we have linked each hourly ERA5 profile with its closest tile grid position, as explained in Section 3.1. To reduce the dimensionality of the data, we first investigate at which altitude levels horizontal wind and buoyancy frequency change with respect to an increasing TSIW trapping measure threshold, that is, we are interested in how the distributions of these variables deviate from the remaining set of ERA5 profiles: For a given trapping measure threshold ɛ, we can divide the set of all ERA5 profiles into profiles > whose corresponding TSIW trapping measure value exceeds ɛ and into profiles ≤ whose values do not. Let mean(>) and mean(≤) denote the mean as a function of ERA5 pressure levels. Then, for each ɛ, we can compute the mean difference mean(>) − mean(≤) as a function of ERA5 pressure levels.
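For a given ɛ, this conditional mean difference is essentially a one-liner; the sketch below assumes the linked profiles and trapping measures are stored as NumPy arrays with the names shown.

```python
# Sketch of mean(>) - mean(<=) per pressure level; `profiles` has shape
# (n_samples, n_levels) and `measure` holds the linked trapping values.
import numpy as np

def mean_difference(profiles, measure, eps):
    above = measure > eps
    return profiles[above].mean(axis=0) - profiles[~above].mean(axis=0)
```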
In Figure 6, we show the resulting 2-dimensional mean difference for N², |(∂ z u, ∂ z v)| and |(u, v)|. While the mean differences of N², |(∂ z u, ∂ z v)| and |(u, v)| all change with an increasing threshold ɛ, the change in |(∂ z u, ∂ z v)| and |(u, v)| is much stronger with respect to the standard deviation std(>) along the ERA5 pressure levels, shown in Figure 7 analogous to Figure 6. This indicates that the horizontal wind is much more sensitive to our TSIW trapping measure than the buoyancy frequency. However, this is most likely due to a steady buoyancy frequency throughout the year, as seen in Section 4.1. The mean difference of absolute wind shear peaks at an ERA5 pressure level of 325 hPa, an altitude of about 9 km.
In Figure 8, we show the mean and standard deviation of N², |(∂ z u, ∂ z v)|, and |(u, v)| for > at three different ERA5 pressure levels, including 325 hPa. There is a clear shift with an increasing TSIW trapping threshold for the mean of absolute wind shear at 325 hPa and of wind speed at 212.5 hPa, in contrast to the buoyancy frequency. Moreover, at these ERA5 pressure levels we see a clear separation between the wind distributions at a TSIW trapping threshold of about 0.08, supporting our statement that a clear TSIW signature can be assumed above 0.08-0.10. One should keep in mind that the number of signature occurrences decreases exponentially with an increasing TSIW trapping threshold, as seen in Figure 5, resulting in more uncertain distributions. The peak at 325 hPa is supported by the relationship between the angle of absolute wind shear and the TSIW trapping signature angle with respect to a threshold of 0.09, derived from the Fourier analysis as described in Section 3.1. In Figure 9, we show this relationship at the same ERA5 pressure levels as in Figure 8. At 325 hPa, there is a clear change toward an expected linear relationship, as discussed in Section 3.1 (4).
Based on the previous distribution analysis, we look at the median buoyancy frequency N, absolute wind shear |(∂ z u, ∂ z v)|, and therefore also the Richardson number Ri at an ERA5 pressure level of 325 hPa. Since it cannot be assumed that Ri is normally distributed, we show in Figure 10 the median surrounded by an interquartile range band for N, |(∂ z u, ∂ z v)| and Ri with respect to our TSIW trapping measure during DJF and the rest of the year (¬DJF). As already noted, N is not changing with an increasing TSIW trapping threshold. In contrast, even though there is a large variability, |(∂ z u, ∂ z v)| has a clear positive relationship with an increasing TSIW trapping threshold. Therefore, the clear negative relationship of Ri with an increasing TSIW trapping threshold is primarily due to |(∂ z u, ∂ z v)|. However, as seen in Section 4.1, the minor influence of N is due to its steady nature in the upper troposphere throughout the year. The sub-seasonal relationship of absolute wind shear with TSIW trapping is more evident during DJF than the rest of the year.
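A sketch of how such threshold-conditioned medians and interquartile ranges could be computed at 325 hPa is given below, with the gradient Richardson number formed as Ri = N²/|(∂ z u, ∂ z v)|²; the variable names are assumptions.

```python
# Sketch of the Figure 10-style statistics at 325 hPa; n2, shear and measure are
# assumed 1-D arrays over all linked samples.
import numpy as np

def median_iqr(n2, shear, measure, eps):
    sel = measure > eps
    out = {}
    for name, values in (("N", np.sqrt(n2[sel])),
                         ("shear", shear[sel]),
                         ("Ri", n2[sel] / shear[sel] ** 2)):
        q25, q50, q75 = np.percentile(values, [25, 50, 75])
        out[name] = (q25, q50, q75)
    return out
```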
Conclusion and Discussion
We developed a measure for TSIW trapping signatures in water vapor-sensitive satellite imagery, based on a local Fourier analysis of brightness temperature perturbations. This allows us to analyze 4 years of water vapor-sensitive satellite imagery from ABI on board GOES-16 within a domain size of about 10⁷ km² over the eastern tropical Pacific. By linking the TSIW trapping measure with ERA5 data, we are able to infer, based on theory, that sub-seasonal intermittent upper-tropospheric absolute wind shear increases the likelihood of TSIW trapping in the real atmosphere. While idealized simulations and observational case studies already support this statement, our work is the first step toward a systematic analysis of the real atmosphere on larger scales. Furthermore, the increased wind shear results in a small Richardson number due to a steady small buoyancy frequency. However, the suitability of the Richardson number in predicting the likelihood of TSIW trapping requires further analysis in regions where the buoyancy frequency varies.
A key limitation in our analysis is the uncertainty arising from the detection of TSIW trapping signatures in satellite imagery. These signatures are not yet well understood in quantitative terms. Most importantly, we lack
This analysis was possible primarily due to the improved resolution of ABI on board GOES-16 compared to imagers on board previous GOES satellites (Feltz et al., 2009). The Advanced Himawari Imager (AHI) on board Himawari 8, a geostationary satellite at 140.7°E (nadir), has similar specifications to ABI (Bessho et al., 2016).
In particular, AHI also provides three water vapor-sensitive bands at the same central wavelengths as ABI with
Figure 1 .
Figure 1. ERA5 low, medium, and high cloud cover mean from 2018-01-01 to 2021-12-31, separated into the seasons DJF, MAM, JJA, SON; the vertical separation is at approximately 2 and 6 km with respect to the "standard atmosphere"; our chosen domain is depicted in orange.
Figure 2 .
Figure 2. Smoothed daily mean of available and utilized satellite data.
Figure 3 .
Figure 3. Example case of two different sub-regions a-g/h-n on December 23, 03:00 (UTC); a/h: brightness temperatures of ABI's "clean longwave window" band; b/i: brightness temperature perturbations of ABI's "upper-level water vapor" band; c-d/j-k: example tile with its normalized processed absolute Fourier coefficients (see Section 3.1), tile position is indicated in b/i, depicted coordinates are pixels and pixel frequencies, respectively; e/l: TSIW trapping measure for each tile (see Section 3.1); f-g/m-n: mean ERA5 profiles of N², u, and v with min/max bands; a/h, b/i, e/l actually have coordinates in the satellite's scan angle, but have been converted to approximate geographic coordinates for better readability.
Figure 4 .
Figure 4. Clustering of daily mean ERA5 profiles between 675 hPa and 162.5 hPa within our chosen period and domain (Section 2); the clustering is based on a (k = 3)-means clustering applied to the horizontal wind (Section 3); the first plot shows the relative dominance of each cluster throughout the year; the remaining plots show for each respective variable the cluster mean with a standard deviation band.
Figure 7 .
Figure 7. Standard deviation of >, as defined in Section 4.3, with respect to a TSIW trapping measure threshold ɛ for squared buoyancy frequency N², absolute wind shear |(∂ z u, ∂ z v)| and wind speed |(u, v)| along ERA5 pressure levels.
Figure 8 .
Figure 8. The mean with a standard deviation band of >, as defined in Section 4.3, with respect to a TSIW trapping measure threshold ɛ for squared buoyancy frequency N², absolute wind shear |(∂ z u, ∂ z v)| and wind speed |(u, v)| at different ERA5 pressure levels.
Figure 9 .
Figure 9. Relation between the direction of TSIW signatures and the direction of wind shear at multiple ERA5 pressure levels; the TSIW trapping measure threshold is 0.09 and the direction of the resulting signatures is perpendicular to the strongest wave front; an angle of 0° encodes the zonal direction, positive angles encode directions turning anticlockwise and negative angles clockwise.
Figure 10 .
Figure 10. Median with interquartile range bands of >, as defined in Section 4.3, with respect to a TSIW trapping measure threshold ɛ for buoyancy frequency N, absolute wind shear |(∂ z u, ∂ z v)|, and Richardson number Ri at 325 hPa (∼9.14 km) during DJF and the rest of the year (¬DJF). | 2024-02-12T16:03:47.738Z | 2024-02-10T00:00:00.000 | {
"year": 2024,
"sha1": "b6e0606a489adea9beaea86e6fcf2d426f66509d",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023JD038956",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "8ce0c4279d02cf53a0206ce4af24533e67ae5d1d",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": []
} |
90857138 | pes2o/s2orc | v3-fos-license | Can Gut Microbiota Modulation Be Used as a Practical Treatment for Obesity?
J Obes Metab Syndr 2018;27:75-77 It is well known that interactions between behavioral factors, specifically, energy intake and physical activity, environmental, and genetic factors can impact the development of obesity. Recently, the composition of gut microbiota has emerged as a possible important endogenous factor that influences nutrient acquisition and energy regulation. Knowledge and understanding about the association between the human gut microbiota and obesity has increased exponentially during last decade, and the concept of gut microbiota modulation has provided a new paradigm for obesity treatment. Mutual and complex mechanisms are involved in the relationship between the gut microbiota, environment and obesity, but these mechanisms still remain largely unexplored. Understanding the mechanisms for how gut microbiota modulate host appetite and metabolism can provide clues for new strategies for obesity treatment. In this context, much effort has been made to manipulate gut microbiota to target host metabolism and appetite control. Administration of live beneficial bacterial strains (probiotics) is a relatively safe and noninvasive approach for altering the gut microbiota in humans. Several studies have reported that particular bacterial strains could reduce body weight or metabolic disorders, such as metabolic endotoxemia and insulin resistance. Zhang et al.1 indicated that probiotic consumption could reduce weight and body mass index (BMI) when multiple species of probiotics were administrated. However, there is currently no evidence of compositional alterations of the fecal microbiota in response to probiotic administration.2 Recently, Borgeraas et al.3 reported a meta-analysis of randomized controlled trials, which were conducted to examine the effects of probiotic administration in only overweight (BMI, 25–29.9 kg/m2) or obese (BMI, ≥ 30 kg/m2) adults. The study showed that short-term (≤ 12 weeks) probiotic supplementation reduced body weight: weighted mean difference (95% confidence interval) of body weight is –0.60 kg (–1.19 to –0.01 kg, I2 = 49%); BMI, –0.27 kg/m2 (–0.45 to –0.08 kg/m2, I2 = 57%); and fat percentage, –0.60% (–1.20% to –0.01%, I2 =19%). Prebiotics are non-indigestible fibers, which undergo bacterial fermentation and subsequently stimulate the growth of particular types of bacteria, thus conferring a beneficial effect on the host. Parnell and Reimer4 suggested that oligofructose supplementation over 12 weeks reduced body weight, mainly body fat, in overweight and obese subjects. The mechanism for this effect is hypothesized to be associated with in-
It is well known that interactions between behavioral factors, specifically, energy intake and physical activity, environmental, and genetic factors can impact the development of obesity. Recently, the composition of gut microbiota has emerged as a possible important endogenous factor that influences nutrient acquisition and energy regulation. Knowledge and understanding about the association between the human gut microbiota and obesity has increased exponentially during last decade, and the concept of gut microbiota modulation has provided a new paradigm for obesity treatment. Mutual and complex mechanisms are involved in the relationship between the gut microbiota, environment and obesity, but these mechanisms still remain largely unexplored.
Understanding the mechanisms for how gut microbiota modulate host appetite and metabolism can provide clues for new strategies for obesity treatment. In this context, much effort has been made to manipulate gut microbiota to target host metabolism and appetite control.
Administration of live beneficial bacterial strains (probiotics) is a relatively safe and noninvasive approach for altering the gut microbiota in humans. Several studies have reported that particular bacterial strains could reduce body weight or metabolic disorders, such as metabolic endotoxemia and insulin resistance.
Zhang et al. 1 indicated that probiotic consumption could reduce weight and body mass index (BMI) when multiple species of probiotics were administered. However, there is currently no evidence of compositional alterations of the fecal microbiota in response to probiotic administration. 2 Recently, Borgeraas et al. 3 reported a meta-analysis of randomized controlled trials, which were conducted to examine the effects of probiotic administration in only overweight (BMI, 25-29.9 kg/m 2 ) or obese (BMI, ≥ 30 kg/m 2 ) adults. The study showed that short-term (≤ 12 weeks) probiotic supplementation reduced body weight: weighted mean difference (95% confidence interval) of body weight is -0.60 kg (-1.19 to -0.01 kg, I 2 = 49%); BMI, -0.27 kg/m 2 (-0.45 to -0.08 kg/m 2 , I 2 = 57%); and fat percentage, -0.60% (-1.20% to -0.01%, I 2 = 19%). Prebiotics are non-indigestible fibers, which undergo bacterial fermentation and subsequently stimulate the growth of particular types of bacteria, thus conferring a beneficial effect on the host. Parnell and Reimer 4 suggested that oligofructose supplementation over 12 weeks reduced body weight, mainly body fat, in overweight and obese subjects. The mechanism for this effect is hypothesized to be associated with increased short chain fatty acids (SCFAs) and anorexigenic peptides, such as peptide YY (PYY) and glucagon-like peptide-1 (GLP-1) production.
Over the past decades, pervasive use of antibiotics has been found to disrupt the microbiota composition. The increased use of antibiotics has also been paralleled with an increase in the prevalence of obesity. Several studies have recently shown that repeated exposure to antibiotics in early life is associated with early childhood obesity. 5 In comparison, antibiotic use has been shown to improve plasma lipopolysaccharides (LPS) levels and metabolic endotoxemia in animal studies. 6 The mechanisms for microbiota effects on host body composition induced by antibiotic treatment are complex and remain ambiguous. However, if antibiotic administration can be optimized, this approach could provide a tool to modulate gut microbiota and improve metabolic disorders and obesity.
A fecal microbiota transplant (FMT) is the process of transplanting fecal bacteria from a healthy individual into a recipient to cure a specific disease, which is an efficient intervention to alter the gut microbial ecosystem. FMT is a treatment strategy for inflammatory bowel disease, irritable bowel syndrome and metabolic diseases, in addition to severe recurrent Clostridium difficile infection. Recently, Kootte et al. 7 reported that there were beneficial effects of lean donor FMT on glucose metabolism, which is dependent on decreased fecal microbial diversity at baseline. Therefore, FMT could prove to be a useful treatment against obesity and metabolic disorder in the near future, although the procedure is not well established and the mechanisms require further research investigation.
Several mechanisms for gut microbiota modulation, which could potentially influence weight gain and fat deposition, have been elucidated in many studies. Altered symbiotic interactions between the gut microbiota and the host are associated with increased metabolic and immune disorders. Turnbaugh et al. 8 suggested an "energy harvest" hypothesis which proposes that the obese microbiome (microbial metagenome sequences) has an increased capacity to harvest energy from the diet. Gut microbiota produce SCFAs from remnant dietary fibers, subsequently modulating the microbiome-gut-brain axis. SCFAs (acetate, propionate, and butyrate) are ligands for G-protein-coupled receptors (GPCRs), which are regulators of gut motility, host energy balance, fat storage and appetite. 9 SCFAs affect appetite through anorexigenic peptides (PYY, GLP-1). 10 PYY is stimulated by GPCRs, which can also lead to changes in gut motility and facilitation of nutrient absorption. Acetate in particular can activate the parasympathetic nervous system that increases glucose-stimulated insulin, ghrelin secretion, and excessive eating. 11 Butyrate may reduce the appetite by regulating leptin expression in adipocytes. 12 In addition, gut microbiota affect the expression of obesity-related genes, for example, fasting-induced adipose factor (FIAF) and AMP-activated protein kinase (AMPK). 13,14 Overexpression of FIAF can reduce adipose tissue by stimulating fatty acid oxidation and uncoupling in fat. AMPK inhibition can lead to decreased fatty acid oxidation and upregulation of lipogenesis.
Chronic low-grade inflammation is prominent in both obesity and insulin resistance. A high-fat diet can increase gut permeability and the proportion of LPS-containing microbiota in the gut. 15 Excessively high levels of LPS (metabolic endotoxemia) can lead to gut, hepatic, and adipose tissue inflammation and diabetes.
Although the rapidly evolving metagenomics field has facilitated a better understanding of microbiome-host-environment interaction, only a fraction of the complex and dynamic relationships have been revealed. Some of the mechanisms for gut microbiota and obesity have already been identified, including energy harvest, free fatty acid modulation, obesity-related gene expression and metabolic endotoxemia. In accordance with those theories, modifications in the gut microbial ecology by probiotics, prebiotics, antibiotics or FMT can change and possibly improve host obesity and metabolic diseases. Therefore, integrating epidemiological analyses in practice will enable researchers to optimize therapeutic strategies to modulate gut microbiota composition to target metabolic effects as well as obesity. In the foreseeable future, we hope to develop additional tailored therapeutic strategies for obesity and metabolic disorders based on individual gut microbiota compositions, host, and environmental factors.
CONFLICTS OF INTEREST
The authors declare no conflict of interest. | 2019-04-02T13:14:53.678Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "16f00af5343e597a60f40556512320a034ad5687",
"oa_license": "CCBYNC",
"oa_url": "http://www.jomes.org/journal/download_pdf.php?doi=10.7570/jomes.2018.27.2.75",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16f00af5343e597a60f40556512320a034ad5687",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
966949 | pes2o/s2orc | v3-fos-license | Flexibility in Research Designs in Empirical Software Engineering
Problem outline: It is common to classify empirical research designs as either qualitative or quantitative. Typically, particular research methods (e.g., case studies, action research, experiments and surveys) are associated with one or the other of these types of design. Studies in empirical software engineering (ESE) are often exploratory and often involve software developers and development organizations. As a consequence, it may be difficult to plan all aspects of the studies, and to be successful, ESE studies must often be designed to handle possible changes during the conduct of the study. A problem with the above classification is that it does not cater for flexibility in design. Position: This paper suggests viewing research in ESE along the axis of flexible and fixed designs, which is both orthogonal to the axis of quantitative and qualitative designs, and independent of the particular research method. According to the traditional view of ESE, changes to the research design in the course of a study are typically regarded as threats to the validity of the results. However, viewing the study designs as flexible, practical challenges can provide useful information. The validity of the results of studies with flexible research designs can be established by applying techniques that are traditionally used for qualitative designs. This paper urges an increased recognition of flexible designs in ESE and discusses techniques for establishing the trustworthiness in flexible designs.
INTRODUCTION
Empirical software engineering (ESE) studies often involve humans, as individuals or as part of a software development organization, who have their own constraints and expectations.It may be difficult for the researcher to predict these constraints and expectations, but in practice the researcher must adapt to them throughout the research (Anda, Hansen et al. 2006;Conradi, Dybå et al. 2006).Furthermore, studies in ESE are seldom based on established theories (Hannay, Sjøberg et al. 2007), and as a consequence elements of the research design, such as the research question or the concepts investigated, may need to be refined during the study.These features of research in ESE demand that the researcher be flexible, in order to manage research that takes unanticipated directions.To enable the researcher to be flexible, the research design must also be flexible.
We define flexibility simply as the capacity to adapt (Golden and Powell 2000), although a number of alternative definitions of flexibility of projects and organizations exist; see, for example, (DeLeeuw and Volberda 1996;Golden and Powell 2000;Olsson 2006).
Research designs are commonly classified as either quantitative or qualitative.Only qualitative designs are viewed as being flexible.The terms "quantitative" and "qualitative" are also used for the data collected in the empirical studies.Our experience is that using the same terms for both the characteristics of the data collected and the features of the research design leads to some confusion among software engineering researchers.Qualitative designs can, for example, incorporate quantitative methods of data collection.To better describe the degree of flexibility in research designs, leaving the type of data optional, Anastas and MacDonald (1994) and Robson (2002) use the terminology of fixed and flexible designs in social science.Because there is a need for increased awareness of the flexibility in ESE research designs, we suggest that this terminology for, or perspective on, design also be used in ESE.In fixed designs, the design is specified early in the research process, whereas in flexible designs, it is allowed to evolve during the research.Both qualitative and quantitative data may be used in both fixed and flexible designs.A research design may be either completely fixed, completely flexible, or have degrees of flexibility.We believe that there are completely flexible designs conducted in ESE, but that these typically follow the traditional qualitative framework, collecting qualitative data in ethnographies, action research, or exploratory case studies.In our experience, other types of study in ESE also face some form of uncertainty in the planning phase and thus require a degree of flexibility in the design.Hence, our main concern is to find a design perspective that embraces these studies.In particular, we believe that many experiments need a degree of flexibility in the design.A review of the literature on the type of evidence produced by empirical software engineers, performed by Segal (2005), shows that laboratory experiments dominate evaluations.Hence, our perspective might influence many empirical studies in software engineering.
Our main aims in this paper are to increase the awareness of why and how some ESE studies are flexible, and to initiate a discussion of how to handle this flexibility and simultaneously conduct methodologically sound research.We suggest using flexible designs, including appropriate techniques for establishing trustworthiness.
The remainder of this paper is organized as follows. Section 2 describes the features of fixed and flexible designs and gives examples of how the need for flexibility occurs. Section 3 suggests factors to consider when choosing a design. Section 4 suggests techniques for establishing the trustworthiness in studies with a flexible design. Section 5 concludes.
TYPES OF RESEARCH DESIGNS
We view a research design as consisting of the following elements as shown in Figure 1: purpose(s), theories, research questions, methods and sampling strategies.Both the purpose(s) and the theories help the specification of the research questions.Once the research questions have been specified, decisions can be made regarding the methods to use and the sampling strategies.The methods include the following: the research strategy, such as the case study, survey, or experiment; the constructs and measures; the methods of collecting data, such as interviews, observations, or questionnaires; the methods of analysis; the techniques for establishing the trustworthiness in the study; and the research schedule.Finally, the sampling strategy includes descriptions of the study units and how to select them.
FIGURE 1: A framework for a research design (Robson 2002)
In a research study that follows a fixed design, the elements presented in Figure 1 must be specified before the data collection starts. More specifically, applying a fixed research design entails following the procedure that is presented in Figure 2a. Ideas are generated and the research is designed at the beginning of the procedure, and it is here that plans for collecting and analysing data are made. Any methods or data can be used, provided that they can be specified early in the research process.
Examples of typical types of fixed design are experiments that test theories and use statistical methods as a decision tool for drawing conclusions (Arisholm, Gallis et al. 2007), replications (Laitenberger, Emam et al. 2001), systematic reviews of well understood phenomena (Kitchenham, Mendes et al. 2006), and surveys that are based on questionnaires (Dybå 2005).
Another example of fixed designs would be studies that are not necessarily based on theories, but that have short time schedules that allow no flexibility.An example would be experiments performed at developer seminars (Grimstad and Jørgensen 2007).
In flexible research designs, the design components that are presented in Figure 1 are specified during the course of the study.Hence, when applying a flexible research design, the methods of inquiry evolve incrementally in response to the data obtained (Robson 2002).The generation of ideas, designing, data collection, and analysis and writing proceed together or in iterations, rather than in separate stages; as shown in Figure 2b.
So, whereas the procedure of following a fixed research design is analogous to the waterfall method of designing software, the procedure of following a flexible research design is similar to iterative software development or agile methods.The researcher's inability to fix one or several design elements at the beginning of the research, as well as practical constraints during the study, create the need for a flexible design.Examples are the specification of research questions, constructs and measures, and the research schedule: • Research questions.A study may set out with a tentative research question that is refined in the course of the study, because the understanding of the phenomenon under study and of what can actually be studied empirically increases.
• Constructs and measures.The mostly immature theories in software engineering mean that there will often be a corresponding lack of established constructs associated with the phenomenon under study.Further, the constructs may lack empirical validation.Consequently, constructs and measures may be refined during the study.Moreover, the knowledge about potential data sources and their quality may be limited at the outset of a study, so that the collection of data must be adapted to the actual data available.• Research schedule.The research schedule may have to be revised during the research, because of unforeseen events.
To give practical examples of how the need for flexibility occurs, we present in Table 1 experiences from two studies in ESE: a systematic review of the literature and a series of experiments.These studies were initially planned with a fixed design.However, because of lack of theories of the phenomenon under investigation, it became evident that some flexibility was required.The review was a quantitative investigation of the literature on experimentation over a decade.The first part of the review selected the relevant articles and summarized the characteristics of the experiments.The last part of the review investigated the reporting of effect sizes and quasi-experimentation.Research question: In the last part of the review, the initial research question asked whether there was a difference in effect sizes between randomized experiments and quasi-experiments.This appeared to be difficult to answer, because the experiments did not include the necessary information for estimating the effect sizes.As a consequence, the review ended up with investigating the state of practice of effect size reporting and quasi-experimentation.Constructs and measures: In the first part of the review, the operational definition of a software engineering experiment evolved with the selection of articles.Several researchers were involved in the study and the final criteria for inclusion were the results of several discussions.Further, the definition of effect size changed throughout the study, and ended up including the unstandardized effect size, because this type appeared to be reported in some articles and seemed very useful for describing the practical importance of the result.Furthermore, types of quasi-experiments in software engineering were not known in advance and therefore, the catalogue of quasi-experiments was continuously refined.Research schedule: The time schedule was revised continuously throughout the review.Lessons learned: The decision to not follow the initial plan, but account for new insight during the work, was important for the final quality of the review.All the refinements of research questions, constructs, and measures were valuable for the final results.However, the iterative process made the study more resource-demanding than planned; a flexible design requires a flexible budget.The iterative process was sometimes frustrating.If we had known the framework of flexible designs, we would probably have been more comfortable with all the refinements.Pre-review mapping and piloting the review protocol, as suggested by Brereton, Kitchenham et al. (2007), might have helped to reduce the number of iterations.However, many changes appeared late in the process and a flexible approach would still have been valuable for this type of review.The first experiment was a pilot study with 26 students as participants; the second was conducted with 53 students as participants; and the third was conducted with 22 professional software developers.The experiments were motivated by common claims in software engineering textbooks, but there were no established theories on the topic.Research question: The initial research question was whether there was a difference, regarding the time spent on design and quality of the final class diagrams, between a use-case-driven development process and a responsibility-driven development process.However, during the analysis and writing up of the experiments, we realized that the experiment had compared a more specific aspect of the two processes, i.e. 
the transition from use cases to class diagrams.
Consequently, the research question was changed to whether there was a difference, regarding time spent on design and quality of the final class diagrams, when classes were derived by analyzing the use cases compared to when the use cases are used to validate the class diagram. Constructs and measures: The exploratory nature of the experiments meant that the constructs used for the independent variable, the process, and for one of the dependent variables, quality, were not well established at the outset. Therefore, qualitative data was collected during the experiments to allow us to understand how the participants worked when performing the experimental tasks. The assessment of the quality of the final solutions was qualitative and was refined slightly on the basis of the actual data.
The procedure for data collection mostly remained as planned at the outset of the study, but we made some changes in response to the specific features of each experiment.In addition, a few of the participants did not manage to follow the process description that was part of the experimental material, so their solutions were discarded.
Research schedule: The study procedure was revised for each experiment.Lessons learned: Revising the initial research questions and central constructs during the course of the study was important for the quality of the final study, because it allowed us to take into account what we had previously learned.Furthermore, the collection of qualitative data, in particular about how the participants worked during the study, was valuable in ensuring the validity of the results.Conducting a pilot experiment is recommended in empirical research before fixing the design for the main experiment.Therefore, some revisions of the research design are also catered for in the existing literature on software engineering experiments.In this case, the first experiment can be characterized as a pilot.However, we found that it was difficult to fix all aspects of the design on the basis of the relatively small pilot and some flexibility was also useful in the later experiments.
CHOOSING AN APPROPRIATE RESEARCH DESIGN
The researcher must decide whether to use a fixed design or a design that accounts for a certain degree of flexibility early in the research process.In this decision process, we suggest considering the maturity of the research, the purpose of the research, the research setting and the time schedule of the research.
The maturity of research can be catalogued according to the extent of previous work in the field, for example nascent, intermediate, and mature theory; as shown in Table 2.
The purpose of research is commonly divided into exploratory, descriptive and explanatory; see Table 3.The purpose of the research often depends on the maturity of the research, but not in a deterministic way.Purpose and maturity represent two different perspectives, and both must be considered when choosing a design.In general, the less that is known about a specific topic, the greater the flexibility in the design.However, the research setting and the time schedule must also be considered.Exploratory research: Research in which the primary purpose is to examine a little understood issue or phenomenon to develop preliminary ideas and move toward refined research questions by focusing on the "what" question.Descriptive research: Research in which the primary purpose is to "paint a picture" using words or numbers and to present a profile, a classification of types, or an outline of steps to answer questions such as who, when, where, and how.Explanatory research: Research in which the primary purpose is to explain why events occur and to build, elaborate, extend, or test theory.
The research setting can be divided into two categories, which are based on the extent of control.In laboratories and classrooms, more control is possible than in studies that are conducted in a field setting.A controlled setting may enable a fixed design, even if the study is exploratory, whereas a field setting often requires a flexible design.Edmondson and McManus (2007) describe the process of field research on management as a journey that may involve almost as many steps backwards as forwards, in an iterative way.We think that their description fits well into the perspective of a flexible design.Moreover, Edmondson and McManus argue that this iteration is present in all types of field research on management, but that the timing and intensity of the iterations depends on the level of maturity of the research.They also argue that field research is exposed to so many unforeseen events that it must be viewed as a continuous learning process, and that the aim of the learning process is to achieve methodological fit.We present part of their model in Table 4.They suggest using qualitative data for nascent research, a combination of data types (hybrid or mixed methods) for intermediate research, and quantitative data for mature research.This recommendation is in line with our view of the type of data being orthogonal to the choice of fixed and flexible design.Further to this, we believe that quantitative data is sometimes useful for nascent research and qualitative data is sometimes useful for mature research.
A fourth factor to consider is the time schedule of the research.Studies that have a short time schedule (perhaps as short as an hour) often require a fixed design, whereas studies that have a long perspective (perhaps a day or more) often require a flexible design.Sometimes, participants in experiments perform tasks at different times.Such experiments might last for several weeks, allowing the researcher to influence the later part of the experiment using experiences obtained in the early part as a basis.In addition, the chances that other unexpected events will occur increase with time.
ESTABLISHING TRUSTWORTHINESS
An important part of the research design is to establish trustworthiness. In a fixed research design, trustworthiness is established by the production of a research plan, which includes control of potential biases that can influence the result, and a performance according to the plan. Central concepts when talking about trustworthiness in fixed designs are validity and reliability; as shown, for example, by descriptions in (Shadish, Cook et al. 2002) and (Wohlin, Runeson et al. 1999). Examples of ways of establishing trustworthiness in fixed designs are randomization, blinding, random sampling, and computations of researchers' agreement scores.
In flexible designs, there is no fixed plan up front to compare performance to by the end of the study, and there might be types of biases different from those that apply to fixed designs.
In the remainder of this section, we describe techniques for establishing trustworthiness in flexible designs.We will use the definitions of validity and reliability that are suited to all types of research, as suggested in (Hinds, Scandrett-Hibden et al. 1990).We start with describing validity.
Validity is established when the findings reflect reality, and the meaning of the data is accurately interpreted.(Hinds, Scandrett-Hibden et al. 1990, p.431) One main threat to validity in studies that have a flexible design comes from the researcher's involvement in the study.It is the researcher's role to be deeply involved in every iteration and decision.In contrast to using a fixed design, where the researcher can concentrate on the planning in a specific time period followed by phases of practical work and analyses according to the plan, in a flexible design, he or she must continuously handle all aspects of the research: planning, performance, and analyses.This situation is very demanding.The researcher must avoid that research being more influenced by his or her personal assumptions than by the data.This threat from researcher bias and hence to valid interpretation can be reduced or eliminated by the techniques described in the literature on qualitative research; see for example (Kvale 1989;Huberman and Miles 2002;Creswell 2007).
In addition to the potential researcher bias, we believe there are two other main threats to validity in flexible designs.One is collecting data that is not the best suited for answering the research questions.This might occur when the research question changes in response to the research and the procedure for collecting data is not sufficiently flexible to account for these changes.The other threat to validity occurs when the researcher does not account for the flexibility of the design when analysing and reporting the results.The flexibility in the design will influence the inferences made from the results.For example, the assumptions for the statistical analyses might not be fulfilled.In such cases, the results can be regarded as justifying the formulation of hypotheses, rather than the formulation of firm conclusions.Furthermore, the reporting of the study must account for the insight obtained through the flexible approach.Hence, both the limitations and the gains obtained through the flexibility must be reported.
We suggest considering these threats to validity and corresponding techniques for reducing them, when performing studies in ESE that need a flexible design.In particular, we are concerned with those studies that traditionally do not use such techniques, for example experiments, systematic reviews, and other studies that use quantitative data.We recommend the following, which are mostly based on the descriptions in (Robson 2002): • Strive for the right researcher skill.The researcher must be able to manage unanticipated directions in the research and to balance adaption and rigour.Moreover, the researcher must know the issue under investigation, because the information gathered is interpreted, not only recorded.Finally, he must be open to contrary findings and ask for critical views on the work.• Use multiple researchers.There is probably more need for multiple researchers in the conduct and analyses in flexible designs than in fixed designs.Arrange peer debriefing and support group sessions.• Be aware of your value system.Write a description of your pre-assumptions and value system and keep a journal of your reflection.• Document everything.Produce an archive of your activities, raw data, analysis notes, etc. and let others inspect it (Audit trial).Document the analysis process so that you can trace the route by which you came to your interpretation.• Use the strategy of triangulation.Use multiple sources to enhance the rigour of the research.For example, collect both qualitative and quantitative data.• Collect data on a broad basis.Be open to the need for data that are related to, but that do not contribute directly to, answering the initial research questions.• Perform member checking.Check with the respondents to determine whether your interpretations are correct from their view.For example, interview the participants after their performance in experiments.• Account for flexibility in the analysis and reporting of the study.Both the limitations and the gains obtained through flexibility must be considered in the analysis and reporting.
Generalizability is one part of validity.Generalizability is possible in flexible design by providing sufficient information in the reporting of the study to enable the reader to determine whether the findings are applicable to his situation (Robson 2002).
Reliability is the second concept of trustworthiness.
Reliability is established when the repeatability of scientific observations, and sources that could influence the stability and consistency of those observations, have been identified and evaluated. (Hinds, Scandrett-Hibden et al. 1990, p. 431) Subjectivity and objectivity in research are often connected to the question of reliability. The researcher's role in the flexible design makes it easy to consider flexible design to be subjective, and thereby unreliable. Patton (1990) prefers to avoid using the terms "subjectivity" and "objectivity". He strives for empathic neutrality, by which he means being non-judgemental and reporting what is found in a balanced way. Phillips (1990) claims that "All good research is objective in the sense that it has been open to criticism and withstood serious scrutiny." Hence, a way of establishing reliability in flexible designs is to let other researchers evaluate all aspects of the research.
We have presented ways of establishing trustworthiness in the research to handle the challenges that arise from the flexibility of the design. In addition, worldviews (such as positivism, constructivism, and pragmatism) and particular choices of research methods and type of data gathered must be considered. For example, Lee (1989) discusses conducting case studies that are consistent with the conventions of positivism, Klein and Myers (1999) discuss how to conduct interpretive field studies, and Host and Runeson (2007) have suggested a checklist to use in case studies in software engineering (see also the book by Yin (2003) for general descriptions of case studies). Moreover, the recent special issue of Information and Software Technology on qualitative software engineering research provides many useful examples of approaches for study designs, data collection, and analysis that should be relevant for future studies of software development that employ flexible designs (Dittrich, John et al. 2007). Finally, issues regarding mixed methods are presented in (Tashakkori and Teddlie 2003).
CONCLUSION
We have suggested that research designs in ESE often need to be flexible. The rationale for this perspective is that studies in ESE are often exploratory, immature, or performed in a field setting. Moreover, the studies involve people, whose behaviour or skills we cannot predict exactly. Because such studies are difficult to plan in detail, the researcher must be flexible and be prepared to adapt when the research takes an unanticipated direction. This requires the use of flexible research designs.
Our impression is that most research in ESE uses fixed designs, in the form of experiments and surveys, probably because this type of design is traditionally regarded as the most reliable, or the easiest to implement. This strategy might imply that the full potential of the study is not achieved, for example, because deviations from the plan are regarded as threats to validity. Using a flexible design, such deviations are regarded as learning opportunities and are used to adjust the design for the remainder of the research, as well as being part of the results. Moreover, flexible research requires a flexible budget. Hence, planning for flexibility will help to formulate a realistic budget.
A flexible design can be used in all types of ESE research, the extent and timing of the flexibility being study-specific. In order to establish trustworthiness, techniques for reducing researcher bias must be used and the reporting of the study must account for both the limitations and the insight obtained through the flexible approach.
We hope that the work presented herein will promote discussion on how to handle the need for flexibility in research designs in ESE and simultaneously perform methodologically sound studies.
FIGURE 2: Visualizations of the procedures that follow a fixed research design and a flexible research design.
Example 2. A series of three laboratory experiments that investigated the effects of different ways of applying use cases in the construction of class diagrams (Anda and Sjøberg 2003; Syversen, Anda et al. 2003; Anda and Sjøberg 2005).
TABLE 1: Examples of how flexibility might occur in studies in ESE.
TABLE 2: (Edmondson and McManus 2007) Mature theory presents well-developed constructs and models that have been studied over time with increasing precision by a variety of scholars. Intermediate theory presents provisional explanations of phenomena, often introducing a new concept and proposing relationships between it and established constructs. Although the research question may allow the development of testable hypotheses, similar to mature theory research, one or more of the constructs involved is often still tentative, similar to nascent theory research. Nascent theory proposes tentative answers to novel questions and suggests new connections among phenomena.
TABLE 4: (Edmondson and McManus 2007) fit for research in a field setting.
"year": 2008,
"sha1": "2a0d8d62856b76d978dff392595b04d730a91e4c",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/a6f8ca4e-4a03-42fd-8ba9-92706eb89abe/ScienceOpen/001_Kampenes.pdf",
"oa_status": "HYBRID",
"pdf_src": "Grobid",
"pdf_hash": "2a0d8d62856b76d978dff392595b04d730a91e4c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
Genetic Evidence for O-Specific Antigen as Receptor of Pseudomonas aeruginosa Phage K8 and Its Genomic Analysis
Phage therapy requires the comprehensive understanding of the mechanisms underlying the host-phage interactions. In this work, to identify the genes related to Pseudomonas aeruginosa phage K8 receptor synthesis, 16 phage-resistant mutants were selected from a Tn5G transposon mutant library of strain PAK. The disrupted genetic loci were identified and they were related to O-specific antigen (OSA) synthesis, including genes wbpR, ssg, wbpV, wbpO, and Y880_RS05480, which encodes a putative O-antigen polymerase Wzy. The lipopolysaccharide profile of the Y880_RS05480 mutant was analyzed and shown to lack the O-antigen. Therefore, the data from characterization of Y880_RS05480 by TMHMM and SDS-PAGE silver staining analysis suggest that this locus might encode Wzy. The complete phage K8 genome was characterized as 93879 bp in length and contained identical 1188-bp terminal direct repeats. Comparative genomic analysis showed that phage K8 was highly homologous to members of the PaP1-like phage genus. On the basis of our genetic findings, OSA of P. aeruginosa PAK is proven to be the receptor of phage K8. The highly conserved structural proteins among the genetically closely related phages suggest that they may recognize the same receptor.
INTRODUCTION
Pseudomonas aeruginosa is an opportunistic pathogen present in diverse environmental niches. It is also one of the most common causes of healthcare-associated infections including pneumonia, bloodstream infections, urinary tract infections, and surgical site infections, accounting for about 9% of all nosocomial infections (Emori and Gaynes, 1993;Lister et al., 2009). Antibiotics are widely used for prevention and control of the infections caused by P. aeruginosa, leading to the emergence and the increasing prevalence of multidrug-resistant P. aeruginosa clinical isolates (Karaiskos and Giamarellou, 2014). There is an urgent need to discover and develop new classes of antibiotics and alternatives to the conventional drugs. Among the potential candidate antibacterials, lytic phages can kill bacteria efficiently and specifically, bringing great promises to combat drug-resistant pathogens (Kutter et al., 2010;Chan et al., 2013).
Though great progress has been made in phage discovery and applications, the underlying mechanisms of host-phage interactions still remain to be elucidated. In this study, P. aeruginosa phage K8 was selected for biological characteristic analysis, phage genome annotation, and screening for host genes encoding the phage receptors.
Transmission Electron Microscopy (TEM)
Phage particles were purified as described previously. In brief, phage K8 lysate (about 10^11 pfu/ml) was treated with DNase I (5 µg/ml) and RNase A (5 µg/ml) at 37°C for 1 h. With the addition of 0.1 M NaCl, the mixture was kept on an ice bath for 1 h and spun at 12000 × g for 20 min. The collected supernatant was supplemented with PEG6000 (10%) and stored at 4°C overnight before centrifuging at 12000 × g for 20 min. The phage pellet was suspended with 2% ammonium acetate (pH 7.0) and filtrated with an Amicon-100 filter. The purified phages were adsorbed onto a carbon-coated copper grid for 5 min, and subsequently negatively stained with 2% phosphotungstic acid (pH 6.7) for 5 min. Morphology observation was carried out with a JEM-1400 transmission electron microscope operating at 100 kV.
1 http://www.ncbi.nlm.nih.gov/genomes
Latent Period and Burst Size Analysis
Latent period and burst size of phage K8 were determined by the one-step growth experiment described previously. Briefly, PAK cells were harvested from the 50 ml culture (OD600 at 0.6) and suspended in 0.5 ml LB medium. The suspension was mixed with 0.5 ml appropriately diluted phage K8 solution at a MOI (multiplicity of infection) of 0.0001. After adsorption for 1 min, the mixture was spun at 13000 × g for 30 s to remove free phage particles. The pellet was resuspended in 100 ml LB medium for immediate cultivation. At 5 min intervals, samples were taken and the infection centers were determined by the double-layer agar plate method (Li et al., 2010).
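As a rough illustration of how the latent period and burst size can be read off such a one-step growth curve, the Python sketch below assumes a list of (time, titer) samples and a fixed rise threshold; the numbers and the threshold are hypothetical and this is not the authors' analysis code.

```python
# Hypothetical sketch: estimating latent period and burst size from
# one-step growth data. Times in minutes, titers in pfu/ml.
samples = [(0, 1.0e4), (5, 1.0e4), (10, 1.1e4), (15, 1.2e4),
           (20, 2.5e4), (25, 2.0e5), (30, 4.4e5), (35, 4.6e5), (40, 4.6e5)]

initial_centers = samples[0][1]          # infection centers right after adsorption

# Latent period: last time point before the titer rises appreciably
# (a 50% increase over the initial plateau is used here as an arbitrary threshold).
latent = max(t for t, titer in samples if titer < 1.5 * initial_centers)

# Burst size: average titer on the final plateau divided by the initial centers.
plateau = [titer for t, titer in samples if t >= samples[-2][0]]
burst_size = (sum(plateau) / len(plateau)) / initial_centers

print(f"latent period ~ {latent} min, burst size ~ {burst_size:.1f} pfu/infection center")
```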
Screening of Phage Resistant Mutants
Tn5G transposon was used to mutagenize P. aeruginosa PAK to construct an insertional mutant library as described earlier (Nunn and Lory, 1992). After mating, a fraction of the mutant bank was mixed with the stock solution of phage K8 and incubated for 4 h with a shaking speed of 220 rpm. Aliquots were plated onto L-agar medium with 100 µg/ml gentamicin and 100 µg/ml ampicillin. The grown colonies were selected as the phage-resistant mutants. The mutants were confirmed by the spotting assay and the double-layer plate method as described previously (Li et al., 2010). The adsorption rate of the phage-resistant mutants was determined as described previously.
Identification of the Transposon Insertion Sites by Inverse PCR
Inverse PCR was performed as described previously (Wang et al., 1996). In brief, chromosomal DNA was isolated from the phage-resistant mutants, digested with the restriction enzyme TaqI or PstI, self-ligated, and amplified using primers OTn1 and OTn2, Tn1 and Tn2, or F1 and R1, respectively (Table 2). The PCR products were sequenced directly or cloned into pGEM-T Easy vector for sequencing. The obtained sequences were analyzed by searching the genome database of Pseudomonas strains 2.
The primers were designed to amplify the target genes disrupted in the phage-resistant mutants. The PCR products were subsequently cloned into the multiple cloning sites of plasmid pUCP18. The recombinant plasmids were transformed into the phage-resistant mutants. Sensitivity to phage K8 was tested in the transformants with the spotting and the double-layer method (Li et al., 2010).
Transmembrane Helices Prediction and Lipopolysaccharide (LPS) Profile Analysis
The transmembrane helices of the putative O-antigen polymerase encoded by gene Y880_RS05480 were predicted with TMHMM (Krogh et al., 2001). Lipopolysaccharide (LPS) was extracted using the hot water-phenol method as described previously (Westphal and Jann, 1965). In brief, PAK cells in 100 ml culture (OD600 at 1.0) were harvested and subjected to the treatments of hot water and phenol sequentially. The residual phenol was removed by dialysis in water. The LPS solution was concentrated by dialysis in 40% PEG6000 solution. DNase I (10 µg/ml) and RNase A (100 µg/ml) were added to remove the residual nucleic acid in the LPS samples. LPS was analyzed by 12% SDS-PAGE and visualized by the silver staining method as described previously (Fomsgaard et al., 1990).
Biofilm Assay
Biofilm production was assessed in wild-type strain PAK and the phage-resistant mutants as described previously (Brencic et al., 2009). In brief, LB medium was diluted three times and used for biofilm production. The cultivation was carried out in 96 well plates and incubated at 37°C for 48 h. Biofilm was quantitatively measured with the crystal violet staining method (Djordjevic et al., 2002).
Phage Genome Sequencing and Annotation
Purified phage particles were subjected to genomic DNA extraction according to the method described previously. Genome sequencing was carried out on an Illumina HiSeq 2500 at GENEWIZ, Inc., China 4. The adaptor sequences were removed with the software Trimmomatic v0.30 (Bolger et al., 2014). A total of 7803122 reads and 773403396 bp were obtained as clean data without any uncertain bases. The sequences were assembled with the software Velvet_v1.12.10 (Zerbino and Birney, 2008). DNA Master was used for the phage genome annotation 5 by searching against the non-redundant protein database (nr) from NCBI (Wheeler et al., 2003). The software tRNAscan-SE v1.21 was used to predict tRNA genes (Schattner et al., 2005). GC content was determined with the software DNAStar. The GC skew was analyzed by the software DNAPlotter (Carver et al., 2009).
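The GC content and GC skew computations mentioned above are simple enough to sketch directly; the Python fragment below is illustrative only and is not the DNAStar/DNAPlotter workflow used in the study (window and step sizes are arbitrary assumptions).

```python
def gc_metrics(seq, window=1000, step=500):
    """Sliding-window GC content and GC skew, (G - C)/(G + C), over a genome string."""
    seq = seq.upper()
    out = []
    for start in range(0, max(len(seq) - window, 0) + 1, step):
        win = seq[start:start + window]
        g, c = win.count("G"), win.count("C")
        at = win.count("A") + win.count("T")
        content = (g + c) / max(g + c + at, 1)
        skew = (g - c) / max(g + c, 1)
        out.append((start, content, skew))
    return out

# Example with a toy sequence; a real run would read the assembled K8 genome from a FASTA file.
for pos, content, skew in gc_metrics("ATGC" * 600, window=800, step=400):
    print(pos, round(content, 3), round(skew, 3))
```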
Identification of Phage Genome Termini
Sequencing depth was analyzed across the assembled genome to find the high-frequency sequences (HFSs) which might represent the phage genome termini (Li et al., 2014). Restriction enzyme cleavage sites and restriction mapping in linear or circular genome sequences were simulated using the software DNA Master. The fragments containing the possible 3′ or 5′ terminus of the K8 genome were purified by agarose gel electrophoresis. The purified DNA fragments were treated with the Klenow fragment and T4 DNA ligase sequentially. PCR was performed with the specific primers (Table 2) and the PCR products were sequenced to identify the genome termini. The assembled genome was curated and the complete genome sequence was deposited in GenBank in NCBI with the accession number KT736033.
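A minimal sketch of the depth-based screen for candidate termini is given below: given a per-base coverage array, it flags runs whose coverage greatly exceeds the genome-wide mean. The fold threshold and minimum run length are illustrative assumptions, not the values used in the study.

```python
import statistics

def candidate_termini(depth, fold=20, min_len=50):
    """Return (start, end, mean_depth) of runs whose coverage is >= fold x the genome mean."""
    mean_depth = statistics.fmean(depth)
    runs, start = [], None
    for i, d in enumerate(depth + [0]):          # sentinel closes a trailing run
        if d >= fold * mean_depth:
            start = i if start is None else start
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i, statistics.fmean(depth[start:i])))
            start = None
    return runs

# Toy example: flat coverage with one 102-bp high-frequency region.
cov = [50] * 5000
cov[1000:1102] = [3100] * 102
print(candidate_termini(cov))
```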
Comparative Genomic Analysis
The homology search of the K8 genome sequence was performed against the NCBI nucleotide database. Four phage genome sequences were selected for comparison analysis by the software Mauve, including PaP1, JG004, PAK_P2, and vB_PaeM_C2-10_Ab1 (Darling et al., 2004). The tail fiber proteins of phage K8 were analyzed and the phylogenetic tree was constructed with MEGA5 (Tamura et al., 2011).
Characteristics of Phage K8
Purified phage K8 particles were negatively stained using 2% phosphotungstic acid and observed by TEM. The obtained images showed that phage K8 has an icosahedral head structure connected with a contractile tail. The phage head was about 76.0 nm in diameter and the tail was about 122.0 nm in length (Figure 1A). The observed morphology indicated that phage K8 should be tentatively classified as a member of the Myoviridae family. The progeny production of phage K8 was characterized by a one-step growth experiment with a MOI of 0.0001. As inferred from the triphasic curve, the latent period was about 20 min and the burst size was about 46.3 pfu/infection center (Figure 1B).
Identification of Phage Receptor Related Genes
A random Tn5G transposon library of P. aeruginosa PAK was constructed to identify the host genes involved in the phage infection process. A total of 16 phage K8 resistant mutants were isolated. With inverse PCR, five different genes were identified as disrupted in the mutated strains, including two mutants with the inactivated gene identical to the wbpR gene of strain LESB58, four mutants with the inactivated gene identical to the wbpV gene of strain PA96, one mutant with the inactivated gene identical to the ssg gene of strain PAO1 (Veeranagouda et al., 2011), two mutants with the inactivated gene identical to the wbpO gene of strain PA96, and seven mutants with the inactivated gene Y880_RS05480 encoding a probable O-antigen polymerase with 22.1% identity to Wzy (AIG62435) of E. coli at the amino acid sequence level (Figure 2 and Supplementary Table S1).
Adsorption Rate Analysis and Confirmation of the Phage Resistant Mutants
The adsorption ability of the phage-resistant mutants was analyzed. Compared with the parent strain PAK, the relative adsorption rates of phage K8 to the mutants were between 28.5 and 73.7%, implying that the phage receptors were impaired in these mutants (Figure 3A). A complementation test was carried out for each mutant. Genes wbpR, wbpV, ssg, and Y880_RS05480 were cloned and complementation successfully restored the sensitivity to phage K8 in the corresponding mutants, while genes wbpO and wbpP were both required for the restoration of phage sensitivity in the wbpO mutant (Figure 3B). The results indicated that the disrupted genes in the mutants were responsible for the phage resistance phenotype.
Function Analysis of Gene Y880_RS05480
Lipopolysaccharide (LPS) comprises two forms of O-antigen, the common polysaccharide antigen (CPA) and the O-specific antigen (OSA). Wzy O-antigen polymerases are essential for O-antigen biosynthesis. They exhibit low sequence conservation among the P. aeruginosa strains with different serotypes. Currently, Wzy proteins are not identified in O6, O7, and O8 strains of P. aeruginosa despite the fact that they produce normal O-antigen on the surface of their cells (Islam and Lam, 2014). The inactivated gene Y880_RS05480 encodes a hypothetical protein sharing 22.1% identity with the Wzy O-antigen polymerase (AIG62435) of E. coli. With the TMHMM prediction service, this hypothetical protein displayed a large periplasmic loop at its C-terminus and 11 transmembrane helices (Figure 4A), similar to the topology of the Wzy proteins found in the P. aeruginosa serotype O5 strain PAO1 and the other serotype strains (Islam and Lam, 2014). The LPS profiles were analyzed in strain PAK, the Y880_RS05480 mutant SK75, and the mutant carrying the intact gene Y880_RS05480 (SK75/pLY1201). The Y880_RS05480 mutant SK75 was devoid of the O-antigen with high molecular weight, whereas the wild-type strain PAK and the mutant carrying the intact Y880_RS05480 gene produced the normal pattern of O-antigen LPS, including O-antigen and core oligosaccharide (Figure 4B). The results indicate that the product of gene Y880_RS05480 may have a similar function in the serotype O6 strain PAK as the Wzy O-antigen polymerases in their corresponding strains.
Biofilm Production Assay
Strains with different LPS phenotypes produced different amounts of biofilm under various conditions (Murphy et al., 2014; Ruhal et al., 2015). The phage-resistant mutants were analyzed for biofilm production after 48 h incubation. The mutants yielded 1.5-11.5 times more biofilm than the wild-type strain PAK, and the ssg mutant produced the highest level of biofilm (Figure 5; Veeranagouda et al., 2011). When the mutants were complemented with their corresponding genes, the resulting strains produced significantly less biofilm (Figure 5). The results indirectly indicated that the phage-resistant mutants had altered LPS profiles.
Identification of the K8 Genome Termini
Two 102-bp HFSs were found with sequencing depths 62.8 times over the average level in the assembled phage K8 genome, possibly representing the termini of the phage genome (Li et al., 2014). Based on this prediction, the restriction mapping of the enzymes NotI and NdeI was simulated, respectively (Figures 6A,B). Enzyme NotI digestion produced one specific 7.5-kb fragment containing the 3′ terminus (Figure 6B). Enzyme NdeI digestion produced a 3.5-kb fragment instead of the proposed 2.3-kb fragment, indicating that the 5′ terminus included an unknown DNA fragment of about 1.2 kb that was absent from the draft K8 genome (Figure 6B). The resultant 7.5- and 3.5-kb fragments were further found to include identical 1188-bp sequences, demonstrating that the K8 genome has identical terminal direct repeats. The 1.0-kb PCR product was also analyzed and was part of the 3′ terminus, possibly amplified from the phage genome fragments (Figure 6C).
GC skew analysis showed that an asymmetric nucleotide composition was located near the virtual junction region between the termini of the K8 genome (Figure 7). The asymmetry might correspond to the DNA replication origin and the putative replication initiation site of the phage K8 genome (Necsulea and Lobry, 2007).
The K8 genome has 179 predicted protein-coding genes distinctively arranged in five major clusters (Figure 8). (i) Genes in the cluster I mainly encoded proteins related to nucleotide metabolism, most of them shared great similarities with their homologs except for gene 087 encoding the pyrophosphatase that only shares 43.7% similarity with that of Burkholderia phage AH2 (Figure 8 and Supplementary Table S2). (ii) Cluster II has 10 genes encoding structural proteins and unclassified structural proteins. All proteins shared great similarities of 98.6-100% to their counterparts of phage PaP1 (Figure 8 and Supplementary Table S2) (Lu et al., 2013). (iii) Cluster III genes mainly encoded proteins related to DNA replication, transcription, recombination, and modification processes (Figure 8 and Supplementary Table S2). (iv) Genes in the cluster IV and V encoded proteins with unknown functions, and each cluster was adjacent to the termini of the K8 genome, respectively (Figure 8 and Supplementary Table S2). Thirteen tRNA genes were organized in one minor cluster between cluster I and II (Figure 8 and Supplementary Table S2).
Three endolysin-encoding genes were identified, including the putative cell wall hydrolase (gene 033) belonging to the hydrolase-2 family located within the cluster I region; the endolysin (gene 079) identical to that of phage PaP1 located within the cluster II region; and the putative endolysin (gene 115) located within cluster III sharing 40.8% identity with that of Pseudomonas phage LU11 (Adriaenssens et al., 2012). However, no holin-encoding gene was identified in the phage K8 genome (Figure 8 and Supplementary Table S2).
FIGURE 7 | GC skew and GC plot of the K8 genome. The outer circle represents the value of the GC skew, green for positive and pink for negative. The inner circle represents the value of the GC plot, yellow for G+C content above the average level (49.35%) of the K8 genome and blue for G+C content below the average level of the K8 genome. The sequences in the rectangular box stand for the putative replication origin of the K8 genome.
Comparative Genomic Analysis
Homology of the K8 genome sequence was searched in NCBI. The result showed that the K8 genome has high similarities (>90%) and coverage (>90%) with phages PaP1, JG004, PAK_P2, vB_PaeM_C2-10_Ab1, PAK_P4, and PAK_P1. Comparative genomic analysis was further performed with the software Mauve (Supplementary Figure S1). Though the K8 genome was highly homologous to the reference genomes, genetic differences were found within the phage group. Compared to the K8 genome, PaP1 has six genes absent in its genome (Lu et al., 2013), JG004 has 10 genes absent in its genome (Garbe et al., 2011), PAK_P2 has 12 genes absent in its genome (Henry et al., 2015), and vB_PaeM_C2-10_Ab1 has 10 genes absent in its genome (Essoh et al., 2013). All absent genes were located within the gene clusters IV and V with unknown functions, except for gene 093, which was positioned in the middle of the K8 genome (Supplementary Table S2). The tail fiber proteins can act as the ligands to recognize the phage receptors during the infection process. The phylogenetic relationship was investigated among the 18 most homologous tail fiber proteins of P. aeruginosa phages including K8. The proteins were grouped into four clades on the basis of homology. The analysis showed that the phylogenetic distance of the tail fiber proteins was not correlated with the geographic locations where the phages were isolated (Figure 9).
DISCUSSION
Pseudomonas aeruginosa phages that have been identified so far are comprised of at least 24 genera classified into the Podoviridae, Myoviridae, and Siphoviridae families (Sepulveda-Robles et al., 2012). Phage K8 exhibits an icosahedral head structure with a contractile tail and is classified into the Myoviridae family. Its genome is highly homologous to that of phage PaP1 and their major capsid proteins are identical, suggesting that phage K8 is a new member of the PaP1-like phages (Lu et al., 2013) or PAK_P1-like phages genus (Henry et al., 2015). To date, the genus includes 18 phages besides phage K8 (Essoh et al., 2015). Though these phages were isolated from France, Germany, Côte d'Ivoire, Chongqing (China), and Tianjin (China), the phage genomes share great similarities. The result is consistent with the findings that P. aeruginosa phages of specific genera are genetically closely related and can be readily isolated from environmental samples globally (Ceyssens and Lavigne, 2010).
FIGURE 8 | Genomic structure of phage K8. Red boxes represent genes on the minus strand. Green boxes represent genes on the plus strand. Roman numerals I-XIII refer to the genes encoding tRNA Gln, tRNA Arg, tRNA Lys, tRNA Leu, tRNA Ile, tRNA Asp, tRNA Cys, tRNA Asn, tRNA Pro, tRNA Gly, tRNA Phe, tRNA Glu, and tRNA Thr, respectively. NPR, nicotinamide phosphoribosyl; PRP, phosphoribosyl pyrophosphate; RDR, ribonucleotide diphosphate reductase.
The terminal structure of the dsDNA phage genomes has at least five major types, including the linear genomes with 5′-protruded cohesive ends (Tan et al., 2007); the linear genomes with 3′-protruded cohesive ends (Zeigler, 2013); the linear genomes with terminal direct repeats (Pajunen et al., 2001); the genomes with circular permutation and terminal redundancy with specific pac recognition sites (Alonso et al., 1997); and the genomes with circular permutation and terminal redundancy without specific pac recognition sites (Miller et al., 2003). Many P. aeruginosa phages have similar direct terminal repeats with lengths ranging from 184 to 1238 bp, including PaP1-like phages, KPP10-like phages, and some Podoviridae phages (Ceyssens et al., 2006; Henry et al., 2015). The direct terminal repeats are highly conserved among the PaP1-like phages genus and may be related to the patterns of viral genome replication in these phages.
Diverse receptors of P. aeruginosa phages have been identified. Phages PA1Ø, MPK7, B3, and D3112 use type IV pili as the receptor for infection (Roncero et al., 1990; Kim et al., 2012; Bae and Cho, 2013). Phages phiCTX and H22 use the core oligosaccharide of LPS as the receptor (Temple et al., 1986; Yokota et al., 1994). Phages FIZ15 and D3 use LPS O-antigen as the receptor (Kuzio and Kropinski, 1983; Vaca-Pacheco et al., 1999). Phage A7 uses CPA as the receptor (Rivera et al., 1992). The phage PIK receptor in LPS contains D-mannose, L-rhamnose, and D-glucosamine and may be the heteropolymer O-antigen OSA (Patel and Rao, 1983). For phage JG004, a series of genes related to the LPS pathway have been identified as involved in the receptor synthesis (Garbe et al., 2011).
Lipopolysaccharide is described as a molecule with three domains, lipid A, core oligosaccharide, and O-antigen. P. aeruginosa PAK simultaneously synthesizes two different forms of O-antigen. CPA is a homopolymer of D-rhamnose (D-Rha). OSA is a heteropolymer composed of repeating units of D-QuiNAc, D-GalNAcA, D-GalNFmA, and L-Rha (Belanger et al., 1999). In this study, all the disrupted genes related to the phage receptor synthesis play a key role in LPS biosynthesis in P. aeruginosa PAK. WbpR is a putative dTDP-L-Rha transferase, adding the fourth residue L-Rha to the repeating unit of OSA in O6 strains (Figure 2; Belanger et al., 1999). Gene wbpV encodes a UDP-galactose-4-epimerase involved in the pathway of UDP-QuiNAc synthesis. UDP-QuiNAc is further added to the repeating unit of OSA as the first residue D-QuiNAc by the glycosyltransferase WbpL (Rocchetta et al., 1999). Gene wbpP is located downstream of gene wbpO within the same operon, encoding the epimerase converting UDP-GlcNAc to UDP-GalNAc. Gene wbpO encodes the dehydrogenase converting UDP-GalNAc to UDP-GalNAcA. UDP-GalNAcA is further added as the third residue of the repeating unit of OSA (Figure 2; Zhao et al., 2000). A cluster of 17 genes of P. aeruginosa has been found to be involved in core oligosaccharide (OS) moiety biosynthesis. Among them, gene ssg encodes a glycosyltransferase and is responsible for the transfer of α-D-Glc III to the OS moiety (Figure 2; Lam et al., 2011; Veeranagouda et al., 2011). Both CPA and OSA are lost in the ssg mutant strain (Fernandez et al., 2013). P. aeruginosa serotype O6 strains are able to synthesize long-chain O-antigen. However, no wzy gene homolog is identified within the wbp gene cluster for O-antigen synthesis in O6 strains. In this work, though the protein encoded by the gene Y880_RS05480 displays low similarity with the known Wzy polymerases, the Y880_RS05480 mutant is not able to produce the O-antigen with high molecular weight, indicating that the Wzy-dependent pathway exists in the O6 strain PAK for LPS synthesis (Islam and Lam, 2014).
CONCLUSION
Five genes, wbpR, wbpV, wbpO, ssg, and wzy, are identified as inactivated in the phage-resistant mutants. Gene Y880_RS05480 is proven for the first time to function as a Wzy O-antigen polymerase. In combination, the results indicate that OSA should be the receptor of phage K8.
AUTHOR CONTRIBUTIONS
XP performed the bioinformatic analysis and experiments and wrote the manuscript. XC, FZ, and LL carried out the plasmid constructions. YH performed the bioinformatic analysis. HY designed the experiments and wrote the manuscript.
ACKNOWLEDGMENT
This work is supported by The National Natural Science Foundation of China (Grant No. 31370205 and 30970114).
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fmicb.2016.00252
FIGURE S1 | Comparative genomic analyses of Pseudomonas aeruginosa phages. Ab1: vB_PaeM_C2-10_Ab1. The coordinate rulers display the size of the corresponding genomes. The height of the red and green ribbons is correlated with the level of similarities between every two genomes. Different colors indicate the inconsistent genomic organizations among the phages.
"year": 2016,
"sha1": "ffd8f059b9bf66c6ac2b3e14fda6e5e3752777d8",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.00252/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffd8f059b9bf66c6ac2b3e14fda6e5e3752777d8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Assessment of Phasic Changes of Vascular Size by Automated Edge Tracking-State of the Art and Clinical Perspectives
Assessment of vascular size and of its phasic changes by ultrasound is important for the management of many clinical conditions. For example, a dilated and stiff inferior vena cava reflects increased intravascular volume and identifies patients with heart failure at greater risk of an early death. However, lack of standardization and sub-optimal intra- and inter-operator reproducibility limit the use of these techniques. To overcome these limitations, we developed two image-processing algorithms that quantify phasic vascular deformation by tracking wall movements, either in long or in short axis. Prospective studies will verify the clinical applicability and utility of these methods in different settings, vessels and medical conditions.
INTRODUCTION
Currently, the invasive measurement of central venous pressure (CVP) to estimate right atrial pressure (RAP) is routinely used only in critically ill patients to assess cardiac hemodynamics and volume status. Medical devices that provide this information non-invasively are under development, but not routinely used yet in clinical practice (1). A widely adopted non-invasive approach to estimate RAP is based on ultrasound (US) imaging of the inferior vena cava (IVC) diameter and of its respiratory changes (2,3). These changes can be expressed in terms of the caval index (CI), defined as the variation of the vessel diameter during a respiration cycle, relative to the maximum diameter (4). This approach is not standardized (5) [for instance, it is performed in either long (6) or short axis (7) views], is operator-dependent (8) [with an important effect of experience (9)] and prone to measurement errors (10) [e.g., due to movements (8,11) or to irregular shape of the IVC (12)]. Therefore, a single measurement might only provide limited and misleading information. The following sources of variability of the standard US approach have been investigated (8): different respiration cycles (coefficient of variation, CoV = 15%), specific longitudinal section (CoV = 40%), and inter- and intra-operator variability (CoV 35 and 28%, respectively). Furthermore, recent studies indicated that IVC collapsibility assessed with current methods is not a reliable predictor of fluid responsiveness (13) and its correlation with RAP is only modest (14,15).
Alternative approaches to assess, non-invasively, the CVP or volemic status have been proposed. For instance, there is a good correlation between CVP and the pressure measured in superficial veins at the forearm with a US probe equipped with a pressure transducer (16). Moreover, measuring with US the ratio of the internal jugular vein size during a Valsalva maneuver to that at rest identifies patients with heart failure with more severe intravascular congestion at greater risk of poorer outcomes (17,18). However, these techniques are not routinely used in clinical practice, as they require a more robust validation.
The assessment of arterial wall properties is also useful to characterize the health of the cardiovascular system and can improve prediction of cardiovascular events beyond conventional risk factors (19). Measurement of aortic pulse wave velocity (PWV) is considered the current gold standard to assess arterial stiffness. It has an acceptable degree of accuracy and reliability (10) and has been demonstrated to predict cardiovascular events for patients with different cardiovascular risk factors or diseases (20), but requires specialized equipment (arterial tonometry, piezoelectric sensors, photoplethysmography) and additional time and resources. Moreover, current assessment of PWV may exclude the proximal segment of the aorta and might not identify pathological segments along the arterial tree, the deformation of which could, potentially, be assessed by US if a reliable method existed (21,22).
Recently, we have started to address some of the above mentioned issues and developed two semi-automated methods to delineate and track displacements of the IVC borders in long (4,8,23,24) or short axis views (12). Our approaches could reduce the inter/intra-operator variability (8), assist clinicians or sonographers with limited training and experience in the interpretation of findings (25), and perhaps facilitate the diffusion of point-of-care US to guide clinical decisions. Our preliminary results suggest that the integration of indexes extracted by both algorithms could provide a more reliable estimation of the volemic status than the standard IVC assessment (26) and call for extensions of this research to larger databases and other vessels, like the arteries, the evaluation of which might improve cardiovascular risk stratification (27).
In this manuscript, we discuss these methods along with some possible future applications.
PHYSIOLOGICAL BASIS OF VESSEL PULSATILITY
Rapid changes in vascular size (irrespective of whether we deal with arteries or veins) are primarily produced by changes in transmural pressure P_tm, defined as P_tm = P_in − P_out, where P_in is the blood pressure inside and P_out the pressure outside the vessel wall (28,29). The relation between vascular size and P_tm is generally represented by a volume-pressure (or capacitance) curve as qualitatively shown in Figure 1, whereby the vascular size, expressed in terms of vessel volume V, is shown to increase with increasing transmural pressure. This sketchy representation can be assumed to be a hypothetical experimental characterization of the vessel of interest. Ranges of volume and pressure are not indicated, as they vary widely along the cardiovascular system (30). Specifically, arteries are more rigid and exposed to larger pressure variations than veins. Pressure variations in the arteries are mainly determined by the pulsatile nature of the cardiac pump and they are also affected by peripheral resistances, whilst P_tm changes in the veins are largely influenced by a variation of the external pressure. However, this scheme is only a simplistic representation, as intrinsic vessel characteristics, circulating blood volume, medications and many other additional factors might influence intravascular pressure (31); moreover, the volume-pressure curve of a certain vessel or vascular district can be modulated by spontaneous or drug-mediated variations in vascular tone.
FIGURE 1 | Volume-pressure curve of a venous blood vessel relating the vessel size (volume V) to the transmural pressure (P_tm). Note that an oscillatory perturbation in blood pressure results in large volume changes if P_tm is low (A; high vessel compliance) and low pulsatility if P_tm is high (B; low vessel compliance).
Coming back to the simplified representation shown in Figure 1, it can however be observed that the curve has a tendency to flatten at high P_tm. This indicates a decreasing vessel compliance, defined as C = ΔV/ΔP_tm and representing the slope of the volume-pressure curve. A given perturbation of P_tm, such as a blood pressure change of cardiac or respiratory origin, would result in a corresponding vessel volume change, according to vessel compliance. As shown in Figure 1, the same change in P_tm will produce different changes in vessel size, depending on the resting (average) value of P_tm and V: if average P_tm is low (for instance, in the case of the IVC, when CVP or blood volume are normal) size changes will be large, while at a higher P_tm value the vessel size will be larger and its phasic changes (that we might call "pulsatility", for simplicity) will be smaller (for instance, when CVP is high or if there is hypervolemia). The same considerations might apply, in theory, to the entire vascular system. Notably, P_tm is not just dependent on the inner blood pressure, but also on the outside pressure. The effect of changes in the outside pressure is particularly relevant in veins, given their low blood pressure.
It is worth reconsidering the changes in transmural pressure that take place in the (abdominal) IVC during respiration. As compared to the reference end-expiratory condition (Figure 2A), in which the IVC exhibits its maximum size, during a thoracic inspiration the IVC undergoes a slight reduction in size, due to the decrease in intrathoracic pressure which drains blood from the IVC, thus lowering blood pressure and its P_tm (Figure 2B). In addition, during inspiration, the diaphragm descends and abdominal pressure increases, with a further decrease in P_tm and IVC size (Figure 2C). The different implications of the respiratory pattern on IVC size have been evidenced experimentally during both spontaneous respiration (32) and controlled isovolumetric respiratory efforts (33). Opposite effects, i.e., increased IVC size during inspiration, are observed during positive pressure ventilation, whereby the increase in intrathoracic pressure hinders venous return, thus increasing abdominal venous blood pressure and IVC size (Figure 2D).
Finally, the analysis of phasic changes of the vascular size should take into consideration an additional confounding factor: the extravascular compliance. Since blood vessels are embedded within other organs and tissues, their ability to expand upon variations in blood pressure depends on the capacity of the extravascular tissues to accommodate such changes. In other words, when measuring vessel volume changes in response to given variations in blood pressure, we are actually assessing the total compliance (C_tot), which accounts for the vascular (C_v) and extravascular (C_ev) compliances according to the formula C_tot = (C_v C_ev)/(C_v + C_ev), with C_tot resulting smaller than C_v. In summary, 1) the vessel size depends on P_tm = P_in − P_out, according to a non-linear volume-pressure curve; 2) the vessel compliance (C = ΔV/ΔP_tm) generally decreases at increasing P_tm; 3) vessel phasic changes depend on vascular compliance; 4) low extravascular compliance may lead to underestimation of the actual vessel compliance.
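A minimal derivation of the quoted relation, assuming that vessel and surrounding tissue act as two compliances in series (i.e., the expanding vessel must displace extravascular tissue, whose pressure rises accordingly), is the following:

```latex
% Vessel: \Delta V = C_v \, \Delta P_{tm}, with \Delta P_{tm} = \Delta P_{in} - \Delta P_{out}.
% Tissue: the same volume change raises the outside pressure, \Delta P_{out} = \Delta V / C_{ev}.
\Delta V = C_v \left( \Delta P_{in} - \frac{\Delta V}{C_{ev}} \right)
\quad\Longrightarrow\quad
C_{tot} \equiv \frac{\Delta V}{\Delta P_{in}}
       = \frac{C_v \, C_{ev}}{C_v + C_{ev}} \;<\; C_v .
```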
ECHOGRAPHY PROCESSING
Vessels such as the IVC might have a complex geometry. In particular, as shown in Figure 3, the section of the IVC is not constant along the longitudinal axis and its cross-section is far from being a perfect circle, with large variations across subjects and clinical conditions. Moreover, the IVC is a very compliant vessel whose movements are also affected by surrounding structures to which it can be anchored. Therefore, measuring its size on a single plane might be largely inaccurate (as shown in Figure 4). US scans of the IVC in B-mode provide limited information on its phasic changes, which occur in a three-dimensional space. With our two algorithms we can estimate the IVC edges either in long or short axis views (12,23).
In long axis, IVC movements are estimated by tracking two reference points selected by the user. Then, the edges are estimated in an entire longitudinal portion of the vessel. The diameters of different sections are then computed in directions orthogonal to the midline of the IVC, thus compensating for possible translations and rotations in the visualized plane (23).
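As a concrete illustration of the long-axis measurement, the Python sketch below assumes that the two detected edges are available as arrays of vertical positions sampled along the vessel (one value per image column) and corrects the naive vertical distance for the local tilt of the midline. The function name and the toy data are hypothetical; this is a simplified stand-in, not a re-implementation of the published algorithm.

```python
import numpy as np

def long_axis_diameters(y_upper, y_lower, dx=1.0):
    """Diameters measured orthogonally to the vessel midline.

    y_upper, y_lower: vertical edge positions (pixels) sampled at columns dx apart.
    For a locally straight vessel tilted by angle theta, the distance measured
    along the image column overestimates the true diameter by 1/cos(theta).
    """
    y_upper, y_lower = np.asarray(y_upper, float), np.asarray(y_lower, float)
    midline = 0.5 * (y_upper + y_lower)
    slope = np.gradient(midline, dx)           # local tilt of the midline
    theta = np.arctan(slope)
    return (y_lower - y_upper) * np.cos(theta)

# Toy example: a straight vessel of true diameter 20 px, tilted by about 11 degrees.
x = np.arange(100.0)
print(long_axis_diameters(50 + 0.2 * x, 50 + 0.2 * x + 20 / np.cos(np.arctan(0.2))).mean())
```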
In the case of the short axis view, the contour of the cross-section of the IVC is estimated, finding the edges along different directions starting from its center, identified in the previous frame as the centroid of the vessel border (12). As the edges of the IVC are estimated in each frame of the US video, a temporal series is acquired, from which different measures of IVC size can be obtained (e.g., different diameters, their average, or the cross-sectional area). In particular, two main contributions are clearly visible in the IVC dynamics described by those time series: a slow oscillation of IVC size induced by respiration (due to variations of intrathoracic and abdominal pressure) and another at higher frequency induced by the heartbeats (reflecting retrograde flow induced by right atrial contraction). These two components can be measured separately (respiratory caval index, RCI, and cardiac caval index, CCI) and potentially provide complementary information (4,8,12,23,24,26,34).
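One straightforward way to separate the two contributions is ordinary frequency filtering of the diameter (or area) time series; the Python sketch below does this with standard Butterworth filters. The cut-off frequencies, filter order and synthetic signal are illustrative assumptions, not the values used in the cited algorithms.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def caval_indices(diam, fs, resp_cut=0.5, card_band=(0.8, 3.0)):
    """Respiratory (RCI) and cardiac (CCI) caval indices from a diameter series.

    Each index is (max - min) / max of the corresponding filtered component,
    re-centred on the mean diameter.
    """
    diam = np.asarray(diam, float)
    mean = diam.mean()
    b, a = butter(2, resp_cut, btype="lowpass", fs=fs)
    resp = filtfilt(b, a, diam - mean) + mean
    b, a = butter(2, card_band, btype="bandpass", fs=fs)
    card = filtfilt(b, a, diam - mean) + mean
    rci = (resp.max() - resp.min()) / resp.max()
    cci = (card.max() - card.min()) / card.max()
    return rci, cci

# Synthetic 20 mm IVC with respiratory (0.25 Hz) and cardiac (1.2 Hz) oscillations.
t = np.arange(0, 20, 1 / 30)                      # 30 frames/s for 20 s
d = 20 + 3 * np.sin(2 * np.pi * 0.25 * t) + 0.8 * np.sin(2 * np.pi * 1.2 * t)
print(caval_indices(d, fs=30))
```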
Alternative methods to assess phasic changes of vascular size have also been applied or developed by other colleagues (35)(36)(37). Specifically, a method widely used in echocardiography is based on tracking the speckle noise (38), i.e., a random mixture of interference patterns and US reflections characterizing each region of the tissue (like a fingerprint) that is relatively stable on consecutive frames, allowing the region to be traced from one frame to the next. Speckle tracking has been applied to study the IVC deformation in long axis view (36) or to assess aortic or carotid stiffness (39,40). Different processing techniques have also been used to segment blood vessels: for example, Otsu's thresholding (41) was combined with active contour on multiple short axis views to estimate the carotid in 3D (42); a semi-automated modified watershed method was applied to estimate the cross-section of the IVC (43); snake and template matching (the latter approach similar to speckle tracking) were tested for IVC edge detection in short axis (44); a novel energy functional for polar active contour was applied to the segmentation of the IVC in short axis (45). More recently, deep learning approaches have been applied to the segmentation of the IVC in short axis, obtaining moderately good performance in predicting fluid responsiveness (46). Edge-tracking methodology might exhibit a good compromise between computational cost and accuracy (12). We are currently working on the real-time implementation and rendering over the US scan of the estimated vascular edges: this can provide the operators with visual feedback and guidance to obtain and acquire good quality images; moreover, the simultaneous measurement of quantitative indexes (e.g., mean diameter and pulsatility indexes) might be a valuable add-on for research and routine clinical practice.
CURRENT AND FUTURE APPLICATIONS
Our methods have been applied in two pilot studies, to estimate the RAP (24, 34) and volume status (26). Representative examples are shown in Figures 5, 6.
Specifically, Figure 5 shows frames corresponding to maximal and minimal IVC size for two patients, with invasively measured high (A) and low (B) RAP, respectively. The average size of IVC is larger in the patient with higher RAP (20 mmHg) and the IVC phasic changes are greater for the patient with lower RAP (4 mmHg): in the latter case, small variations of transmural pressure induce large changes of size in the vessel. Figure 6 shows the long and short axis of the IVC in a hypo- (Figure 6A) and hyper- (Figure 6B) volemic patient, respectively. Compared to patients with hypervolemia, in those with hypovolemia the IVC size is smaller and the pulsatility is larger; moreover, the IVC in the cross-section view has a flattened shape, whereas it is mostly circular in conditions of fluid overload.
In addition, the features offered by the automated algorithms (i.e., size and indexes of pulsatility of the blood vessel of interest) open the way to new applications. We here briefly overview those that are of particular interest to the authors, i.e., specific studies or applications that have recently been attempted and are close to completion, or that belong to the authors' different working fields in basic and clinical sciences and that will be addressed in the near future. These existing and potential applications of the methodology are presented in Table 1, but many other applications may possibly be envisaged.
Edge tracking algorithms can be applied to peripheral veins to investigate, by ultrasound, the mechanical response to changes in transmural pressure, e.g., by venous occlusion, for the assessment of venous compliance (47,48) and characterize the filling condition and the expanding capacity of the peripheral reservoir, a major pathway for venous return. Assessment of venous compliance could also be used to validate another recently proposed index of peripheral vascular filling, the venous pulse wave velocity (49,50). Combining these methods with the IVC assessment might increase the understanding of the underlying mechanisms of fluid distribution and displacements across different body regions and compartments in various clinical contexts (28,29).
In this respect, an interesting model of acute fluid redistribution is offered by the MuVIT technique (51), a procedure used to transiently lower aortic blood flow and pressure during thoracic or abdominal vascular interventions (e.g., stent graft placements) consisting in transiently increasing alveolar pressure up to about 30 mmHg. This maneuver provokes a substantial blood volume displacement from the pulmonary and arterial compartments to the systemic venous compartment resulting in a transient venous congestion. The possibility of simultaneous tracking size changes of abdominal and peripheral veins may help to characterize the dynamic behavior of the full venous compartment.
Fluid overload, or congestion, is a key clinical feature in acute heart failure, but its management with diuretics is still very subjective (52). Controversial results have been documented in the literature about the potential utility of IVC diameter and distensibility to monitor the response to diuretics in patients with acute heart failure (53,54). Quantifying with precision phasic IVC changes might potentially detect even small variations in intravascular fluids and possibly guide clinicians toward a more objective use of diuretic therapy.
Similar considerations apply to renal failure patients undergoing dialysis, whereby automated continuous and unsupervised IVC monitoring may help to tailor the dialytic process according to the current volume status of the patient.
The possibility to detect a cardiac pulsatility of IVC in addition to the major oscillatory component of respiratory origin is still largely unexplored and many questions have to be addressed. May the cardiac pulsatility provide a more reliable index of IVC collapsibility than the classical caval index? Is cardiac pulsatility carrying additional or different information than respiratory phasic variations? Long term monitoring and correlation analysis of these oscillatory components as well as extending the investigation to specific patient groups is necessary to address these questions.
Finally, a relevant application concerns the assessment of arterial stiffness. Aortic stiffness is associated with incident cardiovascular events. In clinical practice, aortic stiffness is usually investigated indirectly in terms of the PWV. However, the PWV is usually measured as the carotid-femoral PWV, a global index, which does not reflect local stiffness variations. In theory, aortic stiffness could be estimated directly, by measuring with US the aortic pulsatility under a known pressure variation (systo-diastolic) (55), in different aortic segments (56). Whether segmental aortic stiffness measured by US might discriminate different patients' conditions and risk is worth exploring.
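For this application, local stiffness indices could in principle be derived from the systo-diastolic change in cross-sectional area together with the pulse pressure; the Python snippet below uses the standard distensibility and Bramwell-Hill relations purely as an illustration, with hypothetical input numbers, and should not be read as a validated clinical method.

```python
import math

def local_stiffness(area_dia_mm2, area_sys_mm2, pp_mmHg, rho=1060.0):
    """Distensibility coefficient (1/kPa) and Bramwell-Hill PWV (m/s) for one segment."""
    pp_pa = pp_mmHg * 133.322                      # mmHg -> Pa
    dist = (area_sys_mm2 - area_dia_mm2) / (area_dia_mm2 * pp_pa)   # 1/Pa
    pwv = math.sqrt(1.0 / (rho * dist))            # Bramwell-Hill: PWV = sqrt(1/(rho*D))
    return dist * 1000.0, pwv                      # report distensibility per kPa

# Hypothetical aortic segment: 450 -> 520 mm^2 over a 40 mmHg pulse pressure.
print(local_stiffness(area_dia_mm2=450.0, area_sys_mm2=520.0, pp_mmHg=40.0))
```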
CONCLUSIONS
Rapid advances in US image processing have now made it possible to obtain more objective information on the size of arteries and veins, but also to quantify their phasic changes with more precision, which can transform the management of several conditions and potentially improve outcomes.
"year": 2021,
"sha1": "6eaa5be73fab4afdeb070c624e5e7141d52051a6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2021.775635/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "6eaa5be73fab4afdeb070c624e5e7141d52051a6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Faint trace of a particle in a noisy Vaidman three-path interferometer
Jerzy Dajka
We study weak traces of a particle passing Vaidman's nested Mach-Zehnder interferometer. We investigate the effect of decoherence caused by an environment coupled to an internal degree of freedom (a spin) of the travelling particle. We consider two models: pure decoherence, leading to exact results, and the weak coupling Davies approximation, allowing dissipative effects to be included. We show that the potentially anomalous discontinuity of particle paths survives the effect of decoherence unless it affects the internal part of the nested interferometer.
A quantum particle can be prepared (preselected) in a given and desired state and may also be postselected in another state with a probability that is known, at least in principle. What occurs in between, that is, the particle's past, remains problematic due to specific features of quantum measurements, which unavoidably modify quantum states of measured objects. This is clearly visible for a quantum particle passing through an interferometer: the particle enters the device and leaves it (if its outcome is measured); however, what occurs inside the interferometer is, as will be shown below, disputable and even controversial. One of the approaches 1,2 is that a particle, staying coherent, leaves nothing but a faint trace which, if comparable to the order of a trace potentially left by a localized wave packet, can serve as a hallmark of its presence. This faint trace is a small change of an amplitude of a component orthogonal to the undisturbed particle's state, (weakly) measurable only in experiments with an ensemble of particles having the same pre- and postselection.
The past of a quantum particle in a nested Mach-Zehnder interferometer, proposed in Ref. 1, presented in Fig. 1 and abbreviated here as a Vaidman interferometer, was recently studied in Ref. 1 using quantum weak values 3,4 and the two state vector formalism (TSVF) 5. The results are confounding: particles seem to follow an anomalous discontinuous path. Such a seemingly weird conclusion leads to a plethora of controversies 6,7 and since that time (almost) all works on that problem have come in triads: a paper, a commentary inspired by the paper, and a reply to the comment 6,7. The main reason is that the TSVF 5 applied in Ref. 1 is one of few possible approaches to studies of the quantum past. The other non-equivalent alternatives are consistent (decoherent) histories 8,9, standard quantum mechanics 10-12 and many other alternative studies [13][14][15][16][17][18][19][20]. Moreover, even recent experiments and their detailed analyses fail to resolve all the controversial issues [20][21][22][23][24][25][26][27][28]. Despite the counter-intuitive character of the conception of a discontinuous path, there are analyses 2,29 and claims which support the faint-trace anomalous picture as experimentally confirmed.
Our present aim is to follow Refs. 1,2 and to supplement the analysis of the weak trace (based upon weak values) of the particle by including an effect of decoherence affecting the internal degree of freedom of the particle passing the Vaidman interferometer. Our results allow us to identify natural obstructions to an effective verification of theoretical predictions and to exclude factors seemingly, but not truly, responsible for experimental failures and limitations. Studying internal degrees of freedom, e.g. spin or polarization, of particles in the Vaidman interferometer becomes particularly relevant for most recent experiments and models utilizing neutrons 24,26. There are various phenomenological approaches 30 dedicated to particular quantum systems whose usefulness and validity were confirmed by many repeatable experiments. However, for fragile quantum systems microscopic models are at least a good starting point to make predictions which are credible under tailored and well identified conditions. In this paper we consider two well established microscopic models: (i) an exact model of pure decoherence 31,32 of a solely quantum character and (ii) a weak coupling Davies approximation 33 allowing dissipation to be included. The pure decoherence (or pure dephasing) model is limited by a choice of system-environment interaction encoded in a Hamiltonian by an integral of motion: in the Caldeira-Leggett 30 "system + bath + interaction" Hamiltonian the interaction is given by an operator commuting with the system part. The Davies approach 33 allows for an arbitrary system-environment coupling provided that it is small enough for a perturbative treatment to hold true. We show that the anomalous discontinuity 1,2 of the faint traces (given by weak values of suitable projection operators) left by particles is rigid with respect to decoherence affecting the external arm of the Vaidman interferometer, while it is very fragile if a source of decoherence disturbs the balance of the internal, nested arms of the Vaidman interferometer.
Since an experiment is the only way to resolve the (interpretative) ambiguity concerning the (un)presence of particles in the Vaidman interferometer, the question of whether the controversies survive also in the presence of decoherence seems crucial.
For the sake of completeness we re-introduce the Vaidman interferometer and review the controversial features of the faint traces of particles passing through it. The original Vaidman interferometer presented in Fig. 1 consists of a spatial degree of freedom given by three paths denoted by I, II, III and four beam splitters. The Vaidman interferometer can effectively be described 10-12 as a three-level quantum system with a state space spanned by the basis |I⟩, |II⟩, |III⟩, Eq. (1). In the ideal setting of a noise-less system, the passage of a particle is described by a unitary transformation composed of four unitaries, U_4 U_3 U_2 U_1, Eq. (2), corresponding to the subsequent beam splitters termed BS_i, i = 1, . . . , 4 in Fig. 1. The strategy applied in Ref. 1 to infer the path of a particle entering and leaving the Vaidman interferometer in the state |III⟩ was to investigate the weak trace of the particle inside the Vaidman interferometer at three instants A, B, C indicated in Fig. 1. According to Ref. 1 the weak trace is indicated by a non-vanishing weak value 3,34 of one of the path projectors Π_i, Eq. (3), ⟨Π_i⟩_w = ⟨ψ_post|Π_i|ψ_pre⟩ / ⟨ψ_post|ψ_pre⟩, where the preselected (directly prior to the measurement of Π_i) and postselected (immediately after the measurement) states compose a two-state vector ⟨ψ_post| |ψ_pre⟩, the fundamental object of the TSVF 5 .
Let us emphasize that the physical meaning of vanishing weak values in the current context remains disputable [34][35][36][37] . Most of the controversies originate, however, from the highly counter-intuitive conclusions of Ref. 1, indicating the possibility of discontinuous trajectories followed by a particle passing through the Vaidman interferometer. Let us review: there are three instants A, B, C at which the weak trace is measured: A, just after the particle is injected into the Vaidman interferometer in the state |III⟩ and passes BS_1; B, where the weak measurement is conducted for all potential paths in the Vaidman interferometer; and C, after the beam splitter BS_3, as presented in Fig. 1. The corresponding preselected states are obtained by evolving the input state |III⟩ forward through the preceding beam splitters, while the postselected states, e.g. |ψ^C_post⟩ = U_4^† |III⟩, describe a hypothetical particle detected at III and evolving backward in time. The results of the weak measurement are summarized in Table 1. According to Ref. 1 the presence of a particle is defined by its non-vanishing weak trace. The counter-intuitive conclusion of Table 1 is the following: at A and C the particle is present in III, which, upon inspecting Fig. 1, is intuitively acceptable, but at B it is also present in the internal loop (I, II) of the Vaidman interferometer. Such a confounding result needs experimental verification. One can safely assume that any potential experiment, as has been the case so far, will be highly subtle and sophisticated. Moreover, such an experiment relies on quantum properties and may be fragile with respect to decoherence. Our objective is to investigate whether, in the presence of omnipresent noise, one can still support the claims inferred from Table 1, or whether they are nothing but an artifact absent in real noisy systems.
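For readers who wish to reproduce the flavour of Table 1, the following minimal numpy sketch evaluates the weak values of the three path projectors at the intermediate instant B. The specific pre- and postselected amplitudes (50:50 beam splitters, inner loop tuned to destructive interference toward the exit) are illustrative assumptions and are not taken from the paper's Eq. (2).

```python
import numpy as np

# Path basis at the intermediate instant B: |I>, |II> (inner loop) and |III> (external arm).
e = np.eye(3, dtype=complex)
projectors = {name: np.outer(e[k], e[k]) for k, name in enumerate(["I", "II", "III"])}

# Illustrative two-state vector (assumption: 50:50 beam splitters, balanced inner loop):
# forward-evolved preselected state and backward-evolved postselected state at instant B.
psi_pre  = 0.5 * e[0] + 0.5 * e[1] + e[2] / np.sqrt(2)
psi_post = 0.5 * e[0] - 0.5 * e[1] + e[2] / np.sqrt(2)

def weak_value(op, pre, post):
    """Weak value <post|op|pre> / <post|pre> for a given two-state vector."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

for name, P in projectors.items():
    print(name, np.round(weak_value(P, psi_pre, psi_post), 3))
# All three weak values are non-zero (here 0.5, -0.5 and 1.0): the weak trace appears in
# the inner loop as well, which is the counter-intuitive content of Table 1.
```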
Pure decoherence
There are circumstances when internal degrees of freedom of particles need to be taken into account and affect the interference 38 . The output pattern of such an interference is further significantly modified by an external bath coupling to an internal degree of freedom 39 . In the following, for simplicity, we assume that the internal degree of freedom of a particle in the Vaidman interferometer requires a two-dimensional space H_int spanned by two basis states. Physically, such a qubit model can correspond to the interference of spin-half particles or any qubits. If, moreover, one assumes that the internal degree of freedom is coupled to (affected by) an environment, one arrives at a composite quantum system with state space H = H_path ⊗ H_int ⊗ H_env, where H_int = C^2 and H_path = C^3 with the basis |I⟩, |II⟩, |III⟩ of Eq. (1). As the composite system consisting of the particle and its environment is, considered in toto, closed, its evolving states undergo unitary transformations. The unitaries corresponding to the beam splitters, Eq. (2), and the projectors, Eq. (3), required for the weak measurement now become tensorized with identity operators I acting on the remaining parts of the composite space H, where U_i and Π_i are given in Eqs. (2) and (3) respectively.
Let us note that the effect of a generic interaction between the particle and its environment results in a modification of the U_i which is essentially interaction-type-dependent. Pure decoherence (dephasing) 30,31 is probably the simplest class of interaction between an open quantum system and its environment. It is characterized by a high symmetry preventing energy exchange with the surroundings 31 . Despite this obvious simplification, pure decoherence can effectively be applied to a broad class of problems 40-47 , ranging from theoretical quantum information to experiments in optics. Simplifying our model even further, we assume local decoherence, i.e. the particle remains unaffected by the dephasing environment unless it follows a particular 'noisy' path between two particular beam splitters. The term 'local' is used to distinguish this setting from circumstances when the whole Vaidman interferometer, not just a part of one of its arms, is embedded in an either thermal or non-thermal bath. The Hamiltonian describing the interaction between the particle (a qubit with a Hamiltonian given by the Pauli matrix σ_z) and its environment is then given by the standard Caldeira-Leggett form 30 . The time evolution of the total (closed) system is unitary, Eq. (9), where t denotes the duration of the particle-bath interaction, which is assumed to be shorter than the passage between any pair of beam splitters. The block-diagonal structure of the first term in Eq. (9), with unitary blocks U_± given by Eq. (10), is a hallmark of the assumed pure decoherence model. Here E denotes the energy separating the qubit levels, the environment is simplified to a one-dimensional bosonic field with bosonic operators a(ω), a†(ω), and h(ω) and g(ω) are real-valued functions. To clarify the notation of Eq. (9), let us give an example: the unitary transformation U_4 U_3 U_2 U_II U_1 : H → H describes the Vaidman interferometer with a purely dephasing environment coupled to path II locally between the beam splitters BS_1 and BS_2.
Now we are ready to analyse the impact of decoherence on particles travelling through the Vaidman interferometer. We recognize two classes of effects: the first, quantitative, when the anomalous effect of Ref. 1 survives, and the second, when pure decoherence spoils the unusual features of the noise-less system. The first case is presented in Table 2. We consider a quantum particle entering the Vaidman interferometer in a product state whose first, spatial, component denotes the path of the particle, whereas the second is the state of the internal degree of freedom (qubit). The particle leaving the Vaidman interferometer is assumed to be postselected in its spatial (external) degree of freedom but with no information regarding its internal (here maximally mixed) state. Moreover, we assume that the environment is both initially and finally in its ground state (vacuum) and affects only those particles which, according to the TSVF, travel forward in time. Working essentially with mixed states requires a generalisation of the definition of the weak value of an operator 34 , used in Tables 2 and 3.
Table 1. Weak traces ⟨Π_{I,II,III}⟩_w, Eq. (3), of a particle in the noise-less Vaidman interferometer at the instants A, B, C indicated in Fig. 1.
Table 2. Weak traces, Eq. (7), of a particle in the noisy Vaidman interferometer at the instants A, B, C indicated in Fig. 1.
Table 3. Weak traces ⟨Π̃_{I,II,III}⟩^{B,C}_w, Eq. (7), of a particle in the noisy Vaidman interferometer at the instants B, C indicated in Fig. 1, with the corresponding pre- and postselections.
The results of the weak measurement of a particle disturbed by decoherence coupled either to II or to III are summarized in Table 2. Let us notice that the decoherence affects the particle travelling forward in time after it passes the beam splitter BS_1. The entries of Tables 2 and 3 are qualified by the decoherence factor entering Eq. (17), which for pure decoherence has an exact solution [30][31][32] , since, expressed in terms of displacement operators D 48 , the time evolution of the vacuum state in H_env 39 can be computed in closed form. For the typical choice h(ω) = ω and g²(ω) = λ ω^{1+µ} exp(−ω/ω_c) 30 one can calculate the result explicitly, Eq. (17), where the parameter λ denotes the strength of the particle-environment coupling and 0 ≤ µ classifies the environment 30 as Ohmic (µ = 0) or super-Ohmic (µ > 0). The sub-Ohmic regime µ < 0 suffers from known 31 mathematical difficulties and is not considered. Let us notice that for t = 0 the integrand in Eq. (17) vanishes, q = 2, and the results of Ref. 1 are reproduced. The same holds for λ → 0, corresponding to a particle uncoupled from its environment. We also conclude that even in the long-time limit t → ∞ the quantity q > 0, i.e. it remains finite. Therefore we infer that the discontinuous faint trace changes only quantitatively. However, there is a qualitative change if decoherence is present in the internal interferometer of the Vaidman interferometer, as presented in Table 3, i.e. if the environment is coupled either to I or to II after the beam splitter BS_2. In both cases, still assuming that particles travelling forward in time are affected by decoherence, we observe that the faint trace of the particle contributes to II after BS_3. This observation supports the claim of Ref. 10 connecting the faint-trace discontinuity with the perfect balance of the internal interferometer leading to destructive interference at its output. If decoherence affects the internal loop of the Vaidman interferometer, it removes the anomaly of the faint trace. To summarize, the faint trace, otherwise fragile, remains resistant to decoherence present in the external arm of the Vaidman interferometer.
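To get a feeling for the role of the spectral exponent µ, one can evaluate the standard T = 0 pure-dephasing exponent numerically. The sketch below uses the conventional form Γ(t) = 2∫dω g²(ω)(1 − cos ωt)/ω², which is an assumed normalization and is not identical to the quantity q of Eq. (17); it only illustrates the Ohmic versus super-Ohmic behaviour.

```python
import numpy as np
from scipy.integrate import quad

def dephasing_exponent(t, mu, lam=0.1, omega_c=1.0):
    """T=0 pure-dephasing exponent Gamma(t) = 2 * int dw g^2(w) (1 - cos(w t)) / w^2
    for g^2(w) = lam * w**(1 + mu) * exp(-w / omega_c); conventions are assumptions."""
    integrand = lambda w: 2.0 * lam * w**(mu - 1.0) * np.exp(-w / omega_c) * (1.0 - np.cos(w * t))
    # small lower cutoff avoids evaluating the (integrable) w -> 0 endpoint
    value, _ = quad(integrand, 1e-9, 50.0 * omega_c, limit=500)
    return value

for mu, label in [(0.0, "Ohmic"), (1.0, "super-Ohmic")]:
    print(label, [round(dephasing_exponent(t, mu), 3) for t in (0.0, 1.0, 5.0, 20.0)])
# The Ohmic exponent keeps growing (logarithmically in t), while the super-Ohmic one
# saturates, so the corresponding coherence factor stays bounded away from zero.
```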
Dissipation
The rigidity of the faint traces with respect to decoherence affecting the external interferometer of the Vaidman interferometer, accompanied by their fragility with respect to decoherence present in the internal loop, was analysed in the previous section for the very special model of pure decoherence. Here we investigate whether the above predictions survive also in the presence of dissipation, i.e. a realistic particle-environment interaction that does not exclude energy exchange. Such a problem, contrary to the exactly solvable pure decoherence, demands an approximate treatment 30 . We apply the Davies approach 33 , one of the rigorous approaches to open quantum systems dedicated to weak coupling to an environment 30 . The Davies treatment, being mathematically rigorous, can effectively be applied in various areas of quantum information [49][50][51][52] . Let us keep the previous assumption that decoherence locally affects only an external arm of the Vaidman interferometer and consider the particle-environment interaction encoded in one of the two Hamiltonians of Eq. (18). Here ε is assumed to be small. The choice of coupling (via the σ_x Pauli matrix) is complementary to the pure decoherence model studied in the previous section and allows us to verify whether the previously predicted stability of the faint trace still holds when disturbed by dissipation. Neither of the two Hamiltonians in Eq. (18) allows for an exact treatment similar to pure decoherence. Instead, we assume the vanishing-temperature limit T = 0 and utilize the Davies approximation to find the time evolution of the reduced (with respect to the environment) density matrix of the particle ρ(t) ∈ B(H_path ⊗ H_int) in terms of a Master equation, Eq. (19). We investigate the weak traces of a particle in the Vaidman interferometer assuming that there is a dissipative environment affecting the particle just after BS_1, attached to the external interferometer of the Vaidman interferometer in Fig. 1.
Figure 2. Weak values ⟨Π_{I,II,III} ⊗ I_2⟩_w at the instants A, B, C (upper panel), cf. Fig. 1, for the pre- and postselection given in Eqs. (20) and (22) respectively, and at B (lower panel) for the pre- and postselection given in Eq. (21), as a function of the duration of the particle-environment interaction. Time is given in units of 1/ω_c and we set ε = 0.05.
The weak values, Eq. (14), indicating the faint trace of a particle in the Vaidman interferometer at A, B, C in Fig. 1 are calculated numerically and presented in Fig. 2 as a function of the duration (time) of the interaction between the particle and its environment, which is coupled to the lower (III) arm of the external interferometer in the Vaidman interferometer in Fig. 1 after BS_1. Let us notice that none of the calculated weak values has a non-vanishing imaginary part, which would require a careful physical interpretation 4 . The weak trace calculated at A, C (at B) is presented in the upper (lower) panel of Fig. 2, respectively. The weak trace indicating the particle at A and C is present for the path III and absent otherwise. At B the particle leaves a weak trace in all three paths provided that the duration (time) of the dissipation is short with respect to the passage time. If this is not the case, one obtains oscillations which, for certain values of the duration, decrease the weak traces at I and II to a level which may be undetectable, so that, effectively but not factually, the particle remains 'visible' in III only. For an environment weakly disturbing path II after BS_1 one obtains qualitatively similar results.
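As mentioned in the Methods, the dissipative case is handled numerically with QuTiP's mesolve. The following is a minimal sketch of such a setup at T = 0, using a Lindblad equation with a spin-lowering collapse operator acting only while the particle occupies path III; the splitting E, the coupling ε and the chosen initial state are placeholder assumptions, not the actual Davies generator and parameters behind Fig. 2.

```python
import numpy as np
from qutip import basis, qeye, tensor, sigmaz, sigmam, mesolve

# Path space C^3 spanned by |I>, |II>, |III> and a spin-1/2 internal degree of freedom.
path = [basis(3, k) for k in range(3)]
E, eps = 1.0, 0.05                        # qubit splitting and weak coupling (assumed values)

H0 = tensor(qeye(3), 0.5 * E * sigmaz())  # free internal Hamiltonian, trivial on the path space

# T = 0 weak-coupling dissipator: spin lowering, switched on only on the external path III
# (a stand-in for the local coupling between two beam splitters).
P_III = path[2] * path[2].dag()
c_ops = [np.sqrt(eps) * tensor(P_III, sigmam())]

# Initial state: path III with spin up (a placeholder pre-selected state).
psi0 = tensor(path[2], basis(2, 0))
rho0 = psi0 * psi0.dag()

times = np.linspace(0.0, 20.0, 101)
result = mesolve(H0, rho0, times, c_ops=c_ops,
                 e_ops=[tensor(P_III, qeye(2))])  # population of the external arm
print(result.expect[0][-1])
```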
Discussion
Decoherence is a frequent cause of the failure of many experiments attempting to predict or confirm quantum properties of Nature. The recent prediction of a discontinuous path of a particle in a nested Mach-Zehnder interferometer can serve as a particular example of a deeply quantum effect requiring further experimental verification. One could doubt the precise measurability of the controversial and to some extent exotic properties of Ref. 1 and, in particular, Ref. 2 due to omnipresent noise blurring the results of measurements. To dispel such doubts we investigated how decoherence can affect the theoretical predictions of noise-less models and whether it can obscure, or even completely wash out, the theoretically predicted anomalies. Since recent experiments 24,26 utilized neutrons, which are particles with an internal spin degree of freedom, we consider the interference of qubits and decoherence affecting their spin. From a wide spectrum of models describing open quantum systems we select two limiting cases: (i) an exact, non-Markovian but specific pure dephasing model and (ii) a very general but approximate weak-coupling Davies approximation. Pure decoherence, treated exactly, does not take into account dissipation of energy, whereas the Davies approach does, at the cost of the approximations involved. However, the results obtained for these seemingly distant models are concordant: the qualitative properties of the weak traces (and their discontinuity) of a particle in the Vaidman interferometer are rigid with respect to decoherence affecting the external interferometer, but at the same time extraordinarily fragile if decoherence is present in the internal loop of the Vaidman interferometer, provided that the duration of the decoherence remains short with respect to the overall time scales of the particle's motion in the Vaidman interferometer. We hope that our results, although only qualitative, can serve as guidelines for experiments and support further investigations concerning the past of quantum particles.
Methods
For the analytical calculation of the pure decoherence model we utilized coherent state techniques. The numerical calculations for the dissipative environment in the Davies approximation were performed with the Python-based toolbox QuTiP 54,55 , using mesolve for the Master Equation Eq. (19). | 2021-01-14T14:25:31.612Z | 2021-01-13T00:00:00.000 | {
"year": 2021,
"sha1": "ae5450b93d695a6296ea9c927db8103355c8e3e5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-80806-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "90fa60e4b1d867afd526e0450ad5c79f406cddc2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52997523 | pes2o/s2orc | v3-fos-license | ON THE GLOBALIZATION OF THE POISSON SIGMA MODEL IN THE BV-BFV FORMALISM
We construct a formal global quantization of the Poisson Sigma Model in the BV-BFV formalism using the perturbative quantization of AKSZ theories on manifolds with boundary and analyze the properties of the boundary BFV operator. Moreover, we consider mixed boundary conditions and show that they lead to quantum anomalies, i.e. to a failure of the (modified differential) Quantum Master Equation. We show that it can be restored by adding boundary terms to the action, at the price of introducing corner terms in the boundary operator. We also show that the quantum GBFV operator on the total space of states is a differential, i.e. squares to zero, which is necessary for a well-defined BV cohomology.
Introduction
The goal of this paper is to take another step towards the deformation quantization of the relational symplectic groupoid (RSG) [9,10] through the Poisson Sigma Model (PSM) [41,40,31] (for the connection to the relational symplectic groupoid see [24,16,12]), using the BV-BFV 1 formalism for the quantization of gauge theories on manifolds with boundary [19,20]. This possible application of the BV-BFV formalism was first discussed in [20]. In [22] we explained how the quantization of the RSG can be achieved in the case of constant Poisson structures. In [21], we generalize the methods of formal geometry used in [22] to describe the perturbative quantization of any polarized AKSZ theory ([1]), possibly on manifolds with boundary. In this paper we apply the results of [21] to the PSM and extend them to the case of mixed boundary conditions. Let us briefly explain how we go about this task.
In Section 2 we give a very rough review of the classical and quantum BV-BFV formalism. For more details the reader is referred to the literature ( [19,20]). In particular we recall the Quantum Master Equation (QME) and its generalization to manifolds with boundary, called the modified Quantum Master Equation (mQME).
In Section 3 we recall the construction and the results of [21]. Most importantly, to apply the quantum BV-BFV formalism one needs to linearize the theory around constant maps, which form part of the moduli space of classical solutions of any polarized AKSZ theory ([1]). For non-linear targets, this can be done in a covariant way, as one varies the image of the constant map. In a natural way this leads to a family of quantizations, parametrized by the target, satisfying a generalisation of the mQME which we call the modified differential Quantum Master Equation (mdQME). This equation can be interpreted as the closedness of the state with respect to a flat "quantum Grothendieck connection" ∇_G. Moreover, under a change of gauge choices the state changes by a ∇_G-exact term, so that there is a certain cohomology describing the physical states.
In Section 4 we apply the results recalled in Section 3 to the PSM, which is an example of an AKSZ theory. In particular, we describe the algebraic structure which is captured in the mdQME and the flatness of ∇ G .
In Section 6 we discuss what happens when one combines the globalization of the partition function over constant maps with the alternating or mixed boundary conditions that appear in the RSG. In particular, we describe an anomaly that arises from the curvature of the deformed Grothendieck connection D_G, and how it can be cancelled by a quantum counterterm in the action. We also describe how the mdQME gets spoilt by terms that come from the corners where the different boundary conditions meet.
In Section 7 we explain how one can restore the mdQME for the PSM with alternating boundary conditions. For this one has to extend both the space of operators and the space of states, and we sketch these extensions in Sections 7.1 and 7.2. We also prove that the connection ∇_G remains flat.
Finally, in Section 8 we explain directions for further research. These are not restricted to the deformation quantization of the RSG. The methods developed in this paper could help understand both the globalization of other theories with more complicated moduli spaces of classical solutions, and the "extended" quantization (in the sense of extended TQFTs) of AKSZ theories on manifolds with corners (and possibly, defects of higher codimension).
For a field theory S_M : F_M → R, the partition function in the path integral approach is Z = ∫_{F_M} e^{(i/ℏ) S_M(Φ)} DΦ. Usually, F_M is infinite-dimensional, and one cannot define DΦ 2 . The way out is usually to translate the formal asymptotics as ℏ → 0 of finite-dimensional integrals to the infinite-dimensional case. The terms in the asymptotic expansion are conveniently labeled by Feynman diagrams [38]. If the critical points of the action functional S_M are degenerate, one needs to gauge-fix the theory before one can use the formal asymptotics. To give a construction of gauge-fixing in a finite-dimensional setting, consider super coordinates q^i, p_i and the odd symplectic form ω = Σ_i dq^i dp_i. We can define the BV Laplacian ∆ = Σ_i ∂²/(∂q^i ∂p_i). Then we get that ∆² = 0 and, for two functions f, g, ∆(fg) = ∆f g ± f ∆g ± (f, g), where ( , ) denotes the so-called BV bracket, the odd Poisson bracket induced by the symplectic form ω. Moreover, for f a function of the p and q variables and ψ a function only of the q variables, we can define a BV integral ∫_{L_ψ} f := ∫ f(q^i, p_i = ∂_i ψ) dq^1 · · · dq^n, to be understood as the integral of f over the Lagrangian submanifold L_ψ = graph dψ (here dq^1 · · · dq^n denotes Berezinian integration). If we assume that the integrals converge, we get that ∫_{L_ψ} ∆g = 0 for any g, and that ∫_{L_ψ} f is independent of ψ whenever ∆f = 0. The latter allows us to exchange the integral over a Lagrangian for which the integral is ill defined with a well-defined one. This procedure is called gauge-fixing. The construction can be extended to any (super)manifold. Moreover, considering f = e^{(i/ℏ)S}, we need two further conditions, the Master Equations at the classical and quantum level: the Classical Master Equation (S, S) = 0 and the Quantum Master Equation (S, S) − 2iℏ∆S = 0. The latter is equivalent to ∆e^{(i/ℏ)S} = 0, and the former is the classical limit of the latter.
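As a minimal toy illustration of this statement (not taken from the references): take one even variable q and one odd variable p, so that any function reads f(q, p) = f_0(q) + p f_1(q). Then ∆f is (up to sign) ∂_q f_1, and ∫_{L_ψ} f = ∫ (f_0(q) + ψ'(q) f_1(q)) dq. If ∆f = 0, then f_1 is a constant c, and the ψ-dependent part of the integral, c ∫ ψ'(q) dq, is a pure boundary term; for gauge-fixing functions ψ with identical asymptotics it vanishes, so the BV integral is indeed independent of the choice of ψ.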
2.3. BV theory. For M a closed manifold, we can construct a BV manifold (F M , ω M , S M ), where F M is a supermanifold with Z-grading, ω M an odd symplectic form of degree −1 on F M , and S M is an even function of degree zero on F M , which extends the classical action and satisfies the CME.
2.4. The case with boundary. Consider now a manifold M with boundary. Then the CME and the QME condition change ([20,21]). The CME becomes the "modified" CME (mCME), which is given by ι_{Q_M} ω_M = δS_M + π_M^* α^∂_{∂M}, where Q_M is a degree 1 cohomological vector field, i.e. [Q_M, Q_M] = 0; on closed manifolds it is defined as the Hamiltonian vector field of S_M. Here π_M : F_M → F^∂_{∂M} is the projection to the boundary fields, and α^∂_{∂M} is a 1-form on F^∂_{∂M} such that ω^∂_{∂M} = δα^∂_{∂M}. Again, we denote by δ the de Rham differential on the space of fields.
2.5. BV-BFV theory. The BV manifold construction can be extended to manifolds with boundary as was shown in [19,20]. Let M be a smooth manifold with boundary.
2.5.1. Classical theory. We can define, in analogy with subsection 2.3, a BFV manifold to be a triple (F_M, ω_M, Q_M), where F_M is a graded manifold, ω_M an even symplectic form of degree 0, and Q_M a degree 1 cohomological, symplectic vector field on F_M. If ω_M is exact, there is a 1-form α_∂M with the same parity and degree such that ω_M = δα_∂M; such a BFV manifold is called exact. A BV-BFV manifold over a given exact BFV manifold is a tuple (F_M, ω_M, S_M, Q_M, π_M), where F_M is a graded manifold, ω_M is an odd symplectic form of degree −1, S_M is an even function of degree 0, Q_M is a cohomological vector field, and π_M : F_M → F^∂_{∂M} is a surjective submersion, such that the BV-BFV axioms, in particular the mCME of subsection 2.4, are satisfied. 2.5.2. Quantization. The "modified" Quantum Master Equation (mQME) is given by ([20]) (ℏ²∆ + Ω)ψ_M = 0, where Ω is a differential operator on H_∂M, a certain space of functions on the leaf space B^P_∂M of a compatible polarization P. Ω quantizes the boundary action S^∂_{∂M} and satisfies Ω² = 0. ∆ is the BV Laplacian on the space of residual fields V_M ([20]), and ψ_M is again constructed through Feynman diagrams, with sources in the residual fields and on the boundary. In this picture, it is a function (or half-density) on V_M ⊗ B^P_∂M. If one changes the choices involved in the gauge-fixing, the state changes by a (ℏ²∆ + Ω)-exact term.
Quantization of AKSZ Sigma Models
In [21] it was shown that one can construct a perturbative quantization of any, possibly nonlinear, AKSZ Sigma Model ([1]) on manifolds with boundary by combining techniques of formal geometry as in [29,7] with the BV-BFV formalism as in [20]. Here one linearizes the space of fields around constant maps. Varying the image of the constant map, one obtains a family of quantizations labeled by elements of the target. By choosing a Grothendieck connection on the target ([7,29,6,34,25], see also the discussion in the appendix of [21]), we can add a new term to the action, corresponding to a new vertex in the Feynman diagrams, such that the new state ψ satisfies equation (1): if we vary the point of expansion, the state changes by a gauge equivalence. The new vertex is labeled with the coefficients of the connection 1-form of the Grothendieck connection and emits a single arrow, see [6,21]. Equation (1) is an extension of the mQME, which is called the modified "differential" Quantum Master Equation (mdQME). The mdQME was first introduced in [22], where it was shown that one can construct a globalized version of Kontsevich's star product ([33,11]) using the Poisson Sigma Model with constant Poisson structure on worldsheet manifolds with boundary and the BV-BFV formalism, especially its compatibility with gluing. The mdQME can be formulated as the statement that the full covariant quantum state ψ ([21]) is a flat section with respect to the quantum Grothendieck connection ∇_G, i.e. ∇_G ψ = (d_x − iℏ∆ + (i/ℏ)Ω)ψ = 0, where d_x represents the de Rham differential on the moduli space of classical solutions of the unperturbed theory, M_0 = {(x, 0) | x a constant map to the target}.
Moreover, it was shown that ∇_G is a flat connection and that, upon varying the choices involved in defining ψ (including the Grothendieck connection), the state changes by a ∇_G-exact term.
Review of the Poisson Sigma Model
The Poisson Sigma Model (PSM) ([31,41,40]) is a 2-dimensional topological field theory with important relations to deformation quantization ([33,11,15], see also Appendix A.5); in particular, it is a special case of an AKSZ sigma model. In this section we briefly review some aspects of its classical version.
Classical Poisson Sigma Model.
Fix a Poisson manifold (P, Π). The classical PSM associates to a smooth, oriented, compact and connected 2-manifold Σ (usually called the worldsheet) the space of fields F_Σ = VBun(TΣ, T*P), which is the space of vector bundle maps from TΣ to T*P. An element of the space of fields will be identified with a pair (X, η), where X : Σ → P is the base map and η ∈ Γ(Σ, T*Σ ⊗ X*T*P) is a 1-form on Σ with values in X*T*P. The action functional is S_Σ(X, η) = ∫_Σ ⟨η, dX⟩ + ½⟨η, Π(X)η⟩. Here ⟨ , ⟩ denotes the pairing between vectors and covectors. In local coordinates x^i on P, we can write η = η_i dx^i and X = (X^1, . . . , X^n). Then the action reads S_Σ(X, η) = ∫_Σ η_i ∧ dX^i + ½ Π^{ij}(X) η_i ∧ η_j, where we use the Einstein summation convention.
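Since everything that follows depends on Π actually being a Poisson bivector, a quick symbolic check of the Jacobi identity can be useful. The sketch below uses sympy and an illustrative linear (Lie-Poisson) bivector on R³; this choice of example is ours and is not the target considered elsewhere in the paper.

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
n = len(x)

# Linear (Lie-Poisson) bivector for so(3): Pi^{ij} = eps^{ijk} x_k (illustrative choice).
Pi = sp.Matrix(n, n, lambda i, j: sum(sp.LeviCivita(i, j, k) * x[k] for k in range(n)))

def jacobiator(Pi, x, i, j, k):
    """Pi^{il} d_l Pi^{jk} + cyclic permutations; it vanishes iff Pi is Poisson."""
    cyc = [(i, j, k), (j, k, i), (k, i, j)]
    return sp.simplify(sum(Pi[a, l] * sp.diff(Pi[b, c], x[l])
                           for (a, b, c) in cyc for l in range(n)))

print([jacobiator(Pi, x, i, j, k) for i in range(n) for j in range(n) for k in range(n)])
# All entries are 0, so Pi defines a Poisson structure and the PSM action is well defined.
```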
BV-BFV extension.
Since the PSM is a gauge theory, one needs to apply a gauge fixing formalism. Because the gauge symmetries only close on shell, the BRST formalism fails, and one needs to revert to the BV formalism (see [11,15] for introductions to the BV formalism and detailed discussion of the gauge symmetries; see also section 2) on closed surfaces and to the BV-BFV formalism on surfaces with boundary ( [19,20]).
BV extension.
The BV extended action and space of fields for the PSM can be constructed from the AKSZ formalism as discussed in [13]. The BV space of fields is the space of maps of supermanifolds F_Σ = Map(T[1]Σ, T*[1]P) ∋ (X, η), where X : T[1]Σ → P is a map and η is a section of X*T*[1]P. The BV action (Eq. (4)) is given by S_Σ = ∫_{T[1]Σ} ⟨η, DX⟩ + ½⟨η, Π(X)η⟩, where D = θ^µ ∂/∂θ^µ is the differential on T[1]Σ. In local coordinates on P we can write it as S_Σ = ∫_{T[1]Σ} η_i DX^i + ½ Π^{ij}(X) η_i η_j (Eq. (5)). On closed surfaces this action satisfies the Classical Master Equation (CME) (S_Σ, S_Σ) = 0. Here ( , ) is the odd Poisson bracket (BV bracket) associated to the odd symplectic form ω_Σ (Eq. (6)), where δ denotes the de Rham differential on the space of fields. One can reformulate the CME as Q_Σ(S_Σ) = 0, where Q_Σ = (S_Σ, ·); in local coordinates on P it is given by Eq. (7). The association Σ → (F_Σ, ω_Σ, Q_Σ, S_Σ) is an example of a BV theory (see also subsection 2.3).
BV-BFV extension.
In the BV-BFV formalism the boundary conditions are left unspecified and hence the CME no longer makes sense. However, one can still define the symplectic form ω by (6), the action by (4) and the vector field Q by (7). One now also introduces the boundary data (F^∂_{∂Σ}, ω^∂_{∂Σ} = δα^∂_{∂Σ}, Q^∂_{∂Σ}) and the map π_Σ : F_Σ → F^∂_{∂Σ} given by restriction of maps to the boundary. These then satisfy the axioms of a BV-BFV theory 3 ; in particular, equation (10) is the mCME as already defined in subsection 2.4.
Perturbative quantization.
We consider the PSM action as a perturbation of its quadratic part S_{0,Σ} = ∫_{T[1]Σ} η_i DX^i. Since we expand around critical points of S_{0,Σ}, this implies in particular that X is closed. Hence the ghost number 0 component of X is a constant map, which we denote by its image x ∈ P. As discussed in [14,6,22] and Appendix F of [20], it makes sense to perform perturbative quantization around points in the moduli space of classical solutions. Since the EL equations for the PSM are given by dX + Π(X)η = 0, we will perturb around the classical solution X = x = const., η = 0 and gauge-equivalent solutions. Hence for the PSM the appropriate moduli space is given by (11) M_0 = {(x, 0) | x a constant map to P} ≅ P.
In this special case we have M 0 ⊂ F Σ . Instead of fixing a single classical solution x ∈ M 0 and expanding around it, we want to vary x itself. We can do this using the methods of Appendix A.2.
Considering the fields X̂ and η̂ given by X = φ_x(X̂) and η = (dφ_x)^{*,−1} η̂, we obtain a formally globalized action S_{Σ,x} for the PSM, where we denote by d_{M_0} and d_Σ the de Rham differentials on M_0 and Σ respectively (we write the differential only once and leave out the subscript whenever it is clear from the context).
Simply polarized boundaries
We want to describe the mdQME for the PSM in the case of a worldsheet with a single boundary component endowed with a certain polarization (see Figure 1). 5.1. The boundary BFV operator. In this subsection we want to see how Ω is constructed without any globalization term, i.e. for S_Σ. We can formulate the boundary operator Ω for the PSM via the usual construction of collapsing subgraphs Γ′ of a graph Γ in ∂C_Γ for the non-globalized theory. We briefly review the results of [20, Section 4.8], where the boundary operator of the non-globalized theory was computed. We consider two different boundary representations, depending on the polarization: either an E-representation or an X-representation.
E-representation.
We look first at the E-representation. Denote by Conf_{n,k} the configuration space of n points in the bulk and k points on the boundary, and by C the quotient of Conf_{n,k} by translations and scalings. Then we have dim C = 2n + k − 2, which has to coincide with the form degree of the weight of the graph in question for the integral not to vanish. Thus we need 2n + k − 2 = 2n, since the n points in the bulk represent the Poisson tensor, i.e. each emits two arrows that have to remain inside the collapsing subgraph (otherwise the contribution vanishes by the boundary condition on the propagator). Hence we get k = 2. We label one boundary vertex by u_0 and the other one by u_1. Let I, J, K, R, S be multi-indices. Consider a graph Γ with incoming arrows at u_0 represented by the index I, which come from a subgraph Γ′ of Γ, and other incoming arrows at u_0, represented by the index R, not coming from Γ′. Moreover, consider incoming arrows at u_1, represented by the index J, which come from the same subgraph Γ′, and other incoming arrows at u_1, represented by the index S, not coming from Γ′. We also consider arrows, represented by the index K, going into Γ′. Then Γ′ can collapse to the boundary ∂C_Γ (see Figure 2). Summing over all subgraphs Γ′ of all graphs Γ that appear, we obtain the boundary operator Ω^E of Eq. (12), in which [E_J E_S] represents the so-called composite fields as in [20].
Figure 2. An example of a subgraph collapsing as in the description. Here we have three incoming arrows at the boundary to the right of the collapsing graph Γ′, corresponding to the index S, three incoming arrows at the boundary on the left, corresponding to the index R, three incoming arrows into Γ′, corresponding to the index K, two incoming arrows at u_0 from Γ′, corresponding to the index I, and one incoming arrow at u_1 from Γ′, corresponding to the index J.
Remark 5.1. Recall that x represents the constant solution as a map Σ → R^n. We can describe the weights B_{IJ}(x, Π), which depend on the Poisson tensor, as the coefficients in the star product, schematically f ⋆ g = fg + Σ_{I,J} B_{IJ}(x, Π) ∂_I f ∂_J g, where I, J are multi-indices (and i, j single indices) and B_{IJ} = 0 if |I| = 0 or |J| = 0.
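For a constant Poisson structure the Kontsevich coefficients reduce to those of the Moyal product. The following sympy sketch implements that truncated Moyal product and checks associativity up to the truncation order; the example functions and the truncation order are arbitrary illustrative choices and are not taken from the paper.

```python
import sympy as sp
from itertools import product

hbar = sp.symbols('hbar')
x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]
Pi = sp.Matrix([[0, 1], [-1, 0]])   # constant Poisson bivector on R^2
ORDER = 3                           # truncation order in hbar

def deriv(f, idx):
    return sp.diff(f, *[X[i] for i in idx]) if idx else f

def moyal(f, g, order=ORDER):
    """Truncated Moyal product: sum_n (i*hbar/2)^n/n! Pi^{i1 j1}...Pi^{in jn} d_I f d_J g."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for idx in product(range(2), repeat=2 * n):
            I, J = idx[:n], idx[n:]
            coeff = sp.Integer(1)
            for k in range(n):
                coeff *= Pi[I[k], J[k]]
            term += coeff * deriv(f, I) * deriv(g, J)
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

f, g, h = x1**2, x1 * x2, x2**3
diff_assoc = sp.expand(moyal(moyal(f, g), h) - moyal(f, moyal(g, h)))
# Associativity holds modulo terms of order hbar^(ORDER+1): all printed coefficients are 0.
print([sp.simplify(diff_assoc.coeff(hbar, k)) for k in range(ORDER + 1)])
```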
X-representation.
In the X-representation there can be arbitrarily many points on the boundary. Denote the number of points on the boundary by k. We then have a construction similar to that of the E-representation, with the difference that each boundary vertex can have arbitrarily many outgoing arrows going out of the collapsing graph Γ′ (to the left or right) and arbitrarily many outgoing arrows going into Γ′. Label the points on the boundary by 1, ..., k and let L, I_1, ..., I_k, R_1, ..., R_k be multi-indices. Then at the i-th vertex we have outgoing arrows, represented by the index R_i, going out of Γ′, and outgoing arrows, represented by I_i, going into Γ′. Moreover, we have outgoing arrows, represented by the index L, coming from Γ′. Then Γ′ ⊂ Γ can collapse to the boundary in ∂C_Γ (see Figure 3). This gives us the boundary operator Ω^X, in which the coefficients a_{I_1,...,I_k} are given by the sum of the weights over all Feynman diagrams with k boundary points and I_j outgoing arrows at the j-th point, 1 ≤ j ≤ k.
Figure 3. An example of a subgraph collapsing as in the description. We consider a term with k = 2 as before and label the boundary points by u_1 and u_2. Here we have three outgoing arrows on the right side of the collapsing graph Γ′, corresponding to the index R_1, three outgoing arrows on the left side, corresponding to the index R_2, three outgoing arrows leaving Γ′, corresponding to the index L, and one arrow going into Γ′ out of each of the two boundary points, corresponding to the indices I_1 and I_2.
5.2. The globalized BFV operator. We now give a formulation of Ω in which we also take into account the globalization term S_{Σ,x,R}. Recall that graphically this amounts to introducing new vertices emitting only a single arrow, representing the vector field R, as explained in the Feynman rules of [21]. This means that Ω now becomes an inhomogeneous form on P, since R is a 1-form on P.
As before, we distinguish between the E-and the X-representation.
E-representation.
We have seen that degree counting implies that there are exactly two boundary vertices in a collapsing graph. Now we have to take the R vertices into account. Consider a collapsing graph with n bulk and k boundary vertices. Then the dimension of the corresponding configuration space is 2n + k − 2. On the other hand, there are now two types of bulk vertices: suppose there are m vertices labeled by the Poisson bivector field (emitting two arrows) and r vertices labeled by the vector field R (emitting one arrow). Since arrows cannot leave the collapsing graph, the total form degree is 2m + r, which has to equal 2n + k − 2. Since n = m + r, this implies that r + k = 2. This means there can be either zero, one, or two vertices labeled by R, with two, one, or zero boundary vertices respectively, as shown in Figure 4. The first contribution, r = 0, k = 2, is exactly the operator Ω^E given in (12) from the non-globalized case. In the case r = 1, k = 1 we get the graphs arising in the definition of the connection 1-form A as in (39); this contribution is given by Eq. (14), in which A_I is the sum of the weights of graphs whose incoming arrows at the boundary vertex are labeled by I. The contribution from graphs with r = 2, k = 0 can be expressed by the curvature term F defined in (40). In the X-representation, arrows can leave the collapsing graph, so we cannot do a degree count as in the E-representation; in particular, the number of R vertices in a collapsing graph is only bounded by the dimension of P. So the globalized boundary operator is given by the sum over i ≥ 0 of terms Ω^{X,r=i}, where Ω^{X,r=i} is the sum of all graphs with i vertices labeled by R. In particular, Ω^{X,r=0} = Ω^X.
5.3. Algebraic structure of the boundary operator. We know from [21] that (∇_G)² = 0 and that this is equivalent to d_x Ω + ½[Ω, Ω] = 0. For the PSM it is interesting to see how this condition can be derived by looking at the explicit structure of Ω as discussed in 5.2. We again consider the two different representations separately.
In the E-representation, decomposing Ω = Ω^(0) + Ω^(1) + Ω^(2) according to the form degree on P, the condition d_x Ω + ½[Ω, Ω] = 0 splits into the equations (16)-(20), one for each form degree.
Sketch of the proof. We will only prove (17) in detail (being sloppy with signs) and sketch the idea of the proof of the other equations. The construction in [14], recalled in Appendix A, yields a bundle E = ST*P[[ε]] of ⋆-algebras on P, obtained by applying Kontsevich's deformation quantization in every tangent space. Picking a Grothendieck connection D_G = d_x + R on P and applying the Kontsevich formality map to R, one obtains a connection D_G = d_x + A on E. In [14] it is shown that this connection is a derivation of Γ(E), i.e. for σ, τ ∈ Γ(E) we have D_G(σ ⋆ τ) = (D_G σ) ⋆ τ + σ ⋆ (D_G τ) (21). We claim that this equation implies (17). This can be checked directly by writing out (17) and (21) in coefficients, but it is best seen through Feynman diagrams (after all, A and the star product are defined through Feynman diagrams). First, we rewrite (21) in diagrammatic form, as depicted in Figure 5.
Figure 5. Schematics of the diagrammatic content of (21). σ and τ are arbitrary sections of Γ(E). We sum over all possible graphs. By d_x we mean that we apply d_x to the result of the integration. An R means that there is precisely one vertex labeled by R in every graph.
The left-hand side of this equation is given by applying d_x to the coefficients of the star product. Schematically, we represent the diagrammatic content as in Figure 5. On the other hand, we recall from [21] that the commutator [Ω^(0), Ω^(1)] can be expressed by replacing the boundary vertices in the graphs defining Ω^(1) by the graphs appearing in Ω^(0) and vice versa. If we ignore possible arrows arriving at the boundary vertices from outside the graph, this yields precisely the graphs on the right-hand side of Figure 5: the first term consists of the graphs of Ω^(0) placed at the boundary vertex of graphs appearing in Ω^(1), and the second and third terms represent the graphs of Ω^(1) placed at one of the boundary vertices of Ω^(0). Arrows arriving from outside the graph correspond to taking derivatives of σ and τ. On the other hand, the left-hand side yields precisely d_x Ω^(0). Equation (16) holds, since the non-globalized boundary operator squares to zero (which is in turn a consequence of the CME, see [20] and [21]). Equation (20) holds, since there are no E-field contributions in Ω^(2). Equation (19) corresponds to the Bianchi identity, and equation (18) follows from a similar diagrammatic argument.
5.4. Modification of the action. We modify the classical BV action by using results of [6,14,17], as we also discuss in Appendix A. Let γ ∈ Ω^1(P, E) be a solution of equation (48) for some choice of ω ∈ Ω^2(P, E), as discussed in A.3 (here the formal parameter ε is given by (−iℏ)/2). If ω = 0, the modified formally globalized action for the PSM is obtained by adding to S_{Σ,x} the boundary term S_{Σ,γ} of Eq. (23).
Remark 5.2. Here we integrate the 1-form part of X along the boundary. Since the X fluctuation vanishes on components of the boundary in X-representation, this implies that for a single boundary component in X-representation S_{Σ,γ} does not give any contribution to Ω^X. Therefore we only need to look at the E-representation. Moreover, note that γ = O(ℏ), i.e. it is already a type of quantum counterterm which is not present classically, so it does not violate the mCME.
Remark 5.3. If we want to consider the case ω ≠ 0, we need to add another term to the action, S_{Σ,ω}, given by Eq. (24). We denote the action S_{Σ,x} together with the latter terms by S̃_{Σ,x}.
Considering the degree counting as in 5.2.1, we get different cases of boundary vertex configurations. For the case r = 0, k = 2 we can have either two E-field boundary vertices, one E-field and one γ boundary vertex, or two γ boundary vertices. For the case r = 1, k = 1 we can have either one E-field boundary vertex or one γ boundary vertex. For the case r = 2, k = 0 we have the same contribution as before. In the case ω ≠ 0 there is in addition a configuration with r = k = 0 and a single ω vertex. These different diagrams contribute different terms to the new boundary operator (they all have to be understood as sections in which the formal variable y is replaced by δ/δE), namely:
• r = 0, k = 2 (E, E on the boundary): summing over all these graphs, this corresponds to the term Ω^E;
• r = 0, k = 2 (γ, γ on the boundary): summing over all these graphs, this corresponds to γ ⋆ γ;
• r = 0, k = 2 (E, γ on the boundary): summing over all these graphs, this corresponds to the mixed term with one E and one γ insertion;
• r = 1, k = 1 (E on the boundary): summing over all these graphs, this corresponds to the term A(R, Π_φ)(e^{(i/ℏ)E});
• r = 1, k = 1 (γ on the boundary): summing over all these graphs, this corresponds to the connection term A(R, Π_φ)(γ) = A(R, Π_φ)(γ_i) dx^i;
• r = 2, k = 0 (nothing on the boundary): summing over all these graphs, this corresponds to the curvature term F(R, R, Π_φ);
• r = k = 0 (just one ω vertex in the bulk): summing over all graphs, this just yields ω.
By equations (48) and (44), the terms with two γs, one γ, nothing on the boundary, and possibly ω, can be combined and do not contribute to the boundary operator, which we denote by Ω^{E,γ}.
5.4.1. mdQME. We want to see how this modification changes the mdQME. In particular, we want to look at the mdQME for the full covariant state ψ, which is defined using S̃_{Σ,x} and Ω^{E,γ}. Note that the mdQME holds by construction of Ω^{E,γ}, as shown in [21]. Moreover, by equation (45) and the fact that d_x e^{(i/ℏ)E} = 0, the surviving terms determine the boundary operator Ω^{E,γ}; the term involving e^{(i/ℏ)E} acts as a multiplication operator on the space of states.
5.4.2. Flatness. For the flatness of ∇_G we have to prove that d_x Ω^{E,γ} + ½[Ω^{E,γ}, Ω^{E,γ}] = 0. Separating the equation by form degree on P, this is equivalent to equations (29)-(31). Equation (29) just says that the standard Ω squares to zero. Equation (31) is true because D_G is a flat connection. Equation (30) means that Ω^E is a D_G-closed section; this comes from the fact that the coefficients of Ω^E are the same as those of the star product.
6. Alternating boundary conditions and the mdQME
6.1. Consistent boundary conditions. In [11] it was shown that the perturbative expansion of the QFT given by the PSM on the disk coincides with Kontsevich's star product, where one expands around the gauge-equivalent classical solutions of the given EL equations, which are X = x = const., η = 0 (recall subsection 4.3 and see also Appendix A). The boundary conditions on the disk D are set precisely such that η|_{∂D} = 0 in order to be consistent with this type of solution.
6.2. Construction of boundary conditions. We want to extend the methods developed in the previous sections to describe the deformation quantization of an object called the relational symplectic groupoid (RSG, see [16,9,10]) as in [22]. This requires that we perform the BV-BFV quantization in the presence of "alternating" boundary conditions, which we can formulate for any higher-genus worldsheet. Let ∂Σ = ⊔_ℓ ∂Σ^(ℓ) and consider, for every connected component ∂Σ^(ℓ) of the boundary, a partition into two distinguished parts, ∂Σ^(ℓ) = ∂_1Σ^(ℓ) ⊔ ∂_2Σ^(ℓ). Each ∂Σ^(ℓ) is given as a disjoint union of an even number of intervals I_j. The alternating condition is that on components of ∂_1Σ^(ℓ) we set η = 0, while on components of ∂_2Σ^(ℓ) we choose some polarization P_j for each I_j and consider the corresponding boundary fields. We think of the endpoints of the intervals as "corners".
6.3. Curvature Anomaly. Unlike in the constant case discussed in [22], upon quantization 4 the mdQME fails to be satisfied. This effect arises from the curvature of the Grothendieck connection; namely, if we try to prove the mdQME as in [21], when integrating over the boundary of the compactified configuration space there are strata where a bulk graph collapses at a point u ∈ ∂_1Σ (i.e. one of the boundary components where η = 0). Summing over all these graphs one obtains the curvature of the Grothendieck connection (for more details see Appendix A). However, since there are no boundary fields on ∂_1Σ, these terms cannot be cancelled by a term in Ω. This can be interpreted as a quantum anomaly, since the problem is not present at the classical level. To restore the mdQME, we can add additional terms to the action, reminiscent of the addition of counterterms. This will yield new boundary terms, but they can be cancelled by adding appropriate terms to Ω, as we have already seen in subsection 5.4, if we allow for a slight extension of the space of states (see subsection 7.1).
Figure 6. Example of a worldsheet manifold Σ with genus g = 4 and two disjoint boundary components ∂Σ^(1) and ∂Σ^(2).
6.4. The new state. To cancel this anomaly we add quantum counterterms to the action, specifically the terms S_{Σ,γ} and S_{Σ,ω} defined in (23) and (24) respectively. The new terms in the action give rise to additional vertices. Namely, we now have vertices of arbitrary valence on the components of the boundary where X = 0, i.e. on the η = 0 boundary components and the components of ∂_2Σ in E-representation. At such a vertex we place the corresponding derivative of γ in the formal directions. Also, there are new bulk vertices labeled by ω, which are similarly decorated by derivatives of ω in the formal directions.
6.5. New boundary contributions in the proof of the mdQME. If we try to proceed with the proof of the mdQME as in [21], we get terms where a part of a graph collapses on ∂_1Σ, i.e. the part of the boundary where η = 0. We will now analyse these terms more closely. Let Γ′ ⊂ Γ be a subgraph that collapses at a point of the boundary, and denote by Γ/Γ′ the resulting graph. Suppose Γ′ has n bulk vertices and k boundary vertices on ∂_1Σ. Then the dimension of the corresponding boundary stratum is 2n + k − 2, since it is the quotient of the compactification of C. The contribution of the graph is non-vanishing only if the form degree of ω_{Γ′} is also 2n + k − 2. The bulk vertices correspond to either Π or R; the former has two outgoing arrows, the latter only one. If one of these arrows points out of Γ′, then ω_{Γ/Γ′} = 0, since it contains a propagator whose tail is evaluated on the η = 0 boundary component. Hence all these arrows must point to another vertex in Γ′. Suppose there are m vertices with two outgoing arrows and r vertices with one outgoing arrow. Then we must have 2m + r = 2n + k − 2 together with n = m + r, which is equivalent to r = 2 − k (m is arbitrary). Since r ≥ 0, we conclude that k is either 0, 1, or 2. Let us analyse these possibilities in more detail.
6.5.1. Terms with k = 0. In these terms there are no boundary vertices. They are also present if we do not add S_{Σ,γ} to the action. We have r = 2 − k = 2, so these terms are given by graphs with R at two vertices. Summing over all these terms yields the curvature of the Grothendieck connection, F (again, see Appendix A for details). This is what spoils the mdQME, since we cannot cancel it with terms in Ω, which can only cancel boundary contributions on boundary components with free boundary fields. We are thus forced to add other terms to the action to cancel this contribution.
6.5.2. Terms with k = 1. In these terms there is one boundary vertex labeled by γ and one bulk vertex labeled by the vector field R. If we sum over all such graphs, we get the term A(R, Π_φ)(γ), by the definition of A as in Appendix A.
6.5.3. Terms with k = 2. In these terms there are two boundary vertices labeled by γ and no vertices labeled by the vector field R. If we sum over all such terms, we get precisely the star product γ ⋆ γ. Here there is no translation symmetry, so the dimension of the boundary stratum is different. Adding S_{Σ,γ} to the action cancels the anomaly that comes from allowing for alternating boundary conditions. However, it results in new boundary contributions that come from graphs collapsing at the corners, as we will show presently. Let C denote the set of all corner points of Σ.
There are two types of corners: let C_2 ⊂ C denote the subset of corner points which connect a δ/δX-polarized connected component of ∂_2Σ (i.e. a component in E-representation) with a connected component of ∂_1Σ, and let C_1 ⊂ C denote the subset of corner points which connect a δ/δE-polarized connected component of ∂_2Σ (i.e. a component in X-representation) with a connected component of ∂_1Σ.
Figure 9. The two types of corners.
The propagator still vanishes when its tail is evaluated at one of the corners (this can be checked from the explicit formula for the propagator in Appendix B). For this reason, as above, if some subgraph Γ′ of a graph Γ collapses at a corner, the contribution is non-vanishing only if no arrows leave Γ′. Let us start with a corner C in C_2. Then we cannot have propagators ending at the δ/δX-polarized boundary, since otherwise we would need to evaluate the E-field at the corner point, which is equal to zero because of its boundary condition. So any subgraph collapsing at C can only have bulk vertices, say n = m + r of them, where m denotes the number of interaction vertices and r the number of R vertices, and vertices on ∂_2Σ, say k of them. Counting dimensions, we arrive at 2m + r = 2n + k − 1 with n = m + r, which has the solutions k = 0, r = 1 and k = 1, r = 0, with m arbitrary. However, at these corners, graphs with bulk vertices do not contribute; this is the statement of the following lemma. Lemma 6.1. If Γ′ is a subgraph of Γ containing bulk points, then the integral of ω_Γ over the boundary face of C_Γ where Γ′ collapses at a corner C ∈ C_2 vanishes.
Proof. The point is that at these corners the boundary conditions are the same on both sides, so we can map the configuration to a configuration of points on the upper half-plane, where we use the usual Kontsevich propagator, but without taking the quotient with respect to translations along the real axis. Instead, we fix the image of the corner point to be a given point, e.g. 0. See also Figure 10. Now observe that configurations with one bulk point evaluate to 0: these are either k = 0, m = 0, r = 1, which is ruled out because there are no tadpoles, or k = 1, m = 1, r = 0, which vanishes because graphs cannot have double edges. For configurations with more bulk points, note that the Kontsevich propagator depends on the real parts of the points in the configuration only through their differences. Hence the product of propagators that is to be integrated has no component in the real part of the center of mass of the configuration, so integrating along this direction yields 0.
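The translation argument used in this proof can be checked directly. The small numpy sketch below uses one common form of the Kontsevich angle function on the upper half-plane; the precise normalization is an assumption and does not affect the invariance under horizontal translations.

```python
import numpy as np

def kontsevich_angle(z, w):
    """One common form of the Kontsevich angle function on the upper half-plane
    (normalization conventions vary): arg((w - z) / (w - conj(z)))."""
    return np.angle((w - z) / (w - np.conj(z)))

rng = np.random.default_rng(0)
z = rng.random() + 1j * (0.5 + rng.random())   # two points in the upper half-plane
w = rng.random() + 1j * (0.5 + rng.random())
for a in (0.0, 1.3, -2.7):                     # horizontal translations along the real axis
    print(kontsevich_angle(z + a, w + a))
# The printed values coincide: the angle depends on the real parts of the points only
# through their differences, which is what makes the center-of-mass integration vanish.
```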
This means that the only possibly non-zero contributions are those with k = 1, n = 0, i.e. subgraphs Γ′ consisting of a single γ vertex (possibly with any number of inward leaves) approaching the corner. This vertex can lie either on the ∂_1Σ or on the ∂_2Σ component, and the corresponding boundary faces have opposite orientations. Hence all terms cancel out: there are no extra contributions from corners in C_2. Next let us turn to corners C ∈ C_1. Here the boundary conditions change, so the propagator does not have translation symmetry along the axis. However, by continuity, it now vanishes when either the head or the tail is evaluated at the point of collapse. This implies that a subgraph collapsing at C can have neither inward nor outward leaves, i.e. only entire connected components of graphs can collapse at a corner C ∈ C_1. Counting dimensions as above, we see that there are again the two possibilities r = 0, k = 1 and r = 1, k = 0, with m arbitrary; in addition, we can now have an arbitrary number b of vertices at the boundary with X-representation.
Figure 11. Possibilities for graphs collapsing at C ∈ C_1: an example of an r = 0, k = 1 configuration and an example of an r = 1, k = 0 configuration.
Since only connected components of a graph can collapse, the corresponding action on the state is a multiplication operator Ω^{C_1} that multiplies states by a functional of the values of X at the corners in C_1, given by summing over all possible boundary contributions. This is not a regular functional as in [20], as it contains evaluations of fields at the corners. This means that we have to extend the space of states accordingly.
The mdQME for the globalized PSM with alternating boundary conditions
Since the PSM is an example of an AKSZ theory, we can use the results of [21]. That means that the mdQME holds for S Σ,x [( X, η)], if we do not impose alternating boundary conditions. Otherwise the proof of the mdQME fails, as we have shown in subsection 6.5. To restore it, we have to extend the allowed spaces of operators and states for S Σ,x , and change Ω to account for the new corner contributions depending on γ, such that the quantum Grothendieck connection still squares to zero.
Extension of states.
In the mdQME the boundary operator Ω acts on the state, and thus we need to make sure that the state space is closed under this action. This is not trivially satisfied, since the corner terms involve evaluating the boundary field at the corner points. There are two different corner terms in Ω^{C_1}: either a single γ vertex on the boundary approaching the corner and no vector field R, or no boundary vertex on the η = 0 component and a single vector field R included in the graph from the bulk (see also Figure 11). Note that since γ ∈ Ω^1(P, E) and R ∈ Ω^1(P, TP ⊗ E), the collapsing graphs at the corner are given either by an operator B_γ, in the case where we have a γ on the boundary, or by B_R, in the case where we do not have any γ on the boundary. Both operators depend on the boundary field evaluated at the corner, i.e. X(C), where C ∈ C_1. They multiply the state by the functional that evaluates the boundary fields at the corner, i.e. the operators are given by maps of the form ψ ↦ F · ψ, where F is a functional depending on the boundary fields evaluated at the corner. We construct the total bundle H^C_tot = H_tot ⊗ H^C, where H^C is the space of functionals depending on the boundary fields evaluated at the corners C ∈ C. Now we can define a state to be an inhomogeneous differential form on P with values in H^C_tot, i.e. an element of Ω^•(P, H^C_tot).
Extension of operators.
Recall from [20] that the algebra of boundary operators is generated by Ω_0, which is the standard quantization of S_{0,Σ}, and by simple operators built from the boundary fields and derivatives in the composite fields, with coefficients L^J_{I_1···I_r}. Note that we can also have a similar expression for E. We want to extend this space by the multiplication operators coming from the corners as described above. The algebra of boundary operators acts on the algebra of corner operators by commutators. E.g. ∂_k Π^{ij} X^k δ/δ[X^i X^j] is a boundary operator and [X^i X^j](C) ∂_i γ ∂_j γ is a corner operator, with C ∈ C_1; their commutator is again an operator of the extended type. The extended space now consists of operators that take a state in H^C_tot and multiply it by an element of H^C.
7.3. mdQME and Flatness. 7.3.1. mdQME. The proof of the mdQME proceeds as in [21]. We observe that, after using Stokes' theorem for the integration along the fiber, we have to show that the boundary contributions vanish. The new boundary contributions from the Grothendieck connection are those described in subsection 6.5.1; we can think of them as a new vertex labeled by the curvature F. Similarly, there will be new boundary vertices corresponding to A(R, Π_φ)(γ), γ ⋆ γ and d_x γ in (d_x − iℏ∆ + (i/ℏ)Ω)ψ. Since F = D_G γ + γ ⋆ γ and D_G γ = d_x γ + A(R, Π_φ)(γ), these vertices cancel out after summing over all graphs. Note that the term d_x γ now appears in d_x ψ. This happens on the parts of the boundary where S_{Σ,γ} is non-zero, i.e. everywhere except on the parts of ∂_2Σ in X-representation. Here we also get new contributions to Ω^pert coming from the collapse of graphs with R vertices, and from graphs collapsing at the corners we get Ω^{C_1}.
7.3.2. Flatness. The flatness condition (∇_G)² = 0 reduces to the statement that Ω = Ω^∂ + Ω^{C_1}, where Ω^∂ is the part of the operator without corner contributions, is a Maurer-Cartan element of the dgla of differential forms with values in End(H_tot).
We can show this similarly to [20,21]. Namely, since Ω^∂ and Ω^{C_1} are given as sums of integrals over the boundary of the configuration space of collapsing graphs, we can use Stokes' theorem on these integrals. Here C^c_{Γ′} denotes the configuration space describing the relative positions of the vertices of the subgraph Γ′ collapsing at the corner. In the first term, the differential can act on the propagators, the boundary fields, or the vertex tensors Π_φ, γ, R. The restriction of the propagators to this boundary face is closed, see Appendix B. If the differential acts on the boundary fields, this yields [Ω_0, Ω^{C_1}]. The differential acting on the vertex tensors will be cancelled by boundary terms. Notice that on the boundary faces the dimension counting is different, and we can have either two vertices labeled by R, one R vertex and one γ vertex on the boundary, or two γ vertices on the boundary. A boundary face of C^c_{Γ′} corresponds to the collapse of a subgraph Γ′′ ⊆ Γ′ to a single point. There are four distinct possibilities for that point:
• The point can be in the bulk. If Γ′′ contains more than two vertices, then the contribution is zero by a Kontsevich vanishing lemma. If it contains exactly two vertices, there is a cancellation similar to the one in the proof of the mdQME, using the classical master equation, the fact that the vertex tensors are (d_x + R)-closed, and that [R, R] = 0.
• The point can be the corner. These terms yield [Ω^{C_1}, Ω^{C_1}].
• The point can be at the boundary with the η ≡ 0 boundary condition. In that case there is a cancellation similar to the one in the proof of the mdQME in section 6.5, using the corresponding equation.
A groupoid is a small category whose morphisms are invertible. We denote a groupoid by G ⇒ M , where M is the set of objects and G the set of morphisms. A Lie groupoid is, roughly speaking, a groupoid where M and G are smooth manifolds and all structure maps are smooth. Finally, a symplectic groupoid is a Lie groupoid with a symplectic form ω ∈ Ω 2 (G) such that the graph of the multiplication is a Lagrangian submanifold of (G, ω) × (G, ω) × (G, −ω). The manifold of objects M has an induced Poisson structure uniquely determined by requiring that the source map G → M is Poisson. A Poisson manifold M that arises this way is called integrable. Not every Poisson manifold is integrable. The reduced phase space of the PSM on a boundary interval with target an integrable Poisson manifold P is the source-simply-connected symplectic groupoid of P ([16]). In general, the reduced phase space is a topological groupoid arising by singular symplectic reduction. In [24,9,10] it was however shown that the space of classical boundary fields always has an interesting structure called a relational symplectic groupoid (RSG). An RSG is, roughly speaking, a groupoid in the "extended category" of symplectic manifolds whose morphisms are canonical relations. Recall that a canonical relation from (M 1 , ω 1 ) to (M 2 , ω 2 ) is an immersed Lagrangian submanifold of (M 1 , ω 1 ) × (M 2 , −ω 2 ). The main structure of an RSG (G, ω) is then given by an immersed Lagrangian submanifold L 1 of (G, ω), which plays the role of the unit, and by an immersed Lagrangian submanifold L 3 of (G, ω) × (G, ω) × (G, −ω), which plays the role of the associative multiplication. (In addition, there is also an antisymplectomorphism I of G that plays the role of the inversion map.) In case M is integrable, it was also shown that the RSG G is equivalent, as an RSG, to the symplectic groupoid G.
8.1.2.
Kontsevich's star product. One can construct the Moyal product [37] (deformation quantization) as a gluing of canonical relations, as was shown in [22]. It remains to show that one can also use the gluing of the RSG to construct a globalized version of Kontsevich's star product using the gluing formulas of the BV-BFV formalism. One can thus use the results of this paper to deal with the L 3 worldsheet structure, which is given as in figure 12 with mixed boundary conditions.
Figure 12. The canonical relation L 3 with its boundary structure. Here we have two δ/δX-polarized boundaries (the lower) and one δ/δE-polarized boundary (the upper), which correspond to ∂ 2 L 3 and to the η = 0 boundaries, which are components of ∂ 1 L 3 .
8.1.3. RSG with handles. Another interesting aspect would be to consider the RSG with handles, i.e. canonical relations L 3 of nonvanishing genus. Since our theory is topological, we are able to move the handle in arbitrary directions, which means that one has to understand what happens when a hole approaches an observable in the gluing of the disk in [21]. Moreover, one has to check what kind of structures appear for associativity. 8.1.4. Generalization of Kontsevich's star product. Kontsevich's star product arises from the computation of expectation values of observables in the Poisson Sigma Model for a genus-zero worldsheet surface. As in string theory, one expects that we should sum over all genera. Since a particular gluing of the RSG gives rise to Kontsevich's star product, one can relate this structure to the RSG construction with handles.
We will return to these questions in a forthcoming paper.
8.2.
Manifolds with corners. The methods developed in this paper can be useful for describing the quantization of manifolds with corners. Here the corners arose from the structure of mixed boundary conditions, but in principle the methods that we develop might be adapted to the general case. Another paper in this direction is [32]. 8.3. Globalization of other theories. AKSZ theories have a particularly nice subset of classical solutions, the space of constant maps. This subset admits a natural globalization, as was shown in [21]. It would be interesting to see whether the methods we used carry over to more complicated moduli spaces of classical solutions. E.g. in Chern-Simons theory, this subset consists only of the trivial connection, since the body of the target in that case is just a point, but one would like to take non-trivial connections into account as well.
Appendix A. Deformation quantization and the Poisson Sigma Model
In this section we recall the connection between the globalization of Kontsevich's star product and the Poisson Sigma Model that was discussed in [17,15,14,8,23,33,25].
A.1. Kontsevich's formality map on R d . Kontsevich's formality map is an L ∞ (quasi-iso)morphism from multivector fields T poly R d := Γ • T R d to multidifferential operators D • poly R d on R d . As such it consists of a family of maps U n given by sums over the set G n,ℓ of graphs with n + ℓ numbered vertices, with ℓ := 2 − 2n + Σ_{i=1}^{n} k_i, such that the jth vertex for 1 ≤ j ≤ n emanates exactly k j arrows (without short loops). Here k i represents the degree of the multivector field ξ i . Note that U n (ξ 1 , ..., ξ n ) acts on ℓ functions. The B Γ,ξ 1 ,...,ξ n are multidifferential operators, depending on a graph Γ and on the multivector fields ξ 1 , ..., ξ n , and the w Γ are weights corresponding to a graph Γ as in [33]. For a vector field ξ (i.e. ξ of degree 1) and a bivector field Π (i.e. Π of degree 2) we can define the objects P, A and F. We have chosen the letters in this way because later we will think of P as Kontsevich's star product for a given Poisson tensor Π, of A as a connection 1-form, and of F as its curvature. Let us take a look at some of the graphs appearing for some chosen multivector fields. For example, for a bivector field Π, the term U 1 (Π) corresponds to the first graph of figure 13, whereas for a multivector field V of degree r we get for U 1 (V) the second graph of figure 13. Let now ξ be a vector field. Note that the number ℓ for U n (ξ, Π, ..., Π) will always be 1 for every n, which implies that A(ξ, Π) takes a smooth map f as an argument. We want to look at the graphs appearing in higher terms of A. We can, e.g., consider the n = 3 term, i.e. U 3 (ξ, Π, Π). Some examples of graphs in G 3,1 , which are taken into account in the sum, are given in figure 14.
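The displayed formulas defining these maps did not survive extraction here. As an assumption consistent with the surrounding definitions (the graph weights w_Γ, the operators B_Γ, and the formal parameter ε used for the bundle E below), the standard Kontsevich form would read:

```latex
% Hedged reconstruction, not quoted from the source: standard form of the
% formality maps and of the derived objects P, A, F (normalizations assumed).
U_n(\xi_1,\dots,\xi_n) = \sum_{\Gamma \in G_{n,\ell}} w_\Gamma \, B_{\Gamma,\xi_1,\dots,\xi_n},
\qquad
P(\Pi) = \sum_{n\geq 0} \frac{\varepsilon^n}{n!}\, U_n(\Pi,\dots,\Pi),
\qquad
A(\xi,\Pi) = \sum_{n\geq 0} \frac{\varepsilon^n}{n!}\, U_{n+1}(\xi,\Pi,\dots,\Pi),
\qquad
F(\xi_1,\xi_2,\Pi) = \sum_{n\geq 0} \frac{\varepsilon^n}{n!}\, U_{n+2}(\xi_1,\xi_2,\Pi,\dots,\Pi).
```

These index choices are at least consistent with the counting in the text: for P all entries have degree 2, giving ℓ = 2 (a bidifferential operator); for A one entry has degree 1, giving ℓ = 1; for F two entries have degree 1, giving ℓ = 0.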
We can also say explicitly what the differential operator given by a graph will be; for example, one can write down the operator corresponding to the graph in figure 14(b). By definition of F, there are only derivatives of the vector fields in the bulk, i.e. for every n we get ℓ = 0, so the image of U n is a differential operator of degree zero, which is a smooth function. Some examples of graphs in G 3,0 are given in figure 15.
Figure 15. Examples of graphs in G 3,0 .
A.2. Notions of formal geometry. We want to give the most important notions of formal geometry as in [29], following the presentation in [14] and [6]. For a smooth manifold P we can consider a formal exponential map ϕ ∈ Γ(T P), such that for x ∈ P we have ϕ x : T x P → P, and we define a vector field R ∈ Γ(T * P ⊗ T P ⊗ ST * P), which is a 1-form with values in derivations of ST * P. Here S denotes the completed symmetric algebra. In local coordinates we write R = R i dx i . Then we can define the classical Grothendieck connection D G := d + R, which is flat. A.3. Globalization. Now let us describe how to generalize the above procedure to an arbitrary Poisson manifold (P, Π). Namely, let x ∈ P, and ϕ a formal exponential map on P. Then Π ϕ,x , the Taylor expansion of Π around x defined using ϕ, is a Poisson tensor on ST * x P. Any choice of coordinates on T x P now allows us to identify ST * x P ∼ = R[[y 1 , . . . , y d ]] and define Kontsevich's star product P (Π ϕ,x ). See [17] for a discussion of the equivariance of this construction in the choice of coordinates. In this way we get a new bundle E := ST * P[[ε]] of ⋆-algebras. One can use the Grothendieck connection defined in A.2 to describe a subalgebra A ⊂ Γ(E) which is a deformation quantization of C ∞ (P), seen as a subalgebra of Γ(E). Formally we have a deformation quantization map C ∞ (P) ⊂ Γ(E) → A ⊂ Γ(E). The algebra A is given by the closed sections under a deformation of the Grothendieck connection, which is defined in two steps: for a tangent vector ξ ∈ T x P, one defines the deformed connection in the direction ξ in terms of A( ξ, Π ϕ ) (equation (43)), where again we denote by Π ϕ the Poisson tensor Π lifted to a formal neighborhood and ξ is the lift of ξ defined in (42). One can then write (44) D G = d + A(R, Π ϕ ), interpreting A(R, Π ϕ ) as a one-form valued in differential operators on E, and express it at any point x ∈ P in coordinates x i around x. One can then show (see [17]) that D G is a globally defined connection on Γ(E), a derivation, and that (D G ) 2 is an inner derivation, i.e.
(D G ) 2 σ = [F P , σ] ⋆ := F P ⋆ σ − σ ⋆ F P , for any σ ∈ Γ(E), where F P is the Weyl curvature tensor of D G , given by F P (ξ 1 , ξ 2 ) := F ( ξ 1 , ξ 2 , Π ϕ ), where ξ 1 , ξ 2 ∈ T x P are two tangent vectors on P. More precisely, F P is a 2-form valued in sections of E which in local coordinates can be expressed as F P x = dx i ∧ dx j F (R i (x; ), R j (x; ), Π ϕ,x ). For the Weyl tensor we get D G F P = 0. The task is to modify the globalized connection D G slightly more, so that it becomes flat while still remaining a derivation. One can set (45) D G := D G + [γ, · ] ⋆ , and observe that for any 1-form γ ∈ Ω 1 (P, E) this connection is a derivation. Moreover, its Weyl curvature tensor is then given by (46) F P = F P + D G γ + γ ⋆ γ.
We call (43) the deformed Grothendieck connection and (45) the modified deformed Grothendieck connection. One then needs to find γ ∈ Ω 1 (P, E) such that F P = 0, which implies that (D G ) 2 = 0; the D G -closed sections then form the algebra A as a deformation quantization of C ∞ (P). Computing (D G ) 2 explicitly by using (45) and (46), one sees that γ has to satisfy F P + D G γ + γ ⋆ γ = 0. The existence of such a γ was shown in [14,17] by homological perturbation theory. Now we want to focus on some special cases, namely two important examples of Poisson structures.
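Before turning to these examples, the following compact summary may help in keeping the three connections apart. The bars and tildes are notation added here, since the original typography distinguishing the classical, deformed and modified connections did not survive extraction; this is a restatement under that assumption, not a quotation:

```latex
% Notation assumed here: D_G classical, \bar D_G deformed, \widetilde D_G modified.
\bar D_G = \mathrm{d} + A(R,\Pi_\varphi), \qquad
(\bar D_G)^2 \sigma = [\bar F_P, \sigma]_\star, \qquad
\widetilde D_G := \bar D_G + [\gamma,\,\cdot\,]_\star, \qquad
\widetilde F_P = \bar F_P + \bar D_G \gamma + \gamma \star \gamma,
```

so that choosing γ with \bar F_P + \bar D_G γ + γ ⋆ γ = 0 gives (\widetilde D_G)^2 = 0, and the \widetilde D_G-closed sections form the algebra A.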
A.3.1. Constant Poisson structure. The situation of a constant Poisson structure is a first example to think about. Let (P, Π) be a Poisson manifold with constant Poisson structure Π, and let ξ ∈ T x P for x ∈ P be a fixed tangent vector. By the definition of A, and the fact that each vertex has only one outgoing and no incoming arrow, we get A( ξ, Π ϕ ) = ξ, so the deformed connection in the direction ξ reduces to ξ + ξ, i.e. to the classical Grothendieck connection D ξ G . Therefore we get (D G ) 2 = 0 and thus F P = 0.
A.3.2. Linear Poisson structure. Let now (P = g * , Π) be a Poisson manifold with linear Poisson structure Π(x) = Π ij k x k ∂ ∂x i ∧ ∂ ∂x j , where Π ij k represent the structure constants of g, and ξ ∈ T x P for x ∈ P be a fixed tangent vector. As in the constant case, we observe that A( ξ, Π ϕ ) = ξ, which is the case since the integral of a bulk vertex with one incoming and one outgoing arrow is zero, and since there is at most one incoming arrow for each vertex.
A.5. Connection to the Poisson Sigma Model. In [11] and [15] it was shown that Kontsevich's formality map on R d can be interpreted as the perturbative computation of expectation values of observables of the PSM on the upper half plane (or, equivalently, the disk) with values in R d . The graphs which appear in the construction of Kontsevich's star product on Poisson manifolds ([33]) are drawn on the upper half plane, where, according to the boundary of the configuration space, they can collapse on the boundary of the upper half plane. This means that the graphs that appear in the PSM are exactly the graphs that appear for Kontsevich's star product. More precisely, if one considers the disk D in R 2 and the classical action of the PSM on D given by S D [(X, η)] = ∫ D ⟨η, dX⟩ + 1/2 ⟨Π(X), η ∧ η⟩, we can asymptotically write Kontsevich's star product of two smooth maps f and g as a perturbative expansion of the following path integral: (49) f ⋆ g(x) = ∫ X(∞)=x f (X(0)) g(X(1)) e^{i S D [(X,η)]}, where 0, 1, ∞ represent marked points on the boundary of D. Note that x ∈ Map(D, R d ) is a constant map, i.e. we get a local representation of the star product.
Theorem B.1. The integral kernels for the superpropagators G S i in the presence of two branes are given in terms of the mirror maps (52) and (53). The integral kernels satisfy the additional boundary conditions θ(v, u) S 1 = θ(v,ū) = θ(−v, u) S 1 , θ(v, u) S 2 = θ(v, −ū) S 2 = θ(v, u) S 2 , i.e. every boundary component of P 2 is labeled by a boundary condition for both of the variables (u, v). By construction θ(v, u) S 1 = θ(u, v) S 2 , θ(v, u) S 2 = θ(u, v) S 1 .
B.4. Relation to Kontsevich's propagator. Let φ be Kontsevich's angle 1-form. Then, one can show that where A 1 = I 2 ∩ I 2 and A 2 = I C 1 ∩ I C 2 . | 2018-08-06T11:54:37.000Z | 2018-08-06T00:00:00.000 | {
"year": 2018,
"sha1": "83a68ba33cab78f831fe95e16e38e61572b4fd24",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.01832",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "43572564abab5ed07df259313566860ef8c11092",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
267359589 | pes2o/s2orc | v3-fos-license | Impact of Residential Area Characteristics and Political Group Participation on Depression Among Middle-Aged and Older Adults: Results of an 11-Year Longitudinal Study
Abstract Background and Objectives The claim that political group attendance is associated with poor mental health among older adults may be conditioned on geographic conditions. This study examined the geographical context in which political group participation may be associated with depression. Research Design and Methods The 11-year follow-up data from the Taiwan Longitudinal Study on Aging, covering 5,334 persons aged ≥50 years, were analyzed using random-effects panel logit models. Depression was assessed using 10 items on the Centre for Epidemiologic Studies Depression scale. Participants were asked to indicate whether they belonged to different social groups. We modeled depression as a function of political group participation (the independent variable) and geographical region (moderators), adjusting for individual-level characteristics. Results Respondents in political groups were more likely to report depression than those in nonpolitical groups (adjusted odds ratio [AOR] = 1.90, 95% confidence interval [CI] = 1.34–2.68). Between urban and rural settlements, there were no statistically significant differences in mental health outcomes among older adults engaged in political groups (AOR = 1.72, 95% CI = 0.81–3.67). For those who remained politically engaged, living in areas with lower levels of electoral competition was associated with a lower likelihood of depression (AOR = 0.92, 95% CI = 0.86–0.98); this conditional effect was not prevalent among those who were solely engaged in nonpolitical groups (AOR = 1.02, 95% CI = 0.99–1.03). Discussion and Implications Political group participation is associated with poor mental health among older adults living in politically competitive regions.
Depression is the leading cause of disease burden in older adults. The global prevalence of major depression in older persons was 13.3% (Abdoli et al., 2022). Given that antidepressants might pose a risk of potentially dangerous drug-drug interactions among older adults due to polypharmacy (Ishtiak-Ahmed et al., 2023), public health scientists need to develop more social intervention measures to prevent the development of depression among older persons.
Broader literature confirms that various forms of social group participation in old age are associated with decreased depression (Amagasa et al., 2017; Guo et al., 2018; Tomioka et al., 2017). Greater community involvement, including volunteer activities, community events, and clubs, has been associated with a lower risk of psychological distress among older adults (Amagasa et al., 2017). Additionally, frequent participation in sports groups and hobby clubs may influence mental health more positively than nonparticipation (Tomioka et al., 2017). However, previous studies have found the opposite effect concerning the association between engagement with political groups and depression, with higher engagement being associated with depression (Croezen et al., 2015; Lin & Yan, 2022). The proposed mechanisms involve party-based trust biases, which result in a weaker sense of political community, can raise the possibility of increased hostility between people with different political orientations, and promote feelings of uncertainty about who can represent their views (Lin & Yan, 2022). In this context, a growing body of evidence supports the resulting phenomena of "the health cost of politics" (Smith, 2022; Smith et al., 2019), "political depression," and "election stress disorder" (Chang & Meyerhoefer, 2023; Hoyt et al., 2018; Krupenkin et al., 2019; Pitcho-Prelorentzos et al., 2018; Roche & Jacobson, 2019; Simchon et al., 2020; Tashjian & Galván, 2018). As older people have stronger feelings of psychological attachment to an ideological in-group upon accepting relevant party cues (Devine, 2015), those engaged in political groups are more likely to be influenced by politics, and thereby to develop depression, compared to those who engage in nonpolitical groups.
Epidemiologists and public health researchers are studying the effects of residential area characteristics on depressive moods in older populations. Previous research has suggested urban-rural differences in mental health among older adults (Friedman et al., 2007; Purtle et al., 2019; Sasaki et al., 2021; Srivastava et al., 2021; Sun & Lyu, 2020; Tang et al., 2020), but the evidence is mixed. The pathways through which urban-rural residence influences depression risk among older adults might differ by country. According to a meta-analysis, urban dwellers in developed countries were significantly more likely to experience depression than those residing in rural areas; however, this association was not observed in developing countries (Purtle et al., 2019). Recent studies have also examined the association between the neighborhood environment and the mental health of older adults (Barnett et al., 2018; Guo et al., 2020; Joshi et al., 2017; Miao et al., 2019; Stahl et al., 2017). Generally, higher neighborhood socioeconomic status, social cohesion, crime-related safety, recreational services, and walkability were negatively associated with depressive outcomes.
Little is known about how the interplay between political group participation and geographic conditions influences older adults' mental health.Politically organized areas may be an important moderator variable.There are several reasons to expect electorally competitive regions to contribute to poor mental health outcomes among older adults engaged in political groups.First, highly competitive pressures tend to facilitate the ideological indoctrination of older adults who engage in political groups, thereby leading to polarized expressions of support for certain political values, which may weaken political community belonging.Recent research indicates that intense political competition serves to mobilize partisan identities and increase partisan animus, thus creating polarized evaluations of political actors because politicians tend to increase electoral fortune by evoking citizens' partisan feelings (Carlin & Love, 2018;Singh & Thornton, 2019).Additionally, the existing evidence suggests that countries with a high degree of political polarization may pose a challenge to governance, and citizens may experience a sense of alienation and reduced willingness to cooperate and compromise if their adversaries are elected to power (McCoy et al., 2018).
The second potential interpretation is that the negativity of the campaign, defined as any criticism leveled by one candidate against another during a campaign, in contrast to the use of messages intended to promote one's policy positions and record, increases with the competitiveness of the race (Auter & Fine, 2016;Banda, 2022;Hassell, 2021;Maier & Nai, 2022;Nai, 2020;Yan, 2022).There is a greater likelihood of depression among older adults in political groups living in areas where candidates rely heavily on negative campaigning, as it can create an environment of hatred in which individuals may attack major ideological differences held by friends or family members, resulting in a weaker sense of community belongingness that can lead to depression.Studies suggest that politics have resulted in lost friendships, ruined family reunions, and disrupted workplaces (Smith et al., 2019).Evidence suggests that exposure to negative campaign messages and political rhetoric is associated with psychological distress, especially when vulnerable populations perceive themselves as being targeted (Chavez et al., 2019;Frost & Fingerhut, 2016;Niederdeppe et al., 2021;Williams & Medlock, 2017).
Third, electoral competitiveness increases political uncertainty regarding who will win an election.This certainly promotes feelings of uncertainty regarding whether the newly elected government will act according to its wishes and meet its needs.After elections, a government rife with opposing ideologies reflects a general departure from the concept of long-standing individual political beliefs, thus contributing to the personal loss of meaning and purpose.A pre-2016 United States presidential election national survey reported that the majority of both Democrats and Republicans reported stress about the country's uncertain future (American Psychological Association, 2017).There is also evidence that there were significant increases in stress, depression, anxiety, and poor sleep quality among groups with high levels of opposition to President Trump such as Democrats, liberals, racial minorities, and students in the aftermath of the 2016 election (Hoyt et al., 2018;Krupenkin et al., 2019;Pitcho-Prelorentzos et al., 2018;Roche & Jacobson, 2019;Simchon et al., 2020).Those who were politically interested and engaged were also more likely to report negative effects on their mental health (Smith, 2022).We expected a similar effect to be observed among older adults engaged in political groups.
This study hypothesized that, for older adults who remained politically engaged, living in areas with higher levels of electoral competition would be associated with a higher likelihood of depression. By contrast, this conditional effect would not be prevalent among those who were solely engaged in nonpolitical groups. For the purpose of comparison, other residential characteristics, such as urban-rural residence, were expected to have no impact on the likelihood of depression among older adults involved in political groups.
Given that political competition is a crucial factor, it is reasonable to anticipate that those residing in the capital city and engaged in political groups will have significantly higher odds of reporting depression. Research has found that elections are more competitive in heterogeneous districts than in their homogenous counterparts (Aistrup, 2004; Koetzle, 1998) and in larger districts than in smaller ones (Gerring et al., 2015). Greater diversity should make incumbents more vulnerable by offering more heterogeneous groups in the electorate whose views the opposition can adequately represent. A larger district size should increase the supply of challengers and reduce the personal connections between representatives and their constituents, thereby encouraging greater contestation. Capital cities are likely to be sociologically, economically, and ideologically diverse, and highly populated because they create employment opportunities that attract population inflows (Dascher, 2000). Thus, this study hypothesized that there is a pathway linking capital city status, electoral competition, and mental health among older adults engaged in political groups.
This study used panel data covering 5,334 persons aged 50 years and older across four waves from 1996 to 2007 to examine the geographical contexts in which political group participation could be associated with depression. Using the same sample, we further investigated the association of participation in multiple social groups and residential area characteristics with depression.
Study Population
The unit of analysis was the "individual-year." This study used data collected from the Taiwan Longitudinal Study on Aging (TLSA), which was conducted by the Health Promotion Administration of Taiwan and offers a nationally representative sample of the Taiwanese population. This longitudinal study was based on a three-stage equal-probability sampling design using household registration data and information collected during face-to-face interviews. Data were collected across six waves from 1989 to 2007. We assembled a sample of 15,053 pooled time-series and cross-sectional observations from four of the six survey rounds: 1996, 1999, 2003, and 2007. This study examined individual differences within the same year, as well as potential temporal trends in how political and social engagement relate to the likelihood of depression, by comparing subsequent years to the results obtained in 1996.
Study Variables
Depression was assessed using 10 items from the Centre for Epidemiologic Studies Depression scale (CES-D). The respondents rated how frequently each item applied to them over the preceding week. The ratings were based on a 4-point scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time). Thus, this short form of the CES-D generates total individual scores ranging from 0 to 30, with higher scores indicating greater depressive symptoms. A dummy variable was created such that 1 = respondents with depression (≥10 points) and 0 = otherwise (0-9 points).
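To make the scoring concrete, here is a minimal sketch, not taken from the paper, of how the 10-item total and the binary depression indicator described above could be computed; the column names are hypothetical.

```python
import pandas as pd

# Hypothetical data frame with one row per respondent-year and ten CES-D items,
# each already coded 0 (rarely/none of the time) to 3 (most/all of the time).
CESD_ITEMS = [f"cesd_{i}" for i in range(1, 11)]

def add_depression_indicator(df: pd.DataFrame, cutoff: int = 10) -> pd.DataFrame:
    """Sum the 10 items (range 0-30) and flag scores at or above the cutoff."""
    df = df.copy()
    df["cesd_total"] = df[CESD_ITEMS].sum(axis=1)
    df["depressed"] = (df["cesd_total"] >= cutoff).astype(int)
    return df

# The robustness checks described later simply rerun this with cutoff=9 or 11:
# tlsa = add_depression_indicator(tlsa, cutoff=10)
```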
As for the independent variables representing the type of engagement in social groups, participants were asked to indicate whether they were members of community friendship groups, religious groups, business associations, political groups, volunteer groups, clan associations, and/or senior groups (0 = "no"; 1 = "yes"). We created a categorical variable for specific types of group participation: 0 = those who did not participate in social groups, 1 = those who were solely engaged in political groups, 2 = those who were solely engaged in nonpolitical groups, and 3 = those engaged in both types.
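A small sketch of that four-category coding, assuming hypothetical 0/1 membership flags; only the political versus nonpolitical distinction matters here.

```python
import pandas as pd

NONPOLITICAL = ["friendship", "religious", "business", "volunteer", "clan", "senior"]

def code_participation(row: pd.Series) -> int:
    """0 = no groups, 1 = political only, 2 = nonpolitical only, 3 = both."""
    political = row["political"] == 1
    other = any(row[g] == 1 for g in NONPOLITICAL)
    if political and other:
        return 3
    if other:
        return 2
    if political:
        return 1
    return 0

# Example usage on a hypothetical TLSA frame:
# tlsa["group_type"] = tlsa.apply(code_participation, axis=1)
```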
The three moderating variables were measured as follows. First, urban and rural settlement was obtained using the item "area of residence." The question has five answer options (large city = 1, big city = 2, small city = 3, township = 4, and rural area = 5); options 1, 2, and 3 were combined into "urban," option 4 was coded as "suburban," and option 5 as "rural." Second, we divided the administrative regions according to capital city status (binary, 1 = Taipei City) and special municipality status (binary, 1 = Kaohsiung City).
Third, the disparity between the political bases of parties reflects the level of political competition in electoral regions (electoral competition).We calculated the level of political competition in electoral regions by comparing the percentage of votes won by candidates formally nominated by the two major parties in the four local elections to elect county magistrates (city mayors) between 1993 and 2005.The narrow margin between the two candidates indicates a high level of political competition.Three points merit further investigation.First, we did not analyze presidential elections because a presidential candidate with more votes in county-level electoral districts, unlike a county magistrate candidate, does not necessarily guarantee victories.Second, competition between the two electorally dominant parties has intensified since democratization.Small parties either endorse a candidate from the major parties or, with a main party, nominate a joint candidate for the office of a county magistrate (city mayor).Taiwan's political structure is divided into two camps-the pan-green coalition and the pan-blue coalition-so that bipolar competition still creates party-coalition-based trust biases for individuals who identify with small parties.Third, it is common for the two major parties to send a candidate to run for mayor or county magistrate.However, some candidates may violate party discipline and participate in elections without party approval.These nonnominated aspirants may potentially entice supporters of their own parties.Thus, only considering the difference in votes between formally nominated candidates from the two major parties may underestimate or overestimate the differences between party bases.We computed the average of the differences between party nominees' vote shares in the four local elections to eliminate occasional bias from nonnominated aspirants when coding for single electoral outcomes.
This operationalization also shows whether the electoral region has been dominated by one of the two major parties for a long time or whether their bases of support are quite close. For example, consider a constituency in which the two major parties alternately win by 10 percentage points. A 10-point gap in vote share would be coded as low political competition if single electoral outcomes were used, yet such a constituency should be viewed as one characterized by intense political rivalry precisely because the two major parties alternately win. Averaging across elections is more in line with the actual gap in party strength. Data were collected from the Central Election Commission.
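As an illustration of the computation described above (not code from the paper), the sketch below averages the signed gap in vote share between the two major parties' nominees across the four elections and then takes its magnitude, so that alternating wins largely cancel out. The column names, and the reading of "average of the differences" as a signed average, are assumptions.

```python
import pandas as pd

def electoral_competition(votes: pd.DataFrame) -> pd.Series:
    """Average vote-share gap per electoral region (county/city).

    `votes` is assumed to hold one row per region-election with columns
    'region', 'year', 'share_blue', 'share_green' (percent of votes won by
    each major party's formally nominated candidate). Larger values mean a
    larger long-run gap between party bases, i.e. lower competition.
    """
    votes = votes.copy()
    votes["signed_gap"] = votes["share_blue"] - votes["share_green"]
    # Average over the 1993-2005 magistrate/mayor elections, then take the
    # magnitude: alternating wins of similar size average out to a small gap.
    return votes.groupby("region")["signed_gap"].mean().abs()
```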
Individual-level characteristics were set as control variables, including gender (binary, 1 = male), age, education level (0, 1-6, 7-9, 10-12, and ≥13 years), frequency of exercise (0, 1-2, and ≥3 times per week), and current living status (binary, 1 = living alone).Furthermore, chronic diseases (hypertension, diabetes, heart disease, stroke, lung disease, arthritis, gastrointestinal disease, hepatobiliary disease, and renal disease) were recorded based on self-reported data.A count of conditions was created based on the total number of chronic conditions for each participant (ranging from 0 to 9), while the number of morbidities was treated as a categorical variable (0: absence of disease, 1-4: mild to moderate, and ≥5: severe).Older adults were also asked to indicate whether financial difficulties had occurred in their families during the past 12 months (i.e., answers of no, somewhat difficult, and very difficult).Finally, self-rated economic situations were obtained using the question, "In general, are you satisfied with your current economic status?"This was answered on a 5-point Likert scale (very satisfied = 1, satisfied = 2, neither = 3, unsatisfied = 4, and very unsatisfied = 5, with options 1 and 2 combined into "satisfied" and options 4 and 5 combined into "not satisfied").Supplementary Table 1 summarizes the characteristics of the study participants in the total sample.
Statistical Analysis
This study tracked 5,334 older adults over 11 years, covering 15,053 pooled time-series and cross-sectional observations. The data combine cross-sectional and time-series characteristics, making them panel data. Fixed-effects and random-effects models can both be used with panel data, and the choice between them is affected by several factors. A fixed-effects model may not prove efficacious if participants exhibit minimal or no variation over time, as it requires within-subject variation in the variables. In other words, the standard errors from fixed-effects models may be too large to tolerate if there is a limited degree of within-subject variability, thereby rendering random-effects models more suitable. This study applied a random-effects panel logit model, as the vast majority of older adults changed little over time.
To verify our hypothesis that political group participation is associated with poor mental health among older adults, and that this association may be conditioned on geographic conditions, we computed three interaction terms: "group participation × urban and rural settlements," "group participation × administrative divisions," and "group participation × the level of political competition in electoral regions." An interaction effect occurs when the effect of one variable on an outcome depends on the value of a second variable.
A mediation model was used to explain the mechanisms linking capital city status and mental health among older adults involved in political groups. A mediation model aims to identify and explicate the mechanism underlying the observed relationship between independent and dependent variables by incorporating a third variable, commonly referred to as a mediator, which explains the process through which the independent and dependent variables are related. Here, the mediator was the level of electoral competition. A mediation effect was considered to exist if the effect of capital city status on the likelihood of depression for older adults engaged in political groups disappeared (or was at least weakened) when the level of electoral competition was included in the regression, while the mediator still exerted an effect on the likelihood of depression among older adults engaged in political groups. Statistical analyses were performed using Stata software.
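The random-effects panel logit models with interaction terms were fit in Stata. As a rough illustration only, the sketch below specifies the same moderation structure with a pooled logit and person-clustered standard errors in Python's statsmodels; this is a simplified stand-in for the authors' estimator, and all variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation_model(tlsa: pd.DataFrame):
    """Pooled-logit stand-in for the random-effects panel logit.

    `tlsa` is assumed to hold one row per person-year with a binary outcome
    'depressed', the 4-level 'group_type' participation variable, the
    'competition' measure (average vote-share gap), and control variables.
    """
    formula = (
        "depressed ~ C(group_type) * competition"
        " + C(group_type) * C(urbanicity)"
        " + male + age + C(education) + C(exercise) + lives_alone"
        " + C(morbidity) + C(financial_difficulty) + C(wave)"
    )
    # Cluster standard errors by person to acknowledge repeated observations.
    return smf.logit(formula, data=tlsa).fit(
        cov_type="cluster", cov_kwds={"groups": tlsa["person_id"]}
    )
```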
Robustness Tests
For the robustness tests, we changed the cutoff score for the determinants of depression from 10 to 9 or 11.Moreover, we ran additional robustness tests that regarded the outcome variable as a continuous variable.Furthermore, participants were asked to provide additional information on whether they were members of learning clubs for older adults in the 1999 survey to code their type of social group engagement.The analysis also compared the percentage of votes won by the two major parties in the magistrate and mayor elections as an alternative measure of political competition.In other words, we added the percentage of votes won by party nominees and nonnominated aspirants for each party to indicate the political bases of the parties in each election.It is possible that notable electoral events during this period may have led to different conclusions regarding depression.If older adults who participate in political groups encounter an additional county magistrate or a mayoral by-election during the survey period, it may adversely affect their mental health.This is because additional electoral campaigns may increase outparty hostility.To exclude this alternative explanation, we included the interaction term "group participation × byelection" in the regression model.Finally, individuals who lived alone were excluded from the analysis.This was because they were both more prone to depression and more likely to engage in political groups without pressure from their family members of different ideological orientations.The analysis also excluded individuals who did not receive much social support because it is likely that older adults in political groups only were primarily those who were isolated from other forms of social interactions.To allege that political group engagement affects depression implies a spurious interrelationship.Social support was measured by asking, "How much do you feel that your family, relatives or friends care for you?Would you say a great deal, quite a bit, some, very little, or not at all?" Participants were perceived as receiving less social support if they chose the response "some, very little or not at all."
Results
Respondents in political groups were more likely to report depression than those in nonpolitical groups (adjusted odds ratio [AOR] = 1.90, 95% confidence interval [CI] = 1.34-2.68; Figure 1-1). Respondents who did not participate in social groups also had a higher likelihood of depression than those who reported engagement in nonpolitical groups (AOR = 1.44, 95% CI = 1.28-1.62). Respondents in both political and nonpolitical groups were less likely to be depressed than those who were only engaged in political groups (AOR = 0.45, 95% CI = 0.28-0.72). However, there were no statistically significant differences in the likelihood of depression between those engaged in both types and those in nonpolitical groups only (AOR = 0.85, 95% CI = 0.58-1.22). The results were consistent with our primary analysis when respondents were reclassified into nonpolitical groups (those who were solely engaged in a nonpolitical group), political groups, multiple groups (those who were engaged in at least two group types), or no groups (Figure 1-2).
Among those who remained politically engaged, living in areas with higher levels of electoral competition was associated with a higher likelihood of depression. A one percentage point increase in the difference in the percentage of votes won by the candidates formally nominated by the two major parties indicated an 8% lower likelihood of depression for older adults engaged in political groups (AOR = 0.92, 95% CI = 0.86-0.98; Figure 3-1). This conditional effect was not prevalent among those who were solely engaged in nonpolitical groups (AOR = 1.02, 95% CI = 0.99-1.03) or in both types (AOR = 1.02, 95% CI = 0.96-1.10). The association of an additional percentage point increase in the vote-share gap with the likelihood of depression was stronger among older adults in political groups than among those in nonpolitical groups (AOR = 0.90, 95% CI = 0.85-0.96; Figure 3-2). However, the difference between nonpolitical groups and both types in this association was not statistically significant (AOR = 0.99, 95% CI = 0.92-1.06).
Those living in the capital city and engaged in political groups were more likely to report depression than their counterparts outside the capital city (AOR = 3.22, 95% CI = 1.57-6.59); in contrast, the effects were reduced for those in nonpolitical groups (AOR = 1.66, 95% CI = 1.14-2.42; Figure 4-1). When both the explanatory variable and the mediator were included, the effect of living in the capital city on the likelihood of depression for older adults engaged in political groups weakened (AOR = 2.66, 95% CI = 1.28-5.52, compared to AOR = 3.22, 95% CI = 1.57-6.59; Figure 4-2). However, the indirect effect was statistically significant: a one percentage point increase in the difference in the percentage of votes won by the candidates formally nominated by the two major parties indicated a 7% lower likelihood of depression for older adults engaged in political groups (AOR = 0.93, 95% CI = 0.87-0.99; Figure 4-3). Supplementary Table 6 presents the results of robustness tests.
Discussion
This study examined the geographical context in which participation in political and nonpolitical groups may be associated with depression among older adults in Taiwan. First, there was a greater likelihood of depression among older adults who were solely engaged in political groups than among those who were engaged in nonpolitical groups only. Second, for older adults who remained politically engaged, living in areas with higher levels of electoral competition was associated with a higher likelihood of depression; this conditional effect was not prevalent among those who were solely engaged in nonpolitical groups or engaged in both types.
It is widely accepted that social participation benefits the mental health of older adults (Amagasa et al., 2017; Croezen et al., 2015; Guo et al., 2018; Lin & Yan, 2022; Tomioka et al., 2017). Existing research confirms that various forms of social group participation can exert a positive impact on mental health in older adults (Amagasa et al., 2017; Guo et al., 2018; Tomioka et al., 2017), which is consistent with our findings. However, political group engagement has been associated with increased depression (Croezen et al., 2015; Lin & Yan, 2022), which is also in line with the findings of this study.
Geographical conditions may influence the claim that political group attendance is associated with poorer mental health among older adults.Urban and rural settlements are potential conditions; abundant literature indicates urban-rural differences in mental health among older adults, but the evidence is mixed.Some studies have indicated that poor mental health is positively associated with residence in rural areas (Sasaki et al., 2021;Sun & Lyu, 2020;Tang et al., 2020), whereas others have confirmed that older adults in urban areas exhibit worse mental health than their rural counterparts (Friedman et al., 2007;Purtle et al., 2019;Srivastava et al., 2021).Overall, we found no statistically significant urban-rural differences in the likelihood of depression among older adults who reported participating in nonpolitical groups.This is consistent with previous findings that social networks and participation eliminate mental health disparities between urban and rural older adults (Sun & Lyu, 2020;Tang et al., 2020).The results also showed no definite pattern of urban and rural settlements affecting depression among older adults engaged in political groups.This indicates that politically organized areas other than the urban-rural divide may play a moderating variable role.
Political group participation is associated with poor mental health among older adults living in politically competitive regions.The effect was not statistically significant among those who were solely engaged in nonpolitical groups or both groups.We speculate that there are several possible explanations for this.First, competitive pressure causes rival parties to expedite the ideological indoctrination of older adults who engage in political groups, thereby leading to polarized expressions of support for certain political values that may weaken community belonging (Carlin & Love, 2018;McCoy et al., 2018;Singh & Thornton, 2019).Second, research indicates the link between negative campaign messages and psychological distress (Chavez et al., 2019;Frost & Fingerhut, 2016;Niederdeppe et al., 2021;Williams & Medlock, 2017).It is plausible that higher levels of electoral competition inform candidates' decisions to level criticism against competitors (Auter & Fine, 2016;Banda, 2022;Hassell, 2021;Maier & Nai, 2022;Nai, 2020;Yan, 2022), and exposure to negative messages may have negative consequences for the psychological well-being of older adults involved in political activities.Finally, electoral competitiveness increases political uncertainty about who will win the elections (American Psychological Association, 2017) and whether elected officials represent their views and political beliefs (Hoyt et al., 2018;Krupenkin et al., 2019;Pitcho-Prelorentzos et al., 2018;Roche & Jacobson, 2019;Simchon et al., 2020), which, in turn, can adversely affect opportunities to voice opinions through political networks and further likely contribute to increased odds of depression.Given the limited data available, the present study is unable to examine these mechanisms.
Mediation analyses revealed that the level of electoral competition mediated the effect of capital city status on the likelihood of depression among older adults who reported engagement in political groups. This suggests a pathway linking capital city status, electoral competition, and mental health among older adults engaged in political groups. Through a systematic review and meta-analysis, Purtle et al. (2019) found that depression prevalence was significantly higher among older adults in urban areas. This study argues that the capital city differs from other cities in that its intense electoral competition adversely affects the mental health of older adults who participate in political groups. Our empirical investigation found that Taipei, rather than Kaohsiung, revealed conditional effects related to place of residence (Supplementary Tables 4 and 5). Future research should explore the role of the capital city in the physical and mental health of older adults.
This study has several limitations.First, additional types of social engagement (e.g., sports clubs and cultural groups) should have been employed to examine the predicted relationship.However, the TLSA data (the most comprehensive data available for social group engagement) allowed us to analyze the given group types.Second, participants were asked to indicate their involvement in political groups.There was no assessment of party type (ideological, elitebased, mass-based, democratic, antisystem, etc.) or the time spent on each political group, thus causing bias.Future research should test the validity of the proposed arguments using additional measurements.Third, no data exist for the analysis of village-and town-level electoral competition.However, the influence of elections held to elect the mayors of townships and village chiefs on mental health outcomes among older adults may be much smaller than that of elections to elect the magistrates of counties and cities.More detailed data may enable an examination of the theoretical expectations.Fourth, no data exist to ensure that respondents have the same perception of electoral outcomes across survey time points.We believe that long-term differences in the strength of party bases are voters' common perceptions, which may be reflected in their voting behavior.For example, an electoral region that has long been dominated by a pan-blue or pan-green coalition will have a lower voter turnout than one with similar party strength.Future research should use questionnaires to examine their assessments of and perspectives on the outcomes of each election.Fifth, the observed phenomenon may be linked to specific characteristics of older adults that precipitate their involvement in political groups.To avoid a spurious interrelationship, we excluded individuals with specific characteristics (e.g., living alone) that may synchronously lead to political group attendance and a greater likelihood of depression.However, it is plausible that older adults who only engage in political groups possess a strong political orientation.In such a scenario, their depression may be attributed to other factors, rather than being a result of their involvement in political groups.Future studies should identify valid instrumental variables to eliminate endogeneity.Finally, 1,879 participants who completed the questionnaire after 1996 were lost to follow-up because of death.The patients who died were more likely to represent a subpopulation.Thus, selection bias due to loss of follow-up threatens the internal validity of the estimates derived from the longitudinal data.Nevertheless, the inverse probability weighting models that accounted for the participants lost to follow-up revealed robust results.
The findings of this study contribute to insights on successful aging. Successful aging involves active engagement with life (Rowe & Kahn, 1997). Political activity may have a profound positive impact on well-being and life satisfaction (Lühr et al., 2022; Pavlova & Lühr, 2023). However, political group attendance may also be associated with poor mental health among older adults, particularly those residing in politically competitive regions. This does not imply that older adults' political rights should not be protected, because the impact of political group participation on depression depends on other forms of social group participation and on the political circumstances of their locality. Instead of encouraging older adults to abstain from politics, the government should promote nonpolitical group engagement to improve the mental health of older adults who are politically engaged and live in increasingly competitive political environments. These findings imply a dimension of successful aging: life should not be entirely about politics.
Figure 4. The effects of social group engagement and administrative divisions on the probability of depression among older adults, Taiwan, 1996-2007. AOR = adjusted odds ratio; BG = both groups; NG = no groups; NPG = nonpolitical groups; PG = political groups; Pr(Depression) = the predicted probability of depression. All results were based on random-effects panel logit models. The data points represent the mean ± standard error. Results correspond to model 1 of Supplementary Table 4 and model 2 of Supplementary Table 5. Source: the author. *p < .05. **p < .01. ***p < .001.
Table 3. The effects of social group engagement on the probability of depression among older adults, Taiwan, 1996-2007. AOR = adjusted odds ratio; BG = both groups; MG = multiple groups; NG = no groups; NPG = nonpolitical groups; PG = political groups; Pr(Depression) = the predicted probability of depression. All results were based on random-effects panel logit models. The data points represent the mean ± standard error. Results correspond to models 1 and 2 of Supplementary Table 2. Source: the author. *p < .05. **p < .01. ***p < .001. | 2024-02-01T16:38:44.303Z | 2024-01-27T00:00:00.000 | {
"year": 2024,
"sha1": "98281f3a4d1290d47860896fa571aef88d290266",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/innovateage/advance-article-pdf/doi/10.1093/geroni/igae004/56438160/igae004.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "46bcd0fa703924a7f6461c0471a728c493254588",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": []
} |
16087705 | pes2o/s2orc | v3-fos-license | Cardiac Dysfunction in the BACHD Mouse Model of Huntington’s Disease
While Huntington’s disease (HD) is classified as a neurological disorder, HD patients exhibit a high incidence of cardiovascular events leading to heart failure and death. In this study, we sought to better understand the cardiovascular phenotype of HD using the BACHD mouse model. The age-related decline in cardiovascular function was assessed by echocardiograms, electrocardiograms, histological and microarray analysis. We found that structural and functional differences between WT and BACHD hearts start at 3 months of age and continue throughout life. The aged BACHD mice develop cardiac fibrosis and ultimately apoptosis. The BACHD mice exhibited adaptive physiological changes to chronic isoproterenol treatment; however, the medication exacerbated fibrotic lesions in the heart. Gene expression analysis indicated a strong tilt toward apoptosis in the young mutant heart as well as changes in genes involved in cellular metabolism and proliferation. With age, the number of genes with altered expression increased with the large changes occurring in the cardiovascular disease, cellular metabolism, and cellular transport clusters. The BACHD model of HD exhibits a number of changes in cardiovascular function that start early in the disease progress and may provide an explanation for the higher cardiovascular risk in HD.
Introduction
Over the course of a lifetime, Huntington's disease (HD) patients are subject to a progressive neurodegenerative process that inflicts cognitive, psychiatric and motor dysfunction [1,2]. HD is caused by a CAG repeat expansion within the first exon of the huntingtin (Htt) gene and when translated, produces a polyglutamine (polyQ) repeat that leads to protein misfolding, soluble aggregates, and inclusion bodies detected throughout the body [3,4]. The normal function of the protein (HTT) is unknown; however, the mutated form leads to dysfunction of a large range of cellular processes including cytoskeletal organization, protein folding, metabolism and transcriptional activities. Cardiovascular events are a major cause of early death in the HD population and these events occur at higher rates compared to the rest of the population [5][6][7]. The limited cardiovascular studies in HD patients attribute the increased cardiovascular susceptibility in part to a dysfunctional autonomic nervous system (ANS) that can be detected early in the disease progression [8][9][10][11][12][13]. Additional studies are needed to better understand the time-course and cause of cardiovascular pathology in HD that could potentially help improve symptoms and prevent early death.
While there is no perfect animal model of HD [14], several models recapitulate aspects of the cardiovascular dysfunction seen in the human disease. For example, dysfunction of the ANS has been reported in several mouse models of HD, as measured by heart rate variability and baroreceptor reflex experiments [15][16][17][18]. Reduced contractility and cardiac output are also common features in mouse models [15][16][19][20][21]. The cardiac-specific expression of polyQ repeats leads to dysfunction, suggesting that cardiovascular disease is a result of both local dysfunction and improper input [22,23]. Advances in these preclinical models are critical if we are not only to develop a mechanistic understanding of the cardiovascular pathology in HD but also to establish models to evaluate therapeutic interventions.
In this study, we sought to define the time course of cardiovascular symptoms in HD using the BACHD mouse model: a bacterial artificial chromosome (BAC) mediated transgenic whereby the mutant form of the full length human Htt gene with 97 stable CAG repeats was incorporated into the mouse genome [24]. Using this disease model, we evaluated the agedependent progression in heart dysfunction using echocardiograms starting at the beginning of motor symptoms at 3 mo of age and progressing to an age (15 mo) when the motor symptoms are pronounced and brain atrophy can be detected. In addition, we assessed the BACHD heart's response to chronic treatment with beta-adrenergic receptor agonist isoproterenol (ISO). Finally, we examined gene expression profiles in the heart early (3 mo) and late (15 mo) in the disease progression in the BACHD model.
Experimental Animals and Ethics Statement
BACHD mice on the C57BL6/J background, along with littermate wild-type (WT) controls, were acquired from the mouse mutant resource at The Jackson Laboratory (JAX, Bar Harbor, Maine) in a colony maintained by the CHDI Foundation. The mice start showing symptoms at 2-3 mo of age and by 12 mo manifest full symptoms. In our colony, the BACHD mice commonly live to 18 mo of age, but their health declines precipitously after 16 mo. Therefore, we stopped the present study when mice reached 15 mo of age. Five separate cohorts of mice of each genotype were used for this study: 1) longitudinal measurement of cardiac function using echocardiograms; 2) chronic treatment with isoproterenol (ISO) to boost cardiac output from 3 to 6 mo of age; 3) transcriptional comparison between the hearts of the genotypes at 3 and 15 mo of age; 4) measurement of serum cytokines in 3 mo old animals using a multiplex assay; 5) measurement of an apoptotic marker in 12-15 mo hearts by western blot. All procedures followed guidelines of the National Institutes of Health and were approved by the UCLA Animal Research Committee.
To provide some details, the health of the mice was monitored daily. We looked for a set of symptoms including restlessness, impaired mobility, licking or guarding wound, failure to groom, open sores, loss of appetite or weight loss. At any sign of ill health, a member of the veterinary staff was consulted for course of treatment. If the animal did not improve, then it would be humanely sacrificed with an overdose of isoflurane and followed by decapitation. For pain management, the mice were given carprofen at 5 mg/kg every 24 hrs for 48 hr minimum. The first dose was given before surgery, and the remaining post-operatively.
Long-Term Assessment of the Heart in WT and BACHD Mice
BACHD mice (n = 10) and WT (n = 10) were group housed and kept in a 12:12 light/dark (LD) cycle with rodent chow provided ad libitum. Mice were examined with echocardiograms at 3, 6, 9, 12 and 15 mo of age. After the last echocardiogram measure, the mice were perfused and morphologic and histological measurements were taken. During the course of the study, 1 BACHD and 2 WT mice died and the data from these mice were excluded from the analysis.
Echocardiograms
Echocardiograms were obtained using a Siemens Acuson Sequoia C256 instrument equipped with a 15L8 15 MHz probe (Siemens Medical Solutions, Mountain View, CA) as previously described [16]. Briefly, two-dimensional, M-mode echocardiography and spectral Doppler images enabled measurement of heart dimensions and function: left ventricular (Lv) mass, end-diastolic dimension (EDD), end-systolic dimension (ESD), posterior wall thickness (PWT), ventricular septal thickness (VST), and Lv ejection fraction (Lv EF). The mice were sedated with 1% isoflurane vaporized in oxygen (Summit Anesthesia Solutions, Bend, OR), and HR was monitored using an electrocardiogram to maintain physiological levels (between 450 and 650 bpm).
Isoproterenol (ISO) Treatment
BACHD and WT mice (2-3 mo) were subjected to echocardiograms before subcutaneous implantation of an osmotic pump (Alzet model 2001, Durect Corp, Cupertino, CA) containing either saline (WT: n = 4; BACHD: n = 4) or ISO (WT: n = 10; BACHD: n = 8). To determine an effective dose of isoproterenol, each treated mouse received sequentially increased doses of ISO, beginning with 0.24 mg/day, then 0.48 mg/day, and finally 0.97 mg/day. This final dose proved effective at significantly increasing HR and was used in this study. At the end of the treatments (3 months in duration), cardiovascular function was assessed using electrocardiogram recordings and echocardiography. After the last echocardiogram measurement, the mice were perfused, and morphologic and histological measurements were taken.
Electrocardiogram (ECG) Measurements
ECG traces were obtained under isoflurane anesthesia by inserting two platinum needle electrodes (Grass Technologies, West Warwick, RI) under the skin in the lead II configuration. The ECG signal was amplified (Grass Technologies), acquired and analyzed with HEM V4.2 software (Notocord Systems, Croissy sur Seine, France). The ECG data were recorded for 5 to 10 mins from each mouse monthly for the duration of the study. Heart rate (HR) was calculated using the RR-interval for all animals.
Morphometry and Histology
Following the longitudinal echocardiogram measurements as well as the ISO treatments (described above), animals were deeply anaesthetized using isoflurane and perfused with phosphate-buffered saline (PBS, pH 7.4) containing heparin (2 units/ml, Henry Schein, Melville, NY) followed by 4% (w/v) paraformaldehyde (Sigma-Aldrich) in PBS (pH 7.4). A motorized pump was used to deliver the solutions in order to maintain similar pressure between animals. Hearts were dissected, weighed, and post-fixed at 4°C. Tibia length (TL) was also measured. Heart weight (HW)/TL and HW/body weight (BW) ratios were calculated to detect any differences in morphometry that would indicate cardiac hypertrophy. Following a series of dehydration steps using ethanol and xylene, hearts were embedded in paraffin. Paraffin-embedded hearts were sectioned and stained with H&E or Masson's Trichrome. Heart dimensions were measured on two mid-ventricular cross sections from each H&E- and/or Masson's Trichrome-stained heart with the aid of ImageJ software. The presence or absence of fibrotic staining was visually scored by two researchers masked to the experimental conditions. To quantify the degree of fibrosis, images were acquired from each heart. Areas of fibrosis were captured, and in sections lacking positive fibrotic staining, an area of the left ventricle (Lv) wall was imaged. Pictures were processed using Corel Draw to remove background pixels and pink cardiomyocyte tissue, leaving areas of positive blue stain in the image. The integrated density of these processed images was measured using ImageJ, where pixel areas smaller than 3 pixels were excluded from the measurements to minimize noise. The resulting density values were divided by the area of the heart tissue in the image.
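For readers who wish to reproduce the fibrosis quantification, a minimal sketch of the integrated-density calculation is given below. It assumes the images have already been processed so that only positive blue stain remains; the pixel-area threshold mirrors the description above, but the function names and the toy example are our own, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

def fibrosis_density(stain_mask, intensity, tissue_area_px, min_blob_px=3):
    """Integrated density of positive (blue) staining per unit tissue area.

    stain_mask     : boolean array, True where positive trichrome stain remains
                     after background/cardiomyocyte pixels were removed
    intensity      : grayscale intensity of the processed image
    tissue_area_px : area of heart tissue in the field, in pixels
    min_blob_px    : connected regions smaller than this are treated as noise
    """
    labels, n = ndimage.label(stain_mask)
    sizes = ndimage.sum(stain_mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_blob_px))
    integrated_density = float(intensity[keep].sum())
    return integrated_density / tissue_area_px

# Toy example: a 100x100 field with one 5x5 stained patch
img = np.zeros((100, 100)); img[10:15, 10:15] = 200.0
mask = img > 0
print(fibrosis_density(mask, img, tissue_area_px=mask.size))
```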
Trichrome-stained sections were used to estimate the cardiomyocyte cross-sectional area in aged (15 mo) WT and BACHD mice (n = 4 per group). Measurements were performed by three observers masked to the genotype of the mice. Images from multiple fields (3-5) at the level of the papillary muscles were acquired on a Zeiss Axioskop with an Axiocam using the AxioVision software (Zeiss, Pleasanton, CA, USA), and measurements (in μm) were obtained using the AxioVision software. Only cells with a well-defined round shape were considered. The cross-sectional areas of 7-12 cells/animal were averaged and analyzed for statistical differences.
RNA Extraction
A separate cohort of young (3 mo) and older (15 mo) WT and BACHD mice (n = 4 per group) was used for gene expression analysis. Total RNA from the ventricles was extracted using the TRIzol RNA isolation protocol (Life Technologies, Carlsbad, CA), then DNase-treated (TURBO DNA-free kit, Life Technologies) and column-purified (PureLink RNA Mini Kit, Life Technologies). RNA was used for the microarray and quantitative real-time PCR experiments described below.
Microarray Hybridization
Microarray processing was performed by the Southern California Genotyping Consortium at UCLA. Briefly, whole-genome gene expression profiling began with 100 ng of total RNA isolated as described above. Samples were quantitated using the Ribogreen fluorescent assay and normalized to 10 ng/μl prior to amplification. Amplified and labeled cRNA was produced using the Illumina-specific Ambion TotalPrep kit. First- and second-strand cDNA was produced using the Ambion kit (Life Technologies) and purified using a robotically assisted magnetic capture step. Biotinylated cRNA was produced from the cDNA template in a reverse transcription reaction. After a second Ribogreen quantitation and normalization step, amplified and labeled cRNA was hybridized overnight at 58°C to Illumina MouseRef-8 v2.0 expression arrays. Hybridization was followed by washing, blocking, staining, and drying on a Little Dipper processor. Array chips were scanned using an iScan reader, and expression data were extracted and compiled using BeadStudio software (Illumina, San Diego, CA).
Quantitative Real-Time PCR
One μg of total RNA was reverse transcribed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA). Quantitative real-time RT-PCR was performed using SYBR Green Mix (Applied Biosystems, Foster City, CA). Using Primer3 [25], primers were designed to flank intron-exon boundaries, including those for Hspa1a (sense: 5'-GGT CTC AAG GGC AAG CTC AG-3' and anti-sense: 5'-CTT GTG CAC GAA CTC CTC CT-3') and Nppb. Specificity and efficiency of the reactions were verified using melting curves and dilution series, respectively. Expression levels of Hspa1a, Nppb, Kcnip2, and Acot1 were calculated using the 2^-ΔΔCt method, with Tbp and B2m used as normalizing genes. Tbp and B2m did not vary between genotypes, and we report gene expression normalized to Tbp.
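A minimal sketch of the 2^-ΔΔCt calculation described above is shown below; the Ct values in the example are purely illustrative and the function name is our own.

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_target, ct_ref           : Ct of the target and of the normalizing gene
                                  (e.g. Tbp) in the sample of interest
    ct_target_ctrl, ct_ref_ctrl : the same quantities in the calibrator sample
                                  (e.g. a WT heart)
    """
    dct_sample = ct_target - ct_ref            # normalize to the reference gene
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    ddct = dct_sample - dct_ctrl               # normalize to the calibrator
    return 2.0 ** (-ddct)

# Illustrative Ct values only (not data from this study)
print(round(ddct_fold_change(22.0, 24.0, 24.0, 24.5), 2))  # about 2.8-fold up
```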
Multiplex Assay
Blood samples were collected from a separate cohort of young (3-4 mo) WT (n = 8) and BACHD (n = 8) mice using the facial vein technique. Blood was allowed to clot for at least 30 min at room temperature, and serum was collected following centrifugation at 3000 rpm for 15 min and frozen at -80°C until analyzed. A volume of 25 μL of serum was used to assess levels of the cytokines IFNγ, IL-10, IL-12, IL-1A, IL-1B, IL-2, MIP1A, RANTES, IL-6, MCP-1, and TNF-α using the Millipore cytokine Milliplex kit (EMD Millipore Corporation, Billerica, MA, USA) and the Bio-Rad Bio-Plex 200 system (Hercules, CA, USA).
Western Blot
Hearts from aged WT (n = 4) and BACHD (n = 4) mice were rapidly dissected and homogenized in lysis buffer containing 50 mM Tris/HCl, 0.25% (w/v) DOC (sodium deoxycholate), 150 mM NaCl, 1 mM EDTA, 1 nM EGTA, 1% (w/v) Nonidet P40, 1 mM Na3VO4 (sodium orthovanadate), 1 mM AEBSF [4-(2-aminoethyl)benzenesulfonyl fluoride], 10 μg/ml aprotinin, 10 μg/ml leupeptin, 10 μg/ml pepstatin, and 4 μM sodium fluoride. Total protein concentration in cleared extracts was estimated using the Thermo Scientific™ Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA). Fifty μg of total protein was loaded onto a 4-20% Tris-glycine gel (Invitrogen, Carlsbad, CA). Western blots were performed as previously described [26]. Equal protein loading was verified by reversible staining of the blots with Ponceau S solution (Sigma, Saint Louis, MO). Each extract was analyzed for the relative protein levels of heat-shock protein (Hsp)-70, using a mouse monoclonal antibody (Sigma, Saint Louis, MO), and cleaved caspase-3, using rabbit polyclonal antibodies (EMD Millipore Corporation, Billerica, MA & Cell Signaling, Danvers, MA). Each extract was also analyzed for relative protein levels of GAPDH (Genetex, Irvine, CA) by stripping and re-probing. Protein bands were detected by chemiluminescence using the Thermo Scientific™ Pierce™ ECL 2 Western Blotting Substrate or the Amersham ECL kit (GE Healthcare; Piscataway, NJ) with HRP (horseradish peroxidase)-conjugated secondary antibodies (Cell Signaling, Danvers, MA). Relative intensities of the protein bands were quantified by scanning densitometry using the NIH ImageJ software (http://rsb.info.nih.gov/ij/). For the comparison of relative protein levels in hearts from WT and BACHD animals, each background-corrected value was normalized to the relative GAPDH level of the sample and then referred to the average of the WT values calculated from the same immunoblot image.
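The densitometry normalization described above (background-corrected band intensity divided by GAPDH, then referred to the WT average from the same blot) can be sketched as follows; the band and GAPDH values are toy numbers, not data from this study.

```python
import numpy as np

def relative_protein_levels(band, gapdh, is_wt):
    """Normalize densitometry values to GAPDH, then to the mean of the WT lanes.

    band, gapdh : background-corrected band intensities per lane
    is_wt       : boolean array marking lanes from WT animals
    """
    band, gapdh = np.asarray(band, float), np.asarray(gapdh, float)
    loading_corrected = band / gapdh
    return loading_corrected / loading_corrected[is_wt].mean()

# Toy densities for 4 WT and 4 BACHD lanes (arbitrary units)
band  = np.array([10., 12., 11., 9., 19., 21., 18., 20.])
gapdh = np.array([5.,  6.,  5.5, 4.5, 5.,  5.5, 5.,  5.2])
is_wt = np.array([True] * 4 + [False] * 4)
rel = relative_protein_levels(band, gapdh, is_wt)
print(rel[~is_wt].mean() / rel[is_wt].mean())  # fold change vs WT (about 1.9 here)
```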
Statistical Analysis
A two-way repeated measures ANOVA was used to analyze the long-term changes in echocardiographic measurements between WT and BACHD animals using genotype and age as factors. Other comparisons between the two genotypes were made using a Student's t-test. If the data set failed to pass a test for equal variance, a Rank-Sum Test was used. These statistical analyses were performed using the SigmaStat (San Jose, CA) or Prism 5 (GraphPad Software, San Diego, CA) statistical software. Data are shown as the mean ± SEM.
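As an illustration of the pairwise genotype comparisons described above (Student's t-test, with a rank-sum fallback when the equal-variance assumption fails), a minimal SciPy sketch is given below. It does not reproduce the two-way repeated measures ANOVA, and the example values are illustrative only.

```python
import numpy as np
from scipy import stats

def compare_genotypes(wt, bachd, alpha=0.05):
    """Student's t-test, falling back to a Mann-Whitney rank-sum test
    when Levene's test rejects equality of variances."""
    wt, bachd = np.asarray(wt, float), np.asarray(bachd, float)
    equal_var_p = stats.levene(wt, bachd).pvalue
    if equal_var_p < alpha:                       # unequal variances: rank-sum
        res = stats.mannwhitneyu(wt, bachd, alternative="two-sided")
        return "rank-sum", res.pvalue
    res = stats.ttest_ind(wt, bachd)              # equal-variance t-test
    return "t-test", res.pvalue

# Illustrative ejection-fraction values (not data from this study)
wt_ef    = [78, 75, 80, 77, 74, 79]
bachd_ef = [68, 65, 70, 66, 71, 64]
print(compare_genotypes(wt_ef, bachd_ef))
```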
Raw microarray data were analyzed using Bioconductor packages as previously described [27]. Briefly, quality assessment was performed by examining the inter-array Pearson correlation and clustering based on the top variant genes. Contrast analysis of differential expression was performed using the LIMMA (Linear Models for Microarray and RNA-Seq Data) package [28]. After linear model fitting, a Bayesian estimate of differential expression was calculated, and the P-value threshold was set at 0.005. Fold changes in gene expression were calculated by comparing the signal intensities of young and old hearts. Differentially expressed genes were then classified according to gene ontology and pathways using Ingenuity Pathway Analysis (IPA, Ingenuity Systems, Redwood City, CA). After applying the threshold of P = 0.005, the top 10 genes up-regulated and down-regulated in BACHD compared with WT at the same age were selected according to their P-values; that is, they are the most significantly changed genes in each comparison.
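A minimal sketch of the post-LIMMA selection step (threshold at P = 0.005, then the 10 most significant up- and down-regulated genes) is shown below. The column names and the numeric values are hypothetical, and the moderated statistics themselves would come from a LIMMA analysis in R, which is not reproduced here.

```python
import pandas as pd

def top_genes(de_table, p_cut=0.005, n=10):
    """Select the most significant up- and down-regulated genes.

    de_table : one row per gene with hypothetical columns 'gene',
               'logFC' (BACHD vs WT) and 'p_value', e.g. exported
               from a LIMMA moderated-t analysis
    """
    sig = de_table[de_table["p_value"] < p_cut]
    up = sig[sig["logFC"] > 0].nsmallest(n, "p_value")
    down = sig[sig["logFC"] < 0].nsmallest(n, "p_value")
    return up, down

# Toy table with made-up values for genes mentioned in the text
df = pd.DataFrame({
    "gene": ["Hspa1a", "Nppb", "Kcnip2", "Acot1"],
    "logFC": [1.2, 0.8, -0.9, -0.7],
    "p_value": [0.0004, 0.001, 0.0007, 0.003],
})
up, down = top_genes(df)
print(list(up["gene"]), list(down["gene"]))
```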
Structural and Functional Differences between WT and BACHD Hearts Start at 3 mo of Age
We assessed cardiac structure and function in a group of BACHD and WT mice throughout most of their adult life in order to follow changes in cardiovascular parameters and establish a time course for pathology. We used echocardiography to follow the same BACHD and WT littermate control mice longitudinally from 3 to 15 mo (Fig 1; Table 1). The hearts of WT mice slowly increased in dimension as they aged, with significant increases in EDD and PWT starting at 12 mo, while Lv Mass and ESD were significantly larger at 15 mo (Fig 1A and 1B; Table 1). Cardiac function as measured by Lv EF showed an increase between 3 and 6 mo, as is typical of the development of WT mice, and then declined with age (Fig 1C). The hearts of BACHD mice also increased in size as they aged, with significant increases in Lv Mass and PWT starting at 9 mo of age (Fig 1A; Table 1). Cardiac function of BACHD mice did not significantly change with age (Fig 1C; Table 1). The hearts of BACHD mice were larger (ESD and EDD) and thicker (PWT) than those of WT mice starting at 3 mo of age (Fig 1B; Table 1). Lv Mass was higher in BACHD than in WT mice, with significant differences starting at 9 mo (Fig 1A). Functional measurements (Lv EF) were depressed in BACHD mice compared with WT starting at 3 mo of age and remained lower at all ages (Fig 1C). Throughout adult life, the hearts of BACHD mice were larger and functionally depressed compared with those of WT mice.
Aged BACHD Mice Develop Cardiac Fibrosis
To further assess the cardiac structure of WT and BACHD hearts, we sacrificed the mice at about 15 mo of age and processed their hearts with Masson's Trichrome staining to visualize fibrotic tissue or with H&E staining. We did not detect any fibrotic staining in the WT hearts, while positive staining was present in 6 of the 9 BACHD mice (Fig 2A). Optical density measurements of fibrotic staining were significantly higher in BACHD mice compared with WT (Fig 2B; Table 2). BACHD hearts displayed an enlarged Lv (Fig 2C). Although these values did not reach significance, increases in Lv diameter (19%) and septal thickness (20%) were observed in BACHD compared with WT, whilst no difference in right ventricle dimensions was found between genotypes (data not shown). Furthermore, no changes were found in the cross-sectional area of the BACHD cardiomyocytes (103 ± 15 μm²; n = 4) as compared with WT (121 ± 13 μm², n = 4). In agreement with life-long abnormalities in the BACHD hearts, the morphometric measurements (HW, HW/TL and HW/BW ratios) were all larger in the BACHD mice, although these differences were not significant due to the high variability in the mutant mice (Fig 2D; Table 2). In summary, at 15 mo of age, morphometric analysis showed a trend toward hypertrophy and a significant increase in fibrotic lesions in the hearts of BACHD mice.
Hearts of BACHD Mice Displayed Structural and Functional Adaptation to Chronic β-Adrenergic Treatment
In previous work, we have shown that BACHD mice have an abnormally weak baroreceptor response to decreases in blood pressure, consistent with weak sympathetic regulation of heart rate [16]. Therefore, we chronically treated 3 mo old BACHD and WT mice with ISO to determine whether there were genotypic differences in the response to this stimulant. By activating beta-adrenergic receptors in the heart, ISO increases heart rate and mimics the action of the sympathetic nervous system. Before drug treatment, there was no difference in baseline HR (WT: 457 ± 21 bpm; BACHD: 498 ± 16 bpm). ISO increased the HR of both WT (612 ± 11 bpm) and BACHD (621 ± 16 bpm) animals, indicating that it had the desired effect of stimulating β-adrenergic pathways (Fig 3A and 3B). Saline treatment did not alter the HR in either genotype (Fig 3C). The magnitude of the changes in HR post-treatment was not significantly different between WT and BACHD mice, suggesting similar drug responsiveness between genotypes (Fig 3C). The ST-segments of the waveforms were elevated in ISO-treated WT and BACHD mice, suggesting that the hearts were under ischemic conditions as a result of the cardiovascular challenge. The Lv EF was significantly reduced in the BACHD animals compared with WT controls (Table 3). In summary, ECG measurements suggest that both WT and BACHD mice responded to ISO treatment, as measured by increases in HR and ST-segment voltages. The Lv EF was reduced in the mutants, but otherwise the echocardiograms did not find significant differences in the response to the ISO treatment between the genotypes. These results suggest that BACHD mice adapt to the cardiovascular challenge and that their beta-adrenergic receptors are functioning properly.
Chronic β-Adrenergic Treatment Exacerbated Fibrotic Lesions in the Hearts of BACHD Mice
Following the 3 months of chronic ISO treatment, we sacrificed the mice (6 mo) and processed their hearts with Masson's Trichrome staining to visualize fibrotic tissue or with H&E staining. Masson's Trichrome staining of WT and BACHD hearts treated with saline did not reveal any positive fibrotic staining (Fig 4A). In WT mice, the ISO treatment caused positive focal lesions in 1 heart; however, 6 of 8 BACHD mice had positive fibrotic staining. Optical density measures of fibrotic staining revealed a significant increase in the fibrotic regions in ISO-treated BACHD mice compared with saline-treated BACHD mice (Fig 4B; Table 4), as well as compared with ISO-treated WT.
Morphometric measurements detected heavier body weights (BW) in BACHD mice treated with saline compared with WT (WT: 29.4 ± 1.4 g; BACHD: 34.4 ± 1.4 g; P<0.05). ISO treatment eliminated the difference in BW between genotypes (Table 4). HW was also higher in BACHD mice than in WT, and ISO treatment elicited an increase in heart weight in both WT and BACHD mice, although the increase was larger in the BACHD. HW/BW (Fig 4D) and HW/TL ratios were not different between saline-treated WT and BACHD animals (Table 4). Following ISO treatment, HW/BW was higher in ISO-treated BACHD mice compared with saline-treated BACHD animals (Fig 4D), but was not different when compared with ISO-treated WT (Fig 4D, Table 4).
In line with the results in Table 1, we found that young WT and BACHD animals exhibited differences in heart dimensions at 6 mo, providing further evidence for the early structural and functional anomalies reported above. In BACHD mice, both the Lv diameter and area were significantly smaller than in WT, while the Lv and septal walls were thicker (Table 5). ISO treatment increased the Lv diameters and areas in both genotypes compared with saline-treated animals, with a greater effect in the mutant hearts (Table 5). The septal and Lv wall thickness remained significantly greater in BACHD animals (Fig 4C). No differences were seen in the right ventricle (data not shown). Notably, ISO treatment elicited significant changes in the cross-sectional area of BACHD cardiomyocytes (139.9 ± 16.1 μm², n = 5, P<0.05) as compared with WT (93.0 ± 14.4 μm², n = 4).
In summary, histological examination suggests that both WT and BACHD hearts responded adaptively to ISO treatment. BACHD hearts exhibit susceptibility to developing fibrotic lesions when subject to cardiovascular challenge. Again, the data indicates that the BACHD mice develop structural abnormalities early in the disease progression and life-long hypertrophy.
Microarray Analysis Indicates Gene Expression Changes in Apoptosis, Proliferation, Metabolism, and Immune Function in the Hearts of Young BACHD Mice
In order to begin to identify pathways that may be responsible for the decreased cardiovascular function in the BACHD animals, we assessed gene expression in the ventricles of young (3 mo) and older (15 mo) BACHD and WT mice using microarrays. Early in the disease progression, a total of 105 genes (50 up-regulated and 55 down-regulated) were moderately but significantly changed. The top 10 genes whose expression varied between the two genotypes at each age are shown in Table 5. Biologically significant pathways with altered gene expression in the ventricles of the BACHD mice included metabolism (monosaccharide and fatty acid), cytoskeletal organization, immune dysfunction, and apoptosis (Fig 5A and 5B). A closer look at the expression patterns of these genes suggested that a majority of the changes in expression appear to promote apoptosis in the hearts of young BACHD mice. In fact, genes known to inhibit apoptosis were down-regulated (Dpp7, Yap1, Mns221), while genes that promote apoptosis were up-regulated (Naa35, Irf5, Sgk3, Rbm3, Bcl2l12, Ncoa3, Lcmt1, Fanc1, Rbpj, Cd22, Apex1, Rpgrip1). We also found significant changes in the expression of a number of genes implicated in cardiovascular disease at 15 mo (Fig 5; Table 6). At this age, a total of 254 genes (149 up-regulated and 105 down-regulated) were significantly changed between the genotypes. Pathway analysis using the IPA program indicates that the largest changes occurred in the cardiovascular disease, cellular metabolism, gene expression, molecular transport, and proliferation clusters (Fig 5B). It is worth emphasizing that we compared the HD mutant to age-matched control tissue; hence, aging alone was a major regulator of gene expression patterns. A comparison between young and old WT tissue indicated 806 genes whose expression was significantly changed with age.
PCR Measurements Found Gene Expression Changes in the Hearts of Young BACHD Mice
We further examined transcriptional differences by measuring the expression of 4 genes in the ventricles of each genotype using quantitative real-time PCR (Fig 6A-6D). We chose 2 genes (Hspa1a, Nppb) that were shown by the microarray analysis to be up-regulated and 2 genes (Kcnip2, Acot1) whose expression was shown to be down-regulated. All of these genes have previously been shown to be involved in cardiomyopathy. In each case, the PCR measurements were in agreement with the microarray results (Fig 6A-6D).
Altered Inflammation in Young BACHD Mice
The microarray results suggested that BACHD mice have an altered inflammatory response, with up-regulated genes such as Tnfrsf25 and Adam9. In order to confirm this finding, we measured the levels of immune factors in the serum of young (2-3 mo) BACHD and WT mice using a multiplex assay. At baseline, without immune challenge, there was a significant decrease in the levels of the CC chemokine RANTES (Regulated on Activation, Normal T Cell Expressed and Secreted) as well as a significant increase in the levels of the pro-inflammatory cytokine interleukin (IL)-6 (Fig 7A and 7B). We found no changes in IL-1α levels between genotypes (Fig 7C), while several other cytokines (TNF-α, IL-10, IL-12, IL-1β, IL-2, and IFN-γ) and the chemokines MIP1α (macrophage inflammatory protein 1 alpha) and MCP-1 (monocyte chemoattractant protein-1) were undetectable. The changes in the levels of IL-6 and RANTES are consistent with the microarray results in suggesting an altered inflammatory state in young BACHD mice.
Increased Caspase 3 Levels in Aged BACHD Mice
One of the most populated clusters of genes altered in the hearts of both young and old BACHD mice was the one involved in apoptosis. Therefore, we measured the protein levels of the cell death effector caspase-3, in whole tissue lysate from aged BACHD and WT hearts. The levels of the activated form of this protein were significantly increased (80%, Fig 8A) in the BACHD mice as compared to WT. We also found a smaller (17%, Fig 8B) but significant increase in expression of HSP70. These changes in protein are consistent with the microarray analysis in showing increases in genes involved in cellular stress and apoptosis in the BACHD heart.
Discussion
The echocardiographic measurements performed on BACHD mice showed changes in heart structure and function at an early age (Fig 1). At this time point (3 mo), motor deficits [24] and circadian rhythm dysfunction [15] are just beginning; thus, the cardiovascular deficits occur very early in the disease progression in this model. Though the slope of the changes was similar between aging WT and BACHD animals, the ejection fraction (EF) of 15 mo old BACHD mice was close to or below 55%, which is the threshold between normal heart function and failure. We suspect that function would continue to weaken with age, which poses a challenge for the cardiovascular system, leaving BACHD mice more susceptible to serious cardiac events. The presence of cardiac fibrosis in 15 mo old BACHD mice (Fig 2) also provides evidence for cardiac stress that feeds forward to further impair heart function. WT mice show minimal fibrotic lesions at this age. The focal fibrotic lesions in the BACHD would be expected to alter wall motion synchrony and could explain the reduced EF. Evidence for fibrosis has also been found in the R6/2 line [23]. The presence of fibrosis could also increase the likelihood of arrhythmias [29], although we did not observe any in our recordings. Though the incidence of arrhythmias has not yet been examined in HD patients, other lines of evidence suggest increased susceptibility, including altered ANS input to the heart [11,30]. ANS dysfunction is consistent among a number of HD animal models, including the R6/1 [17] and the R6/2 models [18]. In the BACHD model, we have found significant alterations in the baroreceptor reflex and a decline in heart-rate variability that indicate dysfunctional ANS outflow [16]. With this work, we have provided evidence that the BACHD line recapitulates the cardiovascular dysfunction seen in HD patients and have established an age-dependent progression by which we can judge the impact of therapeutic interventions. Despite the differences in structural and functional measurements, the hearts of BACHD mice appear, at least in the short term, to adapt normally to increased cardiovascular challenge. Isoproterenol is a medication used for the treatment of bradycardia (slow heart rate) and heart block. By activating beta-adrenergic receptors in the heart, it increases heart rate and mimics the action of the sympathetic nervous system. The response to isoproterenol, as measured by echocardiography, was not different between WT and BACHD mice (Fig 3). The chronic β-adrenergic stimulation even caused the mutant mice to lose weight and move into age-appropriate body weights (Table 4). However, upon histological assessment, this adaptation came at a cost, i.e., the development of fibrosis in the BACHD hearts (Fig 4). WT mice subjected to similar treatment did not develop cardiac fibrosis, indicating that the cardiac state in BACHD mice is susceptible to aberrant changes and scarring when presented with this challenge.
Finally, to better define the transcriptional state of the BACHD hearts, we used microarrays to compare gene expression in young (3 mo) and older (15 mo) hearts to determine whether we could detect an HD "signature" in the ventricles (Fig 5; Table 6). The results showed alterations in pathways and processes that are broadly consistent with the results of profiling studies of other tissues [31][32][33]. For example, genes with altered expression in the heart of BACHD mice (Hspa1a, P4ha1, Tpd52l2, Basp1, Hadh, Baiap2, Fgf13, Zfp932, Hbp1) were previously characterized as playing an important role in HD pathology in other tissues. Altered pathways in the heart that were consistent with the HD signature include protein processing and ubiquitination, metabolic processes, transcriptional activities, and cytoskeletal functions. Importantly, we also detected alterations in cardiac-relevant processes, including gene networks associated with cardiovascular development and disease (Fig 5; Table 7). Among other findings, our microarray and PCR (Fig 6) results indicated that Kcnip2 levels were significantly decreased at 3 mo of age. Kcnip2 is a voltage-gated potassium channel cofactor known to alter the electrical properties of the heart, and altered expression levels can increase susceptibility to arrhythmias [34,35].
In the young BACHD heart, microarray analysis revealed increased expression of Nppb, also known as brain natriuretic peptide. Studies have suggested that increased levels of Nppb are related to a higher risk of myocardial fibrosis and altered ventricular geometry [36,37]. In addition, the altered expression of genes encoding heat-shock proteins, family A and B (Hspa1a, Hspb2), and lysosomal enzymes (Manba) suggests protein misfolding and cellular stress in young hearts. Recent evidence supports a role for Hsp70 in heart hypertrophy [38]; consistently, we found elevated levels of this protein in aged mutant hearts (Fig 8B). We found that the total number of genes changed was higher in the aged hearts: 105 genes changed in the young comparison versus 254 genes in the old comparison. It is worth emphasizing that aging itself might dampen the HD signature in the older hearts (15 mo of age). Yet we found that Acot1 is down-regulated in the old hearts of BACHD mice, and a decrease in Acot1 expression is associated with cardiac myopathy and damage [39]. Therefore, our microarray data support the assertion that the hearts of BACHD mice are at higher risk of fibrosis and arrhythmia from a young age, and that this contributes to increased susceptibility to cardiac dysfunction with age. The microarray analysis also found altered transcription of a number of genes involved in the inflammatory response, which led us to measure cytokine levels in the serum of BACHD mice. In parallel with the microarray results, we found increased IL-6 levels as well as decreased RANTES in the serum of non-stimulated BACHD mice early in the disease progression (Fig 7). IL-6 is a pro-inflammatory cytokine that has been implicated in cardiac remodeling by inducing Lv hypertrophy and increasing collagen deposits [40] and therefore could contribute to the cardiac pathology in BACHD mice. Increased levels of IL-6 have been detected in the plasma and brain tissue of HD patients [41,42], with indications that the dysregulation begins during pre-symptomatic stages of the disease [43]. Consistent with our results, RANTES/CCL5 secretion from astrocytes of R6/2 mice is also reduced. The effects of altered serum RANTES levels on cardiovascular health continue to be explored, but there is evidence that reduced RANTES is associated with myocardial infarction, atherosclerosis, and cardiac mortality [44] and therefore could also contribute to the cardiac pathology in HD.
Compared with age-matched hearts of WT controls, our microarray analysis revealed down-regulation of genes that suppress the apoptotic pathway and up-regulation of genes that promote it in both young and aged hearts of the BACHD mice (Table 8). The findings from the young heart were intriguing. For example, as early as 3 mo of age, the expression of Bcl2l12 was down-regulated while the expression of Tnfrsf25 and Irf5 was up-regulated in BACHD hearts. Bcl2l12 is an anti-apoptotic factor that acts as an inhibitor of caspases 3 and 7 in the cytoplasm. In the nucleus, it binds to the tumor suppressor p53, preventing its association with target genes. Overexpression of this gene has been detected in a number of different cancers. On the other hand, Tnfrsf25 is a member of the TNF-receptor superfamily. This receptor has been shown to stimulate NF-kappa B activity and regulate cell apoptosis; it is involved in TNF-mediated signaling and promotes apoptosis. Similarly, Irf5 is a member of the interferon regulatory factor family, a group of transcription factors with diverse roles, including virus-mediated activation of interferon and modulation of cell growth, differentiation, apoptosis, and immune system activity. In short, our microarray findings support the idea that an enhanced apoptotic process in the young BACHD heart may increase the susceptibility to fibrosis in BACHD mice. Western blot data confirmed that the levels of the apoptotic marker caspase-3 were indeed elevated in the BACHD heart (Fig 8A).
Many studies on HD are focused on the pathology in the brain, but it is becoming clear that the heart is highly susceptible to the effects of mHTT. Like the brain, the heart is a highly metabolic organ and our microarray studies suggest alterations in metabolic pathways in the hearts of BACHD mice. Mismatches in energy supply and demand can lead to ROS generation, cardiomyocyte damage and heart failure [45]. In addition, there are indications of changes in insulin sensitivity and misregulation of plasma glucose levels in HD patients [46], contributing to energy mishandling and cardiac pathology. Lastly, the state of the immune system is closely associated with the cardiac health, and the observed imbalances in immune factors may be driving pathogenesis. With HD, there are multiple insults that may be leading to cardiac pathology and each of these factors would likely need to be addressed in order to help manage cardiovascular concerns in the HD patient population. Overall, our data suggest that monitoring of the cardiovascular system in HD patients should start at an early age in order to intervene or slow down the progression of cardiovascular disease and delay or prevent early death. It is also worth considering that a compromised cardiovascular system could contribute to a decline in nervous system performance and result in cognitive dysfunction. | 2016-05-14T03:16:24.041Z | 2016-01-25T00:00:00.000 | {
"year": 2016,
"sha1": "f05367e731c1827cdab6188b765e7da5f7353d1c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0147269&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f05367e731c1827cdab6188b765e7da5f7353d1c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221267564 | pes2o/s2orc | v3-fos-license | The effect of working from home on major time allocations with a focus on food-related activities
Telecommuting has been on the rise in the U.S. and working from home may affect how workers allocate their time over the course of a day. In this paper, using a seemingly unrelated regression (SUR) framework, we examine differences in time spent in major activities between individuals who worked from home and away from home. We use data on prime working-age adults (age 25–54 years old) who participated in the 2017–18 Leave and Job Flexibilities Module of the American Time Use Survey. Results show that prime working-age American adults who worked from home during their diary day spent less time working and on personal care, but more time on leisure, sleeping, and on food production and consumption than those who worked away from home. For instance, among individuals with a spouse or partner present, those who worked from home spent 25 more minutes engaged in food production and 48 more minutes eating and drinking at home than did individuals who worked away from home, which are large relative to the sample averages of 33 and 31 min, respectively. These results show that there is important variation in the daily time allocation of workers in their prime working years and suggest in particular that working from home may allow for substantially more time to produce food and consume food at home, which may provide teleworkers with health benefits since home-produced meals tend to be lower in calories and higher in nutrients than meals prepared away from home.
Introduction
In the U.S., the first confirmed case of the Novel Coronavirus (COVID-19) occurred on January 19, 2020 (Holshue et al. 2020) and since then, COVID-19 has spread to every state in the nation. On March 16, 2020, to slow the spread of COVID-19, the federal government announced federal social distancing guidelines ("15 Days to Slow the Spread") and many U.S. jurisdictions followed with implementation of stay-at-home orders. As a result, many American workers are performing their jobs from home. According to Gallup Panel data, the percentage of U.S. workers who said they had worked from home during the COVID-19 pandemic doubled from 31 percent in mid-March to 62 percent in early April. 1 The widespread and extraordinary state of affairs caused by the pandemic raises an important question: how does the daily time allocation of Americans who work from home differ from that of Americans who work away from home? While many Americans are currently doing their jobs from home to prevent getting and spreading COVID-19, the share of American workers who regularly exert a portion of their work effort away from their offices has been on the rise for a number of years. Researchers from a consulting firm Global Workplace Analytics estimated that regular telecommuting (i.e. working from home at least half of the time) grew 115 percent from 2005 to 2015 (FlexJobs 2017). A recent survey of employers conducted by the Society for Human Resource Management (SHRM) indicated that from 2015 to 2019, ad-hoc, part-time, and full-time telecommuting grew by 23.2, 16.7, and 22.7 percent, respectively (see Fig. 1). As of April 2019, over 1 in 4 employers surveyed by the SHRM offered full-time telecommuting to their employees (SHRM 2019). All of these data are roughly in line with nationally representative data provided by the Bureau of Labor Statistics (BLS): over 2017-18, 25 percent of wage and salary workers worked from home at least occasionally (BLS 2019). Existing data clearly show that the share of Americans who occasionally or always work from home has grown over time, which has implications for how today's workforce allocates its time over the course of a day. The reasons for working remotely are varied, but many remote workers indicate that it enhances their ability to balance work and life responsibilities (BLS 2019;Owl Labs 2019). Despite the clear shift in telework participation among Americans in the modern-day labor force in recent years, there exists a lack of research that explores differences in the amount of time devoted to different activities by the location where work is performed. Naturally, the first relevant major activity is the number of work hours: do at-home workers allocate more or less time to work than do workers who supply their work effort away from home? Many American would-be office-goers are currently working from home due to greater risk of exposure to COVID-19 in workplaces and employers may be concerned about their employees "shirking from home." However, extant evidence indicates that working from home does not necessarily mean lower productivity. Bloom et al. (2015) find that, using data from a Chinese travel agency that randomly assigned workers to either work from home or in the office for nine months, home workers actually performed better by logging more minutes per shift and more calls per minute. 
Without knowing how much a given worker can produce per minute of work effort, a difference in time spent working does not necessarily translate into a productivity difference, but we argue it is still useful to learn about variation in time devoted to work by worksite and explicitly investigate this issue here using American data.
Changes in work hours in recent years have been accompanied by changes in other time allocations, especially leisure. Aguiar and Hurst (2007a) find that, from 1965 to 2004, as market work hours (home production hours) declined for men (women), leisure time rose. Given these findings, in this paper, in addition to working, we focus on activities that the vast majority of American workers report engaging in during a time diary interview of a nationally representative survey: personal care (e.g. grooming), leisure (e.g. watching TV), sleeping, food production (e.g. food preparation and presentation) and at-home eating activity. These are major activities that we expect teleworkers to consider substituting towards when they decrease at-home work time since they are easily performed in home settings.
The production and at-home consumption of food are of particular importance. These are health-promoting daily activities that are frequently performed and ubiquitous among humans (Davis 2014), but also ever-changing with work-life responsibilities. As with variation in work hours by worksite, analysis of these activities is especially timely given that Americans are currently spending more time at home and many U.S. jurisdictions have limited foodservice establishments to services (e.g. takeout) they can safely provide to consumers during the COVID-19 pandemic. Investigation into this issue is also important from a historical perspective because time spent eating and drinking among U.S. adults decreased by 5 percent from 2006-08 to 2014-16, 2 which suggests that American adults may be eating faster (Zeballos and Restrepo 2018) and such downward pressure on time spent eating could potentially have adverse health consequences. This may be especially true if the daily time American workers devote to healthy eating is shrinking in part because of rising work-related time pressure and stress (Escoto et al. 2012;Jabs and Devine 2006;Welch et al. 2009).
Slower eating speed and higher consumption of home-prepared meals have been found to be associated with a lower risk of weight gain and a higher-quality diet. For instance, previous studies have found that, among adults, eating more slowly inhibits the development of obesity (Hurst and Fukuda 2018), and eating slower is associated with a lower risk of elevated triglycerides (Paz-Graniel et al. 2019). Hamermesh (2010) and Zeballos and Restrepo (2018) show that there is an inverse relationship between time spent eating and body mass index (BMI), which suggests that the amount of time people spend eating may play a role in obesity. Studies have also found that, among adults, a higher frequency of at-home cooking is associated with higher intake of fruits and vegetables (Monsivais et al. 2014), lower consumption of energy, fat, and sugar (Wolfson and Bleich 2015), and a higher Healthy Eating Index (HEI) score (Wolfson et al. 2020). 3 Consistent with these latter results, Todd et al. (2010) estimated that substituting a meal prepared away from home (FAFH) for a meal prepared at home (FAH) increases total daily intake among adults by about 134 calories-or about 7 percent for those on a 2000-calorie-per-day diet-and lowers diet quality as measured by the HEI. While this area of research suggests that a diet consisting of mainly home-produced meals is healthier, the time cost of preparing FAH relative to the total cost of food has been estimated to be nontrivial in magnitude (Davis and You 2010;Raschke 2012;You and Davis 2019). For timeconstrained individuals, cheap convenience foods and ready-to-eat meals may be appealing alternatives, especially when non-labor time is scarce.
Labor market circumstances and opportunities have the potential to produce heterogeneity in the allocation of time devoted to nonmarket activities, including FAH production and consumption. In a recent review of the literature on food production and consumption, Davis (2014) noted that scholars have consistently found that an increase in the opportunity cost of time causes a substitution away from FAH toward FAFH. For example, over periods of time when labor market opportunities for women increased, a rise in women's employment was followed by a fall in home cooking by women (Nayga 1996;Kohara and Kamiya 2016;Etilé and Plessz 2018). Labor supply increases among men have also been found to reduce the amount of time they allocate to at-home meal preparation (Dunn 2015). 4 The previous literature has focused on how labor supply affects FAH production, but it is currently unknown how a worker's place of work may affect time devoted to food-related activities. Working from home may allow for greater time for FAH production and consumption during work breaks and lunch breaks. It is also possible that teleworking prevents workplace distractions, enhances worker productivity, and thereby frees up more time for FAH production and consumption. This paper uses data from the 2017-18 Leave and Flexibilities Job Module of the American Time Use Survey to contribute to the literature by analyzing variation in the amount of time prime working-age adults (25-54 years old) American workers spend engaged in major activities by the location the ATUS respondents reported working the day before their time diary, namely: (i) those who are telework-eligible and worked only from home the day before their time-use interview (i.e., worked from home) and (ii) those who worked only in their office or somewhere else the day before their daily time-use interview (i.e., worked away from home). The six major activities we analyze are grouped into three categories: (1) labor market work, (2) leisure-related time use (personal care, leisure, and sleeping), and (3) food-related time use (food production and eating and drinking at home). 5 , 6 About two-thirds of prime workingage adults engaged in each activity and spent 82.4 percent of their day engaged in these six activities (1187 min), on an average weekday over 2017-18. We focus on prime working-age adults because they have a very strong attachment to the labor force-80 percent of Americans age 25 to 54 are employed at the present time (BLS 2020)-and workers 25 years and older are far more likely than younger Americans to work from home (BLS 2019).
We first document that demographic and employment characteristics vary by worksite, which may help to explain variation in the major time allocations under consideration. Indeed, previous research has found that individual socioeconomic characteristics and household composition influence the time allocated to FAH meal preparation (Mancino and Newman 2007; Senia et al. 2017). Thus, in a seemingly unrelated regression (SUR) framework, we jointly estimate the conditional associations between working from home and major time allocations, net of a host of other factors (including the presence of children, part- or full-time employment status, etc.) that potentially determine daily time use. We jointly estimate these time-use equations in a SUR framework in order to facilitate the comparison of the magnitude of estimated differences by worksite across major activities. We separately examine single-headed and dual-headed households because intrahousehold time allocation decisions depend on whether a spouse or partner is present.
We find that, controlling for a wide variety of other potentially important determinants of time use, prime working-age American adults who worked from home during their time diary day spend substantially less time working than individuals who worked away from home. Consistent with a priori expectations, teleworkers also spend less time on personal care, but more time on leisure and sleeping. We also find that teleworkers spend more time engaged in food production and at-home consumption activities. The positive effects of working from home on these activities are substantial. For example, among individuals with a spouse or partner present, they amount to 25 min for food production (75 percent of the sample mean) and 48 min for at-home consumption (156 percent of the sample mean). To get a better sense of these estimated effects, we formally tested whether the coefficients associated with working from home were equal across time-use equations. We could not reject the null hypothesis that the effects of working from home on food production and at-home consumption were equal to the effect of working from home on leisure, which is an activity that is more easily performed while working from home and represents a substantial share of daily time allocation in our sample. Taken together, the results suggest that working from home may free up more time for several activities, including health-promoting activities such as food production and consumption, which may offer health benefits and help to enhance a worker's quality of life.
The rest of the paper is organized as follows. Section 2 describes the data used in our study. Section 3 presents descriptive statistics to help motivate the regression specification and analysis shown in Section 4. Finally, Section 5 provides a discussion of our results and concluding remarks.
Data, measures, and methods
The American Time Use Survey (ATUS), which is conducted by the U.S. Census Bureau for the Bureau of Labor Statistics (BLS), has been administered annually since 2003 and is still ongoing. One individual who is at least 15 years old from each sampled household is interviewed by a U.S. Census Bureau representative to obtain detailed information about his or her activities the day before the interview. ATUS respondents are asked to identify their primary activity (if they were engaged in more than one activity at a time) from 4 a.m. the day before the interview to 4 a.m. of the interview day, where they were when they performed the activity, and who else was present when the activity was performed. All ATUS respondents participated in the Current Population Survey (CPS). 7 The ATUS data include a time diary, individual demographic characteristics, labor force participation, and household information.
The U.S. Department of Labor's Women's Bureau sponsored the Leave and Job Flexibilities Module (LJFM) as a supplement to the ATUS. The LJFM was initially fielded in January through December 2011 and asked about wage and salary workers' access to paid and unpaid leave and the ability to adjust their work schedules and locations instead of taking leave or because they did not have access to leave. The LJFM was redesigned and fielded once again from January 2017 through December 2018 and now includes questions about workers' usual schedules and their access to schedule and workplace flexibilities, including telework eligibility and participation. The 2011 and 2017-18 Leave Modules are not directly comparable. Also, we are particularly interested in exploring how daily time allocation in major activities varies by worksite and telework eligibility is an important control, so the analysis in this paper uses only the pooled 2017-18 data. 8 The LJFM-ATUS data pertain to wage and salary workers, so our analysis uses weekday data since the majority of work effort is performed on weekdays (please see our sample selection criteria and Fig. 2 below). Moreover, since we are interested in comparing workers by worksite and 84.6 percent of telework-eligible workers have a management, professional, or office occupation ("white-collar" occupation), our analysis is restricted to respondents in white-collar occupations.
Our main analysis sample consists of two subgroups of prime working-age adults (age 25-54): 9 (1) respondents who are telework-eligible and worked only from home the day before their ATUS interview (worked from home ["WFH"]) and (2) respondents who worked only in their office or somewhere else the day before their ATUS interview (worked away from home ["WAFH"]). In each section below, we discuss statistically significant differences between the two subgroups since the amount of time available to perform such activities is likely to vary depending on where people work. 10 Unless otherwise indicated, all differences we discuss in the text between subgroups of Americans are significant at the 90-percent level of confidence (i.e., p < 0.10). 11 The LJFM survey sampling weights are designed to produce nationally representative estimates. We apply LJFM survey sampling weights in all analyses to account for the complex survey design and to obtain nationally representative estimates for prime working-age adults (age 25-54) with a white-collar occupation over an average weekday over 2017-18. 12 Our sample selection criteria are as follows. Between January 2017 and December 2018, 19,816 individuals participated in the ATUS survey and 50.8 percent (10,071 individuals) also participated in the LFJM. The LFJM respondents are all wage and salary workers (i.e. they are all employed by someone else for pay). We limit the analysis to prime working-age adults (age 25-54) (68.6 percent) with a white-collar occupation (62.1 percent) who worked the day before the interview (57.0 percent). Our main analysis sample consists of 2441 individuals: 76.5 percent of them worked during a weekday and the remainder (23.5 percent) worked on a weekend. In an effort to ensure comparability across the two different types of workers, the main analysis focuses on the 1867 individuals who worked on a weekday (Fig. 2). When we turn to the regression analysis, since some respondents are missing data on some of the covariates, the sample size available for the regressions reduces to 1784 individuals.
In this section, we present nationally representative estimates of worker characteristics for prime working-age adults and of the amount of time these adults allocate to major activities. These descriptive analyses are meant to motivate the need to control for observable demographic and employment characteristics by worksite in a regression analysis that follows in Section 4. Table 1 shows that prime working-age adults in the main analysis sample are, on average, 39 years old; 54.4 percent are female, 68.5 percent are non-Hispanic White, 12.1 percent are Hispanic, and 9.7 percent are non-Hispanic Black. About 11.5 percent of the main analysis sample respondents reported their highest level of educational attainment to be a high school diploma or GED, 21.9 percent reported some college, and almost two-thirds reported a bachelor's degree or more. Almost half of the individuals have a professional or related occupation, followed by management, business, and finance occupations (31.5 percent) and office and administrative support occupations (19.1 percent). We also provide descriptive statistics by worksite in Table 1. There are many large and statistically significant differences. For example, compared with individuals who worked away from home, individuals who worked from home are less likely to be hourly workers, more likely to have education beyond a bachelor's degree, and have higher hourly wage rates.
Selected activities
We investigate six major activities that we grouped into three categories: labor market work, leisure-related time use (personal care, leisure, and sleeping), and food-related time use (food production and eating and drinking at home). 13 About two-thirds of prime working-age adults engaged in each activity. On an average weekday over 2017-18, prime working-age adults spent 82.4 percent of their day engaged in these six activities (1187 min). We now discuss unconditional differences in time spent in these major activities by worksite (Table 2). 14
Labor market work
On an average weekday over 2017-18, prime working-age adults spent 498 min working in their main job. Individuals who worked from home the day before their ATUS interview spent 402 min working, which is 105 min less than individuals who worked away from home (507 min).
13 These activity groups are mutually exclusive. Please see Appendix Table 1 for activity codes.
14 For comparison, we present descriptive statistics using a pooled sample of weekends and weekdays. The descriptive statistics for our main analysis sample, which includes only weekday observations (Table 2), are similar to those from a sample that includes weekdays and weekends (Appendix Table 2).
Leisure categories
Personal care On an average weekday over 2017-18, 93.2 percent of prime working-age adults engaged in personal-care related activities and spent 48 min on average. Individuals who worked from home the day before their ATUS interview are less likely to engage in personal care-related activities and spent less time than individuals who worked away from home (28 versus 50 min).
Leisure On an average weekday over 2017-18, 85.4 percent of prime working-age adults engaged in leisure-related activities and spent almost two hours on average (116 min). Individuals who worked from home the day before their ATUS interview spent significantly more time engaged in leisure-related activities than individuals who worked away from home (164 versus 112 min).
Sleeping While everybody slept on an average weekday over 2017-18, individuals who worked from home slept 37 more minutes than individuals who worked away from home (498 versus 461 min). On average, prime working-age adults slept almost eight hours (464 min).
Notes to Table 2: Survey weights were used to compute nationally representative coefficient estimates and appropriate standard errors. Standard errors in parentheses. The difference between individuals who worked from home and individuals who worked away from home the day before their ATUS interview is bolded if it is statistically significantly different from zero (p value < 0.10). Food production includes food and drink preparation, food presentation, kitchen and food clean-up, grocery shopping, and travel to the grocery store. Source: Authors' calculations, using data from the Bureau of Labor Statistics' 2017-18 LFJM-ATUS.
Food categories
Food production Food production includes time spent on food and drink preparation, food presentation, kitchen and food clean-up, grocery shopping, and travel to the grocery store. In 2017-18, over an average weekday, prime working-age adults spent 31 min engaged in food production, and 64.0 percent spent time engaged in these activities. Prime working-age adults who worked from home the day before their ATUS interview are more likely to engage in food production and spent significantly more time than individuals who worked away from home (41 versus 30 min).
Eating and drinking at home
On an average weekday over 2017-18, prime working-age adults spent 29 min engaged in eating and drinking at home, and 77.9 percent engaged in the activity. Individuals are more likely to eat at home when working from home than when working away from home (88.9 percent versus 76.9 percent). Individuals who worked from home the day before their ATUS interview spent 49 min engaged in eating compared to 27 min spent eating at home by those individuals who worked away from home. While the national descriptive statistics are interesting on their own, it is unclear whether many of the differences we see in these activities merely reflect differences in the demographic and employment characteristics shown in Table 1. We now move to a regression analysis in which we control for all of the characteristics shown in Table 1 as well as residential (state of residence and metropolitan area) and interview-related factors (day of the week, the month of the interview, and year of the interview).
Regression results
To estimate the conditional association between working from home and the time spent in major activities (net of other observable differences between types of workers), we follow similar work on estimating the demand for sleep (Biddle and Hamermesh 1990; Ásgeirsdóttir and Ólafsson 2015). However, since we are interested in comparing the effects of working from home the day before the ATUS interview on multiple uses of time, we specified regression equations of the following form in a seemingly unrelated regression (SUR) framework [15]:

T_ia = α_1 + α_2 WFH_i + X_i′β + α_3 UR_st + STATE_s + INTERVIEW_i′δ + ε_i,   (1)

where T_ia is the (inverse hyperbolic sine transformed) time spent by individual i in a given activity a in {main work, personal care, leisure, sleeping, food production, or eating and drinking at home} [16]; WFH is a dummy equal to 1 if individual i worked from home the day before the ATUS interview; X is a vector of individual-level characteristics (age, age squared, gender, a presence-of-household-children dummy, education level, race/ethnicity, an hourly-worker dummy, a full-time-worker dummy, the logarithm of the hourly wage, occupation dummies, a telework-eligibility dummy, and a dummy for metropolitan area residence); UR is the time-varying state-level unemployment rate where individual i resides (to capture the effect of macroeconomic changes on health behavior) (Ruhm 2005); STATE is a dummy for the state of residence of individual i (to absorb all time-invariant state characteristics, including permanent differences in state-level food environments and prices); INTERVIEW is a vector of interview-related factors for individual i (dummies for year of interview, month of interview, and day of the week) that may affect time use; and ε_i is an idiosyncratic error term. The coefficient α_2 is our coefficient of interest in Eq. (1); it represents the estimated association between time use and working from home. Notice that these coefficients are estimated relative to individuals who worked away from home the day before their ATUS interview, since they are the omitted category of workers. To allow for arbitrary correlation among individuals from the same state, we cluster our standard errors at the state level.
Individuals with a partner or spouse present in their household face different time constraints than do individuals without a partner or spouse present. For instance, in the former types of households, partner/spousal labor supply influences own time allocations, while in the latter this is clearly irrelevant. As a consequence, we estimate Eq. (1) for individuals for whom a spouse or partner is present and control for the hours worked by the spouse or partner, and we then separately estimate Eq. (1) for individuals for whom a spouse or partner is not present.
Table 3 shows that working from home has economically important and statistically significant effects on most of the main activities considered, for both individuals with and without a spouse or partner present in the household. Column 1 of Table 3 shows that individuals who worked from home spent significantly less time working, and this effect is magnified for individuals with a spouse or partner present in the household. We estimate that individuals with a spouse or partner present in the household who worked from home spent 218 fewer minutes working than did individuals who worked away from home. Individuals without a spouse or partner present in the household who worked from home spent 123 fewer minutes than did individuals who worked away from home.
These represent differences of 43.6 and 24.8 percent, respectively, relative to sample means.
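As a concrete illustration of how Eq. (1) can be taken to data, the sketch below is a minimal, hypothetical single-equation analogue in Python (statsmodels). The file name and column names are assumptions rather than actual LFJM-ATUS variable names, and the sketch deliberately omits parts of the paper's actual procedure (the SUR system across activities, the survey weights, and the full covariate list).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file with one row per ATUS respondent.
# Column names below are assumptions, not actual LFJM-ATUS variable names.
df = pd.read_csv("atus_lfjm_sample.csv")

# Inverse hyperbolic sine transformation of minutes spent in an activity
# (handles the many zero-time observations better than a log transform).
df["ihs_food_production"] = np.arcsinh(df["food_production_min"])

# Single-equation analogue of Eq. (1): WFH dummy plus a reduced set of controls,
# state and calendar fixed effects, with standard errors clustered by state.
model = smf.ols(
    "ihs_food_production ~ wfh + age + I(age**2) + female + kids_in_hh"
    " + np.log(hourly_wage) + fulltime + telework_eligible + unemployment_rate"
    " + C(state) + C(month) + C(day_of_week) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["wfh"])  # alpha_2: association between WFH and (IHS) time use
```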
Given a 24-h time constraint, workers who spend less time working on a given day have more time available for other activities, especially for activities that do not allow for multitasking. For individuals who have a partner or spouse present, individuals who worked from home spent significantly less time on personal care (33 min), which is perhaps partially attributable to the fact that they do not have to spend as much time to get ready before being able to start their workdays. By contrast, individuals who worked from home spent significantly more time engaged in leisure (94 min) and sleeping (40 min), which are activities that are not easy to perform inside office settings. These patterns conform to expectations and the estimated effect sizes are large relative to sample means. For individuals who do not have a partner or spouse present, the patterns are qualitatively similar, although with the exception of main work the coefficients are less precisely estimated. We now move to food production and eating and drinking at home. Columns 5 and 6 of Table 3 show that individuals with a spouse or partner present who worked from home spent significantly more time devoted to food production and eating and drinking at home (25 and 48 min, respectively) [17]. These represent differences of 74.5 and 156.0 percent, respectively, relative to sample means. Individuals without a spouse or partner present who worked from home spent significantly more time devoted to eating and drinking at home (33 min) than those who worked away from home. Although the estimated gap for food production is economically important (26 min), it is not statistically significant at conventional levels.
Footnote 16: We transform the right-skewed activity variables in order to better approximate normal distributions. Specifically, we use the Stata command asinh (i.e., the inverse hyperbolic sine transformation), since we have observations with zero time use (i.e., respondents who did not engage in a given activity). Thus, the average percentage difference in time allocated to a given activity for a person who worked from home (relative to a person who worked away from home) is given by the formula exp(α_2) − 1.
Notes to Table 3: IHST = inverse hyperbolic sine transformation. Estimates from seemingly unrelated regressions. Robust standard errors adjusted for clustering at the state level and for the survey design appear in parentheses below coefficient estimates. Individuals who worked away from home the day before their ATUS interview are the omitted category. Food production includes food and drink preparation, food presentation, kitchen and food clean-up, grocery shopping, and travel to the grocery store. All dependent variables are mutually exclusive. Covariates include age, age squared, gender, a presence-of-household-children dummy, education level, race/ethnicity, an hourly-worker dummy, a full-time-worker dummy, the logarithm of the hourly wage, the number of hours worked by the spouse or partner, occupation dummies, the state-level unemployment rate, metropolitan area, day-of-week dummies, month dummies, year dummies, and state dummies. *, **, and *** denote statistical significance at the 10, 5, and 1% level, respectively. Source: Authors' calculations, using data from the Bureau of Labor Statistics' 2017-18 LFJM-ATUS.
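Footnote 16 above notes that, with an inverse hyperbolic sine transformed outcome, the coefficient on the WFH dummy maps into an approximate percentage difference through exp(α_2) − 1. A two-line illustration with a made-up coefficient value:

```python
import numpy as np

alpha_2 = 0.35  # hypothetical coefficient on the WFH dummy from an IHS regression
pct_difference = np.exp(alpha_2) - 1
print(f"{pct_difference:.1%}")  # ~41.9% more time in the activity when working from home
```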
To get a sense of the magnitude of the gaps in time allocated to food activities, consider how the effect of working from home on these activities compares to its effect on leisure, which is more easily performed while working from home and accounts for a substantial share of daily time allocation for workers in our sample. For example, among individuals with a spouse or partner present, who spend about 106 min engaged in leisure daily, we fail to reject that the estimated effects of working from home on food production and eating at home are equal to the estimated effect of working from home on leisure time, with p-values of 0.80 and 0.26, respectively. Among individuals without a spouse or partner present, who spend an average of 140 min per day on leisure, the corresponding tests of equality of coefficients produce p-values of 0.98 and 0.63, again indicating that the effects of working from home on food production and eating at home are statistically indistinguishable from the impact of working from home on leisure. Combined with evidence that the difference in food production and consumption, relative to sample means, is also economically important, this is further evidence that the place where work is performed has important implications for how much time is allocated to food-related activities over the course of a day.
Footnote 17: To save space, we suppress coefficient estimates of other variables in the SUR analysis, but some coefficients in the regressions are also worthy of discussion. The estimated effect of the hourly (log) wage proxies for the opportunity cost of engaging in non-work activities. Since there are substitution and wealth effects, the sign of the hourly wage effect is a priori unclear and must be empirically determined. On the one hand, the (shadow) price of food activities, for example, is increasing in the hourly wage rate, so one may expect that high-wage individuals spend less time engaged in food production and consumption than do lower-wage individuals. On the other hand, high-productivity individuals may be better able to, as well as prefer to, spend more time on food activities. The estimated effect of the (log) wage on time spent in food production is negative and statistically significant for individuals without a spouse or partner present in the household. In the case of eating and drinking at home, the effect of the (log) wage, while negative, is not statistically significant for either those with or without a partner/spouse present. Du and Yagihashi (2017) found that time spent on health-enhancing activities tends to rise with an individual's wage rate. However, as the authors note, some activities (e.g., eating and drinking time) do not have a clear health impact because time-use data do not allow researchers to determine the healthfulness of the food produced and consumed. Relative to part-time workers, full-time workers with a spouse or partner present spent less time engaged in eating and drinking at home. Compared with women, men spent less time in food production, but this is only statistically significant for individuals with a spouse or partner present in the household. And finally, for individuals with a spouse or partner present, we estimate that a rise in spousal hours increases time spent on food production and decreases the time spent eating and drinking at home. Full results are available upon request.
Clearly, working from home leaves more time for both the production of food and the consumption of food at home. However, we cannot ascertain the health implications of these differences, since we cannot determine whether the food consumed at home was prepared at home or purchased and brought home from a foodservice establishment. To the extent that food eaten at home is mostly prepared at home, our results may be taken to suggest that individuals who work from home spend more time eating more healthful foods than do individuals who work away from home. It is also important to note that uses of time other than eating may also have effects on health, and there are clear differences in time spent on other activities in Table 3. An investigation of the overall effect of time use on health is, however, beyond the scope of this study.
In a robustness check, we investigate whether a change in the control group matters for the analysis. In the main analysis, we compare the time use of individuals who worked from home to the time use of individuals who worked away from home (including both telework-eligible and telework-ineligible workers). One may be concerned that, while telework-eligible workers can compensate for reduced work time during a telework day by working more hours when they return to their offices later in the week, this is not a possibility for telework-ineligible workers. We therefore re-ran our main regressions, in which we previously included a dummy for telework eligibility as a control (summarized in Table 3), but now dropped telework-ineligible workers. In Appendix Table 3, we show that, while the sample size and the precision of the estimates are reduced, the patterns in Table 3 and Appendix Table 3 are in large part qualitatively similar. Thus, no matter how we address the issue of differences in telework eligibility across workers, the variation in daily time allocations by worksite is similar across workers who work from and away from home.
Discussion and implications for future research
Drawing on data from the 2017-18 LFJM-ATUS, this paper demonstrates that, even after controlling for a wide variety of demographic and employment characteristics, there is sizeable and significant variation by worksite in the amount of time prime working-age adults spend on food-related activities. Our findings complement studies showing that labor supply is an important determinant of FAH production and consumption activity (Nayga 1996; Dunn 2015; Kohara and Kamiya 2016; Etilé and Plessz 2018) by showing that the location where work is performed also matters. A possible interpretation of this study's results is that, among workers in their prime working years, working from home may allow for more time to purchase food from grocery stores, cook meals at home, eat, and then clean up afterward. Given that foods prepared at home tend to be healthier than foods prepared away from home (Todd et al. 2010; Monsivais et al. 2014; Wolfson and Bleich 2015; Saksena et al. 2018; Wolfson et al. 2020), working from home may enable health-promoting dietary behaviors.
It is important to note, however, that since we use all of the variation in the data rather than only plausibly exogenous variation, which could be achieved via random assignment of workers (Bloom et al. 2015), we cannot ascertain whether there is a causal relationship running from remote work to food-related activities. Research on this nexus is warranted since telecommuting behavior has grown in popularity over the last decade (FlexJobs 2017; SHRM 2019), the latest available data from the BLS indicate that one-quarter of wage and salary workers worked from home at least occasionally over 2017-18 (BLS 2019), and FAH tends to be healthier than FAFH (Todd et al. 2010; Monsivais et al. 2014; Wolfson and Bleich 2015; Saksena et al. 2018; Wolfson et al. 2020). It would be of particular interest to policymakers and the broader community to learn from future research whether greater telework participation has the potential to improve the healthfulness of daily dietary intake and better align American workers' daily intake with the recommendations outlined in the Dietary Guidelines for Americans.
The primary focus of this study was on food-related activities since food production and consumption represent frequent and ubiquitous human activities (Davis 2014) and recent research suggests that greater work-related stress may be contributing to less healthy eating practices (Escoto et al. 2012;Jabs and Devine 2006;Welch et al. 2009). However, we investigated other major uses of time and found that working from home affects daily time allocation more broadly, which may have productivity and quality of life implications. Our analysis indicates that individuals who worked from home spent more time per day engaged in leisure and sleeping, but less time per day on personal care and working. Clearly, there are different health benefits associated with each of these time-use differences. The overall effect of working from home on a worker's health is unclear without a full accounting of how each daily activity maps onto health outcomes. The substantially smaller amount of time devoted to work among individuals who work from home generates a particularly interesting question: is this driven by greater productivity while working from home or is the lower amount of time spent working from home compensated by teleworkers when they are in the office later in the week? If time allocations change across the week according to a worker's schedule, time allocation differences in a one-day time diary between workers who work from and away from home may be larger or smaller than the differences observed over a one-week period. Such investigations are beyond the scope of this study, but certainly worthy as future research endeavors.
We conclude with an insight from our analysis that is relevant to today's widespread and extraordinary circumstances. In response to the arrival of COVID-19, many employers across the U.S. have encouraged their employees to work from home to protect the health of their workers and to prevent the spread of COVID-19. Our analysis of pre-pandemic data from 2017-18 clearly demonstrates that daily time allocation varies by worksite. As the nation grapples with the pandemic, it will be important for researchers to continue investigating Americans' responses to COVID-19, including how time-use patterns are changing as well as the health and non-health implications of those changes.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2020-08-25T05:12:54.590Z | 2020-08-22T00:00:00.000 | {
"year": 2020,
"sha1": "3b3ec03e6afd62e08ead01886cd297d3e367fdc2",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11150-020-09497-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b3ec03e6afd62e08ead01886cd297d3e367fdc2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
266100476 | pes2o/s2orc | v3-fos-license | EGG-LAYING HABITAT SELECTION OF THE INVASIVE SLUG KRYNICKILLUS MELANOCEPHALUS KALENICZENKO, 1851 (GASTROPODA: EUPULMONATA: AGRIOLIMACIDAE)
Abstract: Invasive slugs cause damage to biodiversity as well as to horticultural and agricultural crops. To develop methods to mitigate problems with recently emerging invasive species, basic ecological knowledge about them in their new environment is crucial. We investigated the egg-laying substrate preference of Krynickillus melanocephalus in a laboratory experiment in which the slugs could choose between four different substrates. Slugs were mainly observed in contact with birch leaf compost, and less so with gravel, potting soil and sphagnum moss. Almost 90% of the eggs were found buried in birch leaves, although eggs were found also in the other substrates. In addition, we tested the winter survival of the eggs by examining their susceptibility to low temperatures. Eggs subjected to freezing temperatures did not survive, and at 8 °C, 60–70% of the eggs hatched after three months (640 degree days). In areas where K. melanocephalus is present, transporting any soil, gravel or compost material could potentially contribute to spreading the species. In particular, leaf composts may warrant attention in these areas.
INTRODUCTION
Invasive pest species cause economic and biodiversity losses (Pejchar & Mooney 2009, Lowry et al. 2013, Pyšek et al. 2020, Fantle-Lepczyk et al. 2022). Cost-effective measures to eradicate or mitigate the negative effects of already established persistent populations are rarely available (Hoffmann et al. 2016, Green & Grosholz 2021). Eradication of invasive species detected at an early stage and preventing the translocation of invasive species, both locally and to new areas, are perhaps the only effective long-term strategies to mitigate the damage caused by biological invasions (Reaser et al. 2020). To develop species-specific strategies, basic ecological knowledge about the invasive species in their new environments is crucial, because behaviour as well as life history characteristics may change due to ecological release (Lambrinos 2004) when transferred from their native area to a new location (Herrmann et al. 2021).
Many terrestrial slugs have the potential to inflict substantial damage to horticultural and agricultural crops (South 1992). Recent introductions of certain species have led to their emergence as serious and invasive pests (Frank 1998, Zając et al. 2017, Barua et al. 2021). Over the last five decades, for example, the large slug Arion vulgaris Moquin-Tandon, 1855 has successfully established invasive populations within anthropogenic landscapes across numerous northern European countries (Zając et al. 2017, 2020, Reise et al. 2020). Similarly, Arion subfuscus s.l. has become established in North America. Conversely, some species, such as Deroceras reticulatum, have been considered pest species in both their native and non-native ranges for a long time (Barker 1991, Willis et al. 2006). Although potential negative effects on crops have not yet been extensively described, instances of damage to pumpkin, strawberries, lettuce and cabbage have been reported (von Proschwitz 2020, Turóci et al. 2020). In Sweden, K. melanocephalus has gained attention from the Swedish Environmental Protection Agency due to its potential invasiveness. However, little is currently known about its basic ecology, such as habitat use, foraging behaviour and interactions with other terrestrial gastropods (Watz & Nyqvist 2022). The primary dispersal vector is human trade in plants and soil. As a result, the species has, so far, been associated with gardens and urban forested areas in its new distribution range (von Proschwitz 2020).
Over the autumn, adults of K. melanocephalus die after laying eggs (von Proschwitz 2020), and the species likely overwinters in the form of eggs (at least at northern latitudes). A crucial factor for developing measures to impede further spread may be to prevent the transportation of soil containing slug eggs. Therefore, having knowledge about the egg-laying substrate preference of K. melanocephalus is critical for offering advice regarding soil transportation; in areas with established populations, the transportation of soils with certain substrates preferred for egg laying should likely be avoided. Furthermore, to predict the potential future northern distribution limit, understanding the tolerance of the eggs to freezing is essential. In the laboratory, we initially investigated the egg-laying substrate preference of K. melanocephalus, and in a subsequent experiment, we tested the winter survival of the eggs by examining their susceptibility to low temperatures.
EXPERIMENTAL ARENAS
We used five replicated habitat selection arenas. An arena consisted of a closed plastic container (length × width × height = 31 × 23 × 18 cm). On the bottom, four open equally-sized plastic trays (14 × 10 × 8 cm) were placed, creating four quadrants (Fig. 1). The four trays were filled with different substrates: (1) birch leaves (Betula sp.), (2) gravel (diameter = 2-20 mm), (3) potting soil and (4) sphagnum moss (Sphagnum sp.). All substrates were wetted, which created a humid environment inside the containers. Substrate pH (measured in 2 L water mixed with the substrate from one tray) ranged from 5 to 7 (Table 1). Slugs could access the different substrates as well as crawl on the plastic floor, walls and ceiling. In the centre, on top of the trays with contact with all four quadrants, a circular open plastic cup (diameter = 6 cm) functioned as the starting point for slugs in the experiment (Fig. 1). The arenas were kept in a temperature-controlled room at 8 °C with the photoperiod (light:dark) 10:14 h, corresponding to conditions in October in south-central Sweden.
SLUGS
Specimens of K. melanocephalus were collected in the village of Lagan (56.91°N, 13.99°E) on 24 September 2022, and transported to Karlstad University the day after. Slugs were kept in two 50 L plastic containers provided with humid grass and were fed mushrooms and tomato. We used 60 slugs in the experiments, and at the start of the experiment the mean body mass (± SD) was 0.49 (± 0.17) g. Slugs were not fed during the experiment.
HABITAT PREFERENCE EXPERIMENT
The habitat preference experiment started on 24 October 2022 at 11:00 by placing ten randomly selected slugs in each plastic cup in the centre of the arenas (50 slugs in total). After 24, 48 and 72 h, we recorded the location of the slugs by noting where each individual slug was positioned (i.e., on the plastic floor, wall or ceiling, on top of a substrate or buried inside a substrate). When all slugs had been found and their locations recorded, we placed them in the plastic cup, so that they could select habitat again. Dead slugs (n = 10 during the three days) were replaced with new slugs of similar size (and thus 60 slugs participated in the experiment in total).
EGG DEPOSITION EXPERIMENT
The 50 slugs from the last day of the habitat preference experiment were left undisturbed from 27 October onwards. On 7 and 14 November (i.e., after 11 and 18 days), we carefully searched the four different substrates for slug eggs and counted the eggs per container and substrate. The egg laying experiment was terminated on 14 November, and the slugs were killed by decapitation.
EGG HATCHING EXPERIMENT
All eggs collected on 7 and 14 November were pooled, kept in wet birch leaves and stored at 4 °C until 3 December, when we distributed 100 eggs randomly between ten small plastic containers (also containing wet birch leaves; 12 × 12 × 5 cm; ten eggs per container). We placed a temperature logger (HOBO Pendant MX Temp/Light, Onset, Bourne, USA) inside each container. Eight containers were put inside polystyrene boxes and placed at different positions in a refrigerator, whereas two containers were placed directly inside the refrigerator (without polystyrene boxes). We chilled the inside of the polystyrene boxes by placing icepacks of different sizes inside them, and icepacks were changed twice every week. Thereby, the eggs were subjected to different winter temperatures (Table 2), with median values that varied between −0.3 and 8.0 °C. When changing the icepacks, we checked the containers to record hatched eggs. On 3 April, we removed the icepacks, and the temperature thereby gradually increased to 8 °C in all containers. The egg hatching experiment was terminated on 5 June. We destroyed eggs that had not hatched and killed slug juveniles by decapitation.
DATA ANALYSIS
We considered a slug to be using one of the four substrates when the slug had physical contact with the substrate, either by being located on top of the substrate or buried inside it. We used the mean proportion across the three days for each substrate in the arenas as the response variable in our analysis of habitat use, and the five arenas as experimental units. Since most slugs were found in contact with plastic (on the floor, walls or ceiling), we considered problems with autocorrelation to be minor. We included substrate as a fixed factor and arena ID as a random factor in a linear mixed model. For the analysis of egg deposition, we used the arenas as replicates in separate one-sample t-tests (one for each substrate) to investigate whether the mean proportion of eggs found in the respective substrate deviated from the expected 0.25 if eggs were deposited at random.
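As a rough sketch of the two analyses just described, the Python code below fits a linear mixed model with substrate as a fixed factor and arena as a random intercept, and runs per-substrate one-sample t-tests against the random-deposition proportion of 0.25. The numerical values are hypothetical stand-ins, and the original analysis may have been carried out in different software; the sketch only shows the structure of the models named above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical mean proportions of slugs in contact with each substrate,
# one value per arena (5 arenas x 4 substrates), averaged over the three days.
data = {
    "arena": np.repeat(["A1", "A2", "A3", "A4", "A5"], 4).tolist(),
    "substrate": ["birch", "gravel", "soil", "moss"] * 5,
    "prop": [0.18, 0.05, 0.07, 0.02, 0.16, 0.07, 0.05, 0.04,
             0.17, 0.06, 0.06, 0.03, 0.19, 0.04, 0.07, 0.02,
             0.15, 0.08, 0.05, 0.04],
}
df = pd.DataFrame(data)

# Linear mixed model: substrate as fixed factor, arena ID as random intercept.
mixed = smf.mixedlm("prop ~ C(substrate)", df, groups=df["arena"]).fit()
print(mixed.summary())

# One-sample t-tests: per-arena proportion of eggs in a substrate versus 0.25.
egg_props = {
    "birch": [0.90, 0.85, 0.88, 0.92, 0.87],   # hypothetical per-arena values
    "gravel": [0.05, 0.08, 0.06, 0.03, 0.07],
}
for substrate, values in egg_props.items():
    t, p = stats.ttest_1samp(values, popmean=0.25)
    print(f"{substrate}: t = {t:.2f}, p = {p:.4f}")
```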
RESULTS
Of the 50 slugs, 18 individuals had contact with a substrate other than plastic when located on the first day of observations, 12 on the second and 17 on the third day. Therefore, the majority of the slugs were observed on the plastic floor, walls or ceiling (which likely relates to the larger plastic surface area compared with those of the egg-laying substrates). We found more slugs (mean proportion ± SE) in contact with birch leaves (0.17 ± 0.01) than with gravel (0.06 ± 0.03), potting soil (0.06 ± 0.01) and sphagnum moss (0.03 ± 0.01) (Fig. 2). This difference was indicated by a significant effect of substrate in the linear mixed model (F = 11.0; df = 3.16; p < 0.001).
We found 157 eggs in birch leaves, 13 in gravel, seven in potting soil and one egg in sphagnum moss.No eggs were found outside of the four substrates.
The first egg to hatch was found on 27 January in a container with the warmest temperature regime (8 °C; Table 2), and by 14 February, six of the ten eggs had hatched in this container. In the other container held at 8 °C, eggs started to hatch on 7 February, and by 20 February, seven eggs had hatched. No other eggs in these or other containers hatched during the experiment. A newly hatched slug was approximately 5 mm long when extended, with a body mass of 5 mg.
DISCUSSION
Previous studies dealing with slugs' egg laying have been based on observations (e.g., Barker 1991, South 1992). In our study, we investigated habitat preferences experimentally by letting the slugs select habitat when provided equal access to different substrates. We did not check the state of maturity of the slugs, and if some slugs were not mature, we likely underestimated fecundity. Birch leaves were clearly preferred by the slugs as egg-laying substrate. The slugs showed a higher level of contact with the birch leaves compared to other substrates, although most slugs were found resting on the plastic floor, walls and ceiling. The time of observation (11:00) did not likely coincide with the time when the slugs were most active in egg laying, and the plastic surface area was larger than those of the egg-laying substrates. Almost 90% of the eggs discovered were buried within the birch leaves. A notable difference among birch leaves, sphagnum moss, and potting soil was their respective pH. The sphagnum moss and the
In Arion vulgaris, approximately 600-700 degree days are needed for egg development (Slotsbo et al. 2013). Eggs may survive subzero temperatures of −2 °C by supercooling, but at lower temperatures eggs freeze and die (Slotsbo et al. 2011). Some slug species are to some extent tolerant to freezing; for example, Deroceras reticulatum, Deroceras laeve and Arion circumscriptus eggs tolerate brief freezing (about 0.5-1 h) with some ice crystal formation (Cook 2004, Storey et al. 2007). In our study, exposure to freezing temperatures (< −5 °C), albeit only for a relatively short while, seemed to be lethal for K. melanocephalus eggs. All eggs that experienced these freezing temperatures died. In containers kept at temperatures above zero (~8 °C), 60-70% of the eggs hatched after approximately three months, which corresponded to 640 degree days. Results from future more detailed studies on cold resistance in K. melanocephalus, together with projections of climate change, may be used to model its northern distribution limit (Hatteland et al. 2013).
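The 640 degree days quoted above follow from simple bookkeeping under the stated conditions, assuming a 0 °C base (developmental threshold) temperature and roughly 80 days at the warmest regime:

```python
# Degree-day bookkeeping for the warmest regime, assuming a 0 degC base temperature.
mean_temperature = 8.0   # degC, warmest containers
base_temperature = 0.0   # degC, assumed developmental threshold
days = 80                # roughly three months from early December to late February
degree_days = (mean_temperature - base_temperature) * days
print(degree_days)       # 640.0
```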
Whereas almost all eggs were deposited in birch leaves, some eggs were found in all four substrates. Perhaps K. melanocephalus deposits eggs in the most suitable substrate of those that are available, and if there is no leaf compost present (or other substrates with similar properties), it will likely use other substrates. Therefore, transporting any soil, gravel or compost material, as well as plants rooted in such substrates, from areas inhabited by K. melanocephalus could potentially contribute to spreading the species. Leaf composts in these areas warrant extra attention, and it may be advisable to consider incinerating leaf compost material in these areas to prevent further spread. Moreover, plants that are traded from such areas should perhaps be treated with slug control chemicals by fumigation (South 1992) or banned completely.
Fig. 1. One of five replicate arenas used for testing egg-laying habitat selection of Krynickillus melanocephalus. Slugs were offered four different substrates (potting soil, birch leaves, gravel and sphagnum moss) to lay their eggs. A slug individual is visible in the circular cup in the centre, where slugs were released at the beginning of the experiment.
Fig. 2. Proportion of Krynickillus melanocephalus adults (n = 50) located in contact with one of four different substrates at the onset of the egg laying period. Error bars indicate ± 1 SE.
Fig. 3. Proportion of Krynickillus melanocephalus eggs deposited in one of four different substrates. Error bars indicate ± 1 SE and the dashed line the proportion 0.25 that would be expected if eggs were laid at random.
Table 1. Mean pH in the four substrates in the experiment.
Table 2. Temperature regimes (3 December - 3 April) used in an egg hatching experiment. Eggs of Krynickillus melanocephalus (n = 100) were equally distributed between ten containers and kept at 4 °C until 3 December. The containers were subjected to different temperature treatments from 3 December 2022 to 3 April 2023, after which they all were placed at 8 °C. Containers are listed based on temperature treatment, from the coldest to the warmest temperature regime. Temperatures were recorded hourly.
"year": 2023,
"sha1": "62c290be5169878f1fce001c2272e965c480d4a9",
"oa_license": "CCBY",
"oa_url": "https://www.foliamalacologica.com/pdf-176310-97263?filename=Egg_laying%20habitat.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "af0fddf688e00e60ee6e269f06dfd3570cc74be6",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
11380393 | pes2o/s2orc | v3-fos-license | Piloting an HIV self-test kit voucher program to raise serostatus awareness of high-risk African Americans, Los Angeles
Background Up to half of all new HIV cases in Los Angeles may be caused by the 20-30% of men who have sex with men (MSM) with unrecognized HIV infection. Racial/ethnic minority MSM are at particularly high risk for being sero-unaware and due to stigma and poor healthcare access might benefit from novel private, self-testing methods, such as the recently FDA-approved OraQuick® In-Home HIV Test. Methods From July-November 2013, we undertook a pilot study to examine the feasibility of a voucher program for free OraQuick® tests targeting African American MSM in Los Angeles. We determined feasibility based on: (1) the establishment of a voucher redemption and third-party payment system, (2) the willingness of community-based organizations (CBOs) to disseminate vouchers, and (3) the collection of user demographics, test and linkage-to-care results with an anonymous telephone survey. Results We partnered with Walgreens® to create a voucher and third-party reimbursement system for free OraQuick® tests. Voucher distribution was divided into two periods. In total, 641 vouchers were supplied to CBOs: 274 (42.7%) went to clients and of those 53 (19.3%) were redeemed. Fifty (18.2%) of the 274 clients were surveyed: 44 (88%) were African American, 39 (78%) reported being likely to repeat voucher use, 44 (88%) reported reviewing pre-test information, and 37 (74%) the post-test information. Three (6%) of 50 survey respondents reported newly testing HIV-positive of whom all (100%) reported seeking medical care. Two withheld their results, both of whom also sought medical care. Conclusions Developing and partnering with a commercial pharmacy to institute a voucher system to facilitate HIV self-testing with linkage-to-care was feasible. Our findings suggest the voucher program was associated with increasing the identification of new cases of HIV infection with high rates of linkage to care. Expanded research and evaluation of voucher programs for HIV self-test kits among high-risk groups is warranted.
Background
The burden of HIV infection is particularly high among men who have sex with men (MSM) in Los Angeles (LA) County. In 2011, there were 36,330 MSM living with HIV/AIDS in LA County, 10,833 of whom were unaware of their infection [1]. That population disproportionately includes African Americans, who are the most vulnerable demographic group affected by HIV infection. In 2011, African Americans had the highest rate of infections for any demographic group at 966 per 100,000 persons in LA [2]. In addition, African American MSM in LA are 4 times more likely than white MSM not to know they are infected with HIV [3].
A recent study examined HIV testing preferences among high risk MSM in LA and found that, of 75 MSM surveyed, an in-home, immediate, and free HIV test had the highest acceptability [4]. In 2012, the FDA approved the OraQuick® In-Home HIV test, which allows for private, rapid self-testing at home, and helps to overcome stigma, which is a major barrier to testing.
Stigma towards HIV infection is particularly high in the African American MSM community [5,6], and research has shown that stigma reduces people's willingness to test for diseases such as HIV/AIDS [5][6][7]. New in-home HIV testing methods may further reduce barriers due to stigma that are associated with conventional providerbased testing by making the testing experience private and self-controlled [8]. We examined the feasibility of piloting a commercial voucher program for free OraQuick® In-Home HIV Test kits targeting high-risk African American MSM in LA.
Methods
We determined feasibility of our pilot program based on the ability to: (1) establish a functional commercial voucher redemption and third-party payment system, (2) use community-based organizations (CBOs) to disseminate vouchers, and (3) collect and analyze data from an anonymous telephone survey on user demographics, sexual behavior, prior testing practices, self-testing experience, results disclosure and linkage-to-care. Due to the very low cost of printing paper vouchers, we supplied a large number of vouchers to CBOs for a broad reaching campaign.
We partnered with three local CBOs servicing African American MSM to distribute vouchers. We created double-sided color vouchers for a free OraQuick® In-Home HIV test, each costing < $1 to print. Each voucher had a unique number that allowed us to track where it was distributed and where and when it was redeemed. The vouchers were redeemable at 12 local Walgreens, a US-based pharmacy chain, using a third-party payment system. On a monthly basis, Walgreens invoiced our program at UCLA for payment based on the negotiated cost and number of redeemed vouchers. We supplied 237 vouchers to distributors in July 2013 during our first test period. During a second test period from August through December 2013, we supplied 404 vouchers with an attached survey recruitment flyer that invited participants to contact us by telephone.
Eligible survey participants had to be over 18 years of age and have received a voucher from the second test period. After obtaining informed consent, the interviewer collected participant demographic information, HIV testing history, sexual history, test result, linkage-to-care outcome and experience with the voucher program. At the conclusion of the interview, the participant was compensated with a $75 gift card.
Survey data were encoded using SurveyMonkey and descriptive frequencies were analyzed with Microsoft Excel® and STATA® 13 (StataCorp, College Station, TX). The UCLA institutional review board approved all aspects of the project (IRB#13-000790).
[Table 1 excerpt — sexual behavior and gender identification of survey respondents: men who have sex with women, 1 (2%); women who have sex with men, 9 (18%); men who have sex with men, 33 (66%); women who have sex with women, 2 (4%).]
CBOs employed different strategies in distributing their supplied vouchers. One CBO was supplied a total of 144 vouchers during both phases, of which 144 (100%) were distributed and 34 were redeemed (23.6%). Vouchers were distributed at the CBO during community meetings and events, usually after a group discussion on self-testing. A second CBO used a similar strategy, distributing 25 of their 100 vouchers (25%) during both phases, 9 of which were redeemed (9%). The third CBO was supplied 250 vouchers during both phases but only distributed 11 (4.4%), of which none were redeemed. Their vouchers were given to those passing by various mobile outreach vans in Los Angeles. An additional 147 were supplied to student volunteers while exploring alternative distribution strategies, 35 of which were distributed (24.3%) and 10 of which (6.8%) were redeemed. All CBOs and volunteers were asked to target their distribution toward African American MSM but to distribute vouchers to any who were interested.
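The per-distributor percentages quoted above can be recomputed from the voucher counts given in the text; the snippet below assumes the supplied counts as denominators, which is the reading most consistent with the figures reported:

```python
# (supplied, distributed, redeemed) voucher counts, transcribed from the text above
distributors = {
    "CBO 1": (144, 144, 34),
    "CBO 2": (100, 25, 9),
    "CBO 3": (250, 11, 0),
    "Student volunteers": (147, 35, 10),
}
for name, (supplied, distributed, redeemed) in distributors.items():
    print(f"{name}: distributed {distributed / supplied:.1%} of supply, "
          f"redeemed {redeemed / supplied:.1%} of supply")
```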
Survey results
Survey respondents (n = 50) were young (90% under 35 years of age), primarily African American (88%), and a majority MSM (66%) ( Table 1). Forty-nine of 50 survey respondents (98%) redeemed their voucher and used the HIV in-home self-test kit. Three (6.1%) of 49 reported a new positive test result and being linked to care, and an additional 2 (4.1%) did not disclose their test result but reported attending follow-up medical care. The 1 respondent who did not redeem their voucher was not asked about their test result or activities before and after taking the test, so n = 49 was used to calculate descriptive statistics. For all other survey items there were no missing data and n = 50.
Using a Likert scale, 78% of participants reported that they were likely or very likely to use a voucher again, 65% reported that it was easy to travel to a Walgreens to redeem their voucher and 44% preferred self-testing over clinic based testing (26%) (Figure 1). About 22% of participants were uncomfortable or very uncomfortable with the in-store redemption process. One participant noted that the Walgreens staff at the store they visited was confused about the voucher, had to involve the store manager, took longer than expected, and overall the instore process made the participant feel uncomfortable.
Discussion and conclusion
We piloted an HIV self-test voucher distribution and redemption program for free self-test kits in partnership with a large commercial pharmacy. The cost of the actual voucher was low and the major cost to the program was incurred when a voucher was redeemed. Among the sample of those surveyed who redeemed vouchers, there was a high proportion of newly identified cases of HIV infection. All newly identified cases reported linkage to care. Participants endorsed the voucher system as a means to reduce stigma associated with HIV testing through their qualitative and quantitative feedback.
We were able to track voucher use from the time we supplied them to the point when clients redeemed them, validating the functionality of our system. Many CBOs were also willing to distribute a large number of vouchers to African American MSM. Thus, we found it feasible to develop a commercial voucher system with 3rd-party reimbursement to promote HIV self-testing among highrisk African American MSM in Los Angeles.
CBOs that distributed vouchers through their membership tended to have higher distribution and redemption rates than those who solicited those passing by. In addition, distribution and redemption increased for these CBOs during the second phase due to increasing utilization of membership involvement over time. The voucher program could be sustainably used to increase the uptake of HIV self-tests by implementing permanent 3rd-party voucher reimbursement and encouraging CBOs to distribute vouchers to high-risk persons. Our findings suggest a high acceptability for in-home testing among high-risk African American MSM, which is consistent with studies of MSM testing preferences [4,9]. A recent survey examined the hypothetical acceptability of testing at a physician's office, individual voluntary counseling and testing, couples' HIV counseling and testing, expedited/express testing, rapid home self-testing using an oral fluid test, and home dried blood spot specimen selfcollection for laboratory testing [9]. Home self-testing and physician's office testing had the highest acceptability across all demographic and behavioral groups [9]. However, participants typically identified multiple testing scenarios as highly acceptable, indicating a comprehensive strategy that provides multiple testing options to the community may have the greatest effect on this population [9].
A mathematical modeling study by Katz et al. has demonstrated that a complete replacement of clinic-based testing with in-home testing amongst MSM in Seattle may result in an increased HIV prevalence [10]. However, that model doesn't account for in-home testing being offered as a supplement to clinic-based testing, which Katz et al [10] and a recent editorial [11] have acknowledged may reduce HIV prevalence. In addition, promoting inhome testing towards groups who are untested for HIV would decrease HIV prevalence [10,11]. Programs utilizing vouchers to promote in-home testing as a supplement to clinic-based testing can be used to evaluate these assertions, but determination of the effectiveness of such programs to decrease HIV prevalence among African American MSM will require rigorous evaluation on a larger scale.
Our pilot project had several limitations. First, there were only 43 vouchers redeemed at Walgreens but 49 respondents reported redeeming a voucher. This could be due to individuals completing more than one survey or individuals incorrectly reporting their voucher redemption. Second, given there were 230 vouchers with survey recruitment materials and 49 of 50 respondents reported redeeming the voucher, there is a lack of data on those who did not redeem their voucher. Future projects should attempt to verify the uniqueness of each survey participant and collect information from non-redeemers. In addition, Walgreens stores occasionally ran out of self-test kits during the evaluation period and awareness about the program among the Walgreens staff was inconsistent. However, our ability to identify those limitations indicates the success of collecting process data for quality improvement necessary to enhance the pilot program. Lastly, our survey involved a relatively small sample size of 50, but we believed this was sufficient to assess the acceptability of participation and provide formative information on the structure of the voucher system. A pilot study by Young et al. has found distributing HIV self-testing kits through smart vending machines to be feasible [12]. Our team plans to compare multiple methods of increasing the availability of HIV self-test kits, such as the use of smart vending machines or the US mail, and compare those with referrals to conventional site-based testing to find the best ways to increase HIV testing and community-level HIV serostatus awareness among high-risk groups. Continued innovation is urgently needed to address the large number of persons unaware of their HIV infection.
"year": 2014,
"sha1": "210b3204c7f41c697ea64a6ffe42db2432abea57",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-14-1226",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "827c7e5357b88140cc8b988233ce3e6a7ed87f69",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
151984232 | pes2o/s2orc | v3-fos-license | Teaching Arithmetic Combinations of Multiplication and Division to Students with Learning Disabilities or Mild Intellectual Disability: The Impact of Alternative Fact Grouping and the Role of Cognitive and Learning Factors
The effectiveness of two instructional interventions was investigated in the context of teaching Arithmetic Combinations (ACs) of multiplication and division to students with Learning Disabilities (LD) or Mild Intellectual Disability (MID). The intervention for the control group (LD = 20, MID = 10) was based on principles of effective instruction, while the intervention for the experimental group (LD = 19, MID = 4) combined the intervention for the control group and an alternative grouping and presentation scheme of ACs. Correlations between cognitive and learning characteristics of the two disability categories and participants’ performance in ACs learning were also investigated. Intra-group comparisons showed that post-intervention performance of both groups (control and experimental) was significantly higher than their pre-intervention performance. However, inter-group comparisons revealed that there was no significant difference between the results obtained through the two interventions. Students with LD outperformed their counterparts with MID. Differences of the two disability categories in domains such as speed of information processing and counting skills correlated with performance. Results are discussed in reference to the organization of effective intervention programs for supporting students with LD or MID in their effort to learn arithmetic combinations of multiplication and division.
Introduction
The multiplications of two one-digit numbers (e.g., 6 x 9 = 54) and the divisions created when using the products of these multiplications as dividends and the factors as dividers (e.g., 54 : 9 = 6) are the arithmetic combinations of multiplication and division respectively (Agaliotis, 2011).Arithmetic Combinations (ACs) are components of complex mathematical tasks (e.g., use of algorithms in the context of word problem-solving).Considering that appropriate implementation of any complex task depends on the fluent use of individual structural components, it becomes obvious that students should be able to recall ACs as quickly and accurately as possible, in order to deal successfully with demanding mathematical tasks including ACs as one of their components (Baroody, Bajwa, & Eiland, 2009;Crawford, 2003).
Students of typical development usually manage to use the ACs fluently by the time they reach the 3rd grade (Robinson, Menchetti, & Torgesen, 2002). In contrast, students with Learning Disabilities (LD) and students with Mild Intellectual Disability (MID) often have significant difficulties in learning and directly recalling the ACs, until the end of primary school or even later in life (Bouck, Bassette, Taber-Doughty, Flanagan, & Szwed, 2009; Geary, Hoard, & Nugent, 2012; Gersten, Jordan, & Flojo, 2005).
The present research aims at the examination of the impact specific teaching practices might have on the acquisition of multiplication and division ACs by students with LD or MID.Furthermore, the present research seeks to identify correlations between specific cognitive and learning characteristics of the two disability categories and the result of their effort to acquire the ACs.Clarification of these issues would offer important information for organizing appropriate interventions for students of both categories of educational needs.
An approach that is oriented toward the creation of groups of interconnected ACs without, however, being so much dependent on a rigid sequence of prerequisite knowledge and skills, is the one utilizing common characteristics of the numbers included in the ACs or various arithmetic properties and principles (such as the principle of multiplying or dividing any number with "1" or "0"), as "criteria" for the formation of groups.In such an organization of facts the presentation sequence is not defined by the size of the result, but by the ease with which ACs can be integrated in discernable sets of data.For example, in traditional teaching the AC "4 x 6 = 24" is presented earlier than "8 x 8 = 64", since 24 < 64.In contrast, in alternative grouping of ACs "8 x 8 = 64" precedes "4 x 6 = 24", since it is placed in the set of multiplications of "twin numbers" (together with 3 x 3, 4 x 4, 5 x 5, etc.), which are easy to memorize and thus they are presented at the beginning of alternative intervention programs (Agaliotis, 2011;Woodward, 2006).
The alternative grouping and presentation of ACs based on common arithmetic characteristic or arithmetic properties has been used successfully in a relatively small number of studies in various countries (e.g., Agaliotis et al., 2003;Bryant et al., 2016;Woodward, 2006).However, the interventions used in these studies were compositions of many specific teaching techniques; hence, the exact contribution of alternative grouping in improving student performance is not easily distinguishable.In other words, while the effect of the alternative grouping of ACs based on common features and properties is an ingredient of many effective interventions, it has not been adequately studied as a separate strategy.
In reference to the participants of the conducted studies on ACs learning, it should be mentioned that in most cases they belong to the LD category, while students with MID are clearly underrepresented among these participants (e.g., Agaliotis et al., 2003;Graham, Bellert, Thomas, & Pegg, 2007;Powell et al., 2009;Woodward, 2006).Moreover, Caffrey and Fuchs (2007) found only 1 study in 21 years, from 1982 to 2003, referring to the teaching of ACs to students of both disability categories (LD and MID).Nonetheless, students of these two categories of special needs are very often found in common training contexts, and follow similar educational programs in mathematics.Therefore, investigation of similarities and differences between these two disability groups in the way and ease with which they learn ACs, can provide researchers and practitioners with valuable information for improving the quality of educational services offered to them (Bouck et al., 2009;Caffrey & Fuchs, 2007).
Cognitive and Learning Factors of ACs' Learning
Considering that math difficulties result from inadequacies and peculiarities in cognitive domains such as language, memory or executive functioning (Geary, Hoard, Byrd-Craven, Nugent, & Numtee, 2007;Szucs, Devine, Soltesz, Nobes, & Gabriel, 2013), it becomes evident that these domains constitute adequate fields of exploration of the similarities and differences between students with LD and students with MID in learning ACs.A literature review by the present researchers revealed that the number of studies comparing the two groups in these domains is rather limited.In one of the few studies involving students from these two categories of special needs (LD, MID), along with other categories of struggling students (ADHD, anxiety disorder), Calhoun and Mayes (2005) found that students with LD had lower mean scores in processing speed than what would be expected considering their mean IQ score, while students with MID had low mean scores in all examined factors (processing speed, perception of information, verbal comprehension), as could be predicted by their compromised mean IQ score.However, in the aforementioned study no information is reported on whether the scores of the two groups differ significantly.In another study, Maehler and Schuchardt (2009) found that students with LD or MID differed significantly from students of typical development regarding the functionality of working memory, but did not differ significantly from each other in this perspective, despite their difference of 23 points in mean IQ score.In general, however, students with LD have been found to perform better than students with MID in direct recall of ACs (Parmar, Cawley, & Miller, 1994;Shin & Bryant, 2013;Van Luit & Naglieri, 1999).Nonetheless, more research is needed in order to identify similarities and differences between these two disability groups in domains like the counting skills and the knowledge of number (Caffrey & Fuchs, 2007).
Aim of the Study
Based on what has been exposed previously, the first aim of the present research is to study the effect of an alternative grouping of ACs (utilizing structural characteristics, properties and principles of ACs formation) on the effort of students with LD and students with MID to learn the ACs of multiplication and division.The second aim of this study is to highlight the role played by cognitive and learning characteristics of the two disability groups in learning the ACs of multiplication and division.
Participant Characteristics
The sample consisted of 53 students, aged 9-12 years (M = 10.18, SD = 1.37), who were enrolled in primary schools of Northern Greece, and had been diagnosed by qualified state agencies as presenting either LD or MID. These students were instructed in general classes, but they were also visiting the Resource Rooms of their schools for 2-3 hours per week.
Sampling Procedures
Selection of students was based on the following criteria: (1) the official diagnosis of the above-mentioned state agencies, which was issued less than a year before the beginning of the research, (2) the absence of other difficulties or disabilities, (3) the occurrence of difficulties in mathematics, which were so severe that students were classified by their teachers in the lower 25% of the performance spectrum of their class, and (4) the attendance of a Resource Room for at least one year before the commencement of the study. All students with LD were of average intelligence (M = 96.8, SD = 9.6), while students with MID were of sub-average intelligence, ranging from 60 to 75 (M = 71.8, SD = 6.5), as reported by the state agencies which categorized them.
Sample Size
The participants were randomly allocated to a control and an experimental group. The control group included a total of 30 students (LD = 20 and MID = 10) and the experimental group consisted of 23 students (LD = 19 and MID = 4). The experimental group included initially 30 participants. However, due to various reasons seven students (all with MID) were withdrawn or excluded from the final analyses. Parents of all students consented in written form to the participation of their children in the current research.
Procedures of the Study
In order to distinguish the effect of the alternative ACs' grouping in the context of teaching students with LD and students with MID ACs of multiplication and division, the following procedure was employed: initially, an intervention in line with fundamental principles of effective instruction (Agaliotis, 2011; Griffin, 2004) was organized, and utilized for teaching the control group. To this intervention an alternative grouping of ACs (based on distinct arithmetic characteristics and principles) was added, yielding the instruction used for the experimental group.
Duration, Content and Structure of the Interventions
The total duration of the intervention was 12 teaching hours (lessons), 6 hours for the ACs of multiplication and 6 for the ACs of division. Each lesson lasted 30 to 35 minutes, and there were 2 to 3 lessons per week (4 to 6 weeks). A hundred and twenty randomly selected ACs (60 multiplications and 60 divisions) out of the total of 190 existing ACs for both operations (100 multiplications and 90 divisions) made up the content of the intervention.
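For readers who want to see where the counts of 100 multiplication and 90 division ACs come from, the short Python sketch below enumerates one plausible pool (single-digit factors, non-zero divisors) and randomly draws 60 + 60 items. This is only an illustration consistent with the numbers reported above, not the authors' actual selection procedure.

```python
import random

# One plausible enumeration of the basic arithmetic combinations (ACs):
# multiplication facts a x b with single-digit factors (10 x 10 = 100 ACs),
# and the matching division facts (a*b) : b with a non-zero divisor (10 x 9 = 90 ACs).
multiplications = [(a, b) for a in range(10) for b in range(10)]
divisions = [(a * b, b) for a in range(10) for b in range(1, 10)]
assert len(multiplications) == 100 and len(divisions) == 90

random.seed(7)
sampled_mult = random.sample(multiplications, 60)  # 60 multiplication ACs
sampled_div = random.sample(divisions, 60)         # 60 division ACs

print("example multiplication AC:", sampled_mult[0])  # (a, b) meaning a x b
print("example division AC:", sampled_div[0])         # (c, b) meaning c : b
```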
The intervention based on effective instruction principles (control group intervention) included 5 structural components or steps. 1st step: Review of prerequisite ACs that can facilitate the learning of the ACs of multiplication and division (e.g., review of ACs of addition from the group of "twin numbers" [e.g., 3 + 3] before teaching ACs of multiplication with "2" as one of the factors [e.g., 2 x 3], because of the conceptual relationship and the same result). 2nd step: Direct teaching of the new ACs through the use of the 3 modes of knowledge representation (three-dimensional materials, pictures and symbols), with clear demonstrations and explanations followed by students' repetitions. The presentation sequence of the ACs for the control group followed that of the school textbooks, which use the organization of the well-known multiplication tables.
In contrast, the presentation sequence used for the experimental group followed a different scheme, according to which the 60 multiplication ACs and the 60 division ACs were placed in 7 distinct groups each (14 groups in total). In each lesson one or two groups of ACs were presented, depending on the progress of the students. In other words, the intervention applied in the case of the experimental group was a combination of the principles used for the instruction of the control group and the alternative grouping of the ACs presented in Table 1.
Measures of the Study
Participants' initial and final performance in providing oral answers to written multiplication and division ACs was measured through informal tests. Twenty ACs, written with sizeable numerals, were presented to the participants in one minute, preceded by the question: "Can you please tell me how much is…?" In order to investigate the possible effect of cognitive and learning characteristics of the two disability categories (LD vs. MID) on the results of the interventions, a series of comparisons focusing on key information-processing factors of the ACs' acquisition process was performed. The factors explored, some of the studies identifying their importance, and the tools used in the present study for their investigation, were the following. Verbal skills (Cirino, Fuchs, Elias, Powell, & Schumacher, 2015; Compton, Fuchs, Fuchs, Lambert, & Hamlett, 2011): The measures used for investigating the factor of verbal skills were two subscales (Verbal Analogies and Vocabulary) from the ATHINA Test, which is a Greek standardized and widely used tool for diagnosing students with mild disabilities (Paraskevopoulos, Kalantzi-Azizi, & Giannitsas, 1999). "Verbal analogies" includes 32 couples of sentences, which the examinees have to complete orally, in order to prove that they understand the analogies of the referred concepts (e.g., "the table is square, the sun is ……"). In the "Vocabulary" subtest examinees are asked to provide oral definitions of 20 orally presented verbs and nouns (e.g., "neglect", "apple") (α = .81).
Processing speed (Compton et al., 2011): Participants' processing speed was examined through a measure used also in the study of Compton et al., who presented students with 20 rows of six numbers each, and asked them in one minute to locate and circle the two identical numbers of each row (e.g., 9, 4, 6, 8, 9, 3) (α = .79).
Phonological short-term memory (Gathercole & Pickering, 2001): The "memory of verbal sequences" subscale of the ATHINA Test (Paraskevopoulos et al., 1999) was utilized in order to perform this examination. This sub-scale includes 16 gradually increasing digit sequences, which the examinee is asked to repeat orally, after the initial oral presentation by the examiner (α = .81).
Counting skills (Jordan, Kaplan, Locuniak, & Ramineni, 2007): Counting skills were controlled through the "Counting Skills" subscale of the ATHINA Test. The examinees were asked to skip count in direct and reverse order by 2 up to 12, by 3 up to 18, by 4 up to 24, by 5 up to 35, and by 6 up to 30 (α = .78).
Properties of operations (Cowan et al., 2011): Knowledge of the properties of operations was tested through a variation of a procedure used by Cowan et al. (2011), in order for the procedure to be better adapted to the curriculum by which the present participants were taught. The aim was to determine whether the participants could use known ACs to find unknown ones, based on operation properties. The task included a total of 20 couples of ACs, embedded in questions like: "If you know that 5 x 4 = 20, then how much is 4 x 5 = ...", "If you know that 72 : 8 = 9, then how much is 8 x 9 = ..." (α = .78). Fluency in the use of ACs (Cowan & Powell, 2014; Bryant et al., 2011): The fluency with which participants used ACs before and after the intervention was tested through a trial that included 20 multiplication ACs (e.g., 2 x 6 = ..., 7 x 7 = ..., 8 x 6 = ...) and 20 division ACs (e.g., 20 : 4 = ..., 18 : 2 = ..., 24 : 6 = ...), to which students had to provide written answers in one minute (α = .77). Generalization of ACs (Tournaki, 2003): Participants' ability to generalize the use of acquired ACs was tested through two measures, one for the multiplication ACs and one for the division ACs. Each measure contained 10 tasks with 3 one-digit numbers of the form "(3 x 5) + 1 = ..." and "(12 : 4) + 1 = ...", which the participants were allowed to answer without time limit (α = .79).
Fidelity of Implementation
The interventions were implemented by a total of 18 Resource Room teachers, who taught the participants in groups of 2 to 3 students. All teachers had studied special education at undergraduate or postgraduate level, and their average teaching experience in special primary educational settings was 10 years. Prior to the commencement of the program, the 18 teachers received 8 hours of training by the researchers on the implementation of the interventions. Moreover, every week they received feedback from the researchers on the quality of program application, during meetings that took place in the schools. Implementation fidelity was established by systematic observation and precision recordings, on a 4-point scale (1 = low to 4 = excellent), of at least four lessons of each intervention group. Conformity indicators were related to the quality of implementing the 5 components of effective instruction and to the degree to which teachers complied with the instructions received from the researchers (Bryant et al., 2016). In all cases the results showed high quality of application.
Statistics and Data Analysis
Results of the Shapiro-Wilk test indicated that some variables did not satisfy the condition of normal distribution.
For this reason, non-parametric tests were used (Mann-Whitney and Wilcoxon Signed Rank tests).
The differences in the impact of the interventions (differences between intervention groups and between students with LD and MID) were determined through the calculation of the effect size (Thalheimer & Cook, 2002).
Calculation was performed through the formula r = Z / √N. Use of this formula is recommended when the research population is small and the analyses are made with non-parametric tests (Fritz, Morris, & Richler, 2012). The Z in the formula represents the conversion of the individual data into a standardized format, and shows how many standard deviations below or above the population mean a raw score is. The N is the total number of participants' data. For example, if an individual's score is 18 and the mean score is 15, with SD 3.5, the Z score is (18 − 15) / 3.5 ≈ 0.86. Correlations between cognitive and learning variables of students with LD or MID, on the one hand, and fluency in the use of multiplication and division ACs, on the other, were calculated through Spearman's rho.
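As a concrete illustration of the analysis pipeline described above, the following Python sketch runs a Mann-Whitney U test, converts U to a z value with the usual normal approximation (no tie correction), computes the effect size r = Z/√N, and computes a Spearman correlation. The data are random placeholders, not the study's data, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

def mann_whitney_effect_size(x, y):
    """Mann-Whitney U test plus effect size r = Z / sqrt(N) (Fritz et al., 2012).
    Z comes from the normal approximation of U; ties are ignored here."""
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    z = (u - n1 * n2 / 2.0) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    r = abs(z) / np.sqrt(n1 + n2)
    return u, z, p, r

# Placeholder fluency scores for a control (n = 30) and an experimental (n = 23) group.
rng = np.random.default_rng(1)
control = rng.integers(0, 20, size=30)
experimental = rng.integers(0, 20, size=23)

u, z, p, r = mann_whitney_effect_size(control, experimental)
print(f"U = {u:.1f}, z = {z:.3f}, p = {p:.3f}, r = {r:.2f}")

# Spearman's rho between a cognitive factor and final fluency (placeholder data).
rho, p_rho = spearmanr(rng.normal(size=23), rng.normal(size=23))
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```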
Comparison between Pre- and Post-Intervention Results of the Intervention Groups
There were no significant differences between the control and the experimental group in the initial fluency of multiplication ACs (U = 333.5, z = -.208, p = .835) and division ACs (U = 307.5, z = -.712, p = .476) (Table 4).
Intra-group comparisons revealed that the control group presented a significant difference between initial and final fluency in the use of multiplication ACs (z = -4.713, p = .000) and division ACs (z = -4.541, p = .000). In the experimental group the differences between initial and final fluency were also statistically significant for multiplication ACs (z = -3.989, p = .000) and division ACs (z = -4.205, p = .000) (Table 4).
Inter-group comparisons between the experimental group and the control group showed that there were no statistically significant differences between the two groups regarding the post-intervention fluency (U = 319.5, z = -.460, p = .645) and generalization (U = 339.00, z = -.110, p = .913) of multiplication ACs. Moreover, the effect of alternative grouping was negligible (r = .05). In the case of division ACs, post-intervention differences between the experimental group and the control group were not statistically significant for fluency (U = 269.0, z = -.767, p = .443) and generalization (U = 260.00, z = -.951, p = .342), while the effect of alternative grouping of ACs was negligible for fluency (r = .08) and small for generalization (r = .19) (Table 4). In summary, both research groups showed significant post-intervention improvement in the fluent use of the ACs of both operations in relation to their initial performance; however, the post-intervention differences between the two groups were insignificant both for multiplication and division ACs.
Comparisons between Students with LD and MID
Results of comparisons between students with LD (n = 39) and students with MID (n = 14) showed that there was no significant difference in pre-intervention fluency in the use of multiplication ACs (U = 263.5, z = -.193, p = .847), while the difference between the two groups in the fluency of division ACs was marginally not significant (U = 184.0, z = -1.901, p = .057), with students with LD having higher mean scores (Table 5).
Intra-categorical comparisons showed that students with LD presented a significant difference between pre- and post-fluency in the use of multiplication ACs (z = 5.211, p = .000) and division ACs (z = -4.681, p = .000).
In summary, students with LD and MID showed significant improvement in the fluent use of the ACs of both operations in relation to their initial performance. Moreover, there were no significant differences between students with LD and MID either in fluency or in generalization of division ACs.
Correlations between Cognitive-Learning Factors and Fluent Use of ACs
Table 6 presents the results of correlations between cognitive and learning factors, on the one hand, and fluency in the use of multiplication and division ACs, on the other, after the completion of interventions for students with LD and MID. It was found that in the case of students with LD information processing speed had a significant correlation with the final fluency in the use of ACs of both operations, while phonological short-term memory was correlated only with the fluency of division ACs. In students with MID, processing speed had a significant correlation with the final fluency in the use of multiplication ACs, while the fluency of division ACs had no significant correlation with any cognitive factor (the highest, although still statistically insignificant, correlation appeared with information processing speed). Regarding the two learning factors, counting skills and properties of operations, results showed a significant correlation with the fluency of ACs of both operations (multiplication and division) for students with LD and students with MID (p < .01) (Table 6).
Discussion
The present research compared the effectiveness of two teaching interventions for supporting students with LD or MID in achieving fluency and generalization in the use of multiplication and division ACs. The control group received an intervention based on principles of effective instruction. The experimental group was supported through a synthesis, which consisted of the intervention used for the control group and an alternative grouping of ACs based on distinct AC characteristics. Results of the comparisons between the two groups reflect the impact of alternative grouping, as it was the only instructional component differentiating the two interventions.
Furthermore, this research compared students with LD and students with MID regarding the cognitive factors and the specific mathematical prerequisites affecting the learning of ACs, in order to reveal similarities and differences between the two disability groups.
Effect of Alternative Grouping of ACs
According to the results, students in both groups showed significant improvement in the fluency of ACs, in comparison to their initial performance. This result is consistent with results from other studies, which have shown that interventions grounded in principles of effective instruction have a positive effect on the performance of students with severe learning difficulties or disabilities (Agaliotis et al., 2003; Bryant et al., 2016; Re, Pedron, Tressoldi, & Lucangeli, 2014). On the other hand, comparisons between the results obtained only through the application of effective instruction principles, and the results produced by the combination of effective instruction with the alternative grouping of ACs, showed no significant differences in fluency and generalization of ACs. This is in line with results of meta-analyses and comparative intervention studies (e.g., Carr, Taasoobshirazi, Stroud, & Royer, 2011; Codding et al., 2007; Kroesbergen & Van Luit, 2003; Methe, Kilgus, Neiman, & Riley-Tillman, 2012; Woodward, 2006) which have concluded that there are no significant performance differences between groups of students receiving comparable evidence-based practices for acquiring ACs. Regarding the present research, the view can be supported that the positive results obtained through the high-quality intervention used for instructing the control group were probably not easy to exceed significantly through the mere addition of the alternative grouping of ACs used for the experimental group.
An additional explanation for the absence of difference between the results of the two interventions applied in the present research might be found in the view of Garnett (1992) that alternative grouping may facilitate the memorization of ACs that are interrelated on the basis of a clear rule (such as the principle that "the product of any number multiplied by '1' is the same number"), but not of ACs that cannot be easily grouped under a particularly distinctive feature (e.g., 7 x 8, 6 x 4). Consideration of this view should have led to the use of two distinct groups of ACs in the final assessment of the present study: one group consisting of combinations that are easy to memorize (e.g., multiplications and divisions of twin numbers, multiplications and divisions by 1, etc.) and another group including the more loosely connected ACs. Comparison of the fluency in the use of ACs from the two groups would yield a better estimate of the potential of alternative grouping. However, no such differentiation was used in the present study.
Comparisons between the Disability Categories
In reference to the categories of special needs (students with LD and students with MID), the results showed that students of both groups significantly improved learning of multiplication and division ACs. Comparisons of the final performance showed that students with LD had significantly better performance than students with MID in the fluency of multiplication ACs, while the differences were not significant in the fluency of division ACs. These results agree with those obtained by Van Luit and Naglieri (1999), who found that students with LD and students with MID significantly improved in the fluent use of multiplication and division ACs, with students with LD outperforming students with MID. One possible explanation for the differences between the two disability groups may be found in the more extensive limitations in cognitive and learning factors characterizing students with MID in comparison to students with LD (Calhoun & Mayes, 2005; Kroesbergen & Van Luit, 2003).
The fact that the difference between students with LD and students with MID in the final fluency was significant for multiplication ACs, but insignificant for division ACs, may, at least partly, be explained by the characteristics of the strategies usually employed for finding the results of the two AC groups. Specifically, multiplication ACs (e.g., 5 x 9 = 45) are usually recalled by most students directly from long-term memory (Baroody et al., 2009; Campbell, 2008), while division ACs are more often calculated on the basis of multiplication, and even subtraction or addition ACs. This holds also for typical students and is rather due to both idiosyncratic and acquired strategies traditionally used in daily school practice (Crawford, 2003; Robinson et al., 2006). For example, the result of "21 : 3" may be found by seeking the number that needs to be multiplied by 3, in order for the result "21" to be obtained ("7"). In another example, the result of 32 : 8 can be found by counting the times the divisor (8) is added repeatedly until the sum reaches the dividend, i.e., 8 + 8 = 16, 16 + 8 = 24, 24 + 8 = 32, with the reply being "4". In other words, it is not rare for students to stop trying to memorize division ACs, as soon as they learn a number of ACs of other operations, which they can utilize to find division ACs (Robinson et al., 2006). Nonetheless, no matter how effective they may be, these processes are certainly time-consuming, especially in early stages of learning ACs; hence, they may affect students' performance, especially in timed trials. Although in the present study instruction was geared toward direct retrieval of division ACs from memory, it is possible that some students with LD used time-consuming techniques, which they had acquired prior to this intervention, thus hindering the emergence of their superiority over their peers with MID in the timed tasks of the present study. The difficulty of testing fluency in the case of ACs that may be found through the use of ACs from other operations has been identified by other researchers too (e.g., Campbell, 2008; Woodward, 2006). On the other hand, students with MID, who did not possess division ACs prior to the received instruction, may have exploited the implemented intervention to improve their performance substantially and, thus, diminish the gap to their peers with LD.
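The two backup strategies described above can be made concrete with a toy sketch (purely illustrative, not part of the study's materials): finding a quotient through the related multiplication fact, and finding it by repeated addition of the divisor.

```python
def divide_via_multiplication(dividend, divisor):
    """Find dividend : divisor by looking for the factor that, multiplied by
    the divisor, gives the dividend (e.g. 21 : 3 -> 'what times 3 is 21?' -> 7)."""
    for candidate in range(dividend + 1):
        if candidate * divisor == dividend:
            return candidate
    return None  # not an exact basic fact

def divide_via_repeated_addition(dividend, divisor):
    """Find dividend : divisor by adding the divisor until the running total
    reaches the dividend (e.g. 32 : 8 -> 8, 16, 24, 32 -> four additions -> 4)."""
    total, count = 0, 0
    while total < dividend:
        total += divisor
        count += 1
    return count if total == dividend else None

print(divide_via_multiplication(21, 3))     # 7
print(divide_via_repeated_addition(32, 8))  # 4
```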
Regarding generalization of multiplication and division ACs, students with LD and MID showed, rather unexpectedly, similar performance. One possible explanation for this result can be found in the characteristics of the generalization tasks used in the present study, which presented significant conceptual and procedural proximity to the tasks used in the main teaching phase [e.g., main teaching phase "3 x 4", generalization phase "(3 x 4) + 1"]. Because of this proximity and regardless of their category of disability (LD or MID), students were perhaps able to transfer knowledge from acquired ACs to respond to the task of generalization. Generalization tasks with greater conceptual and procedural distance from the tasks of the main instruction could have produced different results. However, it should be noted that the obtained results show that students with MID seem to be able to transfer knowledge to new tasks, when those tasks differ slightly from the knowledge they already possess. Careful and gradual increase of the distance between acquired knowledge and new tasks may allow students with MID to widen their opportunities for applying the gains they make at school, as also observed by Griffin (2004).
Correlations between Cognitive-Learning Factors and ACs' Learning
Regarding the correlation between cognitive and learning factors of students with LD and MID, on the one hand, and exhibited progress, on the other, it was revealed that the common cognitive factor for both categories of educational needs that correlated with fluency of ACs was the processing speed of arithmetic information. The result is consistent with the research of Fuchs et al. (2006), who investigated third grade students with and without learning difficulties (e.g., learning disabilities, speech and language difficulties and behavioral problems) and found that processing speed is a crucial factor for the fluency of ACs. Processing speed facilitates the concurrent processing of the information constituting an AC, namely (a) the two numbers, (b) the operation involved and (c) the result. The outcome of the simultaneous presence and processing of these elements is the storage and maintenance of each AC as an integrated structure, which is easy to recall with all its constituent parts (Compton et al., 2011; Ellemor-Collins & Wright, 2009).
Results of the present research showed that working memory is not a main factor for fluency in the use of ACs both for students with LD and MID. This is not to say that working memory does not play an important role in ACs learning, but it should be taken to mean that this role is not as vital as it is probably in the case of algorithms or problem solving. Such claims have been made also by other researchers (e.g., Butterworth, 2005; Cirino et al., 2015; Fuchs et al., 2006; Maehler & Schuchardt, 2009).
The present study showed also that properties of operations and counting skills have significant correlations with the fluency of ACs, both for students with LD and with MID. The finding is consistent with results of other studies, which also showed the importance of these two basic arithmetic skills in learning ACs (e.g., Cowan et al., 2011; Jordan et al., 2007; Murphy, Mazzocco, Hanich, & Early, 2007; Toll & Van Luit, 2013).
Besides similarities, the present study showed some differences between students with LD and students with MID regarding the correlation between cognitive factors and learning of ACs. Specifically, results showed that in students with LD phonological short-term memory was associated with the fluency of division ACs, but not with multiplication ACs, while in students with MID phonological short-term memory was not associated with learning ACs of any of the two operations. The small correlation of phonological short-term memory with the fluency of multiplication ACs (Alloway & Passolunghi, 2011; Shin & Bryant, 2013) and the significant correlation with the fluency of division ACs was an unexpected finding, the interpretation of which exceeds the aims of this research.
Limitations of Research
This study has several limitations that should be considered when interpreting the results. One such limitation refers to the sample size, particularly to the category of students with MID. Future research should include larger populations with LD and MID to better examine existing trends.
Another limitation is associated with the small number of learning factors examined as to their correlation with participants' final performance. Considering that there are probably more learning factors associated with both the direct recall of ACs and the processes for finding the result (such as the number sense or the fact strategies), it is obvious that they should be examined in the context of future research.
General Conclusion
Despite the limitations, the present findings may be regarded as supportive of the position that interventions for ACs learning based on the principles of effective instruction are beneficial for both students with LD or MID. Alternative grouping of ACs can contribute to the effectiveness of interventions, without necessarily producing significantly better results. Moreover, well-designed interventions for the teaching of ACs may reduce, but not completely eliminate, the effect of cognitive and learning factors differentiating students with LD and students with MID. Information processing speed seems to be a common decisive factor of ACs learning both for students with LD or MID, whereas knowledge of operation properties and counting skills, and to a lesser extent phonological short-term memory, seem to differentiate them.
Table 2.
Characteristics of participants
Table 3.
Cognitive and learning factors of students with LD and students with MID
Table 4.
Results on multiplication and division ACs of research groups
Table 5.
Results on multiplication and division ACs of students with LD and MID | 2018-12-21T16:22:00.071Z | 2016-08-30T00:00:00.000 | {
"year": 2016,
"sha1": "7999f7a61278eba732532665d01ed5f9d262e11d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5539/jel.v5n4p90",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7999f7a61278eba732532665d01ed5f9d262e11d",
"s2fieldsofstudy": [
"Education",
"Mathematics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
235434078 | pes2o/s2orc | v3-fos-license | Assessment of immigrant detention centers and detainees health status in Libya
Migrants are extremely vulnerable to various risks, including the lack of physical and mental health care. The weakness of the health system in Libya is further undermined by the fragile and insecure environment, limited access, threats to health care workers and increased social and economic challenges. This study examined the general environment of migrant Detention Centers (DCs) in Libya and the health of their detainees. Information was collected during visits to DCs, using standard questionnaires, to assess the structure, organization, financing, processes occurring in the center upon arrival of detainees, accommodation, water, sanitation and hygiene, food and nutrition, health-care services, and the health status of detainees, including their general health, chronic conditions, acute challenges such as infections (TB, STD/HIV, hepatitis and malaria), and violence. Mental health was assessed using standard tools. Special questions were constructed for pregnant women and under-five children. Sixteen DCs were visited. Thirteen of them had children in their premises, while ten detained women. Of the 427 detainees interviewed, more than half were younger than 25 years of age. The overall environment and amenities were inadequate or poor. In more than half of the DCs, deliveries did occur inside the DC itself. According to DC managers, the most common causes of death were TB, malnutrition and depression. The prevalence of acute and chronic illnesses, including mental conditions, was determined. Specific actions are proposed for each issue, in particular establishing/reviving a dedicated health center to meet the individual and public health needs of migrants. Research Article. Adel El Taguri (1,2)* and Aisha Nasef (3,4); 1 National Center for Accreditation of Health Establishments, Libya; 2 Community Department, University of Tripoli, Libya; 3 Authority of Natural Science Research and Technology, Libya; 4 Scientific Council of Laboratory Medicine, Medical Specialty Council, Libya. Received: 12 April, 2021; Accepted: 04 May, 2021; Published: 05 May, 2021. *Corresponding author: Adel El Taguri, National Center for Accreditation of Health Establishments, Libya, Tel: +218910419561; E-mail:
Introduction
The world is currently facing what seems to be the largest ever wave of Displaced-Immigrant-Refugee "DIR" population in history. The Eastern Mediterranean Region (EMR) is carrying the largest burden as more than half of these are in this region. Libya has always been an attractive departure point for economic migrants from Africa and the Middle East.
With the longest coast on the Mediterranean on its northern border and its long and difficult-to-protect other borders, Libya became a major crossroads to Southern Europe, not least for human trafficking and irregular immigration. At the end of the previous decade, it was estimated that there were more than one and a half million immigrants in Libya. The environment for humanitarian actors in countries of the whole region is fragile and insecure, with limited access, threats to health care workers, and increased social and economic challenges. Libya is a particular case in the region, as most of these migrants are of mixed origin populations coming even from remote areas and for many variable reasons.
As DIR are extremely vulnerable to human rights abuses, particularly the lack of or denial of physical and mental health care, protection services should be made available and strengthened for vulnerable migrants, with particular emphasis on victims of trafficking (VoTs); Unaccompanied Migrant Children (UMC); migrants with serious medical conditions (including HIV/AIDS); and other categories of those at risk.
Despite the current situation of insecurity, a lack of rule of law and the loss of financial stability, Libya is still an important transit and destination country for migrants. In certain instances, migrants remain stranded in Libya and are caught by the authorities, or they become easy targets for the smuggling networks which promise safe travel to desperate people willing to embark on a dangerous trip by sea to Europe.
Since 2014, transiting migrants, primarily from East and West Africa, have continued to exploit Libyan political instability and weak border controls and use the country as a primary departure point to migrate across the central Mediterranean to Europe [1]. The total population of migrants in Libya had been about 700,000 - 1 million people, mainly coming from Egypt, Niger, Sudan, Nigeria, Bangladesh, Syria, and Mali. The constant tragedies in the Mediterranean, coupled with the deteriorating situation of the local population, make it necessary to address the instability in Libya through various interventions. The Department for Combating Illegal Migration (DCIM), affiliated to the Interior Ministry, managed the formal migrant detention centers. Of note, there are non-formal migrant detention centers run by smugglers and traffickers.
Libya's health sector capacity has been burdened and under-resourced. The repeated emergencies have not allowed a proper recovery of public sector services. A Service Availability and Readiness Assessment survey, conducted by the WHO and the Ministry of Health, showed that the health system has practically collapsed [2]. Although Libya's health system is largely a public-health oriented health system, it is weak in essence, with a debilitated Primary Health-Care (PHC) network and neglected health services. Only a few previous assessments of migration and of Detention Center (DC) conditions have been performed [3]. These assessments concluded that more services should be tailored to these vulnerable migrants.
In this study, a survey was designed to assess objectively the general environment in and surrounding the DCs in Libya and the corresponding health situation of the detained immigrants.
Population and data collection
Information was collected by different qualitative and quantitative means. It included structured visits to DCs all over Libya using standard questionnaires and discussions with detainees. The focus of the current survey is the status of immigrants in DCs. The list includes all DCIM-affiliated and non-affiliated DCs, to construct a better picture of the situation and challenges facing immigrants in Libya.
It should be noted that these would only represent a fraction of the total number of immigrants in the country as thousands of immigrants are employed and are contributing actively in Libyan society. The assessment done involved all stakeholders and included accommodation, water, sanitation and hygiene, food and nutrition, health-care services, and health status of detainees.
Assessment tools
The survey involved two questionnaires. The first questionnaire was designed to gather data about DCs. The second questionnaire was designed to collect data from detainees.
Detention centers tool
The first questionnaire contains general data about the visited centers, including structure, organization, financing, processes occurring in the center upon arrival of detainees and how health is taken care of, in addition to shelter and hygiene (Appendix 1).
Detainees tool
The second questionnaire contains questions addressed to detainees in order to assess their general health, chronic conditions, acute diseases, in particular infections, and the violence that they might have been exposed to. Other data were obtained regarding general health and some of the most important infections, such as tuberculosis, STD/HIV, hepatitis and malaria, as well as mental health, using standard mental health tools. Special appendices were constructed for pregnant women and under-five children and added to this tool.
Detention centers
Sixteen DCs were visited. Fifteen of these DCs are recognized as affiliated to the authority designated by the Libyan government (DCIM), and one is non-affiliated but permitted the visiting and surveying of detainees. Ten (62.50%) DCs found that the budget was not enough in the last year. Thirteen DCs (81.25%) have children in their premises. Three (18.75%) of these contain more than 50 children. Ten (62.50%) of these centers also detain women. One center contains 520 women.
Among the DCs, only three centers were considered part of the National Program for Immunization, two centers for TB and only one center for HIV. In 12 of these DCs, the health unit works only during official working hours (until 14:00). Water supply was considered adequate in 13 DCs (81.3%). Five (31.25%) of the DCs have visible fissures in the walls and/or ceilings. Humidity (molds) was visible to the naked eye in one of the DCs. There is evidence of dense presence of insects and/or rodents in seven (46.7%) DCs. In only two DCs, bed linen and covertures were appropriate. They were regularly cleaned and replaced in seven (43.8%) DCs. Enough beds were reported in five (less than 1/3) of the visited DCs.
Detainees are allowed in open air in all DCs. In six (43.8%) DCs, they are allowed just one hour or less. The remaining DCs allow detainees to stay in open air for more than two hours per day. Detainees in almost 2/3 of DCs work during their detention. In most of the cases this happens inside the DCs.
In only five DCs, the food presented was of enough quantity. One third of DCs receive food donations, but this occurs less than twice per month. Premises where food is prepared were not considered suitable in more than 3/4 of DCs. Forty-three percent of food handlers are not trained. Soap was not available in 42.9% of food distribution halls. In two DCs, hygiene in food halls was poor or very poor.
Only in three DCs does the medical unit supervise hygiene in the DC premises. In a quarter of DCs, there are no regular visits by the medical unit. It is necessary to have a pre-requested permission to seek medical advice in seven (43.8%) of the DCs. Two thirds found that this permission to seek medical advice was fast, and more than 90% found it easy enough to obtain the permission.
There was an isolation room in 71.4% of DCs. According to staff, 84.6% consider that they need more drugs. In only nine DCs, newcomers are subjected to medical assessment at entry. In more than half of DCs, deliveries did occur inside the DC itself. More than 3/4 of DCs have reported cases of scabies and/or pediculosis. In two DCs, detainees were reported to have scorpion/snake bites. More than 2/5 of detainees reported exposure to some form of violence, half of them outside the DC before their arrival. In only two DCs were there health promotion activities, which were devoted to mental health. One hundred fifty (35.10%) detainees reported that they were exposed to some form of physical violence from the start of the journey till arrival at the DC. The majority of these incidents of abuse (125) involved exposure to violence during the journey, either outside Libya (30 detainees, 7.0%), inside Libya (90 detainees, 21.1%) or in both (five detainees, 3.6%). The remaining 25 detainees had been exposed to violence during arrest or inside the current or a previous DC. The types of physical violence they were exposed to are shown in Figure 3.
One third (52 detainees) had depression according to the PHQ-2 screening tool, while half had anxiety (Table 1) and only 7% had Post-Traumatic Stress Disorder (PTSD). More than half of detainees were not satisfied with the medical services delivered to them.
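For reference, the PHQ-2 mentioned above sums two items, each scored 0-3, and a total of 3 or more is the conventional positive-screen threshold. The snippet below is a generic scoring sketch under that standard cutoff; the paper does not state which threshold was actually applied.

```python
def phq2_score(interest_item: int, mood_item: int) -> int:
    """Sum the two PHQ-2 items (each scored 0-3): little interest or pleasure,
    and feeling down, depressed or hopeless, over the last two weeks."""
    assert 0 <= interest_item <= 3 and 0 <= mood_item <= 3
    return interest_item + mood_item

def phq2_positive(interest_item: int, mood_item: int, cutoff: int = 3) -> bool:
    """Positive depression screen when the total reaches the cutoff (default 3)."""
    return phq2_score(interest_item, mood_item) >= cutoff

print(phq2_positive(2, 1))  # True  (total 3, at the standard cutoff)
print(phq2_positive(1, 1))  # False (total 2)
```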
There were 32 pregnant women. About half of them (46.9%) did not have antenatal visits. The absence of complications and the non-availability of the service were the most common reasons (almost 1/3 each).
Fifty-one children under five years of age were approached. Almost 1/3 had diarrhea and 2/5 had cough and respiratory difficulty in the two weeks preceding the survey. Symptoms of stress among children, such as difficulty sleeping (41.2%) and frequent and easy crying (29.4%), were also common. Given the conditions the migrants are particularly exposed to, their vulnerability to many of the illnesses encountered before, during and after migration would require a particular set of skills. The presumed center would implement the executive functions handed to it by the currently established division at the Ministry of Health, which are now implemented by many fragmented service providers. It could also function as a house of expertise for these morbidity and mortality patterns in the country and in the whole region.
Figure 3: The type of physical violence the detainees were exposed to during the journey until arrival to the current DC (categories: beaten, stick beaten, firearm, electric shock, more than one form).
Table 2: Main issues and proposed measures related to migration and impact of health on individuals and public in Libya (Detention Centers).
Main Issues Proposed Measures
Budget not enough for running DCs and meeting requirements. Ensuring and allocating adequate resources from different local and international sources. Many DCs detain children and women. Special consideration to the presence of women and children and vulnerable groups in DCs.
Visible fissures in the walls and/or ceilings, humidity (molds). Constructing, maintaining, or allocating suitable buildings to be used for detention if needed or if considered as a must.
Evidence of dense presence of insects and/or rodents. Training on/outsourcing the service to institutions or agencies for ensuring insect/rodent-free premises.
Inadequate water supply in some DCs. Measures for ensuring adequate water supply to all DCs. Might include relocation of detainees.
Health unit works only during official working hours. Ensuring 24-hour coverage of basic medical/health services in each facility or through networking. Networking is mandatory for services not to be delivered by the health unit.
Many DCs were not considered part of National programs for Immunization, TB, or HIV.
Considering migrants (intra/Extra mural) in planning and delivering services within national vertical programs as tuberculosis, HIV/STDs, immunizations and others.
Non-regular cleaning and replacement of bed linen. Ensuring appropriate amenities (regular cleaning and replacement), introducing appropriate hostel/room services.
Inadequate number of beds. Allocation of financial resources and goods.
Inadequate quantities of food. Limited external aid that is almost non-existent.
Allocation of resources, provision of adequate food supply, ensuring regular, consistent, well-supervised donations.
Non-suitable food premises, poor hygiene, limited availability of soap.
Constructing and maintaining food halls within DCs, implementing standards of food delivery in collective settings, provision of sanitary goods such as soap and towels.
Non-trained food handlers. HACCP training/certification of food distributors and handlers according to norms. Medical units not supervising hygiene in food premises; hygiene in food halls was poor or very poor. Standardization of the health/medical package delivered in DCs to include food safety, food hygiene and food handling.
Referral of sick detainees to private clinics and public hospitals.
Standardization and proactive organization of the referral system process, including the trigger, flow, documentation and payment if necessary.
The necessity of having a pre-requested permission to seek medical advice. Some DCs find that the permission to seek medical advice, albeit easy, might not be as fast.
Standardization and proactive organization of the referral system process, including the trigger, flow, documentation and payment.
> 90% do not keep files for patients, but ½ keep registers for them. In ≈ 2/3 of DCs, access to these documents is not limited to physicians.
Proper registration and record keeping that contain data that would be useful for planning and management and also for proper follow-up of patients including data needed for the future and upon departure while keeping the data secured.
Informed consent is not universally requested before performing blood investigation.
Raising collective, community and personnel awareness including legislative, religious, professionalism and good manners, ethics and human rights.
Not all DCs have isolation rooms.
Restructuring and financing of medical units.
Shortages of drug supplies as perceived by DC managers. Provision of enough but supervised drug supply (drug management system) Not all newcomers are subjected to medical assessment at entry.
Standardization of procedures and process in management of detainees from detention till departure including all dimension of health (physical, mental and social wellbeing) and to include promotive, preventive, curative, rehabilitative and palliative healthcare services. Special focus on fragile groups as pregnant women, children, elderly and adolescents.
Limited health promotion activities which were devoted to mental health.
Standardization of management of detainees including all dimension of health (physical, mental and social wellbeing) and to include promotive, preventive, curative, rehabilitative and palliative healthcare services. Special focus on fragile groups as pregnant women, children, elderly and adolescents.
>3/4 of DCs have reported cases of scabies and/or pediculosis. Proper sanitation and hygiene of premises and individuals.
Reported cases of scorpion/snake bites. Location of centers to be in safe places, protective measures including those against dangerous insects/ animals.
>2/5 were subjected to some form of violence, ½ of them outside the DC before their arrival.
Raising collective, community and personnel awareness including legislative, religious, professionalism and good manners, ethics and human rights.
Most common causes of death among detainees according to DCs managers are Tuberculosis, Malnutrition, and Depression.
Special emphasis in medical services on infectious diseases, mental health and proper nutrition, and provision of adequate food supply. These should include early detection and active surveillance and management. > ½ had no job before arrival or did not specify it. The job range of detainees varied widely: farmers, construction workers, nurses, engineers, English teachers, clothes designers, carpenters.
Regularization of status of migrants who could participate in development of local economy.
In the 6 months preceding the survey, 2/5 of detainees had acute diarrhea, 7% had food poisoning, 1/3 had skin diseases such as scabies and/or pediculosis, 14.7% had respiratory infections, and 3% reported snake/scorpion bites. Improving service delivery to detainees including general hygiene, proper nutrition and provision of safe food. Emphasis to be put on promotive, preventive and early detection measures, | 2021-06-14T18:21:12.938Z | 2021-05-05T00:00:00.000 | {
"year": 2021,
"sha1": "ce06e405e1e8acb073fb032a84479fffbbab448c",
"oa_license": "CCBY",
"oa_url": "https://www.peertechzpublications.com/articles/JCEES-7-141.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce06e405e1e8acb073fb032a84479fffbbab448c",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73716828 | pes2o/s2orc | v3-fos-license | Quark-Hadron Phase Transitions in Viscous Early Universe
Based on hot big bang theory, the cosmological matter is conjectured to undergo QCD phase transition(s) to hadrons, when the universe was about $1-10 \mu$s old. In the present work, we study the quark-hadron phase transition, by taking into account the effect of the bulk viscosity. We analyze the evolution of the quantities relevant for the physical description of the early universe, namely, the energy density $\rho$, temperature $T$, Hubble parameter $H$ and scale factor $a$ before, during and after the phase transition. To study the cosmological dynamics and the time evolution we use both analytical and numerical methods. By assuming that the phase transition may be described by an effective nucleation theory (prompt {\it first-order} phase transition), we also consider the case where the universe evolved through a mixed phase with a small initial supercooling and monotonically growing hadronic bubbles. The numerical estimation of the cosmological parameters, $a$ and $H$ for instance, makes it clear that the time evolution varies from phase to phase. As the QCD era turns to be fairly accessible in the high-energy experiments and the lattice QCD simulations, the QCD equation of state is very well defined. In light of this, we introduce a systematic study of the {\it cross-over} quark-hadron phase transition and an estimation for the time evolution of Hubble parameter.
I. INTRODUCTION
According to the standard model of cosmology, as the Universe expanded and cooled down, it is natural to expect that the cosmological background matter should undergo a series of symmetry-breaking phase transitions, at which various topological defects may have formed. The study of the phase transition from quark-gluon plasma (QGP) to hadrons in the early Universe dates back to about three decades ago [1][2][3][4][5]. A first-order phase transition in various scenarios is assumed to take place [6]. In one scenario, it has been suggested that QGP thermodynamically condenses into a hadron gas. In the second scenario, it is conjectured that the Universe was being supercooled and an out-of-equilibrium nucleation of hadron bubbles in the QGP surrounding should take place. In the third scenario, it has been argued that the phase transition took place in accompany with a small supercooling. Apparently, the coexistence of hadrons and QGP is accessible after the nucleation. The latter would generate fluctuations in the isothermal baryon density, i.e., inhomogeneity, and therefore can lead to drastic astrophysical consequences. From Yang-Mills theory, we have learned a lot about the kinetics and order of the phase transition [7]. The lattice QCD is a reliable method describing the strongly interacting matter for the whole temperature range starting from very low temperatures (ground state) to very high temperatures (perturbative QCD). Recently, a remarkable discovery of the QGP properties has been achieved in the heavy-ion collision program [8][9][10][11]. The QGP is likely a strongly correlated phase with finite bulk and shear viscosity.
A first-order phase transition proceeds by bubble nucleation and rapid expansion. When at least 4 − n of these bubbles collide, where n = 0, 1, 2, an n-dimensional topological defect may form in the region between them [12]. Recent lattice QCD calculations for two quark flavors suggest that QCD reliably describes a transition at T c ∼ 173 MeV [13]. It is neither first- nor second-order. With increasing temperature there is a rapid change in all thermodynamic quantities. This phase transition, which could have occurred in the early universe, could lead to the formation of relic quark-gluon plasma objects, which still survive today. It will be elaborated below that the order of the phase transition strongly depends on the mass and flavor of the quarks.
As given above, studying the first-order quark-hadron phase transition in the early Universe has a long history. It can be characterized as follows [12]. As the color deconfined QGP cools down below T c , it becomes energetically favorable to form color confined hadrons (primarily the lightest Goldstone bosons; the pions and a tiny amount of neutrons and protons, due to the conserved net baryon number). However, the new phase does not show up immediately. A characteristic feature of the first-order phase transition is that a part of the supercooling is needed to overcome the energy expense of forming the surface of the bubble and the new hadron phase. When a hadron bubble is nucleated, latent heat is released and a spherical shock wave expands into the surrounding supercooled QGP. This reheats the plasma to the critical temperature, preventing further nucleation in a region passed by one or more shock fronts. Generally, the bubble growth is described by deflagrations with a shock front preceding the actual transition front. The nucleation stops, when the whole Universe has reheated to T c . This part of the phase transition passes very fast, in about 0.05 µsec, during which the cosmic expansion is totally negligible. After that, the hadron bubbles grow at the expense of the quark phase and eventually percolate or coalesce. When neglecting the possibility of the quark nugget production, the transition is assumed to stop, when all QGP has been converted to hadrons.
Depending on the numerical values of the parameters, both deflagrations and detonations can appear. The hadron bubbles can nucleate at very large distance scales and the phase transition may be completed without reheating to the critical temperature. During the low temperature phase in the phase transition the bubble can grow as a supersonic deflagration consisting of a Jouguet deflagration followed by a rarefaction wave. The velocity of the supersonic deflagration varies between the sound and light velocities [14]. The small-scale effects of finite wall width and surface tension have been incorporated in a numerical code, also including both the complete hydrodynamics of the problem and a phenomenological model for the microscopic entropy production mechanism at the phase transition surface [15]. The decaying droplets leave behind no rarefaction wave, so that any baryon number inhomogeneity generated previously should survive the decay.
The nucleation of bubbles, the collisions of shock fronts preceding the bubble, the arrestation of the bubble growth by the reheating, the condensation of the baryon number and the resulting density perturbations after a first-order phase transition through the mixed phase have been studied in a scenario with small initial supercooling and monotonically growing hadronic bubbles [12]. The growth of bubbles after the initial nucleation event in the generic first-order cosmological phase transitions, which is characterized by the latent heat L, the interface tension σ and the correlation length ζ and is driven by a scalar order parameter φ has been considered in Ref. [16]. The mean distance of the nucleation d nuc in a first-order cosmological quark-hadron phase transition has been introduced in Ref. [17]. For a homogeneous nucleation d nuc ≤ 2cm. On the other hand, the impurities can lead to heterogeneous nucleation, with d nuc of several meters. The latter value could change the outcome of the big bang nucleosynthesis. The study of the hydrodynamics of the disconnected quark regions during the final stages of the cosmological quark-hadron transition has been carried out in Ref. [18]. It has been shown that a self-similar solution likely exists. The inclusion of the relativistic radiative transfer produces significantly different results. Furthermore, it enables the formation of high density regions at the end of the drop evaporation [19]. The linear stability analysis of the relativistic detonation fronts, representing the phase interface in first-order phase transitions, has been performed in Ref. [20]. The strong detonations are evolutionary and stable with respect to the corrugations of the front. Moreover, Chapman-Jouguet detonations appear to be unconditionally linearly stable. Taking into account the simultaneous effects of the baryon number flux suppression at the phase interface, the entropy extraction by means of the particles having long mean free paths and baryon diffusion shows that significant baryon number concentrations, up to densities above that of nuclear matter, represent an inevitable outcome within this scenario [20].
The abundance and size distribution of the quark nuggets formed a few microseconds after the big bang due to a first-order QCD phase transition have been estimated in Ref. [21]. The evolution and the collision of slow-moving true vacuum bubbles are examined in Ref. [22]. The comoving bubble walls prevent the formation of extra defects and may lead to an increase of any primordial magnetic field. Within an effective model of QCD, the quark-hadron phase transition was studied in Ref. [23]. In a reasonable range of the parameters of the model, bodies with a quark content between 10 −2 and 10 M ⊙ could have been formed in the early universe. A significant amount of entropy is released during the transition. The density fluctuations amplified by the vanishing sound velocity effect during the quark-hadron phase transition could lead to QGP lumps decoupled from the expansion, which rapidly transform to quark nuggets [24]. The inhomogeneous nucleation, as a new mechanism for the cosmological QCD phase transition, was proposed by Ignatius and Schwartz [25]. In this model the typical distance between bubble centers is of the order of a few meters. The resulting baryon inhomogeneities may affect the primordial nucleosynthesis.
Recent lattice QCD simulations turn to be able to provide an accurate tool to study -among others -the thermodynamics of the strongly interacting matter. The critical temperature T c was a subject of different lattice QCD simulations [26][27][28][29][30][31][32]. We know so far that for two quark flavors (n f = 2) the transition is second-order or rapid crossover and T c ≃ 173 ± 8 MeV. For n f = 3, we have a first-order phase transition and T c ≃ 154 ± 8 MeV. For n f = 2 + 1 i.e., two degenerate light quarks and one heavy strange quark, the transition is again crossover and T c ≃ 173 ± 8 MeV. For the pure gauge theory, T c ≃ 271 ± 2 MeV and the deconfinement phase transition is first-order. In all these lattice QCD simulations, the quark masses are much heavier than their physical values. With recent computational facilities and modern algorithms, it is now possible to use values very close to the physical masses. This raised the critical temperature, for instance, T c ≃ 200 MeV for n f = 2 + 1. From this discussion, we conclude that the order of the phase transition can be either continuous or discontinuous. It depends -among others -on the quark flavors and their masses. The extreme conditions in the early universe, like high temperatures, high densities and out-of-thermal and out-of-chemical equilibrium, likely affect the properties of the partonic matter. Yet, we have no access to study this issue. Recent lattice QCD outputs have been used in [33] to work out the expansion law of the Universe during the cosmological quark-hadron transition. The cosmological behavior found using lattice data was compared with the one obtainable in case the transitions were first-order . The differences between these two scenarios are too small to be tested with cosmological data, but the coming of the era of precision cosmology might open the possibility of testing the nature of the QCD transition by using cosmological data.
In the present work, we consider two cases. First, we assume that the phase transition is of first-order. The cosmological evolutions during the quark and hadron phases are investigated in detail. The main cosmological parameters are obtained for each phase. The hadron fraction h, whose time evolution describes the conversion process, is an important parameter to describe the phase transition and its expression is obtained in an analytical form. h seems to behave as an order parameter. The second part of this study is devoted to an extension of previous works [34][35][36][37][38][39][40], in which we have applied the equations of state deduced from recent lattice QCD simulations at almost physical masses and more accurate lattice configurations in order to study the cosmology of the early universe. With the use of these equations of state we can study the evolution equations of the main physical parameters of the cosmological models. In light of these QCD results, we develop a systematic study of the crossover quark-hadron phase transition and an estimation for the time evolution of the Hubble parameter during the crossover in the presence of bulk viscous effects.
This paper is organized as follows. In Section II, the background geometry and the gravitational field equations are written down and the description of the viscous effects in different theoretical models is presented. In Section III, we lay down the equations of state and the relevant physical quantities necessary for the discussion of the first-order quark-hadron phase transition. In Section IV we analyze in detail the dynamics of the Universe during the first-order quark-hadron phase transition. The phase transition in lattice QCD simulations and in heavy-ion collisions, together with the QCD equation of state (EoS), is discussed in Section V. The cosmological evolution of the Universe during the crossover in the presence of bulk viscous effects is analyzed in Section VI. The cosmological implications of our results are discussed in Section VII. We discuss and summarize our results in Section VIII.
In the present paper we use natural units with c = ℏ = k_B = 1, in which 8πG = 1/m_Pl² = 1.687 × 10^−43 MeV^−2, where m_Pl is the "reduced" Planck mass. The "reduced" Planck time is given by
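As a quick numerical orientation, the following minimal sketch evaluates these constants; the conversion factor ℏ ≃ 6.582 × 10^−22 MeV s and the identification t_Pl = 1/m_Pl are assumptions of the sketch rather than values quoted above.

```python
# Minimal check of the natural-unit constants used in this paper.
# Assumptions: reduced Planck mass m_Pl = (8*pi*G)**(-1/2) and reduced
# Planck time t_Pl = 1/m_Pl, with hbar = c = k_B = 1.
eight_pi_G = 1.687e-43            # MeV^-2, as quoted in the text
m_pl = eight_pi_G**-0.5           # reduced Planck mass in MeV
t_pl = 1.0 / m_pl                 # reduced Planck time in MeV^-1

MEV_INV_TO_SECONDS = 6.582e-22    # hbar in MeV*s
print(f"m_Pl ~ {m_pl:.3e} MeV ~ {m_pl*1e-3:.3e} GeV")   # ~2.4e18 GeV
print(f"t_Pl ~ {t_pl*MEV_INV_TO_SECONDS:.3e} s")        # ~2.7e-43 s
```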
II. GEOMETRY AND FIELD EQUATIONS
We assume that the early Universe is filled with a bulk viscous cosmological fluid and that its geometry is given by a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, where a(t) is the dimensionless scale factor describing the expansion of the universe. At vanishing cosmological constant, the evolution is governed by the Einstein gravitational field equations for a flat Universe. The energy-momentum tensor of the bulk viscous cosmological fluid filling the very early Universe takes the standard imperfect-fluid form [41], where the indices i, k take the discrete values 0, 1, 2, 3, ρ is the energy density, p is the thermodynamic pressure, Π is the bulk viscous pressure and u^i is the four-velocity, satisfying the normalization condition u^i u_i = 1. The particle and entropy fluxes are defined according to N^i = n u^i and S^i = s N^i − (τΠ²/2ξT) u^i, where n is the number density, s is the specific entropy, T is the temperature, ξ is the bulk viscosity coefficient and τ is the relaxation coefficient for the transient bulk viscous effect (i.e., the relaxation time). The evolution of the cosmological fluid is subject to the dynamical laws of particle number conservation, N^i_{;i} = 0, and to the Gibbs equation T ds = d(ρ/n) + p d(1/n) [41]. In the following, we shall also suppose that the energy-momentum tensor of the cosmological fluid is conserved, i.e., T^k_{i;k} = 0, where ; denotes the covariant derivative with respect to the metric.
The bulk viscous effects can generally be described by means of an effective pressure Π, formally included in the effective thermodynamic pressure p_eff = p + Π [41]. Then, in the comoving frame, the energy-momentum tensor has the components T^0_0 = ρ, T^1_1 = T^2_2 = T^3_3 = −p_eff. For the line element given by Eq. (1), the Einstein field equations read as Eqs. (4) and (5), where a dot denotes the derivative with respect to the time t, G is the gravitational constant and H(t) = ȧ(t)/a(t) is the Hubble parameter. Expressions (4) and (5) lead to a generic expression for the time evolution of H, Eq. (6). From the field equations, or with the use of the conservation of the energy-momentum tensor, we obtain Eq. (7) (the Bianchi identity), relating the time variation of the energy density to the Hubble parameter. In order to solve the field equations, we need an equation of state and an estimate of the bulk viscous pressure Π, characterizing the viscous properties of the matter in the expanding universe.
A. Eckart relativistic viscous fluid
The first attempts at creating a theory of relativistic dissipative fluids were those of Eckart [42] and Landau and Lifshitz [43]. These theories are now known to be pathological in several respects. Regardless of the choice of the equation of state, all equilibrium states in these theories are unstable and, in addition, signals may propagate through the fluid at velocities exceeding the speed of light c, violating the causality principle. These problems arise from the first-order nature of the theory [see Eq. (9)]: only first-order deviations from equilibrium are kept, which leads to parabolic differential equations and hence to infinite propagation speeds for heat flow and viscosity, in contradiction with the principle of causality. The conventional theory is thus applicable only to phenomena which are quasistationary, i.e., slowly varying on space- and time-scales characterized by the mean free path and the mean collision time.
The Eckart theory can be applied to model the cosmic background fluid as a continuum with a well-defined average 4-velocity field u^α, where u^α u_α = −1. When no unbalanced creation/annihilation processes take place, the particle number current n^α = n u^α is conserved, n^α_{;α} = 0. This means that ṅ + 3Hn = 0, where the Hubble parameter is defined through the expansion scalar, 3H = u^α_{;α}. In the case of a viscous fluid, the entropy current is no longer conserved. The covariant form of the second law of thermodynamics is S^α_{;α} ≥ 0, while the divergence of the entropy current is given by T S^α_{;α} = −3HΠ. This is another problematic feature of Eckart's theory: the non-negativity of the entropy production is not automatically guaranteed.
The evolution of the cosmological fluid is subject to the dynamical laws of particle number conservation, N^i_{;i} = 0, and to the Gibbs equation T ds = d(ρ/n) + p d(1/n). From the Gibbs equation, the covariant entropy current can be obtained, leading to Eq. (10). This is a linear first-order relationship between the thermodynamical flux Π and the corresponding force H. Substituting it into Eq. (6) results in the evolution equation (11) for Ḣ.
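In the standard Eckart (first-order) theory, this linear flux-force relation and the resulting evolution equation take, for a flat FLRW geometry, the form
\[ \Pi = -3\xi H, \qquad \dot{H} = -4\pi G\,(\rho + p - 3\xi H), \]
which is presumably the content of Eqs. (10) and (11) used later in the text; the exact normalization should be read off from Eq. (6).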
B. Israel-Stewart relativistic viscous fluid
A relativistic second-order theory was introduced by Israel and Stewart [44,45] and further developed by Hiscock and Lindblom [46] within extended irreversible thermodynamics. In this model, the deviations from equilibrium (bulk stress, heat flow and shear stress) are treated as independent dynamical variables, resulting in 14 dynamical fluid variables to be determined. Causal thermodynamics and its role in general relativity are reviewed in Ref. [41]. A general algebraic form for S^α includes a second-order term in the dissipative thermodynamical flux Π [44,45], with β a proportionality constant. For the evolution of the bulk viscous pressure, we adopt the causal evolution equation [41] obtained in the simplest way (linear in Π) that satisfies the H-theorem (i.e., the entropy production is non-negative, S^i_{;i} = Π²/ξT ≥ 0 [44,45]). According to the causal relativistic IS theory, the evolution equation of the bulk viscous pressure is given by Eq. (13), where τ is the relaxation time. In order to have a closed system from Eqs. (4), (7) and (13), we have to specify equations of state for the pressure p, the temperature T and the relaxation time τ.
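For orientation, the full causal evolution equation of the Israel-Stewart theory is usually written in the causal-thermodynamics literature as
\[ \tau\dot{\Pi} + \Pi = -3\xi H - \frac{1}{2}\,\tau\Pi\left(3H + \frac{\dot{\tau}}{\tau} - \frac{\dot{\xi}}{\xi} - \frac{\dot{T}}{T}\right), \]
and this standard form is presumably what Eq. (13) stands for; the truncated version of the theory is recovered by dropping the term in parentheses.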
III. FIRST-ORDER QUARK-HADRON PHASE TRANSITION
In this Section, we outline the relevant thermodynamic quantities of the quark-hadron phase transition, which will be used in the following sections. Note that the scale of the cosmological QCD transition is given by the Hubble radius R_H at the transition: R_H ∼ m_Pl/T_c² ∼ 10 km, where T_c is the critical temperature. The mass inside the Hubble volume is ∼ 1 M_⊙. The expansion time scale is 10^−5 s, which should be compared with the time scale of QCD, 1 fm/c ≃ 10^−23 s. Even the rate of the weak interactions exceeds the Hubble rate by a factor of 10^7. Therefore, in this phase the photons, the leptons, the quarks and the gluons (or pions) are tightly coupled and may be described as a single, adiabatically expanding fluid [17].
At high temperatures T > T_c, the baryon number density n_B may be defined as n_B = (1/3) Σ_q (n_q − n_q̄), where n_q (n_q̄) is the number density of a specific quark (antiquark) flavor and the sum is taken over all quark flavors. In utilizing these relations, QGP matter is assumed to behave as an ideal gas. At T < 1 GeV only the u, d and s quarks contribute significantly. At low temperatures T < T_c the baryon number density is defined as n_B = Σ_b (n_b − n_b̄), with the summation extended over all baryon species b. In order to study the quark-hadron phase transition it is necessary to specify the EoS of the matter in both the quark and the hadron state. Giving an equation of state is equivalent to giving the pressure as a function of the temperature T and the chemical potential µ.
At high temperatures the quark chemical potentials are equal, because of the weak interactions which apparently keep them in chemical equilibrium and the chemical potentials for leptons are assumed to vanish. Thus the chemical potential for a baryon is defined by µ B = 3µ q . The baryon number density of an ideal Fermi gas of three quark flavors is given by n B ≃ T 2 µ B /3, leading to µ B /T ∼ 10 −9 at T > T c . At low temperatures µ B /T ∼ 10 −2 . Therefore the assumption of a vanishing chemical potential at the phase transition temperature in both quark and hadron phase represents an excellent approximation for the study of EoS of the cosmological matter in the early universe. In addition to the strongly interacting matter we assume that in each phase there are present leptons and relativistic photons, satisfying equations of state similar to that of hadronic matter [12].
A. Thermodynamic parameters of the quark and hadronic matter
The equation of state of the ideal gas in the QGP phase can generally be written in a form containing a self-interaction potential V(T), with a_q = (π²/90) g_q, where g_q = 16 + (21/2)N_F + 14.25 = 51.25 and N_F = 2. As given in Ref. [23], the self-interaction potential contains the bag constant B, α_T = 7π²/20 and γ_T = m_s²/4, with m_s the mass of the strange quark, lying in the range (60 − 200) MeV. The form of the potential V corresponds to a physical model in which the quark fields interact with a chiral field formed from the π meson field and a scalar field. If the temperature effects can be ignored, the EoS in the quark phase takes the form of the MIT (Massachusetts Institute of Technology) bag model equation of state, p_q = (ρ_q − 4B)/3. The results obtained from low-energy hadron spectroscopy, heavy-ion collisions and phenomenological fits of the light hadron properties give an estimate for B^{1/4}; it ranges between 100 and 200 MeV [47].
In the hadron phase, we assume that the cosmological fluid consists of an ideal gas of massless pions and of nucleons described by Maxwell-Boltzmann statistics. The energy density ρ_h and the pressure p_h can be approximated accordingly, with a_π = (π²/90) g_h and g_h = 17.25. The entropy densities follow from s(T) = dp/dT in the two phases. The critical temperature T_c is defined by the condition p_q(T_c) = p_h(T_c) [12] and is given, in the present model, by Eq. (20).
For m_s = 200 MeV and B^{1/4} = 200 MeV, the transition temperature is of the order of T_c ≃ 125 MeV. Since the phase transition is of first order, all physical quantities, such as the energy density, the pressure and the entropy, exhibit discontinuities across the critical curve. At the critical temperature, the ratios of the energy and entropy densities in the two phases are given by Eqs. (21) and (22), respectively. For m_s = 200 MeV and B^{1/4} = 200 MeV, the ratios ρ_q(T_c)/ρ_h(T_c), given by Eq. (21), and s_q(T_c)/s_h(T_c), given by Eq. (22), equal 3.62 and 4.628, respectively. We conclude that the energy density and the entropy suddenly decrease to roughly a quarter and a fifth of their values, respectively, when the system undergoes a first-order phase transition at T_c. According to the first law of thermodynamics, the entropy s can be expressed in terms of the pressure p and the energy density ρ, so that at vanishing chemical potential µ, sT = p + ρ. It is apparent that the sudden decrease in ρ nearly equals that in s, at fixed T and constant p. If the temperature effects in the self-interaction potential V are neglected, α_T = γ_T ≃ 0, then from Eq. (20) we obtain the well-known relation between the critical temperature and the bag constant, B = (g_q − g_h) π² T_c⁴/90 [12].
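The simplified relation above can be checked numerically. The sketch below assumes the plain bag-model forms ρ_q = 3a_q T⁴ + B and ρ_h = 3a_π T⁴ (i.e., it ignores the α_T and γ_T corrections that bring T_c down to ≃ 125 MeV), so the numbers it prints are only the bag-model limit.

```python
# Bag-model estimate of T_c from B = (g_q - g_h) * pi^2 * T_c^4 / 90,
# and the corresponding energy-density jump across the transition.
import math

g_q, g_h = 51.25, 17.25
B_quarter = 200.0                      # MeV, bag constant B**(1/4)
B = B_quarter**4                       # MeV^4

T_c = (90.0 * B / (math.pi**2 * (g_q - g_h)))**0.25
print(f"Bag-model estimate: T_c ~ {T_c:.0f} MeV")        # ~144 MeV

# Energy-density ratio at T_c, using ideal-gas counting plus the bag constant:
a_q = (math.pi**2 / 90) * g_q
a_pi = (math.pi**2 / 90) * g_h
rho_q = 3 * a_q * T_c**4 + B
rho_h = 3 * a_pi * T_c**4
print(f"rho_q/rho_h ~ {rho_q / rho_h:.2f}")              # ~3.6
```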
IV. DYNAMICS OF THE UNIVERSE DURING THE QUARK-HADRON PHASE TRANSITION
The quantities to be traced through the quark-hadron phase transition are the energy density ρ, the temperature T and the scale factor a. These quantities are determined by the gravitational field Eqs. (4) and (7) and by the equations of state (15), (16) and (17). We shall consider now the evolution of the Universe before, during and after the phase transition.
A. Cosmological evolution in the quark phase (prior to the quark-hadron phase transition) Before the phase transition, at T > T c , the Universe is likely in the partonic phase. With the use of the equations of state of the quark matter and of the Bianchi identity, Eq. (7), the time evolution of the scale factor can be written in the form and can be integrated to give the following scale factor-temperature relation: where a 0 is the initial value of the scale factor corresponding to the temperature T = T 0 of the universe, a (T 0 ) = a 0 . In Fig. 1, the variation of the scale factor of the Universe during the quark phase is presented as a function of the temperature T . Because of the expansion of the Universe the temperature is decreasing with the increase in the comoving time t. Therefore the scale factor a increases with the decreasing T . The exact numerical values of a(t) strongly depend on the initial T 0 value. In order to have an analytical insight into the evolution of the cosmological quark matter, we consider the simple case in which the temperature corrections can be neglected in the self-interaction potential V . In this case V = B = constant and EoS of the quark matter is given by the bag model equation of state, p q = (ρ q − 4B) /3. Thus, Eq. (7) can immediately be integrated to give the following simple scale factor-temperature relation: Hence the presence of a temperature-dependent potential term V (T ) in the quark matter EoS drastically modifies the scale factor-temperature relationship. The same result can be obtained by taking α T = γ T = 0 in Eq. (24). With the use of Eq. (23) and from the gravitational field equations, we obtain an expression describing the evolution of the temperature of the Universe in the quark phase, given by The variation of the temperature in the quark phase is presented, for different values of the bag constant B, in During the quark-hadron phase transition, the temperature and the pressure are constants, T = T c and p = p c , respectively. The entropy S = s a 3 and the enthalpy W = (ρ + p) a 3 are conserved quantities. The energy density ρ (t) decreases from ρ q (T c ) ≡ ρ Q to ρ h (T c ) ≡ ρ H . At the critical temperature T c = 125 MeV, we have ρ Q ≃ 5 × 10 9 MeV 4 and ρ H ≃ 1.38 × 10 9 MeV 4 , respectively. The value of the pressure of the cosmological fluid during the phase transition is p c ≃ 4.6 × 10 8 MeV 4 . Following [12], it is convenient to replace ρ (t) by the volume fraction of matter in the hadron phase where n = (ρ H − ρ Q ) /ρ Q is the relative density and t is the comoving cosmological time. It is obvious that at the beginning of the quark-hadron phase transition, the quantity h(t c ) vanishes, where t c is the time corresponding to the beginning of the phase transition and ρ (t c ) ≡ ρ Q . At the end of the quark-hadron transition, h (t h ) = 1, where t h is the time at which the phase transition ends corresponding to ρ (t h ) ≡ ρ H . At t > t h , the Universe enters in the hadronic phase. From Eq. (7), we obtain an expression for the Hubble parameter whereḣ denotes the time derivative of the hadron fraction parameter h, which can be utilized as an order parameter.
Then, Eq. (28) immediately leads to the scale factor, where we have used the initial condition h (t c ) = 0. The evolution of the fraction of the matter in the hadronic phase is described asḣ with the general solution given by As given above, the quark-hadron phase transition ends up, when the value of h(t) reaches 1. Then, the time t h at which the phase transition ends reads At the end of the phase transition the scale factor of the Universe has the value, Eq. (30) The variation of the hadron fraction given by Eq. (32), as a function of the dimensionless time parameter χ = √ ρ Q t P l t is represented, for different values of the parameter ε and for n fixed in Fig. 4. The hadron fraction apparently gives an estimation for hadrons formed inside QGP. Having the expressions of h(t) andḣ(t), the analytical forms for both H and a can be directly obtained. It is straightforward to show that the Hubble parameter during the phase transition can be expressed as The variation of the dimensionless Hubble parameter Finally, after the phase transition, the energy density of the pure hadronic matter is ρ h = 3p h = 3a π T 4 . The Bianchi identity Eq. (7) gives The time evolution of the temperature in the hadronic phase is governed by the equation giving a comoving time From Eq. (36) and (37), the Hubble parameter reads During the hadronic phase, the density of the Universe varies with the time as The temperature dependence of the scale factor a of the Universe during the hadronic evolution phase is presented in Fig. 6. The temperature dependence of the Hubble parameter H is represented in Fig. 7. Finally, in Fig. 8 we present the time evolution of the scale factor a of the Universe during the quark phase, the phase transition and the hadron phase, respectively, for several values of the bag constant B. We assume that the quark phase begins at a time t = t Q , when the value of the scale factor of the Universe is a = a (t Q ). The phase transition temperature is assumed to be T c = 125 MeV, with a corresponding quark matter energy density at the transition moment of ρ Q = 5 × 10 9 MeV 4 . For the parameter ε we have taken a value of ε = −1/4. As one can see from the figure, an increasing value of the bag constant accelerates, in the long term, the expansion of the universe.
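To put numbers on the quark-phase evolution discussed at the beginning of this section, the following sketch integrates the pre-transition cooling in the same simplified bag-model limit V(T) = B, using aT = const (so that dT/dt = −HT) together with H = [8πGρ/3]^{1/2}; the initial temperature of 400 MeV and the step size are arbitrary choices made only for illustration.

```python
# Crude integration of the quark-phase cooling down to T_c in the bag-model
# limit: rho = 3*a_q*T^4 + B,  H = sqrt(8*pi*G*rho/3),  dT/dt = -H*T.
import math

eight_pi_G = 1.687e-43               # MeV^-2
a_q = (math.pi**2 / 90) * 51.25
B = 200.0**4                         # MeV^4
MEV_INV_TO_S = 6.582e-22             # MeV^-1 -> seconds

def hubble(T):
    rho = 3 * a_q * T**4 + B
    return math.sqrt(eight_pi_G * rho / 3.0)

T, T_c, dt = 400.0, 125.0, 1.0e13    # MeV, MeV, step in MeV^-1
t = 0.0
while T > T_c:
    T += -hubble(T) * T * dt         # dT/dt = -H*T follows from a*T = const
    t += dt

print(f"cooling time from 400 MeV to T_c: {t * MEV_INV_TO_S:.2e} s")  # ~2e-5 s
```

The result is of the order of 10^−5 s, consistent with the expansion time scale quoted in Section III.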
V. PHASE TRANSITION IN LATTICE QCD SIMULATIONS AND HEAVY-ION COLLISIONS
Before introducing the QCD EoS, it is useful to study the similarities between heavy-ion collisions and the early Universe [5]. It is conjectured that the first-order phase transition studied in Section III might take place in heavy-ion collisions and/or in lattice QCD simulations. Such a prompt transition seems to have fundamental astrophysical consequences; its dynamics has been discussed in the previous section. Regardless of the order of the phase transition, the QGP era seems not to be followed by an extreme expansion (inflation). This is apparently also the case in heavy-ion collisions, because of baryon number conservation and the smallness of the baryon-to-photon ratio, (n_b − n_b̄)/n_γ ∼ 10^−11 [48]. The ratio of the baryon density asymmetry to the photon density, η, has been measured in the WMAP data [48]; combined with the difference between proton and antiproton yields recently measured by the ALICE experiment at 7 TeV, it can be used to estimate the photon number density, n_γ ≃ 5.5 × 10^4, while in the CMB era n_γ ≃ 411.4 (T/2.73 K)³ cm^−3. Furthermore, the QGP era seems to be the last symmetry-breaking era of strongly interacting matter; by symmetry breaking we mean deconfinement and chiral symmetry breaking and/or restoration. In an isotropic and homogeneous background, the volume of the Universe is directly related to the scale factor a(t), where t is the comoving time. Implementing a barotropic EoS for the background matter makes it possible to calculate - among other quantities - the Hubble parameter H(t) = ȧ(t)/a(t). Focusing the discussion on the QCD era of the early universe, which is likely to be fairly accessible in high-energy experiments, the equation of state is very well defined. In Ref. [40], a viscous EoS for QGP matter has been introduced and different solutions for the evolution equation of H have been worked out.
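The CMB-era value quoted here follows from the standard blackbody formula n_γ = (2ζ(3)/π²) T³; the cubic temperature scaling is assumed to be the intended reading of the quoted expression.

```python
# Consistency check of the present-day photon number density, ~411 cm^-3.
import math

ZETA3 = 1.2020569
K_TO_MEV = 8.617e-11                 # Boltzmann constant in MeV/K
MEV_TO_INV_CM = 1.0 / 1.973e-11      # 1 MeV corresponds to 1/(hbar*c) = 1/(1.973e-11 cm)

T = 2.73 * K_TO_MEV                  # photon temperature in MeV
n_gamma = (2 * ZETA3 / math.pi**2) * T**3 * MEV_TO_INV_CM**3
print(f"n_gamma(2.73 K) ~ {n_gamma:.0f} photons/cm^3")   # ~411
```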
In a comoving volume V ∼ a 3 (t), the number density of noninteracting photons is supposed to remain constant. Therefore, n γ ∼ 1/a 3 (t). Nevertheless, when the Universe was expanding, T decreases and a 3 (t) n γ has to be affected. The previous values of n γ support this conclusion. There is a conserved quantity accompanying such a transition, namely, the entropy density s. In a perfectly closed system like the universe, s likely remains unchanged. From the first-law of thermodynamics [40] one can show that at vanishing chemical potential, implying that s(T ) is related to ρ(T ) 1−a3 , where a 1 = 0.319, a 2 = 0.718 ± 0.054 and a 3 = 0.23 ± 0.196. This relation is valid at low energy, where the dominant degrees of freedom are given by hadron resonances. Baryon and boson relative abundances (n(T ) −n(T ))/(n(T ) +n(T )) can be studied in hadron resonance gas (HRG) model. It is found that the abundance approaches 10 −3 . For instance, the kaon relative abundance is by about one order of magnitude higher than that of the proton. It is obvious that the abundances of the light elements ( 7 Li, 4 He, 3 He and 2 H) produced in the early Universe are sensitive indicators of number density [49]. Recent lattice QCD calculations [50,51] give estimations for the equation of state, temperature and bulk viscosity of hadronic and partonic matter at high temperatures. As we will show below, the gravitational cosmological field equations, Eqs. (4) and (7), relate the cosmological parameters, like the Hubble parameter H and the scale factor a, to the energy density ρ. Again, the barotropic equations of state for the thermodynamic parameters are standard in analyzing the viscous cosmological models, whereas the equation for τ is a simple procedure to ensure that the speed of viscous pulses does not exceed the speed of light.
The validity of this treatment is essentially set by this set of equations. It depends on the validity of the equations of state, which we have derived from the lattice QCD simulations at temperatures larger than T_c ≃ 0.19 GeV. Below T_c, as the Universe cools down, not only do the degrees of freedom suddenly increase [52], but the equations of state also turn into the ones characterizing hadronic matter. Such a phase transition - from QGP to hadronic matter - marks one end of the validity of our treatment. The other limitation is provided by very high temperatures (energies), at which the strong coupling α_s nearly vanishes.
The lattice QCD simulations benefit from the rapid progress achieved in computational facilities and algorithms. The accuracy of recent lattice results is comparable with that of laboratory experiments. Currently, it is possible to perform lattice QCD simulations at almost physical quark masses. Recent results on the QCD equation of state have been reported in [53]. It is apparent that the influence of radiation and leptons on the phase transition is minimized [54].
A. Equations of state of the viscous quark-gluon plasma
In this section, we give a list of barotropic equations of state deduced from the lattice QCD simulations [53] (in which an analytic crossover phase transition is obtained) and from the quasiparticle model [55]. The latter is utilized when no lattice QCD results are available. Figure 9 depicts the dependence of the pressure p on the energy density ρ over a wide range of temperatures, 1/2 < T/T_c < 3. Details of the lattice configurations are described in [53]. It is obvious that the (barotropic) pressure - energy density dependence is almost linear, reflecting the nature of the phase transition from hadrons to quarks and vice versa. This confinement-deconfinement phase transition seems to be smooth, i.e., simply continuous, and takes place very slowly; this kind of transition is even milder than a second-order one. The nature of the phase diagram in lattice QCD has been discussed in [56]. In Fig. 9, the dashed line represents the fit over the entire T-region. Ignoring the dip around T_c, the results can be fitted by a power law of the form p = α_1 ρ^{α_2}, with α_1 = 0.178 ± 0.009 and α_2 = 1.119 ± 0.011. In the hadronic phase, i.e., at temperatures below T_c, the same power-law dependence seems to remain valid, with small changes in the parameters: α_1 = 0.096 ± 0.003 and α_2 = 1.03 ± 0.04. In the quark phase, i.e., at temperatures above T_c, the same form describes the barotropic equation of state, with α_1 = 0.221 ± 0.004 and α_2 = 1.072 ± 0.005. Apparently, expressions (43) and (44) imply that the speed of sound changes drastically from phase to phase: • over the whole hadron and quark region: c_s² = ∂p/∂ρ ≃ 0.199 ρ^{0.119}, i.e., the speed of sound depends on the energy density and exhibits a nonmonotonic behavior when going from the hadronic to the partonic phase and vice versa.
• in the hadron phase: c_s² ≃ 0.098, • in the quark phase: c_s² ≃ 0.237. Figure 10 presents the barotropic dependence of T on the energy density as calculated in lattice QCD. The relation can be fitted nicely with β_1 = 0.123 ∓ 0.004, β_2 = 0.058 ± 0.0038 and β_3 = 0.39 ± 0.013. Again, the specific heat over the whole hadron and quark region seems to depend strongly on the energy density. In determining this value, the volume V is conjectured to remain unchanged.
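The speeds of sound quoted in the bullet points above follow directly from the power-law fits p = α_1 ρ^{α_2}, via c_s² = α_1 α_2 ρ^{α_2−1}; the short check below evaluates them at ρ = 1 in the units of the fits, which is an assumption of the sketch.

```python
# Speed of sound implied by the barotropic fits p = alpha_1 * rho**alpha_2.
fits = {
    "combined (whole T range)": (0.178, 1.119),
    "hadronic phase (T < Tc)":  (0.096, 1.03),
    "quark phase  (T > Tc)":    (0.221, 1.072),
}
rho = 1.0
for label, (a1, a2) in fits.items():
    cs2 = a1 * a2 * rho**(a2 - 1.0)
    print(f"{label:28s} c_s^2 ~ {cs2:.3f}")
# prints ~0.199, ~0.099 and ~0.237, matching the values quoted in the text.
```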
Based on the quasiparticle model [57], which is an effective model used to reproduce the lattice QCD results on various thermodynamic and transport properties [58], the bulk viscosity ξ can be written in a form in which g is the degeneracy factor of quarks, gluons and their antiparticles. The function f_0 gives the distribution function (exp(ε) ± 1)^−1 of bosons and fermions, respectively. The quantity ε = [p² + Π(T)]^{1/2} stands for the effective dispersion relation of single particles; it depends on the particle mass Π(T), which in turn varies with the effective coupling G(T). Therefore the effective coupling G(T) plays an essential role in this model and has to be adjusted to reproduce the lattice QCD results. In the top panel of Fig. 11, the bulk viscosity coefficient ξ is given as a function of T; the results are fitted very well by the barotropic expression (48). Again, in the quasiparticle model [55,57], the relaxation time is obtained in terms of a_ζ = 6.8 [57]. The results are drawn in the bottom panel of Fig. 11. Fitting these results leads to the barotropic expression (50), with δ_1 = 2.362 ± 1.318, δ_2 = −0.022 ± 0.056, δ_3 = −3.176 ± 1.05, δ_4 = 0.435 ± 0.126, δ_5 = 2.362 ± 0.318, δ_6 = 3 and δ_7 = 1.25 ± 0.12. From the two expressions (48) and (50) it is obvious that the barotropic relations for ξ and τ are related to each other; such a relation has been modeled by the projection operator method [59], Eq. (51). Furthermore, the bulk stress is related to the distribution function through the relaxation time [60]. Such a dependence has to follow the causality principle and fits perfectly with the laws of thermodynamics [61]. The speed of sound, c_s² = ∂p/∂ρ, can be taken from the lattice QCD simulations [53]. The results for c_s²(T) are given in Fig. 12. Below T_c, the lattice results show a small peak; remarkable work has been devoted to pinning down its location and height. The results from the hadron resonance gas model are given as well. Despite the common appearance of the peak, the disagreement between the two is not negligible. With reference to the restricted causality principle, the nonmonotonic behavior of c_s² below T_c could be explained in the light of: • the baryon and strange degrees of freedom, which would play an essential role in reproducing c_s²(T, µ_b), where µ_b is the baryo-chemical potential, • the interpolation of both the entropy s(T, µ_b) and the specific heat c_V(T, µ_b), which has been suggested to partly explain the nonmonotonic behavior below T_c, • the condition(s) driving the chemical and thermal freeze-out, which would shed light on such a behavior, • the conjectured interactions between the constituents of the hadronic phase, which would be able to explain the nonmonotonic entropy and specific heat production, and • the time-varying equation of state in the hadronic phase, which refers to out-of-equilibrium processes, while its modification in thermal and dense matter would refer to symmetry changes.
B. Bulk Viscosity in the Hadronic Phase
The treatment of bulk viscosity in a Hagedorn fluid has been studied in Ref. [39]. Such a fluid is conjectured to be composed of hadrons and resonances with masses m < 2 GeV. The treatment is based on the relativistic chemical potential and the heat conductivity; the bulk viscosity in a thermal medium is expressed in terms of ρ(m_i), the Hagedorn mass spectrum ρ(m), which implies a rapid growth of the hadron mass spectrum with increasing resonance mass,
with k = −5, A = 0.5 GeV^{3/2}, m_0 = 0.5 GeV and T_H = 0.195 GeV. The number density n_0 is related to the deviation of the energy-momentum tensor from its local equilibrium value, δT^{µν}. Such a deviation corresponds to the difference between the distribution function near equilibrium and at equilibrium, δn = n − n_0. The latter can be determined in the relaxation time approximation with vanishing external and self-consistent forces [62,63]. The nonequilibrium number density n(p, T) is then decomposed, using the relaxation time approach, as n = n_0 + τ n_1 + · · ·. Alternatively, as n(p, T) embeds the first-rank tensor u, δT^{µν} can be decomposed with respect to u [62] in order to deduce its spatial components. The relaxation time depends on the cross section, where v(T) and n_f(T) are the relative velocity of two particles in a binary collision and the density of each of the two species, respectively; the thermal-averaged transport rate, or cross section, is ⟨v(T)σ(T)⟩. The ratio of bulk to shear viscosity, ξ/η, can be related to the speed of sound c_s² in a gas composed of massless pions. Apparently, there are essential differences between this system and the Hagedorn fluid. According to [64], the ratio ξ/η in the N = 2* plasma is conjectured to remain finite across the second-order phase transition. In the Hagedorn fluid, the system is assumed to be driven away from equilibrium and should relax after a characteristic time τ. Should we implement a phase transition in the Hagedorn fluid, then τ ∝ ξ^z, where z is a critical exponent, and τ likely diverges near T_c.
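To give a rough feel for how steeply such a spectrum rises, the sketch below evaluates a Hagedorn-type ansatz with the parameters quoted at the beginning of this subsection; the functional form ρ(m) = A (m² + m_0²)^{k/4} exp(m/T_H) is the standard Hagedorn ansatz and is an assumption made here for illustration, not a formula taken from Ref. [39].

```python
# Illustrative Hagedorn-type mass spectrum (assumed functional form).
import math

k, A, m0, T_H = -5.0, 0.5, 0.5, 0.195        # A in GeV^(3/2), masses in GeV

def hagedorn_density(m):
    return A * (m**2 + m0**2)**(k / 4.0) * math.exp(m / T_H)

for m in (0.5, 1.0, 1.5, 2.0):               # resonances up to 2 GeV, as in the text
    print(f"m = {m:.1f} GeV  rho(m) ~ {hagedorn_density(m):.3g}")
```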
C. Deconfinement and chiral phase transitions (crossover ) in lattice QCD simulations
Remarkable advances have been made in studying the equilibrium properties of the phase transitions. Obviously, the phase transition is coupled with symmetry breaking and with out-of-equilibrium dynamics. Therefore, it is natural to turn our attention to the consequences of forcing the system to go through an out-of-equilibrium phase transition. Thermodynamically, first- and second-order phase transitions are characterized by a discontinuity in the first and second derivative of the free energy, respectively. The infinite-order phase transition is continuous, but it breaks no symmetry; a famous example is the Kosterlitz-Thouless transition in the two-dimensional XY model [65]. The crossover phase transition seen in lattice QCD simulations is likely a continuous one. An out-of-equilibrium state is reached when the system is deviated from its equilibrium state by applying an instantaneous perturbation. The system will relax to its equilibrium state by dissipating the energy transferred during the transition [66]. Relating the amplitude of this dissipation to the amplitude of the fluctuations in equilibrium dates back to Einstein's work on Brownian motion in 1905. Lars Onsager formulated the hypothesis that the relaxation of a macroscopic nonequilibrium perturbation follows the same laws which govern the dynamics of fluctuations in equilibrium systems. In other words, the regression of microscopic thermal fluctuations at equilibrium follows the macroscopic law of relaxation of small nonequilibrium disturbances [67].
The lattice QCD simulations turn out to be an accurate tool to study - among other things - the thermodynamics of the hadronic and partonic matter up to temperatures of a couple of times the critical temperature T_c [26][27][28][29][30][31][32]. For two quark flavors (n_f = 2) the phase transition is second-order or a rapid crossover, with T_c ≃ 173 ± 8 MeV. For n_f = 3, the phase transition is first-order and T_c ≃ 154 ± 8 MeV. For n_f = 2 + 1, the transition is again a crossover and T_c ≃ 173 ± 8 MeV. For the pure gauge theory, T_c ≃ 271 ± 2 MeV and the phase transition is first-order. In all these lattice QCD simulations, the quark masses are heavier than their physical values. At physical masses, the critical temperature for n_f = 2 + 1 is ≃ 200 MeV. We conclude that the order of the deconfinement phase transition can be first, second or crossover (infinite order); it depends on the quark flavors and their masses. The extreme conditions in the early Universe likely affect the properties of the hadronic and partonic matter.
The chiral phase transition is assumed to accompany the deconfinement one, especially at vanishing chemical potential. It is expected that the restoration of the chiral symmetry takes place in both perturbative and nonperturbative QCD at high temperatures, if the matter is assumed to be exclusively built of light and strange quarks [68]. In perturbative QCD, the chiral symmetry is exact for massless quarks; it is entirely broken in the hadronic phase. It is not yet completely clear what the order of the phase transition between the hadronic and partonic QCD phases is when the broken symmetry is restored at finite temperatures and densities. Different lattice QCD simulations, mainly referring to the chiral condensate and the chiral susceptibilities [69], indicate that the chiral phase transition is of second order at vanishing chemical potential: in the chiral limit m_q → 0 the chiral condensate [70], obtained from the quark-mass derivative of ln Z, where ln Z is the partition function describing the system, vanishes above T_c while remaining finite below T_c. Chiral perturbation theory has proved to be a very important method for determining some essential observables of QCD which dominate at low temperature, such as the masses of the pseudoscalar mesons, their decay constants and the chiral observables. It provides an explanation of why the pseudoscalar mesons are very light. The Goldstone theorem states that for each generator of a spontaneously broken symmetry there exists a massless Goldstone boson φ with spin 0 and with symmetry properties related to those of the symmetry transformation. The Goldstone bosons of chiral perturbation theory are just the pseudoscalar mesons. This can be utilized as a signature of the phase transition.
VI. DYNAMICS OF THE BULK VISCOUS QUARK-GLUON PLASMA FILLED UNIVERSE
In the following we consider the cosmological evolution of the viscous quark-gluon plasma filling the Universe in the framework of both Eckart and the full causal approaches to dissipative processes.
A. Evolution of the Hubble parameter in the Eckart model
Substituting the barotropic expressions (43), (45) and (48) into Eq. (11), and assuming for the bulk viscous pressure the Eckart relation given by Eq. (10), we obtain the equation describing the evolution of the Hubble parameter, where A = 3/(8πG). With the natural units given in the introduction, this parameter takes the value A = 1.778 × 10^37 GeV². This differential equation can be solved analytically under a suitable simplifying assumption. Then, in terms of H, the comoving time reads
(60) Figure 13 shows the dependence of t on H. It describes a universe whose background fluid is characterized by the Eckart theory. The three curves differentiate between different types of background matter; a discussion of the effect of the background matter is given in Section VII A. The collision-free and nonviscous background matter is given by the dashed curve, while the solid and dotted curves describe the t − H relation when the background geometry is filled with viscous hadron-QGP and viscous QGP matter [34-38, 71, 72], respectively. At small H values there are obvious differences between the latter two types of matter, and between them and the ideal matter. At large H values the comoving time behaves very smoothly with H, although hadron-QGP results in a larger t than QGP; in both cases, t is larger than for the ideal matter. The Planck scale is given in physical units.
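The structure of the Eckart-type evolution can also be explored numerically. In the sketch below only the relations ρ = A H², Π = −3ξH and Ḣ = −(3/2A)(ρ + p + Π), together with the quark-phase fit for p(ρ), are taken from the text; the bulk-viscosity law ξ(ρ), the initial Hubble rate and the step size are hypothetical placeholders for illustration only.

```python
# Sketch of the Eckart-type evolution equation for H with a barotropic EoS.
A = 1.778e37                      # GeV^2, A = 3/(8*pi*G) as quoted in the text
alpha1, alpha2 = 0.221, 1.072     # quark-phase fit p = alpha1 * rho**alpha2

def xi(rho):
    # hypothetical bulk-viscosity law, placeholder only
    return 0.1 * rho**0.75

def dH_dt(H):
    rho = A * H * H
    p = alpha1 * rho**alpha2
    Pi = -3.0 * xi(rho) * H
    return -1.5 * (rho + p + Pi) / A

H, t, dt = 1.0e-18, 0.0, 1.0e14   # GeV, GeV^-1, GeV^-1 (hypothetical values)
for _ in range(2000):
    H += dH_dt(H) * dt
    t += dt
print(f"after t = {t:.2e} GeV^-1:  H ~ {H:.3e} GeV")
```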
B. Evolution of the Hubble parameter in the full causal approach
Comparing Eq. (51) with the expressions (48) and (50) makes it quite apparent that a barotropic expression can be written for the relaxation time [41], i.e., for the relaxation coefficient of the transient bulk viscous effect. With the use of Eqs. (43), (45), (48) and (13), respectively, we obtain the equation describing the cosmological evolution of the Hubble parameter H. In obtaining this equation we have introduced a number of minor approximations: to the exponents α_2, β_3 and γ_6 we have assigned the values 1, 1/2 and 1, respectively. In order to derive an analytical solution of this Abel differential equation, we follow the procedure given in [37,71]. After some algebra, one ends up with two auxiliary functions, which are plotted in Fig. 14 and which play an essential role in deriving the analytical solution of the Abel equation. Approximating the parametric dependence of g(H) on z(H), Fig. 15, we obtain a linear dependence. Then, from the definition of Ω, we derive an expression which, in order to be reduced to the canonical equation of Abel type, is combined with the relation Ω = z/P. Then, from Eqs. (66) and (64), we obtain a first-order differential equation for H, where P is a free parameter, whose solution gives the comoving time. The dependence of the cosmological comoving time t on the Hubble parameter H is graphically illustrated in Fig. 16; it is apparent that t(H) is monotonic. The same dependence is obtained when the background matter is characterized as an ideal gas, t = 2/(3γH). All this is summarized in Fig. 16, where the solid, dashed and dotted curves represent the results for viscous hadron-QGP, viscous QGP and ideal (nonviscous and collisionless) matter, respectively.
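A companion sketch for the causal case integrates the truncated transport equation τ dΠ/dt + Π = −3ξH together with the same Ḣ equation; the full equation used above keeps additional nonlinear terms, and the ξ(ρ) and τ(ρ) laws below are hypothetical placeholders, so the sketch only illustrates the structure of the coupled system.

```python
# Truncated Israel-Stewart system coupled to the Hubble evolution (sketch).
A = 1.778e37                      # GeV^2
alpha1, alpha2 = 0.221, 1.072

def p_of(rho):   return alpha1 * rho**alpha2
def xi_of(rho):  return 0.1 * rho**0.75        # hypothetical
def tau_of(rho): return 1.0e15 * rho**-0.25    # hypothetical, GeV^-1

H, Pi, dt = 1.0e-18, 0.0, 1.0e13               # hypothetical initial values
for _ in range(5000):
    rho = A * H * H
    dH  = -1.5 * (rho + p_of(rho) + Pi) / A
    dPi = (-3.0 * xi_of(rho) * H - Pi) / tau_of(rho)
    H  += dH * dt
    Pi += dPi * dt
print(f"H ~ {H:.3e} GeV,  Pi/rho ~ {Pi / (A*H*H):.2e}")
```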
VII. COSMOLOGICAL IMPLICATIONS
Assuming that the background geometry is filled with Eckart relativistic viscous fluid, the comoving time t is given as a function of the Hubble parameter H in Eq. (60) and drawn in Fig. 13 (solid curve). The dotted curve gives the results when viscous QGP EoS is implemented [34-38, 71, 72]. The dashed curve draws the results when the background geometry is assumed to be filled with an ideal gas. In Fig. 16, another t-H dependence is obtained when assuming that the cosmological background is filled with Israel-Stewart relativistic viscous fluid. The solid curve represents the results of present work, in which the background matter is assumed to be characterized by viscous hadrons and QGP i.e., including phase transition(s). The same treatment is applied for viscous QGP and ideal gas. The results are drawn by dotted and dashed curves, respectively.
Before discussing the potential cosmological implications, it is in order now to elaborate essential aspects. We start with the phase transition in the early universe. The first-order phase transition has been discussed in section III. Section V was devoted to discuss the phase transition(s) as measured in lattice QCD simulations. Accordingly, we conclude that the order of the confinement-deconfinement phase transition depends among others on the effective degrees of freedom and the matter content (quark flavors, etc.) The results given in Figs. 13 and 16 illustrate the effects of degrees of freedom (ideal gas, QGP and hadron-QGP matter) and in indirect way the phase transition (QGP matter above T c and hadron-QGP matter over a wide range of temperatures). The evolution of the Hubble parameter obviously depends on all these factors. This might have a direct cosmological implication that our picture about the expansion of the Universe has to be revised, accordingly. Other cosmological implications might arise as a consequence of the phase transition itself. The first-order phase transition is to be characterized by a sudden change in the symmetry. It exhibits a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. In the cosmological context, such a transition is accompanied by bubble nucleation [73]. In light of this, the Universe is conjectured to go from a metastable state to a new phase, a true vacuum state through the nucleation of bubbles of the new state [73]. Implementing this model to the hadron-QGP transition makes it possible to suggest a scenario, in which the Universe starts from QGP state and ends up in the hadronic state through the nucleation of hadrons. Depending on the kinematics of the bubble nucleation, the Universe might or might not "recover" from this type of phase transition and its relics are left behind i.e., relic QGP objects. The latter would survive for a very long time. The abundance and the size of the quark nuggets have been discussed in [21]. Objects with a quark content ranging from 10 −2 to 10 M ⊙ could have been formed during the cosmological phase transition.
Furthermore, a significant amount of entropy is released during such a process, so that at vanishing chemical potential s = (c_s² + 1) ρ/T. The density fluctuations are assumed to be amplified by the vanishing-sound-velocity effect during the quark-hadron phase transition, Fig. 12. The lattice QCD and hadron resonance gas calculations show that the speed of sound reaches a minimum value, c_s² ≃ 0.1, at T_c. On the other hand, the density fluctuations could produce QGP lumps decoupled from the expansion, which rapidly transform into quark nuggets. The typical distance between bubble centers is conjectured to be of the order of a few meters. It is worthwhile to mention here that the resulting baryon inhomogeneities may affect primordial nucleosynthesis; such a cosmological consequence can be observed. The origin of the inhomogeneities in the matter distribution, which are assumed to be responsible for the later formation of galaxies, cannot be explained by density fluctuations alone. After fixing the baryon number, the appearance of these fluctuations is almost purely adiabatic, and any departure from adiabaticity falls off inversely proportionally to the mass of the perturbation [74]. This will be elaborated in the next paragraph.
At the phase transition, the scale of the cosmological QCD transition is assumed to be given by the Hubble radius R_H. Quantitatively, R_H ≃ m_Pl/T_c² ≃ 10 km, and the mass inside the Hubble volume is ≃ 1 M_⊙. At the QCD phase transition, the expansion time scale is 10^−5 s, which is much larger than the time scale of QCD, 1 fm/c ≃ 10^−23 s. Even the rate of the weak interactions exceeds the Hubble rate by a factor of 10^7. Therefore, we conclude that photons (radiation), leptons, quarks (fermions) and gluons (bosons) are tightly coupled and may be described by a single, adiabatically expanding fluid [40,72], as the microscopic interactions take place on extremely short time scales compared with the expansion.
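These order-of-magnitude statements are easy to verify; the estimate below assumes g_* ≃ 60 effective relativistic degrees of freedom at T_c ≃ 150 MeV and uses the reduced Planck mass, which is why it returns a horizon of a few kilometres and an enclosed mass of a few solar masses rather than exactly the quoted numbers.

```python
# Order-of-magnitude check of the horizon scale and enclosed mass at T_c.
import math

m_pl   = 2.435e21                 # reduced Planck mass, MeV (assumption)
T_c    = 150.0                    # MeV (assumption)
g_star = 60.0                     # effective degrees of freedom (assumption)

rho = (math.pi**2 / 30) * g_star * T_c**4          # MeV^4
R_H = math.sqrt(3.0) * m_pl / math.sqrt(rho)       # 1/H, in MeV^-1

MEV_INV_TO_M = 1.973e-13          # hbar*c in MeV*m
MEV4_TO_KG_PER_M3 = 2.32e8        # 1 MeV^4 expressed as a mass density
M_SUN = 1.989e30                  # kg

R_H_m = R_H * MEV_INV_TO_M
mass  = (4.0/3.0) * math.pi * R_H_m**3 * rho * MEV4_TO_KG_PER_M3
print(f"R_H ~ {R_H_m/1e3:.1f} km,  M(R_H) ~ {mass/M_SUN:.1f} M_sun")   # ~8 km, ~3 M_sun
```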
The current heavy-ion experimental program at the LHC seems to come very close to probing early eras of the universe. It produces nearly, if not entirely, identical yields of particles and antiparticles [75]. This can be taken as another supporting indicator for utilizing an EoS deduced from heavy-ion collisions and/or lattice QCD calculations. It seems that the observed matter-antimatter asymmetry can be explained without recourse to the hypothesis of specific initial conditions [74].
A. Different types of background matter
The different phase transition(s) likely change the symmetries, and thereupon different phases or types of matter are to be expected. The dynamics of the Universe during the first-order phase transition from QGP to hadrons has been discussed in Section IV. Such a transition is assumed to proceed through three stages defined by different symmetries. Prior to the phase transition, i.e., with partonic (QGP) symmetry, the evolution of the main cosmological parameters (H, a and ρ) has been studied by using the Bianchi identity; to gain analytical insight into the evolution, the T-corrections are neglected in the self-interaction potential. The second stage deals with the dynamics of the Universe during the phase transition, i.e., with mixed-phase symmetry. Here T and p are assumed to remain unchanged, while the entropy s and the enthalpy W remain conserved as well. The third stage is the one in which the dynamics of the Universe is studied in the post quark-hadron phase transition era (hadronic symmetry). We first start with the time evolution of T and then estimate the comoving time t; again, the Bianchi identity helps in expressing the scale factor a and the Hubble parameter H. The time evolution of the hadron fraction h describes the conversion of QGP into hadrons. Therefore, it can be taken as a parameter describing the phase transition itself.
Again, in this type of transition (a first-order phase transition through hadron nucleation), the numerical estimation of the cosmological parameters gives a clear indication that their time evolution varies from one phase to another. In the QGP phase, the scale factor a normalized to a_0 is much smaller than in the hadronic phase (compare Fig. 1 with the top panel of Fig. 6). For the Hubble parameter H, the T-dependence is just the opposite of that of a(T) (compare Fig. 3 with the bottom panel of Fig. 6).
This behavior can be compared with the case of another type of phase transition, the crossover. In Figs. 13 and 16, we notice that the time evolution of H also depends on the type of matter filling the background geometry. If it is filled with QGP, the values of H are relatively large; they are relatively small if the background geometry is filled with quarks and hadrons, especially when a crossover phase transition is allowed to take place. Consequently, one is led to predict that H in the hadron era is smaller than its value in the previous eras, i.e., in the mixed parton-hadron phase and in the QGP era.
It is in order now to highlight the differences between viscous and nonviscous background matter. To eliminate the dynamics controlling the phase transition, for instance, we assume that the background geometry is filled with QGP only. For simplicity, we utilize the Eckart theory. A comparison is illustrated in Fig. 17. We notice that the viscosity seems to drastically slow down the evolution of the Hubble parameter. Should this result be confirmed, it would mean that the whole picture of the evolution of the early Universe has to be revised. As an immediate consequence, one would expect a considerable delay in all phases following the QGP era. In order to estimate this effect, further ingredients have to be taken into consideration, for example, the dynamics of the phase transition(s), interaction(s), out-of-equilibrium processes, etc.
VIII. DISCUSSIONS AND FINAL REMARKS
In natural units, ℏ = c = k_B = 1, all expressions are given in terms of the Planck mass m_Pl. We consider the cosmic evolution of the early Universe in the regime of the QCD confinement phase transition, taking finite bulk viscous effects into account. Thereby, it is assumed that the bulk thermodynamic quantities are dominated by the strongly interacting matter component. Two cases, a first-order phase transition scenario and an analytic crossover transition, are considered. In this respect, the present work continues a previous series [34-40, 71, 72] in several aspects. Refined equations of state based on newer lattice QCD results are considered, and different bulk viscosity expressions based on the quasiparticle model are used. A finite cosmological constant has been utilized in Ref. [36]. Moreover, the influence that a first-order phase transition would have (neglecting viscous effects) is elaborated in the present work.
Many details of the QCD phase transition(s) are not yet conclusively understood; even the order of the transition is still a matter of debate. An advance in understanding the numerical values of the QCD coupling constants would be very helpful for obtaining accurate cosmological conclusions [38]. Such an advance may also provide a powerful method for testing on a cosmological scale the theoretical predictions of the brane world models and the possible existence of extra dimensions. Furthermore, the critical temperature T_c has been the subject of different lattice QCD calculations [26][27][28][29][30][31][32]. In addition, it is still an open question whether the deconfinement and chiral phase transitions take place at the same T_c. The cosmological behavior in the first-order phase transition can be characterized as follows. At the critical temperature, the energy density ρ and the entropy density s decrease suddenly; at fixed T and constant p, both quantities decrease at nearly the same rate. Depending on the symmetries, the transition is assumed to go through three phases. Prior to the phase transition, i.e., with partonic (QGP) symmetry, the evolution of the main cosmological parameters (H, a and ρ) has been studied by using the Bianchi identity; to gain analytical insight into the evolution, the T-corrections are neglected in the self-interaction potential. The second phase is the one during the phase transition, i.e., with mixed-phase symmetry. Here T and p are assumed to remain unchanged, and the entropy s and the enthalpy W remain conserved. The third phase is the one in which the dynamics of the Universe is studied in the post quark-hadron phase transition era, i.e., with hadronic symmetry. The Bianchi identity helps in expressing the scale factor a and the Hubble parameter H. The behavior of a and H with the cosmological comoving time follows the standard cosmological model. Both quantities are expressed in terms of the hadron fraction of matter, which gives an estimate of the amount of hadrons formed inside the QGP. The time evolution of the hadron fraction describes the conversion of QGP into hadrons; therefore, it can be taken as a parameter describing the phase transition itself. A quantitative comparison between the evolution of the scale factor a in the three phases shows that a increases while moving from quarks to hadrons across the mixed phase. The values of the bag pressure are reflected in these calculations: in all phases we find that increasing the bag pressure raises the value of the scale factor.
Taking into account the recent lattice QCD results, we find that the order of the phase transition can be either continuous or discontinuous; it seems to depend on the quark flavors and their masses. The extreme conditions in the early universe, i.e., high temperatures, high densities and out-of-thermal and out-of-chemical equilibrium, likely affect the properties of the partonic matter and control the dynamics of the phase transition. The equation of state deduced from lattice QCD calculations (and from the quasiparticle model) plays a very essential role in the present work: it sets the validity of the entire treatment. The high temperatures (energies), at which the strong coupling α_s nearly vanishes, define the upper limit of validity; the lower one is characterized by the hadronic era. When applying the Eckart theory, we find that the evolution of the Hubble parameter follows the same line defined by the standard cosmological model. The comparison with various types of matter shows that the comoving time behaves very smoothly with H, although viscous hadron-QGP results in a larger t than viscous QGP; in both of them, t seems to be larger than in the collision-free and nonviscous ideal matter. The Israel-Stewart theory is assumed to resolve the shortcomings of the Eckart theory; therefore, more reliable results are to be expected. In order to make a qualitative estimate of the effect of viscosity, we compare the time evolution of the Hubble parameter in a viscous and in a nonviscous background matter. Apparently, we find that the viscosity drastically slows down the evolution. Should this result be confirmed, the whole picture of the evolution of the early Universe would have to be revised accordingly. In order to estimate this effect, the dynamics of the phase transition(s), interaction(s) and out-of-equilibrium and dissipative processes should be taken into account. The effect of the cosmological constant on the anisotropy and homogeneity and on the cosmological density perturbations in the early Universe would play an essential role in characterizing the evolution of the cosmological parameters as well.
"year": 2011,
"sha1": "4b7684d17e01250a0e1a770f8b6b8afeb0fbf70a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1108.5697",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4b7684d17e01250a0e1a770f8b6b8afeb0fbf70a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
We search for translationally invariant states of qubits on a ring that maximize the nearest neighbor entanglement. This problem was initially studied by O'Connor and Wootters [Phys. Rev. A {\bf 63}, 052302 (2001)]. We first map the problem to the search for the ground state of a spin 1/2 Heisenberg XXZ model. Using the exact Bethe ansatz solution in the limit of an infinite ring, we prove the correctness of the assumption of O'Connor and Wootters that the state of maximal entanglement does not have any pair of neighboring spins ``down'' (or, alternatively spins ``up''). For sufficiently small fixed magnetization, however, the assumption does not hold: we identify the region of magnetizations for which the states that maximize the nearest neighbor entanglement necessarily contain pairs of neighboring spins ``down''.
I. INTRODUCTION
The investigation of the role of entanglement in quantum and classical phase transitions, and more generally the role of entanglement in many-body quantum systems is one of the hottest interdisciplinary areas on the borders between quantum information, quantum optics, atomic, molecular, and condensed matter physics. Initially the studies of entanglement in many-body systems have been motivated by the possibility of employing entanglement for quantum computation in optical lattices [1], or precision measurements with Bose-Einstein condensates [2]. Recently, several lines of research have been followed: • Studies of local entanglement in spin systems [3,4,5,6,7], and more generally in various manybody systems (such as linear chains, see for instance [8,9,10]), with particular attention to the role of entanglement in phase transitions.
• Studies of the entropy of blocks of spins and the related "area law" [11], indicating weak entanglement of blocks, and effects of criticality [5,10,12,13].
• Studies of localisable entanglement and entanglement correlation length that diverges at the critical point [14,15] and majorizes the standard correlation length (see also [16]). In particular, it has been shown that the localisable entanglement (cf. [17,18,19]) is bounded from above by the entanglement of assistance [20] and from below by correlation functions. It follows directly from these bounds that one can define an entanglement correlation length that diverges in quantum critical systems.
• Studies of dynamics and generation of entanglement in many-body systems [13,22,23,24,25]. In particular, implementations of the "one-way quantum computer" and short range teleportation [26] of an unknown state has been proposed by using the dynamics of spin systems in Refs. [13,22,25,27,28,29].
• Studies of quantum information and entanglement theory inspired numerical codes to simulate quantum systems [30].
A different approach to the study of entanglement in many-body systems has been proposed in two papers by Wootters and coworkers [31]. In these papers, instead of looking at a specific Hamiltonian, the authors asked the fundamental question "what is the maximal entanglement between two neighboring sites of an entangled ring with translational invariance?" Here, an entangled ring is a chain of spins with periodic boundary conditions. Due to the so-called "monogamy of entanglement" it is impossible for a site to be maximally entangled with both its neighbors: shared entanglement is always less than maximal [32,33]. In Ref. [31] the question of the upper limit for the nearest neighbor (NN) entanglement was simplified by introducing two additional restrictions on the allowed states: (i) The state of N spins 1/2 is an eigenstate of the z-component of the total spin (i.e. it has a fixed number of "down" spins p ≤ N/2) [43].
(ii) Neighboring spins cannot both be "down".
Obviously, one can equally well study the same problem in terms of spins "up", when N ≥ p ≥ N/2. Both restrictions are based on an educated guess for the optimal states for the general problem. O'Connor and Wootters (OW) solved the restricted optimization problem by relating it to an effective Hamiltonian for the one-dimensional ferromagnetic XY model, and found the maximal nearest-neighbor concurrence (cf. Sec. II) for given N and p, given by their Eq. (1). For given N and p, Eq. (1) provides a lower bound for the problem without restriction (ii). It may or may not happen that C can be increased by also allowing states where two neighboring spins are "down". We have recently studied finite-size rings and found that for a fixed p, restriction (ii) tends to play a less important role as N is increased [34]: For p close to N/2 one can increase the concurrence significantly by dropping restriction (ii), but for p ≲ N/3 OW's result is the optimal one. In fact, already in Ref. [31] it was shown that for all even N the ground state of a Heisenberg spin 1/2 antiferromagnetic ring maximizes the concurrence among the zero magnetization (p = N/2) states although it violates restriction (ii). By optimizing (1) with respect to p one obtains a lower bound on the overall optimal concurrence, i.e. without any restrictions besides the translational invariance. In the limit N → ∞, the optimal number of spins "down" in Eq. (1) approaches p opt ≈ 0.301 N. This leads to an asymptotic value of C max OW ≈ 0.434. Although Ref. [31] as well as our previous work [34] showed evidence for the optimality of this value, whether it can be improved was, so far, an open problem.
Wolf, Verstraete, and Cirac have in Ref. [35] directly related OW's type of problems of looking for translationally invariant states that maximize local entanglement to the study of the ground state of a suitably defined "parent" Hamiltonian. In this paper we use this method and employ the known exact solution of the corresponding parent Hamiltonian to prove rigorously that: (A) In the limit N → ∞ the translationally invariant state that maximizes the NN entanglement without any restriction coincides with the state found by OW at the optimal value of p ≃ 0.301N . This means that it is not a superposition of states with different p values and does not contain simultaneously neighboring spin "up" and neighboring spin "down" pairs.
(B) For fixed p sufficiently close to N/2, i.e. for sufficiently small magnetizations, assumption (ii) is not correct: the states that maximize the nearest neighbor entanglement necessarily contain simultaneously pairs of neighboring "up" and "down" spins. In the limit N → ∞ we identify rigorously an interval of p/N for which this is the case and show strong numerical evidence that this interval is optimal.
Our paper is organized as follows. In Section II we apply the method of Ref. [35] and derive the corresponding parent Hamiltonian for an N-qubit ring. In Section III we show the connection with the "classical" papers of Yang and Yang on the XXZ model. In Section IV we discuss briefly the regimes of parameters of interest and show that the present problem concerns the "difficult" parameter region of the phase diagram. In Section V we present the analysis based on the limit N → ∞ of the Bethe ansatz solutions. We derive here the basic integral equation, the solution of which allows one to calculate the desired energy of the system in question. In Section VI the numerical results are discussed. In Section VII we rigorously prove that the states conjectured in Ref. [31] to maximize the NN entanglement, and confirmed by us, indeed provide the maximum of the NN entanglement for sufficiently small values of p. We identify the region of p/N where the latter statement does not hold. We conclude in Section VIII. The short appendix contains simple analytic bounds on the optimal magnetic field for which the NN concurrence is maximal.
II. VARIATIONAL CONCURRENCE FORMULA
In this paper we will use the concurrence as our entanglement measure. The concurrence [36] is defined as C(ρ) = max{0, λ_1 − λ_2 − λ_3 − λ_4}, where the λ_i, in decreasing order, are the square roots of the eigenvalues of ρρ̃, and ρ̃ = (σ_y ⊗ σ_y) ρ* (σ_y ⊗ σ_y) is the spin-flipped density matrix. The optimization problems that we consider are complicated by the nonlinearity of the concurrence as a function of the density matrix. In our previous work [34], we showed how the optimization problem with fixed p can be reformulated as finding the ground state energy for each member of a family of spin-chain Hamiltonians. This family is parameterized by a single real parameter, and the optimal concurrence is minus the lowest ground state energy that occurs when this parameter is varied. In this way a complicated nonlinear problem in a high-dimensional space is replaced by a series of linear problems and one final one-parameter optimization.
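As a concrete illustration of this entanglement measure (not part of the original paper), a minimal Python sketch that evaluates the Wootters concurrence of an arbitrary two-qubit density matrix directly from the spin-flip construction above reads:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4, trace 1)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy              # spin-flipped density matrix
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]                      # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Quick check: the singlet (|01> - |10>)/sqrt(2) is maximally entangled, C = 1.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(concurrence(np.outer(psi, psi.conj())))     # ~1.0
```

The eigenvalues of ρρ̃ are non-negative in exact arithmetic; the abs() simply guards against small negative values from round-off.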
The derivation in Ref. [34] did not cover the case where superpositions of states with different p are allowed. To treat that case, we turn to Ref. [35], where a general formula for the concurrence of systems of two spins 1/2 has been derived as Eq. (2). Here X is an arbitrary 2 × 2 matrix of determinant 1, while F is the flip (or swap) operator interchanging the two qubits. A useful parameterization of X is obtained using the singular value decomposition [37], where t ∈ [−∞, ∞], U, V ∈ U(2) and det U · det V† = 1. In fact, from Eq. (2) it is clear that we can restrict to U, V ∈ SU(2). We now rewrite the formula accordingly and define the matrix in square brackets as h(t²), i.e. h(s) with s = t². Then we can rewrite Eq. (2) in terms of h(s). Our goal is to maximize the concurrence over all ρ that can occur as nearest neighbor density matrices on a translationally invariant ring. If we always had U = V, it is easy to see that we could drop the unitaries from the infimum, since the ring state transformed by the same U on every site is translationally invariant as well. To do the same for U ≠ V, we can use the fact that h(s) is symmetric in the two qubits: if N is even and ρ = tr_{3...N} Γ, then ρ̃ is a nearest neighbor density matrix belonging to the corresponding translationally invariant state. If N is odd, the above construction does not work: we cannot fit an integer number of U ⊗ V terms on the ring. By placing as many terms U ⊗ V as possible, and taking the translationally invariant mixture of the resulting state, ρ can be approximated up to a correction of order 1/N. In the limit of N → ∞ we can ignore this correction.
III. THE PARENT HAMILTONIAN
In this section we will follow the approach of Ref. [35] to derive the parent spin 1/2 XXZ Hamiltonian, i.e. the Hamiltonian whose ground state maximizes the NN concurrence. We also make the connection to the classical papers on the XXZ model by Yang and Yang [38,39,40].
In the previous section we showed that in the limit where ρ = tr 3...N Γ for some translationally invariant Γ of N spins. The two-spin Hamiltonian (6) can be written in terms of Pauli matrices as where σ ± = σ x ± iσ y . Instead of working with ρ and h(s) we can use the translational invariance and work with Γ and a Hamiltonian for the whole ring obtained by taking (12) for each NN pair: where We have then reformulated the overall optimization problem as where ρ is restricted to arise from a translationally invariant state of N spins while the optimal Γ can automatically be chosen so since H Wolf is translationally invariant. An important observation can be made from Eq. (15), namely that as H Wolf commutes with the z component of the total spin, in the considered limit of N → ∞ OW were right when they made assumption (i): The optimal state can indeed be chosen to have a definite number of spins "down" and thus does not contain superpositions of states with different p values. Conversely, from our previous work [34] we know that Eq. (15) is also valid for any fixed p, i.e. we can write C max (N, p) on the left-hand side when making the appropriate restrictions on Γ. In summary, for fixed p the maximal concurrence is given by: where E GS [H Wolf (s), p] is the "ground state" energy of H Wolf (s) in the manifold of states with p spins "down". The overall maximal concurrence is given by further optimization over p or, equivalently, by using unrestricted ground state energies: Let us now describe the connection with the work of Yang and Yang. In their seminal papers Yang and Yang [38,39,40] study this anisotropic Heisenberg XXZ Hamiltonian (see e.g. [41] for more recent work): and they define f = lim N →∞ f N , where f N is half the energy per spin in the ground state with a given number p of spins "down": Here y is the average magnetization: Since p is a conserved quantum number, one can include a magnetic field along z and only shift the energy of each eigenstate. The translation of the results of Yang and Yang to our optimization problems is therefore, with s > 0. Note that ∆ 2 − H 2 = 1. To find C max (N, p) we should minimize Eq. (21) over s while keeping y fixed at the value corresponding to p [cf. Eq. (20)]. To find the overall maximal concurrence C max (N ) we should furthermore minimize over y.
IV. THE PHASE-DIAGRAM OF THE XXZ MODEL
We are considering the XXZ model in the limit N → ∞. The second Yang and Yang paper [39] deals with the properties of f (∆, y) in exactly this limit. The third paper [40] contains information about the magnetic properties, i.e. it is highly relevant when we also vary y in order to find the optimal fraction of spins "down". In order to understand the regimes of parameters we are interested in and relate them to the known properties of the model, it is useful to look at the phase diagram of this model, displayed in Fig. 1.
Let us first identify the region of the phase diagram which belongs to our parent Hamiltonian (13). From Eq. (14) it is clear that as s varies from 0 to ∞, we move on a hyperbola in the ∆-H plane: The s = 0 case corresponds to (−∞, ∞), whereas at s = 1 we are at the point of closest approach and cross the ∆ axis in (−1, 0), and as s → ∞ we move back to infinity, but this time with negative magnetic field. Comparing this with Fig. 1, it is then easy and not surprising to see that the s-hyperbola lies exactly in the "difficult" region of the phase diagram, i.e., the part with neither perfectly aligned spins nor perfect anti-ferromagnetic order (between AF and A in Fig. 1).
Since a change of sign of the magnetic field will only interchange the role of spin "up" and spin "down", we can ignore the negative H branch and focus on s ∈ [0, 1]. Then each point on the curve corresponds to exactly one ∆ and we can thus parameterize the curve by ∆ instead of s. The optimization is then done over ∆ with the magnetic field always given by H = √ ∆ 2 − 1.
V. THE INTEGRAL EQUATION
The Bethe ansatz basically consists of the assumption that the wave function can be written as a sum of plane waves with a limited number of terms. If we are looking for a state with p spins "down", only p wave numbers are needed. For the XXZ chain, the first Yang and Yang article shows that this is indeed enough to produce the ground state wave function [38]. For our purposes we should note that Yang and Yang give explicitly the equations one needs to solve in order to find the ground state energy. In the limit of N → ∞, the number of wave numbers naturally becomes infinite and the equation determining them becomes an integral equation for the wave number density. Mathematically, this equation has the form of a so-called Fredholm equation of the second kind. After some reparameterization it attains the form of Eq. (22) (Eq. [7a] in Ref. [39]). The unknown function here is R, the reparameterized density of wave numbers. The other functions depend parametrically on ∆ and are given explicitly, in terms of the parameter λ = cosh⁻¹(−∆), by Eqs. (23) and (24). Let us point out the importance of the integration limit b in Eq. (22): when varying b, we get solutions corresponding to different values of y. In fact, y is given by Eq. (25). Note, however, that R also depends on b, so the connection is not very obvious. In practice (i.e. when doing numerics) one solves Eq. (22) for a range of the parameter b in order to find the result for the desired values of y.
If one wants to optimize some quantity with respect to y, however, this can equally well be achieved by optimizing with respect to b.
We are not primarily interested in R (which describes the state), but in f, which is the energy and is given by an integral over R. Again, f is written as a function of y, but in practice the dependence enters via b.
VI. NUMERICAL SOLUTION OF THE INTEGRAL EQUATION
A possible way to solve Eq. (22) is to turn the integral into a sum so that it becomes a matrix equation. This is called the Nystrom method [44]. The best way to discretize an integral is not always equally spaced points; very often it is much more efficient to use a Gaussian Quadrature. This means that we evaluate the integrand at M points {α_k} and make a weighted sum with weights {w_k}. The points and the weights can easily be found in e.g. Mathematica. In this way, Eq. (22) becomes the finite linear system of Eq. (27). It is clear that Eq. (27) is a matrix equation and that solving it cannot be harder than inverting 1 + K̃, where K̃_kl = w_l K_kl (no summation over l). The advantage of using Gaussian Quadrature is that one does not need too many points to get a very good estimate of the integral for any sensible function. What exactly a "sensible function" is depends on the exact Gaussian Quadrature rule used. We use the simple Gauss-Legendre rule, assuming that R is well approximated by a polynomial on the interval [−b, b]. This is reasonable here because (23) and (24) are well-behaved for the values of λ we will consider. The final matrix equation can be solved very rapidly on a small computer. A moderate value of M, however, means that our knowledge of R is restricted to a rather crude sampling; fortunately this is not a problem, since y and f are themselves integrals, and so can be evaluated with the full accuracy of Gaussian Quadrature.
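To make the discretization concrete, the following short Python sketch illustrates the Nystrom/Gauss-Legendre procedure described above. It is not the authors' code: the kernel K and inhomogeneous term g below are simple placeholders standing in for the ∆-dependent functions of Eqs. (23) and (24), so only the structure of the method is shown.

```python
import numpy as np

def solve_fredholm(K, g, b, M=30):
    """Nystrom solution of R(a) + int_{-b}^{b} K(a, x) R(x) dx = g(a)
    using M-point Gauss-Legendre quadrature on [-b, b]."""
    x, w = np.polynomial.legendre.leggauss(M)   # nodes/weights on [-1, 1]
    x, w = b * x, b * w                         # rescale to [-b, b]
    A = np.eye(M) + K(x[:, None], x[None, :]) * w[None, :]   # matrix 1 + Ktilde
    R = np.linalg.solve(A, g(x))                # density R at the nodes
    return x, w, R

# Placeholder kernel and source term (NOT Eqs. (23) and (24) of the paper):
K = lambda a, x: 0.1 / (1.0 + (a - x) ** 2)
g = lambda a: np.cos(a)

x, w, R = solve_fredholm(K, g, b=1.3518)
# Quantities such as y and f are themselves integrals over R, so they can be
# evaluated with the same quadrature weights, e.g.:
print(np.sum(w * R))
```

The linear system is only M × M with M ≲ 30, so the solve is essentially instantaneous, in line with the timings quoted below.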
To give the reader an idea about the numerics, we note that a simple Mathematica program will work very well with M ≤ 30. To produce a plot f (∆, y) versus y for ∆ not too close to −1 it takes about one minute. To plot the function of main interest, Eq. (21) optimized over p (i.e. y, i.e. b) also only takes a few minutes. In Fig. 2 we present the results of a Fortran program, which is (not surprisingly) much faster than the initial Mathematica code. The results indicate that OW's assumption (ii) was correct: When we plot E GS [H Wolf ] as function of ∆, we see that the optimal value of ∆ is reached at −∞, and in this limit OW's result is recovered. We conclude that these simple numerical results indicate that the state that maximizes the NN concurrence without any restrictions (i.e. optimized over p/N , i.e. y) coincides with the OW state fulfilling assumption (ii) (no NN pairs of spins "down").
VII. PERTURBATIVE CALCULATION
Looking at Eq. (21) above we see that the finite value in Fig. 2 in the limit ∆ → −∞ is obtained because some diverging terms happen to cancel each other. This is of course a great concern when doing numerics since it means that a good relative precision (knowing the result to e.g. 1 ppm) may not be enough. The obvious strategy is to extract the solution in the strict limit ∆ → −∞. In this section, we will present a perturbative calculation in 1/∆.
In zeroth order of the perturbation series we set cosh λ = ∞ in Eq. (22), and arrive at the simple equation: The right hand side does not depend on α and we easily find the constant solution: This means that in this limit f = −∆f −1 with: and that Eq. (21) thus gives 0, independently of y 0 . At this level of precision we therefore get no information as to whether OW's solution is optimal for all y's. The next order is "1/∆", i.e. we expand both sides of Eq. (22) and equate terms proportional to −1/∆ = 1/ cosh λ. We get The α dependence on the right hand side is cos α plus a constant, so we easily find The correction to f is given by This means that in Eq. (21) we get a zeroth order contribution of It is easy to see that this expression is the same as the one obtained by OW in Ref. [31] and if we do a numerical optimization over b we arrive at the notorious 0.434467 . . . for the maximal concurrence. This value is obtained for b = b OW = 1.351802 . . ., corresponding by Eq. (25) to y = y OW = 0.398316 . . ..
A. Recursion formula for higher order corrections
It is tedious, but essentially not difficult, to continue in the above fashion and calculate higher order corrections. A useful trick is to develop a recursion formula. Let us write ǫ = 1/|∆| and expand R in powers of ǫ, R = Σ_k ǫ^k R_k. The k-th order terms of Eq. (22) then follow. Using the fact that ∂θ₀/∂β = 1, we collect terms containing R_k on the left hand side, where we have introduced q_k(α) as a shorthand notation for the r.h.s. The r.h.s. depends only on the known functions dp/dα and ∂θ/∂β, and on R_j for j < k. The integral operator acting on R_k on the l.h.s. of Eq. (39) can easily be inverted since it is built from the identity and a projection operator (onto a constant). We finally end up with the recursion formula of Eq. (40). In terms of R_k and the auxiliary function q_k, we obtain y_k for k > 0. Note that despite the appearance of α on the right-hand side, this relation does make sense, since the form of Eq. (40) ensures that only terms independent of α survive. Using Eq. (40), it is fairly easy to evaluate the expansion, and calculating the first order contribution to the ground state energy we find the expression of Eq. (44).

B. Derivative at fixed y

As mentioned above, E_GS,1 gives us access to whether OW's solution is at least a local minimum for a given y. In Eq. (44), E_GS,1 is expressed as a function of b, so in order to calculate the derivative at fixed y we need to use the appropriate implicit differentiation rule. Calculating the lowest non-vanishing order, we again end up with a somewhat complicated expression, so we plot its graph in Fig. 3. We note that (dE_GS/dǫ)_y is positive for low b, but already at b = π/2 (corresponding to y = 1/3) it changes sign and becomes negative. This means that for higher b's, i.e. lower y's, OW's solution cannot be optimal as it is not even a local minimum. We conclude that in the region of sufficiently large magnetizations, i.e. y ≥ 1/3, the OW states (with no NN pairs of spins "down") maximize the NN entanglement locally, i.e. we cannot increase the NN entanglement by allowing small admixtures of states with NN pairs of spins "down". For smaller magnetizations, i.e. 0 ≤ y < 1/3, the states that maximize the NN entanglement necessarily contain NN pairs of spins "down".
C. Higher orders
The recursion formula (40) is also well suited for numerical calculations. In Fig. 4 we show a contour plot based on such a calculation including all terms up to fourteenth order in ǫ = 1/|∆|. The plot indicates that the calculation in Sec. VII B gives the global answer, i.e., for all y ≥ 1/3 the optimal state has no neighboring spins "down". Since we perform the perturbative calculation up to the 14th order, we expect that it also allows us to obtain some information about the region y < 1/3. From Fig. 4 (or more precisely from the numerical data), one can read off the optimal value of ǫ, i.e. the optimal value of ∆. Solving the Bethe ansatz integral equation for this value of ∆, we can in this way recover the full information about the corresponding optimal quantum state.
VIII. CONCLUSIONS
In this paper we have studied the question posed by O'Connor and Wootters concerning translationally invariant states of N qubits with maximal nearest neighbor (NN) concurrence. We have answered this question for N → ∞ using the mapping of the problem onto the search for ground states of a certain family of "parent" Hamiltonians, described by the XXZ model. Using the analytic Bethe ansatz solutions of the XXZ model in the limit N → ∞ (combining analytic results of low order perturbation theory and a numerical calculation of the 14th order perturbation theory) we have proved that: (i) for a given number of spins "down", i.e. a given magnetization y larger than 1/3, the states that maximize the NN concurrence coincide with the ones obtained by O'Connor and Wootters, i.e. do not have NN pairs of spins "down"; (ii) For small magnetizations, more explicitly for 0 ≤ y ≤ 1/3, the states that maximize the NN concurrence do contain nearest neighbor pairs of spins "down"; (iii) in particular, the state that maximizes the NN concurrence without constraint on y belongs to the family introduced by O'Connor and Wootters. Our results shed more light on the subtle relations between entanglement in spin 1/2 models and the ferromagnetic/anti-ferromagnetic character of spin-spin interactions. In the appendix we present some simple bounds on the optimal magnetic field that corresponds to the maximal NN concurrence. | 2017-09-23T16:43:26.842Z | 2005-12-23T00:00:00.000 | {
"year": 2005,
"sha1": "1f30fe9016a61437b56407b8fdd5ef36885775ee",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0512214",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1f30fe9016a61437b56407b8fdd5ef36885775ee",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56132781 | pes2o/s2orc | v3-fos-license | The 11-year solar cycle in current reanalyses: A (non)linear attribution study of the middle atmosphere
This study focusses on the variability of temperature, ozone and circulation characteristics in the stratosphere and lower mesosphere with regard to the influence of the 11-year solar cycle. It is based on attribution analysis using multiple nonlinear techniques (Support Vector Regression, Neural Networks) besides the multiple linear regression approach. The analysis was applied to several current reanalysis datasets for the 1979-2013 period, including MERRA, ERA-Interim and JRA-55, with the aim of comparing how this type of data resolves especially the double-peaked solar response in temperature and ozone variables and the consequent changes induced by these anomalies. Equatorial temperature signals in the lower and upper stratosphere were found to be sufficiently robust and in qualitative agreement with previous attribution studies. The analysis also pointed to the solar signal in the ozone datasets (i.e. MERRA and ERA-Interim) not being consistent with the observed double-peaked ozone anomaly extracted from satellite measurements. Consequently, the results obtained by linear regression were confirmed by the nonlinear approach through all datasets, suggesting that linear regression is a relevant tool to sufficiently resolve the solar signal in the middle atmosphere. Furthermore, the seasonal evolution of the solar response was also discussed in terms of dynamical causalities in the winter hemispheres. The hypothetical mechanism of a weaker Brewer-Dobson circulation at solar maxima was reviewed together with a discussion of polar vortex behaviour.
Introduction
The Sun is a prime driver of various processes in the climate system. From observations of the Sun's variability on decadal or centennial timescales, it is possible to identify temporal patterns and trends in solar activity, and consequently to derive the related mechanisms of the solar influence on the Earth's climate (e.g. Gray et al., 2010). Of the semi-regular solar cycles, the most prominent is the approximate 11-year periodicity which manifests in the solar magnetic field or through fluctuations of sunspot number, but also in the total solar irradiance (TSI) or solar wind properties. For the dynamics of the middle atmosphere, where most of the ozone production and destruction occur, the changes in the spectral solar irradiance (SSI) are the most influential, since the TSI as the integral over all wavelengths exhibits variations of orders lower than the ultraviolet part of the spectrum (Lean, 2001). This fact was supported by original studies (e.g. Labitzke, 1987;Haigh, 1994) that suggested the solar cycle (SC) influence on the variability of the stratosphere. Gray et al. (2009) have shown, with the fixed dynamical heating model, that the response of temperature in the photochemically controlled region of the upper tropical stratosphere is due to both direct solar heating and an indirect effect caused by the ozone changes.
Numerous studies have identified temperature and ozone changes linked to the 11-year cycle by multiple linear regression. The use of ERA-40 reanalysis (Frame and Gray, 2010) pointed to a manifestation of the annually averaged solar signal in temperature, exhibited predominantly around the Equator with amplitudes up to 2 K around the stratopause and with a secondary amplitude maximum of up to 1 K in the lower stratosphere. Soukharev and Hood (2006), Hood et al. (2010) and Randel and Wu (2007) have used satellite ozone data sets to characterise statistically significant responses in the upper and lower stratosphere. The observed double-peaked ozone response in the vertical profile around the Equator was reproduced in some chemistry climate models, although concerns about the physical mechanism of the lower stratospheric response were expressed (Austin et al., 2008).
The ozone and temperature perturbations associated with the SC have an impact on the middle atmospheric circulation. They produce a zonal wind anomaly around the stratopause (faster subtropical jet) during solar maxima through the enhanced meridional temperature gradient. Since planetary wave propagation is affected by the zonal mean flow (Andrews and McIntyre, 1987), we can suppose that a stronger subtropical jet can deflect planetary waves propagating from higher latitudes. Reduced wave forcing can lead to decreasing/increasing or upwelling/downwelling motions in the equatorial or higher latitudes respectively (Kodera and Kuroda, 2002). The Brewer-Dobson circulation (BDC) is weaker during solar maxima (Kuroda and Kodera, 2001), although this appears to be sensitive to the state of the polar winter. Observational studies, together with model experiments (e.g. Matthes et al., 2006), suggest a so-called "topdown" mechanism where the solar signal is transferred from the upper to lower stratosphere, and even to tropospheric altitudes.
Statistical studies (e.g. Labitzke et al., 2006; Camp and Tung, 2007) have also focused on the lower stratospheric solar signal in the polar regions and have revealed modulation by the Quasi-Biennial Oscillation (QBO), or the well-known Holton-Tan relationship (Holton and Tan, 1980) modulated by the SC. Proposed mechanisms by Matthes et al. (2004, 2010) suggested that the solar signal induced during early winter in the upper equatorial stratosphere propagates poleward and downward when the stratosphere transits from a radiatively controlled state to a dynamically controlled state involving planetary wave propagation (Kodera and Kuroda, 2002). The mechanism of the SC and QBO interaction, which stems from the two signals reinforcing each other or canceling each other out, has been verified by WACCM3.1 model simulations. These proved the independence of the solar response in the tropical upper stratosphere from the response dependent on the presence of the QBO at lower altitudes. However, fully coupled WACCM-4 model simulations by Kren et al. (2014) raised the possibility that the observed solar-QBO response in the polar region occurs by chance. The internally generated QBO was not fully realistic, though; in particular, the simulated internal QBO descended down to only about 50 hPa.
It has been shown that difficulties in the state-of-the-art climate models arise when reproducing the solar signal influence on winter polar circulation, especially in less active sun periods (Ineson et al., 2011). The hypothesis is that solar UV forcing is too weak in the models. Satellite measurements indicate that variations in the solar UV irradiance may be larger than previously thought (Harder et al., 2009). However, the measurements by Harder et al. (2009) from the SORCE satellite may have been affected by instrument degradation with time and so may be overestimating the UV variability (see the review by Ermolli et al., 2013). The latter authors have also concluded that the SORCE measurements probably represent an upper limit on the magnitude of the SSI variation. Consequent results of general circulation models, forced with the SSI from the SORCE measurements, have shown a larger stratospheric response than for the NRL SSI data set. Thus, coordinated work is needed to have reliable SSI input data for GCM and CCM simulations (Ermolli et al., 2013), and also to propose robust conclusions concerning SC influence on climate (Ball et al., 2014).
At the Earth's surface, the detection of the SC influence is problematic since there are other significant forcing factors, e.g. greenhouse gases, volcanoes and aerosol changes (e.g. Chiodo et al., 2012), as well as substantial variability attributable to internal climate dynamics. However, several studies (van Loon et al., 2007;van Loon and Meehl, 2008;Hood and Soukharev, 2012;Hood et al., 2013;Gray et al., 2013;Scaife et al., 2013) detected the solar signal in sea level pressure and sea surface temperature, which supports the hypothesis of a troposphere-ocean response to the SC. Some studies (e.g. Hood and Soukharev, 2012) suggest a so-called "bottom-up" solar forcing mechanism that contributes to the lower stratospheric ozone and temperature anomaly in connection with the lower stratosphere deceleration of the BDC.
The observed double-peaked ozone anomaly in the vertical profile around the Equator was supported by the simulations of coupled chemistry climate models (Austin et al., 2008). However, the results presented by Chiodo et al. (2014) suggest the contribution of SC variability could be smaller since two major volcanic eruptions are aligned with solar maximum periods and also given the shortness of the analysed time series (in our case, 35 years). These concerns related to the lower stratospheric response of ozone and temperature derived from observations have already been raised (e.g. Solomon et al., 1996;Lee and Smith, 2003). However, another issue is whether or not the lower stratospheric response could depend on the model employed in the simulations (Mitchell et al., 2015b).
Several past studies (e.g. Soukharev and Hood, 2006; Frame and Gray, 2010; Gray et al., 2013; Mitchell et al., 2014) used multiple linear regression to extract the solar signal and separate other climate phenomena like the QBO, the effect of aerosols, the North Atlantic Oscillation (NAO), the El Niño-Southern Oscillation (ENSO) or trend variability. Apart from this conventional method, it is possible to use alternative approaches to isolate and examine particular signal components, such as wavelet analysis (Pisoft et al., 2012, 2013) or empirical mode decomposition (Coughlin and Tung, 2004). The nonlinear character of the climate system also suggests potential benefits from the application of fully nonlinear attribution techniques to study the properties and interactions in the atmosphere. However, such nonlinear methods have been used rather sporadically in the atmospheric sciences (e.g. Walter and Schönwiese, 2003; Pasini et al., 2006; Blume and Matthes, 2012), mainly due to several disadvantages such as their lack of explanatory power (Olden and Jackson, 2002).
To examine middle atmospheric conditions, it is necessary to study reliable and sufficiently vertically resolved data. Systematic and global observations of the middle atmosphere only began during the International Geophysical Year (1957-1958) and were later expanded through the development of satellite measurements (Andrews and McIntyre, 1987). Supplementary data come from balloon and rocket soundings, though these are limited by their vertical range (only the lower stratosphere in the case of radiosondes) and the fact that the in situ observations measure local profiles only. By assimilation of these irregularly distributed data and discontinuous measurements of particular satellite missions into an atmospheric/climatic model, we have modern basic data sets available for climate research, so-called reanalyses. These types of data are relatively long, globally gridded with a vertical range extending to the upper stratosphere or the lower mesosphere and thus suitable for 11-year SC research. In spite of their known limitations (such as discontinuities in the ERA reanalyses; McLandress et al., 2014), they are considered an extremely valuable research tool (Rienecker et al., 2011).
Coordinated intercomparison has been initiated by the SPARC (Stratospheric Processes and their Role in Climate) community to understand them, and to contribute to future reanalysis improvements (Fujiwara et al., 2012). Under this framework, Mitchell et al. (2014) have examined nine reanalysis data sets in terms of 11-year SC, volcanic, ENSO and QBO variability. Complementing their study, we provide here a comparison with nonlinear regression techniques, assessing robustness of the results obtained by multiple linear regression (MLR). Furthermore, EP flux diagnostics are used to examine solar-induced response during the winter season in both hemispheres, and solar-related variations of assimilated ozone are investigated.
The paper is arranged as follows. In Sect. 2 the used data sets are described. In Sect. 3 the analysis methods are presented along with regressor terms employed in the regression model. Section 4 is dedicated to the description of the annual response results. In Sect. 4.1 solar response in MERRA reanalysis is presented. Next, in Sect. 4.1.1 other reanalyses are compared in terms of SC. Comparison of linear and nonlinear approaches is presented in Sect. 4.1.2. Section 4.2 describes monthly evolution of SC response in the state variables. Section 5 is aimed at dynamical consequences of the SC analysed using the EP flux diagnostics.
Data sets
Our analysis was applied to the most recent generation of three reanalysed data sets: MERRA (Modern Era Reanalysis for Research and Applications, developed by NASA) (Rienecker et al., 2011), ERA-Interim (ECMWF Interim Reanalysis) (Dee et al., 2011) and JRA-55 (Japanese 55-year Reanalysis) (Ebita et al., 2011). We have studied the series for the period 1979-2013. All of the data sets were analysed on a monthly basis. The Eliassen-Palm (EP) flux diagnostics (described below) was computed on a 3-hourly basis from MERRA reanalysis and subsequently monthly means were produced. A similar approach has already been used by Seviour et al. (2012) and Mitchell et al. (2015a). The former study proposed that even 6-hourly data are sufficient to diagnose tropical upwelling in the lower stratosphere. The vertical range extends to the lower mesosphere (0.1 hPa) for MERRA, and to 1 hPa for the remaining reanalyses. The horizontal resolution of the gridded data sets was 1.25° × 1.25° for MERRA and JRA-55 and 1.5° × 1.5° for ERA-Interim, respectively.
In comparison with previous generations of reanalyses, it is possible to observe a better representation of stratospheric conditions. This improvement is considered to be connected with increasing the height of the upper boundary of the model domain (Rienecker et al., 2011). For example, the Brewer-Dobson circulation was markedly overestimated by ERA-40; an improvement was achieved in ERA-Interim, but the upward transport remains faster than observations indicate (Dee et al., 2011). Interim results of JRA-55 suggest a less biased reanalysed temperature in the lower stratosphere relative to JRA-25 (Ebita et al., 2011).
In addition to the standard variables provided in reanalysis, i.e. air temperature, ozone mixing ratio and circulation characteristics (zonal, meridional or omega velocity), we have also analysed other dynamical variables. Of particular interest were the EP flux diagnostics, a theoretical framework to study interactions between planetary waves and the zonal mean flow (Andrews and McIntyre, 1987). Furthermore, this framework allows the study of the wave propagation characteristics in the zonal wind and the induced (large-scale) meridional circulation as well. For this purpose the quasi-geostrophic approximation of the transformed Eulerian mean (TEM) equations was used in the form employed by Edmon et al. (1980), i.e. using their formula (3.1) for EP flux vectors, (3.2) for EP flux divergence and (3.4) for residual circulation. These variables were then interpolated to a regular vertical grid. For visualisation purposes, the EP flux arrows were scaled by the inverse of the pressure. The script was publicly released (Kuchar, 2015).

Methods

To detect variability and changes due to climate-forming factors, such as the 11-year SC, we have applied an attribution analysis based on multiple linear regression (MLR) and two nonlinear techniques. The regression model separates the effects of climate phenomena that are supposed to have an impact on middle atmospheric conditions. Our regression model of a particular variable X as a function of time t, pressure level p, latitude ϕ and longitude λ is described by the following equation:

X(t) = α_m + β_TREND TREND(t) + β_SOLAR SOLAR(t) + Σ_{i=1,2,3} β_QBO,i QBO_i(t) + β_ENSO ENSO(t) + β_SAOD SAOD(t) + β_NAO NAO(t) + ε(t),   (1)

where the coefficients depend on p, ϕ and λ. After deseasonalising, which can be represented by the α index for every month in a year, the individual terms represent a trend regressor TREND(t), either in linear form or including the equivalent effective stratospheric chlorine (EESC) index (this should be employed due to the turnaround of the ozone trend around the middle of the 1990s), and a SOLAR(t) term represented by the 10.7 cm radio flux as a proxy for solar ultraviolet variations at wavelengths 200-300 nm that are important for ozone production and radiative heating in the stratosphere, and which correlates well with sunspot number variation (the data were acquired from the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, Canada).
We have also included the quasi-biennial proxies QBO 1,2,3 (t) as another stratosphere-related predictor. Similar studies have represented the QBO in multiple regression methods in several ways. Our approach involves three separate QBO indices extracted from each reanalysis. These three indices are the first three principal components of the residuals of our linear regression model (1) excluding QBO predictors applied to the equatorial zonal wind. The approach follows the paper by Frame and Gray (2010) or the study by Crooks and Gray (2005) to avoid contamination of the QBO regressors by the solar signal or other regressors. The three principal components explain 49, 47 and 3 % of the total variance for the MERRA; 60, 38 and 2 % for the JRA-55; and 59, 37 and 3 % for the ERA-Interim. The extraction of the first two components reveals a 28-month periodicity and an out-of phase relationship between the upper and lower stratospheres. The out-of phase relationship or orthogonality manifests approximately in a quarter period shift of these components. The deviation from the QBO quasi-regular period represented by the first two dominant components is contained in the residual variance. Linear regression analysis of the zonal wind with the inclusion of the first two principal components reveals a statistically significant linkage between the third principal component and the residuals of this analysis. Furthermore, the regression coefficient of this QBO proxy was statistically significant for all variables p value < 0.05 (see below for details about significance testing techniques). Wavelet analysis for the MERRA demonstrates three statistically significant but non-stationary periods exceeding the level of the white noise wavelet spectrum (not shown): an approximate annual cycle (a peak period of 1 year and 2 months), a cycle with a peak period of 3 years and 3 months and a long-period cycle (a peak period between 10 and 15 years). Those interferences can be attributed to the possible nonlinear interactions between the QBO itself and other signals like the annual cycle or long-period cycle such as the 11-year SC at the equatorial stratosphere.
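A minimal sketch of this construction of the QBO indices is given below (again not the authors' released code): the non-QBO regressors are first removed from the equatorial zonal-wind field at each level by least squares, and the leading principal components of the residuals are then taken as the QBO time series. Array shapes and variable names are illustrative only.

```python
import numpy as np

def qbo_indices(u_eq, X_noqbo, n_components=3):
    """u_eq: (n_time, n_levels) deseasonalised equatorial zonal wind;
    X_noqbo: (n_time, n_regressors) design matrix WITHOUT the QBO terms.
    Returns the principal-component time series and their variance fractions."""
    # Remove the non-QBO signals level by level (ordinary least squares).
    beta, *_ = np.linalg.lstsq(X_noqbo, u_eq, rcond=None)
    resid = u_eq - X_noqbo @ beta
    # Principal components of the residual field via SVD.
    resid = resid - resid.mean(axis=0)
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return U[:, :n_components] * s[:n_components], explained[:n_components]
```

The first two components returned this way should then exhibit the roughly 28-month periodicity and quarter-period phase shift noted above, with the quoted explained-variance fractions serving as a sanity check.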
The El Niño-Southern Oscillation is represented by the multivariate ENSO index ENSO(t) which is computed as the first principal component of the six main observed variables over the Pacific Ocean: sea level pressure, zonal and meridional wind, sea surface temperature, surface air temperature and total cloudiness fraction of the sky (NCAR, 2013). The effect of volcanic eruptions is represented by the stratospheric aerosol optical depth SAOD(t). The time series was derived from the optical extinction data (Sato et al., 1993). We have used globally averaged time series in our regression model. The North Atlantic Oscillation has also been included through its index NAO(t) derived by rotated principal component analysis applied to the monthly standardised 500 hPa height anomalies obtained from the Climate Data Assimilation System (CDAS) in the Atlantic region between 20 and 90 • N (NOAA, 2013).
The robustness of the solar regression coefficient has been tested in terms of including or excluding particular regressors in the regression model; e.g. the NAO term was removed from the model and the resulting solar regression coefficient was compared with the solar regression coefficient from the original regression set-up. The solar regression coefficient seems to be highly robust since neither the amplitude nor the statistical significance field was changed significantly when NAO or QBO 3 or both of them were removed. However, cross-correlation analysis reveals that the correlation between NAO and TREND, SOLAR and SAOD regressors is statistically significant, but small (not shown).
The multiple regression model of Eq. (1) has been used for the attribution analysis, and supplemented by two nonlinear techniques. The MLR coefficients were estimated by the least squares method. To avoid the effect of autocorrelation of residuals and to obtain the best linear unbiased estimate (BLUE) according to the Gauss-Markov theorem (Thejll, 2005), we have used an iterative algorithm to model the residuals as a second-order autoregressive process. A Durbin-Watson test (Durbin and Watson, 1950) confirmed that the regression model was sufficient to account for most of the residual autocorrelations in the data.
As a result of the uncorrelated residuals, we can assume that the standard deviations of the estimated regression coefficients are not underestimated (Neter et al., 2004). The statistical significance of the regression coefficients was computed with a t test.
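For illustration, the following Python sketch (not the authors' released script) shows the basic structure of such a grid-point attribution fit with ordinary least squares and a t test on the coefficients; the regressor time series are synthetic stand-ins for the predictors described above, and the iterative AR(2) residual correction is omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 35 * 12                                   # monthly data, 1979-2013

# Stand-in (synthetic) regressors; in practice these are the TREND/EESC,
# SOLAR, QBO, ENSO, SAOD and NAO time series described in the text.
regressors = {
    "TREND": np.linspace(0, 1, n),
    "SOLAR": np.sin(2 * np.pi * np.arange(n) / 132),   # ~11-yr proxy
    "QBO1":  np.sin(2 * np.pi * np.arange(n) / 28),
    "ENSO":  rng.standard_normal(n),
    "SAOD":  rng.standard_normal(n),
    "NAO":   rng.standard_normal(n),
}
X = sm.add_constant(np.column_stack(list(regressors.values())))
names = ["const"] + list(regressors)

# Deseasonalised anomaly series at one grid point (synthetic here).
y = 0.8 * regressors["SOLAR"] + 0.5 * rng.standard_normal(n)

fit = sm.OLS(y, X).fit()
for name, beta, p in zip(names, fit.params, fit.pvalues):
    print(f"{name:6s} beta={beta:+.3f}  p={p:.3f}")
```

Multiplying the SOLAR coefficient by the solar-maximum-minus-minimum difference in F10.7 (126.6 solar radio flux units in this study) converts it into the "max minus min" response quoted in the figures.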
The nonlinear approach, in our case, consisted of a multilayer perceptron (MLP) and the relatively novel epsilon support vector regression (ε-SVR) technique with the threshold parameter ε = 0.1. The MLP as a technique inspired by the human brain is capable of capturing nonlinear interactions between inputs (regressors) and output (modelled data) (e.g. Haykin, 2009). The nonlinear approach is achieved by transferring the input signals through a sigmoid function in a particular neuron and within a hidden layer propagating to the output (a so-called feed-forward propagation). The standard error back-propagation iterative algorithm to minimise the global error has been used.
The support vector regression technique belongs to the category of kernel methods. Input variables were nonlinearly transformed to a high-dimensional space by a radial basis (Gaussian) kernel, where a linear classification (regression) can be constructed (Cortes and Vapnik, 1995). However, cross-validation must be used to establish the kernel parameter and the cost parameter, searched on logarithmic grids from 10⁻⁵ to 10¹ and from 10⁻² to 10⁵, respectively. We have used 5-fold cross-validation to optimise the SVR model selection for every point in the data set as a trade-off between the recommended number of folds (Kohavi, 1995) and computational time. The MLP model was validated by the holdout cross-validation method, since this method is an order of magnitude more expensive in terms of computational time. The data sets were separated into a training set (75 % of the whole data set) and a testing set (25 % of the whole data set). The neural network model was restricted to only one hidden layer with the maximum number of neurons set to 20.
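The nonlinear fits can be reproduced in outline with scikit-learn, as in the hedged sketch below (the original analysis did not necessarily use this library). The logarithmic search grids, ε = 0.1, the one-hidden-layer MLP and the 75/25 holdout split follow the description above, while X and y are synthetic stand-ins.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 35 * 12
X = rng.standard_normal((n, 8))               # stand-in regressor matrix
y = X[:, 1] + 0.3 * np.tanh(X[:, 2]) + 0.5 * rng.standard_normal(n)

# epsilon-SVR with an RBF kernel; 5-fold CV over the cost and kernel width
# on logarithmic grids, as described in the text.
param_grid = {"C": np.logspace(-2, 5, 8), "gamma": np.logspace(-5, 1, 7)}
svr = GridSearchCV(SVR(kernel="rbf", epsilon=0.1), param_grid, cv=5)
svr.fit(X, y)

# Multilayer perceptron with a single hidden layer of at most 20 neurons,
# validated on a 75/25 holdout split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)

print("SVR best parameters:", svr.best_params_)
print("MLP holdout R^2:", mlp.score(X_te, y_te))
```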
The earlier-mentioned lack of explanatory power of the nonlinear techniques, i.e. the complicated interpretation of the statistical models (Olden and Jackson, 2002), mainly comes from nonlinear interactions during signal propagation and the impossibility of directly monitoring the influence of the input variables. In contrast to the linear regression approach, the understanding of relationships between variables is quite problematic. For this reason, the responses of our variables have been modelled by a technique originating from sensitivity analysis studies and also used by e.g. Blume and Matthes (2012). The relative impact RI_k of each variable was computed as RI_k = I_k / Σ_j I_j, where I_k = σ(ŷ − ŷ_k), and σ(ŷ − ŷ_k) denotes the variance of the difference between the original model output ŷ and the model output ŷ_k obtained when the k-th input variable was held at a constant level. There are many possibilities with regard to which constant level to choose. It is possible to choose several levels and then to observe the sensitivity of the model outputs, for example at the minimum, median and maximum levels. Our sensitivity measure (relative impact) was based on the median level. The primary reason is purely practical, namely to compute our results fast enough, since another weakness of the nonlinear techniques lies in their larger demand on computational capacity. In general, this approach was chosen because of its relative simplicity, which allows all techniques to be compared to each other and interpreted. The contribution of variables in neural network models has already been studied, and Gevrey et al. (2003) produced a review and comparison of these methods.

Annual response (MERRA)

Figure 1a, d, g, j shows the annually averaged solar signal in the zonal means of temperature, zonal wind, geopotential height and ozone mixing ratio. The signal is expressed as the average difference between the solar maxima and minima in the period 1979-2013, i.e. normalised by 126.6 solar radio flux units. Statistically significant responses detected by the linear regression in the temperature series (see Fig. 1a) are positive and are located around the Equator in the lower stratosphere with values of about 0.5 K. The temperature response increases to 1 K in the upper stratosphere at the Equator and up to 2 K at the poles. The significant solar signal anomalies are more variable around the stratopause and not limited to the equatorial regions. Hemispheric asymmetry of the statistical significance can be observed in the lower mesosphere. From a relative impact point of view (in Fig. 2a-c marked as RI), it is difficult to detect a signal with an impact larger than 20 % in the lower stratosphere, where the volcanic and QBO impacts dominate. In the upper layers (where the solar signal expressed by the regression coefficient is continuous across the Equator) we have detected relatively isolated signals (over 20 %) around ±15° using the relative impact method. The hemispheric asymmetry also manifests in the relative impact field, especially in the SVR field in the mesosphere.
The annually averaged solar signal in the zonal mean of zonal wind (Figs. 1d and 2d-f) dominates around the stratopause as an enhanced subtropical westerly jet. The zonal wind variability due to the SC corresponds to the temperature variability due to the change in the meridional temperature gradient and via the thermal wind equation. The largest positive anomaly in the Northern Hemisphere reaches 4 m s −1 around 60 km (Fig. 1d). In the Southern Hemisphere, the anomaly is smaller and not statistically significant. There is a significant negative signal in the southern polar region. The negative anomalies correspond to a weakening of the westerlies or an amplification of the easterlies. The relative impact of the SC is similarly located zonally even for both nonlinear techniques (Fig. 2d-f). The equatorial region across all the stratospheric layers is dominantly influenced by the QBO (expressed by all three QBO regressors) and for this reason the solar impact is minimised around the Equator. The pattern of the solar response in geopotential height shows positive values in the upper stratosphere and lower mesosphere. This is also consistent with the zonal wind field through thermal wind balance. In the geopotential field, the SC influences the most extensive area among all regressors. The impact area includes almost the whole mesosphere and the upper stratosphere. Figure 1j also shows the annual mean solar signal in the zonal mean of the ozone mixing ratio (expressed as a percent change per annual mean). By including an EESC regressor term in the regression model instead of a linear trend over the whole period (for a detailed description see the methodology Sect. 3), we tried to capture the ozone trend change around the year 1996. Another possibility was to use our model over two individual periods, e.g. 1979-1995 and 1996-2013, but the results were quantitatively similar. The main common feature of the MERRA solar ozone response in Fig. 1j with observational results is the positive ozone response in the lower stratosphere, ranging from a 1 to 3 percent change.
In the equatorial upper stratosphere, no solar signal was detected that is comparable to that estimated from satellite measurements (Soukharev and Hood, 2006). By the relative impact method (Fig. 2j-l), we have obtained results comparable with the linear regression coefficients.
Annual response -comparison with JRA-55, ERA-Interim
Comparison of the results for the MERRA, ERA-Interim and JRA-55 temperature, zonal wind and geopotential height shows that the annual responses to the solar signal are in qualitative agreement (compare individual plots in Fig. 1). The zonal wind and geopotential responses seem to be consistent across all presented methods and data sets. The largest discrepancies can be seen in the upper stratosphere and especially in the temperature field (the first row in these figures). The upper stratospheric equatorial anomaly was not detected by any of the regression techniques in the case of the JRA-55 reanalysis, although the JRA-25 showed a statistically significant signal with structure and amplitude of 1-1.25 K comparable with ERA-Interim at the equatorial stratopause (Mitchell et al., 2014). Although the anomaly in the MERRA temperature in Fig. 1a in the upper stratosphere is comparable to that in the ERA-Interim temperature in Fig. 1b, the former signal is situated lower down at around 4 hPa (see also Mitchell et al., 2014). However, the upper stratospheric temperature response could be less than accurate due to the existence of discontinuities in 1979, 1985 and 1998 (McLandress et al., 2014), coinciding with major changes in instrumentation or analysis procedure. Therefore, the temperature response to solar variation may be influenced by these discontinuities in the upper stratosphere. The revised analysis with the adjustments of the ERA-Interim temperature data from McLandress et al. (2014) showed, in comparison with the original analysis without any adjustment, that the most pronounced differences appear at higher latitudes and especially at 1 hPa. The regression coefficients decreased by about 50 % when using the adjusted data set, but the differences are not statistically significant in terms of the 95 % confidence interval. The difference in tropical latitudes is about 0.2 K/(S_max − S_min). The trend regressor TREND(t) from Eq. (1) reveals a large turnaround from a positive to a negative trend at the adjusted levels, i.e. 1, 2, 3 and 5 hPa. Other regressors do not reveal any remarkable difference. The results in Figs. 1b, e, h, k and 3 from the raw data set were kept in order to allow discussion of the agreement and differences between our results and the results from Mitchell et al. (2014), where no adjustments were considered either.
The variability of the solar signal in the MERRA stratospheric ozone series was compared with the ERA-Interim results. The analysis points to large differences in the ozone response to the SC between the reanalyses and in comparison with satellite measurements by Soukharev and Hood (2006). In comparison with the satellite measurements, no relevant solar signal was detected in the upper stratosphere in the MERRA series. The signal seems to be shifted above the stratopause (confirmed by all techniques, shown in Figs. 2 and 3j-l). Regarding ERA-Interim, there is a statistically significant ozone response to the SC in the upper stratosphere, but it is negative in sign, with values reaching up to 2 % above the Equator and up to 5 % in the polar regions of both hemispheres. However, a negative ozone and a positive temperature response in the upper stratosphere to a positive UV flux change from solar minimum to maximum is not physically reasonable. It must reflect an artifact of the assimilation model scheme and/or internal variability of the model rather than an effect of solar forcing (for more details about ozone as a prognostic variable in ERA-Interim, see Dee et al., 2011). There is a clear inverse correlation between the ERA-Interim temperature response in Fig. 1b and the ozone response in Fig. 1k. This probably implies that the temperature response is producing the negative ozone response in the assimilation model. However, it is not physically reasonable, because both the ozone and the temperature in the upper stratosphere respond positively to an increase in solar UV (e.g. Hood et al., 2015). In the case of MERRA, while SBUV ozone profiles are assimilated with the SC passed to the forecast model (as the ozone analysis tendency contribution), no SC was passed to the radiative part of the model. The same is also true for ERA-Interim and JRA-55 (see the descriptive table of the reanalysis products on SC in irradiance and ozone in Mitchell et al., 2014). Despite the fact that the analysed ozone should contain a solar signal, the signal is not physically reasonable and is dominated by internal model variability in terms of dynamics and chemistry. Since the SBUV ozone profiles have very low vertical resolution, this may also affect the ozone response to the SC in the MERRA reanalysis. These facts should also be taken into account in the discussion of the monthly responses of particular variables in Sect. 4.2.

Figure 3. The annually averaged response of the solar signal in the ERA-Interim zonal-mean temperature t (a-c), unit: K; zonal wind u (d-f), unit: m s−1; geopotential height h (g-i), unit: gpm; and ozone mixing ratio o3 (j-l), unit: percentage change per annual mean. The response is expressed using the relative impact (RI) approach. The relative impact was modelled by the MLR, SVR and MLP techniques. The black contour levels in the RI plots are 0.2, 0.4, 0.8 and 1.0.
The lower stratospheric ozone response in ERA-Interim is not limited to the equatorial belt ±30° up to 20 hPa, as it is in the MERRA reanalysis, and the statistical significance of this signal is rather reduced. The solar signal is detected higher up and extends from the subtropical areas to the polar regions. The results suggest that the solar response in the MERRA series is more similar to the results from satellite measurements (Soukharev and Hood, 2006). Nevertheless, further comparison with independent data sets is needed to assess the data quality in detail.
Comparison of the linear and nonlinear approaches (MLR vs. SVR and MLP)
In this paper, we have applied and compared one linear (MLR) and two nonlinear (SVR and MLP) attribution techniques. The response of the studied variables to the solar signal and other forcings was studied using a sensitivity analysis approach, in terms of the averaged response deviation from the equilibrium represented by the original model output ŷ (Blume and Matthes, 2012). This approach does not distinguish between positive and negative responses, as linear regression does. For this reason, the relative impact results are compared to the regression coefficients. Using linear regression, it is possible to relate the statistical significance of the regression coefficients to a particular level of relative impact, since the two are linearly proportional. A comparison between the linear and nonlinear approaches through the relative impact fields shows qualitative, and in most regions also quantitative, agreement. The most pronounced agreement is observed in the zonal wind (Figs. 2, 3 and 4d-f) and geopotential height fields (Figs. 2, 3 and 4g-i). On the other hand, the agreement is worse in the ozone and temperature fields. In the temperature field, the upper stratospheric solar signal reaches values of over 20 %, and some individual signals in the Southern Hemisphere even reach 40 %. However, using the relative impact approach, the lower stratospheric solar signal in the temperature field (which is well established by the regression coefficient) does not even reach 20 % because of the dominance of the QBO and volcanic effects. These facts emphasise that nonlinear techniques contribute to the robustness of the attribution analysis, since the linear regression results were plausibly confirmed by the SVR and MLP techniques.
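As a rough illustration of this sensitivity idea, the sketch below computes a simple "relative impact" for each regressor of an already fitted model by sweeping that regressor over its observed range while the others are held at their mean values and averaging the absolute deviation of the prediction from the reference output. This is only an assumed, simplified reading of the approach of Blume and Matthes (2012); the exact definition and normalisation used in the study may differ.

```python
# Illustrative sensitivity sketch of a "relative impact" measure for a fitted
# model (linear or nonlinear). The exact definition and normalisation of
# Blume and Matthes (2012) may differ; this is a simplified reading of it.
import numpy as np

def regressor_impact(model, X, j, n_steps=11):
    """Mean absolute deviation of the prediction from the reference output
    when regressor j is swept over its observed range and the other
    regressors are held at their mean values."""
    x_ref = X.mean(axis=0, keepdims=True)
    y_ref = model.predict(x_ref)[0]
    deviations = []
    for value in np.linspace(X[:, j].min(), X[:, j].max(), n_steps):
        x = x_ref.copy()
        x[0, j] = value
        deviations.append(abs(model.predict(x)[0] - y_ref))
    return float(np.mean(deviations))

def relative_impacts(model, X):
    """Impacts of all regressors, normalised so that they sum to one."""
    raw = np.array([regressor_impact(model, X, j) for j in range(X.shape[1])])
    return raw / raw.sum()
```

Because the measure only needs model predictions, the same function can be applied unchanged to the MLR, SVR and MLP fits, which is what makes a like-for-like comparison of the three techniques possible; note, however, that it yields magnitudes only and discards the sign of the response, as stated above.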
In conclusion, the comparison of various statistical approaches (MLR, SVR and MLP) contributes to the robustness of the attribution analysis, including the statistically assessed uncertainties. These uncertainties could partially stem from the fact that the SVR and neural network techniques depend on an optimal model setting obtained through a rigorous cross-validation process, which places a high demand on computing time.
The major differences between the techniques can be seen in how much of the temporal variability of the original time series is explained, i.e. in the coefficient of determination. For instance, the differences in explained variance reach up to 10 % between the linear and nonlinear techniques, although the zonal structure of the coefficient of determination is almost the same. To conclude, the nonlinear techniques show an ability to simulate the middle atmosphere variability with higher accuracy than cross-validated linear regression.
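To give a concrete picture of how such an explained-variance comparison could be set up, the following sketch evaluates the cross-validated coefficient of determination for the three techniques on a single grid-point time series. The hyperparameter choices shown are placeholders; the study's actual SVR and MLP settings were determined by its own, more rigorous cross-validation.

```python
# Sketch comparing the cross-validated coefficient of determination (R^2) of
# the linear and nonlinear techniques for one grid-point time series. The
# hyperparameters below are placeholders, not the study's settings.
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def compare_r2(X, y, cv=5):
    """Return the mean cross-validated R^2 for MLR, SVR and MLP."""
    models = {
        "MLR": LinearRegression(),
        "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
        "MLP": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(16,),
                                          max_iter=5000, random_state=0)),
    }
    return {name: cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
            for name, model in models.items()}
```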
Monthly response (MERRA)
As was pointed out by Frame and Gray (2010), it is necessary to examine the solar signal in individual months because of the solar impact on the polar-night jet oscillation (Kuroda and Kodera, 2001). For example, the amplitude of the lower stratospheric solar signal in the northern polar latitudes in February exceeds the annual response, since the SC influence on vortex stability is most pronounced in February. Besides the radiative influences of the SC, we discuss the dynamical response throughout the polar winter (Kodera and Kuroda, 2002).

Figure 5. The monthly averaged response of the solar signal in the MERRA zonal-mean temperature t (a-d), unit: K, contour levels: 0, ±0.25, ±0.5, ±1, ±2, ±5, ±10, ±15, ±30; zonal wind u (e-h), unit: m s−1, contour levels: 0, ±1, ±2, ±5, ±10, ±15, ±30; geopotential height h (j-l), unit: gpm, contour levels: 0, ±10, ±20, ±50, ±100, ±150, ±300; EP flux divergence EPfD (m-p), unit: m s−1 day−1, together with EP flux vectors scaled by the inverse of the pressure, unit: m2 s−2; and ozone mixing ratio, unit: percentage change per monthly mean, with residual circulation o3 + rc (q-t), units: m s−1; −10−3 Pa s−1, during northern hemispheric winter. The response is expressed as a regression coefficient (corresponding units per Smax minus Smin). The statistical significance of the scalar fields was computed by a t test. Red and yellow areas in panels (a-l) and grey contours in panels (m-t) indicate p values of < 0.05 and < 0.01 respectively.

Figure 6. As Fig. 5, but during southern hemispheric winter.

Statistically significant upper stratospheric equatorial anomalies in the temperature series (winter months in Figs. 5 and 6a-d) are present in almost all months. Their amplitude and statistical significance vary throughout the year. The variation between the solar maxima and minima can be up to 1 K in some months; outside the equatorial regions, the fluctuation can reach several Kelvin. The lower stratospheric equatorial anomaly strengthens during winter. This could be an indication of dynamical changes, i.e. an alteration of the residual circulation between the equatorial and polar regions (for details, see Sect. 5). Aside from the radiative forcing by direct or ozone heating, other factors are linked to the anomalies in the upper levels of the middle atmosphere (Haigh, 1994; Gray et al., 2009). It is necessary to take into consideration the dynamical coupling with the mesosphere through changes of the residual circulation (see the dynamical effects discussion below). This can be illustrated by the positive anomaly around the stratopause in February (up to 4 K around 0.5 hPa). This anomaly extends further down and, together with spring radiative forcing, affects the stability of the equatorial stratopause. The hemispheric asymmetry in the temperature response above the stratopause probably originates from hemispheric differences, i.e. different wave activity (Kuroda and Kodera, 2001). These statistically significant positive temperature anomalies across the subtropical stratopause begin to descend and move to higher latitudes at the beginning of the northern winter. The anomalies manifest fully in February in the region between 60 and 90° N and reach tropospheric levels, in contrast to the results for the Southern Hemisphere (see Fig. 10 in Mitchell et al., 2014). The southern hemispheric temperature anomaly is persistent above the stratopause, and the SC influence on vortex stability differs from that in the Northern Hemisphere.
The monthly temperature anomalies described above correspond to the zonal wind anomalies throughout the year (Figs. 5 and 6e-h). The strengthening of the subtropical jets around the stratopause is most apparent during winter in both hemispheres. This positive zonal wind anomaly gradually descends and moves poleward, similar to the Frame and Gray (2010) analysis based on ERA-40 data. In February, the intensive stratospheric warming and mesospheric cooling are associated with a more pronounced transition from winter to summer circulation attributed to the SC (up to 30 % in the relative impact methodology). However, GCMs have not yet successfully simulated the strong polar warming in February (e.g. Schmidt et al., 2010; Mitchell et al., 2015b). Due to the short (35-year) time series, it is possible that this pattern is not really solar in origin but is instead a consequence of internal climate variability or aliasing from the effects of the two major volcanic eruptions aligned with solar maximum periods.
In the Southern Hemisphere, this poleward motion of the positive zonal wind anomaly halts at approximately 60° S. For example, in August we can observe a well-marked latitudinal zonal wind gradient (Fig. 6h). Positive anomalies in the geopotential height field correspond to the easterly zonal wind anomalies. The polar circulation reversal is associated with an intrusion of ozone from lower latitudes, as is apparent e.g. in August in the Southern Hemisphere and in February in the Northern Hemisphere (last rows of Figs. 5 and 6).
A comparison of the MERRA results with the ERA-40 series studied by Frame and Gray (2010) revealed distinct differences (Fig. 5e, f) in the equatorial region of the lower mesosphere in October and November. While in the MERRA reanalysis we detected an easterly anomaly above 1 hPa in both months (only November shown), a westerly anomaly was identified in the ERA-40 series. No further distinct differences in the zonal-mean temperature and zonal wind anomalies were found.
Dynamical effects discussion
In this section, we discuss the dynamical impact of the SC and its influence on middle atmospheric winter conditions. Linear regression was applied to the EP diagnostics. Kodera and Kuroda (2002) suggested that the solar signal produced in the upper stratosphere is transmitted to the lower stratosphere through the modulation of the internal mode of variation in the polar-night jet and through a change in the Brewer-Dobson circulation (prominent in the equatorial region in the lower stratosphere). In our analysis, we discuss the evolution of the winter circulation with an emphasis on the vortex itself rather than on the behaviour of the jets. Furthermore, we attempt to describe the possible processes leading to the observed differences in the quantities of state between the solar maximum and minimum periods. Because the superposition principle only holds for linear processes, it is impossible to deduce the dynamics merely from the fields of differences. As noted by Kodera and Kuroda (2002), the dynamical response of the winter stratosphere includes highly nonlinear processes, e.g. wave-mean flow interactions. Thus, both the anomaly and the total fields, including climatology, must be taken into account.
We start the analysis of solar maximum dynamics with the period when the northern hemispheric winter circulation forms. The anomalies of ozone, temperature and geopotential (in the lower stratosphere only) and of Eliassen-Palm flux divergence (mostly in the upper stratosphere) support the hypothesis of a weaker BDC during the solar maximum due to less intensive wave pumping. This is possible through the "downward control" principle, whereby modification of wave-mean flow interaction at upper levels governs changes in the residual circulation below (Haynes et al., 1991). The finding of a weaker BDC during the solar maximum is consistent with previous studies (Kodera and Kuroda, 2002; Matthes et al., 2006). The causality is unclear, but the effect is visible in both branches of the BDC, as illustrated by Fig. 5 and summarised schematically in Fig. 7.
During the early Northern Hemisphere (NH) winter (including November), when westerlies develop in the stratosphere, we can observe a deeper polar vortex and consequently stronger westerly winds both inside and outside the vortex. However, only the westerly anomaly outside the polar region, around 30° N from 10 hPa to the lower mesosphere, is statistically significant (see the evolution of zonal wind anomalies in Fig. 5e-h). The slightly different wind field has a direct influence on the vertical propagation of planetary waves. From the Eliassen-Palm flux anomalies and climatology we can see that the waves propagate vertically with an increasingly poleward, rather than equatorward, meridional direction with height. This is then reflected in the EP flux divergence field, where the region of maximal convergence is shifted poleward and an anomalous convergence region emerges inside the vortex above approximately 50 hPa (Fig. 5m-p).
The poleward shift of the maximum convergence area further contributes to the reduced BDC. This is again confirmed by the temperature and ozone anomalies. The anomalous convergence inside the vortex induces an anomalous residual circulation, the manifestation of which is clearly seen in the quadrupole-like temperature structure (positive and negative anomalies are depicted schematically in Fig. 7 using red and blue boxes respectively). This pattern emerges in November and even more clearly in December. In December, the induced residual circulation leads to an intrusion of ozone-rich air into the vortex at about the 1 hPa level (Fig. 5r). The inhomogeneity in the vertical structure of the vortex is then also pronounced in the geopotential height differences. This corresponds to the temperature analysis in the sense that above and within the region of the colder anomaly there is a negative geopotential anomaly, and vice versa. The geopotential height difference has a direct influence on the zonal wind field (via thermal wind balance). The result is a deceleration of the upper parts of the vortex and their consequent broadening (due to the conservation of angular momentum).
Considering the zonal wind field, the vortex enters January with approximately its average climatological extent. The wind speeds in its upper parts are slightly higher. This is because of the smaller geopotential values corresponding to the negative temperature anomalies above approximately 1 hPa. This probably results from the absence of adiabatic heating due to the suppressed BDC, although the differences in the quantities of state (temperature and geopotential height) are small and insignificant (see the temperature anomalies in Fig. 5c). It is important to note that these differences change sign around an altitude of 40 km inside the vortex, further accentuating the vertical inhomogeneity of the vortex. This might initiate balancing processes inside the vortex, which is confirmed by analysis of the dynamical quantities, i.e. the EP flux and its divergence (Fig. 5o).

Figure 7. Solar cycle modulation of the winter circulation: schema of the related mechanisms. The upper and lower panels show early and later winter respectively. The heating and cooling anomalies are drawn as red and blue boxes. The EP flux divergence and convergence are drawn as green and yellow boxes. The wave propagation anomaly is shown as a wavy red arrow, in contrast to the climatological average drawn as a wavy grey arrow. The induced residual circulation according to the quasi-geostrophic approximation is highlighted by the bold black lines.
Significant anomalies of the EP flux indicate anomalous vertical wave propagation, resulting in strong anomalous EP flux convergence that is significantly pronounced over a horizontally broad region and confined to upper levels (convergence (negative values) drawn in green or blue shades in Fig. 5m-p). This leads to the induction of an anomalous residual circulation that starts to gain intensity in January. The situation then results in the disruption of the polar vortex, visible as significant anomalies in the quantities of state in February, in contrast to January. Further strong mixing of air is suggested by the ozone fields. The quadrupole-like structure of the temperature is visible across the whole NH middle atmosphere in February (indicated in the lower diagram of Fig. 7), especially at higher latitudes. It is highly significant and clearly expressed as stratospheric warming and mesospheric cooling.
The hemispheric asymmetry of the SC influence is especially evident in winter conditions, as was already suggested in Sect. 4.2. Since the positive zonal wind anomaly halts at approximately 60° S and intensifies to over 10 m s−1, one would expect the poleward deflection of planetary wave propagation to follow the NH winter mechanisms discussed above. This is actually observed from June to August, when the largest negative anomalies of the latitudinal component of the EP flux are located in the upper stratosphere and in the lower mesosphere (Fig. 6m-p). The anomalous divergence of the EP flux develops around the stratopause between 30 and 60° S. As in the hypothetical mechanism of a weaker BDC described above, we can observe less wave pumping in the stratosphere and consequently less upwelling in the equatorial region. In line with that, we see in the lower stratosphere of the equatorial region (Figs. 5b and 6b) a more pronounced temperature response in August (above 1 K) than in December (around 0.5 K), as already mentioned in previous observational (van Loon and Labitzke, 2000) and reanalysis (Mitchell et al., 2014) studies. Although this can point to a weaker BDC, the residual circulation (Fig. 6q-t) as a proxy for the BDC (Butchart, 2014) does not reveal this signature. Hypothetically, this could be due to a greater role of unresolved wave processes in the reanalysis (small-scale gravity waves), due to the poorer performance of the residual circulation as a proxy for large-scale transport in the SH (e.g. a larger departure from the steady-wave approximation compared to the NH), or because of processes other than the BDC leading to the temperature anomaly, e.g. aliasing with the volcanic signal.
Overall, the lower stratospheric temperature anomaly is more coherent for the SH winter than for the NH winter, where the solar signal is not so apparent or statistically significant in particular months and reanalysis data sets.
Conclusions
We have analysed the changes in air temperature, ozone and circulation characteristics driven by the 11-year solar cycle's influence on the stratosphere and lower mesosphere. Attribution analysis was performed on three reanalysis data sets, MERRA, ERA-Interim and JRA-55, and aimed to compare how these types of data sets resolve the solar variability throughout the levels where the "top-down" mechanism is assumed to operate. Furthermore, the results of the linear attribution using MLR were compared with other relevant attribution studies and supported by nonlinear attribution analysis using the SVR and MLP techniques.
The nonlinear approach to attribution analysis, represented by the application of the SVR and MLP, largely confirmed the solar response computed by linear regression. Consequently, these results can be considered quite robust regarding the statistical modelling of the solar variability in the middle atmosphere. This finding indicates that linear regression is a sufficient technique to resolve the basic shape of the solar signal through the middle atmosphere. However, some uncertainties could partially stem from the fact that the SVR and MLP techniques are highly dependent on an optimal model setting that requires a rigorous cross-validation process (which places a high demand on computing time). As a benefit, the nonlinear techniques show an ability to simulate the middle atmosphere variability with higher accuracy than linear regression.
The solar signal extracted from the temperature fields of the MERRA and ERA-Interim reanalyses using linear regression has amplitudes of around 1 and 0.5 K in the upper and lower stratospheric equatorial regions respectively. However, the peak amplitudes of the temperature response in the equatorial upper stratosphere occur at different levels (about 4 and 2 hPa respectively). These signals, statistically significant at a p value < 0.01, are in qualitative agreement with previous attribution studies (e.g. Frame and Gray, 2010; Mitchell et al., 2014). In the JRA-55 reanalysis, a statistically significant signal was observed only in the lower part of the stratosphere, albeit with amplitudes similar to those of the other data sets.
Similarly to the temperature response, a double-peaked solar response in ozone has been detected in satellite measurements (e.g. Soukharev and Hood, 2006), although concerns have been expressed about the physical mechanism of the lower stratospheric response (e.g. Austin et al., 2008). However, the exact position and amplitude of both ozone anomalies remain a point of disagreement between models and observations. The results of our attribution analysis point to large differences in the upper stratospheric ozone response to the SC in comparison with the studies mentioned above and even between the reanalyses themselves. The upper stratospheric ozone anomaly reaches 2 % in the SBUV(/2) satellite measurements (e.g. Soukharev and Hood, 2006, Fig. 5), which were assimilated as the only source of ozone profiles in the MERRA reanalysis. This fact is remarkable, since the same signal was not detected in the upper stratosphere in the MERRA results. However, the solar signal in the ozone field seems to be shifted above the stratopause, where a similar and statistically significant solar variability was attributed. Concerning the solar signal in ERA-Interim, there is a negative ozone response in terms of the regression coefficient in the upper stratosphere, although the solar variability expressed as relative impact appears to be in agreement with the satellite measurements. The negative ozone response in the tropical upper stratosphere is not consistent with physical expectations for a nominal positive change in solar UV irradiance (e.g. Hood et al., 2015).
Furthermore, the lower stratospheric solar response in ERA-Interim's ozone around the Equator is reduced and shifted to higher latitudes. Another difference was detected in the monthly response of the zonal wind in October and November in the equatorial region of the lower mesosphere between the results for the MERRA series and the ERA-40 data studied by Frame and Gray (2010). While in the MERRA reanalysis we detected an easterly anomaly, a westerly anomaly was identified in the ERA-40 series.
A similar problem with correctly resolving the double-peaked ozone anomaly was reported in the study of Dhomse et al. (2011), which investigated the solar response in tropical stratospheric ozone using a 3-D chemical transport model. The upper stratospheric solar signal observed in SBUV/SAGE and SAGE-based data could only be reproduced in model runs with unrealistic dynamics, i.e. with no inter-annual meteorological changes.
The reanalyses have proven to be extremely valuable scientific tools (Rienecker et al., 2011). On the other hand, they have to be used with caution, for example because of the large discontinuities occurring in 1979, 1985 and 1998 (McLandress et al., 2014) that translate into errors in the derived solar coefficients. Our revised analysis with the adjustments from McLandress et al. (2014) resulted in a 0.2 K/(Smax − Smin) reduction in the temperature solar regression coefficients in the tropical latitudes of the upper stratosphere.
In the dynamical effects discussion, we described the dynamical impact of the SC on middle atmospheric winter conditions. The relevant dynamical effects are summarised in schematic diagrams (Fig. 7). Both diagrams depict average conditions and the anomalies induced by the SC. The first summarises how equatorward wave propagation is influenced by the westerly anomaly around the subtropical stratopause. The quadrupole-like temperature structure is explained by an anomalous residual circulation at higher latitudes together with the anomalous branch heading towards the equatorial region, already hypothesised by Kodera and Kuroda (2002). The second diagram summarises the transition to vortex disruption during February. The quadrupole-like temperature structure is then even more pronounced, especially in the polar region, and appears to extend further towards lower latitudes.
Fields of residual circulation and EP flux divergence in February are opposite to what would be expected from a suppressed BDC at SC maximum. There is enhanced downwelling in the polar region and enhanced upwelling in the equatorial region below 1 hPa. This suggests a need to diagnose the influence of the SC on transport at least on a monthly scale, because the changes in the underlying dynamics (compare the upper and lower diagrams in Fig. 7) make the transport pathways more complicated. The negative zonal wind response in late northern winter may be caused by an increased likelihood of major stratospheric warmings later in the winter under solar maximum conditions, when the polar vortex in early winter is stronger, on average, and therefore less susceptible to disruption (e.g. Gray et al., 2004). Since GCMs have not yet successfully simulated this pattern (e.g. Schmidt et al., 2010; Mitchell et al., 2015b) and due to the short (35-year) time series, it is possible that this pattern is not really solar in origin but is instead a consequence of internal climate variability or aliasing from the effects of the two major volcanic eruptions aligned with solar maximum periods.
However, we can reasonably assume that the dynamical effects are not as zonally uniform as depicted here using two-dimensional (2-D) EP diagnostics and the TEM equations. Hence, it would be interesting to extend the discussion of dynamical effects to other relevant characteristics, for example the analysis of wave propagation and wave-mean flow interaction using the 3-D formulation (Kinoshita and Sato, 2013).
This paper has focused entirely on the SC influence, i.e. on decadal changes in the stratosphere and lower mesosphere, although a large number of results concerning other forcings were generated by the attribution analysis. The QBO phenomenon in particular could be a point of future interest, since the solar-QBO interaction and the modulation of the Holton-Tan relationship by the SC are regarded as highly challenging, especially in global climate simulations. | 2018-12-12T18:52:31.510Z | 2015-06-24T00:00:00.000 | {
"year": 2015,
"sha1": "0248fb8004b4dae43c3ac8412ae1d1c7ed640506",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/15/6879/2015/acp-15-6879-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0248fb8004b4dae43c3ac8412ae1d1c7ed640506",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
43996063 | pes2o/s2orc | v3-fos-license | Spontaneous lung pathology in a captive common marmoset colony (Callithrix jacchus)
Abstract Data on spontaneous pathology are substantially scarcer for common marmosets than for other laboratory animals but are essential for the interpretation of histological findings in the context of toxicological and experimental studies. Especially if common marmosets are used as experimental animals in respiratory research, detailed knowledge of the spectrum, occurrence, and incidence of spontaneous histopathological pulmonary lesions in this non-human primate species is required. In this study, lung tissue of 638 common marmosets from the marmoset colony of the German Primate Center was examined histologically. The analysis revealed a high incidence of predominantly mild and multifocal interstitial pneumonia (32.99 %) of unknown etiology in most cases. Only a few marmosets exhibited lobar pneumonia (1.41 %) and bronchopneumonia (0.94 %), which were mainly caused by bacterial pathogens such as Bordetella bronchiseptica and Klebsiella pneumoniae. Lung immaturity and atelectasis were common histological findings in newborn marmosets. Typical background lesions included anthracosis (8.15 %), hemosiderosis (1.72 %), extramedullary hematopoiesis (11.6 %), mineralization (10.97 %), and inflammatory cell foci (10.34 %). In addition, three cases of pulmonary arteriopathy (0.47 %) and 1 case of foreign-body granuloma (0.16 %) were detected in the marmoset study cohort. The high prevalence of circulatory disturbances (congestion, edema, hemorrhage) and changes in air content (secondary atelectasis, alveolar emphysema) could partly be explained by euthanasia-related artifacts or agonal changes. The present study provides a comprehensive overview of the range and incidence of spontaneous pulmonary histopathology in common marmosets, serving as valuable reference data for the interpretation of lung lesions in toxicological and experimental marmoset studies.
Introduction
Recently, the common marmoset (Callithrix jacchus) has increasingly attracted attention as a translational animal model in the field of respiratory research because of its small size, good availability, and consistent characteristics of primate lung architecture (Greenough et al., 2005; Lever et al., 2008; Seehase et al., 2012; Curths et al., 2013, 2014). As a non-rodent species, the common marmoset is used in preclinical testing of drugs acting on the respiratory system and is a suitable animal model for various human pulmonary diseases, including asthma and chronic obstructive pulmonary disease (COPD; Seehase et al., 2012; Curths et al., 2013, 2014). Histopathological examination of lung tissue from toxicological and experimental studies requires detailed knowledge of the spectrum of spontaneously occurring lung pathology of this laboratory animal species to identify possible drug-induced or disease-associated pulmonary lesions and to distinguish these from species-specific background lesions. Compared to other laboratory animals, the spontaneous pathology of common marmosets is less well defined. Background lesions of the common marmoset in toxicological studies have previously been documented by Kaspareit et al. (2006), who also referred to pulmonary findings. David et al. (2009) also performed a retrospective study on the spontaneous pathology of common marmosets, including the morphological diagnoses of pneumonia, atelectasis, pulmonary extramedullary hematopoiesis, and lymphosarcoma in the lungs. However, a detailed survey of the range and incidence of lung pathology in common marmosets has not yet been published.
In order to provide reference data on spontaneous histopathological pulmonary findings in conventionally kept common marmosets, we performed a retrospective study on necropsy material of 638 common marmosets from the indoor-housed marmoset colony of the German Primate Center in Göttingen.
Materials and methods
In this retrospective study, archived lung tissue of 638 common marmosets (317 males and 321 females) originating from the marmoset colony of the German Primate Center in Göttingen, Germany, was used. Archived material included formalin-fixed or paraffin-embedded lung tissue, or histological sections of the lung, collected between 1997 and 2011. Animals in this study were housed in small family groups in an indoor facility with a room temperature of 25 °C and a relative humidity of 50-60 % on a 12 h light-dark cycle with 30 min "dawn" and "dusk" periods. Care and housing conditions of the animals complied with the regulations of the European Parliament and the Council Directive on the protection of animals used for scientific purposes (2010/63/EU), the National Institutes of Health Guide for the Care and Use of Laboratory Animals (2010), and the applicable German Animal Protection Law. According to necropsy records, animals underwent necropsy after spontaneous death, following euthanasia due to illness with poor prognosis, or after scheduled terminal kill in the context of experimental studies. From the latter group of marmosets, only control animals or animals without treatment-related findings were considered for re-evaluation of lung histology. Photographic documentation of respective macroscopic pulmonary findings and results from bacteriological culture of the lungs were available for some animals. Lung tissue samples were fixed in 4 or 10 % phosphate-buffered formaldehyde, paraffin-embedded, sectioned at 3 µm, and stained with hematoxylin and eosin (HE). If required for diagnostic purposes, additional stains were prepared and analyzed, including periodic acid Schiff (PAS) reaction, Prussian blue stain, von Kossa stain, Masson's trichrome stain, Congo red stain, Grocott's methenamine silver stain, and Giemsa stain. The lungs of all 638 marmosets were re-examined histologically, and findings were recorded on a spreadsheet (Microsoft Office Excel 2010) with searchable columns for morphological diagnosis/histological finding, sex, age, cause of death (if known), chronicity of lesion (in case of inflammation), and severity grade. Animals were assigned to three age groups: newborn (0 to 7 days old, including supposedly timely delivered but stillborn marmosets), juvenile (7 days to 30 months old), and adult (older than 30 months). Histological findings were grouped into inflammatory conditions, neoplasia, changes in air content, circulatory disturbances, pigment deposition, and miscellaneous findings. Total incidences of findings were given as absolute numbers and percentages. In addition, absolute numbers of findings were calculated for males and females as well as for the different age groups.
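Purely as an illustration of the tabulation just described, the following Python sketch shows how such per-finding incidences could be computed from a long-format table of observations; the study itself used an Excel spreadsheet, and the column names used here ("animal_id", "sex", "age_group", "finding") are hypothetical.

```python
# Hypothetical sketch of the incidence tabulation; not the study's actual workflow.
import pandas as pd

def incidence_table(findings: pd.DataFrame, n_animals: int = 638) -> pd.DataFrame:
    """Counts and percentages per finding, plus breakdowns by sex and age group.

    findings: one row per (animal, finding) observation with columns
              'animal_id', 'sex', 'age_group', 'finding'.
    """
    # Count each animal at most once per finding
    unique = findings.drop_duplicates(["animal_id", "finding"])
    grouped = unique.groupby("finding")
    table = grouped["animal_id"].nunique().to_frame("n_animals")
    table["percent"] = (table["n_animals"] / n_animals * 100).round(2)
    table["by_sex"] = grouped["sex"].apply(lambda s: s.value_counts().to_dict())
    table["by_age_group"] = grouped["age_group"].apply(
        lambda s: s.value_counts().to_dict())
    return table.sort_values("n_animals", ascending=False)
```

For example, looking up one row of the returned table would give the absolute count, the percentage of the 638 animals, and the sex and age-group breakdowns for that finding.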
Results
In the present study, 39 of 638 common marmosets (6.11 %) did not show any histological changes of the lungs. All spontaneous pulmonary lesions of the other animals are documented in Table 1.
The most commonly observed inflammatory lung condition was interstitial pneumonia, which was observed in 206 marmosets (32.29 %). The majority of cases of interstitial pneumonia showed a subacute course of disease with a predominance of plasma cells in the inflammatory cell infiltrate. Regarding severity and distributional pattern, mild multifocal or multifocal to coalescing forms predominated (Fig. 1), while severe and diffuse cases were very rare. In two marmosets (0.31 %), interstitial pneumonia was associated with acute to subacute alveolitis. There was no histological evidence of infectious agents in any case of interstitial pneumonia, except for one male juvenile marmoset, which showed characteristic disseminated Grocott-positive blastospores and pseudohyphae in inflamed lung regions, indicating a mycotic etiology consistent with candidiasis. Bacterial culture, where available, was positive in a minority of marmosets affected by interstitial pneumonia. Bacterial isolates included Escherichia coli, Streptococcus sp., Erysipelothrix rhusiopathiae, Klebsiella pneumoniae, and Pseudomonas aeruginosa. Other forms of pneumonia were rare and included lobar pneumonia in nine marmosets (1.41 %), suppurative bronchopneumonia in six marmosets (0.94 %), and bronchointerstitial pneumonia in two marmosets (0.31 %). Lobar pneumonias were further subdivided into purulent or fibrinopurulent forms according to the inflammatory exudate (Fig. 2). In two cases (0.31 %), there was fibrinopurulent pleuropneumonia. The majority of lobar pneumonias (eight of nine cases) and all suppurative bronchopneumonias were moderate to severe and acute to subacute, representing the main cause of disease or death in most cases. Bacterial culture of marmoset lungs affected by purulent bronchopneumonia yielded isolates of Streptococcus sp. and/or Bordetella bronchiseptica in all cases. Bordetella bronchiseptica was also isolated from the lungs of a juvenile female marmoset with fibrinopurulent pleuropneumonia. In three cases of lobar pneumonia, Streptococcus sp., Enterococcus sp., and/or Klebsiella pneumoniae ssp. ozaenae could be isolated, while five cases of lobar pneumonia were negative on bacterial culture. Neoplastic conditions occurred in five marmoset lungs, including lymphoma in four adult animals (0.63 %) and fibrosarcoma in one juvenile animal (0.16 %). Regarding the age of animals with tumors, lymphomas affected three rather young adults (3.5 years old in two cases and 2.75 years old in one) and one older animal (7 years old). The fibrosarcoma occurred in a 1-year-old marmoset. In all cases, pulmonary tumors were regarded as secondary, resulting from metastatic neoplastic disease with presumptive primary tumors in the nodal or extranodal lymphatic system (lymphomas) and in the mammary gland (fibrosarcoma). Immunohistochemical examinations confirmed B cell origin of at least three lymphomas (Fig. 3). One lymphoma was not further characterized.
Changes in air content were commonly observed, either in otherwise healthy lungs or as an additional finding to other histological diagnoses. There was evidence of atelectasis in 215 marmosets (33.7 %), of which the majority represented subtotal secondary (acquired) atelectasis (178 of 215 cases). Primary (fetal) atelectasis occurred in 37 newborn marmosets (5.8 %), of which 36 animals showed total fetal atelectasis that was regularly associated with lung immaturity. One juvenile marmoset revealed partial fetal atelectasis, also accompanied by discrete signs of lung immaturity. Alveolar emphysema of variable severity and extent was present in 154 animals (24.14 %), whereas interstitial emphysema could not be observed in this study.
Circulatory disturbances in marmoset lungs included congestion, edema, hemorrhage, and hyaline membrane formation. Acute pulmonary congestion was a common finding (41.54 %), often regarded as agonal or euthanasia-induced due to the use of barbiturates. The same might be true for pulmonary edema, which was present in 131 marmosets (20.35 %) and occurred both as an additional finding and solitarily. The majority of pulmonary edema was represented by alveolar forms (120 of 131 cases), while involvement of the interstitium was only seen in 11 cases. Extravasation of erythrocytes (hemorrhage) into the interstitium or alveolar space could be seen in 39 marmosets (6.11 %) and, to some extent, was presumably also caused by euthanasia or agony. Hyaline membranes were observed in the lungs of three newborn marmosets (0.47 %) with concurrent atelectasis and evidence of lung immaturity.
Mild to moderate deposition of coal dust in the pulmonary interstitium (anthracosis) was present in 52 mostly adult marmosets (8.15 %). In general, anthracosis was not associated with any tissue reaction (Fig. 4). Hemosiderin-laden macrophages (hemosiderosis) were observed in the lungs of two juvenile and nine adult marmosets (1.72 %), which commonly showed co-existing hemosiderosis in other organs, especially in the liver, spleen, and kidneys. There was no evidence of chronic heart failure in cases of pulmonary hemosiderosis. Within the group of miscellaneous lung findings, extramedullary hematopoiesis, mainly characterized by megakaryocytes within alveolar septa, was observed in 74 animals (11.6 %). This predominantly affected adult marmosets, which regularly showed concurrent foci of extramedullary hematopoiesis in multiple organs (liver, spleen, etc.). The second most common miscellaneous finding was multifocal interstitial or subpleural mineralization, present in 70 juvenile and adult marmosets (10.97 %), followed by disseminated inflammatory cell foci observed in 66 marmosets (10.34 %). These mainly consisted of plasma cells, macrophages, and lymphocytes and were primarily located within alveolar septa or in perivascular and peribronchial/peribronchiolar positions (Fig. 5). Cuboidal alveolar epithelium and thick fibrotic interalveolar tissue of variable extent, indicative of pulmonary immaturity, were present in 62 newborn/stillborn marmosets and in 1 juvenile animal (9.72 %). In many cases, premature lungs also showed total atelectasis, which represented a common cause of death in newborn marmosets. A few animals with immature lungs also revealed accumulations of intra-alveolar amniotic fluid (Fig. 6). Focal or multifocal alveolar histiocytosis, found in 26 marmosets (4.08 %), was generally associated with inflammatory lung lesions. Multifocal interstitial and subpleural fibrosis was detected in juvenile and adult marmosets (3.61 %), occasionally accompanied by focal mineralization (Fig. 7). Three adult marmosets (0.47 %) revealed pulmonary arteriopathy, characterized by hyperplasia and mineralization of the tunica media as well as edema and hypertrophy of the tunica intima. A focal foreign-body granuloma due to an aspirated hair fragment (Fig. 8) was observed in the lung of one adult female marmoset (0.16 %).
Discussion
With an incidence of 32.99 %, the most common inflammatory condition in the lungs was interstitial pneumonia. However, the majority of cases were mild and were not associated with severe clinical disease or death. Except for a few case reports, published data on the incidence of interstitial pneumonias in common marmoset colonies are lacking. David et al. (2009) observed pneumonias in 9 of 597 marmosets but did not provide further classification of this diagnosis. The etiopathogenesis of interstitial pneumonia generally includes aerogenous damage to the alveolar epithelium (e.g., by toxic gases or due to infection with pneumotropic viruses) or hematogenous injury to the alveolar capillary endothelium or basement membrane (e.g., in septicemia, by endotoxins from the alimentary tract, from free radicals released in acute respiratory distress syndrome, from microembolism or disseminated intravascular coagulation, in the context of hypersensitivity reactions, or due to infection with endotheliotropic viruses) (López, 2007). In the common marmosets affected by interstitial pneumonia, testing for respiratory viruses was not performed routinely. Therefore, a viral etiology accounting for at least a part of the interstitial pneumonias cannot be excluded. Evidence of bacterial agents was present in only a few cases, including Streptococcus sp., Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Erysipelothrix rhusiopathiae. These isolates are of variable pathogenicity regarding respiratory infections but are usually not associated with interstitial pneumonia (Simmons and Gibson, 2012). In some marmosets, bacterial isolates were also obtained from other organs (gall bladder, intestine) with evidence of bacteria-induced pathologic lesions, suggesting septicemic distribution of the respective bacteria. Mycotic interstitial pneumonia was observed in one marmoset with systemic candidiasis, which represents the most frequently occurring mycotic disease in immunocompromised non-human primates (Simmons and Gibson, 2012). However, the etiology of the majority of interstitial pneumonias remains unclear. Environmental influences linked to the housing conditions of the marmoset colony (e.g., room temperature, humidity, air exchange rate and air filter specifications, and aerosol formation) may represent predisposing or triggering factors for the development of interstitial lung inflammation in captive marmosets, although evidence for this assumption is lacking. In addition, fine dust pollution has to be considered as an initiating factor for interstitial lung disease, especially with regard to the atmospheric composition in the natural habitat of common marmosets, which surely differs from the artificial conditions of indoor marmoset husbandry. The presence of anthracosis in 52 marmosets (8.15 %) points to at least partial exposure of indoor-kept marmosets to outside air. However, the association between anthracosis and interstitial pneumonia remains questionable, as many animals with intrapulmonary coal dust pigment did not exhibit obvious interstitial inflammation or fibrosis, which is consistent with observations in cynomolgus monkeys (Sato et al., 2012). Finally, influences such as stress or the immunological status of the animal, both of which are difficult to assess, may have an impact on the individual's disposition to develop interstitial lung inflammation (López, 2007).
Lobar pneumonias and bronchopneumonias occurred in a small number of marmosets (1.41 and 0.94 %, respectively), were grossly evident in most cases, and were generally caused by bacterial infection resulting in severe disease and death. Bordetella bronchiseptica was isolated from the lungs in all cases of bronchopneumonia and in 1 case of lobar pneumonia. Outbreaks of bordetellosis with characteristic pneumonic lesions have previously been described in marmoset colonies and were associated with high morbidity and mortality (Baskerville et al., 1983; Chalmers et al., 1983). Pathogenic Klebsiella pneumoniae strains are known to cause purulent/fibrinopurulent pneumonias in New World monkeys (Berendt et al., 1978; Simmons and Gibson, 2012) and could be isolated in most cases of lobar pneumonia in the common marmosets.
Primary pulmonary neoplasia is rare in non-human primates and is limited to a few case reports primarily referring to malignant epithelial tumors observed in different macaque species (Lowenstine and Osborn, 2012). In a previous study on the incidence of pulmonary tumors in the marmoset colony of the German Primate Center, Brack et al. (1996) reported three cases of primary lung tumors (one small-cell carcinoma, one bronchial adenoma, one squamous cell carcinoma) in 409 adult callitrichids that were examined between 1978 and 1994. However, in the present study, for which data were obtained from the time period between 1997 and 2011, pulmonary neoplasms in the marmoset colony (0.79 %) exclusively represented secondary tumors in the context of metastatic disease with primary tumors located in other organ systems (lymphatic system, mammary gland). Primary lung malignancies or benign lung tumors were not observed in the present study.
Acquired atelectasis with patchy distribution was commonly diagnosed in the examined marmoset lungs (27.9 %). However, in most cases, obvious causative factors, e.g. compression or obstruction, could not be observed. Therefore, a large portion of atelectasis probably resulted from artificial lung collapse during necropsy followed by immersion fixation with formaldehyde. Congenital atelectasis mainly affected stillborn marmosets or newborn animals that died within a couple of days after birth (5.8 %). The main causes of congenital atelectasis include obstruction of airways due to aspiration of amniotic fluid and alterations in the quantity and quality of pulmonary surfactant (López, 2007). In the common marmosets, atelectatic lungs regularly showed concurrent lung immaturity, suggesting surfactant deficiency. In a few cases, lung immaturity was associated with hyaline membrane formation, indicating acute respiratory distress syndrome as the likely cause of death. Amniotic fluid aspiration was evident in a couple of newborn marmosets and might have caused atelectasis due to airway obstruction. Alveolar emphysema is a common secondary finding in lungs affected by bronchopneumonia or lobar pneumonia and can be attributed to a valve effect elicited by exudate plugs in the intrapulmonary airways (López, 2007). However, as the incidence of alveolar emphysema clearly exceeds the number of alveolar pneumonias in the marmoset study cohort, alveolar emphysema in most marmosets likely represents an agonal change or a euthanasia artifact. The same probably applies to most marmoset lungs exhibiting circulatory disturbances, including acute congestion, edema, and hemorrhage, which are frequently seen in animals euthanized with barbiturates (López, 2007).
Hemosiderosis is a common finding in many New World monkey species, including common marmosets; it mainly manifests in the liver and is believed to be caused by high-iron diets (Miller et al., 1997; Rensing and Oerke, 2005). As there were no signs of chronic heart failure in cases of pulmonary hemosiderosis but there was evidence of concurrent hemosiderosis in other organs, the majority of pulmonary hemosiderin deposition in the common marmosets of this study is regarded as the result of systemic iron overload due to excessive intestinal iron uptake. However, the presence of siderophages may also, to some extent, represent residua of localized pulmonary hemorrhages of undefined origin (Sato et al., 2012).
The occurrence of extramedullary hematopoiesis in the lungs of adult common marmosets has previously been described by Kaspareit et al. (2006) and is believed to be an incidental finding without clinical relevance (Zühlke and Weinbauer, 2003;Chamanza et al., 2006). Subpleural mineralization macroscopically presented as subpleural plaques, which are distinctly visible at necropsy. Both interstitial and subpleural mineralization was found in 70 marmosets (10.97 %) and was largely regarded to be of metastatic origin as there was co-existing mineralization of other tissues with accentuation on basal lamina structures. Taking into consideration that the diet for young marmosets in the German Primate Center is supplemented with vitamin D to prevent rachitic lesions, soft tissue mineralization in the common marmosets was likely due to hypervitaminosis D, which is a well-known nutritional disease entity in New World monkeys (Hunt, 1969;Kaspareit et al., 2006;McInnes, 2012;Saravanan et al., 2015). Circumscribed areas of interstitial and subpleural fibrosis occurred in 23 juvenile and adult marmosets (3.61 %) and presumably represent residua from earlier tissue damage. Mononuclear inflammatory cell foci, which were present in 66 marmoset lungs (10.34 %), are a regularly observed background finding in common marmosets and may affect different organ systems (Chamanza et al., 2006;Kaspareit et al., 2006). In the lungs, it is important to distinguish between such inflammatory cell foci and interstitial pneumonia, which should be feasible regarding the extent, distribution, and severity of infiltrating inflammatory cells. The histological features of pulmonary arteriopathy observed in 3 adult marmosets (0.47 %) were indicative of pulmonary hypertension, and concurrent hypertrophic cardiomyopathy was present in at least one of the affected animals. However, the exact pathogenetic mechanisms leading to pulmonary arteriopathy remained obscure in the common marmosets. Occasional occurrence of foreign-body granulomas in marmoset lungs has previously been reported by Kaspareit et al. (2006). They are usually caused by aspiration of foreign material (hair, food particles, plant fragments) (Sato et al., 2012). When small and focal as in the present case, they can be regarded as incidental microscopic findings without clinical relevance. However, aspiration of larger or sharp-edged foreign bodies may result in substantial tissue reaction and | 2017-11-07T04:43:06.387Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "db3f6cc774e724f7e9be22d7e76d489a1c131a46",
"oa_license": "CCBY",
"oa_url": "https://www.primate-biol.net/4/17/2017/pb-4-17-2017.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db3f6cc774e724f7e9be22d7e76d489a1c131a46",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
270009927 | pes2o/s2orc | v3-fos-license | Wearing surgical face mask has no significant impact on auscultation assessment
Objective During the COVID-19 pandemic, universal mask-wearing became one of the main public health interventions. Because of this, most physical examinations, including lung auscultation, were done while patients were wearing surgical face masks. The aim of this study was to investigate whether mask wearing has an impact on pulmonologists' assessment during auscultation of the lungs. Methods This was a repeated measures crossover design study. Three pulmonologists were instructed to auscultate patients with previously verified prolonged expiration, wheezing, or crackles while patients were wearing or not wearing masks (physician and patients were separated by an opaque barrier). As a measure of pulmonologists' agreement in the assessment of lung sounds, we used Fleiss kappa (K). Results There was no significant difference in agreement on physician assessment of lung sounds in all three categories (normal lung sound, duration of expiration, and adventitious lung sound) whether the patient was wearing a mask or not, but there were significant differences among pulmonologists when it came to overall agreement on lung sound assessment. Conclusion Clinicians and health professionals are safer from respiratory infections when they are wearing masks, and patients should be encouraged to wear masks, because our research showed no significant difference in agreement on pulmonologists' assessment of auscultated lung sounds whether or not patients wore masks.
INTRODUCTION
Since its outbreak in December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes novel coronavirus disease 2019 (COVID-19), has caused 7,010,568 deaths worldwide (World Health Organization, 2024; https://data.who.int/dashboards/covid19/deaths?n=c, 14 January 2024). Since the beginning of the pandemic, many government health care agencies have recommended community-wide use of face masks as a low-cost and widely available epidemiologic tool for decreasing viral transmission, especially in patients with chronic diseases (European Centre for Disease Prevention and Control, 2021a; European Centre for Disease Prevention and Control, 2021b). Several studies have demonstrated that wearing face masks leads to a reduction in virus transmission (Seto et al., 2003; Jefferson et al., 2011; Smith et al., 2016; Chu et al., 2020), although more randomized controlled studies are needed to confirm these findings. Lung sound auscultation is still one of the key parts of a physical examination and is helpful in identifying respiratory pathology even when chest radiography findings appear normal (such as detecting crackles in patients with interstitial lung disease or detecting wheezes in bronchial obstruction). Other advantages of auscultation lie in its widespread availability and affordability (Fajardo & Davis, 2022). According to the European Respiratory Society (ERS) Task Force on Respiratory Sounds, respiratory sounds should be divided into lung sounds and other sounds (e.g., pleural rub, grunting, and snoring). Lung sounds should then be divided into normal (basic) sounds and adventitious sounds (Pasterkamp et al., 2016).
Since the beginning of the COVID-19 pandemic, most physical examinations have been done while patients were wearing surgical face masks. The aim of our study was therefore to investigate whether the epidemiological recommendation to wear surgical face masks has an impact on pulmonologists' assessment during auscultation, as well as on the reliability of performance under this condition.
MATERIALS & METHODS
This was a repeated measures crossover design study. We included 50 patients between November and December 2022 who were being treated at the Division of Pulmonology, Department of Internal Medicine, Sestre Milosrdnice University Hospital Center. All patients were older than 18 years, and both genders were represented. Only patients with previously verified pathological lung auscultation findings (prolonged expiration, wheezing, or crackles) were included. All patients signed informed consent before being included in the study. The study was approved by the University Hospital Center Sestre Milosrdnice Ethical Committee (approval 251-29-11-21-03). Three pulmonologists were instructed to auscultate patients in the outpatient clinic (part of the aforementioned division); the physicians did not sign informed consent. Inclusion criteria were stable chronic lung disease with lung sound phenomena present and the ability to sit upright. Exclusion criteria were acute infections, worsening acute heart disease, body mass index less than 18 kg/m2 or more than 40 kg/m2, and the presence of pleural effusion. During the examination, physicians were unable to see whether the patient was wearing a mask (patients' heads were behind an opaque barrier). At the beginning of each examination, a fourth physician randomly instructed patients to put on or take off a three-layer surgical ear loop mask (ear loops were placed behind both ears, the mask placed in front of the nose and mouth, and the aluminum strip nose wire pressed over the nose bridge), and the pulmonologist would auscultate the lungs once and then repeat auscultation after the patient changed their mask status. Subsequently, the fourth physician recorded the pulmonologists' findings.
Statistical analysis
We evaluated the data from both groups (mask and no-mask) and performed statistical analysis. As a measure of the pulmonologists' agreement in the assessment of lung sounds, we used Fleiss kappa (K).
K assesses the agreement between raters in cases where categorical measures of ordinal or nominal measurement scales were used. Below, we present the results of the analysis of agreement between physicians in the assessment of various respiratory symptoms in two situations: when patients wore a mask and when patients did not wear a mask. Along with the value of K, we also show its 95% confidence interval. If the confidence intervals of the K values in the two situations did not overlap, this means that there was a statistically significant difference in agreement between the two situations. Otherwise, there was no difference.
In addition, for each symptom, the percentage of cases in which all three doctors completely agreed is shown.Software used for statistical analysis was IBM SPSS Statistics 29 (Chicago, IL, USA).
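For readers who wish to reproduce this type of agreement analysis outside SPSS, a minimal sketch in Python is given below. It computes Fleiss' kappa for one symptom in one condition and attaches a percentile-bootstrap 95% confidence interval; the ratings array and the number of resamples are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch only: Fleiss' kappa with a bootstrap CI for one symptom
# in one condition (mask or no mask). The ratings array is a placeholder.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# ratings[i, j] = category assigned by rater j to patient i (e.g., 0/1/2 for breath sounds)
ratings = rng.integers(0, 3, size=(50, 3))       # hypothetical 50 patients x 3 pulmonologists

def kappa(r):
    counts, _ = aggregate_raters(r)              # patients x categories count table
    return fleiss_kappa(counts, method="fleiss")

point_estimate = kappa(ratings)

# Percentile bootstrap over patients for an approximate 95% CI
boot = []
for _ in range(2000):
    idx = rng.integers(0, ratings.shape[0], ratings.shape[0])
    boot.append(kappa(ratings[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Fleiss' kappa = {point_estimate:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```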
RESULTS

Breath sounds
Three pulmonologists evaluated breath sounds in 50 patients in two situations: while the patients were wearing and not wearing masks. They evaluated breath sounds using the following measurement scale: 0 = normal breath sound, 1 = quieter breath sound, 2 = harsh breath sound. The results of the analysis of physician agreement are presented in Table 1.

Physician agreement in the assessment of breath sounds did not differ depending on whether patients wore a mask or not (Table 1). In both situations, physicians completely agreed in 60% of cases. None of the cases resulted in unanimous agreement among the three physicians on the presence of harsh breath sounds.
Expiration
All three physicians assessed the expiration of 27 patients in two situations (patients with and without a mask). Expiration was assessed using the following scale: 0 = regular, 1 = prolonged. The results of the analysis of physician agreement are shown in Table 1.

Physician agreement on the duration of expiration did not differ depending on whether patients wore a mask or not. However, in both situations the agreement was very low, and physicians were in complete agreement in only 26% of cases.
Abnormal breath sounds
All three physicians assessed the presence of abnormal breath sounds in 28 patients in two situations (patients with and without a mask). They evaluated two sound phenomena: wheezing and crackles. Both phenomena were evaluated using the measuring scale: 0 = sound phenomenon not present, 1 = sound phenomenon present. The results of the analysis of physician agreement are shown in Table 1.

Agreement of physicians in the assessment of abnormal breath sounds did not differ based on whether patients wore a mask or not. When assessing the presence of wheezing, agreement was mild to moderate, and physicians agreed in 64% of cases when patients wore a mask and in 68% of cases when patients did not wear a mask. When assessing crackles, agreement was low, and physicians agreed in 39% of cases in both situations.
DISCUSSION
To our knowledge, no similar study has been published so far. The results of our study showed that there was no significant difference in pulmonologists' agreement in the assessment of breath sounds in any of the three categories (normal breath sound, duration of expiration, and abnormal breath sound) whether the patient was wearing a mask or not, but there were significant differences among pulmonologists when it came to overall agreement on the assessed breath sounds. The differences in auscultation were determined by each pulmonologist, because auscultation is a subjective method and interpretations vary widely between physicians (Xavier et al., 2014). Auscultation is an essential method in everyday practice that strongly influences subsequent diagnostic and therapeutic workflows. It is performed by various specialists and is characterized by its cost effectiveness, availability, simplicity, and transferability. The disadvantages of the method are low sensitivity (37%) and only acceptable specificity (89%), at least in acute respiratory pathology, owing to high subjectivity and differences in the experience of physicians (Arts et al., 2020). Indeed, physicians often differ in their assessments. In published studies, the pulmonary auscultatory skills of pulmonologists were found to be superior to those of medical students and of interns in internal medicine and general practice (Mangione & Nieman, 1997); therefore, only pulmonologists were included in this study. Given the highly subjective nature of this interpretation, inter-listener variability restricts interoperability, with experience varying widely and differing across specialties (Sarkar et al., 2015; Hafke-Dys et al., 2019). Other sources of heterogeneity may originate from differences in the intrinsic properties of the stethoscope and from extrinsic patient-related factors such as obesity, ambient noise, and patient compliance (e.g., a crying child).

Our study had some limitations. A higher number of patients would lead to more robust conclusions, and the inclusion of physicians from different specialties could confirm our results. It would also be interesting to see whether the type of mask has an impact on the results. Further, more objective results could be obtained by using digital stethoscopes with recording capabilities, allowing comparative analysis of the captured breath sound audio.
CONCLUSION
The results suggest that wearing a surgical face mask during lung auscultation had no impact on agreement in pulmonologists' assessment; mask wearing is therefore an appropriate epidemiological measure in healthcare systems during a pandemic or in any environment with a high risk of airborne infection. Wearing masks can enhance the safety of clinicians and health professionals against respiratory infections. Patients should be encouraged to wear masks, because our study showed no significant differences in the physician assessment of auscultated breath sounds whether the patients wore a mask or not. Additionally, patients (particularly those who are susceptible) can lower their risk of infection by wearing masks while being certain that a surgical mask will not "mask" their breath sounds. This was the first study in which the influence of a surgical face mask on lung sound examination was assessed, and the results should reassure medical professionals in encouraging patients to wear a surgical face mask, knowing it will not change auscultation findings.
• Ivan Ivanovski conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the article, and approved the final draft. | 2024-05-26T15:20:02.566Z | 2024-05-24T00:00:00.000 | {
"year": 2024,
"sha1": "8fb46fda212747c32b0adcae7061b59ee0245f67",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a632238bc67e7e79320b5ac5cfbf003bdd82cabd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119686166 | pes2o/s2orc | v3-fos-license | The topology of the set of nonsoliton Lie algebras in the moduli space of nilpotent Lie algebras
A Lie algebra is called nonsoliton if it does not admit a soliton inner product. We demonstrate that the subset of nonsoliton Lie algebras in the moduli space of indecomposable n-dimensional N-graded nilpotent Lie algebras is discrete if and only if n ≤ 7.
Introduction
A Lie algebra may be endowed with infinitely many different inner products. Among these, soliton inner products are considered preferred inner products. An inner product Q on a nilpotent Lie algebra g is called soliton if the Ricci endomorphism Ric of g defined by Q differs from a derivation of g by a scalar multiple of the identity map on g.
(See Section 2.2 for a precise definition of the map Ric). We will call a Lie algebra soliton if it admits a soliton inner product and we will call it nonsoliton if it does not admit a soliton inner product.
Soliton inner products on nilpotent Lie algebras are called nilsoliton. The study of nilsoliton inner products for nilpotent Lie algebras originated in the analysis of Einstein solvmanifolds ( [Lau01]). Indeed, deep results of Heber and Lauret allow one to reduce the study of Einstein inner products on solvable Lie algebras to the study of soliton inner products on nilradicals ( [Heb98], [Lau01], [Lau10]). Of independent interest, a soliton inner product on a nilpotent Lie algebra defines a metric on the corresponding simply connected nilpotent Lie group that is soliton in the sense that the Ricci flow moves the metric by diffeomorphisms and rescaling ( [Lau01]). And, outside of the category of homogeneous spaces, soliton inner products are of use in a purely algebraic context in that they supply extra structure for algebraic computations and may give canonical presentations of Lie algebras (See Example 3.11 of [Pay11]).
When they exist, nilsoliton inner products are unique up to scaling ( [Lau01]). If a nilpotent Lie algebra admits a soliton inner product, then it is N-graded. As not all nilpotent Lie algebras are N-graded, not all nilpotent Lie algebras admit nilsoliton inner products. One can find continuous families of nonsoliton nilpotent Lie algebras by finding continuous families of characteristically nilpotent Lie algebras. (A Lie algebra is characteristically nilpotent if its derivation algebra is nilpotent.) Such families exist in dimensions seven and higher (see [Kha02]). As the direct sum of nilpotent Lie algebras is soliton if and only if each summand is soliton ( [Jab11], [Nik11]), we will restrict our attention to indecomposable nilpotent Lie algebras. We will study the subset of nonsoliton Lie algebras in the moduli space of all indecomposable N-graded nilpotent Lie algebras of fixed dimension. In particular, we are interested in determining when the set of nonsoliton Lie algebras is discrete in this moduli space.
In dimensions 6 and lower, the situation is well-understood: the moduli space of nilpotent Lie algebras is discrete ( [dG07]), and all nilpotent Lie algebras of dimension 6 and lower admit soliton inner products ( [Lau02], [Wil03]). In dimension 7, the moduli space of real nilpotent Lie algebras consists of a finite number of discrete points and a finite number of continuous families of nonisomorphic nilpotent Lie algebras ( [See93], [Gon98]). Nikolayevsky proved that if two real nilpotent Lie algebras have the same complexification, then either both are soliton or both are nonsoliton ( [Nik11]). This result was also proved independently by M. Jablonski using different methods ( [Jab]). Using this result about complex forms, along with Carles's classification of complex nilpotent Lie algebras of dimension 7 ( [Car96], [Mag07]), Culma determined precisely which 7-dimensional complex nilpotent Lie algebras have real forms that admit nilsoliton inner products ( [Cul11a], [Cul11b]). Culma found that among the continuous families of nonsoliton nilpotent Lie algebras in dimension 7, none of them are N-graded. It follows that the subset of nonsoliton Lie algebras in the moduli space of indecomposable 7-dimensional N-graded nilpotent Lie algebras is discrete.
Arroyo determined precisely which N-graded filiform nilpotent Lie algebras of dimension 8 admit soliton inner products ([Arr11]). She found that although there are continuous families of solitons in that moduli space, there are precisely four isolated nonsoliton Lie algebras.
Eberlein and Nikolayevsky showed independently that, except in two cases, soliton Lie algebras are dense in the moduli space of two-step nilpotent Lie algebras ([Ebe08], [Nik11]). (Note that all two-step nilpotent Lie algebras are N-graded by a derivation that equals the identity on a complement to the center and that is twice the identity on the center.) Jablonski showed that the soliton Lie algebras are dense in the remaining two cases, when the nilpotent Lie algebra is of type (2k + 1, 2) or (2k + 1, (2k+1 choose 2) − 2) ([Jab08]). Theorem 1.1 ([Ebe08], [Nik11], [Jab08]). The set of soliton Lie algebras is dense in the moduli space of two-step nilpotent Lie algebras.
Given the classification results we have described, and the theorem just stated, one might wonder if the set of nonsoliton Lie algebras is always discrete in the moduli space of n-dimensional two-step N-graded nilpotent Lie algebras. The answer is no. The first continuous families of nonsoliton N-graded nilpotent Lie algebras were found by C. Will ([Wil10]): the moduli space of indecomposable 9-dimensional two-step nilpotent Lie algebras contains two one-parameter families of nonsoliton nilpotent Lie algebras.
Jablonski defined a general method for constructing families of two-step nilpotent Lie algebras called concatenation. He used the concatenation construction to define continuous families of irreducible nonsoliton two-step nilpotent Lie algebras in infinitely many dimensions ([Jab11]): for n ≥ 23, the moduli space of indecomposable n-dimensional two-step nilpotent Lie algebras contains a continuous family of nonsoliton Lie algebras.
There are two key issues involved in the results of Jablonski and Will. First of all, nonsoliton nilpotent Lie algebras are quite rare, and two-step nilpotent Lie algebras are not classified in dimensions 10 and higher. Therefore, finding examples of nonsoliton nilpotent Lie algebras, or curves of them, requires a thorough understanding of the structure of nilpotent Lie algebras and how that structure relates to the existence of a soliton inner product. Second, in contrast to the semisimple case, there are few fine algebraic invariants that allow one to distinguish nonisomorphic nilpotent Lie algebras, so it is a significant task to show that the curves of nonsoliton nilpotent Lie algebras are mutually nonisomorphic. Will used the Pfaffian defined by Scheuneman in [Sch67] to distinguish the Lie algebras in her families, while Jablonski used geometric invariant theory.
Our main result is that the moduli space of indecomposable N-graded n-dimensional nilpotent Lie algebras contains a nondiscrete set of nonsoliton Lie algebras if n ≥ 8: Theorem 1.4. The moduli space of indecomposable n-dimensional N-graded nilpotent Lie algebras contains a one-parameter family of nonsolitons if n ≥ 8.
As a corollary we can say exactly when the nonsoliton Lie algebras are isolated in the moduli space: Corollary 1.5. The set of nonsoliton Lie algebras is discrete in the moduli space of indecomposable n-dimensional N-graded nilpotent Lie algebras if and only if n ≤ 7.
The families of nilpotent Lie algebras that we construct to prove the theorem are the first examples of continuous families of three-step nonsoliton nilpotent Lie algebras. It would be interesting to refine the result in Theorem 1.4 by specializing it to the two-step case, determining in which dimensions nonsolitons are discrete in the moduli space of two-step nilpotent Lie algebras.
This manuscript is organized as follows. In Section 2, we review necessary background material related to nilpotent Lie algebras, inner products on Lie algebras, soliton inner products, and Nikolayevsky (pre-Einstein) derivations of nilpotent Lie algebras. In Section 3, we present two continuous families of indecomposable N-graded nilpotent Lie algebras, one in dimension eight and one in dimension nine, and we prove that the Lie algebras in the families are nonsoliton. We also describe the derivation algebras of the Lie algebras in the families. In Section 4, we use the 8-and 9-dimensional examples from Section 3 to construct continuous families of indecomposable N-graded nilpotent Lie algebras in dimensions n ≥ 10. We find Nikolayevsky derivations for these Lie algebras, and we prove that all of the Lie algebras in the families are nonsoliton. Last, we prove that for any dimension n ≥ 8, the Lie algebras in the family are all mutually nonisomorphic. In Section 5, we combine our results from Sections 3 and 4 to prove the main result.
Preliminaries
2.1. Lie algebras. The descending central series of a Lie algebra g is defined by g^(1) = g and g^(j+1) = [g, g^(j)] for j ≥ 1. The Lie algebra g is nilpotent if and only if there is an integer r so that g^(r) is trivial. If r is the smallest integer so that g^(r+1) is trivial, then g is said to be r-step nilpotent. An r-step nilpotent Lie algebra is said to be of type (n_1, n_2, ..., n_r) if dim(g^(j)/g^(j+1)) = n_j for j = 1, ..., r.
Let Der(g) be the derivation algebra of g. The algebra Der(g) has Levi decomposition s ⊕ (t_s ⊕ t_c) ⊕ n, where s is the semisimple Levi factor and the solvable radical rad(Der(g)) = t ⊕ n is the direct sum of its nilradical n and a torus t. The torus further decomposes as the sum t = t_s ⊕ t_c of an R-split torus t_s and a compact torus t_c. The dimension of t is called the rank of g, and the dimension of the R-split torus t_s is called the real rank of g.
A Lie algebra is indecomposable if it cannot be written as the direct sum of two nontrivial ideals.
2.2. Metric Lie algebras and soliton inner products. A metric Lie algebra (g, Q) is a Lie algebra g endowed with an inner product Q. Associated to each metric Lie algebra is a unique homogeneous space (G, g), where G is the connected Lie group whose Lie algebra is g, and g is the left-invariant metric on G such that the restriction of g to the tangent space T_eG ≅ g of G at the identity coincides with Q. The Ricci endomorphism Ric for the Riemannian manifold (G, g), when restricted to T_eG, is an endomorphism of T_eG ≅ g. We call Ric_e the Ricci endomorphism for the metric Lie algebra (g, Q), and we abuse notation to let Ric denote Ric_e.
Let (n, Q) be a metric nilpotent Lie algebra with associated homogeneous space (N, g). The Ricci form ric_e for (N, g) at the identity is a bilinear form on T_eN ≅ n, expressed in terms of an orthonormal basis {x_i}_{i=1}^n for n. The Ricci endomorphism Ric_e = Ric at the identity is the endomorphism of T_eN ≅ n given by the condition that Q(Ric(x), y) = ric(x, y), for all x, y ∈ n.
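In the notation used, for example, in [Lau01], this bilinear form can be written (in LaTeX notation) as

\operatorname{ric}(x,y) = -\tfrac{1}{2}\sum_{i} Q\big([x,x_i],[y,x_i]\big) + \tfrac{1}{4}\sum_{i,j} Q\big([x_i,x_j],x\big)\,Q\big([x_i,x_j],y\big), \qquad x, y \in \mathfrak{n},

where {x_i} is the orthonormal basis fixed above.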
A metric Lie algebra (g, Q) is called soliton if its Ricci endomorphism Ric ∈ End(g) differs from a scalar multiple of the identity map by a derivation; that is, there exists β ∈ R, called the soliton constant, so that D̃ = Ric − β Id is a derivation. In the case that the Lie algebra is nilpotent, we call the inner product a nilsoliton inner product, we call β the nilsoliton constant, and we call the derivation D̃ the nilsoliton derivation.
Let g be a nonabelian Lie algebra with basis B. Let Λ index the set of nonzero structure constants α^k_{ij} relative to B, modulo skew-symmetry, and to each (i, j, k) ∈ Λ associate the root vector y^k_{ij}, expressed with respect to the standard basis. Let y_1, y_2, ..., y_m be an enumeration of the root vectors y^k_{ij}, (i, j, k) ∈ Λ, using a fixed ordering of N^3. The Gram matrix for g with respect to B is the matrix U = (u_{ij}) whose entries are the inner products of the root vectors: u_{ij} = y_i · y_j. We will use the following theorem of Nikolayevsky (Theorem 2.1) to show that the nilpotent Lie algebras in our families do not admit soliton inner products. Note that we have replaced Nikolayevsky's hypothesis that the basis is a "nice basis" with an equivalent hypothesis on the Gram matrix U (which depends only on the index set Λ).
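To make the Gram-matrix criterion concrete, the sketch below assembles U from an index set Λ and checks, via a small linear program, whether Uv = [1] has a solution with all entries positive, which is how Theorem 2.1 is applied in the proofs that follow. The convention y^k_ij = e_i + e_j − e_k in R^n is an assumption (it reproduces the diagonal entries equal to 3 of the Gram matrices used later), and the index set in the example is a toy case, not one of the algebras of this paper.

```python
# Sketch of the nilsoliton test for a nice basis: build the Gram matrix U of
# the root vectors and look for a strictly positive solution of U v = [1].
# Convention assumed: y^k_ij = e_i + e_j - e_k in R^n.
import numpy as np
from scipy.optimize import linprog

def gram_matrix(triples, n):
    Y = np.zeros((len(triples), n))
    for a, (i, j, k) in enumerate(triples):      # 1-based indices (i, j, k)
        Y[a, i - 1] += 1.0
        Y[a, j - 1] += 1.0
        Y[a, k - 1] -= 1.0
    return Y @ Y.T

def has_positive_solution(U, eps=1e-6):
    m = U.shape[0]
    # Feasibility linear program: U v = 1 with v >= eps and a zero objective
    res = linprog(c=np.zeros(m), A_eq=U, b_eq=np.ones(m),
                  bounds=[(eps, None)] * m, method="highs")
    return res.success

# Toy example: the Heisenberg algebra [x1, x2] = x3 in dimension 3
triples = [(1, 2, 3)]
U = gram_matrix(triples, n=3)
print(U)                             # [[3.]]
print(has_positive_solution(U))      # True: the Heisenberg algebra is soliton
```

For the algebras of Sections 3 and 4 one would replace `triples` by the index set Λ of the corresponding bracket relations.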
2.3. Nikolayevsky derivations.
A derivation D_N of a Lie algebra g is called a Nikolayevsky derivation if it is semisimple with real eigenvalues and trace(D_N ∘ F) = trace(F) for all F ∈ Der(g) (Equation (1)). Nikolayevsky defined such derivations and showed that they are unique up to automorphism. He called them pre-Einstein derivations because when the underlying Lie algebra is nilpotent, they are natural generalizations of the nilsoliton derivation used to define an Einstein solvable extension. Because they are purely algebraic objects of broader use, we prefer to call such a derivation a Nikolayevsky derivation. He also showed that if g admits a soliton inner product, then the nilsoliton derivation is a scalar multiple of the Nikolayevsky derivation ([Nik11]). It follows from the proof of Theorem 1.1(a) of [Nik11] that D_N is a Nikolayevsky derivation if and only if the condition in Equation (1) holds for all F in an R-split torus t_s containing D_N. Thus, it is elementary to find the Nikolayevsky derivations of a Lie algebra with real rank one.

Proposition 2.2 ([Nik11]). Let g be a nilpotent Lie algebra with real rank one. Let D be a nontrivial semisimple derivation with real eigenvalues. Then D_N = (trace(D)/trace(D^2)) D is the unique Nikolayevsky derivation for g.
2.4. Moduli spaces of soliton and nonsoliton nilpotent Lie algebras. We need to describe the structure of the moduli space of nilpotent Lie algebras of dimension n. Let R^n be a real vector space of dimension n with basis B = {x_i}_{i=1}^n. Suppose that R^n is endowed with a Lie bracket that defines a nilpotent Lie algebra structure on R^n. The Lie bracket is equivalent to a skew-symmetric vector-valued bilinear map μ in the vector space V = Λ^2(R^n)* ⊗ R^n. The Jacobi identity and the nilpotency condition are polynomial constraints on the coefficients of μ in V with respect to the basis {x_i* ∧ x_j* ⊗ x_k : 1 ≤ i < j ≤ n, 1 ≤ k ≤ n}, so we may identify each μ with an element (μ, B) of an affine subvariety X of V. We let n_μ denote the corresponding nilpotent Lie algebra. The general linear group GL_n(R) acts on X by change of basis: for μ ∈ X, the element g·μ of X is defined by (g·μ)(x, y) = g μ(g^{-1}x, g^{-1}y), for x, y ∈ R^n. Two elements μ and ν of X define isomorphic Lie algebras n_μ and n_ν if and only if μ and ν are in the same GL_n(R) orbit. The quotient N_n of X by this action is the moduli space of n-dimensional nilpotent Lie algebras. The equivalence class of μ in N_n is again denoted by μ. We endow N_n with the quotient topology.
The properties of whether a Lie algebra n_μ for μ ∈ X is N-graded, and whether it is indecomposable, are both invariant under the GL_n(R) action. Hence we may define the moduli space N_n of N-graded, indecomposable nilpotent Lie algebras to be the set of elements μ of N_n such that n_μ is N-graded and indecomposable. We use the subspace topology for N_n. The property of whether or not a Lie algebra n_μ admits a soliton inner product is also invariant under the GL_n(R) action, so we may define the set Nonsol(n) ⊆ N_n to be the set of all μ in N_n such that n_μ does not admit a soliton inner product.
3. Continuous families of nonsoliton nilpotent Lie algebras in dimensions 8 and 9

3.1. A curve of nonsoliton nilpotent Lie algebras in dimension 8.
Definition 3.1. Let s ∈ R, and let B = {x_i}_{i=1}^8 be a fixed basis for R^8. Define n_s to be the nilpotent Lie algebra with underlying vector space R^8 whose Lie algebra structure is determined by bracket relations on the basis elements. The Jacobi identity may be checked by confirming that there are no distinct choices of i, j and k so that [x_i, [x_j, x_k]] is nonzero. (That there are no such choices may also be deduced from Theorem 7 of [Pay10].)

It is not hard to verify that for all s, the Lie algebra n_s is three-step nilpotent of type (3, 3, 2) and is indecomposable. For all s, we may write the vector space n_s as the direct sum n_s = V_1 ⊕ V_2 ⊕ V_3. A derivation D : n_s → n_s is defined with eigenspaces V_1, V_2, V_3, and these eigenspaces define an N-grading of n_s. Now we describe the derivation algebra of a typical nilpotent Lie algebra in the family defined in Definition 3.1.
Proposition 3.2. Let n_s be the nilpotent Lie algebra as defined in Definition 3.1, for any fixed s in R. Then the derivation algebra Der(n_s) of n_s is a 16-dimensional solvable algebra with real rank one. The derivation algebra decomposes as Der(n_s) = RD + m, where D is the derivation D defined above and m is the nilradical. The derivation D_N = (5/11)D is a Nikolayevsky derivation of n_s.

Proof. The derivation algebra of n_s is a subspace of End(n_s). The subspace may be described by a system of 8^3 linear equations in 8^2 unknowns, where the coefficients of the linear equations depend on the structure constants for n_s. (See Section 1.9 of [dG00].) The structure constants for n_s depend on the parameter s. Using Matlab to solve this system symbolically, we found that for any s, the solution space to the linear system is 16-dimensional and is spanned by the derivation D and 15 nilpotent derivations that span the nilpotent subalgebra m. Hence any semisimple derivation of n_s is a scalar multiple of D, and the real rank of n_s is one. By Proposition 2.2, D_N = (5/11)D is a Nikolayevsky derivation of n_s.

Now we show that none of the nilpotent Lie algebras in the family defined in Definition 3.1 are soliton.
Theorem 3.3. Suppose that n s is a nilpotent Lie algebra as defined in Definition 3.1. Then n s does not admit a soliton inner product.
For all such solutions v = (v_i), we have v_7 = 0. By Theorem 2.1, n_s does not admit a soliton inner product.
We will show in Theorem 4.5 that no two nilpotent Lie algebras in the family defined in Definition 3.1 are isomorphic.
3.2. A curve of nonsoliton nilpotent Lie algebras in dimension 9. Now we define a one-parameter family of 9-dimensional nilpotent Lie algebras, similar to the family of 8-dimensional nilpotent Lie algebras defined in the previous section.
Definition 3.4. Let B = {x_i}_{i=1}^9 be a basis for R^9. Let s be a real number. Let n_s be the Lie algebra with underlying vector space R^9 whose Lie algebra structure is determined by Lie bracket relations on the basis elements. The Jacobi identity may be confirmed by direct computation, noting that the only time [[x_i, x_j], x_k] is nontrivial for distinct i, j, k is when {i, j, k} = {1, 2, 3}. (The latter fact follows from Theorem 7 of [Pay10].) Each Lie algebra n_s is three-step nilpotent of type (3, 3, 3). We may write the vector space R^9 ≅ n_s as the direct sum R^9 = V_1 ⊕ V_2 ⊕ V_3, where V_1, V_2 and V_3 are the three steps for any n_s. A derivation D of n_s is defined with eigenspaces V_1, V_2, V_3, and these eigenspaces define an N-grading of n_s.
Proposition 3.5. Let s ∈ R, and let n_s be as defined in Definition 3.4. The derivation algebra Der(n_s) = RD + m of n_s is 19-dimensional and solvable, with 18-dimensional nilradical m. The real rank of n_s is one, and D_N = (9/14)D is a Nikolayevsky derivation of n_s.
The proof of the proposition is analogous to that of Proposition 3.2, so we do not include it. Now we show that none of the Lie algebras defined in Definition 3.4 are soliton.
Theorem 3.6. Let s ∈ R, and let n s be a nilpotent Lie algebra as defined in Definition 3.4. Then n s does not admit a soliton inner product.
Proof. The proof is the same as that of Theorem 3.3, except that the Gram matrix is the 10 × 10 matrix U given in Equation (4).

If necessary, we may let x_{m+i} = y_i for i = 1, ..., 2k to define an ordering of the basis in accord with the subscripts on the x_i's. When k = 0, we let n^{8+2k}_s = n_s.
Because D is a derivation, for m = 8 or 9 and k ≥ 0, the eigenspaces V_i define an N-grading of the Lie algebra n^{m+2k}_s. We will need the following lemma later.
Lemma 4.2. Let g be a Lie algebra, and let i and j be ideals in g such that g is the sum (not necessarily a direct sum) of i and j, and [i, j] = 0.
Let π : g → g denote a projection map from g to i; i.e., an endomorphism such that π|_i = Id|_i and π(g) = i. Let D be a derivation of g. Then the restriction of π ∘ D to i is a derivation of i.

Conversely, if D_1 : i → i is a derivation of i, D_2 : j → j is a derivation of j, and D_1(z) = D_2(z) for all z ∈ i ∩ j, then the map D : g → g determined by D|_i = D_1 and D|_j = D_2 is a well-defined derivation of g. The solvable radical of Der(g) contains the solvable radical of Der(i).
Proof. There exists a basis of g containing a basis {x_i}_{i=1}^m of i and vectors y_1, ..., y_d with j ⊆ span{y_j}_{j=1}^d + z, and the projection π : g → i from g to i is given by π(x_i) = x_i, for i = 1, ..., m, and π(y_j) = 0 for j = 1, ..., d.

Because D is a derivation, D[x_i, x_j] = [Dx_i, x_j] + [x_i, Dx_j] for all i, j = 1, ..., m. As [y_l, x_j] = 0 for all l = 1, ..., d and j = 1, ..., m, we obtain the identity in Equation (7) for all i, j = 1, ..., m. The vectors on the right side of Equation (7) are in i, hence the vector on the left side is also in i, so we obtain Equation (8) for all vectors x_i and x_j in the basis {x_i}_{i=1}^m of i. Hence, the restriction of π ∘ D to i is a derivation of i.
To prove the converse, note that Equation (8) holds for all x_i ∈ i and x_j ∈ j due to the fact that [i, j] = 0, while it holds if either both x_i and x_j are in i or both x_i and x_j are in j because D_1 and D_2 are derivations of i and j respectively.

Define the derivation D_0 of m as indicated, where c ∈ R and F_1 : h → h is a derivation of the ideal h.
The derivation F̃ fixes the ideals m and h, and is block diagonal when represented with respect to the basis. The restriction of F̃ to h is the derivation F_1 of h. The derivation F − F̃ maps m = span{x_1, ..., x_m} into span{y_j}_{j=1}^{2k}, and it maps span{y_j}_{j=1}^{2k} into m = span{x_1, ..., x_m}; when represented with respect to the basis B in block form, it has 0 blocks along the diagonal.

Using the definition of D_0 we see that trace(D_0 ∘ (F − F̃)) = 0; hence trace(D_0 ∘ F) = trace(D_0 ∘ F̃). In addition, trace F̃ = trace F. Thus, in order to show that trace(D_0 ∘ F) = trace(F) for all derivations F, it suffices to show that trace(D_0 ∘ F̃) = trace(F̃).
Now we prove a technical lemma about isomorphisms of the algebras which we have defined: any isomorphism between two of them maps certain distinguished subspaces, such as Rx_5 ⊕ V_3 ⊕ V_6, to themselves.
Proof. The two subspaces Rx_1 ⊕ V_3 ⊕ V_4 ⊕ V_6 and Rx_4 ⊕ Rx_5 ⊕ V_3 ⊕ V_6 are fixed by isomorphisms because each may be uniquely characterized by algebraic properties that are preserved under isomorphisms. We claim that when m = 8, the subset S is the set of all elements x such that ad_x has rank 3, where ad_x is the adjoint map for either n^{m+2k}_s or n^{m+2k}_t. For example, when k = 1, if x = Σ_{i=1}^{8} b_i x_i + c_1 y_1 + c_2 y_2, then with respect to the usual basis B, the adjoint map ad_x for n^{10}_s is represented by a matrix in block form. The rank of the relevant submatrix is two if and only if (b_1, b_2, b_3) ≠ (0, 0, 0), and is zero otherwise.

Therefore if the rank of ad_x is 3, then (b_1, b_2, b_3) ≠ (0, 0, 0). But if b_2 ≠ 0 or b_3 ≠ 0, then the minor has rank two. The block form of the matrix then forces the rank of the larger matrix to be at least four, a contradiction. Hence b_2 = b_3 = 0, and x ∈ S. Conversely, if b_1 ≠ 0 and b_2 = b_3 = 0, then rows 5, 6 and 8 form a basis for the row space of the matrix. Since isomorphisms preserve the rank of ad_x, S is invariant under isomorphisms. By continuity, an isomorphism also fixes the closure of S. The subspace Rx_5 ⊕ V_4 ⊕ V_6 can be characterized algebraically as the closure of the set of nonzero elements x in the commutator ideal V_4 ⊕ V_6 such that the rank of ad_x is 1. This is seen by letting b_1, b_2, b_3, c_1 and c_2 equal zero in the matrix representing ad_x.
Thus we have shown that Rx_1 ⊕ V_3 ⊕ V_4 ⊕ V_6 and Rx_5 ⊕ V_3 ⊕ V_6 are preserved by isomorphisms, establishing Part (1) of the lemma in dimension 10. The same arguments apply in higher even dimensions 8 + 2k > 10. Now suppose that m = 9. We assert that S is the set of all elements x such that ad_x has rank 3 and x is not in the centralizer of the commutator ideal. The commutator ideal is V_4 ⊕ V_6, and its centralizer is denoted z(V_4 ⊕ V_6). If x = Σ_{i=1}^{9} b_i x_i + c_1 y_1 + c_2 y_2, then the adjoint map ad_x for n^{11}_s is represented by a matrix in block form. The rank of the relevant submatrix is two if and only if (b_1, b_2, b_3) ≠ (0, 0, 0), and is zero otherwise.
Assume that x is not in z(V_4 ⊕ V_6), and that the rank of ad_x is three. Since x is not in the centralizer of the commutator ideal, (b_1, b_2, b_3) ≠ (0, 0, 0). If b_2 ≠ 0 or b_3 ≠ 0, then the corresponding minor has rank 2 or more, forcing the larger matrix to have rank greater than three, a contradiction. Therefore b_2 = b_3 = 0. Substituting these values into the larger matrix, we see that rows 5, 6 and 8 are independent, and row 9 will not be in their span unless c_1 = c_2 = 0. Hence, x ∈ S. Conversely, if x ∈ S, then b_1 ≠ 0, b_2 = b_3 = 0, and c_1 = c_2 = 0. Since b_1 ≠ 0, the vector x is not in the centralizer of the commutator ideal. After substituting the zero values into the large matrix, we see that rows 7 and 9 are in the span of the independent rows 5, 6 and 8. Hence the rank of ad_x is 3. Thus we have shown that S can be characterized as the set of all x such that x is not in the centralizer of the commutator ideal and ad_x has rank 3. Therefore φ(S) = S, and φ preserves the closure of S. We have just shown that an isomorphism φ preserves W = Rx_1 ⊕ V_4 ⊕ V_6. Therefore φ will also map the centralizer of W to itself. Therefore, φ preserves Rx_4 ⊕ Rx_5 ⊕ V_3 ⊕ V_6 as claimed.
Thus we have shown that Part (2) holds in dimension 11. The same arguments apply in odd dimensions 9 + 2k greater than 11. Now we are ready to show (Theorem 4.5) that for fixed m and k, no two distinct Lie algebras in the family n^{m+2k}_s, s ∈ R, are isomorphic. Proof. Suppose that φ : n^{m+2k}_s → n^{m+2k}_t is an isomorphism. We view n^{m+2k}_s and n^{m+2k}_t as the same vector space endowed with different Lie brackets. Let V_i denote the eigenspace for the derivation D with eigenvalue i as in Equation (5). Recall that the eigenspaces V_i, for i ∈ {2, 3, 4, 6}, define an N-grading of n^{m+2k}_s, and V_3 = {0} when k = 0. In particular, V_6 is central.
To complete the proof, we will consider the cases m = 8 and m = 9 separately. First suppose that m = 8. The first defining relation yields

e^s [a_{12} x_1 + a_{22} x_2 + a_{32} x_3, a_{13} x_1 + a_{23} x_2 + a_{33} x_3] + v_4
= e^{s−t}(a_{22} a_{33} − a_{32} a_{23}) x_4 + e^{s+t}(a_{12} a_{33} − a_{32} a_{13}) x_5 + e^{s}(a_{12} a_{23} − a_{22} a_{13}) x_6 + v_4,

where v_4 is a vector in the center V_6.
Proof of Theorem 4.6. First we consider the case that m = 8. Let the family n^{8+2k}_s, s ∈ R, of nilpotent Lie algebras be as defined in Definition 4.1. We will do a proof by contradiction, so we suppose that n^{8+2k}_s admits a soliton inner product.
Let U denote the Gram matrix for n^{8+2k}_s with respect to the basis B. For a, b ∈ N, let [0]_{a×b} denote the a × b matrix with all entries zero, let [1]_{a×b} denote the a × b matrix with all entries one, and let I_a denote the a × a identity matrix. The matrix U has a block form built from the 8 × 8 Gram matrix of Equation (3), broken into blocks U_11, U_12, U_21, U_22 of sizes 5 × 5, 5 × 3, 3 × 5, and 3 × 3 respectively. By Theorem 2.1 there exists a solution v to Uv = [1] with all positive entries. Writing v in block form, multiplying Uv in block form and substituting [1]_{k×1} into the resulting expressions yields an equivalent system. The inequality in Equation (30) implies that Σ_{i=6}^{8} v_i < 1, so a < (3 − k)/3 + k/3 = 1.
Therefore, for all s ∈ R and all k ∈ N, the Lie algebra n^{8+2k}_s is not soliton, as claimed. Now suppose that m = 9. Let n^{9+2k}_s be one of the Lie algebras in the one-parameter family of nilpotent Lie algebras defined in Definition 4.1. Suppose that n^{9+2k}_s admits a nilsoliton inner product. Let U be the Gram matrix for n^{9+2k}_s with respect to the basis B. By examining the index set Λ, we see that the matrix U has a block form built from the 10 × 10 matrix U of Equation (4), with blocks U_11, U_12, U_21 and U_22 of sizes 8 × 8, 8 × 2, 2 × 8, and 2 × 2 respectively.
Proof of main theorem
Now we prove Theorem 1.4.
Proof. Let N_n denote the moduli space of N-graded, indecomposable nilpotent Lie algebras as described in Section 2.4. Let Nonsol(n) ⊆ N_n be the set of all μ in N_n such that n_μ does not admit a soliton inner product, as described in Section 2.4.
We know from the results of Lauret, Will and Culma described in the introduction ( [Lau02], [Wil03], [Cul11a], [Cul11b]) that the set of nonsoliton Lie algebras in N n is discrete when n ≤ 7. By Theorem 3.3, none of the 8-dimensional Lie algebras defined in Definition 3.1 are soliton. By Theorem 3.6 none of the 9-dimensional Lie algebras defined in Definition 3.4 are soliton. Theorem 4.6 implies that none of the Lie algebras in dimensions n ≥ 10 defined in Definition 4.1 are soliton.
By Theorem 4.5, no two of the Lie algebras n n s and n n t defined in dimension n ≥ 8 are isomorphic.
Therefore, in each dimension n ≥ 8, the mapping γ : R → N_n sending s ∈ R to the equivalence class of the nilpotent Lie algebra n^n_s is one-to-one, with image in Nonsol(n).
Hence, the set Nonsol(n), consisting of the nonsoliton Lie algebras (modulo equivalence under isomorphism) in N_n, is not discrete when n ≥ 8.
"year": 2011,
"sha1": "544fc2a509605f52b12136a09e4b6530eae92c15",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "544fc2a509605f52b12136a09e4b6530eae92c15",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
237946619 | pes2o/s2orc | v3-fos-license | Temocillin versus meropenem for the targeted treatment of bacteraemia due to third-generation cephalosporin-resistant Enterobacterales (ASTARTÉ): protocol for a randomised, pragmatic trial
Introduction Alternatives to carbapenems are needed in the treatment of third-generation cephalosporin-resistant Enterobacterales (3GCR-E). Temocillin is a suitable candidate, but comparative randomised studies are lacking. The objective is to investigate if temocillin is non-inferior to carbapenems in the targeted treatment of bacteraemia due to 3GCR-E. Methods and analysis Multicentre, open-label, randomised, controlled, pragmatic phase 3 trial. Patients with bacteraemia due to 3GCR-E will be randomised to receive intravenously temocillin (2 g three times a day) or carbapenem (meropenem 1 g three times a day or ertapenem 1 g once daily). The primary endpoint will be clinical success 7–10 days after end of treatment with no recurrence or death at day 28. Adverse events will be collected; serum levels of temocillin will be investigated in a subset of patients. For a 10% non-inferiority margin, 334 patients will be included (167 in each study arm). For the primary analysis, the absolute difference with one-sided 95% CI in the proportion of patients reaching the primary endpoint will be compared in the modified intention-to-treat population. Ethics and dissemination The study started after approval of the Spanish Regulatory Agency and the reference institutional review board. Data will be published in peer-reviewed journals. Trial registration number NCT04478721.
INTRODUCTION
Infections due to antimicrobial-resistant pathogens are recognised as a worldwide public health problem. The problem is especially severe among Gram-negative bacteria. In fact, third-generation cephalosporin-resistant Enterobacterales (3GCR-E), whether caused by extended-spectrum β-lactamases (ESBLs) or by high production of AmpC enzymes, were the leading cause of invasive infections (estimated, 365 000) and attributable deaths (estimated, 12 700) among antibiotic-resistant bacteria in the European Economic Area in 2015. 1 3GCR-E also have a major impact on the use of antibiotics: a marked increase in carbapenem consumption (the drugs of choice for invasive infections due to 3GCR-E) has followed the spread of these bacteria 2 and is contributing to the subsequent explosive spread of carbapenem resistance.
Strengths and limitations of this study
► The design of this randomised study has limitations, including the open design, the heterogeneity of oral alternatives for switching and the exclusion of patients with septic shock.
► One of the strengths is its pragmatic design, which we hope will allow the appropriate representation of patients with the target infections.
► The multicentre participation and the short time limit to recruit the patients once the bacteraemia is diagnosed are other strengths of this study.

Therefore, alternative treatments for 3GCR-E are desperately needed. One of the alternatives is piperacillin-tazobactam, but its efficacy compared with carbapenems is controversial. 3 4 The cephamycins may be active against ESBL producers, but not against AmpC producers, and their efficacy is doubtful. 5 Fosfomycin and aminoglycosides may be an empirical option in some cases, but they are only useful in urinary tract infection
(UTI). 6 7 Finally, new β-lactams (ceftolozane-tazobactam, ceftazidime-avibactam) might be reserved for other multidrug-resistant pathogens. 8 Temocillin is a β-lactam drug which is stable against ESBLs and AmpC enzymes, and therefore is active against a high proportion of 3GCR-E. 9 This drug is only approved in a few countries for the treatment of septicaemia, urinary tract and lower respiratory tract infections when susceptible Gram-negative bacilli are suspected or confirmed (standard dosing, 2 g two times a day intravenously; for severe infections, 2 g three times a day is recommended). The pharmacokinetic and pharmacodynamic properties of temocillin have recently been reviewed. 10 The objective of this article is to describe the hypothesis, objectives, design, variables and procedures for a pragmatic randomised controlled trial comparing the efficacy of temocillin and meropenem in bacteraemic infections caused by 3GCR-E. To the best of our knowledge, no randomised trials have been published with temocillin in these infections.
METHODS AND ANALYSIS

Hypothesis and objectives of the trial
The hypothesis of the study is that temocillin is non-inferior to carbapenems for the targeted treatment of bacteraemia due to 3GCR-E. The primary objective of the trial is to demonstrate the non-inferiority of temocillin in terms of efficacy and safety. Secondary objectives include providing specific comparative data about the efficacy and safety of temocillin and carbapenems in subgroups of patients (eg, different sources of bacteraemia, elderly patients, renal insufficiency and other underlying conditions), providing data about the pharmacokinetics and pharmacodynamics of temocillin, and about the distribution of the minimum inhibitory concentrations (MIC) of temocillin according to the mechanisms of resistance.

Patients with bacteraemia due to Enterobacterales showing resistance to ceftriaxone, cefotaxime and/or ceftazidime and susceptibility to temocillin and carbapenems are eligible. Inclusion and exclusion criteria are detailed in table 1. Participation in the study is voluntary and patients can withdraw from the study at any time. Subjects will be withdrawn from the study if they experience a major protocol violation, in case of clinical failure or according to safety criteria. Patients will be randomised once inclusion and exclusion criteria are checked (therefore, once the isolate is known to be susceptible to the study drugs) and informed consent is signed; randomisation must be performed within 96 hours after the blood cultures were obtained and within 48 hours of the availability of susceptibility results.
Randomisation
Candidates will be detected from the daily review of positive blood cultures. Patients with all inclusion criteria but
with some exclusion criteria will be considered screening failures. Those signing informed consent will be randomised centrally using an online automatic system with 1:1 allocation. Randomisation will be stratified according to empirical treatment (in vitro active or not) and suspected source (urinary tract or other) in order to ensure that these variables are balanced between the study arms. The automatic randomisation system is integrated in the electronic case report form (e-CRF) of the study.
Interventions and study treatment

Patients will be allocated to one of the following arms: Arm A (experimental group), in which patients will receive 2 g of temocillin intravenously every 8 hours in a 30-40 min infusion; and Arm B (control group), in which patients will receive 1 g of meropenem intravenously every 8 hours in a 15-30 min infusion. Ertapenem 1 g per day can be used instead of meropenem, except in patients with sepsis, if the MIC is ≤0.5 mg/L. Dosing will be adjusted in patients with renal insufficiency according to official labels (table 2). Duration of intravenous therapy will be decided by the treating physician, but should be at least four full days; then, patients can be switched to an oral regimen if the infection is controlled, the source of infection has been drained/removed/solved, the patient tolerates the oral route and the isolate is susceptible to appropriate drugs, as follows: ciprofloxacin 500 mg every 12 hours; trimethoprim-sulfamethoxazole 160/800 mg every 12 hours; and amoxicillin-clavulanic acid 875/125 mg every 8 hours. The recommended duration of total active therapy is 7-14 days. In monomicrobial bacteraemia in which a polymicrobial infection is suspected (eg, intra-abdominal infection), the addition of metronidazole (only in patients assigned to temocillin), vancomycin or linezolid (for resistant Gram-positive organisms) or an antifungal agent is allowed.
Concomitant treatment with any other systemic antibiotic with intrinsic activity against the Enterobacterales isolated from blood culture is not permitted. The use of one of these antibiotics during the treatment phase will be deemed a failure and a withdrawal criterion. There are no absolute contraindications for the use of any other drugs during the study.
Since temocillin is not approved in Spain, Belpharma SA will ship the vials to the Pharmacy Service at HUVM, where they will be relabelled and delivered to the sites. The drug traceability will be ensured. The other study drugs are officially approved in Spain, and they will be used through the normal provision of each Hospital Pharmacy at every participating site. The lot number, expiration dates and the number of vials used will be recorded.
Study endpoints
The primary endpoint will be a clinical success in the modified intention-to-treat (mITT) population (see below), and includes all of the following: (1) clinical success at test of cure (TOC); (2) survival at day 28; (3) no need to stop or change the assigned drug because of an adverse event, perceived failure during treatment or occurrence of a superimposed infection; (4) no need to prolong therapy beyond 14 days and (5) no recurrence until day 28. The TOC will be performed 7-10 days after the last day of antibiotic therapy. Clinical success is defined as resolution of the new signs or symptoms related to the infection.
To control for potential investigator bias, the outcome will be checked through: (1) collection of objective clinical data at day 0 and TOC, including temperature, blood pressure, respiratory and heart rates, Glasgow score and specific examination signs; and (2) calculation of the SOFA (Sequential Organ Failure Assessment) score at all visits. A blinded investigator will assess their concordance with the outcome classification provided by the local investigator. Secondary endpoints are shown in table 3.
Follow-up of participants
Patients will be followed until day 28; all visits and procedures to be performed at each one are specified in table 4. The day of blood culture is considered 'Day 0' of the study. After discharge, patients will be provided the means to attend the face-to-face visits.
Microbiological procedures

Blood samples will be obtained according to standard clinical practice. Blood cultures and bacterial identification will be performed at local laboratories using standard microbiological procedures; the isolates will be preserved and sent to HUVM. Temocillin susceptibility will be studied in 3GCR-E at local laboratories by gradient strips; isolates with an MIC value >8 mg/L will be considered resistant according to the British Society for Antimicrobial Chemotherapy breakpoint for susceptibility. Susceptibility to meropenem and other drugs will be studied using routine protocols and interpreted according to the European Committee on Antimicrobial Susceptibility Testing recommendations. Identification and susceptibility of all isolates to all study drugs will be checked later at HUVM using Matrix-Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF) and broth microdilution methods, respectively.
Pharmacokinetic and pharmacodynamic studies

Free temocillin plasma levels will be determined on days 1 and 3 in the first 20 patients allocated to the temocillin arm recruited at HUVM. Blood samples will be obtained 1, 4, 6 and 8 hours after the administration of temocillin; free temocillin plasma concentrations will be measured using HPLC-DAD. 11 The method will be validated according to the FDA Bioanalytical Method Validation Guidance for Industry. 12 For the population pharmacokinetic analysis, one-compartment and two-compartment linear models will be fitted to the temocillin plasma concentration-time data. Covariate model building will be performed using sequential assessment of biologically plausible clinical parameters. Monte Carlo models will be built to calculate the probability of target attainment (PTA) for the target of free drug concentration above the MIC for ≥50% of the dosing interval (PTA for >50% fT>MIC), for different MIC values and simulated dosing schemes (2 g of temocillin administered in 30 min and in 4 hours, every 8 or 12 hours). Dose adjustments will be simulated in patients with decreased renal clearance.
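A minimal sketch of the planned Monte Carlo step is given below: it simulates steady-state free concentrations under a one-compartment model with log-normal between-subject variability and reports the PTA for the ≥50% fT>MIC target over a range of MICs. All parameter values (clearance, volume, unbound fraction, variability) are illustrative assumptions, not estimates from this trial; the actual analysis will use the parameters of the fitted population model.

```python
# Illustrative Monte Carlo PTA sketch (one-compartment IV infusion model).
# All PK parameters below are placeholders, not estimates from this trial.
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 5000
dose_mg, t_inf, tau = 2000.0, 0.5, 8.0             # 2 g over 30 min, every 8 h
fu = 0.3                                            # assumed unbound fraction
CL = rng.lognormal(np.log(6.0), 0.30, n_subjects)   # clearance (L/h), hypothetical
V  = rng.lognormal(np.log(18.0), 0.25, n_subjects)  # volume (L), hypothetical
ke = CL / V

def free_conc(times):
    """Unbound steady-state concentration over one dosing interval (mg/L)."""
    k0 = dose_mg / t_inf                            # infusion rate, mg/h
    t = np.asarray(times, dtype=float)[None, :]
    ke_, cl_ = ke[:, None], CL[:, None]
    # contribution of the current dose: rises during the infusion, then decays
    rising = (k0 / cl_) * (1.0 - np.exp(-ke_ * np.minimum(t, t_inf)))
    current = rising * np.exp(-ke_ * np.clip(t - t_inf, 0.0, None))
    # accumulated contribution of all earlier (completed) infusions
    prev = (k0 / cl_) * (1.0 - np.exp(-ke_ * t_inf)) * np.exp(-ke_ * (t - t_inf)) \
           * np.exp(-ke_ * tau) / (1.0 - np.exp(-ke_ * tau))
    return fu * (current + prev)

times = np.linspace(0.0, tau, 161)
conc = free_conc(times)                             # shape: subjects x time points
for mic in (1, 2, 4, 8, 16):
    ft_over_mic = (conc > mic).mean(axis=1)         # fraction of the interval above MIC
    pta = (ft_over_mic >= 0.5).mean()
    print(f"MIC {mic:>2} mg/L: PTA(fT>MIC >= 50%) = {pta:.2f}")
```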
Sample size

We estimated an 85% success rate with meropenem and with temocillin. In order to reject the null hypothesis with 80% power and a 5% one-sided significance level for a 10% non-inferiority margin with 1:1 assignment, and allowing for 5% of missing patients, a total of 167 patients in each study arm are needed (334 patients in total).
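These numbers can be reproduced with the usual normal-approximation formula for non-inferiority of two proportions, as sketched below under the stated assumptions (85% success in both arms, 10% margin, one-sided α = 0.05, 80% power, 5% attrition).

```python
# Reproducing the sample-size calculation for a non-inferiority design
# (two proportions, normal approximation).
from math import ceil
from scipy.stats import norm

p1 = p2 = 0.85          # assumed success rate in both arms
margin = 0.10           # non-inferiority margin
alpha, power = 0.05, 0.80
attrition = 0.05        # expected proportion of missing patients

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_evaluable = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2 - margin) ** 2
n_per_arm = ceil(ceil(n_evaluable) / (1 - attrition))

print(ceil(n_evaluable))   # 158 evaluable patients per arm
print(n_per_arm)           # 167 randomised patients per arm (334 in total)
```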
Statistical analysis
For the primary analysis, the absolute difference in the proportion of patients reaching the primary endpoint in the two study arms will be compared in the mITT population, which includes all randomised patients who received at least one dose of the study drug, but in which those incorrectly included or randomised will be excluded. The one-sided 95% CI for the difference will be calculated.
As secondary analyses, all secondary endpoints will be analysed in the mITT population, in the per-protocol population (those receiving at least 4 days of the study drugs) and in the clinically evaluable population (patients evaluated at TOC). The absolute difference with 95% CI will be calculated for categorical endpoints, and the Mann-Whitney test will be used for length of hospital stay. The primary endpoint will also be analysed in the following subgroups: according to the source of bloodstream infection (BSI); age groups; patients with cancer; mechanism of resistance to third-generation cephalosporins; species of Enterobacterales; temocillin MIC <4 versus 4-8 mg/L; appropriate versus inappropriate empirical therapy; nosocomial versus non-nosocomial episodes; and INCREMENT score ≤7 or ≥8. Analysis considering the site effect will also be performed by using a random effects model. Finally, multivariate analysis will be performed in order to control for the potential effect of variables other than the randomised antibiotic therapy on the primary outcome using logistic regression, and on mortality by Cox regression. Key outcome determinants including age, Charlson index, delay in administering an active drug, specific sources, micro-organism, Pitt score, SOFA and renal insufficiency will be considered for inclusion in the models and will be selected using a stepwise backward process; the variable study arm will be forced into the models.
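As an illustration of the primary comparison, the snippet below computes the absolute difference in success proportions with a one-sided 95% lower confidence bound using a simple Wald-type normal approximation; the counts are invented for illustration, and the statistical analysis plan may specify a different interval method.

```python
# Sketch of the non-inferiority comparison: difference in success proportions
# (temocillin minus carbapenem) with a one-sided 95% lower confidence bound.
# Counts below are invented for illustration.
from math import sqrt
from scipy.stats import norm

succ_t, n_t = 138, 160     # hypothetical mITT results, temocillin arm
succ_c, n_c = 140, 162     # hypothetical mITT results, carbapenem arm

p_t, p_c = succ_t / n_t, succ_c / n_c
diff = p_t - p_c
se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
lower = diff - norm.ppf(0.95) * se          # one-sided 95% lower bound

margin = -0.10                              # -10% non-inferiority margin
print(f"difference = {diff:.3f}, one-sided 95% lower bound = {lower:.3f}")
print("non-inferiority shown" if lower > margin else "non-inferiority not shown")
```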
Safety and adverse event reporting
Pharmacovigilance activities are delegated from the sponsor to the Clinical Trials Unit of University Hospital Virgen del Rocío (CTU-HUVR). Follow-up of adverse events (AE) will be done according to standard procedures and European Medicines Agency regulations; all potential AE will be recorded and classified according to severity and potential relation to the trial drugs. Any adverse event must be recorded in the e-CRF, and all serious AE will be notified to the CTU-HUVR within 24 hours. The CTU-HUVR personnel are responsible for the reception, recording and resolution of queries and for the identification of any suspected unexpected serious adverse reaction (SUSAR). SUSARs will be evaluated and communicated within 15 days to the Regulatory Authorities, Ethics Committees and Investigators. Annual safety reports will be submitted to the Regulatory Authorities and Ethics Committees by these personnel. A safety monitor from the CTU-HUVR will coordinate the activities in collaboration with the SCReN.
Data and safety monitoring
Data collection activities will be assessed by an individual responsible of the CTU-HUVR, in contact with the investigators for the revision and verification of data according to a monitoring plan. Subject data will be anonymised and collected using the e-CRF.
An external, independent Data Safety Monitoring Board (DSMB) formed by three expert members not participating as investigators in the project will be selected. Interim analyses will be performed after the first 50 and after the first 150 patients are recruited. Reports with recommendations from the DSMB will be obtained for both interim analyses. A conditional power ≤20%, calculated using the Mehta and Pocock method after the inclusion of the first 150 patients, will be considered low enough to recommend termination of the trial on the basis of futility. A detailed description of the rules for decision-making by the committee will be agreed upon with the independent members at the time of their appointment.
Ethics and dissemination Ethics
An approved informed consent form (version 1.2, dated 6 May 2020) must be signed before any study-specific procedure is performed. The study is approved by the Spanish Regulatory Agency and by the Hospitales Universitarios Virgen Macarena and Virgen del Rocío Ethics Committee. The trial will be carried out according to the principles of the Declaration of Helsinki, Royal Decree RD 1090/2015 applicable in Spain for the performance of clinical trials, and European Regulation (EU) n° 536/2014 for all EU countries.
The results of the study will be submitted for publication to a scientific journal following the Consolidated Standards of Reporting Trials recommendations.
Patient and public involvement

Patients and the public were not involved in the design, conduct, reporting or dissemination plans of our research.
DISCUSSION
Temocillin, because of its in vitro activity, is a potential alternative to carbapenems for the treatment of infections caused by 3GCR-E and might help to reduce the consumption of these drugs. [13][14][15] As comparative clinical data for temocillin are scarce, ASTARTÉ is expected to provide evidence for the use of this drug in the setting of BSI due to 3GCR-E, which represents a substantial proportion of all BSI caused by Enterobacterales.
Because carbapenems are highly efficacious for the treatment of bacteraemia due to 3GCR-E and the objective is to find alternatives which can help in reducing their use, a non-inferiority approach is proposed. The use of an alternative drug might have additional benefits for the patient or the population by reducing the selective pressure of carbapenems for multidrug-resistant organisms. A superiority trial could be done by using a composite primary outcome including colonisation and/or superinfection by multidrug resistant bacteria, but a very high sample size would be needed, making it unfeasible for an investigator-driven clinical trial with public funds.
It is well known that classical randomised clinical trials (RCT) may not be adequately adapted to daily practice; they are frequently performed in selected sites with highly experienced investigators and selected participants, and the population included might not be representative of most patients to whom the results would be extrapolated, so overestimation of benefits and underestimation of harms can occur in special populations normally not included in RCT. This has led to the idea that more pragmatic trials, showing the real-world effectiveness of the intervention in broader patient groups, are required. 16 This may be particularly important in the evaluation of antibiotics, as the outcome of the infection does not depend only on the treatment itself but also on features of the patients, the source and severity of the infection, the micro-organism and different aspects of the clinical management (source control, supportive therapy). Therefore, ASTARTÉ was designed as a pragmatic trial trying to mimic clinical practice.
The inclusion of patients with bacteraemia was decided because this is a frequent situation, in which the aetiology is perfectly identified and for which the predictors of outcome have been well studied, also by our group 17; the problem of bacteraemia is that it includes different sources of infection, but experience from previous trials indicates that this can be adequately controlled for in the analysis. 4 The use of a composite primary endpoint was decided in order to include both a very relevant and hard variable, such as mortality, plus clinical success, as recommended in a consensus document for trials in bacteraemia. 18 Meropenem was chosen as comparator because carbapenems are considered the drugs of choice for invasive infections caused by ESBL producers. 19 The use of ertapenem (1 g per day) is accepted, except in patients with sepsis, if the MIC is ≤0.5 mg/L. 20 In order to approach standard clinical practice, switching to oral therapy when possible is included in the protocol. The first option for oral continuation therapy is ciprofloxacin. Trimethoprim-sulfamethoxazole can be used as a second option, only in UTI. The third approved option, in case of allergy or resistance to the previously described treatments, is amoxicillin-clavulanic acid.
The expected impact of the study is a change in clinical practice allowing temocillin to be used in many patients and contributing to a reduction in the consumption of carbapenems, which is highly needed in the current situation of resistance.
Strengths and limitations of this study
The design of this randomised study has limitations, including the open design, the heterogeneity of oral alternatives for switching, and the exclusion of patients with septic shock. Some strengths are its pragmatic design which we hope will allow the appropriate representation of patients with the target infections, the multicentre participation and the short time limit to recruit the patients once the bacteraemia is diagnosed.
Trial status
► Funding for the study was communicated in November 2019 and became available for study expenses in January 2020.
► Authorisation from the Spanish Regulatory Authority was obtained on 9 September 2020.
► Approval from the Ethics Committee for the 32 participating sites was obtained on 5 May 2020.
► The first patient was included in the study in December 2020.
"year": 2021,
"sha1": "2d5f8baeecf130c53870db897b79a673ed605294",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/11/9/e049481.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "2d5f8baeecf130c53870db897b79a673ed605294",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2389808 | pes2o/s2orc | v3-fos-license | Consistent Roof Geometry Encoding for 3 D Building Model Retrieval Using Airborne LiDAR Point Clouds
A 3D building model retrieval method using airborne LiDAR point clouds as input queries is introduced. Based on the concept of data reuse, available building models in the Internet that have geometric shapes similar to a user-specified point cloud query are retrieved and reused for the purpose of data extraction and building modeling. To retrieve models efficiently, point cloud queries and building models are consistently and compactly encoded by the proposed method. The encoding focuses on the geometries of building roofs, which are the most informative part of a building in airborne LiDAR acquisitions. Spatial histograms of geometric features that describe shapes of building roofs are utilized as shape descriptor, which introduces the properties of shape distinguishability, encoding compactness, rotation invariance, and noise insensitivity. These properties facilitate the feasibility of the proposed approaches for efficient and accurate model retrieval. Analyses on LiDAR data and building model databases and the implementation of web-based retrieval system, which is available at http://pcretrieval.dgl.xyz, demonstrate the feasibility of the proposed method to retrieve polygon models using point clouds.
Introduction
Recent developments in 3D scanning and modeling technologies have led to an increasing number of 3D models, and most of these models have been made available on web-based platforms with data-sharing services. In this context, the question of "How to generate 3D building models?" may evolve into "How to find them in model databases and on the Internet?" [1]. This concept motivates this study to encode unorganized, noisy, and incomplete building point clouds acquired by airborne LiDAR and to efficiently retrieve 3D models from databases and the Internet. A set of complete or semi-complete building models is retrieved and reused instead of reconstructing the point clouds using complicated modeling techniques [2][3][4][5].
The naive approach, called text-based retrieval, uses keywords in metadata to search for the desired 3D models. This method is simple and efficient, but using keywords as queries suffers from difficulties caused by inappropriate annotations and language varieties. By contrast, content-based retrieval is a promising approach that encodes the geometries of queries and models using a shape descriptor and matches the encoded coefficients for data retrieval. Most previous studies on content-based 3D model retrieval take polygon models as input queries [6,7]. These methods can efficiently and accurately extract models from databases. However, obtaining a desired polygon model as an input query is difficult, which limits the usage of retrieval systems. With the aid of airborne LiDAR, which is capable of efficient spatial data acquisition, a model retrieval method that uses LiDAR point clouds as input queries is proposed. By consistently encoding point clouds and polygon models, a set of building models similar in geometric shape to an input point cloud can be retrieved for the purposes of data extraction and efficient cyber city modeling. Based on the concepts of data reuse and crowdsourcing, the proposed system can efficiently construct a 3D city model, which is one of the key components of a virtual geographic environment.
Content-based model retrieval methods can be classified into two categories, model-based retrieval and view-based retrieval, based on the shape descriptor used in encoding. In model-based retrieval, shape similarities are measured using various geometric descriptors, including shape distribution [6], shape spectra [8], and shape topology [9]. In methods based on shape distribution, geometric features defined over feature spaces are accumulated in bins [6]. A histogram of these bins is utilized as the signature of a 3D model. In shape spectral methods, geometric shapes are transformed to a spectral domain and the spectral coefficients are used in shape matching and retrieval [8]. In topology-based methods, model topologies are represented as skeletons, and retrieval is performed based on the assumption that similar shapes have similar skeletons [9]. Unlike model-based methods, view-based methods represent 3D geometric shapes as a set of 2D projections, and 3D models are matched using their visual similarities instead of geometric similarities [7,10,11]. Each projection is described by image descriptors, and shape matching is reduced to the measurement of similarities among the views of the query object and the models in the database. Model-based and view-based methods perform well on existing benchmarks for polygon model encoding and retrieval. However, these methods are not designed for unorganized, noisy, sparse, and incomplete point clouds. Recently, Chen et al. [12] proposed point cloud encoding using a set of low-frequency spherical harmonic functions (SHs). With the preprocessing of data resampling and filling, the approach can alleviate the difficulties caused by sparse and incomplete sampling of point clouds. However, the use of low-frequency SHs decreases the ability to distinguish objects with similar geometric shapes, thereby leading to ambiguity in shape description.
To improve shape distinguishability, a roof geometry encoding that integrates shape distribution with visual similarity is proposed. The main idea is to represent point clouds and polygon models using top-view depth images, which can describe the shapes of building roofs and avoid disturbances from insufficient sampling of building side views. The depth images are further encoded by geometric features with spatial histograms, which introduce the properties of compact description, rotation invariance, noise insensitivity, and consistent encoding of point clouds and polygon models. These properties lead to a compact storage size and real-time retrieval response times. Furthermore, the visual similarity in depth images and the shape distribution in spatial histograms increase the distinguishability of geometric shapes. The remainder of this paper is organized as follows. Section 2 describes the methodology of point cloud encoding and building model retrieval. Section 3 introduces the properties of the proposed encoding method. Section 4 discusses experimental results, and Section 5 presents conclusions and future work.
System Overview
Figure 1 schematically illustrates the proposed system, which consists of three main steps: depth image generation, data encoding, and data retrieval. In the first step, the building models and the point cloud, that is, the input query, are represented by top-view depth images for the geometric description of building roofs. An interpolation process is then applied to the depth image of the point cloud for hole filling to facilitate consistent encoding. In data encoding, a set of geometric features is extracted from the interpolated depth images. The extracted geometric features are encoded by utilizing a spatial histogram with a determined origin, which provides a rotation-invariant shape description. During data retrieval, the encoded coefficients of the input query are matched with those of the models in a database using a shape similarity metric, and a set of best-matched models is extracted. In this section, the process of depth image generation is described in Section 2.2, followed by data encoding and retrieval, which are described in Sections 2.3-2.5.
Generation of Depth Image
Top-view projection is selected in depth image generation because of the characteristics of airborne LiDAR scanning. The roof is generally the most informative and identifiable part of a building in airborne LiDAR acquisitions. In the top-view projection, the pixel value of the projection is generally defined as the height difference between the ground and the pixel on the building roof. However, pixel intensity in this definition is dominated by building height, and the roof geometry is insignificant, especially for tall buildings. In this study, the pixel value of the depth image, denoted as D(x, y), is defined as the distance between the maximal height of the building and the height at that position, that is, D(x, y) = Max_B - H_B(x, y), where Max_B represents the maximal height of a building B, and H_B(x, y) denotes the building height at the position (x, y). In this definition, the pixel value of the depth image corresponds to the relative height difference, which enhances roof geometry in the depth image. The depth image generation is a process of rasterization. The spatial resolution of the geometric description depends on the setting of the grid size in rasterization. A large grid size indicates a low possibility of missing depth information and efficient rasterization, but results in a low spatial resolution of the depth image and a rough geometric representation of the point cloud. By contrast, a small grid size is linked to a high-resolution depth image with a fine geometric description, but results in time-consuming computation and a large number of empty pixels. In this study, a small grid size is preferred because data interpolation can be performed to alleviate the problem of missing information. In addition, the grid size is set according to point cloud density with the expectation that, on average, each grid cell in the depth image contains one point. Therefore, the grid size is defined as GS = 1/AvgPD, where AvgPD represents the average point density of a building point cloud. Figure 2 shows an example of point cloud rasterization. The average point density of the building is 16.67 pts/m², and the grid size GS is set to 0.245 m in rasterization. In this setting, the geometric detail of the building roof is preserved and only small holes are present in the depth image. The search for an optimal grid size setting is difficult because the determination of grid size is data-sensitive. Any advanced approach, such as the method proposed by Awrangjeb et al. [13], can be adopted and integrated into the system. Holes are generally present in depth images because of the rasterization of irregularly distributed LiDAR point clouds. To address this issue, a hole filling and completion process is performed using grayscale morphological operators. A grayscale morphological closing, which consists of dilation and erosion operators with a flat structuring element, is adopted to fill gaps in the depth images. The proposed encoding method requires the preservation of point cloud topology and geometry, but it does not require a perfect depth image, that is, an image without holes. Therefore, a simple and efficient morphology-based approach is adopted. With this hole filling, only holes that are smaller than the structuring element are filled; thus, the geometric topology, such as a hollow shape, can be maintained. In the implementation, a 7 × 7 structuring element is used.
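As a rough illustration of this step, the sketch below rasterizes a building point cloud into a top-view depth image and applies the 7 × 7 grayscale closing. It assumes NumPy/SciPy, interprets the grid-size rule as GS = sqrt(1/AvgPD) so that the numeric example above (16.67 pts/m² giving 0.245 m) works out, and makes its own choice about how empty cells are initialised; none of these details are fixed by the text.

```python
import numpy as np
from scipy.ndimage import grey_closing

def rasterize_depth_image(points, avg_point_density):
    """Rasterize a building point cloud (N x 3 array of x, y, z) into a
    top-view depth image D(x, y) = Max_B - H_B(x, y)."""
    # Grid size chosen so each cell holds roughly one point on average
    # (assumption: square cells, GS = sqrt(1 / AvgPD)).
    grid_size = (1.0 / avg_point_density) ** 0.5
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / grid_size).astype(int)
    rows = ((y - y.min()) / grid_size).astype(int)
    height = np.full((rows.max() + 1, cols.max() + 1), -np.inf)
    np.maximum.at(height, (rows, cols), z)        # keep the highest return per cell
    occupied = np.isfinite(height)
    max_b = height[occupied].max()                # Max_B
    depth = np.zeros_like(height)
    depth[occupied] = max_b - height[occupied]    # D(x, y) = Max_B - H_B(x, y)
    # Empty cells are left at 0 here (an assumption about how gaps are initialised);
    # grayscale closing with a 7 x 7 flat structuring element then fills holes
    # smaller than the element while preserving larger hollow shapes.
    return grey_closing(depth, size=(7, 7))
```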
Data Encoding
The first step of data encoding is determining a translation- and rotation-invariant origin of a depth image. Chen et al. [12] suggested selecting the center of the minimal bounding box of an object as the origin. However, the origin obtained with this approach is slightly sensitive to rotation, especially for non-symmetric buildings. For instance, in Figure 3, the cyan boxes are the minimal bounding boxes of rotated depth images. These bounding boxes differ in height and width, and thus the obtained origins, represented by cyan dots, are inconsistent. In this study, a depth image is regarded as a 2.5D model. The barycenter of the 2.5D model is selected as the origin because this center is unchanged after similarity transformation. Examples of the obtained origins are shown in Figure 3. The origins displayed in red remain constant as the object is rotated, which implies that this origin is invariant to rotation and translation.
In formula, given a depth image D(x, y), the shape origin (x_0, y_0) is defined as the weighted position in the depth image, that is, (x_0, y_0) = (1/Σ_i D_i) Σ_{i=1}^{n} D_i (x_i, y_i), where n represents the number of roof pixels, and D_i denotes the value of D(x_i, y_i), which is used as the weight of the vector (x_i, y_i). Before data encoding, the point cloud and the models are aligned to their origins to facilitate rotation- and translation-invariant encoding. Three geometric features, namely, the height feature, the edge feature, and the planarity feature, as illustrated in Figure 4, are used to describe the shape of a building roof in the depth image domain. These three features are described as follows. Height Feature. A building roof is represented as a depth image, and its pixel intensity denotes the relative height of the roof at that position. Therefore, using pixel intensities in a depth image to represent roof geometry is intuitive and efficient. Following [14], the height feature, denoted as F_h(x, y), is defined as the pixel intensities of the depth image, that is, F_h(x, y) = D(x, y).
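A minimal sketch of the weighted-barycenter origin is given below; it assumes the depth image is a NumPy array in which roof pixels are exactly the pixels with D(x, y) > 0, which the paper does not state explicitly.

```python
import numpy as np

def shape_origin(depth):
    """Depth-weighted barycenter (x_0, y_0) of the depth image, used as the
    rotation- and translation-invariant origin before encoding."""
    ys, xs = np.nonzero(depth > 0)               # roof pixels (assumed: D > 0)
    weights = depth[ys, xs]                      # D_i is the weight of (x_i, y_i)
    x0 = np.sum(weights * xs) / np.sum(weights)
    y0 = np.sum(weights * ys) / np.sum(weights)
    return x0, y0
```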
Edge Feature. Sharp edges in an image are linked to high-frequency components that represent details in the image. Therefore, sharp edges are used as geometric features to represent roof geometries. Inspired by [15], a Laplacian of Gaussian (LoG) filter is adopted to extract sharp edges while alleviating disturbances from noise. The LoG is composed of Gaussian and Laplacian filters, where the low-pass kernel function in the Gaussian filter is used to suppress noise and the second-derivative kernel function in the Laplacian filter is utilized to extract sharp edges. The LoG is applied to the depth images with their inherent noise, and the resulting data are used as the edge feature, which is denoted by F_e(x, y). Planarity Feature. Different from sharp edges, which are high-frequency components, planes in an image relate to low-frequency components that represent rough information in the image. Therefore, the planarity feature is selected as a geometric feature. The planarity feature from the principal component analysis of a point set is a useful descriptor that can describe the local geometry of a point set and indicate whether the local geometry is planar. Given a 2.5D point set P = {(x_i, y_i, D_i)}, i = 1, ..., n, from the pixels in a depth image, where the points are located within a circle of diameter r centered at a position p_c: (x_c, y_c, D_c), a simple method for computing the principal components of the point set P is to diagonalize its 3 × 3 covariance matrix C. The eigenvectors and eigenvalues of the covariance matrix are computed using matrix diagonalization, that is, V^{-1}CV = D, where D is the diagonal matrix containing the eigenvalues {λ_1, λ_2, λ_3} of the covariance matrix C, and the matrix V contains the corresponding eigenvectors. In geometry, the eigenvalues relate to an ellipsoid that represents the local geometric structure of a point set. Combinations of these eigenvalues provide discriminant geometric features. For instance, λ_1 ≈ λ_2 ≫ λ_3 indicates a flat ellipsoid that can represent planar structures, and λ_1 ≈ λ_2 ≈ λ_3 corresponds to a volumetric structure, such as the corners of buildings.
Following the definitions in [16,17], the planarity feature is defined as (λ_2 − λ_3)/λ_1, which enhances and describes planar structures. The planarity feature F_λ(x, y) obtained from the principal component analysis is used as a geometric feature.
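The edge and planarity features could be computed along the following lines. The LoG sigma, the square neighbourhood window (the paper uses a circular one), and the use of the sample covariance are assumptions not fixed in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def edge_feature(depth, sigma=1.0):
    """Edge feature F_e: Laplacian-of-Gaussian response of the depth image.
    sigma is an assumed parameter; the paper does not report its value."""
    return gaussian_laplace(depth, sigma=sigma)

def planarity_feature(depth, radius=3):
    """Planarity feature F_lambda = (l2 - l3) / l1 from the eigenvalues of the
    local covariance of the 2.5D points (x, y, D) around each pixel."""
    h, w = depth.shape
    planarity = np.zeros_like(depth)
    for r in range(h):
        for c in range(w):
            r0, r1 = max(0, r - radius), min(h, r + radius + 1)
            c0, c1 = max(0, c - radius), min(w, c + radius + 1)
            yy, xx = np.mgrid[r0:r1, c0:c1]
            pts = np.column_stack([xx.ravel(), yy.ravel(),
                                   depth[r0:r1, c0:c1].ravel()])
            cov = np.cov(pts, rowvar=False)                    # 3 x 3 covariance matrix
            l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
            if l1 > 0:
                planarity[r, c] = (l2 - l3) / l1
    return planarity
```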
The height, edge, and planarity features provide descriptions of the height, high-frequency, and low-frequency components of a building roof, respectively. These three features compose a complete set of geometric descriptions for a depth image and a building roof. An example of the geometric features is shown in Figure 5. To visualize these features, the feature values are normalized to the range [0, 255] and displayed as a gray image.
Spatial Histogram
The purposes of using a spatial histogram in encoding are to reduce the data sizes of the geometric features and to achieve rotation-invariant encoding. In spatial histogram generation, the feature image is partitioned into several disjoint circular subspaces, called bins, which are all centered on the determined origin. Similar to an image histogram, the intensities of pixels in the feature image are accumulated in bins according to their positions. The accumulated value in each bin is then normalized by the sum of pixel intensities. The range of a bin value is [0.0, 1.0], and the sum of all values in the bins is 1.0. The distribution of pixel intensities and the spatial positions of the pixels are both encoded in the histogram, which leads to an encoding capable of geometric distinguishability. In the implementation, the feature image is circularly partitioned into k bins of equal width, and the maximum radius of the depth image is set to the distance between the farthest pixel and the determined origin. The number of bins k is set to 30 to balance encoding compactness and shape distinguishability. A small k may lead to a rough and insufficient description of a geometric shape, whereas a large k may cause inefficient retrieval and increase the sensitivity to noise.
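A sketch of the ring-histogram accumulation is given below; it assumes a roof mask and non-negative feature values (for the signed LoG response, its magnitude could be accumulated instead), neither of which is specified in the text. The default k = 30 matches the setting described above.

```python
import numpy as np

def spatial_histogram(feature, roof_mask, origin, k=30):
    """Accumulate the pixel intensities of a feature image into k concentric
    ring bins centred on the given origin, then normalise so the bins sum to 1."""
    ys, xs = np.nonzero(roof_mask)
    x0, y0 = origin
    radii = np.hypot(xs - x0, ys - y0)
    bins = np.minimum((radii / radii.max() * k).astype(int), k - 1)
    hist = np.bincount(bins, weights=feature[ys, xs], minlength=k)
    total = hist.sum()
    return hist / total if total > 0 else hist
```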
With spatial histograms, the height, edge, and planarity geometric features are described and encoded as (h_1, ..., h_k), (e_1, ..., e_k), and (p_1, ..., p_k), respectively. The total number of components in an encoded coefficient is 90. Examples of spatial histograms with 10 bins are shown in Figure 6; the height, edge, and planarity features, which describe different geometric aspects, produce different spatial histograms. To further distinguish objects with similar roof geometries but different sizes, the area of the roof, the height of the building, and the maximum radius are integrated in the encoding, which leads to a scale-variant shape representation. The maximal height of a building, denoted as M_h, is defined as M_h = max(H_B(x, y)) − H_min, where the function max() returns the maximum of its input, and H_min represents the minimum of the roof height H_B(x, y). The maximum radius, denoted as M_r, is defined as M_r = max_radius(H_B(x, y)), where the function max_radius() returns the maximal distance in the xy-space between a roof pixel and the origin. The building outliers, that is, pixels with H_B(x, y) = H_min, are excluded from the distance calculation because outliers are generally the protruded exterior facades lying near the roof boundaries. The roof area, denoted by A_f, is defined as A_f = count(H_B(x, y)), where the function count() returns the total number of pixels in the input, excluding pixels with the minimal value H_min.
Data Retrieval
By combining the spatial histograms of the three geometric features, a point cloud or polygon model is encoded as F_1 = (h_1, ..., h_k, e_1, ..., e_k, p_1, ..., p_k) (6). In addition, the maximal height, maximal radius, and roof area for scale-variant encoding are combined as F_2 = (M_h, M_r, A_f) (7). Given the encodings in (6) and (7), the measurement of shape similarity is formulated as a combination of F_1 and F_2. Given a point cloud P and a building model M, shape similarity is defined as dist(P, M) = |F_1(P) − F_1(M)| × (1 + 2 × |F_2(P) − F_2(M)| / (F_2(P) + F_2(M))) (8), where the first part of the equation, |F_1(P) − F_1(M)|, is the measure of geometric similarity between the objects P and M, and the second part, 1 + 2 × |F_2(P) − F_2(M)| / (F_2(P) + F_2(M)), which ranges from 1.0 (no penalty) to 2.0 (maximal penalty), denotes the penalty for different object scales. No penalty is given for buildings with the same scale, and a maximal penalty is set for buildings with an extremely large difference in scale. In (8), a small dist represents a high similarity between the point cloud P and the polygon model M, and vice versa.
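The following sketch applies (8) and ranks a database of pre-encoded models. How the absolute values are applied to the vector quantities F_1 and F_2 is not spelled out in the text; the L1 norm over the 90-bin vector and the summed relative difference over (M_h, M_r, A_f) used here are assumptions.

```python
import numpy as np

def shape_distance(f1_query, f2_query, f1_model, f2_model):
    """dist(P, M) from (8): geometric term scaled by a scale penalty.
    The paper states the penalty lies in [1, 2]; the exact vector
    normalisation below is an assumption."""
    geometric = np.abs(np.asarray(f1_query) - np.asarray(f1_model)).sum()
    f2_q = np.asarray(f2_query, dtype=float)
    f2_m = np.asarray(f2_model, dtype=float)
    scale_penalty = 1.0 + 2.0 * np.abs(f2_q - f2_m).sum() / (f2_q + f2_m).sum()
    return geometric * scale_penalty

def retrieve(query_f1, query_f2, database, top_n=10):
    """Rank pre-encoded models, given as (model_id, f1, f2) tuples, by distance."""
    scored = [(shape_distance(query_f1, query_f2, f1, f2), model_id)
              for model_id, f1, f2 in database]
    return sorted(scored)[:top_n]
```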
The building models downloaded from the Internet are encoded in preprocessing using (6) and (7). The encoded coefficients are stored in a database. When a point cloud is selected as the input query in the online retrieval system (http://pcretrieval.dgl.xyz), the point cloud is also encoded using (6) and (7). The obtained coefficients are then matched with the coefficients in the database using (8). Shape similarities are sorted, and several top-ranking models are extracted as the query response.
Encoding Properties
The proposed encoding method possesses several properties that make it feasible for a retrieval system to extract polygon building models using point clouds as input queries. First, the point clouds and polygon models are consistently encoded with the aid of morphological hole filling and consistent origin determination. To demonstrate this property, a building model and its corresponding point cloud were tested. The encoding results in Figure 7 show that the encoded height, edge, and planarity coefficients of the point cloud and the polygon model are similar. This result indicates consistent encoding and the feasibility of retrieving polygon models by using point clouds. Second, the proposed approach provides a shape similarity metric wherein similar shapes have small distances and dissimilar ones have large distances. This property implies that the encoding can distinguish objects with different geometric shapes, which is the foundation of data retrieval. Third, the encoded coefficients are rotation invariant because of the spatial histogram and the rotation-invariant origin. Figure 8 shows that the coefficients remain unchanged when a 45-degree rotation is applied to the object. Fourth, the proposed encoding scheme is insensitive to noise because the noise problem is alleviated by the statistics-based histogram encoding. For example, in Figure 9, a point cloud with Gaussian noise is tested. The standard deviation of the Gaussian noise is set to 0.1.
The results show that the encoded coefficients exhibit slight differences when the Gaussian noise is added to the data. Fifth, the building models in the database are compactly encoded. The encoded coefficient for a building model is a set of 90 real numbers, which is smaller than the size of the original model data. With the aforementioned properties, the proposed retrieval method can efficiently and accurately retrieve building models using point clouds.
Datasets
A database containing about one million 3D building models was tested. The models were downloaded from the Internet using web crawlers that systematically browse the Internet to search for 3D building models. The study area is the campus of National Cheng-Kung University, Taiwan, and the Taipei 101 building. The airborne LiDAR point cloud of the study area was acquired by an Optech ALTM 30/70 in 2011. This point cloud was extracted from a combination of three overlapping scanning strips. The number of points in the test LiDAR point cloud is 2,772,880, and the average density is 10.72 pts/m², corresponding to a point spacing of 0.31 m. The LiDAR point cloud of the study area and its corresponding aerial image are shown in Figure 10. Trees near buildings were removed, and building point clouds were segmented and extracted from the test data manually for simplicity. Any advanced point cloud segmentation or building extraction algorithm [18][19][20] can be adopted to extract building point clouds automatically or semi-automatically.
Computational Performance
The encoding time for a point cloud of 22,440 points covering an area of about 2000 m² is 150 ms, and the retrieval response time is 600 ms for a database containing one million models. The information on the tested building point clouds, grid sizes, and computation times is shown in Table 1. The grid sizes are automatically calculated according to the average point density of each building. Encoding time mainly depends on the number of points and the xy-area of a building, and retrieval time is linearly dependent on the size of the model database.
Evaluation of Model Encoding
To evaluate the proposed method, two sets of building models that have different levels of detail (LODs) on their roofs were tested. The purpose of this experiment is to evaluate the distinguishability of encoding approaches on building models with different geometric roof details. The first dataset, which contains four models, is a building with hollow geometry. The original model with hollow geometry, denoted by LoD^a_1, is gradually reduced to a non-hollow model with simplified roof geometry, denoted as LoD^a_4. The simplified and detached parts of the buildings are marked in red. The second dataset contains five models, where the original model with detailed roof geometry, denoted by LoD^b_1, is gradually simplified to a simple box, denoted by LoD^b_5. The aerial photos, LiDAR point clouds, and well-constructed building models, that is, LoD^a_1 and LoD^b_1, are shown in Figure 11. The point clouds and the models are encoded using the proposed method. The shape similarities between the models and their corresponding point clouds are measured using (8). The models and their similarity values are shown in Figure 12. For the first dataset, the results indicate that the proposed encoding method can separate the model with hollow geometry from the model without hollow geometry. The results for the second dataset show that the shape similarities decrease near-linearly as the geometry of the original model is gradually simplified. These experiments demonstrate the ability to describe hollow geometric shapes and to distinguish models with different roof geometry details, which is superior to the related method [12] that encodes models using a set of low-frequency spherical harmonic functions. Because of the use of low-frequency and spherical basis functions, the method of [12] is not able to distinguish models with similar geometry details or models with and without hollow geometries.
Evaluation of Model Retrieval
The proposed method was compared with the method of [12] by using a database that contains around one million 3D building models downloaded from the Internet. Three building point clouds were tested as the input queries. To evaluate the retrieval accuracy, a commonly used measurement, namely, the root mean square deviation (RMSD), is adopted to calculate the shape similarities between the input query and the retrieved models. Before the RMSD calculation, the retrieved models are aligned with the input query semi-automatically using the standard registration algorithm called iterative closest point [21]. The shape similarities obtained by RMSD are used as reference values in the retrieval evaluation. Figure 13 shows the retrieval results, including the retrieved models, ranks, and reference RMSD values. The visual comparisons show that the models extracted by these two methods are similar to the queries in shape, and the ranking generated by the proposed method is better than that in [12], especially for the third dataset. This experiment shows that the proposed method, which combines shape distribution with visual similarity, can improve the ability to distinguish geometric shapes compared with [12]. However, the ranks obtained by these two approaches are not well matched with those of the RMSD references. For instance, in the first test dataset, the RMSD of the 5th-ranked model is larger than that of the 7th-ranked model, which means that the ranking is inconsistent with the reference. The imperfect ranking is caused by the use of a compact shape description. To achieve efficient retrieval, data are compactly encoded by the shape description, and only 90 coefficients are used in the proposed method to encode a point cloud or a polygon model. To solve this problem, the ranks of the extracted models can be further refined using the RMSD measurement if perfect ranking is required. To further analyze retrieval performance, the ranking of all extracted models by RMSD is used as the reference. Then, the measurement for ranking is defined as the difference between the obtained ranking and the reference, that is, Diff_r = |R_method − R_ref|, where R_ref and R_method represent the reference ranking and the ranking from a method, respectively. In this measure, the average of Diff_r, denoted as Avg.Diff_r, is used to estimate ranking quality. The statistical analysis for Avg.Diff_r is shown in Table 2. A small Avg.Diff_r means a high-quality ranking. In addition, the commonly used measurements precision η_s and recall η_n were adopted to evaluate the retrieval accuracy using Data #3 in Figure 13. These measurements are defined as η_s = TP/(TP + FP) and η_n = TP/(TP + FN), where TP, FP, and FN represent true positives, false positives, and false negatives, respectively. The results are shown in Table 3. From the statistical analysis, the retrieval results of the proposed method are superior to those of [12].
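For reference, the two evaluation measures could be computed as in the short sketch below, which simply restates the definitions above in code.

```python
import numpy as np

def avg_rank_difference(method_ranks, reference_ranks):
    """Avg.Diff_r: mean absolute difference between a method's ranking and the
    RMSD-based reference ranking of the same retrieved models."""
    return np.mean(np.abs(np.asarray(method_ranks) - np.asarray(reference_ranks)))

def precision_recall(tp, fp, fn):
    """Precision eta_s = TP / (TP + FP); recall eta_n = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)
```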
Conclusions and Future Work
This study proposed a new method for 3D building model retrieval using LiDAR point clouds as input queries. To achieve consistent encoding of polygonal models and point clouds, rotation-invariant origin determination is adopted, which utilizes the object volume to determine the origin instead of using the minimal bounding box of an object. In addition, the morphological closing operator is applied to fill the holes in the depth images of point clouds. Filling holes effectively alleviates the difficulties caused by sparse and incomplete sampling of point clouds and facilitates consistent encoding. The proposed encoding, which integrates shape distribution with visual similarity, can increase the distinguishability of geometric shapes. The experiments on building models with different LoDs show that ambiguous shape description is avoided. The experiments also demonstrate that using the spatial histogram of geometric features as the shape descriptor introduces the properties of noise insensitivity and rotation invariance to the retrieval. These properties make the proposed approaches feasible for efficient and accurate model extraction. Based on the qualitative and quantitative analyses of the LiDAR data and the building models, and based on the implemented web-based retrieval system, we conclude that retrieving building models by using point clouds is feasible. In the future, we plan to extend the proposed method to point clouds from photogrammetry techniques and terrestrial LiDAR, which also have a great need for 3D modeling.
Figure 1. System overview. The system consists of three main steps: depth image generation, data encoding, and data retrieval.
Figure 2. Depth image. (Left, Middle): perspective and top views of the point cloud, respectively; (Right): top view of the depth image.
Figure 3. Comparison between the origins obtained by using the minimal bounding box (cyan dots) and the building volume (red dots).
Figure 5. Results of geometric features. (Left): original point cloud; (Right): results of the height, edge, and planarity features of the point cloud, respectively.
Figure 6. Results of the spatial histograms of the height (left), edge (middle), and planarity (right) features. Feature images (top) and corresponding spatial histograms with 10 bins (bottom).
Figure 7. Consistent encoding of a point cloud (top) and a building model (bottom). 1st row: original data; 2nd-4th rows: results of the height, edge, and planarity features and their corresponding spatial histograms.
Figure 9. Demonstration of noise insensitivity. (Top): point cloud encoding results; (Bottom): encoding results of a point cloud with Gaussian noise added.
Figure 10. Study area. The aerial image (left) and the corresponding point cloud (right).
Figure 12. Tested models with different LODs. The numbers shown below the models represent the shape similarities between the models and their corresponding point clouds.
Figure 13. Comparison of the retrieval results obtained by our method (Method A) and the related method [12] (Method B).
Table 2. Retrieved rankings obtained by the proposed method and the method by Chen et al. (2014) [12]. 1st column: RMSD of the retrieved model; 2nd column: reference ranking; 3rd column: Diff_r value. The two numbers in each pair denote the performances of the compared methods.
Table 3. Precision and recall of our method (Method A) and the related method [12] (Method B). Data #3 in Figure 13 is used as the tested data.
"year": 2017,
"sha1": "89d7556a8fd30f758630d09e2f43826b18a54fe4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/6/9/269/pdf?version=1503914043",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "89d7556a8fd30f758630d09e2f43826b18a54fe4",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
222213551 | pes2o/s2orc | v3-fos-license | Stereochemistry of Astaxanthin Biosynthesis in the Marine Harpacticoid Copepod Tigriopus Californicus
The harpacticoid copepod Tigriopus californicus has been recognized as a model organism for the study of marine pollutants. Furthermore, the nutritional profile of this copepod is of interest to the aquafeed industry. Part of this interest lies in the fact that Tigriopus produces astaxanthin, an essential carotenoid in salmonid aquaculture. Here, we study for the first time the stereochemistry of the astaxanthin produced by this copepod. We cultured T. californicus with different feeding sources and used chiral high-performance liquid chromatography with diode array detection (HPLC-DAD) to determine that T. californicus synthesizes pure 3S,3’S-astaxanthin. Using meso-zeaxanthin as feed, we found that the putative ketolase enzyme from T. californicus can work with β-rings with either 3R- or 3S-oriented hydroxyl groups. Despite this ability, experiments in the presence of hydroxylated and non-hydroxylated carotenoids suggest that T. californicus prefers to use the latter to produce 3S,3’S-astaxanthin. We suggest that the biochemical tools described in this work can be used to study the mechanistic aspects of the recently identified avian ketolase.
Introduction
Tigriopus is a genus of harpacticoid copepods that inhabits rocky tide pools in most of the planet's coasts [1]. Due to the wide geographical distribution and ease of cultivation of this crustacean, Tigriopus is becoming a model organism for evaluating the impact of pollutants on the marine ecosystem [2]. The high nutritional value of Tigriopus has also attracted the interest of aquaculture, and this copepod has been tested for aquafeeds [3][4][5].
Part of the nutritional interest of this copepod lies in its reddish coloration, since Tigriopus spp. produces astaxanthin [6]. This carotenoid, sourced from chemical synthesis mainly, is used to support the healthy growth of farmed salmon and trout [7,8], and is also essential for these species to acquire the characteristic reddish hue that is crucial for customer acceptance [9]. On the other hand, astaxanthin from microalgae origin is marketed for direct human consumption in capsules [10], as experiments in murine models suggest that this carotenoid may be beneficial in cognition [11], cardiac function [12], and inflammation [13]. Nevertheless, a recent meta-analysis of the randomized placebo-controlled trials conducted with astaxanthin suggests that more work is needed to confirm these benefits in humans [14].
Genetic studies in birds have contributed the most prominent advances in the study of astaxanthin biosynthesis in animals with the identification of the ketolase gene [15,16]. However, biochemical studies in zooplankton have also contributed to the knowledge of this pathway [17][18][19][20]. Along this line, Weaver et al. recently proposed T. californicus as a model organism for the study of astaxanthin synthesis in animals [18].
Both crustaceans and birds that produce astaxanthin use ingested carotenoids as precursors. From these precursors, β-carotene, echinenone, and canthaxanthin are non-hydroxylated carotenoids, while β-cryptoxanthin, zeaxanthin, and lutein have hydroxyl groups. β-carotene, a major carotenoid in microalgae, is believed to be the usual precursor of astaxanthin in crustaceans, whereas zeaxanthin is pointed out as the preferred precursor for birds [15]. Hydroxylated carotenoids potentially determine the stereochemistry of astaxanthin produced, while the use of non-hydroxylated carotenoids leaves the stereochemistry of astaxanthin to the mechanism of action of the β-hydroxylase. This, together with the possibility of ingestion and accumulation of astaxanthin of different stereochemistry than that produced, can finally yield one or more of the following stereoisomers: 3S,3'S-astaxanthin, 3R,3'S/3'S,3R-(meso)-astaxanthin or 3R,3'R-astaxanthin ( Figure 1). Of note, the stereochemistry of astaxanthin has not yet been studied in birds. In crustaceans, the species studied yielded different stereoisomeric compositions [21][22][23]. The importance of knowing the stereochemistry of astaxanthin in microorganisms and animals relies on its potential use as a tool to identify the origin of a feeding source, as proposed for wild and farmed salmon [24,25]. This knowledge is also essential to introduce a novel food containing astaxanthin in the market, and can be important in terms of health benefits, since it seems that certain stereoisomers of astaxanthin are superior in this sense, as suggested by seminal studies in C. elegans [26].
In this work, we analyzed the stereoisomeric composition of astaxanthin in Tigriopus californicus, and concluded that this copepod produces pure 3S,3'S-astaxanthin. Furthermore, our results suggest the existence in Tigriopus of a degree of laxity in the ketolase activity, together with a strict selection of non-hydroxylated carotenoids as precursors of astaxanthin.
Carotenoid Profile of Co-Cultured Microalgae and Tigriopus Californicus
We aimed to test the stereochemistry of astaxanthin produced by T. californicus in culture conditions as close as possible to nature. To do this, we co-cultured T. californicus with R. lens, T. chui, and N. oceanica, which provided a mix of potential precursor carotenoids to this crustacean. Figure 2A shows that the cultured microalgae accumulated violaxanthin as the major carotenoid (peak 1), and minor amounts of zeaxanthin (peak 4), canthaxanthin (peak 5), echinenone (peak 8), and β-carotene (peak 9). All these carotenoids are potential precursors of astaxanthin synthesis in Tigriopus. The carotenoid profile of T. californicus ( Figure 2B) showed a main peak of all-trans astaxanthin (peak 13). The absence of this carotenoid in the microalgae indicates that T. californicus produced astaxanthin from ingested carotenoids, confirming previous results [18]. Appreciable amounts of violaxanthin were also observed ( Figure 2B, peak 1), probably due to the presence of ingested microalgae in the gut of the zooplankton. Other minor peaks (peaks 17 and 18) comprised esterified astaxanthin. Canthaxanthin was not detected, suggesting that this carotenoid is not an intermediate in astaxanthin biosynthesis in Tigriopus, at least under the growth conditions tested. A significant carotenoid was peak 15, which exhibited a spectrum identical to that of echinenone from the microalgae ( Figure 2A, peak 8) and echinenone authentic standard ( Figure 2C). Nevertheless, the retention time of this carotenoid was shorter (13.52 min vs. 15.33 min of echinenone). These features suggest that this carotenoid has the chromophore of echinenone, but contains an additional oxygenated function that makes this carotenoid advance more rapidly in chromatography. Comparing this data with the literature, we tentatively identified this carotenoid as 3-hydroxyechinenone [27,28]. This carotenoid has been detected in other copepods that produce astaxanthin [20]. However, more work is needed to identify this carotenoid unequivocally.
Stereochemistry of Zeaxanthin Produced by the Microalgae Culture
As stated above, the microalgae culture used in the previous section provided four carotenoids with the potential to serve as a precursor of astaxanthin synthesis in Tigriopus. Among them, only zeaxanthin has the hydroxyl groups that could force the stereochemistry of astaxanthin ( Figure 1). Although plants, microalgae, cyanobacteria, and non-photosynthetic bacteria produce 3R,3'R-zeaxanthin [29][30][31], we aimed to confirm this in our study. Figure 3 shows the retention time of the zeaxanthin present in the microalgae culture ( Figure 3B, peak 2), which coincides with the retention time and the spectrum of 3R,3'R-zeaxanthin in the standard enantiomeric mixture ( Figure 3A,C). Therefore, we conclude that T. californicus ingested pure 3R,3'R-zeaxanthin. (B) Carotenoid extract from the microalgae culture (peak 1, β-carotene; peak 2, zeaxanthin; peak 3, violaxanthin. (C) Spectrum of peak 2 from the HPLC chromatogram of carotenoids from Tigriopus and from authentic zeaxanthin racemic standard.
Stereochemistry of Astaxanthin Produced by Tigriopus Californicus
The next step consisted of determining the stereochemistry of the astaxanthin produced by T. californicus feeding on the microalgae described in the previous section. We decided to avoid saponification and use only free astaxanthin (roughly 70% of astaxanthin produced) as a proxy to the stereochemistry of this carotenoid in Tigriopus. In this way, we avoided the production of cis isomers that would have made the interpretation of the chiral analyses difficult. Figure 4 shows the chiral analysis of the free astaxanthin produced by T. californicus compared to the true astaxanthin racemic mixture. Free astaxanthin produced by T. californicus ran as a single peak ( Figure 4B, Peak 1), with retention time and spectrum consistent with reference 3S,3'S-astaxanthin ( Figure 4A,C). Peak 2 in Figure 4B showed a retention time compatible with 3R,3'R-astaxanthin ( Figure 4B), but the spectrum of this carotenoid was clearly different from that of the reference ( Figure 4C), and probably corresponds to a cis-isomer of astaxanthin. This result suggests that in this experiment, despite having an hydroxylated carotenoid available such as 3R,3'R-zeaxanthin, Tigriopus preferred to use a non-hydroxylated precursor such as β-carotene, echinenone, and/or canthaxanthin to synthesize astaxanthin. An alternative hypothesis, although less likely, is that Tigriopus dehydroxylated 3R,3'R-zeaxanthin, re-hydroxylated it as 3S,3'S-zeaxanthin, and finally used it to produce 3S,3'S-astaxanthin.
Astaxanthin Biosynthesis in Tigriopus Californicus Using Meso-Zeaxanthin as Precursor
Interestingly, Weaver et al. reported that T. californicus could use zeaxanthin to produce astaxanthin [18]. However, the absolute configuration of the synthesized astaxanthin was not investigated in that study. We wondered whether Tigriopus just adds the ketone groups to 3R,3'R-zeaxanthin or replaces the hydroxyl groups to produce 3S,3'S-astaxanthin. To answer this question, we cultured Tigriopus with marigold powder rich in meso-zeaxanthin and traces of 3R,3'R-zeaxanthin and lutein, but free of β-carotene and other non-hydroxylated precursor carotenoids, as shown in Figure 5. Figure 6B shows how T. californicus, fed with the yeast reference diet, accumulated 3S,3'S-astaxanthin ( Figure 6B, Peak 1), probably because the washing period was insufficient for this carotenoid to disappear from the zooplankton. When Tigriopus fed on marigold powder, congruently, this peak also appeared ( Figure 6C, Peak 2). Nevertheless, a peak corresponding to meso-astaxanthin also appeared in the chromatogram ( Figure 6C, Peak 3). With this result, we cannot rule out that T. californicus converted a percentage of meso-zeaxanthin into 3S,3'S-astaxanthin. However, we can confirm that T. californicus added a ketone group to both β-rings of meso-zeaxanthin and converted this carotenoid into meso-astaxanthin.
Testing Lutein as a Precursor for Astaxanthin Biosynthesis in Tigriopus Californicus
Next, we investigated the possibility that Tigriopus used lutein as a precursor to produce astaxanthin, as the use of this carotenoid could not be ruled out in previous studies and this pathway has been proposed for other crustaceans [18,32]. To do this, we took advantage of the fact that to use lutein, Tigriopus must first convert the ε-ring of this carotenoid into a β-ring by relocating a double bond (Figure 1). The result of this conversion is the production of meso-zeaxanthin, whose use by Tigriopus to produce astaxanthin is proven. Therefore, if lutein is a precursor of choice for Tigriopus, we should observe the appearance of meso-astaxanthin. To test this hypothesis, we cultivated T. californicus with an unidentified species of microalgae that appeared spontaneously in our outdoor facilities and produced high relative amounts of lutein ( Figure 7A, peak 2) and lower amounts of canthaxanthin (peak 4) and β-carotene (peak 8). Again, T. californicus produced fully trans astaxanthin as the primary carotenoid ( Figure 7B, peak 9), representing 49% of the total carotenoids produced. It is important to note that with this species of microalgae as feed, we were able to detect again the carotenoid that we tentatively identified as 3-hydroxyechinenone in Section 2.2 ( Figure 7B, Peak 11).
Stereochemistry of Astaxanthin in Tigriopus Californicus Feeding on Lutein-Rich Microalgae
Chiral analysis of astaxanthin produced under the culture conditions described in the previous section shows that T. californicus produced only 3S,3'S-astaxanthin ( Figure 8). The complete absence of meso-astaxanthin in the chromatogram suggests that T. californicus did not use dietary lutein to produce astaxanthin. The 3S,3'S-astaxanthin produced by Tigriopus probably comes from one of the non-hydroxylated carotenoids ingested (β-carotene or canthaxanthin). It is important to note that this experiment does not rule out whether T. californicus can convert lutein to astaxanthin if this is the only carotenoid available in the diet; importantly, this point is not possible to address, as any current commercial source of lutein contains zeaxanthin at around 5% (w/w). Nevertheless, as there was no meso-astaxanthin in the carotenoid extract, our results suggest that T. californicus prefers to use non-hydroxylated dietary carotenoids instead of lutein when they are available.
Discussion
In this work, we determined that T. californicus produced 3S,3'S-astaxanthin when fed on microalgae that provided carotenoids with and without hydroxyl groups as potential precursors. Then, we used stereochemistry as a readout to better understand the synthesis pathway of this carotenoid in T. californicus. For example, Weaver et al. reported that T. californicus can use zeaxanthin as a precursor. Strikingly, in our experiment with a variety of precursors including 3R,3'R-zeaxanthin, we did not detect the formation of 3R,3'R-astaxanthin. To rule out that T. californicus reorients the hydroxyl groups of zeaxanthin to 3S,3'S, we fed this copepod with 3R,3'S-(meso)-zeaxanthin. In this experiment, T. californicus produced meso-astaxanthin, indicating that the putative ketolase of this copepod can work with β-rings bearing either 3S- or 3R-hydroxyls. Thus, the absence of 3R,3'R-astaxanthin when 3R,3'R-zeaxanthin was available suggests that T. californicus does not choose this carotenoid as a precursor to producing astaxanthin when other precursors are available. We applied the same method to study lutein as a precursor, and again, the results indicated that this carotenoid is also not a preferred choice for T. californicus.
The ability of T. californicus to add ketone groups to carotenoids bearing hydroxyl groups oriented at either R or S is surprising. Still, it is even more surprising that T. californicus discards hydroxylated carotenoids if non-hydroxylated carotenoids such as β-carotene or canthaxanthin are available in the diet. This suggests that a mechanism of selection to refuse hydroxylated carotenoids as precursors of astaxanthin is operating in T. californicus.
Although T. californicus can potentially use both β-carotene and canthaxanthin as a precursor [18], in this work, we have not studied which of these carotenoids T. californicus preferentially uses. In either case, the results of Weaver et al. suggest that not only the putative ketolase is lax in carotenoid acceptance, but the putative β-hydroxylase of T. californicus is also flexible, as it can act on both ketolated and non-ketolated carotenoids.
We propose two reasons why T. californicus might prefer to introduce hydroxylations in carotenoids itself instead of taking advantage of available hydroxylated carotenoids such as zeaxanthin. The first one is that the 3S orientation of hydroxyls in astaxanthin is more advantageous in physiological terms for this copepod. However, neither the presence of meso-astaxanthin nor the total absence of carotenoids in its tissues seems to harm this copepod under normal laboratory conditions. The second possible reason has to do with energy metabolism, and perhaps with phenotypic displays of this crustacean. Recently, Hill et al. [33] proposed that the signal honesty hypothesis in birds, whereby a greater accumulation of astaxanthin in the feathers increases the male's chances of being chosen by a female to procreate, has its molecular basis in a link between the synthesis of this carotenoid and mitochondrial metabolism. Hill et al. proposed that the addition of the ketone groups in the mitochondria is facilitated by good oxidative metabolism. Thus, the particular fitness level of a male would translate into a specific degree of redness in his feathers. There are no studies on the subcellular localization of astaxanthin synthesis in crustaceans, nor has the relationship of this carotenoid with oxidative metabolism been investigated in these animals. It would be very interesting to know whether the synthesis of astaxanthin in Tigriopus is directly related to the oxidative metabolism of the cell.
Following this reasoning, the apparent preference of T. californicus for non-hydroxylated carotenoids makes us wonder if the same happens in birds. This is apparently not the case, since it is generally assumed that these animals use hydroxylated carotenoids as precursors of ketolated carotenoids [15,16]. However, the stereoisomeric analysis of the final product in different species of birds could provide surprises if the hydroxyls are oriented to 3S. This would strongly suggest that birds also have a β-hydroxylase, in addition to the ketolase already described. Feeding experiments with meso-zeaxanthin, available in the market in sufficient quantities to carry out these types of experiments with birds, together with β-carotene, also available, would shed light on the existence of hydroxylations in the synthesis pathway of ketolated carotenoids in birds. From the signal honesty hypothesis proposed by Hill et al., it makes sense to think that the more oxidative functions the bird adds, the more honest their signal would be.
The non-chiral analysis of the carotenoids produced by T. californicus also yielded useful information, which tempts us to propose a biosynthesis pathway for astaxanthin in this copepod. We did not detect canthaxanthin in Tigriopus, which suggests that this carotenoid is not an intermediate in the astaxanthin synthesis pathway. Instead, we noticed a carotenoid that we tentatively identified as 3-hydroxyechinenone. Mojib et al. detected this carotenoid in five of the species of copepods they studied. This data suggest that T. californicus uses β-carotene as a precursor for astaxanthin via 3-hydroxyechinenone, as previously suggested for other copepods [20]. Nevertheless, further work is needed to fully identify the occurrence of 3-hydroxyechinenone in T. californicus. The penultimate step of the route, consisting of the production of adonixanthin or adonirubin, has not been investigated in this work.
In conclusion, in addition to the findings described in the astaxanthin synthesis pathway in T. californicus, we have developed a method that can be used to study the functional aspects of the ketolase enzyme in crustaceans and birds. In the case of crustaceans, further work remains to be done such as the identification of the genes for the β-hydroxylase and ketolase activity and the determination of the subcellular location and characterization of the coded enzymes.
Organisms and Chemicals
Tigriopus californicus was sourced from Reefphyto Ltd. (Newport, UK). Rhodomonas lens was obtained from the Bigelow National Center for Marine Algae and Microbiota (Maine, USA), Tetraselmis chui was obtained from the Culture Collection of Algae and Protozoa (CCAP, Scottish Marine Institute, Argyll, Scotland), and Nannochloropsis oceanica was obtained from the Norwegian Culture Collection of Algae (NORCCA, Oslo, Norway). Dried Brewers' yeast was provided by Biomax Ltd. (Elvington, UK). Meso-zeaxanthin powder (10% dry biomass) was obtained from XABC Biotech Co Ltd. (Xi'an, China). The carotenoid standards lutein, zeaxanthin, and zeaxanthin racemic mixture were purchased from CaroteneNature (Lupsingen, Switzerland). Butylated hydroxytoluene (BHT) was purchased from Sigma-Aldrich (Arklow, Ireland). HPLC grade methyl tert-butyl ether (MTBE), water, and isopropanol were supplied by Fisher Scientific (Dublin, Ireland). HPLC grade hexane and ethanol 96% were supplied by VWR (Dublin, Ireland). Artificial Sea Salt (Tropic Marin® PRO-REEF) and Evolution Aqua K1 Micro media were purchased from McGuire's Garden Centre (Waterford, Ireland).
Tigriopus Culture with Microalgae
Tigriopus californicus was cultured in a 1200 L water tank in artificial sea water at 3.5% (w/v) with the microalgae species Rhodomonas lens, Tetraselmis chui, and Nannochloropsis oceanica over 60 days, using aerated K1 Micro media to allow the growth of nitrifying bacteria. Temperature was kept at 22 °C. The T. californicus culture was initially inoculated with R. lens and T. chui. After growth of T. californicus to a density of circa 1200 individuals per liter at day 50, N. oceanica was inoculated.
Meso-Zeaxanthin Feeding Experiment
T. californicus was cultured in two aquariums of 10 L capacity with artificial seawater (3.5%). Brewer's yeast dissolved in artificial seawater was provided as feed every second day. Ammonia levels were controlled using aerated K1 Micro media and aerated nitrate traps. After two months of culture, one of the cultures was fed for three weeks, once per week, with marigold extract rich in meso-zeaxanthin dissolved in artificial seawater. The temperature of the cultures was kept at 22 °C.
Tigriopus
A total of 10-20 mg (wet weight) of Tigriopus was harvested and analyzed. The fresh biomass was introduced into a 15 mL polypropylene tube, and 3 mL of acetone were added. The tube was sonicated for 5 min in a sonicator bath, and then 2 mL of hexane and 5 mL of aqueous NaCl 0.9% (w/w) were added. The tube was agitated for 20 s and centrifuged for 5 min. The upper hexane phase was transferred to a new polypropylene tube and dried in a vacuum centrifuge. The residue was resuspended in the HPLC mobile phase and analyzed.
Microalgae
A sample of 400 mL of microalgae culture was filtered with a nylon mesh with 100 µm pores and centrifuged for 5 min in 50 mL polypropylene tubes. The supernatants were discarded and the pellets were collected in a polypropylene tube. This tube was centrifuged again for 3 min and the pellet was washed with 10 mL of dH2O. The tube was frozen at −80 °C and thawed three times. In a new 50 mL Falcon tube, a saponification mix consisting of 0.1 g of KOH, 2 mL of EtOH 96%, and 0.5 mL of dH2O was prepared. The saponification solution was warmed at 45 °C and added to the microalgae pellet. The sample was incubated for 5 min at 45 °C at 250 rpm. Then, 10 mL of aqueous NaCl 0.9% was added to neutralize the saponification reaction and 5 mL of hexane was used to extract the carotenoids from the saponified sample. This extraction was repeated three times. The pooled hexane fractions were washed with one volume of aqueous NaCl 0.9%, dried in a vacuum centrifuge, and re-suspended in 0.4 mL of mobile phase.
System 1, Reverse Phase Carotenoid Analysis
Carotenoid samples were separated and quantified in an HPLC 1200 Series (Agilent Technologies, Santa Clara, CA, USA) equipped with a diode array detector, quaternary pump, degasser, thermostatically controlled column compartment, thermostatically controlled autosampler, and a C30 reversed-phase column (250 × 4.6 mm i.d., 3 µm; YMC Europe, Dinslaken, Germany) with a guard column. The flow rate was 1 mL min−1 with a linear gradient from 100% A, consisting of methanol:methyl tert-butyl ether:water:triethylamine 30:10:1:0.05 (v/v), to 20% B, consisting of methanol:methyl tert-butyl ether 1:1 (v/v), within 10 min, then to 100% B within 1 min. This condition was maintained for another 24 min. The solvents were returned to the starting conditions within 1 min, and the column temperature was set at 25 °C.
System 3, Chiral Analysis of Astaxanthin
For this analysis, the HPLC 1200 Series described in system 1 was used with a Pirkle L-leucine chiral column (Regis Technologies, Morton Grove, IL, USA). The method used has been previously described [21]. | 2020-10-09T13:05:29.044Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "5b836b6f2cd3712051d4f747abf4f45f4343cce1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/18/10/506/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85b3f6bf4ff9b87a6a833153e63cfeac0f5f1849",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
149764056 | pes2o/s2orc | v3-fos-license | University of Birmingham Opening communicative space
Current participatory research literature describes different approaches to involving service users in research, draws out lessons to be learned from the process and begins to address the difficult question of the impact of service user involvement on the research outcomes. However, very limited attention has been given to analysing in detail ‘what goes on’ in interviews carried out by service users or considering what difference their interactions make to the interview content and process. This article draws on principles of conversation analysis (CA) and member categorisation analysis (MCA) to examine how co-researchers and participants practically accomplish research interviews. Using Habermas’s distinction between communicative and strategic action as a framework, the article addresses the questions of whether and how co-researchers open communicative space in semi-structured interviews. Two dimensions are highlighted in the analysis: co-researchers’ interviewing skills and their ability to forge connections with participants. It is concluded that both components are necessary to open communicative space and generate co-produced knowledge. This detailed empirically-grounded analysis of co-researcher/participant interactions is both innovative and significant in enhancing understanding of co-researcher contributions to participatory research.
Introduction
Research is political, entangled in questions about what counts as 'truth' and who can legitimately make knowledge claims (Humphries, 2005). Participatory research (PR) acknowledges the right of service users to contribute to defining 'truth' and generating 'knowledge' (Cornwall and Jewkes, 1995). 'Service users' here refers to people who define themselves as actual or potential users of the same or similar services to those being studied. In addition to moral arguments about the rights of service users to exercise power in relation to knowledge claims, the epistemological argument is that service user involvement in research enhances the knowledge produced through this process (Bergold and Thomas, 2012).
Many funders of health and social research now require the involvement of service users and it is increasingly common for this to be incorporated and costed in research bids.However, the nature and level of involvement vary widely, from acting in an advisory capacity or assisting with data collection to involvement in all stages of the process.The benefits and challenges of service user involvement in health and social care research are well-rehearsed, both in reports from individual studies (such as Miller et al., 2006) and wider reviews (Frankham, 2009).However, Flinders et al. (2016: 266) argue 'the normative desirability of co-production as a buzzword often crowds out a balanced assessment of the risks and limits involved when it is "done" in practice'.Critics suggest that, far from increasing service user power and influence, PR may represent 'a different -and most probably more sophisticated -type of exploitation' (Carey, 2010: 17), coopting service users into supporting dominant agendas whilst leaving oppressive policies and practices unchallenged (Cowden and Singh, 2007).
Even advocates of service user involvement acknowledge that to do it well, both ethically and practically, is costly of time and resources.From varied perspectives, there is therefore recognition that the nature and outcomes of PR need to be given closer and more critical attention (Aldridge, 2015;Flinders et al., 2016;McLaughlin, 2010).This article makes an innovative and significant contribution to the evidence base about the impact of PR through detailed empirical analysis of co-researchers' interactions in research interviews.The term 'co-research' is used here to mean research 'done with' service users rather than 'to', 'about' or 'for' them (Fudge et al., 2007).Co-researchers are viewed as 'co' (that is, joint or mutual) in two senses: working in partnership with academic researchers to generate knowledge and sharing key characteristics with participants.
There is acknowledgement that determining the impact of service user involvement in research is problematic (Barber et al., 2011;Ennis and Wykes, 2013;Minogue et al., 2005;Staley, 2009).Most evaluations of the impact of service user involvement are based on retrospective interviews with co-researchers and/or academic researchers.A significant gap in existing literature is detailed examination of 'what goes on' in research encounters between co-researchers and participants using data collected at the time.This article's originality resides in this focus.Although the co-researchers in this study were involved in other research phases and tasks, the focus here is confined to their involvement in interviews.The article is based on examination of audio recordings and written transcripts of research interviews with either older people or people with learning disabilities.The analysis focuses on co-researcher/participant interactions, though an academic researcher was also present.The audios and transcripts constitute an easily accessible version of 'what went on' in the interviews that can be analysed to address the question of how co-research influences the research role and relationships.Whilst principles of conversation analysis (CA) are used to analyse the interview interactions, CA is concerned with understanding the practical accomplishment of the interview, not with evaluating its quality or effectiveness (Hester and Francis, 1994).Habermas's (1984) distinction between communicative and strategic action offers a normative framework for evaluating the impact and significance of co-research in interviews.
The article begins with a brief summary of the concepts of communicative and strategic action and a justification for using this framework. This is followed by a review of related research using CA and an account of the methods used in this study. The analysis is then presented using selected interview extracts. The article concludes with wider discussion of the findings and their implications.
Communicative action
The central concern of communicative action is how language is used to achieve mutual understanding (Fultner, 2014). Habermas (1984) sees all activity as directed at the attainment of goals, distinguishing between:

• Strategic or instrumental action, oriented towards the efficient achievement of specific outcomes. The characteristic domain in which instrumental action takes place is 'the system', comprised of the sub-systems of money and power (Finlayson, 2005).
• Communicative action, oriented towards reaching mutual understanding (Fultner, 2014). It takes place in the 'lifeworld' domain, which is unregulated and based on common, informal understanding of 'who we are' (Finlayson, 2005).
Habermas argued that as the state and economy become increasingly concerned with the pursuit of efficiency, strategic action becomes dominant, supplanting communicative action.Habermas's (1984) theory of communicative action has been heralded as a useful conceptual framework for empirical research (Forester, 2003;Parkin, 1996).It has been applied to participatory action research (PAR), with PAR seen as opening communicative space between research participants and wider stakeholders (Godin et al., 2007;Kemmis and McTaggart, 2005;Wicks and Reason, 2009).Whereas in PAR service users are involved in 'collaborative social action' (Kemmis and McTaggart, 2009: 578), generating change through the research process, PR has a more limited focus on the generation of co-produced knowledge, rather than action (Bergold and Thomas, 2012).Communicative space has not previously been applied to PR using semi-structured interviews; communicative and strategic action may be more starkly opposed here than in PAR.Researchers are invariably subject to tension between 'top down' pressures, such as funder requirements, resource limitations and their own organisation's performance concerns, and 'bottom-up' imperatives to give voice to marginalised groups (Aldridge, 2014).In this study, PR represented an attempt to bring together the 'lifeworld' concerns of achieving mutual exchange and consent between researchers and participants, and 'system' concerns with success, measured by the achievement of pre-defined objectives.
We sought to answer the question 'What role do co-researchers play in opening communicative space in semi-structured interviews with participants?'
Conversation analysis and member categorisation analysis
Whilst the concepts of communicative and strategic action provide the theoretical framework for considering 'what' the co-research approach achieved, principles of conversation analysis (CA) assist analysis of 'how' this was accomplished (Hester and Francis, 1994).The origins of CA reside in the work of Harvey Sacks (1995) and his interest in how conversations are structured to produce and reflect social order.Through close analysis of interactional talk, CA examines how parties to an interaction make sense of each other's behaviour and orient themselves accordingly.Sacks and colleagues were particularly interested in how talk-in interaction is sequentially organised, for example, in relation to turn-taking -how a turn relates to the one preceding it, what is accomplished in the turn and how it relates to the turn that follows (Sacks et al., 1974).A related strand of Sacks's work is concerned with how parties use member categories, that is, shared 'commonsense' categories to structure and make sense of their social world (Housely and Fitzgerald, 2002).Member categorisation analysis (MCA) does not import social and cultural understandings of categories from the 'outside', but analyses how categories are referenced and interpreted by the parties to the interaction (Watson, 2015).Thus, the meaning of a category can only be deduced from how it is used within a specific context, at a particular time (Francis and Hester, 2017).MCA, then, is not so much a method of analysis as, 'a collection of observations and an analytic mentality towards observing the ways and methods people orient, invoke and negotiate social category based knowledge when engaged in social action' (Housley and Fitzgerald, 2015: 6).
Although historically CA and MCA have had different analytical foci and developed at different paces, more recently there has been a 'rapprochement' (Stokoe and Attenborough, 2015), with recognition that the practices of sequence organisation and categorisation are reciprocal and inter-dependent (Watson, 2015).Categories are embedded in sequences of interaction, either explicitly or implicitly, as utterances designed for a specific audience; equally, the sequence in which categories are invoked is crucial to the meaning of the utterance (Housley and Fitzgerald, 2002).
The application of CA is wide-ranging but highlighted here are a few studies relevant to our research on experiences of social care organisations.Sacks analysed talk in calls to a suicide prevention helpline (see, for example, Sacks 1995) and a considerable body of subsequent work has focused on forms of helping talk.This research has highlighted: the forms of interaction between practitioners and service users (Chatwin 2014;Hepburn et al., 2014;Ulvik, 2015); how competing policy and practice objectives are managed in practice (Räsänen, 2015;van Nijnatten and van Elk, 2015); and how 'client' identities are constructed in professional talk (Messmer and Hitzler, 2011).
CA has also been applied to research interview data where it has assisted understanding of: the research methods employed (Halkier, 2010;Irvine et al., 2010); the behaviour of research participants (Rapley and Antaki, 1996); the actions of the researcher (Guimaraes, 2007); and research team interactions during the process of data analysis (Housley and Smith, 2015).More directly pertinent to this article, Roulston (2006) reviews research studies which focus on how descriptions are co-constructed in talk between participant and interviewer.She concludes that interviewers undertake significant 'work' in generating and co-constructing interview accounts and therefore much can be learned from examining the processes involved.In a later study, Roulston (2014) highlights the 'category work' undertaken by both speakers and the moral dimensions of 'the ways in which versions of the world are talked into being in research interviews ' (2014: 290).This article shares a concern with how descriptions of experiences are coconstructed in research interviews, but it is novel in investigating how these parties of interest, namely service user co-researchers and service user participants, contribute to this co-construction.Habermas's (1984) concept of communicative space and CA are not easy bedfellows.Although they share a view of the social world as created by the interactions of social agents, realised through language (Bogen, 1999), Habermas's analysis of communication is decontextualized and relies on formal pragmatics, whereas CA is deeply contextualised, embedded in how actors understand and respond to the structure of utterances in the specific situation.Whereas Habermas is concerned with how agency is exercised within formal structure, CA is concerned with how structure is created through the agency of individual actors (Beemer, 2006).However, Beemer argues that the two perspectives should be seen as complementary rather than oppositional: The presuppositions of communicative action provide the mechanisms for action itself, not as isolated conditions apart from the acts themselves but in association with the contingent features of situations where meaning and order are produced interactively.Agency, on the other hand, is brought within the domain of structure through the shared competencies of speakers and hearers, which define the relational expectancies of action as a mutual orientation to reach an understanding.(2006: 102) Whilst this article does not attempt to reconcile CA with Habermas's theory of communicative action, Beemer's argument has merit in relation to this particular study.The interactions are not naturally occurring talk, but research interviews for which all of the actors have been 'prepared' in some way in relation to the purpose and process (i.e. the participatory approach) of the encounter.These presuppositions provide a contextual frame for the interviews and the basis on which the PR approach was evaluated.The lifeworld and system are useful sensitising concepts, guiding analysis of features of social interaction that reflect, on the one hand, 'bottom up' shared understandings between participants and co-researchers, and, on the other, 'top down' concerns and priorities of the researchers.
Methods
The interviews discussed in this article formed part of a larger UK study funded by the Economic and Social Research Council (ESRC) that compared services provided by micro-enterprises with those provided by small, medium and large providers in terms of the extent to which those services were: personalised, valued, innovative and cost-effective (see Needham et al., 2017). The empirical study took place in three locations in England, with nine organisations of differing sizes identified as case studies in each site (i.e. 27 in total). The organisations provided four main types of service to older people and/or adults with learning disabilities: domiciliary care; day support; one-to-one support; or accommodation. Based on their previous research (Littlechild et al., 2015), the research team believed that a co-research approach would increase the validity and relevance of the findings and bring benefits to the co-researchers themselves. Service user views and experiences of micro and non-micro service provider organisations were gathered in face-to-face semi-structured interviews carried out jointly by co-researchers and academic researchers. Co-researchers led the interviews, with academic researchers offering support when necessary.
The co-researchers were either older people or people with autism or learning disabilities, reflecting the users of the case study organisations.Fifteen service user co-researchers carried out interviews; nine were older people and six had autism or learning disabilities.All were recruited via local organisations or networks, separate from the case study organisations.A total of 106 service users were interviewed for the project, although a third were not interviewed by co-researchers (see Needham et al., 2017).Ethical approval for the project was given by the national Social Care Research Ethics Committee.
The evaluation of the co-research approach was undertaken by two academics who, though employed by the same university as the lead researchers, had no involvement in other aspects of the study.The evaluation included: three focus group interviews, one in each site, with the co-researchers; individual face-to-face semi-structured interviews with the academic researchers; and analysis of a sample of transcripts and audio recordings of co-researcher interviews with participants.This article focuses only on the latter as interest is in analysis of co-researcher/participant interactions.Interviews where the participant was supported by a carer were excluded from analysis as this introduced different dynamics in relation to sequences and member categories.Interviews were categorised by four factors: the site; the co-researcher leading the interview; whether the interviewee was an older person or person with learning disabilities; and the nature of care and support the interviewee was receiving.Five interviews were purposively selected from each site, ensuring that these different criteria were represented in the sample analysed.The role of the academic researcher is considered only insofar as it directly relates to analysis of the co-researcher-participant interaction.
The 15 interview transcripts were read in full and notable points relating to key features, such as turn-taking, sequencing, repairs and categorisation were highlighted.The audio recordings of these extracts were then listened to repeatedly, enabling more detailed note-making.
The remainder of the article focuses on the central question: In what ways does the involvement of co-researchers open up communicative space in semi-structured interviews?
Interview analysis
Detailed examination of 'talk-in-interaction' between co-researchers and participants indicated that co-researchers engaged in two main types of activity when performing their role.
1. Deploying interview skills. Here the orientation was primarily to achieving the task of the interview. This form of communication was mainly strategic, geared to obtaining information of direct relevance to the interview schedule.
2. Establishing shared connections. These responses correspond to communicative action; they helped to build a rapport and were oriented to facilitating an empowering and constructive interview process.
The interviews could be located in one of the quadrants formed by the intersecting axes of Skills and Connections: high skills and high connections; low skills and high connections; low skills and low connections; and high skills and no connections. In practice, none of the interviews analysed fell within the latter quadrant (see Figure 1). Interview extracts illustrating these dimensions are presented below. All names of people and places are pseudonyms. C denotes the co-researcher, P the participant, and A the academic researcher. Line numbers are given in brackets.
Quadrant 2: high connection, high skill Interview 1.Here an older female co-researcher (C) is interviewing an older woman (P) who is resident in a care home about her experience of sessions provided by an activity worker (Lyn).In this opening sequence, P orients to her role as interviewee, describing her experience of the activity group.There is continual turn-taking between C and P, but rather than asking direct questions from the interview schedule, C uses techniques such as clarification ('So that's part of the reminiscence thing that you do'?, (4)), reflection ('encouraging each other in friendships', (21)) and empathy ('help you feel good about things', (8)), to encourage P to talk.C and P reinforce and encourage each other's contributions with repeated overlapping talk, such as 'Yeah' and 'lovely'.C compares P's experience of the group within the care home to other groups ('when you're in a group like this', ( 14)) and accepts and reinforces the predicate of the group as about 'talking' and group members as interesting ('almost like reading a book', (15)).C suggests that the group is a mechanism for 'change' through encouraging members' interactions ('it's sort of brought you and them out', (18-9)).
In this short extract, C and P co-construct a view of the activity group as a forum where members can share interesting experiences and where shy members are encouraged to participate.In relation to how the work of the interview is accomplished (Hester and Francis, 1994), C approves P's descriptions of the activities ('lovely' is used six times).C and P create a positive evaluation of not only the activities, but also the activity group members, who are presented as interesting and diverse.Neither party recognises potential need and vulnerability of either older care home residents, activity group members or of P herself.The group is normalised and generalised and the possible need of members to be 'brought out' is acknowledged only in terms of the general characteristic of shyness.
Interview 2. An older male co-researcher is interviewing an older women about the care service she received on discharge from hospital.
1. P When I came out of the hospital I used to have them come in the morning and night (.) That's from the hospital.2. C Oh yeah.3. P Uh (.) but when they came out I'd already got dressed and everything like, you know, and Wendy that run it from the hospital, she said to me "We've only got so many weeks" apparently and the weeks were up.She said "Jean, we can get somebody to come in but you've a bit to pay".She said, "But you're so independent".In these short extracts, C combines forging connections with an orientation to the interviewer role.He establishes connection with P on the basis of shared physical vulnerability, geographical ties, a shared status as parents of daughters and a sense of unity as 'insiders'.The delicate issue of P's receipt of care services is managed by C constructing this as necessary to avoid risk; being 'at risk' is itself rendered acceptable by C's acknowledgement of his own vulnerability and by his praising P's achievement in living a long life.Belonging to shared member categories is emphasised by mutual acknowledgement of 'difference' from others and shared external threat.C takes time to attend to this 'work' of making the receipt of care acceptable before proceeding to the next question on the interview schedule, adapting the wording of the question to suit P (5-6).
Quadrant 4: high connections and low skills
Interview 3. In this interview, a young man with learning disabilities is being interviewed by a slightly older male co-researcher with learning disabilities about his attendance at a football group.
1.
C P does not understand the question posed by C (5) so reads from the interview schedule himself (7).C interprets this as a deficiency in his own performance as interviewer ('I'm not very good' ( 6)).P says that the function of the group for him is 'to interact and exercise' ( 17) and C responds from his shared position as user of a similar service ('Bit like me', ( 21)), attending for the same reason ('I go to interact', ( 23)).P then evaluates the experience of having no-one to speak as 'a bit sad' (25).Wicks and Reason (2009: 249) argue that 'a critical awareness of and attention to the obstacles that get in the way of dialogue' are pivotal to opening communicative space.Here, the main obstacle appears to be C's failure to adapt the questions, prompt, clarify or repair.C's inability to enable P to understand the question at one point leads P to try to interpret the question for himself, prompting the academic researcher to intervene (9).C's admission of 'I'm not very good' (6) appears to refer to his role as interviewer.However, C is learning skills as the interview proceeds.He starts to follow the academic's lead by returning to previous questions to seek further detail ( 14) and to add prompts.Moreover, C shares his service user category membership with P, enabling P to acknowledge loneliness.However, although C at times orients himself to a shared lifeworld with P, at other times his lack of confidence and experience as a researcher lead him to adhere rigidly to the ordering and phrasing of questions in the interview schedule.The potential for their shared lifeworld to operate as a basis for communicative action is restricted by C's reliance on the strategic function of the interview.
Quadrant 3: low skill, low connection
In this extract, a young woman with a learning disability is interviewing another young woman with a learning disability about her attendance at an activity centre.
1.
A So, Amy, take it away.
C Sometimes.
Although A invites C to assume the interviewer role, C refuses the turn as she is unsure what to do (2).A takes the turn and models what C needs to say (3-4).C incorporates A's words 'Riverside' and 'enjoy' into a question directed at P.She orientates to the role of 'interviewer' by proceeding to ask multiple questions without waiting for responses from P (5-7).There follow a series of adjacent pairings of questions and answers, but P's turns are confined to single words which C accepts as transition relevant (Sacks et al, 1974), without further prompting of P to elaborate.It is not possible to see whether P gives any non-verbal cues that lead to C assuming a negative (7) or positive (11) answer.P's responses do not reveal her understanding of C's questions and C's questions continue to suggest answers (14,(16)(17)(18)21).C shares with P that she likes playing on computers ( 11), but it is C, not P, who suggests that P uses the computers ( 7) and likes doing this (9).C again acknowledges that she is struggling with the interviewer role (13 and 16).C orientates to being the 'interviewer', but not to being a young woman or someone with a learning disability.The interview fails to open communicative space in that C and P do not co-produce understanding of relevance to the research questions or establish connection based on shared identities or interests.
Discussion
The rationale for PR is to generate opportunities for marginalised groups to 'come to voice' and 'talk back' (Humphries, 2005: 314). Our analysis of semi-structured interviews suggests that when co-researchers deploy interviewing skills alongside establishing shared connections with participants, communicative space is opened, generating mutual understanding and jointly constructed knowledge.
In interviews with a high level of connection (Quadrants 2 and 4, Figure 1), the bases for connection were broader than identities as older people or people with learning disabilities. In Interview 1, the co-researcher connects on the basis of experiences and interests and, through the rapport established, the co-researcher and participant co-construct a positive view of the activity group and its members. In Interview 2, the bases for connections include shared geographical ties, family roles, age-related vulnerability and 'insider' status. In Interview 3, 'disability' is a predominant category in the interview, but it is treated as troublesome and distanced by both the co-researcher and participant. However, sharing the more general member category of service users enables both parties to acknowledge loneliness.
In relation to skills, whilst 'interviewer/interviewee' is an omnirelevant device to which co-researchers and participants orientate by asking/answering questions (Fitzgerald and Housley, 2002), the frequency and length of turns and the degree of adherence to the order and wording of the interview schedule differentiate the interviews in Quadrants 3 and 4 from those in Quadrant 2. The co-researchers in Interviews 1 and 2 (Quadrant 2) continue the sequences of participants, developing a conversation, rather than shifting to the next question on the interview schedule. The sequential organisation of the interviews in Quadrants 3 and 4 indicates that the co-researchers are enacting their understanding of the member category of 'interviewer' with a predominance of questioning devices, but are uncertain or critical about their own performance in this role.
Co-researcher involvement in designing the interview questions aimed to embed lifeworld considerations in the interview structure and content; moreover, the interview schedule was intended to be used flexibly.However, orientation to omnirelevant devices can restrict as well as facilitate shared understanding (Rintel, 2015) and in Interviews 3 and 4, co-researchers' lack of skill and confidence led to reliance on the omnirelevant device of interviewer and an overly strategic focus on the interview task, restricting the opportunity for co-produced understanding.The presence of co-researchers who implicitly or explicitly shared visible member categories such as age and gender or disability may have helped participants to feel comfortable.However, the value of co-researchers' unique perspectives and influence is undermined if they are unable to influence the interview structure and process.In this situation, there is a danger that co-researchers function as little more than service user figureheads, leaving fundamental inequalities in the generation of knowledge unchallenged (Carey, 2010;Flinders et al., 2016).Moreover, coresearchers risk their own disempowerment if they conclude 'I'm not very good' (Interview 3) or 'I'm confused' (Interview 4).Whilst it behoves researchers to ensure that co-researchers have the requisite skills to undertake qualitative interviews (Miller et al., 2006), it is inappropriate to prepare for and evaluate the contribution of co-researchers on the basis of traditional research skills when the reason for their involvement is to bring a different, user-centred perspective (Reed et al., 2006).
Whilst interview skills assist achievement of the pre-defined purpose of the interview, shared connections may rebalance power relationships and extend opportunities for sharing of the lifeworld, enabling the voices of marginalised participants to be heard.In interviews where the co-researcher was able to use skills in conjunction with establishing connections, co-researchers engaged in conversation that addressed the required topics while leaving opportunity for participants to influence the content and direction of the interview.Strategic objectives were achieved within communicative action.Co-researchers who have the ability to use skills and establish connections may in this way reconcile strategic and communicative concerns and ease the challenge for academic researchers of 'boundary crossing' from academic to co-produced worlds (Flinders et al., 2016).
This study has a number of limitations.Our analysis indicates the need for greater attention to how connections are forged between co-researchers and participants.How member categories are oriented to in the opening of interviews seems particularly important, but we were unable to examine this as the audio recordings began after the introductions and negotiation of consent.Analysis was based only on written transcripts and audio recordings, precluding examination of the role of body language, such as eye contact, hand or head gestures.Social actors use embodied as well as verbal sources to produce and interpret social action (Mondada, 2016).The inclusion of video data could enrich, and possibly alter, interpretation of how interview parties orientated themselves to each other; for example, the participant in Interview 4 may have been nodding to indicate agreement with the co-researcher's suggested answers.
Each interview was analysed in isolation, preventing consideration of how coresearchers' contributions changed over time.It is important to recognise the development of co-researchers' skills over the course of the PR process (Bergold and Thomas, 2012).Moreover, the interviews were analysed in relation to the local and situated accomplishment of the talk-in-interaction, without regard to how the members accomplish interaction with other parties and/or in other settings.For example, although the participant in Interview 4 gives single word utterances, this might be an increase in her usual verbal contributions because she feels comfortable with the co-researcher.It is not possible to evaluate the co-researcher's role in opening communicative space without understanding this wider context.
More fundamentally, this analysis relates only to co-researcher involvement in interviews, even though involvement in the study extended to other stages of the research.An evaluation of the impact of co-researcher involvement is therefore incomplete if based on analysis of interview interactions alone.
Conclusion
CA assists understanding of how social action is practically accomplished; it is not concerned with assessing or evaluating interactions, but with examining 'the "what" of how they actually are done' (Hester and Francis 1994: 680).In this article principles of CA and MCA have been applied to explore how co-researchers and participants produce and understand social action in semi-structured interviews.However, the article departs from CA and MCA in invoking a normative standard to evaluate how co-production influenced the interview process, drawing on Habermas's concept of communicative action.CA and MCA are used to examine the detail of interactional processes and understandings between co-researcher and participant; the concept of communicative action is applied to evaluate the impact this has on the knowledge generated through the process.
Earlier in this article, evidence was reviewed that indicates that PAR (as distinct from PR) can open communicative space internally within research teams and externally with wider stakeholders.The originality and significance of this article lie in examination of how co-research can open communicative space in semi-structured interviews, where strategic concerns may be more foregrounded than in PAR, especially when co-researchers have not been involved in designing the research proposal.Applying principles of CA and MCA has indicated that co-researchers can open communicative space in semi-structured interviews by using skills to achieve the strategic purpose of the interview in conjunction with forging connections with participants.Conversely, communicative action is constrained when co-researchers lack the skills or confidence to influence the content and direction of interviews.Participants may perceive power to be more equalised by virtue of shared connections with co-researchers, but unless co-researchers' ability to make shared connections is combined with the skills and confidence to circum-navigate system preoccupations, the potential for a shared lifeworld orientation to generate service user-led knowledge is unrealised.
This study suggests possible future directions for PR practice.Attention to communication within the interviews has highlighted the 'identity work' undertaken by co-researchers and participants and the different ways in which this is manifested and operates.Training programmes tend to focus on methods of gathering data, but could usefully heighten attention to the significance of interactional processes between co-researchers and participants.In particular, there could be increased focus on how co-researchers and participants orientate to member categories and how this influences sequences of interaction.In this study, some co-researchers felt more comfortable adhering strictly to the interview schedule and their interactions were then strategic rather than communicative.Training programmes could build their confidence in departing from restricted notions of 'interviewer' and encourage them to share their own 'selves' with participants.Extracts from conversational analysis of interviews could be a useful training tool, identifying and developing skills pertinent to co-researcher interviewers (as distinct from traditional research interviewing skills) as well as ways of establishing connections.
The study is not presented as a 'good practice' example of PR and significant learning points were noted by the research team in the project evaluation.However, through detailed examination of 'what went on' in the interviews and consideration of the impact this had on the knowledge generated, the article contributes to enhancing the clarity, rigour and validity of PR.CA has illuminated how co-researchers and participants in interviews oriented themselves to each other in sequences of talk and used member categories.Habermas's concept of communicative action has facilitated exploration of how this influenced the co-production of knowledge.Forester (2003: 62) argues that Habermas's concept of communicative action 'enables us to explore the continuing performance and practical accomplishment of relations of power'.There is scope in future research to extend this to analysis of 'the practical accomplishment of relations of power' in interviews between researchers and participants more generally, including comparison of co-researcher and academic researcher interactions with participants.Further examination of whether and how these processes open communicative space can augment the contribution of PR to health and social care research.
Figure 1. Interplay of skill and connections.
1. P We talked about local, like the goose fair, and lots of local things, and talked to the other people who are with us, and we talked about each other and what we did, and where we worked, and what our family were like.2. C [Oh yes] [Oh lovely] [So that's part] of the reminiscence thing that you do? 3. P [That's] right.4. C You do the watercolours and the reminiscence.5. P [That's right.Yes.Quite a lot of that] 6. C [Lovely.] Yeah and obviously sort of help you feel good about things I should imagine, improved your life quite a bit.7. P [Yeah.]Getting to know what other people do, yeah.Where they've been, and some people have been all round the world in different places.8. C [Yeah] 9. P Ever so interesting.10.C [Yeah] Isn't it, when you're in a group like this you can find out so much more about people.It's such a lovely thing.It's almost like reading a book, isn't it, of someone's life.How lovely.11.P [Yeah.]How other people's lives are so different.12. C [Yeah] So things have changed in a sense because it's sort of brought you and them out.13.P [That's right.]14.C And encouraging each other in friendships and so on, isn't it?15.P [And] other people then start to talk about what they did, and that helps a lot of people, because lots of people just actually are very shy and they don't want to talk.16.C [Yeah] [Yeah of course] She said "I can't see why you'd want to pay when you can do it yourself, because you're that independent".4. C [Mmm] You don't want to put yourself at risk do you?You know what I mean?I mean say with me, $ sometimes I fall down, you know what I mean.And, like, I hope I reach your age.And how did you hear about the service, Jean?P relays the suggestion made by Wendy, the care co-ordinator, that her status as 'independent' is threatened if, when the period of state-funded care ends, she decides to continue the service and fund it herself when she can manage the tasks unaided (6-9).C challenges Wendy's view, suggesting that P may be putting herself at risk if she tries to manage without care (10).This counters the implication that P is accepting 'dependency' if she continues to receive care.Although C invokes the notion of P being 'at risk' (10), he normalises this by referring to his own experience of falling (11).He presents P and himself as sharing the characteristic of being 'at risk', whilst at the same time admiring and aspiring to P's more advanced age(11)(12).Later, P talks about other work undertaken by the care worker.P relates to C on the basis of an assumption of shared geographical connection ('of course you know Norton' (2)).C's volunteering of personal information that his daughter lives there (3) enables P and C to establish that they both have daughters living in the same part of town and share the same 'small world' (9).
struggle with their temper (8).However, C does not explore this but proceeds to the next question on the interview schedule.In giving his view about the service a few questions later, P refers again to the category of disability: Umm I just really go there more to interact and exercise.You know it's better than just sat on my computer all day.If I can go out and interact with people, you know, it's going to help me to go out in the future.Like I say if I get a job [or] I go back to 'people that have got disabilities' (3).Neither P nor C directly own membership of the category of disability themselves, though this is implied by P in his use of'and' (3)and by C's reply that he is doing 'something like that' himself (4).Conflict is a predicate of the disability service for P, who highlights the arguments -'because it's a disability', they P It gets a bit boring when you're just sat at home and there's no one to speak to.You get a bit, don't know, it just, it's a bit sad, isn't it?18. C Yeah.
Is there anything you ↑don't↑ like about it? (.) It's very good? (1.2) And do they support you with a lot of things, do they help ↑you↑?
C And you like coming?Oh I'm glad you like coming.(.) Um, I'm trying to think. | 2019-05-12T14:22:21.686Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "2f9d6623a93aa7052198fc01cd82cd00941902b2",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1468794118770076",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "492fdb5d299b79c03899eedbfec522118d3f9875",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
92999732 | pes2o/s2orc | v3-fos-license | miR-146a-5p expression is upregulated by the CXCR4 antagonist TN14003 and attenuates SDF-1-induced cartilage degradation
Osteoarthritis (OA) is an aseptic inflammatory disease which is associated with the stromal cell-derived factor 1/C-X-C chemokine receptor type 4 (SDF-1/CXCR4) axis. Accumulating studies have identified numbers of microRNAs (miRNAs) that serve important roles in the pathogenesis of OA. However, whether and how the inhibition of the SDF-1/CXCR4 axis induces alterations in miRNA expression remains largely unclear. miRNA profiling was performed in OA chondrocytes stimulated with SDF-1 alone, or SDF-1 with the CXCR4 antagonist TN14003 by miRNA microarray. Candidate miRNAs were verified by reverse transcription quantitative polymerase chain reaction. Bioinformatic analyses including target prediction, gene ontology (GO) and pathway analysis were performed to explore the potential functions of candidate miRNAs. Notably, 7 miRNAs (miR-146a-5p, miR-221-3p, miR-126-3p, miR-185-5p, miR-155-5p, miR-124-3p and miR-130a-3p) were significantly differentially expressed. GO analysis indicated that miR-146a-5p and its associated genes were enriched in receptor regulatory activity, nuclear factor-kappa-light-chain-enhancer of activated B cells (NF-κB)-inducing kinase activity, cellular response to interleukin-1, cytokine-cytokine receptor interaction, NF-κB signaling pathway and osteoclast differentiation pathways. CXCR4 was predicted to be a target of miR-146a-5p with high importance. The mRNA and protein levels of key factors involved in cartilage degeneration were measured following manipulation of the expression levels of miR-146a-5p in OA chondrocytes. CXCR4 and MMP-3 levels were negatively associated with miR-146a-5p expression, while the levels of type II collagen and aggrecan were positively associated. These data reveal that TN14003 upregulates miR-146a-5p expression, and also pinpoints a novel role of miR-146a-5p in inhibiting cartilage degeneration by directly targeting the SDF-1/CXCR4 axis.
Introduction
Osteoarthritis (OA) is a multifactorial articular disease characterized by cartilage degeneration, subchondral sclerosis and osteophyte formation (1)(2)(3). Normal function of articular cartilage is highly dependent on the homeostasis of the extracellular matrix (ECM), which serves as the mechanical structure and is involved in signal transduction in chondrocytes (4)(5)(6)(7). Previous studies have demonstrated the roles of the interaction between anabolic factors, including transforming growth factor β, and catabolic factors, including matrix metalloproteinases (MMPs) and aggrecanases, in the maintenance and regeneration of ECM in chondrocytes (8)(9)(10). However, the exact molecular mechanisms involved in OA remain largely unclear. Although progress in OA therapy has been incremental, the majority of treatments only improve clinical symptoms, as opposed to restoring the damaged ECM (11). In addition, inhibition of OA by the regulation of specific genes has been an unsuccessful strategy (12,13).
Stromal cell-derived factor 1 (SDF-1) is a cytokine that is associated with inflammation, and is identified in the synovial membranes adjacent to articular cartilage (14)(15)(16). Binding of SDF-1 to its receptor C-X-C chemokine receptor type 4 (CXCR4), a G protein-coupled receptor located on the surface of chondrocytes, induces the release of MMPs from the ECM, thereby exacerbating OA (16,17). As a CXCR4 inhibitor, the compound AMD3100 blocks the SDF-1/CXCR4 axis and has been effectively utilized in the treatment of OA (18)(19)(20). However, observed side effects and the unstable nature of AMD3100 limit its clinical application (20)(21)(22). TN14003 was designed based on T140, a 14-residue peptide that possesses a high level of anti-human immunodeficiency virus (HIV) activity and antagonism of T cell line-tropic HIV-1 entry among all the antagonists of CXCR4 (23). TN14003 was generated by amidating the COOH-terminal of T140 and by substituting basic residues with non-basic polar amino acids to decrease the total positive charge of the molecule, and is far less cytotoxic and more stable in serum compared with T140 (23,24). MicroRNAs (miRNAs) are a class of small noncoding RNAs encoded by endogenous genes, and dysregulation of miRNAs contributes to numerous diseases across various physiological and pathological processes (25,26). Accumulating studies investigating the roles of miRNAs in bone and cartilage have identified a number of miRNAs that serve important roles in the pathogenesis of OA (27)(28)(29)(30). Therefore, the identification of abnormally expressed miRNAs and the associated biological consequences of their targets is essential to determining the potential molecular mechanisms in the OA pathological process. Unfortunately, how miRNA expression in chondrocytes changes as a result of inhibiting the SDF-1/CXCR4 axis with drugs such as TN14003 remains largely unclear.
Using a series of bioinformatic approaches, the present study aimed to systematically evaluate the aberrant miRNA expression levels in OA chondrocytes treated with TN14003.
The key miRNA miR-146a-5p was also confirmed as a differentially expressed miRNA, and the expression levels of its targets involved in the process of SDF-1/CXCR4 axis inhibition were measured, following molecular manipulation of the expression of miR-146a-5p in chondrocytes.
Materials and methods
Cartilage tissue collection and cell cultivation. OA cartilage was obtained from the weight-bearing surface of the femoral condyle and tibial plateau of 4 female and 1 male patients diagnosed with OA (using the American College of Rheumatology classification criteria), aged between 57 and 69 years old with an average age of 63.4±2.42, and undergoing total knee arthroplasty between October 2017 to March 2018 in the Department of Sports Medicine of the First Affiliated Hospital of Kunming Medical University (Kunming, China) (31,32). Written informed consent was obtained from all patients, and the present study was approved by the Ethics Committee at the First Affiliated Hospital of Kunming Medical University (Kunming, China).
Chondrocytes were digested with 0.15% collagenase and cultured in high glucose Dulbecco's Modified Eagle Medium (Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA) supplemented with 10% fetal bovine serum (Gibco; Thermo Fisher Scientific, Inc.) and 100 U/ml penicillin and 100 µg/ml streptomycin. Culture medium was filtered to remove bacteria using a 0.22 µm microfilter. The first generation chondrocytes were used and divided into two groups: Treatment and control. To mimic the osteoarthritic environment of the knee joint, each group was treated with 100 ng/ml SDF-1 (PeproTech, Rocky Hill, NJ, USA). The treatment group was pretreated with 1 µM TN14003 (Scilight Biotechnology, LLC, Beijing, China) for 2 h prior to the addition of SDF-1. Each group of chondrocytes was incubated at 37˚C and 5% CO 2 for 2 days. miRNA extraction and reverse transcription. For miRNA screening, total RNA was isolated from cartilage tissues with or without TN14003 treatment, purified and prepared using the Qiagen RNeasy Mini kit (Qiagen, Hilden, Germany; cat. no. 74106) according to manufacturer's protocol. The integrity and quantity of samples were determined via Nanodrop 2000 spectrophotometer (Thermo Fisher Scientific, Inc.) and Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). miRNAs were isolated from total RNA using the All-in-One™ miRNA reverse transcription quantitative polymerase chain reaction (RT-qPCR) Detection kit (GeneCopoeia, Inc., Rockville, MD, USA), according to the manufacturer's protocol.
miRNA RT-qPCR and verification. To calibrate the initial results, differentially expressed miRNAs were identified using miProfile™ human inflammatory miRNA qPCR Arrays (GC08017K18014P; GeneCopoeia, Inc.). The chondrocyte samples were isolated from 5 patients with OA and the validation was performed using samples with or without TN14003 treatment. Each well contained a forward primer for the mature miRNA sequence, and a universal adaptor reverse primer cross-linked to the 96-well plate. The primers for measuring miRNA expression were designed as summarized in Table I, and the qPCR was performed using 20 µl reaction volumes containing 1 µl reverse transcription product and SYBR Green Master Mix (Applied Biosystems; Thermo Fisher Scientific, Inc.). The amplification conditions were as follows: Pre-incubation at 50˚C for 2 min, enzyme activation at 95˚C for 10 min, then 40 cycles of denaturation at 95˚C for 10 sec, annealing at 55˚C for 30 sec and extension at 72˚C for 30 sec. Detection was performed using an ABI 7500 instrument (Applied Biosystems; Thermo Fisher Scientific, Inc.), and the 2^−ΔΔCq method was used to calculate the relative expression of miRNAs, as previously described (33). Alterations with a fold-change >2 and P<0.05 were considered to be differentially expressed.
RT-qPCR assay for mRNA expression. Total RNA was extracted from cartilage tissues using TRIzol® reagent (Thermo Fisher Scientific, Inc.) according to the manufacturer's protocol. Total RNA (1 µg) was transcribed into cDNA using the RevertAid™ First Strand cDNA Synthesis kit (Thermo Fisher Scientific, Inc). cDNA (40 µg/µl) was used as a template for amplification of CXCR4, type II collagen (Col II), aggrecan (ACAN) and MMP-3 genes, and β-actin served as an internal reference. SYBR Green Master Mix (Applied Biosystems; Thermo Fisher Scientific, Inc.) was used for RT-qPCR analysis. The amplification conditions were as follows: Pre-incubation at 50˚C for 2 min, enzyme activation at 95˚C for 10 min, then 40 cycles of denaturation at 95˚C for 10 sec, annealing at 55˚C for 30 sec and extension at 72˚C for 30 sec. All primers (Table II) were obtained from the NCBI database and designed by Premiers Express Software v1.0 (BioTools Incorporated, Edmonton, AB, Canada). Reactions for each sample were performed in at least three independent experiments. Cycle threshold values were measured and data were analyzed by the 2^−ΔΔCq method (33).
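As a concrete illustration of the 2^−ΔΔCq calculation referred to above, a minimal sketch follows; the Cq values, the target and the reference gene shown are hypothetical examples, not data from this study.

```python
def relative_expression(cq_target_treated, cq_ref_treated,
                        cq_target_control, cq_ref_control):
    """2^-ddCq relative quantification: normalise each sample to the
    reference gene, then compare treated vs. control."""
    d_cq_treated = cq_target_treated - cq_ref_treated
    d_cq_control = cq_target_control - cq_ref_control
    dd_cq = d_cq_treated - d_cq_control
    return 2 ** (-dd_cq)

# Hypothetical Cq values for one target (e.g. a candidate miRNA or CXCR4)
fold_change = relative_expression(24.1, 18.0, 26.3, 18.1)
print(f"fold change = {fold_change:.2f}")

# The screening threshold described above: fold change > 2 (up) or < 0.5 (down)
passes_filter = fold_change > 2 or fold_change < 0.5
print("passes fold-change filter:", passes_filter)
```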
Statistical analysis. All quantitative data were analyzed with SPSS 18.0 (SPSS, Inc. Chicago, IL, USA) and presented as mean ± standard deviation. Statistical analysis was performed using one-way analysis of variance with the Least-Significant Difference correction to determine differences between groups. P<0.05 was considered to indicate a statistically significant difference.
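For readers who want to reproduce this style of group comparison outside SPSS, a minimal sketch is given below; the group values are hypothetical, and Fisher's LSD is approximated here by unadjusted pairwise t-tests performed only after a significant omnibus ANOVA.

```python
# Sketch of a one-way ANOVA followed by LSD-style pairwise comparisons.
# All numeric values are hypothetical placeholders, not study data.
from itertools import combinations
from scipy import stats

groups = {
    "mimic NC":          [1.00, 1.05, 0.97],
    "miR-146a-5p mimic": [11.8, 12.3, 12.1],
    "inhibitor NC":      [1.02, 0.99, 1.01],
    "miR-146a-5p inhib": [0.22, 0.19, 0.20],
}

f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_omnibus:.3g}")

if p_omnibus < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: P = {p:.3g}")
```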
Results
Identification of candidate miRNAs whose expression is altered in response to TN14003 treatment in SDF-1-stimulated chondrocytes. To evaluate the effects of TN14003 on chondrocytes, chondrocytes derived from patients with OA were treated with SDF-1 alone or SDF-1 + TN14003 for 48 h. Cells were harvested to investigate alterations in the miRNA profile upon inhibition of the SDF-1/CXCR4 axis by TN14003. There were 7 differentially expressed miRNAs identified in cartilage samples via microarray analysis (Table III). Among these miRNAs, 5 miRNAs (miR-146a-5p, miR-221-3p, miR-126-3p, miR-185-5p and miR-155-5p) were significantly upregulated, and 2 miRNAs (miR-124-3p and miR-130a-3p) were significantly downregulated in the treatment group compared with the control group (Fig. 1A). In order to understand the mechanism of these miRNA alterations and consequently their involvement in OA treatments, the miRWalk database and Cytoscape software were used to analyze the miRNA-mRNA interactions through their visualization as a network (Fig. 1B). Once the interactions between miRNAs and potential targets had been mapped, a ranking diagram was generated to reveal the importance of the targets. The importance of targets was determined according to the number of connections between each gene and miRNA and other genes in the miRNA-target network (Fig. 1C). As demonstrated in the network map, the miRNAs hsa-miR-146a-5p, hsa-miR-221-3p, hsa-miR-126-3p and hsa-miR-185-5p exhibited direct interaction with the mRNA CXCR4 (Fig. 1B). Among all the potential targets, EGFR, GRB2, CBL, CXCR4, ESR1, PTPN11, SHC1 and SOS1 were the targeted mRNAs with the highest importance scores (Fig. 1C).
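Because target importance was defined above by the number of connections in the miRNA-target network, a minimal degree-ranking sketch is shown below. The edge list mixes the CXCR4 interactions named in the text with hypothetical filler edges, and is not the full miRWalk output used in the study.

```python
# Sketch of ranking targets by connectivity in a miRNA-mRNA network,
# in the spirit of the Cytoscape analysis described above.
import networkx as nx

edges = [
    ("hsa-miR-146a-5p", "CXCR4"), ("hsa-miR-146a-5p", "EGFR"),
    ("hsa-miR-221-3p",  "CXCR4"), ("hsa-miR-126-3p",  "CXCR4"),
    ("hsa-miR-185-5p",  "CXCR4"), ("hsa-miR-155-5p",  "SHC1"),
    ("hsa-miR-124-3p",  "GRB2"),  ("hsa-miR-130a-3p", "SOS1"),
]

g = nx.Graph()
g.add_edges_from(edges)

# Rank target mRNAs by degree (number of miRNAs predicted to regulate them)
targets = {t for _, t in edges}
for gene in sorted(targets, key=g.degree, reverse=True):
    print(f"{gene}: degree = {g.degree(gene)}")
```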
Verification of candidate miRNAs, GO terms assignment and pathways analysis of miR-146a-5p and its targets.
To validate the differential expression of the 7 miRNAs identified from the initial screening, the levels of these miRNAs in control-treated and TN14003-treated chondrocytes were measured by RT-qPCR assays. Compared with the initial microarray results, the expression levels of miR-146a-5p, miR-126-3p and miR-124-3p were validated by the RT-qPCR approach: As indicated in Fig. 1A, the expression of both miR-146a-5p and miR-126-3p was upregulated, while the expression of miR-124-3p was downregulated; the results of RT-qPCR analysis presented in Fig. 2A confirmed this observation. However, other miRNAs that did not exhibit the same changes, or exhibited no statistically significant differences in expression, were excluded from subsequent analyses (Fig. 2A). Notably, miR-146a-5p was upregulated >3-fold. A previous study suggested an association between miR-146a-5p and OA, as evidenced by its >2-fold increased expression compared with healthy controls (42). In addition, accumulating data indicate that miR-146a-5p is a representative miRNA known to be associated with OA (43,44). Therefore, the present study focused on miR-146a-5p and its targets to delineate their associations with OA treatment by TN14003. GO analysis indicated that miR-146a-5p and its targets were primarily grouped into 'receptor regulator activity' or 'nuclear factor-kappa-light-chain-enhancer of activated B cells (NF-κB)-inducing kinase (NIK) activity' in the Molecular Functions category, into 'lipopolysaccharide-mediated signaling pathways' or 'cellular response to ketones' in the Biological Processes category, and into 'secondary lysosome', 'sperm midpiece' or 'endosomes' in the Cellular Components category (Fig. 2B). Enriched pathways of miR-146a-5p and its targets were primarily involved in the 'chemokine signaling pathway' or the 'Toll-like receptor signaling pathway' (Fig. 3). These results indicate that the identified inflammation-associated GO terms and pathways are relevant to the roles of the SDF-1/CXCR4 axis in the pathogenesis of OA.
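Enrichment plots such as Fig. 3 report an enrichment factor and a P-value for each pathway; the method used to compute these is not detailed in the text, so the sketch below shows only one common approach, a hypergeometric over-representation test, with entirely hypothetical counts.

```python
# Hypothetical over-representation test for a single pathway.
# Numbers are illustrative placeholders, not values from this study.
from scipy.stats import hypergeom

N = 20000   # background genes
K = 150     # genes annotated to the pathway (e.g. 'chemokine signaling pathway')
n = 300     # predicted miR-146a-5p targets submitted for enrichment
k = 12      # submitted targets that fall in the pathway

# P(X >= k): probability of observing at least k pathway genes by chance
p_value = hypergeom.sf(k - 1, N, K, n)
enrichment_factor = (k / n) / (K / N)
print(f"enrichment factor = {enrichment_factor:.2f}, P = {p_value:.2e}")
```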
Molecular manipulation of miR-146a-5p expression in chondrocytes transfected with the mimic or inhibitor of miR-146a-5p.
To additionally reveal the roles of miR-146a-5p in the process of OA development, the mimic and inhibitor of miR-146a-5p were designed to examine the effects of molecular manipulation of miR-146a-5p on OA-associated molecules. First, an RT-qPCR assay was performed to verify the effect of cell transfection and successful upregulation and downregulation of miR-146a-5p expression. As demonstrated in Fig. 4, the level of miR-146a-5p was significantly increased ~12-fold compared with the mimic control in chondrocytes transfected with the miR-146a-5p mimic, and markedly downregulated to ~20% of the inhibitor control when the miR-146a-5p inhibitor was transfected. As hypothesized, there were no significant changes in miR-146a-5p expression between chondrocytes that were treated with SDF-1 only and transfected with negative controls (Fig. 4).
Figure 3. Signaling pathway analysis involved in miR-146a-5p and its targets. The longitudinal axis represents the pathway category, while the horizontal axis represents the enrichment factor in each pathway. The size of a point represents the number of genes, and the color is indicative of the P-value.
Association between miR-146a-5p expression and the mRNA levels of Col II, ACAN, CXCR4 and MMP-3. Next, RT-qPCR was performed to measure the expression of cartilage degeneration-associated factors (including Col II, ACAN, CXCR4 and MMP-3) following transfection of the chondrocytes with the mimic and inhibitor of miR-146a-5p, and corresponding NCs. As indicated in Fig. 5A and B, in the chondrocytes transfected with the miR-146a-5p mimic, the expression levels of Col II and ACAN were significantly increased. However, there was no significant difference in Col II expression between chondrocytes transfected with the inhibitor NC and the miR-146a-5p inhibitor. The expression levels of ACAN decreased markedly in the chondrocytes transfected with the miR-146a-5p inhibitor.
In contrast, as indicated in Fig. 5C and D, the mRNA expression levels of CXCR4 and MMP-3 decreased in the chondrocytes following transfection with the miR-146a-5p mimic, and increased following transfection with the miR-146a-5p inhibitor. These results demonstrate a positive association between miR-146a-5p expression and the mRNA expression levels of Col II and ACAN, and a negative association between miR-146a-5p expression and the mRNA expression of CXCR4 and MMP-3.
Association between miR-146a-5p expression and the protein levels of CXCR4 and Col II. As CXCR4 was confirmed to be a target of miR-146a-5p, and Col II is the principal component of cartilage ECM, effects of miR-146a-5p expression on the protein levels of these 2 factors were additionally assessed. Consistent with the alteration of mRNA expression, expression of the CXCR4 protein was downregulated as a result of miR-146a-5p mimic transfection ( Fig. 6A and B). Concomitantly, the protein level of CXCR4 was significantly upregulated following miR-146a-5p inhibitor transfection. As hypothesized, the opposite pattern of expression was observed for Col II protein in chondrocytes transfected with the mimic or inhibitor of miR-146a-5p ( Fig. 6A and C). Taken together, these results highlight the critical roles of miR-146a-5p on regulating the expression of cartilage degeneration-associated factors.
Discussion
Antagonists of CXCR4 (AMD3100, T140 and TN14003) have been demonstrated to successfully inhibit the SDF-1/CXCR4 axis by competing with CXCR4 for its ligand SDF-1 (45). These antagonists have been utilized in the treatment of HIV infection and various types of cancer (46,47). AMD3100 and T140 were indicated to be efficient in the management of OA in previous studies (48,49). However, TN14003 is recommended above AMD3100 and T140, as AMD3100 is a weak partial antagonist, and T140 possesses unstable properties, limiting their clinical applications (50,51). In addition, previous studies have demonstrated that TN14003 is more effective in inhibiting MMP-3, MMP-9 and MMP-13 release, and in reducing Col II and ACAN degradation, when compared to AMD3100 and T140 (52,53). The present study investigated whether TN14003 therapy may elicit an alteration in the miRNA profile in chondrocytes derived from patients with OA, and identified miR-146a-5p as a differentially expressed miRNA induced by this CXCR4/SDF-1 axis inhibitor, which regulated the expression of cartilage degeneration-associated factors, including CXCR4, Col II, ACAN and MMP-3. It has been established that miRNAs serve key roles in the pathological processes of OA (54,55). Kopańska et al (42) identified 4 miRNAs (miR-138-5p, miR-146a-5p, miR-335-5p and miR-9-5p) in OA cartilage that were upregulated >2-fold compared with healthy controls, indicating an association between miRNA and OA. Zheng et al (56) demonstrated that miR-221-3p was significantly downregulated in OA compared with normal controls, and that upregulating miR-221-3p may inhibit interleukin 1β (IL-1β)-induced cartilage degradation via targeting of the SDF-1/CXCR4 axis. The present study indicated that 84 miRNAs were differentially expressed in OA chondrocytes, and miR-146a-5p, miR-126-3p and miR-124-3p were validated, suggesting that these miRNAs may exert their effects via inhibition of SDF-1/CXCR4 with TN14003 treatment.
miR-146a-5p is a representative miRNA known to be associated with OA (43,44). In addition to the data from Kopańska et al (42), Genemaras et al (57) suggested that following stimulation with IL-1β and tumor necrosis factor-α (TNF-α), miR-146a was significantly upregulated in pig chondrocytes, indicating an interaction between miR-146a and inflammatory cytokines in the promotion of OA. In addition, Spinello et al (58) detected a parallel effect between miR-146a and the CXCR4 antagonist. That study determined that CXCR4 protein expression was decreased following AMD3100 treatment, that the sensitivity of leukemic blast cells to cytotoxic drugs was increased, and that this effect was augmented by the overexpression of miR-146a.
Figure 4. Successful manipulation of miR-146a-5p expression in chondrocytes transfected with the mimic or inhibitor of miR-146a-5p. Chondrocytes were separately transfected with miR-146a-5p mimics, miR-146a-5p inhibitors and NC for 48 h using Lipofectamine® 3000. Following stimulation with 100 ng/ml SDF-1 for 2 days, cells were harvested to evaluate the levels of miR-146a-5p by reverse transcription quantitative polymerase chain reaction. Data are summarized from 3 independent experiments with similar results. n=3 for each group. **P<0.01 and ***P<0.001, compared with the SDF-1 alone group. miRNA, microRNA; SDF-1, stromal cell-derived factor 1; NC, negative control.
However, unlike miR-146a-5p, which has been extensively studied, few studies have explored the roles of miR-126-3p and miR-124-3p in the process of OA.
OA is an aseptic inflammatory disease (59,60). Several miRNAs, including miR-146a-5p, have been demonstrated to be genetic markers of inflammation, and to function as promoters of OA (61,62). Notably, miR-146a-5p was upregulated in the treatment group in the present study, indicating that it may serve a parallel role with TN14003. Although a number of studies have investigated the role of miR-146a-5p by comparing miRNA profiles between OA and normal chondrocytes, few studies have focused on the miRNA expression profile following therapy with specific inhibitors, including CXCR4 antagonists. Through a computational approach to mine miR-146a-5p associated genes and pathways, the present study revealed that the 'receptor regulatory activity' or 'NIK activity' (Molecular Functions), 'cellular response to interleukin-1' (Biological Processes), 'cytokine-cytokine receptor interaction', 'NF-κB signaling pathway' and 'osteoclast differentiation pathways' were involved. Activation of the SDF-1/CXCR4 signaling axis has been verified to be a process of cytokine-to-receptor transmembrane transport, and this activity may regulate disease progress via the NF-κB pathway (63). This indicated that miR-146a-5p may be associated with the SDF-1/CXCR4 axis through the regulation of the NF-κB pathway.
Figure 5. Association between miR-146a-5p expression and the mRNA levels of Col II, ACAN, CXCR4 and MMP-3. (A-D) Chondrocytes were separately transfected with miR-146a-5p mimics, miR-146a-5p inhibitors and NC for 48 h using Lipofectamine® 3000. Following stimulation with 100 ng/ml SDF-1 for 2 days, cells were harvested to evaluate the levels of (A) Col II, (B) ACAN, (C) CXCR4 and (D) MMP-3 by reverse transcription quantitative polymerase chain reaction. Data are summarized from 3 independent experiments with similar results. n=3 for each group. *P<0.05, **P<0.01 and ***P<0.001 vs. the corresponding control group. miRNA, microRNA; Col II, type II collagen; ACAN, aggrecan; CXCR4, C-X-C chemokine receptor type 4; MMP-3, matrix metalloproteinase-3; SDF-1, stromal cell-derived factor 1; NC, negative control; NS, not significant.
Numerous genes are negatively regulated by complementary pairing with miRNAs, and dysregulation of genes may affect OA (64). Additionally, OA therapy based on miRNAs has been developed in previous years, and may result in high-efficiency treatment with less biological toxicity (65). Yang et al (61) predicted that CXCR4 may function as a direct target of miR-146a-5p, as verified by the fact that CXCR4 expression was decreased and miR-146a-5p was upregulated in endometrial tissue samples. In addition, Labbaye et al (51) determined that two 'seed' regions of the 3'-untranslated region in CXCR4 mRNA directly interacted with miR-146a, thereby demonstrating that CXCR4 mRNA translation was inhibited by miR-146a. In the present study, CXCR4 was predicted to be a target of miR-146a-5p with high importance. Then, RT-qPCR and western blot analysis were used to determine whether several key factors in chondrocytes associated with the SDF-1/CXCR4 axis were regulated by miR-146a-5p. It was identified that the expression levels of Col II and ACAN were positively associated with miR-146a-5p expression, and levels of CXCR4 and MMP-3 were negatively associated with miR-146a-5p expression. The results suggest that miR-146a-5p may serve a parallel and additive role with TN14003 in blocking the SDF-1/CXCR4 axis and inhibiting cartilage degeneration.
Figure 6. Association between miR-146a-5p expression and the protein levels of CXCR4 and Col II. Chondrocytes were separately transfected with miR-146a-5p mimics, miR-146a-5p inhibitors and NC for 48 h using Lipofectamine® 3000. Following stimulation with 100 ng/ml SDF-1 for 2 days, cells were harvested to evaluate the protein levels of CXCR4 and Col II. (A) Representative western blot showing the protein levels of CXCR4 and Col II. (B and C) Summary data from 3 independent western blot analysis experiments on the protein levels of (B) CXCR4 and (C) Col II. **P<0.01 and ***P<0.001 vs. the corresponding control group. miRNA, microRNA; Col II, type II collagen; CXCR4, C-X-C chemokine receptor type 4; SDF-1, stromal cell-derived factor 1; NC, negative control.
There are certain recognized limitations of the present study that must be considered. The effect of miR-146a-5p on cartilage degeneration was determined in the present study, but a Cell Counting Kit-8 assay should be performed to evaluate the effect of miR-146a-5p on chondrocyte survival. In addition, the primary aim of the present study was to investigate chondrocytes, and it may not capture the role of miR-146a-5p in neighboring tissues, including the synovium and subchondral bone. Finally, in order to fully demonstrate the function of miR-146a-5p in SDF-1-induced cartilage degeneration by targeting CXCR4, an in vivo investigation should be included in future studies.
In conclusion, the present study provided compelling evidence for the critical roles of miRNAs in SDF-1-induced cartilage degradation by miRNA microarray profiling in OA chondrocytes following TN14003 treatment. miR-146a-5p was detected to be differentially expressed and it most likely represents the key miRNA that participates in the regulation of the SDF-1/CXCR4 axis through the inhibition of CXCR4. Although additional work involving the biocompatibility of miR-146a-5p mimics in vitro and in vivo may be required to fully delineate its roles in OA pathogenesis, the present study offers a promising framework through which diagnostic and therapeutic biomarkers of OA may be determined. The combined use of TN14003 and miR-146a-5p mimics may represent an approach for developing effective OA-targeted therapies with decreased side effects. | 2019-04-04T13:02:50.661Z | 2019-03-21T00:00:00.000 | {
"year": 2019,
"sha1": "fa284a7b3ae7258debb6703aa05e2b2303e1bbae",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2019.10076/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fa284a7b3ae7258debb6703aa05e2b2303e1bbae",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
269987011 | pes2o/s2orc | v3-fos-license | Mixed neuroendocrine–non-neuroendocrine neoplasm of the bile duct with long-term prognosis after neoadjuvant chemotherapy
A 74-year-old man with obstructive jaundice presented with a thickened distal bile duct wall. A transpapillary forceps biopsy revealed an adenocarcinoma; however, because the tumor image was different from that of a typical cholangiocarcinoma, endoscopic ultrasound-guided fine-needle aspiration was performed on the tumor and enlarged lymph nodes. The tumor cells were positive for synaptophysin and CD56 with a Ki67 labeling index of 95%, and he was diagnosed with small cell neuroendocrine carcinoma. We diagnosed a bile duct tumor with neuroendocrine carcinoma component with lymph node metastasis. Preoperative chemotherapy for neuroendocrine carcinoma was administered because R0 resection was difficult and the risk of postoperative recurrence was high. Three courses of chemotherapy with carboplatin and etoposide resulted in marked tumor shrinkage, and radical resection was performed 3 months after diagnosis. Postoperative pathology revealed adenocarcinoma in the mucosal epithelium and small cell neuroendocrine carcinoma in the submucosa, most of which resolved with chemotherapy. Carboplatin and etoposide were resumed as adjuvant chemotherapy, and 67 months of recurrence-free survival were achieved after surgery.
Introduction
Neuroendocrine neoplasms (NEN) are derived from neuroendocrine cells and can occur in a variety of organs, although those primary to the biliary tract are rare [1][2][3]. Mixed neuroendocrine non-neuroendocrine neoplasms (MiNEN) are defined as mixed neoplasms with both neuroendocrine and non-neuroendocrine components, with either component representing at least 30% [1,2]. A diagnosis of MiNEN of the biliary tract is difficult, and its prognosis is extremely poor. Therefore, the optimal treatment remains unknown [4]. Here, we report a case of primary MiNEN of the bile duct treated with chemotherapy for neuroendocrine carcinoma (NEC), followed by radical resection and postoperative adjuvant chemotherapy, which resulted in long-term recurrence-free survival.
Case report
A 74-year-old man undergoing treatment for benign prostatic hyperplasia and glaucoma presented to our hospital with anorexia and epicardial pain that had persisted for 1 month. Blood test results were as follows: leucocyte count 4270 cells/μL, hemoglobin 14.2 g/dL, platelet count 24.0 × 10⁴ cells/μL, aspartate aminotransferase 285 IU/L, alanine aminotransferase 638 IU/L, total bilirubin 5.7 mg/dL, direct bilirubin 3.7 mg/dL, albumin 3.2 g/dL, C-reactive protein 0.02 mg/dL, carcinoembryonic antigen (CEA) 6.1 ng/mL, and carbohydrate antigen 19-9 (CA19-9) 30 U/mL. Abdominal ultrasonography revealed dilation of the common bile duct and the intrahepatic ducts. Contrast-enhanced computed tomography (CECT) revealed a 20-mm-sized nodular lesion in the distal bile duct with upstream bile duct dilatation. The lesion was contrasted at the margins in the early phase and stained internally in the later phases. Enlarged lymph nodes were observed near the pancreatic head. However, no abnormalities were observed in the liver, lungs, or bones (Fig. 1).
Endoscopic ultrasonography (EUS) showed a homogeneous hypoechoic wall thickening in the distal bile duct. The mucosal side of the thickened wall was smooth. However, the outer hyperechoic layer was partially disrupted and extended into an irregularly contoured hypoechoic area. Numerous blood flow signals were observed in this area (Fig. 2).
Endoscopic retrograde cholangiopancreatography (ERCP) revealed a stricture with a smooth surface in the distal bile duct, and a transpapillary forceps biopsy of the stricture revealed an adenocarcinoma (ADC).
Peroral cholangioscopy (POCS) was also performed. POCS revealed papillary, easily hemorrhagic mucosa, and dilated vessels, but these findings were localized to a part of the stricture (Fig. 3).
Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) was performed on the bile duct lesion and enlarged lymph nodes because of the possibility of a specific bile duct tumor. Small tumor cells with a high nucleocytoplasmic ratio that proliferated at a high rate, resulting in high cell density, were detected in both the bile duct and lymph nodes. The tumor cells were positive for synaptophysin and CD56 with a Ki67 labeling index of 95% and were diagnosed as small cell neuroendocrine carcinoma (SCNEC) (Fig. 4).
Pathological findings by EUS-FNA suggested a bile duct tumor with SCNEC and node metastasis of the same component. Therefore, we diagnosed bile duct NEC or MiNEN, clinical stage IIB (T2N1M0, AJCC/UICC 8th edition). After discussing the treatment plan with the surgeon, it was determined that R0 surgery was unlikely and the risk of recurrence after resection was high because the tumor had invaded beyond the bile duct wall and was accompanied by lymph node metastasis of the NEC. Therefore, we decided to introduce chemotherapy with sufficient informed consent and started combination therapy with carboplatin (CBDCA) and etoposide (ETP) for the NEC. After three courses of treatment, CECT showed marked shrinkage of the tumor and lymph nodes, FDG-PET showed reduced FDG accumulation in the primary tumor, and lymph node metastases had decreased. ERCP and POCS were performed again to evaluate the tumor extent. Cholangiography revealed improvement of the stricture, POCS revealed that the mucosal surface was mostly white and scarred, and the submucosal prominence was reduced (Fig. 5).
After a conference with the surgeons, a pylorus-preserving pancreatoduodenectomy was performed 3 months after the diagnosis. Postoperative pathology revealed ADC in the mucosal epithelium and scattered SCNEC in the submucosa of the bile duct (Fig. 6).
Because most of the tumors disappeared due to chemotherapy, there was no evidence of a transition between ADC and SCNEC, and the respective occupancy percentages of either component were difficult to assess. Metastasis was found in excised lymph nodes, consisting only of NEC. After six courses of CBDCA + ETP as adjuvant chemotherapy, the treatment was terminated at the patient's request. Thereafter, regular follow-ups were conducted, and recurrence-free survival for 67 months after surgery was achieved.
Discussion
According to the World Health Organization Classification 2019, NENs are classified as highly differentiated neuroendocrine tumors (NET), poorly differentiated NEC, or MiNENs [2]. It has been reported that primary NEN in the bile duct accounts for approximately 0.22-1.8% of primary NEN in the pancreas and gastrointestinal tract, and little is known about how to diagnose or optimally treat MiNEN of the bile duct [3][4][5].
The characteristic imaging findings of biliary MiNENs are unclear because of their rarity and heterogeneity. In our case, the lesion presented as a nodular infiltrating type, and a more abundant blood flow signal was observed in the deeper part of the tumor than on the mucosal side on EUS. Compared with the postoperative pathology specimen, the difference in the blood flow signal may suggest two different tumor components with different regional characteristics. The nodular morphology, invasive nature of the tumor, and submucosal mass may reflect the deep proliferation of the NEC component, which arose from the ADC component. Hong et al. reviewed 11 cases of primary extrahepatic bile duct NEN (1 NET, 7 NEC, and 3 mixed adenocarcinoma neuroendocrine carcinoma [MANEC]) and reported that the tumors presented as nodular or intraductal growing-type lesions [8]. Kaino et al. suggested that MiNEN is an admixture of exocrine and endocrine components and that its contrast pattern may be heterogeneous [9]. Therefore, in tumors with lesions with different blood flow characteristics, close examination for MiNEN may be important; however, it is difficult to differentiate MiNEN from cholangiocarcinoma based on imaging alone.
According to a systematic review, the accuracy of the preoperative endoscopic diagnosis was 24.1% in 67 patients who underwent surgical resection for biliary MiNEN [11]. In most cases, the ADC component is located on the mucosal side, and the NEC component is located deeper. It is difficult to detect deeply located NEC components with the commonly performed transpapillary examination, which may be the reason for its low accuracy [10,11]. Furthermore, the lack of characteristic imaging findings of biliary MiNENs may lead to the diagnosis of cholangiocarcinoma once the ADC component is detected, thus interrupting efforts to detect the NEC component. In MiNEN, NEC is often the cancer component that triggers vascular invasion, liver metastasis, and lymph node metastasis [10,12], suggesting that histological examination of lymph nodes and liver metastases is likely to detect NEC components. In our case, only the ADC component was detected by transpapillary examination; however, the NEC component was detected by EUS-FNA of the primary tumor and enlarged lymph nodes. For cases in which MiNEN is suspected, it may be necessary to consider not only the transpapillary approach but also the performance of EUS-FNA and the pathological examination of metastases. According to the WHO classification, more than 30% of both components are required to diagnose MiNEN [5,6], so it is difficult to diagnose MiNEN from biopsy specimens alone. In this case, tumor shrinkage was achieved due to the effects of neoadjuvant chemotherapy, so unfortunately it was not possible to know the exact proportions of both. Therefore, this case cannot be strictly diagnosed as MiNEN. However, there are many similarities between the present case and previous reports. In addition, as a result of comparing the resected specimen and preoperative images, adenocarcinoma was discovered in the same section where the NEC component was present, and it was observed as a single tumor in the image findings before chemotherapy. Therefore, rather than assuming that adenocarcinoma and NEC were present at the same time in the same site, it is reasonable to assume that the tumor contained both components.
Owing to the difficulty of preoperative diagnosis and the rarity of the disease, there is insufficient consensus on treatment for MiNENs with NEC components. In general, treatment for NEC is recommended.
Surgery is the treatment of choice for localized digestive NEC, but relapse is frequent and associated with a poor prognosis. For localized NEC, the presence of regional lymph node metastases and the primary tumor site are considered important prognostic factors [13]. The NCCN guidelines state that treatment options vary depending on the location of the disease, and list neoadjuvant chemotherapy, radiation therapy, chemoradiotherapy, and postoperative adjuvant chemotherapy with or without RT as treatments for resectable NEC [14]. This case was a MiNEN that occurred in the distal bile duct, with lymph node metastasis. The surgical method was pancreaticoduodenectomy, which was highly invasive, and it was suggested that R0 resection might not be possible due to tumor invasion, so we decided to start preoperative chemotherapy. For NEC, combination chemotherapy with cisplatin (CDDP) and ETP is recommended, with CBDCA recommended as an alternative to CDDP [1,13]. We chose treatment with CBDCA because the patient was relatively elderly and we wanted to reduce the occurrence of adverse events. There are few reports of neoadjuvant chemotherapy for primary NEC or MiNEN of the bile duct. Terashima et al. reported a case of primary bile duct NEC in which small cell carcinoma was found on biopsy under ERCP and preoperative chemotherapy was performed. A CT scan after 2 courses of preoperative chemotherapy (CDDP + ETP) showed no significant change in the size of the tumor, and surgery was performed 2 months after the start of chemotherapy. No postoperative chemotherapy was performed, and a CT scan 5 months after surgery showed multiple liver metastases and recurrence [15].
The majority of patients with stage II-III digestive NEC who underwent resection develop recurrence, suggesting that adjuvant chemotherapy may be helpful [13].
A large cohort study of 1861 patients, including 519 patients with digestive NEC, reported that postoperative chemotherapy was associated with improved survival [16]. Therefore, adjuvant chemotherapy with 4 to 6 cycles of platinum plus ETP may be considered after definitive surgery for local gastrointestinal NEC [13].
Treatment strategies for MiNEN of the bile duct are quite different from those for cholangiocarcinoma. To obtain an accurate preoperative diagnosis, it is important to suspect MiNEN based on atypical tumor images and to make an aggressive histological diagnosis. We believe that patients with MiNEN of the bile duct may benefit from accurate preoperative diagnosis. Neoadjuvant chemotherapy and postoperative adjuvant chemotherapy may be effective treatment options for MiNEN of the bile duct. However, many issues need to be resolved, including the duration of neoadjuvant chemotherapy administration, surveillance intervals, and the timing of planned resection. Therefore, it is desirable to establish a treatment system based on the accumulation and analysis of more cases in the future.
Fig. 1 Contrast-enhanced computed tomography shows a tumor measuring 20 mm in the distal bile duct. The lesion is contrasted at the margins in the early phase and stains internally in the later phase
Fig. 3 Endoscopic retrograde cholangiography reveals a stricture of the distal bile duct (a). Peroral cholangioscopy reveals a submucosal tumor-like elevation in the lesion with dilated vessels (b, c) | 2024-05-25T06:17:25.051Z | 2024-05-24T00:00:00.000 | {
"year": 2024,
"sha1": "2c7dcd6a3729d1be8c7305ec90d7609557cf1232",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12328-024-01982-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fe7cbb596ca006261c8ded3445510308c390be9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251158027 | pes2o/s2orc | v3-fos-license | MCred: multi-modal message credibility for fake news detection using BERT and CNN
Online social media enables low cost, easy access, rapid propagation, and easy communication of information, including the spread of low-quality fake news. Fake news has become a huge threat to every sector of society, resulting in a decline in trust in the media and leading audiences into bewilderment. In this paper, we propose a new framework called Message Credibility (MCred) for fake news detection that utilizes the benefits of local and global text semantics. This framework is a fusion of Bidirectional Encoder Representations from Transformers (BERT), which uses the relationships between words in sentences for global text semantics, and Convolutional Neural Networks (CNN), which use N-gram features for local text semantics. We demonstrate through experimental results on a popular Kaggle dataset that MCred improves accuracy over a state-of-the-art model by 1.10%, thanks to its combination of local and global text semantics.
Introduction
News is any information that makes the public aware of the events happening around them and that can affect them personally or socially. In recent years, online social media has become a common platform for news broadcasting for business, political, and entertainment purposes. Individuals use social media to search for and consume news because of its ease, comfort, and fast propagation (Zhang and Ghorbani 2020). This convenience has brought both constructive and destructive impacts. People tamper with and spread genuine information for their entertainment and benefit in the form of fake news (Bondielli and Marcelloni 2019). Fake news played a pivotal role in the 2016 US presidential election campaign following a large amount of false information spread on Facebook during its last three months (Allcott and Gentzkow 2017). This incident brought fake news to the attention of many industrial and research institutions for understanding and reducing this phenomenon.
Fake news and social impact Several researchers use terms like false news, fake news, rumor, and disinformation interchangeably (Ajao et al. 2018; Bondielli and Marcelloni 2019; Lazer et al. 2018). There is no single universal definition of fake news (Zhou et al. 2019); however, we may define the term as any fabricated and deceitful news content that influences its readers to believe in something false. Klein and Wueller (2017) characterized fake news as the online distribution of false information purposefully or intentionally. While printed media was the only medium for spreading fake news until a decade ago, online media has today become the easiest way to spread low-quality news (Thota et al. 2018). Fake news can have a negative effect on politics, the economy, and public opinion. One popular example of fake news claimed that Barack Obama had been harmed in a blast, which siphoned 130 billion dollars in stock value (Rapoza 2020). Numerous fake news stories about the COVID-19 pandemic on social networking sites (Kouzy et al. 2020) caused fear and misconceptions among people. Recently, Tim Berners-Lee (Swartz 2020) stated that fake news has become the most upsetting thing on the Internet.
News classification Several researchers treated fake news as a binary classification (i.e. real or fake) (Shu et al. 2017;Garg and Sharma 2020a), while others considered it as a multi-class classification (Karimi et al. 2018), regression, or clustering (Oshikawa et al. 2020) problem. An automated tool assists users in detecting and categorizing fake news according to three criteria, identified in the related work: 1. Propagation-based (Liu and Wu 2018; methods trace the spreading pattern of any news using people's replies and share. 2. User profile-based (Shu et al. 2017) methods track the individuals' behavior using their published, forwarded, or commented news including further analysis information like location, sexual orientation, followers, or friends. 3. News content-based (Zhou et al. 2020;Garg and Sharma 2020b;Wang et al. 2021) methods are of two kinds: (a) Syntactic-based methods use linguistic and writing patterns like a number of special characters, nouns, or verbs to classify the news. (b) Semantic-based methods perform high-level representation and structure of the text in a given document.
Method
We propose in this paper a novel message credibility (MCred) multi-modal method that approaches fake news as a binary classification problem. The method combines global text semantics relationship between words using bidirectional encoder representations from transformers (BERT) with local text semantics using n-grams features of a convolutional neural network (CNN) model. The MCred model uses global and local word embedding as a cue for the news classification validated using four datasets for training and testing purposes. We generated the CNN output by combining multiple n-gram features (i.e. a kernel size of two, three, and four). Finally, we combined the CNN and BERT outputs into a dense network to enhance the performance of MCred model. We achieved a 1.48% improvement in accuracy compared to related state-of-the-art methods.
Outline
The paper has six sections. Section 2 highlights the literature study. Section 3 explains the background of both machine learning (ML) and deep learning (DL). Section 4 describes the proposed MCred model comprising a BERT processing layer, A CNN processing layer, and a Dense net processing layer. Section 5 provides implementation details, followed by the evaluation results. Section 6 concludes the paper and highlights future work opportunities.
Related work
Many researchers have surveyed fake news detection and identified the prominent attributes responsible for fake news classification (Sharma and Sharma 2019). We review in this section the state-of-the-art works on fake news detection in two categories: pattern-based and content-based. We conclude the section with a review of the research available for supporting fake news detection research. Safaya et al. (2020) (2021) proposed a deep learning based multi-modal approach for social media news classification. They used CNN for image processing and RoBERTa for text processing; with this combination they achieved an accuracy of 85.3% on the MediaEval (Twitter) dataset and 81.2% on the Weibo dataset. Another study explained various existing tools and ways for fake news detection and also explained the role of fact-checking websites in this classification task; it also executed LSTM and BiLSTM classifiers on a Kaggle dataset and concluded that, with an accuracy of 91.51%, BiLSTM performed better than LSTM. Khan and Alhazmi (2020) also used an ensemble technique to compare the performance of several ML models and achieved the highest accuracy of 90.70% using an AdaBoost random forest. Mersinias et al. (2020) proposed a novel class label frequency distance vectorization approach for fake news detection and found that logistic regression gives the highest accuracy of 97.52%. Kaliyar et al. (2020) used the GloVe word embedding model and deep CNNs for fake news detection and achieved an accuracy of 98.36%. Rohit Kumar Kaliyar and Narang (2021) proposed another model in which BERT embeddings are passed to a CNN model for classification; with this combination, the authors achieved an accuracy of 98.90%.
Summary
We observed in the literature review that propagation, linguistic, semantics of text, and user profiles are important metrics for fake news classification. However, we observed two limitations.
1. Researchers used ML and DL methods for the fake news detection considering the local context only and ignoring the global context of text data. 2. The state-of-the-art models used a single dataset and missed a generalized model performance evaluation on heterogeneous datasets.
The MCred model proposed in this paper uses CNN for the local context and BERT for the global context of the given information. The original BERT model has a large number of parameters, ranging from roughly 100 million to 300 million (Devlin et al. 2018). Therefore, training the BERT model from scratch on a small dataset leads to an over-fitting problem. To avoid this, we used a pre-trained BERT model and further trained it on our dataset for fine-tuning. There are three possible ways of fine-tuning: (i) training the complete architecture, (ii) training some layers of the pre-defined architecture, and (iii) using the complete architecture as it stands. We followed the third way in our proposed model and fine-tuned the BERT model by combining our dataset with the pre-existing data, and we also concatenated a few additional layers. We explain the implementation details in Sect. 4.2.2. After tuning, we tested the performance using extensive experiments on four heterogeneous datasets.
ML background
We used five popular ML methods when evaluating our proposed MCred model. Logistic regression (LR) evaluates categorical problems. The popular version of the LR model has a binary outcome, such as true/false or yes/no, while multinomial LR is also available for problems with multiple outcomes. LR takes advantage of the logistic (Sigmoid) function to read the input vector and map it to the appropriate category. In this paper, we used LR for evaluation purposes because it is a robust and flexible method for classification (Seufert 2014).
Naive Bayes (NB) classifies the news as real or fake using maximum conditional probability. It is based on "Bayes' Theorem".
According to Bayes' theorem, P(X | Y) = P(Y | X) P(X) / P(Y), where X and Y are two events.
We used the NB classifier because it is simple and computationally inexpensive for text classification. NB needs a lesser amount of data for training purposes, unlike other classifiers.
Decision tree (DT) predicts the final class using the recursive partition of all features present in the training dataset. It represents the dataset as a tree, where nodes represent the features, branches represent the decisions and leaves represent the results. We fed the data as input and progressively partition it into small parts until the result finally labels it as real or fake.
Random forest (RF) is an amalgamation of multiple trees; therefore, it is called a forest. It works for both regression and classification use cases. RF prevents over-fitting by using ensemble learning and merging multiple DTs to improve the model accuracy. We used this classifier for faster training and learning of our proposed MCred model. As this paper considers a binary problem, all the trees in the RF vote for a prediction of either 0 or 1, and the class with the highest number of votes is taken as the final RF result.
RF(a) = mode{ t_i(a), i = 1, …, N }. Here t_i(a) is the prediction of tree i, "N" represents the total number of trees present in the forest, "i" indexes the current tree, and "a" is the input data sample.
Extreme gradient boosting (XGBoost) utilizes the concept of supervised machine learning algorithm. The idea of Gradient Boosting Machines (GBM) is used in XGBoost. XGBoost is more powerful in terms of performance and deals with data irregularities. We used this classifier because it accurately predicts the target data by combining the output generated by multiple weak learners.
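As a concrete illustration of how these five baselines can be run side by side, the following minimal sketch uses scikit-learn and the xgboost package on a toy corpus; the TF-IDF features, sample texts, and hyper-parameters are illustrative placeholders rather than the settings used in this paper (which relied on GloVe-based features).

# Minimal sketch: five baseline classifiers on vectorized toy news text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier   # requires the xgboost package

texts = ["shocking claim spreads online", "official agency publishes report",
         "miracle cure revealed overnight", "committee releases annual statistics",
         "celebrity hoax goes viral", "court ruling confirmed by reporters"]
labels = [1, 0, 1, 0, 1, 0]          # 1 = fake, 0 = real (toy placeholders)

X = TfidfVectorizer().fit_transform(texts)   # stand-in for GloVe-based features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.33,
                                          stratify=labels, random_state=42)
models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": MultinomialNB(),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)                                   # train each baseline
    print(name, accuracy_score(y_te, clf.predict(X_te)))  # report test accuracy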
Global vector (GloVe) (Jeffrey Pennington 2021)
It is an unsupervised learning algorithm for generating the vector of a particular word based on global co-occurrence statistics. The name "GloVe" comes from "Global Vectors", and the vector representations generated by this algorithm are known as GloVe word embeddings. This embedding extracts the connections among words from statistics and uses the co-occurrence matrix to find semantic relationships. Stanford's GloVe is available in four different versions based on its parameters, as shown in Table 1.
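A minimal sketch of how pre-trained GloVe vectors can be loaded into an embedding matrix is given below; the file name follows the public 100-dimensional glove.6B release and the toy vocabulary is a placeholder, so both should be adapted to the version actually used.

# Sketch: load GloVe vectors and build an embedding matrix for a tokenizer vocabulary.
import numpy as np

embedding_dim = 100
embeddings_index = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:   # assumed local GloVe file
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype="float32")

word_index = {"fake": 1, "news": 2}                       # placeholder vocabulary
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
    vec = embeddings_index.get(word)
    if vec is not None:
        embedding_matrix[i] = vec                          # row i holds the GloVe vector of word i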
BERT
Google researchers proposed the BERT (Devlin et al. 2018) model for natural language processing (NLP) applications. They developed a general-purpose pre-trained model using a huge amount of unannotated text from the Internet to overcome the lack of sufficient training data in NLP tasks. These general-purpose models work with any specific task after fine-tuning and bring good accuracy compared to other models trained on small datasets from scratch. One technical development that separates BERT from other ordinary (bidirectional LSTM) models is its simultaneous bidirectional training. BERT has two types based on the model architecture: i) BERT Base and ii) BERT Large. Table 2 shows that both BERT types use millions of parameters (110M, 340M) for solving various NLP tasks. In this paper, we used BERT Base in the proposed model because BERT Large is hard to deploy due to its large size and resource constraints.
CNN for text
CNN became very popular in image processing applications but has demonstrated promising results in NLP research applications in recent years too. Kim (2014) showed that a CNN gives excellent text classification results after hyperparameter tuning when using word2vec vector representations trained on 100 billion words extracted from Google News. Zhang and Wallace (2017) analyzed the performance of a single-layer CNN architecture and concluded it is good for sentence classification and as simple as logistic regression and SVM. Zhang et al. (2015) proposed a CNN architecture for text classification operating at the character level and concluded that CNN models can process text as effectively as image data. However, one can use one-dimensional kernels that slide horizontally over the characters, instead of two-dimensional kernels that slide horizontally and vertically over the image pixels.
Dense net
A dense net is a fully connected network of neuron layers, where each layer neuron receives the input from the previous layer neurons and passes it to the next layer neurons. The method finally merges the features coming from the previous layer and generates learning features for further processing. The function used in this layer is the same as for linear layers, but the use of the activation function is different: O = f(I ⋅ w + b), where I is the input, O is the output value, w is the weight, ⋅ is the dot multiplication applied to the input and weight, b is the bias for model optimization, and f is the activation function.
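The mapping O = f(I ⋅ w + b) can be illustrated numerically with a few lines of NumPy; the dimensions and values below are arbitrary examples, not those of the MCred layers.

# Tiny numerical illustration of a dense layer with a ReLU activation.
import numpy as np

relu = lambda x: np.maximum(0.0, x)   # activation function f
I = np.array([0.2, -0.5, 1.0])        # input vector from the previous layer
w = np.random.randn(3, 4)             # weights mapping 3 inputs to 4 neurons
b = np.zeros(4)                        # bias term
O = relu(I @ w + b)                    # output vector passed to the next layer
print(O)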
Methodology
We present in this section the design and methodology underlying the MCred message credibility model for fake news detection. Figure 1 shows the MCred model architecture consisting of two phases, implemented in Algorithm 1. Data engineering selects and preprocesses a suitable text dataset for the proposed MCred model among several available news datasets in two steps.
Data collection selects fake news datasets and stores in
MCred_dataset (line 1). A larger dataset prevents the model from over-fitting and enables better model training. 2. Data preprocessing performs tasks like noise (e.g. stop word) removal, normalization, and tokenization to keep the data in proper format (line 2).
Model generation is a fusion of global and local text semantics for fake news classification consisting of three different sub-layers.
BERT processing layer reads the data from the preprocessing layer and passes it to the BERT pre-trained model tuned for embeddings (line 4). This layer generates global semantics after measuring the relationship among current, previous, and upcoming words in the text. Finally, it passes the output to the dense and dropout layers (line 5).
CNN processing layer reads the pre-processed text data and converts it into GloVe embeddings (line 7). Then, it passes these embeddings through three parallel CNN layers with kernels of sizes two, three, and four (line 8). This layer generates the local text semantics using n-gram features and passes the three outputs through multiple dense and dropout layers to produce the final output (line 9).
Dense net processing layer fuses the local and global text semantic outputs from the CNN and BERT processing layers (line 11). It passes the merged outputs to the dense and dropout layers (line 12), and finally produces the news text classification and labels the news as real or fake (line 13).
Data engineering
1. We selected text dataset having equal distribution of real and fake news for better training and evaluation purpose. 2. We pre-processed the raw text dataset through three steps before performing the MCred model training.
Tokenization breakdowns longer input paragraphs into small sentences. During this process, we protected the sentence delimiters for further execution.
Lemmatization converts the input words into their canonical form with an equal footing for uniform execution.
Stopword removal filters out stopwords from the input data, since their contribution is low compared to other meaningful tokens; a short sketch of these three pre-processing steps is given below.
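The following is a minimal sketch of the three steps using NLTK; it assumes the punkt, wordnet, and stopwords resources have already been downloaded and is not the exact pre-processing code of the MCred implementation.

# Sketch: tokenization, lemmatization, and stop-word removal with NLTK.
# Requires: nltk.download("punkt"), nltk.download("wordnet"), nltk.download("stopwords")
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(paragraph):
    cleaned = []
    for sentence in sent_tokenize(paragraph):              # tokenization into sentences
        tokens = [lemmatizer.lemmatize(t.lower())           # lemmatization to canonical form
                  for t in word_tokenize(sentence)]
        cleaned.append([t for t in tokens                   # stop-word and noise removal
                        if t.isalpha() and t not in stop_words])
    return cleaned

print(preprocess("Fake news spreads quickly. It misleads many readers."))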
Model generation
BERT processing layer: receives the data from the previous phase and applies three essential data decoration techniques that add metadata to the given text, which is mandatory in the BERT model for the text execution.
Initially the training of BERT model is performed on BooksCorpus (Zhu et al. 2015) and English Wikipedia Token embedding adds two special tokens, as the data contains multiple sentences: [CLS] token at the beginning of the data and [SEP] token at the end of each sentence. In Fig. 2, W 1A and W 2A represent the first and second word of the first sentence, while W 1B and W 2B represent the first and second word of the second sentence.
Segment embedding adds a special marker for different sentences. In Fig. 2, E A and E B represent the segment embedding for the first and second sentences.
Positional embedding specifies the token position in the sentence. In Fig. 2, E k and E n represent the k th and n th elements in the data.
Next, the BERT processing layer converts every token into a 768 long embedding vector, passed further to 12 encoding layers characteristic to the BERT BASE model. The information stored in the [CLS] token is sufficient for classification after processing the twelfth layer. This [CLS] vector flows into the intermediate layer consisting of four dense layers with different neurons. Finally, the BERT processing layer generates the output using a dense layer with 32 neurons. CNN processing layer: contains four layers as shown in Fig. 3: embedding layer, conv1D layer, pooling layer and dense layer. First, the embedding layer takes and preprocesses the input data and generates the sentence matrix of m × n size, where m is the maximum sequence length and n is the embedding dimension. Next, the matrix passes through the one-dimensional convolutional (Conv1D) layer with three 64 filter kernels of sizes two, three, and four. The Conv1D layer generates 64 features from each kernel. The pooling layer processes these three 64 long vectors and concatenates them into a single vector. Finally, the model passes this concatenated vector to the dense layer and converts it into 32 long vectors for next-level processing.
Dense net processing layer: combines the 32 long vector outputs of the BERT and CNN layers as shown in Fig. 4, and merges them into a vector of size 64. We used the dropout layer to prevent the over-fitting problem and applied the rectified linear unit (ReLU) activation function at the hidden layers and Sigmoid function at the output layer. After the multiple dense layers, this dense net processing layer generates the final real or fake news classification.
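The three processing layers described above can be sketched with the Keras functional API as follows; the BERT encoder is taken from the HuggingFace transformers package, and the sequence length, vocabulary size, and intermediate dense sizes are illustrative assumptions rather than the exact MCred configuration.

# Sketch of the BERT + CNN fusion architecture (kernel sizes 2/3/4, 32-unit branch outputs).
import tensorflow as tf
from transformers import TFBertModel

max_len, vocab_size, glove_dim = 128, 20000, 100

# BERT branch: global text semantics from the pooled [CLS] representation
input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")
bert = TFBertModel.from_pretrained("bert-base-uncased")
cls_vector = bert(input_ids, attention_mask=attention_mask).pooler_output      # 768-d
x = tf.keras.layers.Dense(256, activation="relu")(cls_vector)                  # intermediate dense (size assumed)
x = tf.keras.layers.Dropout(0.5)(x)
bert_out = tf.keras.layers.Dense(32, activation="relu")(x)                     # 32-d branch output

# CNN branch: local n-gram semantics over GloVe embeddings
tokens = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="tokens")
emb = tf.keras.layers.Embedding(vocab_size, glove_dim)(tokens)                  # initialise with GloVe weights
convs = [tf.keras.layers.GlobalMaxPooling1D()(
             tf.keras.layers.Conv1D(64, k, activation="relu")(emb)) for k in (2, 3, 4)]
cnn_out = tf.keras.layers.Dense(32, activation="relu")(
    tf.keras.layers.Concatenate()(convs))                                       # 32-d branch output

# Dense-net fusion of the two 32-d branch outputs
merged = tf.keras.layers.Concatenate()([bert_out, cnn_out])                     # 64-d fused vector
y = tf.keras.layers.Dense(32, activation="relu")(merged)
y = tf.keras.layers.Dropout(0.5)(y)
output = tf.keras.layers.Dense(1, activation="sigmoid")(y)                      # real (0) vs fake (1)

model = tf.keras.Model([input_ids, attention_mask, tokens], output)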
Model tuning
We used the random search model tuning technique to examine and improve the MCred model training. We applied the ReLU activation function at both BERT and CNN processing layers. We further used the Adam optimizer and applied a Sigmoid activation function at the dense net processing layer. Table 3 shows the different model tuning parameter details.
Adam optimizer (Kingma and Ba 2017) is a memory and computationally efficient enhancement of the gradient descent method that produces improved results in NLP and image processing-based DL applications. This optimizer amalgamates the benefits of AdaGrad and RMSProp optimizers and improves the results with default parameters in various applications.
ReLU activation function (Glorot et al. 2011) is simple and offers rapid convergence if sparsely activated. Its better performance over the other activation functions makes it the default option in most network trainings: f(x) = max(0, x). Sigmoid activation function (Gupta 2020) takes a real value as input and produces an output in the [0,1] interval. This function is non-linear, continuously differentiable, monotonic, and has a fixed output range: σ(x) = 1 / (1 + e^(−x)). Binary cross entropy (aka log loss) (Rajesh and Bhat 2019) deals with binary problems; therefore, we used this loss function. The mathematical expression of this function is L = −(1/N) Σ_{i=1}^{N} [ y_i log(P(y_i)) + (1 − y_i) log(1 − P(y_i)) ], where y_i is the actual label and P(y_i) is the probability of the data having the actual label, for all N records.
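Continuing the Keras sketch above, compiling and fitting the fused model with these choices could look as follows; the learning rate, batch size, and epoch count are placeholders rather than the tuned values of Table 3, and train_inputs/val_inputs stand for the tokenised WELFake splits.

# Sketch: compile with Adam + binary cross-entropy and train the fused model.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_inputs, train_labels,
                    validation_data=(val_inputs, val_labels),
                    epochs=10, batch_size=32)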
Experimental evaluation
We performed several experiments to evaluate the proposed MCred model and compared it with other baseline approaches using a number of relevant metrics described in this section.
Experimental setup
We implemented the proposed MCred model using sklearn, matplotlib, nltk, and other libraries from the Python 3.9 distribution. We trained our model on workstation with Intel Xeon ® Gold 5222 3.8GHz processor, 128GB 8*16GB DDR4 2933 RAM, 1TB 7200 RPM SATA hard disk and Windows 10 Pro operating system. We trained our model on both Graphics Processing Unit (GPU) and Central Processing Unit (CPU) and estimated the time required for training process. Table 4 shows the required time in seconds on both processing units.
Experimental text dataset
We used the four datasets summarized in Table 5 to implement and evaluate the proposed MCred model. We focus our validation on the WELFake dataset (Verma et al. 2021) that reduces the biases and limitations of the others. The WELFake dataset consists of evenly distributed news text data labeled as unreliable (1) and reliable (0). The dataset contains fields: news identifier, news title, and news text comprising its heading and content. Initially, the news text and title fields contained a few undefined values. Therefore, we combined them and created a new information parameter to reduce the undefined values and increase the number of input tokens for improved model training.
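A small pandas sketch of this preparation step, merging the title and text fields into a single information field and producing an 80:10:10 split, is given below; the column names follow the public WELFake CSV release and should be checked against the copy actually used.

# Sketch: build the "information" field from the WELFake CSV and split 80:10:10.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("WELFake_Dataset.csv")                       # assumed file name
df["information"] = df["title"].fillna("") + " " + df["text"].fillna("")
df = df[df["information"].str.strip() != ""]                   # drop rows left empty

train_df, rest_df = train_test_split(df, test_size=0.20,
                                     stratify=df["label"], random_state=42)
val_df, test_df = train_test_split(rest_df, test_size=0.50,
                                   stratify=rest_df["label"], random_state=42)
print(len(train_df), len(val_df), len(test_df))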
Evaluation metrics
We define four parameters based on the relation between the predicted and the actual news classification, displayed in Table 6: true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN). We evaluated the MCred model on four performance metrics based on these parameters. Accuracy is the ratio between the number of correct predictions and the total number of predictions: Accuracy = (TP + TN) / (TP + TN + FP + FN). Precision P measures the positive predicted value, as the ratio of correct positive predictions to the total number of positive predictions: P = TP / (TP + FP). Recall R measures the sensitivity of the model, as the ratio of correct positive predictions to the total number of actual positive instances: R = TP / (TP + FN). F1-score measures the testing accuracy of the model as the harmonic mean of the precision and the recall: F1 = 2PR / (P + R).
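These four metrics can be computed directly from the confusion matrix, for example with scikit-learn as sketched below on placeholder labels.

# Sketch: evaluation metrics from TP/TN/FP/FN on placeholder predictions.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0]                          # placeholder ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1]                          # placeholder model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy (manual):", (tp + tn) / (tp + tn + fp + fn))   # (TP+TN)/(TP+TN+FP+FN)
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))            # TP/(TP+FP)
print("Recall   :", recall_score(y_true, y_pred))                # TP/(TP+FN)
print("F1-score :", f1_score(y_true, y_pred))                    # 2PR/(P+R)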
Experimental results
In this section, we analyze the results achieved by the MCred model using the tuning process presented in Sect. 4.3 and the parameters in Table 3. We performed our experiments on an 80 : 10 : 10 train-test-validation split. Table 7 compares the results using two dropout values (0.3 and 0.5) on four optimizers: Adam, SGD, RMSProp, and Adagrad. Interestingly, the increase in dropout consistently improved the MCred model performance on all evaluation parameters with the Adam and SGD optimizers, while it compromised the performance with RMSprop and Adagrad. The Adam optimizer outperforms the others due to its combination of RMSProp and Adagrad optimizers to handle sparse gradients on large and noisy data. The Adam optimizer produces better results due to its fewer memory requirements and small learning rate adapted to individual parameters for sparse datasets. The higher dropout improves the overall performance of the MCred model by reducing the validation loss to 1.60% and maximizing the validation accuracy.
Learning curve
We further analyzed the MCred model performance by drawing a learning curve between the training and validation data. Figure 5 shows two learning curves at different epochs: accuracy and loss. Initially, the gap between validation and training data in both curves was very high. After the execution of five epochs, the model reduced this gap and became stable, demonstrating good fit condition (i.e. always between overfitting and underfitting) for two reasons. The gap between training and validation loss is minimum at the stable point.
MCred versus ML models
We compared the proposed MCred model with both ML and DL models, as ML models sometimes also perform better. We implemented LR, NB, DT, RF, and XGBoost models on the WELFake dataset and tuned them on several hyper-parameters as shown in Table 8. Then, we compared their performance with the proposed MCred model. We extracted text features using the GloVe word embedding technique and converted the text into a feature vector. We fed this vector into these models and analyzed their performance in Table 9. The accuracy of the various ML models ranged between 89.46% and 97.65%. Among the five ML models, XGBoost performed best with 97.65% accuracy, followed by RF, DT, NB, and LR. Although XGBoost achieved remarkable accuracy, it was still 1.36% lower than the proposed MCred model. This clearly shows that the fusion of several deep learning methods used in the proposed MCred model improved the accuracy compared to the other ML models.
Comparison of MCred with other DL models
Our proposed model is based on a BERT-CNN architecture, but for the performance evaluation we also compared our model with other deep learning fusions. For this, we implemented two separate models, BERT-RNN and BERT-LSTM, on the same dataset, i.e., the WELFake dataset. RNN (Olah 2021) is different from traditional neural networks because it uses the output obtained from the previous step as the input for the next step, and it also remembers past information. Among the various types of RNN, i.e., one-to-one, one-to-many, many-to-one and many-to-many, we used many-to-one because it is suitable for the classification task; the other configurations are suitable for other tasks like question-answering and machine translation. LSTM (Olah 2021) is a type of RNN designed to overcome the limitations of RNN, such as: (i) gradient vanishing and exploding, (ii) complex training, and (iii) difficulty in processing very long sequences.
MCred model summary
We used the WELFake dataset to classify real and fake news using message credibility. For this, we proposed the MCred model, which is a fusion of two DL methods (i.e., CNN and BERT). Then we implemented five ML models on the same dataset and compared their performance. Further, we implemented two fusions of DL models (i.e., BERT-RNN and BERT-LSTM) and compared their performance with the proposed MCred model. We also compared our proposed model with other recent state-of-the-art works and found that the proposed model outperformed them. The proposed model gives better accuracy, but it has the following limitations: i) The complexity of the self-attention layer at training is O(n²), where n is the sequence length; therefore, the BERT processing layer takes more time for large inputs. ii) The CNN processing layer requires large data to train and is slower because of the maxpool operation. Similarly, at the testing phase, we require properly preprocessed and larger data.
Conclusions and future work
We proposed a new model, called the MCred model, to classify text news as real or fake using the global and local semantic relationships among words. We plan to extend our work in the future with more features based on user credibility, knowledge graphs, and propagation analysis. Image-based news analysis and Deepfake analysis are also in our focus. | 2022-07-30T05:23:40.109Z | 2022-07-27T00:00:00.000 | {
"year": 2022,
"sha1": "7432d739775081894c13743396db8f49a0f22bcd",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7432d739775081894c13743396db8f49a0f22bcd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
248406000 | pes2o/s2orc | v3-fos-license | Strain driven conducting domain walls in a Mott insulator
Rewritable nanoelectronics offers new perspectives and potential to both fundamental research and technological applications. Such interest has driven the research focus into conducting domain walls: pseudo 2D conducting channels that can be created, positioned, and deleted in situ. However, the study of conductive domain walls is largely limited to wide-gap ferroelectrics, where the conductivity typically arises from changes in charge carrier density, due to screening charge accumulation at polar discontinuities. This work shows that, in narrow-gap correlated insulators with strong charge-lattice coupling, local strain gradients can drive enhanced conductivity at the domain walls, removing polar discontinuities as a criterion for conductivity. By combining different scanning probe microscopy techniques, we demonstrate that the domain wall conductivity in GaV₄S₈ does not follow the established screening charge model but rather arises from the large surface reconstruction across the Jahn-Teller transition and the associated strain gradients across the domain walls. This mechanism can turn any structural, or even magnetic, domain wall conducting, if the electronic structure of the host is susceptible to local strain gradients, drastically expanding the range of materials and phenomena that may be applicable to domain wall based nanoelectronics.
Introduction
Extreme miniaturisation of active electronic components, while keeping the complexity of their function, [1] has spurred much interest in low-dimensional objects, including emergent and artificially designed 2D interfaces [2,3] and flakes of 2D materials. [4,5] For instance, naturally occurring structurally sub-nanometer wide interfaces, e.g. domain walls, have been reported to posses the same inherent electronic response as existing circuit elements, such as switches [6] and half-wave rectifiers. [7] In addition, ferroelectric domain walls can be reconfigured in-situ by a variety of external fields which can lead to exotic bulk responses. Such bulk responses offer the opportunity to both enhance existing technology (e.g. magnetoresistance, [8] colossal dielectric constants, [9] memristive behaviour [10]) but also provide next generation functionalities like, negative capacitance, [11] above band gap photovoltaic effects, [12] and domain wall nanoelectronics. [13] Such effects have been discussed from both a fundamental and a technological perspective in recent reviews. [14,15] For all of these bulk responses, the key requirement is for the domain walls to exhibit a different conductivity compared to the surrounding material. Therefore, much of the research has focused on ferroelectrics as the build up of screening charges at domain walls with polar discontinuities are known to modify the local conductivity. [13,14] Examples of this, in single crystals, includes BaTiO 3 , [16] (Ca, Sr) 3 Ti 2 O 7 , [17] Cu-Cl boracite's, [18] LiNbO 3 , [19] h-RMnO 3 (R representing rare earth metals), [20] and GaV 4 S 8 . [8] Because of the energetically costly nature of such polar discontinuities, their spontaneous formations are normally restricted to improper ferroelectrics. [21,22] Indeed, so established is this screening charge mechanism that it is surprising for an improper ferroelectric material, exhibiting polar discontinuities, not to have conducting domain walls. [23] In this scenario, the type of domain wall that is expected to have enhanced conductivity depends on the electronic structure of the host material: In a p-type (n-type) semiconductor the tail-to-tail (head-to-head) domain walls attract screening holes (electrons) and thus provide enhanced conductivity relative to the bulk. [24,25,26] The corresponding head-to-head (tail-to-tail) wall in a p-type (n-type) material is then expected to have reduced conductivity compared to the bulk. [13,21,22,27] Note, in ferroelectrics, particularly thin-films, further conductivity mechanisms have been reported. [6,28,29,30,31,32] But it is challenging, especially in oxides, to disentangle intrinsic effects and those associated with, e.g., enhanced defect density at domain walls, [33,34] which can change surface Schottky barriers and hence conductivity. [7] There have also been reports of domain wall conductivity and even superconductivity in non-ferroelectrics. Examples of the former case include, the antiferromagnetic insulators with conducting magnetic domain walls -attributed to the presence of mid-gap states, [35] or in VO 2 domain walls around the phase transition, as the new phase nucleates preferentially at ferroelastic domain walls (i.e. a metal state on heating, and an insulating state on cooling). [36] Superconductivity has reported to appear at structural twin walls before the bulk transition, [37,38] attributed to the local symmetry of the walls and the corresponding strain coupling.
On the bulk scale it is long established that the conductivity of highly correlated systems, such as Mott insulators, can be changed by changing the inter-atomic distances. Practically, such changes to the inter-atomic distance can be achieved via applied strain, hydrostatic-, or chemical-pressures. [39,40] In this work, we move these bulk concepts to the nanoscale by showing how highly conducting domain walls can be created using local strain gradients, rather than the traditional screening-charge approach. Taking the narrow band gap Mott insulator GaV 4 S 8 as a representative of such highly correlated systems, we use scanning probe microscopy to reveal that the enhanced conductivity arises at domain walls which have a large surface reconstruction, a signature of local strain gradients. This mechanism explains, among other things, the origin of indistinguishable enhanced conductivities at "head-to-head" and "tail-to-tail" domain walls, which cannot be explained via the conventional screening-charge-based mechanism. Our results provide a new mechanism of generating conductivity at domain walls in single crystals, expanding the field of domain wall nanoelectronics to material systems, like Mott insulators, whose band structures are strongly influenced by strain gradients.
The lacunar spinel GaV 4 S 8
For this proof-of-principle study GaV 4 S 8 is an ideal template system: it is characterized by a correlation band gap of ∼0.3 eV and has a strong charge-lattice coupling. [41,42,43] The material crystallizes in a face centered cubic (NaCl-like) structure comprising [V 4 S 4 ] heterocubane entities and [GaS 4 ] tetrahedra, as depicted in Figure 1 a. [44] In lacunar spinels, the strong charge-lattice coupling arises from the Jahn-Teller instability of the heterocubane cluster, [44,45] and in GaV 4 S 8 this drives the ferroelectric transition. [44,46,47] At the Jahn-Teller transition of GaV 4 S 8 , taking place at T JT ≈42 K, the symmetry is reduced from cubic (F43m) to polar rhombohedral (R3m) via an elongation of the V 4 S 4 units along one of the cubic 111 directions. [44] For the four possible domains, P 1 -P 4 , depicted in Figure 1 b, this distortion leads to a ferroelectric polarization of P s = 2.3 µC/cm 2 . [8,48] Due to the lack of ±P domain pairs, 180 • domain walls of neighboring domains cannot emerge in GaV 4 S 8 , where the largest polar discontinuities normally occur. [10,49] The four polarisation directions are illustrated schematically in Figure 1 b, and the origin of the ferroelectricity is discussed in detail in Refs. [42,8] and the references therein. Below T JT , the material undergoes a magnetic ordering at ∼13 K, with several competing magnetic phases, including a Néel-type skyrmion lattice phase. [50] It is also well documented that the electronic structure of GaV 4 S 8 is very sensitive to inter-atomic spacing, and can be modified with either hydrostatic pressure, [51,52] or chemical pressure. [53] 3
Results and Discussion
To understand and correlate the local structural and electronic properties at domain walls in GaV 4 S 8 , we use scanning probe microscopy performed on an as-grown (111) surface of a single crystal. Figure 1 c displays a representative topography image recorded at 30 K. The topography shows a large amount of surface reconstruction below the Jahn-Teller transition, which is manifested as a series of ridges and valleys that zig-zag across the surface. A direct comparison of the topography above and below the transition is given in Figure S1, and some of the implications of such surface reconstruction are described in Supporting Note 1. The corresponding out-of-plane piezoresponse force microscopy (PFM) phase image allows us to identify these regions as different ferroelectric domains, Figure 1 d. Since the image is taken on a (111) surface, the polarization of one of the four domain states, P 1 , is perpendicular to this surface, while the polarization of the other three spans 71° to the scanned surface. Hence, P 1 is assigned to the domain with the largest amplitude in the out-of-plane piezo response, i.e. the brightest areas in Figure S2. Domains with lighter shades of brown correspond to domain states P 2 -P 4 , which can be distinguished by considering the orientation of the mechanically compatible domain walls, [54] as described for GaV 4 S 8 in Ref. [8]. This is summarised by the different colored crosses in Figure 1 d, which follow the color convention of Figure 1 b. The large topographic features, the ridges and valleys, clearly correlate with the ferroelectric domain structure resolved by PFM, with the largest changes in topography arising at {110}-type domain walls. From symmetry arguments these {110}-type domain walls are expected to have polar discontinuities and consequently positive and negative bound charges. We will refer to these walls as "p-type" or "n-type" when information about the charge state is important, and otherwise as "zig-zag" domain walls.
In order to see which of the domain walls are associated with increased conductivity, we collect conductive atomic force microscopy (cAFM) data (with 60 V), Figure 2 a, of the region illustrated in Figure 1. In the cAFM image the bright gold colors are areas of highly enhanced current (values up to 270 nA) and the dark areas are regions of lower current. These areas of enhanced current align with the zig-zag domain wall pattern observed in the topography and PFM images of Figure 1 c and d, respectively. The observed current values, of several hundred nanoamperes, are high for domain walls in ferroelectrics, which typically show values in the pico- to nanoampere range. [13] One of the most striking features of the cAFM data in Figure 2 is that both the head-to-head and the tail-to-tail domain walls, which are expected to possess opposite bound charge, exhibit enhanced conductivity. From the conventional polar discontinuity argument, this would mean that "n-type" and "p-type" charge carriers have the same effect on the conductivity, in contrast to the literature on bulk GaV 4 S 8 [55]. This is also surprising for semiconducting ferroelectrics in general, as typically the walls with an accumulation of majority carriers are expected to have enhanced conductivity, while the walls with an accumulation of minority carriers have a reduced conductivity relative to the bulk. [3,22,27,56] In addition, if adjacent domain walls had majority p-type and n-type charge carriers, the intersection of the walls would form a depletion region. The absence of any such region in Figure 2 a is inconsistent with the polar discontinuity model but consistent with the strain-gradient driven model.
To provide a more quantitative comparison between conduction and topography, we plot cross-sections of both in Figure 2 b, indicated by the grey line in Figure 2 a, and schematically add the topography and ferroelectric polarization vectors. We observe that large current values arise at both head-to-head and tail-to-tail domain walls, accompanied by a surface reconstruction (δh 2 ), discussed below and detailed in Supporting Note 1. The cAFM cross-section shows that the conductive regions are relatively wide, extending ca. 750 nm away from the center of the wall. This non-local nature is consistent with a strain gradient around the wall, which affects the surrounding electronic structure of the material. However, it is inconsistent with the idea of polar discontinuity driven domain wall conductivity, as this is necessarily limited to the wall itself. For reference, in materials where the polar discontinuity driven mechanism is well established, such as the hexagonal manganites, [22,57] Cu-Cl boracite, [18] BaTiO 3 , [58] or BiFeO 3 , [34] the walls typically have an effective electronic width of ca. ∼100 nm.
In order to understand the correlation of topography and conductivity in GaV 4 S 8 further, we consider the large surface reconstruction that arises during the Jahn-Teller transition. To represent these local strain gradients, using an approach inspired by Aizu's work on spontaneous strains across a symmetry lowering structural transition, [59] we compare the changes in topography relative to an assumed surface. This represents the crystal surface had the transition not taken place. Practically, this is achieved by taking the square of the difference in height (δh 2 ) between the topography and a smooth plane, schematically shown by the dashed grey line in Figure 2 b. In order to visualize that the topography/conductance correlation is ubiquitous over the observed area, we present 3D maps of the topography and conductivity in Figure 3. These are the same data sets provided in Figure 1 c and Figure 2 a. This 3D representation shows that the large peaks and valleys in the topography along the {110}-type domain walls (orange lines) are associated with areas of enhanced conductivity (bright colors), while small changes in surface reconstruction at the {100} domain walls (such as those highlighted with blue lines) exhibit no observed change in conductivity.
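δh² can be computed directly from the measured height map. The sketch below is not the authors' analysis code; it assumes the reference ("assumed") surface is approximated by a least-squares plane fitted to the topography, and the array used here is synthetic placeholder data rather than a measured scan.

```python
import numpy as np

def delta_h_squared(height_map):
    """Squared deviation of an AFM height map from a least-squares reference plane."""
    ny, nx = height_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix for the plane z = a*x + b*y + c.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(nx * ny)])
    coeffs, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(ny, nx)
    # The squared deviation serves as a proxy for the local strain gradient.
    return (height_map - plane) ** 2

topo = np.random.default_rng(0).normal(size=(256, 256))   # placeholder topography (nm)
dh2 = delta_h_squared(topo)
```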
To further test the idea that the conductivity is enhanced by local strain gradient induced changes in band structure, rather than the accumulation of screening charges, we consider the system in terms of classical semiconductor-metal interface (a Schottky barrier). In a semiconductor-metal interface the conductivity depends on the respective work functions, in this case the work function of the cAFM tip (e.g. Pt-Si and BdSC-diamond) and that of the sample. Natural, areas with p-type conductivity would have a different band structure to areas of n-type conductivity. Therefore, we performed cAFM and PFM measurements using a metallic Pt-Si tip (Figure 4) in addition to the measurements with a BdSC diamond tip (Figures 1 -3). The topography of the same sample is given in Figure 4 a, and the corresponding cAFM data is presented in Figure 4 b. The topography shows the same structure of zig-zag-domain pattern observed in Figure 1. Two things are apparent: the observed current values are smaller, hundreds of picoamps compared to hundreds of nanoamps (c.f. Figure 2 a), and both zig-and zag-walls have similar conductivity. This suggests that, while the tip work function changes the values of the Schottky barrier (and therefore the measured current), there is no difference in band structure between the zig-and zag-domain walls, as this would change the relative current across the domain walls. While highly problematic for the polar discontinuity driven mechanism, it is consistent with strain gradient induced changes to the band structure, as this would have carriers of the same type at both walls. In order to identify the conductivity mechanism, we collect I(V)-spectroscopy data across zig-and zag-walls. The plotted curves in Figure 4 c, for both types of tip and both types of domain wall, are the average values collected over ten data sets. The current response for each tip is the same for both type of domain walls. We fit these voltage dependent responses with the established models of conductivity in semiconductors: Schottky emission, Fowler-Nordheim tunneling, direct tunneling, thermionic-field emission, Poole-Frenkel emission, hopping conduction and space-charge-limited conduction. Following the methodology of Shih 2014, [60] the I(V) data is plotted in a way that, if the model describes the data well, the plot will be linear. Such a linear behaviour is only observed for the space-charge-limited representation of the I(V) data, Figure 4 d. This does not change with different tip material. The other plots, corresponding to the different models, are given in supplementary Figure S3.
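As a concrete illustration of this linearization procedure, the space-charge-limited model predicts a power-law I(V), so the current plotted against voltage on a double-logarithmic scale should be linear. The sketch below is only an illustration and uses synthetic placeholder data rather than the measured spectroscopy curves.

```python
import numpy as np

# Placeholder I(V) data; replace with the measured spectroscopy curves.
voltage = np.linspace(1.0, 60.0, 50)              # V
current = 1e-12 * voltage**2                      # A, synthetic space-charge-like response

log_v, log_i = np.log10(voltage), np.log10(current)
slope, intercept = np.polyfit(log_v, log_i, 1)    # linear fit on the log-log scale

predicted = slope * log_v + intercept
ss_res = np.sum((log_i - predicted) ** 2)
ss_tot = np.sum((log_i - np.mean(log_i)) ** 2)
r_squared = 1.0 - ss_res / ss_tot                 # close to 1 indicates a good linearization
print(f"log-log slope = {slope:.2f}, R^2 = {r_squared:.4f}")
```

The other candidate models (Poole-Frenkel, hopping, Fowler-Nordheim, Schottky) can be screened in the same way by changing the variables that are plotted before the linear fit.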
To exclude the possibility that the equal conductance of zig- and zag-domain wall segments, as already observed using two types of tips, is an accident caused by a special combination of carrier densities and mobilities, we investigate the temperature dependence of the domain wall conductivity. It is established that even in the simplest semiconducting materials, such as Si -where the electrons can be described as non-interacting particles -electrons and holes have different temperature dependent mobilities; in the case of Si, T^-2.6 and T^-2.3, respectively. [61] In Mott insulators, the electrons are highly correlated and cannot be considered non-interacting. If such classical 'electron' and 'hole' type charge carriers could form, a difference in their respective mobilities is therefore expected. Separate cAFM line scans across a zig- and a zag-wall (see inset) were collected while cooling from 26 K to 4 K, and the data for the zig-wall are given by the waterfall plot of Figure 5 a. We plot the peak current values from this temperature dependence for both walls in Figure 5 b. The temperature dependent conductivity is indistinguishable for the two walls, except around the magnetic ordering transition, suggesting the same charge carrier type at both walls. An anomaly around the magnetic ordering is expected if the walls have different magnetic properties, a discussion of which is the subject of forthcoming work.
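One simple way to quantify and compare the temperature dependence of the two walls is to fit the peak currents to a power law I ∝ T^α and compare the exponents, in the spirit of the mobility power laws quoted above. This is an illustrative sketch, not the authors' analysis, and the values below are hypothetical placeholders rather than the measured data.

```python
import numpy as np

def power_law_exponent(temps_k, peak_currents):
    """Fit I = A * T**alpha on a log-log scale and return alpha."""
    alpha, _ = np.polyfit(np.log(temps_k), np.log(peak_currents), 1)
    return alpha

# Hypothetical peak currents (nA) extracted from cAFM line scans while cooling.
temps = np.array([26.0, 22.0, 18.0, 14.0, 10.0, 6.0, 4.0])
zig   = np.array([210.0, 200.0, 185.0, 165.0, 150.0, 140.0, 132.0])
zag   = np.array([208.0, 199.0, 186.0, 164.0, 151.0, 141.0, 131.0])

print(power_law_exponent(temps, zig), power_law_exponent(temps, zag))
```

Matching exponents for the zig- and zag-walls would be consistent with the same carrier type at both walls, as argued in the text.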
Conclusion
We use AFM topography images to show that a large surface reconstruction arises during the Jahn-Teller transition, which coincides with the formation of ferroelectric domain walls, identified with PFM. The corresponding cAFM images show that both the head-to-head and tail-to-tail domain walls are highly conducting compared to the bulk, and this conductivity is directly proportional to the amount of surface reconstruction squared. Such surface reconstruction is naturally associated with strain gradients, which in Mott insulators, or any other system where the electronic structure is sensitive to strain, will change the band structure and therefore the conductivity. We confirm the strain gradient model, as the temperature-dependent current response of the zig-and zag-domain walls is indistinguishable, even for cAFM tips of different work function. Furthermore, I(V)-spectroscopy identifies that the conductivity is bulk limited and well explained by space charge limited conductivity. This mechanism could turn any structural or even magnetic domain walls conducting, if the electronic structure of the host is susceptible to strain, and so significantly broaden the scope of domain wall physics by expanding the range of materials that could be used for domain wall nanoelectronics.
Experiment
GaV 4 S 8 single crystals were grown by the chemical transport reaction method. Preliminarily synthesized polycrystalline powder was used as the starting material for growth; the polycrystals were prepared by solid-state reactions from the high-purity elements Ga (99.9999%) (Alfa Aesar, Ward Hill, USA), V (99.5%) (Alfa Aesar, Ward Hill, USA), and S (99.999%) (Strem Chemicals Inc., Newburyport, USA). Iodine was used as the transport agent in the single-crystal growth. The growth was performed at temperatures between 800 °C and 850 °C. Well-formed, truncated-octahedron-like samples with dimensions of up to 5 mm were obtained.
The atomic force microscopy (AFM) scans were conducted on the same single crystalline sample at various temperatures using an attoAFM I (attocube systems AG, Haar, Germany) atomic force microscope with Pt-Si tips (PtSi-FM-10, NanoandMore GmbH, Wetzlar, Germany) as well as boron-doped single-crystal diamond (AD-2.8-AS, Bruker France, Wissembourg, France) tips. Local transport data were obtained by cAFM with varying dc voltages, ranging from 1 V to 60 V, applied to the back electrode. Piezoresponse force microscopy (PFM) was conducted with the off-resonant method using frequencies ranging from 19 kHz up to 99 kHz with 10 V excitation voltage.
Supporting Figures
AFM topography images denoting the strong surface reconstruction that arises during the Jahn-Teller transition, is shown in Figure S1. Figure S1 a provides the as-grown (111) surface at 60 K and Figure S1 b the same area below T JT at 30K. The 30 K image shows that cooling through the transition leads to a series of ridges and valleys, running approximately perpendicular to the scratches, with a variable periodicity. Figure S1: a Topography information of the as-grown (111) surface taken at 60 K. Bright colors show higher areas, while darker colors lower areas. b The same area scanned at 30 K, showing a corrugation of the topography into a series of ridges and valleys of ca. 4 nm in height. These scans were performed across two nano-cracks to confirm it is the same area in both images. These scans were collected using a BdSC diamond tip. Figure S2 is the PFM amplitude to the phase image of Figure 2 d of an area with different domains below T JT . The largest piezo response is used to label P 1 , the out of plane polarization according to the (111) surface. Figure S2: a PFM image showing the respective amplitude information to the scan in Figure 2 d. The scans were performed using a BdSC diamond tip. Figure S3 provides the models, Poole-Frenkel, hopping, space-charge, Fowler-Nordheim and Schottky, fitted to IV spectroscopy data (Figure S3 a). A linearization indicates a perfect match, which is achieved for space-charge limited process depicted in the main manuscript (Figure 4 d). For comparison reasons the same data is plotted in Figure S3 d. Figure S3: a IV-curves taken at zig-and zag-domain walls and on the domain, with both boron-doped single crystal diamond tips and platinum tips. b Current flow per E-field is plotted over the square root of the E-field in order to fit the Poole-Frenkel model with a linear fit (red line) [?]. The data is not linearized meaning it is not a good fit. c Logarithmic plot of the current density over the E-field to linearize the data to a Hopping-model, the resulting plot is not linear and so it is not a good fit. The space-charge model (d) is fitted by using a double logarithmic plot of the current flow over the applied voltage, with R 2 =0.9986. For this fit the data is linearized meaning it is a good fit. Neither the Fowler-Nordheim model e which is linearized by plotting the logarithm of the current flow over the squared E-field against the inverse E-field, nor the Schottky model (f), plotted with the logarithm of the current flow over the root of the E-field, give a linear fit. This means the domains and domain walls are well described by the space-charge limited conductivity model. Note: the models from b, c and d are bulk-limited conductivity mechanisms, while the models used for e and f are electrode contact limited models.
Supplementary Notes
Supplementary note 1. Aizu's work on spontaneous strain relates the strain to the difference between the lattice parameters below a phase transition and the extrapolated values of the lattice parameters from the high temperature phase -i.e. the expected lattice parameters if the transition had not occurred.[?, ?] Such spontaneous strain occurs in ferroelastic and coelastic materials.[?, ?] We use an analogous approach but, rather than considering the difference between the lattice parameters, we take into account the difference between the surface reconstruction of the low temperature phase and the assumed surface of the high temperature phase. Any surface reconstruction is energetically costly for the material and it typically forms along with domain walls to compensate for the change in unit cell volume across a phase transition, in order to minimise the absolute change in volume of the crystal. The above concepts allow us to calculate a value for δh 2 which, via elastic continuum theory, is used as a proxy for the local strain gradients. Note: we plot the squared value, rather than any other even power, as it is the lowest order term that describes the symmetry. | 2022-04-28T06:47:08.302Z | 2022-04-27T00:00:00.000 | {
"year": 2022,
"sha1": "0111ec518bf58b12abc62283715d4eee3cb99202",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0111ec518bf58b12abc62283715d4eee3cb99202",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259610395 | pes2o/s2orc | v3-fos-license | Tourists-generated photographs in online media and tourism destination choice: The case of Shiraz metropolis in Iran
Abstract The aim of this study was to examine whether Tourist-Generated Photographs (TGP) and the images tourists share on online media channels can affect tourist destination decisions. The study conducted descriptive-survey research. The statistical population of the research included tourists who traveled to this city on 15 May 2022 (Shiraz National Day). Simple random sampling was used to determine the research sample, and questionnaires were distributed among 415 tourists in six five-star hotels in Shiraz. The data collection tool was a researcher-made questionnaire that evaluates each dimension with multiple questions. According to the results, four factors, namely multidimensionality, tourist involvement, uniqueness, and stimulating emotions by the photo, can have a significant effect on the impact of tourist-generated photography. Findings also revealed that three factors, service quality, media influence on tourists, and destination-specific attractions, have a substantial impact on tourists' destination choices. In the post-pandemic era, cyberspace and its tourism function will be of great value, and pictures are the way tourists communicate with tourism and travel. Also, tourists' photos in cyberspace can help destination marketing. The research findings emphasize that the tour operators of the Shiraz metropolis can take greater advantage of cyberspace and better develop their tourism industry for post-pandemic travel.
PUBLIC INTEREST STATEMENT
The article "Application of Tourist-Generated Photography in Tourist' Destination Choice: The Case of Shiraz Metropolis in Iran" highlights the importance of Tourist-Generated Photography (TGP) and its impact on tourists' destination decisions. The research results showed that TGP factors such as multidimensionality, tourist involvement, uniqueness, and stimulating emotions have a significant effect on the impact of tourist-generated photography. Additionally, service quality, media influence, and destinationspecific attractions were found to have a substantial impact on tourists' destination choice. In a time where the tourism industry is facing a crisis, the use of cyberspace and TGP can play a significant role in managing tourists' psychological needs. The findings of this research suggest that tour operators in Shiraz metropolis can take advantage of cyberspace and develop their tourism industry to attract post-pandemic travelers. The study's implications can be useful for policymakers, tourism managers, and stakeholders to promote their destinations and enhance tourists' experiences.
Introduction
During the COVID−19 pandemic era, online media assumed a more important role in the life of tourists, so it became an integral part of tourists' lives (Mousazadeh, Ghorbani, Azadi, Almani, Zangiabadi, et al., 2023). During the quarantine times, tourists were only in contact with destinations using online media, and in this situation, a close relationship between tourists and the media was formed . In other words, the media was the only link between the destination and the tourist during the pandemic, and now it can affect many tourists' preferences and destination choices . On the other hand, the number of travel bloggers based on smartphones has also increased, and tourists are exposed to more quality and multi-dimensional images of diverse and new destinations, which can be effective in choosing a destination (Morabi Jouybari et al., 2023). Tourists photograph whatever seems interesting to them such as landscapes, nature, people, and things that correspond with thoughts of otherness and anything eye-catching, charming, or passionate (Höckert et al., 2018). According to Rezaei et al. (2018), Information sources and destination image, information sources and motivation and travel experience, and destination image are related. Tourists are willing to share their travel pictures in online media along with interesting narratives. These photos are usually accompanied by exciting narratives that usually discuss the routes to reach the destination, risks and challenges, unique features, and the culture of a destination. Simultaneous sharing of photos and travel narratives can introduce the destination more clearly to other tourists (Mousazadeh, Ghorbani, Azadi, Almani, Zangiabadi, et al., 2023). Similarly, a lot of studies have shown that tourism and photography are inherently interlaced in such a way that they cannot be easily separated, and camera lenses have captured notable segments of tourist experiences as well as practices for a long time (Conti & Lexhagen, 2020;Qin et al., 2019;Suharyanto et al., 2020). In line with this notion, the emergence of online media has sped up the dissemination of travel experiences causing a new phenomenon known as smart marketing which is identified as interchanging and communicating a great amount of travel-related information by tourists through online media (Ghorbani, Danaei, Barzegar, et al., 2019). Owing to this, tourists have a chance to share their travel experiences online, using electronic word-of-mouth (e-WOM) (Kimmel & Kitchen, 2014). Social media content, which is generated and shared by public individuals, rather than paid promoters, through online communication channels is known as "user-generated content" (UGC) (Santos, 2022). Likewise, travel-related content produced and uploaded on social media by tourists is called "tourist-generated content" (TGC) (Sun et al., 2015). In line with this, Terttunen (2017) asserted that visual elements constitute a great segment of traveling and are included in almost every social media platform, and photographs as one of the most impressive visual contents shared via online media, have become a focal point of so many researchers as well as marketers to investigate the characteristics of an influential image which impact on travelers' destination choice and can lead them to find new strategies in this context. 
Thus, this paper aims to survey the impact that TGC, particularly in the form of photographs (i.e., TGP), has on tourists' behavior in choosing a destination after being shared through various channels of communication. It can contribute significantly to a sustainable income (Faraji et al., 2021). Iran has suffered from significant negative imagery in tourist-generating markets for decades, making this process even more complex. Old photos republished in cyberspace by travelers suggest that tourists travel in their memories, even non-physically (intangibly), when they share their diaries of past trips in the form of photos at times when they are not in a condition to travel (Chilembwe & Gondwe, 2020). It also gives new insights to governments as well as the public on how to utilize emerging technologies more wisely and make the most of such advances for the post-pandemic period (Puspita, 2020). This study provides the opportunity for prospective tourists to choose "Shiraz" as their future destination by observing online pictures of this beautiful historical city shared by previous visitors. The most important novel aspect of the current research is that it examines the characteristics of non-advertising photos that can change tourists' preferences in choosing a destination; it seeks to identify which characteristics non-advertising photos should have in order to change tourists' preferences regarding choosing or changing destinations. Therefore, the present study pursues the following questions: _Are non-advertising photos published in online media effective on tourists' destination preferences?
_What is the most important feature of photos shared in online media that affects destination preferences?
Also, the final goal of the research is to present the relevant model.
Photography, tourism, and Tourism Destination Images (TDI)
Travel and photography have been tied for a long time together to such an extent that their separation seems impossible (Nikjoo & Bakhshi, 2019), and the old saying "a picture is worth a thousand words" is most sincere in the context of travel (Jenkins, 2003). "Urry" perceives that a "tourist gaze" creates and mediates tourism. He believes while all feelings such as seeing, smelling, hearing, and touching might be involved in the tourist's offline experience throughout the journey, the act of gazing (photographing) is the essence of this experience (Li et al., 2022). Photographs verify the presence of a tourist at a specific destination (Nikjoo & Bakhshi, 2019). Sharing their travel photos gives tourists the chance to produce more meaningful travel experiences (Taylor, 2020), which can be widely accessed by an audience and provoke in-depth emotional feedback (Balomenou & Garrod, 2019), thereby creating a favorable destination image (DI) in the minds of prospective travelers. "Tourism Destination Image (TDI)" or simply "Destination Image (DI)" is perceived as a totality of attitudes and feelings an individual forms based on the information about a destination from different sources at the time, which develops the mental portrayal of it (Gravari-Barbas et al., 2016). TDI is a notion that is of great importance in tourism studies (Hahm et al., 2018), and has been emphasized by so many researchers (Liang & Luo, 2019;Tan & Wu, 2016;Wang et al., 2021). TDI includes three discrete but hierarchically interdependent components: "cognitive", "effective", and "conative" (Wang et al., 2021). They respectively refer to an individual's belief or information of attributes or features of a particular destination (cognitive) (Balomenou & Garrod, 2019), the feelings or sensations a person links to a specific destination (affective) (Ghorbani et al., 2015), and the last one is generated by the contribution of the two formers to building the final "tourist's behavior" (conative component) (Sparks et al., 2013). In this context, Jenkins (2003), suggests a concept called "the circle of representations by illustrating" presenting that there is a link between tourist destination images as symbolic photos and images taken by tourists as actual photos; both sources of photographs are informative and have an impact on each other. The "circle of representation" depicted in Figure 1 illustrates that the pictures of the destination presented by mass media may affect individuals' beliefs or perceptions and alter their behavior to travel to that specific destination and capture photos similar to those shown by the media (Balomenou & Garrod, 2019). Hence, a "hermeneutic circle" develops through following and recapturing the images by displaying them to family and friends to prove their visit. This circle is another factor forming and determining a positive TDI in the minds of potential travelers (Hahm et al., 2018).
TDI has also been constantly studied in scientific investigations on tourism (Perpiña et al., 2019). Although the main focus has been more on those images of the destination that are published for the purpose of marketing, and less attention has been paid to the images published by nonstakeholders. This is while the images published by non-beneficiaries of the destination can update the knowledge and information of online media users (Pan et al., 2021). Consequently impacting the final decision-making by individuals. Despite its universal availability and popular use, photography has had a somewhat ambivalent relationship with research methodology. Its potential for serving in academic research had been documented as early as the 1830s. Photographs have since been routinely employed by scientific researchers, not only as a means of collecting and cataloging data but also as furnishing proof of the findings from the analysis of such data (Ghorbani, Danaei, Barzegar, et al., 2019).
Tourist-generated content and tourist-generated photographs
In former times, travel photo-sharing used to be done almost personally, restricted to a close circle of relatives and friends (Singh & Srivastava, 2019), usually accompanied by a verbal narration related to the story of each photograph. However, with the advent of digital photography, a basic change was founded in every aspect of travel photography as well as tourism photo-sharing procedure (Prideaux et al., 2018), resulting in the audience expanding beyond customary friends and family to a large number of acquaintances and strangers. Over the past decades, information and communication technologies (ICTs) along with the rapid growth of web-based communication channels have altered people's lives to a large extent (Xia et al., 2018). Likewise, in tourism, by the transmission of their travel experiences through various types of media content including images, tourists supply potential travelers with information and other travel-related products or services (F. X. Yang, 2017). As a result, users who were traditionally passive receivers and powerless consumers of one-way tourism marketing communication have now turned into active contributors of media content since the end of the 20th century (Mak, 2017). The travel experiences shared in the form of visual images, comments, and reviews are accessible to other potential users of online communication channels (Ho et al., 2015). Such media content produced and published online by users outside professional practice is called "user-generated content" (UGC). In tourism, UGC is also termed "tourism-generated content" (TGC) or "travelrelated content" (Mak, 2017), as a great amount of shared content is done by tourists during or after their travel. The significance of UGC (or TGC) in tourism as a powerful tool for gathering information and helping tourists make travel-related decisions (Ukpabi & Karjaluoto, 2017), has to be considered for the following reasons. Firstly, as tourism is a hedonic experience, travelers desire to make the best out of their travel experience by making the right travel decisions with the help of online TGC available on online social media. Secondly, tourists who do not know about a destination before visiting, trust the information and experience of other tourists for making the best choices as information from other users seems to be more trustworthy than promotional content (Ukpabi & Karjaluoto, 2017). Indeed, UGC (or TGC) influences tourists more than promotional content because they believe that it mirrors the actual performance of tourists while traveling. Moreover, TGC has advantages like quick availability, rapid and constant updating, and flexibility for revision according to the needs of users (Timoshenko & Hauser, 2019). Similarly, users (including tourists) utilize UGC and TGC in various ways assessing costs and quality of products and services discovering the best destinations, food and entertainment and booking hotels or selecting desired accommodation before their travel. Recent research has shown that UGC has gradually accredited in social media (Nechita et al., 2019). Social media also provides people with opportunities to interact and constitute communities of common interests through sharing of media content even with unknown online users (Zhu et al., 2016). This practice most likely influences attitudes and beliefs about social standards and, consequently, their behavior (Batat & Prentovic, 2014). 
Yet, in their studies, many researchers claimed that when compared with photos, text cannot completely convey feelings behind seeing, listening, and smelling experienced by a tourist in a destination; a photograph can convey multilayered senses through its visual features. Moreover, while analog photography had limited users due to the prohibitive cost of film and printing process, and a delay between capturing and sharing of photographs, digital photography, on the other hand, has removed such barriers and paved the way for a low-cost simple storing and an easy-to-modify and significantly faster sharing practice (Prideaux et al., 2018). Moreover, the emergence of smartphones equipped with high-resolution cameras has hastened the process of photo-sharing with the instant transmission of photographs globally. This phenomenon of taking and sharing photographs through new technologies has thrown open research possibilities in understanding the link between photography and destination image and destination promotion. Nonetheless, a survey regarding this particular format of UGC, i.e., photographs captured and disseminated by tourists known as "tourist-generated photographs" (TGP) as an influential factor in promoting and developing new tourism, is still missing. Hence, this study has been done to evaluate the impact of TGP on tourists' destination choice; the importance of this paper is due to the major role photographs play in the tourism economy as a vital medium in promoting TD (Gravari- Barbas et al., 2016). The tourism sector has been going through various types of crises from time to time since it is highly susceptible and influenced by external factors (e.g., environmental crisis, including geological and extreme weather events; societal and political crisis, including riots, crime waves, terrorist acts; health-related crisis, such as disease epidemics; technological crisis, including transportation accidents and IT system failures; economic, such as major currency fluctuations and financial crises). Irrespective of nature (whether it is human-made or natural) or scope (regional or global) crises always create downtime for tourism. The effects of this pandemic are still revolve around uncertainty and estimated to have a long-term impact (Mousazadeh, Ghorbani, Azadi, Almani, Zangiabadi, et al., 2023;Özdemir & Yildiz, 2020).
Study area
Shiraz is one of the most populous cities of Iran and the capital of "Fars Province" located in the southwest of Iran ( Figure 2). In addition to its nice climate, "Shiraz" is globally popular for its rich culture and long history, literature, and poets' treasures, various spectacular places, and remarkable monuments. Another great point about "Shiraz" which made it a popular tourist destination is its adjacency to "Persepolis" (Takht-e-Jamshid). This city is one of the main and world-famous touristic cities in Iran which has a variety of unique historical, cultural, and natural tourist attractions, Shiraz has become a popular tourist destination for domestic and foreign tourists. Edward Browne (1893), in his book "A Year Amongst the Persians", described Shiraz (Shirazes) as follows: "In Iran, Shiraz, which is known for strengthening the region in terms of historical structure as the most delicate, most innovative, everyone seems to like Shiraz".
Moreover, as the largest city in southern Iran and one of the most beautiful metropolises in the Middle East, "Shiraz" has extensive potential for being a tourism destination, from literary tourism (Shaykh-Baygloo, 2021) to medical tourism (Gholami et al., 2020). Therefore, it hosts a large number of tourists from all over the world every year and will be a good population to study.
Sample
The statistical population of the research consists of tourists who have chosen the Shiraz metropolis as their destination. A simple random method was used to select the statistical sample of the research, and 415 tourists who stayed in 5-star hotels in Shiraz during Shiraz Day participated in the interview (Please see Table 1). Ardibehesht 15 is called Shiraz Day in Iran, and every year on this day programs are organized for tourists and residents of this city by tourism organizations.
Research instrument and data analysis
The data were collected using a researcher-made questionnaire. The questionnaire contains 28 questions designed in the form of a 7-point Likert scale, with four questions designed for each variable. The questionnaire was distributed to the participants on 15 May 2022. Meta-synthesis and an experts' Delphi method were used to determine the raw and final variables. In the Delphi process, the raw indicators were screened in three stages to obtain the final indicators. According to the initial agreement at the beginning of the work, an indicator was retained if at least 50% of the experts gave the same answer to one of its options. The final agreement was reached in the third round with at least 55% similar responses, and the final variables were obtained. Finally, given the sample size and the non-normal distribution of the data, PLS software was used for data analysis.
After finalizing the factors, the hypotheses were designed according to Table 3:
Correlation of research variables
The Pearson correlation between the impact of Tourist-Generated Photography (TGP) shared in cyberspace and Tourist Destination Choice (TDC) was calculated (Table 4). The results showed a high correlation (more than 0.5). A correlation above 0.5 indicates a high correlation between research variables (Akgün et al., 2020).
Convergent Validity (CV)
CV is a quantitative measurement that shows the degree of internal correlation and alignment of the measurement items of a category. The concept of questionnaire validity answers the question of how well the measuring instrument measures the desired attribute. Whenever a construct (latent variable) is measured on the basis of multiple items (observable variables), the correlation between its items can be examined by convergent validity. If the correlation between the factor loadings of the items is high, the questionnaire has convergent validity. This correlation is essential to ensure that the test measures what needs to be measured. To calculate convergent validity, the Average Variance Extracted (AVE) should be calculated. In simple words, AVE shows the degree to which a construct is correlated with its indicators, and a higher correlation means a higher fit. Fornell and Larcker (1981) hold that convergent validity exists when the AVE is greater than 0.5 (Ghorbani, Danaei, Barzegar, et al., 2019).
Calculation of CV (Convergent Validity) in Smart PLS
The principles of calculating convergent validity in PLS software, and the partial least squares technique itself, are fixed; however, unlike LISREL, this software reports the AVE value directly, so it does not need to be calculated manually. To obtain convergent validity in PLS software, one can simply refer to the software output.
CR (Compound Reliability)
CR stands for Composite Reliability. Convergent validity exists when the CR is greater than 0.7. CR must also be larger than AVE. When these conditions hold, convergent validity is established (Table 5).
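Although SmartPLS reports AVE and CR directly, both can be reproduced from the standardized outer loadings with the standard formulas (AVE as the mean squared loading; CR as composite reliability). The sketch below is illustrative only; the loading values are placeholders, not the study's estimates.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1.0 - lam ** 2))

loadings = [0.72, 0.81, 0.77, 0.69]   # placeholder loadings for one four-item construct
print(ave(loadings) > 0.5, composite_reliability(loadings) > 0.7)   # convergent validity checks
```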
Subsidiary Hypotheses
The impact of tourist-generated photography shared in cyberspace (TGP)
H1
Multidimensionality of the photo has a significant effect on the impact of user-generated photography and sharing it in cyberspace.
H2
Tourist involvement of the photo has a significant effect on the impact of user-generated photography and sharing it in cyberspace.
H3
The uniqueness of the photo has a significant effect on the impact of user-generated photography and sharing it in cyberspace.
H4
Stimulating emotions by the photo has a significant effect on the impact of user-generated photography and sharing it in cyberspace.
Tourist Destination Choice (TDC) H5 DSQ has a significant impact on tourist destination choice.
H6
DMIOT has a significant impact on tourist destination choice.
H7
DSA has a significant impact on tourist destination choice. In short, the following relationships must be established for convergent validity (Ghorbani, Danaei, Barzegar, et al., 2019).
Data analysis
Factor Loading (FL): The factor loading indicates the relationship between observed and latent variables. FL always lies between zero and one. The relationship is regarded as weak when FL is less than 0.3 and as desirable when FL is higher than 0.6. According to Table 6, the level of FL is appropriate (Mousazadeh, Ghorbani, Azadi, Almani, Zangiabadi, et al., 2023), where 0.3 ≤ FL < 0.6 is acceptable and 0.6 ≤ FL is desirable (Ghorbani, Danaei, Barzegar, et al., 2019).
In the present research model, according to Table 6, all the coefficients of the factor loadings of the queries are greater than 0.5, meaning the variance of the indices with their corresponding constructs is acceptable.
Divergent validity (DV)
Divergent validity describes evidence that measures of different constructs are distinct (please see Table 7); these measures should not be highly correlated with each other. In practice, divergent validity coefficients should be smaller in magnitude than convergent validity coefficients (Gearhart, 2022).
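A common operationalization of this requirement is the Fornell-Larcker criterion: the square root of each construct's AVE should exceed that construct's correlations with all other constructs. The following sketch assumes the AVE values and the construct correlation matrix are already available; the numbers are placeholders, not the study's results.

```python
import numpy as np

def fornell_larcker_ok(ave_values, construct_corr):
    """True if sqrt(AVE) of every construct exceeds its correlations with the others."""
    sqrt_ave = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.abs(np.asarray(construct_corr, dtype=float))
    np.fill_diagonal(corr, 0.0)                      # ignore self-correlations
    return bool(np.all(sqrt_ave > corr.max(axis=1)))

ave_values = [0.61, 0.58, 0.66]                      # placeholder AVEs
corr = [[1.00, 0.52, 0.48],
        [0.52, 1.00, 0.55],
        [0.48, 0.55, 1.00]]                          # placeholder construct correlations
print(fornell_larcker_ok(ave_values, corr))
```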
Hypothesis testing
According to Table 8, the effect of tourist-generated photography on the destination choice of tourists in the Shiraz metropolis has a coefficient of 0.743. In conclusion, we say that with 95% confidence, the TGPs published in cyberspace have a positive impact on attracting tourists to the Shiraz metropolis:
Designing the final research model
After examining the hypotheses, the final research model was designed in Figure 3:
Model Goodness of Fit (GOF) Testing
The final step of Structural Equations Modeling (SEM) is to calculate the Goodness of Fit (GOF) of the model. It measures the overall goodness of fit for both the structural and measurement models collectively (Luo et al., 2022). The GOF is calculated with the following formula:
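The formula itself is missing from the extracted text. The expression below is the standard PLS goodness-of-fit index (Tenenhaus et al.), which we assume is the one intended, where the first term under the root is the average communality (AVE) across constructs and the second is the average R² of the endogenous constructs:

GOF = \sqrt{\overline{\mathrm{AVE}} \times \overline{R^{2}}}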
Discussion
With the development of technology in developing countries like Iran, the number of online media users and smartphones has increased significantly. On the other hand, the average age of users has also decreased greatly. Also, the number of travel bloggers whose aim is to introduce unknown destinations and access routes is also increasing. Users in different age groups are faced with an onslaught of images and content from different destinations, which can change and replace the destination. Destination planners and stakeholders should not be unaware of the effect of content produced in online media (Ghorbani et al., 2021). To investigate this issue, the main purpose of the present study was to examine the role of tourist-generated photography (TGP) in tourists' destination choices in the Shiraz metropolis, Iran. For better measurement of the research subject, two groups of variables were prepared, and accordingly, the conceptual research model was designed. The first group of variables is related to effective photography criteria in tourism, and the second group of variables is concerned with the criteria for selecting a destination by potential tourists. The criterion was based on the images shared on five social media (Instagram, Facebook, Google+, Pinterest, and Flicker). Based on the software output in Tables 8 and 9, all hypotheses were confirmed (β = 0.743, p < 0.05). The hypotheses consisted of one main hypothesis and seven sub- hypotheses to better assess the issue. In other words, potential tourists in cyberspace who see pictures of the tourist attractions of Shiraz decide to travel to Shiraz in the future after the lockdown ends. Thus, the more a person trusts social media for tourism information, the more he/she gets involved in this media by sharing travel information. This means all the tourist images published from Shiraz in cyberspace can attract potential tourists to this city. As shown in Table 3, in addition to the main hypothesis, the sub-hypotheses of the research fall into two groups. The sub-hypotheses of the first group are related to the characteristics of tourism photography, and the sub-hypotheses of the second group are related to destination selection factors. According to the first hypothesis in Table 9 (P < 0.05), the multidimensionality of the photo has a significant effect on the impact of tourist-generated photography shared in cyberspace. According to Bull (2020), tourists respond better psychologically to photos that have different dimensions and different elements that should be observed in tourism marketing through photography. The second hypothesis assumes the tourist's mental involvement considers the features of an effective tourist's photo. In other words, the photo should psychologically fascinate the tourist's mind, in such a way that pushes the tourist towards the destination. Hereby, Akgün et al. (2020) argued about enduring involvement in travel motivation and the role of photography in this regard. The third hypothesis, according to Table 4, discusses respect to the uniqueness of the photo. Similarly, studies show that unique photographs have always attracted humans (Akgün et al., 2020). The fourth hypothesis-the last hypothesis of the first group's hypotheses-evaluates the stimulating emotional feature of a photograph. It focuses on the mental processing of the image in the mind of the tourist. 
Based on the related result, the photo should stimulate the tourist's feelings, and provoking the tourist's feelings is a key factor in the formation of a successful tourism destination image (Ernawadi & Putra, 2020). Hypotheses 5, 6, and 7, which are placed in the second group, examine the factors influencing the choice of destination by tourists. The fifth hypothesis considers the quality of service at the destination as one of the factors affecting the tourist's destination choice. In line with this, Hong et al. (2020) argue that service quality is significantly related to destination reuse. In the sixth hypothesis, the marketing power of the destination media was examined. Accordingly, having powerful tourism marketing social media has a significant impact on followers' intention to visit a promoted destination (George, 2020). According to the results, the more skillful people are in using online platforms and the more they trust social media for tourism information, the greater their use of social media for sharing comments, photos, videos, and reviews related to visited destinations. Thus, the role of tourists might be transformed towards one that involves spreading authentic travel experiences. One implication could be that tourist service suppliers might benefit from the amplitude of the information dissemination process regarding their offers. At the same time, they become aware of the consequences generated by tourists' negative experiences and try to avoid them by offering higher-quality services and honestly dealing with the forms of dissatisfaction expressed on social media. Finally, the seventh hypothesis highlights the role of the destination's specific attractions captured in tourists' photographs in attracting tourists. With this knowledge, tourism managers and operators at the destination can prepare specific plans in advance to prevent over-crowding in attractions and ensure visitors' safety and pleasure. Parallel to this, L. Yang et al. (2020) asserted that there are many attributes associated with a specific tourist destination. The research findings emphasize that the tour operators of the Shiraz metropolis, one of the most attractive cities in the region, can take greater advantage of cyberspace to develop their tourism industry for post-pandemic travel. The findings offer useful practical implications. Because this study is set in the general context of infectious diseases, its implications are not restricted to the present COVID-19 outbreak but apply to a broader context. When there is a destination-specific or global outbreak of infectious diseases, such as COVID-19, there is a threat to tourists and the tourism industry, the magnitude of which depends on the scale and severity of the epidemic. The role of photos shared by tourists in cyberspace in maintaining the relationship between society and the tourism industry and reducing psychological effects during the quarantine period could be a good subject for future research. In addition, this research makes a significant contribution to the tourism literature and presents important concepts for DMOs in the field of destination marketing. It also enables destination marketers to explore important factors for attracting more tourists by using appropriate photo-sharing channels to effectively reach future tourists.
Conclusion
With the emergence of digital technology and smartphones, and the concomitant improvement in the quality of accessible cameras and photography, photo dissemination has become an integral part of tourism. It is well understood that tourists capture images with various motivations. In postmodern tourism, walls and borders have collapsed and pictures are observable from all over the world thanks to "hashtags". Tourists write exaggerated captions on their photographs, inviting other tourists from across the globe. The pervasive power of photography in the travel industry is such that tourists, even in these times of pandemic and lockdowns, review and disseminate memories of their past travels by sharing their old photographs in cyberspace, somehow conveying their desire to revisit a location. The Shiraz metropolis, the largest city in southern Iran and one of the most beautiful cities in the Middle East, hosts many tourists from all over the world every year. Furthermore, Shiraz is currently the center of medical tourism in the field of cosmetic surgery, such as dental tourism and hair-transplant tourism, in the region; thus, tourism officials of the Shiraz metropolis should pay more attention to related content shared in cyberspace, because many inbound and outbound tourists travel to this destination guided by the Shiraz hashtags. The World Tourism Organization (UNWTO), in its advice to tourists at the time of the COVID-19 outbreak, noted that in the current situation pictures are a bridge of communication between tourists and travel, and emphasized that tourists' photos in cyberspace can help the psychological management of tourists during the quarantine period. Therefore, the role of photos shared by tourists in cyberspace in preserving the relationship between society and the tourism industry, and in reducing psychological effects during the quarantine period, can be a good subject for future research. Moreover, the research makes an indicative contribution to the tourism literature, suggests important implications for DMOs in the context of destination marketing, and empowers them to investigate significant factors for attracting more tourists by utilizing proper photo-sharing channels to reach prospective tourists effectively. In addition, further investigations can address supportive UGC or TGC elements such as narrative captions, reviews and comments, to gain a deeper understanding of tourists' behavioral preferences.
Beyond this, the current study gives tourism scholars insight for surveying the specific emotional behavior of tourists when opting for their destinations. Finally, local distinctiveness is a changeable feature: the study suggests that marketers can creatively adjust a destination's characteristics and uniqueness according to consumers' needs and pleasures. In this regard, the types of services that travelers care about most can be treated as distinctive features of a destination. As with most exploratory surveys, some limitations can be pointed out to guide future studies. First, the current study was conducted during the COVID-19 outbreak, so there are limitations regarding the homogeneity of the research population: the sample consisted of tourists who traveled to Shiraz, and many regular travelers from other parts of the world were undoubtedly absent during this period. Second, as the sample was selected without consideration of gender, age, educational level, nationality, etc., future research can consider such socio-demographic factors to obtain more precise results and conclusions. Third, the study investigated only one format of UGC, the photograph (i.e., TGC), so future studies can evaluate several factors together or comparatively, especially comments and reviews supporting an online image shared on social media. Finally, as the authors' observations were based on five social media channels, future research can produce firmer indications by investigating more online media channels such as Instagram, WhatsApp, Telegram, etc.
Research limitations and future research suggestions
The re-emergence of the pandemic in Iran coincided with the conduct of the research and created restrictions on visiting hotels and interviewing tourists; for this reason, many tourists did not want to be interviewed by the researchers. In addition, some hotels had restricted visits by non-residents because of the epidemic conditions. Given these conditions, one promising avenue for future research is to investigate the importance of non-advertising images published in virtual space for marketing and introducing the destination. The role of these images and of tourist-generated content about the destination in tourists' re-choice of the destination is also a practical suggestion. This issue matters to stakeholders and destination planners because it can greatly affect their marketing activities. In this context, topics such as the harmonization of destination marketing activities with the content shared in online media can be a good suggestion for destination studies, because tourists usually compare the content of marketing activities with other tourists' feedback before choosing a destination, and one of the most important forms of this feedback is shared images. | 2023-07-11T18:00:43.844Z | 2023-06-23T00:00:00.000 | {
"year": 2023,
"sha1": "f88b4170fb213fd8940c55903d0d0e9cda8ece59",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311886.2023.2225336?needAccess=true&role=button",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "9dcd173ee5c24c97efdd68fa371aca45cf8efc4d",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
225604461 | pes2o/s2orc | v3-fos-license | DESIGN, PRODUCTION AND IMPLEMENTATION OF LEARNING ENVIRONMENT BASED ON WEB, LEBW DISEÑO, PRODUCCIÓN E IMPLEMENTACIÓN DEL AMBIENTE DE APRENDIZAJE BASADO EN WEB, AABW
This work arises from the need to contribute to science education through ICT. A web-based environment was designed that embeds a didactic strategy of metacognitive scaffolding, developed in the context of learning certain contents of textual typology related to the natural sciences. The research employed a quasi-experimental 2x3 design with previously formed groups belonging to two regular courses of a private school in the city of Bogotá. The experience consisted of exposing groups of students to a web-based environment aimed at strengthening reading comprehension and scientific skills. The results provide teachers, therapists and the educational community in general with elements regarding the use of metacognitive scaffolding in a web-based environment as a teaching strategy for the development of metacognitive processes and learning achievement. Such an integrated strategy, together with the students' cognitive style in the DFI dimension, could predict learners' academic success when they interact with such environments.
INTRODUCTION
The changing dynamics of today's world pose different challenges, requiring people capable of developing thought processes that allow them to approach, understand and solve different problems. In a society that privileges information management, it is very important to promote the development of reading competence, emphasizing the interpretative and constructive character that this exercise requires; reading implies not only recognizing and interpreting different sign systems but also reflecting on our own thinking and becoming aware of the processes involved in carrying out a task, in order to strengthen critical, flexible, reflective and creative thinking (Cerchiaro, Paba, & Sánchez, 2011), (Cantillo, De la Hoz, & Cerchiaro, 2014).
In this sense, it is important to understand that there are two interdependent reading processes: a) the recognition of signs and characters, and b) the macro-process of reading comprehension. As Snowling & Hulme affirm: "(...) reading competence goes beyond the domain of word recognition skills or reading fluency, and a strategic command of the high-level cognitive processes involved in reading comprehension is also necessary for achieving a deep interpretation and understanding of the text" (in Gundín, Fidalgo, & Robledo, 2012).
There is thus a close link between reading comprehension and metacognitive activity, understood here as an individual's ability to exercise control over their thoughts and to remain aware of them, in order to integrate their knowledge with the concrete demands of a changing world in which information management and the capacity for quick and effective responses are privileged (De Corte, 1999).
In line with the foregoing, at the national level the critical reading component was included, from the second semester of 2014, in the Saber 11 test applied to final-grade students across the country. This component aims to strengthen the evaluation of interpretative and logical reasoning capacities based on a text and to move beyond merely declarative knowledge. It involves the development of different competencies on the part of students: to approach a text critically, they must first understand the local units of sense; second, they must integrate that information to give a global sense to the text; and third, once the previous two stages have been accomplished, they must take a critical stance toward the text, reflecting on its content (ICFES, 2013).
However, following Herrera (2005), developing the skills to think critically and creatively, to be flexible in the ways and methods of observing reality, and to give answers that are effective in their application, skills embedded in metacognition, is a great educational challenge. In that sense, Tamayo (2016) states that metacognition "is still considered fundamental not only in the processes of teaching and of learning but, in addition, in the constitution of critical thought".
On the other hand, reading comprehension competence has shown great difficulties in international tests; considering the performance of students from the eight Latin American countries, they remain far from the quality standards defined by the Organisation for Economic Co-operation and Development (OECD), which, through the PISA test, evaluates students' competencies in mathematics, reading and natural science. For reading, the results of the Colombian students evaluated in 2012 report that 51% of the students did not reach the basic level of competence, and 31% were placed at level 2; this means that three out of ten Colombian students can detect one or more fragments of information within a text; in addition, they recognize the main idea, understand relationships and build meanings within texts that require simple inferences, and can compare or contrast on the basis of a single feature of the text. At levels 5 and 6 there are only 3 out of every thousand young people, who can make multiple inferences, carry out detailed and precise comparisons and contrasts, demonstrate a broad and detailed understanding of one or more texts, and make a critical assessment of a text whose content is unfamiliar (ICFES, 2013, p. 9).
By 2015, Colombia had improved its statistics in reading comprehension, although it still lags behind first-world countries; that year the figures still did not place it on a par with countries such as Mexico and Turkey, reaching 425 points, 22 more than in 2012. The most problematic result is that 43% of Colombian minors do not exceed the OECD's minimum standards in this area (OCDE, 2016). Therefore, reflections arise from the academic field, specifically from teaching activity with high school and university students, highlighting the difficulties that young people show in developing reading competence, and generating interest in knowing the variables that affect the process of reading comprehension in digital format.
Metacognition, as a mental activity by means of which other processes or mental states become the object of reflection, constitutes an important variable in the process of reading comprehension. According to Cerchiaro, Paba, & Sánchez (2011) and Cantillo, De la Hoz & Cerchiaro (2014), two key metacognitive components regulate reading comprehension: knowledge of the purpose of reading and self-regulation of mental activity to achieve that goal. These studies report that less competent readers show insufficient knowledge of the purpose of the task and of reader strategies, as well as a deficient ability to monitor their own comprehension process.
Along the same lines, Paba & González (2014) established the relationship between metacognitive activity and reading comprehension in high school students, finding that metacognitive activity in the sample was null and, therefore, the reading comprehension level was low. From this perspective, and in order to encourage the development of metacognitive strategies, Molenaar and Sleegers (2010) and Van de Pol, Volman, Oort, & Beishuizen (2015) propose the use of metacognitive scaffolds, whose function is to manage and regulate cognitive processes. This type of scaffolding allows the subject to set learning goals consistent with their interests and the time available; likewise, it helps them monitor progress toward the proposed goal and reflect on the results obtained, in order to reorient strategies that have proved unsuccessful in achieving the desired learning.
Another variable related to performance in reading comprehension in digital format is cognitive style, a concept proposed by Witkin in 1948. According to Hederich et al. (2013) and Hidalgo & Olaya (2016), the cognitive style in the field dependence-independence (DFI) dimension establishes differences between subjects related to the capacity for cognitive restructuring, information processing, interpersonal competences and motivations, between two polarities: the so-called field-independent and field-dependent subjects. These differences affect the learning process, individual learning achievement and the way in which students access knowledge in computational environments (López-Vargas, & Hederich-Martínez, 2011), (López, Ibáñez, & Chiguasuque, 2014).
In this context, Korthauer & Koubek (2009) established the effect of subjects' cognitive style and its relation to the tasks developed in hypertextual environments, finding that field-independent students show greater accuracy in their learning tasks compared with field-dependent students, which suggests that the former have greater skills in analyzing and synthesizing information. With respect to the performance of field-dependent students, it was found that they did not correctly use the explicit aids in a hypermedia environment and required more time to perform the task.
Taking the previous framework as a reference, this research aims to design a web-based environment that implements within its structure a metacognitive scaffolding which, through metacognitive activators, seeks to promote the development of students' metacognitive capacity and, consequently, possibly to improve their reading comprehension when they interact with texts in digital format.
General objective
Create and validate a metacognitive-type scaffolding in a web-based learning environment for the development of metacognitive processes in reading comprehension and learning achievement in natural sciences, minimizing differences between students with different cognitive styles.
Specific objectives
• Design and implement a metacognitive scaffolding in a web-based learning environment for the development of the metacognitive process in reading comprehension in high school students.
• Inquire into the effectiveness of metacognitive scaffolding in web-based learning environments for the development of scientific competences in individuals with different cognitive styles.
• Identify the characteristics of the metacognitive processes of students with different cognitive styles in the DFI dimension when they interact with a metacognitive scaffolding in a web-based learning environment.
APPROACH TO THE PROBLEM
The purpose of this study is to analyze different aspects related to metacognitive processes in reading comprehension in order to encourage reflection and the implementation of strategies that facilitate the attainment of learning goals and problem solving in students (Jiménez, 2004). From this perspective, a web-based environment was designed that presents in its structure a didactic strategy of metacognitive scaffolding; this scaffolding was designed in the context of learning certain textual-typology content related to the natural sciences. Consequently, in addition to developing metacognitive capacity, interaction with this computational environment is expected to benefit students in relation to the expected learning achievement.
The results of this research provide teachers, therapists and, in general, the educational community with elements regarding the use of metacognitive scaffolding in a web-based environment as a didactic strategy for the development of metacognitive processes and learning achievement. This integrated strategy, together with the students' cognitive style in the DFI dimension, could predict learners' academic success when they interact with this type of environment.
Likewise, the results of this research regarding the design and development of web-based learning environments could be considered by teacher trainers and by educators working with information and communication technologies, as such environments constitute an important resource for students of different cognitive styles, favoring the development of their metacognitive capacity in reading comprehension and, therefore, their possibilities of achieving the expected academic results. In this sense, the central problem of this research can be summed up in the following question: What is the effect of metacognitive scaffolding on the achievement of natural science learning in high school students who interact with web-based learning environments?
Metacognition in web-based learning environments-AABW
The communications revolution in post-industrial societies has led to a shift in the forms of production, distribution and consumption: the emphasis of the productive circuit lies in so-called immaterial work (the creation and production of symbolic goods, that is, information, knowledge and know-how, whose infrastructure is based on different forms of language expression). Not only do the ways of producing change: the forms of perception and expression also change, which has a profound impact on the configuration of knowledge and, of course, affects the traditional ways of understanding pedagogy. The physical co-presence of actors in the teaching-learning process is no longer a requirement, and the centrality of the textbook and the teacher is shifting to give shape to new modalities of pedagogical communication.
In this sense, virtual education or e-learning contributes to better learning because, in many areas, education outside the classroom is considered more propitious given the knowledge revolution. This has led students to develop autonomous learning; moreover, since they are considered digital natives and access information from the different devices at their fingertips, a metacognitive strategy should be designed in the field of ICT. First, ICT have transformed the cognitive process and, as Unifi defines it, a new kind of intelligence has been generated, distributed intelligence, defined as follows: "intelligence is not an ascribed property of the minds of individuals, but it is distributed among people, and between people and physical tools and symbolic systems" (Herrero & Brown, 2010). This has led to new paradigms and new ways of teaching and learning such as connectivism, which tries to demonstrate that, through learning networks, social media, self-learning and personal learning environments, in addition to the use of ICT, students can produce their knowledge autonomously, making their constructions transcend the virtual classroom and be evaluated both by themselves and by their peers, leaving the teacher simply as a mediator of learning (Gallego Torres, 2017).
Secondly, it must be considered that, according to Karl Stephen (Sierra, Carrascal, & Buelvas, 2014), computational environments can be divided into two types: container systems, which is how virtual learning environments have commonly been used, that is, pre-established information repositories focused on the environment and the resources offered by the different platforms; and content systems, where information and content are provided and shared by the users, achieving an appropriation of the course by students who are involved in the process not only as receivers of knowledge but as prosumers, meaning that they consume and produce the information necessary to achieve the learning objective proposed by the course. This is achieved by articulating the content proposed by the students with the activities, evaluations and feedback from both peers and teachers, using asynchronous tools such as forums, blogs, etc.
This research used a mixture of both styles: Moodle was used as a content repository, but the thematic evaluations were conducted by asking students to give feedback through the forums published in each unit. This made it possible for students to participate in their own learning process and develop the ability to regulate their learning, seeking what is called blended lives: students interact by mixing the face-to-face and the virtual, connected and immersed in the different existing platforms (Internet, Smart TV, etc.), thereby creating real personal learning environments and achieving autonomous learning (Sierra, Carrascal, & Buelvas, 2014).
Last but not least, there are metacognitive mediation strategies in virtual environments, a topic that is central to this research project. But what are metacognitive strategies? Isabel Sierra, in her doctoral thesis Strategies of metacognitive mediation in conventional and virtual environments: influence on the processes of self-regulation and autonomous learning in students, defines them as: "[...] the set of actions oriented to knowing one's own operations and mental processes [...] They are applied by the subject during and after their learning processes and have the objective of optimizing their executions in a conscious way" (Sierra, 2010).
In other words, they are the strategies that students deploy. Following Mayer (in Sierra, 2010), they can be divided into three categories: a) those that allow the planning of cognitive actions, b) those that allow perceiving progress toward the goal, and c) those that modify the plan or adjust the action as required.
Metacognitive judgments can be defined as the questions posed by both students and their teachers that allow learning processes to be evaluated. They can be abstract, for example: Are the characteristics of an expository text clear to you? Are you ready to demonstrate your understanding of this text? Their standards are based on the social context or created within it. This favors the generation of critical thinking in students, making them regulate their learning and guiding them along the path that leads to self-directed learning.
A study with students of the Faculty of Education of the University of Córdoba in Colombia used the following typology of virtual environments, which divides them into: a. virtual environments oriented to instrumental development and the use of resources for documentation; b. virtual environments oriented to the development of competencies, the strengthening of work models and the learning of procedures; c. virtual environments oriented to the development of representation activities and cognitive and metacognitive learning strategies; d. virtual environments oriented to the development of processes of collaboration, participation and the management of meetings for the socialization of ideas and projects (2011, pp. 83-85).
Individual Learning Differences
Within each academic group there is evidence of differences among the students who constitute it; some of these differences are due to factors associated with gender, religious beliefs, ethnicity, or economic, social or cultural aspects. These differences are framed within the emotional, social or cognitive field, and from them the concept of style is elaborated (Lozano, 2006).
Styles are considered a set of characteristics that people have, related to their behaviors in the areas where they act and relate to others. This is useful for analyzing different forms of action. Styles constitute the categorization of the various behaviors that make each person particular and original (Renes & Martínez, 2016).
Style is defined as the set of regularities characteristic of subjects that determine their behavior, and is characterized by being: 1) differentiating, to the extent that it establishes distinctive characteristics among people; 2) relatively stable in each individual; 3) integrative of the different dimensions of the subject; and 4) neutral, that is to say, no style should be valued, in absolute terms, above another (Hederich-Martínez, 2007).
The second category is learning styles, referring to the preferences students have when processing information or when carrying out a learning task (Alonso, C., Gallegos, D., & Honey, P., 1997), (Cerchiaro, Paba, & Sánchez, 2011).
In the educational field, the two categories described above are sometimes used interchangeably as learning style and cognitive style; however, these concepts are different. In order to differentiate between them, it is useful to use the model proposed by Curry in 1987, the "Onion Model" (graph 1).
The cognitive style in the DFI dimension
According to Hederich (2013), the cognitive style that has possibly been most studied is the field dependence-independence (DFI) style, proposed by Witkin (1948). This dimension establishes differences between subjects related to the capacity for cognitive restructuring, information processing, interpersonal competencies and motivations, between two polarities of subjects: the so-called field-independent and field-dependent.
These differences directly affect the learning process, individual academic achievement and the way subjects access knowledge in computational environments (Hederich-Martínez & Camargo-Uribe, 2018). Subjects categorized as field-independent have greater cognitive-restructuring skills, which is evident in their ability to disembed simple figures from complex figures and to process information analytically, a situation that favors deepening previously acquired concepts and establishing relationships between them. Likewise, they have strategies that facilitate the storage and retrieval of information, show preferences for individual work and are intrinsically motivated (López-Vargas & Hederich-Martínez, 2011), (García, 2015), (Hederich-Martínez, 2007).
At the other pole are the subjects called field-dependent, also called sensitive to the environment, who have lower capacities for cognitive restructuring, process information in a global way, which limits their possibility of making inferences and deep analyses of information, are oriented toward group work and are extrinsically motivated.
The following contrasts between field dependence and field independence, taken from Hederich-Martínez (2007), summarize the two polarities:
Perception: field-dependent subjects show global perception and find it difficult to separate the parts of the information from their context; field-independent subjects show articulated perception, perceiving the parts as separate elements of the field.
Autonomy: field-dependent subjects are passive and need abundant social support; field-independent subjects are active, work with little external motivation and trust their internal references.
Authority: field-dependent subjects submit easily to the corresponding authority, partly because they have little initiative, and criticism often has a big impact on them; field-independent subjects show a leadership attitude, and their actions are based more on their internal scale of values than on external authority.
Learning method: field-dependent subjects learn more efficiently through expository methods; field-independent subjects learn more efficiently through discovery methods.
Motivation: field-dependent subjects need extrinsic motivational conditions; field-independent subjects learn better under intrinsically motivated conditions.
Social interaction: for field-dependent subjects interpersonal relationships come easily and they prefer to work in groups; for field-independent subjects interpersonal relationships are often difficult and they almost always prefer individual work.
Source: Hederich-Martínez, 2007. Within this same categorization, intermediate subjects (INT) are located, who have characteristics of the two extreme groups, field-dependent (DC) and field-independent (IC) (Min & Reed, 1994).
These characteristics have a high incidence on the behavior of subjects in academic settings; indeed, it has been observed that field-independent students surpass their environment-sensitive peers in academic achievement.
Cognitive style and learning achievement in AABW
In the educational context, cognitive style is a very important factor because it determines how people perceive, store, transform and process information, and it has a direct influence on learning achievement (García, 2015), (López Ó., 2008).
Around this, web-based learning environments (AABW) or so-called computer-based learning environments (AABC) have generated great expectations in terms of improving learning achievement, since they offer advantages with respect to differences in cognitive style: they show greater respect for each learner's pace, favor different forms of social interaction and enable access to information from any point of navigation, combining different representations of the learning domain (graphics, videos, sounds, animations, etc.). However, several studies have questioned the effectiveness of AABC (Spiro & Jehng, 1992), (Tergan, 1997).
Regarding this topic, López (2008) suggests that AABC require greater autonomy from the student as far as the regulation of learning is concerned. Meanwhile, Astleitner & Leuner (Hermann & Detlev, 1995) claim that AABC present basically three difficulties. The first refers to the large amount of information presented to students, which can distance them from the learning purpose, spending time on situations of little importance and neglecting what is truly important. The second difficulty is related to the very structure of the AABC or AABW, because they can cause spatial disorientation, losing sight of the hierarchies between the contents treated, which hinders the structuring of knowledge itself. The third difficulty reported is the great cognitive effort required of the student to organize the volume of information presented.
By way of synthesis, cognitive style is defined as the set of perceptual, analytical, intellectual, social and affective characteristics that make people process information in different ways; it is very stable throughout life and it is neutral, that is, it is not possible to say that one style is better than another (Hederich-Martínez, 2007). It is, however, possible to state that it directly affects learning achievement.
The cognitive style of each subject, coupled with the characteristics of AABC or AABW, generates a certain degree of inequality among students, because it leads field-independent students, given their cognitive structure, to obtain better academic performance than field-dependent students.
Metacognition and comprehension of texts
According to the framework, the metacognitive component plays a key role in reading comprehension. Two key metacognitive components that intervene in the regulation of reading comprehension can be identified: knowledge of the purpose of reading (what one reads for) and self-regulation of mental activity to achieve that goal (how one reads), which requires controlling mental activity in a certain way and directing it toward a specific goal.
Both aspects are closely related; the way one reads and regulates mental activity while reading is determined by the purpose of the reading. A text is not read in the same way to pass the time as to explain its content in a class; nor is the same mental exercise performed when reading to identify the main ideas, to find the best title for a text, to draw conclusions or to make a critical judgment of its contents (Cantillo, De la Hoz, & Cerchiaro, 2014). Brown (1986) notes that metacognition in the comprehension of texts involves knowledge of four variables and of the way they interact to facilitate comprehension. The first variable is the text, which includes the characteristics (structure, level of difficulty, degree of familiarity) of the reading materials that affect comprehension and memory. The second is the task, which includes the storage and retrieval requirements of the information that the reader must execute. The third is the strategies, which are the activities the reader deploys to store and retrieve the information. Finally, there are the characteristics of the reader: skill, level of motivation and other states and personal attributes that influence understanding.
Likewise, metacognition in the reading process involves control and self-regulation processes. Such processes, according to Baker & Brown (1984), are as follows: 1. Clarify the purposes of reading. 2. Clarify the demands of the task. 3. Identify the important aspects of the message contained in the text.
4. Focus attention on the main ideas and not on the details. 5. To monitor the activities carried out in order to determine the level of comprehension. 6. Engaging in question-generating activities to determine whether pre-reading objectives are being fulfilled. 7. Take corrective action when understanding difficulties are detected. 8. Avoid interruptions and distractions.
METHOD
The investigation used a quasi-experimental 2x3 factorial design with previously formed groups belonging to two regular courses of a private school in the city of Bogotá. The experience consisted of exposing the student groups to a web-based environment for strengthening reading comprehension and scientific competencies.
The web-based environment contains three (3) learning units related to textual typology in scientific texts: 1] descriptive text, 2] narrative text and 3] expository text. At the end of each study unit, all students individually took an evaluation of reading comprehension based on scientific texts. In total, three (3) evaluations were obtained for each student, which were averaged at the end of the study.
Before and after the experience, the students took the ESCOLA test, an instrument that evaluates metacognitive awareness of reading in order to analyze possible problems in the reading strategies of children aged 8 to 13. It is advisable to complement this test with direct observation, with a considered judgment by parents or teachers, and with tests that measure children's reading performance and comprehension.
Its main function is to identify gaps in reading awareness and to learn about the strategies that students use in the reading process.
In addition, the Embedded Figures Test (EFT, also known as the masked figures test) was used to determine cognitive style in its dimensions: 1. independence, 2. dependence and 3. intermediate. It was developed by H. Witkin in 1950 and standardized for the Colombian context by Hederich & Camargo in 1999, with a reliability level of 0.91.
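For readers who want to see how such a classification could be operationalized, the sketch below groups students into field-dependent, intermediate and field-independent categories from their test scores. It is only an illustration: the column names, score range and tertile-based cutoffs are assumptions, not the scoring rules of the instrument standardized by Hederich & Camargo.
```python
import pandas as pd

# Hypothetical EFT scores; real scoring follows the standardized Colombian norms.
students = pd.DataFrame({
    "student_id": ["s01", "s02", "s03", "s04", "s05", "s06"],
    "eft_score": [12, 35, 22, 41, 18, 29],
})

# Illustrative cutoffs: lower and upper tertiles of the observed scores.
low, high = students["eft_score"].quantile([1 / 3, 2 / 3])

def classify(score: float) -> str:
    """Map an EFT score to a cognitive-style group (assumed tertile rule)."""
    if score <= low:
        return "field-dependent"
    if score >= high:
        return "field-independent"
    return "intermediate"

students["cognitive_style"] = students["eft_score"].apply(classify)
print(students)
```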
Description of the environment
Presentation of the course: the data relating to the course are presented: name, pedagogical approach, target audience, duration and hourly intensity.
a. Reading comprehension: a clear and understandable definition of the subject of study is given here. b. Infographics of text types: designed for easy recall of the subject of study. c. Initial evaluation: the ESCOLA questionnaire was applied to establish the students' condition at the beginning of the course. d. Feedback forum: implemented for asynchronous communication with students, in which they could ask any questions concerning the course.
The second part consists of the 3 study units selected for the research: descriptive text, narrative text and expository text. Each unit has the following parts: a. Presentation: the thematic content of the unit, with an explanatory text and a video explaining the kind of text studied in the unit. b. Example: a text of the type the unit develops, together with a reading-control test with which the student's knowledge is analyzed. c. Other examples: further examples of the type of text studied, each with its own test to reinforce what was learned. d. Complement what you learned: an infographic that describes and explains the type of text treated; students are asked to comment in the unit's forum. e. Evaluating: the space where what has been learned is identified through an evaluation of reading competencies, reviewing the level of comprehension and analysis of the topics worked on in the unit. f. Reflection: a space to interact with the other participants and resolve questions related to the dynamics of the unit studied.
The third and final part is the farewell, in which the final evaluation is carried out; it provides evidence of the mobilization of metacognitive processes through the completion of the ESCOLA questionnaire (posttest).
The metacognitive scaffolding seen by the study group was delivered by presenting, at random, the following metacognitive prompts (a sketch of this random selection is given after the prompt list).
At the start of the course
How much do you know about the subject of study? How important do you think the content of this study topic is? How competent do you feel for learning this unit?
Unit 1:
Have the concepts of reading become clear to you? Are you able to respond to an assessment? Can you make a synthesis? Are you understanding? Do you realize whether you are following what you proposed? Have your strategies been effective?
Unit 2:
Could you state the characteristics of the narrative text? Could you give a definition of a narrative text? Are you clear on the characteristics of a narrative text?
Unit 3:
Could you state the characteristics of the expository text? Could you give a definition of an expository text? Are the characteristics of an expository text clear to you?
Image 2: AC image of the unit.
Before initiating testing and evaluations
Are you ready to answer the questions? Are you able to test your understanding of this text? Are you ready to demonstrate your understanding of this text?
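A minimal sketch of how such prompts could be drawn at random when a student opens a unit or starts a test is shown below. The prompt texts are taken from the lists above; the function name and the idea of sampling one prompt per event are assumptions about the implementation, which in the study was embedded in the Moodle course rather than in standalone code.
```python
import random

# Metacognitive activators, grouped by the moment at which they are shown.
PROMPTS = {
    "course_start": [
        "How much do you know about the subject of study?",
        "How important do you think the content of this study topic is?",
        "How competent do you feel for learning this unit?",
    ],
    "unit_1": [
        "Have the concepts of reading become clear to you?",
        "Are you able to respond to an assessment?",
        "Can you make a synthesis?",
    ],
    "before_test": [
        "Are you ready to answer the questions?",
        "Are you able to test your understanding of this text?",
    ],
}

def pick_prompt(moment: str) -> str:
    """Return one randomly chosen activator for the given moment."""
    return random.choice(PROMPTS[moment])

print(pick_prompt("unit_1"))
```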
Methodological proposal. Design, production and implementation of the AABW
To achieve the objectives, the course was designed following the methodology recommended in the guidelines for the design and implementation of virtual courses from the collection of the National System of Educational Innovation with the Use of ICT, of the Ministry of National Education, which provides the roadmap for a virtual course divided into 5 stages: a. diagnosis and planning; b. pedagogical design; c. resource production and educational modeling; d. platform mounting; e. deployment and updating.
Stage 1 diagnosis and planning:
This is the first step in the creation of a virtual course and is characterized by a study that identifies the needs and learning objectives to be served, in addition to an analysis of the context and the target audience. This stage is carried out to obtain reliable information about the students' needs regarding the training process, the contents and the quality levels.
Stage 2 pedagogical design: this contains and defines all the elements of the course, in addition to its didactic strategies; its result is the instructional script. The number of units and the thematic contents are defined, along with the different OVA (virtual learning objects) to be used, and at this point the activities and evaluations related to the subject of study are designed.
Stage 3 resource production and educational modeling: two main activities are carried out at this stage; the first is the production of all the contents and resources developed for the course, and the second is to define the learning paths established in the previous stage.
Stage 4 platform mounting: at this stage the course is implemented on the LMS platform, in our case Moodle v2.9.1; this stage includes the development and design of the different graphic pieces, their structure and their operation.
Stage 5 deployment and updating: this is the final stage, where the strategies designed for the virtual course are put into practice and the course is carried out with the students. After an analysis of how the course unfolded, based on the feedback given by the participants, both the textual and multimedia content and the evaluations and different activities of the course can be updated.
Description of the web-based learning environment (virtual course)
For the development of our research, a virtual course was designed and mounted on the Moodle platform, chosen for its ease of use and so that the course could be reused. The course consists of 3 parts. The first is the welcome, where the student is greeted, and has the following parts: 1. Presentation of the course: the data concerning the course are given: name, pedagogical approach, target audience, duration and time intensity. 2. Reading comprehension: a clear and understandable definition of the subject of study is given here. 3. Infographics of the text types: designed for easy recall of the subject of study. 4. Initial evaluation: the ESCOLA test was applied to establish the students' condition at the beginning of the course. 5. Feedback forum: implemented for asynchronous communication with the students, in which they could ask any questions concerning the course.
The second part consists of the 3 study units selected for the research: descriptive text, narrative text and expository text. Each unit has the following parts: 1. Presentation: the thematic content of the unit, with an explanatory text and a video explaining the kind of text studied in the unit. 2. Example: a text of the type the unit develops, together with a reading-control test that analyzes the student's knowledge. 3. Other examples: further examples of the type of text studied, each with its own test to reinforce what was learned. 4. Complement what you learned: an infographic that describes and explains the type of text treated; students are asked to comment in the unit's forum. 5. Evaluating: the space where what has been learned is identified through an evaluation of reading competencies, reviewing the level of comprehension and analysis of the topics worked on in the unit. 6. Reflection: a space to interact with other participants and resolve questions related to the dynamics of the unit studied. 7. The third and final part is the farewell, in which the final evaluation is carried out; it provides evidence of the mobilization of metacognitive processes through the completion of the same questionnaire (post-test).
Procedure
The participants were assigned as follows: group 9A was enrolled in the study course and group 9B in the control course. Before starting the course, the EFT test was applied; each student carried out the test independently and autonomously, in group sessions (two sessions in total), using multimedia software copied onto each of the computers used. The teacher and researcher Paola Intencia was present during the application of this instrument.
Subsequently, each of the groups interacted independently with the course during the academic sessions of the natural sciences area; the time of interaction with the REDA was 30 hours distributed over three weeks.
Each student was assigned a username and password to enter the Moodle platform. At the beginning, prior to the interaction with the computational environment, each student completed the ESCOLA questionnaire (pretest) to establish their initial status regarding their metacognitive process. At the end of each learning unit the students took an evaluation, in addition to 2 intermediate reading comprehension tests on the subject of study; in total, each student took 9 tests, broken down into 2 exercises and one verifiable learning-achievement test per unit, the latter being taken as the reference.
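The sketch below illustrates how the nine scores per student (two exercises and one graded achievement test for each of the three units) could be aggregated into per-unit and overall achievement grades. The data-frame layout, column names and example values are assumed for illustration; they are not the authors' actual data files.
```python
import pandas as pd

# One row per student, test and unit; "eval" marks the graded achievement test of each unit.
scores = pd.DataFrame({
    "student": ["s01"] * 9,
    "unit":    [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "test":    ["ex1", "ex2", "eval"] * 3,
    "score":   [6.0, 7.0, 8.0, 5.0, 6.5, 7.5, 7.0, 8.0, 9.0],
})

# Achievement per unit = score of the graded evaluation of that unit.
unit_achievement = (scores[scores["test"] == "eval"]
                    .pivot(index="student", columns="unit", values="score"))

# Overall achievement = average of the three unit evaluations, as described in the study.
unit_achievement["overall"] = unit_achievement.mean(axis=1)
print(unit_achievement)
```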
Once the interaction with the computational environment was completed, the ESCOLA (post-test) questionnaire was re-applied in order to determine the final state of the students' metacognitive processes.
The metacognitive activators appeared at random to the study group at the beginning of each unit and when the proposed exercises were performed. This was intended to determine the effects of the metacognitive scaffolding on the reading comprehension process and on learning achievement in natural sciences.
Initial conditions
Using the data collected through the different instruments, a descriptive statistical analysis was carried out of the categories that underpin the development of the students' metacognitive capacity and their level of achievement in the area of natural sciences, prior to the interaction with the computational environment on reading comprehension and scientific competencies. Regarding metacognitive capacity, the results obtained in the first application of the ESCOLA questionnaire (pretest) are analyzed. As for the level of achievement, the grades previously obtained by the students in the subject of natural sciences are described.
Initial metacognitive abilities in reading comprehension
The ESCOLA questionnaire (Reading Awareness Scale) was answered by the students individually prior to the interaction with the computational environment (pretest). The results obtained are presented below. Table 2 shows the general average of the reading awareness scale: the mean is 95.00 and the standard deviation (SD) is 6.58. The minimum score was 81 and the maximum was 106, out of 168 possible points. Source: own elaboration.
Source: initial metacognitive activator (performed by the author for the AABW).
The average of each of the metacognitive processes in the ESCOLA questionnaire is indicated individually. The averages of the metacognitive processes indicate medium-level metacognitive abilities among the students.
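As an illustration, descriptive statistics like those reported for the pretest (mean 95.00, SD 6.58, minimum 81, maximum 106 out of 168 possible points) can be computed as follows; the score vector below is invented solely to show the calculation and does not reproduce the study data.
```python
import pandas as pd

# Hypothetical ESCOLA pretest totals (the study reported M = 95.00, SD = 6.58).
escola_pre = pd.Series([81, 89, 92, 95, 97, 99, 101, 106], name="escola_pretest")

# 'std' uses the sample standard deviation (ddof=1), the usual convention in such reports.
summary = escola_pre.agg(["mean", "std", "min", "max"])
print(summary)
```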
Previous learning achievements
The grades previously obtained by the students in the subject of natural sciences were considered for the analysis of the effects of the scaffolding and are described below.
The teacher of the natural sciences area was asked for the grades obtained by the students, corresponding to the average of two scientific-competency assessments carried out during the first semester of the academic year. Students are assessed on a numerical scale from 1 to 10. The average of the grades is 5.21, with a standard deviation (SD) of 1.47. Out of a maximum score of 10, the minimum value was 3 and the maximum 7. Source: own elaboration.
Graph 3 shows that 4 of the students were graded 3, corresponding to 21.1%; 1 student was graded 4, corresponding to 5.3%; 6 students were graded 5, corresponding to 31.6%; 3 students obtained a grade of 6, corresponding to 15.8%; and 5 of the participants in the study achieved a grade of 7, representing 26.3%. The figure shows that performance in the evaluations does not follow a normal distribution.
Graph 3: histogram of natural science grades, 2015.
Source: AC image of the unit (performed by the author for the AABW).
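The distribution described above (four students with a grade of 3, one with 4, six with 5, three with 6 and five with 7, on a 1-10 scale) can be reproduced with a simple bar chart. The sketch below uses matplotlib and only the frequencies stated in the text; the file name is a placeholder.
```python
import matplotlib.pyplot as plt

# Frequencies of prior natural-science grades as reported in the text (n = 19).
grades = [3, 4, 5, 6, 7]
counts = [4, 1, 6, 3, 5]

plt.bar(grades, counts, width=0.8, edgecolor="black")
plt.xlabel("Prior natural-science grade (1-10 scale)")
plt.ylabel("Number of students")
plt.title("Distribution of prior grades (reconstructed from reported frequencies)")
plt.xticks(range(1, 11))
plt.savefig("prior_grades_histogram.png", dpi=150)
```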
Analysis of the effect of the program
In order to study the effect of the software in its two modalities on the metacognitive processes and their different variables, and on learning achievement, a multivariate analysis of covariance (MANCOVA) was carried out. For this analysis, the dependent variables were 1] metacognitive capacity (planning, supervision, evaluation and good reader) and 2] academic achievement (the average of the evaluations of each unit). Two independent variables are considered in the analysis: 1] work with the computational environment, which differentiates students who worked in the presence or absence of the metacognitive scaffolding, and 2] cognitive style (field-dependent, intermediate and field-independent). The MANCOVA takes as covariates 1] the initial data on the students' metacognitive capacity, the ESCOLA pretest (obtained from the initial application of the ESCOLA questionnaire), and 2] the grades previously obtained in the area of natural sciences.
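A MANCOVA of this kind can be approximated in Python with statsmodels' MANOVA class by entering the covariates directly in the model formula, as sketched below. The data frame is synthetic and the column names are placeholders; the authors do not state which software they used, so this is only one plausible way to reproduce the analysis, not their actual procedure.
```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in data: one row per student (column names are assumptions).
rng = np.random.default_rng(0)
n = 38
df = pd.DataFrame({
    "scaffolding": rng.choice(["with", "without"], n),
    "cognitive_style": rng.choice(["dependent", "intermediate", "independent"], n),
    "escola_pretest": rng.normal(95, 6.6, n),   # covariate 1: ESCOLA pretest total
    "prior_grade": rng.normal(5.2, 1.5, n),     # covariate 2: prior natural-science grade
    "planning": rng.normal(30, 5, n),
    "monitoring": rng.normal(30, 5, n),
    "evaluation": rng.normal(30, 5, n),
    "good_reader": rng.normal(30, 5, n),
    "achievement": rng.normal(7.3, 2.0, n),
})

# Covariates entered in the formula make this a MANCOVA-style model.
formula = ("planning + monitoring + evaluation + good_reader + achievement ~ "
           "C(scaffolding) + C(cognitive_style) + escola_pretest + prior_grade")
maov = MANOVA.from_formula(formula, data=df)
print(maov.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each term
```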
In the first instance, a detailed description of the dependent variables is presented: metacognitive capacity (planning, supervision, evaluation and good reader) and learning achievement (the average of the evaluations of each unit).
The dependent variables (posttest ESCOLA)
At the end of the interaction with the computational environment, the students answered the ESCOLA questionnaire again. Table 4 presents the correlations of the different metacognitive processes (planning, monitoring, evaluation and good reader) and the corresponding variables.
This makes it possible to observe the students' metacognitive profile once the interaction with the computational environment has been completed. The results obtained are presented below.
The general average of the reading awareness scale is shown: the mean is 124.74 and the standard deviation (SD) is 31.6. The minimum score was 82 and the maximum was 163, out of 168 possible points. Source: own elaboration.
Source: prequiz activator (performed by the author for the AABW). The averages of the metacognitive processes indicate an increase in metacognitive capabilities with respect to the results of the initial application.
Natural Science learning achievement
It is important to remember that learning achievement was assessed individually at the end of the interaction with each of the units that constitute the computational environment. The overall achievement was obtained from the average of the exams taken by the students. The average of the general achievement is 7.30, with a standard deviation (SD) of 1.95. The minimum value is 4 and the maximum is 10, the latter representing results of the three tests at the maximum possible grade. Source: own elaboration.
The distribution of the grades obtained by each of the students who participated in the study is presented below.
Graph 5: histogram of general learning achievement in natural sciences.
Multivariate analysis (Mancova)
The results of the MANCOVA show that the resulting models have a high level of prediction of the different dependent variables included. Undoubtedly, one of the variables whose variance is best explained is learning achievement, for which 98.1% of the variance is predicted (R2 = 0.981).
As for the different variables of the ESCOLA questionnaire, significant relationships are found across the board. The evaluation process in the person variable is the highest, with 99.3% of the variance (R2 = 0.993); followed by planning in the person category (PF person) with 98.7% of the variance (R2 = 0.987); then good reader (BF reader) with 98.5% of the variance (R2 = 0.985); then the aforementioned learning achievement; subsequently, the monitoring process in the task and text variables (SF task and SF text, respectively) with 97% of the variance (R2 = 0.97); next, the final evaluation process in the task variable with 96.8% of the variance (R2 = 0.968); followed by the final planning process in the task variable (PF task) with 96.7% of the variance (R2 = 0.967); and, finally, the monitoring process in the person variable (SF person) with 93% of the variance (R2 = 0.930). Regarding the relationships between the covariates and the variables, the results show that each covariate has a significant association, especially with the final state of the same dependent variable and, in some cases, with other variables.
In the case of planning in the person variable, significant relationships are established with this same process in the same variable (F = 23.39; p = 0.008) and, with a slightly lower value, with the final evaluation process in the task variable (F = 8.15; p = 0.046).
On the other hand, the planning process in the task variable establishes significant relationships with the planning processes in the person and task variables, with (F = 19.51; p = 0.012) and (F = 6.91; p = 0.058), respectively. For the monitoring process in the person variable, only a significant relationship with the same process in the same variable is established (F = 9.03; p = 0.040). The initial supervision process of the text establishes significant relationships with the final planning process of the person (F = 10.28; p = 0.033). In turn, the initial evaluation process in the person variable established several relationships considered significant: first with the same process in the same variable (F = 24.07; p = 0.008), followed by the final planning process in the person variable (F = 15.13; p = 0.018), and finally with learning achievement, where (F = 8.07; p = 0.047) is reported.
With respect to the strategies condensed in the good reader category, significant relations are reported with three variables: first with the same process in the same variable (F = 15.11; p = 0.018), followed by learning achievement (F = 8.71; p = 0.042), and finally the final monitoring process in the task variable (F = 7.32; p = 0.054).
On the other hand, the group variable establishes significant relationships with all the ESCOLA categories, and these are highly significant.
In relation to the main effects, the most significant effect is given by the presence of the metacognitive scaffolding, since the results show an important association with learning achievement. A simple inspection of the data reveals that students who worked in the environment with metacognitive scaffolding showed much higher results than their peers who worked in the environment without it. Similarly, it is evident that the presence of the metacognitive scaffolding favored the increase of metacognitive skills, i.e., the planning, monitoring and evaluation processes.
On the other hand, the variable "cognitive style in the DFI dimension" does not show any significant association, which suggests that the scaffolding minimizes the differences previously present and given by the characteristics of the students' cognitive style.
Limitations
There are several limitations that need to be considered when interpreting the conclusions. Three important limitations can be mentioned.
In the first instance, strictly speaking, the study was not an experimental investigation including all the rigorous controls, but was limited to a quasi-experimental situation with two previously formed ninth-grade groups in the academic space of natural sciences in a private school in the city of Bogotá. For this reason, it is not possible to generalize the results to all the high school students in our educational system. However, the research leaves an open space that allows other researchers to continue conducting studies in different domains of knowledge and at different academic levels.
On the other hand, it should be stressed that the instrument used to measure the level of reading awareness reached by the high school students was a self-report questionnaire (the Reading Awareness Scale, ESCOLA), widely used in this research area. For this reason, there is a probability of uncontrollable bias given by the tendency of students to give socially accepted responses, a factor that the investigation was unable to control.
Finally, it is important to refer to the sample size, because when the groups are subdivided by all the independent variables the resulting cells are very small. For later studies it is suggested to increase the sample size.
Recommendations for future research
For further research on the subject, it is important to study more broadly the characteristics of the text as a fundamental element in reading comprehension. It is likewise important to enlarge the sample size so that the results can be generalized, and it would be interesting to expand the learning units proposed for the environment, because although the research showed important results, changes in metacognitive levels imply substantial interactions with the environment.
On the other hand, it would be interesting to carry out the research integrating the reading comprehension component with the areas of exact and natural sciences, in order to support inference and to promote students' performance in standardized tests through web-based learning environments.
CONCLUSION
The present research contributes to the field of design of web-based learning environments (AABW) in the following way: according to the specialized theory on information technologies applied to education, the importance of designing metacognitive scaffolds is reaffirmed in order to strengthen the possibilities of monitoring and regulation in the learning process, and in this way to encourage reading comprehension and therefore the achievement of learning. For the purposes of this study, the data provided by the students involved in the investigative process show that a metacognitive scaffold included in a web-based environment increases their metacognitive skills and consequently the achievement of learning in reading comprehension tests of scientific texts.
On the other hand, it is important to emphasize the statistical technique of high complexity that was used in the present research to determine the interaction of the variables. Robust statistical instruments and techniques were used, such as the analysis of covariance (MANCOVA), to study the impact of the independent variables while controlling for different associated variables that may be related to the dependent variables. This analysis gives the study, and the actors in the educational field, greater validity and reliability.
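As a concrete illustration of this kind of analysis, the sketch below runs a MANCOVA-style test in Python with statsmodels on synthetic data; the variable names (comprehension, achievement, group, pretest) are hypothetical placeholders and are not taken from the study's actual dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in for the study's data: two dependent measures,
# a scaffolding group factor and a pretest covariate (all names hypothetical).
rng = np.random.default_rng(0)
n = 60
group = np.repeat(["with_scaffold", "without_scaffold"], n // 2)
pretest = rng.normal(50, 10, n)
effect = np.where(group == "with_scaffold", 8.0, 0.0)
comprehension = 0.5 * pretest + effect + rng.normal(0, 5, n)
achievement = 0.4 * pretest + 0.8 * effect + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "pretest": pretest,
                   "comprehension": comprehension, "achievement": achievement})

# MANCOVA: multivariate test of the group effect while controlling for the pretest covariate.
fit = MANOVA.from_formula("comprehension + achievement ~ group + pretest", data=df)
print(fit.mv_test())
```

The multivariate test of the `group` term, with the covariate in the model, is the quantity a MANCOVA of this kind reports.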
Another element that contributes significantly to the educational context refers to the population: since the majority of studies take the university population as a reference, the present research focused its attention on the secondary school population, showing evidence of how metacognitive strategies in reading comprehension play a fundamental role in performance on standardized tests in the area of natural sciences within web-based learning environments, and in turn how the presence of a metacognitive scaffold affects these variables.
The research provides relevant information regarding the importance of designing and implementing web-based learning environments with scaffolds that favor the development of metacognitive skills, considering the different processes and variables involved when the subject is confronted with a scientific text. In the same way, this context provides tools for monitoring and controlling the students' learning process. This research is of great interest to the educational community of the country because the use of virtual learning environments is widespread and is progressively coming closer to students.
"year": 2020,
"sha1": "915a86080488576c375b13d672301604e4aae7ff",
"oa_license": "CCBYNC",
"oa_url": "http://www.seeci.net/revista/index.php/seeci/article/download/617/1354",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d2724a96d6f731fb99d3ea98e897eabf2c18d3d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Psychology"
]
} |
Existence of solution to a system of PDEs modeling the crystal growth inside lithium batteries
The life-cycle of electric batteries depends on a complex system of interacting electrochemical and growth phenomena that produce dendritic structures during the discharge cycle. We study herein a system of three partial differential equations combining an Allen--Cahn phase-field model (simulating the dendrite-electrolyte interface) with a Poisson--Nernst--Planck system simulating the electrodynamics and leading to the formation of such dendritic structures. We prove novel existence, uniqueness and stability results for this system and use it to produce simulations based on a finite element code.
Introduction
As humanity aims at reaching "net-zero" by 2050, replacing fossil fuels with renewable alternatives has spurred the research in storage technologies. One of the favored technologies, the lithium battery, is the object of intense scrutiny by scientists and engineers [Akolkar, b, Chen et al., b, Cogswell, Ely et al., Hong and Viswanathan, Mu et al., a, Okajima et al., Yurkiv et al., e.g.]. One of the important phenomena in a Li-metal battery, i.e., a battery with a metallic lithium electrode, is the electrodeposition process during which solid structures known as dendrites may grow attached to the electrode into the electrolyte solution and lead to the battery's deterioration and ultimately its failure [Okajima et al., Akolkar, a, Liang and Chen, Chen et al., b, Yurkiv et al.]. Our aim herein is a rigorous analytical and computational study of a partial differential equation (PDE) based model that describes the electrodeposition process and captures the dynamics of the resulting dendritic growth phenomena. The model we study consolidates various models already addressed by the engineering community [Chen et al., b, Liang and Chen, Liang et al.], with some modifications resulting in a system of PDEs which we rigorously prove to be well-posed. The resulting system, which we fully describe in Section 2.1, is captured by three interacting PDEs: (a) a nonlinear anisotropic Allen-Cahn equation, i.e., a phase-field type PDE, with a forcing term that accounts for the Butler-Volmer electrochemical reaction kinetics, which models the concentration of Li atoms; (b) a reaction-diffusion Nernst-Planck equation, which describes the dynamics of the concentration of Li+; and (c) a Poisson type equation that describes the electric potential that drives the dynamics of the Li+ ions. An early application of phase-field modeling to electrochemistry was introduced in Guyer et al. [a]. In this work a new model was proposed, derived from a free energy functional that includes the electrostatic effect of charged particles and leads to rich interactions between concentration, electrostatic potential and phase stability.
This model was further studied by the same authors in Guyer et al. [c,b]. These papers gave motivation, mostly to engineers, to study the model and use it to describe lithium batteries. There is an extensive literature using variations of the model described by Guyer et al. [a], together with numerical simulations that try to capture the behavior of Li-ion or Li-metal batteries in two spatial dimensions [Okajima et al., Liang et al., Akolkar, a, Zhang et al., a, Liang and Chen, Ely et al., Akolkar, b, Chen et al., b, Cogswell, Yurkiv et al., Hong and Viswanathan, Liu and Guan, Mu et al., a, e.g.] and in three [Mu et al., b, e.g.], using a Nernst-Planck-Poisson system coupled with an anisotropic phase-field equation.
The study of anisotropy in surfaces and interfaces goes back many decades; for example, in Hoffman and Cahn, a vector function was introduced as an alternative to the scalar function with which the anisotropic free energy of surfaces was usually described. Kobayashi has pioneered research on anisotropic dendritic crystal growth phenomena since the mid 80s. He first introduced an anisotropic phase-field model to show the minimal set of factors which can yield a variety of typical dendritic patterns. This and later works focused more on the solidification of pure material in two and three spatial dimensions [Kobayashi, a,b]. The proposed model was shown numerically to be able to describe realistic dendritic patterns. In two spatial dimensions, the anisotropy arises from the derivation of the energy. In three spatial dimensions, though, the anisotropy was given by an artificial term rather than derived from the energy, in order to reduce significantly the computational cost of the phase-field equation. Other works address numerical simulations more thoroughly in two spatial dimensions, e.g., Wheeler et al. A better attempt at numerical simulations in three spatial dimensions was shown in Karma and Rappel; it was the first attempt that actually computed the anisotropic diffusion tensor. It was not until 1993 that analytical results on these models were established. In McFadden et al. an asymptotic analysis in the sharp-interface limit of the model studied in Kobayashi [a], Wheeler et al. was established, including an anisotropic mobility. In Wheeler and McFadden, a ξ-vector formulation of anisotropic phase-field models was introduced, as in Hoffman and Cahn, in an attempt to investigate the free-boundary problem approached in the sharp interface limit of the phase-field model used to compute three-dimensional dendrites. Mathematical analysis from different viewpoints, yet significantly useful for expanding the knowledge of anisotropic phase-field models, has been developed in Elliott and Schätzle [a,b], Taylor and Cahn. In the early 2000s, three papers, Burman and Rappaz, Burman et al., Burman and Picasso, treated a coupled system of PDEs, including a phase-field equation with anisotropy, based on the model proposed by Warren and Boettinger for dendritic growth and microsegregation patterns in a binary alloy at constant temperature. However, the proposed anisotropic diffusion tensor was implemented specifically for two spatial dimensions and so the analytical and numerical results were restricted to R 2 . In Graser et al. the same model was studied and the analytical results were extended to three spatial dimensions. In the same paper different time discretizations were studied for their stability. A 3D implementation of the anisotropy was introduced considering the regularized ℓ 1 -norm [Graser et al., Example 2.2], yet different from Karma and Rappel, where the 3D anisotropy is given under the same principles that Kobayashi first introduced. More recently, in Li et al. new numerical schemes were introduced, but the numerical results presented were in two spatial dimensions. The study of anisotropic phase-field equations is still in progress, and one crucial challenge is the development of efficient numerical algorithms in three spatial dimensions. In Zhang et al. [b], numerical algorithms for the anisotropic Allen-Cahn equation with precise nonlocal mass conservation were proposed in this direction.
To the best of our knowledge, there are no rigorous analysis results for this system of PDEs. The Nernst-Planck-Poisson system is often coupled with the Navier-Stokes equations [Zhang and Yin, Bothe et al., e.g.], or studied on its own [Kato, Chen et al., a, e.g.]. In one instance, Liu and Eisenberg, the proposed model, called Poisson-Nernst-Planck-Fermi, replaces the Poisson equation with a fourth order Cahn-Hilliard type PDE, but it does not share many similarities with the model in this paper. The purpose of this work is to establish well-posedness results for the Nernst-Planck-Poisson system, coupled with the anisotropic phase-field equation, and to provide numerical simulations of the dendritic crystal growth.
Our goal in this paper is to show that the Nernst-Planck-Poisson system, coupled with the anisotropic phase-field equation, as introduced in Liang et al., Liang and Chen, Chen et al. [b], is well posed. A source of techniques we use is found in Burman and Rappaz. The difference in our paper is that the solution of the Poisson equation is used to define the vector field in the convection term of the convection-diffusion PDE, whereas in the model of Burman and Rappaz the vector field is produced from the solution of the phase-field equation. We have also added forcing terms, dependent on the order parameter, to our reaction-diffusion PDE and to the Poisson equation. Our forcing term in the Allen-Cahn PDE also comes from the Butler-Volmer electrochemical reaction kinetics, which gives quite different numerical simulations in both the isotropic and the anisotropic cases. For more details, see Section 5.
The remainder of this paper is organised as follows. In Section 2 we present a rescaling of the equations (2.1a), (2.1b) and (2.1c), as well as a weak formulation of the system. We also introduce the anisotropy tensor and its properties. In Section 3 we present the main result of this paper, which is an existence result, using Rothe's method, for a weak solution of the system. In Section 5 we present some numerical simulations.
The PDE model and its weak formulation
In this section we present the model for lithium batteries. In Section 2.1 we start by presenting the equations that constitute our PDE system. In Section 2.2 we "nondimensionalize" the model, so that we work without units and perform correct and efficient computations later. In Section 2.3 we pass to a weak formulation of the model. In Section 2.4 we introduce the anisotropic diffusion tensor, its derivation and useful properties. Finally, we present the main result of this paper in Section 2.5; we refer to Section 5 for more information on the numerical results. The model takes the form of equations (2.1a)-(2.1c), posed in Ω × [0, T * ], where γ, κ, µ > 0, α ∈ [0, 1] is a parameter that affects the symmetry of the magnitude of the forcing term in (2.1a), and n, F, R, T respectively are the valence of the chemical reaction of the Li + with the electrons e -, the Faraday number, the gas constant and the temperature. The activation overpotential is given by η a and it is negative, c s > 0 represents the site density of Li-metal, and g, h, σ and D are continuously differentiable functions representing the double-well potential, the primitive of the double-well potential, the effective conductivity and the effective diffusion coefficient respectively; they are formulated in (2.2)-(2.5), where D e , D s > 0 are the diffusion coefficients in the electrode and the electrolyte solution respectively. The same applies to σ e , σ s > 0 for the conductivity. The double-well potential and its primitive are Lipschitz continuous functions in the way they are defined. The effective diffusion and conductivity functions are also monotone functions when 0 ≤ u ≤ 1. The fact that the above coefficients are positive implies that these functions are also positive for these values of u. Also, we define D 1 : R 2 → R as in (2.6), where D min := min s∈R D(s).
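Since the displayed definitions (2.2)-(2.6) did not survive extraction, the following Python sketch only illustrates one common phase-field choice of a double-well potential and of monotone effective coefficients interpolating between electrode and electrolyte values; these particular formulas are assumptions for illustration and are not necessarily the ones used in (2.2)-(2.6).

```python
import numpy as np

def g(u):
    # A standard double-well potential with minima at u = 0 and u = 1 (illustrative choice).
    return u**2 * (1.0 - u)**2

def h(u):
    # A smooth, monotone interpolation function on [0, 1] (illustrative choice).
    return u**3 * (6.0 * u**2 - 15.0 * u + 10.0)

def effective(u, val_electrode, val_electrolyte):
    # Effective coefficient interpolating between the electrode (u = 1)
    # and the electrolyte (u = 0) values; monotone for 0 <= u <= 1.
    return val_electrolyte + (val_electrode - val_electrolyte) * h(np.clip(u, 0.0, 1.0))

D_e, D_s = 1.0e-2, 1.0          # diffusion in electrode / electrolyte (placeholder values)
sigma_e, sigma_s = 1.0, 1.0e-3  # conductivity in electrode / electrolyte (placeholder values)
u = np.linspace(0.0, 1.0, 5)
print(effective(u, D_e, D_s))
print(effective(u, sigma_e, sigma_s))
```

With such a choice the effective coefficients are positive and monotone on [0, 1], which is the qualitative behavior the analysis relies on.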
Although our analysis applies to general domains Ω, we will focus on a specific cylindrical geometry, with Γ denoting the segments of the boundary where we consider (sometimes two different) Dirichlet boundary conditions, and Γ ′ where we take Neumann boundary conditions. In particular, in d = 2, we use the definitions (2.7)-(2.9). The boundary conditions for (2.1a), (2.1b) and (2.1c) are given in (2.10a)-(2.10c), with ϕ − ∈ [−2, 0) and n Ω denoting the outward normal vector to Ω. For the equation (2.1b) we use the natural boundary conditions. Since the system is time dependent, we also need initial values for the order parameter and the concentration of Li + ; we choose the initial data (2.11a)-(2.11b).
We start with the rescaling of the system (2.1a)-(2.1c). We set T = T * /∆t 0 and L̃ i = L i /l 0 , i = 1, 2, where l 0 , ∆t 0 are length and time reference values for the model, and using Table 1 in Chen et al. [b] we define the rescaled quantities and introduce the corresponding dimensionless coefficients. For the consistency of our problem, we consider the cutoff function (2.15) of the coefficient of the forcing term. Using the above rescaling, and dropping the tilde notation for ease of presentation, the dimensionless form of (2.1a)-(2.1c) is given by (2.16a)-(2.16c), together with the initial and boundary data (2.10a)-(2.10c) and (2.11a)-(2.11b).
2.3. The weak formulation. We next reformulate the NPPAC system of Section 2.1 in weak form. To this aim, we briefly recall the functional space set-up, referring for details to standard texts [e.g. Evans]. We write L p (Q; X ), for an open domain Q ⊂ R q and a normed vector space X , for the space of (Lebesgue) p-summable functions φ : Q → X , equipped with the usual norm. First, we reformulate the nonhomogeneous Dirichlet boundary conditions for both (2.16b) and (2.16c). To this end we introduce the decomposition (2.17). It is clear that ϕ − (L 1 − x/L 1 )x has boundary conditions that coincide with the boundary conditions (2.10c) of ϕ. A direct calculation, with ν 2 = ϕ − /L 1 , leads to (2.19). For simplicity of presentation we drop the bar notation for ϕ, and the system in its final form is given by (2.20a)-(2.20c), with the initial data as described in (2.11a) and (2.11b) and with the definitions (2.21) and (2.22). A weak formulation of the above problem is given as follows: let u(0) = u 0 and c(0) = c 0 , and seek (u, c, ϕ) satisfying (2.23a)-(2.23c). 2.4. Anisotropic diffusion tensor. We use an anisotropic diffusion tensor for the order parameter that is related to the anisotropic function a; as proposed by Karma and Rappel, it is given by (2.24) for some material dependent parameters a 0 , δ > 0. This leads to the diffusion term of the phase-field equation, which is obtained from the anisotropic Dirichlet energy (2.25) with w ∈ H 1 (Ω). In two spatial dimensions, this functional is proven in Burman and Rappaz to be strictly convex in w for all w ∈ H 1 (Ω) under the condition δ < δ 0 = 1/15. The anisotropic Dirichlet energy is differentiable and its derivative is the operator J ′ a . The tensor-valued field A(p), p ∈ R d ∖ {0}, which models the anisotropy of the growing crystals due to lithium's cubic crystalline structure as in Karma and Rappel, Burman and Rappaz, is defined accordingly. Since the matrix A has all its entries bounded, the derivative is bounded by the upper and lower bounds in (2.28). Due to the convexity of the tensor, inequality (2.29) holds. Also, the mapping w ∈ H 1 (Ω) → J ′ a (w) ∈ H 1 (Ω) ′ is monotone and hemicontinuous. We refer to Burman and Rappaz for further details on the proofs of the properties (2.28) and (2.29).
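To make the role of the anisotropy function concrete, the sketch below evaluates a Kobayashi/Karma-Rappel type two-dimensional anisotropy a(∇u) = a0(1 + δ cos(ωθ)), with θ the orientation of ∇u, and the anisotropic flux a²∇u + a a′(−∂y u, ∂x u) that commonly appears with such models; this is an illustrative form under stated assumptions, not necessarily identical to the tensor A(p) used in (2.28)-(2.29).

```python
import numpy as np

a0, delta, omega = 1.0, 0.05, 4  # amplitude, anisotropy strength, mode (omega = 4 for cubic symmetry)

def anisotropic_flux(grad_u):
    # grad_u: array of shape (..., 2) holding (du/dx, du/dy).
    # Kobayashi-type flux: a(theta)^2 * grad_u + a(theta) * a'(theta) * (-du/dy, du/dx).
    gx, gy = grad_u[..., 0], grad_u[..., 1]
    theta = np.arctan2(gy, gx)
    a = a0 * (1.0 + delta * np.cos(omega * theta))
    a_prime = -a0 * delta * omega * np.sin(omega * theta)
    flux_x = a**2 * gx - a * a_prime * gy
    flux_y = a**2 * gy + a * a_prime * gx
    return np.stack((flux_x, flux_y), axis=-1)

# For delta = 0 the flux reduces to the isotropic a0**2 * grad_u.
print(anisotropic_flux(np.array([[1.0, 0.0], [0.0, 1.0]])))
```

The strength δ plays the role of the convexity-limiting parameter discussed above (δ < δ0 = 1/15 for ω = 4).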
2.5. Main result. We present here the main result of this paper.
2.6. Theorem (Existence). For every (u 0 , c 0 ) ∈ (H 1 (Ω)) 2 and T > 0 there exists a weak solution (u, c, ϕ) of the weak formulation (2.23a)-(2.23c). The proof of Theorem 2.6 will be split between Sections 3 and 4, after establishing all the auxiliaries, and can be summarized as follows. Step 1. Write an implicit Euler time semidiscretization for which we prove existence of a solution, called the semidiscrete approximation.
Step 2. Show maximum principle results.
Step 3. Derive energy estimates that allow us to find sequences of the semidiscrete approximation that are weakly compact with respect to the timestep.
Time discretization
In this section we focus on the time discretization of the weak formulation, on existence results for the system of PDEs at each time-step, as well as on energy estimates for each PDE. We first discretize in time the equations (2.23a)-(2.23c) with a time step τ > 0. Then, we prove existence of a solution (u k , c k , ϕ k ) k=0,...,K of a time-discrete approximation of the weak formulation (2.23a)-(2.23c) in Theorem 3.1. The proof of Theorem 3.1 is broken into smaller parts. We first apply operator splitting to prove existence for each time-discrete equation separately, assuming the two other unknowns are given from the previous time instant. In Lemma 3.2 we prove the existence of a unique solution u k to (3.1a), with c k−1 given, and in Lemma 3.3 we show that u k is bounded in k. Then, in Lemma 3.4 we prove the existence of a unique solution ϕ k to (3.1b) with given u k . In Lemma 3.6 we show that there is a unique solution c k to (3.1c) with given u k , ϕ k . We finally prove uniform bounds for (u k , c k , ϕ k ). We use the backward difference quotient d τ w k = (w k − w k−1 )/τ, with τ being the timestep, and we linearize the convection term in (2.23b), to obtain the discretization (3.1a)-(3.1c) of (2.23a)-(2.23c), holding for all (v, η) ∈ H 1 (Ω)×H 1 0|Γ (Ω) and for all k ≥ 1, k ∈ N, with u 0 = u 0 (x), c 0 = c 0 (x). Note that the equations in the system (3.1a)-(3.1c) are not solved simultaneously, but in sequential order: first the phase-field equation (3.1a) with given c k−1 , then the Poisson equation (3.1b) with given u k , and finally the convection-diffusion equation (3.1c) with given u k and ϕ k .
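The sequential structure of one time step can be summarized by the following Python-style sketch; the three solver callables are placeholders standing in for the elliptic solves of Lemmas 3.2, 3.4 and 3.6, and their names and signatures are illustrative assumptions rather than part of the paper's implementation.

```python
def advance_one_step(u_prev, c_prev, tau, solve_phase, solve_poisson, solve_transport):
    """One step of the operator-split time discretization (3.1a)-(3.1c)."""
    u_k = solve_phase(u_prev, c_prev, tau)          # (3.1a): Allen-Cahn step with c frozen at k-1
    phi_k = solve_poisson(u_k)                      # (3.1b): Poisson step with the newly computed u_k
    c_k = solve_transport(c_prev, u_k, phi_k, tau)  # (3.1c): convection-diffusion step with u_k and phi_k
    return u_k, phi_k, c_k

def time_march(u0, c0, tau, n_steps, solvers):
    # March the semidiscrete approximation from k = 0 to k = n_steps.
    u, c = u0, c0
    for _ in range(n_steps):
        u, phi, c = advance_one_step(u, c, tau, *solvers)
    return u, c
```

The point of the sketch is only the ordering of the three solves, which is what the existence proof exploits.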
3.1. Theorem (Existence for the time-discrete problem). For every τ > 0 and given data from the previous time instant, there exists a solution (u k , c k , ϕ k ) of (3.1a)-(3.1c). 3.2. Lemma (Semidiscrete scheme for the Allen-Cahn equation). For every τ > 0 and k = 1, ..., K, K = ⌈T /τ ⌉, with u 0 = u 0 ∈ H 1 (Ω), there is a unique solution u k of (3.1a). Proof. We rewrite (3.1a) as the elliptic equation (3.2) for every v ∈ H 1 (Ω), where the left hand side is the (Fréchet) derivative of the energy functional I : H 1 (Ω) → R. We now seek a minimiser of the functional (3.5), which is the functional from which we derive the elliptic equation (3.2). From Theorem 3.30 in Dacorogna, for the existence of a minimiser of (3.5) it is enough to prove that the integrand is a Carathéodory function which is coercive and satisfies a sufficient growth condition. The function (3.6) is a Carathéodory function according to the definition [Bartels, Remark 2.4(i)]. We continue by showing the coercivity condition. From the assumption of Theorem 3.1, u k−1 ∈ L ∞ (Ω) and from (2.2), (2.3) and (2.15) we deduce that there are constants γ 1 , γ 2 > 0 such that the bound (3.7) holds. Then, by using the Cauchy-Schwarz and Young inequalities, the lower bounds in (2.28) and (3.7) and that u k−1 ∈ L ∞ (Ω), we establish a lower bound for F. Using the upper bounds in (2.28) and (3.7) we see that F fulfills the required growth condition. It only remains to show the uniqueness of the minimizer. If w 1 and w 2 both minimize I k , then they also satisfy (3.2) and we may write (3.12). Choosing v = w 1 − w 2 in (3.12) and using the Lipschitz continuity of g, (2.2), and h, (2.3), as well as the lower bound (2.28), we obtain a bound with a constant C independent of τ. By taking τ ≤ ϵ 2 /C we conclude that w 1 = w 2 and thus uniqueness of the minimizer. □ 3.3. Lemma (Maximum principle). Assume that the initial value of (3.1a) is nonnegative and the time-step τ of the time discretization of (2.23a) is sufficiently small. Then, the solution u k remains bounded (in particular u k ≥ 0) almost everywhere for all k = 1, ..., K.
Proof. From (3.1a) we obtain the identity used below. We prove (3.16) by induction. Assumption (3.15) is the base case. We suppose that (3.16) holds for u k−1 and then prove it holds for u k .
Choose v = (u k ) − , where (ξ) − := min(0, ξ) ≤ 0 is the signed negative part of ξ ∈ R. Since by assumption u k−1 ≥ 0 and noting (u k ) − ≤ 0, it follows that the first term on the right hand side is zero or negative. Also, g ′ , (2.21), and h ′ , (2.22), are bounded. Therefore, the right hand side remains zero or negative, and we conclude that (u k ) − = 0, i.e. u k ≥ 0. Proof. We introduce the form b, for all w ∈ H 1 0|Γ (Ω), which is bilinear with respect to v and w, and we apply the Lax-Milgram Theorem. We first show the coercivity of b 1 . Noting the equivalence of the H 1 semi-norm and the H 1 norm under a homogeneous Dirichlet boundary condition, (2.5) and Lemma 3.3, we obtain the coercivity bound with σ min := min s∈R σ(s). Next we show that a is bounded.
Proof.We introduce the form b which is bilinear for w and v. Now we can rewrite (3.1c) as and we prove existence of a weak solution with the Lax-Milgram Theorem.Coercivity comes from the fact that, by definition, D has a lower bound, see (2.4), so Next we show the boundedness of (3.27), so Finally, the functional is bounded and linear, since . So, by Lax-Milgram Theorem we have a unique solution to (3.1c).□ We now proceed with the energy estimates for (u k , c k , ϕ k ).At this point we note that both g and h : R → R have linear growth, since they are polynomials in [0, 1], and constant functions outside [0, 1], in particular for all ξ ∈ R up to a positive constant.
3.7.Lemma (Energy estimates for the order parameter).Suppose that u 0 ∈ H 1 (Ω).Then, for τ ≤ 1/4C with some positive constant C that is dependent on T, ϵ, ν 0 and u 0 , we have Proof.We make use of the fact that (3.34) Combining (3.34) with (3.2), (3.3), (3.32) and (2.28) yields Then, we multiply by τ and sum for k = 1, 2, ..., l, where l ∈ N is arbitrary with hence, making use of the telescope effect for the second term and that τ ≤ 1/4C we have (3.37) We can now use a generalized discretized Gronwall lemma, see Lemma 2.2 in Bartels, to show that (3.38) which implies where C is dependent on the coefficients and the upper bound of (3.38).We now adopt the idea of Burman and Rappaz to prove an energy estimate for d τ u k in L 2 ([0, T ], L 2 (Ω)).We test (3.2) with v = d τ u k , multiply by τ and sum over k = 1, 2, ..., K, which leads to the following thus, by using the linear growth of g ′ and h ′ , (3.32), the boundedness of (2.15) on the interval [0, 1], inequality (2.29) and the generalized Young inequality we deduce that (3.41) Therefore, (3.42) □ 3.8.Remark (Regularity of the anisotropic diffusion term).The energy estimate for the time derivative and given that g ′ (u) and h ′ (u) are polynomials in u imply that ∇• (A(∇u)∇u) ∈ L 2 ([0, T ]; L 2 (Ω)).
(3.43) 3.9.Lemma (Energy estimates for the electric potential).Let ϕ k be the solution to equation (3.1b).Then, for some positive constant C that is dependent on T, ϵ, a 0 , u 0 , ν, ϕ − , σ ′ max , σ min and the geometry of Ω, we have Proof.We take equation (3.1b) and we choose η = ϕ k , to give Then, . We now use the Poincaré-Friedrichs inequality for the left hand side and the generalized Young inequality on the right hand side for both terms, We choose where C * is a positive constant dependent on ν 1 , ν 2 , σ ′ max , σ min and the geometry of Ω.We now multiply with τ and sum for k = 1, 2, ..., K, (3.50) Lemma 3.7 gives us the necessary bounds on the right hand side to finish the proof.□ 3.10.Lemma (Energy estimates for the concentration).Suppose c 0 ∈ H 1 (Ω).
Then, for τ ≤ 1/2µ and some positive constant C that is dependent on c 0 , D e , D s and the dependencies as described in Lemma 3.9, we have Proof.We take equation (3.1c) and we test it with χ = c k , so it becomes We use (3.34) to bound the first term from below, the fact that D(u k ) ∈ L ∞ (Ω) and the properties of D 1 , (2.6), to obtain Then, we multiply by τ and sum for k = 1, 2, ..., l, where l ∈ N is arbitrary with 0 ≤ l ≤ K, and by using the telescope property the above inequality becomes (3.55) So, for τ ≤ 1/(2µ) we get From Lemmas 3.7 and 3.9 the second and third terms of the right hand side of the above inequality are uniformly bounded in k.We can now use the discrete Gronwall Lemma as we did in Lemma 3.7 and since l is arbitrary chosen, we obtain For the last estimate we know that (3.1c) holds for all v ∈ H 1 (Ω), so it is true that By multiplying by τ and summing for k = 1, 2, ..., K, we reach to (3.51).□ 3.11.Proof of Theorem 3.1.Theorem 3.1 is a direct consequence of Lemmas 3.2, 3.4 and 3.6.
Weak convergence of the limits
We move onto the final step of the proof of Theorem 2.6, which is to pass to the weak limits.In Lemma 4.4 we use the uniform bounds of the time-discrete approximation, so that we pass to the limits as τ → 0 of the subsequences of the linear terms of (3.1a)-(3.1c).In Lemma 4.5 we prove strong convergence of the nonlinear terms of (3.1a)-(3.1c) in L 2 ([0, T ]; L p (Ω)), with p being dependent on the dimension of Ω.We continue with Lemma 4.6, in which we show that the forcing term in the Allen-Cahn equation and the concentration function of the convection term in the convection-diffusion equation are strongly convergent in L 2 ([0, T ]; L q (Ω)), where q is again dependent on the dimension of Ω.
For the last term of (4.3d) it is enough to show a uniform bound on the L 2 ([0, T ]; L 2 (Ω)) norm of ûτ , since from (4.1) we know that û′ τ = d τ u k .Again from Lemma 3.7 we can easily verify that the L 2 ([0, T ]; L 2 (Ω)) norm of û′ τ is uniformly bounded, noting that ûτ (t) = u + τ (t) − (t k − t)û ′ τ (t) for a.e.t ∈ (t k−1 , t k ).The bounds in (4.3e) and (4.3f) follow using similar arguments and Lemmas 3.9 and 3.9.□ 4.4.Lemma (Selection of the limits).There exist (u, c, ϕ) and (∂ t u, ∂ t c) as in Theorem 2.6 such that for a sequence (τ n ) n∈N of positive numbers with τ n → 0 as n → ∞, we have the following, Proof.From Lemma 4.3 and the energy estimates of Lemmas 3.7 , 3.9 and 3.10 we immediately get for (4.5b)-(4.5h) that there are weakly convergent subsequences that converge to appropriate limits that are also unique .For (4.5a) we use Lemma 3.3, to conclude the asserted weak - * convergence.□ 4.5.Lemma (Strong convergence of the nonlinearities).For the functions g ′ , h ′ , m, σ and D as defined in (2.21), (2.22), (2.15), (2.4), (2.5) and (2.6), with Proof.We will describe the proof for (4.6a) as the arguments are the same for the rest of the limits.In Lemma 4.4 we proved that We know that the inclusion L 2 (Ω) ⊂ H 1 (Ω) ′ is continuous and therefore is continuous too.From Aubin-Lions Lemma [e.g.Roubiček] we have that Hence, u ± τn → u almost everywhere up to a subsequence, that we denote with the same index.From the continuity of g ′ we immediately deduce that g(u ± τn ) → g(u) almost everywhere.Since g ′ is a polynomial and g ′ : [0, 1] → R, there is a positive real number M , such that |g ′ (u ± τn )| ≤ M .From the dominated convergence theorem we obtain g(u ± τn ) → g(u) in L 2 ([0, T ]; L p (Ω)). (4.11) Taking into consideration the definitions of h ′ , m, σ and D we can use the same arguments to prove (4.6b)-(4.6e).□ 4.6.Lemma (Strong convergence of the products).For m(c)h ′ (u), D 1 (u, c) ∈ L 2 ([0, T ]; L q (Ω)), where 1 ≤ q < ∞ if dim Ω = 2 and 1 ≤ q < 3 if dim Ω = 3, we have the following as Proof.We use the definition of the strong convergence in L p spaces.By the triangle inequality and the generalized Hölder inequality with 1/q = 1/p + 1/p ′ , we obtain From Lemma 4.5 we know that the sequences are strongly converging in are well defined and bounded for all q ∈ [1, ∞) if dim Ω = 2 and for all q ∈ [1, 3) if dim Ω = 3.We use similar arguments to prove (4.12b).□ 4.7.Lemma (Weak convergence of the products).For D(u)∇c, D 1 (u, c)∇ϕ, σ(u)∇ϕ, A(∇u)∇u ∈ L 2 ([0, T ]; (L 2 (Ω)) 2 ) we have the following as τ n → 0 Proof.For (4.14a) we will first show the existence of a limit.We have that The last norm is bounded from Lemma 4.3, so this implies that there is a We will use the definition of the weak convergence to prove that ξ 1 = D(u)∇c, i.e. we will show the following as τ n → 0, , the first term vanishes as τ n → 0 because of the weak limit (4.5f).The second term vanishes in the limit as τ n → 0 because of (4.6c).Similarly, we obtain (4.14b) and (4.14c).The proof of (4.14d) has already been done in full detail in Burman and Rappaz.We describe here the main arguments.
Numerical Results
In this section we present numerical results produced by software that we developed, using the DUNE Python module [Dedner and Nolte] and the DUNE ALUGrid module [Alkämper et al.]. Since (2.23a)-(2.23b) model the crystal growth inside Li-metal batteries, an engineering problem that demands practical solutions sooner rather than later, it is essential for us to present numerical simulations of the aforementioned system.
For the numerical scheme we used a standard adaptive Galerkin finite element method on the fully discrete system (5.1a)-(5.1c), with u 0 h = u 0 (x), c 0 h = c 0 (x), where V 1 h and V 2 h are finite dimensional subspaces of H 1 (Ω) and V 3 h is a finite dimensional subspace of H 1 0|Γ (Ω). In our simulations we linearized the anisotropic tensor and the nonlinear terms of the phase-field equation.
For the sake of the computational cost we reduced our domain to half, taking advantage of the symmetry properties of the dendritic growth of the crystal. In Figure 2 we display plots of the solution (u, c, ϕ) at three different times. These results show good agreement with previously published simulations. The model we study has a unique solution under certain conditions. One main condition is that the anisotropy strength should always fulfill the inequality δ < δ 0 = 1/(ω 2 − 1). The molecular structure of the lithium atom indicates that ω = 4, which represents the mode of the anisotropy, so δ < 1/15 ≈ 0.067. In Figure 3, the numerical computations show the order parameter for different values of δ. We chose to present several cases with values of δ that comply with the theoretical limitations for the existence of the weak solution. However, our numerical method treats the anisotropy tensor explicitly, and thus we ensure numerical convergence for values that exceed δ 0 ; we therefore also present a computation for δ = 0.1. See the caption of Figure 3 for a discussion of the individual images.
In Figure 4, we compare how the shape is affected by the forcing term of the convection-diffusion equation. We have also added an image of a completely isotropic simulation, so that we can compare it with the rest of the results; see the caption of Figure 4 for the observations.
Figure 2. The solution of equations (5.1a)-(5.1c). The order parameter u is in the top row, the concentration c is in the middle row and the electric potential ϕ is in the bottom row. The snapshots are at times t 1 = 0.061, t 2 = 0.244 and t 3 = 0.427.
Figure 3. Comparison of different values of the anisotropy strength δ, introduced in (2.24), at t = 0.366. We observe that image (A) corresponds to the isotropic case of the order parameter. However, the shape is not a sphere because it is affected by the convection-diffusion equation, which has as forcing term the time derivative of the order parameter. By increasing the anisotropy strength we see that a crystal shape is formed. Images (B) and (C) show this, but in image (D) we finally see a full crystal shape with only one branch across the x-axis, compared to images (B) and (C), where we see two branches growing across the x-axis. Image (E) represents an experiment with anisotropy strength very close to the theoretical convexity limit δ 0 = 1/15. Image (F) is an example of the shape obtained with a value of the anisotropy strength that exceeds the theoretical bound for convexity of the anisotropic Dirichlet energy (2.25).
Figure 4. Comparison of different values of µ at t = 0.122. We observe that in the isotropic case we get a sphere. For µ = 0 we get an elliptic shape, and as we increase the magnitude of the forcing term, as formulated in (2.23b), we see that the shape becomes smaller, but the crystallized structure is more precise.
Proof of Theorem 2.6. Now we can prove our main result, Theorem 2.6.
"year": 2023,
"sha1": "49610edfc1da61ccc677af84b5add6f6dad033cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "49610edfc1da61ccc677af84b5add6f6dad033cd",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Computer Science"
]
} |
Well-posed boundary conditions and energy stable discontinuous Galerkin spectral element method for the linearized Serre equations
We derive well-posed boundary conditions for the linearized Serre equations in one spatial dimension by utilizing the energy method. An energy stable and conservative discontinuous Galerkin spectral element method with simple upwind numerical fluxes is proposed for solving the initial boundary value problem. We derive discrete energy estimates for the numerical approximation and prove a priori error estimates in the energy norm. Detailed numerical examples are provided to verify the theoretical analysis and show convergence of numerical errors.
Introduction
The propagation of free surface water waves is governed by the Euler equations, under the assumption that the fluid flow is inviscid and irrotational. The free surface nature makes the problem of solving the Euler equations difficult [15]. Consequently, there are approximate models derived from the Euler equations, such as the shallow water wave equations [6] and the Serre equations [9]. In contrast to the commonly used shallow water wave equations, the Serre equations are derived without hydrostatic pressure assumptions, and hence they contain higher order nonlinear dispersive terms. As a result, the Serre equations can model dispersive water waves [22].
The Serre equations in one spatial dimension (1D), describing nonlinear dispersive water waves over a horizontal bed, can be written as the system of partial differential equations (PDEs) (1a)-(1b). Here, x is the spatial coordinate, t is time, ū = ū(x, t) is the depth-averaged velocity of a free surface fluid with water depth ζ = ζ(x, t), and g is the gravitational acceleration. The continuity equation (1a) describes the conservation of mass and (1b) describes the conservation of momentum.
The conservation of momentum equation (1b) contains higher order nonlinear derivative terms, and mixed space-time derivatives ofū. The presence of these terms makes the Serre equations difficult to study both numerically and analytically. In recent years, considerable research has been devoted to the development of numerical methods for solving the Serre equations. Various numerical techniques have been utilized, and they include finite difference methods [7,17,1], finite volume methods [22,3,4,9], continuous or discontinuous Galerkin finite element methods [14,5,21], or combinations of these methods. Many of these approaches are restricted to being low order accurate, and are not able to efficiently resolve highly oscillatory dispersive wave modes present in the solution. To circumvent the challenge presented by higher order and mixed derivatives terms in the momentum equation (1b), several methods introduce auxiliary variables to rewrite the Serre equations as a system of first order equations [22,3,4,9,21]. The main disadvantage of this approach is that we require additional constraints and non-physical boundary and interface conditions to solve the system of first order equations. Another drawback is that the degrees of freedom for the system increase by several factors, see for example [21], which requires up to 8 auxiliary variables to eliminate higher order derivatives terms in the 1D Serre equations. In two spatial dimensions, the required number of auxiliary variables is expected to increase significantly, further limiting the efficiency of the method.
The primary objective of this paper is the development of robust (provably stable), efficient (no auxiliary variables), and high order accurate numerical method for solving the Serre equations, with rigorous mathematical support. Our contributions are two-fold: 1) the derivation of well-posed boundary conditions for the linearized Serre equations; 2) the development of a provably energy stable discontinuous Galerkin spectral element method (DGSEM) for the initial boundary value problem (IBVP).
A necessary and important step towards developing a robust and high order accurate numerical scheme for solving PDEs in a bounded domain is to derive well-posed boundary conditions for the system of PDEs, in particular boundary conditions that ensure that the IBVP at the continuous level is well-posed [10]. Unfortunately, the theory of IBVPs for dispersive waves such as the Serre equations is less developed [4,16]. One of the main difficulties is that there are no well-defined characteristics. Therefore, the theory of characteristics often used for hyperbolic IBVPs is not applicable. In the literature, most numerical methods for the Serre equations are derived for periodic boundary conditions, and consequently they are not relevant when non-periodic boundary phenomena are important. There are however a few exceptions [4,16,13], where non-periodic boundary conditions are considered, although for specific time discretizations. In [13], non-local artificial boundary conditions for the linearized Serre equations with zero background flow velocity are proposed and analyzed. In the present work, we derive linear well-posed and energy stable boundary conditions for the linearized Serre equations with arbitrary background flow velocity. The derived boundary conditions are local, and yield bounded energy estimates for the solutions of the IBVP. Given appropriate data, the boundary conditions can be used to effectively impose inflow and outflow boundary conditions. We will derive a provably stable and high order accurate DGSEM for solving the IBVP without introducing auxiliary variables. For an element based scheme, one of the main challenges lies in how to connect adjacent elements, in particular enforcing the continuity of the solutions and their (first and second) derivatives across the elements' interfaces in a stable manner without introducing auxiliary variables. Another challenge is the derivation of stable and accurate numerical boundary treatment for the IBVP. To succeed, in addition to well-posed boundary conditions, we derive well-posed interface conditions that ensure the conservation of energy, mass, and linear momentum at the continuous level. At the discrete level, following [8], we construct spatial derivative operators that satisfy the summation by parts property (SBP) in a discontinuous Galerkin spectral element framework. Then, we use the Simultaneous Approximation Term (SAT) method [2] to weakly impose interface and boundary conditions. This SBP-SAT approach enables us to prove that the semi-discrete numerical scheme satisfies discrete energy estimates analogous to the continuous energy estimates necessary for the well-posedness of the IBVP. The semi-discrete numerical approximation is integrated in time using the classical fourth order accurate explicit Runge-Kutta method. Our proposed numerical scheme combines key ideas from the SBP finite difference method, the spectral element method, and the discontinuous Galerkin method. We perform detailed numerical experiments to verify the theoretical analysis, showing convergence of numerical errors, conservation properties of the method, and demonstrate the effectiveness of the high order numerical method in resolving highly oscillatory dispersive waves. The results obtained from the linear analysis can be applied to the nonlinear problem. This will be reported in a forthcoming work.
The paper is organized as follows: In section 2 we introduce integration by parts identities to be mimicked by the spatial discrete derivative operators. In section 3 the linearized Serre equations are introduced, and we perform continuous analysis, proving well-posedness for the initial value problem (IVP) and the IBVP. In section 4 we derive the DGSEM and prove numerical stability. Discrete error estimates are derived in section 5. Numerical experiments are presented in section 6 verifying the theoretical analysis. In section 7, we draw conclusions and suggest directions for future work.
Preliminaries
We begin by introducing some notation. Let u and v be real-valued functions defined on an interval domain Ω = [x L , x R ]. The standard L 2 -inner product and its associated norm are denoted by (u, v) Ω and ‖u‖ Ω , respectively. Assuming that u and v are sufficiently smooth, that is u, v ∈ C p (Ω) for some p ≥ 3, the integration-by-parts principles for first, second and third derivatives yield the identities (2)-(4). In particular, if u = v then (2) and (4) yield (5) and (6), respectively. The identities derived in this section are key ingredients for the continuous and discrete analysis performed in the next sections. In section 4.1, we will describe how to construct spatial operators that mimic these identities at the discrete level. This will enable us to prove discrete counterparts of the results obtained in the continuous analysis.
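The displayed identities (2)-(6) did not survive extraction; a plausible reconstruction of the standard integration-by-parts relations they refer to, stated here as an assumption rather than as the authors' exact equations, is

$$
(u, v_x)_\Omega = uv\big|_{x_L}^{x_R} - (u_x, v)_\Omega,\qquad
(u, v_{xx})_\Omega = u v_x\big|_{x_L}^{x_R} - (u_x, v_x)_\Omega,
$$
$$
(u, v_{xxx})_\Omega = \big(u v_{xx} - u_x v_x\big)\big|_{x_L}^{x_R} + (u_{xx}, v_x)_\Omega,
$$

and, taking $u = v$ in the first and third of these,

$$
(u, u_x)_\Omega = \tfrac12\, u^2\big|_{x_L}^{x_R}, \qquad
(u, u_{xxx})_\Omega = \Big(u u_{xx} - \tfrac12 u_x^2\Big)\Big|_{x_L}^{x_R}.
$$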
Linearized Serre equations
Linearizing (1) around the constant mean height H > 0 and constant velocity U gives the linearized Serre equations (7), where h and u denote the perturbed height and velocity respectively. At t = 0, we augment (7) with sufficiently smooth initial conditions (8), where the functions f h and f u are compactly supported in Ω.
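The displayed system (7) was lost in extraction; a form consistent with standard linearizations of (1) about the state (H, U), offered here as an assumption rather than as the authors' exact statement, reads

$$
h_t + U h_x + H u_x = 0, \qquad
u_t + U u_x + g h_x - \frac{H^2}{3}\left(u_{xxt} + U\, u_{xxx}\right) = 0 .
$$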
Well-posedness of the IVP
Let us now consider the IVP (7)-(8) and investigate the well-posedness of the model. For several problems, such as the linear shallow water equations, the systems of PDEs have the form q t + Dq = 0, where the differential operator D depends only on the spatial derivatives ∂/∂x. In these settings, the semi-boundedness 1 of D ensures the well-posedness of the IVP [11]. However, for the Serre equations (7), and specifically in (7b), the flux contains higher order and mixed space-time derivatives of u. It is not obvious how to eliminate the time derivative from the spatial operator in (7b) without introducing auxiliary variables [21] or other conservative variables [22,20,19,18]. In order to prove the well-posedness of the IVP (7)-(8), we will bound the solution in the energy norm E(t) defined in (9). The energy norm E(t) is a quasi H 1 -norm, and bounds the L 2 -norm of the height h and the H 1 -norm of the velocity u.
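The definition (9) was also lost in extraction; a natural candidate consistent with the description of E(t) as a quasi H¹-norm bounding the L²-norm of h and the H¹-norm of u, stated as an assumption, is

$$
E(t) = \frac12 \int_\Omega \Big( g\, h^2 + H\, u^2 + \frac{H^3}{3}\, u_x^2 \Big)\, dx .
$$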
We begin with the definition of well-posedness: the IVP is well-posed if its solution satisfies an estimate of the form E(t) ≤ K e αt E(0) for some constants α and K > 0 that are independent of the initial conditions (8), where E(t) is defined in (9).
Then, we introduce the lemma which relates the rate of change of the energy to boundary terms.
Lemma 1
The linearized Serre equations (7) satisfy the energy equation (10), where BT | x=x R x=x L = BT | x=x R − BT | x=x L and the boundary term BT is given by (11). (Footnote 1: The differential operator D is semi-bounded in the function space V if (q, Dq) Ω ≥ α‖q‖ 2 L 2 (Ω) for all q ∈ V, where α ∈ R is a constant independent of q.) Proof 1. We multiply (7a) by gh and (7b) by Hu, and integrate over Ω. Substituting v = ∂u/∂t in (3) gives the identity (12). Hence, using (2), (5), (6) and (12) yields (13a)-(13b), where the boundary terms have been moved to the right hand side. Summing (13a) and (13b) completes the proof of the lemma.
Theorem 1 states that the energy E(t) defined in (9) is conserved, that is E(t) = E(0), where the boundary term BT in (10) is given by (11). Proof: we impose the periodic boundary conditions (14), and hence the boundary terms cancel out, which gives BT | x=x R x=x L = 0. It follows that dE/dt = 0. Integrating in time gives E(t) = E(0) for all t ≥ 0.
Theorem 1 also holds for Cauchy problems with compactly supported initial data (8).
Well-posed boundary conditions
The main aim of this section is to formulate well-posed boundary conditions for the linearized Serre equations (7). This is a necessary step towards accurate and reliable computations of the solution of (7). Recall that for many linear PDEs, such as the linear shallow water equations, the systems have the form q t + Dq = 0, where the differential operator D depends only on the spatial derivative ∂/∂x. In these situations, the maximally semi-boundedness 2 of D will ensure the well-posedness of the IBVP [11].
For the linearized Serre equations (7), when boundary conditions are introduced, we will aim to bound the solutions in the energy norm E(t). From the energy equation (10), it suffices to ensure that the boundary term is always nonpositive, namely BT| x=x R x=x L ≤ 0. Hence, we are looking for a minimal number of boundary conditions, with homogeneous boundary data, at x = x L and x = x R such that BT| x=x R x=x L ≤ 0. We begin by rewriting the boundary term (11) in the matrix form (16). Using eigen-decomposition (see appendix A), we obtain a representation in terms of the vector w given by (17), with positive constants C ± = (4H 4 + 3U ± √(4H 4 + 9U 2 ))/2, and the corresponding eigenvalues λ j given by (18). Without loss of generality, we only consider boundary conditions for U = 0 and U > 0. Case 1: U = 0. In this case only λ 4 and λ 5 are nonzero, and since λ 4 < 0 and λ 5 > 0, we need one boundary condition at x = x L and one boundary condition at x = x R . We set the linear boundary conditions (19), where the constants α, β must be chosen such that BT| x=x R x=x L ≤ 0 to ensure well-posedness.
Lemma 2. Consider the boundary conditions (19) and the boundary term BT defined by (16). If the constants α and β are chosen so that each boundary contribution is nonpositive, then applying the boundary conditions (19) gives BT| x=x R x=x L ≤ 0. We now state the following theorem, which proves the well-posedness of the IBVP (7), (8) and (19).
Theorem 2. Consider the linearized Serre equations (7) with U = 0 subject to the initial conditions (8) and the boundary conditions (19). Then the energy E(t) defined in (9) is bounded by the energy of the initial data, that is E(t) ≤ E(0) for all t ≥ 0.
Case 2: U > 0
When U > 0, we have λ 2 , λ 3 , λ 4 < 0 and λ 5 > 0. Thus, we need three boundary conditions at the inflow boundary x = x L and one boundary condition at the outflow boundary x = x R . We set the linear boundary conditions (20). The constants α j , β j must be chosen such that BT| x=x R x=x L ≤ 0. Lemma 3. Consider the boundary conditions (20), and let R denote the associated symmetric matrix. If the constants α j and β j (for j = 2, 3, 4) are chosen such that v ⊤ Rv ≤ 0 for all v ∈ R 3 , then BT| x=x R x=x L ≤ 0, where BT is the boundary term defined by (16).
Theorem 3. Consider the linearized Serre equations (7) with U > 0 subject to the initial conditions (8) and the boundary conditions (20). Then the energy E(t) defined in (9) is bounded by the energy of the initial data. The proof follows the same steps as for Theorem 2, using Lemma 3, and is complete.
Remark 1 When U < 0, the situation reverses since x = x L becomes the outflow boundary and x = x R becomes the inflow boundary. The signs of the eigenvalues also change, that is λ 2 , λ 3 , λ 5 > 0 and λ 4 < 0. The boundary term is We need one boundary condition at the outflow boundary x = x L and three boundary conditions at the inflow boundary x = x R . Similar to the previous case where U > 0, we set the linear boundary conditions where the constants α j and β j must be chosen such that BT| x=x R x=x L ≤ 0. Carrying out a similar analysis as in Case 2 will prove the well-posedness of the corresponding IBVP, (7)- (8) and (21).
Well-posed interface conditions
We now derive well-posed interface conditions that will be used to couple adjacent elements together. These interface conditions should enable conservative and stable numerical treatments.
We begin by splitting the spatial domain Ω into two subdomains Ω − and Ω + with an interface at x = 0; in particular, Ω − = [x L , 0] and Ω + = [0, x R ]. The solutions of the linearized Serre equations in the subdomains Ω − and Ω + are denoted with the superscripts − and + respectively. Hence, we have the two systems (22)-(23), with the flux functions given there. At the interface x = 0, we define the jump of a scalar/vector field v across the interface in the usual way. We are now looking for a minimal number of conditions connecting the problems (22)-(23) across the interface such that the resulting coupled problem is conservative and energy stable.
Conservative interface conditions
Let φ h , φ u ∈ C ∞ (Ω) be smooth functions in Ω. We multiply (22a) and (22b) with gφ h and Hφ u respectively, and integrate over Ω − . We have Integration by parts gives Applying the same procedure to (23a) and (23b) yields Summing equations (24) and (26) gives where we have collected the interface terms in the right hand side. Similarly, adding up (27) and (25) gives Thus, if we require that the jump of the flux functions vanish, that is These equations imply that the total mass (g, h) Ω and the total linear momentum (H, u) Ω are conserved.
Theorem 4 Consider the Serre equations (22)-(23) in the split domain Ω = Ω − Ω + , and assume that the jump of the flux functions vanish at the interface x = 0, that is The theorem holds for Cauchy problems with compactly supported data and in a bounded domain with the periodic boundary conditions (14). If the numerical interface treatment satisfies a discrete analogue of Theorem 4, we say that the numerical method is conservative.
Energy conserving interface conditions
A second requirement of the interface conditions is that they should ensure energy stability for the coupled system. Hence, we need to determine the interface conditions such that the total energy in the domain is conserved. To this end, we introduce the following lemma. (22)-(23), and denote the energy in the subdomains Ω ± by E ± (t). Considering only boundary contributions from the interface x = 0, we have the energy equation
Lemma 4 Consider the Serre equations
and the vector w and the eigenvalues λ j are given by (17) and (18) respectively.
Proof 7
Applying Lemma 1 to the equations (22) and (23) in Ω − and Ω + respectively gives the energy equations Here, the boundary terms BT ± = BT(h ± , u ± ) are given by (16). Summing the energy equations together and considering only boundary contributions from the interface x = 0, we have This completes the proof.
Recall that we have four non-zero eigenvalues, namely λ 2 , λ 3 , λ 4 , λ 5 . It follows that we need four interface conditions to ensure well-posedness. The interface conditions should be imposed such that the interface terms vanish, that is IT| x=0 = 0, thus ensuring energy stability.
Since λ 1 = 0, the interface terms can be written as Hence, the interface conditions yields IT| x=0 = 0. The interface conditions (28) can be equivalently rewritten as which also ensure that the interface term vanishes IT| x=0 = 0, and imply that the jump of the flux functions vanish, that is The following theorem states that the interface conditions (29) ensure energy conservation.
Theorem 5 Consider the Serre equations (22)-(23) subject to the interface conditions (29), and let E ± (t) denote the energy in the subdomains Ω ± . Considering only boundary contributions from the interface x = 0, the total energy in the domain is conserved, that is Proof 8 Lemma 4 gives the energy equation where we have only considered boundary contributions from the interface x = 0. Since the interface conditions (29) imply (28), the interface terms vanish IT| x=0 = 0, and hence The proof is complete.
A stable and effective numerical method should as far as possible emulate the theoretical results established in Theorems 2, 3 4, and 5.
Space discretization
In this section, we present the DGSEM for the linearized Serre equations in a bounded domain. We will prove that the presented numerical method is conservative and stable by proving discrete analogues of Theorems 2, 3 4, and 5.
We begin by splitting the spatial domain Ω Each element I k can be mapped to a reference element Ω = [−1, 1] using the following affine transformation: Let P P ( Ω) be the space of polynomials of degree at most P on Ω. In spectral element methods, we use Lagrange polynomials as basis functions for the polynomial space P P ( Ω). Here, ξ 1 , ξ 2 , . . . , ξ P +1 ∈ Ω are nodes of a Gaussian quadrature rule.
Summation-by-parts spectral difference operators
We now derive discrete spatial operators that satisfy the SBP property in the reference element Ω. We first observe that any function u defined on Ω has the following Lagrange polynomial approximation u: Hence, the weak derivative of u is given by for all test function φ ∈ P P ( Ω). Following the approach used in Galerkin spectral element methods, the test function is chosen such that φ = ℓ i , i = 1, 2, . . . , P + 1, and we choose the Gauss-Lobatto quadrature rule. Since this quadrature rule is exact for polynomials of degree at most 2P − 1, (31) can be written as Here, ω i > 0 are the weights of the quadrature rule. Let M = diag(ω 1 , ω 2 , . . . , ω P +1 ) be the mass matrix, and define the discrete derivative operator D = M −1 Q. Using integration-by-parts, (32) becomes , and hence we obtain the SBP property for the first derivative operator D where B = diag(−1, 0, 0, . . . , 0, 1). So far we have discussed how to derive spatial operators in the reference element Ω. In a physical element I k , the transformation (30) gives The operator D x is a discrete derivative operator that approximates the spatial derivative ∂ ∂x , and the mass matrix M x is an operator used to approximate integration. These operators also satisfy the SBP property where B is defined in (33). We will approximate higher derivatives within a physical element using D l x ≈ ∂ l /∂x l , for l = 1, 2, 3. The spatial derivative operators D l x with l = 1, 2, 3 satisfy discrete analogues of the integrationby-parts principle (2)- (4). To see this we introduce the following discrete inner product and norm Mx > 0, for u, v ∈ R P +1 . By straightforward computations using (35), we obtain the SBP properties for first, second and third derivatives as follows: which are the discrete analogues of (2)-(4).
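The construction of the collocation nodes, quadrature weights and the SBP pair (D, M) can be illustrated with the short Python sketch below; it follows the standard Legendre-Gauss-Lobatto recipe and checks the SBP property Q + Qᵀ = B numerically, but it is an independent illustration rather than the authors' code.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_weights(P):
    # Legendre-Gauss-Lobatto nodes: the endpoints plus the roots of L_P'(x).
    LP = leg.Legendre.basis(P)
    interior = LP.deriv().roots()
    x = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
    w = 2.0 / (P * (P + 1) * LP(x) ** 2)   # standard LGL quadrature weights
    return x, w

def lagrange_diff_matrix(x):
    # D[i, j] = l_j'(x_i) via barycentric weights; rows annihilate constants.
    n = len(x)
    c = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = c[j] / (c[i] * (x[i] - x[j]))
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

P = 4
x, w = lgl_nodes_weights(P)
M = np.diag(w)                 # mass matrix
D = lagrange_diff_matrix(x)    # discrete derivative operator on the reference element
Q = M @ D
B = np.zeros((P + 1, P + 1)); B[0, 0] = -1.0; B[-1, -1] = 1.0
print(np.max(np.abs(Q + Q.T - B)))   # SBP check: Q + Q^T = B up to roundoff
```

Scaling x and w by the affine map to a physical element gives the operators D_x and M_x of (34).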
When v = u, (36) becomes (39). For second and third order derivatives, substituting v = du/dt in (37) and v = u in (38) gives the identities (40) and (41), which correspond to the identities (12) and (6) respectively. We conclude this section by stating a theorem regarding the accuracy of the discrete derivative operators D l x , which will be used later to derive error estimates. Theorem 6. Consider the discrete derivative operator D x and an element I k = [x k , x k+1 ] of length ∆x > 0. Let x (j) k = x k + (∆x/2)(ξ j + 1), j = 1, 2, . . . , P + 1, denote the Gauss-Lobatto quadrature nodes in I k , with ξ j ∈ [−1, 1], and let u j = u(x (j) k ) be the restriction of a sufficiently smooth function u to the nodes. The truncation errors of the approximation of the partial derivatives ∂ l u/∂x l are bounded in terms of intermediate points ζ l ∈ I k and constants C l > 0 that are independent of ∆x > 0.
Proof 9
The proof is a straightforward adaptation of the proof of Theorem 3 in [12].
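To make the operator construction described in this section concrete, the following sketch builds the Gauss-Lobatto nodes and weights, the diagonal mass matrix M and the derivative operator D = M⁻¹Q in Python, and checks the SBP property Q + Q⊤ = B numerically. This is an illustrative implementation of the stated definitions, not the authors' code; the function names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto(P):
    """Gauss-Lobatto-Legendre nodes and weights on [-1, 1] for polynomial degree P."""
    LP = legendre.Legendre.basis(P)
    # Interior nodes are the roots of L_P'(x); the endpoints are -1 and 1.
    nodes = np.concatenate(([-1.0], np.sort(LP.deriv().roots()), [1.0]))
    weights = 2.0 / (P * (P + 1) * LP(nodes) ** 2)
    return nodes, weights

def derivative_matrix(nodes):
    """Nodal differentiation matrix D with D[i, j] = l_j'(x_i), via barycentric weights."""
    n = len(nodes)
    lam = np.array([1.0 / np.prod(nodes[i] - np.delete(nodes, i)) for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (lam[j] / lam[i]) / (nodes[i] - nodes[j])
        D[i, i] = -np.sum(D[i, :])  # rows sum to zero (derivative of a constant is zero)
    return D

P = 4
xi, w = gauss_lobatto(P)
M = np.diag(w)                       # diagonal mass matrix
D = derivative_matrix(xi)            # discrete derivative operator
Q = M @ D
B = np.zeros((P + 1, P + 1)); B[0, 0] = -1.0; B[-1, -1] = 1.0
print(np.max(np.abs(Q + Q.T - B)))   # SBP property Q + Q^T = B, satisfied to machine precision
```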
Numerical interface treatments
Consider a two-element model Ω = Ω − ∪ Ω + , where Ω − and Ω + are subdomains defined as in section 3.3. We map each of Ω − and Ω + to the reference element Ω = [−1, 1], and hence we have the discrete operators D x and M x as defined in (34) for each Ω − and Ω + . We write the nodal values of the variables as the stacked vectors h = [(h − ) ⊤ , (h + ) ⊤ ] ⊤ and u = [(u − ) ⊤ , (u + ) ⊤ ] ⊤ , where the minus and plus superscripts indicate the nodal values in Ω − and Ω + respectively. The discrete spatial operators D and M are written as block-diagonal matrices composed of the element operators D x and M x , and the mass matrix M gives the discrete inner product and norm ⟨u, v⟩ M = u ⊤ Mv and ‖u‖² M = ⟨u, u⟩ M . By replacing the continuous spatial derivatives in (7) with the discrete derivative operator D, we obtain an element local semi-discrete numerical approximation (42). The numerical approximation above has not imposed the interface conditions derived in section 3.3, and hence the numerical solutions in each Ω − and Ω + are still disconnected. The next challenge lies in connecting the solutions across the elements in an accurate and stable manner. In order to achieve this, we will impose the interface conditions using the SAT method [2]. We introduce the spatial interface operator B and the penalized derivative operator D̃ (see (43)), where e R = [0, 0, . . . , 0, 1] ⊤ and e L = [1, 0, . . . , 0, 0] ⊤ . Hence, for any grid function u ∈ R 2P +2 , we have Bu = [0, 0, . . . , 0, u − P +1 − u + 1 , u − P +1 − u + 1 , 0, . . . , 0, 0] ⊤ . Note that for continuous functions we have u − P +1 − u + 1 = 0, so it follows that Bu = 0, and hence D̃u = Du. For later use we state the following lemma, which is a discrete analogue to (2), (12), and (6).
Lemma 5 Consider the penalized derivative operator D̃ defined in (43). For all grid functions u, v ∈ R 2P +2 , discrete analogues of the integration-by-parts identities (2), (12), and (6) hold for D̃ with respect to the inner product ⟨·, ·⟩ M .
To connect the solutions across the elements, we add interface penalty terms to the right hand side of (42), which yields the scheme (44), where τ ij , γ ij , σ ij are real penalty parameters that will be chosen later to ensure stability, and α h , α u ≥ 0 are real upwind parameters.
Remark 2
We note that the semi-discrete numerical approximation (44) is consistent with the interface conditions (28). To see this, suppose that h and u are exact solutions of the Serre equations (22) and (23) subject to the interface conditions (28). Since h and u are continuous at the interface, we have Bh = Bu = B du/dt = 0. Furthermore, the continuity of ∂u/∂x and ∂ 2 u/∂x 2 at the interface implies BDu = BD du/dt = BD 2 u ≈ 0, where we have ignored truncation errors that arise from the spatial derivative approximations. Therefore, any exact solution that solves the Serre equations (22) and (23) subject to the interface conditions (28) will cause the penalty terms in the scheme (44) to vanish, and hence we recover (42).
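As a concrete illustration of the two-element operators and the interface jump operator B introduced above, the sketch below assembles the block-diagonal matrices and verifies that B annihilates grid functions that are continuous across the interface. It reuses the gauss_lobatto and derivative_matrix helpers from the earlier sketch and is a hedged illustration of the stated definitions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import block_diag

# Per-element operators on P+1 GLL nodes, scaled to a physical element of length dx
# (gauss_lobatto and derivative_matrix are assumed available from the earlier sketch).
P, dx = 4, 0.5
xi, w = gauss_lobatto(P)
M_x = (dx / 2.0) * np.diag(w)              # physical-element mass matrix
D_x = (2.0 / dx) * derivative_matrix(xi)   # physical-element derivative operator
n = P + 1

Dblk = block_diag(D_x, D_x)                # block-diagonal derivative operator on Omega^- and Omega^+
Mblk = block_diag(M_x, M_x)                # block-diagonal mass matrix; <u, v>_M = u @ Mblk @ v

e_R = np.zeros(n); e_R[-1] = 1.0           # picks the right endpoint of the left element
e_L = np.zeros(n); e_L[0] = 1.0            # picks the left endpoint of the right element

# Interface jump operator: (B u) places the jump u^-_{P+1} - u^+_1 at the two interface nodes.
B = np.outer(np.concatenate([e_R, e_L]), np.concatenate([e_R, -e_L]))

u = np.concatenate([np.linspace(0.0, 1.0, n), np.linspace(1.0, 2.0, n)])  # continuous across x = 0
print(np.allclose(B @ u, 0.0))             # True: the jump vanishes for continuous grid functions
```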
Let us now show that for a specific choice of the penalty parameters the numerical approximation (44) is conservative and stable. The following lemma states that under an appropriate choice of the penalty parameters, we can rewrite (44) into a more convenient form for our later analysis.
Lemma 6 With the penalty parameters chosen as in (45), the scheme (44) can be written in the form (46), where the operator D̃ is given by (43).
The proof of the lemma involves mainly algebraic manipulations, and has been moved to appendix B.
The theorems below state that the numerical approximation (44) is conservative and stable when the penalty parameters are chosen as specified in (45).
Theorem 7 Consider the semi-discrete numerical approximation (44) with the penalty parameters (45) and upwind parameters that are real and non-negative, α h ≥ 0 and α u ≥ 0. Considering only boundary contributions from the interface, the numerical interface treatment (44) is conservative, that is, d/dt (g⟨1, h⟩ M ) = 0 and d/dt (H⟨1, u⟩ M ) = 0. Proof 10 Applying Lemma 6 and left multiplying (46a) by g1 ⊤ M and (46b) by H1 ⊤ M, we obtain the rates of change of g⟨1, h⟩ M and H⟨1, u⟩ M . Considering only boundary contributions from the interface, Lemma 5 and the obvious equalities D1 = 0 and B1 = 0 imply that the remaining terms of the form ⟨1, D̃v⟩ M vanish. This completes the proof.
Theorem 8 Consider the semi-discrete numerical approximation (44) with the penalty parameters (45), and define the discrete energy E M (t), in which the spatial derivatives are approximated using the operator D̃ defined in (43). Considering only boundary contributions from the interface, the discrete energy is bounded by the discrete energy of the initial data, that is E M (t) ≤ E M (0). Proof 11 We apply Lemma 6, and then we left multiply (46a) and (46b) by gh ⊤ M and Hu ⊤ M respectively. Summing them together and using Lemma 5 yields (50), whose right hand side is non-positive since α h ≥ 0 and α u ≥ 0. We recognize that the left side of (50) is the time derivative of the discrete energy (8), so that we obtain (51), stating that dE M /dt ≤ 0. Time integrating (51) completes the proof. When the upwind parameters vanish, α h = α u = 0, the energy is conserved, E M (t) = E M (0).
Numerical boundary treatments
In this section, we describe how to numerically enforce the boundary conditions derived in section 3.2 in a stable manner. To this end, we will utilize the SAT method to weakly impose the external boundary conditions. For simplicity, we will consider numerical approximations in a single element, and boundary contributions will be considered only one boundary at a time. Let us begin by considering U > 0, which corresponds to Case 2 in section 3.2. From the continuous analysis in section 3.2, we need to impose the boundary conditions (20). In order to ensure well-posedness, the constants α j , β j in (20) have to be chosen such that they satisfy the conditions of theorem 3.
For convenience, we only consider the constants specified in (52). Thus, the boundary conditions (20) become (53). It is straightforward to check that the chosen constants (52) satisfy the conditions of theorem 3, and hence the boundary conditions (53) are well-posed. Let us now utilize the SAT method to impose the boundary conditions (53) in a stable manner. As previously mentioned, we consider one boundary at a time in a single element. Starting from the left boundary x = x L , a single element semi-discrete numerical approximation of the IBVP (7), (8) and (53a) is given by (54), where τ 0 , θ 0 , γ 0 , σ 0 , η 0 , µ 0 , ρ 0 are penalty parameters. The following theorem states that the numerical approximation (54) is stable under a suitable choice of the penalty parameters.
Theorem 9 Consider the numerical approximation (54) with a suitable choice of the penalty parameters, and define the associated discrete energy. Considering only boundary contributions from the left boundary x = x L , the discrete energy is conserved, that is, it remains equal to the discrete energy of the initial data. Proof 12 Applying the identities (39)-(41) to (55), together with the obvious equations at the boundary node, gives (56a) and (56b). Summing (56a) and (56b) together, and then using (36), gives that the time derivative of the discrete energy vanishes, and we conclude the proof by time integrating this equation.
For the right boundary x = x R , we need to impose the boundary condition (53b). A single element semi-discrete numerical approximation of the IBVP (7), (8) and (53b) is given by (57), where θ N , γ N , σ N , ρ N are penalty parameters. The theorem below states that the numerical approximation (57) is stable under a specific choice of the penalty parameters.
Theorem 10 Consider the numerical approximation (57) with the penalty parameters θ N = 1, γ N = −σ N = −ρ N = 1/3, and define the associated discrete energy. Considering only boundary contributions from the right boundary x = x R , the discrete energy is bounded by the discrete energy of the initial data. Proof 13 Adding the equations in (59), and then using (36), yields that the time derivative of the discrete energy is non-positive. Time integrating this inequality completes the proof.
We recall that the case U = 0 corresponds to Case 1 in section 3.2. In this case, choosing the constants α = β = 1 in (19) gives the boundary conditions u(x L , t) = 0 and u(x R , t) = 0.
These boundary conditions can be immediately treated by substituting U = 0 in the numerical approximations (54) and (57) respectively.
Error estimates
In this section, we derive error estimates for the semi-discrete numerical approximation in the energy norm. For simplicity, the error analysis will be carried out in the two-element model Ω = Ω − ∪ Ω + as in section 4.2, and we only consider the boundary conditions (53). We note that the analysis can be easily generalized to multiple elements and other boundary conditions.
Let the vectors h ex , u ex denote the exact solution evaluated at the quadrature nodes. The pointwise error vectors can be written as e h = h − h ex and e u = u − u ex . Utilizing (46), we obtain the error equations (60a)-(60b), where T (1) , T (2) are the truncation errors at the quadrature nodes and P b are boundary penalty terms. Theorems 9 and 10 imply that the error energy is bounded in terms of these truncation errors. Let us now use Theorem 6 to determine the order of the truncation errors T (1) and T (2) . In the left hand side of (60a), the operator D leads to a O(∆x P ) truncation error. The penalty term P (1) b in the right hand side of this equation does not contribute to the truncation error, and hence T (1) is O(∆x P ). For (60b), the second term in the left hand side contains the operator D 3 , which leads to a O(∆x P −2 ) truncation error. In the right hand side of (60b), the only term in P (2) b that contributes to the truncation error is the one involving M −1 x D ⊤ x acting on a derivative approximation. The operator D x in this term leads to a O(∆x P ) truncation error. Since both of the operators M −1 x and D ⊤ x have the factor ∆x −1 , combining them together with the truncation error from D x gives a O(∆x P −2 ) truncation error. Hence, T (2) is O(∆x P −2 ).
We define the energy norm for the error in analogy with the discrete energy. The following theorem gives a bound on the energy norm error.
Theorem 11 For the semi-discrete numerical approximation (46) with the boundary treatments (54) and (57), the energy norm error satisfies a bound of the form C ∆x P −2 , where C > 0 is independent of ∆x.
Proof 14
The proof is omitted since it closely follows the proofs of Theorems 9 and 10.
Therefore, the error in the energy norm converges to zero at the rate O(∆x P −2 ).
Numerical experiments
In this section, numerical experiments in 1D are presented to verify the theoretical results in this paper. Specifically, we first verify that our proposed numerical method is conservative, and then we present numerical convergence tests to demonstrate the accuracy of the numerical method. Finally, we perform numerical tests to demonstrate the advantage of high order accuracy in resolving highly oscillatory dispersive waves. Let us now briefly describe the test problem that we will use for the conservation and convergence tests. The analytical solution of the Cauchy problem for the Serre equations (7) with the initial conditions h(x, 0) = H + (1/ω)(1 + sin(ωx)) and a compatible initial velocity u(x, 0) is given by (61), where ω is a constant determined by the wave speed c, and c is chosen such that U < c < U + √(gH). If the chosen domain is Ω = [x L , x L + 2πn/ω] for some x L ∈ R and n ∈ {1, 2, . . .}, the analytical solution (61) satisfies the periodic boundary conditions (14), and hence we have a periodic boundary conditions problem.
Conservation test
We consider the periodic boundary conditions problem described in the previous section. The domain is discretized into N = 20 uniform elements, and we use polynomials of degree P = 4 to compute the numerical approximation. We consider two different choices of upwind parameters α h = α u = 0 and α h = α u = 1, and the numerical approximation is evolved using the classical explicit fourth order accurate Runge-Kutta method with the fixed time step ∆t = 10 −3 until the final time T = 1.0.
To verify the discrete conservation properties of our numerical approximation, at each time step we compute the differences g∆h = g⟨1, h(t + ∆t)⟩ M − g⟨1, h(t)⟩ M , H∆u = H⟨1, u(t + ∆t)⟩ M − H⟨1, u(t)⟩ M , and ∆E = E M (t + ∆t) − E M (t) using two different upwind parameters, α h = α u = 0 and α h = α u = 1, where E M is given by (8). The results are shown in Fig. 1-2. We observe that when α h = α u = 0, all considered quantities are conserved up to machine precision. When α h = α u = 1, ∆E is always negative, which implies that E M is slightly decreasing. These observed results are consistent with Theorems 7 and 8.
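The conservation check described above can be reproduced with a generic classical RK4 stepper that records, at every step, the increments of g⟨1, h⟩_M, H⟨1, u⟩_M and E_M. The sketch below assumes a user-supplied right-hand-side function rhs(h, u) implementing the semi-discrete scheme and an energy function discrete_energy(h, u); both are placeholders for the paper's operators, so this is an illustrative harness rather than the authors' code.

```python
import numpy as np

def rk4_step(h, u, dt, rhs):
    """One classical fourth-order Runge-Kutta step for dh/dt, du/dt = rhs(h, u)."""
    k1h, k1u = rhs(h, u)
    k2h, k2u = rhs(h + 0.5 * dt * k1h, u + 0.5 * dt * k1u)
    k3h, k3u = rhs(h + 0.5 * dt * k2h, u + 0.5 * dt * k2u)
    k4h, k4u = rhs(h + dt * k3h, u + dt * k3u)
    h_new = h + dt / 6.0 * (k1h + 2 * k2h + 2 * k3h + k4h)
    u_new = u + dt / 6.0 * (k1u + 2 * k2u + 2 * k3u + k4u)
    return h_new, u_new

def run_conservation_test(h, u, rhs, Mblk, discrete_energy, g=9.81, H=1.0, dt=1e-3, T=1.0):
    """Track the per-step increments of mass, momentum, and energy, as in Fig. 1-2."""
    one = np.ones_like(h)
    dh_hist, du_hist, dE_hist = [], [], []
    for _ in range(int(round(T / dt))):
        mass0 = g * one @ Mblk @ h
        mom0 = H * one @ Mblk @ u
        E0 = discrete_energy(h, u)
        h, u = rk4_step(h, u, dt, rhs)
        dh_hist.append(g * one @ Mblk @ h - mass0)   # g * delta <1, h>_M
        du_hist.append(H * one @ Mblk @ u - mom0)    # H * delta <1, u>_M
        dE_hist.append(discrete_energy(h, u) - E0)   # delta E_M
    return np.array(dh_hist), np.array(du_hist), np.array(dE_hist)
```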
Periodic boundary conditions problem convergence test
We consider the periodic boundary conditions problem described in section 6. The domain of the problem Ω is discretized into N = 10, 20, 40, 80 elements, and polynomials of degree P = 1, 2, 3, 4 are used to compute the numerical approximation. Similar to the previous section, we consider two different choices of upwind parameters α h = α u = 0 and α h = α u = 1, and the numerical approximation is evolved using the classical explicit fourth order accurate Runge-Kutta method until the final time T = 0.1. The time step is computed using (62), which scales with (∆x)², where ∆x is the element length.
To investigate the convergence of the numerical errors, at the final time T = 0.1, we compute the L 2 error e L 2 (Ω) ≈ ‖u − u ex ‖ M , where u and u ex are the numerical and exact solutions respectively. We have omitted the convergence plots of the height h for brevity. The convergence plots for u are depicted in Fig. 3-4, and the convergence rates are given in Table 1. Overall, we observe that the numerical approximation for the velocity u is (P + 1)th order accurate when P is even and P th order accurate when P is odd. For the height h, it can be seen that when α h = α u = 0, the numerical approximation is P th order accurate, and we get improved convergence rates when α h = α u = 1.
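The convergence rates reported in Table 1 can be computed from the errors on successively refined meshes: when the element count doubles, the observed order is log2 of the ratio of consecutive errors. A minimal sketch is shown below; the manufactured error sequence is a placeholder standing in for the L2 errors returned by the full scheme.

```python
import numpy as np

def convergence_rates(errors):
    """Observed orders of accuracy for errors measured on meshes refined by a factor of 2."""
    errors = np.asarray(errors, dtype=float)
    return np.log2(errors[:-1] / errors[1:])

# Example with manufactured errors decaying at third order: the rates should be close to 3.
errs = [2.0e-3 / 8**k for k in range(4)]   # e.g. N = 10, 20, 40, 80
print(convergence_rates(errs))             # -> [3. 3. 3.]
```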
Remark 3
The time step is proportional to (∆x) 2 due to the presence of higher order spatial derivatives and the use of an explicit time stepping scheme. We note that employing an appropriate implicit time stepping scheme will eliminate this restriction, but this is not the main focus of this paper. Table 1: Periodic boundary conditions problem convergence rates of h and u for U = 0 and 0.2 with α h = α u = 0, α h = α u = 1 for different polynomial degrees P .
Initial boundary value problem convergence test
We consider the IBVP described in section 6. The numerical approximation is computed using the same discretization parameters as in section 6.2. We investigate the convergence of the numerical errors by computing the L 2 error at the final time T = 0.1. Fig. 5-6 and Table 2 depict the convergence plots and rates respectively. It is observed overall that the numerical approximation for the velocity u is P th order accurate when P is odd and (P + 1)th order accurate when P is even. The numerical approximation for the height h is P th order accurate when α h = α u = 0, and better convergence rates are obtained when α h = α u = 1. These observed convergence rates are consistent with the periodic boundary conditions case. Table 2: Initial boundary value problem convergence rates of h and u for U = 0 and 0.2 with α h = α u = 0, α h = α u = 1 for different polynomial degrees P .
To illustrate the effectiveness of high order accuracy, we compute the numerical solution using three different discretization parameters P = 2, N = 48, P = 2, N = 96, and P = 8, N = 16, where P and N denote the polynomial degree and the number of elements respectively. The classical explicit fourth order accurate Runge-Kutta method with the time step ∆t determined by (62) is used to evolve the numerical solution until the final time T = 6.
Snapshots of the numerical solution for h and u are depicted in Figures 7 and 8 respectively. It is observed that when P = 2, N = 48, the numerical solution has visible spurious oscillations. On the other hand, there are no such oscillations when P = 8, N = 16, even though the two discretizations have the same number of degrees of freedom and take roughly the same amount of computation time. Doubling the degrees of freedom of P = 2, N = 48 by mesh refinement gives P = 2, N = 96, and the numerical solution in this case is similar to the high order case P = 8, N = 16 but requires significantly more computation time.
Conclusion
In this paper, we have derived and analyzed well-posed boundary conditions for the linearized Serre equations. The analysis is based on the energy method and it identifies the number, location, and form of the boundary conditions so that the IBVP is well-posed. In particular, when the background flow velocity is nonzero it was shown that we need a total of four boundary conditions, specifically three boundary conditions at the inflow boundary and one boundary condition at the outflow boundary, to bound the solution in the energy norm. When the background flow velocity is zero only two boundary conditions are needed, one boundary condition at each end boundary of a 1D interval, to ensure well-posedness. Furthermore, to couple adjacent elements we derived well-posed interface conditions that ensure the conservation of energy, mass, and linear momentum. We have developed a provably stable DGSEM for the linearized Serre equations of arbitrary order of accuracy. The discretization is based on discontinuous Galerkin spectral derivative operators that satisfy the SBP properties for first, second, and third order derivatives. These operators are used in combination with the SAT method to impose the interface and boundary conditions numerically in a stable manner. With appropriately chosen penalty parameters, we have shown that the proposed numerical interface and boundary treatments emulate the well-posedness properties in the continuous analysis. A priori error estimates were also derived in the energy norm. Numerical experiments have been presented verifying the theoretical results and demonstrating the efficiency of high order DGSEM for resolving highly oscillatory dispersive wave modes.
The numerical method developed in this paper has been extended for solving the nonlinear Serre equations, and it will be published in a forthcoming paper. An obvious extension of this paper is to develop a provably stable numerical method for solving the Serre equations in two spatial dimensions. | 2023-03-22T01:16:27.129Z | 2023-03-20T00:00:00.000 | {
"year": 2023,
"sha1": "8e180eebf0a3876fcee9278514f25134ee733892",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8e180eebf0a3876fcee9278514f25134ee733892",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
238200832 | pes2o/s2orc | v3-fos-license | Assessing mPTC Progression during Active Surveillance: Volume or Diameter Increase?
Active surveillance (AS) is considered an alternative to immediate surgery in micropapillary thyroid carcinoma (mPTC). However, the definition of clinical mPTC progression during AS is controversial. We evaluated changes in tumor size using both tumor diameters and volume in 109 patients with mPTC followed in an AS protocol for a mean period of 31 ± 18 months. At the time of data lock, 19/109 (17.4%) mPTC reached and maintained a volume increase of ≥50%. However, only 3/19 (15.7%) showed progression, according to the diameter increase. The remaining 16 showed a slight diameter growth without reaching the original protocol progression criteria. The mean mPTC growth rate in stable cases was 0.37 mm3/month, while it was significantly greater in the mPTC, which achieved a volume change ≥50% with respect to the other. The two mPTC that developed a significant diameter increase had a growth rate of 41 and 18 mm3/month. Instead, the growth rates of the three mPTC that developed lymph node metastases were 0, 2.5 and 16 mm3/month. The ≥50% volume increase appears to be a too sensitive marker of disease progression, with a downstream higher surgery rate. The assessment of growth rate could distinguish mPTC with high and low growth rates, which would allow us to tailor the algorithm of the evaluations to a more appropriate timing.
Introduction
Active surveillance (AS) is a watchful waiting approach that allows one to closely monitor a patient's condition without treatment until clinical disease is overt [1]. It is the only available way to plan therapies and their timing, avoiding either under-or overtreatments. Recently, AS has been proposed as an alternative to surgery in unifocal and intrathyroidal papillary microcarcinoma (mPTC) [2][3][4] and it is primarily indicated to old or frail mPTC patients [2,3]. The indolent nature of disease, demonstrated in several clinical trials [5,6], poses no major threats to patients and the delay of surgery appears to be safe even when local metastases develop. Since then, observational studies from different countries in Asia, America and Europe have confirmed the favorable outcome of mPTC during AS [7][8][9][10][11][12][13][14][15][16], underlining that it is related to the indolent course of the disease and not to the efficacy of active treatment.
However, there is no clear consensus on the definition of clinical mPTC progression during AS. The criteria of mPTC growth >3 mm in each diameter and/or the appearance of metastatic lymph nodes may not practically reflect the real aggressiveness of the tumor. While the appearance of lymph node metastases could be an unequivocal sign of progression, tumor growth is arbitrarily defined based on experience in the field [5,6,15]. Some authors speculated that the increase in mPTC volume allowed them to identify earlier those tumors that would grow at higher speed and in turn may be more aggressive and clinically significant [9,11]. Others, however, have argued that the evaluation of tumor volume could be too sensitive and still shift to surgery in many cases [17,18].
Advances in knowledge on tumor growth could help in understanding which patients could benefit from more careful monitoring and better define the ideal frequency of evaluations and the optimal timing for surgery. In our previous report [15], we set the cut-off of disease progression at a growth of 3 mm or more in each diameter in two consecutive echo assessments, six months apart. The aim of this study was to analyze the mPTC growth by comparing the increase of 3 mm in each diameter with the volume increase to define a personalized surveillance protocol based on tumor growth.
Patients
We identified a cohort of 127 patients with cytological diagnosis or suspicious papillary thyroid cancer (PTC) measuring ≤1.3 cm at neck ultrasound (nUS), prospectively enrolled and followed in AS protocol at the University Hospital of Pisa, Italy, from November 2014 to November 2020. A total of 18/127 were excluded from this analysis because the period of observation was <6 months. The inclusion criteria in the original protocol were described in our previous report [15].
All patients who agreed to participate in this program signed an informed consent at study entry. The appearance of metastatic lymph nodes, confirmed by a cytology and thyroglobulin (Tg) measurement on the wash-out fluid of the needle used for FNAC, or an increase in size more than 3 mm for each nUS diameter of mPTC, confirmed in two consecutive examinations, defined the mPTC progression according to the original protocol criteria and were considered indications to transition from AS to surgical intervention. Patients were regularly evaluated every 6 months for the first 2 years and then yearly. Levothyroxine (LT4) therapy was administered, or maintained, in hypothyroid patients to obtain a thyroid-stimulating hormone (TSH) level within the normal range. This study (number 334/2014) was approved on 20 November 2014 by the Local Ethical Committee (Comitato Etico Area Vasta Nord-Ovest-CEAVNO).
Methods
Neck Ultrasound
nUS was performed using a real-time instrument (Esaote SPA, Genova, Italy; My Lab 50 machine with a 7.5-12 MHz linear transducer). During the follow-up, nodules and suspicious lymph nodes in neck stations were inspected. nUS was performed by the same independent US-trained endocrinologists (E.M. and M.C.C.). Accurate descriptions of echogenicity, microcalcifications, integrity of halo, and lengths of the antero-posterior (AP), laterolateral (LL) and longitudinal (Long) diameters were recorded in a computerized database.
For the purpose of the present study, we calculated the nodular volume using the ellipsoid formula (AP*LL*Long*0.52); we then looked at the change in volume at each control (V2) with respect to the baseline volume (V1), expressed as a percentage using the formula (V2-V1)/V1*100. According to previous studies, including [9,11], a meaningful change in tumor volume was defined as a size increase of ≥50% compared to baseline values. We also calculated the growth rate of mPTC, expressed in mm 3 /month, as the slope of the regression line between the volumes calculated at each visit and the time of AS.
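The volume, percentage-change and growth-rate computations described above can be expressed in a few lines. The sketch below uses the ellipsoid coefficient π/6 ≈ 0.52 and obtains the growth rate as the slope of a least-squares line through the volumes over follow-up time; the variable names and example measurements are illustrative, not patient data.

```python
import numpy as np

def nodule_volume(ap_mm, ll_mm, long_mm):
    """Ellipsoid volume in mm^3 from the three orthogonal diameters (pi/6 ~ 0.52)."""
    return (np.pi / 6.0) * ap_mm * ll_mm * long_mm

def percent_volume_change(v_baseline, v_current):
    """(V2 - V1) / V1 * 100, the percentage change used to flag a >=50% increase."""
    return (v_current - v_baseline) / v_baseline * 100.0

def growth_rate(months, volumes_mm3):
    """Slope (mm^3/month) of the least-squares regression of volume on follow-up time."""
    slope, _intercept = np.polyfit(months, volumes_mm3, 1)
    return slope

# Illustrative follow-up: diameters (AP, LL, Long) in mm at 0, 6, 12 and 18 months.
visits = [(0, 4.0, 5.0, 6.0), (6, 4.2, 5.1, 6.2), (12, 4.3, 5.3, 6.4), (18, 4.5, 5.5, 6.6)]
months = [t for t, *_ in visits]
vols = [nodule_volume(ap, ll, lg) for _, ap, ll, lg in visits]
print(percent_volume_change(vols[0], vols[-1]))  # % change at the last visit vs. baseline
print(growth_rate(months, vols))                 # growth rate in mm^3/month
```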
Statistical Analysis
The normality of the variables was tested by means of the Shapiro-Wilk test. Data are expressed as mean ± standard deviation (variables with normal distribution) or as median with interquartile range (variables with non-Gaussian distribution). The differences between groups for continuous variables with normal distribution were evaluated using Student's t-test (two groups) after evaluating the homogeneity of the variances of the groups using Levene's test. Differences between groups for continuous variables with non-Gaussian distribution were compared by means of the Mann-Whitney U test (two groups). Associations between categorical variables were assessed with the chi-square test or Fisher's test where appropriate. All statistical analyses were conducted with SPSS software (IBM SPSS Statistics, Armonk, NY, USA; version 25).
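A sketch of the workflow described above, using SciPy equivalents of the tests run in SPSS (Shapiro-Wilk for normality, Levene plus Student's t-test for normally distributed variables, Mann-Whitney U otherwise); the data arrays are synthetic placeholders, not study data.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Compare a continuous variable between two groups, mirroring the described workflow."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        equal_var = stats.levene(a, b).pvalue > alpha      # homogeneity of variances
        test = stats.ttest_ind(a, b, equal_var=equal_var)  # Student's (or Welch's) t-test
        label = "t-test"
    else:
        test = stats.mannwhitneyu(a, b)                    # non-Gaussian distribution
        label = "Mann-Whitney U"
    return label, test.statistic, test.pvalue

# Placeholder growth rates (mm^3/month) for Group A (stable) and Group B (volume change >= 50%).
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 90)
group_b = rng.normal(6.5, 3.0, 19)
print(compare_groups(group_a, group_b))
```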
Results
In this analysis, we included 109 mPTC patients who were prospectively observed for a mean of 31 ± 18 months.
During AS, only 5/109 patients (4.5%) showed mPTC progression, according to the original protocol criteria [15]. In two patients, all three mPTC diameters increased by 3 mm in two 6-month consecutive US evaluations, while in three mPTC patients, lymph node metastases developed during AS. The epidemiological, ultrasonographic and pathological characteristics of progressing cases are summarized in Table 1. Of these latter, only one showed the diameter increase but not enough to fulfil the criteria of the original protocol. These patients developed mPTC progression after a median period of observation of 18 months (IQR 14-25), which was significantly shorter than the median time of observation of the other stable mPTC (32 months, IQR 21-48) (p = 0.04). Regarding the increase of the volume of mPTC, we found that during the AS, 22/109 (20%) patients showed an increase in mPTC volume of ≥50% compared to baseline at any time during the observation. As shown in Table 2, most of the patients showed a volume increase ≥50% at 6-, 12-and 18-month visits (18/22); only 4/22 experienced a volume increase >50% after a 24-month visit. In 19/22 (17%) cases, the volume increase of >50% was confirmed at the last evaluation; meanwhile, the remaining three showed a volume reduction within the 6 following months. As shown in Figure 1, based on the change in volume at the last evaluation, 90/109 (83%) cases had a volume variation <50% (Group A) and 19/109 (17%) (Group B) had a volume variation ≥50% (range 50-400%), compared to baseline. According to previous studies (10-12), Group B represents the "growing group" and 3/5 progressing cases for the original protocol (i.e., two cases for the increase of diameters and one case for the development of lymph node metastases) belonged to this latter group. In terms of variance, the other two cases that developed lymph node metastases did not show a meaningful volume increase.
As shown in Table 3, in our series, there were no differences in the epidemiological, clinical, biochemical, US and cytological characteristics between patients of Group A and Group B. The follow-up of Group B and Group A was similar (32 months vs. 30 months, respectively). However, the timing during which the volume change occurred varied from 6 to 48 months. Because of this difference we calculated the growth rate that was 1.1 mm 3 /month in the entire series (n = 109), 6.56 mm 3 /month in Group B and 0 mm 3 /month in Group A. As shown in Figure 2, the comparison of the volume growth rates of the two groups showed a statistically significant difference (p < 0.0000001).
The volume growth rate of all our mPTC with the progression calculated according to the criteria of the original protocol was then compared. As shown in Figure 3, the mPTC growth rate was near to 0 (mean 0.37 mm 3 /month, ±SD 7.9 mm 3 /month) in stable cases. The two mPTC that progressed according to the original protocol criteria had a growth rate of 41 and 18 mm 3 /month. Instead, the growth rates of the three mPTC that developed lymph node metastases were 0, 2.5 and 16 mm 3 /month. This latter was the one in whom there was a simultaneous increase of diameters but not sufficient to fulfil the pre-defined criteria [15].
Figure 3: Each dot corresponds to a patient; the growth rate is expressed on the ordinate axis. The green dots represent the mPTC progressing by an increase >3 mm in each diameter, the yellow ones represent the progressing mPTC who developed lymph node metastases and the blue ones represent the stable mPTC patients. The mean growth rate is near 0 mm 3 /month. The green dots have the fastest growth rate, the yellow ones have variable behavior in tumor growth and the blue ones have the slowest growth rate.
Discussion
Active surveillance has been proposed as an alternative strategy to immediate surgery in low-risk mPTC [2][3][4] and it is recommended for old or frail mPTC patients [2,3]. The mPTC progression during AS is the main reason for interrupting AS and shifting to surgery. There is no univocal definition of mPTC progression during AS. While the appearance of lymph nodes or distant metastases is a clear sign of progression [6,[8][9][10]14,15], the evaluation of tumor progression is quite different in different centers [6,[8][9][10][11]14,15]. The major controversial issue is if it is more appropriate to consider the linear increase of diameters or the volume increase.
In the present study we found that 19/109 (17.4%) of our mPTC patients had a persistent volume increase ≥50% at the last evaluation, which would have been considered meaningful by several authors [9,11]. According to these criteria, these 19 patients would have been submitted to surgery in a timeframe varying from 6 to 48 months, while 16 of them remained in AS because they did not show the progression according to the pre-defined criteria [15]. Moreover, 18 cases showed the volume increase ≥50% within 24 months, while only 5 cases progressed in this timeframe when considering the predefined criteria [15]. According to these findings, it appears that many more mPTC cases would be sent to surgery when volume increase is considered, thus reducing the real impact of the AS. In this context, more attention should be paid to smaller mPTC (i.e., 4-5 mm), because any even insignificant variation in diameters (1-2 mm) might cause a volume increase >50%, which could be misread as progression. Instead, it could likely reflect the variability in US measurements [19]. Besides, the increase of 3 mm or more in very small mPTC may not be clinically relevant and these patients may continue AS [6].
Nevertheless, and in agreement with the results of Tuttle et al. [9], both mPTC cases that entered progression because of the 3 mm diameter increase also showed a volume increase ≥50%. This was not the case for the three mPTC progressing due to the development of lymph node metastases, since two of them did not reach the volume increase ≥50%; thus, the appearance of cervical lymph node metastases must be a criterion for surgery independent of tumor growth, whether considered as a diameter or a volume increase. Following this criterion, the delayed surgery needs to be more extensive than the immediate one; however, it is still safe in terms of disease outcome and surgical adverse events. Moreover, this strategy allows us to operate only on the mPTC patients who need treatment, avoiding unnecessary surgical procedures.
No differences were found in the epidemiological, clinical, US and cytological characteristics between the two groups of patients divided according to a volume variation of ≥50% or <50%. Unfortunately, we were unable to find any prognostic factors of progression, either in this study considering the increase of volume or in the previous one using diameter as a parameter of growth [15]. Probably in the future, a characteristic molecular pattern could discriminate the risk of mPTC progression during AS. Currently, the role of BRAF mutations and TERT promoter rearrangements is not fully understood in mPTC during AS, although they strongly predict a poor prognosis in operated DTC [20]. Other gene mutations could likely give cancer cells a selective advantage for progression. Recently, mutations of genes coding for cell adhesion molecules or migration proteins have been reported to discriminate mPTC with greater aggressiveness [21].
It is known that the tumor growth is non-linear by definition and that periods of growth can be followed by periods of shrinkage or stabilization [17]. The evidence of nodular growth in two consecutive US evaluations should minimize the risk of an inconstant growth and send to surgery a nodule that is really progressing. Assessing tumor growth in two different occasions, we can also reduce intra-and inter-observer variability at US [19].
According to our results, it appears very relevant to consider how long this variation takes: it is plausible that reaching a volume increase ≥50% compared to baseline in 6 months or in 4 years has a different clinical relevance, and the evaluation of the growth rate of mPTC could address this aspect. In our series, almost all mPTC had a growth rate near 0 mm 3 /month, and this is consistent with the low progression rate of mPTC during AS. However, progressing mPTC patients who developed a diameter increase ≥3 mm had a faster growth rate than stable ones, and those who showed lymph node metastases had a variable growth rate, ranging between 0 and 16 mm 3 /month. Our analysis highlights a different behavior in growth rate between mPTC that developed lymph node metastases and those that had an increase in diameters. On the one hand, the assessment of the growth rate could identify a subgroup of patients with mPTC at risk of progression; on the other hand, it cannot exclude the risk of developing lymph node metastases. This evidence makes the use of US essential, and it is fundamental that during AS the cervical lymph nodes, and not only the thyroid gland, are regularly evaluated.
Notably, in our original cohort, the mPTC progression developed in all five cases within two years after the enrolment, while other mPTC remained stable after a mean follow-up of almost 3 years. This result could represent a surrogate marker of the growth rate of mPTC and we can hypothesize that an mPTC that enters progression in a short time is likely a rapidly progressing mPTC destined to become a clinically relevant disease.
Although larger datasets analyzing the evolution of mPTC are needed to better define the most appropriate timing for surgery, the assessment of the growth rate of mPTC could help us identify the minority of mPTC with a higher growth rate and thus a greater risk of progression during AS. This subgroup could benefit from more stringent observation with more frequent evaluations or, conversely, the other group could be less frequently surveyed. We might plan a different algorithm, according to mPTC growth rate, with more frequent evaluations in those with a higher growth rate than in those with a lower growth rate. Further studies will be helpful to determine the clinical significance of the growth rate in mPTC diameter and volume and to refine the thresholds for intervention. To date, the indication for surgery remains the same as in previous reports based on diameter increase [15]. The objective of future research will be to find biomarkers able to identify early the mPTC that tend to progress and to allow a less extensive therapeutic approach. One of the limitations of the present study is that a real 3D imaging technique was not used. It is possible that a more refined technique such as MRI, even if much more expensive, may improve the ability to objectively assess clinically meaningful growth, albeit minimal.
In conclusion, our study showed that using volume variation to define mPTC progression, without taking into account the initial size and the period in which the volume increase occurs, can lead to operating on many more cases that could have benefited from a longer AS. Indeed, the 3 mm diameter increase in two consecutive evaluations seems, at the moment, the best trade-off in assessing tumor progression, which should be matched by clinical progression.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to patient privacy and the General Data Protection Regulation.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-09-29T05:25:44.145Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "ef9d849a993604c76a3f83618fbc55b2467d9d84",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/18/4068/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef9d849a993604c76a3f83618fbc55b2467d9d84",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67857697 | pes2o/s2orc | v3-fos-license | Inflammatory bowel disease patient perceptions of diagnostic and monitoring tests and procedures
Background Inflammatory Bowel Disease (IBD) with its high incidence and prevalence rates in Canada generates a heavy burden of tests and procedures. The purpose of this study is to gain a better understanding of the transfer of information from physician to patient, as well as the patient understanding and perceptions about the tests and procedures that are ordered to them in the context of IBD diagnosis and monitoring. Methods An online questionnaire was completed by 210 IBD patients in Canada. Information on the five most-often used tests or procedures in IBD diagnosis/monitoring was collected. These include: general blood test, colonoscopy, colon biopsy, medical imaging and stool testing. Results The general blood test is both the most ordered and most refused tool. It is also the one with which patients are the least comfortable, the one that generates the least concern and the one about which physicians provide the least information. The stool test is the test/procedure with which patients are the most comfortable. Procedures raise more concerns among patients and physicians provide more information about why they are needed, their impact and the risks they present. Very little information is provided to patients about the risks of having false positives or negative tests. Conclusions This study provides an initial understanding of patient perceptions, the transfer of information from a physician to a patient and a patient’s understanding of the tests and procedures that will be required to treat IBD throughout what is a lifelong disease. The present study takes a first step in better understanding the acceptance of the test or procedure by IBD patients, which is essential for them to adhere to the monitoring process. Electronic supplementary material The online version of this article (10.1186/s12876-019-0946-8) contains supplementary material, which is available to authorized users.
Background
Inflammatory bowel disease (IBD) includes ulcerative colitis (UC) and Crohn's disease (CD). These are chronic inflammatory illnesses of the gastrointestinal tract of unknown etiology. There is no cure, and the purpose of treatment is to control the symptoms and maintain remission [1,2]. Canada ranks among the countries with the highest prevalence and incidence rates of IBD in the world [3]. Furthermore, Canada has one of the highest IBD incidence rates among the under 16 age group, and this rate is increasing, especially among children under 5 years of age [4]. One out of every 150 Canadians is afflicted with IBD [5]. This burden generates a significant economic weight. In 2012, IBD-related costs were estimated at $2.8 billion, of which $1.2 billion were direct costs (hospitalization, medication and medical visits) [1].
Diseases for which diagnosis, monitoring, and surveillance are appropriate are those that significantly impact a person's quality of life, are fatal, and are sufficiently widespread to justify investments in conditions for which early detection is beneficial and for which treatment exists [6,7]. IBD meets all of these criteria. The management of IBD patients requires assessment both at the time of diagnosis and throughout the illness to determine the activity and severity of the inflammatory lesions, disease location, progression and complications [8]. In chronic illness it has been reported that, due to time constraints on the part of the physician, patients do not receive the care they require and, as a result, their illness remains unmonitored [9,10]. On the other hand, it has also been reported that patients undergo too many tests and procedures. Some authors [11,12] have put forward the suggestion that this overuse of tests and procedures can be explained by the fact that physicians receive a bonus for each test requested or that they do more than less out of fear of lawsuits, which can lead to false positives or overdiagnoses.
IBD diagnosis and monitoring are mainly based on an in-depth physical examination coupled with the patient's medical history and various tests and procedures that include blood tests, stool tests, endoscopy with or without biopsy and medical imaging [13]. The clinical signs of UC include urgency, tenesmus, bloody diarrhea or abdominal pain. Signs of CD are more variable and dependent on the extent and the location of the gastrointestinal disease and on whether or not there are complications such as intestinal strictures, intestinal or perianal fistulas or abscesses [14]. Periodic measures including office visits, laboratory tests and procedures are part of the monitoring process that helps manage chronic illnesses [15]. Assessment of IBD activity is mainly carried out through symptom reporting, laboratory testing and endoscopy. For example, a fecal calprotectin test can be used to monitor disease activity. In addition, for CD radiologic imaging plays an important role in assessment of disease activity [16]. The ideal test must be safe, simple, inexpensive, acceptable to the public and must also be reproducible, sensitive and specific [6,16]. IBD patients are subject to a large quantity of medical care, tests and procedures [17][18][19][20].
The various tests and procedures to diagnose and monitor IBD all have specific goals. Blood tests are used to screen for IBD and assess a patient's state of health [18]. Repeated measures of certain biomarkers such as calprotectin or lactoferrin in the stool are part of the clinical IBD management procedures [17,21]. These biomarkers allow for quick and non-invasive monitoring of inflammation [22,23]. Endoscopy plays an integral role in the diagnosis and management of IBD patients. An endoscopic examination allows physicians to distinguish CD from UC and also provides information on disease extent and severity [24]. Patients with IBD, are at higher risk of developing colorectal cancer (CRC), which is monitored through endoscopy with biopsies [25,26]. Medical imaging including ultrasound, CT Scan and Magnetic Resonance Imaging are performed in IBD patients as a diagnostic tool and to determine the extent of the damage to the intestines, monitor the illness' activity and assess for complications [19,27]. Certain tests, such as a colonoscopy or blood tests, are not appreciated by patients and generate anxiety [28][29][30][31][32]. It is therefore important that patients receive and understand the information about the risks and benefits of the various tests and procedures [33,34].
Despite improvements in the available treatment options, IBD continues to have a negative impact on the quality of life of patients [35]. Many studies have focused on the impact of IBD on quality of life [36,37], the need for information [38], strategies to adapt [39] and shared decision making [40]. Despite the funding and resources invested in the diagnosis and monitoring of IBD, few studies have focused on these activities. To the best of our knowledge, no studies have been conducted on the perception of patients toward diagnostic tests and monitoring specifically for IBD patients. Even less information is available on patient understanding of and compliance with the tests and procedures that are requested by their physicians. Yet, the literature shows that understanding how a chronic condition influences patients and their ability to adhere to health care recommendations is essential, especially as part of a patient-centered approach [41]. Questions to be posed include: What are the percentages of orders for tests and/ or procedures? What percentages of patients refuse these tests/procedures and for what reasons? What information is given to the patient about these tests? What is the patient's understanding of these tests? Do these tests generate any concerns for the patient?
The purpose of this study is to gain a better understanding of the transfer of information from physician to patient, as well as patient understanding and perceptions of the tests and procedures that are ordered by their doctor in the specific context of IBD diagnosis and monitoring.
In a patient-centered approach, it is essential to gain a better understanding of patient perceptions of the diagnostic and monitoring tests and procedures used [42] in the context of the chronic illnesses that are IBD. This study therefore aims to take a first step in this regard. With an increased awareness of the problems associated with the tests and procedures, physicians can prevent these negative perceptions and adapt their exchange of information with their patient, which will in turn soften the impact of the tests and procedures on the quality of life of their patients.
Methods
The current study was part of a larger research program aimed at translating genetic discoveries into a personalized approach for the treatment of inflammatory bowel diseases [48] for which an online questionnaire was specifically developed. One of the sections of this questionnaire was designed to better understand the concerns raised by the tests/procedures among IBD patients, the transfer of information from the physician to patients and the patients' understanding of these tests/procedures, as well as the rate of prescriptions, the reasons patients refuse to undergo such tests/procedures and the sociodemographic profile of the participants (Additional file 1: Online Questionnaire).
The survey was posted on the website of Crohn's and Colitis Canada (CCC), an association that has 933 patient members, and which made it possible to reach patients in a manner that respected their privacy. Patients could access the survey through the CCC website over a 5-month period. Five reminders were posted on the CCC website, newsletter and social media via existing CCC platforms in an attempt to reach the largest possible final sample size. In total, 210 adult participants were reached across 10 different Canadian provinces and/or territories, for a response rate of 22.5%, which is within the range previously reported in similar studies [43][44][45]. The vast majority of the 210 respondents answered all of the questions.
The questionnaire was built in five sections to collect information on the five most used tests and procedures for the diagnosis and monitoring of IBD: general blood test, stool test, colonoscopy, colon biopsy and medical imaging. For each section, patients were asked to answer 20 questions to assess their acceptance or refusal of a given test/procedure, the reasons for the refusal, their concerns, the level of comfort with undergoing the test (comfort is defined as the patient feeling relaxed and at ease with these tests, having no negative perception of them and perceiving them as relatively free from pain), their understanding, and the information provided by their physician in connection with every test or procedure under study. Respondents indicated their agreement or disagreement with a given statement on a Likert scale [46]. This type of scale was chosen as it makes it possible to measure complex attitudes or individual perceptions. An even-numbered scale [6] was used, as it eliminates the respondents' tendency to choose the middle answer, known as central tendency [47]. The online questionnaire was first "pre-tested" by 14 IBD treatment experts and then by 17 patients in a gastroenterology clinic in order to check their understanding of each question. Minor changes in the wording were subsequently made to complete the final questionnaire. All participants signed the information and consent form before completing the questionnaire. The questionnaires were completed anonymously; the respondents cannot be tracked. The information and consent form was approved by the ethics committee (approval number 2013-041 / 27-09-2013).
This study presents the analysis of the responses obtained regarding the patients' laboratory tests and procedures. SPSS statistical software was used for calculations. Pearson's chi-square test (χ2) was used to evaluate the association between educational level and level of understanding. The Likert scale responses were grouped into low (1-2), medium (3-4) and high (5-6) categories for effects to emerge more clearly.
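A sketch of the analysis steps described above (collapsing the 1-6 Likert ratings into low/medium/high and testing the association between education and understanding with Pearson's chi-square), using pandas and SciPy; the example responses are synthetic placeholders, not survey data.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def group_likert(rating):
    """Collapse a 1-6 Likert rating into low (1-2), medium (3-4) or high (5-6)."""
    return "low" if rating <= 2 else ("medium" if rating <= 4 else "high")

# Synthetic placeholder responses: education level and a 1-6 understanding rating per patient.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "education": rng.choice(["high school", "college", "university"], size=210),
    "understanding": rng.integers(1, 7, size=210),
})
df["understanding_level"] = df["understanding"].apply(group_likert)

table = pd.crosstab(df["education"], df["understanding_level"])   # contingency table
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```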
Prescriptions and refusal to undergo tests or procedures
The sociodemographic profile of the entire population under study (n = 210) is presented in Table 1. More women (171) than men (39) participated in the web survey. Among the patients who answered the questionnaire, there were more patients with Crohn's disease (145) than ulcerative colitis (65), more patients in the 18-34 age range (82) than in the 35-44 age range (68), and the least represented age range was 45 and over (56). The educational level of patients who answered the questionnaire was grouped into high school (34), professional or college (85) or university diploma (87). The distribution of the provinces or territories where the patients were living is also presented in Table 1. Ontario was the province from which the highest number of patients participated in the web questionnaire, while Nunavut was the least represented territory.
With regard to diagnosis or monitoring, the incidence of tests and procedures requested for the entire population under study (n = 210) and the number of tests and procedures that were refused by these patients are presented in Table 2. The most-often ordered test/procedure was a blood test (96.7% of patients), followed by a colonoscopy (93.3%), a colon biopsy (81.4%), a stool test (67.1%) and the least-often requested procedure was medical imaging (58.1%). Some patients decided to refuse to undergo these tests/procedures. The rate of refusal is similar for the majority of the tests/procedures (2 to 5 refusals), but is significantly higher for the blood test, which was rejected by 74 patients. Therefore, although a blood test is the most-often ordered test, it is also the one that is the most often refused. The reasons why patients refused to undergo the tests and procedures are presented in Table 3. Time, pain, costs, potential risks, side effects, fear of results, test too revealing, and confidentiality were reported by patients as reasons for refusal. None of the suggested reasons explained why 4 patients had refused the stool test and why 5 patients had refused the medical imaging. On the other hand, a colonoscopy was refused by 2 patients, at least one of whom had refused the procedure for several of the reasons suggested: time, pain, potential risks, side effects, fear of results, test too revealing and confidentiality. The colon biopsy was also refused by 2 patients, at least one of whom refused the test because of the potential risks. For the blood test, which was refused by 74 patients, the reasons listed in the questionnaire only partly explained the refusals. Sixty-five patients replied that the reason they had refused the test was not listed in the questionnaire. Other patients refused the blood test for the following reasons suggested in the questionnaire: time, pain, costs, potential risks, side effects, fear of results, test too revealing and confidentiality.
Patient perceptions of the impact of the tests and procedures
The level of comfort with the test or procedure was determined by the patients. Level of comfort refers to feeling relaxed and at ease with these tests, having no negative perception of them and perceiving them as relatively free from pain. Although initially rated from 1 to 6 on the Likert scale, the level of comfort was grouped into three levels to more easily highlight the trends: low (rating of 1-2), medium (rating of 3-4) and high (rating of 5-6) (see Table 4). Patients also assessed their levels of concern during their specialists' presentation of the results of the tests/procedures (see Table 4). Level of concern was lowest for the results of the blood test, where 19.5% of patients reported a high level of concern. For the colonoscopy, 59.7% of the patients reported a high level of concern upon receiving the results. The patients were also concerned about the results of the colon biopsy and the medical imaging, but to a lesser degree, with 39.4% and 38% respectively reporting a high level of concern.
Patients then assessed the impact of these test/procedure results on their concerns about their illness ( Table 4). The colonoscopy generated the greatest increase in concern about the illness (64.3%). Next were the medical imaging results (52.1%), followed by the colon biopsy (48%) and the stool analysis (37%). The blood test had the least impact on patient concerns about their illness (only 17.8% of respondents reported an increase in their concerns).
Patient understanding of the tests and procedures
The patients were asked whether their specialist had explained the reason why the test/procedure had been requested (Table 5). A total of 95.5% of patients had received explanations as to why the colonoscopy had been requested; 87.9% for the medical imaging; 83.2% for the colon biopsy; 78.7% for the blood test; and 73.2% for the stool test. The patients were also asked about their understanding of the reason why the test/procedure was requested. The highest level of understanding of the reason why the test/procedure was needed was for the colonoscopy (86.9%), followed by the stool test (76.1%), the medical imaging scan (75%), the colon biopsy (71.1%) and, lastly, the blood test (63.5%). Pearson's chi-square test (χ2) was performed to evaluate the correlation between educational level and the level of understanding of the reason why the test/procedure was requested. For all tests and procedures, no significant correlation between educational level and understanding was found. Patients then reported their level of understanding of the potential treatments in connection with the test/procedure they had undergone. The highest level of understanding was reported for the colonoscopy with only 59.4%, followed by the stool test (53.9%), the colon biopsy (48.3%), the blood test (47.8%) and the medical imaging scan (45.1%). For this question, the level of understanding was not very high, as all were reported below 60%. Table 5 presents the percentage of patients whose physician had explained why the test/procedure was requested. For all tests and procedures, other than the stool test, there is a decrease in the percentage of patients who had a high understanding of why the test was requested from those who had received the information from their physician. Therefore, even though a certain percentage of patients received explanations as to why the tests/procedures were requested, there was a loss of understanding. This loss of understanding was observed for all tests and procedures other than the stool test.
Transfer of information about the tests and procedures
Patients were asked if the specialist had informed them of the impact the results of the tests/procedures would have on treatment options; the possibility of false positives or negatives; any potential risks; and the level of invasiveness (for procedures only) (see Table 6). Regarding the impact of the results on treatment options, most patients reported having received this information about the colonoscopy (83.2%), followed by the stool test (73.2%), the colon biopsy (72.7%) and the medical imaging (72.6%), the latter three being very similar. Patients reported having received the least information about the impact of the results of blood tests on treatment (59.9%). Information about the possibility of false positives or negatives was the least shared with patients. A total of 40% of patients were informed about possible false positives or negatives for the stool test, which is the highest percentage, closely followed by the medical imaging scan (36.6%), the colon biopsy (34.7%) and the colonoscopy (32.3%). Only 22.3% of patients were informed about possible false positive or negative results of blood tests. Patients were also asked whether they had received information about the potential risks of the test/procedure. Information about potential risks was most often given to patients about the colonoscopy (73.2%), followed by the colon biopsy (68.4%), the medical imaging scan (60.5%) and the stool test (47.5%). Patients were the least informed about the potential risks of blood tests. Lastly, patients were asked if they had been informed about the level of invasiveness of the procedure. A total of 75.6% of patients had been informed of the level of invasiveness of the colonoscopy, and the medical imaging scan was the procedure about which the smallest percentage of patients received this information (63.4%).
Discussion
The present study aims to gain a better understanding of the concerns generated by tests and procedures; the transfer of information from the physician to the patient; the patients' understanding of these tests/procedures; as well as the rate of prescription and the reasons that lead patients to refuse to undergo these tests/procedures. For each test/procedure, the percentage of patients with a high level of understanding of the reasons why the test/procedure was requested was lower than the percentage of patients who had received the information from their physician. Thus, of the patients who had received explanations of the request for a test, fewer reported understanding why the test was requested, except for the stool test. The stool test may be messy and inconvenient because the patient often has to return to the clinic with the specimen. But if the patient understands the value of the stool test in ruling out infection or assessing inflammation, this may explain why patients have a naturally high understanding of this test. No correlation was found between the educational level of patients and their understanding of the reason why the test was requested. These results are consistent with the literature, since knowledge acquisition is a complex cognitive process that involves learning, communication and reasoning [2]. It has been shown that information comprehension is strictly rational and based on the quality of the transfer of information [48]. Asking patients whether they understand why the test/procedure was requested makes it possible to check whether they have a rational understanding of the information.
This loss of understanding is all the more pronounced when patients were asked to assess their level of understanding of the potential treatments, respectively for each test/procedure. The link between undergoing a test/procedure and the impact on the potential treatment affects the patient on a personal level and implies that he or she must decide whether or not to undergo the test/procedure, the results of which will have a repercussion on the potential treatments. The integration of concepts involving technical medical information, health risks and probabilities, which are difficult concepts to process for patients not accustomed to this type of discourse, can be overwhelming, especially when fear comes into play. When patients are faced with complex information that involves making a decision, their ability to apply their values and principles may be hindered by their emotions or cognitive interference [12]. In such cases, the processing of information and the related decisions are based on more than a rational understanding of the information about the risks, benefits and uncertainty, as the decisions must align with the patient's fears, preferences and values. As has been previously observed, the quality of the transfer of information appears to have a direct effect on patient involvement, without being strongly mediated by the rational understanding of the information [48]. Thus, understanding why a test/procedure is requested is rational, but understanding the link with the potential treatment calls into play a degree of emotion and cognitive interference that alters the way the patient processes the information, which may translate into a loss of understanding.
In this study, it has been shown that the stool test was the test/procedure for which the risk of false positives or negatives was most often explained, with only 40% of patients receiving this information. Yet, this risk is real for each of the tests/procedures. Even a good test may come back normal in patients who have the illness (false negatives) and may also come back positive in patients who are not ill (false positives). A potential danger of monitoring is obtaining false positives and the related consequences, such as morbidity, unnecessary additional diagnostic tests, invasive procedures and exposure to radiation [6]. On the other hand, a lack of sensitivity and specificity can result in complications being missed, resulting in patient decline that could otherwise have been avoided. These are the reasons why physicians should take greater care to properly explain to patients the risks of obtaining false positives or negatives for the tests and procedures they will have to undergo.
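To make the notion of false positives and negatives concrete, the short Python sketch below computes the positive and negative predictive values of a hypothetical test from assumed sensitivity, specificity and disease prevalence using Bayes' rule; all numbers are illustrative assumptions, not values reported in this study.

# Hypothetical example: how sensitivity, specificity and prevalence
# translate into the chance that a positive result is a true positive.
def predictive_values(sensitivity, specificity, prevalence):
    # Probability of a true positive and a false positive in the population
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    # Probability of a true negative and a false negative
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)   # positive predictive value
    npv = true_neg / (true_neg + false_neg)   # negative predictive value
    return ppv, npv

# Assumed values for illustration only
ppv, npv = predictive_values(sensitivity=0.90, specificity=0.85, prevalence=0.20)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")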
Among the five tests and procedures used to diagnose and monitor IBD, the general blood test is the most ordered, but also the most refused by patients. This is the test/procedure with which patients are the least comfortable. The blood test is also the test that generates the least concern about the results, and its results generate the least concern about the illness. The blood and stool tests are the tests/procedures for which explanations about their necessity are provided the least often. The use of blood tests is very common, especially for patients with chronic illnesses, but this test is linked to a fear of needles, which can have serious consequences leading to non-adherence and avoidance of health care [30]. The severe form of this fear is a phobia characterized by an intense and irrational fear of blood, needles, medical care and injuries [28]. A fear of needles affects between 14 and 38% of the adult population, whereas the prevalence of the phobia lies in the 3 to 4.5% range [31]. The findings of the study presented here are consistent with these data. A total of 36% of patients (74 patients) refused to undergo a blood test. Among them, only 9 refused for reasons such as time, pain, costs, potential risks, side effects, fear of results, the test being too revealing or confidentiality. Thus, the remaining patients refused the blood test for reasons other than those suggested in the questionnaire. It is therefore possible that the fear of needles and blood is the main reason for refusing to undergo the blood test [30]. It is also possible that a general blood test had already been performed before the patient was referred to the gastroenterologist, which would explain why the patient refused to undergo it again. The frequency with which blood testing was requested may also explain patients' low level of comfort and compliance. Six patients mentioned cost as a reason for refusal. Since these tests are mostly covered by the public health system in Canada, future studies could further explore the costs to patients of blood testing. Additional studies would be required to better understand the reasons for refusing the blood test as part of the IBD monitoring process.
For its part, the stool test ranks among the least-ordered tests/procedures and few patients refuse it. This is the test with which patients feel the most comfortable. As with the blood test, concern about the results was low and the results did not generate too many concerns about the illness. Various laboratory tests are used in screening patients with suspected IBD. These tests make it possible to identify IBD patients who are relapsing or at risk of relapse [22]. In the past, laboratory markers were underestimated due to their low specificity. Given that endoscopic assessment is invasive and requires significant resources, identifying biomarkers of an illness' activity becomes an attractive alternative [23]. Fecal biomarkers can serve as surrogate markers of gut inflammation [21]. For example, fecal calprotectin has become a clinical standard to assess IBD activity, predict relapse and monitor response to treatment [16]. A study conducted among adolescents has revealed that, on one hand, they tend not to report all of their symptoms and, on the other hand, they are not embarrassed by the idea of collecting their stool [17]. Thus, the stool test is appropriate and well received by this patient group. Although the study presented here was conducted with an adult population, the findings are consistent with those of the study conducted with adolescents. It indeed appears that the stool test was the test with which patients were the most comfortable and, even though it is the least-often explained by physicians, patients had a very good understanding of the need to undergo this test.
Colonoscopy, colon biopsy and medical imaging are procedures that take longer to perform and are more invasive than a blood or stool test. Generally speaking, procedures raise more concerns about the results, the results increase concerns about the illness, and physicians more frequently explain to patients the reasons why they are needed. A colonoscopy is often requested and very seldom refused. This is the procedure with which patients are the least comfortable. It is also the one that generates the most concerns about the results and whose results raise the most concerns about the illness. This perhaps explains why, of all the monitoring tools available, this is the procedure for which physicians most often provide explanations of why it is needed, the impact it will have on treatments, the inherent invasive nature of the procedure and the inherent potential risks. A colonoscopy is the gold standard procedure to assess disease activity, but it is invasive, expensive and time-consuming [22]. For patients, this procedure is demanding, as it requires motivation, planning and preparation (dietary restrictions and the need to take purgatives) [41]. In one study, 70% of respondents indicated they had not received enough information about the procedure, which decreased their comfort during the procedure [29]. A study on patient perceptions of the colonoscopy has shown a lack of knowledge about anatomy, the procedure and the reason for undergoing a colonoscopy [33]. These findings could explain why physicians more often explain to patients why this procedure is needed, as well as the impact the results will have on treatment and the potential risks associated with this procedure.
A biopsy during the colonoscopy is often ordered and very seldom refused. This is the procedure for which the reasons why it is needed are the least frequently explained to patients by their physicians and for which the presentation of the results generates the least concern about the illness. Patients nevertheless reported a high level of comfort with this procedure. However, patients with a longer duration of disease are usually very concerned about the results of the biopsy because they are at higher risk of cancer. The goal of surveillance with colonoscopy is to reduce mortality and morbidity associated with colorectal cancer (CRC) by detecting asymptomatic cancers and premalignant lesions [24]. Ananthakrishnan, Cagan [26] have shown that the rate of CRC among IBD patients who had recently undergone a colonoscopy (within the previous 36 months) was lower and that the mortality rate was lower for patients diagnosed with a CRC, which underscores the importance of adherence to surveillance colonoscopy, as was observed in the present study. Despite the risk of developing CRC, the colon biopsy remains the procedure for which patients receive explanations of why it is needed the least often and for which their understanding of why the procedure is needed is the lowest.
Among all of the procedures presented in this study, medical imaging is the least-often requested. It is more commonly refused than colonoscopy, colon biopsy, and stool test. This is the procedure for which patient level of comfort is the highest and for which the potential risks and the level of invasiveness are the least-often explained to patients. The additional information provided by medical imaging could change therapeutic decisions and have an impact on the clinical course of the illness, particularly in CD where it is most often ordered. [27]. Medical imaging techniques such as computerized tomography (CT) and magnetic resonance imaging (MRI) are increasingly used in the assessment of IBD [14]. A CT scan is an excellent imaging method for IBD, but the radiation doses are considerably higher than with other imaging methods. Given the chronic nature of IBD, patients are at risk of being exposed to an accumulation of potentially harmful ionizing radiation throughout their lifetime of medical follow-up, thereby increasing the risk of cancer among a population already at risk. Magnetic resonance imaging and small intestine contrast enhanced ultrasonography therefore emerge as radiation-free alternatives that provide results that compare to those of a CT scan in terms of accuracy [19]. No imaging technique is perfect, but each method plays a potential role in the assessment of IBD. Each method has its share of risks and benefits, and various aspects such as costs, exposure to radiation, the need for anesthesia and image quality must also be considered [27]. One study has shown that patients with aggressive lymphoma experienced high levels of anxiety during the period in which they had to undergo routine medical imaging scans to monitor their illness [32]. A systematic review of literature has been conducted to better understand the experience of patients as they undergo medical imaging procedures [34]. It has been shown that patients frequently have a negative experience that could stem directly from certain aspects of the procedure related to the production of high-quality images, such as: MRI noise; exposure to magnetism or radiation; holding one's breath; the use of a contrast medium; and intestinal distention. As these elements help produce high-quality images, it is important that the reasons for these negative aspects be explained to patients. Yet, in the findings presented here, this is the procedure about which the fewest number of patients had received information about potential risks and the degree of invasiveness, and about which the patient level of comfort was highest.
This study presents certain limitations that could affect the interpretation of the results. First, by virtue of confidentiality restrictions, participant selection was conducted through a questionnaire on the CCC website, which may represent a more motivated and engaged group of respondents. A higher response rate would have provided a larger sample size for broader application of study results. Second, the questions used to determine the reasons patients refused to undergo a test/procedure did not provide any clarification as to the causes of the refusal, especially for the blood tests. An additional study should be conducted to shed light on the reasons that lead patients to refuse a test/procedure. With the latest advances in genomics, medical imaging and regenerative medicine, more precise diagnoses and personalized treatments have now become current fields of research [49]. Thus, the development of genetic tests that could result in better diagnosis and patient response monitoring may lead to an improvement of patients' living conditions.
Conclusions
In a patient-centered care approach, it is essential to gain a better understanding of patient perceptions of diagnostic and monitoring tests and procedures. Acceptance of the test or procedure by patients is essential for them to adhere to the monitoring process. Furthermore, if the anxiety generated by the test/procedure is too great or if the risks outweigh the benefits, the patient is at risk of refusing the test or procedure. The present study thus takes a first step in this direction and provides findings that are useful to physicians. First, it has become clear that there is a problem with the blood test. This test appears to be trivialized by physicians, as few explanations are given to patients, who refuse this test in great numbers. The level of patient understanding of this test is in fact not very high. By being aware of this problem, physicians are in a better position to prevent patient refusal by modifying their practice, providing more information about this test and discussing it with their patients in order to understand their reluctance to undergo it. In addition, since patients are very comfortable with the stool test, which offers a non-invasive and very revealing monitoring solution, physicians can turn to this test with patients who refuse the blood test. As procedures generate much concern among patients, physicians must take the time to properly explain not only the reasons for these procedures, but also the potential risks and the level of invasiveness of the ordered procedure. This study also clearly shows that the risks of having a false positive or negative are not sufficiently communicated to patients. It is therefore important that physicians transfer this information to their patients as part of their practice. Lastly, the theoretical contribution of this study rests on the dissociation between a strictly rational understanding of information and the transfer of this information into the patient's medical decision-making process, which brings into play cognitive limitations and emotional factors that in turn distort the manner in which the information is processed by the patient. Thus, despite the transfer of information from physician to patient, the latter loses part of this information when the time comes to act and/or make a decision, as emotions, values and fears alter the course of the patient's processing of the information.
Additional file
Additional file 1: Online Questionnaire: Inflammatory Bowel Disease Patient Perceptions of Diagnostic and Monitoring Tests and Procedures. This online questionnaire was designed to better understand the concerns raised by the tests/procedures among IBD patients, the transfer of information from the physician to patients and the patients' understanding of these tests/procedures, as well as the rate of prescriptions, the reasons patients refuse to undergo such tests/ procedures and the sociodemographic profile of the participants (Additional file 1-Online Questionnaire). (DOC 179 kb) Abbreviations CCC: Crohn's and Colitis Canada; CD: Crohn's disease; CRC: Colorectal cancer; CT: Computerized tomography; IBD: Inflammatory bowel disease; UC: Ulcerative colitis | 2019-02-14T13:58:38.545Z | 2019-02-13T00:00:00.000 | {
"year": 2019,
"sha1": "3bd50448c18c4e647554d91b62f21186f5de0824",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-019-0946-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bd50448c18c4e647554d91b62f21186f5de0824",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242910898 | pes2o/s2orc | v3-fos-license | Correlation of Age and Bone Marrow Derived CD 34+ Cells and Leucocytes in 873 Patients.
Background: The use of regenerative medicine, such as autologous chondrocyte implantation (ACI), matrix associated stem cell therapy (MAST) and bone marrow derived stem cell therapy against arthritis, is the gold standard for certain indications. However, the clinical improvement of patients using these novel therapies remains heterogeneous and the reasons for this are not fully understood. The impact of age is a constant concern for patients and doctors: elderly patients can only be mobilized with lower total collected CD34+ cells, older age correlates with inferior results, and ageing is associated with fatty degeneration of the bone marrow, delayed fracture healing and osteoporosis.
Introduction
The use of cell therapy, such as autologous chondrocyte implantation (ACI) and matrix associated stem cell therapy (MAST), for the repair of damaged cartilage is well established, demonstrating good short- to medium-term outcomes. 9,24 However, the clinical improvement of patients using these novel therapies remains heterogeneous and the effects are not fully understood. Recently, bone marrow derived stem cells have gained further attention in orthopedic diagnoses, where they are also used as bone marrow aspirate (BMA) or bone marrow aspirate concentrate (BMAC) for stem cell therapy.
(1) Arthritis is treated in this way, aiming to generate additional cartilage and stop the inflammatory process. 8,11,12,18,27,31,33,35 (2) In patients with spinal cord injuries, bone marrow derived stem cells can improve some neural function. 6,7,13,15,17,23 Regardless of the purpose of a cell therapy, successful cell transplantation requires the use of a sufficient number of specific cells and their engraftment. 20 Better clinical results can be achieved with a higher donor-site stem cell count 19,30 and a lower age. 14 The impact of age on stem cell procedures is not fully understood and remains controversial: specific qualitative age-dependent findings have been published up to now (lower proliferation and extracellular matrix forming potential, 16 decreasing growth rate and telomere length, 4 lower mobilization rate, 1,28 fatty degeneration of the bone marrow, delayed fracture healing and osteoporosis, 21 acquired mitochondrial DNA mutations 34), but the quantity of stem cells according to age has not yet been studied sufficiently. In hemato-oncology it is known that grafts from older donors do not adversely affect outcomes of allogeneic hematopoietic cell transplantation as compared to grafts from younger donors. 10,28 Since an age-dependent range of the physiological leukocyte and stem cell numbers and their vitality in the human bone marrow has not yet been reported, and most clinical studies about CD34+ stem cells refer to patients with malignant diseases, we wanted to establish a normal range of bone marrow derived leukocytes and stem cells of patients without malignancies according to age, harvested by bone marrow aspiration (BMA).
Therefore, the aim of this study is to evaluate the number and the vitality of bone marrow derived leukocytes and CD34+ cells in a large number of patients undergoing autologous stem cell transplantation for non-malignant diseases and to study the connection to age, in order to find out whether a possible age limit exists after which cell number and vitality decrease, and whether a predictor exists for the amount of CD34+ cells in the bone marrow.
Level IIb: Retrospective cohort study
In a retrospective study, the laboratory results of all patients who underwent stem cell transplantation for non-malignant diseases were evaluated. All bone marrow punctures were done by the same surgeon (K.G.). The stem cells were harvested with a Yamshidi Needle (15 ga x 2.688 in MAX Bone Marrow Aspiration Needle, ARGON Medical Devices, Athens, USA, www.argonmedical.com) under sedoanalgesia and in compliance with all applicable laws and regulations. 90 ml of bone marrow aspirate was retrieved using the technique of Kristin Oliver 26 (using 10 ml syringes and changing direction repeatedly). One milliliter of this sample was immediately transferred to a laboratory and analyzed with FACS (fluorescence-activated cell sorter) using a Stem Cell Kit from Beckman Coulter and the ISHAGE protocol (https://www.bc-cytometry.com/PDF/DataSheet/IM3630.pdf).
Results
873 datasets were found in the laboratory patient database. Age ranged from 1-90 years (mean 28, median 25) and patients were clustered into age groups of 10 years. Gender distribution was 29% female and 71% male. There was no gender difference. The anonymous laboratory data was statistically evaluated by a blinded observer. (Table 1) Table 1 Age distribution of bone marrow derived cells
Bone marrow derived leucocyte cell count
There were no significant differences between age and gender and no interaction. Even in a pairwise comparison of patients below the age of 20 versus patients at the age of 20 years and over, no significant differences were found (p = .9). The correlation between age and bone marrow derived leucocyte cell count is negligible (r = -.255, p < .001, Figure 1).
Bone marrow derived leucocyte vitality
There were no significant differences between age and gender, no interaction in any age group and no significant correlation (Table 1).
Bone marrow derived CD34+ cell count
There were no significant differences between age and gender and no interaction. Even in a pairwise comparison of patients below the age of 20 versus patients at the age of 20 years and over, no significant differences were found (p = .8). The correlation between age and bone marrow derived CD34+ cell count is negligible and comes from patients below the age of 10 years (r = -.361, p < .001, Figure 2).
Correlation / Predictor
The number of bone marrow derived leucocytes and CD34+ cells (as a subset of leucocytes) showed a great variation between individual patients, but both cell types correlated strongly and significantly (p < .001, r² = .822, Figure 3) within the respective patients. Bone marrow derived leukocytes are therefore a viable predictor for the amount of stem cells. The number of stem cells can be calculated as follows: Stem cells (CD34+ cells per microliter) = 13.5 x Leukocytes (per nanoliter of bone marrow aspirate).
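As an illustration of this rule of thumb, the following minimal Python sketch predicts the CD34+ cell concentration from a measured leukocyte count using the 13.5 factor reported above; the example leukocyte value is an assumption chosen only for demonstration.

# Predict CD34+ cells per microliter of bone marrow aspirate from the
# leukocyte count per nanoliter, using the factor reported in this study.
CD34_PER_LEUKOCYTE_FACTOR = 13.5

def predict_cd34_per_microliter(leukocytes_per_nanoliter):
    """Estimate CD34+ cells/microliter from leukocytes/nanoliter of aspirate."""
    return CD34_PER_LEUKOCYTE_FACTOR * leukocytes_per_nanoliter

# Hypothetical measurement: 20 leukocytes per nanoliter of aspirate
leukocytes = 20.0
print(f"Predicted CD34+ cells: {predict_cd34_per_microliter(leukocytes):.1f} per microliter")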
Bone marrow derived leucocytes and CD34+ cells showed a negligible tendency (r = -.255, p < .001 and r = -.361, p < .001) to decrease over the lifetime. The weak correlation was only due to the group of children below the age of 10 years. Thereafter, there was variation between individual patients, but no decrease over time.
Discussion
Arthritis and musculoskeletal disorders constitute a major cause of disability, and the burden of musculoskeletal diseases will increase with an increasingly ageing population. 22 Stem cells remain at the forefront of efforts in regenerative medicine, based on a conviction that this technology can provide an effective treatment paradigm for major diseases where there is still an unmet need. 3 In 2017, the first prospective, single-blind, placebo-controlled trial of bone marrow aspirate concentrate for knee osteoarthritis described a positive clinical outcome. 31 A recent review of 1500 papers on stem cell therapy in orthopaedics revealed that studies reported information on only 42% (range, 25%-60%) of the variables included within established minimum reporting standards, leaving it unclear which amount of stem cells was really harvested and injected. 25,29 A higher donor-site cell count correlates with a better outcome. 8,14,18,19,27,35 In hemato-oncology, where bone marrow transplantations have been performed routinely for decades, elderly patients show inferior mobilization rates and inferior outcomes in some studies. 1,28,14 Likewise, in a mouse model, an age-related fatty degeneration of the bone marrow was described. 21 The current study looked at the donor site of 873 healthy patients (without bone marrow diseases), applicable for orthopaedic interventions, to understand the vitality and the quality of the bone marrow derived (stem) cells.
Age and cell counts
Due to its large size, this study is the first to establish a normal range of leukocytes and CD34+ stem cells in bone marrow aspirate (BMA) in the average population. The amount and vitality of bone marrow derived leucocytes and the number of mononuclear cells (stem cells) do not deteriorate with age.
The formerly described lower mobilization rate of bone marrow derived (stem) cells in the elderly in some studies 1,28 conflicts with a recent study in which little of the parameter variability could be explained by age. 5 Our finding that the number of bone marrow derived leucocytes remains stable in adults was confirmed in a prior study with 24 goats, but we did not find further papers referring to humans. 2
Vitality
The vitality of bone marrow derived cells was always high (87-91%) in all age groups using FACS (fluorescence-activated cell sorting) analysis. Regarding dental pulp stem cells, the proliferation rate decreases in elderly patients. 36 We could not confirm this finding in bone marrow derived cells. It is well known that jawbones have a different bone metabolism. 32
Correlation / Predictor
Stem cells are mononuclear cells and therefore a 7.4% fraction (1 / 13.5) of the bone marrow derived leucocytes, as we demonstrated.
To our knowledge, no prior study described the positive and strong correlation (p < .001, r² = .822) between both parameters. Since stem cells can only be identified using specific CD antigen sets (CD 34, CD 90, CD 45, CD 107, ...), which is costly and laborious, this correlation can be utilized to predict the amount of stem cells based on the number of bone marrow derived leucocytes alone, which is much easier. As a matter of fact, only a negligible share of publications reports the absolute number of stem cells used per patient. 29 Using the newly described correlation, a much cheaper possibility exists to assess whether a specific patient has a high or low stem cell number, since counting leukocytes can be done in any operating room, whereas counting stem cells requires at least a FACS analysis.
Limitations:
Since this was a retrospective study, we were not able to assess any social data (body weight, sport habits, smoking status, ...). We did not perform colonization and differentiation experiments, since the lack of a specific MSC marker and the low frequency of MSCs in bone marrow necessitate their isolation by in vitro expansion.
Further research is needed to link clinical outcome with the absolute number of bone marrow derived leukocytes and stem cells, since it is not yet clear whether the heterogeneous clinical results of individual patients are linked to the heterogeneous bone marrow derived stem cell counts. It remains speculative whether patients with higher cell counts will have better outcomes and might therefore be better suited for stem cell operations.
Conclusion
We established a normal range of bone marrow derived leukocytes and stem cells in 873 patients. No age-related decrease regarding the number and the vitality of leukocytes and stem cells was found. Furthermore, the number of bone marrow derived leucocytes might be used to predict the amount of stem cells (usually a 7.4% share) on an individual basis in order to focus on the "ideal" patients.
Declarations
Ethics approval and consent to participate: Use of anonymous secondary data; no trace to individual patients is possible. According to the Declaration of Helsinki and the Ethics Commission of our University, no informed consent was necessary.
Consent for publication
Approved by all authors. There is no relevant decrease of bone marrow derived leucocyte cell count with increasing age. A strong correlation between bone marrow derived leucocytes and bone marrow derived CD34+ cells was found.
Exclusion criteria: prior bone marrow stimulation or treatment with GCSF (granulocyte colony stimulating factor); prior treatment with hormones or corticoids in the last 6 months. Statistical analysis was done using: Pearson's chi-squared tests; one-way analyses of variance (with post-hoc Bonferroni-adjusted pairwise comparisons of estimated marginal means); Pearson's correlation.
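The following minimal Python sketch illustrates the kind of analysis listed above on synthetic data: a Pearson correlation between leukocyte and CD34+ counts, and a one-way ANOVA across age groups with Bonferroni-adjusted pairwise t-tests; all values are simulated and the group split is an assumption for demonstration only.

# Illustrative re-implementation of the listed statistics on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-patient values (not study data)
leukocytes = rng.normal(25, 8, 300)                  # per nanoliter
cd34 = 13.5 * leukocytes + rng.normal(0, 40, 300)    # per microliter
r, p = stats.pearsonr(leukocytes, cd34)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")

# One-way ANOVA of CD34+ counts across three hypothetical age groups
groups = [cd34[:100], cd34[100:200], cd34[200:]]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.3g}")

# Bonferroni-adjusted pairwise t-tests between the groups
pairs = [(0, 1), (0, 2), (1, 2)]
for i, j in pairs:
    t, p_raw = stats.ttest_ind(groups[i], groups[j])
    p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni correction
    print(f"group {i} vs {j}: adjusted p = {p_adj:.3f}")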
Figure 2 | 2020-10-28T18:28:44.659Z | 2020-09-08T00:00:00.000 | {
"year": 2020,
"sha1": "78a9431ea630b932df9b9f13dbeeeec9d2736243",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-70184/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1559c648c8ee6839644f1276eca193f05aa9e22e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
268722613 | pes2o/s2orc | v3-fos-license | Potential of Unmanned Aerial Vehicle Red–Green–Blue Images for Detecting Needle Pests: A Case Study with Erannis jacobsoni Djak (Lepidoptera, Geometridae)
Simple Summary An outbreak of the unique pest Erannis jacobsoni Djak in Mongolia would severely impact the forest ecosystem. Therefore, this study employed a combination mode of UAV-RGB vegetation indices and texture features, utilizing the sequential projection algorithm to extract sensitive features and machine learning algorithms to construct a damage level recognition model, achieving low-cost, rapid, and effective pest detection. The results indicate that the combined mode of the RGB vegetation indices and texture features yielded good pest detection results, with an overall accuracy of 89%. This could provide an important experimental foundation for subsequent large-scale forest pest monitoring with a high spatiotemporal resolution. Abstract Erannis jacobsoni Djak (Lepidoptera, Geometridae) is a leaf-feeding pest unique to Mongolia. Outbreaks of this pest can cause larch needles to shed slowly from the top until they die, leading to a serious imbalance in the forest ecosystem. In this work, to address the need for the low-cost, fast, and effective identification of this pest, we used field survey indicators and UAV images of larch forests in Binder, Khentii, Mongolia, a typical site of Erannis jacobsoni Djak pest outbreaks, as the base data, calculated relevant multispectral and red–green–blue (RGB) features, used a successive projections algorithm (SPA) to extract features that are sensitive to the level of pest damage, and constructed a recognition model of Erannis jacobsoni Djak pest damage by combining patterns in the RGB vegetation indices and texture features (RGBVI&TF) with the help of random forest (RF) and convolutional neural network (CNN) algorithms. The results were compared and evaluated with multispectral vegetation indices (MSVI) to explore the potential of UAV RGB images in identifying needle pests. The results show that the sensitive features extracted based on SPA can adequately capture the changes in the forest appearance parameters such as the leaf loss rate and the colour of the larch canopy under pest damage conditions and can be used as effective input variables for the model. The RGBVI&TF-RF440 and RGBVI&TF-CNN740 models have the best performance, with their overall accuracy reaching more than 85%, which is a significant improvement compared with that of the RGBVI model, and their accuracy is similar to that of the MSVI model. This low-cost and high-efficiency method can excel in the identification of Erannis jacobsoni Djak-infested regions in small areas and can provide an important experimental theoretical basis for subsequent large-scale forest pest monitoring with a high spatiotemporal resolution.
Introduction
Erannis jacobsoni Djak (Lepidoptera, Geometridae) is a unique leaf-feeding pest in Mongolia that feeds on larch needles, and it causes the most damage during its larval stage (June to July) [1].During this period, larvae violently feed on needles, causing larch to slowly shed from the top and the growth condition of the trees to gradually suffer until death, which leads to a serious imbalance in the forest ecosystem [2].According to a survey, since 1920, Erannis jacobsoni Djak has shown a trend of spreading from the northwest to the southeast of Mongolia, and the typical outbreak area of the Khentii province is only over a hundred kilometres away from China's Greater Khingan Mountains' forest area [3].Since there is no natural barrier between the two countries for interception, pest invasion is very likely.The pest has a strong adaptability to the environment; once it invades new areas, it will easily form a dominant population, which will cause immeasurable environmental damage to forest areas and economic losses.It is evident that the timely monitoring and control of this pest are extremely important to protect forest ecosystems.At present, pest prevention measures in Mongolia are based on the manual dispersal of chemical pesticides or biological pesticides [4], which are implemented mainly by experience, do not distinguish among pest distribution areas, and lack a precise guidance basis [5], resulting in the insufficient application of pesticides to severely affected areas and their excessive application to mildly affected areas, leading to environmental pollution [6].Therefore, methods that identify the level of pest damage for Erannis jacobsoni Djak can not only improve the efficiency of pesticide implementation and reduce the pollution of the environment by pesticides but also maintain the balanced development of plant ecosystems, which has theoretical significance and practical value in maintaining ecological security.
Research on pest damage monitoring has been the focus of scholars both domestically and internationally [7,8].In traditional pest research, pest monitoring and investigation are mainly carried out by professionals on-site, resulting in relatively accurate and reliable data.However, this method is time-consuming, labour-intensive, expensive, and environmentally destructive and cannot meet the demands of large-scale applications.The development of remote sensing technology has made it possible to monitor pest damage at a regional scale [9,10].Over the past few decades, satellite remote sensing technology has developed significantly, achieved a high monitoring accuracy, and been widely applied by scholars [11][12][13].However, the potential applications of satellite remote sensing in many pest research areas have been limited by low temporal and spatial resolutions, high costs, and weather conditions [14].In addition, many monitoring models can only provide high-precision experimental results at a large scale, such as at the national, provincial, or municipal level, and cannot describe the changes in pest infestation in detail within relatively small areas [15].
Recently, the utilisation rate of unmanned aerial vehicle (UAV) platforms has increased.Their advantages, such as ease of operation, high spatial resolution, and high observation frequency, have shown to be of practical value in natural disaster research [16,17].The calculation of vegetation indices based on UAV imagery spectral reflectance has been proven to be an effective method for monitoring the level of plant damage.For example, Abdollahnejad et al. used dual-temporal UAV data to calculate vegetation indices and assessed the health of mixed broad-leaved and needle-leaved forests using machine learning algorithms [18].Ma et al. combined spectral information based on UAV multispectral data and applied deep learning methods to invert the damage information of Tomicus yunnanensis [19].Guerra-Hernández et al. discriminated the level of damage of black alder under Phytophthora infestation using UAV multispectral vegetation indices, achieving a maximum accuracy rate of 75% [20].The above research confirms that, compared with traditional methods, appropriate spectral vegetation indices can be used to more effectively monitor pests and diseases, but this requires more advanced and expensive multispectral sensors as a basis.As an alternative, some scholars have investigated vegetation parameters and pests and diseases using vegetation indices obtained from the red-green-blue (RBG) images of commercial unmanned aerial vehicle RGB cameras [21].For example, del-Campo-Sanchez et al. used UAV RGB imagery to detect the damage level of Jacobiasca lybica pests in vineyards [22].De Castro et al. differentiated healthy and wilt-diseased avocado trees using vegetation indices calculated from UAV RGB images and obtained good results [23].Although the cost of acquiring data from UAV RGB vegetation indices (RGB VI ) is relatively low, the monitoring effect is comparatively inferior to that of multispectral data [24,25], and its accuracy cannot fully meet the requirements.Therefore, if other appropriate features are added on the basis of RGB VI , it may be possible to achieve an improvement in detection accuracy [26].The texture structure information of the tree canopy, which is widely recognised as a characteristic quantity in addition to spectral information, can reflect features that cannot be reflected by the spectrum.It is also one of the factors that affect the robustness of vegetation indices [27,28] and can be used as an ideal feature combined with RGB VI .It can reflect the subtle changes in trees with insect pests, avoid the influence of factors such as "same spectrum, different object" and "same object, different spectrum" in the presence of land features, and stretch the distance of an image [29].It has great potential in plant pest identification and increases the feasibility of some studies.Currently, there are more cases of using RGB texture features (RGB TF ) alone to monitor vegetation [30,31] than there are reports on using RGB TF with other data as monitoring variables.The complementary fusion of RGB VI and RGB TF features can unlock the unlimited potential of RGB images for pest observation, opening up new economic, high-frequency, and high-precision ways to monitor pests.This is of great significance for the rapid diagnosis and prevention of Erannis jacobsoni Djak pest infestation.
In terms of vegetation health detection methods, scholars have used algorithms that focus on traditional machine learning and deep learning.For example, Syifa et al. used two machine learning algorithms, that is, support vector machines and artificial neural networks, to distinguish between healthy and affected trees in the case of pine wilt disease, achieving an accuracy of 94.13% [32].Duarte et al. utilised the random forest algorithm to detect the damage status of eucalyptus trees under the threat of eucalyptus long-horned borers, achieving a classification accuracy of 98.5% [33].Liu et al. identified 31 categories of forestry pests using the YOLO-4 algorithm with convolutional neural networks and obtained excellent results [34].Among them, the random forest (RF) and convolutional neural network (CNN) classifiers are frequently and widely used for quantitative tree and vegetation pest detection due to their excellent computing speed and ability to handle complex data, respectively [35][36][37].
Based on the above discussion, the purpose of this paper is to identify the damage level of Erannis jacobsoni Djak based on UAV remote sensing images of typical areas of Erannis jacobsoni Djak infestation, combined with RGB VI and RGB TF information, using RF and CNN methods to answer three basic questions: (i) whether the successive projections algorithm (SPA) can filter out the features that are sensitive to the level of pest damage from many variables, (ii) whether the combined pattern of RGB VI and RGB TF (RGB VI&TF ) can improve the accuracy of the pest detection model, and (iii) how to choose a suitable model to build the algorithm when the sample size is unstable.
Study Area
The study area is located within the typical outbreak area of Erannis jacobsoni Djak: a region of Binder, Khentii, Mongolia, with a length of 600 m, a width of 300 m and an average altitude of 1100 m.The study area was mainly dominated by Larix sibirica, with approximately three thousand Larix sibirica with different levels of damage distributed in the area, and the tree species was relatively homogeneous, which provided natural conditions for the invasion of Erannis jacobsoni Djak.The area had been frequently infested with Erannis jacobsoni Djak between 2010 and 2020, and signs of the pest were found by the local forestry survey team in late May and early June 2021.Therefore, the researchers used an UAV to collect RGB visible and multispectral image data from the test area in late June 2021.Meanwhile, 840 larch sample trees were randomly selected from the test area for the study on pest damage level identification (Figure 1).
Field Data
In the study area, 840 sample trees of larch with different levels of damage were selected, and a survey of the geospatial coordinates and leaf loss rate of each tree was completed.The sample trees were divided into three layers-upper, middle, and lowerand three typical branches of each layer were selected to count healthy and damaged needles and calculate the leaf loss rate using Equation (1).Then, the average value was taken as the leaf loss rate of the current sample tree.
DR = L_d / (L_h + L_d) × 100%, (1)
where DR denotes the rate of leaf loss and takes values between 0 and 100%, and L_h and L_d denote the number of healthy and damaged needles, respectively. On this basis, through visual discrimination experience in the field and the classification criteria in previous studies, the results were classified into pest damage levels based on Table 1 [38], where the final classification results of 210 larch trees into healthy, mild, moderate, and severe levels are shown.
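As a small illustration of this field protocol, the Python sketch below computes a per-branch leaf loss rate from healthy and damaged needle counts and averages it over the nine sampled branches of one tree; the counts are hypothetical and the formula follows the definition of DR given above.

# Hypothetical per-tree leaf loss rate (DR) from branch-level needle counts.
def leaf_loss_rate(healthy, damaged):
    """DR for one branch, in percent, following Equation (1)."""
    total = healthy + damaged
    return 100.0 * damaged / total if total > 0 else 0.0

# Nine sampled branches (three per layer); counts are invented for illustration
branches = [(120, 15), (98, 30), (110, 22),   # upper layer (healthy, damaged)
            (105, 40), (90, 55), (130, 28),   # middle layer
            (140, 10), (125, 35), (100, 60)]  # lower layer

tree_dr = sum(leaf_loss_rate(h, d) for h, d in branches) / len(branches)
print(f"Mean leaf loss rate of the sample tree: {tree_dr:.1f}%")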
UAV Image Data
A test was conducted with a DJI Phantom 4 multispectral quadcopter drone equipped with an all-in-one imaging system with RGB visible sensors (red, green, and blue channels) and five multispectral sensors (blue, green, red, red-edge, and near-infrared bands).Each camera had a 200-pixel resolution, and the resolution reached the centimetre level.Data acquisition was conducted under clear, cloudless, and windless conditions from 10:00 to 14:00 British Summer Time (BST), with the flight altitude being set at 100 m.The camera was calibrated with a whiteboard before the flight, and the camera probe went down vertically during the flight to acquire the observation images.After the flight, the images were preprocessed by "DJI Terra" to obtain two types of images-RGB and multispectral images.On this basis, the sample larch was visually segmented with ArcMap10 to obtain the canopy vector, and the damage level was assigned according to the measured leaf loss rate data (Figure 2).
Feature Extraction
Vegetation indices are combinations of two or more reflectance wavelengths that enhance differences in reflectance characteristics between stands of various levels of damage and are less influenced by light and background [39].Referring to a previous study, we used 60 multispectral vegetation indices (MS VI ) and 22 RGB vegetation indices (RGB VI ), which are widely used in plant pest and related studies, for the detection of Erannis jacobsoni Djak pests [5,15,18,25,[40][41][42].First, the MS VI and RGB VI were calculated from the corresponding images using "Envi".Second, based on the sample tree canopy vector, the average value of each feature was extracted tree-by-tree as the index value of the sample tree at hand.
In addition, to extract RGB TF , principal component analysis (PCA) was performed based on the red, green, and blue channels of the RGB images. PCA is a statistical method to transform a set of potentially correlated variables into a set of linearly uncorrelated variables by orthogonal transformation, which can reduce the dimensionality of the feature space to eliminate redundant information, help speed up the calculation, and improve the accuracy of a model [26]. Among various texture extraction algorithms, the grey-level co-occurrence matrix (GLCM) is widely used for texture analysis [43]. The results of the PCA were used to calculate eight texture feature values by GLCM, including the mean (mean), variance (var), homogeneity (hom), contrast (con), dissimilarity (dis), entropy (ent), second moment (sm), and correlation (corr) values [44,45], and the mean value of each feature was calculated tree-by-tree as the RGB TF value of each sample tree using the sample tree canopy vector. The final calculation results of MS VI , RGB VI , and RGB TF were normalised to reduce errors.
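A minimal Python sketch of this texture step is given below, assuming scikit-image is available and using a small synthetic grey-level image in place of the first principal component of the RGB canopy image; the distances, angles and quantisation level are illustrative choices, not those of the study.

# Sketch: GLCM texture features from a grey-level image (e.g., the first
# principal component of an RGB canopy image, rescaled to 0-255).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(42)
gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # synthetic stand-in

# Symmetric, normalised GLCM averaged over four directions at distance 1
glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "dissimilarity", "homogeneity",
                         "correlation", "ASM")}

# Mean and entropy computed directly from the normalised matrix
p = glcm.mean(axis=(2, 3))                     # average over distances/angles
i_idx = np.arange(p.shape[0])[:, None]
features["mean"] = float((i_idx * p).sum())
features["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())

for name, value in features.items():
    print(f"{name}: {value:.4f}")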
Feature Sensitivity Analysis and Extraction
(1) Sensitivity analysis of variance An analysis of variance (ANOVA) is used to determine whether subtyped independent variables have a significant effect on numerical dependent variables by calculating the variance statistic F value and testing whether the means of each aggregate are equal.The basic idea is to determine the magnitude of the influence of controllable factors on the study results by analysing the magnitude of the contribution of variance from different sources to the total variance.With the help of ANOVA, the variances of MS VI , RGB VI , and RGB TF on the pest damage level were calculated to reveal the sensitivity of these features to the pest damage level.The larger the F value of a feature, the more significant the sensitivity of the feature to the level of pest damage.
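A minimal sketch of this screening step is shown below, assuming the per-tree feature values and damage levels are already available as arrays; the synthetic data and the critical F value used here are illustrative only.

# Rank candidate features by their one-way ANOVA F value across damage levels.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_trees, n_features = 840, 5
levels = rng.integers(0, 4, n_trees)                 # 0=healthy ... 3=severe
X = rng.normal(0, 1, (n_trees, n_features))
X[:, 0] += 0.8 * levels                              # make feature 0 sensitive

f_values = []
for j in range(n_features):
    groups = [X[levels == k, j] for k in range(4)]
    f_stat, _ = f_oneway(*groups)
    f_values.append(f_stat)

critical_f = 3.34                                    # example threshold from the text
for j, f_stat in enumerate(f_values):
    flag = "sensitive" if f_stat > critical_f else "not sensitive"
    print(f"feature {j}: F = {f_stat:.1f} ({flag})")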
(2) Sensitive feature extraction With the help of SPA, MS VI , RGB VI , and RGB VI&TF were downscaled to eliminate overlapping and redundant information and retain meaningful features, which were set as the sensitive feature sets of MS VI , RGB VI , and RGB VI&TF as the input variables of the recognition model.SPA is a forward variable selection algorithm that minimises the covariance of modelling variables.It has the advantage that extracting a few columns of data in the initial data set can summarise the information of the vast majority of feature variables, achieve the elimination of redundant information, minimise information overlap, perform well when dealing with large-scale data, filter modelling features, and improve model accuracy.Details of the SPA algorithm can be found in [2].
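Since SPA itself is only summarised here, the following compact Python sketch shows the core idea on a synthetic feature matrix: at each step, the remaining columns are projected onto the orthogonal complement of the already selected ones, and the column with the largest residual norm is added. This is a simplified illustration, not the exact implementation used in the study (see [2] for details).

# Simplified successive projections algorithm (SPA) for variable selection.
import numpy as np

def spa_select(X, n_select, start_col=0):
    """Greedy SPA: pick columns of X with minimal collinearity."""
    X = X.astype(float)
    selected = [start_col]
    for _ in range(n_select - 1):
        # Orthogonal projector onto the complement of the selected columns
        S = X[:, selected]
        P = np.eye(X.shape[0]) - S @ np.linalg.pinv(S)
        norms = np.linalg.norm(P @ X, axis=0)
        norms[selected] = -np.inf              # never reselect a column
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))                 # 200 trees x 30 candidate features
print("Selected feature indices:", spa_select(X, n_select=5))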
Multispectral and RGB Features for Needle Pest Recognition
(1) Pest recognition model with sensitive features By combining the damage level of the pests to the sampled larch and the corresponding sensitive feature sets, the damage level recognition model of Erannis jacobsoni Djak was established in MATLAB2022 with the help of RF and CNN algorithms, which were used to explore the application potential of sensitive features.RF is a data mining model that is commonly used for classification prediction.Its main principle is to generate a new set of training samples by repeatedly randomly sampling k samples from the original training sample set of a size N through the bootstrap resampling technique and then generate k classification trees to form a random forest based on the self-help sample set.The classification results of the new data are determined by the number of votes formed by the classification trees to obtain a more accurate and stable prediction result [46].CNN is widely used in deep learning and is a deep neural network with a convolutional structure.The model usually consists of five parts: input, convolution, pooling, dense connection, and output.It can map high-dimensional nonlinear data to a low-dimensional space, realise data dimensionality reduction, effectively reduce the number of parameters in the network, and alleviate the overfitting problem of the model [38].
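As an illustration of the random forest branch of this modelling workflow, the following Python sketch trains a RandomForestClassifier on synthetic sensitive-feature data with a 75/25 split; scikit-learn is assumed to be available, and the hyperparameters are placeholders rather than those tuned in the study (which was implemented in MATLAB2022).

# Sketch: damage-level classification with a random forest on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_trees_sampled, n_features = 840, 8
y = rng.integers(0, 4, n_trees_sampled)          # 0=healthy ... 3=severe
X = rng.normal(0, 1, (n_trees_sampled, n_features)) + y[:, None] * 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=440, random_state=0)  # placeholder size
rf.fit(X_train, y_train)
print("Test accuracy:", rf.score(X_test, y_test))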
(2) Analysis of the pest recognition potential of sensitive features A total of 75% of the trees from all the samples were randomly selected as the training data set (including a training set and a validation set) for modelling and optimal model selection, and the remaining 25% were used as the test data set to validate the model and analyse its pest recognition potential.To objectively evaluate the model's performance, the overall accuracy (OA), Kappa coefficient, and confusion matrix were calculated based on true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), the main metrics for model accuracy validation [21,47,48].OA is the probability that the classification result for each random sample is consistent with the type of data tested, ranging from 0 to 1.A larger value indicates a higher accuracy of the model's implementation.The Kappa coefficient is a metric used for consistency testing and represents the proportion of error reduction generated by classification versus completely random classification; it ranges from −1 to 1. Larger values indicate a better stability of the model's implementation and vice versa.The confusion matrix is calculated by comparing the victimisation level of each measured sample with the corresponding victimisation level after prediction classification, which can characterise the classification accuracy of the model for each victimisation level in this paper; the user accuracy (UA) and producer accuracy (PA) in the confusion matrix are used to evaluate the discrimination results of each damage level.The specific formulas for the accuracy evaluation metrics above are as follows: where k is the number of classes, N p is the number of predictions, N t is the number of actual measurements, and S is the sample size.
Sensitivity Analysis of RGB Features
To investigate the response of the RGB characteristics to different damage levels, the four damage levels of larch and the corresponding RGB features were plotted (Figure 3). As shown in the figure, most of the features showed clear hierarchical changes from healthy to severe damage. Specifically, the indices CIVE, ExR, R, RGRI, and VARI increased gradually; the indices B, GCC, GRVI, NGRVI, PPR, and WI behaved irregularly; and the remaining indices showed a decreasing trend. For RGB TF, the mean, hom, sm, and corr showed upward trends, while the var, con, dis, and ent showed downward trends. The reason is that, when the larch is healthy, its leaf loss rate is minimal, the biochemical components of the needles are sufficient, the canopy appears green, and the information captured in the RGB images is that of normal vegetation. When the pests begin to invade the larch, the biochemical components of the needles change abnormally, the rate of leaf loss in the stand increases, and the canopy colour shifts from green to yellow, red, and grey, all of which alter the information captured by the RGB channels. These obvious responses to changes in the level of pest damage imply that it is feasible to use RGB features to identify the degree of pest damage.
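As an illustration of how such colour indices are derived from an RGB orthophoto, the snippet below computes a few of the indices named above from per-pixel channel values; the formulas follow common definitions from the vegetation index literature and may differ in detail from the exact expressions tabulated in this paper.

```python
import numpy as np

def rgb_indices(R, G, B):
    """Compute a few RGB vegetation indices from float channel arrays."""
    total = R + G + B + 1e-9
    r, g, b = R / total, G / total, B / total          # chromatic (normalised) coordinates
    exg = 2 * g - r - b                                 # excess green (ExG)
    exr = 1.4 * r - g                                   # excess red (ExR)
    exgr = exg - exr                                    # ExGR
    rgri = R / (G + 1e-9)                               # red-green ratio index (RGRI)
    vari = (G - R) / (G + R - B + 1e-9)                 # VARI
    gla = (2 * G - R - B) / (2 * G + R + B + 1e-9)      # green leaf algorithm (GLA)
    return {"ExG": exg, "ExR": exr, "ExGR": exgr,
            "RGRI": rgri, "VARI": vari, "GLA": gla}
```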
To reveal the sensitivity of the selected features to the different damage levels of the pest, the damage levels were subjected to ANOVA against the corresponding RGB features, and the variance distribution was plotted (Figure 4). As can be seen from the figure, the condition under which the variance between damage levels for each feature satisfies p < 0.01 is F > F0.01(4, 840), indicating that features with F > 3.34 are sensitive to the different damage levels of the pest. When F > 13.57, corresponding to p < 10−10 for F(4, 840), the sensitivity between the feature and the level of pest damage is extremely significant [49]. The results showed that all the features except B, GCC, NGRVI, and RGBVI were highly sensitive to the degree of damage, and all the features except PPR had an F value larger than 13.57, showing significant sensitivity to the different damage levels of the pest. For RGB VI, the index ExGR had the highest F value of 3194.7, while, in RGB TF, the mean reached the highest value (F = 731.8). It is evident that most of the analysed features were significantly sensitive to the different damage levels of the pest and had a good ability to identify the damage level.
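The per-feature sensitivity test can be reproduced with a one-way ANOVA, as sketched below; the degrees of freedom (4, 840) reflect the grouping described above, and the critical values are recovered from the F distribution rather than hard-coded.

```python
from scipy.stats import f, f_oneway

# groups: list of 1-D arrays, one per damage level, holding a single feature's values
F_stat, p_val = f_oneway(*groups)

dfn = len(groups) - 1
dfd = sum(len(g) for g in groups) - len(groups)
crit_sensitive = f.ppf(1 - 0.01, dfn, dfd)     # ~3.34 for df = (4, 840): "sensitive" threshold
crit_extreme = f.ppf(1 - 1e-10, dfn, dfd)      # much stricter threshold for "extremely sensitive"
print(F_stat, p_val, crit_sensitive, crit_extreme)
```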
Sensitive Feature Extraction
The recognition model was constructed by extracting sensitive input variables through the SPA algorithm, and then, based on the optimal model, the sensitive features were analysed, and their feature contributions were obtained.
The results show that, for the optimal RF model (Table 2), seven of all the MS VI features were selected as sensitive features. Among them, the NDVIreg, SI1reg, SI1reg*, and TCARI indices contain a red-edge band in their constituent spectral bands; the red-edge band lies between the red-valley and near-infrared (NIR) bands, is the fastest-changing band in the red-edge region, and its changes are influenced by the simultaneous reflectance of the red-valley and near-infrared bands that correlate with the degree of pest damage [50]. The presence of NIR bands in the constituent spectral bands of the 2NLI, GDVI, and GMNLI indices is due to the fact that the NIR bands are controlled by parameters such as plant moisture and internal structure that respond significantly to the level of stand damage. The RGB VI and RGB VI&TF sensitive feature sets included nine and thirteen features, respectively, of which eight features were selected in both feature sets. When RGB TF was added to RGB VI, slightly more features were selected by SPA, especially the four selected from RGB TF, indicating that the addition of texture features contributed necessary information to the identification of pest damage levels based on RGB features.
In the MS VI, b, g, r, RE, and NIR represent the spectral reflectance at the blue, green, red, red-edge, and near-infrared bands, respectively. In the RGB VI and RGB TF, R, G, and B represent the reflectance of the red, green, and blue channels, respectively. "i, j" represents the row number (i) and column number (j) in the matrix P; "N" represents the number of rows or columns of P; and "Pi,j" represents the normalised value of cell (i, j).
For the optimal CNN model (Table 3), the sensitive feature set of MS VI included ten indices, of which seven had red-edge bands and three had NIR bands among their constituent spectral bands. The sensitive feature sets of RGB VI and RGB VI&TF contained four and six features, respectively, three of which were selected in both feature sets. Among the selected features, the index GLA can handle the brightness of vegetation cover images, attenuating the interference of shadows between trees and thus reflecting changes in the vegetation canopy [51]. The indices GBRI, RBRI, and RGRI can describe and analyse the angular sensitivity of the vegetation indices, handle complex vegetation canopy structures, and are more sensitive to the rate of needle loss during the damage process [25,52]. The index GB can enhance the difference in spectral response between the blue and green channels and characterise the chlorophyll content of conifer needles in forest trees [53]. The dis feature reflects the degree of inhomogeneity between image elements and performs well in edge detection, helping to identify complex canopy shapes [54]. The texture feature mean has a good ability to distinguish pest-damaged areas [29].
In addition, to understand the influence of the input features on the model, the contribution of the sensitive features was calculated to reveal the importance of each variable in pest recognition. Since the main purpose of this study was to demonstrate the pest recognition potential of the RGB VI&TF model, the optimal RF and CNN models based on RGB VI&TF were used to calculate the contribution of each sensitive feature with the help of the Gini coefficient, as shown in Figure 5. In the optimal RF model, GLA had the largest contribution at 0.21, indicating that this feature provided the most decisive information for the model; it was followed by CIVE, GBRI, RGRI, and ExR. The contribution of RGB TF was relatively low compared to RGB VI, with mean, sm, ent, and dis values of 0.03, 0.004, 0.003, and 0.0008, respectively. In the optimal CNN model, the contributions of RGRI and GBRI were high, at 0.29 and 0.23, respectively, while among the other vegetation indices GB contributed 0.11 and RBRI 0.08. The contributions of the texture features mean and dis were 0.08 and 0.0009, respectively. The contribution of the mean among the texture features was notable; it is robust to outliers and can reflect global pest occurrence, making it a universal feature with potential for application in pest recognition research.
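For the random forest part, the Gini-based contributions can be read directly from a fitted model, as the minimal sketch below shows; feature_importances_ in scikit-learn is the mean decrease in Gini impurity, which is the same notion of contribution described here, although the study computed it in MATLAB.

```python
import pandas as pd

# rf is a fitted RandomForestClassifier (as in the earlier sketch);
# feature_names lists the sensitive RGB VI&TF features in column order
importances = pd.Series(rf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))   # e.g. GLA, CIVE, GBRI, RGRI, ExR, ...
```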
Overall Accuracy Evaluation of Pest Damage Recognition
Based on the sensitive features of RGB VI&TF extracted by the SPA algorithm, we constructed recognition models of Erannis jacobsoni Djak damage degree with the CNN and RF algorithms under different numbers of sample trees, verified their accuracy (Tables 4 and 5), and compared the recognition results with those obtained using the sensitive features of MS VI and RGB VI. As seen from the tables, among the RF models, the accuracy of MS VI-RF, RGB VI-RF, and RGB VI&TF-RF increased gradually as the number of sample trees grew from 140 until the OA and Kappa of all three models (MS VI-RF 440, RGB VI-RF 440, and RGB VI&TF-RF 440) peaked at 440 samples, which became the optimal models; beyond that point the accuracy began to decrease gradually. This indicates that the best performance for studies applying RF to recognise the damage level of Erannis jacobsoni Djak was achieved with a sample size of approximately 440. Among these optimal models, MS VI-RF 440 achieved the highest OA and Kappa values of 0.9091 and 0.8843, improvements of 0.0636 and 0.0751, respectively, over RGB VI-RF 440. This is because spectral reflectance responds more readily to subtle changes in covariates such as vegetation chlorophyll and leaf loss rate than R, G, and B, which are sensitive only to the canopy colour of affected larch. To improve the recognition effect of RGB VI, it was combined with RGB TF, which significantly improved model accuracy. Specifically, compared to RGB VI-RF 440, the OA and Kappa of RGB VI&TF-RF 440 improved by 0.0181 and 0.0203, respectively, and the gaps to the OA and Kappa of MS VI-RF 440 were reduced to 0.0455 and 0.0548, respectively.
For the CNN models, accuracy also improved as the number of sample trees increased from 140 to 840. The OA and Kappa coefficients of the three models reached their highest values at 740 samples, and these were set as the optimal models (MS VI-CNN 740, RGB VI-CNN 740, and RGB VI&TF-CNN 740). This suggests that, when recognising Erannis jacobsoni Djak damage levels with CNN, the training sample can be increased as much as possible to improve model accuracy, with a sample size of approximately 740 being most suitable. Among the three models, MS VI-CNN 740 achieved the highest accuracy, with OA and Kappa coefficients of 0.9135 and 0.8892, respectively, which were 0.0486 and 0.0586 higher than those of RGB VI-CNN 740. Combining the RGB VI and RGB TF features again optimised the model: the OA and Kappa coefficients of RGB VI&TF-CNN 740 improved by 0.0216 and 0.0259, respectively, compared to RGB VI-CNN 740, and the gaps to MS VI-CNN 740 were reduced to 0.027 and 0.0327. These results are consistent with most scholars' findings [55][56][57].
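The sample-size sweep described here can be scripted as a simple loop, sketched below; the list of sample sizes and the use of random subsampling are illustrative choices rather than the exact protocol of the study, and the CNN variant would follow the same pattern with a different model object.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
results = {}
for n in (140, 240, 340, 440, 540, 640, 740, 840):       # candidate numbers of sample trees
    idx = rng.choice(len(X), size=n, replace=False)       # X, y assumed to be numpy arrays
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], test_size=0.25,
                                          stratify=y[idx], random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)
    pred = model.predict(Xte)
    results[n] = (accuracy_score(yte, pred), cohen_kappa_score(yte, pred))
print(results)   # pick the sample size with the highest OA/Kappa as the optimal model
```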
Accuracy Evaluation of Different Damage Levels Recognition
To explore the discriminative effect of the models on each class in more detail, the confusion matrices of the optimal models were drawn by combining the actual measurements and the predictions (Figure 6). As seen from the figure, all the models showed excellent results in discriminating healthy stands, followed by the discrimination of severe damage, with UA being particularly prominent. In the RF model, considering RGB VI&TF rather than RGB VI improved the discrimination of healthy and severely damaged stands, especially of healthy stands, where both UA and PA reached a value of 1, indicating no commission or omission errors. In the CNN model, compared to RGB VI, RGB VI&TF improved the discrimination of mildly, moderately, and severely damaged stands, with UA improving by 0.0256, 0.0588, and 0.0244 and PA by 0.0257, 0.0488, and 0.0212, respectively, while its UA for healthy stands decreased because two trees were misclassified as healthy larch. These results show that the combined RGB VI and RGB TF variables have substantial correlations with the level of tree damage and considerable application value for Erannis jacobsoni Djak pest recognition.
Efficiency of SPA-Based Selection of Sensitive Features
In some previous plant pest studies, scholars modelled all the selected features to monitor severity or physicochemical parameter content [58]. However, not all features contribute positively to the analysis, and the information contained in different features often overlaps [59], which can affect the final judgement. In this paper, SPA was used to eliminate features with overlapping information and to select optimal features with little mutual redundancy for modelling, simplifying the data and reducing model complexity. The accuracy evaluation of the models shows that the sensitive feature variables selected by SPA can meet the needs of pest damage level identification.
To reveal the effect of SPA in feature screening more intuitively, models were built using all MS VI, RGB VI, and RGB VI&TF features based on 440 and 740 samples (Ent-RF 440 and Ent-CNN 740, respectively) and compared with the recognition performance of SPA's sensitive features; the results are shown in Figure 7. In both the RF and CNN categories, the SPA-based models did not lose accuracy compared with the models based on all the features and in fact improved, and in the models based on the RGB VI&TF features all the accuracy metrics improved. For example, in model RF 440, when RGB VI&TF-SPA was compared with RGB VI&TF-Ent, the OA, Kappa, UA, and PA improved by 0.063, 0.0416, 0.032, and 0.0446, respectively; in model CNN 740, when RGB VI&TF-SPA was compared with RGB VI&TF-Ent, the OA, Kappa, UA, and PA improved by 0.027, 0.0311, 0.0306, and 0.0272, respectively. The likely reason is that the information contained in the RGB VI&TF-Ent feature set had a high degree of redundancy, which affected the accuracy of the model.
Thus, the performance of the various models demonstrates that SPA can reduce the dimensionality of the data and the amount of superfluous information in the features; in our research, this step effectively extracted the features that were sensitive and meaningful with respect to the degree of pest damage and improved the models' stability and accuracy.
Differences in Recognition Accuracy for Different Damage Levels
The confusion matrices of RGB VI&TF-RF 440 and RGB VI&TF-CNN 740 (Figure 6) revealed that the models had a higher accuracy in recognising the healthy and severe levels and a slightly lower accuracy for the mild and moderate levels, which is especially evident from the perspective of UA. This is because the difference in canopy colour caused by abnormal chlorophyll and water contents in the needles is more obvious in healthy and severely damaged larch, which are green and grey-black, respectively; in these cases, the features of the red, green, and blue channels captured by RGB sensors are more prominent and easier to recognise. By contrast, the canopy colours of larch with mild and moderate damage are more similar to one another (between yellow and red), so the differences in the features captured by the RGB sensor are small, the RGB feature values of some mild- and moderate-level sample trees appear similar, and the prediction accuracy is lower. In addition, Erannis jacobsoni Djak usually lays its eggs beneath the humus layer and feeds starting from the lower part of the larch, climbing upwards. This can cause the upper (green) and lower (yellow) parts of a larch crown to differ in colour at the same time. As a result, a field survey may classify a tree as mildly damaged based on its average rate of leaf loss, whereas the UAV orthophoto captures only the green and healthy upper canopy and cannot penetrate deeper to obtain information on the lower canopy, leading to an incorrect classification. When the pest gradually feeds from the lower part of the tree to its upper part, the appearance of the upper canopy of some trees starts to change, and the lower dead needles may regrow (appearing green); such trees are assigned a moderate level in the field, while the colour information reaching the UAV sensor is grey, characteristic of severe damage, again causing recognition errors. This is consistent with the findings (UA, Health: 0.86, Mild: 0.02, Severe: 0.6, Dead: 0.98) of Megat et al. when monitoring the extent of damage to eucalyptus trees from pests and diseases [60].
In the field survey, some needles of affected trees had a green or yellow semi-glossy appearance. Such needles were counted as damaged needles when calculating the leaf loss rate and were therefore classified as severely damaged from the perspective of leaf loss, but they were identified as moderate or mild from the RGB colour features of the UAV camera, which explains why the PA of RGB VI&TF-CNN 740 for the severe damage level was relatively low, at 0.7872.
The Damage Level Recognition Potential of RGB VI&TF
The recognition of Erannis jacobsoni Djak damage using multispectral and RGB features revealed that the MS VI-based models consistently outperformed the RGB VI models, similar to the results of most pest and disease studies [61]. To address this, the pest was recognised by combining RGB VI with texture features derived from the RGB images (RGB TF) to create a new feature set, RGB VI&TF. The recognition performance of RGB VI&TF improved over RGB VI, and the model accuracy approached that of MS VI, which demonstrates the importance of TF in pest monitoring. Using RGB VI alone rests on the assumption that pixels are independent, meaning that spatial relationships between neighbouring pixels are not considered in preprocessing [62], which may introduce salt-and-pepper noise into the classified image and affect the results. TF is a common visual phenomenon, a local structure or arrangement rule that recurs in an image. It reflects the intrinsic characteristics of the surface of a feature, does not vary with RGB reflection brightness, can characterise the spatial patterns and details between forest pixels [54], suppresses the "same spectrum, different object" and "same object, different spectrum" phenomena between forest canopy and understory vegetation spectra, and is more robust to factors such as illumination and shadows, thus providing information in our study that RGB VI alone could not capture. RGB VI&TF contains not only RGB reflectance information but also image texture information, capturing features related to the level of pest damage from different directions and carrying more comprehensive damage information, so that model performance was significantly improved. This is consistent with the results of Liu et al., who combined RGB vegetation indices and texture features to estimate plant biomass and evaluate plant health status [63]. The combined use of RGB VI and RGB TF in this experiment showed excellent capability and provides a new low-cost, high-efficiency method for pest control research.
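To make the texture side concrete, the sketch below derives grey-level co-occurrence matrix (GLCM) statistics of the kind used as RGB TF here (contrast, dissimilarity, homogeneity, energy, correlation, plus mean and entropy) from a greyscale canopy patch; it uses scikit-image, whose functions are named greycomatrix/greycoprops in versions before 0.19, and is an illustration rather than the exact texture pipeline of this study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# patch: 2-D uint8 array (e.g. a grey-scaled crown segment from the RGB orthophoto)
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")}

P = glcm[:, :, 0, 0]                                   # normalised co-occurrence probabilities
i = np.arange(P.shape[0])
features["mean"] = float((i[:, None] * P).sum())       # GLCM mean
features["entropy"] = float(-(P[P > 0] * np.log(P[P > 0])).sum())
print(features)
```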
In addition, the contribution ranking in Figure 5 shows that the main information-providing variables are the RGB VI rather than the RGB TF, so we hypothesised that RGB TF can provide supplementary information in pest monitoring studies but cannot be applied alone. To test this hypothesis, only RGB TF was used to discriminate pest damage levels. The results showed that the discrimination accuracy of RGB TF-RF 440 and RGB TF-CNN 740 was lower than that of the other models, with the best OA and Kappa failing to reach 0.55, making it difficult for such a model to meet the experimental requirements. This suggests that RGB TF can only participate in the pest recognition model as an auxiliary variable in combination with RGB VI, in agreement with the results of Zhou et al. in a vegetation recognition study [64].
Model Application
In this paper, traditional machine learning (RF) and deep learning (CNN) were used as the model construction algorithms. When using all the samples for modelling, CNN performed better than RF, a finding consistent with our previous research and with Kumar et al.'s findings on using deep learning to diagnose early and late blight in plants [38,65]. In addition, we found that the sample size influenced model construction, so we tested its effect and obtained an optimal model by varying the sample size. The accuracy of the RF model followed an inverted U-shaped trend as the sample size increased, with the highest accuracy occurring at 440 samples, which was therefore the sample size of the optimal RF model. The accuracy of the CNN model continued to increase with sample size, and the optimal model was obtained at 740 samples. A likely reason is that the data inevitably contained feature noise caused by shadows or background and label noise caused by inconsistencies between the upper and lower canopies of affected trees; as the sample size grew, the number of outliers increased, which affected the RF model and caused a decrease in its accuracy and performance [66]. CNN, by contrast, exploits local correlations during convolution and has some robustness to outliers [67], so its stability did not fluctuate significantly and its accuracy improved as the sample size increased.
Of course, the CNN model has sample size requirements, meaning that more samples are needed to train a stable model [68]; deep learning algorithms therefore outperform traditional machine learning algorithms when the sample size is sufficient [69,70], and CNN can be used as the first choice for identifying the damage level of Erannis jacobsoni Djak. However, field sampling is difficult, and a sufficient number of samples may not be available in every trial, so the RF algorithm can be used instead when samples are limited and can still effectively recognise the presence of the pest.
Limitations and Prospects
The experiments showed that the accuracy of the MS VI-based models could meet the requirements for identifying the damage level of Erannis jacobsoni Djak [71], as expected. However, the high acquisition cost of multispectral data limits the progress of some important studies. In contrast, RGB images are cheaper to acquire than multispectral images, and our results show that the combined RGB VI&TF features derived from RGB images perform close to MS VI, making low-cost, high-precision pest identification possible.
In this paper, drawing on previous studies, field survey parameters and UAV images of the test area acquired at the end of June were used as the basic data sources for Erannis jacobsoni Djak pest monitoring. However, the timing of canopy colour change after an infestation varies with insect population density, tree genetics, host vigour, and environmental conditions [72,73]. Long-term field observations are therefore needed to collect drone images and ground data at appropriate times for the accurate identification of the pest [48].
Since the focus of this paper was to investigate the potential of RGB images in pest identification, only 22 widely used RGB vegetation indices and the RF and CNN algorithms [37] were selected, which yielded satisfactory models. Incorporating additional informative RGB vegetation indices and state-of-the-art algorithms would very likely further improve the accuracy and generalisation ability of the model, a direction that will be explored in our next experiments.
Conclusions
In this study, we used UAV images of an Erannis jacobsoni Djak outbreak area as the data source, constructed a pest damage recognition model from the combination of RGB VI and RGB TF, and compared the experimental results with models using MS VI and RGB VI to explore the potential of RGB VI&TF. The study confirmed that features derived from low-cost RGB images can essentially replace multispectral data for identifying Erannis jacobsoni Djak damage levels. The experimental results were encouraging: the accuracy of the pest damage recognition model based on RGB VI alone was lower than that of MS VI, but after combining it with RGB TF, the accuracies of RGB VI&TF-RF and RGB VI&TF-CNN were significantly improved compared with those of RGB VI-RF and RGB VI-CNN and were close to those of MS VI-RF and MS VI-CNN. The confusion matrices of the RGB VI&TF models further revealed that the models were very good at recognising healthy and severely infested trees, while the detection accuracy for mildly and moderately infested trees was lower, a consequence of the pest's habits and the orthophoto imaging principle of the UAV. In addition, SPA eliminated redundant and overlapping information in the data to provide effective input variables for the model, and the accuracy and suitability of the models constructed with SPA-selected features were improved compared with models using all the features.
Using RGB VI&TF can achieve the identification of pest damage levels at a small scale, and the results of this model can meet the needs of the relevant forestry departments. This study also provides a reference example and a theoretical basis for subsequent low-cost, large-area pest monitoring and control with high spatial and temporal resolution and for forest ecosystem protection.
Figure 1. Location of the study area and sample trees.
Figure 3. The distribution of tree vegetation indices at different damage levels.
Figure 5. The importance of sensitive RGB features for the optimal RGB VI&TF models.
Figure 6. Confusion matrices of different classification models.
Figure 7. Modelling accuracy of entire features and sensitive features to different levels of trees.
Table 1. Damage level classification criteria.
Table 2. Input sensitive features of optimal RF models.
Table 3. Input sensitive features of optimal CNN models.
Table 4. Accuracy evaluation of RF model for trees with different damage levels. Bold digits indicate the highest model accuracy in each feature set.
Table 5. Accuracy evaluation of CNN model for trees with different damage levels.
Bold digits indicate the highest model accuracy in each feature set. | 2024-03-28T06:19:07.169Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "098c580034c3b4f60d7dabd41fa4d103f7d5ed44",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4450/15/3/172/pdf?version=1709528088",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "278c074ff406202ffebed0aa0a5f22d42a5a2201",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227162750 | pes2o/s2orc | v3-fos-license | Identification of novel prognostic markers of survival time in high-risk neuroblastoma using gene expression profiles
Neuroblastoma is the most common extracranial solid tumor in childhood. Patients in the high-risk group often have poor outcomes with low survival rates despite several treatment options. This study aimed to identify a genetic signature from gene expression profiles that can serve as prognostic indicators of survival time in patients with high-risk neuroblastoma, and that could be potential therapeutic targets. RNA-seq count data were downloaded from the UCSC Xena browser and samples were grouped into Short Survival (SS) and Long Survival (LS) groups. Differential gene expression (DGE) analysis, enrichment analyses, regulatory network analysis and machine learning (ML) prediction of survival group were performed. Forty differentially expressed genes (DEGs) were identified, including genes involved in molecular function activities essential for tumor proliferation. DEGs used as features for prediction of survival groups included EVX2, NHLH2, PRSS12, POU6F2, HOXD10, MAPK15, RTL1, LGR5, CYP17A1, OR10AB1P, MYH14, LRRTM3, GRIN3A, HS3ST5, CRYAB and NXPH3. An accuracy score of 82% was obtained by the ML classification models. SMIM28 was revealed to possibly have a role in tumor proliferation and aggressiveness. Our results indicate that these DEGs can serve as prognostic indicators of survival in high-risk neuroblastoma patients and will assist clinicians in making better therapeutic and patient management decisions.
INTRODUCTION
Neuroblastoma is the most common extracranial solid tumor in childhood accounting for approximately 15% of pediatric cancer death [1][2][3]. It develops anywhere along the sympathetic nervous system with 60% of the tumors occurring in the abdomen, commonly in the adrenal gland [4,5].
Outcomes ranging from spontaneous regression to relentless progression despite extensive therapies indicate the heterogeneity of neuroblastoma [6]. The Children's Oncology Group (COG) classifies neuroblastoma patients into low-, intermediate- and high-risk groups. Patients classified in low-risk groups have good outcomes, in contrast to high-risk groups, who have poor outcomes despite extensive therapies [4], with a disproportionate number dying or suffering profound treatment-related morbidities [7,8]. Tumors in high-risk neuroblastoma patients are often metastatic, resulting in survival rates of less than 50% [1].
The objective of our study is to identify a genetic signature from gene expression data that can serve as prognostic indicators of survival time in high-risk neuroblastoma patients and that could be therapeutic targets in that patient category.
RESULTS
Querying the Xena TARGET dataset returned 20 and 12 SS and LS samples, respectively. Based on the gene expression levels in these samples, the edgeR filterByExpr function removed 35,873 low expressed genes and kept 24,610 genes for downstream analysis. The DGE analysis with DESeq2 identified 40 DEGs between the SS and LS groups, of which 21 genes were upregulated and 19 genes were downregulated. Table 1 shows information about the 40 DEGs.
The Gene Ontology (GO) Molecular Function enrichment analysis revealed that upregulated genes were mainly enriched in MAP kinase activity, retinol binding and RNA polymerase II activating transcription factor binding, as well as in other activities shown in Figure 1A. No statistically significant results (adjusted p-value < 0.05) were obtained for the downregulated genes or for the other GO categories, Biological Process and Cellular Component. In addition, the Disease Ontology enrichment analysis associated upregulated and downregulated genes with several disorders: intellectual disability, cardiac dysfunction, bone development disease, impaired fertility and pulmonary dysfunction caused by diaphragm-associated abnormalities (Figure 1B and 1C).
Reconstruction of gene regulatory networks with the GENIE3 algorithm for the SS and LS samples deduced 1,966,606 and 1,967,020 weighted interactions involving the DEGs, respectively. Applying a weight threshold value of 0.00251 resulted in 1018 and 650 DEG interactions for the SS and LS groups, respectively. The visualization of the 1018 DEG interactions using Cytoscape enabled the detection of 4 essential regulatory networks (Figure 2). The first network (Figure 2A) involves SMIM28, LGR5, PRSS12, EVX2, NHLH2 and HOXD10. All of these DEGs are upregulated, and the last three genes are transcription factors. The second network (Figure 2B) interconnects MAPK15, Lnc-ZNF814-1, EDIL3, NBAS and CYP17A1. The first two genes are upregulated and the last three genes are downregulated. Most of the DEGs in the third and fourth networks (Figure 2C and 2D) are downregulated, with the exception of MEG9 and STRA6, which are upregulated. Interestingly, the number of these interactions involving the DEGs differed substantially between the SS and LS groups.

Querying the external GSE49711 dataset using the fields in Table 3 yielded 43 SS and 19 LS samples, respectively. DEGs that do not have an associated NCBI GeneID were not found in this dataset, particularly those identified as long non-coding RNAs (lncRNAs). Only 25 of the 40 DEGs have expression data in this dataset. Based on feature selection with scikit-learn and several classification tests, the following 16 features were selected for the machine learning construction of the training and test sets: EVX2, NHLH2, PRSS12, POU6F2, HOXD10, MAPK15, RTL1, LGR5, CYP17A1, OR10AB1P, MYH14, LRRTM3, GRIN3A, HS3ST5, CRYAB and NXPH3. The evaluation of the Support Vector Machines (SVM) and Artificial Neural Networks (ANN) models using 5-fold cross-validation resulted in an accuracy of 78% and 87% for SVM and ANN, respectively. When the ML models were tested on the GSE49711 test set, ANN again achieved better results, correctly classifying 82% of samples as SS or LS, compared to an accuracy of 79% for SVM (Table 2).
DISCUSSION
We aimed to identify genes that are differentially expressed between high-risk SS and LS patients and that could serve as potential prognostic indicators and/or therapeutic targets. The results of the DGE analysis between the two groups showed the upregulation and downregulation of genes associated with neuroblastoma and other cancers.
Upregulated genes
The upregulated DEGs included genes whose overexpression has been correlated with poor survival in neuroblastoma and other cancers. Expression levels of NHLH2 were found to be higher in unfavourable neuroblastomas and were significantly associated with a poor prognosis [22]. Additional roles of NHLH2 in obesity and fertility have also been uncovered [23,24]. The upregulation of PRSS12 in this study is similar to the results of Hiyama et al. [25], who reported the overexpression of PRSS12 in neuroblastoma tumors with high telomerase activity, correlating with unfavourable tumors. Serine proteases are often altered and significantly upregulated in cancer, as malignant cells need proteolytic activities to enable their growth, survival and expansion. Our results support that upregulation of PRSS12 is indicative of poor survival in neuroblastoma. HOXD10 is a transcription factor whose expression is altered in many cancers; its high expression gives cancer cells proliferative and migratory abilities [26]. Elevated expression of HOXD genes including HOXD10 was reported to be associated with unfavourable prognosis and poor outcome in neuroblastoma [27], which supports our results indicating a more aggressive disease in the SS group.
LGR5 is a stem cell marker which is highly expressed and associated with an aggressive phenotype in neuroblastoma [28,29]. It has also been associated with pancreatic ductal adenocarcinoma [30] and colorectal cancer [31].
LGR5 potentially contributes to stem cell maintenance and self-renewal and is indicative of poor survival in high-risk neuroblastoma. SMIM28 is a less studied protein whose upregulation is indicative of poor survival in this study. Similar to our results, Jiang et al. [32] reported the upregulation of SMIM28 in prostate cancer. EVX2 is a homeobox transcription factor essential for vertebrate spinal cord neuronal specification [33]. POU6F2 belongs to the POU class homeobox family, whose members are transcriptional regulators, and is involved in hereditary predisposition to Wilms tumor, a pediatric malignancy of the kidney [34]. Functional studies are required to elucidate the roles of SMIM28, EVX2 and POU6F2 in high-risk neuroblastoma.
MAPK15, a protein kinase involved in many cellular activities including cell proliferation, was upregulated in this study. The highest levels of MAPK15 were found in aggressive embryonal carcinomas, where it sustains cell cycle progression by limiting p53 activation and preventing p53-dependent mechanisms that arrest the cell cycle [35]. Neuroblastoma is a malignancy of embryonal origin, and upregulation of MAPK15 would be expected to facilitate tumor progression and to be indicative of aggressive disease and poor survival, as seen in the SS group. RTL1 is a paternally expressed imprinted gene highly expressed in the fetus, placenta and brain. It has been reported to be a driver of hepatocellular carcinoma [36]. Being a protease, it possibly promotes tumor invasion and metastasis in neuroblastoma tumors. Higher expression of RTL1 is thus suggestive of poor prognosis in neuroblastoma. Also upregulated is STRA6, a plasma membrane protein that transports retinol and is involved in signalling mediated by JAK2, STAT3 and STAT5 [37]. Its upregulation indicates poor survival in our study, possibly through its maintenance of cancer stem cells and promotion of tumor formation, as reported in colorectal cancer [37,38].
Ten lncRNAs were upregulated in this study. Three of these (lnc-SPG21-45, lnc-NSUN6-1 and lnc-KLHL28-1) are antisense to the ANKDD1A, CACNB2 and C14orf28 genes, respectively, which are associated with other cancers and diseases, possibly by regulating their expression. C14orf28 has been observed to be overexpressed in colorectal cancer cells, promoting proliferation, migration and invasion [39]. ANKDD1A has been described as a functional tumor suppressor with germline variants predicting poor patient outcomes in low grade glioma [40], and is frequently methylated in glioblastoma multiforme [41] and in clinically nonfunctioning pituitary adenomas [42]. CACNB2 is a calcium channel protein linked to diabetic retinopathy [43], bipolar disorder [44], hypertension [45] and autism spectrum disorders [46]. The role of calcium signalling in cancer has been reviewed by [47][48][49]. MEG9 is located in an imprinted non-coding RNA genomic region, DLK1-DIO3 [50,51]. LINC01410 is a lncRNA highly expressed in pancreatic cancer tissues and cell lines [52]. High expression in cholangiocarcinoma and gastric cancer patients has been associated with poor prognosis and survival [53,54]. These lncRNAs may facilitate tumor progression, proliferation and invasion, which might have impacted the survival of patients in the short survival group. It is well established that metastasis is the primary cause of cancer mortality. Functional studies are required to evaluate the roles of these lncRNAs in neuroblastoma.
Downregulated genes
The downregulated genes include AMIGO2, LRRTM3, GRIN3A, MYH14, EDIL3 and FNDC9, which are involved in neural function and development and in extracellular matrix (ECM) organization. AMIGO2 is a transmembrane molecule expressed in neuronal tissues and participates in their formation [55]. EDIL3 is an inducer of the epithelial-mesenchymal transition that promotes angiogenesis and invasion in hepatocellular carcinoma [56]. FNDC9, which exhibits biased expression in the brain, is an ECM protein involved in tumorigenesis in different cancers [57]. MYH14 is a myosin, an actin-dependent motor protein that plays a role in neuritogenesis. Members of the myosin superfamily are known to enhance or suppress tumor progression [58]; MYH14 could be suppressing tumor progression in high-risk neuroblastoma. The downregulation of these ECM-associated genes could be promoting the invasion and metastasis of neuroblastoma tumors. OR10AB1P belongs to the olfactory receptor family of genes. Olfactory receptors are expressed in various human tissues and are involved in different cellular processes such as migration and proliferation, and some are biomarkers for prostate, lung and small intestine carcinoma tissues [59]. Decreased expression of CRYAB indicated its tumor suppressor function in bladder cancer [60]; it may thus also function as a tumor suppressor in neuroblastoma. Ubiquitin C (UBC) is a polyubiquitin precursor, and ubiquitination has been associated with many cellular processes that play roles in tumorigenesis.
CYP17A1 is a key enzyme in the steroidogenic pathway with restricted expression in the adrenal gland. Neuroblastoma tumors commonly occur in the adrenal medulla, so the downregulation of CYP17A1 may be an indicator of poor prognosis in neuroblastoma. LRRTM3 has high expression in the brain and belongs to a group of proteins involved in nervous system development. How it contributes to neuroblastoma remains to be investigated, but it is currently a candidate gene for Alzheimer's disease [61,62], a neurological disorder. NBAS is thought to be involved in golgi-to-ER transport and is typically amplified in MYCN-amplified neuroblastoma tumors [63]. GRIN3A is a glutamate receptor that promotes nerve outgrowth [64]. The downregulation of GRIN3A may suggest a higher level of disease because glutamate is a major excitatory neurotransmitter in the CNS that is involved in many neuronal processes. Functional studies are required to investigate how these genes contribute to poor survival in neuroblastoma.
Gene and disease ontology enrichment analyses
The molecular function activities MAP kinase activity, retinol binding and RNA polymerase II activating transcription factor binding, enriched by the upregulated genes MAPK15, STRA6 and NHLH2, respectively, are activities that promote tumor cell proliferation. Deregulation of MAPK signalling has been associated with cancer development, progression and cell proliferation [65,66]. Retinol binding through STRA6 upregulation activates a signalling cascade that plays a role in cancer initiation, maintenance and growth [38]. Furthermore, increased global transcription activity (activation of RNA polymerase II) indicates rapid proliferation of cancer cells [67]. These activities again demonstrate the aggressiveness of the neuroblastoma tumors in SS patients compared to LS patients.
The diseases enriched by the upregulated and downregulated genes, particularly the disorders causing heart failure (cardiomyopathy and congenital heart disease) and respiratory illness (diaphragmatic dysfunction), may have a negative impact on survival [68,69] and could be the cause of death in the SS neuroblastoma patients. There is no indication in the clinical information accompanying the gene expression count dataset of whether the patients suffered from additional disorders besides neuroblastoma. However, high-risk neuroblastoma survivors treated with intensive multimodality therapy are reported to be at risk for a broad variety of treatment-related late effects, including cardiac dysfunction, bone development disease, pulmonary dysfunction and impaired fertility [70]. These treatment-related morbidities were enriched by the upregulated and downregulated DEGs (Figure 1B and 1C), suggesting that the cause of death in patients with SS neuroblastoma may be due to treatment complications.
Regulatory network analysis
After applying a weight threshold of 0.00251 to the GENIE3 output, the numbers of interactions involving the DEGs in the SS and LS groups were 1018 and 650, respectively. This substantial difference in the number of interactions between the two groups indicates a higher cellular/tumor activity in the SS group, which could be a sign of tumor aggressiveness in patients with SS neuroblastoma. In addition, the networks in Figure 2 highlight the importance of the genes SMIM28, MAPK15 and UBC as origins of most of the interactions. The role of MAPK15 and UBC in tumorigenesis has been reported in many of the previously discussed studies, while SMIM28 is a less studied gene with an unclear role in cancer. However, the final target genes of SMIM28 in the Figure 2A network, LGR5 and HOXD10, can shed light on its role: as reported previously, both LGR5 and HOXD10 are associated with cancer cell proliferation and tumor aggressiveness in neuroblastoma. Thus, it is likely that SMIM28 also plays a role in tumor proliferation and aggressiveness.
Machine learning
Although not all the DEGs were included in the training and testing of the machine learning models, the prediction results obtained were good. The ANN model obtained the highest accuracy of 82% for the classification of the external neuroblastoma samples (GSE49711) into short and long survival classes. This high classification accuracy suggests that the DEG expression profiles are preserved across different high-risk neuroblastoma tumors, although neuroblastoma is known to be heterogeneous. Therefore, the DEG list can serve as a genetic signature of prognostic indicators of survival time in high-risk neuroblastoma patients, and its genes can be targets for drug discovery analyses. Relatively similar to our study, other studies have proposed genetic signatures for prognostic stratification of patients with neuroblastoma. Liu et al. [71] used an unsupervised biclustering machine learning technique to find high-risk neuroblastoma subtypes and proposed a signature of 238 neuroblastoma-specific immune genes to identify ultra high-risk and high-risk neuroblastoma subtypes. Russo et al. [72] applied K-means clustering to a 27-gene kinome signature to identify ultra high-risk subtypes of high-risk neuroblastoma. Formicola et al. [73] used Cox regression and Kaplan-Meier analysis to propose an 18-gene expression-based risk scoring system to predict overall survival of patients with stage 4 neuroblastoma. We demonstrated the use of a shorter signature, a 16-gene expression classifier based on survival time, to stratify high-risk neuroblastomas into SS (ultra high-risk) and LS subtypes. The smaller number of genes in our classifier should make it less costly and easier to implement.
MATERIALS AND METHODS
The workflow describing the steps and methods undertaken in this study is illustrated in Figure 3. It includes 5 essential steps; dataset retrieval, differential gene expression, disease/gene ontology enrichment, gene regulatory network inference and machine learning.
Datasets
The Therapeutically Applicable Research to Generate Effective Treatments (TARGET) initiative employed comprehensive molecular characterization of hard-to-treat childhood cancers, including neuroblastoma. TARGET data are accessible via the TARGET data matrix as well as via the Xena browser, a web-based visualization and exploration tool for multi-omic data from large public repositories and private datasets [74]. The TARGET neuroblastoma dataset in the Xena database [74,75] is composed of high-risk neuroblastoma samples with available clinical information. Gene expression RNA-seq read counts of the TARGET neuroblastoma dataset (dataset ID: TARGET-NBL.htseq_counts.tsv) were obtained from the GDC hub in the Xena browser using the xenaPython package. The fields used in querying the dataset are described in Table 3. The dataset was composed of 151 samples in total. Querying the dataset with the above fields returned 32 neuroblastoma samples: 20 with an overall survival time of less than 730 days (2 years) and vital status "dead" were considered short survival (SS), while 12 samples with an overall survival time greater than 2555 days (7 years) and vital status "alive" were considered long survival (LS).
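A minimal sketch of this grouping step is shown below using pandas; the clinical column names are placeholders, since the actual field names in the Xena/TARGET phenotype table may differ.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only
clin = pd.read_csv("TARGET_NBL_phenotype.tsv", sep="\t")

ss = clin[(clin["overall_survival_days"] < 730) & (clin["vital_status"] == "dead")]
ls = clin[(clin["overall_survival_days"] > 2555) & (clin["vital_status"] == "alive")]
print(len(ss), "short-survival samples;", len(ls), "long-survival samples")
```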
Differential gene expression analysis
Normalized expression counts were converted to raw counts and filtered to remove lowly expressed genes using the edgeR filterByExpr function in R [76]. We then performed a differential gene expression (DGE) analysis between the short and long survival groups using the DESeq2 package in R [77]. DESeq2 uses shrinkage estimators for dispersion and fold change for its comparative differential gene expression estimation [77]. Differentially expressed genes (DEGs) were selected using the criterion of adjusted p-value < 0.05. Gene Ontology (GO) enrichment analysis was carried out to functionally annotate the DEGs using clusterProfiler [78] and visualized using enrichplot. The DOSE R library [79] was used to detect the diseases enriched by the upregulated and downregulated DEGs.
Machine learning
Scikit-learn [80] and LibSVM [81] libraries were used for machine learning (ML) model creation and classification. Features (genes) not present in the test dataset (GSE49711) were not used for ML prediction. The top upregulated and downregulated genes used as features for ML prediction of patient survival were EVX2, NHLH2, PRSS12, POU6F2, HOXD10, MAPK15, RTL1, LGR5, CYP17A1, OR10AB1P, MYH14, LRRTM3, GRIN3A, HS3ST5, CRYAB and NXPH3. The features were extracted from the log-normalized count data. For LibSVM, the feature values of the training and test sets were scaled using a built-in Python script; for ANN, they were scaled with the scikit-learn MinMaxScaler function. The algorithms used for training and evaluating the models were Support Vector Machines (SVM) and Artificial Neural Networks (ANN). 5-fold cross-validation was used to determine the best parameters of the SVM and ANN models, which were then applied to the test samples. Both models, created with LibSVM and ANN, were applied to predict the classification of samples in an external dataset (GSE49711) with the same sample characteristics (overall survival < 730 days with vital status "dead" for SS samples, and overall survival > 2555 days with vital status "alive" for LS samples). The evaluation metrics for the LibSVM and ANN models were precision, recall and accuracy.
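A minimal scikit-learn sketch of this training and evaluation scheme is given below; the hyperparameter grids and hidden-layer sizes are placeholders rather than the settings actually tuned in the study, and X_train/X_test are assumed to hold the log-normalized expression values of the 16 selected genes with labels "SS"/"LS".

```python
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

scaler = MinMaxScaler().fit(X_train)                     # scale features to [0, 1]
X_tr, X_te = scaler.transform(X_train), scaler.transform(X_test)

# 5-fold cross-validation to choose hyperparameters (grids are placeholders)
svm = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]}, cv=5)
svm.fit(X_tr, y_train)
ann = GridSearchCV(MLPClassifier(max_iter=2000),
                   {"hidden_layer_sizes": [(16,), (32, 16)], "alpha": [1e-4, 1e-2]},
                   cv=5)
ann.fit(X_tr, y_train)

# Evaluate both tuned models on the external test set
for name, model in [("SVM", svm), ("ANN", ann)]:
    pred = model.predict(X_te)
    print(name,
          accuracy_score(y_test, pred),
          precision_score(y_test, pred, pos_label="SS"),
          recall_score(y_test, pred, pos_label="SS"))
```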
Regulatory networks
The GENIE3 package [82] in R was used for gene regulatory network inference analysis. The GENIE3 algorithm uses a Random Forest or Extra Randomized Trees approach to infer gene regulatory networks from gene expression data [82]. It outputs a ranked list of pairwise regulatory connections, from the most to the least confident. The library was run on the gene expression counts from the SS and LS samples separately. For the output analysis, only connections involving the DEGs were considered. In addition, a weighting threshold of 0.00251 was applied to reduce the large number of connections and to focus on high-confidence regulatory connections. Cytoscape [83] was used for the visualization of gene regulatory networks.
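The GENIE3 inference itself was run in R, but the downstream filtering of the ranked link list can be illustrated with a short pandas sketch; the file names and column names are assumptions.

```python
import pandas as pd

links = pd.read_csv("genie3_links_SS.csv")            # assumed columns: regulator, target, weight
degs = set(pd.read_csv("degs.csv")["gene"])           # the 40 differentially expressed genes

filtered = links[(links["weight"] >= 0.00251)
                 & (links["regulator"].isin(degs) | links["target"].isin(degs))]
filtered = filtered.sort_values("weight", ascending=False)

# Export the high-confidence, DEG-involving connections, e.g. for visualization in Cytoscape.
filtered.to_csv("genie3_links_SS_filtered.csv", index=False)
```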
CONCLUSIONS
DGE analysis is a powerful technique for identifying DEGs in a studied condition. In this study, using DESeq2 we identified 40 DEGs between SS and LS neuroblastoma samples. Many of the DEGs were found to be related to different cancers (including neuroblastoma), strengthening the possibility that they are associated with neuroblastoma. The ML models based on 16 DEGs were capable of stratifying high-risk neuroblastoma samples on the basis of survival time, demonstrating their potential as a genetic signature or prognostic indicator of survival in high-risk neuroblastoma patients. This study furthers our understanding of the molecular mechanisms of neuroblastoma in high-risk patients. We identified the SMIM28 gene as critical for tumor proliferation, making it a possible gene therapy target. Nevertheless, additional studies are required to elucidate the role of SMIM28 in the pathogenesis of neuroblastoma. Finally, prognostic stratification of high-risk neuroblastoma patients will help clinicians make better therapeutic and patient management decisions.
Author contributions
AC and HB funded the project. JG, AG and HB designed the study. AG and HB implemented the analysis. AG, AF and HB wrote the manuscript. All authors read and approved the final manuscript.
ACKNOWLEDGMENTS
We want to acknowledge the support received from the South African Medical Research Council and National Research Foundation of South Africa.
CONFLICTS OF INTEREST
Authors have no conflicts of interest to declare. | 2020-11-19T09:17:26.677Z | 2020-11-17T00:00:00.000 | {
"year": 2020,
"sha1": "564c83781f90f062409792f7f07745c453b86a3c",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/27808/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "159889a47944fe30d8575217348cd10c6853dbee",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211191137 | pes2o/s2orc | v3-fos-license | Phage liquid crystalline droplets form occlusive sheaths that encapsulate and protect infectious rod-shaped bacteria
Significance In this study, we investigate how phage molecules secreted by pathogenic Pseudomonas aeruginosa bacteria drive antibiotic tolerance by forming phase-separated liquid crystalline compartments around bacterial cells. This study spans across spatial scales, combining atomic structure determination using electron cryomicroscopy with cellular electron cryotomography, optical microscopy, and biochemical reconstitution. We show that encapsulation of rod-shaped bacteria by spindle-shaped liquid crystalline droplets made of phage molecules is a process profoundly influenced by shape and size complementarity.
This PDF file includes:
Supplementary Materials and Methods
Figures S1 to S10
Table S1
Legends for Movies S1 to S6
SI References

Other supplementary materials for this manuscript include the following: Movies S1 to S6

The Pf4 capsid was fitted using real-space refinement (13) in PHENIX (14) using non-crystallographic symmetry (NCS) restraints. Density within the inner cavity of the Pf4 filaments in the map of Pf4 with ssDNA could not be explained by any residue of the CoaB protein and was therefore assigned to correspond to Pf4 genomic DNA. While the ribose phosphate backbone corresponding to a linear single-stranded genome was clearly visible, the bases of the nucleotides were smeared due to averaging over the Pf4 genome. Therefore, we built a single-stranded poly-adenine chain into a B-factor-dampened (50 Å²) cryo-EM density using Coot and refined the structure by iteratively rebuilding the model and real-space refinement in PHENIX. Comprehensive model validation of the Pf4 with ssDNA and Pf4 without ssDNA structures was performed in PHENIX (Table S1). Figures containing protein structures or cryo-EM data were prepared using UCSF Chimera.
Fluorescent labelling of Pf4 phage
Purified Pf4 phage was dialysed into 10 mM sodium carbonate buffer pH 9.2 using a 10 kDa MWCO snakeskin dialysis membrane (ThermoFisher). One ml of Pf4 phage (5 mg/ml) was incubated with 100 µg A488 fluorescent dye (ThermoFisher) for 1 hour at room temperature (RT) with end-over-end agitation. The sample was then passed over two PD10 desalting columns (GE Healthcare) to separate A488-labelled Pf4 phage from free A488 dye. Pf4 ghost filaments were labelled with A568 following the same protocol.
Fluorescence Recovery after Photobleaching (FRAP) of Pf4 liquid crystalline droplets
To carry out FRAP experiments, 3 mg/ml Pf4 and 10 mg/ml hyaluronan were mixed in a 1:1 (v/v) ratio and placed in a capillary on a glass slide. The Zeiss ZEN software bleaching mode was used on the Zeiss LSM Exciter confocal microscope. A custom circular region within the field of view was selected. A laser power of 50% was used for the photobleaching step, set to occur every 20 frames beginning at frame 20, and a laser power of 1% was used for imaging of all frames between these steps. The Pf4 liquid crystalline droplets selected for FRAP experiments were in the bulk of the sample and imaged 2 hours after their formation. Frames were recorded with a time interval of 1.5 s.
Pf4 ghost production
The DNA genome of Pf4 was extracted using a modified version of a previously described protocol (1). Briefly, purified Pf4 phage (5 mg/ml) was incubated with 10 M LiCl in a 1:1 (v/v) ratio for 2 days at 46°C. The sample was subsequently diluted 1 in 10 with PBS and incubated with 10 µg/ml DNaseI (Sigma) and 1 U/ml benzonase (Sigma) for 2 hours at 37°C. The sample was then centrifuged (100,000 g, 1 hour, 4°C) and the pellet resuspended in PBS. Phage protein concentration was determined using a Nanodrop and infectivity (pfu/ml) was measured using a plaque assay as described above.
Antibiotic protection assay
An overnight culture of PAO1 ΔPA0728 was grown in LB media at 37°C, diluted 1 in 100 into LB medium and grown at 37°C to an OD600 of 0.5. One hundred µl of the resulting culture was added to a 96-well plate and grown for a further 30 minutes at 37°C. One hundred µl of the indicated Pf4 and/or polymer components were added to the culture such that the final concentrations of components were: sodium alginate (4 mg/ml), Pf4 (1 mg/ml) and Pf4 ghost (1 mg/ml). Additionally, 20 µg tobramycin (Sigma) (Fig. 4A) or 20 µg gentamicin (Sigma) (Fig. 4B) or 10 µg colistin (Sigma) (Fig. 4C) was added as indicated. Cultures were grown for a further 3 hours before a 10 µl sample for each assay condition was taken, serially diluted 10-fold and dilutions plated onto LB agar plates. Plates were incubated overnight at 37°C and colony forming units (cfu) enumerated. For antibiotic titration experiments, the tobramycin concentration was varied as indicated while other components of the assay remained constant as described (Fig. S7A). For Pf4 phage titration experiments, the Pf4 concentration was varied as indicated in the presence of 20 µg tobramycin while other components of the assay remained constant as described above (Fig. S7B). Mean cfu/ml with standard deviation were calculated and plotted using Prism GraphPad software.
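For illustration, colony counts from the serial dilutions are converted back to cfu/ml of the undiluted culture as in the short sketch below; the example numbers are hypothetical.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Convert a colony count on one plate back to cfu/ml of the undiluted culture."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical example: 42 colonies on the 10^5-fold dilution plate, 10 µl (0.01 ml) plated.
print(cfu_per_ml(colonies=42, dilution_factor=1e5, plated_volume_ml=0.01))  # 4.2e8 cfu/ml
```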
Fluorescence microscopy
Pf4 liquid crystalline droplets: 2 mg/ml A488-labelled Pf4 phage or A568-labelled Pf4 ghost was mixed in a 1:1 (v/v) ratio with sodium alginate (10 mg/ml) or hyaluronan (10 mg/ml) and incubated at room temperature for the indicated time period. Ten µl of the resulting sample was pipetted onto 0.7% (w/v) agar pads constructed using 15 x 16 mm Gene Frames (ThermoFisher) following the manufacturer's protocol, with a coverslip placed on top. The slide was imaged with a 100x objective using a Zeiss Axioimager M2 (Carl Zeiss) in bright field and fluorescence mode. Quantification of Pf4 liquid crystalline droplet major axis length was performed by thresholding, creating a bounding rectangle and measuring lengths using Fiji (15). Graphs were plotted using Prism GraphPad software. Images were background subtracted and figure panels prepared using Fiji.
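As a rough Python equivalent of the Fiji-based droplet measurement (threshold, bounding region, length), the following skimage sketch measures major-axis lengths; the file name, threshold choice and size filter are assumptions.

```python
import numpy as np
from skimage import io, filters, measure

img = io.imread("droplets_A488.tif")                    # fluorescence channel (assumed file name)
mask = img > filters.threshold_otsu(img)                # simple global threshold
labels = measure.label(mask)

# Major-axis length of each droplet, ignoring very small objects (size cut-off is arbitrary).
major_axes = [r.major_axis_length for r in measure.regionprops(labels) if r.area > 50]
print("mean major-axis length (px):", np.mean(major_axes))
```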
Pf4 liquid crystalline droplets / P. aeruginosa bacteria: In order to image conditions equivalent to those utilised in the antibiotic protection assay, PAO1 ΔPA0728 was grown to an OD600 of 0.5 and incubated with A488-labelled Pf4 (final concentration 1 mg/ml) and sodium alginate.

3D confocal and time-lapse microscopy of Pf4 liquid crystalline droplets / P. aeruginosa bacteria: Samples equivalent to those utilised in the protection assay were prepared as described above and z-stacks were acquired in the fluorescent and brightfield channels with an Olympus Fluoview FV1000 microscope equipped with a 100x objective. For time-lapse experiments, images were acquired in the fluorescent and brightfield channels at 10-minute intervals over a 90-minute period at room temperature using the same microscope and objective. Time-lapse images were drift corrected using the StackReg plugin in Fiji.
Propidium iodide staining of Pf4 liquid crystalline droplets / P. aeruginosa bacteria: PAO1
ΔPA0728 was grown to an OD600 of 0.5 and incubated with A488-labelled Pf4 (final concentration 1 mg/ml) and sodium alginate (final concentration 4 mg/ml) for 3 hours in the presence of 10 µg tobramycin before incubation with 0.1 mM propidium iodide for 30 minutes at room temperature. 10 µl of the sample was pipetted onto agar pads as described above and imaged with a 100x objective using a Zeiss Axioimager M2 (Carl Zeiss) in bright field and fluorescence mode. Quantitation of cell numbers and propidium iodide positive cells was performed by thresholding using Fiji. Graphs were plotted using Prism GraphPad software.
Pf4 liquid crystalline droplets/SU-8 rods: Cyanine 3 labelled SU-8 colloidal rods, fluorescent at 532 nm, were used as particles to mimic bacterial cells. These were approximately 0.5 µm in diameter and ranged between 1 and 8 µm in length. Initially dispersed in water, samples were placed in an oven for 5 minutes to dry out, before 10 mg/ml sodium alginate in PBS was added as the new solvent for the rods. Equal volumes of this solution and 3 mg/ml A488-Pf4 in PBS were combined and mixed on a vortex mixer for 5 minutes to ensure even dispersion of rods, before being placed in a capillary. This was placed onto a glass slide and imaged on the Zeiss LSM Exciter confocal microscope. A488-Pf4 tactoids were observed in one channel, and the rods were observed in the other channel (as reflected light).
Automated segmentation of images with liquid crystalline droplets surrounding bacterial cells
Brightfield channel images were Gaussian filtered, background subtracted and thresholded to identify the positions of bacterial cells. Bacterial cell shapes were found using the activecontour algorithm in Matlab (16). The regions of identified bacteria were dilated and used as seed inputs for the segmentation of the liquid crystalline droplets in the green channel using the activecontour algorithm. The segmented bacterial cells from the bright field and the liquid crystalline droplets from the green channel were used to calculate morphological parameters (Fig. 5 and Fig. S8).
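A rough Python counterpart of this two-channel seeded segmentation (the original used Matlab's activecontour) is sketched below; the function choices, parameters and the assumption that bacteria appear dark in brightfield are illustrative, not the published pipeline.

```python
from skimage import io, filters, morphology, segmentation

bright = io.imread("brightfield.tif").astype(float)
green = io.imread("green_channel.tif").astype(float)

# Identify bacterial cells in the brightfield channel (Gaussian filter, background
# subtraction, threshold); the assumption that cells appear dark is illustrative.
smoothed = filters.gaussian(bright, sigma=1) - filters.gaussian(bright, sigma=20)
cells = smoothed < filters.threshold_otsu(smoothed)
cells = morphology.remove_small_objects(cells, 30)

# Dilate the cell regions and use them as seeds for segmenting droplets in the green channel.
seeds = morphology.binary_dilation(cells, morphology.disk(5))
droplets = segmentation.morphological_chan_vese(green, 100, init_level_set=seeds)
```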
Statistical analysis
Statistical analysis was performed using Prism GraphPad software and an unpaired t-test was used to calculate p-values.

Fig. S1. Purification of Pf4 phage from P. aeruginosa PAO1 biofilms (related to Fig. 1)
Supplementary Movie Legends
Movie S1. Cryo-EM structure of Pf4 with ssDNA at 3.2 Å resolution (related to Fig. 1) The 3.2 Å resolution structure of Pf4 with ssDNA. Cryo-EM density (grey isosurface) and | 2020-02-20T09:17:04.735Z | 2020-02-18T00:00:00.000 | {
"year": 2020,
"sha1": "c00d60e133cc9e69b569d60e85ee83cd314f109b",
"oa_license": "CCBY",
"oa_url": "https://www.pnas.org/content/pnas/117/9/4724.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0b62523f5e30647aa07e3af55f39ad4b3d1a549",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
158424169 | pes2o/s2orc | v3-fos-license | Comment - The difficult equation of territorial reforms: from big is beautiful to the impossible simplification of the institutional layer-cake
Do territorial reforms have a meaning, an economic and spatial rationale or are they the result of a legislative whim? In this comment on the articles by Frère and Védrine, and Antunez et al., we will go back over the slow process of France’s territorial organisation and the attempts at simplification introduced by the recent reforms, as well as the issues they raise, in particular in terms of transfer of powers between local authorities and disparities in the new organisation of the regions in mainland France. We emphasise that the territorial layer‐cake was shaped patiently over the centuries, to the point of becoming very heavy indeed, and that the NOTRe and MAPTAM laws, enacted to modify the institutional architecture of the French terri‐ tories by giving priority to large structures, raise questions regarding the transfer of powers and resources, as well as on spatial inequalities, yet without providing definitive solutions toward the aim of administrative simplification.
Reshaping, carving and redefining the map of the territories is a very French game, which ignites political minds and mobilises local players, in a never-ending endeavour to delineate the strata that form the territorial layer-cake, from basic surface simplifications to the development of new spaces for growth. Yet this national passion for land planning (Béhar et al., 2009;Esteath, 2015) is not futile. It reflects the tension, which constantly springs anew, between different conceptions of France's geographical and institutional structure, torn between Jacobine temptations and decentralising advances, between efforts to concentrate developing zones and preserve natural spaces, between conurbations of globalised activity and the desire to keep local communities strongly-rooted.
In this comment on the articles by Quentin Frère and Lionel Védrine, and by Kim Antunez, Brigitte Baccaïni, Marianne Guérois and Ronan Ysebaert (this issue) we go back over the slow preparation of France's territorial architecture and the attempts at simplification introduced by the recent reforms, as well as on the issues they raise, particularly as regards the transfer of powers between local authorities and territorial disparities in the new organisation of mainland France's regions.
The slow preparation of the territorial layer-cake
The history of the tensions between the absolute power of the State and the local level advocating for more freedoms is as old as France and the patiently carried-out annexation of its provinces. It was with the French Revolution, however, and the fall of the Ancien Régime that the administrative structures still familiar to us today first took shape. 36,000 municipalities, designed as the local administrative level at the citizens' doorsteps, became the successors to the pre-1789 parishes. The same year, the départements were formed, each headed by a prefect representing the State, while the provinces faded away. From this point on, the country would be organised in a uniform manner, with four administrative layers: the département, the arrondissement, the canton and the municipality. Far from being decentralising, this unification of territorial organisation, desired by the Jacobins, made France a "one and indivisible" Republic, centred around Paris. The Consulate, and thereafter the Empire, would only complete the centralisation of power and the search for a unitary State.
It would not be until 1861 that the first "Decentralisation" Act, in reality a De-concentration Act, would emerge. The State transferred powers to the prefects, while the prerogatives of the municipalities and départements were gradually extended. Despite the enactment of the 1884 law instituting the election of the mayor by the city council, prefectural guardianship remained omnipresent at all administrative levels. Given the large number of municipalities, the 3 rd Republic instituted, in 1890, an additional layer, with the inter-municipal syndicates. And it was only in 1955-56 that 21 "programme regions" were created, not yet considered as local authorities but supposed to provide responses, in terms of regional action and economic development, to critics describing the unequal distribution of wealth -this was described as "Paris and the French desert" (Gravier, 1947).
General de Gaulle would launch multiple attempts at regionalisation. From as early as January 1946, French economic planning came into being, with the creation of the Commissariat du Plan, the "burning obligation" born of the realisation that municipalities and départements as administrative bodies are unsuited to socio-economic issues. In March 1964, he proposed the creation of regions based on the pre-revolutionary provinces: under the supervision of a prefect, they would be the armed wing of the central government in implementing its economic planning and regional development policy. Five years later, the French would reject, by referendum, the constitutional reform instituting the regions as territorial communities, and thereby set the process for his departure in motion. The following presidencies would continue that "quiet revolution", in the words of President Giscard d'Estaing, who called for a basic law to determine the real powers of the State, départements and municipalities, while President Pompidou's mandate gave the regions status as public institutions and their own budget.
François Mitterrand's rise to power would break with 200 years of centralism, with "Decentralisation: Act I". The 2 March 1982 Law instituted the region as a new local authority, while the President of the General Council replaced the prefect as the head of the département's executive. With the various councils of the municipal, départemental and regional bodies then elected by the people, the Regional Audit Chamber was created, in charge of auditing local finance. In 1988, two new layers were added: the districts and the urban communities. Ten years later, the Chevènement Act of 12 July 1999 promoted the strengthening of the intermunicipality, but it would take until 2004 for the regions to be recognised in the Constitution.
The legislative package spearheaded by President Chirac was "Decentralisation: Act II", with a significant transfer of powers to the local authorities. The region is conceived of as the active driver of economic development, while the social side is left more to the département. It is also during this period that the reference to participatory democracy emerged explicitly: regions, départements and municipalities would now be able to consult their constituents by referendum. Lastly, local authorities were granted their own resources, with financial autonomy and the possibility of setting and levying local taxes. In the 2000s, with the 2010 Finance Act, President Sarkozy removed the business tax, which was accused of weighing down companies' budgets and forcing them into offshoring. A territorial economic contribution and a fixed tax were instituted to replace it. That same year, the bill to reform the local authorities was adopted: it simplified, strengthened and strongly encouraged inter-municipality. The opportunity was also taken to add a new segment to the territorial layer-cake, with the creation of metropoles.
Territorial reforms: the NOTRe and MAPTAM laws
The election of François Hollande marked a new stage in the territorial development process. The President wished to run "Decentralisation: Act III". On 3 June 2014, he announced the launch of a reform aimed at modifying the Republic's territorial architecture and attaining its ambition to simplify and clarify the territorial organisation of France, so that everyone would know who decides, who finances and using what resources. The debate quickly turned into a conflict around two points: the borders of the future regions (and their Capitals) and the maintenance or removal of the départements. It revealed deep divisions about the objectives and means of possible reform, as well as the very design of the decentralized structure of the Republic. The differences were particularly stark as regards the levels to be eliminated, the initial idea of abolishing the départements having slowly died out, due to the mobilisation of local elected officials, but also the difficulty inherent in distributing their many powers and related financing to other parts of the institutional system. The other question pertained to the boundaries of the new regions, as well as the merger of some of them, with identical scopes as no internal reconfiguration was allowed. The initial map was replaced, as discussions went along, with varying configurations and architectures, which more often than not gave primacy to local alliances rather than to rationalisation imperatives or economies of scale. The solution ultimately selected, consisting of 13 mainland regions, concentrated the alliances in the South-West, North and East of France. On 1 January 2015, the law aimed at modernising territorial public action and the affirmation of the metropoles, known as the "MAPAM Law" or "MAPTAM", created a new status for 11 metropoles 1 (conurbations of more than 400,000 inhabitants) with powers in economic development, innovation, the energy transition and city policy. Lastly, on 16 July, the law on the new territorial organisation of the Republic (or NOTRe Act) was definitively adopted, and published in the Official Gazette of 8 August 2015.
How many layers are there currently in the territorial layer cake? In addition to the three main levels of local authorities -the municipality, the département and the region -there are a multitude of other layers: metropoles, cantons, lands, communities of municipalities, urban communities, conurbation communities, conurbation syndicates, etc. These administrative levels, public institutions and intermunicipal groups are the heirs to the history of the French State's construction. There is creation, recombination, but rarely any removal.
Like many commentators, we can question the merits of these successive reforms and their advantages for people and economic activity, on the need to continuously add layers, or, conversely, group together entities that had proven themselves in the past (Torre & Bourdin, 2015). In recent years, the mantra Big is beautiful prevailed, whether with regard to large regions, metropoles or large inter-municipalities. The articles by Frère and Védrine, and Antunez et al. examine alliances and groupings between EPCIs (Public Intermunicipal Cooperation Institutions), and in particular municipalities and regions, which have reshaped the map of territorial France and led to numerous questions about their legitimacy and efficiency, as well as about the consistency of the new units formed.
The reform of the regions in question: useful or high-risk?
The article by Kim Antunez, Brigitte Baccaïni, Marianne Guérois and Ronan Ysebaert discusses the merging of the regions, resulting from the NOTRe Act, and raises questions about the legitimacy and homogeneity of those groupings. The new regions are often criticised as being rather heterogeneous and not being based on a strong internal logic, or even bringing together extremely different entities, and doing nothing more than setting institutions side by side. The transition to 13 mainland regions may sometimes appear a whim of the legislator or a step toward longer-term merging within mega-entities, without any real grounds, except the attempt to achieve larger size. Why such groupings? And what underlying logic? The authors respond in different ways, seeking to measure the heterogeneity of the new regions and territories that form them, based on data on the level of employment (rate and development), standard of living (per capita income) and demographics (youth and population density) in 2014.
The first question pertains to territorial disparities within the 13 new mainland regions, examined based on principal component factorial analysis to show the similarities and differences between the 22 initial regions. The contrasting results reveal significant disparities along two main lines of differentiation. The first contrasts the regions where the situation on the labour market is favourable (high rate of employment and levels of standard of living) with those where it is less so. The second contrasts densely populated and young areas with the more rural and ageing regions. A number of similarities can be seen between the merged regions as in the case of Nouvelle-Aquitaine (between Poitou-Charentes and Limousin in particular), but also numerous dissimilarities. This is the case with the very particular situation of Hauts-de-France, due to the unusual position of Nord-Pas-de-Calais, which is characterised by a very significant demographic dynamic compared with its low employment rate. A similar observation can be made for the new AURA region (Auvergne-Rhône-Alpes), where Auvergne appears quite aged and sparsely populated compared to Rhône-Alpes. Lastly, given the extremely advantageous position of Alsace, its reluctance to tie its fate, within the Grand Est region, with Lorraine and Champagne-Ardenne is understandable, given their far less favourable profiles (Beyer, 2017).
A second approach consists of studying possible disparities within the regions themselves, based on an analysis of the employment zones using three composite indicators relating to the labour market, shifting patterns in employment and demography. With the exception of Île-de-France and Corsica, which show a certain homogeneity, the other regions are characterised by a highly-contrasting panorama, with the coexistence of different types of job areas, from the most favourable (mainly in the Paris region, Rhône-Alpes or in the West), to those facing the most struggles, and which form the "diagonal of emptiness" (Oliveau & Doignon, 2016). This result shows that disparities persist even within metropolitan regions; moreover, the authors show that the main heterogeneities are at the heart of the latter, and not at their borders, further accentuating the idea that it is the differences between the various types of zones (urban, outer-suburban, rural, etc.) that matter above all, casting doubt on the value of the recent regional mergers in terms of territorial cohesion.
Besides these important elements, the reform of the regions also raises other questions. One may wonder, for instance, about the reconfigurations' possible negative impact on territorial equity, with greater concentration of activities in the most productive zones, but also a reduction in the quality, or even outright lack of local services with a view to reducing costs. Concerns might also be raised for inhabitants living in territories remote from major cities, in a context of reduced public resources, rationalisation of equipment and elimination of local services (high schools, hospitals, jobs, etc.) or railway lines. Some new regions are true giants, the expanse of which may cause some of the populations to be significantly removed from the decision-making centres. Many local officials or decision-makers are located far from their regional Capital, hence the difficulty of being heard and of relaying people's voices and interests. The latter, meanwhile, can experience the authorities' remoteness as further withdrawal of the State from the peripheral or rural territories.
There is also concern due to the uncertainties over the links between local authorities, and especially the regions/metropoles tandem. Above and beyond collaborations between levels, what matters is primarily the ability to generate positive peer pressure or development effects and set shared dynamics in motion. Abolishing the universal powers clause is a step toward rationalising public action and clarifying powers; it helps
identify the devolutions, slows down the fragmentation of expenditure and limits the desire for unbridled intervention. The risk of a lack of specialisation is actually significant. While the European strategy for smart, sustainable and inclusive growth, "EU 2020", puts the focus on the choice, by each region, of a limited number of activities or technologies within a value chain, and therefore a differentiation in functions and output (Foray, 2014), the opposite effect is to be feared here. Organised around their metropoles, macro-regions can be tempted to behave like small States, reproducing all the internal powers and specialisations, without making real development choices.
Moreover, the French regions continue to receive very little financial endowment, the transfer of powers having been completed without the related transfer of resources. Compared to their European neighbours, their budget is very low: whereas, on average, the expenditure of European regions amounts to EUR 4,000 per inhabitant per year, that of the French regions is ten times lower. Lastly, the reform raises different questions about the role and place of the State. What is the future of regional civil servants and decentralised agencies? What is the foreseeable economic and social impact of site shutdowns, staffing reductions or staffing transfers on development or land dynamics, for example? Not to mention the related costs of reform, estimated at around EUR 1 million, for relocating the services, integrating them and aligning pay grids for civil servants, while the savings to be expected would be low according to a Standard & Poor's report (2015).
The intermunicipality: a response to the impossibility of merging municipalities
The article by Quentin Frère and Lionel Védrine is about the - somewhat unexpected - success of the 1999 Chevènement Acts. After previous failed attempts to reduce the number of municipalities (in contrast to the United Kingdom and Germany or Scandinavian and Central European countries), the intermunicipality has stirred deeply-rooted support in France as a credible alternative to merging, and one acceptable to the local populations. Accused of being the most profligate territorial level, the municipality, the lowest common denominator of the territorial organisation, tends to be increasingly questioned in territorial reforms. Today, each municipality is covered by an EPCI (community of municipalities, conurbation or urban areas, metropolitan areas and agglomeration trades), with variable rules, particularly in terms of powers transfer. However, these groupings raise numerous questions, particularly in terms of efficiency of sharing, modalities of contribution to operations delegated to the EPCI, equity between the inhabitants of the different municipalities, or social justice for the populations involved.
The authors focus on the municipalities' decision as to whether a given power should be transferred to an EPCI. This is a matter of importance because the menu of powers to be transferred is not clearly established by the legislator and the municipalities are therefore required to make choices in this regard. There can be questions in particular about their cooperative behaviour: should they transfer competences and if so, which ones? Under what circumstances should they do so? Does the size of the municipality and its specific characteristics -particularly the greater homogeneity or heterogeneity of the populations -play a part in this regard, and should they encourage different choices based on internal characteristics?
Economic theory provides responses in terms of economies of scale or scope. For the purely territorial dimension, the Oates Theorem (1972), which inspires the article, states that the decision of the municipalities to transfer powers is the result of a trade-off between the cost of the spatial heterogeneity of citizen preferences and the benefits of economies of scale. In other words, considerable heterogeneity between municipality populations (and thus between the preferences of players) will plead against a transfer of powers to the inter-municipality, the latter however enabling the construction of non-rival public goods such as pools or schools, which are too costly for small municipalities. The analysis is carried out on 2012 data, before the NOTRe Act, which made certain transfers mandatory; the heterogeneity of preferences is measured using principal component analysis of a heterogeneity indicator based on 15 sociodemographic variables. This should make it possible to assess the mechanisms that drive municipalities to transfer powers.
The results of the econometric study, based on the powers least frequently transferred, are clear and largely verify the Oates Theorem. First of all, the search for economies of scale spurs municipalities to cooperate and therefore to transfer powers, probably in order to be able to finance infrastructure or joint local-level programmes. Secondly, the significant heterogeneity of the population puts a brake on the transfer of powers and the creation of intermunicipalities, thus confirming Tiebout's (1956) arguments on citizens "voting with their feet".
Far from being a French exception, municipal fragmentation can also be found in other countries, although rarely to the same extent. Throughout Europe, the economic crisis has fostered grouping aimed at reducing the cost of everyday operations, globalisation and increased competition between local authorities, hence better pooling of resources. It raises questions about local finances, such as the collection of funds, an issue at the fore when the aim is to lower funding from the various local authorities, and equalisation thereof in terms of financial federalism (Wildasin, 2004). Still other questions can be raised about land, with the abolition of the housing tax and the search for alternative ways of collecting funds by municipalities, such as fines on public roads, for example, and questions about land occupancy and management modes, with the intermunicipal PLUs (Local Urbanism Plans) coming into widespread use.
Lastly, no discussion of inter-municipalities would be complete without mention of the heightened role and powers of the metropoles (Brennetot & de Ruffray, 2015), which are now being given greater autonomy and extensive functions -in particular through the possibility of contracting with other EPCIs, or even adopting a driving role. This option could generate new dynamics, giving the urban populations, the largest in terms of volume, the power to take initiative. At the same time, it raises the issue of the isolation of remote rural or outer suburban spaces and the problem of a fictional urban France, at the risk of leaving many territories in dire circumstances.
* * *
The territorial reform processes initiated in Europe (Italy, Portugal, Spain, Netherlands, etc.) share a common feature. Regions and metropoles are pushed into the limelight, while intermediate territorial levels such as départements appear to be challenged. Like other European countries, the French territorial reform follows this twofold trend of deepening the role of the regional level and large cities, with a transfer of competences to the regions and large-scale intermunicipalities. However, unlike France, most European countries have only one or two levels of local authorities. The allocation of powers and financial resources between these different entities is very heterogeneous and often depends on the level of regionalism or federalism of the State in question. As a result, the process of transferring powers proves less complex to implement than in France, although this still does nothing to erase inequalities. Lastly, in France as elsewhere, size is not the determining factor. It is economic dynamism, along with the powers and resources allocated to each local authority that play the predominant role in the development of the territories. | 2019-05-20T13:03:56.379Z | 2018-02-07T00:00:00.000 | {
"year": 2018,
"sha1": "5454b37aaa7ffd02fb6c5e519861ad1d14f26ade",
"oa_license": null,
"oa_url": "https://www.insee.fr/en/statistiques/fichier/3318021/497-498-EN.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "110b1192a9ab9faae23c115794a5cb55aacac62d",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
252693165 | pes2o/s2orc | v3-fos-license | Vision-based Warning System for Maintenance Personnel on Short-Term Roadwork Site
We propose a vision-based warning system for maintenance personnel working on short-term construction sites. Traditional solutions use passive protection, like setting up traffic cones, safety beacons, or even nothing. However, such methods cannot function as physical safety barriers to separate working areas from used lanes. In contrast, our system provides active protection, leveraging acoustic and visual warning signals to help road workers be cautious of approaching vehicles before they pass the working area. To avoid excessive warnings and reduce the disturbance to road workers, we implemented a traffic flow check algorithm, by which about 80% of the unnecessary notices can be filtered. We conduct evaluations in laboratory conditions and in the real world, proving our system's applicability and reliability.
Introduction
The safety of road maintenance workers has been a well-known object of traffic research and development projects in Europe in recent years. Guidelines for human behavior, roadwork site setups, and technical solutions were developed and implemented [1,2,3]. Almost all of these guidelines deal with preventing rear-end collisions with safety trailers. Another issue that has not been dealt with yet is the safety of maintenance personnel, especially their safety within the area of short-term roadwork sites (STRWS). When maintenance workers are working, they sometimes cannot fully perceive their surroundings. Moreover, construction vehicles in front can obscure the road workers' view, so they cannot see the traffic behind them. In many European countries, short-term roadwork sites are separated from lanes with free-flowing traffic only by mobile warning signs such as traffic cones, safety beacons, or even nothing (Figure 1). Protection by passive protective devices is not possible due to the limited duration, and no physical barriers can hold back vehicles in case of an accident. As a result, personnel in STRWS are generally exposed to higher risks. Therefore, the German Federal Ministry of Labour and Social Affairs implemented a guideline to lower the risk of accidents on short-term roadwork sites. However, hands-on experience shows that this is difficult for maintenance workers to comply with, as there are many situations in which the "No-Entry Zone" is entered, consciously or unconsciously.
Our solution addresses this problem by detecting approaching vehicles from both the front and rear directions using computer vision and triggering a warning signal to alert the maintenance personnel. To reach that aim, we utilize recent groundbreaking deep learning-based object detection to detect approaching vehicles on the road in both directions. Furthermore, the system can track such vehicles and record their trajectories. After tracking vehicles, we utilize our traffic flow check algorithm to filter unnecessary warnings. There are two inputs to the system. One is the video stream of a front-mounted conventional RGB camera, and the other is the video stream of a rear-mounted conventional RGB camera. With these two cameras, we can observe both directions of the construction site. The whole system is composed of three main components: (1) a fast detector, (2) a detector-based tracker, and (3) a traffic flow check algorithm (see Figure 2).
Related works
Vehicle detection is a considerably mature and proven technology that is crucial for the whole system. Traditional detectors combine the sliding window method with traditional machine learning classifiers [4,5]. Such methods rely on handcrafted features, require much feature engineering work, and cannot achieve satisfactory accuracy. In recent decades, due to groundbreaking improvements in deep learning, modern detectors are mainly composed of convolutional neural networks (CNN) [6,7,8]. A CNN has similar properties to a traditional fully connected network but considerably reduces the number of the network's parameters, making CNNs much more robust and less prone to overfitting. Such CNN-based detectors require large-scale datasets to train their neural networks and can achieve better generalization capability. As a result, their accuracy is significantly improved in comparison to traditional detectors. Since maintenance work takes place outdoors and conditions constantly vary from one country road to another, some country roads' backgrounds are dense forest and mountain bodies, while others are open fields. The weather can also vary seasonally, from rain to sunshine. Therefore, we need a robust and highly reliable detector for the application and choose a CNN-based detector. CNNs have a high requirement for computing resources to achieve a fast inference speed, so in the project, we equipped the vehicle computer with a GPU, which can accelerate the neural network's inference.
There are also two main branches of modern detectors. The first is R-CNNs, based on region proposals, whose inference process consists of two stages [6,9,10]. The other is the one-shot approach, which performs feature extraction and object localization at the same time [7,8,11]. One-shot methods have relatively lower accuracy but much less inference time. Since this is a real-time application with a strict running time requirement, we decided to use a one-shot detector. In recent years, multiple object tracking (MOT) has gained increasing attention, and there are still many challenging tasks like severe object occlusion and abrupt appearance changes [12]. We use MOT to find multiple objects in single frames and calculate the trajectory of each object across continuous frames. According to the initialization method, MOT can be categorized into Detection-based Tracking (DBT) [13,14] and Detection-Free Tracking (DFT) [15,16]. DFT has a manually defined and fixed number of objects, so it cannot handle the appearance of new objects or the disappearance of objects and is inappropriate for our application. In contrast, DBT uses detectors to discover new objects without number limitations, and disappearing objects are abandoned automatically.
MOT can also be grouped into offline tracking [14,17] and online tracking [18]. Offline methods utilize batches of frames or the entire image sequence to process the data. Nevertheless, our real-time application requires low inference time, so online methods would be more appropriate. We can predict the object's current state solely based on the observations from the past up to the current frame.
Vision-based warning system
In the following, we describe the hardware components that are used in our application (Section 3.1) and discuss the proposed detector (Section 3.2) and detection-based tracking (Section 3.3) in more detail. Section 3.4 then explains the traffic flow check algorithm.
Hardware
The components of the system will include two traditional RGB cameras mounted on the maintenance vehicle and a computer with the graphic processing unit GeForce GTX 1080 Ti for computer vision purposes (See Figure 3). The vehicle computer can be operated directly using a 12 Volt power supply from the vehicle's electrical system, which means that it can be operated directly in the vehicle without the need for major structural modifications. All that requires is an adequately protected and dimensioned connection to the vehicle's electrical system. Figure 3 displays the computer which was purchased for our application. The operating system is Linux-based and thus offers a good platform for vision-based applications. The peripheral component of the system is a warning signal device worn by road maintenance workers. The component can be attached to a chest strap to improve wearing comfort ( Figure 4). The acoustic signal is implemented by a piezo buzzer, as in smoke alarms. The reason for this choice is that the frequency of the hearing protector does not filter these frequencies strongly in the buzzer range, and the signals can be accordingly easily perceived by the road workers. The acoustic signal generator, in combination with the visual warning lights, is attached to the chest strap.
Approaching vehicle detection
In the first step, we need to detect the approaching vehicles in each frame of both camera streams. We select YOLOv4 to perform the detection. In the following, we describe the training phase of YOLO.
There are three classes in our application to be categorized: trucks, vehicles, and pedestrians. We used the pre-trained weights from the original author 1 and applied the transfer learning concept. Transfer learning means that knowledge the neural network has learned from a previous task can be applied to the current task [19]. Transfer learning can dramatically reduce the time required to train neural networks from scratch, while the model can still perform similarly. Here, we fine-tuned the neural network with our dataset to adapt it to our application.
Figure 4: Personal Road Worker Device
We have created a dataset based on the videos taken from actual maintenance environments. Our dataset contains 1200 images with annotations. The project team members manually performed the labeling task for the dataset. Moreover, the dataset was randomly split into three sets (training/development/test), where the allocation percentage of these three sets follows the rule of thumb (60/20/20).
Since we had a relatively small training set, we only deleted the output layer and the weights feeding into that layer. Then, we created a new output layer with the output tensor shape S × S × [3 × (4 + 1 + 3)], where S × S represents the number of grid cells and 3 is the number of anchor boxes. The bounding box of each anchor box is defined by (c_i^t, w_i^t, h_i^t), i ∈ [1, S × S × 3], where c_i^t = (x_i^t, y_i^t) represents the center, w_i^t the width and h_i^t the height of the corresponding box; t denotes the frame index. One objectness score denotes the probability that an object belonging to any class in the training set exists in the bounding box. Three class confidence scores p_i(x), x ∈ [1, 3], are conditional probabilities, i.e., the probability of class x given that an object exists in this box. The products of the objectness score and the class confidence scores specify the probability that the bounding box contains a specific object type.
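A small worked example of the resulting output tensor size and the per-box class probability; the grid size S = 13 is a hypothetical value chosen purely for illustration.

```python
S = 13                         # hypothetical grid size; depends on input resolution and stride
anchors = 3
per_box = 4 + 1 + 3            # box coordinates + objectness + 3 class confidence scores
print((S, S, anchors * per_box))     # (13, 13, 24)

# Probability that a given box contains, e.g., a truck:
objectness = 0.9
class_confidence_truck = 0.8
print(objectness * class_confidence_truck)   # 0.72
```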
The weights of the output layer were randomly initialized. Only the weights of the last two layers were optimized, and other layers of the neural network were fixed because of the lack of a large-scale training dataset and the corresponding overfitting concern. The learning rate was set to 0.001, ten times lower than the regular learning rate. The neural network was trained for 20 epochs.
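As an illustration of this freezing and fine-tuning setup, the following PyTorch-style sketch shows the general idea; the model object, its head attribute, the data loader and the loss call are placeholders, since the actual training was performed with the original YOLOv4 tooling.

```python
import torch

# `model` is assumed to be a YOLOv4-style network with pre-trained weights already loaded,
# and `model.head` stands in for the last two (re-initialized) layers.
for param in model.parameters():
    param.requires_grad = False              # freeze the pre-trained layers

for param in model.head.parameters():
    param.requires_grad = True               # only the new output layers are optimized

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.001
)

for epoch in range(20):
    for images, targets in train_loader:     # train_loader: the 1200-image dataset (placeholder)
        loss = model(images, targets)        # assumed to return the detection loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```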
Detection-based tracking
Since the maintenance workers work in an outdoor environment, imaging conditions make a significant difference to the visual appearance of the approaching vehicles, which depends considerably on weather conditions, sunlight intensity, and light reflection, and varies significantly over time and location. For this reason, we do not apply a visual appearance model but rely solely on a motion model to relate the detections to the trajectories, similar to [20].
We applied a Kalman filter as the probabilistic inference model to represent the state of each approaching vehicle. Here we assume the approaching vehicle travels at a constant velocity, since the vehicle's acceleration is not that high and the interval between image frames is sufficiently small. Our tests also showed that higher-order motion models did not improve the precision of the vehicles' locations. Therefore, we employed the Kalman filter model with the constant-velocity assumption to increase computational efficiency.
We define the Euclidean distance ‖x̂_j^t − d_i^t‖_2 as the cost of a detection d_i^t being assigned to a trajectory T_j, where x̂_j^t = f(x_j^{t−1}, v_j^{t−1}) is the filter's state estimate for the trajectory at frame t. Here, x_j^{t−1} is the location of the j-th vehicle at the previous frame t−1, v_j^{t−1} its velocity, and f the state transition function. After calculating the matching cost, the Hungarian algorithm is employed to solve the optimal detection-to-trajectory assignment [21,22]. We then update the trajectory of the j-th vehicle with T_j^t = T_j^{t−1} ∪ {d_i^t} in case a new detection is assigned to the trajectory T_j^{t−1}. Then, the filter's state is updated with the new observation. If some detections cannot be assigned to existing trajectories, new trajectories are tentatively initialized for them, and such trajectories stay suspended. The trajectories are set as active only if detections can be assigned to them in the subsequent frames. They will be terminated if the trajectories cannot match any detections for several consecutive frames.
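A minimal sketch of the constant-velocity prediction and the Hungarian detection-to-trajectory assignment; the state layout and the omission of gating and Kalman covariance updates are simplifications.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

F = np.array([[1, 0, 1, 0],     # constant-velocity transition for state [x, y, vx, vy]
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def assign(track_states, detections):
    """Match detections (N x 2 centers) to tracks by Euclidean cost and the Hungarian algorithm."""
    predicted = np.array([F @ s for s in track_states])[:, :2]      # predicted centers
    cost = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
    track_idx, det_idx = linear_sum_assignment(cost)
    return list(zip(track_idx, det_idx))

# Toy example with two tracks and two detections:
tracks = [np.array([10., 20., 2., 0.]), np.array([100., 50., -1., 1.])]
dets = np.array([[99., 51.], [12.5, 20.]])
print(assign(tracks, dets))     # [(0, 1), (1, 0)]
```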
Traffic flow check algorithm
Every time a vehicle is tracked with the help of the detection-based tracking system, the maintenance workers receive not only an acoustic but also a visual warning. However, in one peak hour, hundreds of vehicles approach the construction site, and for each vehicle the warning device is triggered once and the road worker is alerted. That means the road workers would be warned hundreds of times in one hour. Clearly, this is not an appropriate solution for road workers. Road workers cannot endure being constantly distracted from their work by such warnings, and such notices can even amount to light and noise pollution for them. To overcome these issues, we also developed a traffic flow check algorithm to filter such identification signals from the tracking system.
The basic concept of the whole warning system is to alert the maintenance workers to coming vehicles when they might be unaware of the possible danger. When a car approaches and the system alerts the road workers, they will already be prepared and start observing the road condition. If at that moment another vehicle is driving toward the construction site, there is no need to alert the road workers again, because they are already on guard for the passing cars. The traffic flow will also keep the road workers paying attention to the road condition. It is necessary to alert the road workers to a newly emerging vehicle only if there have been no vehicles on the road and the road workers have relaxed their attention. The traffic flow check algorithm uses a while loop to monitor the signal from the tracking system. The tracking system signals the check algorithm when it identifies a new vehicle. After receiving the signal, the check algorithm checks whether the elapsed time from the last signal to this one exceeds a pre-defined time duration T_d. If true, the check algorithm triggers a warning alerting the road workers. If false, the start time is reset to the current timestamp and the timing process restarts. In the initialization phase, the start time is set to the timestamp at that moment. With the algorithm, we can ensure that the check algorithm triggers a warning only if there has been no traffic on the country road for 10 seconds since the last vehicle. The setting of T_d to 10 seconds is a trade-off between convenience and safety. Apart from necessary warnings, the road workers should be minimally disturbed. Empirical evidence shows that maintenance workers can maintain the attention caused by a warning for about 10 seconds.
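A minimal Python sketch of the described check loop; the detection queue and the warn callback are placeholders for the actual interface to the tracker and the warning device, and the timer is restarted from every detected vehicle.

```python
import queue
import time

T_D = 10.0   # seconds of empty road required before the next warning

def traffic_flow_check(detections: "queue.Queue[float]", warn) -> None:
    """Consume detection timestamps and call warn() only after a quiet period of T_D seconds."""
    t_start = time.monotonic()               # initialization: timestamp at start-up
    while True:
        t_signal = detections.get()          # blocks until the tracker reports a new vehicle
        if t_signal - t_start >= T_D:
            warn()                           # road was quiet for at least T_D seconds
        t_start = t_signal                   # restart timing from the latest vehicle
```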
Evaluation
Detector performance
Both the accuracy and the inference time of the underlying detector are key performance indicators for the whole system. The detector should accurately identify the objects at a high frame rate. A high frame rate means that the elapsed time between frames is short; therefore, the change in object locations between frames is small, which ensures reliable matching between detections and trajectories. The detector is evaluated on the test set of the application-specific dataset, which covers a diverse collection of images taken in different weather and sunlight conditions by the two cameras mounted on the vehicle. Therefore, the evaluation with this test set demonstrates how well the detector can perform in actual working environments. The average precision of the detector reaches 96.2%, while the runtime is still restricted to 26.9 ms, which equals 37.2 frames per second (FPS). The frequency of the image input of the cameras is 30 FPS, which is lower than the processing speed of the detector. That means the detector can be applied to the real-time application in combination with the two cameras. The bottleneck now lies not with the detector but with the sensor.
Real-world test
The system's effectiveness was evaluated not only in laboratory conditions but also in the real world. We installed the whole system in a construction vehicle, and two maintenance road workers wore the corresponding warning peripherals. During the testing days, the road workers simply did their regular work, like cleaning potholes, inserting asphalt, compressing, etc. Throughout the long-term study, we recorded (1) the timestamps of the warnings issued to the road workers and (2) the timestamps of the vehicles which caused the corresponding warning passing the construction site. Then, we calculated the time difference between both timestamps and counted the number of time differences falling within fixed ranges. We can see that our system can reliably report the warning 3 to 7 seconds before the approaching vehicle passes the construction site, which allows the road worker adequate time to prepare for the potential dangers. We performed a manual analysis of the cases where the time differences were only about 1 to 2 seconds. Such cases happened when the construction site was in the middle of a curve of some country road. The camera's view is then blocked by trees or mountain bodies so that it cannot detect the approaching vehicle earlier. But this scenario itself is even more dangerous without our warning system. With the help of the front camera on the car and the vehicle recognition system, we can gain up to 2 seconds for the road workers to prepare, which significantly improves the safety of construction in this dangerous scenario.
In addition, Figure 5 presents the influence of the implementation of the traffic flow check algorithm. From the graph, we can count the number of alerts received by the road worker in one day with and without the check algorithm. Without the check algorithm, the road workers receive 1308 warnings in one day, on average 164 warnings per hour. In peak hours, the road worker can receive even about 32 warnings per minute, which is unacceptable for the road workers. When implementing the check algorithm, the warning count is filtered to one-fifth of the original, only 316 times. And especially in peak hours, the check algorithm makes a big difference. The number of received warnings is restricted to only about one warning per minute when the traffic flow is very dense, because the vehicles that follow the first approaching vehicle are very close. The duration between detecting those vehicles is usually lower than T_d (10 seconds) and will trigger no warnings. In such a situation, the road workers are aware of the existence of such approaching vehicles, so there is no need to remind them repeatedly.
On the one hand, the system can help road workers detect approaching vehicles when they are focused on work and less aware of the environment; on the other hand, it can warn the road workers in advance when approaching vehicles are still obscured by the construction vehicle, which means the road workers cannot detect such vehicles in time. The latter situation is extremely dangerous and can be thoroughly addressed by the vision-based warning system.
Conclusion
The safety of road maintenance workers on STRWS is critical, mainly because there are no physical safety barriers separating the working area from the lanes in use. Instead, the roadwork site and the traffic lane are separated only by traffic cones, safety beacons, or nothing at all. The approach proposed in this paper, which consists of a detector, tracking, a check algorithm, and a warning device, aims to improve the safety of maintenance workers on STRWS through active protection. The evaluations demonstrate that the implemented system can reliably detect approaching vehicles and decide intelligently whether the warning device should alert the road workers. With two cameras at the front and rear of the construction vehicle, both directions of the country road are kept under observation. The developed traffic flow algorithm efficiently reduces the number of warnings so that the road workers are not disturbed too often and receive only the necessary alerts. Furthermore, the warning device can emit both visual and acoustic signals. The acoustic signal is implemented with a piezo buzzer whose frequency is not significantly attenuated by the hearing protection that road workers wear to block road noise.
Acknowledgement
The work described in this paper has been funded by the German Federal Ministry of Transport and Digital Infrastructure within the project MOSAik:D (Grant number 01MM19006C). | 2022-10-05T01:16:22.861Z | 2022-10-04T00:00:00.000 | {
"year": 2022,
"sha1": "2540172c94efc0e5288986274a80d353d7d00fa8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2540172c94efc0e5288986274a80d353d7d00fa8",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
56290884 | pes2o/s2orc | v3-fos-license | The Role Of Entrepreneurship Education In Cultivating Student's Entrepreneurial Intention: A Theory Of Planned Behavior Approach
Bayu Airlangga Putra, Hermien Tridayanti, Agus Sukoco — IJEBD (International Journal of Entrepreneurship and Business Development), Volume 1, Number 2, 2018
Purpose: This research seeks to establish whether entrepreneurship education can significantly grow student's entrepreneurial intention. The theory of planned behavior (TPB) approach is used to examine whether the effect of entrepreneurship education on student's entrepreneurial intention first passes through attitude, subjective norms, and perceptions of behavioral control. The population of this study is the students of Narotama University who were taking the Strategic HR Management course in the odd semester of 2017/2018. Data were analyzed using the path analysis method. The results show that entrepreneurship education has no direct effect on student's entrepreneurial intention; of the three elements of TPB, only the attitude variable is able to significantly mediate the influence of entrepreneurship education on student's entrepreneurial intention.
Design/methodology/approach: In this research, data in the form of respondents' perceptions were analyzed quantitatively using the path analysis method.
Findings: Entrepreneurship education has no direct effect on student's entrepreneurial intention and must be mediated by the students' attitude in order to effectively influence their entrepreneurial intention.
Research limitations/implications: In this research, entrepreneurship education is seen from the learning process in one course only, not viewed holistically from the policy-making process at the top management level of the university, through the preparation of an integrated entrepreneurship curriculum, to its implementation in the learning process of each course. Also, the family background of each student was not considered as one of the determinants of their entrepreneurial intention.
Practical implications: There is a need to increase affective content so that a stronger persuasive ability is established among students to influence the norms in their social environment. Additional psychomotor content is also needed to form a more positive perception of students' competence in entrepreneurial activity.
Originality/value: This research combines the study of entrepreneurship education with the latest developments in TPB research. It also focuses on a population of students who are taking a specific course, so the components of a learning process can be better observed.
Paper type: Research paper
Keywords: Entrepreneurship, Entrepreneurial Intention, Entrepreneurship Education, Theory of Planned Behavior
I. INTRODUCTION
Entrepreneurship is needed in a country in order to realize prosperity and welfare for its people. This is because entrepreneurs can open new businesses that absorb more labor, thereby reducing the unemployment rate. Entrepreneurs also tend to be more innovative in their business operations; for example, by utilizing technology that makes the production process more efficient, entrepreneurs can improve the competitiveness of a country. Entrepreneurship can also be viewed as the activity of searching for business opportunities by individuals, governments, and communities around the world to promote economic development (Ramadani et al., 2015). In general, it can be said that entrepreneurial knowledge is an important factor in achieving success (Welsh & Dragusin, 2013).
Many studies suggest that the ideal number of entrepreneurs in a country is at least 2% of the population (Santoso, 2014). The ratio of entrepreneurs in Indonesia in 2017 was 3.1 percent of the total population. Despite being above the minimum threshold, this figure is still below the entrepreneur ratios of Malaysia (5%), Singapore (7%), China (10%), Japan (11%), and the United States (12%) (Humas Kemenkop dan UKM, 2017).
Thus, generating as many entrepreneurs as possible is an urgent strategic matter for a country's government. Universities, as partners of the government in the field of education, have a strategic role in assisting the development of these entrepreneurs. College students are excellent human resources who are expected to become an intellectual force in advancing a country in its economic, technological, and cultural aspects. Therefore, after graduation they are expected to have the knowledge, skills, and motivation needed to establish a business (Welsh & Dragusin, 2013).
One of the core elements of teaching and learning activities at a university is the lecture. Lectures are organized into the various courses that students can take each semester. In this setting, the entrepreneur-creation function must be integrated into the teaching activities contained in the learning design of each course. Consequently, there is a need to evaluate the effectiveness of such learning in shaping the entrepreneurial intention of the students who attend these courses.
Some research findings indicate that entrepreneurship education activities can trigger one's entrepreneurial intention (Fayolle et al., 2006; Kolvereid and Moen, 1997; Webb et al., 1995). In later developments, however, many researchers began to consider the role of the theory of planned behavior (TPB) as an important element of student's entrepreneurial intention (Ajzen, 1991; Yang, 2013; Moi et al., 2011). On that basis, this study aims to analyze the influence of entrepreneurship education on student's entrepreneurial intention through the elements of TPB, namely attitude, subjective norms, and behavioral control.
Entrepreneurship Concept
It was Schumpeter (1934) who originally coined the term entrepreneur, which he defined as a person who adds value to the economy by contributing a new way of thinking. This definition has evolved over time along with the dynamics of economies and businesses around the world. More recently, an entrepreneur has been defined as a person who exploits opportunities, often through the re-combination of existing resources, while also enduring uncertainty in the implementation (Gümüsay, 2014). According to Nadim and Singh (2011), entrepreneurs are those who act on their creative ideas. The point here is that an entrepreneur is a dreamer who acts, not a person who only dreams and never acts, and not someone who realizes the dreams of others without having a dream of his own. Hamilton and Harper (1994) consider an entrepreneur to be a person who bears a certain risk to benefit from a discovery, while for Thompson (1999), an entrepreneur is someone capable of identifying and exploiting a new business opportunity.
While there are many definitions of entrepreneurs, there is a consensus that the entrepreneur is a person who has unique instincts, mindsets, inspirations, and visions, and who has the power, will, and ability to conceptualize ideas and implement business plans. In addition, entrepreneurs see change as an opportunity to create value (Cheng et al., 2009). Furthermore, according to Eze and Nwali (2012), entrepreneurial activity is generally considered to have an advantage because it shows the following elements:
• Initiatives to combine and allocate resources
• Policy making
• The role of an innovator who is always involved in the creation of new ideas, products, and businesses
• Willingness to take and bear risk
Entrepreneurship Education
Generally, studies in higher education have been oriented towards preparing graduates who are ready to work (i.e., employability), such as the work of Masum (2012). But today there is a demand for higher education institutions to produce entrepreneurs. In this context, various studies show a close relationship between the world of education and the emergence of entrepreneurs. For example, resources and other support mechanisms in educational settings have had a positive impact on students' perception of entrepreneurship as a career choice (Johannisson, 1991 and Autio et al., 1997). Current trends are also raising the idea of an entrepreneurially oriented university, the so-called "entrepreneurial university". The entrepreneurial university is a natural incubator that tries to provide a supportive environment in which university members can explore, evaluate, and exploit ideas that can be transformed into entrepreneurship-oriented social and economic initiatives (Guerrero et al., 2012).
The logical consequence of the above developments is the emergence of an urgent need to develop educational programs specifically aimed at generating new entrepreneurs. Fayolle et al. (2006) define entrepreneurship education programs as any pedagogical or educational program for entrepreneurial attitudes and skills that involves the development of certain personal qualities. Thus, an entrepreneurship education program does not necessarily lead to the immediate creation of a new business, and the definition covers a wide variety of situations, objectives, methods, and teaching approaches (Tinovitasari, Yuliastanti and Malati, 2017; Wajdi, Ummah and Sari, 2017).
There are three phases of an entrepreneurial career: first, the potential entrepreneur (i.e., those who have the desire to become entrepreneurs); second, early-stage entrepreneurial activity (i.e., entry-level and new entrepreneurs); and third, the established entrepreneur (Xavier et al., 2012).
This research focuses on the entrepreneurship-based learning outcomes of one of the subjects taught at the Faculty of Economics and Business, Narotama University, in the odd semester of 2017/2018, namely the Strategic HR Management (SHRM) course. In the context of this study, the learning objectives of the entrepreneurship-based learning design applied to the SHRM course emphasized the first phase, namely potential entrepreneurs. Thus, the design of the lectures is oriented more towards growing entrepreneurial intention among the students.
The content of the SHRM course consists of concepts and practices in HR strategy oriented toward the implementation of an entrepreneurial business strategy. The audience of this course is management majors who have chosen HR management as their study concentration.
The learning objective of the SHRM course is to equip students with a number of competencies necessary for designing an HR strategy in an entrepreneurial organization. In this respect, the students are directed to regard themselves as entrepreneurs focusing on preparing employees to support the business activity. The purpose of such an arrangement is to raise the students' entrepreneurial intention, even though they will not necessarily start a business early in their career. To accomplish the learning objective, a combination of tutorials, exercises, a design project, and discussion was used as the learning method.
In the SHRM course, the students were first asked to make a plan to establish a small business. Then, based on that plan, they were asked to design an HR strategy suitable for supporting the implementation of that plan. The focus on small business was based on the premise that small businesses can provide a conducive environment for entrepreneurship and innovation, which does not always have to rely on know-how and control of resources, as is characteristic of large-scale production, but instead needs commitment and close cooperation among organization members (Sahut & Peris-Ortiz, 2014).
The theory of planned behavior (TPB) assumes that people are rational and systematically use the information available to them when making decisions. The theory states that (a) individual behavior is determined by the intention to carry out the behavior, which is the most accurate predictor of behavior; (b) the intention to behave is a function of attitudes toward the behavior, subjective norms, and perceived behavioral control; and (c) other variables influence the intention to behave indirectly through the mediation of attitudes, subjective norms, and perceived behavioral control (Ajzen, 1991).
Attitude is a subjective assessment of the consequences of a behavior and their effect on the person, which determines whether he or she likes or dislikes that behavior (Ajzen and Fishbein, 1980). Some parties see entrepreneurship as a last resort for people who cannot get another job. On the other hand, there is a view of entrepreneurship as a primary career choice that can help a person achieve self-actualization. A person who exhibits positive entrepreneurial attitudes will tend to behave as an entrepreneur and believes that entrepreneurship is not merely a way to make a living, but a way to achieve self-actualization (Yang, 2013).
Subjective norms refer to the individual's perception that the people who are important to him or her think that the individual should or should not perform a particular behavior (Ajzen and Fishbein, 1980). For example, a student may consider his parents and lecturer important to him. When the parents and/or lecturer believe that the student should open a new company, or when they support his entrepreneurial efforts, the student's entrepreneurial motivation will increase (Yang, 2013).
Perceived behavioral control refers to the subjective understanding of one's level of self-control and the degree of difficulty in carrying out the desired behavior (Ajzen, 1991). Thus, entrepreneurial perceived behavioral control can be defined as the subjective evaluation of the capabilities and resources available for one's entrepreneurial efforts, as well as the likelihood of success in entrepreneurship (Yang, 2013). When evaluating the same resources, some people will find them abundant, while others think they are scarce; the same is true for one's perception of one's own ability. People who are positive about their resources and abilities will regard entrepreneurship as an opportunity rather than a risk, and they tend to show stronger entrepreneurial intention than people with negative perceptions (Wilson et al., 2007). Yang (2013) found that TPB can be used to effectively explain student's entrepreneurial intention in China; attitudes toward entrepreneurship, subjective norms, and perceptions of behavioral control are all significant factors in explaining variation in student's entrepreneurial intention there. More specifically, the findings of Moi et al. (2011) show that attitudes towards entrepreneurship are the most significant predictor of entrepreneurial intention among Malaysian students.
Conceptual Framework and Hypothesis
On the other hand, Krueger and Carsrud (1993) developed a model which shows that the three elements of TPB can themselves be shaped by external influences on entrepreneurial activity. For example, attitude can be formed automatically through life experience, learning, observation of others, and so forth (Ahmad et al., 2013). Here it is clear that one of the most important formers of attitude is learning activity, which is at the core of an educational program. In other words, education can be expected to be one external factor that affects the formation of entrepreneurial intention through the attitude element of TPB.
Although the effectiveness of entrepreneurship education undertaken in universities is still debated (Cheng et al., 2009), there are quite a number of findings that show the strong role of entrepreneurship education in fostering entrepreneurial intention. For instance, Fayolle et al. (2006) found that entrepreneurship education programs have a significant effect on student's entrepreneurial intention. Kolvereid and Moen (1997) found that students who take up entrepreneurship studies show a greater interest in becoming entrepreneurs; these students also showed greater entrepreneurial action than other students when faced with the challenge of starting a new business. Similarly, Webb et al. (1982) found that students who attended entrepreneurship programs were more likely to start their own businesses than other students. This is reinforced by Upton et al. (1995), who revealed that 40 percent of the participants in entrepreneurship courses had started their own business.
From the above-mentioned findings of previous research, a hypothetical model can be constructed in several stages, as shown in Figure 1. The first stage is that entrepreneurship education, viewed as an external influence, affects the three elements of TPB in relation to entrepreneurship, namely (1) perception of the attractiveness of entrepreneurial behavior (entrepreneurial attitude); (2) perception of social norms related to entrepreneurial behavior (entrepreneurial subjective norms); and (3) perception of self-control/efficacy in entrepreneurial behavior (entrepreneurial behavior control). The second stage is that the elements of TPB form an intention to display entrepreneurial behavior, which can be more concisely referred to as entrepreneurial intention.
Figure 1 -Conceptual Framework of Student's Entrepreneurial Intention with TPB Approach
From the description of the stages above, the following hypotheses can be formulated: • Hypothesis 1: Entrepreneurship education has a positive and significant impact on student's entrepreneurial intention.
• Hypothesis 2: Entrepreneurship education has a positive and significant impact on student's entrepreneurial intention through entrepreneurial attitude.
• Hypothesis 3: Entrepreneurship education has a positive and significant impact on student's entrepreneurial intention through entrepreneurial subjective norms.
• Hypothesis 4: Entrepreneurship education has a positive and significant impact on student's entrepreneurial intention through entrepreneurial behavior control.
Research Instrument
Entrepreneurship education (X1) is measured using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = somewhat agree, 4 = agree, 5 = strongly agree), with statement items developed based on the conceptualization of the design and implementation of entrepreneurship education programs (Fayolle et al., 2006) as follows: x1.1 This course has a clear purpose to encourage you to become an entrepreneur. x1.2 This course provides adequate knowledge about entrepreneurship. x1.3 This course provides a clear picture of the importance of entrepreneurship for the advancement of society. x1.4 The tasks in this course help you understand the steps of entrepreneurship. x1.5 The tasks in this course encourage you to think creatively. x1.6 The lecturer of this course encourages students to come up with innovative ideas. x1.7 Classroom discussion activities add to your insight into the business that attracts you. x1.8 Consultation activities with the lecturer assist you in developing a viable business plan.
Entrepreneurial attitude (X2) is measured using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = somewhat agree, 4 = agree, 5 = strongly agree), with statement items adapted from Gundry and Welsch (2001) as follows: x2.1 I would rather have my own business than work at a high salary in someone else's company. x2.2 I would rather have my own business than pursue another promising career. x2.3 I am willing to make great personal sacrifices in order to keep my business going. x2.4 I will work elsewhere just enough to have time to build my own business.
Entrepreneurial subjective norms (X3) are measured using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = somewhat agree, 4 = agree, 5 = strongly agree), with statement items adapted from Kolvereid (2001) as follows: x3.1 My closest family members will believe that becoming an entrepreneur is right for me. x3.2 My closest friends will believe that becoming an entrepreneur is right for me. x3.3 The most important person in my life will believe that becoming an entrepreneur is right for me.
Entrepreneurial behavior control (X4) is measured using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = somewhat agree, 4 = agree, 5 = strongly agree), with statement items developed based on the conceptualization of perceived behavioral control (Yang, 2013) as follows: x4.1 I have the resources to build my own business. x4.2 I can easily get various resources to build my own business. x4.3 I have the physical ability to build my own business. x4.4 I have the mental ability to build my own business. x4.5 I am able to overcome various challenges in building my own business. x4.6 I am able to bear the risks that arise in building my own business.
Research Population
The population of this study is the students taking the Strategic HR Management course in the odd semester of 2017/2018. The number of students enrolled in this course is 49, but only 45 active participants reached the final evaluation stage. The latter number is taken as the population, given the criterion that a respondent of this study should have attended the Strategic HR Management course in full in order to provide an assessment as objective as possible. In this study all members of the population were regarded as respondents, so no sampling process was necessary. Questionnaires were distributed at the time of the final examination of the semester. Of the 45 students who took the exam, all were willing to fill out the questionnaire, but only 41 questionnaires could be processed; the rest were considered invalid because of unfilled items or double answers.
Validity and Reliability Test
The instrument validity test was done by calculating the correlation (r) between each item and the total score (item-total correlation) and comparing it with the r-table value. An item is considered valid if its calculated r-value is greater than the r-table value; for df = 41 - 1 = 40 and α = 0.05, the r-table value is 0.304. The reliability test was done by calculating the Cronbach's alpha of the set of statements used to measure a variable; an instrument is considered reliable if its Cronbach's alpha is greater than 0.60. The following paragraphs describe the results of the validity and reliability tests for the measurement instruments used in this study.
The first test for the measurement instrument of the entrepreneurship education variable (X1) generated a Cronbach's alpha of 0.682 (reliable), but four statement items had r-values below 0.304, were considered invalid (x1.5, x1.6, x1.7, and x1.8), and could not be used. After the four items were removed, re-testing was done and the Cronbach's alpha rose to 0.763 (reliable).
The test for the measurement instrument of the entrepreneurial attitude variable (X2) generated a Cronbach's alpha of 0.862 (reliable), and all statement items were considered valid (> 0.304).
The test for the measurement instrument of the entrepreneurial subjective norms variable (X3) generated a Cronbach's alpha of 0.824 (reliable), and all statement items were considered valid (> 0.304).
The test for the measurement instrument of the entrepreneurial behavior control variable (X4) generated a Cronbach's alpha of 0.858 (reliable), and all statement items were considered valid (> 0.304).
The test for the measurement instrument of the student's entrepreneurial intention variable (Y) generated a Cronbach's alpha of 0.878 (reliable), and all statement items were considered valid (> 0.304).
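For readers who want to reproduce checks of this kind outside SPSS, the following sketch computes simple item-total correlations and Cronbach's alpha with the standard formulas referred to above; the data are hypothetical and the function names are our own, so this is an illustration of the procedure rather than the study's actual analysis.

```python
# Minimal sketch: item-total correlation (validity) and Cronbach's alpha (reliability)
# for a respondents x items matrix of Likert scores.
import numpy as np

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score across all items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1] for j in range(items.shape[1])])

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(41, 4))   # hypothetical: 41 respondents, 4 items
r_table = 0.304                             # critical r for df = 40, alpha = 0.05
print("valid items:   ", item_total_correlations(scores) > r_table)
print("Cronbach alpha:", round(cronbach_alpha(scores), 3))
```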
III. RESULTS AND DISCUSSION
This study used path analysis, a development of multiple regression analysis, to estimate the causal relationships among the variables modeled in the conceptual framework above. In addition, tests of the hypotheses proposed in this study were conducted.
Based on the conceptual framework and hypotheses developed in this study, the path analysis was conducted with the help of SPSS version 20 software and generated four regression equations, the first of which is: Y = -0.083X1 + 0.529X2 + 0.288X3 + 0.024X4 + 0.225, where X1 = entrepreneurship education, X2 = entrepreneurial attitude, X3 = entrepreneurial subjective norms, X4 = entrepreneurial behavior control, and Y = student's entrepreneurial intention. The causal relationships observed in the path analysis model, along with their respective significance, can then be examined one by one. In Table 1, it appears that of the four predictor variables tested, only entrepreneurial attitude (X2) proved to have a positive and significant effect (0.001 < 0.05) on student's entrepreneurial intention (Y), with a beta of 0.529. This is in line with the finding of Moi et al. (2011) that attitudes towards entrepreneurship are the most significant predictor of entrepreneurial intention among Malaysian students.
The entrepreneurship education variable is examined specifically in relation to the three TPB variables. The entrepreneurship education variable has a positive and significant effect (0.029 < 0.05) on the entrepreneurial attitude variable, with a beta of 0.340. The entrepreneurship education variable also has a positive and significant influence (0.008 < 0.05) on the entrepreneurial subjective norms variable, with a beta of 0.409. However, the influence of the entrepreneurship education variable on the entrepreneurial behavior control variable is not significant (0.077 > 0.05).
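The two-stage logic of the path analysis can be illustrated with a small sketch; the data below are simulated and the variable names simply follow the notation above, so this is an outline of the general procedure rather than the authors' SPSS output.

```python
# Minimal sketch of a path analysis via two OLS stages: education -> each TPB
# mediator (paths a), then intention on education plus all mediators (paths b
# and the direct effect c'). The indirect effect through a mediator is a * b.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardize(df):
    return (df - df.mean()) / df.std(ddof=1)

# Hypothetical respondent-level composite scores for the five variables
rng = np.random.default_rng(1)
df = standardize(pd.DataFrame(rng.normal(size=(41, 5)),
                              columns=["X1", "X2", "X3", "X4", "Y"]))

# Stage 1: education -> each TPB element (paths a)
a_paths = {m: sm.OLS(df[m], sm.add_constant(df["X1"])).fit().params["X1"]
           for m in ["X2", "X3", "X4"]}

# Stage 2: intention on education and all mediators (paths b and direct effect c')
stage2 = sm.OLS(df["Y"], sm.add_constant(df[["X1", "X2", "X3", "X4"]])).fit()

for m, a in a_paths.items():
    print(f"indirect effect via {m}: {a * stage2.params[m]:.3f}")
print(f"direct effect of X1:    {stage2.params['X1']:.3f}")
```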
IV. CONCLUSION
In the context of TPB, only the attitude element acts as a mediator in creating entrepreneurial intention among students. Most likely this happens because, so far, the Strategic HR Management course still puts the greatest emphasis on the cognitive element; the affective and psychomotor elements are still very minimal. It is therefore logical that the only strengthened aspect is the students' entrepreneurial attitude. This means that while the students generally have a positive attitude toward entrepreneurship after completing the course, there is no guarantee that they will be enthusiastic and take concrete action to start a business.
If the affective content were increased, there is hope that a stronger persuasive ability would be established among students to influence the norms in their social environment. If that influence is well realized, then the people who are important to the student will not hesitate to support the student's choice to become an entrepreneur. Additional psychomotor content is also expected to help the students form a more positive perception of their competence in entrepreneurial activity.
A limitation of this research is that entrepreneurship education is seen from the learning process in one course only. Ideally, entrepreneurship education should be viewed holistically, starting from the policy-making process at the top management level of the university, through the preparation of an integrated entrepreneurship curriculum, to its implementation in the learning process of each course. Furthermore, the output and outcomes of the entrepreneurship education process also need to be evaluated on a regular basis to ensure the formation of truly strong student entrepreneurial intention. It would be interesting to examine more deeply the possibility of applying the assessment model of entrepreneurship education programs from Fayolle et al. (2006). In that model, there is a comprehensive assessment of the various elements of entrepreneurship education, including institutional setting, audience, type and objective of the program, content of the program (know-what, know-why, know-when, know-who, know-how), teaching and training methods, as well as teaching and training approaches.
This study also did not look at the family background of the students. Yet as Yang (2013) found in China, students whose parents had entrepreneurial experience scored higher in entrepreneurial attitude, subjective norms, and perception of behavioral control, and were more entrepreneurially oriented than students whose parents had no entrepreneurial experience. It would be an interesting future research topic to see whether the same phenomenon also occurs in Indonesia.
Finally, entrepreneurship education is still a research topic that needs to be developed in the future, given the magnitude of the challenges facing the productive-age population in Indonesia in this disruptive era. Young entrepreneurs from among the students should constantly emerge in order to realize the aim of an advanced and prosperous nation. Thus, higher education institutions need to formulate more concrete entrepreneurship development policies, for example by allocating budgets to establish business incubators and providing affordable funding schemes for students interested in becoming entrepreneurs. Quality assurance also needs to be applied more closely to elements such as the development of the entrepreneurship curriculum and the profile of university graduates. More in-depth research in this field would be very beneficial for higher education institutions in carrying out these great tasks.
Table 1 - Causal Relationship Among Variables
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. | 2018-12-17T23:14:13.578Z | 2018-03-30T00:00:00.000 | {
"year": 2018,
"sha1": "9cb49770ddd1d619ccf4c5aa5b563db9251fe4e6",
"oa_license": "CCBYSA",
"oa_url": "https://jurnal.narotama.ac.id/index.php/ijebd/article/download/555/306",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b63476dc7a90b5ef7b60aa23ca6aa9b6b87cc687",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
247855782 | pes2o/s2orc | v3-fos-license | “There’s so much to be done”: a qualitative study to elucidate research priorities in childhood-onset systemic lupus erythematosus
Objective There is a pressing need for high-quality, comprehensive research to describe the natural history, best treatments, access to care and disparities in care for patients with childhood-onset SLE (cSLE). Building on a previously published survey study of cSLE clinicians and researchers to describe research priorities in cSLE, the primary objective of this study was to conduct expert interviews to define high-priority areas for cSLE research. Methods Individuals with identified multidisciplinary expertise in cSLE were recruited worldwide using purposive sampling technique. Experts participated in open-ended, semistructured qualitative interviews. Interviews were designed to elicit expert perspectives on research priorities, optimal research approaches, and factors that facilitate and hinder advancing cSLE research. Interviews were digitally recorded, transcribed and de-identified for analysis. Analysis for underlying themes of cSLE expert perspectives was performed using a constant comparative approach. Results Twenty-nine experts with diverse clinical and research backgrounds participated. Themes emerged within five domains: (1) expanding disease knowledge; (2) investigator collaboration; (3) partnering with patients and families; (4) improving care to optimise research; and (5) overcoming investigator barriers. Choosing a singular area of focus was difficult; experts identified many competing priorities. Despite the numerous priorities that emerged, experts described several existing and potential opportunities for advancing cSLE research. Conclusions In addition to the priorities identified by cSLE experts in this study, the opportunities for advancing cSLE research and care that were proposed should be used as a foundation for creation of a cSLE research agenda for both research and funding allocation.
INTRODUCTION
SLE is a chronic, relapsing autoimmune disease affecting multiple organ systems; an estimated 10%-20% of patients with SLE have childhood-onset SLE (cSLE), defined as SLE diagnosed prior to 18 years of age. 1 cSLE has been shown to be more aggressive than adult-onset disease; children and adolescents suffer greater complications of the disease than adults, including more widespread organ involvement and higher disease activity. 2 3 Given the early onset of cSLE, patients often experience significant burden due to comorbidities and immunosuppressive treatment over their lifespan. 1 3 Although there are similarities between cSLE and adult-onset disease, there are unique considerations in the paediatric population, such as puberty, neurodevelopment, disease and medication effect on growth, drug metabolism, and medication efficacy, 2 in addition to the need to partner with the child's caregiver/family. Knowledge gaps remain in these areas, and there is a pressing need for high-quality,
Key messages
WHAT IS ALREADY KNOWN ON THIS TOPIC ► Although there have been significant advancements in the understanding and treatment of childhoodonset SLE (cSLE), significant knowledge gaps remain.
WHAT THIS STUDY ADDS ► This qualitative study examined perspectives from a diverse group of clinicians and researchers with expertise in areas relevant to cSLE to identify barriers as well as research strategies to advance understanding and care of patients with cSLE. ► Experts had difficulty selecting a singular area of focus to delegate resources reflective of many areas of need; identified priorities encompassed expanding knowledge of the disease itself, partnering with patients and improving their care to optimise research, and enabling investigator collaboration as well as overcoming individual investigator barriers.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE AND/OR POLICY ► The proposed solutions to barriers to advancement of cSLE research in this study should help guide a cSLE research agenda and may inform future funding priorities.
comprehensive research efforts to define the natural history, best treatments, access to care and disparities in care for patients with cSLE. 4 In 2015, the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS) released an SLE research plan on behalf of the National Institutes of Health (NIH) which included broad areas for potential focus. cSLE (defined as a special population within the plan) made up a very small part of the overall research objectives. More recently, the Addressing Lupus Pillars for Health Advancement (ALPHA) project, a two-phase study, conducted expert interviews to inform a global survey of SLE experts to identify, prioritise and develop strategies for advancing SLE research, care and access as phase I of the study. Although paediatric rheumatologists were included in both the initial interview and survey components, only 6% of survey respondents primarily practised paediatric rheumatology. Responses of phase I of the ALPHA survey indicated that the lack of attention to paediatric issues is a barrier to SLE drug development and clinical care 5 ; however, specific aspects of these issues were not detailed. Subsequently, phase II of the ALPHA study proposed actionable solutions to the identified research barriers from phase I of the study. Increased representation of paediatric patients in clinical trials was named as a solution to the previous barriers that had been identified in phase I of the study. 6 To expand on the NIAMS-identified objectives and ALPHA study results, the Childhood Arthritis and Rheumatology Research Alliance (CARRA) and the Lupus Foundation of America (LFA) partnered to identify key knowledge gaps and strategies for advancing cSLE research. An initial objective of this collaboration was to prioritise cSLE research domains by surveying rheumatologists, nephrologists and dermatologists with expertise in cSLE. 7 The previously published survey study identified the highest prioritised domains as SLE nephritis, clinical trials, biomarkers, neuropsychiatric disease and refractory skin disease. 7 Building on this survey study, the objective of the current qualitative interview study was to further characterise and elaborate on high-priority areas of cSLE research. This follow-up study examined perspectives from multidisciplinary and international providers with expertise in cSLE care and research aiming to identify (1) areas of high priority in cSLE research; (2) best research approaches to address these areas; and (3) barriers and facilitators to advancing cSLE research.
Study design and patient and public involvement
Individual interviews were conducted to elicit perspectives from cSLE experts, to further characterise and elaborate on the priority domains, and identify new domains. Patients were not recruited to be part of this study and there was no public involvement.
Setting and participants
Using purposive sampling, 39 individuals with a range of expertise in cSLE were invited to participate in semistructured, individual interviews. Investigators identified experts through review of authors for cSLE research publications, review of investigators and collaborators within CARRA, and discussion with leaders in cSLE research. We aimed to sample individuals with expertise in clinical care and/or research related to cSLE, specific clinical and research areas (eg, central nervous system/mental health, nephrology, dermatology, cardiology, immunology) and a variety of research approaches (basic science, genetics, clinical trials, clinical and outcomes research, quality improvement/implementation science). Adult rheumatologists with clinical and/or research expertise in transition to adult care were included, as were several paediatric rheumatologists outside North America for a global perspective.
Twenty-nine individuals from 6 different countries and 23 different institutions were interviewed. The majority of participants practised within the USA and Canada (collectively making up 87% of the interviewees) (table 1).
Experts came from paediatric rheumatology (72%), paediatric nephrology, child psychiatry, adolescent medicine, adult rheumatology and medicine/paediatric rheumatology. They varied in their years of experience after completing training, with providers who had been in practice from 6 to 15 years making up the largest group (41%), followed by those practising more than 25 years (28%).
Data collection
Two interviewers (AC and LC) trained in qualitative interviewing techniques employed a semistructured guide (online supplemental 1) designed to explore experts' ideas and recommendations on advancing cSLE
Data analysis
Consistent with a constant comparative approach to analysis, experts' views on the state of cSLE research were iteratively examined to uncover the range of priorities among them. 8 9 AC and LC read and synthesised each expert's interview transcript to capture their observations and recognise points of difference and consensus across experts that would indicate shared priorities. Nearly half of the interview transcripts were independently analysed and synthesised by both LC and AC, followed by a joint review for alternative interpretations or missed themes. Their independent identification of themes converged within the first set of six interview transcripts. The remaining transcripts were synthesised by AC without secondary review. Next, the themes and excerpted text were organised into broad domains (eg, collaboration, funding, drug development). Together the authors (LC, AC, AH and AMK) explored the compilation of themes, domains and text to characterise the nature of the experts' perspectives and explore intersections within and across themes.
RESULTS
The priorities proposed by experts included not only 'what' to study, but the infrastructure and support needed to carry out that research. Several notable themes emerged and were categorised into five domains: (1) expanding disease knowledge; (2) investigator collaboration; (3) partnering with patients and families; (4) improving care to optimise research; and (5) overcoming investigator barriers. Illustrative quotes are shown in table 2.
Expanding disease knowledge
Genetics and environment
Experts indicated a need for greater understanding of the underlying genetic pathways of the disease for both identifying risk in preclinical states and determining best treatments (table 2, Q1). Experts pointed out that genetic knowledge as it relates to disease phenotype brings us closer to targeted medications (Q2-Q3). They also acknowledged a need to better understand the relationship between 'genetic predisposition' and environment in order to assess susceptibility. Studying the impact of environmental risk factors (violence, poverty, trauma, socioeconomic status, etc) on the development and course of disease may advance approaches to treatment (Q4).
Need for biomarkers
Experts described cSLE as a heterogeneous disease and emphasised the need for biomarkers to better characterise each individual's disease. They also identified biomarkers as critical to predicting organ involvement as well as a way to enable targeted therapy and potentially lead to new drug targets (Q5-Q6). Experts noted the need for biomarkers and targeted therapy frequently in the context of research devoted to cSLE nephritis and neuropsychiatric disease, given the significant morbidity associated with these disease manifestations (Q7).
Paediatric clinical trials
Experts described the efficacy of therapies for cSLE as being largely extrapolated from adult data and highlighted the need for paediatric clinical drug trials to advance therapeutic options in cSLE. Experts voiced frustration that if a medication is not effective in an adult study, then it will never be known if it would be effective in treating patients with cSLE, and that there is a delay between medications' use in adult care versus use in paediatric care (Q8). Medications were also discussed from the perspective of adherence and the need for medications that are more palatable to patients with less side effects and that work quickly (Q9).
Longitudinal outcomes
Experts said that longitudinal studies are critical to better understanding the long-term effects of treatment as well as long-term organ damage and mortality, especially in the context of nephritis and neuropsychiatric disease (Q10). When designing such studies, experts suggested researchers consider the quality, standardisation and organisation of data sources such as electronic medical records (EMRs) or registries. Many experts noted that longitudinal design is particularly suited to studies of transition from child to adulthood and to examining the long-term effects of medications. Patient registries were discussed as a tool to follow patients longitudinally, specifically the CARRA registry (Q11). It was noted, however, that longitudinal data capture may be limited by insurance changes and transition to adult care (Q12). The increasing sophistication of the EMR was noted as an opportunity to capture these data, but the cost of such studies was identified as a limiting factor. Additionally, experts said that standardised data collection is needed across centres/sites to consistently share information for registries and databases to be useful. Table 2 Illustrative quotes for qualitative themes for advancing cSLE research Expanding disease knowledge Genetics and environment Q1 "It would be very nice down the road if we can identify the patient early in the course of the lupus development -probably people are talking about preclinical -means before you have the full blown picture of lupus skin, are we able to identify those patient and say probably because of your genetic marker and the symptoms that we're seeing, this might evolve into lupus down the road and it would be ideal if we have medication to stop this from happening at the preclinical levels." -Expert AA Q2 "You look at all of their gene expression, all of their immune profiles, all of their phenotypes or clinical presentations in the very beginning after treatment and during maintenance and figure out who are the different types of the people, which ones respond to which treatments…knowing ahead of time, like targeting which immune pathway works for what group of patients. Because that would help you, A, personalize the treatment, but also, B, potentially look for new treatment targets like drug targets, specifically." -Expert BB Q3 "Like this molecular fingerprint needs Cytoxan right away and this molecular fingerprint does best with MMF…And there's molecular reasons why these pathways are selected. I think that's where things are going." -Expert E Q4 "So, this idea of adverse childhood events and the impact of trauma, violence, and chronic stress on the hypopituitary axis in the way that increases the risk for autoimmunity, but also worsen the outcomes and the way mental health plays into that. So, I think there are probably some disease modifying factors that include kind of the psychosocial realm and the environment that…probably are very rich for improving outcomes." -Expert B
Need for biomarkers Q5 "There is always room for improvement in biomarkers that would allow us to identify which patients would have certain organ involvement, which patients would have more severe disease course. So, kind of a biomarkers for assessment of disease severity, not just activity and damage." -Expert U Q6 "But if you asked me to prioritize, I would say biomarkers, looking at precision-based treatments based on pathophysiology, and then sorting through clinical phenotypes." -Expert B Q7 "So early identification of [neuropsychiatric] problems is key. Because then you can follow up on the cognitive function over time and determine if is this getting worse, getting better? Is it related to disease activity or not? Or is cognitive impairment an irreversible kind of status that we can change it over time? There are a lot of questions related to cognitive impairment that need to be answered." -Expert AA Clinical trials Q8 "A lot of our treatment is still guided by research that is being done on adults that the conclusions of which have sort of been superimposed and assumed to apply to a pediatric population which may or may not be the case. There are still a dearth of clinical trials that are being done in pediatric lupus, though more probably over the last decade than ever before but still very few." -Expert Q Q9 "We can spend all our money and time discovering these things, but if they're not palatable and if they -unfortunately, most lupus medicines take a while to work, and so you're not feeling better for a whole month after taking this medicine and it's like, what am I doing this for?" -Expert V Longitudinal outcomes Q10 "I would have a longitudinal electronic database where every new onset-lupus patient got entered in, demographic data, family history and got urine and blood samples and then every follow-up visit -the same thing with a blood and urine sample was collected and stored. And so, we would have really on a whole bunch of patients a lot of clinical and molecular data to really understand what was going on and response to medications." -Expert E Q11 "I think it's still sort of in its early phases in terms of the kind of longitudinal data that the CARRA registry has on lupus patients. But it will be a very, very powerful tool in the years to come to answer some of those questions." -Expert Q Q12 "I think because often [young adults with lupus] change services and often cities, so you don't get all the doctors from the beginning and they even lose follow up…mostly I would say it's difficult in getting access to data because you don't have all the information, but when they are adults you don't have all the information from when they were kids." -Expert O
Multicentre collaboration Q13 "20 years ago, there wasn't such a thing as a multi-PI R01 and now there is because everybody recognizes now that you really have to do collaborative research in certain fields, that it's just not possible to do it otherwise." -Expert X International priorities and collaboration Q14 "If you talk to the people in India or in South America or in Africa about lupus, they will give you different answers [about priorities]. They don't have the expensive drugs. They have an insane high burden of TB and other infections. They have poor health literacy and access to care. They have slightly different priorities…obviously, we care about the global health of kids with lupus. So somehow it has to come into it." -Expert G Q15 "I think that North American and European collaboration could be better…we collaborate more on the, let's say, the general aspects of the disease. But for certain, let's say, biomarker studies, genetic studies, of course there is always room for improvement for transatlantic collaboration." -Expert U Multidisciplinary collaboration Q16 "And that instead of producing research that simply deals with clinical rheumatology, people are exploring aspects of the disease that require the expertise of all these other disciplines and looking at things in a more -looking at this very complicated disease in a more nuanced and a deeper way." -Expert Q Q17 "They're going to get done by team science, right? A clinician putting the phenotype together, paired with a translational scientist who has a PhD who knows how to design the right experiments, so that we're doing this in the best quality, highest yield way." -Expert B Q18 "I guess myself, what my interest is in is trying to get to the basic understanding…of what causes this immune dysregulation in lupus. And so, I think you do need collaboration between geneticists and molecular biologists, immunologists and clinicians who can all come together to really understand what lupus is from their own perspectives but also in a dialogue with each other so that you can move forward." -Expert V Relationship with pharmaceutical industry Q19 "So better alignment of the pharmaceutical companies' motivations and needs with the FDA's perspective…and with the clinicians' desires and patients' desires in terms of the drugs, is important…So that all falls under the area of policy and money and prioritizationadvocacy." -Expert G Partnering with patients and families Recruiting and engaging patients Q20 "Our clinics are very, very busy…When you have that kind of pace, it's hard to spend the time that you need to really involve the family and give the feeling that we're all on the same team." -Expert K Q21 "It is easier to participate in studies for the person who speaks English, the person who has technology available to them, the person who has time to be on calls during working hours when we're having our calls. But the person who doesn't speak English or…can't get away from their job to get on a phone call at 10:00…we're not good at engaging them." 
-Expert G Q22 "Often times consent for studies are done with [a teen's] parents in the room and it's really important to allow that teen to have that conversation confidentially without their parents there…you want the teen to feel empowered and they're more likely to participate if they feel like this is really about them and their answers count and whether or not they can choose to consent or not and it's not something they have to do because their parents tell them to or want them to." -Expert T Q23 "My recommendation would be engaging the patient at different levels of your project because your project has different phases. It starts from the conceptualization of the idea and again you move on with phase one, phase two -application, analysis, interpretation -until you get to publication and dissemination." -Expert AA
Including outcomes important to patients Q24 "I think that PROs give you a different perspective on how to assess outcomes and especially in the diseases…where we don't actually know how to measure whether somebody's in remission…So just because somebody's protein goes away, does that really mean their lupus is in remission? And in a setting where the doctors are not 100 percent sure how to measure it, it seems to me it's even more important for us to be listening to how patients might measure it." -Expert X Q25 "I think there's a need for more patient-reported outcome measures to…hear from the adolescent about what they feel measures of success would be. …So, for example, I would say that the teenagers are more interested in achieving success in terms of functional outcomes. So, ability to participate in the way they want in school, in sport, in other extracurricular activities, to be able to secure a job without feeling limited by their illness, making sure socially they're okay, emotionally from a mental health perspective that they're okay." -Expert T Improving care to optimise research Quality improvement: intersection of healthcare delivery and outcomes Q26 "One of the goals of the CARRA registry was to be able to define variation across these centers. And to say, center X is doing really great, what are they doing, versus center Y, which is maybe -has even better outcomes, and what are they doing? So that we can share learning and develop new research studies to try to improve care." -Expert I Q27 "So things as simple as are we documenting whether or not they got their -even their primary vaccinations? And if so then another project is how we make sure that they get a pneumococcal vaccine because they need to have one, because -to protect them…We're looking at detecting vascular necrosis early. Detecting osteoporosis early. What do we implement that helps us to detect these things, and if so what kinds of treatment or prophylaxis do we offer patients?… How do we make sure that patients are having their lipids checked every year? And how do we ensure that we're looking for growth changes?… Can't use QI to look at which drug treatment is the best, or is the new drug better than what we have? But we certainly use QI to make changes every day in how we take care of patients." -Expert G Q28 "I should just say the family -what's happened with that kid's support and what the family is doing intersects with what the healthcare delivery system is doing, is really important in how well a kid does -not only psychosocially, but also medically. And so, maybe if we can't cure the disease -I mean, we sometimes say, until we know how to cure and until we have drugs that work better, let's at least make sure that we use optimally what we have…So I think that's a place for high impact." -Expert G
Optimising mental health Q29 "I'd also put money into a patient navigator…somebody to go into our patients' homes and practice medicine in their environment, as opposed to in our environment…because if you have the navigators in these homes, and you're making sure they are getting to the appointments, so then we have data [collected during home visits] and we are able to shoot it out of EPIC into like some bioinformatic platform." -Expert L Q30 "I also think that sometimes people just sort of change what they're doing because they're just worried about their patient, but their patient's actually okay and they can keep going with the protocol. I just get a sense with some of these protocols -and maybe I'm wrong.
But I worry because there's no oversight, really." -Expert F Q31 "…there are people on our research team that specialize in doing quality assessments of protocol adherence -for example, and making sure that when we're applying a tool like a SLEDAI score that we're accurately scoring it or that when we're enrolling a patient in a prospective study that we're actually seeing them the proper number of times per year and that we're not missing ordering specific labs that are in the protocol." -Expert M Q32 "We don't currently have a good way to, A, figure out is it from having a chronic illness and not having good resiliency to cope with it, or is it the disease itself that's causing it…And then we also don't know how to provide the necessary services." -Expert BB Q33 "I think we still have many unanswered questions in terms of how the psychological stress of the disease or life impacts the disease and how that impacts patient outcomes in the long term and whether there are areas that we could target in young adulthood and adolescence and childhood that we're not doing a great job of targeting right now that could have long-term effects on patient outcomes in lupus and in their lives in general." -Expert Q Q34 "[Anxiety and depression] are common just because you're a teenager, but also complicated by the fact that they have a chronic condition that they're having to wrap their head around and that this isn't gonna go away…it's very clear that…resources or therapy or medications for mood-related things, there's a strong appetite for that." -Expert T Improving adherence and transition to adult care Q35 "And at least identifying when there's not adherence and getting over the connotation of non-adherence as being a bad thing and making sure that our patients and our research subjects understand that we're asking them to adhere and we're needing to earn their adherence as opposed to expecting their compliance -so to speak." -Expert M Q36 "I think if you could take a cohort of adolescents who are getting treatment for serious lupus…and you could study them and ask about health outcomes, over a ten-year period, and understand what contributes to medication adherence and what contributes to resilience and successful transition to adulthood in a work environment, I think that would be huge." -Expert N Q37 "And when [patients transitioning to adult care] lose us as their family support, things just don't go well. So, how do you instill that kind of knowledge and self-care in these patients who don't know how to do that or don't understand what it means?" -Expert F Overcoming investigator barriers and identifying opportunities Protected time Q38 "As physicians, our time gets sort of fractionated into these teeny little pieces like three days a week we're in clinic all day…. And then the other days you get pulled into meetings, things appear on your calendar. There's almost no space to think and write and be creative. And, without that, I don't think we can move this work forward." -Expert B Funding for research Q39 "So, that is really probably what, in the long run, will be the best solution is to have all these experts advocating on behalf of the children with those diseases, and convince the powers above to come out with RFAs for -specifically for childhood SLE conditions." -Expert S Q40 "I'm seeing more young investigators not able to stick with it…the funding situation has gotten so difficult that it's really hard for them to stay in the game and they get discouraged." 
-Expert R Mentoring for young investigators Q41 "And it's really tough then to, A, find the people to do it who have the time to do it and, B, have the mentors who can teach the next generation to do it and I think that's a huge problem that we need to address." -Expert D
Investigator collaboration Multicentre collaboration Given the rarity of the disease, collaboration among centres was identified by experts as a necessity in advancing cSLE research. Investigators need to come together to advance this work with multicentre studies (Q13). Experts pointed to CARRA as a vehicle for research collaboration, especially with the patient registry; it has served to examine variation in care among centres and improve care overall.
International priorities and collaboration Experts outside of North America explained that international collaboration is particularly important for this rare disease because it increases the number of patients available to participate in studies, in turn enabling samples that represent the total population of children with cSLE. While in North America and Europe the focus is on advancing research, other areas of the world have more urgent needs for investment in basic healthcare, education and advocacy for patients with cSLE (Q14). They also pointed out challenges to fostering international collaboration (eg, regulatory barriers, Institutional Review Board approvals). One expert stated that international collaboration has been more successful on 'the general aspects of the disease' but would benefit from collaboration on biomarker studies and genetic studies (Q15).
Multidisciplinary collaboration
Multidisciplinary collaboration was also identified as essential to advancing cSLE research. Experts noted that understanding the multiple impacted organ systems in cSLE requires expertise from specialists across disciplines (eg, nephrologists, neurologists, dermatologists) to investigate the disease in a 'nuanced and deeper way' (Q16). Beyond subspecialties the need for partnership between clinicians and scientists from a variety of backgrounds (eg, immunologist, translational scientist, molecular biologist, geneticist, etc) was also emphasised, with experts indicating that a 'team science' approach is needed, including the potential of more basic science representation at CARRA meetings as an opportunity for collaboration (Q17-Q18).
Relationship with pharmaceutical industry
Experts noted the general cost of drug development and the unique challenges of drug development in rare diseases as barriers to new cSLE medication options. One expert noted that there needs to be better alignment between the pharmaceutical companies' motivations and the Food and Drug Administration's needs (Q19).
Partnering with patients and families
Recruiting and engaging patients
Experts identified recruitment of patients into research as an important priority, and barriers to recruitment, such as lack of adequate staff or provider time, were discussed (Q20). One expert pointed out that attention to a diverse recruiting staff could encourage participation by more patients, particularly those who would strengthen the representativeness of study populations. Experts indicated that greater inclusion of diverse patients is necessary to capture a representative patient population; the current lack of representation limits our understanding of the disease (Q21). The need to engage patients, not only parents, in studies was also emphasised, especially in the adolescent population (Q22). Experts also stressed the need to include the family in any research that involves clinical intervention and the importance of involving patients at each 'phase' of a project to get their input (Q23).
Including outcomes important to patients Experts acknowledged, some emphatically, the importance of registering patients' definitions of success to guide the interpretation of assessments and interventions (Q24). In particular, patient-reported outcomes (PROs) are mechanisms to challenge assumptions about success that may be built into the design of a study. Examples of patients' concerns include effects of cSLE on education, employment, extracurricular activities and psychosocial well-being (Q25).
Improving care to optimise research Quality improvement: intersection of healthcare delivery and outcomes Several experts acknowledged the importance of quality improvement efforts to standardise care across individuals and institutions (Q26). One expert provided examples of the benefits of quality improvement studies, such as monitoring vaccinations, detecting avascular necrosis and osteoporosis, and lipid monitoring (Q27). Experts also noted that how an individual is able to navigate the complex healthcare system-which can be overwhelming for a patient and family-is tied to a patient's outcome. Effective healthcare delivery is intimately tied to understanding the patient's support system and the psychosocial impact of the disease (Q28). Therefore, financially prioritising research and support (such as patient navigators) around healthcare delivery was seen as a necessity for advancing cSLE care and ultimately research (Q29). Additionally, standardising the implementation of studies in clinical settings was a concern, particularly in terms of recruitment and ensuring the buy-in and commitment from clinicians to carry out study activities consistently (Q30-Q31).
Optimising mental health
Experts raised questions about the effect of psychological stress from the disease or disease activity and long-term outcomes. They called out the clinical conundrum of determining whether patients' psychiatric symptoms are a manifestation of primary disease activity or an expression of coping with chronic disease and encouraged devoting resources to develop diagnostic tools to differentiate between the two (Q32). Experts also noted the relationship between mental health and poor adherence to treatment and suggested that improvement in mental health screening may significantly impact outcomes (Q33-Q34).
Improving adherence and transition to adult care While some experts said that there has been progress on research around adherence, it was identified as a priority given that non-adherence remains a major barrier to the care and outcomes of patients with cSLE (Q35). Experts highlighted the need for research around best practices for promoting adherence, especially for therapies where patients do not experience immediate effect. Experts frequently discussed research and support around adherence in relation to the transition from paediatric to adult care. This was identified as a very vulnerable time for patients and an area where more resources should be dedicated to promoting successful transition to an adult rheumatologist (Q36-Q37).
Overcoming investigator barriers and identifying opportunities
Protected time and funding for research
Many experts identified the lack of time to obtain funding and complete projects as a major barrier to advancing cSLE research. They explained that providers have competing demands, and the obligations of clinical care leave little time to conduct high-quality studies (Q38). Providing time and funding for collaborative projects, for example, helps enable patients to be enrolled in studies and collaborative research organisations to succeed. Experts noted that current models of compensation-largely based on procedures and volume-are problematic for paediatric rheumatologists given the rarity of the diseases they treat, which limits the institutional resources that they are allocated. Experts highlighted the importance of advocating for funding, with one interviewee stating that advocacy is needed to "convince the powers above to come out with [Requests for Applications] for childhood SLE conditions" (Q39). They also expressed the need for more robust funding mechanisms for cSLE projects, with special attention to opportunities for young investigators and trainees considering a research career (Q40).
Mentoring for young investigators
Experts expressed concern about the next generation of researchers given the major obstacles to obtaining funding and establishing a research-focused career in cSLE (Q41). Because of this difficulty, and the credentials and passion required to persevere, they reported being selective about mentees. To be successful in a cSLE research career, experts said that young investigators need strong mentorship from more senior researchers, a need that currently cannot be met by the limited pool and availability of senior researchers. One expert identified adult rheumatologists as a potential mentoring resource given the similarities between cSLE and adult-onset SLE. Other proposed solutions included creating mentorship programming specific to cSLE research with protected time for mentors and organised opportunities for networking between senior cSLE researchers and early investigators.
A path forward: establishing a paradigm for advancing cSLE research Experts noted the challenges in establishing a clear direction for cSLE research. When asked how they would spend a hypothetical grant of $10 million for cSLE research, nearly a quarter of them explicitly emphasised the difficulty choosing a single focus and some divided the money across multiple methodologies or lines of enquiry. Collection of patient biosamples (blood, urine and tissue) from diagnosis through their disease course for longitudinal characterisation of disease was the most frequent response, with over a third of experts detailing a project related to better understanding of disease biomarkers. These projects were often discussed as a mechanism to enable more targeted drug development. Despite the question's focus on cSLE research, a fifth of experts indicated that they would spend the money on improving healthcare access and delivery, which was identified as both a major barrier to advancing cSLE research and an important determinant of the patient's overall outcome.
One expert suggested the following approach to prioritisation in response to the 10-million-dollar question: I would probably target pathogenesis of the disease with genetic and immunophenotypic disease stratification to somehow get into the disease mechanism as much as possible…one of my teachers said, when you have the biggest emergency there is still time to run once around the hospital and to refresh your thoughts, and then kind of do the best rational thing, the optimal rational thing. So, sometimes you need time to reflect on different aspects and see what is feasible, what is doable, and would have the biggest impact. -Expert U Another expert asserted the importance of starting from a paradigm rather than a priority, explicating the tension between numerous and varied priorities, and calling out how difficult it is to prioritise research efforts when there is "so much to be done." So this is a problem we have in the field -we have a problem that there's so much to be done, it's really difficult to prioritize. And the question is -what is the basis for prioritization? Is the basis number of people affected…or should you give higher priority to things that are easier to do? Is it based on feasibility or should you give higher priority to things that are an emerging cool technology or biology…We don't have agreement on what we should say are the features on which to prioritize…we've prioritized kidney disease and neuro -brain disease because it's a higher mortality and morbidity. That's another way to prioritize things…But what we really need isn't to know what to work on. What we really need is to know how to work on it…People have been trying, trying, trying in both the pediatric and adult arena to address a whole array of lupus phenomenon. And our lack of progress isn't because we don't know where to focus.
So prioritization is helpful if you want to know where to put all your money or where to start. But I don't think that the lack of progress is because of people not working in a certain area…we haven't figured out a path of investigation -a path of research that advances the science and the knowledge well enough. -Expert G Similarly, two other experts stated the value of being intentional about setting such a direction, with one (Expert X) noting that in the field of autoinflammatory disease "they now have paradigms for how to study it and we still don't have a good paradigm for how to understand and study lupus."
DISCUSSION
While there have been advances in understanding disease mechanisms and treatment of cSLE in the last decade, significant needs remain, underscoring the necessity of a focused research agenda for allocation of available resources. Expanding on the findings from the ALPHA study 5 6 and CARRA-LFA survey regarding cSLE research priorities, 7 we elucidated approaches to address cSLE research priorities, exploring both barriers and opportunities to advancing cSLE research. Multidisciplinary experts in our study identified research priorities represented by five domains: (1) expanding disease knowledge; (2) investigator collaboration; (3) partnering with patients and families; (4) improving care to optimise research; and (5) overcoming investigator barriers. Many interviewees had difficulty selecting a single priority, and the need for a broader paradigm framework for cSLE investigation was identified. Additionally, and notably, partnership with patients emerged as a unifying factor for advancing research (figure 1).
We identified not only barriers but also numerous opportunities for progress (box 1). Our findings represent an initial step in formulating a research agenda and funding priorities for the cSLE community.
Experts identified a need for a path of investigation in lupus research, described as a paradigm framework to guide 'how' to do the research, which would also inform 'what' to research. A comparison was made to the field of autoinflammatory disease research, which has seen recent significant advances due to international and interdisciplinary collaboration, and harnessing of genomic and molecular technologies to link genotype to biological pathways to phenotype. 10 This has resulted in better diagnostic tools and targeted immunosuppressive treatment in autoinflammatory disease. A similar paradigm for lupus research, addressing the complexity and heterogeneity of the disease, would be optimal to guide lupus investigation towards improving patient care. Experts emphasised the importance of a research agenda that promotes collaboration among investigators with different expertise (immunology, genetics, basic science, translational science, etc). With regard to translating findings to the clinical setting, experts also identified biomarker and drug development as prominent priority areas, acknowledging the unique regulatory and ethical considerations that exist in studies involving paediatric patients.
One promising model of investigation is the Accelerating Medicines Partnership (AMP), 11 12 which has enabled successful collaboration between the NIH, pharmaceutical companies and non-profit organisations, bringing together individuals of varying backgrounds and expertise to push the science forward. The AMP initiative has supported 'team science' studies in the adult SLE population, for example, with one published study using single-cell RNA sequencing technology to examine heterogeneity of lupus nephritis and skin disease, and identifying cellular changes on skin biopsy as a potential biomarker for lupus nephritis. 11 A trans-NIH initiative (across specialties) could be another potential opportunity for multidisciplinary collaboration to understand lupus disease heterogeneity given the multiorgan involvement in cSLE. Fostering team science collaboration inclusive of cSLE (both within cSLE and SLE research in general) will be critical to advancing research and discovery in cSLE, furthering knowledge of potential targets for more effective and less toxic medications (box 1). Furthermore, although there has been advancement in cSLE trials in recent years, 13 treatment development in cSLE will likely require innovative smaller trials with specific endpoints given the rarity and heterogeneity of the disease. 14 Another potential area for collaborative research is creating purposeful synergy between existing and newly created cSLE observational registries in order to optimise the breadth and depth of the data collected. Longitudinal observational registries which capture the full breadth of patient demographics, disease manifestations, treatments and response are critical to understanding cSLE's natural history and short-term and long-term outcomes. Harmonisation of data fields can allow for comparison or combination of related registries, which may be essential when describing less common disease manifestations or treatments. These registries are even more powerful when paired with biosample collection, allowing for determination of both clinical and biologic factors which may contribute to fluctuations in disease activity and outcomes.
Although there was significant emphasis on research of the disease itself and its manifestations, patients' barriers to engaging with the healthcare system (eg, transportation to clinic, reliable methods to communicate with providers) were a less expected, yet significant finding. This echoes priorities in the 2015 National Public Health Agenda for Lupus developed by a collaborative of the LFA, the Centers for Disease Control and Prevention, the National Association of Chronic Disease, and leading lupus and public health advocates, 15 calling for improved lupus care coordination, self-management programmes, resources and identification of health disparities. Notably, SLE mortality risk is significantly impacted by race, with black females having increased mortality from SLE as compared with other individuals with SLE. 16 17 Additionally, although individuals from racial minority groups make up a large proportion of patients with SLE in the USA, white participants are over-represented in SLE randomised controlled trials. 18 Health equity research is therefore greatly needed to eliminate ongoing health disparities in SLE. 15 To address the lack of adequate representation of racial minority groups in SLE research, a 2019 conference among patients with SLE, SLE physicians, clinical trialists, treatment developers from biotechnology, social scientists, patient advocacy groups and US government representatives explored solutions for increasing diversity in SLE research studies. 19 Best practices for clinicians and researchers were developed, which included reflecting on how personal and institutional culture lead to racial bias in research questions and study design, the need to avoid making assumptions about the 'kind' of patient who would participate in a trial, and increasing recruitment of under-represented minorities to the field of rheumatology. Experts in our study expressed several opportunities to address these issues, including having a diverse research staff, including outcomes that are important to patients, and utility of funding for patient navigators (box 1).
Box 1 Opportunities for advancing childhood-onset SLE (cSLE) research
Collaboration
► Interdisciplinary collaboration across subspecialties and scientific disciplines to engage in a team science approach.
► Multicentre and multinational collaboration to develop larger representative cSLE patient cohorts with diverse disease manifestations.
► Efforts to reduce regulatory barriers to collaboration (eg, Institutional Review Board, Food and Drug Administration).
Longitudinal data
► Address gaps in clinical care, with attention to enhance standardised, comprehensive and longitudinal data collection.
► Longitudinal patient registry data collection from paediatric disease onset through adulthood.
► Data management and infrastructure support across centres/sites to share standardised electronic medical record data and link to other data sources (eg, registries, administrative data).
Patient involvement in research
► Improve healthcare delivery, with particular focus on underserved populations, to empower participation in research.
► Engage patients and families to determine meaningful research questions and outcomes and to enhance recruitment of diverse populations.
► Include patient-reported outcomes in study design.
► Include diverse staff on research teams.
Investigator support
► Increase funding opportunities for early investigators to pursue cSLE-related projects.
► Dedicated mentorship programming for cSLE research, including funded protected time for cSLE mentors.
► Facilitated networking among senior researchers and early investigators.
cSLE researchers face significant challenges-namely, funding and protected time from clinical obligations-that impede their participation in multicentre studies or their ability to procure data for registries to support longitudinal studies of diverse cohorts. As experts pointed out, this milieu discourages early career faculty from pursuing research careers, leading to fewer cSLE investigators and an inevitable lack of diversity in their research backgrounds. Academic institutions have created programmes to address this need, [20][21][22] but it remains a critical issue across paediatric subspecialties and particularly in cSLE research given the small size of the paediatric rheumatology field.
Because paediatric rheumatology is one of the smallest paediatric subspecialties, often with small institutional divisions, 23 interinstitutional mentorship is crucial for trainees and early career faculty. The ACR/CARRA Mentoring Interest Group 24 25 has established connections between individuals at different institutions for career mentorship. Even with funding (such as a mentored career development award, eg, an NIH K award), the number and availability of mentors for cSLE research are limited. Grants that protect time for mentors, K24-like funding and Request for Application opportunities related to research mentorship from the LFA or Lupus Research Alliance are potential solutions to foster investigator mentoring (box 1). Patient advocacy efforts through private foundations could also fill such a role (figure 1).
Our study has a number of strengths. To our knowledge, this is the largest qualitative study of cSLE experts regarding cSLE research priorities. The experts had diverse backgrounds, including their disciplines (eg, nephrology, dermatology, medicine/paediatrics rheumatology), years in practice and research experience. There was also international representation. A limitation is that patients were not included in the study, risking misalignment between patients' priorities and those of experts. A follow-up study is planned to include patients' perspectives, priorities and perceived barriers to being involved in research.
In conclusion, the findings of this study highlight the importance of a collaborative research framework and the interdependence of the essential components needed to advance cSLE research and patient care. These findings can inform an actionable research agenda that incorporates the complexity of foundational challenges that investigators in cSLE research face. Funding and limited infrastructure remain major limitations, yet the potential solutions posited by experts should be prioritised in future requests for applications for work related to cSLE.
The C Terminus of Rpt3, an ATPase Subunit of PA700 (19 S) Regulatory Complex, Is Essential for 26 S Proteasome Assembly but Not for Activation*
PA700, the 19 S regulatory subcomplex of the 26 S proteasome, contains a heterohexameric ring of AAA subunits (Rpt1 to -6) that forms the binding interface with a heteroheptameric ring of α subunits (α1 to -7) of the 20 S proteasome. Binding of these subcomplexes is mediated by interactions of C termini of certain Rpt subunits with cognate binding sites on the 20 S proteasome. Binding of two Rpt subunits (Rpt2 and Rpt5) depends on their last three residues, which share an HbYX motif (where Hb is a hydrophobic amino acid) and open substrate access gates in the center of the α ring. The relative roles of other Rpt subunits for proteasome binding and activation remain poorly understood. Here we demonstrate that the C-terminal HbYX motif of Rpt3 binds to the 20 S proteasome but does not promote proteasome gating. Binding requires the last three residues and occurs at a dedicated site on the proteasome. A C-terminal peptide of Rpt3 blocked ATP-dependent in vitro assembly of 26 S proteasome from PA700 and 20 S proteasome. In HEK293 cells, wild-type Rpt3, but not Rpt3 lacking the HbYX motif was incorporated into 26 S proteasome. These results indicate that the C terminus of Rpt3 was required for cellular assembly of this subunit into 26 S proteasome. Mutant Rpt3 was assembled into intact PA700. This result indicates that intact PA700 can be assembled independently of association with 20 S proteasome and thus may be a direct precursor for 26 S proteasome assembly under normal conditions. These results provide new insights to the non-equivalent roles of Rpt subunits in 26 S proteasome function and identify specific roles for Rpt3.
ATP-dependent protease complexes commonly comprise two distinct subcomplexes: a cylinder-shaped protease with internally sequestered catalytic sites and an ATPase regulatory module required for delivery of substrates to those sites (1)(2)(3). The eukaryotic 26 S proteasome represents the most structurally and functionally elaborate example of such complexes (4,5). Its protease subcomplex, the 20 S proteasome, contains two copies each of 14 different gene products arranged as four axially stacked heteroheptameric rings (6,7). Each identical outer ring contains seven different α-type subunits (α1-α7), and each identical inner ring contains seven different β-type subunits (β1-β7). Three of the seven β-type subunits feature N-terminal threonine residues that serve as catalytic nucleophiles and line an interior chamber in the center of the barrel-shaped structure (8,9). The regulatory subcomplex of 26 S proteasome, known as PA700 or 19 S regulator, contains about 20 different gene products, including six distinct ATPases associated with various activities (AAA) 2 subunits (Rpt1 to -6) (4, 10). The Rpt subunits are arranged in a hexameric ring that forms the binding interface of PA700 with the α rings of the 20 S proteasome (11)(12)(13)(14). Binding of PA700 to the 20 S proteasome results in repositioning of interlaced N-terminal peptides of α subunits that normally occlude a narrow pore in the center of the α ring (15)(16)(17)(18)(19). This conformational rearrangement opens a route for substrates to reach the otherwise inaccessible catalytic sites in the interior of the proteasome. Although short peptides and some unstructured proteins pass the opened pore by simple diffusion, most physiological substrates of the 26 S proteasome are folded proteins covalently modified with a polyubiquitin chain (20-22). Polyubiquitin serves as the principal method of targeting protein substrates to the proteasome via polyubiquitin-binding subunits of PA700, but its client substrates require additional processing by PA700 for delivery to the sites of proteolysis (23,24). Substrate processing includes unfolding, detachment from the polyubiquitin chain by resident deubiquitylating subunits, and translocation through the open pore. These coordinated activities appear mechanistically linked to one another and to Rpt-catalyzed ATP hydrolysis (21,25,26). Although molecular details of this coordination and linkage remain poorly understood, the Rpt subunits of PA700 are topologically situated and functionally suited to play a central role in proteasome function.
In addition to the obligatory role of ATP for 26 S proteasome degradation of polyubiquitylated proteins, ATP also is necessary for PA700 binding to and activation of the 20 S proteasome (27,28). However, unlike the former process, the latter requires ATP binding but not hydrolysis (21,22). Thus, the ATP-bound state of one or more Rpt subunits probably promotes a conformation in the Rpt subunit ring that optimizes its interaction with cognate binding sites on the α subunit ring of the 20 S proteasome. Considerable insight into the molecular details of binding and consequent proteasome activation has been achieved from studies of 20 S proteasome-ATPase regulatory complexes in archaea. This structurally simpler system features a 20 S proteasome composed of a homoheptameric α ring and an ATPase regulator, proteasome-activating nucleotidase (PAN), composed of a homohexameric AAA subunit ring lacking additional non-ATPase subunits (3,29,30). These properties have facilitated imaging and crystallographic analysis of the resulting complex, revealing that residues at the extreme C terminus of PAN bind to pockets between adjacent α subunits and induce proteasome gate opening (18,19,31,32). Remarkably, a seven-residue peptide corresponding to the C terminus of PAN is sufficient for both binding and activation of archaeal as well as eukaryotic proteasomes (33). The carboxyl group of the C-terminal arginine of PAN makes an essential interaction with an ε-amino side chain of a lysine residue on one α subunit, while the hydroxyl of the penultimate tyrosine residue interacts with residues on the adjacent α subunit. Although conflicting data have been presented about the exact identity of these latter contacts, there is general agreement that the interactions stabilize a proline-containing reverse loop in an open gate conformation of the proteasome (18,19,32). This general mechanism explains how the C terminus of PAN participates in proteasome binding and activation. Notably, tyrosine also is the penultimate residue in four of the six distinct Rpt subunits in eukaryotic PA700; three of the Rpt subunits share with PAN an HbYX motif (where Hb is a hydrophobic amino acid) at the last three residues. Previous work by us and others showed that the different Rpt subunits of eukaryotic PA700 have at least some non-equivalent roles with respect to proteasome binding and activation. For example, enzymatic removal of the HbYX motifs from only two (Rpt2 and Rpt5) of the six Rpt subunits of PA700 completely inhibited PA700 binding to and activation of the proteasome (33,34). Moreover, as with PAN, peptides corresponding to the C terminus of Rpt2 and Rpt5 were each sufficient to bind to and activate the 20 S proteasome in a manner that depended on an intact HbYX motif. Finally, binding of the Rpt2 and Rpt5 peptides occurred at distinct and dedicated sites on the fixed order heteromeric α ring, as judged by chemical cross-linking (34). However, a C-terminal peptide of another HbYX motif-containing subunit, Rpt3, as well as C-terminal peptides of the three non-HbYX-containing subunits, had no demonstrable proteasome-activating activity. The lack of activating function of non-activating peptides could reflect either their lack of proteasome binding or their inability to induce conformational changes required for gating after binding. The purpose of this work was to explore roles for non-activating Rpt subunits of 26 S proteasome.
EXPERIMENTAL PROCEDURES
Proteins-PA700, PA700 subassemblies (PS-1, PS-2, and PS-3), 20 S proteasome, and 26 S proteasome were purified from bovine red blood cells as described previously (21, 28, 34 -37). SUMO-Rpt peptide fusion proteins were generated by amplification of the whole pET28a-SUMO cassette with primers containing nucleotides appropriate for amino acid sequences of the desired peptides. The resulting His-tagged recombinant SUMO-Rpt peptide fusion proteins were expressed in Escherichia coli BL21 (DE3) cells at 15°C overnight and purified by affinity chromatography utilizing nickel-nitrilotriacetic acid beads (Qiagen). SUMO-Rpt-chimeric peptide fusion proteins were produced by generating two point mutations in the HbYX motif of a SUMO-Rpt peptide fusion protein sequence. The SUMO-chimeric peptide fusion proteins were expressed and purified as described for the SUMO-Rpt peptide fusion proteins.
Peptide Synthesis-Peptides corresponding to the sequences (or variants thereof) of C termini of Rpt subunits of PA700 were synthesized using Fmoc (N-(9-fluorenyl)methoxycarbonyl) chemistry and purified using HPLC by the Protein Core Facility at the University of Texas Southwestern Medical Center. Sequences of all peptides were verified by mass spectrometry. The sequences of these peptides from N to C termini are as follows:
Proteasome Activity and Activation Assays-Proteasome activity was measured by determining rates of enzymatic cleavage of 7-amino-4-methylcoumarin (AMC) from peptide substrates Suc-LLVY-AMC, Suc-LLE-AMC, and benzyloxycarbonyl-VLR-AMC, as described previously (21). Standard assay conditions included 45 mM Tris-HCl, pH 8.0, 5 mM β-mercaptoethanol, 15 nM latent 20 S proteasome, and 200 μM substrate in a volume of 50 μl. Incubations were carried out at 37°C for 21 min in a Biotek FL600 fluorescence plate reader with filters at 380-nm excitation/460-nm emission. AMC fluorescence was monitored once per min during the assay, and progress curves were analyzed with kinetic software. Proteasome activation by Rpt peptides and SUMO-Rpt peptide fusion proteins was determined similarly but included preincubation of 20 S proteasome with peptides or SUMO proteins for 15 min at 37°C (34). Proteasome activity is expressed as arbitrary fluorescent units produced/min. Routine control assays included reactions without proteasome. Proteasome activity against a protein substrate, [methyl-14C]casein, was determined as described previously (37). Other details of individual experiments are provided in the appropriate figure legends. In some experiments, semiquantitative measures of proteasome activity were obtained by overlay of peptide substrates in situ on proteins separated in native 4% polyacrylamide gels, as described previously (38). After incubation at 37°C for 10-30 min, AMC at the position of the protease in the gel responsible for its production was visualized by UV light.
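The activity calculation described above (arbitrary fluorescence units of AMC produced per minute, taken from once-per-minute plate-reader readings over the 21-min assay) amounts to fitting a slope to each progress curve. The short sketch below illustrates that calculation and a fold-activation comparison; it is only an illustration of the kind of analysis the authors attribute to their kinetic software, and the example readings, the function name, and the fold-activation step are assumptions rather than part of the original protocol.

# Minimal sketch: estimate proteasome activity (AFU/min) as the least-squares
# slope of an AMC fluorescence progress curve sampled once per minute, then
# express peptide-stimulated activity as fold-activation over a control.
# All numbers below are hypothetical example data.

def rate_afu_per_min(readings, interval_min=1.0):
    """Least-squares slope of fluorescence (AFU) versus time (min)."""
    n = len(readings)
    times = [i * interval_min for i in range(n)]
    t_mean = sum(times) / n
    f_mean = sum(readings) / n
    num = sum((t - t_mean) * (f - f_mean) for t, f in zip(times, readings))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

control = [100 + 12 * t for t in range(22)]        # latent 20 S proteasome alone
stimulated = [100 + 55 * t for t in range(22)]     # 20 S proteasome plus activating peptide

basal_rate = rate_afu_per_min(control)
stim_rate = rate_afu_per_min(stimulated)
print(f"basal: {basal_rate:.1f} AFU/min")
print(f"stimulated: {stim_rate:.1f} AFU/min")
print(f"fold activation: {stim_rate / basal_rate:.1f}")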
26 S Proteasome Assembly Assay-Assembly of 26 S proteasome from purified 20 S proteasome and PA700 was conducted by preincubating 20 S proteasome with PA700 in 45 mM Tris-HCl, pH 8.0, 5.6 mM DTT, 10 mM MgCl2, and 100 μM ATP at 37°C for 30 min (35). 20 S proteasome and PA700 concentrations for given experiments are provided in the appropriate figure legends. Samples were either assayed directly for proteasome activity or subjected to native PAGE, after which gels were stained for protein and assayed for in situ proteasome activity, as described above.
Rpt Peptide-20 S Proteasome Binding Assays-Binding of Rpt peptides to 20 S proteasome was determined by pull-down assays utilizing purified bovine 20 S proteasome and recombinant His-tagged SUMO-Rpt peptide fusion proteins. In typical assays, 100 μM His-tagged SUMO fusion protein was incubated with 214 nM 20 S proteasome at 37°C for 15 min in 50 mM Tris-HCl, pH 7.6, and 1 mM β-mercaptoethanol in a volume of 100 μl. Twenty-five μl of nickel beads were added and mixed for 2 h at 4°C. The beads were washed with 50 mM Tris-HCl, pH 7.6, 1 mM β-mercaptoethanol and eluted with 300 mM imidazole. Eluted proteins were separated by SDS-PAGE and Western blotted with antibodies against 20 S proteasome.
Chemical Cross-linking-Chemical cross-linking of Rpt peptides was conducted by methods similar to those described previously (34). Biotin- or fluorescein-containing DOPA peptides described above were incubated for 10 min at room temperature with 720 nM 20 S proteasome, 20 mM Tris-HCl, pH 7.6, 20 mM NaCl, 1 mM EDTA, and 10% glycerol (Buffer H) in a final volume of 20 μl. Cross-linking was initiated by the addition of 10 mM sodium periodate and quenched after 30 s by 50 mM β-mercaptoethanol. Cross-linked products were detected by either fluorescence spectrometry (488-nm excitation/523-nm emission) using a Typhoon 9410 scanning imager (GE Healthcare) or Western blotting with HRP-linked neutravidin or infrared dye-labeled streptavidin (see below) after SDS-PAGE. In some experiments, samples were detected by these methods after two-dimensional gel electrophoresis. In other experiments, samples of cross-linked proteins were purified by HPLC using a Jupiter C4 5-ml reverse phase column (Phenomenex). Samples were applied to the column in 0.05% TFA and eluted with a gradient of acetonitrile at a flow rate of 1 ml/min. In preliminary experiments, we determined that fluorescent subunits representing the cross-linked products eluted between 45 and 50% acetonitrile. Therefore, the gradient was developed from 0 to 45% acetonitrile in 15 min and from 45 to 50% in 30 min. Column fractions of 1.0 ml were collected, dried by vacuum, and redissolved in either SDS sample buffer or isoelectric focusing sample buffer (7 M urea, 2 M thiourea, 4% CHAPS, 65 mM dithiothreitol, Pharmalytes (pH 3-10), and bromphenol blue). Isoelectric focusing was conducted using a ReadyStrip™ pH 3-10 support (Bio-Rad). Samples subjected to cross-linking with biotin-containing peptides were enriched for cross-linked product by binding to monomeric avidin beads after exposure to denaturing conditions. Cross-linking was performed as described above. Non-cross-linked Rpt3 peptides were removed by multiple washes through a Microcon-YM100 centrifugal filter in Buffer H with 0.5 M NaCl and 0.05% Tween 20. The samples were exposed to 7 M guanidine HCl for 30 min at room temperature. The samples were diluted to decrease the guanidine concentration to 1 M and mixed with monomeric avidin beads in Buffer H containing 0.5 M NaCl and 0.05% Tween 20. Beads were washed with the same buffer, and retained proteins were eluted in either SDS sample buffer or isoelectric sample buffer. Samples were separated by either one- or two-dimensional gels and Western blotted with either infrared dye-labeled streptavidin or with antibodies against selected 20 S proteasome subunits and the respective infrared dye-labeled secondary antibody to visualize the proteins of interest utilizing an Odyssey infrared imaging system (Li-Cor).
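The two-segment elution described above (0-45% acetonitrile over the first 15 min, then 45-50% over the next 30 min, at 1 ml/min with 1.0-ml fractions) can be written out explicitly. The sketch below simply evaluates the programmed acetonitrile percentage at the end of each one-minute fraction; it is an illustration of the stated gradient schedule, not software used in the original study.

# Illustrative sketch of the reverse phase HPLC gradient described above:
# 0-45% acetonitrile over 15 min, then 45-50% over the next 30 min,
# at 1 ml/min with 1.0-ml fractions (one fraction per minute).

def acetonitrile_percent(t_min):
    """Programmed acetonitrile percentage at time t_min for the two-segment gradient."""
    if t_min <= 15.0:
        return 45.0 * t_min / 15.0                  # ramp from 0 to 45% over 15 min
    if t_min <= 45.0:
        return 45.0 + 5.0 * (t_min - 15.0) / 30.0   # ramp from 45 to 50% over 30 min
    return 50.0

# Cross-linked subunits eluted between 45 and 50% acetonitrile, i.e., in the
# fractions collected after the first 15 min of the gradient.
for fraction_end_min in range(1, 46):
    pct = acetonitrile_percent(fraction_end_min)
    print(f"fraction ending at {fraction_end_min:2d} min: {pct:.1f}% acetonitrile")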
Transient Expression of FLAG-Rpt3 in HEK293 Cells-HEK293 cell lines were maintained in Dulbecco's modified Eagle's medium (Invitrogen) containing high glucose and glutamine, supplemented with 10% fetal bovine serum in the presence of 5% CO2 at 37°C. HEK293 cells were transfected at ~60% confluence with cDNA for either FLAG-human Rpt3 or FLAG-human Rpt3 lacking the last three C-terminal residues subcloned into the pIRESpuro3 vector (Clontech) using FuGene 6 reagent (Roche Applied Science). Forty-eight h after transfection, cells were washed with phosphate-buffered saline and harvested with buffer consisting of 50 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 1 mM β-mercaptoethanol, 1 mM ATP, and protease inhibitor mixture (Roche Applied Science). Whole cell extracts were prepared by 15 passages through a 27-gauge needle and centrifuged for 20 min. Expression of Rpt proteins was determined by Western blot analysis using anti-FLAG M2 antibody (Sigma) and anti-Rpt3 antibody (Boston Biochem).
Affinity Purification of FLAG-Rpt Proteins from HEK293 Cell Extracts-Approximately 7 mg of a whole cell extract was mixed gently for 2 h at 4°C with 100 μl of anti-FLAG M2-agarose beads (Sigma) in 50 mM Tris-HCl, pH 7.5, 100 mM NaCl, 1 mM β-mercaptoethanol, 1 mM ATP, 5 mM MgCl2, 10% glycerol, and 0.1% Nonidet P-40. The beads were harvested by centrifugation and washed three times with the same buffer. Bound proteins were eluted overnight at 4°C with 2 bed volumes of binding buffer containing 200 μg/ml FLAG peptide (Sigma).
RESULTS
The C Terminus of Rpt3 Binds to but Does Not Activate the 20 S Proteasome-We previously discovered that isolated peptides corresponding to the C termini of PA700 subunits Rpt2 and Rpt5, but not those corresponding to the C termini of the other four Rpt subunits of this regulatory complex, stimulated 20 S proteasome-catalyzed hydrolysis of model
substrates by a mechanism that involves enhanced gating of the substrate entry channel (34). The differential effects of the various Rpt C-terminal peptides could reflect their differential binding to the proteasome or their differential ability to promote gate opening and substrate passage after binding. To distinguish between these possibilities, we directly tested the relative binding of the Rpt peptides to the proteasome. We expressed and purified recombinant proteins in which the C-terminal 10 residues of each Rpt subunit were appended to the C terminus of His-tagged SUMO-1, a protein that otherwise does not interact detectably with the proteasome. After incubation, 20 S proteasome bound by these fusion proteins was isolated by pull-down assays on Ni 2ϩ beads and detected by Western blotting. As expected, the 20 S proteasome bound to SUMO proteins containing the C terminus of Rpt2 and of Rpt5 (Fig. 1A, lanes 3 and 6). Surprisingly, however, 20 S proteasome also bound to SUMO containing the C terminus of Rpt3 (Fig. 1A, lane 4), a subunit whose C terminus does not enhance proteasome activity. The Rpt3-containing SUMO protein consistently pulled down more 20 S proteasome than did the Rpt2 and Rpt5-containing proteins, suggesting that it bound with greater affinity to the proteasome than did the Rpt2-and Rpt5-containing proteins. The proteasome failed to bind detectably to SUMO proteins with C termini of the remaining non-activating Rpt subunits (Rpt1, Rpt4, and Rpt6). Binding of the 20 S proteasome to SUMO-Rpt3 was blocked by excess free Rpt3 C-terminal peptide but not by excess free Rpt5 C-terminal peptide (Fig. 1B, lanes 2-4) or by excess free Rpt2 C-terminal peptide (Fig. 1C). Likewise, the Rpt3 peptide did not block binding of the SUMO-Rpt5 to the proteasome (Fig. 1B, lanes 5 and 6). These results indicate that binding of the Rpt3 peptide was specific and probably occurred at a site unique from those bound by Rpt2 and Rpt5. As with the binding of Rpt2 and Rpt5, binding of the SUMO-Rpt3 protein depended on the presence of the last three residues. Thus, proteins lacking the last two or three residues had no detectable proteasome binding, whereas a protein lacking the only the last residue displayed detectable but greatly diminished binding (Fig. 1D). Binding of SUMO-Rpt3 to the 20 S proteasome was demonstrated independently in a gel shift assay (Fig. 1E). Thus, the SUMO-Rpt3 fusion protein with an intact C-terminal peptide, but not that without the last three residues, retarded the migration of 20 S proteasome during native gel electrophoresis (Fig. 1E, lanes 1-4). Similar results were obtained with SUMO-Rpt5 proteins (Fig. 1E, lanes 5-7). In-gel proteasome activity assays also reflected the differential ability of the C termini of Rpt5 and Rpt3 to activate proteasome function. Collectively, these results confirm the differential ability of the C termini of different Rpt subunits to bind to the proteasome and identify Rpt3 as a PA700 subunit that binds to the proteasome but does not directly activate proteasome hydrolysis of model peptide substrates.
The C Terminus of Rpt3 Binds to a Dedicated Site on the 20 S Proteasome-Previously, we showed that activating peptides from the C termini of Rpt2 and Rpt5 chemically cross-link to distinct, dedicated, and identifiable subunits of the 20 S proteasome (34). Consistent with the results presented above, these findings indicate that the fixed order heterohexameric ring of Rpt subunits of PA700 binds to the fixed order heteroheptameric ring of 20 S α subunits with an invariant inter-ring subunit registration. This model predicts that Rpt3 also should bind at a unique and dedicated site on the α ring of the 20 S proteasome and cross-link to a specific subunit. To test this hypothesis, we applied to Rpt3 the same general chemical cross-linking strategy employed previously for Rpt2 and Rpt5. We synthesized DOPA-containing peptides corresponding to the C terminus of Rpt3 that either contained or lacked the last three residues and included either biotin or fluorescein for detection of cross-linked products. Cross-linking of the intact C-terminal peptide with 20 S proteasome produced one major product, which was similar by each detection method after SDS-PAGE (Fig. 2, A and B). In some experiments, a second band, whose intensity varied among independent cross-linking reactions, also was detected. No cross-linked product was detected with an Rpt3 peptide lacking the last three amino acids, indicating that the cross-linking was specific for conditions required for peptide binding. We exploited respective characteristics of the fluorescein and biotin tags of Rpt peptides for independent identification of the Rpt cross-linked subunit of 20 S proteasome. For ease of detection of the cross-linked product, we continued subsequent analysis with the fluorescein-tagged peptide. For more facile enrichment of the cross-linked product, we exploited the biotin moiety. With the fluorescein-labeled peptide, a single cross-linked product also was detected by two-dimensional gel electrophoresis; in some experiments, such as that shown in Fig. 2C, the product appeared as two or three closely separated spots. However, the subunit complexity of 20 S proteasome from the unenriched sample and low protein content on these gels prevented us from further identification of the cross-linked product at this stage. Therefore, we subjected cross-linked 20 S proteasome to reverse phase HPLC to enrich and purify the modified subunit. SDS-PAGE of gradient fractions showed that this method separated most proteasome subunits from one another and from the major fluorescently labeled band (Fig. 3A). We subjected this band to two-dimensional PAGE, on which, like the unenriched sample, it usually appeared as two or three closely separated spots whose position did not correspond to that of any unmodified proteasome subunit. The fluorescent spot was extracted, digested with trypsin, and subjected to mass spectrometry, which identified peptides of only one 20 S proteasome subunit, α1 (PSMA6), in repeated independent experiments (supplemental Table 1); in one experiment, peptides of the α7 (PSMA3) subunit were detected in addition to α1. No proteasome peptides were identified when an equivalent area of the gel was analyzed from a 20 S proteasome sample subjected to identical treatment with an Rpt3 peptide lacking the last three residues (data not shown).
We attempted to confirm the identity of α1 as the cross-linked protein by Western blotting, but the only available antibodies were insufficiently sensitive to detect the low protein content at the position corresponding to the fluorescent spot. No antibody to other 20 S proteasome subunits produced a detectable signal at this position (data not shown). Non-fluorescent α1 subunit was identified by Western blotting at its expected position in the gel; it was similar for the intact and truncated cross-linking peptide and presumably represents the non-cross-linked portion of the α1 protein.
To confirm the identification of α1 as the cross-linked product of the Rpt3 peptide, we utilized the biotin-containing peptide to enrich the resulting cross-linked product on monomeric avidin beads under denaturing conditions, as described under "Experimental Procedures." In contrast to the analysis described above, this enrichment method permitted isolation of sufficient product for subsequent analysis by Western blotting after one- and two-dimensional PAGE. As shown in Fig. 3E, cross-linking with the intact peptide, but not with the C-terminally truncated peptide, produced a biotin-labeled band that migrated indistinguishably from a modified α1 band, detected by Western blotting on SDS-PAGE. Likewise, after two-dimensional PAGE, Western blotting for the α1 subunit revealed modified spots in positions similar to those observed for the fluorescently cross-linked protein in samples cross-linked with the intact peptide but not with the truncated peptide. Moreover, biotin was detected only in the modified spots, which coincided precisely with those detected by the anti-α1 antibody (Fig. 3F). Control experiments with antibodies against several other α subunits failed to detect modified proteins as a consequence of cross-linking in Western blots of either the one- or two-dimensional gels (Fig. 3E) (data not shown). The spots detected coincidentally by infrared dye-labeled streptavidin (for biotin) and the α1 antibody were extracted, digested with trypsin, and subjected to mass spectrometry. As with the analogous experiment described above, peptides of the α1 subunit were selectively identified (supplemental Table 1). Collectively, these results indicate that α1 is the probable cross-linked product of the Rpt3 C-terminal peptide. This subunit is distinct from subunits identified previously as cross-linked products of C-terminal peptides of Rpt2 (α7) and Rpt5 (α4). These results support the general model for a fixed and distinct registration between subunits of the interacting heteromeric Rpt subunits and α-subunit rings (see "Discussion").
Binding of Rpt3 Does Not Affect Proteasome Activation by Rpt2 or Rpt5-C-terminal peptides of Rpt2 and Rpt5 activate substrate hydrolysis by the 20 S proteasome, and their effects are either additive (with short peptide substrates) or synergistic (with longer protein substrates). Such results are consistent with the binding of these peptides to distinct sites on the proteasome and support a model in which the substrate access pore can be gated to variable degrees by multiple independent binding events (40). Therefore, we tested whether Rpt3 could modulate the gating effects of Rpt2 and/or Rpt5, despite its inability to induce gating independently. Rpt3 had no proteasome-activating activity by itself or in combination with Rpt2 and/or Rpt5, regardless of the substrate tested (Fig. 4). Thus, binding of Rpt3 C-terminal peptide neither opens the substrate access pore directly nor modulates the effect of other Rpt C-terminal peptides that do so. These results, however, monitor the relative roles of physically separated binding molecules and may not reflect the roles and effects of these peptides when they function in the context of an intact PA700 complex (see "Discussion").
Features of Proteasome Binding and Activation Are Determined by Both the HbYX Motifs and Adjacent Residues of the Rpt C Termini-Previous work by us and others has established and emphasized the important role of the last three amino acids of C-terminal Rpt peptides in proteasome binding and activation by Rpt2 and Rpt5 (33,34). These residues (LYL and YYA for Rpt2 and Rpt5, respectively) conform to a motif of HbYX. This motif also is present in Rpt3 (FYK) and, in an imperfect form, in Rpt1 (TYN). Thus, various HbYX motif-containing peptides display distinct functional properties with respect to proteasome binding and activation. Although such disparity probably reflects differences among the HbYX motifs and their cognate binding sites on the α ring of the 20 S proteasome, features of Rpt C-terminal peptides other than the HbYX motif may provide additional determinants of proteasome binding and/or activation. To explore this possibility, we synthesized "chimeric" peptides containing the HbYX motif (hereafter denoted as the "tail") of given Rpt subunits and the adjacent N-terminal seven residues (hereafter denoted as the "head") of other Rpt subunits. We also produced recombinant fusion proteins of SUMO and the corresponding chimeric peptides. We selected for this analysis examples of C-terminal peptides that (i) both bind to and activate the proteasome (e.g. Rpt5); (ii) bind to but do not activate the proteasome (e.g. Rpt3); and (iii) neither bind to nor activate the proteasome (e.g. Rpt1). First, we determined the ability of various SUMO-Rpt chimeric peptide fusion proteins to bind to the 20 S proteasome in pull-down assays analogous to those used with their wild-type counterparts (Fig. 5). Neither SUMO-Rpt3 head-Rpt1 tail (B, lane 3) nor SUMO-Rpt5 head-Rpt1 tail (B, lane 6) displayed appreciable proteasome binding, thereby supporting a critical role of the HbYX motif of wild-type Rpt3 and Rpt5 C-terminal peptides for their respective proteasome binding. Surprisingly, however, chimeric peptides consisting of an Rpt3 or Rpt5 tail with an Rpt1 head displayed proteasome binding properties suggestive of an important influence of the Rpt1 head. Thus, the Rpt1 head reduced the proteasome binding expected of the Rpt3 tail (compare lanes 2 and 4) but increased the proteasome binding expected of the Rpt5 tail (compare lanes 5 and 7). The influence of the head region on proteasome binding also was demonstrated by the lack of binding of the chimeric peptide consisting of an Rpt3 head and an Rpt5 tail (i.e. a peptide containing both a head and a tail of binding peptides; compare lanes 4 and 8). In contrast, a peptide consisting of an Rpt5 head and an Rpt3 tail featured a proteasome binding affinity similar to that of Rpt3. These results further highlight the obligatory role of specific HbYX motifs for proteasome binding but demonstrate the influence of additional elements of the C terminus on this process.
To explore the relationship between binding of the HbYX motif and proteasome activation, we compared the effects of the various SUMO-chimeric peptide fusion proteins on proteasome activation with those of their wild-type counterparts and with the binding features of these chimeric peptides. As expected, no non-binding chimeric peptide activated 20 S proteasome catalysis. Moreover, chimeric peptides containing both a head and a tail of non-activating Rpt subunits (e.g. Rpt1 and Rpt3), regardless of their binding capacity, did not activate the proteasome. Instead, proteasome activation by chimeric proteins required the tail (HbYX motif) of a normally activating Rpt peptide. Thus, neither SUMO-Rpt5 head-Rpt1 tail, nor SUMO-Rpt5 head-Rpt3 tail activated the proteasome, although each could bind. In contrast, SUMO-Rpt1head-Rpt5 tail activated the proteasome to a greater extent than did SUMO-Rpt5, an effect that mirrored the relative proteasome binding of these proteins. These results indicate that proteasome binding is influenced by features of both the HbYX motif and the adjacent residues of specific Rpt subunits but that proteasome activation is restricted to the binding of a normally activating HbYX motif (e.g. Rpt5). To test this further, we synthesized a series of peptides consisting of an Rpt5 head and the tail of each of the six Rpt subunits. Only chimeric peptides containing the activating HbYX tails Rpt2 and Rpt5 stimulated 20 S proteasome activity (Fig. 5D). Interestingly, the Rpt5 head-Rpt2 tail peptide stimulated the proteasome to a greater extent than the wild-type Rpt2 peptide but to a lesser extent than the wild type Rpt5 (Fig. 5D). These results provide additional evidence for the influence of the head region on the function of gating-competent HbYX motifs.
Rpt3 C-terminal Peptide Attenuates 26 S Proteasome Assembly in Vitro-The data presented above identify the C terminus of Rpt3 as an important binding element of intact PA700 to the proteasome. To test the role of Rpt3 in binding of intact PA700 to the 20 S proteasome, we examined the effect of a C-terminal peptide of Rpt3 on the ATP-dependent in vitro reconstitution of 26 S proteasome from purified PA700 and 20 S proteasome. Both Rpt3 peptide (Fig. 6A) and the SUMO-Rpt3 fusion protein (data not shown) inhibited the PA700-dependent activation of the 20 S proteasome, an indirect monitor of 26 S proteasome assembly. The inhibitory effect was dependent on peptide concentration and required the last three residues. Inhibition of assembly of activated 26 S proteasome activation by the intact Rpt3 C-terminal peptide also was demonstrated by native PAGE (Fig. 6B). The Rpt3 peptide had no effect on the activity of intact purified 26 S proteasome, indicating that the peptide did not exert its effect in the assembly assay by inhibiting the activity of assembled 26 S proteasome or by promoting 26 S proteasome disassembly (Fig. 6, C and D). These results suggest that the isolated Rpt3 peptide functions as a dominant negative inhibitor of 26 S proteasome assembly by competitively blocking binding of intact PA700 to the 20 S proteasome. Remarkably, this effect is manifested despite the presence of at least two other PA700 subunits (Rpt2 and Rpt5) with the capacity to bind 20 S proteasome (see "Discussion"). Previously, we showed that 26 S proteasome also could be assembled in vitro from 20 S proteasome and three subcomplexes that collectively form intact PA700 (35). The C-terminal Rpt3 peptide but not the peptide lacking the last three residues blocked assembly of 26 S proteasome from these PA700 subassemblies (Fig. 6E).
The C Terminus of Rpt3 Is Essential for Assembly of 26 S Proteasome in Intact Cells-To evaluate the relative role and importance of the C terminus of Rpt3 to 26 S proteasome assembly in intact cells, we transfected HEK293 cells with expression vectors for either FLAG-tagged wild-type Rpt3 or FLAG-tagged Rpt3 lacking the last three C-terminal residues. We analyzed cells in which expressions of these proteins were approximately equal to one another and equal to or less than that of endogenous Rpt3 (Fig. 7A). The two Rpt3-expressing cell types were indistinguishable from one another and from non-transfected HEK293 cells by general morphological features and by rates of growth (data not shown). They also had similar overall proteasome activity (Fig. 7A). In non-transfected control cells, endogenous Rpt3 displayed a trimodal distribution when soluble extracts were subjected to glycerol density gradient centrifugation. Most of the Rpt3 protein sedimented in fractions characteristic of the 26 S proteasome. Smaller amounts were found in slower sedimenting fractions corresponding to free PA700 and other lower molecular weight complexes (Fig. 7B, top). FLAG-tagged wild-type Rpt3 displayed a distribution pattern that was qualitatively similar to that of endogenous Rpt3, although proportionally more exogenous protein was distributed to the slowest sedimenting complexes (Fig. 7B, middle). The reasons for and significance of this quantitative distinction are unclear. Nevertheless, an appreciable portion of expressed wild-type FLAG-Rpt3 was assembled normally into 26 S proteasome as judged by its sedimentation position in the glycerol gradient and by anti-FLAG immunoprecipitation of proteins with structural and functional features of 26 S proteasome (Fig. 7, B and C; see below). In contrast, little or no detectable FLAG-tagged Rpt3 protein lacking C-terminal residues was present in gradient fractions corresponding to the 26 S proteasome and instead accumulated in slower sedimenting fractions corresponding
to those characteristic of PA700 and smaller subcomplexes (Fig. 7B, bottom). Moreover, although immunoprecipitation was equally efficient for the wild-type and mutant Rpt3 proteins, the resulting immunoprecipitates differed significantly in other features. For example, FLAG immunoprecipitation from extracts of cells expressing wild-type Rpt3 isolated a protein with features characteristic of intact 26 S proteasome as judged by Western blotting of representative component subunits (Fig. 7C, left) (data not shown), migration on native PAGE (Fig. 7C, middle), and proteasome activity (Fig. 7C, middle and right). In contrast, FLAG immunoprecipitation from extracts of cells expressing mutant Rpt3 isolated a protein with subunits characteristic of PA700 but without 20 S proteasome subunits or proteasome activity. Collectively, these results show that lack of an intact C terminus prevented Rpt3 from incorporation into 26 S proteasome and that the contributions of intact binding elements of Rpt2 and Rpt5 were not sufficient to overcome this deficiency. These results are consistent with the ability of isolated C-terminal Rpt3 peptide to attenuate binding of intact PA700 to the proteasome and highlight an important role of Rpt3 binding in 26 S proteasome assembly.
DISCUSSION
The results presented here reveal new details about structural and functional interactions between the 20 S proteasome and PA700, two multisubunit subcomplexes that compose the 26 S proteasome. PA700 and 20 S proteasome bind at an axial interface of two different heteromeric rings. PA700 contributes a heterohexameric ring of AAA subunits (Rpt1 to Rpt6), whereas the 20 S proteasome contributes a heteroheptameric ring of α-type subunits (α1 to α7). Thus, the interaction between subcomplexes of the eukaryotic 26 S proteasome has greater structural complexity than that between subcomplexes of archaeal proteasome complexes featuring interacting rings of homomeric proteins. Previous work has established an important role for the C termini of certain Rpt subunits for binding of PA700 to the proteasome and consequent proteasome gating (33,34). However, the relative roles of the different Rpt subunits in these processes remain uncertain. The clearest examples of Rpt C termini that interact directly with the proteasome are those of Rpt2 and Rpt5. Each features an HbYX motif that binds to pockets between specific adjacent α subunits of the 20 S proteasome ring. Although the structure of intact 26 S proteasome has not been solved at atomic resolution, possible explanations for the nonequivalent roles of the C termini of different Rpt subunits in proteasome binding and activation have been provided by structural studies of heterologous, artificially engineered, and simpler archaeal model systems (18,19,31,32,41). For example, the significance of the HbYX motif has been illustrated by showing the atomic details of how these residues from PAN and certain Rpt subunits interact with proteasome subunits to promote gate opening (19,31,32). Despite the considerable insight gained by these studies, a comprehensive molecular understanding of the relative structure-function relationships of the Rpt subunits for proteasome binding and activation of authentic 26 S proteasome remains elusive. Nevertheless, the non-equivalent capacity of different Rpt subunits to bind to and activate the proteasome must depend on both the features of individual Rpt C-terminal residues and the specific α subunits that create their respective binding pockets. Thus, whereas Rpt2 and Rpt5 bind to gating-competent sites on the proteasome, Rpt3 probably binds to a site that cannot directly promote gating. The lack of proteasome activation by an isolated peptide, however, does not exclude its role in proteasome gating when it is part of the intact PA700 complex.
The current data provide initial information about the influence of residues upstream of the HbYX motif in proteasome binding and activation. Goldberg and colleagues (33) previously established a seven-residue minimum length requirement for proteasome binding and activation of features of the HbYX motif peptide of PAN. Multiple substitutions for residues N-terminal to the HbYX motif had little effect on binding and activation, suggesting that they did not make identity-specific contributions to these processes. In contrast, the results presented here show that the identity of residues adjacent to the HbYX motif of given Rpt peptides can have appreciable influence on proteasome binding and/or activation. Although an HbYX motif was always necessary for binding, alterations to adjacent residues could either diminish or enhance the apparent affinity of this effect. Likewise, residues adjacent to the HbYX motif had significant effects on proteasome activation. It is unclear from the current data whether these various effects reflect general structural features of the substituted peptides or features specific to the normal func-tion of given Rpt subunits. Information about the exact binding sites of chimeric peptides and their relationship to the normal binding sites of each component will be required for complete interpretation of these data.
The identification of α1 as the proteasome subunit to which Rpt3 specifically cross-links extends our previous results that identified α7 and α4 as the cross-linked products of Rpt2 and Rpt5 peptides, respectively (34). These collective results support other data indicating that different Rpt C termini bind to different and dedicated sites on the 20 S proteasome. However, these cross-linked products do not necessarily represent the subunits to which their respective HbYX motif residues directly bind because the cross-linking peptides' reactive DOPA residue is located up to 10 amino acids away from this site. Thus, it is not certain that these data can be used to fix the registration of interacting 20 S proteasome and PA700 rings, each of which is composed of subunits with invariant order (12,42). Our attempts to cross-link Rpt peptides in which the DOPA residue was located closer to the HbYX motif were unsuccessful. This could have many causes but may reflect the importance of the identity of residues adjacent to the HbYX motif for proper binding.
FIGURE 7. The C terminus of Rpt3 is required for assembly of 26 S proteasome. HEK293 cells were transfected with expression vectors without insert (Mock) or with inserts for either FLAG-tagged wild-type Rpt3 (Rpt3) or FLAG-tagged Rpt3 lacking the last three C-terminal residues (Rpt3−3C), as described under "Experimental Procedures." A, whole cell extracts were Western blotted for the indicated proteins (top) and assayed for hydrolysis of Suc-LLVY-AMC (bottom). Activity assays were normalized for total extract protein content and represent mean values of triplicate assays ± S.D. B, extracts from non-transfected cells (Control) and from indicated Rpt3-expressing cells were subjected to glycerol density gradient centrifugation as described under "Experimental Procedures." Fractions were Western blotted for the indicated proteins. The arrows indicate the normal peak of sedimentation profile for purified PA700 and 26 S proteasome (data not shown). C, extracts of the indicated cells were subjected to immunoprecipitation with anti-FLAG beads as described under "Experimental Procedures." Immunoprecipitates were subjected to the following: Western blotting (WB) for the indicated antigens, including FLAG, the β5 subunit of 20 S proteasome, Rpt2, Rpt5, and Rpn12 (left); native PAGE, followed by silver staining (middle; arrows indicate known migration positions of purified singly and doubly capped 26 S proteasome); and proteasome activity assays using Suc-LLVY-AMC as substrate (right). Data represent mean values of triplicate assays ± S.D. (error bars) and were normalized for FLAG content. Similar results for data in each panel were obtained in three separate experiments.
The current results demonstrate that the isolated C-terminal peptide of Rpt3 blocks the in vitro assembly of 26 S proteasome from intact PA700 and 20 S proteasome. This effect most likely results from competition of the Rpt3 peptide with the C terminus of the intact Rpt3 subunits in PA700 and indicates that binding contributions of Rpt2 and Rpt5 in intact PA700 are not sufficient to overcome this inhibition. Thus, diminished binding of only one of several competent binding elements of intact PA700 can severely impair overall 26 S proteasome assembly. Although our current studies have focused on Rpt3, it is likely that analogous results would be achieved by interference with the binding of Rpt2 and Rpt5. In fact, results compatible with this prediction have been obtained in previous independent studies in which enzymatic modification of the C terminus of either Rpt2 or Rpt5 was sufficient to inhibit 26 S proteasome assembly in vitro (34).
Additional evidence for a critical role of the HbYX motif of Rpt3 in 26 S proteasome assembly was obtained in intact cells. Unlike wild-type Rpt3, mutant Rpt3 lacking this motif was excluded from 26 S proteasome. Consistent with the biochemical data noted above, this result indicates that other binding-competent Rpt subunits (i.e. Rpt2 and Rpt5) are unable to overcome the binding deficiency of mutant Rpt3 with respect to its incorporation into 26 S proteasome. We suspect that HbYX deletion mutants of Rpt2 and Rpt5 will be similarly defective in their cellular incorporation into 26 S proteasome. Our results on the role of Rpt3 in 26 S proteasome assembly and activation appear to conflict with those of others obtained using a different experimental design and system. For example, a point mutation in the penultimate tyrosine residue or the deletion of the C-terminal lysine residue of Rpt3 each diminished the activity but not the cellular assembly of yeast 26 S proteasome (33). In a similar study, yeast expressing Rpt3 lacking a single C-terminal residue showed reduced but not abolished assembly and activation of 26 S proteasome (33,43). Reduced assembly of the single-residue deletion mirrors the effect observed for this same modification in our pull-down assays. In general, these more limited perturbations of the HbYX motif may produce less severe effects on these processes than does complete truncation.
In both previous and current work, we have established that 26 S proteasome can be assembled in vitro by ATP-dependent reconstitution from purified 20 S proteasome and PA700 (27,28). It is unclear, however, whether this process mimics the physiological pathway of 26 S proteasome assembly. In fact, several recent reports provide evidence that intact PA700 may not be a direct intermediate of the cellular 26 S proteasome assembly pathway but rather that 26 S proteasome is formed by sequential binding of multiple subassemblies of PA700 to 20 S proteasome, which would serve as a template for PA700 formation (43,44). Notably, three described subassemblies in these studies each contained two of the six different Rpt subunits including one HbYX motif subunit (Rpt2, Rpt3, and Rpt5) and one non-binding subunit (Rpt1, Rpt6, and Rpt4), respectively (45)(46)(47)(48). Independently, we purified three subassemblies of PA700 that collectively account for all component PA700 subunits (35). Each had the same content of Rpt subunits found in several of the aforementioned cellular studies, and two were identical in overall composition to cellular assembly intermediates found by oth-ers (46,49). Although we have not investigated the physiological significance of these subassemblies in detail, we note that they can be reconstituted in vitro into functional PA700 in the absence of 20 S proteasome and into 26 S proteasome in the presence of 20 S proteasome. The former result indicates that the 20 S proteasome is not an obligatory template for PA700 formation. Moreover, the cellular studies described here indicate that the C-terminal mutant Rpt3 protein defective in 26 S proteasome assembly accumulated as intact PA700. Thus, the C-terminal mutation of Rpt3 prevented only assembly into 26 S proteasome and not into intact PA700. Although this effect could reflect a direct decrease in binding affinity of the truncated protein for the proteasome, it also could be mediated by indirect mechanisms. For example, recent work has identified multiple Rpt-binding proteins that serve as 26 S proteasome assembly chaperones (45)(46)(47)(48)(49)(50). Each of these chaperones binds to a unique Rpt subunit prior to 26 S proteasome assembly but is released during assembly. One such protein, p28 (also known as gankyrin or, in yeast, Nas6) binds to a C-terminal domain of Rpt3 (51). Thus, structural alterations of the Rpt3 C terminus might block 26 S proteasome assembly by attenuating an otherwise required dissociation of Nas6 from Rpt3. In fact, previous work in yeast has shown that Rpt3 lacking a single C-terminal residue failed to release Nas6, resulting in defective association with the 20 S proteasome (43). Additional work will be required to determine the precise mechanism for the defective assembly of mammalian Rpt3 with a larger truncation studied here. | 2018-04-03T04:31:43.123Z | 2010-10-11T00:00:00.000 | {
"year": 2010,
"sha1": "973eac79505c7953cd77ab8fcac196b628fb59ab",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/285/50/39523.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "9bc7214466832e52597a36a107bc0e1a7004afcc",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
262216794 | pes2o/s2orc | v3-fos-license | Cohort study of postural sway and low back pain: the Copenhagen City Heart Study
Low back pain is a significant health problem with a high prevalence. Studies of smaller cohorts of low back pain patients have indicated increased body sway. The present paper tests the hypothesis of an association between low back pain and postural sway in a large randomly selected population. The current study used the fifth examination (2011–2015) of The Copenhagen City Heart Study where 4543 participated. The participants answered a self-administered questionnaire regarding pain, physical activity, smoking, alcohol consumption, education, and other lifestyle factors. Measurement of postural body sway was performed using the CATSYS system. Totally 1134 participants (25%) reported to have low back pain. Subjects with low back pain had higher sway area and sway velocity than subjects without. When using multivariate statistical analysis, confounding factors such as male gender, higher age, larger body height, low education level, smoking, and low activity level explained the association between low back pain and postural sway.
Background
Low back pain is a significant health problem associated with high treatment costs, sick leave, and individual suffering. Its prevalence increases with age and is on average about 20% in those aged 20 to 59 years [1]. The underlying cause of low back pain is unclear, but studies of postural sway, an indicator of sensory-motor control, in smaller cohorts of low back pain patients have indicated increased sway in most but not all studies [2][3][4][5]. Other pain conditions along the spine, such as neck pain in whiplash and tension-type headache, are associated with impaired sensory-motor control in the neck muscles as part of their pathophysiology [6]. Accordingly, postural sway may be of specific interest for understanding the pathophysiology of low back pain as well as its clinical presentation, including as a potential risk factor for fall accidents due to bodily imbalance [7]. Few portable test systems allow an easy evaluation of postural sway, but the coordination ability test system (CATSYS) can be used when exploring neurological disturbances in non-hospitalized subjects [8].
The present study was designed to test the hypothesis of a potential association between low back pain and postural sway in The Copenhagen City Heart Study, a large population-based cohort. In this random sample of the general population, it was possible to analyze postural sway as an independent variable in relation to self-reported low back pain and hence adjust for possible confounders. Such an analysis has not previously been carried out.
Population
The Copenhagen City Heart Study is a prospective cardiovascular population study comprising a random sample of 19,329 white men and women between 20 and 93 years of age drawn from the Copenhagen Population Register as of January 1, 1976. The original purpose of the study was to focus on prevention of coronary heart disease and stroke. During the years many other aspects have been added to the study: pulmonary diseases, heart failure, pain (including low back pain), dementia, ageing, stress, vital exhaustion, social network, arthrosis, diet, alcohol, allergy and genetics.
The first examination was carried out in 1976-1978 with 14,223 participants (response rate 74%). The current study used data from the fifth examination (2011-2015). Details have been described elsewhere [9,10]. All former participants still alive, 9215 men and women, and a random sample of 1000 new persons from 20 to 29 years of age were invited to this fifth examination, where 4543 participated (response rate 49%). All participants were also asked to have their balance tested during the fifth examination, and of these 4305 took part in the CATSYS measurements. Some withdrew from the balance test due to inability to stand unaided, and some participants did not have their balance tested due to technical problems with the equipment.
Measurements
Established procedures and examinations for cardiovascular disease epidemiological surveys were used [11]. A self-administered questionnaire requested information about pain in several body locations, physical activity, smoking, alcohol consumption, and education, among other background and lifestyle factors. High weekly alcohol consumption was defined as above 14 units/week in women and above 21 units/week in men. It was possible to register persistent or recurrent body pain experienced during the last 6 months in several locations in the body, including the lower back. Physical activity in leisure was graded in four levels: (1) inactive or light activity < 2 h/week, (2) light activity 2-4 h/week, (3) heavy activity 2-4 h/week, (4) heavy activity > 4 h/week. The questionnaire was checked by the staff [9].
Body height without shoes, and bodyweight were measured.
Measurement of postural body sway was performed using the CATSYS system, invented in 1986 with the aim of diagnosing neurological disturbances [8,12]. It recorded signal data from a 35 × 45 cm balance plate containing three force sensors. The force center coordinates were recorded as the subject tried to keep balance during the recording period. During the test the subject stood erect on the balance plate facing a fixed point straight ahead, without shoes and with legs 2 cm apart. There were two test periods, one where the subject had open eyes followed by one with closed eyes; both test periods had a duration of 60 s. Impairment of visual input generally increases postural sway, and it has been suggested that this would be more pronounced in individuals with back pain [3]. The two different test conditions with open and closed eyes were designed to challenge the role of visual input on body balance. The sway area was defined as the area of the smallest polygon enclosing the force-center trajectory in the horizontal plane, measured in mm². The sway velocity was calculated by dividing the total length of the trajectory by the recording period, measured in mm/s.
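As a rough illustration of the two sway measures defined above, the following Python sketch computes them from an array of force-center coordinates. The array layout, the 60-s duration argument, and the use of a convex hull for the "smallest polygon" are assumptions for illustration, not the CATSYS implementation.
import numpy as np
from scipy.spatial import ConvexHull

def sway_metrics(xy_mm, duration_s=60.0):
    # xy_mm: (n_samples, 2) array of force-center coordinates in millimetres.
    # Sway area: area of the smallest convex polygon enclosing the trajectory.
    area = ConvexHull(xy_mm).volume  # for 2-D points, .volume is the enclosed area (mm^2)
    # Sway velocity: total path length divided by the recording period (mm/s).
    step_lengths = np.linalg.norm(np.diff(xy_mm, axis=0), axis=1)
    velocity = step_lengths.sum() / duration_s
    return area, velocity

# Synthetic trace standing in for one 60-s recording.
rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(scale=0.2, size=(600, 2)), axis=0)
print(sway_metrics(trace))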
Statistics
The distribution of continuous variables was assessed by Kolmogorov-Smirnov tests for normality and evaluation of histograms. Differences in continuous variables between groups were assessed by the nonparametric Kruskal-Wallis test, and differences in categorical variables were assessed by the chi-square test. All analyses were stratified by gender. The relation between sway and variables was first assessed by bivariate analysis, one variable at a time. Subjects were divided into three groups (low, middle and high sway) based on gender-specific tertiles of CATSYS test results, to test for differences in variables between the three sway groups. Secondly, the association between sway, as the dependent variable, and low back pain, as the independent variable, was assessed by the GENMOD procedure, with stepwise increasing adjustments for gender, age, height, weight, education level, smoking status, alcohol intake and leisure time activity level in multivariate analysis. Parameter estimates were generated by the maximum likelihood method. All statistical analyses were performed using SAS Enterprise Guide version 7.15 (SAS Institute Inc, Cary, NC, USA). The level of significance was set at p < 0.05.
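An approximate Python counterpart of this pipeline is sketched below (the study itself used SAS). The file and column names are hypothetical, ordinary least squares stands in for the GENMOD procedure, tertiles are pooled rather than gender-specific for brevity, and low_back_pain is assumed to be coded 0/1.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("chs_wave5.csv")  # hypothetical export of the study data

# Tertiles of sway area with open eyes (gender-specific in the study; pooled here).
df["sway_tertile"] = pd.qcut(df["sway_area_open"], 3, labels=["low", "middle", "high"])

# Bivariate checks: Kruskal-Wallis across sway groups, chi-square for categorical variables.
groups = [grp["age"].dropna() for _, grp in df.groupby("sway_tertile", observed=True)]
print(stats.kruskal(*groups))
print(stats.chi2_contingency(pd.crosstab(df["sway_tertile"], df["low_back_pain"])))

# Stepwise-adjusted models with sway as the dependent variable (cf. Table 4).
adjustment_sets = [
    "low_back_pain",
    "low_back_pain + gender + age",
    "low_back_pain + gender + age + height + weight",
    "low_back_pain + gender + age + height + weight + education + smoking + alcohol + activity",
]
for rhs in adjustment_sets:
    fit = smf.ols(f"sway_area_open ~ {rhs}", data=df).fit()
    print(rhs, "->", round(fit.params.get("low_back_pain", float("nan")), 3))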
Gender characteristics
Participants' mean age was 57.6 ± 17.7 years. More females (56%) than males (44%) participated in the study. BMI was higher in men than in women (26.5 kg/m² vs. 25.4 kg/m², p < 0.0001). There was a higher frequency of smoking among men than among women (21% vs. 17%, p = 0.0007). Mean reported intake of alcohol was higher in men than in women (11.1 units/week vs. 6.0 units/week, p < 0.0001). The frequency of high weekly alcohol consumption was higher in men than in women (14% vs. 10%, p < 0.0001). Education level differed between genders (p < 0.0001), with more men than women having craft training (28% vs. 21%) and more women than men having a short education (13% vs. 6%). More men than women had vigorous activity in leisure time (12% vs. 5%, p < 0.0001).
Low back pain
A total of 1134 participants (25%) reported having low back pain. Women reported more low back pain than men (28% vs. 21%, p < 0.0001). Subjects with low back pain were older than subjects without. Subjects with low back pain had higher BMI than subjects without. There was a higher smoking prevalence in subjects with low back pain than in subjects without pain in this body part. There was a generally lower education level in subjects with low back pain than in subjects without pain in this body part. Subjects with low back pain reported lower leisure time activity levels than subjects without this pain. Subjects with low back pain reported pain in more other body parts than subjects without this pain. A higher proportion of subjects with low back pain were taking painkilling pills than subjects without pain in this body part (Table 1).
Body sway
Of all participants, N:4305 (95%) participated in the CATSYS examination, with results from N:4048 subjects when the test was executed with open eyes and from N:3949 subjects when the test was performed with closed eyes. Subjects who did not complete the CATSYS examination were older and had a lower education level. The reported frequency of low back pain did not affect whether participants completed the CATSYS examination with open eyes (Table 2). Similar results were found when eyes were closed (Supplementary Table 2b).
Body sway and characteristics
At the CATSYS examination, women had lower sway area and sway velocity than men, both with open and closed eyes (p < 0.0001). Subjects with low back pain had higher sway area and sway velocity than subjects without this pain (Table 1). With increasing age there was a dose-response increase in sway area (p < 0.0001) (Fig. 1). In independent bivariate analysis it was found that high sway area with open eyes was associated with increasing frequency of low back pain, increasing age, increasing BMI, current smoking, low education level, low activity level in leisure time and more body pain (Table 3). The same results were found when characteristics were related to sway area with closed eyes, and to sway velocity with both open and closed eyes (Supplementary Table 3b-d).
Multi variates analysis
In the fully adjusted multivariate model, the association between sway area with open eyes and low back pain disappeared. In the final model, male gender (65.5044, p = 0.0420), current smoking (32.8925, p = 0.0351), reduced alcohol intake (−1.9622, p = 0.0015) and low activity level (136.5229, p < 0.0001) were positively associated with sway area with open eyes (Table 4).
In the fully adjusted multivariate model, the association between sway area with closed eyes and low back pain disappeared. In the final model, male gender (140.8595, p < 0.0001), higher age (14.3720, p < 0.0001), larger body height (12.5300, p < 0.0001), no education (103.8899, p = 0.0045) and low activity level (162.3480, p = 0.0024) were positively associated with sway area with closed eyes (Table 4).
Discussion
The hypothesis that individuals with low back pain have increased postural sway was supported in this study when using univariate statistical analysis. This is in accordance with previous clinical studies, as concluded in recent reviews [2][3][4][5]. However, when using multivariate statistical analysis, which the large number of participants allowed in this study, confounding factors explained the association found using univariate statistical analysis. This observation contrasts with the existing scientific literature. The most likely explanation is that this study is an epidemiological study comprising a large randomly selected population, while all studies on the issue in the literature are clinical studies including smaller numbers of participants, which do not provide the possibility to conduct a multivariate statistical analysis. To our knowledge this is the first and only epidemiological study addressing the association between low back pain and postural sway.
The strength of this design is the inclusion of individuals from both younger and older age groups and with many different characteristics. Such a design provides the option to carry out multivariate statistical analysis taking potential confounding factors into consideration, something which is not an option in smaller clinical studies. The validity of the observations in this study is supported by the similar results found when analyzing different measures of postural sway, such as sway area and sway velocity, each tested with both open and closed eyes.
The weakness of this study is the lack of a clinical examination of individuals with and without low back pain. Thus, we cannot exclude that an independent association between low back pain and increased sway may exist in some patients with specific low back disease.
The most important new observations in this study are the associations found between individual characteristics and sway. The individuals swaying the most were older, male, taller, smokers, and those with the lowest education. Future clinical studies may be more conclusive if the above-mentioned characteristics are taken into consideration when designing a clinical study in this field.
Postural sway can be measured with different equipment and under varying study conditions. With the CATSYS system, measurements of postural sway were obtained in an easy and noninvasive manner in a large population. However, the wide variation in sway area and sway velocity between participants, and the lack of cutoff values for morbid conditions, make the diagnostic value of postural sway low at present. The epidemiological design used in this study will allow follow-up studies on the possible predictive significance of postural sway for future health and survival.
Conclusion
The hypothesis suggesting an independent association between self-reported low back pain and increased postural sway was not supported in this epidemiological study providing multivariate statistical analysis. Several covariates of postural sway were observed (age, gender, body height, educational level and smoking status).
Table 1
Characteristics versus reported low back pain stratified by gender (N:4543). Continuous variables are expressed as mean ± SD, and categorical variables are expressed as total number and (%) in cursive. Information about smoking status was missing in N:94 subjects. Information about education was missing in N:25 subjects. Information about leisure time activity was missing in N:79 subjects. ♦Kruskal-Wallis test for horizontal difference in continuous variables between pain groups and χ2 test for categorical variables, stratified by gender.
Table 2
Comparison of participants with and without CATSYS examination with open eyes. Continuous variables are expressed as mean ± SD. Categorical variables are expressed as total number and (%) in cursive. Information about smoking status was missing in N:94 subjects. Information about education was missing in N:25 subjects. ♦Kruskal-Wallis test for horizontal difference in continuous variables between pain groups and χ2 test for categorical variables, stratified by gender.
Table 4
Sway versus low back pain with various adjustments (Model: sway = low back pain + adjustments). Adjustments in model 1: none. Adjustments in model 2: gender and age. Adjustments in model 3: gender, age, height and weight. Adjustments in model 4: gender, age, height, weight and education. Adjustments in model 5: gender, age, height, weight, education, smoking and alcohol. Adjustments in model 6: gender, age, height, weight, education, smoking, alcohol and activity.
"year": 2023,
"sha1": "787a1899a94b0508cbaaf0ada641256f5eb923d9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00586-023-07925-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "da08689ae67af18c2d2bfbd2be08e4de63fea68c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222174625 | pes2o/s2orc | v3-fos-license | Identification of Risk Factors and Symptoms of COVID-19: Analysis of Biomedical Literature and Social Media Data
Background: In December 2019, the COVID-19 outbreak started in China and rapidly spread around the world. Lack of a vaccine or optimized intervention raised the importance of characterizing risk factors and symptoms for the early identification and successful treatment of patients with COVID-19. Objective: This study aims to investigate and analyze biomedical literature and public social media data to understand the association of risk factors and symptoms with the various outcomes observed in patients with COVID-19. Methods: Through semantic analysis, we collected 45 retrospective cohort studies, which evaluated 303 clinical and demographic variables across 13 different outcomes of patients with COVID-19, and 84,140 Twitter posts from 1036 COVID-19–positive users. Machine learning tools to extract biomedical information were introduced to identify mentions of uncommon or novel symptoms in tweets. We then examined and compared two data sets to expand our landscape of risk factors and symptoms related to COVID-19. Results: From the biomedical literature, approximately 90% of clinical and demographic variables showed inconsistent associations with COVID-19 outcomes. Consensus analysis identified 72 risk factors that were specifically associated with individual outcomes. From the social media data, 51 symptoms were characterized and analyzed. By comparing social media data with biomedical literature, we identified 25 novel symptoms that were specifically mentioned in tweets but have been not previously well characterized. Furthermore, there were certain combinations of symptoms that were frequently mentioned together in social media. Conclusions: Identified outcome-specific risk factors, symptoms, and combinations of symptoms may serve as surrogate indicators to identify patients with COVID-19 and predict their clinical outcomes in order to provide appropriate treatments. (J Med Internet Res 2020;22(10):e20509) doi: 10.2196/20509
Introduction
COVID-19 is an emerging infectious disease that has quickly spread worldwide. Since its outbreak in China in December 2019, over 4 million cases have been confirmed across more than 200 countries [1] (as of May 20, 2020), with the number of cases continuing to increase. Several studies have characterized possible symptoms (physical or mental features indicating a disease condition) and risk factors (variables associated with an increased risk of disease) for patients infected with COVID-19 [2,3]. However, the majority of retrospective studies have been based on a single outcome from a single center and counted the number of aggregate cases [4], providing a scattered and incomplete picture of the risk factors for disease severity. Furthermore, uncharacterized or uncommon symptoms have made COVID-19 difficult to diagnose, making it difficult to provide appropriate treatment to patients. Lack of a vaccine or optimized treatment raises the importance of early and definitive diagnosis for this disease. Additionally, there are limited hospital resources to triage patients based on symptoms to determine who is more or less likely to require intensive treatment (eg, intensive care unit [ICU] admission or intubation).
All of these uncertainties suggest there are urgent needs for a low-cost and efficient method of gathering COVID-19 symptom- and risk factor-related data as quickly as possible to reduce the medical and economic burden on our society. Instead of conducting time-consuming and costly clinical trials to examine patients, an alternative research avenue involves scraping public social media data to investigate potential risk factors of COVID-19 development. Social media provides an efficient method of gathering large amounts of representative data on the general public, in a cost-effective, scalable, and convenient manner any time of day, especially in remote or unattended regions. Here, we systematically investigated published biomedical literature to identify risk factors associated with outcomes of patients with COVID-19. We then gathered public Twitter data from COVID-19-positive users to expand the scientific literature and also examine rare or uncommon symptoms that were not previously well characterized.
Compiling Biomedical Literature on COVID-19, and Identifying Clinical and Demographic Variables
The COVID-19 Open Research Dataset (CORD-19) [5] was used to find biomedical literature on COVID-19. We compiled retrospective studies that investigated clinical and demographic variables in various outcomes of COVID-19. To do this, we collected literature published between January 2020 and March 2020. We then generated two sets of keywords. The first set of keywords represented cohort-based and retrospective studies, and comprised the following keywords: "epidemiological characteristics," "clinical characteristics," "risk factors," "clinical features," "cohort," "clinical course," "clinical findings," "risk of death," "pathological characteristics," "retrospective," and "mortality risk." The second set represented COVID-19 ("novel coronavirus," "coronavirus," "COVID-19," "SARS-COV-2," "severe acute respiratory syndrome coronavirus 2," "2019 novel coronavirus," "2019-ncov," and "coronavirus disease 2019").
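A minimal sketch of this kind of keyword screen applied to the CORD-19 metadata is shown below; the file name, column choices, and simple substring matching are illustrative assumptions rather than the authors' exact implementation.
import pandas as pd

study_terms = ["epidemiological characteristics", "clinical characteristics", "risk factors",
               "clinical features", "cohort", "clinical course", "clinical findings",
               "risk of death", "pathological characteristics", "retrospective", "mortality risk"]
covid_terms = ["novel coronavirus", "coronavirus", "covid-19", "sars-cov-2",
               "severe acute respiratory syndrome coronavirus 2", "2019 novel coronavirus",
               "2019-ncov", "coronavirus disease 2019"]

meta = pd.read_csv("cord19_metadata.csv")  # hypothetical local copy of the CORD-19 metadata
text = (meta["title"].fillna("") + " " + meta["abstract"].fillna("")).str.lower()

has_study = text.apply(lambda t: any(k in t for k in study_terms))
has_covid = text.apply(lambda t: any(k in t for k in covid_terms))
candidates = meta[has_study & has_covid]
print(len(candidates), "articles match both keyword sets")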
From the semantic analysis, we found 116 articles that contained both keywords and were likely to be relevant to our study. We next extracted 535 tables in those articles using Camelot [6] based on the notion that tables listed clinical and demographical variables, and their associated statistics. Articles without tabulated information (n=12) were removed. After careful manual curation of tables, reported data, and article themselves, 45 studies were chosen (Table S1 in Multimedia Appendix 1). In total, 304 clinical and demographic variables were evaluated in 45 studies. They were composed of 92 comorbidities/complications, 49 treatments, 124 lab findings, 34 symptoms, and 4 demographic variables (age, sex, alcohol drinking history, and smoking history). Literature collection was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) guidelines ( Figure S1 in Multimedia Appendix 1) [7].
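Table extraction with Camelot, as mentioned above, could look roughly like the following sketch; the PDF file name and the filter used to keep candidate tables are hypothetical and stand in for the manual curation described in the text.
import camelot

# Extract all tables from one retrospective-study PDF (hypothetical file name).
tables = camelot.read_pdf("cohort_study.pdf", pages="all")
print(f"{tables.n} tables found")

kept = []
for t in tables:
    df = t.df  # each extracted table is exposed as a pandas DataFrame
    # Keep tables that look like variable listings (illustrative filter only).
    if df.shape[0] > 2 and df.shape[1] >= 2:
        kept.append(df)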
Association Between Clinical and Demographic Variables and Clinical Outcomes
To explore the consistency between individual variables and clinical outcomes, we defined three types of associations. Positive associations indicated that a variable in the outcome group had a hazard or odd ratio >1, or a higher value with statistical significance (P value <.05) compared to the control group. In case a statistical test was not performed, we decided that there was a positive association when the outcome group had a 1.5-fold higher value than the control group. Negative association indicated that a variable in the outcome group had a hazard or odd ratio <1, or a lower value with statistical significance (P value <.05) compared to the control group. In case a statistical test was not performed, we decided that there was a negative association when the outcome group had a 1.5-fold lower value than the control group. When there was no significant change between the outcome group and the control group, the variable and clinical outcomes were assigned a "no association" designation. In terms of sex, when there were more males compared to females, we assumed there was a positive association with an outcome based on the case studies of sex and age of patients with COVID-19 in Italy [8] and New York City [9] (as of April 14, 2020). To identify outcome-specific risk factors in biomedical literature, we performed consensus analysis. Risk factors were the variables associated with an increased risk of disease or infection. Seven outcomes, which were studied at least twice, and 107 variables tested two or more times in studies were used for further analyses. Clinical and demographic variables with positive associations in >50% of studies, which investigated the same output, were defined as outcome-specific risk factors.
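A rough sketch of the association rules and the >50% consensus criterion described above follows; the record structure and the simplified handling of ratio-based versus mean-based comparisons are assumptions for illustration.
from collections import defaultdict

def classify(record):
    # Classify one reported comparison as 'positive', 'negative', or 'none',
    # loosely following the rules in the text (illustrative record structure).
    ratio, p = record.get("effect_ratio"), record.get("p_value")
    if p is not None and p < 0.05 and ratio is not None:
        return "positive" if ratio > 1 else "negative"
    out, ctrl = record.get("outcome_mean"), record.get("control_mean")
    if p is None and out is not None and ctrl not in (None, 0):
        if out / ctrl >= 1.5:
            return "positive"
        if ctrl / out >= 1.5:
            return "negative"
    return "none"

def outcome_specific_risk_factors(records):
    # A variable is outcome-specific if it was tested at least twice for that outcome
    # and >50% of those studies reported a positive association.
    tallies = defaultdict(lambda: [0, 0])  # (variable, outcome) -> [n_positive, n_total]
    for r in records:
        key = (r["variable"], r["outcome"])
        tallies[key][1] += 1
        if classify(r) == "positive":
            tallies[key][0] += 1
    return [key for key, (pos, total) in tallies.items() if total >= 2 and pos / total > 0.5]

example = [
    {"variable": "age", "outcome": "death", "effect_ratio": 1.8, "p_value": 0.01},
    {"variable": "age", "outcome": "death", "outcome_mean": 68, "control_mean": 40},
]
print(outcome_specific_risk_factors(example))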
Compiling Social Media Data and Identifying Symptoms of COVID-19-Positive Users
Twitter was used as the social media source. To identify COVID-19-positive users, we first collected users who used one of these phrases: "my positive COVID test," "my positive COVID diagnosis," "I am positive for COVID," "I tested positive for COVID," and "I have COVID-19" in their original tweets between January 2020 and March 2020. In total, 1036 users were identified as self-identified patients with COVID-19. We then collected additional tweets generated 14 days before and 14 days after the original tweets mentioning users' COVID-19 status (n=84,140 tweets). To identify symptoms that were mentioned in tweets, we applied two symptom extraction methods. The Amazon Comprehend Medical tool [10] was applied to an entire set of tweets. Symptoms were physical or mental features indicating a disease condition. We considered two medical entities (symptoms and signs) as symptoms that users mentioned. We also implemented a symptom extraction model using Scispacy, version 0.2.4 (Python Software Foundation). Scispacy can handle scientific document and extracts medical and clinical terminology [11]. The model was trained on publicly available, domain-specific corpus of medical notes, which consists of 1500 PubMed articles with over 10,000 disease and related chemical terms. The model identifies medical name entities in tweet texts. We considered the medical entity "disease" as a symptom that users mentioned. In total, 51 symptoms from 574 COVID-19-positive users were identified from both symptom identification methods.
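A sketch of the scispaCy-based extraction step described above is given below; it assumes the en_ner_bc5cdr_md model is installed, the example tweets are placeholders for the collected user timelines, and the commented-out Comprehend Medical call only indicates where the second extraction method could plug in (an AWS account would be required).
import spacy

# scispaCy NER model trained on the BC5CDR corpus (disease/chemical mentions).
nlp = spacy.load("en_ner_bc5cdr_md")

tweets = [
    "Day 4 after my positive COVID test: fever, dry cough and I lost my sense of smell.",
    "Still exhausted, chest tightness comes and goes.",
]  # placeholder texts

symptoms_by_tweet = []
for text in tweets:
    doc = nlp(text)
    symptoms_by_tweet.append([ent.text.lower() for ent in doc.ents if ent.label_ == "DISEASE"])
print(symptoms_by_tweet)

# Optionally, the same texts could be sent to Amazon Comprehend Medical, e.g.:
# import boto3
# cm = boto3.client("comprehendmedical")
# entities = cm.detect_entities_v2(Text=tweets[0])["Entities"]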
Landscape of Clinical and Demographic Variables of COVID-19 in Biomedical Literature
To understand the clinical and demographic properties of COVID-19, we performed a meta-analysis of 45 recently published biomedical studies ( Figure 1A, and Table S1 in Multimedia Appendix 1). The literature evaluated 299 clinical variables (92 comorbidities/complications, 124 laboratory findings, 49 treatment options, and 34 symptoms) and 4 demographic variables (age, sex, alcohol drinking history, and smoking history) in 13 clinical outcomes. Seven outcomes (disease severity, death, ICU admission, diagnosis, acute respiratory distress syndrome [ARDS], O 2 saturation, and hospitalization) were studied at least twice ( Figure 1). On average, each study examined 72 variables, and 102 variables were assessed in at least five studies. Age and sex were measured in more than 80% of the studies. Diabetes and hypertension were the most commonly measured comorbidities (>50% of studies). Fever, cough, myalgia/fatigue, chest tightness/dyspnea, diarrhea, and headache/dizziness were the most commonly measured symptoms (>50% of studies). Eleven laboratory test values that measured liver and kidney function (eg, alanine aminotransferase and aspartate aminotransferase) and hematologic index (eg, lymphocytes, platelets, and neutrophils) were examined in >50% of studies. Therapy involving antiviral agents, antibiotics, and oxygen inhalation were used in more than 30% of the studies (Table S2 in Multimedia Appendix 1). We next investigated the association between identified variables and clinical outcomes of patients with COVID-19. We considered 102 frequently tested variables (≥5 studies), and examined the proportion of studies that showed positive, negative, and no associations between clinical outcomes and a given variable (Table S3 in Multimedia Appendix 1). Positive associations indicated higher values (eg, disease severity increases as patients get older), while negative associations indicated lower values (eg, disease severity increases as basophil count decreases) of variables associated with clinical outcomes (see Methods for details). A no-association designation indicated there was no relation between variables and outcomes.
We found that the majority of variables (n=95, 93%) had inconsistent associations across clinical outcomes showing mixed association types ( Figure 1B). Of those, 46 (45%) variables had all three types of associations. For example, cancer showed positive, negative, and no associations in 58% (n=11), 26% (n=3), and 16% (n=5) of studies, respectively (marked with an asterisk in Figure 1B). In total, 43% (9/21) of comorbidity/complication, 40% (6/15) of treatment, and 73% (11/15) of symptom variables showed all three types of associations. Laboratory findings showed relatively more consistent associations with clinical outcomes: 38% (18/48) of variables showed all association types. Furthermore, we found that variables had unique association types depending on different clinical outcomes. Dry or sore throat, one known symptom of COVID-19, showed positive, negative, and no associations in death, hospitalization, and O 2 saturation, respectively ( Figure 1C). Meanwhile, it showed mixed associations with other clinical outcomes, such as disease severity, ICU admission, and diagnosis. Cardiovascular disease, one common comorbidity of COVID-19, showed mixed associations in death, positive association with ICU and disease severity, but no association with ARDS ( Figure 1D).
Consensus Identification of Outcome-Specific Clinical and Demographic Variables
According to biomedical literature at the time of publication, there were no effective treatment options, or well-identified symptoms, comorbidities, and lab findings to predict outcomes of COVID-19. Therefore, it seemed relevant to find a set of clinical and demographic variables that were specific for individual outcomes for better guidance on disease detection, treatment, and control. To generalize the importance of clinical and demographic variables, a consensus (level of agreement) analysis was performed. We collected 107 variables that were tested at least twice in a given outcome and defined them as outcome-specific risk factors when they showed positive associations with a given outcome in more than half of studies ( Figure 2, and Table S4 in Multimedia Appendix 1). In total, we characterized 72 outcome-specific risk factors from the literature. As shown in Figure 2A, different sets of variables were specifically associated with individual outcomes. Arrhythmia, thyroid disease (comorbidity/complication), confusion/fluster, tonsil swelling, enlargement of lymph nodes/sinus (symptom), and levels of interleukin (IL)-10 and N-terminal pro-brain natriuretic peptide (NT-proBNP; lab finding) were specifically associated with the severity of disease progression. Level of prothrombin time was a specific risk factor for ICU admission. For death, Sequential Organ Failure Assessment (SOFA) score (lab finding) and anemia (symptom) showed positive associations. Fever was specifically associated with O 2 saturation. We also observed 15 variables that were associated with several clinical outcomes (≥3 outcomes; Figure 2B). Age was a risk factor for ARDS, disease severity, death, ICU admission, and O 2 saturation, but not for diagnosis and hospitalization. Sex (male) was a specific variable for ARDS, disease severity, death, and ICU admission. Three lab findings (D-dimer, C reactive protein, and lactate dehydrogenase) were specifically associated with four outcomes. Diabetes and hypertension (comorbidity/complication) were specific risk factors for disease severity and death. Identified outcome-specific variables could be surrogate risk factors to identify patients with COVID-19 and determine their treatment options. Variables that were only specific for one clinical outcome were shown in the red-dashed box. Clinical and demographic variables that were specific for at least three outcomes are presented in (B). Blue coloring indicates identified outcome-specific variables (risk factors). ICU: SOFA: Sequential Organ Failure Assessment; ARDS: acute respiratory distress syndrome; IL-10: interleukin 10; NT-proBNP: N-terminal pro-brain natriuretic peptide.
Expanding the COVID-19 Symptom Landscape and Identifying Novel Symptoms Using Social Media
Early identification of symptoms is important for the successful treatment of disease [12]. Although COVID-19 showed heterogeneous and uncharacterized symptoms, a limited number of symptoms known to be associated with infectious diseases, such as fever, cough, and fatigue, were considered in the biomedical literature. Social media can provide rapid and efficient surveillance of disease risk and outbreaks [13,14]. Therefore, we decided to expand the symptom landscape by integrating social media data with biomedical literature. We first identified 1036 Twitter users who introduced themselves as COVID-19-positive patients, and selected 574 users (55%) who openly and voluntarily discussed their COVID-19 symptoms (see Methods for details). In total, 51 symptoms were identified in social media data (Figure 3A, and Table S5 in Multimedia Appendix 1). On average, individual users mentioned 3 different symptoms (range 1-15). We grouped the 51 symptoms into three categories based on their frequency of mention (Figure 3B). Common symptoms (n=11) were mentioned by >10% of users. Many were nonspecific symptoms of respiratory infections, such as fever, cough, and chest tightness/dyspnea (Table S5 in Multimedia Appendix 1). However, 14 symptoms were potentially COVID-19-specific and rarely reported in social media (ie, in <1% of users). Sputum, dehydration, anemia, urination problem, hair loss, enlargement of lymph nodes or sinus, and oral problems (eg, abrasions in mouth, mouth ulcers, sensitive teeth, toothache, and dry mouth) were defined as rare symptoms. In all, 26 symptoms were observed in 1% to 10% of users (less common). They included chills, chest pain, gastrointestinal symptoms, arthritis, anorexia, allergy-like symptoms, dyssomnias, ear- and eye-related problems, and skin problems such as blister, dry skin, chapped lips, rash, and itching. We next examined the symptoms that were mentioned together in social media (Figure 3C). We identified 612 co-occurring symptom pairs (Table S6 in Multimedia Appendix 1). In total, 264 (43%) pairs involved symptoms ranging from common to less common (Figure 3D). One major cluster of symptoms was frequently mentioned together (Figure 3E). It was composed of 8 common symptoms, such as dry or sore throat, fever, chest tightness/dyspnea, cough, weakness, myalgia/fatigue, headache/dizziness, and body ache/pain (eg, neck and back pain and general body ache), and 8 less common symptoms, such as cold-like symptoms, chest pain and congestion, gastrointestinal symptoms, chills, stuffy or runny nose, nausea/vomiting, and respiratory symptoms such as lung pain. In total, 99 (16%) pairs were between common symptoms and rare symptoms. Cough, fever, headache/dizziness and body ache/pain (common symptoms) co-occurred with sputum, dehydration, and anemia at least 10 times (Table S6 in Multimedia Appendix 1). There were also two pairs of rare symptoms (Figure 3D): anemia-weight gain and anemia-urination problems (eg, urinary retention and weakened bladder).
Finally, we identified novel symptoms potentially related to COVID-19. We first extended the repertoire of symptoms by integrating social media data with biomedical literature. In total, 59 symptoms were identified (Figure 4, and Table S7 in Multimedia Appendix 1). Indeed, social media identified more symptoms (n=51) than the literature (n=34), and most symptoms (26/34, 76%) in the literature were equally observed in social media. Interestingly, 25 (42%) were novel symptoms that were only mentioned in social media and were not considered in the literature. Loss of smell or taste, and body ache/pain were frequently mentioned common symptoms in social media. Problems involving eyes (eg, dry eye and eye pain) and ears (eg, ear pain and earache), sweating, sneezing, and allergy-like symptoms were mentioned at a moderate frequency only in social media. Of 14 rare symptoms, 10 were only observed in social media: dehydration, dryness-related symptoms (eg, feeling dry), oral problem, hair loss, urination problem, spasm, hemorrhoids, constipation, hiccups, and weight gain. These social media-specific rare symptoms would be potential novel symptoms for COVID-19, considering that 4 rare symptoms (ie, abdominal pain, anemia, sputum, and enlargement of lymph nodes or sinus) were already evaluated in the literature (Figure 4).
Figure 3. COVID-19-related symptoms extracted from social media data. (A) Landscape of symptoms identified from social media data. Orange and white indicate the presence and absence of symptoms in a given user, respectively. (B) Fraction of common, less common, and rare symptoms. Common symptoms were mentioned by >10% of users, and rare symptoms were mentioned by <1% of users. (C) Co-occurrence of symptoms. One major cluster is shown in the red-dashed box. (D) Number of symptom pairs depending on mentioning frequency. Blue bars (bottom) indicate the number of co-occurring pairs. (E) One major cluster of symptom pairs. Green, gray, and orange indicate rare, less common, and common symptoms, respectively.
Figure 4. Comparison of symptoms between biomedical literature and social media data. Symptoms that were observed in the literature or social media are colored in blue; 25 social media-specific symptoms are presented. Green, gray, and orange indicate rare, less common, and common symptoms, respectively.
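The symptom pair counting behind the co-occurrence analysis described above (Figure 3C-E) could be sketched as follows; the user-to-symptom mapping shown is a placeholder for the NER output from the extraction step.
from collections import Counter
from itertools import combinations

# Mapping of user id -> set of symptoms mentioned in that user's tweets (illustrative data).
user_symptoms = {
    "user_a": {"fever", "cough", "loss of smell"},
    "user_b": {"fever", "chills", "cough"},
    "user_c": {"anemia", "weight gain"},
}

pair_counts = Counter()
for symptoms in user_symptoms.values():
    for pair in combinations(sorted(symptoms), 2):
        pair_counts[pair] += 1

for pair, n in pair_counts.most_common(5):
    print(pair, n)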
Principal Findings
Our meta-analysis based on biomedical literature showed the inconsistency of clinical and demographic variables across clinical outcomes. From the consensus analysis, we identified outcome-specific risk factors that may be helpful to identify patients and predict their specific clinical outcomes. Social media data expanded the repertoire of COVID-19 symptoms beyond the biomedical literature. In addition to more commonly reported symptoms, social media data revealed loss of smell or taste, body ache/pain, and back/neck pain, as well as other less common and rare symptoms such as urination problems, dehydration, allergy-like symptoms, and ear- and eye-related problems. Indeed, loss of smell or taste was recently proposed as one of the key features of a COVID-19 diagnosis model [15], and body ache/pain was suggested by the Centers for Disease Control and Prevention. COVID-19 is an ongoing health problem, and the symptoms that medical institutes and ordinary people should consider will continue to evolve as more studies are published and social media data are explored. This evolution may help improve the definitions of symptom types (common, less common, and rare symptoms) and social media-specific symptoms.
From social media data, we observed that certain combinations of symptoms were frequently observed among patients with COVID-19. Interestingly, we identified two pairs composed of rare symptoms (anemia-weight gain and anemia-urination problems, such as urinary retention and weakened bladder). It has been shown that COVID-19 attacks hemoglobin in red blood cells and restricts oxygen transportation [16]. A persistent reduction of oxygen transportation leads to the development of anemia [17]. The urinary bladder is enriched with ACE2-positive cells and has been proposed as a target organ for COVID-19 invasion [18]. Our results highlight that combinations of symptoms, rather than a single common symptom that may result in false positives, could guide the reliable identification of patients with COVID-19.
Limitations and Future Work
One of the limitations of our study is the self-reported nature of social media data and the lack of more detailed information from the patients. We observed that 55% of social media users who were self-identified patients with COVID-19 mentioned symptoms, and an additional 8% mentioned potential comorbidities. Thus, only 63% of social media users communicated information on COVID-19 conditions, which means that at least 37% of users could be either false positives (they were not COVID-19-positive users) or asymptomatic patients. Alternatively, it is possible that we have not captured all COVID-19-positive patients in our social media collection due to the limited set of search keywords. Nevertheless, various studies have indicated that between 4% and 78% of all COVID-19-positive patients were asymptomatic [19], and this seems to vary widely based on patient age, test location, and time of testing after infection [20][21][22][23]. Thus, our research is in line with other studies demonstrating the vast range of patients with COVID-19 who show or report no symptoms. It should also be noted that Twitter was the source of the social media data we examined, and perhaps more symptoms would be discovered if we analyzed other sources. Twitter does have a wide, representative user base around the world and provides open-source information that can be easily gathered, but future research could examine alternate social media sources.
Although social media may lack depth of patient information, it provides an effective means of collecting breadth of data. Social media data can be gathered noninvasively across the world, 24 hours a day, and social media is also an extremely efficient medium [24] for rapidly disseminating new knowledge related to COVID-19. In other words, clinicians and scientists can collect new patient information from various regional locations, as well as quickly circulate public service announcements for a wide range of audiences. Social media hubs provide a useful alternate source of patient data to explore clinical characteristics of various disease states and populations.
Another limitation of our research involves the limited number of available biomedical studies on individual outcomes. Of the 13 reported clinical outcomes for COVID-19, 7 were studied at least twice, limiting opportunities to perform more systematic and consensus analyses of the landscape of risk factors and symptoms. In fact, we observed that there were no significant risk factors associated with diagnosis and hospitalization, indicating the current lack of clinical understanding of the early stage of COVID-19. Use of additional literature published in the future and of electronic health record studies [25] may refine the assessment of risk factors and symptoms, and increase the accuracy of patient identification for different clinical outcomes.
Conclusion
In this meta-data study, we demonstrated the extensive variability present in clinical and demographic variables across COVID-19 outcomes, and the usefulness of gathering social media data as an effective and alternative way to uncover less common or other types of symptoms that have not been previously reported. Our findings show the practicality and feasibility of employing social media data for investigating new disease states. These practices could be incorporated into routine procedures for early COVID-19 identification and determination of clinical outcomes, in order to provide appropriate interventions and treatments. | 2020-09-18T13:06:10.024Z | 2020-09-13T00:00:00.000 | {
"year": 2020,
"sha1": "d068ccf0c9752987379a48e67e5ec8018934bf18",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2020/10/e20509/PDF",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9f1c801b8ca8b471c7ece050d9370343d9a42be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261698271 | pes2o/s2orc | v3-fos-license | Children exhibit a developmental advantage in the offline processing of a learned motor sequence
Changes in specific behaviors across the lifespan are frequently reported as an inverted-U trajectory. That is, young adults exhibit optimal performance, children are conceptualized as developing systems progressing towards this ideal state, and older adulthood is characterized by performance decrements. However, not all behaviors follow this trajectory, as there are instances in which children outperform young adults. Here, we acquired data from 7–35 and >55 year-old participants and assessed potential developmental advantages in motor sequence learning and memory consolidation. Results revealed no credible evidence for differences in initial learning dynamics among age groups, but 7- to 12-year-old children exhibited smaller sequence-specific learning relative to adolescents, young adults and older adults. Interestingly, children demonstrated the greatest performance gains across the 5 h and 24 h offline periods, reflecting enhanced motor memory consolidation. These results suggest that children exhibit an advantage in the offline processing of recently learned motor sequences.
Supplementary Note 1: Participant characteristics, sleep and vigilance (Experiments 1 and 2)
Group means for the measures of both Experiments 1 and 2 are provided in Tables 1 and 2 in the main text and depicted in Supplementary Figures S1 and S2 below. Results from the corresponding statistical analyses can be found in Supplementary Tables 1 and 2 below. In brief, significant age group differences were observed for gender distribution, morningness/eveningness preference, sleep duration of the night prior to participation, and subjective levels of sleepiness at the time of testing for both experiments. Additionally, the time of testing and objective levels of alertness were different among age groups in Experiment 1 and 2, respectively. These age group differences can largely be considered reflective of the convenience sample in the current research as well as known lifespan differences in a subset of these measures. We elaborate on these differences in the subsequent paragraphs.
The adult groups showed a gender distribution skewed towards more female than male participants. This group difference can largely be linked to our convenience sample. The potential impact of these differences in gender distribution across our age groups on our primary measures (i.e., learning magnitude, micro-online gains, and micro- and macro-offline gains) was assessed.
Results from the corresponding gender by age group interactions revealed no significant effects (all p > 0.08).
Older adults indicated a greater preference for mornings than the other age groups. This is in line with previous studies indicating that circadian preferences shift towards the morning with increasing age in adulthood 1,2 .
In Experiment 1, participants were instructed to complete the experiment between 9 am and 7 pm to avoid testing early in the morning or late at night, when performance is more likely to be impacted by circadian influences. Nonetheless, young adults completed the experimental session later in the day as compared to the other age groups, potentially due to work and/or family responsibilities during the day. In Experiment 2, participants were given specific instructions regarding the timing of the sessions, decreasing the flexibility in the time windows for task completion, and thus there were no differences among age groups.
Self-reported sleep duration was longer in children in both experiments and in adolescents in Experiment 2 as compared to the adult groups. Furthermore, older adults reported a shorter sleep duration in comparison to the other age groups in Experiment 1. These findings are consistent with a known decrease in total sleep time with age 3,4 . There were no differences between the two nights in Experiment 2.
Sleep quality during the night prior to the experimental sessions was comparable among age groups and experimental nights in both experiments. Based on previous literature, one would expect older adults to report decreased sleep quality 5 . However, as Vitiello et al. 6 found a mismatch between self-reported and objectively measured sleep quality, it is possible that our older adults overestimated the quality of their sleep.
Lastly, and unexpectedly, older adults self-reported a higher alertness than the other age groups in both experiments, with a group by session interaction in Experiment 2. We speculate that, analogous to sleep quality, older adults underestimated their levels of sleepiness at the time of testing. This is supported by the fact that the older adults were not different from the adolescents and young adults in their performance on the PVT, an objective measure of vigilance, in Experiment 2. Children were slower on the PVT as compared to the other age groups, which is in line with previous research 7 .
Supplementary Note 2: Assessment of learning dynamics with non-normalized performance measures (Experiments 1 and 2)
Results presented in the main text were based on the normalized performance outcomes, adjusted for baseline performance on the pre-learning random run. For completeness, exploratory analyses were also performed on the non-normalized performance outcomes.
Non-normalized performance measures on the SRT task of Experiments 1 and 2 are depicted in Supplementary Figures S3 and S4 and results from the corresponding statistical analyses are provided in Supplementary Tables 3 and 4, respectively. In brief, absolute accuracy remained stable across blocks of practice for all task runs of both experiments, as evidenced by no main effects of block. Furthermore, similar changes across task blocks were found among groups (i.e., no group x block interaction effects), but the overall performance levels were significantly different among age groups (i.e., group main effect). Specifically, pairwise follow-up comparisons indicated that for certain task runs of the two experiments, children were less accurate than adolescents (i.e., all task runs of Experiment 1, except for the post-learning random), young adults (all runs of Experiment 1 plus pre-learning random and test of Experiment 2), and older adults (all task runs of both experiments except for the post-learning random of Experiment 2).
Absolute response times (RTs) on the pre-learning random revealed a group effect but no block or group x block interaction effect in both experiments. During the training run of both experiments, RTs significantly decreased across practice blocks, as shown by a main effect of block, and the overall RT was significantly different among age groups. Furthermore, in Experiment 1, no group x block interaction effect was found and thus the decrease in RT across the training blocks was similar among age groups. In Experiment 2, however, results revealed a marginally significant group x block interaction, indicating that the change in performance across training blocks tended to be different among groups. During the post-learning test run of both experiments, there was no significant block effect, indicating a performance plateau was reached. This plateau was reached by all groups (i.e., no group x block interaction effect), but the age groups reached significantly different performance levels (i.e., group main effect). Similarly, the post-learning random run showed no block or group x block interaction effects but did reveal significant overall performance differences among groups. Pairwise follow-up comparisons for all the main effects of group indicated that the children were significantly slower than the other age groups for all task runs. Furthermore, the adolescents were slower than the young adults for all task runs of Experiment 2 except for the post-learning test. And lastly, the older adults were slower than the young adults for certain task runs of Experiment 1 (i.e., pre-learning random and post-learning test) and all task runs of Experiment 2.
In summary, and as expected, children were less accurate and slower on the task as compared to the other age groups. Results corresponding to these changes in non-normalized performance metrics across blocks of practice were largely consistent with those in the main text on normalized data.
Supplementary Note 3: Assessment of learning magnitude and micro-learning with non-normalized data (Experiment 1)
The dependent measures of learning magnitude (assessing sequence-specific learning) as well as micro-offline and -online performance changes were computed with normalized RT data in the main text. Here, we present results from the same analyses but with these performance indices computed with non-normalized response time (RT) data.
Learning Magnitude
The difference in non-normalized RT between the post-learning test (averaged across the 4 test blocks) and the post-learning random (averaged across the 4 random blocks) was divided by the non-normalized RT in the post-learning random run (averaged across the 4 blocks). A one-way ANOVA revealed a significant group effect (F(3,129) = 5.440, p = 0.002, ƞ2 = 0.115; see Supplementary Figure S5). Follow-up comparisons revealed that a significantly smaller sequence-specific learning magnitude was observed in children as compared to adolescents (p = 0.014, G = 0.714) and young adults (p = 0.001, G = 0.970). This finding is consistent with the results based on the normalized data presented in the main text.
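Written out as a formula (this is simply a restatement of the computation described above; the symbol $\overline{RT}$, introduced here for clarity, denotes a block-averaged non-normalized response time):

$$\text{Learning magnitude} \;=\; \frac{\overline{RT}_{\text{post-learning test}} \;-\; \overline{RT}_{\text{post-learning random}}}{\overline{RT}_{\text{post-learning random}}}$$

with each mean taken over the corresponding 4 blocks; the order of subtraction follows the wording of the paragraph above, and the sign convention used for reporting may differ in the main text.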
In summary, the calculation of the micro-online and -offline gains based on the non-normalized data demonstrates that children exhibited smaller micro-online and larger micro-offline performance gains, respectively, compared to the other 3 age groups.
Supplementary Note 4: Assessment of macro-offline performance changes with non-normalized data (Experiment 2)
Macro-offline performance changes in the main text were based on inter-session differences in normalized performance outcomes, adjusted for the baseline performance on the pre-learning random run. Here, we conduct exploratory analyses on macro-offline performance changes computed on non-normalized response times (RTs).
Random and sequence-specific macro-offline performance gains
Macro-offline gains in the random condition were assessed with the same computation as described in the main text but using the non-normalized RTs (Supplementary Figure S8A).
Sequence-specific macro-offline gains were assessed with the same computation as described in the main text but using non-normalized RTs (Supplementary Figure S8B). Significant main effects for both offline period (F(1,103) = 32.333, p < 0.001, partial ƞ2 = 0.239) and group (F(3,103) = 7.523, p < 0.001, partial ƞ2 = 0.180) were revealed, but there was no offline period x group interaction effect (F(3,103) = 2.079, p = 0.108, partial ƞ2 = 0.057). Post-hoc pairwise comparisons on group differences across the two offline intervals indicated that children exhibited larger sequence-specific macro-offline gains as compared to young (p = 0.026, G = 0.627) and older adults (p < 0.001, G = 0.949). Additionally, adolescents showed significantly larger sequence-specific offline gains in comparison to older adults (p = 0.003, G = 1.360). These results suggest that the offline changes in children likely involved the strengthening of their sequential memory. These findings are consistent with the results based on the normalized RTs presented in Supplementary Note 5 below.
Supplementary Note 5: Random and sequence-specific macro-offline performance changes with normalized performance measures (Experiment 2)
Figure S11A displays the random macro-offline gains across both offline intervals per age group and as a function of age. Significant main effects for both offline period (F(1,103) = 21.409, p < 0.001, partial ƞ2 = 0.172) and group (F(3,103) = 4.097, p = 0.009, partial ƞ2 = 0.107) were revealed, but there was no offline period x group interaction effect (F(3,103) = 0.601, p = 0.616, partial ƞ2 = 0.017). Post-hoc pairwise comparisons on group differences across the two offline intervals indicated that older adults exhibited lower random macro-offline gains as compared to children (p = 0.023, G = 0.700) and adolescents (p = 0.012, G = 1.041). Importantly, the children did not differ from any of the other age groups.
Figure S11B displays the sequence-specific macro-offline gains across both offline intervals per age group and as a function of age. Significant main effects for both offline period (F(1,103) = 32.558, p < 0.001, partial ƞ2 = 0.240) and group (F(3,103) = 7.210, p < 0.001, partial ƞ2 = 0.174) were revealed, but there was no offline period x group interaction effect (F(3,103) = 1.737, p = 0.164, partial ƞ2 = 0.048). Post-hoc pairwise comparisons on group differences across the two offline intervals indicated that children exhibited larger sequence-specific macro-offline gains as compared to young (p = 0.034, G = 0.609) and older adults (p < 0.001, G = 0.923). Additionally, adolescents showed significantly larger sequence-specific offline gains in comparison to older adults (p = 0.003, G = 1.340).
(d) Details of the corresponding statistical results are presented in the main text. In brief, there was a significant relationship between micro- and macro-offline gains collapsed across the 4 age groups (Panel a). Between-group comparisons revealed that this relationship was significantly different between young adults and each of the other 3 age groups. Note, however, that these between-group differences were no longer significant if the young adult with the large negative 5 h offline gains was excluded from analyses. n = 27 in each of the 4 age groups.
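The pairwise comparisons above report effect sizes labeled G alongside the p values. Assuming these denote Hedges' g, the bias-corrected standardized mean difference commonly reported for such group contrasts (the supplement itself does not define the symbol), a minimal sketch of the computation is shown below; the group values are invented for illustration only.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: standardized mean difference with small-sample bias correction.
    (Shown on the assumption that the reported 'G' values are Hedges' g.)"""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups
    s_pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    d = (x.mean() - y.mean()) / s_pooled            # Cohen's d
    j = 1 - 3 / (4 * (nx + ny) - 9)                 # small-sample correction factor
    return j * d

# Hypothetical offline-gain values for two groups of n = 27 (illustrative only).
rng = np.random.default_rng(1)
children = rng.normal(0.08, 0.05, 27)
older_adults = rng.normal(0.03, 0.05, 27)
print(round(hedges_g(children, older_adults), 3))
```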
Figure S3
Figure S4
Figure S6
Figure S7
Figure S9
Figure S10
Table 2 | Participant characteristics in Experiment 2.
Results from statistical analyses of participant characteristics and subjective and objective vigilance measures of the four age groups. Gender reports chi-square statistics and the Cramer's V effect size, whereas all other variables list F-values and eta-squared effect sizes. Significant values are marked with an asterisk. Df = degrees of freedom; ƞ2 = eta squared; SSS = Stanford Sleepiness Scale 8 . PVT = psychomotor vigilance task 9 . Corresponding group means are provided in Table 2 of the main text and depicted in Figure S2 as part of this Supplementary Information. Figure S2 also indicates significant pairwise comparisons that were conducted as follow-ups to significant contrasts shown above.
Table 3 | Absolute (non-normalized) task performance in Experiment 1.
Results from statistical analyses of non-normalized response time (RT) and accuracy measures for all task runs of Experiment 1. Significant values are marked with an asterisk. Df = degrees of freedom; part ƞ2 = partial eta squared; B x G = block x group interaction.
Table 4 | Absolute (non-normalized) task performance of session 1 in Experiment 2.
Results from statistical analyses of non-normalized response time (RT) and accuracy measures for all task runs of session 1 of Experiment 2. Significant values are marked with an asterisk. Df = degrees of freedom; part ƞ2 = partial eta squared; B x G = block x group interaction.
Table 5 | Statistical results on initial learning dynamics with normalized performance measures in Experiment 2.
Results from statistical analyses of normalized response time (RT) and accuracy measures for all task runs of session 1 in Experiment 2. Significant values are marked with an asterisk. Df = degrees of freedom; part ƞ2 = partial eta squared; B x G = block x group interaction. Figure 4 in the main text depicts the learning dynamics of these normalized measures. | 2023-09-13T13:11:34.189Z | 2024-01-11T00:00:00.000 | {
"year": 2024,
"sha1": "343deb179eb951b7eb6ec4d8e422d0b161226763",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s44271-024-00082-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89659d31d8cf0c1e70cf774c888826f5d6074220",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256099481 | pes2o/s2orc | v3-fos-license | Processing Validation Metrics of Syva Enzyme-Multiplied Immunoassay Technique (EMIT) Methotrexate Assay for Beckman Coulter System
Background: High-dose methotrexate (HDMTX), defined as a dose greater than 500 mg/m², is used to treat a variety of cancers; although generally safe, it can cause major toxicity. The Syva enzyme-multiplied immunoassay technique (EMIT) methotrexate (MTX) assay (Gurgaon, India: Siemens Healthcare Diagnostics Ltd.) uses a homogeneous enzyme immunoassay method. Low-end precision performance is very important for laboratory methods, especially when their results have clinical significance at these levels. Methodology: A total of 25 replicates (five replicates per run, for five runs) were analyzed for profiling. Precision, accuracy, linearity, limit of blank, limit of detection, and limit of quantification were determined using existing guidelines. The imprecision profile and limit of quantitation (LoQ) at 10% CV were determined by fitting data with hyperbolic regression. Results: The coefficient of variation percentage (CV%) for low, mid, and high-level internal quality control (IQC) was 1.25%, 3.45%, and 1.55%, respectively. Similarly, the estimated bias was -4.58%, -3.54%, and -7.21% for each level. The assay linearity was maintained over the range of 0.041-1.993 µmol/L with an R² of 0.959. The limit of detection was estimated to be 0.07 µmol/L. Conclusion: The Syva EMIT MTX assay can be used to precisely and accurately measure serum methotrexate at levels lower than those claimed by the manufacturer, aiding in the monitoring of toxicity in patients.
Introduction
Methotrexate (MTX) is an integral part of the treatment for acute lymphoblastic leukemia (ALL) and is effective against numerous types of cancer, which justifies its inclusion on the World Health Organization's list of essential medicines [1]. Methotrexate is an antimetabolite that inhibits folic acid metabolism. It binds to dihydrofolate reductase (DHFR) and inhibits the conversion of dihydrofolate to tetrahydrofolate by competitive inhibition. Tetrahydrofolate is required for the biosynthesis of thymidine and purines, which are necessary for DNA synthesis. Methotrexate's inhibition of tetrahydrofolate synthesis renders cells incapable of multiplying and producing proteins. Methotrexate is administered at doses that range from 12 mg intrathecally and 20 mg/m² orally, intramuscularly, or intravenously as weekly maintenance chemotherapy for ALL to doses as high as 33,000 mg/m² intravenously in cases of osteosarcoma or certain lymphomas [2].
High-dose methotrexate (HDMTX), which is defined as a dose greater than 500 mg/m², is used to treat a variety of cancers. Although HDMTX may be administered safely to the majority of patients, it can cause severe toxicity [3]. HDMTX exposure is strongly associated with toxicity affecting multiple organs and systemic functions, including neurotoxicity, hepatotoxicity, mucositis, myelosuppression, and nephrotoxicity. These adverse effects frequently result in the termination or cessation of chemotherapy and increase the risk of relapse [4]. Therefore, depending on the treatment protocol, HDMTX must be administered with rigorously standardized supportive care to avoid toxicity. Thus, accurate measurement of low levels of MTX in cases of prolonged exposure can help prevent severe morbidity and mortality in patients with delayed methotrexate excretion [3]. Further, to counter MTX toxicity, supplementation with a formulation of reduced folate (leucovorin) is administered to patients and is termed leucovorin rescue [5].
The potential to quantify MTX at concentrations as low as 0.05 µmol/L is crucial for providing an adequate clinical drug monitoring service. The measurement of low MTX levels is crucial for the successful delivery of leucovorin, a rescue drug administered following MTX therapy. Leucovorin therapy is continued until MTX levels fall below 0.05 µmol/L. In addition, the detection of low MTX concentrations allows for the assessment of toxic concentrations that may persist 72 hours after high-dose delivery of the drug [6]. As enzyme-multiplied immunoassay technique (EMIT) assays have been of importance for therapeutic drug monitoring, they have long been used for the determination of both endogenous and exogenous analytes in such clinical settings [7]. The Syva EMIT MTX assay (Gurgaon, India: Siemens Healthcare Diagnostics Ltd.) uses a homogeneous enzyme immunoassay method. It relies on the competitive binding between the exogenous MTX and the drug labeled with glucose-6-phosphate dehydrogenase for the antibody binding site. The Syva EMIT MTX assay utilizes a five-point logit calibration. The range of calibrator values is between 0.2 and 2.0 µmol/L. The analytical range for the Syva EMIT MTX assay is 0.3-2 µmol/L. However, in clinical practice, the levels of MTX are frequently maintained below this limit for leucovorin rescue and need to be measured [8]. Low-end precision performance is very important for laboratory methods, especially when their results have clinical significance at these levels [9].
Hence, in this study, we have tried to establish the validation profile of Syva EMIT MTX assay on Beckman Coulter AU680 auto-analyzer system (Pasadena, CA: Beckman Coulter Inc.) using established methodologies and laboratory practices for proper monitoring and reporting of low values of MTX in serum of patients following HDMTX administration.
Materials And Methods
The quality parameters for the MTX assay were studied on a Beckman AU680 autoanalyzer over a period of five days. The calibrators provided with the Syva EMIT assay were used to calibrate the test assay for the first time. Bio-Rad Lyphochek Therapeutic Drug Monitoring (TDM) levels 1 and 2 were used as internal quality control (IQC) material. As the IQC data for the Beckman Coulter AU system were not established, the Bio-Rad Unity worldwide report for therapeutic drug monitoring (April 2022) was used. Specifically, for levels 1 and 2 IQC, the data obtained for Syva EMIT based on Roche Cobas 6000/8000/c 311 (Basel, Switzerland: Roche Holding AG) were used. For level 3 IQC, as there were no equivalent data, the EIA Method Group cumulative mean, SD, and CV were used.
Verification of precision
Clinical and Laboratory Standards Institute (CLSI) guideline EP15-A3 was used for precision verification of the Syva EMIT MTX assay [10]. A total of 25 replicates (five replicates per run, for five runs) were analyzed for precision profiling. Level 1, level 2, and level 3 Bio-Rad QC materials were used. Level 3 QC was run in dilution as its mean value is higher than the analytical range of the assay. The procedure for verification of the analytical performance of measurands published by Chakravarthy et al., based on the CLSI EP15-A3 guidelines, was reproduced [11]. Grubbs' test was used to remove any outliers from the replicates [9].
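The five-runs-by-five-replicates design lends itself to a one-way ANOVA decomposition of imprecision, which is the general approach of EP15-type protocols. The sketch below is an illustrative outline of that decomposition and of a Grubbs outlier check, not the laboratory's actual worksheet: the input values are invented, and the critical Grubbs value must be taken from published tables for the chosen alpha and sample size.

```python
import numpy as np

def ep15_precision(replicates):
    """Within-run and within-laboratory imprecision from a k-runs x n-replicates design,
    using a one-way ANOVA decomposition with runs as the grouping factor."""
    data = np.asarray(replicates, dtype=float)
    k, n = data.shape
    run_means = data.mean(axis=1)
    grand_mean = data.mean()

    # Mean squares from the one-way ANOVA
    ms_within = ((data - run_means[:, None]) ** 2).sum() / (k * (n - 1))
    ms_between = n * ((run_means - grand_mean) ** 2).sum() / (k - 1)

    s_r = np.sqrt(ms_within)                          # repeatability (within-run) SD
    s_b2 = max((ms_between - ms_within) / n, 0.0)     # between-run variance component
    s_wl = np.sqrt(ms_within + s_b2)                  # within-laboratory SD

    return {
        "mean": grand_mean,
        "CV%_R": 100 * s_r / grand_mean,
        "CV%_WL": 100 * s_wl / grand_mean,
    }

def grubbs_statistic(values, critical_g):
    """Grubbs statistic for the most extreme replicate; flag it if it exceeds the
    tabulated critical value (looked up for the chosen alpha and n)."""
    values = np.asarray(values, dtype=float)
    g = np.abs(values - values.mean()).max() / values.std(ddof=1)
    return g, g > critical_g

# Hypothetical level-1 IQC results: 5 runs x 5 replicates (not the study's data).
level1 = np.random.default_rng(0).normal(0.45, 0.006, size=(5, 5))
print(ep15_precision(level1))
```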
Verification of trueness
Trueness was estimated by comparing the grand mean elicited from the runs against the peer-group data obtained from the Bio-Rad Unity worldwide report for therapeutic drug monitoring (April 2022). Bias, total allowable error, and sigma metrics were also evaluated in accordance with established methods [12,13].
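The bias, total error (TE), and sigma metrics referred to above follow standard quality-control formulations. The expressions below are one common form, given here for orientation rather than as the exact equations of the cited methods [12,13]:

$$\text{Bias\%} = \frac{\bar{x}_{\mathrm{lab}} - \bar{x}_{\mathrm{peer}}}{\bar{x}_{\mathrm{peer}}} \times 100, \qquad \text{TE\%} = |\text{Bias\%}| + 1.65 \times \text{CV\%}, \qquad \sigma = \frac{\text{TEa\%} - |\text{Bias\%}|}{\text{CV\%}}$$

where $\bar{x}_{\mathrm{lab}}$ is the laboratory grand mean, $\bar{x}_{\mathrm{peer}}$ the peer-group mean, CV% the within-laboratory imprecision, and TEa% the total allowable error (10% for methotrexate in this study [18]); note that the multiplier applied to CV% in the TE formula varies between 1.65 and 2 across sources.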
Low-end performance
The Clinical and Laboratory Standards Institute (CLSI) published the EP17-A approved guideline, "Protocols for Determination of Limits of Detection and Limits of Quantitation," in 2004, which defines low-end performance in terms of the limit of blank, limit of detection, and limit of quantification [9]. The limit of blank (LoB) is thereby defined as "the highest value we expect to see in a series of results on samples that contain no analyte," the limit of detection (LoD) is "the actual concentration at which an observed test result is very likely to exceed the LoB and may therefore be declared as detected," whereas the limit of quantitation (LoQ) is defined as "the actual concentration at which the analyte is reliably detected and at which the uncertainty of the observed test result is less than or equal to the goal set by the laboratory" [9].
The methodology as outlined by Armbruster et al. was utilized for estimation of LoB and LoD [15]. Five samples (two assay buffer samples, two samples from patients who had not been administered MTX, and one distilled water sample) were run for five days (LoB = mean blank + 1.645 × SD blank). To replicate a low serum MTX concentration just above the LoB, a 0.2 µmol/L calibrator was diluted and five replicates per run were analyzed for five runs (LoD = LoB + 1.645 × SD low-concentration sample) [16]. Five sample concentrations from the LoD to approximately two times the LoD were run in five replicates for five runs. The imprecision profile and LoQ at 10% CV were determined by fitting the data with hyperbolic regression. A 95% confidence interval (95% CI) was evaluated by the inverse regression method after linearization of the hyperbolic function [17]. All data analysis was done using statistical and mathematical formulas incorporated in MS Excel 365 and by using the Statistical Package for Social Science (SPSS version 26.0; Chicago, IL, IBM Corp.).
IQC: internal quality control; CV%R: within-run coefficient of variation percentage; CV%WL: within-lab coefficient of variation percentage.
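A minimal sketch of these low-end computations is shown below. It is illustrative only: the replicate values are invented, the hyperbolic form CV%(c) = a + b/c is one common choice for an imprecision profile (the paper does not specify its exact functional form), and the 95% CI step via inverse regression after linearization is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def limit_of_blank(blank_results):
    """LoB = mean_blank + 1.645 * SD_blank (parametric form; EP17 also allows nonparametric estimates)."""
    b = np.asarray(blank_results, dtype=float)
    return b.mean() + 1.645 * b.std(ddof=1)

def limit_of_detection(lob, low_sample_results):
    """LoD = LoB + 1.645 * SD of a low-concentration sample measured in replicate."""
    s = np.asarray(low_sample_results, dtype=float)
    return lob + 1.645 * s.std(ddof=1)

def fit_imprecision_profile(concentrations, cv_percent):
    """Fit an assumed hyperbolic imprecision profile CV%(c) = a + b/c and return the
    concentration at which the profile crosses a target CV% (e.g., 10%), i.e., the LoQ."""
    def hyperbola(c, a, b):
        return a + b / c
    (a, b), _ = curve_fit(hyperbola, concentrations, cv_percent, p0=(1.0, 0.1))
    def loq_at(target_cv):
        return b / (target_cv - a)  # invert CV% = a + b/c for c
    return (a, b), loq_at

# Hypothetical replicate data near the low end (values are illustrative only).
blanks = [0.01, 0.02, 0.01, 0.02, 0.01]
low_sample = [0.05, 0.06, 0.05, 0.07, 0.06]
lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, low_sample)

conc = np.array([0.07, 0.09, 0.11, 0.13, 0.15])   # mean concentrations of the 5 low pools
cvs = np.array([14.0, 10.5, 8.0, 6.5, 5.5])       # observed CV% at each level
(_, _), loq_at = fit_imprecision_profile(conc, cvs)
print(f"LoB={lob:.3f}, LoD={lod:.3f}, LoQ(10% CV)~{loq_at(10):.3f}")
```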
Results
The manufacturer-reported CV% is 6.8% within-run and 5.6% between-run. The Unity Worldwide Report provides a CV% of 3.5%, 7.5%, and 8.6% for levels 1, 2, and 3 IQC, respectively. Our precision profiling estimated a within-lab CV% of 1.5%, 3.5%, and 1.6% for levels 1, 2, and 3 IQC, respectively. This is considerably lower than the available comparison data.
The levels of IQC had a bias of -4.6%, -3.5%, and -7.2%, respectively. All replicates had a total error (TE) less than the accepted total allowable error for methotrexate of 10% [18]. The sigma value of the IQC was also estimated, and all the metrics have been tabulated in Table 3. The relative comparison of allowable errors for precision and trueness was estimated. Precision estimates for levels 1 and 3 IQC were optimal, but for level 2 were desirable (Table 4). Total allowable error was optimal, desirable, and minimal for levels 1, 2, and 3 IQC, respectively (Table 5). The linearity of the assay was estimated by both linear and non-linear regressions. The range of linearity was from 0.041 µmol/L to 1.993 µmol/L. Non-linear polynomial regressions of the 2nd and 3rd orders were used. R² values of 0.959, 0.960, and 0.960 were estimated for the linear, 2nd order polynomial, and 3rd order polynomial regressions, respectively. The linearity data are represented in Table 6 and Figure 1. The limit of blank was estimated to be 0.03 µmol/L. The limit of detection was estimated to be 0.07 µmol/L. The limit of quantification was estimated from the hyperbolic regression curve to be 0.073 µmol/L and 0.085 µmol/L for cut-off CV% of 10% and 5%, respectively (Figure 2). Linearization of the hyperbolic curve provided a 95% CI of 0.012 to 0.319 for LoQ at a 10% CV cut-off (Figure 3).
Discussion
The quality of a patient's result is inversely proportional to the laboratory error rate. Periodically, a laboratory should review its errors in order to determine the extent of their impact on patient safety [11]. Therefore, the measurement of precision and trueness of a test procedure is of paramount importance. HDMTX toxicity may be fatal for the patient, and hence accurate measurement of MTX levels is warranted. Beckman Coulter AU systems are among the widely used automated laboratory systems; hence, validation of the Syva EMIT MTX assay on this platform was required.
Our coefficients of variation for levels 1 and 2 IQC were 1.52% and 3.45%, respectively. These values are considerably lower than those provided by the manufacturer and the Bio-Rad Unity worldwide report. Our precision was comparable to that of Chavan et al., where levels 1 and 2 IQC were 0.60% and 2.8%, respectively [8]. It is, however, much lower than the CV% range of 0.88%-9.43% reported by Borgman et al. [6] and 5.31%-8.43% reported by Suh-Lailam et al. [19]. Another study by Lim et al. also had a higher CV% than our study (1.8%-11.2%) [20]. In the same study, the between-run CV% was 8.1%, 1.3%, and 3.5% for low, medium, and high-level controls, respectively, similar to our study.
Shi et al. studied the Syva EMIT MTX assay on a Siemens Viva-E autoanalyzer and reported a bias ranging from -1.60% to -5.67% [21]. Our study, however, had a higher bias, ranging from -3.54% to -7.21%. The high bias is comparable to the reports of Borgman et al., who had biases as high as 21.9% and 9.46% for medium-level control materials [6].
The manufacturer claims a lower linearity level of 0.3 µmol/L. However, our study estimated linearity from 0.07 to 1.99 µmol/L, which is more sensitive than that reported by Chavan et al. [8]. The limit of quantification is highly important because it decides the lowest value of an analyte that can be measured with a high degree of performance, especially when lower concentrations matter from a clinical perspective. There is a lack of proper LoQ validation profiles for the Syva EMIT MTX assay in previous literature; hence, our study takes primary importance in this regard. We estimated an LoQ of 0.073 µmol/L and 0.085 µmol/L for cut-offs of 10% and 5% CV, respectively. Linearized curve analysis provided a lower limit of 0.012 µmol/L (R² = 0.778), comparable to the lower limits estimated by Lim et al. [20].
As there were no established IQC data or worldwide peer-group values for methotrexate measured by the Syva EMIT assay on the Beckman AU system, we had to rely on rather generic data provided by the manufacturer and the Unity report for the evaluation of verification metrics. This was one of the limitations of our study. Verification of performance metrics of methods becomes more important when the analyte in question is toxic even at low serum concentrations. Hence, evaluating analytical performance through the estimation of precision, trueness, linearity, and low-level performance will help in managing the clinical implications of methotrexate treatment aptly with minimal error. It will also help to set a benchmark for other laboratories using other instruments for processing methotrexate, as well as reaffirm the importance of CLSI guidelines and other relevant statistical examinations in providing proper and verified metrics to all exploring laboratories to meet their needs for their measured biochemical tests.
Additional Information Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-01-23T16:01:20.987Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "896df936ad6b185fbb60df232b285f267695199e",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/131991/20230121-13125-10krjvb.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1d7872173f399ccc3870b116bfec13736b1aca7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246426242 | pes2o/s2orc | v3-fos-license | Genetics and Epigenetics of Bone Remodeling and Metabolic Bone Diseases
Bone metabolism consists of a balance between bone formation and bone resorption, which is mediated by osteoblast and osteoclast activity, respectively. In order to ensure bone plasticity, the bone remodeling process needs to function properly. Mesenchymal stem cells differentiate into the osteoblast lineage by activating different signaling pathways, including transforming growth factor β (TGF-β)/bone morphogenic protein (BMP) and the Wingless/Int-1 (Wnt)/β-catenin pathways. Recent data indicate that bone remodeling processes are also epigenetically regulated by DNA methylation, histone post-translational modifications, and non-coding RNA expressions, such as micro-RNAs, long non-coding RNAs, and circular RNAs. Mutations and dysfunctions in pathways regulating the osteoblast differentiation might influence the bone remodeling process, ultimately leading to a large variety of metabolic bone diseases. In this review, we aim to summarize and describe the genetics and epigenetics of the bone remodeling process. Moreover, the current findings behind the genetics of metabolic bone diseases are also reported.
Introduction: Bone Structure and Cell Types
Bone is a living tissue that supports and protects several organs in the body and provides the environment for blood cell production. It is a metabolically active tissue that undergoes a constant cycle of resorption and replacement. This continuous process allows bone to adapt to the changes required for healthy functioning, to maintain bone strength, and the changes required for fracture healing to take place [1][2][3]. Furthermore, bone tissue plays an important role in mineral homeostasis, such as calcium and phosphorus, and gives a solid base for skeletal muscles [4].
Normal bone tissue consists of two phases, i.e., an organic and an inorganic phase. In the organic phase, osteoblasts and osteoclasts are the major cell types of bone tissue, underpinning the main metabolic activities in bone. Osteoblasts, which are cells responsible for bone matrix synthesis, derive from mesenchymal stem cells (MSCs) in bone marrow [5], blood, and pericytes [6,7]. MSC migration is a complex mechanism, which is significantly important for both bone formation and fracture healing. Nevertheless, its regulation system is yet to be elucidated [6,8,9]. Alterations in MSC migration can lead to bone imbalances [7,10]. An equally important role in bone homeostasis is attributed to osteoclasts, which are multinucleated cells derived from progenitors of the monocyte/macrophage lineage that digest the components of bone, allowing for bone resorption and replacement [11,12]. Other cell types that reside in bone tissue are bone lining cells and osteocytes. Specifically, the former are osteoblast-derived cells, which cover all quiescent bone surfaces where bone resorption or bone formation is not required [13]. Conversely, osteocytes derive from osteoblasts that suspend their activity when buried in the matrix, acting as mechano-sensors capable of transducing mechanical forces into biological signals [14]. When combined, these cells are organized into temporary anatomical structures named basic multicellular units (BMUs). BMUs carry out bone remodeling, a biological process that results in structural changes and skeletal renewal [5].
The bone matrix is considered to form the intercellular substance of bone tissue and constitutes the inorganic phase. The extracellular matrix (ECM) consists primarily of type I collagen (COL1), which is the most abundant protein in bone tissue, complexed with a crystalline inorganic component composed of hydroxylapatite (HA; Ca10(PO4)6(OH)2) with citrate, carbonate, and ions such as F−, K+, Sr2+, Pb2+, Zn2+, Cu2+, Mg2+, and Fe2+ [15][16][17]. The mechanical quality of the bone matrix is influenced by non-collagenous proteins [18]. The principal non-collagenous proteins of the bone matrix are bone sialoprotein (BSP), osteonectin (ON), osteopontin (OPN), and osteocalcin (OCN) [19], which contain aspartic acid (Asp) and glutamic acid (Glu) residues with high affinity for calcium ions (Ca2+) due to their charged carboxyl groups [19]. In BSP, polyglutamic acid segments are responsible for binding the protein to apatite, while the same role in OPN is played by polyaspartic acid segments [20]. Studies on OPN reported that it behaves like bone glue [18,20]. ON is a cysteine-rich protein that is expressed in mineralized tissues. ON is also involved in osteoblast differentiation and osteoclast activity [21]. OCN, which is expressed by osteoblasts, is also known as bone γ-carboxyglutamate protein (BGLAP) and is frequently used as a clinical marker of bone turnover [22]. Interestingly, different amounts of non-collagenous bone proteins are contained in the different bone types. For example, cortical bone contains 30× more OCN than trabecular bone, while the ON excess in trabecular bone ranges from 21- to 47-fold [23].
Two main histological types of mature bone can be identified, (i) cortical or compact bone, which represents 80% of bone mass; it presents a dense and ordered structure, (ii) cancellous or trabecular bone, which is lighter and less compact than the cortical bone [24,25]. In addition, cortical or compact bone is located mainly on the surfaces of flat bones and in the shaft of long bones. It is made of bone laid down concentrically around central canals, known as Haversian systems. Blood vessels, lymphatics, nerves, and connective tissue are contained in these structures [25]. Trabecular or cancellous bone, on the other hand, has an irregular structure [25]. It is composed of a honeycomb-like network of trabecular plates, forming the ends of long bones and the central parts of flat bones [26].
Pathways Involved in Bone Metabolism
Bone metabolism consists of coupled and dynamic processes orchestrated by both osteoblasts and osteoclasts. In order to ensure bone plasticity, the bone remodeling process needs to function properly [27]. Osteoblasts derive from MSCs that differentiate into the osteoblast lineage by activating different signaling pathways, the main ones being (i) transforming growth factor β (TGF-β)/bone morphogenic protein (BMP) and (ii) Wingless/Int-1 (Wnt)/β-catenin pathways (Figure 1).
However, other regulators of the osteogenic process are the Hedgehog (Hh) and NOTCH cascades, the Sox9 and Msx2 genes, histone deacetylases (HDACs), and the fibroblast growth factor (FGF) and parathyroid hormone-related peptide (PTHrP) cytokines [28,29]. These molecular pathways and regulators cooperate in order to induce the expression of the main osteogenic transcription factors, such as Runt-related transcription factor 2 (Runx2) and Osterix (Osx).
Figure 1. Schematic illustration of the Wingless/Int-1 (Wnt) signaling pathway. The Wnt signaling pathway and its components reported to be mutated in metabolic bone disease are reported. Activation of the canonical Wnt pathway leads to an increase in bone mass. Wnt ligands interact with co-receptor LRP5/6 and frizzled (FZD) to activate the Wnt signaling pathway. The inhibition of Wnt signaling is mediated by extracellular factors, such as sclerostin (SOST) and Dickkopf-related protein 1 (DKK-1), and leads to bone mass decrease. DKK-1 binds to the LRP5/6 co-receptor, thereby preventing activation by Wnt ligands. The inhibitory transmembrane protein LRP4, which is a SOST-interacting protein, is recruited, and the Kremen proteins, which are high-affinity DKK-1 receptors, cooperate with DKK-1 to decrease Wnt signaling. In addition, secreted frizzled-related protein (SFRP) inhibits the canonical Wnt pathway by sequestering Wnt ligands. Mutations in Wnt signaling components result in pathway activation/inhibition. For example, loss-of-function mutations affecting SOST and LRP4 lead to Wnt pathway activation causing bone tissue sclerosteosis, whereas loss-of-function mutations of LRP5 and the WNT1 ligand result in Wnt signaling pathway inhibition leading to osteoporosis disorders.
Osteoclast activity is regulated by different factors and signaling cascades, such as BMPs, calcitonin, interleukins 1/6/11, PTHrP, the Wnt cascade, receptor activator of nuclear factor kappa-B (NF-κB) ligand (RANKL), and macrophage colony-stimulating factor (M-CSF) [31]. Interaction between M-CSF and its receptor c-Fms induces osteoclast proliferation by activating the Src, PLC-γ, PI(3)K, Akt, and Erk kinases. In addition, RANKL binds to its receptor, named Receptor activator of NF-κB (RANK), and promotes the association of RANK with TRAF6, thus activating the Erk, JNK, and p38 kinases. The triggered kinases positively regulate the expression of the main transcription factor for osteoclast formation and function, the calcineurin-dependent Nuclear Factor of Activated T Cells 1 (NFATc1). Osteoprotegerin (OPG) acts as a decoy receptor for RANKL, inhibiting osteoclast activity and promoting apoptosis. Specifically, OPG binds RANKL, inhibiting its interaction with RANK and thus preventing excessive bone resorption.
The Bone Turnover Cycle
Bone remodeling is a continuous process throughout life [10]. It consists of sequential steps, i.e., the initiation, reversal, and termination phases [33]. During the initiation phase, bone resorption begins with the recruitment of osteoclast precursors, which differentiate into mature osteoclasts. Osteoclastogenesis requires specific key mediators, such as M-CSF and RANKL. Specifically, M-CSF, which is produced by osteoblasts and many other cell types, is required for osteoclast precursor proliferation, as well as their differentiation and fusion into osteoclasts [34]. Furthermore, RANKL is expressed by several types of cells, including osteoblasts [35]. RANKL binds its receptor, which is localized on the surface of osteoclast precursors, allowing fusion, maturation, survival, and osteoclast activation [36]. Several studies reported that osteocytes produce the majority of the RANKL required for osteoclast formation in bone remodeling [37,38].
During bone resorption, many factors that lead to MSC recruitment and differentiation are released through bone remodeling to enable bone formation in the bone marrow microenvironment [39]. Bone resorption inhibition occurs in the subsequent transient, or reversal phase, where osteoblasts are recruited to allow bone formation. Osteoblasts may produce osteoprotegerin protein (OPG), which is a decoy receptor for RANKL, preventing its binding to RANK, with the consequent inhibition of osteoclast activation and differentiation [12].
The termination phase represents the final step of the remodeling cycle. In this phase, an equal amount of resorbed bone is replaced [40]. Osteocytes contribute to completing the remodeling process by producing sclerostin, a small protein encoded by the SOST gene, which inhibits bone formation induced by Wnt signaling in osteoblasts [33,41]. At the end of the process, mature osteoblasts undergo apoptosis, become bone lining cells or differentiate into osteocytes [5].
The fine balance between bone resorption and bone formation allows skeleton integrity to be maintained. The coordinated activity of these cells, as well as the integrity of the calcium and phosphate homoeostatic mechanisms, which are primarily mediated by the parathyroid hormone, FGF23, and vitamin D, are required for normal bone formation, metabolism, and repair [42]. Alterations to this mechanism, such as dysfunctions in pathways and factors that regulate osteoblastic bone formation and osteoclastic bone resorption may lead to the onset of bone metabolic diseases, including osteoporosis [43], which is caused by excessive bone resorption, or osteopetrosis due to inadequate osteoclast function and excessive bone formation [44,45].
DNA Methylation
DNA methylation is the process of transferring a methyl group to the 5 position of cytosines in CpG dinucleotides in the DNA sequence [30,53]. DNA methylation occurs mainly on CpG islands in gene promoter regions [54,55]. The enzymes that catalyze DNA methylation are the DNA methyltransferases (DNMTs), including DNMT1, DNMT2, and DNMT3 [56,57]. Demethylation is the reverse process of methylation in which a methyl group is removed [58,59]. This process is catalyzed by several DNA demethylases, such as TET1, OCT4, and GADD45A, and results in hypomethylation [60,61]. Generally, the methylation status of promoter regions is inversely correlated to gene expression [62]. High methylation levels can suppress bone-related gene activation, thus impairing osteogenesis [55,60]. In contrast, low methylation levels in the promoter regions increase the expression of genes involved in osteogenic differentiation [55,60]. Both methylation and demethylation processes are implicated in bone turnover. Published studies have shown that DNA methylation in bone plays a fundamental role in controlling genes associated with both osteoblast and osteoclast differentiation, including RUNX2, OSX, OCN, ALP, Wnt, RANK/RANKL/OPG, and other important signaling pathways [61,[63][64][65][66].
Methylation status can determine the fate of MSCs from various sources. Bone marrow-derived MSCs (BMSCs) tend to differentiate into osteoblasts, whereas adipose-derived MSCs (ASCs) tend to turn into adipocytes. This may be due to the hypomethylation of the promoter regions of specific osteogenic genes, such as RUNX2 and OCN, in BMSCs, whereas complete hypomethylation of the adipose tissue-related gene PPAR-γ2 was observed in ASCs. In addition, the expression of the corresponding mRNAs is also higher [61]. Moreover, the RUNX2 promoter of ASCs is hypermethylated, resulting in low mRNA expression of RUNX2 [63].
RUNX2 and OSX are specific transcription factors involved in the bone formation process and osteoblast differentiation. OSX is the downstream target of RUNX2 [67,68]. RUNX2 methylation levels are reduced during osteogenic differentiation in MSCs, suggesting that RUNX2 methylation is involved in regulating osteoblast differentiation [69]. Moreover, the methylation level in the promoter region of OSX changes dynamically during osteogenic differentiation, suggesting that OSX epigenetic regulation may modulate osteogenic differentiation in MSCs. Inhibition of DNA demethylase reversed the expression levels of these genes, suggesting that RUNX2 and OSX are mainly regulated by DNA methylation mechanisms during osteogenic differentiation in ASCs [61].
Of all the growth factors that stimulate osteogenic differentiation in MSCs, BMP2 plays a fundamental role [70]. Hypermethylation of the BMP2 promoter region in osteoblasts leads to the silencing of certain genes associated with bone formation [71,72]. Indeed, the methylation level of the BMP2 promoter has been reported to significantly correlate with the degree of osteoporosis in affected patients, resulting in BMP2 downregulation [72].
Methylation levels are variable and can change during differentiation and in response to external stimuli [73]. Indeed, promoter hypomethylation in an early osteogenic marker, that is ALP, has been shown to induce high ALP expression in osteoblasts [64]. However, downregulation of ALP expression was found in mature osteocytes due to hypermethylation in its promoter region [64]. These results suggest that the DNA methylation pathway may inhibit the expression of ALP during osteogenic differentiation and may be time-dependent and variable during differentiation [64,74]. Moreover, OCN, an important marker of osteogenic differentiation [75], exhibited a high level of methylation of its promoter during the first differentiation phase in primary osteoblasts, which gradually decreased. These results suggest that OCN hypomethylation may promote osteogenesis [76].
The Wnt/β-catenin signaling pathway is one of the major signaling cascades involved in MSC osteoblast differentiation [77]. The genes in the Wnt signaling pathway are also epigenetically regulated by DNA methylation. During BMSC osteogenic differentiation, a decrease in methylation has been detected in the receptor tyrosine kinase-like orphan receptor 2 (ROR2) promoter region of the Wnt signaling pathway [65].
As mentioned above, bone remodeling is also regulated by the RANKL/RANK/OPG system [78]. DNA methylation of RANKL and its receptor OPG plays a key role in regulating osteoclast differentiation [79]. Quantitative methylation of all types of bone cells and pyrolysis sequence analysis showed that methylation of the transcription initiation regions of RANKL and OPG inhibits expression of RANKL and OPG genes [66,79,80]. Therefore, methylation regulation of RANKL/RANK/OPG plays an important role in bone regeneration [79,81].
Finally, osteolytic changes in myeloma patients have been reported to be related to an increase in IRF8 methylation and IRF8 downregulation [89]. Inhibition of IRF8 further induces bone resorption, suggesting that epigenetics may be a potential target for bone disease treatment [89].
Histone Post-Translational Modifications
Histones H1, H2A, H2B, H3, and H4 are small nuclear proteins rich in positively charged basic amino acids, i.e., arginine and lysine, which can interact with the negatively charged DNA [56]. The interaction between a histone octamer (two copies each of H2A, H2B, H3, and H4) and a 147-bp segment of DNA leads to chromatin compaction and assembly into the nucleosome, which is the basic chromatin unit [56]. The continuous nucleosome formation and folding are at the basis of the chromatin remodeling processes that regulate gene expression. Histone post-translational modifications (PTMs) occur at the N-terminal position of these nuclear proteins, thereby promoting changes in the chromatin structure. PTMs modulate the expression of genes, ultimately leading to the regulation of a large variety of cell functions, including osteogenesis [81,90,91]. The main histone modifications comprise (de)acetylation, (de)methylation, (de)phosphorylation, and ubiquitylation modifications [56]. Histone PTMs are modulated by several classes of modifying enzymes, including (i) histone acetyltransferases (HATs) and histone deacetylases (HDACs); (ii) histone methyltransferases and demethylases, and (iii) histone kinases and phosphatases, that promote or eradicate specific modifications, respectively [56].
A growing body of evidence indicates that histone PTMs and, in particular, histone (de)acetylation, (de)methylation, and their modifying enzymes play a role in bone remodeling processes [81,90,91]. Two genome-wide studies characterized the chromatin landscape during MSC differentiation [92,93]. A global enrichment of a variety of histone marks has been identified as essential for the multipotent differentiation of MSCs, including H4K5ac, H3K9ac, H3K27ac, H3K36me3, H3K4me1, and H3K4me3 [92,93].
Histone marks and modifying enzymes localized within specific bone remodeling candidate genes have been identified. Indeed, PTMs, such as acetylation and methylation at histones associated with promoters of bone-related genes are functionally related to the chromatin remodeling processes that regulate their expression during bone remodeling [94].
An enrichment in the H3K4me3 and H3K27ac marks at the Runx2 promoter region has been related to the epigenetically-forced expression of RUNX2 and osteocalcin under myoblastic differentiation [95]. Moreover, H3/H4 acetylation marks have been found to enhance the activity of the osterix and osteocalcin promoters, while a reduction in histone deacetylase 1 recruitment has also been determined at those promoters [96]. The implication of histone acetylation in the expression of osteocalcin has also been investigated in the context of osteoblastogenesis. A transcriptionally active osteocalcin gene has been linked to acetylated histones H3 and H4 localized in the osteocalcin promoter region [97,98]. Moreover, these modifications seem to be mediated by a complex interaction between different factors, including the transcriptional coactivator and HAT p300, alongside PCAF and RUNX2 [99]. Additional studies indicated that the acetylation of both H3 and H4 at the osteocalcin promoter can be prevented by NFATC1, which specifically interacts with HDAC3 [100], which, in turn, can antagonize the transcriptional activity of RUNX2 [101,102]. The suppressive role of HDAC3 on osteogenic differentiation, exerted by deacetylating H3 at bone-related genes, has also been demonstrated at the bone sialoprotein promoter [103]. Previous data indicated that the HAT PCAF, a p300/CBP-associated factor (PCAF/KAT2B), is implicated in the osteogenic commitment of MSCs [90], while CBP and p300 have also been found near the promoters of osteoblastic genes during osteoblast differentiation [90]. A stimulated expression of PCAF has been observed following Smad-driven osteogenic induction, accompanied by an overexpression of BMP pathway genes through H3K9 acetylation [104]. Moreover, additional HATs, such as MOZ-related factor (MORF/KAT6B) and monocytic leukemia zinc finger protein (MOZ/KAT6A), have also been found to play a role in osteogenesis by interacting with RUNX2 [105].
Histone methylation at the promoters of bone-remodeling-related genes also plays a role in osteogenesis. For instance, the H3K36 tri-methyltransferase Wolf-Hirschhorn syndrome candidate 1 (WHSC1 or NSD2) has been reported to interact with both RUNX2 and p300, thereby leading to H3K36 trimethylation and expression of downstream bone-related genes [106,107]. Enhancer of zeste homolog 2 (EZH2/KMT6) can suppress osteoblastogenesis via the H3K27me3 mark on the promoters of osteoblast-related genes [107,108]. However, an intriguing aspect of EZH2 is that this enzyme seems to play a dual role during bone formation, as it appears either to favor the proliferative expansion of osteoprogenitor cells or to suppress osteoblast-related genes [109]. Indeed, this enzyme can influence the osteogenic differentiation of MSCs by epigenetically regulating important osteogenic genes, such as RUNX2, MX1, and ZBTB16, as well as OP, OC, and FHL-1, via the H3K27me3 mark on their promoters [108,110]. An additional histone methyltransferase, SUV420H2, has been found to be implicated in osteoblast differentiation: its functional silencing by siRNA leads to a global decrease in H4K20 methylation alongside a reduced expression of osteogenic transcription factors and bone-related genes [111]. A recent in vitro study identified lysine-specific demethylase 1 (LSD1), which removes H3K4/K9 mono-/di-methylation marks, as an important inhibitor of the differentiation of human MSCs toward osteoblasts [112]. In particular, LSD1 negatively regulates the expression of BMP2 and WNT7B through loss of H3K4me2 methylation at their promoters, so its functional silencing in osteoblast progenitor cells leads to increased bone mass [112].
In summary, the aforementioned investigations underlined the important role of histone-modifying enzymes and histone modifications, such as acetylation and methylation, which occur on the promoters of bone-regulating genes, in the regulation of the bone remodeling process.
Non-Coding RNAs
ncRNAs are RNA molecules without protein-coding capability [3]. ncRNAs can epigenetically regulate gene expression by inhibiting the translation of their mRNA targets. The ncRNA classes involved in the regulation of bone metabolism include miRNAs, lncRNAs, and circRNAs. miRNAs are small RNA transcripts of ∼22 nucleotides that regulate gene expression by binding the 3′-UTR of their mRNA targets [113]. Complete complementarity of a miRNA with its mRNA target induces target degradation [56], while incomplete miRNA-mRNA binding leads to inhibition of mRNA expression [114]. Several studies have demonstrated miRNA involvement in the epigenetic regulation of bone remodeling. Specifically, miRNAs regulate bone homeostasis by inhibiting the positive or negative transcription factors implicated in the pivotal osteoblast and osteoclast differentiation pathways. miRNAs that target osteogenic factors, including Runx2, Osx, Opn, Ocn, and BMPs, are negative regulators of osteoblastic differentiation. Conversely, miRNAs that suppress the activity of osteogenic inhibitors, such as Smad7 and DKK-1, positively regulate osteoblast formation [28]. Similarly, miRNAs that inhibit the expression of RANK, RANKL, NFATc1, and TRAF6 suppress osteoclastogenesis [51]. Moreover, miRNAs also control the RANKL/OPG ratio, thus influencing osteoclastic differentiation [115].
lncRNAs are transcripts of ∼200 nucleotides or more that regulate gene expression by inducing chromatin modification or by inhibiting the expression of their mRNA targets [116]. lncRNAs can also indirectly regulate mRNA expression by acting as miRNA sponges: they suppress the inhibitory activity of miRNAs and thereby allow the expression of the miRNA's mRNA target [3]. lncRNA activity is implicated in bone homeostasis. Growing evidence has shown a role for lncRNAs in the bone formation process through stimulation of the expression of osteogenic transcription factors; most of them induce osteogenesis by cross-talking with miRNAs, thus blocking their inhibitory activity on osteogenic factors [3]. In contrast to osteoblast differentiation, the role of lncRNAs in osteoclast differentiation is still largely unknown. lncRNA AK077216 promotes osteoclast differentiation by inducing NFATc1 expression [117], whereas lncRNA-NONMMUT037835.2 negatively regulates osteoclastogenesis by inhibiting the expression of the main osteoclastic factors, including RANK, and the NF-κB/MAPK signaling pathways [118]. Other lncRNAs stimulate osteoclastogenesis and bone resorption by acting as miRNA sponges, such as lnc-MIRG, lnc-MALAT1, and lnc-Neat1, which bind miR-1897, miR-124, and miR-7, respectively [119].
circRNAs are circular transcripts with a length of hundreds or thousands of nucleotides derived from alternative splicing [120]. Due to their circular conformation, circRNAs are resistant to exonuclease digestion, including digestion by the RNase R enzyme [121]. circRNAs can process RNA, regulate the transcription of their parental genes, and act as miRNA sponges [122]. In recent years, accumulating epigenetic data have revealed the involvement of circRNAs in the regulation of bone metabolism. Several bioinformatic analyses showed differentially expressed circRNAs during BMSC osteogenic differentiation [121,123] and in hematopoietic stem cells (HSCs) [124]. circRNA hsa_circ_0074834 drives BMSCs toward a combined osteogenesis-angiogenesis process, regulating the expression of the osteogenic factors ZEB1 and VEGF by sponging miR-942-35p [125]. Hsa_circ_0006393 promotes osteogenic pathway activation by binding miR-145-5p, thus upregulating the forkhead box O1 (FOXO1) gene; indeed, this circRNA is downregulated in glucocorticoid-induced osteoporosis (GIOP) [126]. Moreover, circRNA_009934 expression levels increase during osteoclastogenesis and are correlated with bone resorption; predictive results indicate that this circRNA acts as a positive regulator of TRAF6 expression by blocking miR-5107 activity [127]. An in vivo experiment revealed that the osteoclast differentiation of bone marrow monocyte/macrophage (BMM) cells is positively regulated by a molecular axis consisting of circRNA_28313, miR-195a, and CSF1 [128]. The few studies reported herein show how circRNAs can epigenetically influence the bone remodeling process.
Genetics of Metabolic Bone Diseases
Metabolic bone diseases include a varied group of disorders characterized by alterations in skeletal homeostasis, which are often associated with abnormal circulating concentrations of calcium, phosphate, and/or vitamin D metabolites [129]. Metabolic bone diseases represent the third most common endocrine disorders after thyroid diseases and diabetes [130].
Excluding osteoporosis, which is considered the most common of these, metabolic bone diseases comprise a large group of different disorders with low prevalence in the general population. Among these disorders, the most common are rickets, osteomalacia, and Juvenile Paget's disease. Osteoporosis affects approximately 200 million individuals worldwide, and it has been estimated that about 1 in 3 females and 1 in 5 males exhibit low bone mass and/or osteoporotic fractures during their lifetime [131]. Variable rickets rates have been reported across geographical areas: the disease is merely sporadic in some regions, such as Europe [132], while in others, e.g., low-income countries, up to 9% of the childhood population is affected [133]. The prevalence of osteomalacia and Juvenile Paget's disease has been estimated at 1-5% and 1-2% of the general population, respectively [134,135].
Commonly, these diseases present a genetic basis and represent either a (i) monogenic disorder due to a germline or somatic single-gene mutation, or a (ii) digenic, oligogenic, and polygenic disorder that involves variants in more than one gene [136]. Inheritance patterns of monogenic disorders occur as one of the following traits: (i) autosomal dominant, (ii) autosomal recessive, (iii) X-linked dominant and recessive, (iv) Y-linked, (v) non-Mendelian mitochondrial defects. Moreover, monogenic metabolic bone diseases may also be caused by sporadic postzygotic mosaicism. Germline single-gene mutations causing Mendelian diseases typically have high penetrance, whereas the genetic variations causing oligogenic or polygenic disorders are each associated with smaller effects with additional contributions from environmental factors (multifactor diseases) [129]. Notably, the same metabolic bone disease usually presents multiple modes of genetic inheritance. Therefore, the same disease can present different genetic backgrounds, which convergently leads to the same phenotype. As a result of different mutations and dysfunctions in pathways that regulate bone turnover, several metabolic bone diseases can be described, some of them are outlined below.
Osteoporosis
Osteoporosis is the most common metabolic bone disease. This disease is mainly characterized by reduced bone mineral density with the consequent deterioration of bone tissue and its microarchitecture [137]. This leads to bone fragility with an increased risk of fractures, particularly to the wrist, spine, and hip, the latter being associated with a 12-24% and 25% mortality rate, within the first year of fracture, in females and males, respectively [138]. These statistics are predicted to increase as the result of global aging in the world population. Osteoporosis can be associated with complicated polygenic characteristics, with more than 200 loci linked to it, or as a monogenic condition [129,139]. Indeed, genetic factors play an important role in the development of osteoporosis.
Mutations in more than 15 genes with both structural and regulatory functions have been implicated in the pathogenesis of osteoporosis. These genes largely comprise regulators of bone metabolism, including local regulators of bone metabolism and bone matrix components, as well as cell receptors and calciotrophic hormones [140]. Among them, Vitamin (1, 25-dihydroxyvitamin) D receptor (VDR), estrogen, and androgen receptors and the Collagen type I α (COLIA1) gene have been the most extensively investigated; both polymorphisms and mutations within the regulatory and coding sequences of these genes have been identified as related to osteoporosis [140].
The VDR gene is located on chromosome 12q12-q14 and encodes the vitamin D receptor [141]. VDR plays a key role in bone metabolism and in the maintenance of serum calcium homeostasis by binding its ligand vitamin D and regulating the expression of response genes. Morrison et al., in 1994, first identified polymorphisms in the 3′ region of the VDR gene, which have been linked to low osteocalcin levels and an increased risk of osteoporotic fracture [142]. Overall, allelic variation of this gene has been related to up to 75% of the genetic effect on bone mineral density. However, the relationship between the VDR 3′ genotype and bone mineral density may be modulated by high vitamin D and calcium intake. A VDR 5′ polymorphism has also been identified and related to low bone mineral density in elderly individuals and to intestinal calcium absorption in children [143,144].
Both estrogen and androgen receptors play a critical role in regulating bone growth and in maintaining skeletal mass [145,146]. Two isoforms of the human estrogen receptor, i.e., estrogen receptor alpha (ERα) and estrogen receptor beta (ERβ), are encoded by two different genes, ESR1 and ESR2, located at chromosomes 6p25.1 and 14q22-q24, respectively [147]. In particular, ESR1 is considered a strong candidate gene for osteoporosis [147]. Previous studies indicated that both polymorphisms and mutations at ESR1 play an important role in the onset and development of this metabolic bone disease. A TA dinucleotide repeat in the ESR1 promoter and two single nucleotide polymorphisms in the first intron give rise to reduced bone mineral density in postmenopausal and premenopausal women, and are potentially related to the acquisition of peak bone mass [148]. However, the mechanism linking these polymorphisms to osteoporosis is still unclear. Regarding the androgen receptor, a polymorphic (AGC)n trinucleotide repeat identified in the first exon of the coding sequence has been associated with reduced transcriptional activity of the receptor, thereby leading to reduced bone mass in women with high levels of sex hormone-binding globulin (SHBG) and to an increased risk of osteoporotic fractures [147].
Mutations in NR3C1, the gene encoding the glucocorticoid receptor (GR), alter the expression of ECM-, osteoblast-, and osteoclast-related genes [149]. The NR3C1 BclI C/G polymorphism has been proposed to be one of the genetic causes of osteoporosis [150]. Furthermore, GR gene polymorphisms were previously shown to be closely related to bone mineral density (BMD). The haploid frequency of SNP rs1866388 was higher in patients with higher BMD. In addition, the SNPs rs1866388 and rs2918419 differed between individuals with extremely low and high BMD and seemed to be involved in BMD regulation in a gender-dependent manner [151]. The COLIA1 gene is located on chromosome 17q21.31-q22 and encodes the alpha I chain of type 1 collagen. This gene is considered a particularly strong candidate for susceptibility to osteoporosis, since type I collagen is the major structural bone protein [152]. Indeed, mutations in this gene have been associated with the osteoporotic phenotype in osteogenesis imperfecta. A polymorphism located in intron 1 (a guanine-to-thymidine substitution) that affects a binding site for the transcription factor SP1 is of particular clinical interest. This polymorphism is considered a possible genetic risk factor for clinically important osteoporotic fractures [141]. In particular, the SP1 polymorphism has been related to a reduction in bone mineral density [153] and to disc degradation in older women and men [152]. Similar polymorphic changes have been identified within the COL11A1 gene, located on chromosome 1p21.1. In particular, the T allele of the COL11A1 C4603T polymorphism may increase Intervertebral Disk Disease (IVDD) susceptibility [154]. The COL11A1 and COL11A2 genes each encode one of the two type XI-specific alpha chains of type XI collagen [154]. Mio et al. studied the association of the type XI collagen genes, such as COL11A1, COL11A2, and COL2A1, with IVDD, and observed that the COL11A1 SNVs rs1463035 and rs1337185, as well as the COL11A2 SNV rs2076311, were associated with disc bulges [155,156]. Yang et al. observed that, for the COL11A2 gene, carriers of the A allele for rs2071025 and carriers of the C allele for rs986522 presented an increased risk of IVDD [157]. The COL1A2 gene lies on chromosome 7q22 and contains 52 exons [158]. Polymorphisms at the Eco RI, Pvu II, and Del38 sites in the COL1A2 gene have previously been correlated with osteoporosis [159]. Furthermore, variants such as p.(Arg708Gln), p.(Gly247Cys), and p.(Gly193Ser), previously found in osteogenesis imperfecta and idiopathic osteoporosis, were detected in atypical femoral fractures and in patients presenting fractures associated with low spinal BMD, respectively [160]. An additional gene involved in collagen production, COL2A1, also plays a primary role in skeletal development, bone resorption, and homeostasis. COL2A1 has a significant impact on cortical and trabecular bone mass and, therefore, may influence skeletal architecture. In fact, the COL2A1 glycine mutations p.Gly144Val and p.Gly267Asp cause the prototypical COL2A1 disease Stickler syndrome [161]. On the other hand, the variation c.1946G > C (p.Gly649Ala) in the COL9A1 gene, which is involved in synthesizing type IX collagen and located on chromosome 6q13, has been correlated with ossification of the posterior longitudinal ligament [162]. As the COL9A1 promoter region can be transactivated by SOX9, it is notable that rs73354570 of SOX9 has been significantly associated with postmenopausal osteoporosis [163].
The PLS3 gene, encoding plastin 3, has been recognized to be involved in X-linked osteoporosis [164]. Furthermore, early-onset osteoporosis has been attributed to heterozygous mutations of the Wnt family member 1 (WNT1) gene, while the autosomal-recessive loss-of-function mutation c.1096G > A (p.V366M) of the LRP5 gene, encoding a receptor within the Wnt pathway, was identified as responsible for osteoporosis-pseudoglioma syndrome [129,165]. Mutations and polymorphisms related to osteoporosis have also been identified in the low-density lipoprotein receptor-related protein 5 (LRP-5) gene [166], which maps to chromosome 11q12-q13, and in the osteoprotegerin gene [167], which encodes a soluble decoy receptor for RANKL. Moreover, matrix Gla protein (MGP) plays an important role in bone and vascular mineralization, as confirmed in an MGP-deficient murine model, and its -138T > C, -7G > A, and Thr83Ala SNPs have been associated with bone loss [168]. Similarly, the rs2288377, rs35767, and rs2229765 polymorphisms within insulin-like growth factor (IGF) genes, which are critical regulators of bone cell function [169-171], as well as the T116G and G287T polymorphisms in exon 2 and A224T in exon 3 of the BMP-2 gene, have been strongly associated with osteoporosis [172].
Additional genes whose variants have been related to an increased risk of osteoporosis and fractures include the cytochrome P450 (CYP1A1) gene [173]: two CYP1A1 variants, which occur in 19% of healthy individuals, have been related to osteoporosis [174]. Transforming growth factor beta (TGF-β) also presents polymorphisms, which can be divided into three classes according to their position: promoter, coding, and intronic polymorphisms. In particular, the promoter polymorphisms of TGFB1, i.e., C-1348T and G-1369A, may hamper gene expression, while the coding polymorphisms T29C and C788T could affect the protein structure [175]. Finally, a rare polymorphism in intron 4 has been associated with reduced bone mineral density and osteoporotic fracture [141]. Osteocalcin, with a C>T transition in its gene promoter, has also been related to low bone mineral density [176]. Furthermore, a polymorphism at the restriction enzyme Hind III site in the promoter region was identified and, together with the SNV rs1543294, is a likely important candidate site involved in bone mineral density and osteoporosis [177]. The polymorphic gene Apolipoprotein E (ApoE), with three common alleles (ε2, ε3, ε4) coding for three isoforms (E2, E3, E4), might play a role in osteoporosis. Indeed, the ApoE4 variant may be important in determining spine bone mass and hip fractures in postmenopausal women [178]. Additional genes whose polymorphisms and mutations have been related to osteoporosis comprise: (i) Sclerostin (SOST), whose polymorphisms have been associated with parameters of osteoporosis such as bone mineral density and fracture risk; in particular, the variant 10565insGGA, located upstream of the SOST transcriptional site, has been found to prompt a decrease in bone mineral density in older women [179]; (ii) the calcitonin receptor, in which a coding polymorphism causing a proline-leucine substitution at codon 436 has been reported in patients with reduced bone mass [141]; (iii) the interleukin-1 receptor antagonist (IL-1RN), whose 86-base-pair VNTR polymorphism in the second intron of the coding sequence has been related to early postmenopausal bone loss at the spine [180]; (iv) the osteonectin gene, whose polymorphisms, such as the haplotype (1046C-1599G-1970T), have been suggested to play a role in inherited osteoporosis susceptibility and have been correlated with lower bone density [181].
Rickets and Osteomalacia
Rickets and osteomalacia are both caused by vitamin D, calcium, or phosphorus deficiencies. Rickets usually affects children and osteomalacia affects adults and the elderly, while both can be genetic or acquired. Rickets is a metabolic bone disorder that causes weak, soft bones in children as a result of inadequate mineralization due to a prolonged disturbance of vitamin D, calcium, and/or phosphorus metabolism [182]. The most frequent cause of rickets is nutritional vitamin D deficiency, whereas about 13% of total rickets is due to genetic problems associated with the absorption of calcium and phosphorus [183]. The latter rickets category can be divided into two groups: (i) disorders of vitamin D biosynthesis and action, such as vitamin D-dependent rickets, and (ii) hereditary hypophosphatemic rickets [182]. Vitamin D-dependent rickets types 1 and 2 are autosomal recessive genetic diseases due to mutations in the renal 1α-hydroxylase (CYP27B1) and vitamin D receptor (VDR) genes, respectively [184,185]. Digenic inheritance has been reported in a family with hereditary hypophosphataemic rickets with hypercalciuria (HHRH), which harbors heterozygous mutations of the SLC34A1 and SLC34A3 genes, encoding the renal sodium-phosphate co-transporters type 2a and 2c, respectively [129]. Autosomal dominant hypophosphataemic rickets is associated with mutations in a fibroblast growth factor (FGF) family member, FGF23 [186]. Furthermore, X-linked hypophosphataemic (XLH) rickets results from mutations in the phosphate-regulating endopeptidase homolog, X-linked (PHEX) gene [185]. Mutations in the PHEX gene cause increased levels of the hormone fibroblast growth factor 23 (FGF23), leading to renal phosphate wasting and poor skeletal and dental mineralization in this illness [187]. In general, XLH is usually inherited as an X-linked dominant trait, although other familial patterns may occur [188].
Osteomalacia is among the most common osteometabolic diseases and has been described as one of the most disabling bone diseases of the elderly. It is more common in elderly females than elderly males [189]. Osteomalacia is caused by disorders that lead to decreased mineralization of bone [190]. It is associated with poor bone quality causing atraumatic fractures, pseudofractures, delayed fracture healing, and bone pain [191]. This disease is defined by a marked softening of the bones and is commonly caused by a lack of vitamin D [190]. Perhaps the rarest cause of osteomalacia is that caused by a neoplasm, so-called tumor-induced osteomalacia (TIO) [192]. X-linked hypophosphatemia (XLH) is a rare, lifelong disease caused by loss-of-function mutations in the PHEX gene, resulting in an excess of FGF23, which impairs renal phosphate reabsorption and suppresses the production of 1,25-dihydroxyvitamin D. This process results in chronic hypophosphatemia and persistent osteomalacia [191].
Juvenile Paget's Disease
Juvenile Paget's disease is a rare disorder affecting bone growth, which appears during infancy or early childhood [193]. This disorder causes abnormally large, misshapen bones that fracture easily, and its manifestations become more severe during the adolescent growth spurt, when bones grow rapidly. Genetically, Juvenile Paget's disease is an autosomal recessive monogenic disorder caused by mutations in a member of the TNF-receptor superfamily named TNFRSF11B [193]. TNFRSF11B is a gene involved in bone remodeling. Mutations within its coding sequence cause an abnormally fast bone remodeling rate starting in childhood, thereby leading to new bone tissue that is larger, weaker, and less organized than physiological bone. This abnormally fast bone remodeling underlies serious problems, as bones become highly prone to fracture.
Osteogenesis Imperfecta
Osteogenesis imperfecta is a condition characterized by bone fragility and extraskeletal symptoms [194]. Reduced bone strength leads to fractures in atypical locations or low-trauma fractures, while extra-skeletal presentations may include dental anomalies, joint hypermobility, hearing loss, blue-gray sclera, etc. The condition is present at birth and develops in children with a family history of the disorder [194]. Different inheritance modes have been identified, comprising both autosomal dominant and recessive monogenic inheritance. Specifically, mutations in the two genes coding for collagen type I, COL1A1 and COL1A2, are the most common cause of osteogenesis imperfecta. In the past 10 years, defects in at least 17 other genes have been identified as responsible for osteogenesis imperfecta phenotype, with either dominant or recessive transmission [195].
Osteopetrosis
Osteopetrosis, also known as marble bone disease or Albers-Schönberg disease, is a rare genetic, heritable condition that causes increased bone density [45]. Osteopetrosis may be caused by mutations in at least 10 genes. Genetically and clinically, osteopetrosis is very heterogeneous; therefore, accurate molecular classification is relevant for prognosis and treatment [196]. The disease progresses as the bones grow: the marrow cavities are filled with compact bone, which reduces the amount of marrow and in turn reduces the bone's capacity to produce red blood cells. This can lead to severe anemia. Three forms of osteopetrosis can be distinguished based on the pattern of inheritance: (i) autosomal recessive, (ii) autosomal dominant, and (iii) X-linked. The first, which accounts for the most severe forms, is caused by biallelic loss-of-function mutations in the TCIRG1, CLCN7, OSTM1, SNX10, and PLEKHM1 genes, encoding proteins involved in the acidification of the resorption lacunae and/or in vesicular transport, leading to osteoclast-rich osteopetrosis. Furthermore, mutations in RANKL and its receptor RANK are associated with osteoclast-poor autosomal recessive osteopetrosis, in which osteoclastogenesis is blocked [197,198]. The second osteopetrosis form can be type I or II; the two types differ in clinical presentation and are caused by mutations in the LRP5 and CLCN7 genes, respectively. Type I derives from enhanced osteoblast activity due to reduced LRP5 affinity for the extracellular antagonists SOST and Dickkopf-1 (DKK-1) and the consequent increase in canonical Wnt signaling [199], while the most common cause of type II is the presence of inactivating mutations in the chloride channel 7 (CLCN7) gene, which results in ineffective, osteoclast-mediated bone resorption [200]. The X-linked type of osteopetrosis results from mutations in the IKBKG gene, which encodes NEMO, the regulatory subunit of the IKK (IκB kinase) complex, essential for NF-κB signaling [201].
Fibrous Dysplasia
Fibrous dysplasia (FD) is a rare metabolic bone disease resulting in weakened, fracture- and deformity-prone bones, in which bone is replaced by structurally unsound fibro-osseous tissue [202]. FD is characterized by a highly disorganized mixture of immature fibrous tissue and fragments of immature trabecular bone [203]. It is an uncommon mosaic disorder caused by sporadic post-zygotic activating mutations in GNAS, resulting in dysregulated GαS-protein signaling in affected tissues [202]. Currently, there is no cure for FD, and treatment options may include surgery to relieve pain and repair bones. FD presents along a broad clinical spectrum due to varying degrees of mosaicism, resulting in a disease that can range from asymptomatic to severely disabling [202]. Fibrous dysplasia may arise in isolation or as part of a multisystem developmental disorder known as McCune-Albright syndrome (MAS) [202,203]. Recent research suggests that the Wnt/β-catenin pathway may play a role in fibrous dysplasia, as activating GNAS mutations have been shown to activate Wnt/β-catenin signaling in affected patients [204].
Pyle Disease
Pyle disease is a rare autosomal recessive monogenic bone disorder [205]. Its main feature is the irregular development of the long bones of the arms and legs: the trabecular bone is expanded, while the cortical bone becomes thinner than normal. As a result, bones become fragile and prone to fracture. Pyle disease is inherited in an autosomal recessive pattern, in which both copies of the secreted frizzled-related protein 4 gene (SFRP4) are mutated [205]. SFRP4 is involved in Wnt signaling, and regulation of Wnt signaling by the SFRP4 protein is critical for normal bone development and remodeling. The dysregulation of Wnt signaling due to SFRP4 mutation leads to the bone abnormalities characteristic of Pyle disease [205].
The monogenic bone disorder Autosomal dominant hypocalcemia (ADH) type 1 is caused by heterozygous activating mutations in the calcium-sensing receptor (CASR), which increase the CASR sensitivity to extracellular ionized calcium [209]. Extracellular calcium is essential for life and its concentration in the blood is maintained within a narrow range. This is achieved by a feedback loop that receives input from CASR, expressed on the surface of parathyroid cells. In response to low ionized calcium, the parathyroids increase secretion of parathyroid hormone (PTH), which increases circulating calcium levels [209]. CASR is also highly expressed in the kidneys, where it regulates calcium reabsorption from the primary filtrate.
Additional genetic pathologies affecting bone tissue comprise sclerosteosis types 1 and 2, which are inherited in an autosomal recessive pattern. Sclerosteosis type 1 is caused by homozygous loss-of-function mutations within the SOST gene, which encodes sclerostin, the inhibitor of Wnt-mediated bone formation. Sclerosteosis type 2 is caused by heterozygous or homozygous loss-of-function mutations in the lipoprotein receptor-related protein 4 (LRP4) gene, which is involved in bone homeostasis [211].
Conclusions
In conclusion, different signaling pathways, including TGF-β/BMP and Wnt/βcatenin pathways, as well as epigenetic processes, including DNA methylation, histone post-translational modifications, miRNAs, lnc-RNAs, and circRNAs, play a pivotal role in bone formation and turnover. The identification of specific epigenetic markers could be extremely useful for assessing individuals at risk of future non-communicable disease and allowing novel pathways that influence the phenotype to be discovered. Detecting such indicators early in life, even in peripheral tissues, could provide useful predictive markers for the later phenotype in cell types that are more relevant to different bone metabolic diseases, thus allowing better treatment options to be considered. Moreover, epigenetics may be a potential target for the treatment of bone diseases.
Mutations and dysfunctions in pathways that regulate bone turnover might influence the bone remodeling process, ultimately leading to a large variety of metabolic bone disorders. A large fraction of metabolic bone diseases presents a genetic basis and represents either a (i) monogenic disorder due to a germline or somatic single gene mutation, or a (ii) digenic, oligogenic, and polygenic disorder that involves variants in more than one gene. Identifying these heritable diseases represents a significant clinical opportunity, as it can enable early recognition and therefore widen therapeutic options. High throughput methods might strongly improve the identification of genetic abnormalities linked to metabolic bone diseases. Therefore, it is of paramount importance to develop new strategies aimed at pinpointing the genetic background of these diseases, in order to improve patient outcomes and to produce novel therapies. Further studies in this direction are urgently needed.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-01-31T16:03:27.125Z | 2022-01-28T00:00:00.000 | {
"year": 2022,
"sha1": "eba8f239e766b5289526ae96336548df534f650a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/3/1500/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45dc911b160ea1d02a13665ae97ccfe6abcc51e4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250929088 | pes2o/s2orc | v3-fos-license | Research on the non-point source pollution of microplastics
Microplastics are characterized by ubiquity, persistence, and toxicity to aquatic organisms, and microplastic pollution has attracted worldwide attention. At present, studies on microplastic pollution have mainly focused on the composition, abundance, and species of microplastics in water bodies and sediments, and few studies have addressed the sources and influence characteristics of microplastics in surface water bodies. Starting from the sources of microplastic pollution in surface water, this paper analyzes the pollution status of agricultural microplastics and highlights the importance and urgency of studying microplastic pollution from agricultural non-point sources. The aim is to provide an effective scientific basis and technical support for the control of microplastic non-point source pollution in river basins.
Introduction
Due to the ubiquity, persistence, and toxicity of microplastics to aquatic organisms, the pollution problem of microplastics has attracted worldwide attention (Abalansa et al., 2020; Huang et al., 2021). Recently, scientists around the world raised 125 frontier scientific questions for the new age, including finding environmentally friendly alternatives to plastic and managing plastic waste (Anderson et al., 2016; Levine, 2021). The natural degradation of microplastics in the aquatic environment takes centuries, although microplastics are gradually broken down into smaller substances through physical, chemical, and biological reactions. Due to their small particle size (less than 5 mm), high specific surface area, and strong hydrophobicity, microplastics act as carriers for the long-distance migration of pollutants across different media and show good adsorption of pollutants such as antibiotics (Liu et al., 2021; Li et al., 2018a), heavy metals (Zhou et al., 2019; Sarkar et al., 2021), and bacteria (Fueser et al., 2020; Leiser et al., 2021). The combination of microplastics with antibiotics enhances antibiotic resistance genes in the aquatic environment, while microplastics ingested by biota can negatively affect the physiology, reproduction, and immunity of these organisms (Mallik et al., 2021; Bertoli et al., 2022), affecting bioaccumulation in the food chain and potentially threatening human health (Prata et al., 2021; Atugoda et al., 2021). Since the concept of microplastics was proposed by Richard et al. in 2004 (Richard et al., 2004), a large number of related studies have appeared in China and abroad based on the harmfulness of microplastics. From the perspective of the polluted environmental compartment, investigations of microplastic pollution have mainly focused on the marine environment, followed by rivers and, finally, the soil environment. Divided by categories of pollution sources, current research has mainly focused on point source pollution such as urban sewage treatment plants, followed by a small number of reports on urban rainwater runoff pollution from major traffic roads. Rainfall transfers microplastics from the soil environment to surface water through agricultural runoff, linking the microplastic pollution of different environmental compartments. However, there is still a lack of research on agricultural non-point source microplastic pollution.
In summary, this paper starts from the sources of microplastic pollution, analyzes the present situation of agricultural microplastic pollution, and highlights the importance and urgency of studying microplastic pollution from agricultural non-point sources. Finally, it provides an effective scientific basis and technical support for the control of microplastic non-point source pollution in river basins.
Runoff is the main source of microplastic pollution in surface water
Since the concept of microplastics was proposed by Richard et al. in 2004 (Richard et al., 2004), research on microplastic pollution of surface water bodies has been concentrated mainly on the marine environment, followed by rivers. At present, research on microplastic pollution in surface water has mainly focused on the composition, abundance, and species of microplastics in water and sediment (Olubukola et al., 2018; McCormick et al., 2014; Grbic et al., 2020). Few studies have traced the origin and impact characteristics of microplastics in surface water. Sources of microplastics in surface water bodies include sewage discharge, overflow of combined sewage pipes, and rainwater runoff from rural areas and roads (Dris et al., 2017; Horton et al., 2017; Cheng et al., 2021). The few existing studies mainly focus on the following two aspects: 1) The abundance of microplastics in the influent and effluent of sewage plants: Mason et al. analyzed the effluent of 17 sewage treatment plants in the United States and found that the average concentration of microplastics was 0.05 pieces/L, with more than four million pieces/d of microplastics emitted per wastewater treatment plant (Mason et al., 2016). Murphy et al. found that the average concentrations of microplastics in the inlet and outlet water of a secondary sewage treatment plant in Scotland were 15.70 and 0.25 pieces/L, respectively. Although the removal rate of microplastics in the sewage treatment plant was as high as 98.41%, the emission of microplastics could still reach 6.5 million pieces/d (Murphy et al., 2016). At the same time, the removed microplastics were discharged back into the environment in the form of residual sludge. Bai et al. (2018) studied a sewage plant in Shanghai and found that the microplastic particle size in the inlet and outlet water was mainly 0.36-1.00 mm and the removal rate of microplastics was 55.6%. The microplastic discharge in the effluent of the sewage plant was 145.6 billion pieces/d, and the microplastic content in the residual sludge was 540 million pieces/d, which was 40,000 times the average of American sewage plants.
All the above studies indicate that wastewater treatment plants are an important source of microplastics in surface water, and that residual sludge is an important route by which treatment plants transfer microplastic pollution back into the environment.
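To make the arithmetic behind such figures explicit, the short Python sketch below computes a removal rate and a daily emission from influent and effluent concentrations. The concentrations follow the Murphy et al. (2016) values quoted above, while the daily treated volume is a back-calculated assumption used purely for illustration.

# Illustrative calculation of microplastic removal rate and daily emission
# for a wastewater treatment plant. Concentrations follow Murphy et al. (2016);
# the daily treated volume is a hypothetical assumption for demonstration only.
influent_conc = 15.70        # microplastic pieces per litre in the influent
effluent_conc = 0.25         # microplastic pieces per litre in the effluent
daily_flow_litres = 2.6e7    # assumed treated volume per day, litres (hypothetical)

removal_rate = (influent_conc - effluent_conc) / influent_conc * 100
daily_emission = effluent_conc * daily_flow_litres

print(f"Removal rate: {removal_rate:.2f} %")                    # about 98.41 %
print(f"Daily emission: {daily_emission:.2e} pieces per day")   # about 6.5e6 pieces/d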
2) Characteristics of microplastic pollution in rainwater runoff from urban roads: Horton et al. (2017) found that British rainwater carried a large amount of plastics, such as synthetic fibers, which migrated and accumulated in large quantities in surface water bodies. Chen et al. reported that the total annual load of microplastics from surface runoff, domestic sewage, and sewer sediments was almost six times that of the wastewater discharged from sewage treatment plants. Bailey et al. (2021) indicated that both primary and secondary microplastics can enter the aquatic environment through non-point sources, and that urban rainwater runoff carries microplastics related to dust, construction activities, artificial turf, and landfill leachate. Tire particles, vehicle debris, and debris from road-marking paint also contribute to microplastic pollution in urban surface runoff. After decades of research and management, point source pollution has been well controlled. Therefore, non-point source pollution generated by surface runoff is one of the main ways for microplastics to enter water bodies, yet the extent of its impact is still unclear.
Status of agricultural microplastic pollution
Most of the old cities in China have combined sewer networks; urban runoff enters the sewage network and overflows directly into rivers when it exceeds the treatment capacity of the sewage plant, which has a great impact on the aquatic environment. The use of plastic mulch, agricultural irrigation, and fertilization during agricultural production causes microplastic particles to accumulate in the soil and enter rivers together with traditional agricultural non-point source pollutants, causing water ecological pollution. Within the scope of the data reviewed, there have been no reports on microplastics in combined sewer overflows or in agricultural rainwater runoff. Because of the large contribution of soil to agricultural rainwater runoff pollution, there are many studies on microplastics in soil, mainly focusing on the following aspects: 1) Pollution from plastic mulch: From 2000 to 2016, the amount of mulch used in China increased from 724,000 tons to 1.468 million tons, reaching a peak and accounting for about 70% of the world's total, covering an area of nearly 1.77 × 10⁷ hm² and accounting for about 90% of the world's total mulched area (Bigalke et al., 2021; Bian et al., 2015). With the promulgation of the "Soil Pollution Prevention and Control Plan" and the plastic restriction order, China's agricultural film production was projected to drop to 774,000 tons in 2020. While the use of plastic mulch has declined, plastic concentrations can build up in the soil over time due to its refractory degradation in the environment. When the mulching period in Xinjiang increased from 5 to 30 years, the content of microplastics in soil increased from 91.2 mg/kg to 308.5 mg/kg (Jin et al., 2020). 2) Agricultural irrigation pollution: Since the concentration of microplastics in surface water is 1.00 × 10⁻⁵ to 1.00 × 10⁵ pieces/L (Olubukola et al., 2018; Mak et al., 2020), the use of surface water and domestic sewage for irrigation causes microplastics to re-accumulate in the soil. Bigalke et al. found that microplastic emissions from agricultural drainage in Switzerland were 9.3 × 10¹² particles per year (Bigalke et al., 2021). 3) Fertilization pollution from excess sludge products: According to statistics, the amount of microplastics applied to European farmland through sludge fertilization ranges from 63,000 to 430,000 tons each year, and that applied to North American farmland ranges from 44,000 to 300,000 tons, exceeding the pollution concentration of surface seawater (Hao et al., 2021; Lares et al., 2018). The amount of microplastics entering the environment from sludge in China is 1.56 × 10¹⁴ particles per year (65% of which are fibers) (Li et al., 2018b), which is much higher than Iran's emission of 1.0 × 10¹¹ particles per year (Petroody et al., 2021). After accumulation through the above-mentioned pathways, the soil contains 4-23 times more microplastics than the ocean (Chen H P et al., 2020). According to existing reports, the microplastics in soil are mainly PP, PE, and PVC. In the Sydney area of Australia, the abundance of soil microplastics ranged from 300 to 67,500 mg kg⁻¹ (Fuller and Gautham, 2016). In the Melipira region of Chile, the concentration of microplastics smaller than 1 mm ranged from 18,000 to 41,000 particles kg⁻¹ (Corradini et al., 2019).
The concentration of microplastics in the Yunnan area of China is between 7,000 and 53,090 particles kg⁻¹ (Zhang and Liu, 2018); the abundance of microplastics in plastic-film-contaminated vegetable fields in Wuhan is 320-12,560 particles kg⁻¹; and the concentration of microplastics in agricultural plastic-film-contaminated vegetable soil reaches 6 × 10⁵ particles/kg (Ding et al., 2020). Due to the different industrial and agricultural planting patterns in each country and region, the concentration of microplastics varies greatly from region to region. The migration of microplastics in soil includes vertical migration to deep soil and migration through the food chain. Microplastics with smaller particle sizes show the greatest downward movement (Rillig et al., 2017), whereas low-density microplastics move downward with difficulty and migrate more easily with runoff (O'Connor et al., 2019). However, no study has yet traced the flow of soil microplastics to surface water through runoff. In order to effectively control microplastic pollution in surface water, it is urgent to study the microplastic pollution process and accumulation mechanism of rural non-point source pollution.
Status and control methods of microplastic non-point source pollution in river basins
Xie (2020) studied the effect of applying sewage treatment plant effluent and sludge-based fertilizers to soil on the accumulation of microplastics in the Lijiang River Basin environment. However, due to the lack of data on microplastic pollution from non-point sources in the Lijiang River Basin, the source analysis for the watershed is incomplete. Mao et al. (2020) traced the possible sources of microplastics in the Yulin River, a typical tributary of the Three Gorges Reservoir area, from the perspective of point and non-point source pollution, and found that urban runoff scouring is the main source of microplastic pollution in the Yulin River. However, the authors only sampled and analyzed the soil on the bank slope to speculate on the impact of agricultural non-point source pollution, and did not further analyze the microplastic pollution concentration and impact mechanism of rainwater and non-point source pollution. Zha (2021) investigated the distribution characteristics of microplastics in different non-point source pollution types in the Taihu Lake Basin and found that the average concentrations of microplastics in non-point source pollution from aquaculture, planting, and rural domestic sewage were 58.33, 14.50, and 33.00 items/(5 L), respectively. The agricultural industry there is well developed, and the extensive use of agricultural film and chemical fertilizer made the microplastic content in the local Tiaoxi the highest, reaching 25 items/(5 L). However, the above studies did not further analyze the relationship between the various non-point source pollution types and microplastics in the surface water of Taihu Lake.
Zha (2021) also investigated the effect of non-point source pollution control technologies on the removal of microplastics in the Taihu Lake Basin; the results are shown in Table 1. It was found that grass-planted ditches, three-stage pre-ponds, multi-stage constructed wetlands, submerged plants, and paddy fields could play an obvious role in intercepting microplastics from non-point source pollution. The rural domestic sewage treatment process showed the highest removal of microplastics from domestic sewage, at 79.6%. Constructed wetlands and ecological filter beds played an important role in the removal of microplastics. However, there is still a lack of analysis of microplastic pollution characteristics and sources in watershed non-point source pollution processes, which is consistent with the findings of Daniel and Tony (2021).
Conclusion
1) Sources of microplastics in surface water include sewage discharge, combined sewer overflows, and rural and road stormwater runoff. Sewage treatment plants are an important source of microplastics in surface water, and residual sludge is a route by which sewage treatment plants transfer microplastic pollution. After decades of research and management, point source pollution has been well controlled. Therefore, non-point source pollution generated by surface runoff is one of the main ways for microplastics to enter water bodies, but the degree of its influence is not yet clear. 2) Plastic mulching, agricultural irrigation, and fertilization used in agricultural production can cause microplastic particles to accumulate in soil and enter rivers along with traditional agricultural non-point source pollutants, resulting in water ecological pollution. There are no reports on microplastics in combined sewer overflows or agricultural rainwater runoff. At present, the accumulation of microplastics in soil is very serious. In order to effectively control microplastic pollution in surface water, it is urgent to study the microplastic pollution process and accumulation mechanism in rural non-point source pollution. 3) The pollution characteristics and source apportionment of microplastics in watershed non-point source pollution are not clear and need further study.
Author contributions
LH and WG wrote the manuscript. BZ and ZO managed resources and analyzing information. JF and ST performed the review and editing. All the authors read and approved the manuscript.
Funding
This research was funded by the Talents of Guizhou Science and Technology Cooperation Platform (2018)5784-04, Zunyi Science and technology projects (2021) 197 and Talent base for environmental protection and mountain agricultural in Chishui River Basin.
Conflict of interest
Author JF was employed by the company CNOOC Petrochemical Engineering Co., Ltd.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2022-07-22T13:22:58.019Z | 2022-07-22T00:00:00.000 | {
"year": 2022,
"sha1": "b0ac53986a1cafef9aa5f0538bd022a03723ebe2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "b0ac53986a1cafef9aa5f0538bd022a03723ebe2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
234949421 | pes2o/s2orc | v3-fos-license | Designing of Natural Scaffold Coated with Herbal Extracts for Wound Healing
Skin is the most important organ of the human frame. It acts as a barrier and protection for the entire human body. When skin is injured, the repair process entails removal of the damaged tissue and laying down of a new extracellular matrix (ECM) over which epidermal continuity can be re-established. Burns can be caused by many factors and can create tremendous problems for patients and their families. Burns are classified according to their depth and severity. Nowadays, industries are moving towards collagen of aquatic origin, particularly from fish, to create scaffolds for wound treatment. Instead of deriving collagen from fish, in this work the fish skin itself acts as the scaffold. Tilapia fish is rich in collagen, which is used to create a scaffold to treat wounds. This is the first time that the source of the collagen is applied directly to the skin for wound healing. Nowadays, many modern medicines are derived from herbs of medicinal value, and herbs have their own medicinal properties. In this study, Tridax procumbens was chosen because of its high wound healing and re-epithelialization activity. The herb was extracted and blended with tilapia skin to minimize the healing time. FTIR studies confirmed that the components responsible for wound healing are present in both the tilapia skin and the extract of Tridax procumbens. Structural studies were done by SEM analysis, and the breaking point of the skin was found by tensile strength analysis. The biodegradation study revealed that the scaffold's mass reduced to 50% in 20 days; after the 35th day, the rate of mass reduction was very low compared with previous days. The MTT study revealed that the coated scaffold has low toxicity compared with the uncoated scaffold.
Introduction
Many advancements are arising in the treatment of wounds, especially burn wounds, which can cause psychological and economic problems in human life. However, treatments are advancing day by day. The general classification of wounds is shown below. Nowadays, industries are moving towards collagen of marine origin, which is cost-effective for wound applications. Collagen type I is responsible for fibril formation, which leads to faster healing. Wound healing comprises three major phases. At the injured surface, platelets attach to the exposed collagen; through the adhesive glycoproteins (fibrinogen, fibronectin, thrombospondin and von Willebrand factor), adherence occurs between the exposed collagen surface and other platelets. The factors which attract other platelets are then released.
Phases of wound healing
By activating the factor XII (Hageman factor) the intrinsic pathway begins. Whenever the blood is exposed to extravascular surface this factor is activated. Similarly, by activating the tissue factor the extrinsic pathway begins. Particularly the tissue factor found in the extravascular cells in the presence of factors VII and VIIa. Then both are proceeding with thrombin. The thrombin is then activated. It converts fibrinogen to fibrin. The fibrin is the important factor in the wound healing mechanism. The wound healing is impeded if the fibrin matrix is removed. As the result aggregation of platelet and coagulation cascade is clot formation. It occurs only in the site of injury.
In addition to activation of fibrin, thrombin facilitates migration of inflammatory cells to the sites of injury by increasing vascular permeability. By this mechanism, factors, and cells necessary to healing flow from the intravascular space and into the extravascular space ( Figure 2).
Proliferation Phase
The proliferation phase follows the inflammatory phase, which lasts about 24 to 72 hours. At the surface of the wound, epidermal cells burst into mitotic activity.
Across the surface of the wound, the cells begin to migrate. In the deeper parts of the wound the fibroblasts starts to proliferate.
Small amounts of collagen are synthesized by these fibroblasts; acting as a scaffold, this collagen supports further fibroblast proliferation.
Granulation tissue, consisting of capillary loops supported by this developing collagen matrix, also appears in the deeper parts of the wound; this constitutes the fibroblastic phase. Depending on the severity of the injury, after four to five days the fibroblasts begin to produce large amounts of collagen and proteoglycans.
Collagen fibres are laid down randomly and are cross-linked into large, closely packed bundles. Proteoglycans enhance the formation of collagen. In some injuries this phase lasts up to a few months; after 15 to 20 days, the wound enters the maturation phase.
Maturation Phase
In this phase, the collagen is remodeled into a more organized structure. In this work, the scaffold was prepared by sterilizing the tilapia skin and coating/blending it with Tridax procumbens extract using standard laboratory instruments.
In previous studies, the scaffold was prepared only by sterilizing the skin using specially designed equipment [1-5].
Collagen is commonly derived from marine animals such as fish to make scaffolds, because of its availability and low cost, and the derived collagen is used especially in wound healing applications [5-10]. Herbs have their own medicinal properties and are used in various medicinal applications; Tridax procumbens in particular has the capability to heal wounds [5]. The resulting herbal extract has a reported shelf life of up to two years (Figure 6). It is filtered using Whatman No. 2 filter paper to remove granules, centrifuged for 15 minutes at 30 rpm to obtain a uniform extract mixture, and stored in a centrifuge tube at 4°C for future use (Figure 8).
Blending of the herbal extract with the skin
The herbal extract was placed in a beaker with glycerol in a 1:1 ratio. The sterile tilapia skin was then soaked in this mixture and stirred for 15 minutes on a magnetic stirrer to obtain the end product (Figure 9). The FTIR spectra of the resulting materials provide structural insights, and the interpretation table helps to analyse the spectra (Table 1).
Mechanical strength
In this work, the mechanical strength was characterized by the tensile strength. Tensile strength is the stress at which an applied force causes the material to lengthen and then break. For a material under axial load, the breaking strength in tension is s = P/a, where s is the breaking strength, P is the force that causes the material to break, and a is the cross-sectional area. The tensile strength depends on the type of material and on the cross-sectional area.
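As a quick illustration of the relation above, the short sketch below (in Python, with purely hypothetical force and specimen-geometry values, since the measured numbers are not given in this passage) computes the breaking stress s = P/a for a strip-shaped sample.

```python
# Minimal sketch: breaking stress of a strip-shaped specimen, s = P / a.
# The force and dimensions below are hypothetical placeholders, not measured values.

def breaking_stress(force_n: float, width_m: float, thickness_m: float) -> float:
    """Return the tensile breaking stress in pascals for an axially loaded strip."""
    cross_section_m2 = width_m * thickness_m  # a = width x thickness
    return force_n / cross_section_m2          # s = P / a

# Example: a 10 mm x 0.5 mm strip that breaks under a 12 N load.
stress_pa = breaking_stress(force_n=12.0, width_m=0.010, thickness_m=0.0005)
print(f"Breaking stress: {stress_pa / 1e6:.2f} MPa")  # 2.40 MPa
```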
SEM (Scanning Electron Microscopy) scans a focused electron beam over a surface to create an image. The electrons in the beam interact with the sample, producing various signals that can be used to obtain information about the surface topography and composition. It gives high-resolution images and precisely measures small features of the sample.
A cell viability assay determines the ability of organs, cells or tissues to maintain or recover viability. It is used to measure cell proliferation, cell death and the cytotoxicity of a sample. In this work the MTT assay method was used, as it offers high accuracy and is less hazardous.
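The exact readout formula used in this study is not given here; a common way to express MTT results, shown below only as an assumed illustration, is percentage viability computed from blank-corrected absorbance values.

```python
# Illustrative sketch (assumed, not taken from the paper): percentage cell viability
# from MTT absorbance readings, using blank-corrected optical densities.

def mtt_viability(od_sample: float, od_control: float, od_blank: float) -> float:
    """Return percent viability relative to the untreated control."""
    corrected_sample = od_sample - od_blank
    corrected_control = od_control - od_blank
    return 100.0 * corrected_sample / corrected_control

# Hypothetical absorbance values for a scaffold-extract well vs. control and blank.
print(f"Viability: {mtt_viability(od_sample=0.82, od_control=0.95, od_blank=0.08):.1f} %")
```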
Biodegradability test
The biodegradation test is generally used to predict the degradation of a particular sample in the environment, and many factors may affect the degradation. Since the aim is to apply the end product to human skin, and the body largely consists of fluids, the biodegradability was tested in three different fluids (tap water, distilled water, and saline), and the results in these fluids were compared.
FTIR
At each step, the changes in the components present in the samples were observed by FTIR analysis. Collagen, which is present in tilapia skin, is the major component involved in wound healing. The FTIR results are discussed in detail in this section.
Herbal Extract
The herb used here was Tridax procumbens. It contains lysine, among other constituents.
Comparison of Stored and Fresh Herbal Extract
The results show that the two spectra are the same. The brown curve represents the FTIR transmittance of the fresh herbal extract and the red curve shows the transmittance of the stored herbal extract.
The comparison of transmittance of fresh herbal extract and stored herbal extract is shown below (Figure 14).
Raw Tilapia Skin
Tilapia skin was chosen because of its rich collagen content.
Sterilized Tilapia
FTIR spectra were recorded after sterilizing the tilapia skin. The stretches are the same as those observed for raw tilapia skin, implying that the components present in the raw tilapia are not diminished even after sterilization. The bands are broader than for raw tilapia skin, owing to the use of glycerol in the sterilization process.
The transmittance and absorbance of the sterilized tilapia are shown below (Figures 17 and 18).
Stored Tilapia Skin
The sterilized tilapia skins were stored at 4°C for a week and then analyzed by FTIR to check for the presence of the components. Even after storage, the components are the same as in the sterilized tilapia skin, as confirmed by the FTIR results.
The transmittance and absorbance of the stored tilapia skin are shown below (Figures 19 and 20).
Sterile Tilapia Blended with Herbal Extract
The sterile tilapia skins were blended with the herbal extract in a beaker with glycerol in a 1:1 ratio, using a magnetic stirrer for 15 minutes, and then analyzed by FTIR. The transmittance and absorbance of the sterile tilapia blended with herbal extract are shown below (Figures 21 and 22).
Herbal Coated Tilapia's Skin
The herbal-coated tilapia skin shows a larger number of components.
Transmittance of Stored Tilapia's Skin
The prepared scaffold was stored at 4°C for 6 months. Its transmittance was observed by FTIR and compared with Figure 18. Even though the scaffold was stored, the compounds present in the scaffold are unchanged (Figures 25 and 26).
Tensile Strength
In this work, the tensile strength was measured to determine the breaking point of the scaffold.
In figure 21(c) the fiber bundles are shown clearly.
Cell Viability
The viability of the cells was tested using the MTT assay method. To prepare the extract, sterile fish-skin samples were weighed and aseptically cut into small pieces. These pieces were then homogenized in DMEM to obtain an extract of the desired concentration (500 mg/ml).
Biodegradability
In this work, the biodegradability was assessed in fluids, because the end product is intended to be applied to the skin and the body consists largely of fluids. Three different fluids (tap water, distilled water, and saline water) were chosen on the basis of their pH values. Two sets of samples were examined: in one set the weight was measured daily, while in the other set the weights were measured at two-day intervals. In both sets, sterile and coated tilapia skins were observed for 60 days. The data set is shown in the table below.
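The weight-tracking protocol above lends itself to a simple percentage-mass-remaining calculation; the sketch below is only an illustration with invented weights (the actual measured values are in the tables), not data from the study.

```python
# Illustrative sketch with invented weights: percent mass remaining of a scaffold
# sample over the observation period, relative to its initial (day 0) weight.

def percent_remaining(weights_g: list[float]) -> list[float]:
    """Convert a series of weights into percent of the initial mass."""
    initial = weights_g[0]
    return [100.0 * w / initial for w in weights_g]

# Hypothetical daily weights (grams) of one coated sample in saline.
daily_weights = [1.00, 0.97, 0.93, 0.88, 0.84, 0.80]
print([f"{p:.1f}%" for p in percent_remaining(daily_weights)])
```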
The results clearly show that the samples degraded fastest in tap water, more slowly in distilled water, and most slowly in saline water. A further observation is that the coated samples took more time to degrade than the uncoated ones.
Biodegradability test on a daily basis
The test was done in two different sets: one with daily measurements and another at a two-day interval. The data set taken daily is shown below (Table 2). The graphical representation of the six samples in the three different fluids is also shown below; it is clear that the samples in saline water take more time to degrade than those in tap water (Figure 32).
The biodegradability of the samples in tap water is shown in the graph below, in which the orange dots represent the coated sample and the blue dots the sterile sample. The graph shows that the coated sample takes longer to degrade than the uncoated one (Figure 33).
The biodegradability of the samples in distilled water is shown in the graph below, with the same colour coding. Again, the coated sample takes longer to degrade than the uncoated one, and the samples in distilled water degraded more slowly than those in tap water (Figure 34).
The biodegradability of the samples in saline water is shown in the graph below, with the same colour coding. The coated sample again takes longer to degrade than the uncoated one, and the samples in saline water degraded more slowly than those in distilled water and tap water (Figure 35).
Biodegradability test at a two-day interval
In this set, the weights were measured once every two days and compared with the data set taken daily; the two data sets match. The data set taken at the two-day interval is shown below (Table 3). The graphical representation of the biodegradability of the samples in the three different fluids is also shown below; the graph shows that the samples in saline water take more time to degrade than those in tap water and distilled water (Figure 36).
The biodegradability of the samples in tap water is shown in the graph below, in which the orange dots represent the coated sample and the blue dots the sterile sample. The coated sample takes longer to degrade than the uncoated one, which matches the data set taken daily (Figure 37).
The biodegradability of the samples in distilled water is shown in the graph below, with the same colour coding. The coated sample takes longer to degrade than the uncoated one, and the samples in distilled water degraded more slowly than those in tap water; this also matches the data set taken daily (Figure 38).
The biodegradability of the samples in saline water is shown in the graph below, with the same colour coding. The coated sample takes longer to degrade than the uncoated one, and the samples in saline water degraded more slowly than those in tap water and distilled water; this matches the data set taken daily (Figure 39). | 2021-05-22T00:03:11.762Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "b0e595ec6ffa6aa71255df8501df152a5e8a006e",
"oa_license": "CCBYNC",
"oa_url": "https://irispublishers.com/abeb/pdf/ABEB.MS.ID.000602.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8fe34605a98e496a531c8632c19df104834f12ae",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
19072649 | pes2o/s2orc | v3-fos-license | Unexpected spatial impact of treatment plant discharges induced by episodic hydrodynamic events: Modelling Lagrangian transport of fine particles by Northern Current intrusions in the bays of Marseille (France)
Our study highlights the Lagrangian transport of solid particles discharged at the Marseille Wastewater Treatment Plant (WWTP), located at Cortiou on the southern coastline. We focused on episodic situations characterized by a coastal circulation pattern induced by intrusion events of the Northern Current (NC) on the continental shelf, associated with SE wind regimes. We computed, using the MARS3D-RHOMA and ICHTHYOP models, the particle trajectories from a patch of 5x10^4 passive and conservative fine particles released at the WWTP outlet, during 2 chosen representative periods of intrusion of the NC in June 2008 and in October 2011, associated with S-SE and E-SE winds, respectively. Unexpected results highlighted that the amount of particles reaching the vulnerable shorelines of both northern and southern bays accounted for 21.2% and 46.3% of the WWTP initial patch, in June 2008 and October 2011, respectively. Finally, a conceptual diagram is proposed to highlight the mechanisms of dispersion within the bays of Marseille of the fine particles released at the WWTP outlet that have long been underestimated.
Introduction
Large coastal cities are a significant source of marine pollution in the Mediterranean Sea. Metal and organic pollutants are discharged into the marine environment through the sewage system and from the surrounding catchments. They are released as diffusive inputs or during flood events. The city of Marseille has one of the largest Wastewater Treatment Plants (WWTP) in Europe, serving 1.7 million inhabitants and using both physical and biological treatment processes. A large proportion of the treated wastewaters are discharged into the rivers (50%), with a noticeable signature of trace metals from the WWTP effluent during baseflow [1]. In dry periods, continental outflows from urban and industrial areas merge in Marseille and mix with the WWTP effluent down to an outlet located at the sea surface at Cortiou on the southern coastline. During flood events, significant amounts of trace metals are discharged into the surface aquatic system through runoff processes [2,3], and in extreme cases, as the outflows exceed the WWTP outlet capacity, a significant part of the continental waters are directly channeled into the southern bay of Marseille. On average, during flood events, 90% of the continental waters and particulate matter are channeled through the outlet and only 10% are diverted to the southern bay. In addition, the discharge of untreated wastewater in the coastal zone was estimated at between 456 and 1450 t.y-1 of Suspended Particulate Matter during the period 2001-2007 [4]. As the WWTP effluent is a major source of water and particles discharged from the city to the coastal area, previous modelling work focused on Marseille as a source of nutrients or contaminants for the coastal area. The focus was on modelling the impact of meteorological and hydrodynamic conditions on the fate of the WWTP contaminants in the coastal waters [5,3]. Successive modelling studies of the hydrodynamics in the bays of Marseille have been carried out. Firstly, we used the 3D Princeton Ocean Model on a 250 x 350 horizontal grid with a horizontal resolution of 100 m and 11 vertical sigma levels, in order to map the wind-induced hydrodynamic connections between the artificial reefs submerged in the central southern bay and the surrounding shorelines [6,7]. Secondly, 2 versions of the 3D model MARS3D-RHOMA (for Rhône-Marseille Area), with respective resolutions of 200 m and 400 m and 30 vertical sigma levels, were implemented on a domain extending from the Rhône River (roughly 40 km west of Marseille) to Cap Sicié (roughly 40 km southeast of Marseille), within the framework of the METROC, FP7 PERSEUS and EC2CO/PNEC MASSILIA projects, with the aim of coupling hydrodynamic, sediment transport and biogeochemical models. The results, described in [5,8-11], confirmed that the water circulation in the bays of Marseille is mainly controlled by the wind regimes, but also by the Rhône River seasonal floods and the periodic intrusions of the Northern Current (NC) on the continental shelf south of Marseille. These intrusions of the NC on the continental shelf have been studied and explained by many authors on the basis of surveys involving in situ Acoustic Doppler Current Profiler (ADCP) measurements and 3D numerical modelling [12-15].
The latter results led the authors [15] to propose 3 types of wind conditions likely to favor an intrusion of the NC on the shelf: i) 'intrusions under easterlies' that correspond to a stable SE wind regime; ii) 'intrusions under northwesterlies' that correspond to a relaxation of a NW wind, in stratified conditions; iii) 'a combination of the 2 wind patterns' involving a strong NW wind event immediately followed by a SE wind regime that reinforces the intrusion of the NC on the shelf.
In addition, a recent combination of the results of the 3D RHOMA model (resolution of 200 m) and the Lagrangian model ICHTHYOP was used to compute the wind-induced connectivity at the surface between different populations of the seaweed Cystoseira amentacea within the bays of Marseille under strong and well-established NW and SE wind regimes [16]. The results highlighted very rapid northwestward connections of propagules, considered as passive particles, at the surface in less than 18 hours across the bays from the WWTP outlet at Cortiou to the limit of the northern coastline. However, this modeling experiment showed that particle trajectories induced by well-established SE winds gave rise to strong connections between the eastern and western limits of the domain, passing south of the Frioul archipelago, but without entering the inner bays of Marseille and not resulting in connections on the shorelines of the northern and southern bays [16].
As a result, more recent simulations of connectivity were performed by combining the results of the RHOMA model (200 m resolution) with the ICHTHYOP tool, still under efficient SE wind regimes, but this time taking into account the deeper sub-surface circulation between 10 and 30 m depth, likely to enter the inner bays. These new sets of computations of the sub-surface circulation entering the inner bays of Marseille provided a basis for only taking into account the episodes of intrusions of the NC on the continental shelf that occurred under SE wind regimes, which have emerged as the dominant process giving rise to connections between southeastern offshore areas and the northern and southern bays. This result is of major importance in that it concerns the inner bays of Marseille, characterized by the highest number of sites that are vulnerable with regard to human activities such as tourism and the commercial activities of ports or beaches, fish farming, artificial reefs for production or sampling stations for water quality monitoring.
The aim of this study was to provide new insight into the previously under-estimated impact of the Cortiou WWTP effluent on the water quality of the bays of Marseille. Our approach was not the usual one, defined in terms of the dilution of dissolved constituents, but a Lagrangian approach that consisted in computing the trajectories of a patch of passive and conservative particles discharged at the WWTP outlet, under the specific conditions of circulation which characterize the intrusion of the NC on the continental shelf near Marseille. The impact of the WWTP effluent on the water quality of the bays is identified, both from the spatial and quantitative points of view, by quantifying the amounts of fine, non-motile, neutrally-buoyant particles that reach 10 different specific sites, chosen along the shorelines of both northern and southern bays according to their vulnerability with regard to local human activities.
Study area
Fig 1 presents the study area, including the Julio station [17], where a bottom-moored ADCP measured the currents associated with intrusions of the NC on the continental shelf between February 2012 and April 2015 [12,15]. In addition, Fig 1 presents the locations of the 10 similar square boxes of 650 m on a side (4.2x10^5 m^2), which were chosen all along the shoreline of both the northern and southern bays as representative of the most vulnerable areas with regard to human activities likely to be affected by particles transported from the WWTP outlet. Table 1 presents the coordinates of the 10 boxes considered, which represent harbours (Madrague, Niolon, Joliette), beaches (Rouet, NW Frioul, Prophète, Prado), fish farming (SE Frioul), artificial reefs (Récifs) and a sampling station for water quality monitoring (Somlit).
Winds
The local wind regime was observed at the Frioul meteorological station, located in the Frioul archipelago at the center of the domain, 74 m above sea level (Fig 1), and managed by the Pytheas Institute of Aix-Marseille University. Fig 2 presents the corresponding wind rose, which highlights the 2 prevailing NW 300˚-340˚ and SE 100˚-160˚ wind sectors, with frequencies of occurrence in time over the 1976-1998 period for hourly averaged wind speeds greater than 2 m.s-1. The frequencies of occurrence are 38.1% and 21.8% for the NW 300˚-340˚ and SE 100˚-160˚ sectors, respectively.
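The sector frequencies quoted above are simply the share of hourly records whose direction falls within a given angular band; the sketch below illustrates that calculation with a handful of invented observations (the station's real record is, of course, much longer and is not reproduced here).

```python
# Illustrative sketch (invented records, not the Frioul station data): frequency of
# occurrence of wind-direction sectors among hourly records with speed > 2 m/s.

records = [  # (speed in m/s, direction in degrees), hypothetical hourly observations
    (6.5, 320), (3.0, 150), (1.5, 310), (8.2, 335), (4.1, 120), (5.0, 305), (2.5, 140),
]

def sector_frequency(records, lo_deg, hi_deg, min_speed=2.0):
    """Percent of records above min_speed whose direction lies in [lo_deg, hi_deg]."""
    valid = [(s, d) for s, d in records if s > min_speed]
    hits = [1 for s, d in valid if lo_deg <= d <= hi_deg]
    return 100.0 * len(hits) / len(valid)

print(f"NW 300-340: {sector_frequency(records, 300, 340):.1f} %")
print(f"SE 100-160: {sector_frequency(records, 100, 160):.1f} %")
```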
In addition, each period of computation of water circulation and Lagrangian transport was chosen in relation to the wind forcing conditions of our model. Fig 3 presents the time series during the 2 periods chosen in our study, at a location corresponding to Somlit station (Fig 1), of the wind conditions computed every 3 hours at 10 m above sea level by the MM5 meteorological model (3 km resolution wind conditions of the RHOMA model). During the period of June 2008 (Fig 3a), the wind conditions were characterized by a 10-day period of moderate S-SE 130˚-170˚winds, corresponding to hourly averaged speeds < 7 m.s -1 measured at the Frioul meteorological station 74 m above sea level, from about 2 days before the particle release until the end of the particle dispersion, from June 15 2008 to June 25 2008. This period is interspersed with periods of unstable and weak NW winds, from June 18 2008 to June 21 2008. During the period of October 2011 (Fig 3b), the wind conditions were characterized by a strong NW winds event, from October 7 2011 to October 14 2011, followed by weak E-SE winds as reported in [11] and corresponding to hourly averaged speeds < 5 m.s -1 measured at the Frioul meteorological station (74 m above sea level). After the particle release on October 14 2011, the wind conditions presented weak and unstable E-SE 80˚-110˚winds until the end of the particle dispersion on October 19 2011.
NC Intrusions
It should be noted that our study does not claim to be representative of all intrusion events, which would require intensive statistical work and would offer conclusions that would still involve uncertainties, as the definition of an "intrusion" is not easy to match with realistic situations. Only typical situations are considered here, with the associated limitations. On the one hand, according to [12] and from an academic point of view, an intrusion of the NC on the continental shelf south of Marseille is defined as a branch of the NC crossing onshore (northwestwards) the 200 m isobath between 5.1˚E and 5.8˚E (Fig 1). We took this definition into account. On the other hand, according to [15], among the three types of wind conditions liable to favor an intrusion of the NC on the continental shelf, only the 2 situations mentioned as i) 'intrusions under easterlies' and iii) 'a combination of the 2 wind patterns' involve the forcing of a SE wind regime, and are the only situations that allow particles to be transported from the WWTP effluent to the inner bays of Marseille.
Therefore, for our study focused on the particle connections within the bays, we selected 2 periods characterized by 2 different SE wind conditions representative of the local variability (Fig 2): firstly, a period of 22 days from June 15, 0:00, to July 7, 0:00, 2008 and secondly, a period of 5 days and 18 hours from October 14, 6:00, to October 20, 0:00, 2011. These periods are characterized by intense intrusions of the NC on the continental shelf, and specific SE wind regimes: the period of June 2008 is characterized by 10 days of quite stable S-SE winds interspersed with periods of unstable and weak NW winds (Fig 3a), which is representative of the type i) intrusion, according to [15]; the second period in October 2011 is characterized by 5 days of E-SE winds following a strong NW wind event (Fig 3b), which is representative of the type iii) intrusion, according to [15].
Models
We used the current, temperature and salinity fields computed by the RHOMA version of the model MARS3D on a horizontal grid extending from the Rhône River to Cap Sicié, with 30 vertical sigma layers, and a fine horizontal resolution of 200 m or 400 m, with corresponding time steps of 60 s or 30 s, respectively. This model, described in detail in [8,18], takes into account the forcing by the NW Mediterranean general circulation, on the basis of a nesting strategy with the large-scale MARS3D-MENOR configuration (1.2 km resolution), by the atmospheric fluxes from the MM5 meteorological model (3 km resolution) and the average daily inputs of the Rhône River. We considered the 2 sets of results computed by the RHOMA model corresponding to the 2 selected study periods (see above), considered as representative of strong intrusions of the NC on the shelf associated with S-SE and E-SE wind conditions. The Lagrangian trajectories of the fine suspended particles through the bays of Marseille were computed with the ICHTHYOP software, developed by IRD and PREVIMER and described in [19]. ICHTHYOP is an Individual-Based Model (IBM) with various sub-models including biological behaviors. Here, we used only the movement sub-model, which simulates the following processes: horizontal and vertical advection, horizontal and vertical dispersion. In our study, horizontal and vertical advection were used in the movement equation but no horizontal or vertical dispersion was applied. Particles were considered as passive tracers. For time-stepping, a fourth-order Runge-Kutta integration scheme was used with a constant time step of 30 s that respected the CFL criterion over the entire domain. In our study, the ICHTHYOP model was run taking into account the 3-hour resolution fields of Eulerian current velocities, temperature and salinity previously computed by MARS3D. In ICHTHYOP, these fields are interpolated in space to provide values at any individual (particle) location. They are also interpolated in time to feed the IBM time step (30 s) that addresses subgrid-scale processes. Simulations consist of tracking the locations and properties of the individuals (particles), typically during periods from a few hours to a few weeks. The output time step was 3 hours. Each computation took into account the same design for the initial source located at the WWTP outlet (Fig 1): a circular patch of 5x10^4 particles with a diameter equal to half a mesh size (e.g. 100 m or 200 m for the version of the model used), located at 1 m depth with a vertical thickness of 1 m. In addition, in each ICHTHYOP computation we considered the particles as passive elements in suspension without any biogeochemical transformation, with no buoyancy and no sinking velocity. All the Lagrangian trajectories were computed by ICHTHYOP as direct trajectories, with particles being transported from a single source, still located at the surface (1 m depth) at the WWTP, towards their respective targets within the bays. In addition, as the time of release strongly impacts the trajectories and final destination of particles, various tests were made on the time of release for each event considered, and a choice was made to select the timing associated with trajectories that best transport the particles further up the bays.
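For readers unfamiliar with how a Lagrangian tool such as ICHTHYOP advances particles, the sketch below illustrates, in Python, a fourth-order Runge-Kutta step for one passive particle in a velocity field sampled through an interpolation function; the velocity field, function names and values here are placeholders, not ICHTHYOP's actual internals or the MARS3D-RHOMA output.

```python
import numpy as np

# Illustrative sketch of 4th-order Runge-Kutta advection of one passive particle.
# `velocity_at` stands in for the space/time interpolation of the Eulerian fields;
# here it is a made-up steady rotational flow, not the model's real currents.

def velocity_at(pos: np.ndarray, t: float) -> np.ndarray:
    x, y, z = pos
    return np.array([-0.1 * y, 0.1 * x, 0.0])  # placeholder velocity (m/s)

def rk4_step(pos: np.ndarray, t: float, dt: float) -> np.ndarray:
    """Advance a particle position by one RK4 step of length dt (seconds)."""
    k1 = velocity_at(pos, t)
    k2 = velocity_at(pos + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity_at(pos + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity_at(pos + dt * k3, t + dt)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Track one particle for 3 hours with a 30 s step, as in the study's time-stepping.
pos, dt = np.array([0.0, 100.0, -1.0]), 30.0
for step in range(int(3 * 3600 / dt)):
    pos = rk4_step(pos, step * dt, dt)
print(pos)
```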
The source files of all the particle trajectories computed by ICHTHYOP in our study are available within the Supporting information S1 Dataset, with the two netcdf files S1A and S1B corresponding to the computations of the entire chosen periods of June 2008 and October 2011, respectively.
Finally, the number of suspended particles reaching each regional box shown in Fig 1, during each Lagrangian transport computation, is then counted and compared to the number of particles constituting the initial patch.
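The particle counting described above can be pictured as a simple point-in-box test repeated for every particle and every target box; the sketch below is a schematic illustration with invented coordinates, not the actual box boundaries listed in Table 1.

```python
# Schematic sketch: count particles whose final positions fall inside each target box.
# Box corners and particle positions are invented for illustration only.

boxes = {
    "Somlit": (5.28, 5.29, 43.23, 43.24),   # (lon_min, lon_max, lat_min, lat_max)
    "Niolon": (5.25, 5.26, 43.33, 43.34),
}
particles = [(5.285, 43.235), (5.255, 43.335), (5.40, 43.20)]  # (lon, lat)

counts = {name: 0 for name in boxes}
for lon, lat in particles:
    for name, (lon_min, lon_max, lat_min, lat_max) in boxes.items():
        if lon_min <= lon <= lon_max and lat_min <= lat <= lat_max:
            counts[name] += 1

total = len(particles)
print({name: f"{100.0 * n / total:.1f}% of patch" for name, n in counts.items()})
```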
Particulate organic matter
The ratios of the stable isotopes of carbon ( 13 C/ 12 C) and nitrogen ( 15 N/ 14 N) allow the identification of Particulate Organic Matter (POM) sources in coastal waters. Different literature data from the same season and area were used to identify the potential sources of the POM in suspension in the surface sea water at Somlit station (Fig 1). The stable isotope ratios of the marine phytoplankton (δ 13 C = -20.64+0.12 ‰, δ 15 N = 3.90+0.06 ‰) measured by [20], offshore, in the euphotic zone at the maximum of Chla (at 90-100 m isobaths), may be associated with the marine phytoplankton transported by the NC. The continental river inputs of POM (δ 13 C = -26.32+0.89 ‰, δ 15 N = 5.19+1.56 ‰ and δ 13 C = -26.25+0.51 ‰, δ 15 N = 4.48+0.41 ‰, respectively) are mainly represented by various types of detritus, mainly of terrestrial plants (mainly C3 photosynthetic type) and freshwater phytoplankton [21,22]. Measurements at the surface in the WWTP plume confirmed that POM was composed of small particles from 2 to 6 mm with a concentration of 2.12x10 5 particles . μL -1 , and most of them (94%) were detritus particles and bacteria [22]. These particles and probably bacteria used for water treatment at the WWTP induce a particularly low δ 15 N ratio (0.59+0.85 ‰) compared to the other POM sources [22]. The δ 13 C ratio (-25.5+0.62 ‰) in the POM from the WWTP plume is lower than that of the Somlit POM (δ 13 C = -23.59+1.37 ‰) [22]. The Somlit POM is a mixture from different sources of particles and generally the mean δ 15 N (2.18+1.39 ‰) is intermediary between marine phytoplankton, WWTP effluents and continental outflows POM.
These literature data regarding sources were compared with the stable isotope ratios of the Somlit POM on October 17, 2011 [23]. As no rain event or continental outflow occurred in the previous days at Marseille, marine phytoplankton and WWTP POM were identified as the 2 main potential sources of suspended POM at Somlit on October 17, 2011. The percentage of WWTP POM in the Somlit POM was estimated using mixing equations (adapted from [24]), in which X is the isotope ratio considered (δ 13 C or δ 15 N). P% is thus the percentage of WWTP POM in the pool of suspended POM sampled at Somlit, while the difference up to 100% is the percentage of marine phytoplankton.
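Written out in the standard two end-member form consistent with these definitions (the notation below is ours, not a verbatim quotation of [24]), the mixing equation reads:

```latex
P\% \;=\; \frac{X_{\mathrm{Somlit}} - X_{\mathrm{phytoplankton}}}
              {X_{\mathrm{WWTP}} - X_{\mathrm{phytoplankton}}} \times 100,
\qquad X \in \{\delta^{13}\mathrm{C},\ \delta^{15}\mathrm{N}\}
```

with the marine phytoplankton fraction then given by 100% − P%.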
Start of the particle intrusion
These results illustrate the first starting step of the process of the Lagrangian transport of passive particles released at the surface at the WWTP outlet, over the first 2 days after the particle release, during 2 different SE wind induced periods of intrusion of the NC on the shelf. Note that both cases considered, in June 2008 and in October 2011, illustrate specific modes of behavior of the particles, just after being released at the WWTP outlet, according to the cross-shore (longshore) S-SE (E-SE) wind induced hydrodynamics along the southern coastline. Fig 4a, for June 2008, shows the surface circulation of the NC entering the bays of Marseille with current velocities ranging from 0.10 m.s-1 to 0.30 m.s-1 and rising sea surface elevations (z) along the southern coastline near the WWTP outlet, leading to an increased cross-shore sea surface slope of about 2 cm by 4 km. This hydrodynamic situation occurred during a period characterized by moderate S-SE 130˚-170˚ winds with hourly averaged speeds < 7 m.s-1, measured at the Frioul meteorological station. It should be noted that the sea water accumulation on the southern coastline and the resulting downwelling are reinforced during this period by the lower angles of incidence (less than 50˚) of the S-SE wind induced surface currents with respect to the east-west orientation of the southern coastline near the WWTP outlet, which corresponds to S-SE wind directions higher than 120˚ (Figs 3a and 4a). The fields shown for October 2011 correspond to October 15, 0:00, 2011, a time lapse of 18 hours after the particle release at the WWTP outlet on October 14, 6:00, 2011. Fig 4d shows an intense starting of the Lagrangian transport, but with particles remaining at the surface and being directly and quickly transported northwestwards from the WWTP outlet to the southern end of the Frioul archipelago, which can be reached in less than 1 day. It should be noted that this westward jet along the southern coastline is reinforced at this period by the higher angles of incidence (more than 50˚) of the E-SE wind induced surface currents with respect to the east-west orientation of the coastline near the WWTP outlet, which corresponds, as seen above, to E-SE wind directions lower than 120˚ (Figs 3b and 4c). These results illustrate the second, developing step of the process of the Lagrangian transport of passive particles sub-surface (about 20 m depth) throughout the bays of Marseille, during SE wind induced periods of intrusion of the NC on the shelf. Note that both cases considered, in June 2008 and in October 2011, illustrate specific dynamics of particle transport that differ spatially and temporally. The computed fields (Fig 5) show circulations in the northern and southern bays, with an anticyclonic eddy in the southern bay, and 2 cyclonic and anticyclonic eddies in the more complex northern bay. In addition, the field of temperature (T˚C) in Fig 5a shows the extension of warmer sea waters flowing westward and northwestward along the shoreline with a maximum cross-shore temperature gradient reaching 1.5˚C. This temperature gradient confirms the intrusion into the bays of Marseille of allochthonous water masses transported by the NC, which is characterized by Levantine waters, warmer than the waters of the NW Mediterranean continental shelf. The particles remaining near the surface (< 10 m depth) are transported very rapidly, in less than 30 hours, to the limits of the northern and the southern bays along trajectories passing west and east of the Frioul archipelago, respectively.
By contrast, the particles sinking in the downwelling slowly extend at sub-surface (about 20 m depth) along the eastern coastline of the Frioul archipelago only, where they can partly join the anticyclonic sub-surface circulation induced by the intrusion of the NC within the southern bay. In fact, the most striking result evidenced by this computation in October 2011, is to highlight the succession of 2 different modes of particle transport from their release at the WWTP outlet: i) the rapid E-SE wind induced westward transport at the surface of the whole patch along the southern coastline; ii) the extension of the transport towards the inner northern and southern bays as a result of the intrusion of the NC, but at different depths depending on whether the particles remain at shallower depths (< 10 m) or sink deeper owing to the downwelling at the southeastern coastline of the Frioul archipelago (about 20 m depth).
Development of the particle intrusion
The ending of the process of particle transport by intrusion of the NC (not shown) is characterized by an extensive dissemination of particles at shallow depths (< 20 m) throughout the entire northern and southern bays, induced by internal residual circulations after the NC current is redirected to the west; in fact, the 2 intrusion events of the NC within the bays of Marseille considered in our study ceased on June 27, 12:00, 2008, and October 18, 2011, respectively. Results in Fig 6 highlight the progressive decrease of the landing particle amounts from the limit of the southern bay (Somlit) to the eastern limit of the northern bay (Joliette), following a path west of the Frioul archipelago. In fact, from the computation of June 2008, the maximum values of particle amounts decreased from 281 particles at Somlit to 36 particles at NW Frioul, 18 particles at Niolon and 24 particles at Joliette. Note that the landing particle amounts are quite similar at Niolon and Joliette, located on opposite sides of the northern bay, and that the landings at Rouet, located further westwards along the northern coastline, remain close to zero.
Time series of landing particle amounts
In addition, results in Fig 6 clearly show the increasing time lapse within which the particles reach the different locations around the bays, again along a gradient oriented from the limit of the southern bay (Somlit) to the eastern limit of the northern bay (Joliette). In fact, from the computation of June 2008, particles successively reach Somlit, NW Frioul and then the stations of the northern bay. In addition, results highlight that the central box corresponding to the artificial reefs (Récifs) is mostly impacted by particle landings shaped as 2 peaks of 50 and 40 particles, occurring within time lapses of 12 and 22 days after their release, respectively. In contrast, the other boxes representative of the shorelines around the southern bay are characterized by weak but regular particle landings, less than 20 particles at Madrague, decreasing to less than 10 particles at Prophète and Prado.
Cumulative amounts of landing particles
Results for June 2008 in the southern bay show a dominant impact of the WWTP effluent in the central area corresponding to the artificial reefs (Récifs), with the maximum value of 3.6% for the initial patch. Moreover, results present a northward decreasing gradient of the WWTP impact along the 2 opposite shorelines of the southern bay, with values ranging from about 1.3% of the initial patch at SE Frioul and Madrague, and decreasing to less than 1% of the initial patch at Prophète and Prado.
Results for October 2011 in the western limit and the northern bay, unlike the results shown for June 2008, do not present any spatial gradient of the WWTP impact, but show moderate proportions of the initial patch at Somlit (3.7%) and NW Frioul (1%), and this time, peak values of up to 11.5% and 4.3% along the 2 opposite sides of the northern bay, at Niolon and Joliette, respectively.
Results for October 2011 in the southern bay, unlike results for June 2008, evidence a mode of distribution of the particles characterized by an eastwards decreasing gradient of the impact levels along the shorelines of the southern bay, from the peak value of 23.2% of the initial patch at SE Frioul to values of the same magnitude as in June 2008 at Récifs (2.1%), and then values becoming close to zero at the eastern stations (Prophète, Prado and Madrague).
Finally, it is worth noting that our computations highlight the variability of the amounts of particles discharged at the WWTP at Cortiou that reach the inner shorelines of the bays of Marseille, with total proportions, taking all boxes added together, ranging from a total of 21.2% of the initial patch in June 2008 up to 46.3% of the initial patch in October 2011. Stable isotope ratios were also examined for the POM sampled at Somlit (Fig 1) on October 17, 2011. The 2 identified main sources of suspended POM sampled at Somlit during that period were marine phytoplankton and WWTP effluent POM. The δ 13 C stable isotope ratio of the POM at Somlit on October 17, 2011, was higher than the mean value of POM at Somlit estimated in [22], and lower than the mean value of marine phytoplankton estimated in [20]. The δ 15 N stable isotope ratio of the POM at Somlit on October 17, 2011, was even lower than the mean δ 15 N of WWTP effluent POM.
Particulate organic matter
The mixing equations applied to the sampled POM at Somlit using δ 13 C estimated the proportion of the WWTP effluent at 40%, while when using δ 15 N it was estimated at 100%. Similarly, the proportion of the marine phytoplankton was estimated at 60% when using δ 13 C ratios.
Discussion
Our approach remains schematic but results reveal, by way of example, that the potential impact of an urban effluent such as that of the WWTP outlet at Cortiou, near Marseille, may represent a major concern in terms of health risk for most of the coastal activities of such a large city. Our study takes into account the specific hydrodynamic context which is the only one allowing the transport of particles from the WWTP effluent towards the inner bays of Marseille: a period of intrusion of the NC on the continental shelf associated with a SE wind regime.
Characterization of the NC intrusions
We selected our 2 events of intrusion of the NC within the bays of Marseille based on the computed velocity fields (RHOMA model), which exhibited a clear on-shelf velocity component at the entrance of the southern bay. The 2 intrusion events had an impact over the entire water column with discernible onshore velocity components computed from 20 m depth down to about 80 m [11].
The authors in [11] used the modeled meridian velocities to compute an average on-shelf flow for each event. These results confirm that the intensity of NC intrusions is not directly driven by SE wind conditions, as shown by the authors of [15]. In fact, the intensity of the SE winds during the period of June 2008 was greater than in October 2011; inversely, the intensity of the NC intrusion in June 2008 was weaker (in terms of currents and northward flow) than the intensity of the NC intrusion in October 2011. Our results confirm that in October 2011, the intrusion of the NC was triggered by a strong NW wind event lasting until October 14, 2011, and was then reinforced by the SE wind that followed, as described in [11]. This type iii) wind combination accelerates the intrusive current, as described in [15], giving rise to stronger current intensity.
On the contrary, in June 2008, the intrusion of the NC is related to a downwelling induced by S-SE winds as described above and in the literature. Authors in [25] suggested that the Ekman transport associated with SE winds induces a downwelling, which generates a westward coastal current that transports the NC waters onto the shelf. This hypothesis was confirmed by authors in [15] in the case of an intrusion induced by SE winds, associated with sea water accumulation along the southern coastline that drives a longshore shelf-intruding current, occurring independently of the stratification and concomitantly with the wind forcing.
Spatial variability of particle connectivity
Results highlight that the northern and the southern bays of Marseille are not impacted in the same way, revealing that the connections of particles from the WWTP effluent result from the complex interaction between SE wind induced current fields and the circulation induced by intrusions of the NC on the shelf.
Firstly, as seen in Figs 6 and 8, the hydrodynamics induced by a NC intrusion and a E-SE wind (as seen in October, 2011), promotes rapid surface, and then near-surface (< 10 m depth), particle connections that are more efficient at accumulating particles along the southeastern coastline of the Frioul archipelago (SE Frioul) than entering the southern bay (Prophète, Prado, Madrague). In addition, this type of near-surface circulation is particularly favorable for transporting more particles to the farthest shorelines of the northern bay (Niolon, Joliette).
Inversely, as seen in Figs 7 and 8, the hydrodynamics induced by a NC intrusion and a S-SE wind (as seen in June, 2008), promotes slower particle connections sub-surface (about 20 m depth) that are more efficient at entering the southern bay and dispatching particles all along the inner shorelines (Madrague, Récifs, Prophète, Prado). In addition, this type of sub-surface circulation following trajectories located a little further south than those induced by E-SE winds, is particularly favorable for transporting more particles to areas located south and west of the Frioul archipelago (Somlit, NW Frioul), with the same amount of particles connecting NW Frioul and Récifs, located on each side of the Frioul archipelago.
Secondly, our results in Fig 8 show that stations that are geographically distant from one another may be connected by the same number of particles, and much more than stations that are close to them. For example in October 2011, the remote stations Somlit and Joliette connect in the same way and much more than NW Frioul; in the same way in June 2008, the remote stations Madrague and Joliette connect in the same way and much more than Prophète.
In addition, the results in Figs 6 and 8 show a kind of 'boundary' of connectivity between Niolon and Rouet, which conversely are stations geographically close to one another along the northern coastline; in fact, Niolon is highly impacted by the effluent, while Rouet remains without any particle connection according to both computations in June 2008 and October 2011. Note that in both computations in June 2008 and October 2011, the eastern limit of the northern bay (Joliette) is strictly impacted from the west, according to the circulation induced by the NC intrusion on the shelf, and without any connection from the central southern bay (Récifs), despite the geographical proximity of these 2 locations.
Therefore, all these results should be displayed in the form of a conceptual diagram to present 2 modes of particle connectivity from the WWTP outlet to the inner shorelines of the bays, in both cases under periods of intrusion of the NC on the shelf, but depending on the S-SE or E-SE wind direction. Fig 10a presents a conceptual diagram showing the start of the particle transport in the vicinity of the Cortiou WWTP outlet, following 2 types of path according to E-SE or S-SE winds. This diagram offers a clearer explanation of the S-SE wind induced downwelling process in front of the WWTP outlet and the resulting sub-surface particle transport, versus the E-SE wind induced longshore jet and the resulting westward particle transport at the surface.
Conceptual diagram of particle connectivity
On the one hand, an E-SE 80˚-110˚wind regime (Fig 10a, wind 1 and black path), as studied in October 2011, which is characterized by higher angles of incidence (60˚-90˚) relative to the orientation of the southern coastline, locally induces a coastal circulation characterized by northwesterly currents (Fig 4c), a lower crossshore sea surface slope of about 1 cm by 4 km, with lower water accumulation onshore and an intense longshore jet that rapidly transports the particles westwards at the surface and close to the southern coastline, from the WWTP outlet straight to the south end of the Frioul archipelago, which is reached in 18 hours (Fig 4d).
On the other hand, a S-SE 130˚-170˚wind regime (Fig 10a, wind 2 and red path), as studied in June 2008, which is characterized by lower angles of incidence (0˚-40˚) relative to the orientation of the southern coastline, locally induces a coastal circulation characterized by northward currents (Fig 4a), a higher crossshore sea surface slope of about 2 cm by 4 km, with higher water accumulation onshore associated with a downwelling circulation that causes the particles from the WWTP outlet to sink down to 40 m depth. Then, particles are transported offshore to the north of Riou island in 2 days, where they may join at about 20 m depth the sub-surface westward circulation induced by the intrusion of the NC on the shelf (Fig 4b).
Fig 10b presents a comparative diagram showing the different paths of the particle connectivity from the Cortiou WWTP outlet with and without intrusion of the NC on the continental shelf and still associated with E-SE and S-SE winds.
Firstly, under a SE wind regime but in the absence of intrusion of the NC on the continental shelf, the particles are directly and rapidly transported at the surface to the north-west (Fig 10b, grey path), along trajectories passing south of the Frioul archipelago without entering the inner bays [16].
Secondly, in periods of intrusion of the NC on the continental shelf, from the moment the particles reach the southern end of the Frioul archipelago, and whatever their depth, they continue to be transported northward as a result of the intrusion of the NC within the bays (Fig 5a and 5c), but at different depths depending on the starting mode that they followed according to the prevailing wind regime at the moment of their release at the WWTP outlet (Fig 10a).
In a period of intrusion of the NC associated with a E-SE 80˚-110˚wind regime, as studied in October 2011, particles reaching the Frioul archipelago at the surface (Fig 10b, wind 1 solid black path), sink slightly deeper (< 10 m depth) under the effect of the wind induced downwelling that prevails along the southeastern coastline of the Frioul archipelago (thick black arrow). Then these particles continue to move rapidly northwards, passing west (east) of the Frioul archipelago to the extreme (inner) shorelines of the northern (southern) bay, which are reached in less than 30 hours (Figs 5d and 10b dotted black paths).
By contrast, in a period of intrusion of the NC associated with a S-SE 130˚-170˚ wind regime, as studied in June 2008, particles reaching the Frioul archipelago sub-surface (Fig 10b, wind 2 red path) continue to move northwards at the same depth (about 20 m depth) on both sides of the Frioul archipelago, either to disperse throughout the southern bay according to the local anticyclonic circulation, or to reach the distant shorelines of the northern bay within approximately 8 days (Figs 5b and 10b red paths).
Fig 10. (a): diagram showing the start of particle transport in the vicinity of the Cortiou WWTP outlet, following 2 paths according to E-SE (wind 1) and S-SE (wind 2) wind conditions. (b): comparative diagram of particle connectivity from the Cortiou WWTP outlet with and without intrusion of the NC on the shelf and associated with E-SE and S-SE winds; grey path: particle transport at the surface induced by SE winds without NC intrusion; solid black path: particle transport at the surface induced by NC intrusion and E-SE wind; dotted black path: particle transport near the surface (< 10 m depth) induced by NC intrusion and E-SE wind; red path: particle transport sub-surface (about 20 m depth) induced by NC intrusion and S-SE wind; thick black arrow refers to the downwelling at the southeastern coastline of the Frioul archipelago; thick red arrow refers to the downwelling in front of the WWTP outlet; grey dotted line refers to the presumed northwestern limit of the WWTP impact zone. https://doi.org/10.1371/journal.pone.0195257.g010
Note that a fraction of the patch of particles that reach the southeastern end of the Frioul archipelago at the surface (Fig 10b wind 1 solid black path), sink deeper in the water column to about 20m depth (thick black arrow) that leads them to join the sub-surface (about 20 m depth) northward circulation of the intrusion of the NC, which enters deeper into the southern bay (Figs 5b and 10b red path).
In addition, as shown in Figs 6, 7 and 8, our results show that Rouet is not impacted by the particle connections, which suggests locating the spatial limit of the impact zone of the WWTP effluent somewhere along the northern coastline between Niolon and Rouet (grey dotted line).
Although these 2 proposed modes of particle transport from the WWTP outlet to the bays of Marseille present a similar pattern, since they are both controlled by the intrusion of the NC within the bays, they differ essentially in the depth at which the particles are transported, and the resulting speed of their transport. In addition, it is interesting to note that our results identify 2 specific locations of downwelling, where particles from the WWTP outlet sink to subsurface depths, according to the angle of incidence of the SE winds, and resulting surface currents, with respect to the orientation of the southern coastline at the site of the WWTP outlet. In summary, an E-SE wind with higher incidence (direction < 120˚) will induce delayed particle sinking and shallower and faster connectivity, especially effective in reaching the distant shorelines of the northern bay, while a S-SE wind with lower incidence (direction > 120˚) will induce immediate particle sinking and deeper and slower connectivity, especially effective in reaching the inner shorelines of the southern bay.
Confirmation of the particle origins
On the basis of the measurements of stable isotopes, our results confirmed that the POM at Somlit is partially constituted of particles transported from the WWTP effluent. As suggested by the δ 13 C stable isotope ratio, the sea surface POM, which arrived at Somlit on October 17, 2011, was probably constituted by a mixture of marine phytoplankton transported by the NC and detritus particles from the WWTP effluent. Moreover, the δ 15 N stable isotope ratio of POM at Somlit was particularly influenced by the POM from the WWTP plume. These particles are consumed by the zooplankton and integrate the pelagic food web, particularly during the cold season (fall and winter) [22]. These particles may also be consumed by benthic filter feeder organisms after their release at the WWTP outlet [26]. A mixture of marine phytoplankton and a large part of the POM from the WWTP effluent may be consumed by the filter feeder organisms in the artificial reefs (Récifs), even if the main source of this food web suggested by the authors is the nanophytoplankton [27]. In fact, according to these authors, the estimated value of the basis of the food web (δ 13 C = -25.23±1.16 ‰, δ 15 N = 1.77±0.25 ‰) is rather close to the values of the POM from the WWTP effluent at Cortiou (δ 13 C = -25.5±0.62 ‰, δ 15 N = 0.59±0.85 ‰). The results of the present paper, relating to intrusions of the NC on the shelf and the resulting high impact of the particles from the WWTP effluent in the artificial reefs zone (Récifs), support this hypothesis. Interestingly, hydrodynamic circulation and stable isotope ratios could be used at the same time to better identify the respective contributions of the different seawater sources of POM. For example, in October 2011, at Somlit, 40% and 60% of the sampled POM may be attributed to particles from the WWTP effluent and marine phytoplankton, respectively. Thus, the Lagrangian trajectories computed in this study confirm that a significant part of the particles from the WWTP effluent are effectively connected to Somlit (3-7%); furthermore, the study published in [11] showed that the NC intrusion of October 17, 2011, brought onto the shelf the warmer and mostly oligotrophic water of the NC, characterized at Somlit by its signature in terms of offshore marine phytoplankton.
Concerns about human activities
From a health point of view, Niolon is a diving and bathing site that is potentially at risk of being contaminated by the particles from the WWTP outlet. Crucially, Rouet, which is the site of a Protected Marine Area, is almost never impacted by the potentially polluted particles. Joliette is a commercial harbour that is more likely to be polluted by vessel traffic than by particles from the WWTP effluent.
In the southern bay, it should be noted that the sites the most impacted by the WWTP effluent are SE Frioul (especially in October, 2011) and Récifs, each characterized by vulnerable human activities such as fish farming (SE Frioul) and a zone of artificial reefs for production (Récifs). In addition, we highlighted that the sites Récifs (artificial reefs), and also Somlit (water quality monitoring station), both located in the middle of the southern bay, were highly impacted by particles from the WWTP effluent, especially in June 2008. This result reveals that the particles from the WWTP can potentially be integrated into the pelagic and the benthic food webs within the southern bay, where professional and recreational fishing is highly developed. Finally, bathing areas such as Prophète, Prado and Madrague, located at the southeastern limit of the anticyclonic sub-surface circulation induced by the intrusion of the NC within the southern bay, remain weakly impacted by particles from the WWTP effluent.
Conclusion
The mechanism we propose in the present study for the dispersal of fine particles from the WWTP effluent at Cortiou throughout the bays of Marseille involves surface, shallow (< 10 m depth) and sub-surface (about 20 m depth) modes of particle transport, induced by intrusions of the NC on the continental shelf, associated with SE wind regimes. These modelling results are quite new and unexpected. In fact, the present study is of interest because the importance of the sub-surface circulation is too often under-estimated compared to the surface circulation, more intuitively considered, but very different from the deeper circulation and, in our case, ineffective in connecting sites located in the inner bays of Marseille. Our study highlights that the transport of particles from the WWTP effluent towards the inner shorelines of the bays of Marseille is not negligible. For each studied event, if we take together all the amounts of particles having connected the 10 vulnerable sites chosen around the northern and southern bays (Figs 6, 7 and 8), we obtain the relatively high values of 21.2% and 46.3% in June 2008 and October 2011, respectively. Moreover, the observations at Julio station (Fig 1) reveal that the occurrence frequency of the intrusions of the NC on the continental shelf is not so uncommon. In fact, the 3 series of measurements performed at Julio and covering 23 months between February 2012 and April 2015, show an average of 1 to 2 intrusions of the NC per month (A. Petrenko, pers. comm.). The intrusions of the NC on the shelf at sub-surface depths therefore represent a significant health risk in relation to coastal water quality along the shorelines of both the northern and southern bays of Marseille, and seeking greater knowledge in terms of their intensity, frequency and dynamics are ongoing research topics. At present, the analysis of the observations available at Julio station (2012-2015) is being processed in relation to the analysis of wind regimes and modeled current fields (A. Petrenko, pers. comm.). More generally, it is clear from our recent experience that enhanced monitoring of the sub-surface circulation, often under-estimated, with bottom-moored ADCP stations, as well as periodic analyses of the POM at sea surface with the stable isotope method, might play a major role in the detection and prediction of health risk episodes in a large number of highly urbanized coastal areas.
Supporting information S1 Dataset. Source files of computed particle trajectories (ZIP file). S1A and S1B (netcdf) correspond to the ICHTHYOP computations of June 2008 and October 2011, respectively. (ZIP) | 2018-05-03T01:02:46.411Z | 2018-04-25T00:00:00.000 | {
"year": 2018,
"sha1": "23ca339d1f100cce0baf1a0df8f4c422e67859c5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0195257&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1a53b4d73cbfe6684b14cffad2e660a7c91e233",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
247104516 | pes2o/s2orc | v3-fos-license | Adaptation and Psychometric Properties of an Instrument to Assess Self-Efficacy in Client-Centeredness (SECCQ)
ABSTRACT This article assesses the reliability and validity of the Self-Efficacy in Client-Centeredness Questionnaire (SECCQ). The SECCQ assesses social work students' subjective belief in their ability to provide client-centered care in their daily interactions with children or families. Self-efficacy is defined as an individual's judgment concerning their capability to perform the skills necessary to attain a desired behavioral outcome. Client-centeredness, on the other hand, relates to how social workers treat clients, not only from a clinical perspective but also from an emotional, mental, and social perspective. Overall, the findings demonstrate that this questionnaire has satisfactory psychometric properties and high reliability. Hence, the SECCQ may be a valuable tool for raising students' awareness of their self-efficacy and for the evaluation of student learning outcomes.
included in social work human behavior and practice courses, as one of many theoretical frameworks for social work practice and thought, his basic assumptions and guidelines for practitioners continue to permeate social work practice and education. The strengths perspective and empowerment have been major themes in the recent social work literature, and both contain clear elements of the client-centered approach. Particularly relevant is Rogers's emphasis on the client as an active participant, on respecting the client's subjective experiences, and on not presenting oneself as the expert (Green, 2017).
There is a need to develop good tools in Norwegian to measure students' abilities to practice a client-centered approach and to achieve learning outcomes such as personal change and self-efficacy. Such tools could be used both to give students more orientation and feedback regarding this important teaching goal and as a formative assessment for developing evidence-based education methods that positively stimulate students' self-efficacy. Perceived self-efficacy forms the basis of any decision to act. It refers to a person's ability to implement situation-specific behaviors to attain established goals, expectations, or designated types of outcomes (Hoffman, 2013). There are multiple ways to define and measure learning outcomes within an educational context, and the most common one focuses on the student's behavioral performance. Our interest in self-efficacy relates to student learning outcomes: through various lessons and exercises, the main goal is to strengthen students' self-efficacy in line with their progression within the study program, thereby facilitating good professional practice (Artino, 2012). Bandura (1977) defined self-efficacy as the conviction that one can successfully execute a behavior to produce a specified outcome. Perceived self-efficacy is concerned with a person's belief in their capability to exercise control over their own functioning and over events that affect their life (Bandura, 1986). Self-efficacy is one of the central variables that distinguishes one individual from another. People can be characterized primarily based on their belief in their ability to control their own lives, because those beliefs powerfully determine the effort they make to adapt to their surroundings. Self-efficacy theory predicts that the more an individual feels capable of predicting and controlling threatening events, the less vulnerable they will be to anxiety or stress disorders in response to traumatic experiences (Fleming et al., 2003). Self-efficacy influences all forms of a person's motivation, cognition, emotions and actions. It operates mainly through the cognitive and affective channels and plays a crucial role in shaping our perception of life experiences, as the belief in personal efficacy affects life choices, level of motivation, quality of functioning, resilience to adversity, and vulnerability to stress and depression (Bandura, 1986). People's belief in their efficacy is developed by four main sources of influence; these include our social skills, cognitive skills, observational learning, and social background. This self-system is the backbone of our personality, and self-efficacy is one of its essential components (Bandura, 1977; Fleming et al., 2003). It has been found that increasing a person's perceived self-efficacy will optimize their developmental potential (Hoffman, 2013). Self-efficacy is a central construct of social cognitive theory (Bandura, 1977, 1986), formerly known as social learning theory, which states that behavior results from an individual's belief that the action will lead to a desired outcome. People who are confident in their abilities are thereby more likely to attempt difficult tasks, exert greater effort to master such tasks, and persist in their attempts despite difficulties (Williams & Bond, 2002). There is no guarantee that higher levels of self-efficacy will improve client outcomes.
There is, however, reason to believe that a high level of self-efficacy within a social worker may improve client outcomes, since the social worker's confidence in their abilities is likely to influence their professional performance. Olesen and Jørgensen (2018) suggested the possibility of adapting the Self-Efficacy in Patient-Centeredness Questionnaire (SEPCQ) developed by Zachariae et al. (2015) for use as an instrument within social work. 1 The SEPCQ is presented as a valid and practical questionnaire to assess competencies in patient-centeredness by focusing on student self-efficacy. Although this questionnaire is directed toward healthcare situations, focusing on physicians and medical students, we found that most of the items within the questionnaire were also relevant to other professions, such as social work. The questionnaire provides several statements describing different aspects of how physicians and medical students can relate to and communicate with patients, with underlying themes focusing on the student's self-efficacy within the following skills: (a) exploring the patient perspective, (b) sharing information and power, and (c) dealing with communicative challenges (Zachariae et al., 2015). However, for it to be an instrument for social work students, it is necessary to adjust and adapt the questionnaire so that it assesses competencies in client-centeredness. (Footnote 1: Permission to use the SEPCQ was given by Dr. Martin Olesen (Aarhus University College).)
Aim
The aim of this study is to adapt the SEPCQ into the Self-Efficacy in Client-Centeredness Questionnaire (SECCQ) by changing its target audience from medical to social science students, and to evaluate its psychometric properties, providing preliminary evidence for its reliability and validity as a tool to measure perceived self-efficacy in client-centered communication among social work students.
Participants
A total of 156 out of approximately 200 second-semester undergraduate students were present at the four lectures in January and February 2019 where the SECCQ was distributed. The participants in the study were students from four different social science bachelor programs in Norway: social work, child welfare, work and welfare studies, and social education. Questionnaires that were less than 50% completed (n=10) were excluded from the statistical analysis (Meadows, 2012; Ware, 1995). A total of 146 completed questionnaires were returned. As the number of participants (n=146) is more than five times the number of questionnaire items (Q=27; 5 × 27 = 135), the sample size is in line with recommendations for validating an instrument derived from Willett (1998).
Measures
The SECCQ is a Norwegian instrument developed to measure client-centered communication practice within social work. As in Zachariae et al.'s (2015) work with the SEPCQ, the SECCQ uses a 5-point Likert scale with "1" (to a very low degree) and "5" (to a very high degree) as endpoints, with higher scores indicating a higher level of self-efficacy. The general instruction from the SEPCQ (Zachariae et al., 2015) was adapted, emphasizing that the questions cover neither actual behavior nor the desirability of the behavior. The questionnaire is developed as an instrument to measure how confident social work students are in their ability to relate to and communicate with the client. The students are asked how confident they are in their ability to make the client experience the particular behavior, implicitly understood as a necessary element of professional practice.
Procedure and data collection
The SEPCQ contains 27 items with three underlying self-efficacy factors: (a) exploring the patient perspective, (b) sharing information and power, and (c) dealing with communicative challenges (Zachariae et al., 2015). The adaption of the SEPCQ instrument to fit students from various study programs within social work was conducted in six steps.
First step: Selection of existing items
The first step of the adaption was to identify and translate items in the SEPCQ that are equally relevant for patient- and client-centered communication. Items 9, 13, and 27 were identified and adapted from the SEPCQ by translating the items from Danish to Norwegian. To achieve equality of meaning between the two versions, the chosen items were translated according to the "Guidelines of the Process of Cross-Cultural Adaption of Self-Reports" (Guillemin et al., 1993). Initially, a back translation was also conducted, in which the differences between the translated versions were evaluated and satisfactory compliance with the original scale was achieved through the consensus of the translators. Three researchers from within the field evaluated the cultural appropriateness of the completed Norwegian item version. When deciding on the final items, the content of the original questionnaire was weighted more heavily than the direct meaning of the translated words, as recommended by Polit and Beck (2020).
Second step: Modification of existing items
The second step of the adaption concerned the modification of items. Of the 27 original SEPCQ items, 13 were modified to better suit the social work context by altering the formulations. For example, items 1, 4, 5, 8, 10, 11, 14, 15, 16, 17, 20, 22, and 23 were adapted by replacing "patient" with "client." The chosen items were then translated from Danish to Norwegian according to the "Guidelines of the Process of Cross-Cultural Adaption of Self-Reports" (Guillemin et al., 1993), as described under the first step.
Third step: Construction of social work-specific items
Eleven items needed a more comprehensive content change, as the original items focused on healthcare-specific skills or themes that are not used within social work. Eleven new items were therefore developed to cover the social work-specific aspects of client-centered communication. To achieve equality of meaning between the content of the two versions, a linguistic analysis was used to determine new formulations adapted to social work. The new items included one item on "exploring the client perspective" (item 24), eight items on "sharing information and power" (items 2, 6, 7, 12, 18, 21, 25, and 26), and two items on "dealing with communicative challenges" (items 3 and 19). For example, "Accept when there is no longer curative treatment for the patient" was changed to "Accept when there is no intervention that will change the client's situation."
Fourth step: Adaption of instruction and response categories
To begin with, the general instruction from SEPCQ (Zachariae et al., 2015) was adapted by translating it from Danish to Norwegian, as were the response categories and 5-point Likert scale with "1" (to a very low degree) and "5" (to a very high degree) as endpoints. As described under the first step, equality of meaning was achieved in the content of the two versions by translating the introduction from Danish to Norwegian according to the "Guidelines of the Process of Cross-Cultural Adaption of Self-Reports" (Guillemin et al., 1993).
Fifth step: Pilot evaluation
A pilot evaluation of the content validity and comprehensibility of the 27-item social work questionnaire, along with clarity of instructions, was conducted by one of the researchers (TGA). This involved interviews with eight students and two social workers after their completion of the questionnaire. Students in the pilot study were third-year bachelor's students. Two students were attending programs in social work with the other six in child welfare. They gave their opinion on the relevance, acceptability, and understandability of the instructions and items.
The results from the pilot evaluation showed that the items covered the qualities of client-centered communication and that they, along with the instructions, were comprehensible. However, there was some reflection on the choice of words, which led to some minor adjustments. There was also a question regarding the suitability of item 3, as it could be interpreted differently depending on the students' professional understanding. Nevertheless, item 3 remained in the questionnaire for further assessment.
Sixth step: Data collection
The next step was to administer the SECCQ to students from four different social science bachelor's programs in Norway; namely, social work, child welfare, work and social welfare, and social education. All those who participated met the criteria of being second-year students attending their fourth semester of the bachelor's program within the social work area. Data collection was undertaken by researchers at the end of January and beginning of February 2019. Students were asked to participate during a lecture by one of the researchers. The students were provided written and oral information about the study; this information explained that participation was voluntary and that their answers would be treated confidentially. Students gave their consent by filling out the questionnaire and participating in the study. All participants filled out the questionnaire on paper and in person.
Ethical approval
All data were collected anonymously. In Norway, studies that use exclusively questionnaire-based data are not subject to ethical approval.
Statistical analysis
The students' responses were coded and entered into IBM SPSS version 25. Descriptive statistics were used to describe the sample characteristics, and data were checked for normality both graphically and by assessing skewness and kurtosis. As there is no previous exploration of the underlying components of the SECCQ, a principal component analysis (PCA) was performed to explore the links between the observed variables (items) and the latent variables (factors) and to identify the factor structure. The data's suitability for PCA was assessed using Bartlett's test of sphericity, which evaluates the overall significance of the correlations in the correlation matrix, and the Kaiser-Meyer-Olkin (KMO) test, which verifies whether sampling adequacy is appropriate (Field, 2017; Hair et al., 2007). To guide the extraction of factors, Kaiser's criterion was used, and factors with an eigenvalue ≥1.0 were retained for further analysis (Field, 2017). To provide further support for the extraction of factors, oblique rotation (Oblimin with Kaiser normalization in SPSS) was used, as it allows for a degree of theoretical correlation among dimensions (Field, 2017; Tabachnick & Fidell, 2013). Items loading at ≥.4 were considered acceptable loadings for the factors (Hair et al., 2007).
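The same suitability checks and factor extraction can be reproduced outside SPSS; the sketch below is a minimal Python equivalent using the third-party factor_analyzer package, assuming the 27 item responses are held in a pandas DataFrame called `responses` read from a hypothetical file (the data themselves are not distributed with the article).

```python
# Minimal sketch of the suitability checks and factor extraction described
# above, assuming `responses` is a pandas DataFrame with 27 item columns.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

responses = pd.read_csv("seccq_responses.csv")  # hypothetical file name

# Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure
chi2, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett chi2={chi2:.1f}, p={p_value:.4f}; KMO={kmo_overall:.2f}")

# Extraction of two components with oblique (oblimin) rotation
fa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))         # items loading >= .4 are considered acceptable
print(fa.get_factor_variance())  # variance explained per factor
```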
Cronbach's alpha was used to establish internal consistency, with values above 0.70 indicating an acceptable level of reliability (Cronbach, 1951;Streiner & Norman, 2008). The number of missing values was less than 1%, and these were imputed as recommended at the series mean (Meadows, 2012;Ware, 1995).
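For reference, Cronbach's alpha can be computed directly from the item-score matrix; the small function below is a generic sketch (not taken from the article) of the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores), where `responses` and `factor1_items` in the usage comment are hypothetical names.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one (sub)scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1)           # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# e.g. cronbach_alpha(responses[factor1_items].to_numpy())  # hypothetical usage
```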
Results
Descriptive statistics at the item level for the SECCQ are presented in Table 1. A total of 146 questionnaires with complete or almost complete data (>50% of items answered) were returned. The number of missing values was less than 1%, and missing data were replaced at the series mean.
Identifying factor structure
The suitability of the data for factor analysis was assessed. An inspection of the correlation matrix revealed coefficients of .4 and above for 23 items; for three items this was between .35 and .4, and for one item below .3. The KMO value was .89, exceeding the recommended value of .6 and indicating that the sample should produce reliable and distinct factors (Field, 2017). Bartlett's test of sphericity (Bartlett, 1954) was highly statistically significant (p≤.001), supporting the factorability of the correlation matrix. To explore and ensure a stable factor solution for the 27 items of the SECCQ, PCA was conducted using SPSS version 25. The PCA revealed the presence of six factors with eigenvalues exceeding 1, explaining 36.4%, 6.4%, 5.7%, 5.1%, 4.2%, and 3.7% of the variance, respectively. Parallel analysis using the free software Monte Carlo PCA (Watkins, 2000) showed, however, only two factors with eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of the same size (27 variables × 146 respondents).
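Horn's parallel analysis, as implemented in the Monte Carlo PCA program cited above, can be sketched as follows; this generic Python version (not the software used by the authors) compares the observed eigenvalues with the mean eigenvalues of correlation matrices computed from random normal data of the same size, and `responses` in the usage comment is a hypothetical variable name.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 1000, seed: int = 0) -> int:
    """Return the number of components whose observed eigenvalue exceeds the
    mean eigenvalue obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    rand_eig = np.zeros((n_sims, n_vars))
    for i in range(n_sims):
        rand = rng.standard_normal((n_obs, n_vars))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]

    criterion = rand_eig.mean(axis=0)   # the 95th percentile is another common choice
    return int(np.sum(obs_eig > criterion))

# parallel_analysis(responses.to_numpy())  # the authors report 2 retained factors
```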
The two-dimensional solution explained 42.8% of the variance, with dimension 1 contributing 36.4% and dimension 2 contributing 6.4%. To aid in the interpretation of these two dimensions, oblique rotation was performed. The rotated solution revealed the presence of a simple structure (Thurstone, 1947), with both dimensions showing a number of strong loadings and variables loading substantially on only one dimension (Table 2). There was a negative correlation between the two dimensions (r=−.63).
A content analysis suggests the following two underlying dimensions of the SECCQ: Factor 1, exploring and dealing with communicative challenges (16 items), and Factor 2, sharing information and power (11 items), as shown in Table 2.
Psychometric properties
Cronbach's alpha was used to measure the internal consistency and reliability of the construct. The Cronbach's alpha coefficient for the overall scale was high (α=.89), with high levels for both factor dimensions: Factor 1 (exploring and dealing with communicative challenges) α=.87, and Factor 2 (sharing information and power) α=.89.
Discussion
This article presents the SECCQ for use with Norwegian social work students. While the original questionnaire by Zachariae et al. (2015) was developed for medical students and physicians using medical terms, the current study adapts the questionnaire for undergraduate social work students and investigates the psychometric properties of the adapted version. The satisfactory psychometric properties suggest high reliability in the context of social work students. Exploring the possible underlying factor structure of the SECCQ, we identify two preliminary factors, covering 27 items: (a) exploring and dealing with communicative challenges and (b) sharing information and power. From both a statistical and a content-based perspective, the two factors appear to be valid subscales covering core aspects of client-centeredness self-efficacy. Construct validity was evaluated by performing an exploratory factor analysis using PCA, which shows a stable two-factor solution, with most items correlating strongly with their factors. Furthermore, the internal consistencies were generally high across the different items.
Compared with the original SEPCQ (Zachariae et al., 2015), the items correlated differently, combining "exploring the client's perspective" and "dealing with communicative challenges" into one dimension: exploring and dealing with communicative challenges. Additionally, two items (19 and 20) correlated with factors that differed from the SEPCQ. Item 19, "to stay focused on what is best for the client if there is a professional disagreement about the choice of intervention and the professional assessments," was previously described as belonging to the "dealing with communicative challenges" dimension. However, the content of the item, as it is in the SECCQ, shows a good fit within the "sharing information and power" dimension. This change may be due to cultural differences and the students' different academic professions; linguistic differences may also lead to different contextual understandings. To "stay focused on what is best for the client (. . .)" is about actions that may lead to a feeling of empowerment for the client, as only the client would know what is best for themselves. The item thus amounts to sharing information, and thereby power, with the client. Within its social work context, both the item's content and its correlation show that it belongs strongly in the "sharing information and power" dimension. Item 20, "make the client feel that he/she can talk with me about confidential, personal issues," was also previously described as belonging to a different dimension: according to the SEPCQ, item 20 correlated with the "exploring the patient perspective" dimension. Within the Norwegian language, item 20 contains elements of both "dealing with communicative challenges" and "exploring the client perspective," which is consistent with the first subscale of the SECCQ. Item 20 loads almost equally on both subscales (.374 and −.380). The content of the item appears, however, much more conceptually related to the first subscale. Removing item 20 from one subscale and adding it to the other produces no change in the internal consistency and reliability of the overall scale (α=.89), albeit with a slightly increased value for the first subscale, whose Cronbach's alpha coefficient increases from .86 to .87, and an equivalent reduction for the second subscale, from .90 to .89. Overall, the assessment of item 20, as it is presented in the SECCQ, therefore shows a good fit within the "exploring and dealing with communicative challenges" dimension.
Four items have a correlation coefficient below .4. Items 2 (−.395), 13 (.352), and 20 (−.380) have correlation coefficients between .35 and .4, indicating a moderate linear relationship with their dimensions. The content of items 2, 13, and 20 is, however, considered to reflect necessary skills for performing acceptable social work, and the items were therefore retained in the model based on content analysis. Item 2 addresses the students' reflections on what a "comprehensive mapping" consists of, while items 13 and 20 appeal to the students' emotional presence within social work.
The item about accepting that "there is no intervention that will change the client's situation" (item 3) has a correlation coefficient of .207, indicating a weak positive linear relation to its dimension. Exploratory analysis identified strong correlations (.644) for this item outside the two-factor solution, suggesting it could form a possible single-item component of the instrument. However, neither the Cronbach's alpha coefficient for the overall scale nor that of the factor dimension changed meaningfully when item 3 was deleted from the instrument. As this item addresses skills necessary for performing acceptable social work, and excluding it from the instrument did not yield any gain of interest, the item was retained in the model based on content analysis.
Limitations
Although the overall findings demonstrate that the SECCQ has satisfying psychometric properties and high reliability, and may be a valuable tool for raising students' awareness of their self-efficacy and the evaluation of student learning outcomes, the current study has some limitations. First, the validation and cross-cultural adaption process is based on a Norwegian context. Even though there is an English version of the instrument presented, the applicability of the instrument within other English-speaking countries is promising but unknown. There is therefore a need for further testing of the instrument within other cultural contexts, to be able to assess the validity of the instrument within other countries.
Second, the quality of measurements is important to all sciences. Exploratory factor analysis can be used to collect an important type of validity evidence, and increases the reliability of the scale by identifying inappropriate items that can be removed. However, validity is not a property of the measurement instrument but rather refers to its proposed interpretation and use. An instrument may be validated for a certain population and purpose, but that does not mean it will work across all populations and for all purposes (Knekta et al., 2019). Validity must be considered each time an instrument is used (Kane, 2016).
Third and last, it cannot be ruled out that differences in the structure and emphases of different educational systems across countries can result in differences regarding the conceptualization of the SECCQ construct. A cross-cultural assessment before use within other countries is therefore recommended.
Conclusion
The overall findings demonstrate that the SECCQ has satisfactory psychometric properties and may be used as a valid and reliable instrument to measure students' self-efficacy in client-centeredness. The SECCQ could be a useful instrument for evaluating client-centeredness self-efficacy in various contexts (e.g., when evaluating the outcomes of communication training courses or a student's development within professional practice). Nevertheless, it is important that further studies use the SECCQ and evaluate its effectiveness. Further research is also needed on both the value of using this instrument within an educational context and the influence it may have on students' self-development and learning outcomes.
As it stands, the instrument raises the question of whether it may be equally applicable to both students and practitioners. It is, however, developed for use within an educational context, and its suitability for practitioners therefore remains, as presented in this study, hypothetical. There is thus a need for further research, in which the instrument is introduced to various groups of practitioners, before concluding on its validity or suitability outside the educational context.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Tina Gerdts-Andresen is an associate professor at Østfold University College. Anne Margrethe Glømmen is an assistant professor at Østfold University College. Inger Hjelmeland is an associate professor at Østfold University College. Erna Haug is an associate professor at Østfold University College. Heidi K. Grønlien is an associate professor at Østfold University College.
Ethics approval
The study was performed in accordance with the principles of the World Medical Association (Declaration of Helsinki).
Open access
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC-ND) license, which permits others to distribute, remix, adapt, build upon this work noncommercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is noncommercial. | 2022-02-26T00:05:32.587Z | 2022-02-23T00:00:00.000 | {
"year": 2023,
"sha1": "82d09d58d2af110a1020f741151ad2e9df895236",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10437797.2021.2019632?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "929d64f6cc2743f7886987241ea8b21da63881ba",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
13599300 | pes2o/s2orc | v3-fos-license | Coronary artery bypass grafting using the radial artery : influence of proximal anastomosis site in mid-term and long-term graft patency
Objective: To determine whether the site of the proximal anastomosis influences mid- and long-term graft patency.
Conclusion:
The site of the proximal anastomosis of RA coronary grafts does not affect mid- and long-term graft occlusion and patency.
INTRODUCTION
Even nowadays, the radial artery (RA) graft is a controversial alternative, yet one in wide use in coronary artery bypass grafting. From the first publications in the 1970s [1], with disappointing short-term results [2-4], until its "resurgence" in the 1990s [5], the results reported in numerous patient series established well-founded concepts about this graft. It is known, for example, how the degree of obstructive lesion in the target coronary affects the evolution of graft flow [6,7]. However, only recently have other aspects of the RA been studied, such as the different techniques for graft dissection or the site of the proximal anastomosis [6,8-10].
The aim of this study is to determine whether the site of the proximal anastomosis influences the mid- and long-term patency of RA grafts.
METHODS
Between 1994 and 2006, 481 patients underwent CABG at the Heart Institute of the Clinics Hospital of the Faculty of Medicine, University of São Paulo (InCor/FMUSP), operated on by the same surgical team (surgeon and first assistant). In all surgeries, at least one RA graft was used. Of these patients, 123 (25.57%) underwent coronary angiography restudy in the postoperative period, as recommended by the Clinics of Coronary Disease of the institution. The data (including the coronary angiography exams performed and their reports) were obtained by evaluating the records, with approval of the Ethics Committee for
Of this total, 150 arteries were grafted with the RA by means of single end-to-side or side-to-side (sequential) anastomoses to more than one coronary branch. RA dissection was the same in all cases, using a single incision on the ventral side of the selected forearm, starting from the distal portion of the vessel, with manual dissection and ligation of branches, thus preserving adjacent tissues, and extending up to the junction of the interosseous branch. The left marginal branches (LMB), or equivalents from the same territory, were the most prevalent targets (48.67%). The other coronary branches, in decreasing order of RA use, were the diagonal branch (DB) (30.67%), the right coronary artery (RCA) (15.33%) and the anterior interventricular branch (AIB) (5.33%) (Figure 1).
In this group of patients, the proximal anastomosis of the RA was performed directly on the aorta in 50 (40.65%) patients, using the traditional end-to-side technique and 7-0 polypropylene thread. In the remaining 73 (59.35%) patients, the RA was used as a composite graft, through an end-to-side Y-shaped anastomosis to the left internal thoracic artery (LITA) or the right internal thoracic artery (RITA), using 7-0 polypropylene thread. There were no complications during the postoperative period and all patients were discharged from hospital with maintenance of outpatient follow-up.
Coronary angiography was performed in the 123 patients studied, within a mean period of 5.36 ± 3.21 years, with injection also into the aorto-coronary grafts and the LITA and/or RITA, if present.
The data were divided into two categories, according to the proximal anastomosis (aorta/composite graft) and the graft patency (occluded/patent). For comparisons among variables, we used the chi-square test for two proportions (or the linear-trend chi-square for comparisons among more than two proportions), with a confidence interval (CI) of 95%.
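As an illustration of the chi-square comparison of two proportions described above, the sketch below uses scipy with hypothetical occlusion counts for the two proximal-anastomosis groups; the per-group counts are placeholders, since only the pooled occlusion rate is reported in this text.

```python
# Hypothetical 2x2 table: rows = proximal anastomosis group (aorta, Y-graft),
# columns = graft status (occluded, patent). Counts are placeholders.
from scipy.stats import chi2_contingency

table = [[11, 50],    # aorta group     (hypothetical split)
         [13, 76]]    # Y-shaped group  (hypothetical split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 -> no group difference
```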
RESULTS
Considering the evolution of the RA grafts, irrespective of the proximal anastomosis, we noted a patency rate of 82.11%. Grafts with a totally obstructive lesion at the anastomoses (distal/proximal) and/or along the graft were considered "occluded" (17.89%).
There was no statistically significant difference in distribution of obstructive lesions of grafted coronaries between the two groups of proximal anastomosis, and the most prevalent obstructions were between 90% and 99% (Figure 2).
We noted a predominance of graft patency in the most severe obstructive lesions, particularly in the groups with obstructions of 90% or more (P=0.0003) (Figure 4). This behavior could be observed similarly in both groups of proximal anastomoses, with statistical significance only in the Y-shaped proximal anastomosis group (P=0.0031) (Figures 5 and 6).
DISCUSSION
The historical evolution of the use of the RA in CABG is well known. Since the pioneering proposal and comments of Carpentier et al. [1] in 1973, investigations have addressed various aspects of the graft, aiming to evaluate its effectiveness. The first studies, still in the 1970s, showed unfavorable results [2,3], which prompted different studies to determine possible factors, perhaps at the histopathological level, that might explain the early occlusion behavior of the RA [4,11,12]. In the early 1990s, numerous technical and pharmacological advances [6,8] renewed interest in this graft [5,13,14]. With this new approach, very satisfactory results were obtained, but always in series of patients restudied early or in the short term [5], between 14 and 18 months [6,10,13,15].
Fig. 6 - Evolution of RA graft patency according to the obstructive lesion of the grafted coronary - group "Y-shaped" anastomosis (n=73)
Since then, there has been a preference for the Y-shaped proximal anastomosis, usually involving the LITA or RITA [7,15-18]. This trend was based on the favorable adaptation shown by in situ internal thoracic arteries in appropriately supplying blood to Y-shaped composite grafts [7,19]. Furthermore, some authors have shown good results with the RA using the proximal anastomosis in the aorta, although always considering short-term restudies [9,20,21].
Good results have also been shown in recent series in which the follow-up time reaches 6 years [22,23]. In these studies, there is no evidence that the proximal anastomosis affects RA patency at mid-term [24]. Currently, the choice of the proximal anastomosis site is guided by considerations such as less, or even no, manipulation of the ascending aorta [21]. It has also been suggested that the topography of the target coronary could interfere with RA patency [25], a finding that has not yet been confirmed by other authors.
The degree of the preoperative obstructive lesion interferes with the evolution of RA patency. The possibility of competition between native coronary flow and the graft has been described, especially when the obstructive lesion of the target coronary is not subocclusive [22].
In a series of 54 patients restudied one year after surgery, there was also apparent vulnerability of RA grafts under situations of "flow steal": the authors describe 50% graft occlusion for preoperative obstructions of less than 60% [26]. In a postoperative angiographic restudy (mean 32 months) of 123 patients who underwent surgery using ITA and RA grafts, patency rates of 99.2% and 92%, respectively, were observed. The highest RA patency rates were recorded for target coronaries with obstructions of 90% or more (98%), compared with less severe obstructive lesions (83.3%, P<0.05) [27]. Similar results were shown in a retrospective assessment of 600 patients, of whom 93 (15.5%) were restudied, with a 92.5% RA graft patency rate; all occluded grafts were related to coronary arteries with less severe obstructions, 56.3 ± 15.4% (P<0.001) [28].
Consideration should also be given to the progress of the RAPCO study (Radial Artery Patency and Clinical Outcome), which compares RA grafts with the free RITA and the saphenous vein graft (SVG). Preliminary results of this study include a mean follow-up time of 3.5 years. The authors note that, although it tends to perform better than the SVG, the free arterial graft does not maintain patency comparable to the in situ LITA or RITA. No clear differences regarding the influence of the proximal anastomosis on the results were shown [29].
The patients we studied who underwent CABG using the RA show a mid-term evolution compatible with the data available in the recent literature.
Moreover, considering the follow-up time of these patients and the time of restudy, the favorable results described herein extend from the mid- to the long term.
CONCLUSION
It is concluded that the site of the proximal anastomosis (aorta / "Y-shaped" with in situ LITA or RITA) did not interfere with the mid- and long-term patency of the RA grafts.
The degree of obstructive lesion interferes with the patency of RA grafts, with a higher number of patent grafts in obstructions of 90% or more.
Fig. 2 - Distribution of different degrees of obstructive lesion in both groups of proximal anastomosis
Fig. 5 - Evolution of RA graft patency according to the obstructive lesion of the grafted coronary - group proximal anastomosis in the aorta (n=50) | 2017-08-16T03:25:22.526Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "47e6d649af4584e461e91c807e4c9076e8d5bc1b",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbccv/a/BRttN8DBG7JYqh9SNKVSxMq/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "47e6d649af4584e461e91c807e4c9076e8d5bc1b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234539495 | pes2o/s2orc | v3-fos-license | Hydro-Geochemical Mechanisms of Brackish Shallow Groundwater Development- Coastal Greater Accra Region, Ghana
This study investigated the processes influencing the chemistry of surface and shallow groundwater in tropical coastal environments southeast of the Greater Accra Region of Ghana, using GIS models and a combination of geological and hydrogeochemical techniques, for the sustainable management of freshwater resources and of the abundantly available brackish water resources. A total of 37 shallow groundwater and 11 surface water samples were collected and analysed for their physico-chemical constituents. The samples were adjusted to room temperature, after which the hydrogen ion concentration (pH), Total Dissolved Solids (TDS) and electrical conductivity were measured, with a precision of 0.01 for both parameters, using a LaMotte (USA) meter on unfiltered samples. The analyses of major and minor ions were performed using ion chromatography (DIONEX ICS-1000 Series). The geographical locations of the samples were recorded with the aid of a handheld Global Positioning System (GPS) receiver. The analysed shallow groundwater shows a minimum salinity of 70.2 psu/ppm, a maximum of 4398.3 psu/ppm and an average of 1571.4 psu/ppm, whilst the surface water has a minimum salinity of 33.8 psu/ppm, a maximum of 43574 psu/ppm and an average of 4972.6 psu/ppm, and is therefore highly saline. The minimum total dissolved solids (TDS) concentration is 87.5 ppm, the maximum 5160 ppm and the average 1911.96 ppm for shallow groundwater; for surface water the minimum is 38.4 ppm, the maximum 27100 ppm and the average 3938.12 ppm.
INTRODUCTION
In 1995, the Government came up with a policy (Ghana Vision 2020, 1998) which aimed at supplying all rural communities with potable water, mainly from groundwater sources, by the year 2020. Also, at the United Nations Millennium Summit held between 6th and 8th September 2000 at the United Nations Headquarters in New York, 189 Heads of State adopted the Millennium Development Goals (MDGs), which set clear, numerical, time-bound targets for making real progress by 2015 in tackling the most pressing issues facing developing countries. One of the MDGs is to cut by half the proportion of people without sustainable access to safe drinking water and sanitation by 2015 [1].
Recent studies suggest that we are on the verge of a freshwater crisis, wherein demand relative to supply is projected to lead to water scarcity for a significant percentage of the global population in the relatively near future. It has been estimated that each person on earth needs a minimum of about 1000 m³ of water per year for drinking, hygiene, and growing food for sustenance; whether people get enough water to meet their needs depends primarily on where they live, as the distribution of global water resources varies widely [2]. Almost half the world's population lives within 60 km of the coast, and 75% of large cities are located near coasts. Coastal aquifers are part, and in many cases the only part, of the water supply equation for these populations. Unfortunately, as is the case with so many of our water supplies, we have (collectively) failed to look after these resources [3]. Coastal hydrogeologic systems, particularly in areas of modest rainfall, runoff, and recharge, are complex and difficult to decipher. The primary forcing function of low precipitation results in less erosion, smaller aquifers, slower groundwater flow rates and a predominance of brackish groundwater [4]. The coastal area of southeastern Greater Accra Region, Ghana, has a similar setting, as shown in Fig. 1.
Aquifers do not simply stop at the shoreline. Both unconfined and confined aquifers extend beneath the sea, with strata extending for tens of kilometres beyond the coast. There is a dynamic relationship between land-derived fresh water and the sea water that enters the aquifer beyond the coast. Improved knowledge and exploitation of unconventional water resources will increase water security and assist economic growth into the future. Putting in place the responsible development of brackish groundwater will help alleviate pressure on freshwater resources and mitigate a potential water crisis in the years to come.
Brackish Groundwater
Brackish groundwater has a high concentration of total dissolved solids (TDS), including the common salt sodium chloride. It is often defined as water containing between 1,000 and 10,000 ppm TDS (seawater contains about 35,000 ppm TDS, and the secondary standard for drinking water in the US is 500 ppm TDS). Brackish and saline groundwater frequently occur in hydraulic contact with fresh groundwater and can place very considerable constraints upon the exploitation of the fresh water resources. Saou et al. [5], in their study, used the analysis of major and trace elements in water to determine the origin of the high salinity in a basin and to describe its spatial and temporal evolution.
Understanding the response of a saline groundwater body to freshwater abstraction requires knowledge of the total system, including head controls, flows, etc., which will clearly be of benefit in the management of adjacent freshwater resources, though it is frequently a difficult proposition. Traditionally, groundwater has been classified according to its TDS content, a classification applied particularly to non-fresh groundwater, as given in Table 1.
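A simple classification of a sample by its TDS value, using the thresholds quoted in the previous subsection (Table 1 itself is not reproduced here, so the class boundaries below are taken from the text and are indicative only), can be sketched as follows in Python.

```python
def classify_tds(tds_ppm: float) -> str:
    """Indicative TDS-based water classes; boundaries follow the values
    quoted in the text (1,000-10,000 ppm brackish, ~35,000 ppm seawater)."""
    if tds_ppm < 1_000:
        return "fresh"
    if tds_ppm < 10_000:
        return "brackish"
    if tds_ppm < 35_000:
        return "saline"
    return "brine / seawater-like"

# e.g. the shallow-groundwater average of 1911.96 ppm reported in the Abstract
print(classify_tds(1911.96))   # -> "brackish"
```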
Occurrence and Origin of Groundwater Salinity
As a result of chemical and biochemical interactions between waters and the material through or over which they flow, and to a lesser extent because of contributions from the atmosphere, waters acquire salinity in proportion to their flow experience. The salinity may be acquired within the ground as groundwater chemistry evolves, or may be introduced during deposition of the aquifer matrix or during subsequent groundwater or surface water movement. Because of the various ways in which salinity is imparted to groundwater, certain chemical signatures can be recognized which may also indicate the origin of the salinity [6].
Modern Seawater Intrusion
In estuaries and adjacent to coastlines, modern seawater intrusion into aquifers frequently occurs, either under natural flow controls or because of flows induced by abstraction.
Based on the sources of the intruding water bodies, seawater intrusion can be divided into two types: intrusion of saline water derived from modern seawater, and intrusion of subsurface brine and saline water derived from paleo-seawater in shallow Quaternary sediments. There are distinct differences in their formation, mechanism and damage. Subsurface brine intrusion is a special type that can cause very serious damage [7]. Various coastal environments in different coastal sections result in three types of intrusion: seawater intrusion, saline groundwater intrusion, and mixed seawater and saline water intrusion.
The recognition of modern seawater intrusion should not normally pose difficulties; however, the intruding water may be associated with saline groundwater of other origins or may have been modified by residence in the aquifer. In any case, a reasonably comprehensive hydrochemical interpretation is worthwhile, as it may shed light upon the mechanisms controlling intrusion or the rates of flow [6].
Geology and Hydro-Geologic Setting
The coastal plain hydrogeologic province is underlain by semi-consolidated to unconsolidated sediments ranging from Cretaceous to Holocene in age in south-eastern Ghana and in a relatively small isolated area in the extreme south-western part of the country.
Rocks of the Dahomeyan System of the Neoproterozoic era (550 Ma) underlie the Accra Plains and southern parts of the Eastern and Volta Regions. They extend from Ho in the Volta Region to Accra, the nation's capital. The rocks consist mainly of crystalline gneiss and migmatite, with subordinate quartz schist, biotite schist and sedimentary-rock remnants. The gneiss is generally massive and has few fractures. The two main varieties are silicic and mafic gneisses, which weather, respectively, to slightly permeable clayey sand and nearly impermeable calcareous clay. The generally impervious nature of the weathered zone and the massive crystalline structure of the rocks limit the yields that can be obtained from hand-dug wells or boreholes [1].
Study Objectives
The study objectives centered on using geochemical strategies to understand the processes and factors that control the evolution of water mineralization. Specifically, the study sought to: (i) determine the hydrogeochemical structure; (ii) assess the interrelationship between surface water and groundwater; (iii) delineate the configuration of fresh and saline water; and (iv) aid water management.
The area has wetlands and marshes, sand dunes and islands. The Songor wetland situated within the study area consists of a shallow brackish water lagoon (10-50 cm) with mudflats, riverine islands, a broad sandy beach southwards, flood plains with degraded mangroves and coastal savannah vegetation.
The lagoon is low-lying. The highest elevation above sea level is 75 m, whilst the lowest is 10 m. The maximum depth below mean sea level is 50 cm (Ecological Mapping of the Songor Ramsar Site, Ghana National MAB Committee, 2009). There is a narrow sand dune along the coast, with no cliffs on the smooth coastline. Furthermore, shoreline recession through tidal activity also requires management intervention.
The area receives about 750 mm of rainfall, as recorded at the Ada Foah Meteorological Station. Temperatures are generally high, ranging from 23°C to 33°C.
Methodology
A total of 37 groundwater samples from shallow wells used for human consumption, derived from the shallow aquifer, and 11 surface water samples, comprising 3 lagoon samples, 4 river samples and 4 stream samples, were collected. Prior to analysis, all samples were stored under hygienic conditions and at the required temperatures in the water chemistry laboratory of the Geomorphology and Environmental Geology Group, Wadia Institute of Himalayan Geology. To prevent changes in chemical composition, samples were kept in a portable ice chest under freezing conditions immediately after collection; Wheatstone [10] has shown that freezing has no adverse effect upon a wide range of determinants. After adjusting the samples to room temperature, the pH, TDS and electrical conductivity were measured, with a precision of 0.01 for both parameters, using a LaMotte (USA) meter on unfiltered samples. The analysis of major and minor ions was performed using ion chromatography (DIONEX ICS-1000 Series).
GPS coordinates of sample locations
The geographical locations of the samples were recorded with the aid of a handheld GPS. At each sample point, three sets of readings were taken after observing for about five minutes, and the average value was taken as the coordinates. GIS maps of the distribution of pertinent parameters are used to aid water management and to examine ideas about geologic structure, the location and flowpaths of groundwater, the interaction of surface water and groundwater, the configuration of fresh and saline water, and possible areas from which to extract additional brackish groundwater for treatment to potable use, and to describe their spatial and temporal evolution.
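The averaging of the three GPS readings per sampling point can be written, for example, as the small helper below; the coordinate values are invented for illustration.

```python
def mean_coordinate(readings):
    """readings: list of (latitude, longitude) tuples from one sampling point."""
    lats, lons = zip(*readings)
    return sum(lats) / len(lats), sum(lons) / len(lons)

# three hypothetical readings taken about five minutes apart at one shallow well
print(mean_coordinate([(5.7841, 0.6320), (5.7843, 0.6322), (5.7842, 0.6321)]))
```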
Results
The results of the chemical analysis of the samples are presented as summary statistics in Table 2. The relative abundance of cations in the groundwater is in the order Na > Mg > Ca > K and that of anions is Cl > SO4 > NO3 > HCO3 > F. Na and Cl are the dominant cation and anion, respectively, similar to what was obtained by Mensah and Bartarya [11].
Chlorides
Chloride is assumed to be a conservative tracer, and the relationship between chloride and the other major and trace elements controlling the chemical composition of surface and shallow groundwater helps in understanding the processes of mineralization (salinity) increase. The Na-Cl relationship is often used to identify the mechanisms of salinity acquisition and marine intrusion. Na concentrations are lower than Cl and positively correlated with it (r² = 0.998) (Fig. 2). A linear relation between Na and Cl represents simple mixing of the fresh groundwater with seawater.
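The Na-Cl relationship quoted above (r² = 0.998) can be checked with an ordinary least-squares fit, for example with scipy; the concentration arrays below are placeholders, not the measured values.

```python
from scipy.stats import linregress
import numpy as np

# placeholder concentration pairs (mg/L); substitute the measured values
cl = np.array([100.0, 500.0, 1500.0, 3000.0])
na = np.array([ 60.0, 320.0,  950.0, 1900.0])

fit = linregress(cl, na)
print(f"slope = {fit.slope:.3f}, r^2 = {fit.rvalue ** 2:.3f}")
```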
The high concentration of sodium and chloride in the groundwater could be related to the weathering of silicate rocks and/or seawater intrusion, or to the dissolution of halite related to rock weathering and/or evaporites within the alluvial sediments. This pattern is reflected in the Na+/Cl− molar ratios, which range from 0.34 to 5.45. A Na+/Cl− ratio equal to one indicates that halite dissolution could be responsible for the sodium concentration in the water samples. Based on the Na+/Cl− ratios, the majority of the core of the study area shows ratios less than 1.5, indicating halite dissolution, whilst the fringes outside the core exhibit ratios greater than 1.5, indicating that silicate weathering also contributes sodium in the study area. A boundary can be identified where the two processes occur at the same time. Groundwater salinity may also relate to the formation of salt layers by leaching from the soil surface during evaporation in dry seasons. The interpretation of the results using the correlation of major elements with chloride and the variations of the SO4/Cl and Mg/Ca ratios revealed zones with strong salinities as a result of marine water and/or dissolution of evaporitic formations. Several factors control groundwater chemistry, which can be related to the physical situation of the aquifer, bedrock mineralogy and weather conditions.
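The Na+/Cl− molar ratio used in this interpretation is obtained by converting mass concentrations with the molar masses of Na (about 22.99 g/mol) and Cl (about 35.45 g/mol); the sketch below applies the thresholds discussed above to a hypothetical sample.

```python
NA_G_PER_MOL, CL_G_PER_MOL = 22.99, 35.45

def na_cl_molar_ratio(na_mg_l: float, cl_mg_l: float) -> float:
    return (na_mg_l / NA_G_PER_MOL) / (cl_mg_l / CL_G_PER_MOL)

def interpret(ratio: float) -> str:
    """Indicative reading following the thresholds discussed in the text."""
    if ratio < 1.5:
        return "halite dissolution / marine influence likely dominant"
    return "silicate weathering likely contributing additional Na"

r = na_cl_molar_ratio(na_mg_l=460.0, cl_mg_l=820.0)   # hypothetical sample
print(round(r, 2), interpret(r))
```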
In hydrogeochemical groundwater evolution, the chloride ion tends to be the most conservative, in that it is readily removed from matrix materials but rarely precipitates under dilute solution conditions, according to the literature. Chloride concentrations therefore normally increase down the hydraulic gradient (Fig. 6) and with groundwater flow experience and residence time. Chloride is thus a good indicator of groundwater flow direction and of better permeability conditions. This phenomenon is observed across the study area, with increased mineralization (salinity) at lower elevations (<10 m) and decreased mineralization at higher elevations (>10 m), as shown by the two circled zones in Fig. 8.
Conjunctive water use
The increasing acuteness of water scarcity problems, poor water quality and the management of rising water tables worldwide require the adoption of a double approach of water supply management and water demand management. The adoption of an integrated river basin management approach for elaborating policies and strategies of water resources development, management and conservation would help consider the water resources as one system and would avoid a water resources development approach focused only on surface water. This approach also facilitates the management of the resource itself, allowing a better understanding, by water users, of the hydrological issues involved. Governments tend to consider river basins as water resources management units and as a spatial basis for the formulation of water management strategies integrating all cross-sectoral issues such as water resources conservation, environment, water resources allocation, water demand management, etc.
Map of Mg/Ca ratios across study area
The conjunctive use of surface and groundwater is one of the strategies of water supply management that has to be considered to optimize water resources development, management and conservation within a basin. Conjunctive use of surface and groundwater consists of harmoniously combining the use of both sources of water in order to minimize the undesirable physical, environmental and economic effects of each solution and to optimize the water demand/supply balance. Usually, conjunctive use of surface and groundwater is considered within a river basin management programme, i.e. both the river and the aquifer belong to the same basin.
The decrease in good quality water resources stresses the need to use surface water and groundwater resources conjunctively for water supply, irrigation, etc. Conjunctive use permits the utilization of poor quality water, which cannot be used as such for potable supply or irrigation due to its harmful effects.
Properly managed integrated water resources systems can yield more water, at more economic rates, than separately managed surface-water or groundwater systems. Conjunctive use of surface water and groundwater has been extensively studied, and a number of methods/techniques have been reported for supporting conjunctive water use planning and management [12,13,14,15,16,17,18,19,20].
Plot of salinization vs elevation
CONCLUSION
Acquiring better knowledge and understanding of hydrogeological resources will allow policy makers to make better decisions about how to manage brackish groundwater resources and protect aquifers, both brackish and fresh. The management challenge of increasing rural and municipal supply is to capture more of the fresh groundwater on its way to the ocean and extract some of the brackish groundwater for treatment using reverse osmosis. The deep multiple-depth well sites could be used to characterize the geologic, hydrologic, and geochemical systems and to monitor seawater intrusion, land deformation, and effects on coastal systems.

Correlation Cl vs SO4
RECOMMENDATIONS
Our objective was to contribute to a better understanding of the process of increased mineralization in the shallow groundwaters of the study location and, as a result, to make a few recommendations to promote brackish water management and use in the region. They are: 1) National and local governments should be more proactive in institutionalizing conjunctive water usage practices to save fresh water. 2) The irrigation development authority and the water and sanitation ministry should play an active role in chalking out an action plan for conjunctive water use for agricultural purposes. 3) Further research on alternative crops which will be more profitable by using conjunctive water should be conducted. 4) Creating awareness amongst the farmers about practicing conjunctive water use. | 2020-12-24T09:13:12.484Z | 2020-12-14T00:00:00.000 | {
"year": 2020,
"sha1": "a560e2ae72d53de3ed6828c147539203821032d8",
"oa_license": null,
"oa_url": "https://www.journaljgeesi.com/index.php/JGEESI/article/download/30248/56781",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5ce85e3895f5ffd418391e76cb3741f04c6d3406",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
89731180 | pes2o/s2orc | v3-fos-license | Evaluation of Reference Genes for Differential Gene Expression Study of Bovine Tuberculosis
Relative quantitative real-time PCR (qPCR) assays serve as important tools for validating differential gene expression data. A reference gene that is stably expressed across sample types and experimental treatments is crucial for accuracy in interpretation of relative qPCR data. Twelve previously validated reference genes were evaluated in this study to identify the most suitable reference gene that can be used for gene expression study of bovine tuberculosis (bTB) infected and bTB test-false positive cattle, in peripheral blood mononuclear cells after a 4 hour or after an overnight stimulation with bovine tuberculin antigen. Stability of the candidate reference genes was evaluated using the BestKeeper, geNorm and NormFinder programs. The SDHA was found to be the most stably expressed reference gene, regardless of infection status and varying length (4 hours or overnight) of antigen stimulation, while expression of many widely used reference genes is not stable under the studied experimental conditions. We also confirmed that the geNorm and NormFinder programs yielded similar findings in determining the stability of reference genes, which differed largely from the BestKeeper program. This finding stresses the importance of validating the reference gene(s) chosen for each experimental study, and the need for using multiple programs for the evaluation.
Introduction
Quantitative real-time PCR (qPCR) has become the method of choice for quantification of gene expression levels. It is a very sensitive and accurate method for quantification of mRNA transcripts, allowing a direct measure of differential transcription of mRNA for genes of interest, and an indirect measure of regulation of gene expression in biological process [1,2]. Relative qPCR is a rapid and robust method for quantification, and is preferred over the absolute qPCR when the absolute copy number of mRNA is not required [1]. In relative qPCR, a reference gene is used to normalize disparities in RNA recovery and cDNA synthesis efficiency. This permits true comparisons of gene regulation between samples from within a group, and between samples among different groups [1,3,4]. The reference gene is subjected to the same experimental conditions as the genes of interest, and thus serves as a normalizer for correction of experimental variability. The underlying assumption is that the reference gene is expressed at a constant level, and the level of expression remains unchanged across sample types and experimental treatments. Thus, the detected level of expression of the reference gene will correlate with experimental error, which can be normalized directly. Use of a reference gene with an expression level that fluctuates randomly can lead to increased non-specific variation, and use of a reference gene with an expression level that changes with the sample type or with experimental treatments can lead to erroneous interpretation of data [1,5,6].
Conventional housekeeping genes such as glyceraldehyde-3-phosphate dehydrogenase (GAPDH), beta actin (ACTB) and 18S rRNA were widely used as reference genes in early gene expression studies because it was assumed that their expression levels remained constant. However, many studies have shown that expression of housekeeping genes can be influenced by sample types and experimental treatments [7][8][9][10]. Use of the 18S rRNA as a reference gene was based on the assumption that the rRNA:mRNA ratio would be the same in all samples and would remain unchanged after treatment; however, that assumption is not always valid [11]. Moreover, the abundance of rRNA, as compared with mRNA, in the total RNA sample could introduce technical issues in the qPCR assay. The disproportionate rRNA:mRNA ratio can complicate optimization of the PCR reaction and performance of the qPCR assay when the preferred fixed sample concentration for each reaction is used. The outcome can be a wide range of qPCR amplification plots for the rRNA target that affects the baseline subtraction step in qPCR analysis [2]. Proper validation of reference genes under specific experimental conditions and sample types is critical for accurate gene quantification by the relative qPCR method [6].
Several programs such as BestKeeper [12], geNorm [10] and NormFinder [13] have been developed to evaluate the stability of candidate reference genes. These programs employ different algorithms for calculation of stability values, which may result in different estimates for a stability value. Side by side evaluations using 2 or all 3 of these programs have shown that the best agreement is between geNorm and NormFinder for ranking the most and least stable genes [14][15][16]. To date, there is no single best program for ranking of suboptimal reference genes. The ranking of the candidate genes by 2 or all 3 programs may be necessary before deciding on the best choice for a reference gene [17]. Alternatively, a normalization factor based on 2 to 3 reference genes has been proposed for use as the normalizer. Use of multiple reference genes for a normalization factor can improve the accuracy of quantification [10]. However, this method is costly, labor intensive, and not practical for use with a large number of samples or when resources are limited. This is because all of the normalizer genes must be included on every qPCR plate for every sample [10,18].
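Where multiple reference genes are combined into a single normalization factor, the usual approach (following the geNorm recommendation) is to take the geometric mean of their relative quantities for each sample. The sketch below only illustrates that arithmetic with invented gene names and numbers; it is not code or data from this study.

```python
import numpy as np

def normalization_factor(rel_quantities):
    """Per-sample normalization factor as the geometric mean of the
    relative quantities (e.g., 2**-dCt) of several reference genes.

    rel_quantities: dict mapping gene name -> array of per-sample values.
    Returns one normalization factor per sample.
    """
    data = np.vstack([np.asarray(v, dtype=float) for v in rel_quantities.values()])
    return np.exp(np.mean(np.log(data), axis=0))  # geometric mean across genes

# Hypothetical relative quantities for three reference genes in four samples.
ref = {
    "SDHA":  [1.02, 0.95, 1.10, 0.98],
    "H3F3A": [0.90, 1.05, 1.00, 1.08],
    "YWHAZ": [1.10, 0.92, 0.97, 1.01],
}
nf = normalization_factor(ref)

# A target gene's relative quantity is then divided by the per-sample factor.
target = np.array([2.4, 1.1, 3.0, 0.8])
print(nf, target / nf)
```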
Bovine tuberculosis (bTB) is a high impact disease for both animal and human health [19]. A microarray hybridization study was used to examine gene expression profiles from small groups of bTB infected and antemortem test-false positive cattle in the state of Michigan in the United States [20][21]. The overall results showed that differential gene expression profiles exist between test-false positive and true bTB infected cattle. Quantitative real-time PCR (qPCR) assays were chosen to validate differential gene expression data derived from the microarray hybridizations, and to test selected molecular markers for differential diagnosis of the disease.
In this study, a search for a suitable reference gene was performed to identify the best candidate gene that can be used for qPCR of gene expression profile of bTB infected and bTB test-false positive cattle, using mRNA harvested from peripheral blood mononuclear cells (PBMC) after a 4 hour or overnight stimulation with bovine tuberculin antigen (bPPD). Before the initiation of the study, a literature search was performed to identify reference genes previously used in gene expression studies employing i) various bovine sample types, ii) leukocytes or peripheral blood mononuclear cells (PBMC) of species other than bovine, or iii) antigen stimulation studies of human tuberculosis patients. Twelve commonly used and previously validated reference genes were chosen for evaluation using the BestKeeper, geNorm and NormFinder programs. The objective of this study was to determine the most suitable reference gene (with minimal variability in expression level) for qPCR assays of differential gene expression of bTB infected and bTB test-false positive cattle after a 4 hour or overnight stimulation with bPPD.
Materials and Methods
A total of 12 commonly used reference genes as listed in Table 1 were selected from the literature to be considered for use in this study. Published primer sequences for those genes were evaluated in bioinformatics software (a). Primers that met the study criteria were used as published, otherwise new primers were designed using available software (a, b). All primers were synthesized commercially (c), concentration of the PCR primers and qPCR conditions were evaluated and optimized prior to testing of the samples (Table 1).
Experimental Animals and bTB Infection Status
Experimental animals in this study consisted of cattle culled from herds because they showed positive reactions in antemortem diagnostic tests for bTB. The cattle were transported to the Diagnostic Center for Population and Animal Health at Michigan State University the day before a regulatory bTB postmortem examination was done.
Three study groups of cattle were used, which included bTB positive cattle (bTBP) that were positive reactors in antemortem diagnostic tests for bTB and confirmed positive for bTB by postmortem examination; double test-false positive reactor cattle (DFP) that were positive reactors on primary and secondary antemortem diagnostic tests and were negative for bTB on postmortem examination; and single test-false positive cattle (SFP) that originated from bTB positive farms. This last group of cattle was positive reactors in primary and negative on secondary antemortem diagnostic tests, and was negative for bTB on postmortem examination.
Blood Collection and Antigen Stimulation
Whole blood (30-45 ml) was collected from 6 animals in each of the bTBP, DFP, and SFP groups (total of 18 animals) into 10 ml heparinized tubes immediately before euthanasia for postmortem examination. Within 3 hours of collection, the blood from each animal was divided into 3 individual sterile conical tubes and subjected to 1 of 3 treatments. Two of the tubes were stimulated with bPPD (purified protein derivative prepared from the filtrate of a heat-killed M. bovis) (d) at 20µg bPPD/ml of blood. One aliquot was stimulated with bPPD for 4 hours and the other one was stimulated with bPPD overnight (18-22 hours) before being processed for buffy coat harvest. The last tube was harvested without stimulation, serving as nil control. The 2 different stimulation time-points were evaluated simultaneously, to identify a reference gene that could be used for both the 4 hour stimulation study [20] and the overnight stimulation study [21].
Isolation of Peripheral Blood Mononuclear Cells, and Purification of RNA
After stimulation, the blood was centrifuged at 1200 x g for 15 minutes at 18°C to form layers of plasma, buffy coat cells, and red blood cells. Buffy coat cells and 2 ml of red blood cells immediately below the buffy coat cell layer were harvested and transferred to new 50 ml conical tubes. Two rounds of hypotonic lysis of red blood cells were performed by addition of ice-cold diethyl pyrocarbonate treated-sterile de-ionized water for 2 minutes, followed by addition of an equal volume of ice-cold diethyl pyrocarbonate-treated sterile 2X saline (1.7 % w/v NaCl). Intact cells were pelleted by centrifugation at 1200 x g for 15 minutes at 18°C after the first round of hypotonic lyses, then at 190 x g for 10 minutes at 4°C after the second round. After the second round of hypotonic lyses, the supernatant was decanted and 1 ml TRIzol Reagent (e) was added to the loose cell pellet for each 9 ml beginning volume of whole blood. This mixture was frozen at -84°C until use. For isolation of RNA, the mixture was thawed on ice, and subjected to 10 passages through a 20 gauge needle. The resulting homogenate was divided into 1 ml aliquots and the remainder of the RNA extraction procedure was performed according to the manufacturer's recommendations. The total cellular RNA from each animal was then pooled into a single tube and treated with RQ1 RNase-Free DNase (f) according to manufacturer's recommendations. The treated RNA was extracted again using equal volumes of phenol-chloroform, followed by purification using MEGAclear Purification Kit (g). The purified RNA was immediately stored at -84°C until use.
Before use, the RNA from each of the study cattle was thawed on ice and the integrity and concentration of the RNA was determined using an Agilent 2100 Bioanalyzer and RNA Nano 6000 Kit (h).
cDNA Synthesis
Synthesis of cDNA was performed with 2 µg of total RNA from each study animal using Superscript™ II Reverse Transcriptase and Oligo (dT) 12-18 Primer (e), according to manufacturer's recommendations. Upon completion of cDNA synthesis, the RNA template in each reaction was removed with 1U of RNase H (e). The cDNA was purified using QuickClean Enzyme Removal Resin (i) according to the manufacturer's recommendations. The concentration of purified cDNA was measured using NanoDrop ND-1000, and diluted to final concentration of 10 ng/µl. All cDNA were stored at -20°C until use in qPCR assays.
qPCR Assay
A constant amount of cDNA was used in duplicate qPCR reactions for each reference gene. The qPCR assays were performed using SYBR Green PCR Master Mix and an ABI 7500 Real-time PCR System (j). Each 20 µl reaction consisted of 1x SYBR Green PCR master mix, 20 ng of cDNA and a pair of primers at pre-determined optimal concentrations (Table 1). The reaction conditions were 95 °C for 10 minutes, then 40 cycles of 95 °C for 15 seconds and 58 °C for 1 minute. Dissociation curve analysis was done for each reaction to verify that the specific amplified products were obtained.
qPCR Data Analysis
The raw cycle threshold (Ct) value for each reaction was exported into an Excel spreadsheet; the ΔCt value was calculated as the difference in Ct of a stimulated sample (4 hours/overnight) from the Ct of the non-stimulated sample from a given animal [Ct(stimulated) minus Ct(unstimulated)].
The stability of each reference gene was evaluated and compared in BestKeeper, geNorm and NormFinder programs. The most stable gene, as determined by use of all programs, was selected and used as the reference gene for subsequent qPCR assay.
Results
Initially, RNA samples from 6 cattle (2 from each group) were used to test the 12 selected reference genes (Table 1) using qPCR assay. The qPCR data (Ct value) of the 12 reference genes were generated for each animal using RNA that was subjected to 3 different treatments (samples without stimulation, 4 hours or overnight antigen stimulation). The data were then evaluated using the BestKeeper, geNorm and NormFinder programs. Both the geNorm and the NormFinder programs used the ΔCt value for calculation of the stability value, while the BestKeeper used the raw Ct value. The ΔCt value was calculated as [Ct(4 hours/overnight stimulated sample)] minus [Ct (no antigen stimulation sample)], and the 2^-ΔCt value was used to compute the stability value in the NormFinder or the geNorm programs.
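As a minimal illustration of this normalization step, the following sketch computes ΔCt and 2^-ΔCt for one gene across a few animals; the Ct values are invented for the example and are not data from the study.

```python
import numpy as np

# Hypothetical raw Ct values for one reference gene in three animals.
ct_unstimulated = np.array([21.8, 22.1, 21.5])
ct_stimulated_4h = np.array([22.3, 22.0, 21.9])   # 4 hour bPPD stimulation
ct_stimulated_on = np.array([22.9, 22.4, 22.2])   # overnight bPPD stimulation

# dCt = Ct(stimulated) - Ct(unstimulated), computed animal by animal.
dct_4h = ct_stimulated_4h - ct_unstimulated
dct_on = ct_stimulated_on - ct_unstimulated

# 2**-dCt expresses the stimulated sample relative to the unstimulated one and
# is the kind of value fed into the geNorm/NormFinder stability calculations.
print(dct_4h, 2.0 ** -dct_4h)
print(dct_on, 2.0 ** -dct_on)
```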
In the geNorm program, the gene expression stability measure (M) for a reference gene was calculated as the average pairwise variation (V) for that gene with all other tested reference genes. Stepwise exclusion of the gene with the highest M value then allowed ranking of the tested genes according to their expression stability. The NormFinder program utilized a mathematical model to estimate the reference gene's stability based on direct estimation of expression variation and the taking into account of sample subgroups within the data (no sample subgroup was defined in the current study). In the BestKeeper program, the raw Ct values of all data points were used to compute the geometric means, arithmetic means and standard deviation for each reference gene. Stability of the reference gene was determined based on repeated pair-wise correlation analysis and standard deviation of geometric means.
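The geNorm stability measure can be outlined in a few lines: the pairwise variation V for two genes is the standard deviation of their log2 expression ratios across samples, and M for a gene is the mean of its V values against all other candidates. The sketch below is a schematic re-implementation with made-up numbers, not the program actually used here.

```python
import numpy as np
from itertools import combinations

def genorm_m(expr):
    """Gene-stability measure M for each candidate reference gene.

    expr: dict mapping gene name -> array of relative quantities (2**-dCt)
          across the same set of samples.
    Returns dict gene -> M (lower M means more stable expression).
    """
    genes = list(expr)
    log_expr = {g: np.log2(np.asarray(expr[g], dtype=float)) for g in genes}
    v = {g: [] for g in genes}
    for a, b in combinations(genes, 2):
        # Pairwise variation: SD of the log2 ratio of the two genes.
        pairwise_sd = np.std(log_expr[a] - log_expr[b], ddof=1)
        v[a].append(pairwise_sd)
        v[b].append(pairwise_sd)
    return {g: float(np.mean(v[g])) for g in genes}

# Hypothetical relative quantities for four candidate genes in five samples.
expr = {
    "SDHA":  [1.0, 1.1, 0.9, 1.05, 0.95],
    "GAPDH": [1.0, 2.6, 0.4, 1.9, 0.6],
    "ACTB":  [1.0, 1.8, 0.7, 1.4, 0.8],
    "H3F3A": [1.0, 1.2, 0.85, 1.1, 0.9],
}
print(genorm_m(expr))  # the gene with the smallest M is the most stable
```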
The ranking of the stability values for all 12 reference genes by the three programs is listed in Table 2. The stability ranking for the reference genes varied among the 3 programs, especially with the BestKeeper program that was not based on ΔCt values. Significant discrepancies were observed for a few genes, such as SDHA, B2M and ACTB. Some consistency in ranking was observed among the very unstable gene candidates. The H2A was determined as the least stable gene, with GAPDH, TBP and HPRTI also considered as unstable genes. The most stable gene was not clearly identified using data from 6 animals tested.
Based on the initial results, 5 of the most stable genes ranked by all 3 programs were selected for further evaluation; these genes are SDHA, H3F3A, YWHAZ, B2M and UBC. Despite the lower stability ranking, ACTB and GAPDH were also selected for further evaluation because these were the most commonly used reference genes in the published literature. Additionally, 12 blood samples that represented all 3 study groups were added to increase the sample size to 18 cattle. The qPCR assays were performed on RNA of 18 animals (each with 3 treatments, as mentioned above) for the 7 selected reference genes. Data analysis was performed as in the initial run. Based on the data from the 18 animals, Figure 1, Table 3 and Table 4 show the output of analysis by the geNorm, NormFinder and BestKeeper programs, respectively. Table 5 summarizes the overall stability rankings of the 7 selected genes by the 3 different programs. Discrepancy was again observed in stability rankings among the 3 programs. The GAPDH gene remained the most unstable among the 7 selected genes. Interestingly, ACTB was ranked the third most stable gene by the BestKeeper program and was ranked as the second least stable gene by the other 2 programs. Overall, SDHA was shown to be the most stable gene by all 3 programs. Based on this result, SDHA was determined as the most stable reference gene for this study.
Discussion
It is clear from the literature that expression of many housekeeping genes can be influenced by different experimental conditions, which should prevent their use in qPCR assays when those conditions are encountered [6,7]. Based on this knowledge, 12 candidate reference genes were selected for evaluation in this study, with the goal of finding the most stable reference gene that can be used in the qPCR assays for validation of differential gene expression associated with the bovine tuberculosis disease status. The experimental design of this study includes comparison of animals with and without M. bovis infection (cause of bovine tuberculosis), as well as samples with and without exposure to antigen stimulation. Under these experimental conditions, expression levels of H3F3A, YWHAZ, B2M, UBC, HMBS, RPII, ACTB, HPRTI, TBP, GAPDH and H2A were all shown to be unstable. Surprisingly, GAPDH and ACTB, the 2 most widely used reference genes in early gene expression studies by many researchers, were found to be the least stable genes. In a similar gene expression study of human tuberculosis, GAPDH, ACTB, and B2M were found to be unstable when tuberculin antigen stimulation was used [5]. Furthermore, the use of GAPDH as a reference gene has been reported to result in erroneous interpretation of the IL-4 gene expression in TB patients [18]. The influence of a stimulant on the expression of housekeeping genes in various cell cultures has also been reported [9,23]. Our findings are in agreement with previous findings, suggesting that stimulation of samples with bPPD antigen has an impact on the expression level of those commonly used housekeeping genes tested in this study. Wedlock et al. [22] reported increased expression of housekeeping molecules such as gamma-actin, ACTB, and B2M in M. bovis infected macrophages. Our study using comparisons of animals with and without M. bovis infection yields similar results, confirming the influence of infection status on the expression level of many commonly used housekeeping genes.
Expression levels of housekeeping genes have been found to vary among different cell types (normal versus cancerous cells) [24], different physiological states of cells [25] or infectious status [26]. All of these previous findings stressed the importance for validation of reference gene(s), to ensure that expression is stable irrespective of the experimental conditions. In the current study, the data clearly shows the instability of many candidate reference genes under our experimental conditions. The SDHA gene encodes the succinate dehydrogenase complex subunit A protein, which is a major catalytic subunit of succinate-ubiquinone oxidoreductase, a complex of the mitochondrial respiratory chain. The SDHA gene appears stable in its expression level, and therefore has been evaluated for use as a reference gene in many research studies. For the design of this experimental study, SDHA was found to be the most stable reference gene, across the infection status (bTB positive or negative) and across experimental conditions (4 hours or overnight antigen stimulation). Stability of SDHA has been previously validated in gene expression studies involving bovine polymorphonuclear leukocytes [27], the developing bovine embryo [8], and bovine liver and pituitary tissues [28], which is in agreement with our findings.
BestKeeper, geNorm and NormFinder are programs developed to evaluate the stability of candidate reference genes. Due to the use of different algorithms for calculation of stability values, differences in outputs by these programs have been previously reported [6, 14-17, 24, 26, 29]. Side by side evaluations using multiple programs have been widely recommended for determination of the best choice for a reference gene. In this study, the ranking of gene stability by the geNorm and NormFinder programs was similar, and seldom in agreement with the BestKeeper program. Overall, the programs agreed on the least stable genes and, to a lesser extent, on the most stable genes. A similar observation was reported by Perez et al. [14] using the programs to evaluate reference genes for the study of gene expression in bovine muscle tissue, as well as Wang et al. [24] for evaluation of reference genes for the study of human laryngeal cancer. Wood et al. [16] reported good agreement when evaluating reference genes with all three programs, and Skovgaard et al. [15] found good agreement between geNorm and NormFinder for ranking of reference genes. Our finding is in agreement with other researchers, confirming the importance of using multiple programs for evaluation of candidate genes in order to find the best choice for a reference gene for relative qPCR assays. To date, besides the 3 programs used in this study, there are also many other freely available programs for researchers to utilize for reference gene evaluation. Without extra cost, by putting in a little extra time and effort in data analysis, researchers can be assured that they have chosen a suitable reference gene for their study and avoid possible erroneous interpretation of an expression study using relative qPCR assays.
Conclusions
Results from this study showed that SDHA was found to be the most stable reference gene, across the infection status (bTB positive or negative) and across experimental conditions (4 hours or overnight antigen stimulation), and is thus the reference gene of choice for use under these experimental conditions. Our results also showed that expression levels of many widely used reference genes are not stable under the studied experimental conditions, thus stressing the importance of validation of suitable reference gene(s) for each experimental study. An adequate reference gene must show stable expression irrespective of the experimental conditions. In the current study, we found discrepancies among 3 commonly used programs in determining the stability of reference genes, where geNorm and NormFinder programs yield similar findings, and were seldom in agreement with the BestKeeper program. Our findings confirmed the need for using multiple programs for evaluation of candidate genes before deciding on the best choice for a reference gene for relative qPCR assays. | 2019-04-02T13:08:03.693Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "05787b18af621e05b8cc4cf8821572a6a1f60639",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20170530/AZB2-11409450.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "37b0b660a75ac4d46012d47f9618b324b296eb1f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
71694494 | pes2o/s2orc | v3-fos-license | Simple Method of Forest Type Inventory by Joining Low Resolution Remote Sensing of Vegetation Indices with Spatial Information from the Corine Land Cover Database
The paper presents a simple, inexpensive, and effective method allowing for frequent classification of the forest type (coniferous, deciduous, and mixed) using medium and low resolution remote sensing images. The proposed method is based on the set of vegetation indices such as NDVI, LAI, FAPAR, and LAIxCab calculated from MODIS and MERIS satellite data. The method uses seasonal changes of the above-mentioned vegetation indices within the annual cycle. The main idea was to collect and carefully analyse seasonal changes in vegetation indices in a given ecosystem type proven by the Corine Land Cover, 2006 database, and to compare them afterwards with those of a particular forest under study. Each type of a forest ecosystem has its own specific dynamics of development, thus enabling recognition of the type by comparing temporal changes of the proposed measures based on vegetation indices. Temporal measures of changes were created for selected reference stands by the ratios of particular indices determined in July and April, which are the middle and the beginning of a vegetation season in Poland, respectively. The analysed vegetation indices were additionally provided with chosen statistical measures. The statistical analyses were carried out for Poland's main national parks which represent the natural stands of temperate climate.
Introduction
Forests cover about 31% of the land [1], and their role in the natural environment and in human activities is essential. For example, forests have a significant influence on the composition of dust and atmospheric gases, air and soil temperature, and the amount of water present in forested areas, both in soil and in vegetation cover. Forests also play an important role in the exchange of water between the soil and the atmosphere. Forest management requires timely and accurate information on forests [2] and remote sensing methods have been used for forest inventory for decades [3][4][5]. A major problem in forest research is the diversity of ecosystems, which provides for the possible couplings of various physical and biological processes, especially when there is a need for mesoscale assessments. It is therefore essential to know how large the areas represented by the data (satellite, terrestrial, and statistical assessments) are and with what time resolution the measurements are performed. For all these reasons, studies on forest ecosystems have always required considerable effort and resources.
In remote observations of forests, spectral analysis allows for the determination of various biophysical parameters such as NDVI, LAI (leaf area index), FCover (fraction of vegetation cover), FAPAR (fraction of absorbed photosynthetically active radiation), and LAIxCab (canopy chlorophyll content of A and B types) [6][7][8][9]. These indicators and their properties are well described in the literature [10,11], so they will not be discussed in detail here. Applying these vegetation indices, it is possible to determine quite accurately, both quantitatively and qualitatively, the state of vegetation on the Earth's surface. The vegetation indices are closely related to the fundamental energy and biological and physical phenomena, but at the same time they are distant from the direct parametric assessments. Thus, one uses vegetation indices to evaluate more complex quantities, such as biomass and its productivity, the ability to bind water with vegetation or even with soil under the trees, diverse statistical characteristics of trees (e.g., diameter at breast height, cross-sectional area), evapotranspiration, and the ability of carbon binding.
It is worth mentioning that diverse sophisticated vegetation indices, as well as their processors specialized for specific satellite imagery, are still being developed, giving new impetus to the development of better methods for classification of forests.
High resolution observations are most commonly associated with the necessity of obtaining and processing large amounts of data [10,11] while (contrary to popular opinion) they are usually rare, and even incidental, limited to the time of satellite flight, depending on cloud conditions, research cost, and so forth. Therefore, it is generally accepted that satellite data with average or low spatial resolution are also very valuable, because such data are easily and regularly accessible, often at no cost, as they are open to the public.
It should also be emphasized that high spatial resolution, in observing the Earth's surface, including the forest ecosystems, is not the only or even the most important criterion of quality in satellite observations. Increasing spatial resolution capabilities have caused a sharp increase in the amount of data and resources needed to process them, resulting in increased costs of research and investment of human labour. A suitable compromise in the selection of spatial and temporal resolutions is therefore essential [12,13]. An example of a large-scale, technologically advanced satellite program designed to study the content of water in the soil, as well as the amount of water bound in vegetation, including forests, is the mission SMOS (soil moisture and ocean salinity), working in the range of microwave radiation (1.400-1.427 GHz), which ensures the relevance of the large-area assessments. At the same time, the resolving power of this probe is relatively low, about 32 to 50 km [14]. Inventory of forest environments obviously must also involve high resolution observations, including ground-based data, but such observations are not reliable enough if they do not enable comparisons of separate and isolated areas, such as, for instance, various national parks. Often there is not only the need for an exact assessment of detailed information as, for example, distribution of tree species, but also, especially in climate-forest interaction studies, of some essential forest features such as forest type or forest state, which can sometimes be difficult to distinguish in too small or directly neighbouring forest areas. Inexpensive and convenient observations at large scales and meso- (i.e., intermediate) scales are and will always be necessary for many regional assessments. Therefore, medium and low resolution methods of forest ecosystem observations are intensively developed together with high resolution ones.
The aim of this study was to develop an inexpensive method to distinguish between three selected classes of forest (coniferous, deciduous, and mixed) by means of satellite images of low (in terms of forest applications) resolution, based on seasonal changes in selected indicators of vegetation. The results of this method could be used, for example, to validate the aforementioned SMOS data or be used in climate-forest interaction investigations. It was assumed that the classification results could be easily updated annually, which is much more frequent than, for example, the spatially detailed Corine Land Cover database [15], widely used in the European Union, which has been updated at great expense every few years (namely, in 1990, 2000, and 2006). Therefore, the Corine Land Cover data should be considered as independent of seasonal changes and may serve for validating analyses with the use of lower resolution satellite sensors such as MERIS, MODIS, SPOT, CHRIS, and so forth.
In addition, the aim of this work was to motivate interest in remote observations at medium and low resolution, which are inexpensive, relatively easily accessible, and very well equipped with free tools available for data analysis. A particular value of spectral analysis is the opportunity to evaluate evapotranspiration of forest areas, as it is one of the most important elements of the water balance of forested areas.
Study Area
The statistical analyses were carried out for seven national parks of Poland that represent the typical, natural stands of temperate climate, namely, the Białowieski National Park, the Biebrzański National Park, the Bolimowski National Park, the Kampinoski National Park, the Kozienicki National Park, the Roztoczański National Park, and the Świętokrzyski National Park. The location of these parks on the map of Poland is shown in Figure 1. The total area of studied parks was about 2234 km², including about 1582 km² of coniferous, deciduous, and mixed forests, which is more than 70% of the examined area. The detailed information on land cover within the whole studied area is given in Table 1. As a test area, the Kampinoski National Park was selected, which is located in the vicinity of Warsaw, the capital of Poland. The total area of this park is about 385 km².
Satellite and Test Data Description
The study was based on low spatial resolution data, levels FR 1P and FR 2P, of the ENVISAT/MERIS (medium resolution imaging spectrometer, spatial resolution 300 m) satellite sensor, as well as on similar MOD15A2 and MOD13A2 data of the MODIS/TERRA (moderate resolution imaging spectroradiometer, spatial resolution 1000 m). The selection of images was guided by the above-described reasons, as well as by the evaluation of image quality in terms of the instantaneous cloud cover. MODIS images were obtained free of charge directly from the WIST site (The Warehouse Inventory Search Tool). MERIS images were also obtained free of charge in cooperation with the Space Research Centre, Polish Academy of Sciences, in the frame of the Cat-1 project AO-3275. All the preliminary operations and the analysis of vegetation indices from MERIS images were performed using the Visat (BEAM) program from Brockmann Consult and Contributors, shared free of charge by the European Space Agency (ESA). To prepare remote images of good quality before the main analysis, it was necessary to carefully perform many time-consuming operations such as calibration, scaling, and image orthorectification. Detailed description of these necessary, but time-consuming analyses is not possible in this paper. Figures 2 and 3 show examples of spatial distributions of exemplary indices (FAPAR MERIS and FAPAR MODIS) in the area covering the whole of the satellite image.
When performing the spatial analysis, GIS (Geographical Information Systems) tools were also used, namely ArcGIS 9.1, while the statistical calculations were made in the Statistica package. The Corine Land Cover, 2006 database was used to take into account only natural stands from the above-described seven national parks. To assure the best quality of results of GIS and statistical analyses, all pixels taken for calculations were carefully filtered. The area that included the above-mentioned three types of forests and was not obscured by clouds in the satellite images covered approximately 875 km².
Results and Discussion
The demonstration of significant differences between the three forest types (coniferous, deciduous, and mixed) in remote sensing data of relatively low resolution satellite images required proper selection of vegetation indices. This was made by carefully analyzing the major seasonal changes of vegetation indices, ranging from classical vegetation indices NDVI and LAI to much more sophisticated ones, such as FAPAR, fCover, LAIxCab, and MGVI. After preliminary analysis described in the previous section, seven different vegetation indices were chosen for the study: NDVI MERIS, FAPAR MERIS, LAI MERIS, LAIxCab MERIS, NDVI MODIS, and IL LAI MODIS.
In view of the fact that most conifer trees do not lose their needles, one can expect that their vegetation indices show much smaller seasonal changes than those of deciduous ones [16,17].
As an example of such behaviour, the histograms presented in Figures 4 and 5 show that the seasonal change of the FAPAR index for deciduous forests is much higher than that in the case of coniferous ones. In addition, one can notice that the value of the FAPAR index in April for coniferous forests is greater than that for deciduous forests, while in July the FAPAR index value is greater for deciduous forests. As can be seen in Figures 4 and 5, indices for mixed forests take, in both cases, intermediate values as compared with those for deciduous and coniferous ones.
This justifies the use of some indices based on seasonal changes in vegetation for the classification of forest types. In this work, the ratios of particular vegetation indices determined in July and April, which are the middle and the beginning of a vegetation season in Poland, were calculated. These ratios of vegetation indices are further known briefly as ratio indices, and indicated by the letters IL. For example,

IL FAPAR MERIS = (value of FAPAR from MERIS in July) / (value of FAPAR from MERIS in April). (1)

It is worth noting that in order to perform such calculations it was necessary to ensure that pixel values determined in July and April were taken exactly from the same place.
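A per-pixel reading of Equation (1) can be sketched with two co-registered arrays, as below; the function name, the no-data threshold and the toy FAPAR values are illustrative assumptions, not part of the BEAM/ArcGIS processing chain actually used in the study.

```python
import numpy as np

def ratio_index(index_july, index_april, valid_min=1e-3):
    """IL = index(July) / index(April), computed pixel by pixel.

    Both inputs must be co-registered arrays of the same vegetation index
    (e.g., FAPAR from MERIS); pixels that are missing or near zero in April
    are masked out to avoid meaningless ratios.
    """
    july = np.asarray(index_july, dtype=float)
    april = np.asarray(index_april, dtype=float)
    il = np.full(july.shape, np.nan)
    valid = np.isfinite(july) & np.isfinite(april) & (april > valid_min)
    il[valid] = july[valid] / april[valid]
    return il

# Tiny synthetic example: 2 x 3 FAPAR scenes for April and July.
fapar_april = np.array([[0.30, 0.45, 0.10], [0.50, np.nan, 0.40]])
fapar_july = np.array([[0.55, 0.50, 0.60], [0.52, 0.70, 0.80]])
print(ratio_index(fapar_july, fapar_april))
```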
Similar histograms to those for FAPAR were also obtained for all seven above-listed vegetation indices calculated from MODIS and MERIS satellite images. For all analysed national parks, the same marked tendency has been observed: ratio indices determined for coniferous forests had the smallest mean values, those determined for deciduous forests were the largest, whereas those obtained for mixed forests were intermediate. The average values of ratio indices calculated from the whole area of national parks are presented in Figure 6, whereas Table 2 shows the mean values of ratio indices calculated both for the whole area and separately for each national park. The distinct differences in mean values of ratio indices of specific forest types made it possible to use the ratio indices for classification purposes. To do so, also the ranges of all ratio indices corresponding to different forest types were determined for the whole studied area and separately for each national park. Determination of precise limits of the ranges of all ratio indices for the selected type of forest was a difficult task because the ranges of these indices overlap each other due to natural variations in the pixel values. One could expect that the ranges of ratio indices obtained from the whole study area were more reliable; therefore they were used to obtain the spatial distributions of the forest type in the test area. Table 3 presents the limits of ranges of ratio indices with the corresponding classification accuracy of the mixed forests. Ranges of ratio indices, when applied to the classification of forest type in the Kampinoski National Park, led to surprisingly good results both for coniferous and deciduous forests. The obtained accuracy ranged from about 60% to about 90%, depending on the chosen ratio index, which can be considered a good result [18,19].
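The classification itself then reduces to checking which range each ratio-index pixel falls into. In the sketch below the range limits are placeholders rather than the real limits reported in Table 3 (and, unlike in the real data, the ranges are assumed not to overlap), so it should be read as an outline of the procedure, not a reproduction of it.

```python
import numpy as np

# Hypothetical, non-overlapping range limits for one ratio index.
LIMITS = {
    "coniferous": (0.95, 1.15),
    "mixed":      (1.15, 1.35),
    "deciduous":  (1.35, 2.50),
}
CODES = {"coniferous": 1, "mixed": 2, "deciduous": 3}

def classify(il):
    """Assign each pixel of a ratio-index raster a forest-type code
    (0 = unclassified: NaN or outside all ranges)."""
    il = np.asarray(il, dtype=float)
    out = np.zeros(il.shape, dtype=np.int8)
    for name, (lo, hi) in LIMITS.items():
        mask = np.isfinite(il) & (il >= lo) & (il < hi)
        out[mask] = CODES[name]
    return out

il = np.array([[1.05, 1.22, 1.60], [0.98, np.nan, 1.40]])
print(classify(il))
```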
Tables 4 and 5 show the results of the classification for different types of forest for the test site, that is, the Kampinoski National Park using ratio indices calculated on the basis of MODIS and MERIS images, respectively.
The results presented in Tables 3 and 4 indicate a reasonably good classification accuracy of deciduous and coniferous forests for both types of images and apparently lower classification accuracy of mixed forests. The last effect can be attributed to the too coarse spatial resolution of the remote imagery. It is worth noting that the classification accuracy of the mixed forests was significantly improved when MERIS images with better spatial resolution (more than three times better) were used. It should also be stressed that the highest classification accuracy of the mixed forests was obtained using the IL FAPAR MODIS and IL FAPAR MERIS indices based on the FAPAR vegetation index. In order to check the proposed method spatially, maps of the test area were also drawn showing the spatial distribution of the forest types obtained by means of the low resolution imagery and then compared with true maps of forests from the Corine Land Cover database. As can be seen in Figure 7, a surprisingly good fit of both kinds of forest type distributions was obtained, in particular in areas with large surface and regular shape. It can be seen again that the distributions of forest types based on vegetation indices derived from satellite imagery of the MERIS sensor show a much better fit than those of the MODIS sensor.
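Accuracy figures of this kind can be obtained by comparing the classified raster with the Corine Land Cover reference class by class. The sketch below computes the fraction of correctly labelled pixels per forest type and uses toy arrays purely for illustration; it is not the GIS workflow used in the study.

```python
import numpy as np

def per_class_accuracy(predicted, reference, codes=(1, 2, 3)):
    """Fraction of reference pixels of each class that received the same
    label in the classified raster. Class codes absent from the reference
    yield NaN."""
    predicted = np.asarray(predicted)
    reference = np.asarray(reference)
    acc = {}
    for c in codes:
        ref_mask = reference == c
        n = int(ref_mask.sum())
        acc[c] = float((predicted[ref_mask] == c).sum()) / n if n else float("nan")
    return acc

# Toy example with codes 1 = coniferous, 2 = mixed, 3 = deciduous.
reference = np.array([[1, 1, 2], [3, 3, 2]])
predicted = np.array([[1, 2, 2], [3, 3, 1]])
print(per_class_accuracy(predicted, reference))
```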
Summary and Conclusions
This paper describes a rapid and inexpensive method for the annual classification of forest ecosystems into three categories: coniferous forests, mixed forests, and deciduous ones, using optical satellite images of MODIS/TERRA and MERIS/ENVISAT sensors with low spatial resolution. Direct use of vegetation indices such as NDVI, FAPAR, and LAI for this purpose did not provide sufficient accuracy, even in the case of high-resolution satellite observations. In this study, it was decided to use seasonal changes in these indices. The use of ratios of vegetation indices (the ratio indices) calculated from satellite observations performed in July and April was proposed here. The typical ranges of values of selected ratio indices for coniferous, deciduous, and mixed forests in natural stands were determined. The results of analysis performed in the work show that the limits of ranges of ratio indices developed in this work may be used for efficient forest classification using low resolution satellite imagery. The method was verified at the Kampinoski National Park area using the Corine Land Cover database. Good accuracy of classification was obtained despite the relatively low spatial resolutions used in the analysis compared to those commonly used in satellite observations of forest ecosystems. Lack of full agreement of the classification results with the true spatial distribution of forests may also be due to the fact that boundaries of different types of forest are not sharp but blurred. The transition between forest specified in the Corine Land Cover database as coniferous and mixed in nature takes place smoothly. Moreover, in areas where there is a large variation of the forest type, the corresponding pixel in the satellite image may be located on the border between two or even three forests and thus incorrectly classified. The highest classification accuracy of the mixed forests was obtained using ratio indices based on FAPAR. In view of further global-scale research of the hydrological conditions and evapotranspiration in forest areas, the use of MODIS optical data and very low resolution microwave SMOS data is considered. The results obtained in this work could be applied in such studies.
Figure 1: Location of study area on the map of Poland.
Figure 2, Figure 3: Example of spatial distribution of the IL NDVI MODIS vegetation index obtained from the MERIS image, on July 3, 2008, and on April 2, 2009.
Figure 4: Multiple histogram of the FAPAR MERIS vegetation index for the photograph on April 2, 2009.
Figure 5, Figure 6: Multiple histogram of the FAPAR MERIS vegetation index for the photograph on July 3, 2009.
Figure 7: Comparison of the classification results of forest types obtained using ratio indices with the spatial distribution obtained on the basis of the accurate data from the CLC database. Example based on FAPAR NDVI vegetation indices calculated from MERIS and MODIS images, with resolutions 300 m and 1000 m, respectively, in the area of the Kampinoski National Park.
Table 1: Land cover class in the studied area according to the Corine Land Cover, 2006 database.
Table 2: The means of ratio indices for the studied types of forests in Poland.
Table 3: Limits of ratio indices for the studied types of forest and the classification accuracy for the mixed forests in Poland.
Table 4: The results of the classification of studied types of forest in the Kampinoski National Park using ratio indices calculated on the basis of MODIS images.
Table 5: The results of the classification of studied types of forest in the Kampinoski National Park using ratio indices calculated on the basis of MERIS images. | 2019-03-08T14:19:54.737Z | 2013-03-07T00:00:00.000 | {
"year": 2013,
"sha1": "9f6e4e00306c3ff8caa7d992857e95504de71369",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2013/529193.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "28a44e23c60d24e2adcb3f37839299c205e76706",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
271342435 | pes2o/s2orc | v3-fos-license | Dynamic Equilibrium of Protein Phosphorylation by Kinases and Phosphatases Visualized by Phos-Tag SDS-PAGE
The phosphorylation state of 20 types of intracellular proteins in the presence of the protein phosphatase 1 (PP1)- and PP2A-specific Ser/Thr phosphatase inhibitor calyculin A or the Tyr phosphatase inhibitor pervanadate was visualized by Phos-tag SDS-PAGE followed by immunoblotting. All blots showed a Phos-tag pattern indicating increased phosphorylation in the presence of one or both phosphatase inhibitors. The increase in phosphorylation stoichiometry per protein tends to be greater for Ser/Thr phosphatase inhibition than for Tyr phosphatase inhibition. This is consistent with the fact that the number of Ser/Thr kinase genes in the human genome is greater than that of Tyr kinases and with the fact that the phospho-Ser/phospho-Thr ratio in the actual human phosphoproteome is far greater than the phospho-Tyr ratio. This suggests that cellular proteins are routinely and randomly phosphorylated by different kinases with no biological significance, simply depending on the frequency of substrate encounters. Phosphatases are responsible for routinely and systematically removing these unwanted phosphate groups and maintaining the dynamic equilibrium of physiological protein phosphorylation. Phos-tag SDS-PAGE visualized that the kinase reaction involves many incidental phosphorylation events and that phosphatases play broader roles besides being strict counterparts to kinases.
Introduction
Protein phosphorylation plays a critical role in the regulation of fundamental cellular processes in all living cells [1].The phosphorylation state of proteins is constantly maintained in dynamic equilibrium by the action of kinases and phosphatases [2][3][4].Protein kinases are well characterized, whereas protein phosphatases have been less discussed.This may be because phosphorylation by kinases is a signaling switch-on reaction, whereas phosphatases are thought to have the passive function of signaling switch-off by dephosphorylation.
Previously, our phosphorylation analysis of intracellular proteins using Phos-tag two-dimensional fluorescence difference gel electrophoresis (2D DIGE) showed that many proteins switch to a hyperphosphorylated state in the presence of the Ser/Thr phosphatase inhibitor calyculin A [5]. Calyculin A is a cell-permeable inhibitor of protein phosphatase 1 (PP1) and protein phosphatase 2A (PP2A) which binds to the catalytic subunit of PP1 and PP2A [6].The IC 50 s for PP1 and PP2A are 2 nM and 0.5-1 nM, respectively, with potent cytotoxicity at the nanomolar level [7].Among the Ser/Thr phosphatases, PP1 and PP2A are ubiquitously and abundantly expressed in various types of eukaryotic cells and are involved in many of the universal biological activities, such as the cell cycle, metabolism, cytoskeletal regulation, ion channel and membrane receptor regulation, transcription, cell signaling, and cell differentiation [8].2D DIGE suggested that many proteins, despite routinely, randomly, and accidentally acting as substrates for various Ser/Thr kinases, are returned to their normal phosphorylation state by PP1 and PP2A, which are responsible for removing unintended phosphate groups.Mass spectrometry of gel spots after 2D DIGE showed that cytoskeletal proteins such as lamins, keratins, and vimentin are hyperphosphorylated by calyculin A [5].While 2D DIGE only visualized proteins with high intracellular abundance, the present study used Phos-tag SDS-PAGE followed by immunoblotting to investigate the changes in phosphorylation states of a larger number of proteins with low intracellular abundance, such as proteins involved in signal transduction.In this study, we discuss the broader role of PP1 and PP2A, beyond simply switching off proteins activated by kinases.
To further discuss the role of Tyr phosphatase, changes in the phosphorylation state of intracellular proteins by the Tyr phosphatase inhibitor pervanadate were also examined in a similar manner.Protein tyrosine phosphorylation regulates cellular signaling pathways underlying a wide range of fundamental physiological processes [9].Tyr kinases and Tyr phosphatases work in a coordinated manner to regulate reversible phosphorylation reactions that occur within seconds to minutes.The human genome encodes 90 Tyr kinases [2] and 107 putative protein Tyr phosphatases [10].The almost identical number of Tyr kinase and Tyr phosphatase genes suggests similar levels of complexity between the two families.The diversity of Tyr phosphatases suggests that they are highly specific in function and substrate recognition in the regulation of signaling.Protein tyrosine phosphatases are a highly diverse family of enzymes, defined by the active-site signature motif His-Cys-X-X-X-X-X-Arg, in which the Cys acts as a nucleophile and is essential for catalysis [9].In vivo Tyr phosphatase activity is irreversibly inhibited by pervanadate, which oxidizes Cys at the catalytic site [11].In this paper, we discuss the role of Tyr phosphatases in contrast to the role of Ser/Thr phosphatases.
Selection of Target Proteins
The total lysate of HeLa cells was subjected to conventional SDS-PAGE and Phos-tag SDS-PAGE followed by immunoblotting with different types of specific antibodies for cellular proteins. Twenty antibodies that specifically detected the protein of interest with an excellent signal-to-noise ratio were selected (Figure S1). Therefore, although the proteins analyzed in this study were randomly selected, they were categorized into the following nine types: (1) proteins related to the MAPK pathway, namely, A-Raf, ATF2, JNK1, MAPKAPK2, MEK1, p38 MAPK, and p42 MAPK; (2) proteins related to the JAK-STAT pathway, namely, JAK2, STAT1, STAT3, and STAT6; (3) proteins related to the mTOR pathway, namely, mTOR, Raptor, and Rictor; (4) a Wnt signaling pathway-related protein, β-catenin; (5) a cell cycle-related protein, CDK2; (6) a glycogen synthesis-related protein, GSK-3β; (7) a tumor suppressor protein, p53; (8) a phosphoinositide 3-kinase, PI3 kinase p110β; and (9) serum response factor, SRF. The number of phosphorylation sites and the estimated upstream kinases in human cells registered in the online database of posttranslational modifications, PhosphoSitePlus, for the 20 proteins are summarized in Table 1.
Phosphorylation State of MAPK Pathway-Related Proteins in the Presence of the Phosphatase Inhibitor
The Phos-tag SDS-PAGE patterns of MAPK pathway-related proteins, namely, A-Raf, ATF2, JNK1/3, MAPKAPK2, MEK1, p38 MAPK, and p42 MAPK, in the presence of calyculin A or pervanadate are shown in Figure 1.Each blot is described below.
A-Raf: A-Raf, B-Raf, and C-Raf are the Ser/Thr kinases, which are the main effectors recruited by GTP-bound Ras to activate the MEK-MAPK pathway [12]. In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions. In calyculin A-treated cells (lane 1), only a few bands remained in the same position as in control cells, and several significantly up-shifted bands were detected. This suggests that multiple sites are phosphorylated by multiple Ser/Thr kinases. Putative upstream Ser/Thr kinases in vivo have not been deposited in PhosphoSitePlus (see Table 1). However, since A-Raf is similar in sequence and function to C-Raf, and several kinases are involved in the phosphorylation of the activation sites [13], it is likely that there are several upstream Ser/Thr kinases similar to C-Raf. In the pervanadate-treated cells (lane 2), the banding pattern was almost the same as in the control cells. This suggests that A-Raf is not phosphorylated by any Tyr kinases.
ATF2: cyclic AMP-dependent transcription factor 2 (ATF2) interacts with a variety of viral oncoproteins and cellular tumor suppressors and is a target of the SAPK/JNK and p38 MAP kinase signaling pathways [14].ATF2 is phosphorylated by activated p38 MAP kinase.In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions.In cyclin A-treated cells (lane 1), few bands remained in the same position as in control cells, and one exaggeratedly shifted band was detected.This suggests that multiple sites are phosphorylated by multiple Ser/Thr kinases.In pervanadate-treated cells (lane 2), three up-shifted bands were detected.The Tyr phosphorylation sites were not registered in PhosphoSitePlus (Table 1), whereas the protein may contain potential substrates for multiple tyrosine kinases and phosphorylated by multiple Tyr kinases.A-Raf: A-Raf, B-Raf, and C-Raf are the Ser/Thr kinases, which are the main effectors recruited by GTP-bound Ras to activate the MEK-MAPK pathway [12].In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions.In calyculin A-treated cells (lane 1), only a few bands remained in the same position as in control cells, and several significantly upshifted bands were detected.This suggests that multiple sites are phosphorylated by multiple Ser/Thr kinases.Putative upstream Ser/Thr kinases in vivo have not been deposited in PhosphoSitePlus (see Table 1).However, since A-Raf is similar in sequence and function to C-Raf, and several kinases are involved in the phosphorylation of the activation sites [13], it is likely that there are several upstream Ser/Thr kinases similar to C-Raf.In the pervanadate-treated cells (lane 2), the banding pattern was almost the same as in the control cells.This suggests that A-Raf is not phosphorylated by any Tyr kinases.
JNK1: Stress-activated protein kinase (SAPK)/Jun-amino-terminal kinase (JNK) is potently and preferentially activated by a variety of environmental stresses [15]. Activation of JNK1 occurs via the phosphorylation of Thr183/Tyr185 by MKK4 and MKK7 [16]. In untreated cells (lane C), distinct bands due to the non-phosphorylated 46 kDa and 54 kDa isoforms and several up-shifted low-intensity bands were detected. In the calyculin A-treated cells (lane 1), several newly appearing phosphorylated bands were detected. This suggests that multiple sites are phosphorylated by multiple Ser/Thr kinases. In the pervanadate-treated cells (lane 2), one new phosphorylated band was detected. JNK1 would be phosphorylated by several Tyr kinases, whereas the stoichiometry of Tyr phosphorylation at a particular site is less than 10%, as estimated from the intensity of the phosphorylated band in lane 2.
MAPKAPK2: MAP kinase-activated protein kinase 2 (MAPKAPK2) is rapidly phosphorylated by p38 MAPK and activated in response to cytokines, stress, and chemotactic factors [17]. Activation of MAPKAPK2 occurs through the phosphorylation of Thr222, Ser272, and Thr334 by p38 MAPK. In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions. In the calyculin A-treated cells (lane 1), few bands remained in the same position as in control cells, and two newly phosphorylated bands were detected. In the pervanadate-treated cells (lane 2), four newly appeared phosphorylated bands were detected. This suggests that MAPKAPK2 is phosphorylated by multiple Ser/Thr and Tyr kinases.
MEK1: Mitogen-activated ERK-regulated kinase 1 (MEK1) is a dual-specificity protein kinase that functions in an MAPK cascade to control cell growth and differentiation [18][19][20].MEK1 is activated by the phosphorylation of Ser218/Ser222 by C-Raf [19,21].In untreated cells (lane C), multiple bands were detected, indicating that multiple sites are constitutively phosphorylated under homeostatic conditions.This is consistent with our previous study using Phos-tag SDS-PAGE, which showed that multiple phosphorylation variants of MEK1 are constitutively present in typical human cells [22].In the calyculin A-treated cells (lane 1), few bands remained at the position of the non-phosphorylated form, and several newly appeared phosphorylated bands were detected.In the pervanadate-treated cells (lane 2), a stronger signal was detected at a position just above the non-phosphorylated band, although several bands observed in the control cells appeared to be shifted down.The degree of migration in the Phos-tag gel is not defined by the number of phosphorylation sites but by the phosphorylation state, so this band pattern indicates the change in the phosphorylation state in the presence of pervanadate.This suggests that MEK1 is phosphorylated by several Ser/Thr and Tyr kinases.In our previous studies, we showed that MEK1 is phosphorylated by epidermal growth factor (EGF) treatment and that the intensity of the bands corresponding to active MEK1 (indicated by black circles on the left of the panel) increases, as detected by anti-phosphoMEK1 (Ser218/Ser222) [22].The band patterns of lanes 1 and 2 were clearly different from the pattern after EGF stimulation, suggesting that they were phosphorylated by kinases unrelated to the activation of the MAPK pathway.
p38 MAPK: p38 MAPK is involved in a signaling cascade that controls cellular responses to cytokines and stress, similar to SAPK/JNK [17,[23][24][25].Activation of p38 MAPK occurs through the phosphorylation of Thr180/Tyr182 by MKK3, MKK4, and MKK6 [26].Four isoforms of p38 MAPK have been identified: p38α, β, γ, and δ.In untreated cells (lane C), a clear band of the non-phosphorylated form and several up-shifted low-intensity bands were detected.Meanwhile, in the calyculin A-treated cells (lane 1), four distinct phosphorylated bands were detected.In addition, in the pervanadate-treated cells (lane 2), three phosphorylated bands were clearly detected.This suggests that specific sites are phosphorylated by multiple Ser/Thr and Tyr kinases.
p42 MAPK: The p44/42 MAPK signaling pathway can be activated in response to a variety of extracellular stimuli, including mitogens, growth factors, and cytokines [27]. Activation of p42 MAPK occurs through the phosphorylation of Thr183/Tyr185 by MEK1/2 [18,28]. In untreated cells (lane C), a non-phosphorylated band was detected. In the calyculin A-treated cells (lane 1), two up-shifted low-intensity bands were detected, while most proteins remained in the non-phosphorylated form. p42 MAPK is randomly phosphorylated by several Ser/Thr kinases, with the stoichiometry at a particular site being less than 10%, as estimated from the intensity of the phosphorylated band in lane 1. In the pervanadate-treated cells (lane 2), few bands remained in the same position as in the control cells, and one clearly up-shifted band was detected. This suggests that p42 MAPK is phosphorylated by several Tyr kinases. Estimated from the intensity of the phosphorylated bands in lane 2, the stoichiometry of Tyr phosphorylation at a particular site is more than 50%. In our previous studies, we showed that p42 MAPK is phosphorylated by EGF treatment and that there is an increase in the intensity of the bands corresponding to active p42 MAPK, as detected by anti-phospho-p42/p44 MAPK (Thr202/Tyr204, Thr202, or Tyr204) [5]. Activation of p42 MAPK by treatment with calyculin A and pervanadate was examined by immunoblotting with anti-phospho-p42/p44 MAPK (Thr202/Tyr204), as shown to the right of the p42 MAPK panel. Signals indicating the activation of p42 MAPK were detected in lanes 1 and 2, but at a different position from the phosphorylated band detected by anti-p42 MAPK in lane 2. This suggests that p42 MAPK is randomly phosphorylated in the presence of pervanadate by Tyr kinases unrelated to the MAPK pathway.
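The stoichiometry statements used throughout these descriptions (e.g., "less than 10%" or "more than 50%" at a particular site) are estimated from the relative intensities of the phosphorylated and non-phosphorylated bands within a lane. The exact densitometry procedure is not detailed here, so the following is only an illustrative calculation under the usual assumption that band intensity is proportional to the amount of each phosphorylation variant; the numeric values are hypothetical:

```python
def phospho_stoichiometry(band_intensities, phospho_indices):
    """Fraction of total lane signal contained in the phosphorylated band(s).

    band_intensities: densitometry values for every band in one lane
    phospho_indices:  indices of the bands assigned as phosphorylated forms
    """
    total = sum(band_intensities)
    phospho = sum(band_intensities[i] for i in phospho_indices)
    return phospho / total

# A faint up-shifted band next to a dominant non-phosphorylated band -> well below 10%.
print(phospho_stoichiometry([95.0, 5.0], phospho_indices=[1]))   # 0.05
# A dominant up-shifted band (as for pervanadate-treated p42 MAPK) -> above 50%.
print(phospho_stoichiometry([30.0, 70.0], phospho_indices=[1]))  # 0.70
```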
Effect of Hydrogen Peroxide Treatment on MAPKs
The MAPKs are known to be activated by oxidative stress, such as hydrogen peroxide [29]. Pervanadate, the tyrosine phosphatase inhibitor used in this study, was prepared as a mixture of vanadate and hydrogen peroxide. Changes in the phosphorylation state of MAPKs (p38 MAPK, JNK1, and p42 MAPK) upon treatment with hydrogen peroxide alone were assessed by Phos-tag SDS-PAGE (Figure 2). The phosphorylation state of each protein was monitored from 0 to 30 min after the addition of hydrogen peroxide. No change in the phosphorylation state of JNK1 and p38 MAPK was observed with hydrogen peroxide, whereas one phosphorylated band was detected for p42 MAPK. The position of this band is different from that of the phosphorylated band observed for cells treated with pervanadate. For comparison, the same panel as that for p42 MAPK shown in Figure 1 is shown on the right. The results suggest that the changes in the Phos-tag pattern of MAPKs in pervanadate-treated cells shown in Figure 1 reflect the inhibition of tyrosine phosphatase by pervanadate and not activation of the MAPK pathway by hydrogen peroxide. A-Raf, ATF2, MAPKAPK2, and MEK1, which are associated with the MAPK pathway, may also be phosphorylated upon hydrogen peroxide treatment, although, in the present study, it was assumed that the effect was not sufficient to significantly alter the Phos-tag pattern.
Phosphorylation State of JAK-STAT Pathway-Related Proteins in the Presence of the Phosphatase Inhibitors
The Phos-tag SDS-PAGE patterns of JAK-STAT pathway-related proteins, namely, JAK2, STAT1, STAT3, and STAT6, in the presence of calyculin A or pervanadate are shown in Figure 3.Each blot is described below.
JAK2: JAK2 is a tyrosine kinase that is activated by ligand binding to a number of associated cytokine receptors [30].Upon cytokine receptor activation, JAK2 is autophosphorylated at Tyr1007/Tyr1008 and phosphorylates its associated proteins such as STATs [31].In untreated cells (lane C), most of the protein was detected in a non-phosphorylated state.In the calyculin A-treated cells (lane 1), little change was observed compared to the findings for control cells.Meanwhile, in the pervanadate-treated cells (lane 2), new phosphorylated bands were detected.Estimated from the intensity of the phosphorylated bands in lane 2, the stoichiometry of Tyr phosphorylation at a particular site is more than 50%.This suggests that JAK2 is phosphorylated by several Tyr kinases, whereas it is not phosphorylated by Ser/Thr kinases.
STAT1: STAT1 is a signal transducer and activator of transcription that mediates cellular responses to interferons, cytokines, and growth factors [32][33][34][35]. In untreated cells (lane C), one non-phosphorylated and two phosphorylated bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions. In the calyculin A-treated cells (lane 1), one newly appeared phosphorylated band was detected, and the intensity of the two phosphorylated bands observed at the same position as for the control cells also changed, indicating a change in phosphorylation state. This suggests that several sites are phosphorylated by several Ser/Thr kinases. In the pervanadate-treated cells (lane 2), a newly phosphorylated band was detected. STAT1 is phosphorylated by several Tyr kinases, whereas the stoichiometry of Tyr phosphorylation at a particular site is less than 10%, as estimated from the intensity of the phosphorylated band in lane 2.
STAT3: STAT3 is a signal transducer and activator of transcription that mediates cellular responses to interferons, cytokines, and growth factors [32]. In untreated cells (lane C), one non-phosphorylated band and one phosphorylated band were detected. In the calyculin A-treated cells (lane 1), two new bands were detected. In the pervanadate-treated cells (lane 2), two new up-shifted bands were detected. This suggests that STAT3 is phosphorylated by multiple Ser/Thr and Tyr kinases.
STAT6: STAT6 is a signal transducer and activator of transcription involved in interleukin-mediated signaling [35,36].In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions.In the calyculin A-treated cells (lane 1), few bands remained in the same position as in control cells and smear-shifted bands were detected.In the pervanadate-treated cells (lane 2), a newly appeared phosphorylated band was detected.This suggests that STAT6 is phosphorylated by several Ser/Thr and Tyr kinases.
Phosphorylation State of mTOR Pathway-Related Proteins in the Presence of the Phosphatase Inhibitors
The Phos-tag SDS-PAGE patterns of mTOR pathway-related proteins, namely, mTOR, Raptor, and Rictor, in the presence of calyculin A or pervanadate are shown in Figure 4.Each blot is described below.
mTOR: A Ser/Thr kinase, the mammalian target of rapamycin (mTOR), functions as a sensor of ATP and amino acids to control cell growth [37]. When sufficient nutrients are available, mTOR is phosphorylated at the Ser2448 residue by the PI3 kinase/Akt signaling pathway and then autophosphorylated at the Ser2481 residue. In untreated cells (lane C), most of the protein was detected in a non-phosphorylated state. In the calyculin A-treated cells (lane 1), two new phosphorylated bands were detected. In the pervanadate-treated cells (lane 2), the band pattern was almost the same as that of control cells. This suggests that mTOR is phosphorylated by several Ser/Thr kinases, whereas it is not phosphorylated by Tyr kinases. In our previous studies, we showed that mTOR was phosphorylated by treatment with fetal bovine serum (FBS) and that phosphorylated bands appeared, as detected by anti-phospho-mTOR (Ser2448 or Ser2481) [38]. The band pattern of lane 1 was clearly different from the pattern after FBS treatment, suggesting that mTOR was phosphorylated by kinases unrelated to activation of the mTOR pathway.
Raptor: Raptor is an adaptor protein of mTOR and a member of the mTORC1 complex [39].This complex promotes ribosome production and protein biosynthesis, suppresses proteolysis, and stimulates cell growth by providing information on nutrients, energy, and redox status [40].The activity of mTORC1 is inhibited by rapamycin [41].In untreated cells (lane C), one non-phosphorylated and one phosphorylated band were detected.In the calyculin A-treated cells (lane 1), few bands remained in the same position as in the control cells, and smear-shifted bands were detected.Meanwhile, in the pervanadate-treated cells (lane 2), the band pattern was almost the same as in the control cells.This suggests that multiple sites are phosphorylated by multiple Ser/Thr kinases, whereas phosphorylation by Tyr kinases does not occur.
Rictor: Rictor is an adaptor protein of mTOR and a member of the mTORC2 complex.Similar to mTORC1, mTORC2 is regulated by growth factors and nutritional status and its activity is not inhibited by rapamycin [42].In untreated cells (lane C), one non-phosphorylated band and one phosphorylated band were detected.In the calyculin A-treated cells (lane 1), few bands remained in the same position as in the control cells, and an exaggeratedly up-shifted band was detected.In the pervanadate-treated cells (lane 2), a newly appeared phosphorylated band was detected.This suggests that Rictor is phosphorylated by several Ser/Thr and Tyr kinases.
Phosphorylation State of Other Proteins in the Presence of the Phosphatase Inhibitor
The Phos-tag SDS-PAGE patterns of β-catenin, CDK2, GSK-3β, p53, PI3 kinase p110β, and SRF in the presence of calyculin A or pervanadate are shown in Figure 5. Each blot is described below.

β-Catenin: β-Catenin is a key downstream effector in the Wnt signaling pathway [43]. In untreated cells (lane C), multiple bands were detected, indicating that multiple sites are constitutively phosphorylated under homeostatic conditions. This is consistent with our previous study using Phos-tag SDS-PAGE, which showed that multiple phosphorylation variants of β-catenin are constitutively present in typical human cells [44]. In the calyculin A-treated cells (lane 1), few bands remained in the same position as in control cells, and several newly appeared bands were detected due to changes in phosphorylation states. In the pervanadate-treated cells (lane 2), several newly appeared bands were also observed due to changes in phosphorylation states, whereas some bands remained in the same position as in the control cells. These findings suggest that β-catenin is phosphorylated by several Ser/Thr and Tyr kinases.
CDK2: Cyclin-dependent kinase 2 (CDK2) is an important component of the cell cycle machinery, and its kinase activity is regulated by its phosphorylation state [45].In untreated cells (lane C), one phosphorylated band with low signal intensity was detected, whereas most proteins were unphosphorylated.In the calyculin A-treated cells (lane 1), three newly appeared phosphorylated bands were detected, whereas most proteins remained in the non-phosphorylated form.In the pervanadate-treated cells (lane 2), one newly appeared phosphorylated band was detected, whereas most proteins remained in the non-phosphorylated form.These findings suggest CDK2 is phosphorylated by several Ser/Thr and Tyr kinases, whereas the stoichiometry of random phosphorylation at a particular site is less than 10%, as estimated from the intensity of phosphorylated bands in lanes 1 and 2.
GSK-3β: Glycogen synthase kinase-3β (GSK-3β) regulates glycogen synthesis in response to insulin by phosphorylating and inactivating glycogen synthase [46]. In untreated cells (lane C), multiple bands were detected, indicating that several sites are constitutively phosphorylated under homeostatic conditions. In the calyculin A-treated cells (lane 1), several newly appeared phosphorylated bands were detected. This suggests that specific sites of GSK-3β are phosphorylated by several Ser/Thr kinases. In the pervanadate-treated cells (lane 2), two new phosphorylated bands were detected. These findings suggest GSK-3β is phosphorylated by several Tyr kinases, whereas the stoichiometry of Tyr phosphorylation at particular sites is less than 10%, as estimated from the intensity of newly appeared phosphorylated bands in lane 2.
PI3Kp110β: Phosphoinositide 3-kinase (PI3K) catalyzes the production of phosphatidylinositol-3,4,5-triphosphate.Growth factors and hormones trigger this phosphorylation event, which in turn coordinates cell growth, cell cycle entry, cell migration, and cell survival [47].Four isoforms of the PI3K catalytic subunit, p110α, p110β, p110γ, and p110δ, have been identified.In untreated cells (lane C), one non-phosphorylated and one phosphorylated band were detected, indicating that several sites of PI3 kinase p110β are constitutively phosphorylated under homeostatic conditions.In the calyculin A-treated cells (lane 1), few bands remained in the same position as in the control cells, and three new phosphorylated bands were detected.In the pervanadate-treated cells (lane 2), three new phosphorylated bands were detected.This suggests that specific sites of PI3 kinase p110β are phosphorylated by multiple Ser/Thr and Tyr kinases.
p53: The p53 tumor-suppressor protein plays an important role in the cellular response to DNA damage and other genomic aberrations. p53 is activated by phosphorylation to induce either cell cycle arrest and DNA repair or apoptosis [48][49][50]. In untreated cells (lane C), multiple bands were detected, indicating that several sites of p53 are constitutively phosphorylated under homeostatic conditions. In the calyculin A-treated cells (lane 1), few bands remained in the same position as in the control cells, and multiple significantly up-shifted bands were detected. In pervanadate-treated cells (lane 2), the band pattern was almost the same as in control cells. This suggests that multiple sites of p53 are phosphorylated by multiple Ser/Thr kinases, whereas it is not phosphorylated by Tyr kinases.

SRF: Serum response factor (SRF) is a phospho-protein that, together with auxiliary factors, modulates the transcription of immediate early genes containing serum response elements at their promoters [51,52]. In untreated cells (lane C), multiple bands were detected, indicating that several sites of SRF are constitutively phosphorylated under homeostatic conditions. In the calyculin A-treated cells (lane 1), few bands remained in the same position as in the control cells, and smear-shifted bands were detected. In the pervanadate-treated cells (lane 2), the band pattern was almost the same as in the control cells. This suggests that multiple sites of SRF are phosphorylated by multiple Ser/Thr kinases, whereas it is not phosphorylated by Tyr kinases.
Discussion
The phosphorylation state of 20 types of intracellular proteins in the presence of the PP1 and PP2A inhibitor calyculin A or the tyrosine phosphatase inhibitor pervanadate was analyzed by Phos-tag SDS-PAGE followed by immunoblotting. The Ser/Thr phosphatases PP1 and PP2A, which are specifically inhibited by calyculin A, are ubiquitous and abundant phosphatases involved in universal cellular activity. Meanwhile, pervanadate is an irreversible inhibitor of the active center common to all Tyr phosphatases. All immunoblots showed a Phos-tag pattern indicating increased phosphorylation in the presence of one or both phosphatase inhibitors. This suggests that Ser/Thr and Tyr kinases constantly phosphorylate various proteins on a routine basis. Protein phosphorylation in the presence of these phosphatase inhibitors would be random and incidental, with no biological significance, owing to the disordered kinase reaction with loss of reversibility.
In calyculin A-treated cells, phosphorylation was increased in all blots except JAK2 compared with the findings in control cells. Ten blots (A-Raf, ATF2, MAPKAPK2, MEK1, STAT6, Raptor, Rictor, β-catenin, PI3Kp110β, and SRF) showed a shift to the hyperphosphorylated state with few remaining bands showing the same phosphorylation state as control cells (Figures 1 and 3-5). Two blots (p42 MAPK and CDK2) showed several new low-intensity phosphorylated bands, indicating a slight increase in phosphorylation (Figures 1 and 5). In pervanadate-treated cells, phosphorylation was increased in 16 blots compared with the findings for control cells, except for A-Raf, mTOR, p53, and SRF (Figures 1, 3 and 5). Two blots (p42 MAPK and JAK2) showed a shift to a hyperphosphorylated state with few of the bands observed in control cells remaining (Figures 1 and 3). Eight blots (JNK1, STAT1, STAT3, STAT6, Raptor, Rictor, CDK2, and GSK-3β) showed new low-intensity phosphorylated bands, indicating a slight increase in phosphorylation (Figures 1 and 3-5).
The increase in phosphorylation stoichiometry per protein tended to be greater for Ser/Thr phosphatase inhibition than for Tyr phosphatase inhibition for the proteins analyzed in the 20 blots. This could be explained by the fact that the number of Ser/Thr kinases in the human genome is 428, which significantly outnumbers the 90 Tyr kinases [2,10]. Furthermore, based on proteomic data from more than 2000 phosphoproteins, the frequencies of pTyr, pThr, and pSer sites are 1.8%, 11.8%, and 86.4%, respectively [53]. Comparing the number of pSer/pThr sites with the number of pTyr sites registered in PhosphoSitePlus for the proteins analyzed in the 20 blots, the number of pSer/pThr sites is higher for all except JAK2 (Table 1). JAK2 alone had more pTyr sites (n = 23) than pSer/pThr sites (n = 6) and was phosphorylated at a higher stoichiometry by pervanadate treatment, whereas calyculin A treatment did not change its phosphorylation state (Figure 3). Considering our results together with the number of Ser/Thr and Tyr kinase genes in the genome and the proteomic data, it is suggested that many kinase reactions are random, simply depending on the frequency of substrate encounters, irrespective of their biological significance. Recent advances in MS-based phosphoproteomics have enabled the measurement of extremely low-abundance phosphorylated proteins and have led to an increasing number of publications on studies involving high-throughput analysis of phosphorylation sites [54]. The biological significance and upstream kinases of most of these sites have not been determined, which could be explained by substantial incidental phosphorylation having been captured.
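The frequency argument above can be made concrete with the numbers already cited in the text: 428 Ser/Thr kinases versus 90 Tyr kinases, and pSer + pThr site frequencies of 86.4% + 11.8% versus 1.8% for pTyr. If random phosphorylation simply tracks the frequency of kinase–substrate encounters, a substantially larger increase on Ser/Thr residues is expected when Ser/Thr phosphatases are inhibited:

```python
# Counts and frequencies as cited in the text [2,10,53].
ser_thr_kinases, tyr_kinases = 428, 90
p_ser, p_thr, p_tyr = 0.864, 0.118, 0.018

kinase_ratio = ser_thr_kinases / tyr_kinases    # roughly 4.8-fold more Ser/Thr kinases
site_ratio = (p_ser + p_thr) / p_tyr            # roughly 55-fold more pSer/pThr sites

print(f"Ser/Thr : Tyr kinase ratio  ~ {kinase_ratio:.1f}")
print(f"pSer+pThr : pTyr site ratio ~ {site_ratio:.0f}")
```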
If many of the kinase reactions are random, phosphatase must perform systematic dephosphorylation to maintain proteins in a physiological phosphorylated state.Tyr phosphatases have been identified to be encoded by 107 genes, more than for Tyr kinases [10], suggesting that Tyr phosphatases are highly specific in function and substrate recognition in the regulation of signaling.For example, in the mechanism by which the MAPK pathway is inactivated, MAPK phosphatases, the dual-specificity Tyr phosphatases for pTyr and pSer/Thr, display distinct patterns of subcellular localization and specificity for individual MAPKs, thereby forming a phosphatase response network that functions in the attenuation of MAPK-dependent signaling pathways [55].In this study, we have shown that p42 MAPK is highly phosphorylated at the Tyr which is not the site of phosphorylation upon activation of the MAPK pathway under conditions not regulated by any Tyr phosphatase (Figure 2).This suggests that the Tyr phosphatases are not only strict counterparts to individual Tyr kinases but also responsible for removing random unintended phosphate groups.The human genome contains 428 genes encoding Ser/Thr kinases, but far fewer, approximately 25, encoding Ser/Thr phosphatases have been identified [56].A relatively small number of Ser/Thr phosphatases control many specific dephosphorylation reactions.In the case of PP1 and PP2A, this is explained by the formation of many different holoenzymes from a shared catalytic subunit and numerous regulatory subunits to achieve broad substrate specificity [57].PP1 and PP2A have not been characterized as well as kinases due to their broad substrate specificity, but since the discovery of specific inhibitors such as calyculin A, it has become clear that they actively regulate specific protein functions.For example, the hyperphosphorylation of tau, a neuronal microtubule-binding protein, in Alzheimer's disease is due to a defect in PP2A [58].However, PP1 and PP2A must also be constantly working to remove random and unintended phosphate groups, as shown in this study.The stoichiometry of random Ser/Thr phosphorylation per protein would be far greater than that of Tyr phosphorylation.The broad substrate specificity achieved by the existence
Figure 1. Phosphorylation state of MAPK pathway-related proteins in the presence of phosphatase inhibitor. Total lysates of HeLa cells treated with calyculin A (lane 1) or pervanadate (lane 2) were subjected to Phos-tag SDS-PAGE together with untreated control lysate (lane C). The antibody used for the blot is indicated below each panel. Rf values indicating mobility are shown to the left of each panel. The arrowhead indicates the position of the non-phosphorylated form. The arrows on the left or right of each panel indicate newly observed bands or changes in band intensity in lane 1 or lane 2, respectively, compared to lane C. Phos-tag SDS-PAGE (8% w/v polyacrylamide, 20 µM Zn2+-Phos-tag) was performed using a neutral pH gel system buffered with Bis-Tris-HCl.
Figure 2. Phos-tag pattern of MAPKs (JNK1, p38 MAPK, and p42 MAPK) treated with hydrogen peroxide. Total lysates of HeLa cells treated with 3 mM hydrogen peroxide (H2O2) for 0, 2, 5, 10, or 30 min were subjected to Phos-tag SDS-PAGE. The antibody used for the blot is indicated below each panel. Rf values indicating mobility are shown to the left of each panel. The arrowhead indicates the position of the non-phosphorylated form. The arrow indicates the newly appeared band phosphorylated by hydrogen peroxide treatment. For comparison, the panel of p42 MAPK shown in Figure 1 is presented at the extreme right.
Figure 3. Phosphorylation state of JAK-STAT pathway-related proteins in the presence of phosphatase inhibitor. Total lysates of HeLa cells treated with calyculin A (lane 1) or pervanadate (lane 2) were subjected to Phos-tag SDS-PAGE together with untreated control lysate (lane C). The antibody used for the blot is indicated below each panel. Rf values indicating mobility are shown to the left of each panel. The arrowhead indicates the position of the non-phosphorylated form. The arrows on the left or right of each panel indicate newly observed bands or changes in band intensity in lane 1 or lane 2, respectively, compared to lane C. Phos-tag SDS-PAGE (5% w/v polyacrylamide, 20 µM Zn2+-Phos-tag) was performed using a neutral pH gel system buffered with Tris-acetate for JAK2. Phos-tag SDS-PAGE (8% w/v polyacrylamide, 20 µM Zn2+-Phos-tag) was performed using a neutral pH gel system buffered with Bis-Tris HCl for STAT1, 3, and 6.
Figure 4. Phosphorylation state of mTOR pathway-related proteins in the presence of phosphatase inhibitor.Total lysates of HeLa cells treated with calyculin A (lane 1) or pervanadate (lane 2) were subjected to Phos-tag SDS-PAGE together with untreated control lysate (lane C).The antibody used for the blot is indicated below each panel.Rf values indicating mobility are shown to the left of each panel.The arrowhead indicates the position of the non-phosphorylated form.The arrows on the left or right of each panel indicate newly observed bands or changes in band intensity in lane 1 or lane 2, respectively, compared to lane C. Phos-tag SDS-PAGE (3.5% w/v polyacrylamide, 20 µM Zn 2+ -Phos-tag) was performed using a neutral pH gel system buffered with Tris-acetate.
Figure 5. Phosphorylation state of β-catenin, CDK2, GSK-3β, p53, PI3 kinase p110β, and SRF in the presence of phosphatase inhibitor.Total lysates of HeLa cells treated with calyculin A (lane 1) or pervanadate (lane 2) were subjected to Phos-tag SDS-PAGE together with untreated control lysate (lane C).The antibody used for the blot is indicated below each panel.Rf values indicating mobility are shown to the left of each panel.The arrowhead indicates the position of the non-phosphorylated form.The arrows on the left or right of each panel indicate newly observed bands or changes in band intensity in lane 1 or lane 2, respectively, compared to lane C. Phos-tag SDS-PAGE (8% w/v polyacrylamide, 20 µM Zn 2+ -Phos-tag) was performed using a neutral pH gel system buffered with Bis-Tris HCl.
Table 1. Number of potential phosphorylation sites and putative upstream kinases in human cells registered in the PhosphoSitePlus database, for the 20 proteins analyzed.
"year": 2024,
"sha1": "b04d1e2a04c9ef3257b936e5c26dec26ba511e71",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2813-3757/2/3/14/pdf?version=1721359973",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "74d1c6a32054055bd2048e55856aebffa9db70be",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
Macrophage supernatant infected with Leishmania major mediates the cytology of fibroblast cells in skin wounds
Objective: Healing of cutaneous leishmaniasis relies on effective and well-modulated protective immune responses. Although the immune system is necessary to eliminate the parasite, it could be considered the main cause of ulcers. Therefore, the main aim of this study was to explore the possible regulatory functions of supernatant from macrophages infected with Leishmania major on fibroblast cells. Materials and Methods: In this experimental study, different concentrations of infected macrophage supernatant extract (50, 100, 150, 200, and 250 μg/mL) were tested at different times (6, 24, 48, and 72 h), and the effect of the leishmanicidal extract on fibroblast cells was determined by MTS assay. In addition, flow cytometry was used to investigate the percentage of apoptosis induction. Results: The MTS assay showed that the leishmanicidal effect of the infected macrophage supernatant extract depended on the concentration and the duration of treatment; the best efficacy was observed at 200 μg/mL with a 72-hour exposure time. Flow cytometry analysis showed that the infected macrophage supernatant extract could induce apoptosis in cultured fibroblasts. Conclusions: We have demonstrated that the reduction of survival rate and induction of apoptosis in fibroblasts occurred in a manner similar to keratinocytes when exposed to supernatant from macrophages infected with L. major. Our data suggest that such a phenomenon can be the underlying cause of lesions with scarring; the underlying mechanism remains to be elucidated in future studies.
Introduction
Leishmaniasis is a disease caused by obligate intracellular parasites belonging to the genus Leishmania. Cutaneous leishmaniasis (CL) is caused by the inoculation of Leishmania parasites from sandflies into humans. The severity of the disease depends on the species and the host immune responses it induces. Several preventive efforts, such as insect vector control and vaccination, have been undertaken to control the disease, but none of them has been entirely successful. 1,2 Both wound creation and wound healing processes depend on the host immune response. Although the immune system is necessary to eliminate the parasite, it could be considered the leading cause of ulcers, as demonstrated by the absence of ulcers in patients with AIDS. 3 The presence of parasites at the wound site years after wound healing indicates that healing is not related solely to parasite elimination, and the presence of the parasites is not the only factor playing a role in the establishment of such wounds. Thus, it seems that the way the host immune system responds to the parasite leads to scarring.
Since Leishmania major (L. major) is an intracellular parasite able to survive and proliferate only in specific immune system cells, such as macrophages, in vertebrate hosts, its interaction with adjacent cells is possible only by changing the behavior of the immune cells. [4][5][6] In many skin lesions, activation of the immune system (by autoimmune or external antigens) induces cell death in cells of the epidermis by apoptosis, which is one of the most important causes of skin ulcers. [7][8][9] In the few studies investigating the mechanism of injury caused by leishmaniasis, it has been demonstrated that L. major induces apoptosis in keratinocytes. [10][11][12][13][14] However, to create wounds that scar, the death of epidermis cells alone is not enough; the natural structure of the dermis must also be significantly damaged for a healing scar to remain. 15 Thus, macrophages infected with L. major and sensitized lymphocytes not only affect epidermal cell apoptosis but also have a significant impact on dermis structure and cells.
As mentioned, the behavior of the dermis and epidermis cells in response to Leishmania infection is unknown, and the effect on tissue regeneration and wound healing is still unclear. There is not even complete information on the mode of cell loss (apoptosis, necrosis, or reduced proliferation). Therefore, in this study, the authors sought to obtain the necessary information about the effect of Leishmania on the most important cells of the epidermis and dermis, the keratinocytes and fibroblasts. Epidermal cell death has been explicitly investigated by molecular mechanisms in previous studies. [16][17][18] However, the transfected cell lines, or the single normal cell line, used in previous works cannot be a true representative of the epidermis cells. Therefore, in this study, we are looking for a more reliable picture of the stimulation of macrophages with L. major and their effect on human fibroblasts, by using cells from donors with the least possible passaging in the culture medium. On the other hand, primary information about the effects of L. major on skin tissues was also obtained by culturing fibroblast cells and comparing the effectiveness of parasitic infection on fibroblasts and keratinocytes.
Parasite Culture
The strain of L. major (MRHO/IR/75/ER) was obtained in cryopreserved from Department of Parasitology and Mycology, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran. Promastigotes were cultured in RPMI 1640 (Gibco, UK) enriched with FBS 10% (Fetal Bovine Serum, Bio-idea, Iran), 100IU/mL of penicillin, and 100 μg/mL of streptomycin and 100 μg/mL gentamycin for mass production. 19
Isolation of macrophage by Ficoll density gradient separation
The Ficoll density gradient separation of whole blood remains the most commonly used procedure for the separation of mononuclear cells. First, 20 mL of blood from the collection vial was transferred to a 50 mL tube, and an equal volume of PBS was added and mixed. 15 mL of Ficoll (Biosera, England) was added to a second 50 mL tube, and the diluted blood was carefully layered over the Ficoll. The tube was then centrifuged without braking at 400 g for 30 min at 18-24°C. The tubes were carefully removed from the centrifuge so as not to disturb the layering, and the macrophage layer was removed and transferred to a new 50 mL tube. In the next step, the macrophage fraction was washed with PBS and centrifuged at 100 g for 10 min at 18-24°C in triplicate. Finally, the supernatant was decanted, and the cell pellet was re-suspended in an appropriate volume of PBS (or media). The cells were cultured in RPMI 1640 medium (Bio-idea, Iran) supplemented with 2 mM L-glutamine, 100 U/mL penicillin, 100 µg/mL streptomycin, and 10% heat-inactivated FBS, which was changed on day 2 and every 3 days subsequently. Macrophages were used between days 7 and 10 of culture. 20 LPS is known to induce the appearance of cell characteristics consistent with mature macrophages. 21
L. major Infected Macrophages
In the next step, L. major was added to cultured cells in a ratio of ten parasites to one cell and was incubated at 37°C in complete RPMI 1640 medium. After 24 hours, the monolayers were extensively washed to remove extracellular parasites and adherent cells.
The supernatant was obtained in 24, 48, 72, and 96 hours after incubation and was frozen at -20°C. 22
Fibroblast cell culture
The fibroblast cells were obtained from a skin punch biopsy (Iran). Cells were cultured in RPMI 1640 medium (Gibco-Invitrogen) supplemented with 10% fetal bovine serum (Gibco-Invitrogen) and a 1% antibiotic mixture containing penicillin (Sigma-Aldrich) and streptomycin (Sigma-Aldrich). The cells were kept in a humidified atmosphere at 37 °C with 5% CO2. The culture medium was changed approximately every two days, and when the cells reached more than 80% confluence, they were split with 0.05% trypsin/0.02% EDTA and sub-cultured for further passages.
Cell proliferation measurements by colorimetric MTS assay
Cells were seeded at 4×10³ per well of a 96-well tissue culture plate. After 48 hours, the supernatants of infected and uninfected macrophages (24, 48, 72, and 96 h) were cleared by centrifugation and then added to the fibroblast cells. Cell viability was investigated after 48 hours using the MTS assay, the CellTiter 96 AQueous One Solution Cell Proliferation Assay (Promega). Optical Density (OD) was recorded at 490 nm in a 96-well plate reader (Bio-Rad). Cell survival was evaluated using the following equation: 23,24

Apoptosis assessment with flow cytometry

Apoptotic cells were detected using PI staining of small DNA fragments followed by flow cytometry. The Annexin-V Staining Kit (Roche, Germany) with Propidium Iodide (PI) was used for the detection of apoptotic and necrotic cells according to the manufacturer's protocol. Briefly, the wells were treated with a 20 mg/mL concentration of extract and were incubated at 24 °C. After 72 hours, promastigotes were washed in cold Phosphate-Buffered Saline (PBS) and centrifuged at 1400 g for 10 min, and the pellet was re-suspended in the binding buffer to a concentration of 1×10⁵ promastigotes/mL. Afterward, they were incubated for 15 min at room temperature in darkness with 10 μL of Annexin-V in the presence of PI. Then, the samples were analyzed with a FACSCalibur flow cytometer (Becton Dickinson, Cell Quest software), and the percentage of positive cells was determined for each sample. 24,25
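The survival-fraction equation itself is not reproduced in the text above, so the sketch below uses the conventional form (blank-corrected treated OD relative to untreated control OD); the authors' exact formula may differ. The apoptosis rate from the flow-cytometry quadrants is computed as described in the Results (lower-right plus upper-right quadrants). All numerical values are hypothetical:

```python
def survival_fraction(od_treated, od_control, od_blank=0.0):
    """Assumed conventional MTS viability: blank-corrected treated OD over control OD, in %."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

def apoptosis_rate(lower_right_pct, upper_right_pct):
    """Annexin V+/PI- (early, LR) plus Annexin V+/PI+ (late, UR) events, as in the text."""
    return lower_right_pct + upper_right_pct

# Hypothetical OD490 readings and quadrant percentages.
print(survival_fraction(od_treated=0.42, od_control=0.80, od_blank=0.05))  # ~49.3 % viable
print(apoptosis_rate(lower_right_pct=12.5, upper_right_pct=6.0))           # 18.5 % apoptotic
```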
Statistical analysis
The data were presented as the mean ± SD. Finally, the data were analyzed using SPSS for Windows (Version 16.0) (SPSS Inc., Chicago, IL, USA), and P<0.05 was considered a significant level.
A decrease of fibroblast cell survival fraction by infected macrophage
The cells were incubated with the 24, 48, 72, and 96 hours-supernatants. As shown in Figure 1, no significant differences were seen between the controls and those treated with the 24 and 48 hours-supernatants over the whole range of concentrations of both supernatants (p>0.05). The survival fraction decreased after incubation with the 72 and 96 hours-supernatants, and this reduction was more considerable for 96 hours over the whole range of concentrations of both supernatants (p<0.05). These results indicated that the 96 hours-supernatant had remarkable cytotoxicity toward fibroblast cells; therefore, it was selected as the effective supernatant. Also, there was no significant difference between the survival fractions of cells incubated with infected and normal macrophage supernatant (p>0.05).
Infected macrophages cause fibroblast cell apoptosis
To evaluate apoptosis, dual staining was used. Annexin V-FITC represents early apoptosis (lower right quadrant, LR) through binding of externalized phosphatidylserine (PS), and PI is the marker of late apoptosis (upper right quadrant, UR). Therefore, the sum of LR and UR represents the apoptosis rate. Figure 2 shows the fraction of fibroblast cells undergoing apoptosis 48 hours after incubation with the 96 hours-supernatant. A significant increase in apoptosis was observed in fibroblast cells incubated with the 96 h-supernatant compared to control cells (p<0.05).
Discussion
Macrophages have long been considered crucial immune effector cells. They also play an important role in clearing cells destroyed by apoptosis and necrosis. Phagocytosis of these components by macrophages leads to dramatic changes in their physiology, including alterations in the expression of surface proteins and the production of cytokines and pro-inflammatory mediators. 26 Both cutaneous wound creation and healing processes depend on the host immune response to the parasite. Although the immune response is required to remove parasites, it can be a significant cause of ulcers. The main objective of this study was to investigate the factors released from macrophages after infection with L. major and their effects on fibroblasts as the central constructive cells of the skin. 27 Based on our findings, when fibroblasts were exposed to macrophages infected with Leishmania major, the reduction of survival rate and induction of apoptosis occurred similarly to keratinocytes, through the secretion of various elements that are more or less known. This phenomenon can be the underlying cause of lesions with scarring. However, these events were less frequent in the uninfected group. Macrophages are part of the innate immune system. The consequences of the effects of innate immune responses in humans exposed to certain types of infection are not known, and in this regard, two different scenarios can be presented. A rapid inflammatory response can sometimes enable the host to control the infection until the acquired immune response arrives. However, rapid and robust innate immune responses may, alone or by reinforcing the effects of acquired responses, in some cases lead to the development and intensification of the complications of the disease. Examples of this include the increased level of IL-12 production in other protozoal diseases, such as malaria, or the use of this cytokine as an immunotherapy agent in cancers. In immunohistochemical studies, increased levels of this cytokine have led to complete improvement of cancer in the tested mice.
Nevertheless, their effect on other cells can lead to the stimulation of secondary cytokine production, which could have irreparable pathophysiological effects. [28][29][30][31] On the other hand, nitric oxide is one of the elements released from activated macrophages; it is a free radical and is highly toxic and destructive, just like other free radicals. 32 Macrophages are activated by lipopolysaccharide (LPS), and it seems that the presence and activation of high levels of macrophages in the culture medium in the uninfected group may lead to the secretion of high levels of IL-12, which could therefore be one of the reasons for increased mortality in the uninfected group.
However, the inhibition of IL-12 production as soon as the parasite enters the macrophage is one of the essential parasitic escape mechanisms from immune responses. When the parasite infects the macrophage, it does not allow the macrophage to be activated. More importantly, the parasite continuously inhibits the release of cytokines such as IL-12 from infected cells and, through several pathways, prevents them from being stimulated to produce such cytokines. By suppressing these cytokines and their stimulation pathways, the parasite begins to multiply until the cells break down and infect other surrounding cells, resulting in a complete parasitic infection without attracting the attention of the immune system. 33,34 Surprisingly, as noted above, the entry of the parasite into the macrophage does not permit macrophages to be activated. As a result, the passive macrophage also loses its ability to produce free radicals and IL-12; this difference is seen between the uninfected and infected groups. So it is expected that, unlike in the uninfected group, which produces large amounts of free radicals, it is other agents released as a result of primary infection with the L. major parasite that lead to the destruction of skin tissue. [35][36][37]
Conclusions
In the end, the destruction of human fibroblasts in the medium by both the infected and uninfected groups was evident. Because macrophages do not have such harmful effects on skin cells in the normal state of the body, and, on the other hand, the infected macrophages are inactive and cannot produce IL-12 and free radicals such as nitric oxide, it seems that what leads to tissue destruction during infection with L. major is the response of the macrophage to the presence of the parasite. In our current study, we did not fully explore which elements were released from macrophages infected with L. major, nor did we compare their levels with those of healthy macrophages. Therefore, the nature and mode of action of the postulated mediators remain to be explored in future studies.
"year": 2020,
"sha1": "5242c6d30108df9c2e8a7b741a0466f432eceb52",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepress.org/medicine/idhm/article/download/83/42",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "819d98cb0140f9c6c12d20f570231d0befb59aee",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
An analysis of the dual burden of childhood stunting and wasting in Myanmar: a copula geoadditive modelling approach
Objective: To analyse the spatial variation and risk factors of the dual burden of childhood stunting and wasting in Myanmar. Design: Analysis was carried out on nationally representative data obtained from the Myanmar Demographic and Health Survey conducted during 2015–2016. Childhood stunting and wasting are used as proxies of chronic and acute childhood undernutrition. A child with standardised height-for-age Z score (HAZ) below –2 is categorised as stunted while that with a weight-for-height Z score (WHZ) below –2 as wasted. Setting: A nationally representative sample of households from the fifteen states and regions of Myanmar. Participants: Children under the age of five (n 4162). Results: Overall marginal prevalence of childhood stunting and wasting was 28·9 % (95 % CI 27·5, 30·2) and 7·3 % (95 % CI 6·5, 8·0) while their concurrent prevalence was 1·6 % (95 % CI 1·2, 2·0). The study revealed mild positive association between stunting and wasting across Myanmar. Both stunting and wasting had significant spatial variation across the country, with eastern regions having a higher burden of stunting while southern regions had a higher prevalence of wasting. Child age and maternal WHZ score had significant non-linear association with both stunting and wasting while child gender, ethnicity and household wealth quintile had significant association with stunting. Conclusion: The study provides data-driven evidence about the association between stunting and wasting and their spatial variation across Myanmar. The resulting insights can aid in the formulation and implementation of targeted, region-specific interventions towards improving the state of childhood undernutrition in Myanmar.
productivity of an individual (6,7) and can potentially hinder a country's human, social and economic progress (8)(9)(10). Overall, early childhood undernutrition leads to an intergenerational cycle of undernutrition and prevents large sections of the population from escaping poverty traps through a vicious cycle of deprivation. Increased awareness of the debilitating consequences of childhood undernutrition has led to its being identified as a major global health priority by international organisations such as the United Nations and the WHO. In doing so, concrete timelines have been set for meeting global nutrition targets such as the Sustainable Development Goals, specifically SDG 2, which calls for an end to all possible forms of hunger and malnutrition by 2030.
There exists considerable literature on the determinants of childhood undernutrition, specifically as it manifests through stunting and wasting (11)(12)(13). These can be at the individual, maternal and household level, such as the child's age and gender, the mother's health, education and working status, as well as household location and wealth status (14). In addition, dietary diversity, complementary feeding practices and access to potable water and sanitation have been shown to have significant influence on childhood undernutrition (15). Recent studies have also explored the impact of climatic and environmental anomalies in precipitating childhood undernutrition (16,17).
Existing studies on childhood undernutrition usually model the covariate–response relationship for stunting and wasting separately. However, it is entirely conceivable that childhood stunting and wasting are related in a particular population and that this relationship can vary across regions. In studies carried out across Africa, the Americas, Asia and the Eastern Mediterranean, it has been found that the prevalence of stunting and wasting, as well as their association, varies considerably across regions (18). Specifically, it is observed that stunting and wasting have low correlations in Africa and Latin America but high positive association in Asia and the Eastern Mediterranean regions. In a recent review study on low- and middle-income countries, stunting and wasting were found to have a strong association whereby episodes of wasting contribute to stunting while stunting leads to wasting as well, albeit to a lesser extent (19). Moreover, children with concurrent stunting and wasting were found to have the highest risk of mortality compared to children who were either stunted or wasted. In this study, we explore the spatial variation and antecedents of childhood stunting and wasting in Myanmar while accounting for the possible association between the two growth measures.
Myanmar is the second largest country in Southeast Asia as well as one of the poorest. Since its independence in 1948, Myanmar has mostly been ruled by an oppressive military junta, which was responsible for widespread ethnic persecution and severe curtailment of the democratic and civil liberties of its citizens. This led to a gradual deterioration of the social, industrial and economic health of the country (20). In addition to political and ethnic upheavals, Myanmar is also vulnerable to a wide range of environmental disturbances. Since 2002, it is estimated that more than 13 million people have been affected by natural disasters in Myanmar (21), each of which has resulted in massive displacement of the populace and caused widespread damage and destruction of crops, livelihoods and property. Scenarios like these create a volatile ecosystem that breeds food insecurity and deprivation (22). Childhood undernutrition is a direct consequence of this, since young children are particularly susceptible to nutritional deficiencies when households experience food insecurity and shortages. Needless to say, childhood undernutrition has been a major public health concern in Myanmar, with over 1•3 million, or nearly 29 %, of under-five children being stunted and more than 300 000, or nearly 7 %, wasted (21). These are some of the highest rates of stunting and wasting among children under five in the Association of Southeast Asian Nations.
Existing research on the determinants of childhood undernutrition for Myanmar is relatively recent but provides crucial pan-country perspectives about the drivers of childhood stunting and wasting (11,14,23) as well as the dual burden of childhood stunting and maternal overweight and obesity (24). However, in these studies, the determinant–response pathways are modelled separately for each of the malnutrition indices. In this context, the current study provides four novel contributions to the literature. First, we jointly model childhood stunting and wasting in relation to various socio-economic and demographic determinants. Second, we use a flexible modelling approach to incorporate non-linear association between the determinants and the undernutrition indices. Third, we account for spatial variations of stunting and wasting, as well as their association, across the regions of Myanmar by incorporating both within-region and between-region spatial effects. Finally, we quantify the joint likelihood of all possible combinations of stunting and wasting across all the regions and produce spatial maps of the same. We implement these through a comprehensive modelling framework based on the copula geoadditive modelling approach (25,26). To our knowledge, this is possibly the first study that implements a joint modelling framework to analyse the dual burden of childhood stunting and wasting based on nationally representative data from any country in general and Myanmar in particular. We hope that the results of this study will inform policy makers and programme managers about regional variation in the joint prevalence of childhood stunting and wasting in Myanmar and will aid in the design and implementation of regionally sensitive nutritional interventions for improving the state of childhood undernutrition in the country.
Myanmar Demographic Health Survey
We used data from the Myanmar Demographic and Health Survey (MDHS), carried out from December 2015 to July 2016. It is the first and, to date, only Demographic and Health Survey (DHS) to be carried out in Myanmar and was funded by USAID and the Three Millennium Development Goals Fund and carried out by the Ministry of Health and Sports. The survey provides information on the health and nutritional characteristics of a nationally representative sample of women of reproductive age (15-49 years).
Myanmar is administratively divided into seven regions, seven states and one union territory. The regions are Ayeyarwady, Bago, Magway, Mandalay, Sagaing, Tanintharyi and Yangon, while the states are Kachin, Kayah, Kayin, Chin, Mon, Rakhine and Shan. The national capital, Naypyidaw, constitutes the sole union territory. The MDHS followed a two-stage stratified cluster sampling scheme to select a nationally representative sample of households from the entire country. In the first stage, each of the above states and regions was divided into rural and urban segments, each of which formed a separate sampling stratum. In the second stage, a random sample of household clusters was selected independently from each of those strata using proportional allocation. This led to a total of 442 clusters, of which 123 were urban and 319 were rural. Lastly, thirty households were selected from each cluster using equal-probability systematic sampling, resulting in a representative sample of 13 260 households from the country. Of them, 12 500 households were interviewed. From each of the selected households, information was collected from all women aged 15-49 years who were either permanent residents or who stayed in the household the night before the interview was administered. For the purpose of our analysis, we used the children's data file, which had information on 4815 children born within 5 years prior to the interview. Of those, 653 children had to be excluded for having missing values on various child, maternal and household-specific attributes as well as for having height-for-age and weight-for-height Z scores (HAZ and WHZ) below -6 or above 6. Upon exclusion, the final analysis-ready dataset consisted of complete observations on all the necessary variables for 4162 children and their mothers. Spatial mapping of childhood stunting and wasting was carried out at the regional level since that is the lowest administrative level at which publicly available spatial data files of Myanmar are available. The MDHS dataset also mapped the sampled children to their state and region of residence.
Ethical approval was not required as the DHS datasets are publicly available and use accepted procedures for data collection with ethically approved guidelines and informed consent from the participants. Details regarding the sample design and survey instruments are provided elsewhere (27).
Study variables
In this study, childhood stunting and wasting are used as proxies for chronic and acute childhood undernutrition, respectively. Anthropometric measures of children's HAZ and WHZ scores were used to construct indicators of these indices. These scores and the corresponding thresholds for attribution are based on the growth standard median metrics formulated by the WHO for children below 5 years. Specifically, children whose standardised HAZ and WHZ scores are below -2 are labelled as stunted and wasted, respectively. Children with scores more than 6 or less than -6 are treated as outliers and dropped. Thus, the dependent variables are binary with categories 'stunted' and 'not stunted' and 'wasted' and 'not wasted', respectively.
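To make the construction of the two outcome variables concrete, the short sketch below derives the binary stunting and wasting indicators and applies the outlier exclusion described above. It is an illustration of the stated rules, not the authors' code, and the column names for the Z scores are hypothetical placeholders for a DHS-style children's file.

```python
import pandas as pd

def build_undernutrition_indicators(children: pd.DataFrame) -> pd.DataFrame:
    """Derive binary stunting/wasting outcomes from HAZ and WHZ scores.

    Assumes `children` has columns 'haz' and 'whz' holding the standardised
    height-for-age and weight-for-height Z scores (hypothetical column names).
    """
    df = children.copy()
    # Drop biologically implausible scores treated as outliers (|Z| > 6).
    df = df[df["haz"].between(-6, 6) & df["whz"].between(-6, 6)]
    # WHO cut-offs: a Z score below -2 indicates stunting / wasting.
    df["stunted"] = (df["haz"] < -2).astype(int)
    df["wasted"] = (df["whz"] < -2).astype(int)
    df["stunted_and_wasted"] = df["stunted"] * df["wasted"]
    return df
```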
We considered the following risk factors of stunting and wasting, based on those accepted in the existing literature as having some association with these conditions: child gender (1 if male, 2 if female) (11,24), child age (months) (11), maternal age at first birth (in years) (24), maternal working status (1 if working, 0 if not working) (11,24), maternal educational attainment (0 if no education, 1 if primary, 2 if secondary, 3 if higher secondary) (24), household location (1 if rural, 0 if urban) (24), gender of household head (1 if male, 2 if female) (17), household wealth quintile (1 if poorest, 2 if poorer, 3 if middle, 4 if richer, 5 if richest) (11,24), toilet facility (1 if improved, 0 if not improved) (11,13), on-premise drinking water (1 if available, 0 if not available) (11,13) and water treatment (1 if done, 0 if not done). The categorisation for toilet facility and on-premise drinking water was done as per guidelines laid down in the DHS manual. We also considered the ethnicity of a child as well as the standardised HAZ and WHZ scores of the mother as proxies for maternal stunting and wasting status. As far as ethnicity is concerned, a child hailing from Chin, Kachin, Kayah, Kayin, Mon, Rakhine or Shan was considered a 'minority' (ethnicity = 0), else a 'majority' (ethnicity = 1). This categorisation is as per the country's demographic distribution and is well accepted in existing literature and common discourse.
Bivariate copula regression model
In this study, we jointly model childhood stunting and wasting while accounting for the spatial distribution of their association across Myanmar. For that purpose, we use a bivariate copula regression model that incorporates spatial effects at the regional level as well as flexible non-linear functions of covariates. The resulting modelling framework is known as a copula geoadditive model (25). In this framework, the dependence structure between the responses is modelled using copulas, which are functions that enable flexible specification of the marginal models of the responses separately from that of the joint distribution governing their dependence structure (28). Although a relatively new concept, copulas have been extensively used for modelling association between diverse classes of responses across multiple fields. Applications range from modelling mixed binary-continuous data (29), continuous and discrete longitudinal data (30), censored data (31) and count data (32). Copula models have been used in finance and insurance (33,34), forestry and environment (35) and marketing (36) as well. Excellent reviews of copula models are provided by Trivedi and Zimmer (37) and Genest (38).
Assuming Y_is and Y_iw to be the stunting and wasting status of the i-th child, such that Y_is = 1 (0) if the child is stunted (not stunted) and Y_iw = 1 (0) if the child is wasted (not wasted), the copula structure enables specification of the joint probability of the i-th child being stunted as well as wasted, that is, P(Y_is = 1, Y_iw = 1 | x_is, x_iw) = C(P(Y_is = 1 | x_is), P(Y_iw = 1 | x_iw); θ), where x_is and x_iw are the vectors of determinants of stunting and wasting, respectively. Here, C : [0, 1]² → [0, 1] is a two-place copula function, while θ, known as the copula parameter, quantifies the dependence between stunting and wasting prevalence among the children (26). A latent variable representation of the binary probabilities is used, Y*_is = η_is + ε_is, where η_is is the linear predictor consisting of linear, non-linear and spatial effects while ε_is is a white noise error. F_s(−η_is) is the cumulative distribution function of the error and determines the structure of the marginal model linking the stunting indicator Y_is to the corresponding linear predictor. The flexibility of the copula approach lies in the fact that F(·) can correspond to a broad class of univariate distributions (Gaussian, logistic, Gumbel, for instance) depending on the assumed distributional form for the error term. For instance, a standard normal distributional assumption for ε_is would lead to a probit specification for the corresponding marginal model. A similar setup can be replicated for the wasting indicator, Y_iw.
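As a numerical illustration of how a copula stitches two marginal probabilities into a joint one, the sketch below evaluates a Gumbel copula and its 180-degree rotation (the "survival" Gumbel, as selected later in the paper) at marginal probabilities implied by a probit and a complementary log-log link; it also gives the standard Kendall's tau implied by a Gumbel parameter, τ = 1 − 1/θ. The linear-predictor values are made up for illustration only; in the actual analysis the GJRM package handles this estimation internally.

```python
import numpy as np
from scipy.stats import norm

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v; theta), theta >= 1 (theta = 1 is independence)."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def survival_gumbel(u, v, theta):
    """180-degree rotated ('survival') Gumbel copula."""
    return u + v - 1.0 + gumbel_copula(1.0 - u, 1.0 - v, theta)

def kendalls_tau(theta):
    """Kendall's tau implied by a (survival) Gumbel copula parameter."""
    return 1.0 - 1.0 / theta

# Hypothetical linear predictors for one child (illustration only).
eta_stunt, eta_waste, theta = -0.6, -2.0, 1.1
p_stunt = norm.cdf(eta_stunt)                # probit margin for stunting
p_waste = 1.0 - np.exp(-np.exp(eta_waste))   # complementary log-log margin for wasting
p_joint = survival_gumbel(p_stunt, p_waste, theta)  # joint probability implied by the copula
print(p_stunt, p_waste, p_joint, kendalls_tau(theta))
```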
Marginal model specification
We incorporate four types of effects in the marginal models for stunting and wasting, namely (i) regular fixed effects of the categorical variables and of those continuous variables which are linearly related to the response; (ii) flexible non-linear effects for variables which have a curvilinear association with the response; (iii) within-region (unstructured) spatial effects to account for the presumed similarity in stunting and wasting prevalence among children residing in the same region and lastly (iv) between-region (structured) spatial effects to account for the assumed dependence in stunting and wasting prevalence among children residing in adjacent regions. The non-linear effects are estimated by thin-plate regression splines while the structured spatial effects are estimated by a Markov random field smoother, which is based on the neighbourhood structure of the regions (26,29).
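Schematically, and using notation invented here for illustration rather than taken from the paper, the linear predictor of each marginal model combines these four effect types roughly as follows:

```latex
\eta_{is} = \mathbf{x}_{is}^{\top}\boldsymbol{\beta}_{s}
  + f_{1}(\mathrm{age}_{i}) + f_{2}(\mathrm{mHAZ}_{i})
  + f_{3}(\mathrm{mWHZ}_{i}) + f_{4}(\mathrm{ageFB}_{i})
  + u_{r(i)} + s_{r(i)}
```

where x_is collects the categorical covariates, the f_j are thin-plate spline smooths of the continuous covariates, u_r(i) is the unstructured (within-region) effect and s_r(i) the Markov-random-field structured (between-region) effect for the region r(i) in which child i resides.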
Model selection
The flexibility of the copula approach enables the selection of the optimal copula representation for modelling the dependence between the responses independently of the structure of the marginal models. Accordingly, we employed a two-step approach for selection of the optimal framework, namely (i) assuming a probit representation for each of the marginal models, we selected the copula representation that produced the lowest values of the Akaike information criterion (AIC) and Bayesian information criterion (BIC) across multiple copula choices; (ii) given the optimal copula representation so obtained, we selected the marginal model structure that corresponded to the lowest AIC and BIC values across competing marginal models. Accordingly, we chose the survival Gumbel copula along with a complementary log-log link for the marginal model of wasting and a probit link for the marginal model of stunting, since this combination corresponded to the lowest values of AIC as well as BIC. The AIC and BIC values for all the competing models are provided in the supplementary document.
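For reference, the two information criteria used to rank the candidate copulas and marginal links are simple functions of the maximised log-likelihood; a minimal helper is sketched below. The log-likelihoods themselves would come from whatever fitting routine is used (GJRM in the authors' analysis), and the numbers shown are invented purely to illustrate the comparison.

```python
import numpy as np

def aic(loglik: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*log-likelihood."""
    return 2.0 * n_params - 2.0 * loglik

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: k*log(n) - 2*log-likelihood."""
    return n_params * np.log(n_obs) - 2.0 * loglik

# Hypothetical fits: {copula name: (maximised log-likelihood, number of parameters)}
fits = {"gaussian": (-3520.4, 42), "gumbel": (-3512.9, 42), "survival_gumbel": (-3508.1, 42)}
best = min(fits, key=lambda m: aic(*fits[m]))
print(best, {m: round(bic(ll, k, 4162), 1) for m, (ll, k) in fits.items()})
```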
In addition to the marginal models of stunting and wasting, we modelled the copula parameter, θ, with respect to the regions in order to capture any spatial variation in the association between stunting and wasting. This may enable the identification of regions where stunting and wasting are strongly or weakly associated, which, in turn, can provide valuable insights to policy makers on the need for region-specific interventions. For ease of interpretation, the copula parameter was transformed to Kendall's tau correlation coefficient (τ ∈ [−1, 1]), which is a measure of the degree of concordance between two variables (26). Accordingly, variation in the region-specific τ values would be indicative of a spatial variation of the dependence between stunting and wasting across Myanmar.
Analysis was carried out using the R package GJRM (generalised joint regression modelling) (26,39), while mapping was carried out in QGIS 3.22 using shapefiles freely obtainable from the Spatial Data Repository maintained by the DHS Program (https://spatialdata.dhsprogram.com/boundaries). All estimates have been weighted using the sampling weights provided in the DHS data file. The standard errors of all the estimates have been suitably adjusted to account for the multistage cluster sampling carried out in the MDHS. Statistical significance has been assessed at the customary 5 % significance level except for sparse data situations, in which the more liberal 10 % level has been used.
Sample characteristics
Table 1 shows the weighted prevalence and corresponding 95 % confidence intervals of stunting, wasting and both, corresponding to the categorical attributes considered in the study. Overall, the weighted prevalence of stunting, wasting and both stunting and wasting was 29 %, 5 % and 1•7 %, respectively. No tangible differences in the prevalence of wasting or of both stunting and wasting were seen across the categories of any of the child, maternal and household attributes. However, stunting prevalence did reveal interesting features across the various attributes. For instance, males had higher stunting prevalence than females, a pattern that held for wasting and for concurrent stunting and wasting as well. Children belonging to ethnic minority regions had considerably higher stunting prevalence (33 %) than those belonging to ethnic majority regions (24 %). Similarly, children of working mothers had higher stunting prevalence (31 %) than those of non-working mothers (20 %). There was a steady decline in stunting prevalence with increasing maternal educational attainment and household wealth. Children hailing from rural areas had a much higher stunting prevalence (31 %) than those from urban locations (20 %). Gender of household head did not seem to have any effect on the prevalence of stunting, wasting or both. However, lower stunting prevalence was observed in households having access to an improved toilet facility and in-house drinking water compared to households having restricted or no access to such facilities. Water treatment did not seem to have any effect on stunting prevalence.
Figure 1 depicts the boxplots of the continuous covariates, namely age of child (in months), maternal HAZ and WHZ scores and maternal age at first birth. For each of the covariates, separate box plots are constructed for children who are stunted, wasted, both stunted and wasted, and neither stunted nor wasted. Each of the box plots reveals the minimum, maximum as well as the first, second and third quartiles of the respective covariate for each of the child samples. Wasted children had the lowest median age among the four groups, much lower than that of the other three groups. In fact, the median ages of stunted and of stunted and wasted children were comparable to the third quartile of the ages of wasted children. Maternal age at first birth corresponding to wasted children was, on average, higher than that of the other three groups. The median standardised HAZ scores of mothers who had stunted as well as stunted and wasted children were lower than those of mothers who had wasted or healthy children. However, no tangible difference in the distribution of the maternal WHZ scores was observed across the four samples.
Table 2 cross-classifies the observed sample of 4162 children according to their stunting and wasting status. Overall, stunting and wasting prevalence were 30•39 % and 6•65 %, respectively, while 1•68 % of children were both stunted and wasted. A Pearson chi-square test of association generated a test statistic with a P-value of 0•064, which was significant at the 10 % significance level, thus implying mild association between stunting and wasting.
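The association test reported here can be reproduced in outline with any standard chi-square routine. The sketch below uses cell counts reconstructed approximately from the reported marginal and joint percentages, so the resulting P-value will only roughly match the 0•064 quoted in the text.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate 2x2 counts for 4162 children, reconstructed from the reported
# percentages (30.39% stunted, 6.65% wasted, 1.68% both); rows = stunting
# status (no/yes), columns = wasting status (no/yes).
n = 4162
both = round(0.0168 * n)
stunted_only = round(0.3039 * n) - both
wasted_only = round(0.0665 * n) - both
neither = n - both - stunted_only - wasted_only
table = np.array([[neither, wasted_only], [stunted_only, both]])

chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
```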
Bivariate copula model
As part of the bivariate copula model, three distinct models are fitted, namely the marginal model for stunting, the marginal model for wasting and the model for the copula parameter. Each of the marginal models incorporates fixed effects of the categorical covariates, non-linear effects of the continuous covariates as well as separate unstructured and structured spatial effects to accommodate within- and between-region correlations in stunting and wasting, respectively. The model for the copula parameter accounts for spatial variation in the association between stunting and wasting across the regions of Myanmar. All the categorical and continuous covariates were accommodated in both marginal models. Various modelling frameworks were considered with varying copula specifications and marginal model structures, and the one with the lowest AIC was adjudged the optimal one. For this study, the optimal model corresponded to a complementary log-log link for the marginal model for wasting, a probit link for the marginal model for stunting and a survival Gumbel copula function for the dependence between the two responses.
Fixed effect results
Table 3 shows the parameter estimates, standard errors and P values corresponding to the categorical predictors included in the marginal models for stunting and wasting.
Based on the results, boys and children belonging to ethnic minority regions are significantly more likely to be stunted compared to girls and ethnic majority children (gender estimate = 0•138, P value = 0•007; ethnicity estimate = -0•20, P value = 0•0003). The likelihood of stunting progressively decreased with increasing household wealth. Specifically, children belonging to households in the two highest wealth quintiles are significantly less likely to be stunted than those belonging to the poorest households (quintile 5 wealth estimate = -0•28, P value = 0•02; quintile 4 wealth estimate = -0•20, P value = 0•04). Maternal working status, maternal educational attainment, household location (urban or rural), drinking water or sanitation and gender of household head did not seem to have any significant association with stunting prevalence after controlling for the other factors. Childhood wasting had significant association with household wealth as well as with child gender and household location, albeit at the 10 % significance level. Specifically, children hailing from richer households were significantly less likely to be wasted than those from the poorest households (quintile 4 wealth estimate = -0•49, P value = 0•07). Children hailing from rural regions had lower prevalence of wasting than those hailing from urban regions (location estimate = -0•33, P value = 0•09). Lastly, boys had significantly higher likelihood of wasting (gender estimate = 0•23, P value = 0•09). The remaining variables, such as child ethnicity, access to proper sanitation or drinking water, gender of household head and maternal educational attainment, did not have any significant association with the prevalence of wasting.
Non-linear and spatial effects
Table 4 depicts statistical significance for the non-linear and spatial terms for both stunting and wasting. Age of child and maternal WHZ score have significant non-linear effects on the likelihood of stunting as well as wasting at the 1 % significance level, while maternal HAZ score and age at first birth have significant non-linear effects on childhood stunting at the 1 % and 5 % levels, respectively. However, neither of these two variables seems to have any significant association with childhood wasting.
Figures 2 and 3 depict the non-linear patterns of these variables on the likelihood of stunting and wasting. The likelihood of stunting steadily increases with the child's age until about 30 months, after which it declines. However, wasting seems to have an overall decreasing trend with age, with noticeable fluctuations. Moreover, the prevalence of wasting declines steadily with an increase in maternal WHZ scores, implying that wasted mothers have a considerably higher likelihood of having wasted children and vice versa. Finally, the likelihood of stunting reduces with higher age at first birth until about 25 years, after which it gradually increases. On the other hand, the likelihood of wasting drops sharply between 12 and 18 years and remains fairly stable until about 35 years of age, after which there is again a sharp increase.
The unstructured spatial effects of both stunting and wasting are significant at the 5 % level, implying that stunting and wasting prevalence have strong within-region variations. Moreover, the structured spatial effect of wasting is significant at the 5 % level while that of stunting is significant at the 10 % level. This implies that childhood wasting has strong between-region variation in Myanmar while that of stunting is mild. Figure 4 provides a visual depiction of the structured spatial effects of stunting and wasting, respectively. In both figures, darker or reddish zones correspond to regions of lower incidence while lighter or yellowish zones correspond to regions of higher incidence of stunting and wasting. It is clear that wasting has a higher degree of variation across the country, with eastern and southern regions like Kayah, Kayin, Mon, Tanintharyi and Shan recording the highest prevalence while western and south-western regions like Rakhine, Ayeyarwaddy and Yangon have the lowest prevalence. However, stunting has the opposite pattern, with higher prevalence in western states like Chin, Rakhine, Ayeyarwaddy, Yangon, Magway and Bago and lower prevalence in the southern states like Mon and Tanintharyi. Two critical insights can be derived from these maps. First, stunting and wasting have mild association, which is indicative of the presence of other factors that drive their prevalence. Second, any socio-economic or nutritional intervention should account for these regional variations for impact maximisation.
Association between stunting and wasting
The copula parameter θ was set to vary across regions through a Markov random field specification. The estimated value of the parameter, averaged over the fifteen regions, was 1 with a 95 % CI of (1, 17). The corresponding Kendall's tau coefficient was close to zero with a 95 % CI of (0, 0•941). The values remained similar across the regions, implying a mild, homogeneous positive association between stunting and wasting. We averaged the estimated joint probabilities of stunting and wasting for each of the fifteen regions based on the bivariate copula model while accounting for this inherent association. Specifically, the joint probabilities correspond to four distinct events: a child being neither stunted nor wasted, being stunted but not wasted, being wasted but not stunted and being both stunted and wasted. The spatial maps of the joint probabilities are shown in Fig. 5.
As per Fig. 5(a), the joint probabilities of a child being neither stunted nor wasted are quite high across Myanmar. This is also corroborated by Fig. 5(d), which depicts low joint probabilities of a child being both stunted and wasted throughout the country. Fig. 5(b) shows that regions to the west and those in the east bordering China have higher joint probabilities of a child being stunted but not wasted compared with those in the central and southern regions. Similarly, Fig. 5(c) indicates that the joint probability of a child being wasted but not stunted is quite low across the country.
Table 4
Chi-square test statistic and associated P-values for the non-linear and spatial components of the bivariate copula model for stunting and wasting
Discussion
This study is devoted to exploring the spatial variation and socio-economic drivers of childhood stunting and wasting in Myanmar while accounting for any inherent dependence between the two. This is accomplished using a bivariate copula regression model, which enables us to incorporate linear and non-linear covariate effects as well as within- and between-region spatial variation in the responses. The copula framework enables the separation of the dependency structure of the responses from their marginal models and hence the estimation routines (26). The flexibility of the copula approach over typical multivariate analysis is that it does not require the normality and linearity assumptions for modelling the dependence between responses (28). Moreover, it can seamlessly incorporate various types and combinations of responses, like binary, ordinal, count and continuous, in both cross-sectional and longitudinal settings (25,(29)(30)(31)(32)). Specifically for the current study, the copula framework enables us to produce spatial maps of the joint probabilities of stunting and wasting prevalence as well as quantify their association across the regions of Myanmar. These maps can provide crucial data-driven insights about regional variations in the joint prevalence of childhood stunting and wasting, which, in turn, can aid in the formulation of region-specific interventions to arrest and reverse the progression of these ailments. To the best of our knowledge, this is the first study that deploys such a flexible framework for joint modelling of stunting and wasting among children under 5 on nationally representative survey data of any country in general and Myanmar in particular.
Overall, the prevalence of stunting and wasting in the country was 29 % and 5 %, respectively, while that of both stunting and wasting was only 1•7 %. The marginal prevalence of wasting and the concurrent prevalence of both stunting and wasting were uniformly low across all categories of the sampled children. However, the marginal occurrence of stunting and the concurrent occurrence of both stunting and wasting were higher among children of stunted mothers compared to those of healthy mothers.
The copula modelling results indicate a mild positive association between childhood stunting and wasting that remains homogeneous throughout the country. However, marked differences were observed in the spatial distribution of stunting and wasting prevalence across the regions. Specifically, eastern regions bordering the Indian Ocean have higher prevalence of stunting but lower prevalence of wasting, while southern regions have higher prevalence of wasting but lower prevalence of stunting. The overall best performing regions were Mandalay and Naypyidaw, the capital. This indicates that in terms of policy formulation and implementation, a 'one-size-fits-all' strategy may be sub-optimal and calls for a more nuanced, region-specific intervention framework. Spatial maps of the joint probabilities of stunting and wasting depicted low likelihood of their dual occurrence throughout the country, thus corroborating the positive strides taken by Myanmar in recent years towards achieving the global nutritional targets (40). However, there were noticeable variations in the joint likelihood of stunting but no wasting across the country. Specifically, coastal regions in the east and mountainous regions in the northwest had higher estimated joint probabilities of a child being stunted but not wasted compared to the central and southern parts of the country. While corroborating that stunting and wasting are associated, albeit mildly, these maps are also indicative of the presence of additional drivers for stunting other than wasting, specifically in the eastern and western regions. Hence, effective policy formulation should account for regional attributes for impact maximisation.
Analysis of the marginal models of stunting and wasting yielded expected patterns of association between socio-economic and demographic variables and the likelihood of stunting and wasting. Specifically, boys and children hailing from poorer households had significantly higher susceptibility to stunting as well as wasting, which is consistent with the existing literature on childhood undernutrition for low- and middle-income countries (41)(42)(43). Maternal working status had mild association with stunting and wasting prevalence but in opposite directions. Specifically, children of working mothers had a lower risk of being wasted but a higher risk of being stunted. Moreover, children from rural areas had a significantly lower risk of being wasted compared with their urban counterparts. Interestingly, ethnicity had significant association with stunting prevalence, with ethnic minority children being at a significantly higher risk of being stunted compared with their majority counterparts. This was a relevant finding since the ethnic minorities of Myanmar have been subjected to severe persecution and discrimination by the military, which has ruled the country since its independence. Similar findings on ethnic disparities in childhood undernutrition have been reported in studies from Vietnam and Latin America (44)(45)(46). Maternal educational attainment, gender of household head and sanitation and drinking water facilities did not have any noticeable effect on the likelihood of stunting and wasting.
In addition to the fixed effects, children's age, maternal HAZ and WHZ scores as well as maternal age at first birth had complex non-linear associations with the likelihood of stunting and wasting. Specifically, stunted and wasted mothers are more likely to have stunted children, while the same is true for overweight mothers as well. The results are also indicative of the fact that having the first child after 30 years of age increases the chance of stunting, while conceiving before 18 years or after 35 years increases the odds of having a wasted child. Overall, the results are indicative of the necessity of deploying flexible modelling frameworks to decipher the complex association patterns between various maternal attributes and the prevalence of stunting and wasting. These findings add to the insights available in the existing literature on childhood undernutrition in Myanmar (11,14,24).
This study is not free from limitations, which may provide pointers for future research. The limitations are model-based as well as data-based. As far as the former is concerned, the copula framework, despite its varied advantages and flexibility, has limitations in terms of the non-uniqueness of copulas for modelling discrete and mixed outcomes, which leads to difficulty in the interpretation of results. Second, the use of a Markov random field smoother for estimating structured spatial effects relies on the assumption of correlated responses across adjacent areas, a premise that is often difficult to verify. Third, the specification of a particular copula relies heavily on the validity of assumptions about the nature of the dependencies of the responses, both Gaussian and non-Gaussian. Lastly, a major challenge for copula models lies in implementing a robust model selection and validation mechanism that simultaneously incorporates selection of optimal marginal models, predictor specification and the conditional dependence structure between the responses. This is made all the more difficult by the non-generalisability of standard tools of model checking (say, quantile-quantile plots) for correlated responses, especially binary ones.
As far as data-based limitations are concerned, first and foremost, the Myanmar Demographic and Health Survey data have only been collected once, in 2015-2016. The cross-sectional nature of these data limited the scope for performing causal inference. Second, environmental predictors such as cluster height, vegetation index and land surface temperature have not been accounted for in the modelling framework, which, if done, could further our understanding of the relationship between environmental drivers and the likelihood of stunting and wasting. Third, socio-economic and demographic variables could be incorporated in the model for the copula parameter in addition to the region-level structured spatial effect. Doing so can yield further insights about the dependence between stunting and wasting and their association with maternal and household-specific variables.
The main contribution of this study is threefold. First, precise spatial mapping of the marginal prevalence of stunting and wasting across the regions of Myanmar, accounting for both within- and between-region variation. Second, spatial delineation of the various combinations of the joint prevalence of stunting and wasting. Third, deciphering the underlying non-linear associations of child age and maternal HAZ and WHZ scores with the likelihood of stunting and wasting, thus enabling a more nuanced understanding of the effect of maternal stunting and wasting status on that of the children.
The findings of our study can be utilised for crafting targeted policies and programmes towards improving the state of childhood undernutrition in Myanmar. An essential component of the successful implementation of any such policies is proper identification of the most vulnerable subgroups and regions of the country. The fixed and spatial effects obtained from the study provide necessary pointers towards achieving that goal. What stands out in that process is the necessity of accounting for the implicit dependence between various indices of childhood undernutrition for a better understanding of the mechanisms through which those can be tackled through effective, region-specific policy formulations. This study provides an alternative and more robust framework for achieving the same compared with typical multivariate models that fail to account for such subtleties.
Fig. 2 Estimated non-linear effects of child's age, maternal HAZ score, maternal WHZ scores and maternal age at first birth on the likelihood of stunting.HAZ, height-for-age Z score; WHZ, weight-for-height Z score.
Fig. 3 Estimated non-linear effects of child's age, maternal HAZ score, maternal WHZ scores and maternal age at first birth on the likelihood of wasting.HAZ, height-for-age Z score; WHZ, weight-for-height Z score.
Table 1
Marginal and concurrent prevalence of stunting, wasting and both stunting and wasting for children under 5 in Myanmar Wt%: weighted prevalence with weights being the sampling weights.
Table 2
2 × 2 contingency table cross-classifying the sampled children according to their stunting and wasting status
Table 3
Parameter estimates, standard errors and P values for the fixed effects for the bivariate copula regression model for stunting and wasting | 2024-01-20T06:17:05.606Z | 2024-01-19T00:00:00.000 | {
"year": 2024,
"sha1": "db8076ed8c076ceff9788588cee14b0ba6c2468d",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/D538B338E5403086901EBC9A3373FC18/S1368980024000193a.pdf/div-class-title-an-analysis-of-the-dual-burden-of-childhood-stunting-and-wasting-in-myanmar-a-copula-geoadditive-modelling-approach-div.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b76d38278314653d6afef35e34711e9488153493",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267329287 | pes2o/s2orc | v3-fos-license | A Computer Vision Based Colonoscopy Support System for Real-Time Monitoring of Bowel Preparation and Colonic Anatomical Localization
The high prevalence of late stage colorectal cancer underscores the need for robust detection systems capable of mitigating its progression during its early stages. While routine colonoscopies have been the industry standard for identifying signs of early colorectal cancer, it is crucial to uphold several key quality benchmarks to ensure their effectiveness and precision. These quality indices include factors like the scope withdrawal rate and bowel preparation, among others. Our approach leverages image processing and deep learning to establish a supportive system that highlights areas requiring improvement during scope procedures for clinical practitioners. We demonstrate this via a fine-tuned ResNet-50 architecture to assess bowel preparation, yielding 98.5% average accuracy, and a curvature-tracking based approach for colonic anatomical localization, enabling precise monitoring of the withdrawal speed and bowel preparation. We show a pilot iteration of this integrated system on pre-recorded colonoscopy videos, and propose steps for further clinical testing.
INTRODUCTION
Colorectal cancer stands as one of the most prevalent cancers affecting both males and females, ranking third in incidence among men and second among women [1]. Fortunately, its prognosis is significantly improved when detected early, and patients can generally expect a high chance of full recovery. In particular, pre-cancerous polyps, if left in place, pose a risk of growing into more advanced adenomas and possible malignancies [2]. The removal of these adenomas has been shown to be correlated with reduced cancer incidence and, thereby, mortality from colorectal cancer [3]. Colonoscopies are well-established procedures and have been shown to lead to early detection of these pre-cancerous polyps when executed appropriately. However, they are susceptible to human error, and deficiencies in quality control during their execution can result in overlooked lesions and subsequent interval cancers. Specifically, quality indicators encompassing the scope withdrawal time and segment-specific cleanliness and bowel preparation play a critical role in determining the procedure's efficacy. Spending adequate time inspecting each segment of the colon during the withdrawal phase has been linked to a higher adenoma pick-up rate due to more meticulous examination of the mucosa [4]. Moreover, ensuring a properly cleaned bowel offers a clearer visual field, which is pivotal for practitioners identifying pre-cancerous polyps. Therefore, precise documentation and monitoring of these quality indices hold the potential to assist medical professionals in maintaining good adenoma detection rates, consequently improving the probability of early-stage colorectal cancer detection and facilitating prompt medical intervention.
The medical domain has seen an accelerated convergence of machine learning and computer vision methodologies, serving as a framework for prognostic and predictive systems in diagnosing various diseases and conditions. These systems also find utility in refining decision-making processes. Deep learning in particular has been identified as an ideal candidate for these applications, due to its prior use in aiding medical decision-making and its rapid acceleration in recent years. It has been frequently used in numerous applications such as image reconstruction, classification of diseases and image segmentation, among others. A main advantage of deep learning methods is their ability to improve efficiency in decision making by reducing the time that would otherwise be spent manually, such as in drug discovery [5]. Classical computer vision and processing techniques also hold an important place within these applications, specifically via feature extraction and automation of image analytical procedures, and are extensively used across various medical domains such as pathology, cardiology, endoscopy, and others [6]. A key application of such computational methods is in augmenting existing systems or assisting clinical practitioners by providing a kind of "second opinion" to compare against, and to use as a benchmark to standardise decision-making.
In this paper, we present a methodology to simultaneously interpret the bowel quality in each detected segment of the colon and monitor the scope withdrawal rate. Such an approach has the ability to enhance the efficacy of routine colonoscopy procedures by monitoring and advising on the previously mentioned quality indices. The task of interpreting the bowel quality is accomplished through a transfer learning approach for a multi-class classification model using a ResNet-50 neural network backbone. We propose a simple curvature-tracking based method to make inferences on the segment of the colon being scoped at a particular time. Withdrawal time, which refers to the time spent by the practitioner during the withdrawal of the scope from the caecum, is monitored by timing the bowel quality prediction output presented to the practitioner every 30 seconds and advising on the speed of the scope based on camera motion capture.
RELATED WORK
Bowel quality measure
The "quality" of the bowel prior to and during a colonoscopy routine is pivotal in assessing the effectiveness of the procedure.The Boston Bowel Preparation Scale (BBPS) is a 4-point scoring system that is used to ascertain the preparation quality of a bowel.A score of 0 indicates poor bowel preparation and a score of 3 indicates high quality bowel preparation.In clinical practice, the BBPS is calculated for 3 main segments of the colon and are aggregated to produce a final score out of 9, which determines the overall cleanliness of the bowel [7].A score below 6 is typically regarded as insufficient bowel preparation [8].
Deep learning for medical diagnosis
Machine learning techniques in endoscopy augmentation, and more specifically in the inference of bowel quality, have made considerable headway in recent years. Zhou et al. presented their system ENDOANGEL, which classifies images of the colon based on the bowel quality score (the BBPS score) using a deep convolutional neural network, and report an average accuracy of 91.89% [9]. Nam et al. used a similar convolutional neural network architecture to evaluate small-bowel preparation and achieved an accuracy of 93% [10]. In other related research, various model architectures and techniques have shown to be effective in medical diagnostics. For example, Almalik et al. explored vision transformers for the classification of chest X-ray images and reported an accuracy of 97.64% [11]. Using transfer learning, Alzubaidi et al. achieved an accuracy of 97.51% for breast cancer classification [12]. Similarly, numerous studies also explore the application of deep learning for various medical diagnostics related to cardiac, respiratory, endocrine, and cranial diseases, among others.
Colonic anatomical localization
The colon is divided into seven key locations that can be thought of as "colonic segments": the rectum, sigmoid colon, descending colon, splenic flexure, hepatic flexure, ascending colon, and the caecum. Broadly, we can combine the hepatic and splenic flexure into one umbrella term: the transverse colon. Precise localization, which refers to identifying which segment of the colon the scope is in during the colonoscopy routine, is important to promote targeted treatment if a pre-cancerous polyp is detected. During the procedure, physicians typically use visual markers as well as the movement of the scope, or the shape the scope moves in, to determine the scope location.
Automating this procedure is a non-trivial task, and not many approaches to this localization problem have been explored in recent literature. Most commonly, deep learning has been used to classify images of the bowel to their respective locations. Saito et al. [13] used a pre-trained convolutional neural network to classify bowel images into seven anatomical locations, reporting an overall accuracy of 66.6% on the test data after training the model on ∼10000 images. Houwen et al. [14] used a similar method but used images from magnetic endoscope imaging to train a pre-trained classifier, reporting an overall accuracy of 63%. Hence, using deep learning to infer the location given an image of a section of the bowel not only requires a large amount of data, but also does not show remarkable results. This can be attributed to the complexity of the images, and it is often difficult for practitioners to identify the location using solely visual biomarkers. A different method proposed by Herp et al. [15] describes a feature-based tracking approach using an endoscopic pill that models the shape of the colon as the pill is ingested. This method achieved an average accuracy of 86% in reconstructing the colon shape and subsequently labeling the anatomical regions. None of these approaches suggest a real-time application, which poses a significant gap in the deployment of such algorithms in clinical practice. Hence, not only should a new approach accurately predict the anatomical region, but it should also do so feasibly in real-time.
METHODS
Our proposed system can be broken down into three blocks: a classifier that is used to infer the Boston Bowel Preparation Scale (BBPS) score and hence provide a measure of the quality of the bowel, a localizer that predicts the current location of the scope in the bowel, and a movement monitor that advises on the scope speed and stability. The aggregated BBPS score is shown every 30 seconds, with 2 readings per location (approximately 1 minute in total per predicted segment). The speed and stability are monitored continuously throughout the procedure.
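A minimal sketch of how these three blocks could be wired together over a video stream is shown below. The classifier, localizer and blur-check calls are placeholders for the components described in the following subsections, the 30-second reporting interval follows the description above, and the frame grabbing uses OpenCV; none of this is the authors' actual implementation.

```python
import time
import cv2

REPORT_INTERVAL_S = 30  # BBPS score reported every 30 seconds

def run_support_system(video_source, classifier, localizer, blur_check):
    """Skeleton real-time loop: the three callables stand in for the actual models."""
    cap = cv2.VideoCapture(video_source)
    scores, last_report = [], time.time()
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if blur_check(frame):              # FFT-based motion-blur flag
            print("Scope moving too fast - slow down")
            continue
        scores.append(classifier(frame))   # per-frame BBPS prediction (0-3)
        segment = localizer(frame)         # current colonic segment estimate
        if time.time() - last_report >= REPORT_INTERVAL_S:
            avg = sum(scores) / max(len(scores), 1)
            print(f"{segment}: mean BBPS over last interval = {avg:.2f}")
            scores, last_report = [], time.time()
    cap.release()
```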
Classifier and BBPS Score Prediction
We used the publicly available Nerthus dataset [16] containing 5525 labelled images taken from 21 videos of colonoscopy routines. These images were labelled according to their BBPS score, i.e., 0, 1, 2 or 3. The dataset was randomly split into 70% training data, 20% validation data and 10% test data. Further testing was also done on a dataset curated by the National University Hospital.
The training dataset was subjected to various pre-processing and augmentation techniques including resizing, rotating, flipping and normalization to enhance the variability of data in each class.
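A representative torchvision pipeline for the augmentations listed above might look like the following; the exact image size, rotation range and normalization statistics are illustrative assumptions rather than the values used in the study.

```python
from torchvision import transforms

# Illustrative training-time augmentation; parameter values are assumptions.
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),           # typical ResNet-50 input size
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```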
We used transfer learning via fine-tuning of the last few layers of a ResNet-50 backbone, hereafter called ScopeNet, and used focal loss as an alternative to cross-entropy to account for the imbalance in image classes.
The focal loss down-weights well-classified examples via the modulating factor (1 − p_t)^γ introduced in their paper [17], where p_t is the predicted probability of the true class. It under-weighs contributions from easier classifications, i.e., when p_t > 0.5, since the factor tends to 0 as p_t approaches 1. This decreases the influence of easier classes on the overall loss. γ is the focusing parameter, which adjusts the rate at which easier classifications are under-weighed. The ResNet-50 backbone consists of 48 convolutional layers, 1 max pooling layer, and 1 average pooling layer. Specifically, it leverages skip connections to solve the problem of vanishing gradients. The Nerthus dataset is small, and training a highly deep model from scratch would lead to overfitting. By using pre-trained weights and unfreezing the last few layers of the ResNet-50 model, and with the inherent skip connections that further reduce the vanishing gradients problem, we retain the depth the model provides while still preventing potential overfitting.
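A compact PyTorch rendering of a multi-class focal loss together with the partial fine-tuning described above is sketched below; the layer-freezing depth, γ value and recent-torchvision weights enum are assumptions, not the study's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FocalLoss(nn.Module):
    """Multi-class focal loss: mean of -(1 - p_t)^gamma * log(p_t)."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # log p_t for each sample's true class
        log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1.0 - pt) ** self.gamma * log_pt).mean()

# Freeze everything, then unfreeze the deepest block and a new 4-class head
# (an illustrative choice of "last few layers"; requires a recent torchvision).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True
model.fc = nn.Linear(model.fc.in_features, 4)  # BBPS classes 0-3
criterion = FocalLoss(gamma=2.0)
```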
Colonic Anatomical Localization
To track the motion of the colonoscope during the procedure, we propose a simple curvature-based tracking methodology. As seen in Figure 4a, there are three key turns in the colon, which occur between the ascending colon and the hepatic flexure, the splenic flexure and the descending colon, and the descending colon and the sigmoid. Hence, given a priori knowledge that the starting point is at the caecum (from the withdrawal phase), by monitoring which turn the scope takes, we can determine the current location in the colon. The camera attached to the scope offers a wealth of valuable information. In particular, when the camera peers through an opening, like the colon, we are able to perceive the overall direction of the scope's movement and anticipate the next destination. This is perhaps best understood through the analogy of peering into a tunnel. Similar to how we perceive the direction of the tunnel by observing the darkest portion, the same concept applies when we look at an image of the bowel through the scope camera. Just as the darkest point in the tunnel indicates its continuing path, the darkest part of the bowel image serves as a visual cue, allowing us to discern the direction or curvature of the scope's movement. Figure 4b demonstrates this idea.
We can identify this point of minimum intensity in the image using 2D wavelet analysis. The Daubechies wavelet of order 4 maximizes efficiency and is suitable for edge detection, which is critical for depth estimation. On performing wavelet decomposition and thresholding, the resulting image has areas of high and low intensities. The deepest point in the image can be thought of as the location with the lowest intensity value in the thresholded image. We follow the coordinate of maximum depth at each timestep in the colonoscopy routine to visualize the rough path of the scope. On this path, we perform curvature analysis to determine the critical turning points which indicate a change in the location in the colon. This is done using the Frenet-Serret formulae [19], a set of mathematical equations that describe the behavior of a curve in 3-dimensional Euclidean space. Given a parameterized curve r(t) in 3D space, the Frenet-Serret formulae allow us to compute the tangent vector T, normal vector N, and binormal vector B at any point along the curve. The tangent vector represents the direction of motion of the curve at that point, the normal vector represents the direction of curvature, and the binormal vector represents the direction of twist. Using these, we can compute the normalized curvature at any point along the curve, which represents the rate at which the curve changes direction.
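One way to realise this "deepest point" estimate is sketched below using PyWavelets: a db4 decomposition with soft-thresholded detail coefficients acts as a denoiser, and the darkest pixel of the reconstruction is taken as the lumen direction. The decomposition level and threshold value are assumptions chosen for illustration.

```python
import numpy as np
import pywt

def estimate_lumen_point(gray: np.ndarray, level: int = 2, thresh: float = 10.0):
    """Return (row, col) of the darkest point after db4 wavelet denoising.

    `gray` is a 2-D grayscale frame; level/threshold values are illustrative.
    """
    coeffs = pywt.wavedec2(gray.astype(float), "db4", level=level)
    # Soft-threshold the detail coefficients, keep the approximation as-is.
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    recon = pywt.waverec2(denoised, "db4")
    recon = recon[: gray.shape[0], : gray.shape[1]]  # trim any reconstruction padding
    return np.unravel_index(np.argmin(recon), recon.shape)
```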
For the pilot study, we focus on running tests to find the appropriate threshold for the curvature measure at which a "turn" to the next location is indicated. To implement this localization mechanism into a real-time system, whenever the value measured at every point relative to the past coordinates in the path surpasses the set threshold, the location predicted by the system would be updated.
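In discrete form, the curvature of the tracked path can be obtained from numerical first and second derivatives via κ = |r′ × r″| / |r′|³, and a "turn" flagged wherever κ exceeds the empirically chosen threshold, as sketched below. The helper names and the gradient-based discretisation are illustrative choices, not the authors' implementation.

```python
import numpy as np

def path_curvature(points: np.ndarray) -> np.ndarray:
    """Curvature along an (N, 3) array of path points: kappa = |r' x r''| / |r'|^3."""
    d1 = np.gradient(points, axis=0)   # first derivative r'
    d2 = np.gradient(d1, axis=0)       # second derivative r''
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3 + 1e-12  # avoid division by zero
    return num / den

def detect_turns(points: np.ndarray, threshold: float) -> np.ndarray:
    """Indices along the path where the curvature exceeds the tuned threshold."""
    return np.where(path_curvature(points) > threshold)[0]
```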
Motion Analysis
An additional feature we propose to enhance the effectiveness of the colonoscopy procedure is motion analysis and advising on the scope speed. If the practitioner moves the scope too fast during the procedure, the quality of the images passed to the neural network and the localizer would be substandard, which would negatively impact the performance of the system. Additionally, monitoring the speed can be useful in ensuring the colon is adequately scoped. We use Fast Fourier Transform (FFT) analysis to evaluate the blurriness of the images being passed to the pipeline. Particularly, by analyzing the magnitude of the FFT of the image and applying the inverse FFT, we can estimate the degree of motion blur and determine if the scope is moving too fast. Further testing in a clinical setting is required to ascertain the exact threshold to determine when the scope is moving too fast based on the FFT results.
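A simple frequency-domain blur score in this spirit is sketched below: the centred spectrum has its low-frequency core zeroed out, and the mean log-magnitude of the inverse transform serves as a sharpness measure, with low values suggesting motion blur. The cut-off radius and the decision threshold are placeholders that, as noted above, would need clinical calibration.

```python
import numpy as np

def fft_sharpness(gray: np.ndarray, cutoff: int = 30) -> float:
    """Mean log-magnitude of the high-frequency content of a grayscale frame."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    rows, cols = gray.shape
    crow, ccol = rows // 2, cols // 2
    f[crow - cutoff:crow + cutoff, ccol - cutoff:ccol + cutoff] = 0  # drop low frequencies
    magnitude = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
    return float(np.mean(np.log1p(magnitude)))

def is_too_blurry(gray: np.ndarray, threshold: float) -> bool:
    """Flag a frame as motion-blurred when its sharpness falls below a tuned threshold."""
    return fft_sharpness(gray) < threshold
```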
EXPERIMENTS
Boston Bowel Preparation Scale Score Prediction
We ran the ScopeNet model using standard parameters, i.e., 10 epochs, a learning rate of 0.01, a batch size of 32, a gamma of 0.1 and a step size of 7. We then used Bayesian optimisation to tune these parameters, with the final set being 15 epochs, a learning rate of 0.025, a batch size of 80 and a step size of 10. Our model performed better than other architectures, as seen in Table 1.
Colonic Anatomical Localization
Real images obtained from colonoscopy videos exhibit complexity, including blurry frames and specular noise. As a result, the curvature-based method is initially tested on a publicly available dataset of synthetic colon images, generated from CT-colonography [21]. This dataset comprises 16,016 RGB images depicting different segments of the colon; however, they lack location annotations. Thus, this dataset is solely utilized to assess the viability of the proposed method. Additionally, we used the dataset curated by the National University Hospital (NUH) for further testing of the localization methodology for the pilot study.
For the initial testing of the curvature analysis approach, 364 images were used from the CT-colonography dataset. Following the pipeline, the 3D visualization of the entire path flow is shown in Figure 4c. Figures 4d and 4e show the curvature analysis for a short segment of this path.
From the curvature graph in Figure 4e, we can see that maxima in the curvature occur at approximately 20s, 40s and 60s from the first frame. This corroborates the 3D plot, implying that the method can work on the synthetic data. On the NUH dataset, due to the images being more complex, we evaluate the performance on shorter segments. Particularly, since the movement of the scope is rougher due to the back-and-forth motion, there is a lot more variability in the path. In addition to this, the real data has a lot more noise in the images than the synthetic dataset, specifically motion blur and specular noise. Hence, we apply a median filter and evaluate the performance with varying kernel sizes to determine the best results. We observe that having a bigger kernel size generally improves the prediction of the path. This is particularly because of the removal of noise, which affects the estimation of the point of greatest depth. However, increasing the kernel size also results in the loss of information that may be useful in later stages, so the optimal kernel size is worth investigating.
Based on the 3D path graphs, we can make estimations about the current location from our prior knowledge that the withdrawal phase begins at the caecum. Therefore, we assume that the initial point of maximum curvature indicates a turn from the ascending colon to the hepatic flexure, the second point of maximum curvature signifies a turn from the splenic flexure to the descending colon, the third point indicates a turn from the descending colon to the sigmoid, and finally, that the last point denotes a turn from the sigmoid to the rectum. It should be noted that differentiating between the hepatic and splenic flexure is not feasible with this method. However, as outlined in the following section, deep learning can be employed in conjunction with this approach to identify these anatomical regions. Furthermore, due to the complexities present in real data, resulting in numerous disturbances in the path, accurately modeling the complete path at the current stage has proven to be challenging and requires further research. Figure 5 provides an annotated path from the descending colon to the splenic flexure to demonstrate the method's validity as a proof-of-concept.
BBPS Score Prediction
In the experiments carried out as described, the main evaluation metrics used were accuracy, precision, and recall (for each individual class). After Bayesian optimization, we found that the ResNet-50 backbone provides the best performance overall and can be used for deployment. ResNet-50 particularly outperforms VGG and DenseNet due to its inherent architecture allowing shortcut connections and residual functions that reduce the training loss, while still maintaining complexity in terms of the number of stacked layers [22]. With 48 convolutional layers, 1 max pooling layer, and 1 average pooling layer, ResNet-50 facilitates this behavior. Moreover, the inclusion of residual functions mitigates the issue of vanishing gradients, a common limitation of the VGG architecture. To support our findings, further tests should be conducted on more challenging test sets. An interesting observation is the poor performance of the vision transformers. One particular reason why this might have occurred is the limited training set, since we were training the vision transformer from scratch. Additionally, vision transformers typically do not perform as well on images that mainly differ in texture and other finer details, as seen in the images [23].
Colonic Anatomical Localization
Although this method establishes a baseline for anatomical location prediction, further refinement and investigation are necessary to enhance its rigor. One significant limitation lies in the depth estimation stage, where the estimated point can be incorrect due to the perspective of the camera. By applying perspective transformations, images captured at the same or similar timepoints can be standardized, reducing fluctuations within short time periods. Another challenge tackled in this study relates to the usability of frames in the colonoscopy video. Since some frames are not usable, there are often "jumps" between frames, where images within certain time periods are missing. This discontinuity adds to the fluctuation and can cause the exact moment of location change to be missed. To overcome this, we increased the number of frames taken per second, which resulted in slightly more images being classified as usable. However, since this did not solve the problem completely, we used an interpolated spline in the 3D visualization to smooth the path representation. Further work on overcoming this problem should be done to make the system more robust. Lastly, while this method works in some cases on poorly prepped bowels, it does not generalize well, limiting its scope to predicting anatomical regions in well-prepped bowels at this stage. The system could also be enhanced by combining it with a visual learner such as a CNN to improve the confidence of the prediction. However, this integration presents a significant challenge, as it requires a substantial amount of well-annotated data.
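The spline-based smoothing mentioned above could look like the following SciPy sketch, where irregularly spaced (usable) path points are fitted and resampled; the smoothing factor and the artificial gap in the toy data are illustrative.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(points: np.ndarray, timestamps: np.ndarray, n_samples: int = 500,
                smoothing: float = 1.0):
    """Fit a smoothing spline through the usable (possibly irregularly spaced)
    3D path points and resample it uniformly, bridging the 'jumps' left by
    discarded frames. The smoothing factor is illustrative."""
    # Parameterize the spline by normalized timestamps of the usable frames.
    u = (timestamps - timestamps[0]) / (timestamps[-1] - timestamps[0])
    tck, _ = splprep(points.T, u=u, s=smoothing)
    u_new = np.linspace(0.0, 1.0, n_samples)
    return np.array(splev(u_new, tck)).T   # (n_samples, 3) smoothed path

# Toy example: a path with an artificial gap between t = 4 s and t = 7 s.
t = np.concatenate([np.arange(0, 4, 0.2), np.arange(7, 10, 0.2)])
pts = np.stack([t, np.sin(t), 0.05 * t**2], axis=1)
print(smooth_path(pts, t).shape)
```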
INTEGRATION TO A REAL-TIME SUPPORT SYSTEM
The intended application of the BBPS predictor and the colonic anatomical localizer is real-time monitoring and advising during the colonoscopy routine. We demonstrate a pilot iteration on a pre-recorded colonoscopy video obtained from NUH to showcase the methodology. The colonic anatomical localizer is not explicitly shown; however, this video was cropped to show only a homogeneous segment of the colon. The demonstration can be found here. In our video, we show the score prediction every 5 seconds (in longer videos, every 30 seconds), a real-time histogram recording the scores, and a running motion analysis (the methodology of which was described in the Methods section).
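A common way to implement the FFT-based motion/blur analysis referred to here is sketched below; the cutoff radius and the synthetic frames are illustrative, and the paper's exact implementation may differ.

```python
import numpy as np
import cv2

def fft_blur_score(gray_frame: np.ndarray, cutoff: int = 30) -> float:
    """Blur/motion score from the FFT: zero out a low-frequency window around
    the spectrum centre and measure the mean log-magnitude of what remains.
    Sharp frames keep more high-frequency energy than motion-blurred ones."""
    f = np.fft.fftshift(np.fft.fft2(gray_frame.astype(np.float32)))
    h, w = gray_frame.shape
    cy, cx = h // 2, w // 2
    f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    magnitude = 20 * np.log(np.abs(f) + 1e-8)
    return float(np.mean(magnitude))

# Compare a sharp random-texture frame with a heavily blurred copy of it.
rng = np.random.default_rng(1)
sharp = (rng.random((240, 320)) * 255).astype(np.uint8)
blurred = cv2.GaussianBlur(sharp, (21, 21), 8)
print("sharp:", fft_blur_score(sharp), "blurred:", fft_blur_score(blurred))
```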
CONCLUSIONS AND FURTHER WORK
In this paper, we presented a methodology to augment colonoscopy outcomes and discussed proof of viable translation to clinical settings. We presented ScopeNet, a transfer-learning-based architecture that predicts the Boston Bowel Preparation Scale score with an average accuracy of 98.5% from images of the bowel, a segment-wise colonic localizer, and additional tools such as speed and motion analysis.

The integration of such systems can reduce the burden on practitioners in real time and improve the early detection of pre-cancerous polyps during colonoscopy routines. However, this research will benefit greatly from further work on the subcomponents of this system. Studies performing a benchmark analysis against human physicians are vital to evaluate the necessity and efficacy of the BBPS predictor we described. Furthermore, innovations in data pre-processing that exploit the visual characteristics of bowel images could promote new methods and insights in medical computer vision. The colonic anatomical localizer presented will also require further extensive testing with various types of colonoscopy videos, and new improvements and methodologies should be proposed to encapsulate abnormal colonoscopy cases. In addition to improvements in these phases of the project, further research into augmenting colonoscopy outcomes using machine learning and automated systems could include volumetric analysis through 3D renderings of the colon. This would help visualize the scope routine in more detail, promote more precise targeted treatment, and make the system more complete.
Figure 2 :
Figure 2: a Images are passed into the ScopeNet model (described in the Methods section) sequentially. The model outputs the BBPS score for the image. b Simultaneously, the location of the scope in the colon is predicted (the segment of the colon the scope is in). c Additional features such as motion analysis are computed via blur detection using FFT.
Figure 4 :
Figure 4: a Anatomical segmentation of the colon [20]. b Visualisation of determining the next location. c 3D visualisation of the path taken by the colonoscope during a colonoscopy. d Visualisation of a short path segment during a colonoscopy. e Curvature analysis calculated using the Frenet-Serret equations.
Figure 5 :
Figure 5: Annotated path from descending colon to splenic flexure and corresponding curvature analysis.
Table 1 :
Results of Comparative Experiments | 2024-01-31T16:06:19.916Z | 2023-10-20T00:00:00.000 | {
"year": 2023,
"sha1": "9fd1968422189e1a66e5f95fc34270a707b8850b",
"oa_license": "CCBY",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3634875.3634885",
"oa_status": "HYBRID",
"pdf_src": "ACM",
"pdf_hash": "1e143555f48ae264b3f1f216cd6b77a5d6041783",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
214627714 | pes2o/s2orc | v3-fos-license | Experimental and Constitutive Model on Dynamic Compressive Mechanical Properties of Entangled Metallic Wire Material under Low-Velocity Impact
In this paper, the dynamic compressive mechanical properties of entangled metallic wire material (EMWM) under low-velocity impact were investigated and the constitutive model for EMWM under low-velocity impact was established. The research in this paper is based on a series of drop-hammer tests. The results show that the energy absorption rate of EMWM is in the range from 50% to 85%. Moreover, the EMWM with a higher relative density would not plastically deform macroscopically and has excellent characteristics of repetitive energy absorption. With the increase in relative density, the maximum deformation of EMWM decreases gradually, and the impact force of EMWM increases gradually. With the increase in impact-velocity, the phenomenon of stiffness softening before reaching the maximum deformation of EMWM becomes more significant. A constitutive model for EMWM based on the Sherwood–Frost model was established to predict the dynamic compressive mechanical properties of EMWM. The accuracy of the model was verified by comparing the calculated results with the experimental data of the EMWM with different relative densities under different impact-velocities. The comparison results show that the established model can properly predict the dynamic compressive mechanical characteristics of EMWM under low-velocity impact loading.
Introduction
Entangled metallic wire material (EMWM) is a novel porous material made of wire through a series of special processes. It can dissipate vibration energy through dry friction between adjacent wire helixes [1,2]. Some researchers also use the term metal rubber (MR) [3], metal wire mesh (MWM) [4], or elastic porous wire mesh (EPWM) [5]. However, they are the same kind of material because the manufacturing process and mechanical properties of these materials are highly similar. Compared to traditional polymer material (such as natural rubber), the EMWM has some outstanding advantages, such as high-low temperature resistance and corrosion resistance. Therefore, EMWM has been widely used in extreme environments, such as the vibration reduction in ship's foundations under high temperature [6], the micro-vibration isolation system of spaceborne cryocoolers [7], and the sealing of rotors for turbo-machinery [8].
Porous materials, such as metal foams and honeycomb materials, can effectively absorb impact energy and have attracted the attention of many researchers and material manufacturers. These porous materials are "hard frame materials" (HFMs). The edge of the cell element of a HFM is a rigid beam or face, and the connection between the cell elements is a rigid connection. When a HFM is subjected to impact load, the impact energy will be absorbed through plastic deformation of the edges of the cell elements of the HFM. Therefore, the HFM cannot dissipate impact energy repeatedly because of irreversible plastic deformation. The EMWM is a kind of open-cell "soft frame material" (SFM) [9]. The edge of the cell element of EMWM is variable edge, which is determined by the contact point between adjacent wire helixes. When EMWM is subjected to impact load, the impact energy will be absorbed through dry friction between the adjacent wire helixes and air damping caused by air discharge and inhalation of internal pores [9]. Wu et al. [9] noted that the EMWM can return to its original size under multiple impact loads. However, when the impact load exceeds the bearing limit of the EMWM, the EMWM may be molded again. This means that the EMWM will plastically deform.
In previous research, the researchers' investigations on EMWM mainly focused on the quasi-static mechanical properties [10][11][12], dynamic mechanical properties [13][14][15], the influence of different factors on its mechanical properties [16][17][18][19][20][21], damping mechanism [1,22] and engineering application [23,24]. However, the dynamic compressive mechanical properties of entangled metallic wire material under low-velocity impact have not received enough attention from researchers. Liu et al. [25] studied a kind of sintered entangled metallic wire material (SEMWM) under impact load and found that the impact toughness of SEMWM increases with the decrease in its porosity. The internal adjacent wires of the SEMWM were sintered together at the points of contact in a vacuum furnace. Therefore, the SEMWM is a kind of "hard frame material". Guérard et al. [26] investigated the mechanical properties of a single-wire entangled material at different high strain rates (using a Hopkinson bar device). The results of their experiments show that the strain rate and density have a significant influence on the dynamic mechanical properties of EMWM. The results of Guérard et al. also confirm that the EMWM has good suitability for impact energy absorption, but they did not analyze the impact energy dissipation mechanism of EMWM. Xia et al. [27] carried out theoretical and experimental research on the shock protection characteristics of two types of EMWM isolator. Liu et al. [28] applied the EMWM to a gun's latch block buffer, and their test results show that the cushion device made of EMWM has better properties and a longer life in an impact environment than that made of polymer materials. Jeong et al. [29] designed a frequency-tunable vibration and shock isolator with a mesh washer, and the results from their experiments reveal that the isolator can not only achieve shock attenuation but also avoid vibration amplification. Recently, Wu et al. [9] investigated the mechanical behavior of EMWM under quasi-static and low-velocity impact loading and noted that the EMWM has excellent characteristics of repetitive energy absorption. However, the influence of material parameters on its energy absorption performance was not considered by Wu et al. [9]. To facilitate the application of EMWM for shock absorption, it is necessary to investigate the constitutive model of EMWM, which can predict the dynamic compressive mechanical properties of EMWM under impact loading. Due to the complex internal spatial structure of EMWM, it is difficult to establish a constitutive model of EMWM with consideration of the actual internal space structure. Therefore, many scholars have adopted statistical analysis and parameter fitting methods to obtain macro-mechanical models of EMWM based on experiments [30,31]. The Sherwood-Frost model [32], as shown in Equation (1), was proposed by Sherwood and Frost in the 1990s, and has often been used to establish the constitutive models of various metal foams [33][34][35][36]. By comparing and analyzing the constitutive models of metal foam and EMWM, Li et al. [37] presented a constitutive model for knitted-dapped EMWM by using the Sherwood-Frost model and performed parameter fitting. Ding et al. [38] proposed a modified constitutive model for plate-like EMWM with consideration of thermal expansion.
σ eq = H(T) G(ρ) M(ε, ε̇) f(ε), (1)

where σ eq is the equivalent stress, H(T) is the temperature softening term, G(ρ) is the density term, ρ is the density, M(ε, ε̇) is the strain rate enhancement term, and f(ε) is the shape function. The EMWM is a kind of porous material with energy absorption and excellent impact resistance. The dynamic compressive mechanical properties of EMWM under low-velocity impact will be investigated by a series of drop-hammer tests. The effect of impact velocity and relative density on the mechanical properties and energy absorption mechanism of EMWM will be studied. Finally, a constitutive model for EMWM will be proposed to predict the dynamic compressive mechanical properties of EMWM under low-velocity impact loading.
Materials Used and Specimen Preparation
In this paper, austenitic stainless-steel wires 304 (0Cr18Ni9) with a diameter of 0.3 mm were used as the raw material for manufacturing EMWM specimens. A three-step process was adopted to fabricate the EMWM specimens [1,9,10,26]. First, the straight austenitic stainless steel wire was processed into a dense wire helix according to the principle of coil spring processing; second, a rough porous base material of EMWM was prepared by fixed-pitch stretching and cross weaving of the tight wire helix; third, the base material was put into a specific mold, and molded to obtain an EMWM specimen.
Relative density is the most important structural parameter of porous materials [39,40], and is often used to evaluate the porosity in a porous material. The relative density of EMWM (ρ r) can be calculated as

ρ r = ρ/ρ s = 1 − φ, (2)

where ρ is the density of EMWM, ρ s is the density of the base material, ρ s = 7.87 g/cm³, and φ is the porosity of EMWM.
To investigate the influence of relative density on the dynamic performance of EMWM and assess the repeatability of the results, 4 batches of EMWM with different relative densities were manufactured, with each batch being composed of 5 specimens. One of the manufactured EMWM specimens is shown in Figure 1. The specific size parameters of the EMWM specimens are listed in Table 1, where CI denotes the confidence interval of the parameter values at a confidence level of 0.95.
Drop-Weight Impact Tests
The dynamic compressive mechanical properties of EMWM under low-velocity impact were tested by a series of drop hammer tests. A self-designed drop hammer test device, which is shown in Figure 2a, was used to carry out the tests. The drop hammer test device was mainly composed of a hammer, a dynamic force sensor (YX-60T, Yi Xuan Electronic Technology Co., Ltd., Yangzhou, China), a displacement sensor (MTS-H10C, GIVI, Nova Milanese, Italy) and a real-time data acquisition and control system. The weight of the hammer was 76 kg. The contact surface between the drop hammer and the specimen was flat. The maximum lifting height of the drop hammer was 4 m. The real-time data acquisition and control system was built on the basis of the LabVIEW-RT system and an X series data acquisition device (NI PCIe-6351, National Instruments, Austin, TX, USA). The maximum detection range of the YX-60T is from 0 to 600 kN. The displacement resolution of the MTS-H10C is 10 µm.
For this device, the force and magnetic sensors with different measuring range and accuracy can be replaced according to the testing needs. During the impact test, each EMWM specimen was installed in a pre-designed fixture, as shown in Figure 2b. The impact velocity and corresponding impact energy and initial strain rate are summarized in Table 2. The tests were divided into two parts. The first part entailed each sample being tested repeatedly from the lowest to the highest impact velocity, and the change in the sample height was recorded at the end of each test. In the second part, for the specimens whose height had changed after the first part of the test, the samples with the same parameters were reprepared and only a single impact test was carried out to compare the effects of different impact times on the mechanical properties and energy absorption properties of the specimens.
To investigate the dynamic compressive mechanical properties and energy absorption of EMWM, specific energy absorption (SEA), energy absorption rate (η D ) and impact stiffness (k) were derived from the experiment data.
According to the law of conservation of energy, the total impact energy (E 0) is defined as Equation (3):

E 0 = mgh = mV 0 ²/2, (3)

where m is the mass of the hammer, m = 76 kg; g is the acceleration of gravity, g = 9.8 m/s²; h is the lifting height of the drop hammer; and V 0 is the initial impact velocity. The energy absorbed (E a) by EMWM can be expressed as follows:

E a = m(V 0 ² − V 1 ²)/2, (4)

where V 1 is the velocity of the drop-hammer at which the force is almost zero at the end of the unloading process. The specific energy absorption (SEA) is the energy dissipated by the EMWM per unit mass and is defined as Equation (5):

SEA = E a /m r , (5)

where m r is the mass of the EMWM specimen. The energy absorption rate (η D) is used to evaluate the energy absorption capability of EMWM under low-velocity impact and can be calculated as

η D = E a /E 0 . (6)

The energy absorption rate (η D) can be obtained by the combination of Equations (3), (4) and (6); η D can be expressed as

η D = 1 − V 1 ²/V 0 ². (7)

The impact stiffness (k) represents the load-bearing capacity of EMWM. Its expression is given as Equation (8):

k = F max /X max , (8)

where F max and X max are the maximum force and maximum displacement, respectively.
To facilitate the establishment of the constitutive model for EMWM under low-velocity impact, the force-displacement curves of EMWM can be transformed into stress-strain curves. The stress (σ), strain (ε) and initial strain rate (ε̇) can be calculated, respectively, as

σ = F/S, (9)
ε = x/H, (10)
ε̇ = V 0 /H, (11)

where F is the force, S is the cross-sectional area of the EMWM specimen, H is the height of the EMWM specimen, and x is the displacement.
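A small Python helper that evaluates these quantities from a measured force-displacement curve might look as follows; the hammer mass and gravitational acceleration are taken from the text, while the specimen mass, cross-section, height, velocities and the synthetic loading curve are placeholder values.

```python
import numpy as np

def drop_test_metrics(force, displacement, m=76.0, g=9.8, V0=5.0, V1=1.5,
                      m_r=0.05, S=1.0e-3, H=0.02):
    """Energy-absorption and stiffness metrics of an EMWM specimen from a
    measured force-displacement curve, following the expressions above."""
    E0 = 0.5 * m * V0**2                      # total impact energy
    Ea = 0.5 * m * (V0**2 - V1**2)            # absorbed energy
    sea = Ea / m_r                            # specific energy absorption
    eta = 1.0 - (V1 / V0)**2                  # energy absorption rate
    k = np.max(force) / np.max(displacement)  # impact stiffness
    sigma = force / S                         # stress
    strain = displacement / H                 # strain
    return {"E0": E0, "Ea": Ea, "SEA": sea, "eta": eta, "k": k,
            "stress": sigma, "strain": strain}

# Illustrative loading branch: a stiffening force-displacement curve.
x = np.linspace(0.0, 0.008, 200)              # displacement [m]
F = 2.0e6 * x + 5.0e8 * x**2                  # force [N], synthetic
print({key: val for key, val in drop_test_metrics(F, x).items() if np.isscalar(val)})
```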
Impact Process Analysis
A high-speed 10-bit CMOS camera (PCO.1200hs, PCO AG, Kelheim, Germany) was used to observe the entire impact process at a frame rate of 2000 f/s. The impact process of EMWM is shown in Figure 3. Figure 3a shows the moment when the drop hammer comes into contact with the EMWM specimen. At this moment (t = 0 ms), the kinetic energy of the drop hammer reaches the maximum. Figure 3b (t = 4.5 ms) shows the deformation of the EMWM specimen in the process of drop hammer compression. Figure 3c (t = 9.0 ms) shows the moment when the deformation of the EMWM specimen reaches the maximum. Figure 3d (t = 13.5 ms) shows the deformation of the EMWM in the process of recovery. Figure 3e shows the moment when the drop hammer and the EMWM specimen begin to separate. Figure 3f shows the moment when the EMWM specimen returns to its original position. Figure 3g presents the force-displacement curves of the homologous impact process. It can be seen from the impact process that the EMWM compresses and then recovers rapidly to its original position under low-velocity impact loading. In general, the EMWM can withstand repeated impacts at finite deformation or loading. A similar deformation mode of EMWM under quasi-static compression was also observed by Rodney et al. [41].
During the 0-9.0 ms period, the EMWM is compressed by the impact load and is significantly deformed. During this process, part of the impact energy is dissipated by the EMWM in the form of dry friction and air damping, and the rest is stored in the EMWM in the form of elastic potential energy. In addition, for low-density EMWM, the plastic deformation of the wire helixes will also consume part of the impact energy. The EMWM is a kind of porous material, and its inner pores are filled with air. When the EMWM is compressed, its internal air will be squeezed out. According to the theory of fluid mechanics, air resistance is proportional to the square of the relative velocity between the internal air and wire helixes. This means that the more severe the deformation of the EMWM is, the faster the internal air is extruded, and the greater the air damping generated by the EMWM. During the 9.0-21.0 ms period, the impact energy, which is stored in the form of elastic potential energy, is released. During this process, part of the elastic potential energy is dissipated by the EMWM in the form of dry friction and air damping, and the rest is converted into the kinetic energy of the drop-hammer. Similar to the principle of air damping generated by extruded air, there is also a damping effect when external air enters the internal pores of the EMWM. During the 21.0-25.5 ms period, the EMWM continues to recover its shape. During this process, residual elastic potential energy is continuously absorbed by the EMWM. Figure 4 presents two force-displacement curves of an EMWM specimen under different initial impact velocities.
It can be seen from Figure 4a that the shape of the curve of the EMWM with a low impact velocity (2 m/s) is similar to that under quasi-static loading, and can be divided into three regions: linear region, plateau region and stiffened region [10]. As shown in Figure 4a, the slope of the force-displacement curve increases with the increase in deformation. However, the shape of the curve of the EMWM with a relatively high impact velocity (8 m/s) can be divided into four regions: linear region, plateau region, stiffened region and softening region. Stiffness softening occurs before maximum deformation is reached (blue dotted circle in Figure 4b). The reason for the difference in the shape of the curves is that the air damping is not obvious at a relatively low impact velocity. In the case of a relatively high-velocity impact loading, the impact energy is dissipated by the EMWM in the form of plastic deformation, dry friction and air damping, and the deformation velocity of the EMWM will become slower until it reaches maximum deformation. This means that air damping will gradually decrease until it is zero. For each batch of EMWM, the test results of the EMWM specimens with the same relative density are similar. Therefore, only the force-displacement curve of one specimen of each batch is presented. Figure 5 shows the force-displacement hysteresis loops of the EMWM with different relative densities under different initial impact velocities. It is noted that the stiffness softening phenomenon is more obvious with the decrease in the relative density of EMWM. It can also be seen from Figure 5 that the maximum deformation of EMWM mainly depends on impact velocity and relative density. As impact velocity increases, the maximum deformation significantly increases. Figure 6 presents the force-displacement curves of EMWM with different relative densities under 5 m/s impact. The maximum deformation of EMWM with different relative densities is different under the same impact energy. It is noted that with the increase in relative density, the maximum deformation of EMWM decreases gradually, and the impact force of EMWM increases gradually.
The reason for this is that the internal porosity of the EMWM with higher relative density is smaller, and the wire helixes are more likely to extrude each other, so the maximum deformation will be reduced. On the other hand, at the same initial impact velocity (5 m/s), a smaller amount of deformation of EMWM means that the drop hammer bears a greater deceleration. Therefore, according to Newton's second law, the maximum impact force of EMWM with small deformation is greater.
Force-Displacement Response
It is known from the forming process of EMWM that the EMWM is cold-formed under a specific external load. The forming process of EMWM is the plastic deformation process of the metal wire helix. To fabricate a denser EMWM, a greater forming force must be applied. When the critical load is exceeded, the wire helix of the EMWM may be partially plastically deformed. Therefore, the EMWM with lower relative density is more prone to plastic deformation, as demonstrated in Figure 7. It can be seen that the extent of plastic deformation of EMWM decreases with the increase in relative density. The macroscopic plastic deformation of EMWM manifests as follows: the height in the molding direction is reduced, while the material expands outward in the non-molding directions.
Figure 8 presents the height variation curves of EMWM with different relative densities under different impact loadings. The EMWM specimens with relative densities of 0.29 and 0.32 show no change in height at impact speeds from 2 to 8 m/s. This means that the two batches of EMWM specimens were not plastically deformed, and the impact energy is not dissipated through plastic deformation of the material, or only negligibly so. Meanwhile, the height of the EMWM with relative densities of 0.22 and 0.25 decreases with decreasing density at a relatively high impact velocity. The mean values and standard deviation of the maximum displacement and maximum force under different impact velocities are presented in Figure 9a,b. Figure 9c shows the mean values and standard deviation of the impact stiffness under different impact velocities. As shown in Figure 9c, the impact stiffness increases with the increase in impact velocity, and the impact stiffness is linearly related to the impact velocity. Meanwhile, the impact stiffness of the EMWM with the relative density of 0.22 at 8 m/s increases significantly. This is caused by the plastic deformation of the EMWM with a lower relative density. At 8 m/s, the increasing trend of the impact stiffness of EMWM with the relative density of 0.32 slows down obviously.
Energy Absorption Characteristics
Based on the measured force-displacement curves, the energy absorbed (E a ) by EMWM in the tests can be calculated according to Equation (4). The mean values and standard deviation of the absorbed energy and the corresponding specific energy absorption (SEA) of EMWM under different impact velocities are presented in Figure 10. It can be seen from Figure 10a that the impact energy absorbed by the EMWM (ρ r = 0.22, 0.25 and 0.29) is almost the same in the low-velocity impact tests (2, 3, 4, 5 and 6 m/s), while that absorbed by the EMWM (ρ r = 0.32) is the least. In the low-velocity impact tests (7 m/s), the energy absorption of EMWM (ρ r = 0.22) is significantly higher. However, in the low-velocity impact tests (8 m/s), the energy absorbed by the EMWM (ρ r = 0.22) is significantly reduced. The reason for this is that the EMWM (ρ r = 0.22) cannot maintain its original energy absorption ability after producing plastic deformation in the molding direction (6 and 7 m/s). Therefore, the EMWM may affect energy absorption after the plastic deformation. The red brace (E p ) in Figure 10a represents the dissipated energy by the plastic deformation of EMWM. It can also be seen from Figure 10b that the specific energy absorption significantly decreases with the increase in relative density. absorbed by the EMWM (ρr = 0.22, 0.25 and 0.29) is almost the same in the low-velocity impact tests (2, 3, 4, 5 and 6 m/s), while that absorbed by the EMWM (ρr = 0.32) is the least. In the low-velocity impact tests (7 m/s), the energy absorption of EMWM (ρr = 0.22) is significantly higher. However, in the low-velocity impact tests (8 m/s), the energy absorbed by the EMWM (ρr = 0.22) is significantly reduced. The reason for this is that the EMWM (ρr = 0.22) cannot maintain its original energy absorption ability after producing plastic deformation in the molding direction (6 and 7 m/s). Therefore, the EMWM may affect energy absorption after the plastic deformation. The red brace (Ep) in Figure 10a represents the dissipated energy by the plastic deformation of EMWM. It can also be seen from Figure 10b that the specific energy absorption significantly decreases with the increase in relative density. To investigate the effect of plastic deformation of EMWM on the mechanical properties and energy absorption, a new-batch EMWM specimen (ρr = 0.22, 0.25) was manufactured to conduct a low-velocity impact test (7 m/s and 8 m/s). The results show that the energy absorption of EMWM subjected to a single impact is almost equal to that of EMWM subjected to accumulated impacts in the previous tests, which have undergone plastic deformation. The plastic deformations under accumulated impacts are summarized in Figure 8. The plastic deformation of EMWM with high density is negligible, so the effect of plastic deformation on its energy absorption properties can be ignored, and vice versa. To investigate the effect of plastic deformation of EMWM on the mechanical properties and energy absorption, a new-batch EMWM specimen (ρ r = 0.22, 0.25) was manufactured to conduct a low-velocity impact test (7 m/s and 8 m/s). The results show that the energy absorption of EMWM subjected to a single impact is almost equal to that of EMWM subjected to accumulated impacts in the previous tests, which have undergone plastic deformation. The plastic deformations under accumulated impacts are summarized in Figure 8. 
The plastic deformation of EMWM with high density is negligible, so the effect of plastic deformation on its energy absorption properties can be ignored, and vice versa.
To analyze the influence of the plastic deformation of EMWM, the force-displacement curves under accumulated impact and single impact are shown in Figure 11. For EMWM with relatively low density, plastic deformation will occur after the accumulated impact, resulting in an increase in its stiffness. The energy absorption rates under different impact velocities are summarized in Tables 3-6 and Figure 12. It can be observed that the energy absorption rates of EMWM with different relative densities are more than 0.5. The results show that the EMWM is a kind of material with a high energy absorption rate. Especially when the relative density of EMWM is greater than a certain value, it will not undergo plastic deformation and can withstand repeated impacts.
As the impact velocity increases, the energy absorption rate first decreases and then increases at a critical point, which is shown in Figure 12. The critical point is 4 m/s when the relative density of EMWM is 0.22, 0.25 and 0.29. The critical point is 5 m/s when the relative density is 0.32. This phenomenon is caused by insufficient friction and air damping and has been explained in the author's previous research [9].
It can be observed that the energy absorption rate of EMWM decreases with the increase in the relative density. There are two reasons to explain this phenomenon. First, as the relative density increases, the internal porosity of the EMWM becomes smaller, and the wire helices are more likely to extrude each other, which will result in a reduction in the amount of energy dissipated by friction. On the other hand, the increase in relative density will lead to a decrease in air content in EMWM and then the weakening of the air damping.
Modified Shape Function
Sherwood and Frost expressed the shape function f (ε) of polyurethane foam by power series. In their test, the maximum strain of polyurethane foam varies little under different strain rates, so it is appropriate to use power series to express the shape of the stress-strain curves. However, it is not appropriate to directly use power series as the shape function of EMWM under low-velocity impact. As shown in Figure 5, the maximum strain of EMWM varies greatly under different impact velocities.
Zheng et al. [42,43] proposed a dynamic material model with the dynamic plastic hardening function (D-R-PH), which was expressed as

σ l = σ 0 d + Dε/(1 − ε)², (12)

where σ l is the effective principal stress, D is a fitting parameter and σ 0 d is the dynamic initial crush stress.
The advantage of the D-R-PH model is that the parameters are simple and have a high degree of consistency between the calculated result and experimental data. However, the D-R-PH model is rate-independent under dynamic compression process.
The shape function f (ε) = ε/(1 − ε)² is a good description of the stress-strain trend for foam materials. After comparing with the test data in terms of the dispersion degree and considering the particularity of EMWM, we find that a similar function f (ε) is suitable for describing the stress-strain curves of EMWM under low-velocity impact. The modified shape function f (ε) can be defined as

f (ε) = D 0 ε/(1 − ε)², (13)

where D 0 is an empirical fitting parameter. The value of D 0 obtained by data fitting using the data of the reference strain rate and reference relative density (ε̇ 0 = 33.33 s −1 , ρ r0 = 0.22) is 13.95 MPa. The constitutive model for EMWM under low-velocity impact can be initially obtained by the combination of Equations (1) and (13). It can be expressed as

σ = H(T) R(ρ r ) M(ε, ε̇) f (ε), (14)

where H(T) is the temperature softening term (all tests were carried out at room temperature, so H(T) = 1); R(ρ r ) is the relative density term; ρ r is the relative density; M(ε, ε̇) is the strain rate enhancement term; and f (ε) is the modified shape function.
Effect of the Relative Density
As mentioned above, the relative density has a significant influence on its mechanical behavior under low-velocity impact. The power function or linear function is often used to express the relationship between relative density and stress [32,37,38]. Compared with the experimental data, the relation between the relative density of EMWM and stress is approximately exponential, and then the relative density term R(ρ r ) can be expressed as where ρ r0 is the reference relative density, ρ r0 = 0.22. A is a fitting parameter. In this research, A = 2.46.
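Since the fitted equation itself is not reproduced in this text, the sketch below assumes an exponential form exp[A(ρ r − ρ r0 )] purely for illustration and shows how such a density term could be fitted with SciPy; the stress-ratio data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO_R0 = 0.22  # reference relative density from the text

def density_term(rho_r, A):
    """Assumed exponential form of the relative-density term R(rho_r)."""
    return np.exp(A * (rho_r - RHO_R0))

# Illustrative data: peak-stress ratios (relative to the reference density)
# at the four relative densities tested; the values are synthetic.
rho = np.array([0.22, 0.25, 0.29, 0.32])
stress_ratio = np.array([1.00, 1.08, 1.19, 1.28])

A_fit, cov = curve_fit(density_term, rho, stress_ratio, p0=[2.0])
print(f"fitted A = {A_fit[0]:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```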
Effect of the Strain Rate
The stress-strain curves of EMWM under different impact velocities overlap highly. After comparison and analysis with the experimental data, the relation between strain rate and stress is approximately linear, so the strain rate enhancement term M(ε, ε̇) can be expressed as a linear function of the strain rate with fitting parameters B and C, which were then determined by fitting to the experimental data.
Constitutive Model Verification
The constitutive model for EMWM under low-velocity impact can be obtained by the combination of Equations (16)-(18). Therefore, the complete constitutive model for EMWM can be assembled, with ρ r0 = 0.22 as the reference relative density. The values of the other parameters are presented in Table 7. A modified constitutive model for EMWM under low-velocity impact is thus established based on the Sherwood-Frost model. It contains the modified shape function f (ε), the relative density term R(ρ r ), and the strain rate enhancement term M(ε, ε̇). To evaluate the constitutive model, the calculated results are compared with the experimental results. The comparison results are shown in Figure 13. It can be seen from Figure 13 that the calculated stress-strain values of the EMWM with different relative densities match well with the measured data. Although the deviation between the calculated and measured data is larger for the EMWM with the relative density of 0.32, the model can still predict its trend. The comparison results show that the established model has high parameter identification accuracy and can describe well the mechanical properties of the EMWM under low-velocity impact.
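A forward evaluation of a Sherwood-Frost-type model assembled from the terms described above is sketched below. Only D 0 , ρ r0 , ε̇ 0 and A are quoted from the text; the functional forms of the shape, density and strain-rate terms and the values of B and C are placeholders, since the corresponding fitted equations are not reproduced here.

```python
import numpy as np

D0 = 13.95e6      # shape-function coefficient from the text (13.95 MPa, in Pa)
RHO_R0 = 0.22     # reference relative density from the text
EPS_DOT0 = 33.33  # reference strain rate [1/s] from the text

def shape_function(eps):
    """Assumed modified shape function D0 * eps / (1 - eps)^2."""
    return D0 * eps / (1.0 - eps) ** 2

def density_term(rho_r, A=2.46):
    """Assumed exponential relative-density term (A quoted from the text)."""
    return np.exp(A * (rho_r - RHO_R0))

def rate_term(eps_dot, B=0.9, C=0.1):
    """Assumed linear strain-rate enhancement term; B and C are placeholders."""
    return B + C * eps_dot / EPS_DOT0

def stress(eps, eps_dot, rho_r):
    """Sherwood-Frost-type model sigma = H(T) R(rho_r) M(eps_dot) f(eps), H(T) = 1."""
    return density_term(rho_r) * rate_term(eps_dot) * shape_function(eps)

eps = np.linspace(0.0, 0.5, 6)
print(stress(eps, eps_dot=166.0, rho_r=0.29) / 1e6)  # predicted stress in MPa
```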
Conclusions
In this paper, the low-velocity impact behaviors of EMWM with different relative densities were investigated through a series of drop-hammer impact tests. The effect of impact velocity and relative density of EMWM on mechanical properties and energy absorption of EMWM were studied. Moreover, a semi-empirical model was established to predict the dynamic compressive mechanical properties of EMWM under low-velocity impact loading. The main conclusions from this work are as follows: (1) The impact energy absorption capacity of EMWM is strong, and the energy absorption rate is between 50% and 85%. The energy absorption capacity of EMWM decreases with the increase in density. (2) The EMWM with the high relative density has excellent characteristics of repetitive energy absorption. Low-density EMWM will undergo plastic deformation under impact load. (3) With the increase in relative density, the maximum deformation of EMWM decreases gradually, and the impact force of EMWM increases gradually. With the increase in impact-velocity, the phenomenon of stiffness softening before reaching the maximum deformation of EMWM becomes more and more obvious. Although the test results can qualitatively show that air damping and plastic deformation influence the energy dissipation characteristics of EMWM, the test method of this paper cannot provide quantitative analysis, which is also the focus of our work in the future.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-03-25T13:07:44.677Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "985d9de69daa8b7e79ab1f7a083494140bb45e51",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/6/1396/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00e5068d156f606c585e51198adc928e9f4e0257",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
204542756 | pes2o/s2orc | v3-fos-license | Rich Magnetic Quantization Phenomena in AA Bilayer Silicene
The rich magneto-electronic properties of AA-bottom-top (bt) bilayer silicene are investigated using a generalized tight-binding model. The electronic structure exhibits two pairs of oscillatory energy bands for which the lowest conduction and highest valence states of the low-lying pair are shifted away from the K point. The quantized Landau levels (LLs) are classified into various separated groups by the localization behaviors of their spatial distributions. The LLs in the vicinity of the Fermi energy do not present simple wave function modes. This behavior is quite different from other two-dimensional systems. The geometry symmetry, intralayer and interlayer atomic interactions, and the effect of a perpendicular magnetic field are responsible for the peculiar LL energy spectra in AA-bt bilayer silicene. This work provides a better understanding of the diverse magnetic quantization phenomena in 2D condensed-matter materials.
Si-X and X-X bonds by VASP calculations 15,17 . Such phenomena might induce metallic or semiconducting behavior in materials. However, for bilayer silicene, phenomenological models might not be suitable for solving the magnetic-field-dominated fundamental properties due to significant buckling, the largely enhanced spin-orbital interactions and the complex interlayer hopping integrals. Specifically, optimization of the reliable tight-binding parameters which are necessary for reproducing the two low-lying pairs of valence and conduction bands is likely impossible because of the complex features in the first-principles results for the energy dispersion relations [8][9][10][11][12][13][14] . Bilayer silicene with AA and AB stackings is predicted to present non-monotonic energy dispersion and irregular valleys at non-high-symmetry points. Additionally, the free carrier densities are expected to be quite sensitive to buckling and stacking configurations.
In this paper, we explore the diverse quantization phenomena in AA-bt silicene by employing a generalized tight-binding model. Calculations and analysis will target the band properties both below and above the Fermi level, energy dispersion relations, critical points in energy-wave-vector space, distinct valleys, special structures of van Hove singularities in the density-of-states (DOS), significant magnetic-field dependencies, classification of valence and conduction LL groups and their main features. Specifically, the valley-enriched magnetic quantization will be investigated by detailed examination of the spatial oscillation modes of the magnetic wave functions. The interesting combined effects of distinct LL groups are clarified from different stable or metastable valleys. Furthermore, we also discuss the important differences in the essential physical properties between AA-bt bilayer silicene, monolayer silicene and graphene from the electronic valley structure point of view.
Method
We have developed a generalized tight-binding model for AA-bt bilayer silicene in the presence of a uniform perpendicular magnetic field and utilized it to explore the magnetoelectronic properties. The crystal structure of AA-bt bilayer silicene with hopping interaction terms is clearly illustrated in Fig. 1(a). The two layers possess opposite buckling order, in which the [A 1 , A 2 ] sublattices lie in the inner planes while the [B 1 , B 2 ] sublattices are located at the outer planes. The lattice constant and bond length are a = 3.83 Å and b = 2.21 Å, respectively. A primitive unit cell contains four silicon atoms. Accordingly, the critical Hamiltonian is built from the four tight-binding functions of Si-3p z orbitals. It can be written as

H = Σ i,l U i l c i l† c i l + Σ i,j,l,l′ γ ij ll′ c i l† c j l′ + h.c.

In this notation, c i l† (c i l ) is the creation (annihilation) operator which could generate (destroy) an electronic state at the i-th site of the l-th layer. U i l (A l , B l ) is the buckled-sublattice-height-dependent Coulomb potential energy due to the applied gate voltage. γ ij ll′ represents the intra- and inter-layer atomic interactions, in which the former comes from the nearest-neighbor interactions in the [A l , B l ] sublattices (γ 0 = 1.004 eV) and the latter stand for interactions between sublattices from different layers (γ 1 = −2.110 eV, γ 2 = −1.041 eV, and γ 3 = 0.035 eV), as shown in Fig. 1(a). These parameters are optimized using the Slater-Koster tight-binding method in order to reproduce the energy bands from the first-principles results 9,11 . Interestingly, our optimization shows that γ 1 is much larger than γ 0 , while γ 2 is comparable to γ 0 . In addition to the high symmetry of the stacking configuration, this result might be responsible for the negligible spin-orbital couplings. It seems that in the absence of spin-orbital couplings, the above Hamiltonian is sufficient for the investigation of certain essential physical properties.
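A schematic zero-field version of such a tight-binding calculation is sketched below using the quoted hopping energies; the in-plane geometry ignores buckling, and the assignment of γ 1 -γ 3 to particular sublattice pairs (and the k-independence of γ 3 ) are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

# Hopping parameters (eV) and lattice constant (angstrom) quoted in the text.
G0, G1, G2, G3 = 1.004, -2.110, -1.041, 0.035
A_LAT = 3.83

# Nearest-neighbour in-plane vectors of a honeycomb lattice (buckling ignored).
DELTAS = A_LAT / np.sqrt(3.0) * np.array(
    [[0.0, 1.0], [np.sqrt(3) / 2, -0.5], [-np.sqrt(3) / 2, -0.5]])

def intralayer_factor(k):
    """Structure factor f(k) = sum_delta exp(i k . delta) of the pi bonding."""
    return np.sum(np.exp(1j * DELTAS @ k))

def hamiltonian(k):
    """4x4 Bloch Hamiltonian in the (A1, B1, A2, B2) basis; the interlayer
    assignments below are illustrative assumptions, not the paper's model."""
    f = intralayer_factor(np.asarray(k, dtype=float))
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1] = G0 * f          # A1-B1 intralayer
    H[2, 3] = G0 * f          # A2-B2 intralayer
    H[0, 2] = G1              # A1-A2 interlayer (inner sublattices)
    H[1, 3] = G2              # B1-B2 interlayer (outer sublattices)
    H[0, 3] = H[1, 2] = G3    # A-B interlayer
    return H + H.conj().T

# Band energies at the K point of the hexagonal Brillouin zone.
K_POINT = np.array([4 * np.pi / (3 * A_LAT), 0.0])
print(np.sort(np.linalg.eigvalsh(hamiltonian(K_POINT))))
```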
The application of a uniform perpendicular magnetic field evidently changes the main characteristics of the lattice. The original unit cell is considerably enlarged to become a long rectangle due to the field-induced extra Peierls phases 17 . The extended unit cell includes 16φ 0 /φ Si atoms, where φ 0 = hc/e is the magnetic flux quantum and φ = (√3/2) a 2 B z is the magnetic flux through a unit cell. Consequently, the magnetic Hamiltonian is a huge Hermitian matrix, e.g., the size is ~13000 × 13000 for a (k x = 0, k y = 0) state at B z = 20 T. For AA-bt bilayer silicene, the model calculation takes into account the buckled honeycomb structure and the complicated intra- and inter-layer atomic interactions. The combined effect of these ingredients and an external magnetic field is expected to generate diverse physical phenomena, especially in the magneto-electronic properties.
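As a rough consistency check on the quoted matrix size, the short calculation below evaluates φ 0 /φ for the stated unit-cell area and field strength. The prefactor that converts this flux ratio into a Hamiltonian dimension depends on how the magnetic supercell is defined; the factor of 8 used here is simply the value that reproduces the quoted ~13000 × 13000 size and is an assumption, not a statement of the authors' bookkeeping.

```python
import numpy as np

h = 6.62607015e-34        # Planck constant (J s)
e = 1.602176634e-19       # elementary charge (C)
phi0 = h / e              # magnetic flux quantum h/e (Wb)

a = 3.83e-10              # lattice constant (m)
Bz = 20.0                 # magnetic field strength (T)

# Flux through one zero-field unit cell: phi = (sqrt(3)/2) a^2 Bz
phi = np.sqrt(3.0) / 2.0 * a**2 * Bz
ratio = phi0 / phi
print(f"phi0/phi at {Bz:.0f} T is about {ratio:.0f}")      # ~1600

# The Hamiltonian dimension scales as (constant) * phi0/phi; the factor 8 below
# is an assumption chosen so that the quoted ~13000 x 13000 size at 20 T is recovered.
print(f"estimated matrix dimension: {8 * ratio:.0f}")       # ~13000
```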
Results and Discussion
Zero-field electronic properties. AA-bt bilayer silicene presents a unique and feature-rich energy dispersion, as demonstrated in Fig. 1(b,c) for 3D and 2D views, respectively. There are two pairs of valence and conduction bands due to the Si-3p z -orbital π bondings, and they are slightly asymmetric about the Fermi level of E F = 0. The system shows semimetallic behavior with zero gap and a finite density of states (DOS) at E F , as clearly illustrated in Fig. 2(a). The outer pair of conduction and valence energy bands originates at higher and deeper energy ranges (|E^{c,v}| ≥ 2 eV). On the other hand, the pair of energy bands near E F is expected to dominate the low-energy essential properties of the system. Interestingly, stable and non-stable electronic valleys are formed from the electronic states near the high-symmetry points of the hexagonal first Brillouin zone. In particular, there exist the M, K and Γ valleys with different conduction and valence band-edge state energies of about (0.5 eV, −0.51 eV), (1.19 eV, −1.21 eV), and (1.43 eV, −1.50 eV), respectively. These special valleys are expected to be closely related to the magnetic quantization of the initial Landau levels, as discussed later.
The M valley corresponds to a saddle point while the K and Γ valleys correspond to local extreme points; therefore they lead to different van Hove singularities (Fig. 2(a)) and diverse LL energy spectra. Furthermore, the Dirac-cone energy dispersion is absent because the low-lying electronic states closest to the Fermi level are not formed at the K/K′ valleys. Instead of the upward conduction and downward valence Dirac cones initiated from the K/K′ points as in monolayer graphene 20 , there exist concave-downward conduction and concave-upward valence valleys in AA-bt bilayer silicene. Such a significant difference between the two systems might come from the more complicated and stronger atomic interactions of the latter. It should be noticed that the lowest conduction and highest valence electronic states in AA-bt stacking are located midway between the M and Γ points, and they are separated by a very small energy spacing of ~8 meV. As a result, the density of states near E F presents a special structure with a finite value. Moreover, the low-lying Landau levels do not correspond to the initial magneto-electronic states. In general, the electronic properties of AA-bt bilayer silicene are in great contrast with those of monolayer silicene 4 and an AA-stacked bilayer graphene 21 . The van Hove singularities in the DOS are closely associated with three types of band-edge states, as illustrated in Fig. 2(a). The DOS spectrum exhibits feature-rich structures, including asymmetric peaks in the square-root divergent form, shoulder structures, and logarithmically divergent peaks. There exists a pair of temple-like cusp structures crossing the Fermi level, and they are separated by quite a small energy spacing of ~8 meV. These structures come from the extraordinary conduction and valence band structures near E F , which could be considered as one-dimensional parabolic dispersion relations. Away from the Fermi level, the two pairs of prominent symmetric peaks arising from the 2D saddle point (M point) are located at (0.50 eV & −0.52 eV) and (2.60 eV & −2.65 eV), respectively. Regarding the extreme points, there appear special shoulder structures at (1.10 eV, −1.15 eV) and (2.16 eV, −2.20 eV) for the K valley and (1.43 eV, −1.5 eV) for the Γ one. The special DOS spectrum in AA-bt bilayer silicene reflects the unique energy dispersion, and it could be examined by high-resolution STS experiments [22][23][24][25] . This method of measurement is useful for investigating the interplay between the buckled structure and atomic interactions in bilayer silicene. For AA-bt stacking, the DOS is relatively high near the M point compared with other high-symmetry points, leading to a very complex LL energy spectrum, dissimilar to those of monolayer graphene and silicene 19,26 . Roughly speaking, the M valley is regarded as the unstable one for magnetic quantization, from which LLs could not be initiated.
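The qualitative shape of such a DOS spectrum can be estimated from any Bloch Hamiltonian by histogramming eigenvalues over a dense k-grid. The sketch below does this with the illustrative bloch_hamiltonian helper defined earlier (so its assumed interlayer assignments carry over); peaks in the histogram correspond to saddle-point van Hove singularities and shoulders to band edges.

```python
import numpy as np

# Assumes bloch_hamiltonian(k) from the earlier sketch is already defined.
a = 3.83
b1 = (2.0 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])   # reciprocal lattice
b2 = (2.0 * np.pi / a) * np.array([0.0,  2.0 / np.sqrt(3.0)])   # vectors (assumed)

nk = 120                        # k-points along each reciprocal direction
eigs = []
for i in range(nk):
    for j in range(nk):
        k = (i / nk) * b1 + (j / nk) * b2
        eigs.extend(np.linalg.eigvalsh(bloch_hamiltonian(k)))

# A histogram of eigenvalues approximates the DOS: saddle points show up as
# peaks and band edges as shoulders, mirroring the structures in Fig. 2(a).
counts, edges = np.histogram(eigs, bins=300, range=(-3.0, 3.0))
centers = 0.5 * (edges[:-1] + edges[1:])
for E, g in zip(centers[::30], counts[::30]):
    print(f"E = {E:+.2f} eV   DOS ~ {g}")
```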
The Bloch wave functions, consisting of the Si-3p z -orbital tight-binding functions on the four sublattices, strongly depend on the wave vectors, as clearly shown in Fig. 2(b). Those on the A and B sublattices are represented by orange and green curves, respectively. For comparison, the result for monolayer silicene is also shown as solid and dashed blue curves. The low-lying conduction band (and valence band; not shown) exhibits equal distribution probability on the B 1 & B 2 [A 1 & A 2 ] sublattices due to the same (x, y)-plane projections with similar chemical environment. Specifically, electronic states near the K point exhibit vanishing A 1 and A 2 components and identical B 1 and B 2 ones. Along the K → M → Γ directions, the B l - and A l -sublattice probabilities vary as 0.5 → 0.375 → 0.265 and 0 → 0.125 → 0.2355, respectively. Such behavior is quite different from that in monolayer silicene, where the distribution of all sublattices becomes identical along M → Γ. Generally, the distribution probability of the B l sublattices evidently dominates the low-lying energy spectrum. The opposite is true for the higher conduction and deeper valence bands. That is, the equivalence of the intralayer [A l , B l ] sublattices is completely broken by the strong interlayer hopping integrals (γ 1 and γ 2 ). On the other hand, monolayer silicene with significant spin-orbital coupling presents similar distribution probabilities of the sublattices of the same spin state during variation of the wave vector, owing to the honeycomb lattice symmetry.
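For reference, the sublattice weights plotted in Fig. 2(b) correspond to the squared components of the Bloch eigenvectors. A minimal sketch of that bookkeeping, again assuming the illustrative bloch_hamiltonian helper and its basis ordering (A 1 , B 1 , A 2 , B 2 ) from above, is:

```python
import numpy as np

# Assumes bloch_hamiltonian(k) and the lattice constant a from the earlier sketch.
a = 3.83
K     = np.array([4.0 * np.pi / (3.0 * a), 0.0])
M     = np.array([np.pi / a, np.pi / (np.sqrt(3.0) * a)])
Gamma = np.array([0.0, 0.0])

def sublattice_weights(k, band):
    """Squared eigenvector components (|A1|^2, |B1|^2, |A2|^2, |B2|^2) of one band."""
    w, v = np.linalg.eigh(bloch_hamiltonian(k))
    return np.abs(v[:, band]) ** 2          # eigh returns orthonormal column eigenvectors

# Band index 2 (of 0..3, sorted by energy) is taken here as the low-lying conduction band.
for name, k in (("K", K), ("M", M), ("Gamma", Gamma)):
    print(name, np.round(sublattice_weights(k, band=2), 3))
```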
Diverse magnetic quantization phenomena. AA-bt stacking possesses distinctive and diverse magnetic quantization phenomena, mainly owing to the feature-rich valley structures of the low-lying pair of conduction and valence bands. The LL energy spectra exhibit a number of interesting characteristics. These include LL state degeneracy, localization behavior of wave functions, sublattice dependence of localized oscillation modes, well-behaved and not well-behaved LLs, complex magnetic field dependence, and LL crossing phenomena. It is worth mentioning that the localization centers of LLs change continuously with the variation of (k x , k y ). Similar phenomena are observed for different electronic states. Our numerical calculations in this work mainly focus on the magnetic quantization at the (k x = 0, k y = 0) state, which is sufficient for understanding the essential magneto-electronic properties. The LLs can be classified into different groups based on the electronic valleys and the main features of their spatial distributions.
Regarding the LLs initiated from the Γ-point top and bottom valleys, the conduction and valence energy spectra explicitly present asymmetric behavior. The conduction and valence Landau levels are, respectively, located at 1.43 eV and −1.53 eV for B z = 40 T, as shown in Fig. 3(a,b). The LLs exhibit a nearly uniform energy spectrum, as a result of the isotropically parabolic energy dispersion near the Γ point, similar to that of a 2D electron gas 27,28 . The well-behaved magnetic subenvelope functions on the four sublattices are localized at 0 and 1/2 of the B z -enlarged unit cell, and they are degenerate. The oscillation modes of the LLs on all four subenvelope functions are equivalent. Additionally, the LL wave functions on the (A 1 & A 2 ) as well as (B 1 & B 2 ) sublattices are observed to be identical. These special characteristics of the LLs may be related to the equivalence of the intralayer [A l , B l ] sublattices and the same chemical environment for the interlayer [A 1 , A 2 ] & [B 1 , B 2 ] sublattices. The quantum number of each LL, n_1^{c,v}, is determined by the number of zero modes of its wave function on the dominant B l sublattices. The fact that the B l sublattices clearly dominate the energy spectra at the Γ valley is consistent with the zero-field wave-vector-dependent wave functions shown in Fig. 2(b). In general, each LL is four-fold degenerate through the spin and localization degrees of freedom. This is dissimilar to the eight-fold degeneracy of the LLs in monolayer graphene and silicene 19,26 .
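In practice, "counting zero modes" means counting the sign changes of the dominant subenvelope function across the magnetic unit cell. A small helper of that kind is sketched below; the layout of the eigenvector (one amplitude per site of one sublattice, ordered along the enlarged cell) is assumed rather than taken from the authors' implementation.

```python
import numpy as np

def landau_quantum_number(subenvelope, threshold=1e-6):
    """Count the nodes (sign changes) of a Landau-level subenvelope function.

    `subenvelope` holds the tight-binding amplitudes of one sublattice (e.g. B1)
    ordered along the enlarged magnetic unit cell.  Amplitudes far below the
    maximum are discarded so numerical noise in the tails does not create
    spurious sign changes.
    """
    amp = np.real(np.asarray(subenvelope, dtype=complex))
    amp = amp[np.abs(amp) > threshold * np.max(np.abs(amp))]
    signs = np.sign(amp)
    return int(np.sum(signs[:-1] * signs[1:] < 0))

# Quick check with a synthetic n = 2 harmonic-oscillator-like envelope (two nodes)
x = np.linspace(-4.0, 4.0, 401)
psi2 = (4.0 * x**2 - 2.0) * np.exp(-x**2 / 2.0)
print(landau_quantum_number(psi2))   # prints 2
```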
The conduction and valence Landau levels magnetically quantized from electronic states near the K/K′ points display diverse characteristics. There are LLs corresponding to both the low-lying and outer pairs of energy bands. The former are of central interest, as illustrated in Fig. 4(a,b). Although the LL energy spectrum presents roughly uniform spacing, similar to that near the Γ valley, the LLs show quite different behaviors. The magnetic subenvelope functions appear at four distinct localization centers (1/6, 2/6, 4/6, and 5/6) of the extended unit cell. Especially, the LL spatial distributions are identical for the 1/6 and 4/6 as well as the 2/6 and 5/6 areas, leading to eight-fold degeneracy of the LLs, as observed in monolayer graphene 24 . The first few LLs are well-behaved and their quantum numbers (n_2^{c,v}) can be determined based on the wave functions of the evidently dominant B l sublattices. The spatial distributions on the A l and B l sublattices exhibit a one-mode difference, in which the mode number on B l is one higher (for the 1/6 center) or one lower (for the 2/6 center) than the corresponding one on A l , depending on the localization center. Apparently, the initial n_2^{c,v} = 0 LLs at the 2/6 center are exceptionally four-fold degenerate because such magneto-electronic states only originate from the B l sublattices.
Away from the initial levels, LLs quantized from the Γ and K valleys are slightly distorted because of the strong interlayer atomic interactions (γ 1 and γ 2 ), leading to difficulty in defining their quantum numbers. Regarding the n_1^{c,v} LLs, although their oscillation distributions become much wider, covering ~35% of a unit cell, they still retain the two degenerate localization centers of 0 and 1/2. This further illustrates the monotonic variation of the stable Γ valley along the various directions. For the n_2^{c,v} LLs, the splitting of LL states at the different localization centers (1/6, 2/6, 4/6, and 5/6) is no longer observable. Their quantum numbers are expected to be much smaller than those of the n_1^{c,v} group due to the higher DOS in the K-related valleys (Fig. 2(a)). Additionally, the magnetic quantization demonstrates that the M-point saddle structure belongs to the K/K′ valley.
In the vicinity of the Fermi level, there exist only conduction and valence LLs which are quantized from the Γ valley, as clearly shown in Fig. 5. Here, the magneto-electronic states are quite dense and complicated. Therefore, it is quite difficult to characterize the LLs. The LL spatial distributions present thousands of oscillation modes which are localized at the 0 and 1/2 centers. Such unique magnetic quantization phenomena have not been observed in other condensed-matter systems according to prior theoretical and experimental studies 17,19,26 . Apparently, high-resolution STS measurements 23-26 may not be able to directly identify the low-energy LLs in AA-bt bilayer silicene. Nevertheless, the array of low-energy magneto-electronic states in AA-bt bilayer silicene should be associated with the other essential physical properties, such as the delta-function-like van Hove singularities, magneto-optical absorption spectra with specific selection rules, quantum Hall transport, inter-Landau-level damping, and magnetoplasmon modes. These are worthy of systematic investigation despite the formidable challenges they may pose for the numerical techniques and detailed data analysis.
The magnetic-field-dependent energy spectra are critical for a detailed comprehension of the magnetic quantization phenomena. Although the conduction and valence LLs possess asymmetric spectra, their main features are similar. Therefore, the following discussion will be confined to the conduction bands. The quantized LLs of the outer pair of energy bands show linear B z -dependence, as demonstrated in Fig. 6(a) for the conduction LLs.
We now turn our attention to the n_1^c LLs originating from the Γ point. The initial LLs are linearly dependent on B z without any intragroup crossing (Fig. 6(b)), similar to those of the outer energy bands but with smaller LL energy spacing. With decreasing energy, the B z -dependent spectrum consists of both the n_1^c (green curves) and n_2^c (red curves) LLs, as shown in Fig. 7(a,b). Interestingly, the two groups of LLs present crossing phenomena without any hybridization of the magneto-electronic states. This is because the two stable valleys, K and Γ, are independent of each other. The evolution of the n_2^c LLs with increasing magnetic field strength has a simple linear behavior for sufficiently high energy, E c ≥ 0.5 eV, as clearly demonstrated in Fig. 7(a). In contrast, LLs at E c ~ 0.45 eV form a shoulder-like structure, corresponding to the energy dispersion around the M point. The main reason is that electronic states at very high DOS cannot be quantized into well-behaved LLs. It is worth mentioning that the magnetic-field-dependent energy spectra enable the prediction of LL characteristics at different field strengths once those at a specific B z are known. Since there is no interaction (anti-crossing) between the levels as B z varies, the LL wave functions remain unchanged.
The magneto-electronic properties of an AA-bt bilayer system are in great contrast with those of monolayer silicene and graphene 19,26 . Whereas the former shows initial LLs at higher and lower energy ranges, those of the latter begin near E F = 0. Apparently, the main differences lie in the characteristics of the LLs around the Fermi level, which are very complicated for AA-bt stacking. Monolayer systems possess conduction and valence Dirac cones with the Dirac point or an extremely narrow gap at the K/K′ point. Their low-lying LLs present well-behaved spatial distributions, and therefore their quantum numbers are easily determined based on the number of zero modes. Moreover, the B z dependence of their LL energies exhibits a linear form in the absence of crossing phenomena, unlike the frequently crossing LLs in AA-bt stacking. These distinctions between AA-bt bilayer silicene and the monolayer systems mainly come from its unique intrinsic properties, such as the buckling structure with opposite ordering and the significant interlayer atomic interactions.
Conclusion
We have presented a comprehensive investigation of the unusual and diverse magnetic quantization phenomena in AA-bt bilayer silicene using a generalized tight-binding model. This material possesses a special electronic structure with two pairs of energy bands, in which the low-lying pair shows an interesting oscillatory shape. Remarkably, the lowest conduction and highest valence states near the Fermi level are located away from the K point, dissimilar to the case of graphene and other 2D materials. The intricate energy dispersion is closely related to the feature-rich magnetically quantized LLs. The LLs originating from the K and Γ valleys are quite different in their main characteristics, covering the LL degeneracy, spatial distributions, the dominant sublattices, and the magnetic field dependence. Specifically, in the vicinity of the Fermi energy, there are many magneto-electronic states which do not present simple wave functions. The geometric symmetry, intralayer and interlayer atomic interactions, and the effect of a perpendicular magnetic field are responsible for the peculiar LL energy spectra in AA-bt bilayer silicene.
"year": 2019,
"sha1": "00c5f9ec9ee5193e9b984263a2147b8f3a412a0b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-50704-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00c5f9ec9ee5193e9b984263a2147b8f3a412a0b",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Adolescents' Negative Experiences in Organized Youth Activities
Research indicates that organized youth activities are most often a context of positive development. However, there is a smaller body of evidence suggesting that these activities are sometimes a context of negative experiences that may impede learning or lead to dropping out. To better understand negative experiences in youth activities, we conducted ten focus groups with adolescents. Youths' descriptions provide an overview of the range of types of negative experiences they encountered, as well as how they responded to them. The most frequent types of negative experiences involved peers and peer group dynamics and aversive behavior attributed to the adult leaders of the activities. The youth described two types of responses to their negative experiences: a passive response of feeling negative emotions, and active coping, which sometimes led to learning.

Research is increasingly showing that organized youth activities, such as extracurricular activities and community-based youth programs, are a context of positive development for adolescents (Eccles & Templeton, 2002; Mahoney, Larson & Eccles, 2005). Yet there is also evidence – of a less complete nature – that these activities are sometimes a context of negative experiences. Studies suggest that participation in sports can lead to increased alcohol use (Eccles & Barber, 1999) and that participation in both music and sports can create adverse levels of stress (Scanlan, Babkes, & Scanlan, 2005; Smoll & Smith, 1996). Research on Swedish youth centers suggests that peer interactions in these contexts can reinforce negative norms and behavior patterns (Mahoney, Stattin, & Magnusson, 2001; Stattin, Kerr, Mahoney, Persson, & Magnusson, 2005). And there is evidence that some adults in organized programs act in ways that promote inappropriate behavior or have a negative influence on young people's sense of self and faith in others (Eder & Parker, 1987; Grossman & Rhodes, 2002). The present investigation was designed to identify and begin to categorize the range of adolescents' negative experiences in youth activities.

Our framework for this investigation conceptualizes negative experiences as experiences that disrupt the processes of positive development within organized youth activities. Most scholars working in this area agree that engagement is the critical vehicle of positive development. This includes psychological engagement in which youth are challenged, motivated, and devote deep attention to being successful in the activity (Csikszentmihalyi, Rathunde, & Whalen, 1993; Larson, 2000), and through which they become active producers of their own development (Lerner, 2004; Lerner & Busch-Rossnagel, 1981; Silbereisen, Eyferth, & Rudinger, 1986). This also includes engagement in relationships. Developmental systems theory posits that positive development occurs in and through a young person's participation in meaningful relationships (Lerner, 2002, 2004). In youth programs this can include engagement with supportive and caring adult leaders (Halpern, 2005; Rhodes, 2004), engagement in positive collaborations with peers (Larson, Hansen, & Walker, 2005; Mahoney, Cairns, & Farmer, 2003), and engagement with community adults who provide various forms of support and social capital (Jarrett, Sullivan, & Watkins, 2005). Negative experiences are important to understand because they can interfere with these different forms of positive engagement.
A youth who is upset, distressed, or angered by an event in a program is less likely to be psychologically engaged and to devote attention to learning. Emotion researchers recognize that one of the functions of negative emotion is typically to shift attention from long-term goals such as development, toward immediate concerns of safety and well-being (Clore, 1994). In situations when a negative experience in a program occurs at the same time as a young person is experiencing other negative events, it can contribute to a “pile up” of stress; and we know from a large body of research that youth who experience multiple simultaneous stressors are more likely to become depressed, use substances, or manifest other problems (Chassin, Husson, Barrera, Molina, Tim, & Ritter, 2004; Garber, 2004), all of which sidetrack developmental processes. Similarly, negative experiences may disrupt engagement in important developmental relationships within an organized activity. A recent National Academy of Sciences report identified eight features that make youth programs contexts of positive development: physical and psychological safety, appropriate structure, supportive relationships, opportunities for belonging, positive social norms, support for efficacy and mattering, opportunity for skill building, and integration of family, school, and community efforts. All eight features are factors that are influenced by adult leaders (Eccles & Gootman, 2002). Negative experiences with an adult leader are likely to interfere with the adult’s ability to shape these features. Research on mentoring and youth sports suggests that a single negative experience with a mentor or coach often has proportionally more influence on that relationship than a single positive experience (Rhodes, 2002; Smoll & Smith, 1996). Likewise, conflict with peers can reduce the learning that might occur within collaborative peer relationships (Larson et al., 2005), and the same could be said for interactions with community members. Negative experiences can also lead youth to drop out of organized activities, and totally disengage from learning in this context. Research on youth sports shows that performance anxiety can lead youth to drop out (Scanlan et al., 2005). Studies of youth in other organized activities also suggest that negative experiences contribute to youth dropping out (Hultzman, 1993; Patrick, Ryan, Alfred-Liro, Fredricks, Hruda, & Eccles, 1999) Given the potentially disruptive effects of negative experiences on youth’s engagement in programs, they need to be a significant topic of study. Knowledge of the variety of negative experiences that youth encounter and their consequences is essential to evaluating and ultimately to improving youth programs (Dubas & Snider, 1993). Since the current knowledge on negative experiences is sparse and unsystematic, we felt that it was essential to begin by listening and documenting the range of negative experiences that youth report in organized activities. Our choice to focus on youth’s own open-ended accounts was based on two premises; first, that understanding the reality that people experience is important in its own right (Patton, 1990; van Manen, 1984), and second, that youth’s conscious appraisals of their experiences in an activity influence their engagement in the activity and their decision to remain in it. To identify the range of young people’s negative experiences, the present investigation utilized focus group interviews. 
Focus groups are one approach to group interviews that utilize group dynamics to elicit detailed information (Taylor & Bogdan, 1998). Focus group methodology is designed to create a non-threatening environment that promotes self-disclosure. Although the data obtained in group interviews can be influenced by social desirability (Krueger, 1988), youths’ experiences in organized activities often emerge and are given voice through interactions with others (Patrick et al., 1999; Rogoff, Baker-Sennet, Lacasa, & Goldsmith, 1995). Thus, the dynamic of focus groups is well suited to eliciting young people’s accounts of the variety of negative experiences they encounter in this context.
Method

Sample
Ten focus groups were conducted with 4-9 adolescents in each.A total of 55 adolescents (23 boys and 32 girls) participated.Six focus groups were conducted in the high school of an ethnically diverse mid-sized Midwestern town.School counselors selected students to participate who were active in school activities and whom they thought would be articulate.In order to be certain to be inclusive of community-based youth organizations, three additional focus groups were formed from members of a community-based arts program, an FFA chapter, and a service-learning, leadership organization for high school women, primarily African American, sponsored by a university sorority.One additional focus group was also formed from student volunteers at a university high school.This use of purposeful sampling resulted in focus groups that were representative of young people who were or had been actively involved in activities.
The focus groups planned through the high schools were conducted during the school day, in the school building.The focus groups formed from community-based organizations were conducted during the groups' regular meeting time, either at their regular meeting location or in the researchers' lab.The focus groups were mixed gender, age, and race, whenever not restricted by the demographics of the population (e.g., for the community groups).These youth had a mean age of 16 years (range 14-18).Twenty-two percent of the participants identified themselves as African American, 18% identified as bi-racial, 4% identified as Asian, and just over half (56%) of the participants identified themselves as White.
Procedures
Prior to participating in the focus group, youth completed a brief background questionnaire: providing information on their age, gender, the activities they were involved in, and how often they participated in each.Youth activities were defined to include school-based extracurricular activities, community-based youth organizations, and all organized activities and programs for youth that are both voluntary and structured (Larson, 2000).These youth were highly involved in a variety of activities; 83% were involved in a club or organization, 60% were involved in performance or fine arts, and 72% were involved in sports.
One of the authors or a trained graduate student was the moderator for each focus group.The moderator followed a semi-structured interview guide, designed to get the students to describe their specific negative experiences in youth activities.To establish rapport, the focus group began with open-ended, descriptive questions aimed at getting all students involved and talking about their experiences in organized activities (Taylor & Bogdan, 1998).Engaging each participant helps establish a common base for sharing, and makes it easier for participants to speak again (Krueger, 1988).
Following this rapport-building stage, the moderator first asked participants to describe the types of growth and learning experiences they had in youth activities.The results of these data are reported elsewhere (Dworkin, Larson, & Hanson, 2003).Next, youth were asked to describe the types of negative and "bad" experiences they have had in organized youth activities.After students' spontaneous descriptions of negative experiences were exhausted, five probes were used to help identify additional negative experiences that had not spontaneously emerged.These probes asked about types of experiences that have been mentioned in other studies: negative group interactions, negative peer influences, negative interactions with adults, stress, and discovering something about yourself you did not like.We encouraged youth to give specific examples of negative experiences, however, in the flow of the conversation they also identified generalized experiences, using language suggesting that they had encountered them more than once and believed that other youth also had these experiences (e.g., "Sometimes you fail, and you don't want to do anything again or try anything new.").The focus group sessions lasted 45 to 60 minutes, with approximately one-third of each focus group session dedicated to talking about youths' negative experiences.The focus groups were tape recorded.
Data Analysis
The focus group transcripts were coded to identify recurrent themes and categories of negative experiences (Taylor & Bogdan, 1998).Codes were developed from the focus group transcripts (Charmaz, 1988).Consistent with a phenomenological perspective, these data were analyzed under the assumption that the data provided by participants correspond to their actual experiences and to the meanings they apply to these experiences.In addition, interpretation of the data included distinguishing between youth's statements that were made spontaneously and those elicited in response to the probes.We used NVivo, a computer program, to assist with the coding and sorting of the data (Richards, 1999).
The interviews were transcribed verbatim, noting salient features such as long pauses and laughter.To preserve participant confidentiality, the interviews were transcribed using pseudonyms and eliminating any identifying information.To ensure accuracy, the transcripts were then carefully checked against the tapes.Next, open coding was used to identify themes, patterns, and concepts in youths' spontaneous descriptions of their negative experiences.Every event and idea of a given phenomenon was named.We determined that these negative experiences most readily categorized in two ways -according to the person or persons portrayed as the source of the negative experience (e.g., peers, adult leaders, oneself), and by the way in which the youth responded to the negative experience (e.g. a passive response of feeling negative emotions, active coping).Then we coded students' responses to our interview probes.We found that these responses fit into the two larger categories already identified.As a final step, axial coding, a more intense form of coding used to identify properties of domains that emerged during open coding (Strauss, 1987), was used to identify the types of negative experiences related to each category of person and the types of responses to negative experiences.Through this process, themes within these broader categories emerged.
The categories that emerged from students' descriptions of their experiences are described in the following sections.Given that multiple responses to an item were sometimes provided by members of the same focus group, sometimes in response to each other, we did not feel it was useful to provide counts of the frequency with which different categories were reported.On reporting the results below, we indicate that a category was frequent, only when it was reported by multiple youth across multiple focus groups.In addition, the categories are exemplified by direct quotes from students (Ryan & Bernard, 2000).
Types of Negative Experiences
Students identified five categories of persons portrayed as the sources of their negative experiences: peers, adult leaders, themselves, parents, and community members. Axial coding revealed the types of negative experiences related to each category of persons.
Peers
The largest number of negative experiences was attributed to peers and to peer group dynamics within the activity.First the students described encounters with aversive peer behavior.A boy in FFA held the position of Historian and had the role of recording all the activities of the chapter.He complained that "the members will start getting mad at you and start erupting at you."The authoritarian style of youth in leadership roles was a commonly reported aggravation.A student in a theater production reported that the student directors of the play "get all up in your face and mad."Unsportsperson-like behavior was another aversive behavior that students identified.A girl reported quitting the basketball team because other team members played too aggressively, "They didn't even try to go after the ball, they were just trying to hurt somebody.It wasn't about basketball."These frequent reports of aversive behavior may stem from the nature of the activities, demands created by competition, or from the fact that youth activities often bring adolescents into contact with peers with whom they would not have otherwise chosen to affiliate.
A second type of negative experience was the formation of cliques and exclusive friendship groups among participants in the activity.These made interactions difficult and led to some youth being left out.A male cheerleader described his experiences: "They all got their little cliques, like three or so stay together, and then, if you go and talk to one group or another group you're favoring somebody."A girl in track reported that group divisions had created a situation in which, "No one wants to have fun in practice, and the coach is always stressed out because everyone has a problem with everyone else.So, it's hard to do what you're supposed to be doing." Third, and interrelated with these experiences, youth described poor cooperation as another category of negative peer experiences.They reported frustrations with the lack of synergy within the entire group and between individuals.Disruptions in teamwork were attributed to personality clashes and to people procrastinating or not doing their part.One girl said, "When somebody didn't show up or didn't get a task done, it kind of left the rest of us hanging."A boy working on the yearbook staff reported: "At the beginning of the year, we had a lot of people join, and now it's down to the staff we had last summer.So almost everybody quit, and we have to do all the pages that they didn't do.We gotta really work to get our stuff done."The failure of others to do their part interfered with the achievement of goals that youth had for the activity, and sometimes left them doing much more work than they planned.Fourth, some students described being subject to negative peer influences.Some of the experiences in this category resulted from our probe on this topic.For instance, one boy said, "If they're your teammates, that would probably have a bad influence.I mean, everybody wants to party every now and then.So you give in and you do what they do.That's probably bad."This boy did not identify the type of behavior he was being pressured to adopt, but his statement suggests peer processes similar to those lying behind the finding that participation in sports was related to increases in alcohol use (Eccles & Barber, 1999;Moore & Werch, 2005).
A final category of negative experiences attributable to peers, dealt not with peers in the activity but with those outside of it.A number of youth reported being ridiculed for belonging to the activity or for the performance of the team or group.A boy in FFA reported being taunted that their initials stood for "Future Fags of America."A girl reported that their dance group had performed a really hard routine during halftime of a basketball game, but "everyone in the crowd is like, 'You guys suck'."These comments from non-members often stung.Someone on a losing sports team said, "Sometimes the reputation that we have kind of pulls our self-esteem down." The high rates of negative experiences with peers can be understood in terms of the developmental features of adolescence.Friends and peers are often the most important people in adolescents' lives, so teens are very sensitive to how peers act and what they think (Brown, 2004).Participating in activities has the potential to provide youth with many social benefits (Patrick et al., 1999).Yet managing interpersonal relationships with other teenagers, including those whom you are thrown together with in an organized activity, is challenging.Of course, difficulties in relating to and working with others occurs across the life span, but Larson, Hansen, and Walker (2005) have argued that the cognitive egocentrism of this developmental period may increase the difficulties for teenagers.Their nascent ability to see others' points of view and coordinate actions with others may heighten the possibility for peer misunderstanding and conflict.
Adult Leaders
The students also reported that many negative experiences were attributable to their adult leaders.First, youth described frequent experiences of being upset when leaders favored certain youth over others.These were situations where they perceived that some youth received special treatment, while others were picked on.A youth reported, "I was hurt one time, and when I was hurt, it didn't matter [to the coach].But there was another player that was hurt and, 'Oh, you need to sit out, and you need to make sure your arm's okay'."A girl reported a similar reaction in a different situation: "That makes me so mad, because when a coach picks favorites, it doesn't help anybody else but that person.It makes you feel like they only care about that one person.And sometimes they're just so busy about that one person that that one person has all that stress and that one person can't even do it all anyway and it's going to end up hurting that person."This sense of injustice was also reported when a leader or coach selected out students for criticism.One boy said, "My band director can be difficult sometimes and sometimes it really seems like he picks on some of my friends for no reason or for stupid reasons."The frequency of these reports suggests that adolescents are very sensitive to unequal treatment from adults.
A second common category of negative adult experiences was leaders who were disrespectful or demeaning.One youth said, "This coach of mine can make you feel like you were born wrong."Another said, "Our coaches are always negative; they put us down."In several cases, youth saw this type of cutting comment as an attempt to motivate them, but this was rare.In one case, this demeaning attitude was experienced as discrimination, "They think that you don't know nothing.It also has to do with your race and your gender."Research suggests that support from adults is a critical feature of fostering development in youth programs (Eccles & Gootman, 2002); disrespectful and demeaning comments from leaders are likely to undermine young people's experience of support.
Third, youth described leaders placing unreasonable demands on them.Some students attributed this to the fact that leaders sometimes "think that your life is centered around what you do with them."A girl complained that the adult leader wanted them "to choose between practice and our religion."In another situation, a youth reported: "The advisor he takes on more than what we can handle, and it ends up that everybody gets burned out and doesn't want to do it, but yet he brings it on and decides that we need to do it.It kind of gets discouraging."An underlying problem was when leaders' expectations for the activity did not match those of the youth.One girl said, "With coaches if winning is everything to them, and it's not to you, that can really make a season very unenjoyable."Another complained that they "expect you to be something better than you can do it.And you know that you're working hard, but they don't believe you."In an observational study, Zeldin and Camino (1999) found that youth practitioners sometimes expect youth to do things that even they could not do, a situation that sets youth up for failure.
A fourth type of negative experience was related to adults who were unknowledgeable or poor leaders.This included leaders who were inexperienced either with the activity or in serving as a leader for that activity.For instance, one girl said, "On our JV volleyball team, we had a new coach this year and that was her first year.I mean we had to teach her basically everything."Another girl said, "Last year during basketball season, we had a coach who wasn't all there, and he didn't really know how to talk to the girls, and one of the girls just ended up going off on him, and crying, and leaving during practice and everything.And that just pulled everything apart for a little while.Everyone was just really nervous about what had happened."In one instance, a youth reported that the girls in the program became so angry and confrontational with their coach for his ignorance and repeated absences from practice that he responded by placing a note on his door that said, "Since you guys don't want me to be the coach, then I won't coach today.You guys can coach yourselves."Many of the adults leading youth programs are volunteers with little or no training (Carnegie, 1992), and this may increase the frequency with which youth encounter incompetent and immature adult behavior.This can be particularly discouraging as youth activities provide a critical opportunity for young people to gain new skills.
Fifth, several students reported negative experiences when leaders tried to be more of a friend than a leader.These students felt the adults were unsuccessful at maintaining their role as leader when they were also trying to be a friend.For example, one girl said, "I get fed up with that when they try to be my sister or my mother.That's not what I'm here for.I'm here to sing."Although many youth appreciate leaders who are empathic and provide emotional support (Eccles & Gootman, 2002;McLaughlin, Irby, & Langman, 1994), in some cases youth find a leader's attempts to relate to them on a personal level to be intrusive and disruptive to their participation in the activity.
Lastly, students described instances of inappropriate and unethical adult behavior.One girl in track complained that, while her coach followed the rules, coaches on other teams cheated, "Like when they're doing times, they'll say their person's time was faster even though it was really slow."In several cases, coaches were reported to encourage physical violence.One boy said, "It got to a point where he [football coach] doesn't tell us, go out there and win a game.It's go out there and hurt somebody."This is consistent with the findings of Eder and Parker (1987) that being physically aggressive was praised in football and that some coaches taught youth that winning required being overly aggressive, even to the point of injuring another player.
The students' large volume and variety of negative experiences with adult leaders reflects the central role that leaders play in organizing and setting the climate in most youth activities (Eccles & Gootman, 2002).When combined with the finding of Hansen, Larson, and Dworkin (2003) that youth report high rates of inappropriate adult behavior in sports activities, it becomes evident that many adult leaders are not meeting the developmental needs of youth.The wide array of negative experiences reported by our students reflect different ways in which adult leaders failed to provide, or undermined, different features of a positive and facilitative environment.
Oneself and Other Parts of One's Life
Another category of negative experiences included those portrayed as originating from the adolescents themselves. This included negative experiences related to the students' self-evaluations and to conflicts between the organized activity and other domains of their lives.
A first type of negative experience in this category was performance anxiety, most often reported in sports and occasionally in music. As expressed by one boy, "The night before a big game, you start getting worried. If I mess up here, I might cost the whole team." Research on youth sports shows that performance anxiety can impair performance and lead youth to drop out of an activity (Scanlan et al., 2005; Smoll & Smith, 1996).
An associated type of negative experience was the distress students reported after they did not perform as well as they expected.This distress was related both to individual and group performance.For instance, a girl said, "If things don't go the way you expect them, it gets a whole lot more frustrating and it's a lot harder."A boy said, "Our football season wasn't so good.I mean, we just kept losing and losing.It's never fun when you lose.You just want to give up and quit."When individuals did not perform well, they also described feeling that they had let others down.For example a girl said: "I get down on myself a lot when I'm not doing what I want to, and so the other teammates see it and they can't bring me up if I get so mad.It directs the whole level of the whole team if one player can't get back up."As with performance anxiety, research on youth sports has documented that failure to achieve goals within the activity is a significant source of distress (Brustad, Babkes, & Smith, 2001;Scanlan et al., 2005).
Another type of negative experience related to self was encountering one's own negative behavior or traits.A number of these responses resulted from our probe on this topic.One student described how losing his position as captain forced him to recognize his own negative behavior: "The most disappointing thing that I figured out is that I didn't have enough ambition and I didn't have enough drive and determination to do what I wanted to do, until I figured out that it was hurting the whole team…and then I got my captain's spot taken away from me.And that hurt, that made me realize that I let the team down and it hurt really bad."Another boy described learning he did not like how he interacted with his teammates."I discovered that I had a problem with a couple players on the team, and instead of just talking it out with them, I just….go off into the corner and be to myself…so I didn't really like that about myself."Youth activities are a context in which teenagers explore their identities, and it is inevitable that sometimes youth discover selves that they do not like.
A final area of negative experience in this category was stress related to competition between organized activities and the demands from other parts of their lives.As one youth said, "Stress happens to me when I am trying to balance homework, housework, and a job."Another described how this stress could accumulate: "Like sometimes, if you're in a lot of activities, your day might be too long, and so you get to bed really late and then you have to get up early for school and you don't get enough sleep.And then, falling asleep in class, your grades suffer and basically it's a chain reaction."The stress experienced by contemporary "hurried youth" has been described in the popular writings (Brooks, 2001;Meeks & Mauldin, 1990).Although time budget research suggests that the majority of American teens are not over-programmed (Larson & Seepersad, 2003), it is important to understand the experiences of those who are in this situation.
Adolescence is a period when a young person's sense of self is in flux, which may make youth more vulnerable to negative identity experiences. Youth programs engage youth in actions and expose them to norms and values that may bear in both positive and negative ways on their sense of who they are (Youniss, McLellan, Su, & Yates, 1999; Youniss & Yates, 1997).
Parents
A smaller reported source of negative experiences was students' parents.Most often this took the form of feeling pressure from parents.This pressure included pressure to join an activity, pressure to quit an activity, pressure to stay in an activity, and pressure to perform better in an activity.For instance, one student said, "Sometimes, parents pressure you to either stay in or get out, like if they really want you to do something and you really don't want to do it.Maybe you'll do it anyway to make your parents proud of you.[But] if my parents are always going to be on my case about it, then it's not worth it."Several students described their parents yelling at them.One boy said, "If I ever quit something, my dad would probably scream my head off."Students also described being forced to quit an activity when a parent felt they were over committed and their schoolwork or job was suffering.Other research has documented similar findings of stress related to parental pressure within sports (Leff & Hoyle, 1995;Scanlan et al., 2005); less is known about whether similar pressure is a problem in other activities.Participation in youth activities has been found to facilitate a positive parent-child relationship and parental monitoring (Mahoney & Stattin, 2000).When a young person's relationship with his or her parent around activities is not positive, these positive influences on the parent-child relationship are less likely to occur.
Community Members
The final source of negative experiences described by the students was community members.These youth encountered inappropriate or unappreciated community member behavior.A few students described negative fan behavior at sporting events."The teammates' dads, they don't think their kids can do anything wrong.In one instance, she was just not sharp that game and her dad was yelling at everybody else trying to make it sound like it was their fault."This unappreciated behavior may have come from the parents' of their peers or from other members of the community.One student described having a negative interaction with community members when trying to sell tickets to the school play: "We were trying to sell tickets this week.
And a lot of times we'd go and ask if anybody wanted to buy a ticket, and they're like, 'No, we don't want to buy tickets to the play, why would we want to come see the play?'I mean, we put a lot of hard work into it."In some cases this negative adult behavior came from adults who had a peripheral relationship to the activity.A youth complained that some of the adults at a youth leadership conference had "a whole bunch of stereotypes about teenagers and what we do and how we act."Another youth reported that the sponsor of a theater group was prone to temper tantrums, "If she doesn't like something, she throws chairs and stuff and does other interesting things." Although infrequent, these negative experiences with adults in the community are an important cautionary warning.Currently there is much emphasis in the field of youth development on encouraging greater engagement of youth in communities, often via youth programs (Hughes & Curan, 2000;Zeldin, Camino, & Wheeler, 2000).And interacting with community members can contribute to youths' emerging self-definition (Eccles & Barber, 1999).But, although it is possible to screen and to train the adult leaders of youth programs, it is not feasible to do the same for all the adults whom youth may come into contact with via community outreach.
Responses to Negative Experiences
The students described two types of responses to their negative experiences in organized youth activities.One was a passive response of feeling negative emotions, which in some cases exacerbated and prolonged the negative experience.The other entailed some form of active coping.
Negative Emotions
Students frequently described negative emotions as a central part of their response to negative experiences.The emotions they described included anxiety, anger, sadness, and being stressed or upset.One girl said, "When you don't do as well as you'd like to, you get really upset.I get really mad."Another youth reported that after experiencing stress, "your nerves are shot completely."Yet another observed that "some girls can't take criticism and it just tears them apart." These negative emotions became particularly problematic when they were prolonged.One girl described her inability to deal with feelings of anger, "When you don't do very good, I get mad, and I just keep getting worse."Another girl described her mounting feelings of stress in dance: "Every week we had to learn a new dance, and you have to have that four minute dance learned by the end of the week.If you don't have it, then what are you supposed to do?And it's hard having to learn that on your spare time and then having to have other time for everything else and schoolwork.It was really really hard and very stressful."Although some of the negative emotions passed quickly, in other instances such as these, they persisted.
In some cases, the negative emotions affected youth in ways that led to their perpetuation.The students talked about not being able to prevent their emotions from interfering with their attention and performance.For example, one girl said, "It kind of carries over to your schoolwork.Like if you're having a bad time in your extracurricular activity, if you're not getting with that, then your attitude's like, 'well I'm not going to get this either'."The students also reported that negative emotions led them to act in ways that interfered with their relationships with peers.One youth reported, "I'll be pissed off and yell at people."Another observed that she would develop this "really hatey attitude....I have this face like I want to kill somebody."The students also reported that negative emotions can damage relationships with adult leaders.
One girl said, "A coach of another team, she kind of made me upset one time and I accidentally told her to shut-up.I didn't mean to, but she made a very bad comment about somebody on our team.And it almost got us disqualified from the track meet."Through these various scenarios, negative emotions can create self-perpetuating cycles.
Certainly many negative emotions come and go quickly, but that is not always the case, especially when the situation eliciting them is a continuing one.Students' reports indicate that negative emotions can disrupt and interfere with their psychological engagement and their participation in meaningful relationships in the activity, as well as leading youth to drop out.
Coping and Personal Growth
The second type of response to negative experiences was an active coping response.This included, first, using emotion-focused coping.One girl described how "I get really mad and I can't calm down," but if she separates herself from people, she can get over it: "I just have to sit alone for an hour and it goes away."For some youth this stratagem took the form of restructuring how they interpreted a situation so that their emotional involvement was reduced.For example, a boy explained: "A couple weeks ago, our baseball coach was quoted in the paper as saying we weren't very good, and a lot of people on our team took that the wrong way.But, I kind of took it as he's just kind of telling them the truth because we only won three games.And we just lost like 17 to nothing.And a lot of people on the team took it like he was trying to say that we all sucked and we were worthless.But I took it as you know he was just telling people how it is.But we have to get better -that's what our goal was this year -to get better."Although many team members had been upset by the coach's statement, this youth was able to use the statement as a stimulus to work harder.
In some cases, successful emotion-focused coping led to learning.Students reported that experiences in youth activities had helped them learn to control anger and anxiety.They also recounted acquiring strategies for managing stress and learning to prevent emotions from interfering with attention and performance (Dworkin et al., 2003).Several students also reported that they had learned from youth activities that they could get along with an adult leader, even if they disliked him or her.
Next, students reported the use of problem-focused coping.For example, in response to competing time pressures from school, work, and jobs, a number of youth reported developing strategies to better organize their week.One girl said, "It has taught me to organize my time better.I have had to put my social life on hold sometimes."In some cases problem-focused coping took the form of standing up to adult leaders, as in this example: "One of our coaches, she would always make us have to choose between practice and our religion, you know.It was really hard and I had to walk out of practice a few times before she got the point that I believe in God really strongly, and she's not going to make me go to heaven.So if it took me having to miss practice for her to understand that, then I was sorry.But finally she came around and she started understanding that sports was not my life, it was something I liked to do."This girl responded to the negative behavior of her leader with a deliberate, long-term, and ultimately successful campaign of action.
A central point conveyed by these examples is that negative experiences sometimes led to positive development for these students.By confronting an emotion or solving the problem that created the negative experience, youth sometimes learned a great deal.The experience provided the material for developmental change and learning.Sometimes this is a long haul requiring much determination, as in the example of one girl's negative experience with her dance team."I thought that I was a very good dancer before I was in Eaglettes and they would always have to stop and teach me things over and over again.I was getting so frustrated.And I ended up getting the 'Most Improved Dancer Award', but it was hard.And they would always tell me, 'you're not doing this right.Put your leg straight.Do this.Do that.'I'm like, oh my gosh, I'm quitting.I was getting so upset, but I finally got it.And then I could give other people the criticism that they were giving me.And you feel more sure of yourself, and it feels good." Research shows that in many circumstances, such as these, adversity stimulates basic human adaptation processes, which in some cases leads to positive development (Masten, 2001).What we cannot determine from our data is: what individual and situational conditions make the difference between a negative experience just becoming more negative, versus becoming an opportunity for constructive learning and growth?Given this lack of knowledge, it would be irresponsible to dismiss young people's negative experiences in youth programs.But it does suggest that, when these experiences occur, adult leaders, parents, and community members should look for ways to help youth use them as opportunities for growth and learning.
Conclusion
The findings of this research provide a beginning cataloging of the variety of negative experiences that adolescents encounter in organized activities, as well as how they respond to them.Negative experiences are important to understand because they disrupt youth's process of engagement in the developmental systems provided by youth programs.They can interfere with attention to activities, reduce engagement in the relationships through which development occurs, lead to burn out or drop out, and -in some cases -can provide the seeds of learning and growth.
Limitations and Future Directions
In interpreting these findings, it is essential to keep in mind the limits and strengths of the methodology used.Focus groups create a tumbling exchange of rich reports from participants, but these reports are limited to what youth are willing and able to report.The group context did not seem to impede students from talking about negative peer experiences, but it might have limited descriptions of topics that young people find personal and private (e.g., sexual harassment), thus we cannot claim to have provided a full accounting of the range of negative experiences that youth encounter.It is also possible that some of the reported negative experiences have minimal long-term significance.Most importantly, the method was limited in that it did not allow us to identify the specific situations or programs in which these negative experiences occurred -we suspect that they are much less likely in some programs than others.The strength of the methodology lies in obtaining youths' own accounts of their negative experiences, in their own words.By obtaining what is salient to them we are most likely to have captured material important to their decisions to remain in an activity and to developmental processes for which they are the agent.
Keeping these limitations and strengths in mind, three highlights stand out from the students' accounts.
• First, almost all of the negative experiences involved other people.They involved disruptions in youth's participation in relationship systems.Among these, peers and adult leaders were clearly the most frequently identified sources of negative experiences.This is not surprising given that peers and leaders are the main people youth have contact with in an organized activity.Nonetheless, the frequency of negative experiences with peers -with aversive behavior, cliquishness, and negative group dynamics -suggest the need for close attention to these peer dynamics as a crucial part of young people's experience.Both researchers and adult leaders need to draw on existing literature on adolescent peer relationships (e.g., Brown, 2004) and ask how it applies within organized settings.Likewise, the frequency of reported negative experiences with adult leaders -with their playing favorites, disrespecting youth, and upsetting young people in other ways -alerts us to the sensitivity of adolescents to adults' behaviors.Relationships between youth and adults are challenging and often contain ambiguities, for example, regarding ownership, authority, equality, and whether the adult is more like a parent, teacher, or friend (Camino, 2000;Krueger, 2000;Larson et al., 2005).Researchers have an important role to play in helping adult leaders better understand the dynamics of these relationships.
• Second, we observed that a disproportionate number of the youths' examples of negative experiences came from sports.This is partly attributed to the fact that more youth are involved in sports than in any other organized activity (Carnegie, 1992; U.S. Department of Education, 1995).But in a recent survey study of one Midwestern community, we found that, in comparison to other youth activities, adolescents reported higher rates of negative peer interactions and inappropriate adult behavior within a given sports activity (Hansen et al., 2003).These higher rates, research suggests, are attributable to the competitive nature of sports, which can elicit inappropriate behavior in youth, their parents, and coaches (Brustad et al., 2001;Siegenthaler & Gonzalez, 1997).It is important to note that the students we studied also reported high rates for certain types of positive learning experiences in sports (Dworkin et al., 2003;Hansen et al., 2003).So sports appear to be related to a pattern of high negatives and high positives.Research needs to be directed at understanding how to minimize the former while maximizing the latter.This leads us to the third highlight of our findings, one that makes this balancing of negatives against positives more complex.
• Third, the students reported that some experiences that they initially appraised as negative sometimes led to positive outcomes.In many cases, the youths' aversive experiences were just negative, or even led to chains of further negatives, as when anger interfered with performance or caused youth to lash out at peers.But in some instances, the students reported coping effectively with aversive experiences.They reinterpreted the situation in ways that were less aversive, for example, by recognizing that a coach's demeaning statements were intended to motivate them; or they learned something from the negative experience that made them better able to avoid or deal with that type of experience in the future.This pattern leaves us with the complex question of when and for whom adversity leads to growth and when and for whom it has negative consequences.
These findings must be seen as a very preliminary step in categorizing and understanding negative experiences in organized youth activities.Survey research is needed to better document the rates of negative experiences across populations of youth as a function of the type of activity, characteristics of adolescents, and the strategies and styles of adult leaders.
This study obtained limited contextual information for interpreting many of the reported experiences; further qualitative research is needed to understand the situations that lead to their occurrence -from the viewpoints of other youth, adult leaders, and participant observers.
Ultimately, longitudinal studies are needed to evaluate long-term sequelae: to better understand when different constellations of negative and positive experiences are related to dropping out, adverse consequences, and positive development. We do not yet have sufficient information to conclusively say which experiences are likely to be most adverse and how they will affect youth.
Implications for Practice
While as researchers, our professional inclination is to say that more research is needed before applying these findings to practice, clearly the findings suggest issues that adult leaders need to be attuned to, ranging from paying attention to how their own actions are interpreted by participants to helping youth restructure adverse experiences.For example, in programs where young people are brought into direct contact with community members, the findings suggest that leaders should work to prevent or prepare young people for negative behavior by these adults.Research on youth sports shows that even short term training programs for coaches can significantly improve the experiences of athletes (Smith & Smoll, 1997;Smoll & Smith, 2001).We need to draw upon the expertise of seasoned youth leaders and the emerging body of research knowledge to ensure that adult leaders learn the skills needed to optimize the developmental experiences of youth in their programs. | 2017-10-20T05:18:05.184Z | 2007-03-01T00:00:00.000 | {
"year": 2007,
"sha1": "c52c6999ab170fbba8e3bf2d96b191215230b84a",
"oa_license": "CCBY",
"oa_url": "https://jyd.pitt.edu/ojs/jyd/article/download/373/359",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "89a3d26fc61bcd65251f58561a5fdf45dba3b90f",
"s2fieldsofstudy": [
"Education",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
237555304 | pes2o/s2orc | v3-fos-license | Unilateral Acute Idiopathic Optic Neuritis With Superior Altitudinal Visual Field Defect as a Presenting Feature
Patients with acute optic neuritis typically present with acute loss of vision. We describe a case of a young lady of 25 years of age with blurring of vision in the upper visual field of the right eye with otherwise intact visual acuity as the only presenting symptom. Although altitudinal visual field defect is not unknown to be associated with acute optic neuritis, it is generally considered a relatively uncommon occurrence. Our case illustrates an unusually unique occurrence of upper altitudinal visual field defect in association with unaffected visual acuity as the sole presenting symptom of acute idiopathic unilateral optic neuritis. When an altitudinal visual field defect is a presenting feature, besides the usual vascular and compressive causes, optic neuritis should be remembered in the list of differential diagnoses.
Introduction
Inflammatory optic neuropathy, or optic neuritis (ON), is the most common cause of optic nerve injury in young adults. ON has multiple etiologies, including demyelinating, infectious, and autoimmune causes. It can also occur independently as idiopathic optic neuritis.
A variety of visual field defects can be found in optic neuritis. The Optic Neuritis Treatment Trial (ONTT) identified different types of visual field defects at the baseline in the affected eyes. Diffuse visual field loss was present in 66% of the affected eyes and central field loss in 27%. Altitudinal visual field abnormality which is classically believed to be highly characteristic of the ischemic optic neuropathy occurred in 8% of the affected eyes at the baseline in ONTT [1].
Acute onset altitudinal visual field defect, which involves loss of visual sensation in the horizontal half of the visual field, often vascular in origin, can rarely be caused by compressive neuropathy due to a tumor or aneurysm [2]. Acute optic neuritis is generally considered a relatively uncommon cause of the altitudinal field defect.
Patients with optic neuritis typically present with acute unilateral loss of vision, usually in association with impaired color vision. We describe a case of a 25-year-old woman who presented with acute onset of blurred vision in the upper visual field with no subjective changes in either visual acuity or color vision. Humphrey's visual field plot confirmed the presence of a superior altitudinal scotoma in the affected eye. Local and systemic investigations suggested the idiopathic nature of the acute optic neuritis. Our case therefore aims to illustrate an unusual scenario of subjective blurring in the superior visual field from an altitudinal scotoma as the presenting feature of acute unilateral idiopathic optic neuritis.
Case Presentation
A 25-year-old Bahraini female presented to the eye clinic with blurring of vision in the upper field of her right eye of four days duration. There was no history of preceding febrile illness, immunization, trauma, or any past systemic or neurological disease or any medications. Her past ocular history included laser correction in both the eyes for a mild myopic refractive error four years ago. There was no significant family history. Her visual acuity on presentation was 6/6 in each eye. Ishihara test revealed a mildly impaired color vision in the right eye and fully normal color vision in the left eye. Ocular movements showed a full range but were associated with some pain on supraduction of the right eye.
Anterior segment and adnexal examinations were unremarkable. A grade two relative afferent pupillary defect was noted in the right eye. Fundus examination revealed a normal-looking optic disc, retina, and vasculature. Her visual field plot, 24-2 on Humphrey's field analyzer, revealed a superior altitudinal visual field defect in the right eye (Figure 1). The fellow eye showed a normal field plot. Optical coherence tomography and B-scan ultrasonography of the optic discs revealed no abnormalities.
FIGURE 1: Humphrey 24-2 visual field plot at presentation showing superior altitudinal visual field defect in the right eye (OD) and normal visual field plot in the left eye (OS).
Systemic evaluation, including full neurological evaluation and ENT status, was within normal limits. Brain and orbit MRI scans were unremarkable. Blood investigations, including full blood count, renal function tests, erythrocyte sedimentation rate, C-reactive protein, and serum folate and B12 levels, were all within normal limits. Serum autoimmune markers such as anti-cardiolipin antibodies, antinuclear antibody (ANA) screen, and double-stranded DNA were also within the normal range. Serological infective screening that included TP antibody for syphilis, anti-hepatitis C virus (HCV), HIV AB/AG combo, and hepatitis B surface antigen was non-reactive. The serum anti-aquaporin-4 antibody test also returned negative. Electrophysiological study of the visual-evoked cortical potential (VEP) revealed delayed latencies suggestive of optic neuritis.
Two days after the presentation, her visual acuity deteriorated to 6/15 at which point she was started on a five-day course of high dose intravenous methylprednisolone 1 gm daily under the care of a neurophysician. On her review four days after the completion of the treatment, that is on the 11th day from the initial presentation, she was noted to have total recovery of the visual acuity, full restoration of the color vision on Ishihara's testing and complete disappearance of the altitudinal field defect on Humphrey's perimetry ( Figure 2).
Discussion
Inflammatory optic neuropathy, or optic neuritis (ON), which can be defined as inflammation of the optic nerve due to various causes, is the most common optic neuropathy under 50 years among general ophthalmic practice [3]. Optic neuritis has multiple etiologies, including demyelinating, infectious, and autoimmune causes. Isolated optic neuritis not associated with any specific neurological or systemic disease is labeled as idiopathic optic neuritis. However, cases presenting as idiopathic optic neuritis may be the initial presentation in 20% of multiple sclerosis patients [4].
An altitudinal visual field defect, though not completely unknown, is a relatively uncommon feature of optic neuritis. The patients included in the Optic Neuritis Treatment Trial (ONTT) presented with various types of visual field defects in the affected eye, with two-thirds showing diffuse defects and one-third showing localized field defects. Keltner et al. [1] observed that only 8% of the participants from the ONTT presented with an altitudinal field defect. Altitudinal defects have been reported in both the superior and inferior halves of the field in optic neuritis.
It is suggested that either the inflammation of the optic nerve itself or a secondary perfusion defect caused by the inflammation may lead to the altitudinal field defect in optic neuritis [5]. Chin and Ismail described a case of multiple sclerosis-associated optic neuritis in a 17-year-old girl presenting with a severe reduction in visual acuity in association with an inferior altitudinal defect [6]. Our case is different in that the superior altitudinal defect was the presenting subjective complaint, with no changes in visual acuity. Kale et al. [4] suggested that the presence of an altitudinal defect should typically warrant consideration of other, non-inflammatory differential diagnoses. Its presence should therefore alert the clinician to look for more classic causes, vascular or compressive in nature. An interesting occurrence of an acute onset altitudinal field defect has been described as the sole presenting feature of a compressive lesion from an intracanalicular meningioma [7]. The clinical profile of the patient, along with pain on upgaze, normal-looking discs, reduced color vision, and prolonged latencies on VEP, all pointed towards an inflammatory cause in our patient. There was no radiological evidence of any abnormality, including any demyelinating lesions, on magnetic resonance imaging of the brain and orbit. Clinical and laboratory investigations excluded other causes of optic neuritis, including infective, autoimmune, and adjacent sinus disease. Optic neuritis associated with neuromyelitis optica (NMO) remained an important differential diagnosis, as it is known to have a higher incidence of non-central scotomas and altitudinal defects [5]. There was no clinical or serological evidence for NMO in our case. Our case of idiopathic optic neuritis presenting with an upper visual field defect with intact visual acuity is, to the best of our knowledge, a scenario not described in the literature. There is a possibility that the current etiological diagnosis of idiopathic origin may change in the future on further follow-up.
A number of randomized, controlled, double-blind trials of corticosteroid treatment of optic neuritis were evaluated in a meta-analysis in 2012. One randomized and controlled trial, the Optic Neuritis Treatment Trial, had a major effect on the current standard of treatment. In this trial, oral prednisone treatment at a dose of 1 mg/kg body weight/day for 14 days was compared with intravenous methylprednisolone treatment at 1000 mg/day for three days followed by oral prednisolone (1 mg/kg BW) for 11 days, and with placebo treatment. Treatment with intravenous methylprednisolone, which was not blinded, led to a more rapid recovery of vision, but the final outcome with respect to visual acuity, fields, and perception of contrast and color was no better than with oral prednisone alone, or indeed with placebo. Similar results were found in earlier and later studies as well; thus, it was concluded in the meta-analysis of 2012 that faster recovery is the sole benefit of steroid treatment. Among the patients in the Optic Neuritis Treatment Trial who were treated only with low-dose oral prednisolone, early recurrences within six months were twice as common as in the placebo group. Since the publication of these findings, low-dose oral prednisolone alone has been considered to be contraindicated for patients with typical optic neuritis [8].
The patients in the Optic Neuritis Treatment Trial who received intravenous methylprednisolone for three days all also received oral prednisolone over the ensuing 11 days. It is unclear whether this is necessary, and the guidelines leave the question open. Some authorities do not give oral prednisolone after intravenous methylprednisolone. Some current neurological and ophthalmological guidelines advocate treatment of optic neuritis with methylprednisolone at a dose of 500-1000 mg/day for three to five days [8].
For patients with ON whose brain lesions on magnetic resonance imaging indicate a high risk of developing clinically definite multiple sclerosis, treatment with immunomodulators (eg, interferon beta-1a, interferon beta-1b, glatiramer acetate) may be considered. Intravenous immunoglobulin treatment of acute ON has been shown to have no beneficial effect [8].
Recovery of visual function in acute optic neuritis is known to occur within the first month of the onset. In the ONTT, 79% of the participants had started to improve by three weeks and 93% by five weeks [9]. In our patient, four days after the completion of the course of a high dose of intravenous methylprednisolone, altitudinal field defect disappeared completely.
In summary, we described a case of acute idiopathic unilateral optic neuritis with the subjective complaint of upper altitudinal visual field defect as a sole presenting feature in the presence of normal visual acuity.
Conclusions
Optic neuritis is a multifaceted disease that may have atypical clinical presentations. The clinician should be familiar with atypical presentations which should prompt other differential diagnoses without ruling out the diagnosis of acute optic neuritis. If atypical features are present, urgent further investigations are indicated to exclude the differential diagnoses. Our case of acute idiopathic unilateral optic neuritis had an unusual presenting symptom of upper visual field blurring in the presence of unaffected visual acuity. When an altitudinal visual field defect is a presenting feature, besides the usual vascular and compressive causes, optic neuritis should be remembered in the list of differential diagnoses. Further imaging to exclude other | 2021-09-19T05:17:57.945Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "0fdbd189c907c56e630c7ac133cfd8a561a924eb",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/64373-unilateral-acute-idiopathic-optic-neuritis-with-superior-altitudinal-visual-field-defect-as-a-presenting-feature.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fdbd189c907c56e630c7ac133cfd8a561a924eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
58075200 | pes2o/s2orc | v3-fos-license | Healthcare systems within the Middle East
Diverse health systems within the Middle East continue to experience a high degree of variability with regards to accessibility, capacity, and the quality of care provided within each individual country. This paper summarizes the unique challenges and achievements within the healthcare systems of six countries in the Middle East region. Additionally, the review aims to provide evidence for how healthcare systems in the Middle East are managed and sustained despite differences in wealth and infrastructure, as well as the presence of conflict in certain areas. Canada can play an important role in supporting these countries with unique healthcare needs, and in supporting populations arriving to Canada from these countries.
introduction
Healthcare systems around the world are constantly evolving in order to adapt to new challenges presented by changes in the environment, disease patterns, demographics, and a myriad of other factors that may affect the delivery of healthcare services.1 Many factors have a direct impact on local health systems, including a country's wealth, population size, human resource capacity, and exposure to conflict.2 This paper will illustrate the heterogeneity of healthcare systems across the Middle East by profiling six countries. The countries included are Oman, United Arab Emirates (UAE), Egypt, Lebanon, Palestine, and Yemen.
These countries share a similar two-tiered healthcare system structure, with both public and private streams of financing and service delivery. However, there is a great deal of variability in public and private insurance coverage within a given population, as well as in the amount of cost-sharing that may be required for public health services. There is a significant range in the amount of government funds allocated towards healthcare across the region. For example, Oman is considered to be at the higher end of the range, where the healthcare system is 82% government funded. Meanwhile, Yemen's healthcare system is only 28% government funded.1,3 In terms of infrastructure and organization, health service delivery across the region varies greatly. Oman has a centralized system, whereas the remaining five countries are considered to have a decentralized service delivery organization.
As the world becomes more globalized and conflicts around the world cause vast migrations of people from one continent to the next, Canada can learn from international health systems such as those in the Middle East in order to prepare for the challenges that may arise abroad and at home.An awareness of the various vulnerabilities that can affect a citizen's health is the first step towards building a resilient health system, and by observing others this awareness can occur much faster.
oil wealth
The Persian Gulf states of the Middle East, including the relatively small populations of Oman and the UAE, acquire significant wealth from the oil industry.2 The wealth attained by these oil-producing countries allows them to allocate significant resources towards their healthcare systems, providing a means to further develop their physical infrastructure, healthcare training programs, and healthcare administration capabilities.2 Additionally, this wealth provides opportunities for forming partnerships at an international level. These conditions have allowed the UAE to strive to become a global leader in healthcare, whereby efforts to advance the current health system include improvements to information technology infrastructure and enhancing integration of services throughout the Emirates.4 These attributes are driving forces behind the creation of effective health systems within the Persian Gulf region, which will continue to play a role in improving the health of citizens living within this region.4
adequate infrastructure and social determinants of health
Countries with minimal financial resources and larger populations, such as Egypt and Lebanon, often have sufficient infrastructure, healthcare professionals, and other resources required to adequately support the delivery of health services in the population.2 Unfortunately, these healthcare systems often result in health service inequities due to the significant effects of the social determinants of health, including family income, insurance coverage, education, gender, and geographical location, which result in a wide array of negative long-term health outcomes.5 In Egypt, an individual's ability to access health insurance and high-quality health services is heavily influenced by their financial status and income.5 Therefore, an individual who lacks financial resources will ultimately find themselves with a reduced quality of healthcare. This is the case for 55% of the population, whereby uninsured citizens must exclusively pay out-of-pocket when accessing healthcare services.1 Many of these countries continue to face additional burdens on their healthcare system from the high influx of refugees fleeing from nearby conflict zones, as seen in Lebanon. Approximately 4.6 million residents live in Lebanon, including over one million Palestinian and Syrian refugees who have sought refuge from conflicts.6 This has led to significant instability within Lebanon, both structurally and socially, which has negatively influenced access to high quality healthcare services.1 With an already overstretched healthcare system, utilization of primary healthcare centers for maternal and child health-related services has increased by approximately 50% since the Syrian refugee crisis in recent years.6 Although there are ongoing international efforts in supporting the Syrian and Palestinian refugees currently residing within Lebanon, contributions are far from sufficient to provide coverage of healthcare services for all individuals residing within the country in an equitable manner.6
war and conflict
Countries that are currently involved in war or conflict experience unique and severe complications within their health system. Examples include Palestine and Yemen, although other Middle Eastern countries have also experienced similar effects since the Second World War.2 Conflicts in these countries have placed a great deal of stress on healthcare systems, affecting infrastructure, organization, financing, and human resources. For example, the civil war that has unfolded in Yemen over the years has left only 45% of the 3507 healthcare facilities within the country fully functional.7 Additionally, since only 28% of healthcare financing comes from government, Yemen cannot provide full health coverage to its citizens, resulting in cost-sharing and community health insurance initiatives.1 Within this region, hospitals and healthcare professionals have often been the targets of conflict, thus resulting in a high degree of uncertainty regarding the safety of seeking healthcare services and their availability to citizens.8 The geopolitical context in Palestine results in limited freedom of movement and economic stability due to Israeli occupation. Such factors introduce major challenges for the maintenance of health for the Palestinian population.9 Additionally, a lack of political unity in Palestine exacerbates the inaccessibility of high-quality health care services, which already exists due to imposed territorial segregation.10 Due to years of political instability and conflict, large waves of Palestinian refugees have escaped to nearby countries. For the population that remains in Palestine, the UN Relief and Works Agency, the Ministry of Health, Hamas, NGOs, and private sector players have been responsible for administering healthcare services to Palestinians within Gaza and the West Bank regions.11
conclusion
There are many important factors to consider when creating and maintaining a robust healthcare system within a country. The highest quality of care appears to be a direct result of sufficient finances, adequate infrastructure, exceptional governance, and social stability within the country. There will continue to be significant challenges to accessing quality care for populations throughout the Middle East as a result of politics, past and current conflicts, lack of financial resources, physical and social environments, and the negative outcomes that are derived from inaccessible medical treatments and healthcare services. These intersectional factors create unique and complex challenges to ensuring a high quality of life for residents. By addressing current issues at national and international levels, with particular focus on the governance, organization, and financing of health services, opportunities for successful interventions can be created and implemented.
In a similar manner to the Canadian context, anticipating evolving healthcare needs and appropriately planning and investing in the future of the system is vital.As stakeholders of the Canadian healthcare system, we must remain aware of the vulnerabilities that exist for incoming refugees that result due to previous exposures to war, conflict, social instability, and consequent unmet health needs.By adopting a health equity lens and working with the populations we serve, we must move towards reducing health disparities and improving overall quality of life for vulnerable individuals who arrive in Canada. | 2018-12-29T16:28:51.359Z | 2017-12-03T00:00:00.000 | {
"year": 2017,
"sha1": "8e22b0d98d621d8ce0cc5645034129530328f265",
"oa_license": "CCBY",
"oa_url": "https://ojs.lib.uwo.ca/index.php/uwomj/article/download/2009/1307",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8e22b0d98d621d8ce0cc5645034129530328f265",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
250407847 | pes2o/s2orc | v3-fos-license | NOx emissions trends in hydrogen lean premixed flamelets at high strain rate
NO$_{\rm x}$ formation in lean premixed and highly-strained pure hydrogen-air flamelets is investigated numerically. Lean conditions are established at an equivalence ratio of 0.7. Detailed-chemistry, one-dimensional simulations are performed on a reactants-to-products counter-flow configuration with an applied strain rate ranging from $a=100 \, {\rm s}^{-1}$ to $a=10000 \, {\rm s}^{-1}$ and the \texttt{GRI3.0} mechanism. Following a similar setup, two-dimensional direct numerical simulations are also conducted for representative strain rates of $2000 \, {\rm s}^{-1}$ and $5000 \, {\rm s}^{-1}$. Both solutions show a decreasing NO$_{\rm x}$ trend as the applied strain rate is increased. This decreasing emission outcome is highlighted for the first time in this study for lean pure-hydrogen flamelets. A deep analysis of the 2D solution underlines that there is no production of NO$_{\rm x}$ in the second dimension, thus proving that the emission trend is not a result of a setup preconditioning, but is instead a direct physical effect of stretch on the flame. Furthermore, a detailed analysis of the NO$_{\rm x}$ formation pathways at $a=2000 \, {\rm s}^{-1}$ and $a=5000 \, {\rm s}^{-1}$ is performed. Thermal NO$_{\rm x}$ and NNH pathways are shown to both contribute significantly to the total NO$_{\rm x}$ production. While the NNH route contribution is roughly constant at different strain rates, a significant decrease is observed along the thermal NO$_{\rm x}$ route. Overall, results show that lean and highly-strained hydrogen flames experience a significant decrease of NO$_{\rm x}$. This property is discussed and analysed in the paper.
Introduction
The yearly increasing energy demand is currently substantially met by fossil fuels, with consequent emissions of CO 2 and other pollutants. Hydrogen has been identified as one of the possible alternatives to satisfy this demand and simultaneously reduce greenhouse emissions. In fact, hydrogen is carbon-free, and can be produced via electrolysis using renewable energy [1,2]. In recent decades, research efforts have focused on the possibility of burning hydrogen in lean premixed conditions, where the lower adiabatic flame temperature makes it possible to decrease NO x formation via the thermal route. Furthermore, hydrogen's strong reactivity and high lower heating value allow ultra-lean conditions to be reached without approaching lean blow-off. Earlier studies have discussed the influence of hydrogen addition on lean blow-off, showing that a minimal hydrogen enrichment can consistently decrease the blow-off equivalence ratio and thus broaden the possible burning regimes [3][4][5][6]. However, many technological challenges are involved in controlling hydrogen combustion in lean turbulent conditions. In fact, flashback and uncontrolled flame propagation, which are typical of this regime [7,8], are amplified by the combination of hydrogen's high reactivity, and hence its higher flame speed [9], its ability to auto-ignite [10,11], and its fast diffusion [12]. One interesting property of hydrogen flames that has not been fully understood yet is their behaviour under strain. Previous studies have highlighted the peculiar performance of hydrogen-enriched laminar flames with varying strain rates, suggesting that high strain regimes can potentially be exploited for practical applications of hydrogen combustion. Hydrogen addition has been shown to delay the extinction strain rate [3,4], proving that hydrogen is able to sustain very high strain rates. It has also been shown for syngas fuels that the hydrogen percentage influences the lean flame response with strain in terms of flame temperature and NO x emissions [13], and flame temperature and consumption speed [9]. In particular, in lean conditions the consumption speed is shown to increase with strain, and this trend is opposite to the one observed for hydrocarbon flames [14].
Figure 1: Sketch of the reactants-to-reactants (a) and reactants-to-products (b) counter-flow premixed strained flame configurations.
Furthermore, the hydrogen-enhanced differential diffusion effects and
the strain sensitivity have been proved to be interdependent and to have a combined influence on the flame response [15,16]. Even more interestingly, while the mass burning rate decreases with strain for pure methane flames, a certain amount of hydrogen addition inverts this trend, particularly in lean conditions [17]. Similar considerations hold for the heat release rate, which is found to increase with strain in lean hydrogen-enriched flames [18]. However, the effect of these distinctive hydrogen burning features at high strain rates on NO x emissions for purely hydrogen lean laminar flames is still an open question. In this study we numerically investigate counter-flow premixed hydrogen flames to shed light on their behaviour under intense levels of strain, in particular the effect on NO x emissions. This is, to the best of the authors' knowledge, the first time that such an investigation is conducted. The increase in heat release rate with strain discussed in the previous paragraph would suggest a corresponding increase of temperature and so of NO x emissions. Counter-intuitively, this study highlights that NO x emissions do not increase with strain, and conversely show a decreasing trend in lean conditions, particularly at very high strain rates. Nevertheless, despite the discussed advantages and potentialities, additional unexplored control challenges are potentially introduced in high strain regimes, such as complex vortex dynamics, turbulence-flame interaction, and local flame extinctions. Further modelling challenges are associated with the prediction of these phenomena distinctive of this regime. As reported in Figure 1, there are two premixed strained flamelet configurations documented in the literature. The first one is represented by a back-to-back or reactants-to-reactants counter-flow configuration, where two flames are stabilized symmetrically with respect to the flow stagnation plane (Figure 1a). Several studies are available where the effects of hydrogen enrichment and strain on lean blow-off, extinction strain, mass burning rate, and NO x emissions are investigated with this setup [3,4,13,17]. Specifically on NO x emission trends, a technical report from Xie and Wang [19] underlines a decreasing trend with strain with this configuration in rich pure-hydrogen flames, attributing to the NNH pathway the predominant contribution to NO x formation at high strain levels. However, the limitation of this configuration is represented by the proximity of the flames to the stagnation plane, particularly at the high strain rates achievable by purely hydrogen flames. In these conditions, in fact, the combustion has no space to complete, as is clearly visible in previous studies. Minor species are shown not to be fully burnt within the stagnation plane in Figures 5 and 6 of Jackson et al. [3]. Similarly, the reaction rates of the NO x pathways reach zero within the stagnation plane for lower strain rates in Ning et al. [13] (see Figures 6-8 of their work), but not at higher strain rates. The same happens to the water rate of production in Xie and Wang [19], Figure 2. Hence, this configuration appears less suitable for investigations on emissions, and ultimately not practical for the development of flamelet databases for turbulent LES simulations. The second premixed strained flamelet configuration is represented by a reactants-to-products counter-flow, where a single flame stabilizes on the reactants side of the domain (Figure 1b).
On the one hand, the single flame setup allows the combustion reactions to complete even at very high strain rates, as the fuel and the radicals have space to be burnt completely. This evidence can be found for instance in Marzouk et al. [18], Figure 4-6. This consideration suggests that the configuration is more appropriate for emission analyses in highly-strained premixed flamelets. On the other hand, the presence of complete combustion hot products on one of the boundaries can precondition the problem, particularly considering the products temperature. Despite this, the reactants-to-products configuration has been employed for physical analysis of strained syngas and hydrogen-enriched flames [9,18], and widely discussed for methane flamelets [20,21], particularly in the application of LES-FGM models with strained flamelets [22,23]. Therefore, the reactants-to-products configuration is employed in the present study. The purpose of the present work is to investigate lean hydrogen strained flamelets, with a particular focus on NO x emissions trends. Detailed-chemistry one-dimensional and two-dimensional DNS analyses are performed to achieve a deeper understanding of the employed counter-flow strained flamelet configuration. It will be shown for the first time that NO x emissions of purely-hydrogen lean laminar flames display a decreasing trend with strain. A detailed chemical analysis of NO x formation pathways is performed to support this evidence, further showing that thermal NO x mechanism is predominant. This physical behaviour of laminar flames is the first step to shed light on potential features of novel combustor systems, where NO x emissions are controlled by stabilising the flame against intensive strain. This paper is organised as follows. The flamelet equations along with the modelling choices performed for the 1D and 2D cases are introduced first, followed by an overview of the 1D and 2D numerical setups. Emission results are discussed next. Finally, relevant conclusions are drawn on the influence of strain on NO x emissions.
Model
Both one-dimensional and two-dimensional simulations are performed. For the two-dimensional simulations, the reactingFoam solver in OpenFOAM [24] is employed. The reacting Navier-Stokes equations [25] are solved for mass, momentum, absolute enthalpy and N species with detailed chemistry. The equation for the generic species k is
$$\frac{\partial (\rho Y_k)}{\partial t} + \frac{\partial}{\partial x_i}\left[\rho \left(u_i + V_{k,i}\right) Y_k\right] = W_k \, \dot{\omega}_k, \qquad (1)$$
where subscript i denotes direction i, W k is the molar mass, ρ the mixture density, V k,i the diffusion velocity vector, and $\dot{\omega}_k$ the molar rate of production of species k. A low Mach number approximation is used in this study. Radiation, body forces, and viscous dissipation effects are neglected. The Dufour effect on the heat flux is also neglected. The ideal gas law and the caloric equation of state are used as the thermodynamic model, where the species heat capacities are obtained using the JANAF polynomials. Only laminar conditions are considered in this study, while the effect of the turbulence-chemistry interaction will be investigated in a future study. Detailed kinetic data of the reactions from the GRI3.0 mechanism [26] are used to obtain the consumption and production rates of species. In order to speed up the simulations, the TDACChemistryModel is adopted, consisting of the combination of the in situ adaptive tabulation (ISAT) algorithm with the dynamic adaptive chemistry (DAC) reduction scheme [27]. A mixture-averaged diffusion model is used [25] to account for the low Lewis number of the hydrogen fuel as follows:
$$Y_k V_{k,i} = -D_k^{\rm mix}\,\frac{\partial Y_k}{\partial x_i}, \qquad (2a)$$
$$D_k^{\rm mix} = \frac{1 - Y_k}{\sum_{l \neq k} X_l / D_{kl}}, \qquad (2b)$$
where the validity of the approximation performed in Eq. (2a) is verified in post-processing. The binary diffusion coefficients D kl for the species involved in the reactions are found with the Chapman-Enskog correlation [28,29]. One-dimensional simulations are run with CHEM1D [30]. In one dimension and for a flat reactants-to-products counter-flow flame, the set of conservation equations solved in CHEM1D is re-arranged as follows [31]:
$$\frac{\partial (\rho u)}{\partial x} + \rho K = 0, \qquad (3a)$$
$$\frac{\partial (\rho u Y_k)}{\partial x} + \frac{\partial (\rho V_k Y_k)}{\partial x} + \rho K Y_k = W_k \, \dot{\omega}_k, \qquad (3b)$$
$$\frac{\partial (\rho u h)}{\partial x} + \frac{\partial q}{\partial x} + \rho K h = 0, \qquad (3c)$$
$$\frac{\partial (\rho u K)}{\partial x} + \rho K^2 = \rho_p a^2 + \frac{\partial}{\partial x}\!\left(\mu \frac{\partial K}{\partial x}\right), \qquad (3d)$$
where the density of the products mixture ρ p , the applied strain rate a, the local stretch rate K, and the heat flux q, along with Newton's law for viscous stresses, have been introduced. The applied strain rate is a setup parameter and is defined as the velocity gradient at the products boundary:
$$a = -\left.\frac{\partial u}{\partial x}\right|_{x \to \infty}. \qquad (4)$$
The influence of the y-component of the flow on the transport of scalars is taken into account with the introduction of the local stretch rate [31]:
$$K = \frac{\partial v}{\partial y}. \qquad (5)$$
As a consequence of the continuity equation, the relation between the two parameters reads K(x → ∞) = a. Similarly to the two-dimensional simulations, GRI3.0 is used as the chemical mechanism, along with a mixture-averaged diffusion model (Eq. (2)), while the binary diffusion coefficients in CHEM1D are computed using molecular potentials and are tabulated as a function of temperature in polynomial form [32].
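To make the diffusion modelling above concrete, the following is a minimal Python sketch of how a binary diffusion coefficient of Chapman-Enskog type and a mixture-averaged coefficient of the Eq. (2b) form could be evaluated. It is an illustration only: the constant and units follow the common textbook form of the correlation, the collision integral is supplied by the caller, and none of the function names correspond to routines in OpenFOAM or CHEM1D, which compute these quantities internally from the mechanism's transport data.

```python
import numpy as np

def binary_diffusivity(T, p_atm, M_k, M_l, sigma_kl, omega_d):
    """Chapman-Enskog-type binary diffusion coefficient [cm^2/s].
    T in K, p_atm in atm, molar masses in g/mol, sigma_kl in Angstrom,
    omega_d is the (dimensionless) collision integral supplied by the caller."""
    return 1.858e-3 * T**1.5 * np.sqrt(1.0 / M_k + 1.0 / M_l) \
        / (p_atm * sigma_kl**2 * omega_d)

def mixture_averaged_diffusivity(k, X, Y, D_bin):
    """Mixture-averaged diffusion coefficient of species k (Eq. (2b)-type
    combination rule) from mole fractions X, mass fractions Y and the
    binary-diffusivity matrix D_bin."""
    denom = sum(X[l] / D_bin[k, l] for l in range(len(X)) if l != k)
    return (1.0 - Y[k]) / denom
```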
Computational setup
The combustion of hydrogen is evaluated at atmospheric conditions. Temperature and species boundary conditions at the reactants and products boundary are prescribed as follows for both 1D and 2D cases. An equivalence ratio of φ = 0.7 is imposed for both streams. The reactants temperature is T r = 300 K, while the products temperature is set to the adiabatic flame temperature of an unstrained hydrogen flamelet with the same equivalence ratio computed with CHEM1D, T p = 2021 K. Mass fractions at the products boundary are imposed from complete combustion. A summary of the temperature and species boundary conditions for both 1D and 2D simulations are reported in Table 1.
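As a small worked example of how the products-side composition in Table 1 can be derived, the sketch below computes the complete-combustion product mass fractions for a φ = 0.7 hydrogen-air mixture. Air is assumed to be 21% O2 and 79% N2 by mole and the molar masses are rounded, so the numbers are indicative rather than taken from the paper.

```python
# Hypothetical helper: products of complete H2-air combustion at a given
# equivalence ratio (air assumed to be 21% O2 / 79% N2 by mole).
W = {"H2O": 18.015, "O2": 31.999, "N2": 28.013}  # g/mol

def complete_combustion_products(phi):
    n_O2 = 1.0                      # basis: 1 mol O2 in the reactants
    n_N2 = n_O2 * 79.0 / 21.0
    n_H2 = 2.0 * phi * n_O2         # stoichiometric H2:O2 = 2:1
    n = {"H2O": n_H2,               # all H2 burns (lean, phi < 1)
         "O2": n_O2 - 0.5 * n_H2,   # excess oxygen
         "N2": n_N2}
    m = {k: v * W[k] for k, v in n.items()}
    m_tot = sum(m.values())
    return {k: v / m_tot for k, v in m.items()}   # mass fractions

print(complete_combustion_products(0.7))
# roughly Y_H2O ~ 0.18, Y_O2 ~ 0.07, Y_N2 ~ 0.75
```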
1D setup
For the one-dimensional simulations, a wide computational domain of L 1D = 20 cm is chosen. CHEM1D uses an adaptive mesh algorithm, thus ensuring proper mesh refinement in the flame region. The adaptive grid comprises 200 points in total. An exponential differential scheme is used for the spatial discretization, and a second-order time integration of the differential equations is performed by the stationary solver. The time step is adjusted automatically by the numerical tool to achieve convergence, and ranges between 10 −6 s and 10 −8 s. The only input parameter for the velocity field is the applied strain rate a, defined in Eq. (4). According to the applied strain rate definition, the higher a is, the higher the boundary velocities and the stretch rate experienced by the flame. A broad range of applied strain rates is investigated in the 1D simulations, from 100 s −1 up to 10000 s −1 , in order to eventually observe the occurrence of flame extinction. The simulations' wall-clock time ranges from a few minutes for very high strain rates up to a few hours for lower strain rates.
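The sketch below illustrates how the applied strain rate of Eq. (4) could be recovered a posteriori from a 1D solution, for instance to verify that the prescribed value is reproduced at the products boundary. The array names are hypothetical placeholders for data read from a CHEM1D output file.

```python
import numpy as np

def applied_strain_rate(x, u, n_edge=5):
    """Estimate a = |du/dx| at the products-side boundary from a linear fit
    over the last n_edge grid points of the 1D axial velocity profile.
    x [m] must be ascending; u [m/s] is the axial velocity."""
    slope, _ = np.polyfit(x[-n_edge:], u[-n_edge:], 1)
    return abs(slope)  # [1/s]
```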
2D setup
The PISO time integration solver [33], with an implicit second-order Euler-backward discretization scheme, is used for the 2D simulations. Similarly, a second-order central scheme is used for the convective term of all resolved quantities. The required mesh resolution in the flame region is estimated from preliminary coarse 2D simulations and 1D simulations, and is then double-checked in the refined simulation solution a posteriori. After the refinement, the minimum spacing results in ∆x min = 0.2/2^3 = 0.025 mm, which ensures that at least 12 cells are present within the flame thickness. The number of cells enclosed within the flame thickness is similar to that found in previous DNS studies on reacting flows [34-36]. In contrast to CHEM1D, where the applied strain rate a was the only input parameter for the velocity field, a uniform velocity at the two inlets is prescribed in OpenFOAM's setup. Given the extended-domain one-dimensional simulation at a given applied strain rate, the horizontal velocities at the reactants and products inlets are prescribed from the 1D solution; at the products inlet, (U p,2D) a=const = (U 1D (x = 1 cm)) a=const. (6b) At the upper and lower outlets, a zero-gradient boundary condition is prescribed. The discrepancy in the definition of the inlet boundary conditions precludes a direct comparison of 1D and 2D simulations at a given applied strain rate, because the resulting 2D horizontal velocity profile does not perfectly overlap with the one in the 1D solution. However, as further discussed in the Results section, the purpose of the two-dimensional simulations is solely to show that the same NO x emission trends with strain are observed with respect to the 1D simulations. Therefore, this setup discrepancy is accepted for the objectives of the present study. Furthermore, only two high applied strain rate configurations are investigated in the 2D case, a = 2000 s −1 and a = 5000 s −1.
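As a concrete illustration of how the 2D inlet condition in Eq. (6b) can be extracted from an extended-domain 1D solution, the short Python sketch below samples a 1D axial velocity profile at x = 1 cm. The file name, the two-column layout, and the reactants-side sampling point at x = −1 cm are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical CHEM1D-style output: two columns, x [m] and axial velocity U [m/s]
# (assumed file name and format; x_1d is assumed to be sorted in ascending order)
x_1d, U_1d = np.loadtxt("chem1d_profile_a2000.txt", unpack=True)

def inlet_velocity(x_sample_m):
    """Sample the 1D axial velocity at a given x to use as a uniform 2D inlet value."""
    return float(np.interp(x_sample_m, x_1d, U_1d))

U_products_inlet = inlet_velocity(+0.01)   # Eq. (6b): products side, x = 1 cm
U_reactants_inlet = inlet_velocity(-0.01)  # assumed symmetric sampling on the reactants side
print(U_reactants_inlet, U_products_inlet)
```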
Results
NO x emissions trends
NO x generally refers to nitrogen oxides, including both NO and NO 2 . However, the NO 2 mass fraction peaks are shown to be at least three orders of magnitude smaller than those of NO for any setup investigated. Therefore, similarly to previous studies [13], NO emissions alone can be considered representative of the overall NO x emissions and are thus examined for the purposes of the present study. The behaviour of Y NO in the longitudinal direction at different applied strain rates, predicted by CHEM1D in the proximity of the stagnation plane, is shown in Figure 3. It is immediately visible that both the peak and the area under the curve decrease consistently as the applied strain is increased. In addition, no extinction is observed at the highest strain rates achieved. As one could expect considering previous studies involving hydrogen-enriched fuels [3,4], this evidence confirms that pure hydrogen flames can potentially sustain very high strain levels. In Figure 4a the decrease of the peaks of Y NO in the one-dimensional simulations and along the centre line of the two-dimensional simulations is further highlighted. The discrepancies in the exact values of the Y NO peaks at the same applied strain can be attributed to the different definition of the inlet boundary conditions discussed in the 2D setup section. Nevertheless, the graph shows that the two-dimensional simulations already confirm the trend of decreasing NO x emissions with increasing strain rate. Similar considerations hold for the trend of the NO flux reported on the left y-axis of Figure 4b. One might object that NO is a slow species which would form downstream of the flame in the hot products region, and that this formation on the products side is prevented by the right boundary condition, where Y NO (x = L/2) = 0 is imposed. In fact, in the present configuration Y NO peaks at the stagnation plane and then suddenly drops to fulfil the boundary condition, as clearly visible in Figure 3. Nevertheless, the investigation of the 2D simulations, where NO is free to form in the y-direction (tangential to the flame in Figure 1), suggests this not to be the case, i.e. NO is suppressed at high strain by some other means. Hence, the focus is shifted to the understanding of the emission phenomena in the flame-tangential direction of the two-dimensional simulations. One can compute the two-dimensional NO flux for the two setups investigated by integrating over the 2D-domain surface; this flux is reported on the right y-axis in Figure 4b. It can be immediately observed that the same decreasing NO flux trend holds for both the 1D and 2D cases. Furthermore, from the analysis of the 2D data, it is found that the 1D flux of NO calculated across the centre line, multiplied by the vertical dimension, is approximately equal to the 2D flux. These considerations raise confidence that the decrease of NO emissions observed in the 1D simulations and on the 2D domain centreline as strain is increased is not compensated by any additional NO formation in the vertical direction. This evidence can be further supported by following the same streamline in the two different strain rate cases investigated with OpenFOAM. The streamline investigated originates at y = −0.2 mm, such that it crosses the flame, travels in the products, and exits the domain without intersecting the stagnation plane. As shown in Figure 5b, the particle residence time inside the a = 5000 s −1 domain is estimated at around ∆t = 1.23 ms.
The path that the particle originating in the same position has travelled in the same time interval in the a = 2000 s −1 domain is reported in Figure 5a. Hence, a domain cut is taken at the y-location the particle has reached at ∆t in the two cases (green line in Figure 5), and the NO fluxes computed along the line cuts are reported in Table 2. Not only do the data show a still lower NO flux at higher strain rates, but it can also be observed that the values are very close to the ones at the centre line (reported in the graph in Figure 4b). Therefore, it is further confirmed that there is no additional NO formation in the vertical direction at higher strain rates. It can be concluded that, at least within the studied framework, the observed decrease of NO emissions with increasing strain is not a consequence of the specific counter-flow configuration chosen and of the boundary-preconditioned numerical setup, but is instead a direct effect of the increased tangential velocity gradients on the hydrogen flame combustion and dissociation reactions.
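For readers who wish to reproduce this kind of post-processing, the Python sketch below evaluates an NO flux along a constant-y line cut of a 2D solution. The flux definition adopted here, the convective flux ∫ ρ u_y Y_NO dx along the cut, is an assumption made for illustration and may differ from the exact expression used in the study; the field arrays are placeholders standing in for data sampled from the OpenFOAM solution.

```python
import numpy as np

def no_flux_along_cut(x, rho, u_y, Y_NO):
    """Convective NO flux across a constant-y line cut (assumed definition).

    x    : (N,) x-coordinates along the cut [m]
    rho  : (N,) density sampled on the cut [kg/m^3]
    u_y  : (N,) velocity component normal to the cut [m/s]
    Y_NO : (N,) NO mass fraction on the cut [-]
    Returns the flux per unit depth [kg/(m s)].
    """
    return np.trapz(rho * u_y * Y_NO, x)

# Placeholder data standing in for fields sampled from the 2D solution
x = np.linspace(-0.01, 0.01, 401)
rho = np.full_like(x, 0.17)              # hot-products density (illustrative)
u_y = np.full_like(x, 5.0)               # flame-tangential velocity (illustrative)
Y_NO = 1e-5 * np.exp(-(x / 2e-3) ** 2)   # NO peaked near the stagnation plane
print(no_flux_along_cut(x, rho, u_y, Y_NO))
```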
NO formation pathways
The focus is now shifted to the evaluation of the NO formation pathways, with the goal of further understanding the physical reasons behind the observed NO decrease with strain. As carbon species are not involved in hydrogen combustion, the prompt NO x formation pathway is not considered, and the only three pathways evaluated are thermal NO, NNH-NO, and N 2 O-NO. CHEM1D solution data are used for the present investigation of the NO formation routes, since the trends of NO emissions with strain predicted by one-dimensional and two-dimensional simulations have been shown to compare with good approximation (see Figure 4b). Two high applied strain rate setups, a = 2000 s −1 and a = 5000 s −1, are analysed. For a sample reaction r involved in a pathway, ν A A + ν B B ↔ ν C C + ν D D, the forward rate coefficient is obtained with the Arrhenius law, k f,r = A r T^β_r exp(−E a,r /(R T)), and the reverse rate coefficient can then be found from the equilibrium constant [25], where A r , β r and E a,r are found in the GRI3.0 mechanism reaction web page [26], and ∆S 0 r and ∆H 0 r are respectively the reaction entropy and enthalpy, which can be evaluated as functions of temperature with the JANAF polynomials. Thus, the forward and reverse reaction rates are found as functions of T(x) by introducing respectively the reactants and products concentrations. Three-body and pressure-dependent reactions have been treated with dedicated formulas [37]. Hence, following the formulation of Ning et al. [13], the overall reaction rate (ORR) of reaction r is obtained by integrating the net reaction rate across the longitudinal (flame-normal) direction. Summing the contributions of the single-reaction ORRs along a specific route, a chart diagram showing the NO formation pathways at a = 2000 s −1 and a = 5000 s −1 is obtained and is reported in Figure 6a. While all the reaction rates involved in the pathways show a decrease at higher strain, it can already be observed that this decrease is remarkably more pronounced for the reactions involved in the thermal NO pathway. Hence, the ORRs of the reactions directly producing NO in each pathway are summed and the bar chart in Figure 6b is obtained. The histogram shows that the main NO production pathways involved in lean hydrogen flames at high strain rates are both the thermal and the NNH ones. Considering syngas fuel mixtures and a twin counter-flow configuration, Ning et al. [13] have shown with a similar analysis that the NNH pathway was by itself predominant, even with a high hydrogen content. The reason for the discrepancy with the present study is probably related to the lower equivalence ratio of their investigation (φ = 0.5), which may decrease the maximum flame temperature and thus the weight of the thermal NO pathway on the total NO formation process. In addition, they conclude that with a high hydrogen content in the fuel and high strain rates the NO production is exacerbated. Once again, the fuel mixture and the equivalence ratio, along with the maximum strain rates achieved and the counter-flow configuration chosen, do not allow for a direct comparison even in the trends with the present study, which instead investigates a completely novel setup with 100% hydrogen fuel and very high strain rates. Figure 6b confirms that the main decrease of NO production at higher strain rates is associated with the thermal pathway. Speth and Ghoniem [9] have shown that for syngas with a high content of H 2 , the flame temperature has an increasing trend with strain within their limited strain range, i.e. up to a = 500 s −1.
Furthermore, extending the strain range would probably result in a non-monotonic function, as their study shows for fuel mixtures with a lower hydrogen content. NO x emissions could then be expected to directly follow the flame temperature. In contrast, the present study shows that NO x emissions decrease monotonically over a very broad range of strain rates, thus indicating that the emissions do not directly follow the temperature. Hence, the observed decreasing NO trend may not be attributable solely to a change in the flame temperature. This consideration suggests that a more extensive analysis should be performed on the local temperature and on the concentrations of the radicals involved in the pathway reactions, in order to further shed light on the physical explanation behind the phenomenon of decreasing NO x with strain in lean premixed hydrogen flames.
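To make the pathway bookkeeping described above concrete, the sketch below computes forward and reverse rate coefficients for a single elementary reaction and integrates the net rate over the flame-normal coordinate to obtain an ORR-like quantity. The Arrhenius parameters and thermodynamic terms are illustrative placeholders (not taken from GRI3.0 or the JANAF tables), unit stoichiometric coefficients are assumed, and third-body and pressure-dependent corrections are omitted.

```python
import numpy as np

R = 8.314462618   # J/(mol K)
P_ATM = 101325.0  # Pa

def forward_rate_coeff(T, A, beta, Ea):
    """Arrhenius law: k_f = A * T**beta * exp(-Ea / (R*T))."""
    return A * T**beta * np.exp(-Ea / (R * T))

def reverse_rate_coeff(k_f, T, dS0, dH0, dnu):
    """Reverse coefficient from the equilibrium constant.

    Kp = exp(dS0/R - dH0/(R*T)); Kc = Kp * (P_ATM/(R*T))**dnu; k_r = k_f / Kc.
    dnu is the change in moles (products minus reactants)."""
    Kp = np.exp(dS0 / R - dH0 / (R * T))
    Kc = Kp * (P_ATM / (R * T)) ** dnu
    return k_f / Kc

def overall_reaction_rate(x, T, conc_reac, conc_prod, A, beta, Ea, dS0, dH0, dnu):
    """Integrate the net rate q_r(x) = k_f*prod(reactants) - k_r*prod(products)
    over the flame-normal coordinate x, assuming unit stoichiometric coefficients."""
    k_f = forward_rate_coeff(T, A, beta, Ea)
    k_r = reverse_rate_coeff(k_f, T, dS0, dH0, dnu)
    q = k_f * np.prod(conc_reac, axis=0) - k_r * np.prod(conc_prod, axis=0)
    return np.trapz(q, x)

# Illustrative usage with placeholder profiles and parameters
x = np.linspace(-0.005, 0.005, 201)
T = 300 + 1721 * 0.5 * (1 + np.tanh(x / 5e-4))       # illustrative temperature profile
cA = np.full_like(x, 1.0); cB = np.full_like(x, 2.0)  # placeholder concentrations [mol/m^3]
cC = np.full_like(x, 0.5); cD = np.full_like(x, 0.5)
print(overall_reaction_rate(x, T, [cA, cB], [cC, cD],
                            A=1e7, beta=0.0, Ea=3.0e4, dS0=10.0, dH0=-5.0e4, dnu=0))
```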
Conclusion
Detailed-chemistry one-dimensional and two-dimensional DNS analyses are conducted on pure hydrogen, lean premixed, strained flamelets in a counter-flow reactants-to-products configuration. A broad range of strain rates is investigated, up to very high levels which have rarely been considered before in the literature and which hydrogen is shown to be able to sustain without experiencing extinction. Both 1D and 2D simulations show that NO x emissions have a decreasing trend as the strain rate is increased. This is shown for the first time for pure hydrogen fuel and lean flamelets. The analysis of the 2D results also indicates that there is no compensating NO x formation in the second dimension, and thus that the observed trend is not a consequence of any possible setup preconditioning. Therefore, the greater tangential velocity gradients may have a direct effect on the flame, affecting combustion efficiency and consequently the rates of the reactions through which NO x is formed.
In-depth analyses of the NO x formation mechanisms and reaction rates further underline that the main contribution to the decreasing trend arises from the thermal NO x pathway. Future work will focus on a more extensive study of the local temperature and radical concentrations for the setup investigated, and at further conditions, in order to achieve a deeper understanding of the physics behind the phenomenon. The trend of the consumption speed with strain and the role of differential diffusion effects are also hydrogen-peculiar features that may play a role and that should be further explored.
If the emission trends with strain are confirmed in turbulent conditions, the low NO x outcomes of lean premixed and highly strained hydrogen flames can be exploited in future technological applications of hydrogen combustion systems. | 2022-07-11T01:15:51.304Z | 2022-07-08T00:00:00.000 | {
"year": 2022,
"sha1": "01659cfbe6148669e537180f07c21c1c889cd674",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "01659cfbe6148669e537180f07c21c1c889cd674",
"s2fieldsofstudy": [
"Engineering",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
262196199 | pes2o/s2orc | v3-fos-license | Commuting to College: An Analysis of a Suburban Campus on the Outskirts of Madrid
. This paper aims to analyse human mobility in a university campus on the outskirts of the Madrid region. Several surveys which were distributed to students for completion during the 2017-2018, 2018-2019, and 2021-2022 courses were examined. Both an exploration of existing transport modes using clustering techniques and a statistical analysis of trip origins, travel times, and distances were performed. Not all municipalities with the highest number of trips were the closest to the university. The clustering analysis identified a lower variability in the use ratio of the transport modes in the 2017-2018 course. The private car, which exhibited a low sharing rate, was the most utilised transport mode. This was followed by public and university transportation. Similarities between the probability distributions of journeys using public and university transport were found. High and moderate correlations between the number of existing stops and the number of trips by subway and urban bus were detected. The lowest median values of travel distances corresponded to students; administrative staff, teachers, and researchers exhibited very similar values. Considering the three analysed academic years as a whole, the most likely travel times were 30-60 minutes. It was detected that a higher gross annual income did not imply higher private car use. Residents in areas with the highest ozone concentrations also exhibited a high use of motorised vehicles. A low familiarisation with car-sharing and car-pooling platforms was also found. Globally, a high level of comfort during the trip was mostly perceived.
Introduction
1.1. Importance of Human Mobility in Urban Spaces. Both human and goods mobility are key elements of urban development, determining urban spaces and the way in which they function. The private car is one of the most widely used modes of transportation for human displacements European Commission [1], Eurostat [2]. In Spain, according to Ministerio de Fomento [3], the functional dependence of peripheral areas on the city centre has been reduced by the increase in facilities. At the same time, infrastructures and services have increased urban development, resulting in urban sprawl. This has led to (i) an increase in travel distances, (ii) an increase in motorised mobility, (iii) a dispersed mobility demand, which is difficult to satisfy through public transport, and (iv) a rise in congestion, travel times, and environmental pollution.
In 2018, the Regional Transport Consortium of the Madrid community conducted a survey on mobility E. D. M. [4] which was based on telephone and face-to-face interviews. The target population was individuals over 3 years of age residing in the region. 75,208 persons provided data on their mobility, allowing a total of 222,744 trips to be analysed. According to the survey, 27.3% of trips carried out on working days were work-related, 15.7% were study-related, and those caused by both shopping and personal matters accounted for 12% each. 29% of journeys were motivated by other reasons (walking, sport, accompanying another person, leisure, and medical issues). Regarding the distribution of transportation modes, the most utilised was the private car (39%), followed by walking (34%), public transportation (24.3%), and others (2.7%).
In accordance with the abovementioned, educational services are among those services that produce the greatest number of commuters in the region, which is generally the case in cities Cadima et al. [5]. University students frequently use more diverse modes of transportation than other citizens Paéz and Whalen, Zhou, Whalen et al. As a result, some cities have adopted transport plans to improve the overall quality of mobility around university campuses Sgarra et al. [10].
At present, the urban transport on offer is undergoing several transformations that will have an impact on the achievement of a more sustainable mobility. Information and Communication Technology helps the implementation of new modes of mobility through so-called "smart mobility," which is expected to facilitate multimodality and increase the use of public transport, as well as improve accessibility in low-demand areas Momentum [11]. The emergence of new mobility paradigms, such as mobility as a service, will also have implications for road capacity, traffic evolution, land utilisation, and urban design Wong et al. [12].
1.2. Background. Because university students generate a large number of commutes, both the traits of and the factors influencing their mobility have been extensively analysed. Nash and Mitra [13] collected data concerning several universities in Toronto to check patterns in the transportation behaviour of post-secondary students. Two-thirds of the students predominantly travelled on foot, by bicycle, or carried out transitions between transport modes. Obregón Biosca [14] explored the commuting characteristics of students at the Universidad Autónoma de Querétaro, pointing out that the transport choice is mainly related to the trip distance and the individuals' socioeconomic features. Maia et al. [15] suggested that public transport prevails among low-income users in certain locations. Students who can afford higher costs tend to prefer a private vehicle, which is seen as safer and more comfortable. This is in accordance with Balcombe et al. [16], which mentioned that fares, seven categories of attributes that are decisive for service quality, income, and car ownership impact the use of public transport. Liu et al. [17] found that student transit riders lived in larger households (with more vehicles per resident). Travel time and vehicle ownership per household member were demonstrated to have a very low influence. Obregón Biosca [14], in line with Nash and Mitra [13], stated that the main cause for using the private car among university students was the lack of alternatives to reach the destination by public transport.
The influence of gender variables on the use of public transport has also been analysed Pérez, Nayum and Nordfjaern [18, 19]. Male students seem to show a lower intention to use public transport than female students, as a consequence of the fact that both sexes differ in their orientation towards proenvironmental behaviours Nayum and Nordfjaern, Finisterra do Paço et al., Torgler [19-21].
Sustainability in universities has also been investigated by Kaplan [22], Balsas [23], Daggett and Gutkowski [24], Gurrutxaga et al. [25], and Cattaneo et al. [26] to obtain reports that provide an overview of students' commutes in order to achieve reductions in pollution. In particular, the Conference of Rectors of Spanish Universities (CRUE) has, among its aims, evaluated the implementation of good practices in both sustainable mobility and accessibility in Spanish universities, promoting the development of mobility plans Gutíerrez and Jaraíz [27]. Balsero et al. [28], based on the aforementioned mobility survey carried out by the Regional Transport Consortium of the Madrid Community E. D. M. [4], found that public transport was the majority mode in six Spanish public universities, with percentages in the range 52%-65%.
Miralles-Guasch and Domene [29] detected that the most important restrictions for changing the travel mode from the private car to nonmotorised/public transportation were as follows: (i) lack of appropriate infrastructure, (ii) low level of walking and cycling as a priority mode of travel among the population, and (iii) longer travel times when public transport is utilised. The goals to be achieved are derived from the answers to the questions previously exposed.
Motivation and Objectives of the Study
As a novelty with respect to the research described in Section 1.2, various different statistical techniques as well as clustering analysis were used (see Section 2.2). The study was carried out for all profiles of individuals attending campus, as well as for all types of academic studies being pursued. In particular, the following features were examined: (i) Main travel characteristics: most frequent travel origins, used transport modes (various aspects related to sustainability were considered), travel times, trip-related distances, and also trip comfort. (ii) Supplemental travel information: transportation sharing as well as utilisation of car-sharing and car-pooling platforms.
Specifically, as a novelty with respect to Balsero et al. [28], in addition to the abovementioned, the human mobility examination was conducted on a private university campus. Balsero et al. [28], based on E. D. M. [4], calculated, for each profile of individual attending six public universities in 2018, the percentage of used transport modes and compared the universities with each other. In order to evaluate the impact caused by the COVID pandemic, the information was contrasted with the data obtained from a survey carried out in 2021. The authors also examined information such as the number of trips, the trip times, and schedules.
Sample Selection: Design of Surveys-Collected Data.
In this research, the target for the surveys was students, teachers, researchers, and people working in administrative services. Undergraduate and graduate level students as well as those taking professional training courses were considered. All surveys were sent to participants through the academic services of the university. For the 2017-2018 and 2018-2019 courses, paper surveys were used, while for the 2021-2022 questionnaire, a Microsoft Form was utilised.
The procedure used for the estimation of the ideal sample selection and the obtained results have been included in the supplementary material document (Section S1.1, Table S1).
The information collected through the surveys was as follows.
(i) Respondent's profile, which includes data concerning the role he/she plays at the university (researchers and teachers, administrative staff, or students) and the studies pursued (in the case of the student role). Because of the abovementioned, in order to be able to compare the surveys with each other, a data migration procedure was performed.
All used questions in this work can be found in the supplementary material document (Section S1.2 and Section S1.3).
Overview of Used Methods.
In this section, all utilised procedures in this research are explained. The software resources that were required for the implementation of the methods have been briefly described in the supplementary material document (see Section S2).
Analysis of Principal Travel Attributes.
In this section, the methods used for the study of the main trip characteristics are explained (see Section 1.3).
(1) Study of Journey Origins. Most of the people working or studying at the university lived in the Madrid community, which consists of 179 municipalities. One of them is Madrid city, which is administratively divided into 21 districts. There are 135 postcodes in the Madrid community. For the Madrid community, bar plots showing the number of individuals coming from the 10 most frequent municipalities and postcodes were calculated. We also plotted a histogram representing the number of commuters by postcode. In addition, a map showing the number of individuals by postcode going to the university was displayed.
(2) Study of Used Transport Modes. For the 2017-2018 and 2018-2019 academic courses, each individual could be characterised by a vector v_indj which represented the ratio of the use of each transport mode over the total number of utilised modes for one individual j. v_indj is a vector of six elements, where each element (i) symbolises one of the possible transport types: i = 1: public transport, i = 2: university transport, i = 3: private car, i = 4: private motorcycle, i = 5: bike, and i = 6: walking.
where j can vary from 1 to NR, with NR symbolising the number of respondents. For instance, an individual (indj) who uses public transport and the private car would be represented by a vector of the form v_indj = (0.5, 0, 0.5, 0, 0, 0). For the survey referring to the 2021-2022 course, the usage ratio for each individual was estimated, considering the transport type used in each trip step.
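A minimal Python sketch of how such usage-ratio vectors could be built from a respondent's list of declared transport modes is shown below; the mode labels and the sample respondent are illustrative and are not taken from the surveys.

```python
import numpy as np

MODES = ["public", "university", "car", "motorcycle", "bike", "walking"]

def usage_ratio_vector(declared_modes):
    """Return the 6-element usage-ratio vector v_indj for one respondent.

    declared_modes: list of mode labels used by the respondent.
    Each used mode receives an equal share of 1/len(declared_modes)."""
    v = np.zeros(len(MODES))
    for m in declared_modes:
        v[MODES.index(m)] += 1.0 / len(declared_modes)
    return v

# Illustrative respondent using public transport and the private car
print(usage_ratio_vector(["public", "car"]))  # -> [0.5 0.  0.5 0.  0.  0. ]
```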
In order to study the variables that describe the usage ratios for each transport type, V_i, ∀i = 1...6, their cumulative probability distributions were computed.
The cumulative distribution function of a random variable X is symbolised by F(x) and is stated as F(x) = Pr(X ≤ x). For a discrete random variable X with N possible outcomes, it is defined as F(x_n) = Pr(X ≤ x_n) = Σ_{i=1}^{n} Pr(X = x_i), n ≤ N. With the aim of comparing the cumulative probability distributions of the variables V_i, ∀i = 1...6, the Kolmogorov-Smirnov test Berrendero [30] was applied Massa [31]. The following hypotheses, in conjunction with a significance level (α) equal to 0.05, were considered.
(i) Null hypothesis (H_0): "the samples came from the same distribution" (ii) Alternative hypothesis (H_a): "the samples came from different distributions" If a p value < 0.05 was obtained in the test, H_0 would be rejected and H_a would be accepted.
The Kolmogorov-Smirnov test allowed us to know if the transport usage ratios exhibited similar distributions. The analysis was carried out both at the overall university level and by level of studies.
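A minimal Python sketch of this two-sample comparison, using scipy's Kolmogorov-Smirnov test on two usage-ratio samples, is given below; the samples are synthetic placeholders rather than survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder usage-ratio samples for two transport modes (values in [0, 1])
public_ratios = rng.beta(2, 5, size=300)
university_ratios = rng.beta(2, 5, size=300)

stat, p_value = stats.ks_2samp(public_ratios, university_ratios)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0, the distributions differ")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0 (similar distributions)")
```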
In addition to all the abovementioned, a clustering analysis of transport usage ratios was also carried out. This is a multivariate technique Cisewski, Rodríguez et al. [32, 33] which assigns observations to groups (called clusters) so that in each cluster the observations, which are characterised by certain attributes, are similar to each other, that is, analogous objects are placed in the same group. However, all groups differ from each other.
In order to detect the existing clusters in the transport usage ratios, the K-Means method Ullman et al. [34] was utilised, considering the obtained v_indj for all respondents of the surveys. Various indexes for determining the optimal number of clusters were also calculated.
A set of data points D = {V_ind1, V_ind2, ..., V_indNR} was considered, in which V_indj = (v_indj1, v_indj2, ..., v_indjr) symbolised a vector in R^r, with r representing the number of dimensions (r = 6). The k-Means method groups the data under analysis into k clusters, as follows Ullman et al. [34].
(i) Each cluster possesses a centroid. (ii) Given k, the k-Means algorithm performs the following operations: (1) Randomly select k data points (which are the initial seeds), representing the initial centroids. (2) Using the Euclidean distance as a metric, associate each data point with the nearest centroid. (3) Compute the centroids again utilising all data points present in the current cluster. (4) If a convergence criterion is not met, steps 2 and 3 must be repeated. The optimal number of clusters was determined from the estimates of 26 indexes (see Table S20, supplementary material document) and was considered to be the value on which the highest number of indexes coincided ("majority rule").
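The following Python sketch illustrates this clustering step with scikit-learn. As a simple stand-in for the 26-index "majority rule", only three common internal indexes (silhouette, Calinski-Harabasz, and Davies-Bouldin) vote here, and the usage-ratio matrix is a synthetic placeholder.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

rng = np.random.default_rng(1)
# Placeholder matrix of usage-ratio vectors (each row sums to 1 over the six modes)
X = rng.dirichlet(alpha=[1.5, 1.0, 2.0, 0.3, 0.3, 0.5], size=400)

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = (silhouette_score(X, labels),
                 calinski_harabasz_score(X, labels),
                 davies_bouldin_score(X, labels))

# Each index "votes" for the k it ranks best (Davies-Bouldin: lower is better)
votes = [max(scores, key=lambda k: scores[k][0]),
         max(scores, key=lambda k: scores[k][1]),
         min(scores, key=lambda k: scores[k][2])]
best_k = Counter(votes).most_common(1)[0][0]
print("votes:", votes, "-> chosen k:", best_k)

final_labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
```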
In addition to all the above, for the Madrid community, where most of the people attending the university reside, we analysed whether there was a relationship between the influx at the stops and the mode of transport used on each trip. Both the postcode, which represents the administrative region where people reside, and the geolocation of the public transport stops were known. The former could be obtained from the survey results, and the latter could be easily extracted from the data provided by the Madrid Regional Transport Consortium CRTM [35]. In view of the previously mentioned, in each postcode it was possible to compute the number of stops as well as the number of individuals that use each transport mode. Consequently, we were able to estimate whether there was a relationship between these two quantities at each step of the journey.
To compute the correlation between the aforementioned magnitudes, it was necessary to know whether they were derived from a normal distribution. If there was normality in the variables, Pearson's method should be applied; otherwise, Spearman's algorithm should be used. The normality of the distributions was tested using the Shapiro-Wilk test.
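A minimal Python sketch of this decision rule, choosing between Pearson's and Spearman's correlation after a Shapiro-Wilk normality check, is shown below with synthetic per-postcode counts standing in for the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder per-postcode data: number of subway stops and number of subway trips
n_stops = rng.poisson(3, size=135).astype(float)
n_trips = 2.5 * n_stops + rng.normal(0, 2, size=135)

def correlate(x, y, alpha=0.05):
    """Use Pearson if both samples pass Shapiro-Wilk normality, otherwise Spearman."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(x, y)
        method = "Pearson"
    else:
        r, p = stats.spearmanr(x, y)
        method = "Spearman"
    return method, r, p

print(correlate(n_stops, n_trips))
```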
In addition to the previously described analyses, the relationship between the type of transport used and variables such as the weighted gross annual income by postcode/municipality and air pollution was analysed.
The gross annual income data corresponding to 2020 were used, which were obtained from the National Institute of Statistics [36] and the Tax Agency [37].
It must be noted that Madrid city corresponds to the largest and most populous municipality (604.45 square kilometres and 3,280,782 inhabitants in 2022). For Madrid city, the average gross annual income by person and postcode was considered. For the rest of the municipalities, the average gross annual income by person and municipality was taken. Because information on gross income in 2020 was only available for 56 postcodes (see supplementary material document, Section S2), a common average value was contemplated for the remaining postcodes.
Regarding air pollution, information corresponding to each municipality was utilised. In particular, data related to the presence of ozone (the amounts of nitrogen monoxide, nitrogen dioxide, suspended particulates, and nitrogen oxides were irrelevant compared to that of ozone) during May 2021 were utilised. This information was provided by the Madrid community administration [38]. In order to analyse the concern of individuals who commute to the university about the pollution in their municipality, the presence of the aforementioned pollutants in the atmosphere was graphically related to the transportation mode used.
(1) Study of Trip Distances, Trip Times, and Trip Schedules.
With respect to the examination of the distance travelled by individuals from the origin to the university, the distance (d) travelled could be estimated from the approximate geolocation (longitude/latitude) corresponding to the place where each individual resides. The cumulative probability distributions of distances for students, researchers and teachers, as well as administrative staff were calculated. To summarise the central tendency and variability of each distribution, the statistical quartiles were also computed.
For the calculation of approximate distances from origin (ori) to university (des), the Haversine method Prasetya et al. [39] was applied, d = 2R arcsin(√(sin²((φ_des − φ_ori)/2) + cos φ_ori cos φ_des sin²((λ_des − λ_ori)/2))), where φ and λ denote latitude and longitude in radians and R symbolises the equatorial earth radius (6,378 kilometres).
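A short Python sketch of this distance estimation is given below; the sample origin and destination coordinates are approximate placeholders in the Madrid region, not the actual campus or respondent locations.

```python
import math

R_KM = 6378.0  # equatorial Earth radius used in the paper [km]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R_KM * math.asin(math.sqrt(a))

# Illustrative trip: central Madrid (origin) to a point on the outskirts (destination)
print(round(haversine_km(40.4168, -3.7038, 40.33, -3.87), 1), "km")
```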
The function that best fits each distribution was also identified.
Regarding the exploration of minimal distances to public transport stops according to the trip origin, information about the geolocation of all existing public transport stops in the Madrid community was used. For each individual's profile, the approximate lowest distance from the trip's origin to each transport node type was calculated. The cumulative probability distributions and statistical quartiles were also computed. Additionally, the function that provided the best fit to the aforementioned distributions was also identified.
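To illustrate how such a best-fitting function can be identified, the sketch below fits candidate distributions to a synthetic sample of distances with scipy and compares them by log-likelihood, with the Lomax (type 2 Pareto) family among the candidates. The Box-Cox power exponential alternative mentioned in the Results is available in R's gamlss rather than in scipy, so it is not shown; the data are placeholders, not the survey distances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
distances_km = rng.pareto(2.5, size=500) * 8.0 + 0.5  # placeholder travel distances [km]

candidates = {"lomax": stats.lomax, "gamma": stats.gamma, "lognorm": stats.lognorm}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(distances_km, floc=0)            # fix location at 0 for comparability
    loglik = np.sum(dist.logpdf(distances_km, *params))
    fits[name] = (loglik, params)

best = max(fits, key=lambda n: fits[n][0])
print("best-fitting candidate:", best, "log-likelihood:", round(fits[best][0], 1))
```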
Similarly to trip distances, travel times from origin to destination were statistically studied. For each academic course, the cumulative probability distributions of the journey times from origin to destination were compared through the Kolmogorov-Smirnov test. Additionally, histograms were drawn. The trip schedules to the university (percentage of individuals travelling by time range) were also examined.
(2) Study of Trip Comfort. The trip comfort was also examined, as well as the main reasons for dissatisfaction. The cumulative probability distribution of this variable was analysed.
Analysis of Supplemental Trip Attributes.
In this section, the methods used for the study of the supplementary trip characteristics are explained (see Section 1.3).
Both the private car occupancy and the usage of car-sharing/car-pooling platform variables were checked. Their histograms as well as their main statistical parameters (median, mode, and mean) were studied.
Principal Travel Attributes.
In this section, the results corresponding to the analysis of the main trip characteristics are presented (see Section 1.3).
Journey Origins.
Figure 1 depicts the number of individuals travelling to the university by postcode. The location of the university is marked in red. It can be seen that the municipalities with the most people travelling to the university are some of the closest, although not all of them.
Used Transport Modes.
For the 2017-2018, 2018-2019, and 2021-2022 academic courses, as well as for all individual profiles and all levels of study, a higher use of the private vehicle over public and university transport was identified (see Tables 1 and 2). An exception was found for the vocational training studies, in which the utilisation of public transport was more frequent.
In 2017-2018 and 2018-2019, per academic year, the percentages of trips by car were 51.23% and 49.96%. The proportions of trips by public transport were 45.69% and 44.35%. The percentages of journeys by university transport were 44.64% and 42.58%. With respect to 2021-2022, in the first trip stage, the most recurrently used transportation modes were, in order, car 46.03% and walking 24.43% and 17.45%. In the second step, the most widely used mode of transportation was public transport with 24.34% of trips.
For the 2021-2022 course, histograms of the mode of transport used both at each stage of the journey and by postcode (in stages 1 and 2) have been described in the supplementary material document (Figures S3-S5). Tables showing data on the percentage of use of transport modes by individual's profile (Tables S2-S10 in addition to Tables S11 and S12) have also been included.
Also, for the 2021-2022 course, regarding trip step 1, Figure 2 depicts, by postcode, the number of individuals that used each transport mode. In Figure 3, for trip step 2, similar information is displayed.
Regarding travel stages, 34.62% of persons performed two steps in their journey. 22.03% and 5.28% of the people carried out steps 3 and 4, respectively. Only 1.78% and 0.25% of the people made journeys with steps 5 and 6. It must be noted that various individuals did not consider walking to the car as the first step of the trip.
For all analysed courses, the percentage of use of each type of car motorisation, calculated over the total number of car trips, could be obtained (see Tables 3 and 4). As previously indicated (see Section 2.1), in the survey corresponding to 2021-2022, more motorisation typologies could be chosen by respondents.
Tables 3 and 4 depict a slight reduction in the percentage of diesel and gasoline cars as well as a small increase in the percentage of hybrid and electric cars during the analysed academic years.
Regarding the usage ratio of transport modes, in 2017-2018 and 2018-2019, in line with the results of the Kolmogorov-Smirnov test (p value > 0.05), similarities existed between the probability distributions of public and university transport, as well as between the displacements by motorcycle and bike. No similarities between the aforementioned distributions were detected in the 2021-2022 course (see Table 5).
In the analysis by educational level, for undergraduate and graduate students, similarity was detected between public and university transport in 2017-2018, as well as between bike, motorcycle, and walking in both 2017-2018 and 2018-2019. For undergraduate students, analogies were found between bike and motorcycle transportation in 2021-2022. Graduate students exhibited similarities between walking, bike, and motorcycle, as well as between public and university transport.
Tables displaying information on the transport utilisation ratios and the p values obtained in the Kolmogorov-Smirnov test by individual's profile can be found in the supplementary material document (see Tables S13-S18).
With respect to the clustering analysis, for the 2017-2018 course, according to "the majority rule," the optimal number of clusters was 2. This magnitude was 3 for the 2018-2019 and 2021-2022 courses. For the 2017-2018 and 2018-2019 courses, in cluster 1, the highest means, medians, and modes corresponded to public (0.42, 0.56, and 0.42) and university transport (0.42, 0.27, and 0.42). In cluster 2, the highest values were associated with trips by car (0.98, 0.89, and 0.98). For the 2021-2022 course, clusters 1 and 2 exhibited the highest values for car (0.99, 1, and 1) and public transport (0.77, 1, and 1) journeys, respectively. Cluster 3 showed the largest values for public transport trips (0.50, 0.50, and 0.33). It is noteworthy that in all courses there was a cluster of individuals who used mainly private cars (the modes took the values 0.98 and 1). According to the results, in the 2018-2019 course there was a cluster of persons that made relevant use of university transport (mode equal to 0.89). In 2021-2022 a cluster that exhibited a high use of public transport existed (mode equal to 1).
A table depicting the number of obtained clusters according to the metrics indicated in Section 2.2.1 can be found in the supplementary material document (Table S19). The document also describes tables showing the mean, median, mode, and standard deviation in each cluster (Tables S20-S22 as well as Table S23).
For the 2021-2022 course, the dependence of the chosen transport mode on the number of available public transport stops was also analysed. The Shapiro-Wilk test was applied in order to check the normality of the variables. None of the variables derived from a normal distribution, since the p value was lower than 0.05 in all cases (see supplementary material document, Tables S24 and S25). Since there was no normality in the aforementioned variables, Spearman's algorithm was applied to calculate the correlations. A high correlation existed between the displacements by subway and the number of subway stops. However, a low association was found for the movements by commuter trains and Madrid's urban buses as well as by light subway (see Table 6). It is noteworthy that the journeys by interurban bus exhibited a low negative correlation with the number of stops. A moderate association was found for the displacements by urban bus.
In 2021-2022, the relationships between the type of transport used and the variables of economic income and air pollution were also examined. In Figure 4, which displays the transport mode as a function of the weighted gross income in steps 1 (A) and 2 (B), it can be seen that a higher gross annual income does not seem to imply higher private vehicle use. Regarding pollution, Figure 5 depicts the transport mode as a function of ozone concentration in steps 1 and 2. It can be observed that even in those areas with the highest ozone concentrations, individuals were travelling to the university by private car and motorcycle. Related figures are included in the supplementary material document (Figures S9 and S10). For the 2021-2022 course, the document also includes histograms of the duration of each step of the journey (Figure S11) in addition to histograms of the schedules in which individuals stay at the university (Figure S8).
Regarding travel distances, Table 7 depicts, for each individual's profile, the statistical quartiles that correspond to the trip distance. The lowest median values corresponded to students. Administrative staff, teachers, and researchers exhibited very similar values.
Tables 8 and 9 display the functions that provide the best fit to the cumulative probability distribution of the aforementioned distance. Students as well as researchers and teachers were characterised by a cumulative probability distribution which could be conveniently described by a Pareto 2 function. In contrast, the figures which correspond to the administrative staff were symbolised by a Box-Cox power exponential function (see Table 8). A Box-Cox power exponential function can be defined as in Rigby et al., Stasinopoulos and Rigby, R-gamlss, Stasinopoulos et al. [41-44], with parameters μ > 0, σ > 0, ν ∈ (−∞, +∞), and τ > 0.
Analogous information, which refers to the distribution of the closest distances to public transportation stops, is described in Tables 10-12. In the Madrid community, the most dispersed networks correspond to the interurban bus and urban bus networks, in addition to the commuter trains network. Therefore, these networks are those that exhibit the greatest reach (see Figure 6).
As can be observed in Table 11, for administrative staff as well as teachers and researchers, the cumulative probability distributions followed a type 2 Pareto function.
For students, researchers, and teachers, the closest public transport stops correspond to the subway and light subway. By contrast, for administrative staff the nearest stops are those associated with the commuter trains (see Table 10). Figures showing the number of individuals by cause of discomfort (time, distance, traffic problems, transportation, insufficient parking, and other reasons) have been incorporated in the supplementary material document (Figure S12). It must be noted that not all persons answered the questions related to travel comfort.
Supplemental Trip Attributes.
In this section, the results corresponding to the analysis of the supplementary trip characteristics are presented (see Section 1.3).
For all analysed courses, the highest probability of the private car occupancy variable corresponded to vehicles occupied by only one person.
Figures displaying the histograms and the cumulative probability distributions corresponding to the number of people in private cars have been included in the supplementary material document (Figures S6 and S7). Table 13 depicts the main statistical parameters referring to the aforementioned variable.
Regarding the usage of car-sharing and car-pooling platforms, in all academic courses most persons knew of the car-sharing platforms but did not use them (in 2021-2022, 69.60% of persons had knowledge of the car-sharing platforms but did not utilise them, while 20.58% of individuals had never heard of them. These figures were 61.69% and 9.92%, and 61.69% and 14.10%, in 2017-2018 and 2018-2019, respectively).
Principal Travel Attributes.
In this section, the results referring to the main trip characteristics are discussed (see Section 1.3).
Journey Origins.
As has been previously commented on, not all postcodes with the highest number of students are the closest to the university. This is in line with Studygram [49], which explains that the most relevant factors that determine the choice of university are the academic majors, the academic quality, and the university reputation; the accreditation, the cost of tuition and living, as well as the available scholarships are also significant factors. Finally, the university location, the level of career opportunity and success, the faculty members' helpfulness, as well as the campus environment are among the main causes [49]. Word-of-mouth marketing has been recognised as one of the most important drivers of university choice, particularly recommendations from friends and family Elliott and Healy, Spearman et al. [50, 51]. Various pieces of research describe that working and nonworking students weigh the factors that motivate them differently when selecting a university. Students who were not working estimated that social media, word-of-mouth, and school presentations strongly influence their decision. Working students are more influenced by outdoor advertising Spearman et al. [51].
Used Transport Modes.
The results show that the car was the most used transportation mode on the campus, followed by public and university transport. Bike, motorcycle, and walking were much less utilised. The slight increment detected in the number of hybrid and electric cars was probably due to government subsidies, existing tax credits, and rebates for the purchase of these types of vehicles.
A greater use of private vehicles by students, compared to other modes of transportation, was also detected in certain universities around the world Zhou, Ribeiro et al. [52, 53].
With respect to the dependence of the chosen transport mode on the number of available public transport stops, the correlation analysis shows that the transport mode used in step 1 may depend more on other factors, such as the mode used in the next stage of the journey, than on the number of existing public transport stops.
The journeys to university are usually multimodal, as a completed journey often requires two or three modes of transportation (one of them walking). Transfers in the public transport system, in addition to impacting the use of certain routes and destinations, have been demonstrated to cause disruption in the trip experience and to diminish public transport's competitiveness with the private car, which provides door-to-door service Guo and Wilson [54]. The utilisation of on-demand shared services to reach the university from certain origins (postcodes) could be an alternative option to private car use.
The results describe a higher variability in the use of transport modes during the latter two courses (since they exhibited a higher number of clusters). Each cluster consists of individuals who are relatively homogeneous in terms of the modes of transportation used to commute to the university. A characterisation of individuals on the basis of clusters of transport usage ratios could be useful for the determination of appropriate policies and new strategies related to human mobility on the campus. Meetings could be held with some of the individuals located in each cluster, in order to find out in detail what motivates them to use each transport mode, as well as to explore the positive and negative perceptions of each.
In order to reduce the use of private cars by university employees and students and thus achieve more sustainable behaviour, the following factors could be taken into consideration.
According to Schwartz [55], ecological conducts are the result of the activation of personal norms that manifest sentiments of moral obligation to either execute or refrain from performing actions. If a desire to execute an ecological behaviour exists, the individual has a favourable attitude towards it. However, if the person faces external (psychosocial and contextual causes) or internal (personality causes) barriers, an attitude-behaviour gap is formed. Consequently, there is a decrease in the rate of compliance with these ecological conducts even when the intention is to carry them out. In particular [56], the awareness of consequences, the attribution of responsibility, the personal norms, the behavioural intention, as well as the habit of using the private car and the access to it, impact the final fulfilment of the aforementioned conducts. To modify the habit of using the private car, it is required that universities implement both structural and psychological actions Setiawan et al. [56]. The interventions must include principles and techniques both to eliminate the custom and to increase the responsibility towards the negative impact of the utilisation of the private car Setiawan et al. [56].
As was previously commented on, cycling, motorcycling, and walking have low usage rates on the university campus analysed in this research. With regard to walking, some pieces of research showed that in Europe, individuals aged 17-18 years old commute by walking up to 2 kilometres D'Haese et al., Van Dyck [57, 58]. It is clear that the further away one lives from a commuting location, the less likely one is to commute actively. However, the walkable distance from home to the study location seems to depend on other factors as well (urban planning and cultural perceptions) Rousseeuw, Rodríguez-Rodríguez et al. [59, 60]. In accordance with this, Djurhuus et al. [61] detected, based on a Danish National Health Survey, that there was no significant association between the distance to the nearest bus stop and an active commute.
Regarding cycling, the Spanish National Statistics Institute (INE) [62] provides information related to the European Health Survey corresponding to 2020, which describes the use of the bicycle to get around (for the population aged 15 and over). Figures show that 92.19% of the Spanish population did not use the bicycle, indicating a general absence of cycling culture. Research points out that several cities exhibit rather relevant statistics related to cycling among students, which has been achieved because of several factors Pogačar et al. [63]: (i) universities are within walking distance of public transport stations, (ii) detailed planning and relevant investments were made over many years to promote sustainability, (iii) special policies towards students have been implemented (limiting the use of parking), (iv) propitious topography, (v) good quality infrastructure, and (vi) support for students in cycling activities.
On the university campus analysed in this research, in order to achieve the use of more sustainable modes of transport, in addition to considering some of the actions listed above, which were proposed in Pogačar et al. [63], a reorganisation of land use prioritising cyclists, pedestrians, and university transport over private cars could be carried out.
The results show that a higher gross annual income does not imply higher private vehicle use, which is in line with Aluko [64], who found, focusing on a higher institution's teaching staff in Ekiti State (Nigeria), that the majority of workers (81.8%) used private cars, while only a small percentage (9.1%) used public transport services. This was apparent despite the lower cost of public transport compared to private vehicles (around 16%). An analysis carried out by the University of the West of England in 2019 Chatterjee et al. [65] points out that economic circumstances and spatial context appear to account for much of the variation in bus use.
With respect to air quality and the transport mode used, the results show that environmental policies and awareness-raising in the zones with the highest ozone concentration should be carried out by public administrations. This is because, in those areas, individuals continued to use motorised vehicles.
Trip Distances, Travel Times, and Schedules.
Certain variability was observed with regard to the most likely travel times during the three analysed academic years. However, if the three academic courses are taken as a whole, the most probable travel time was 30-60 minutes. This figure is higher than the one found for the Madrid community, where the average journey time is 25 minutes E. D. M. [4].
With respect to travel distances, as we have mentioned, the results show that the cumulative probability distributions of the travel distances could be appropriately characterised by Pareto 2 and Box-Cox power exponential functions. Various existing pieces of research showed that the probability of an individual's movement over a distance ∆d, denoted by P(∆d), decreases with an increase in ∆d. In addition, it is suggested that P(∆d) can be fitted by different statistical distributions, such as a power law, an exponential law, or an exponentially truncated power law, among others Liu et al. [66].
The medians of the distances to reach the university campus were similar to the average travel distances in the European Union but higher than the existing ones in the Madrid community. In Europe, according to the report "Transport New Mobility Patterns Study: insights into passenger mobility and urban logistics", which was conducted from March to August 2021 by the European Commission [67], the average travel distance per person and per day in urban trips does not exceed 14 km. According to E. D. M. [4], in the Madrid community, the average travel distance in mechanised modes was 8.8 km. In order to improve travel comfort (time and distance), several transformations, which could be implemented in the public transport commute, were identified. Modifications, which should be carried out in a coordinated dialogue between the Madrid City Council, the Regional Transport Consortium, and the University, are as follows: (i) improving the design of currently available public transport routes, even if they require individuals to make more transfers; (ii) establishment of public transport frequencies in coordination with university schedules; (iii) improvement of time, reducing the period in which users walk between and wait for transfers; (iv) actions to make better use of travel time, for example, by providing passengers with electronic devices (computers, laptops, and tablets) including office and entertainment applications to be used during the trip.
Supplemental Trip Attributes.
In this section, the results referring to the supplementary trip characteristics are discussed (see Section 1.3).
As we have mentioned, during the journey, the most probable number of persons in a private car was 1. This is in line with information related to the European Union, where the private car is the dominant mode of transport with fewer than 2 persons on average per car. In particular, for the population aged 15-84, the mean occupancy rate for a passenger car used in urban trips is in the range [1.20, 1.90] individuals, with a minimum value of 1.17 in Italy (population aged 15-80) and a maximum value of 1.87 in Romania Eurostat [68].
With regard to the use of car-sharing and car-pooling platforms, for all academic courses, a non-negligible percentage of individuals said they were not familiar with these types of platforms. It must be noted that many universities Genikomsakis et al. have, among the objectives of their mobility policy, to reduce the number of vehicles on their campuses as well as to discourage single-occupancy car use. This has led them to implement car-sharing and car-pooling strategies. However, various factors must be considered for carrying out these actions successfully. Certain research points out that saving money is one of the main motivations [72, 73]. Other relevant reasons are environmental efficiency, comfort, traffic, socialisation, and curiosity. Among the main causes of deterrence are privacy protection and the perception of the danger of travelling with strangers Ciasullo et al. [73]. Also, an analysis conducted in Lahore city Muhammad and Ali [74] showed that certain factors had a relevant influence on the individual's decision to adopt a car-pooling alternative. These included commuters' marital status, education level, daily
trip distance, the mode of travel usually used, household income, car ownership, and the possession of a driving licence. Additionally, both the reason for the trip and the number of people with whom the vehicle was shared also had an important impact.
With respect to car-sharing, users often perceive it as an innovation to be tested. Car-sharing provides a flexible alternative that meets diverse transportation needs while reducing the negative impact of private vehicle ownership Duziak, Ramos et al. [75, 76], who carried out a survey taking as a sample 1,519 users and 3,695 nonusers of car-sharing. The survey included 36 questions regarding attitudes towards car-sharing, the environment, political orientation, personal norms, frequency of utilisation of transport modes, as well as transport mode choice for different trip purposes. The authors found five clusters, including both users and nonusers of car-sharing, whose members differed regarding both psychological and behavioural aspects. They were multimode and low environmentalism (user mobility type 1), car-focused ambivalent (user mobility type 2), active public transport green (user mobility type 3), car-focused low-green (nonuser mobility type 4), and multimode and high environmentalism (nonuser mobility type 5).
Conclusions and Future Research
The conclusions obtained for each of the objectives are detailed below.
What are the most frequently used modes of travel to university? Is there a pattern of mobility, and has this pattern changed from academic year to academic year? What are the perceptions of journey comfort?
Globally, the most frequently used mode of transportation for commuting to university was the private car. In the analysis by level of studies, only vocational training students used cars and public transport in similar percentages. According to the cluster analysis, some differences in the patterns of mobility existed among the three academic years. Two clusters were detected for the 2017-2018 and 2018-2019 courses. By contrast, for the 2021-2022 course, three clusters were identified. In all three courses, a cluster of individuals who mainly used private cars existed, while in 2018-2019, a cluster of persons that relevantly utilised the university transport was found. In 2021-2022, a cluster that exhibited a high use of public transport existed.
In order to establish actions to encourage the use of public transport or that offered by the university, focus groups and qualitative surveys should be conducted. This would allow us to obtain more detail on the causes that motivate, in general, a greater use of the private car.
With respect to the trip's comfort, the opinion was mostly positive. Time, distance, and traffic were identified as the main inconveniences.
What are the characteristics of private car usage? Are digital car-sharing and car-pooling platforms known and used?
As indicated by the statistical metrics used, the highest probability corresponded to vehicles occupied by only one person. More than 60% of individuals said they were aware of car-sharing/car-pooling platforms but had never used them. Work aimed at increasing the awareness of individuals about the benefits of car-pooling should be carried out.
A slight increase in the use of hybrid and electric cars, at the expense of gasoline and diesel cars, was also detected.
What are the statistical characteristics of distances and journey duration to the university?
With respect to approximate distances to the university, different cumulative probability distributions were detected among the different profiles of individuals. Students, as well as researchers and teachers, were appropriately characterised by a Pareto 2 function. In contrast, the cumulative probability distribution associated with the administrative staff followed a Box-Cox power exponential function.
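The Pareto type 2 (Lomax) fit mentioned above can be reproduced approximately with standard tools; the sketch below, using an illustrative distance array, shows one way to do it. The Box-Cox power exponential fit comes from the GAMLSS family and has no direct equivalent in this library, so it is not shown.

```python
# Minimal sketch: fitting a Pareto type 2 (Lomax) distribution to commuting
# distances. 'distances_km' is an invented array used only for illustration.
import numpy as np
from scipy import stats

distances_km = np.array([1.2, 3.5, 4.0, 7.8, 9.1, 12.4, 15.0, 22.3, 30.5, 41.0])

# Fix the location at zero so the fit returns only shape and scale.
shape, loc, scale = stats.lomax.fit(distances_km, floc=0)
print(f"Pareto II fit: shape={shape:.2f}, scale={scale:.2f}")

# Compare fitted and empirical cumulative probabilities at the observed points.
emp_cdf = np.arange(1, len(distances_km) + 1) / len(distances_km)
fit_cdf = stats.lomax.cdf(np.sort(distances_km), shape, loc=0, scale=scale)
print(np.round(np.column_stack([np.sort(distances_km), emp_cdf, fit_cdf]), 3))
```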
The lowest median of the approximate distance to the university corresponded to students. Researchers and teachers, as well as administrative staff, exhibited very similar values. This suggests that distance to the university is among the many factors that may influence the choice of university.
Travel times changed over the analysed academic years, probably because of the broader academic offer. In 2021-2022, the most probable trip times were 30 minutes or longer.
What are the statistical characteristics of the distance an individual must travel to reach the nearest public transport stop? Is there any relationship between the mode of transport used and this distance?
For administrative staff, as well as teachers and researchers, the cumulative probability distributions of the aforementioned approximate distance followed a type 2 Pareto function. By contrast, for students, the distribution could be appropriately represented by a Box-Cox-Cole-Green-Orig function.
For students, researchers, and teachers, the lowest median of the distance to the nearest public transport stops corresponded to the subway and light subway. Nevertheless, for administrative staff, the closest stops were associated with the commuter trains.
The transport mode chosen for one trip step seems to depend more on the mode of transport to be utilised in the next trip stages and, to a lesser extent, on the number of stops in the postcode of the origin of the journey.
Additionally, no direct relationship was detected between income level and private car use.
Both private vehicle and motorcycle use were associated with higher ozone pollution. In these areas, specific awareness-raising actions with an important focus on sustainability should be carried out.
This work could be continued by analysing the relationships between the mode of transport used and other socioeconomic factors not examined in this research. These could be, among others, the level of education of the students' parents or the place of birth of the commuters.
An exploration of the perceptions of trip comfort, based on the analysis of the free text provided by respondents in future surveys, could also be carried out. Various methods of sentiment analysis, a procedure that automatically identifies attitudes, opinions, or emotions from text, could be used (Balakrishnan et al.; Janda et al.).
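A hedged sketch of such an analysis is given below, using a pretrained sentiment classifier through the transformers pipeline API. The model name is only a suggestion (a multilingual model is assumed, since open-text answers would likely be in Spanish), and the example comments are invented.

```python
# Hedged sketch: scoring free-text comfort comments with a pretrained sentiment
# model. Any classifier exposed through the transformers pipeline could be used.
from transformers import pipeline

comments = [
    "El viaje es largo y el autobus siempre va lleno",   # long trip, crowded bus
    "Muy comodo, llego en 20 minutos sin transbordos",   # comfortable, no transfers
]

sentiment = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",  # assumed model choice
)

for comment, result in zip(comments, sentiment(comments)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {comment}")
```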
Carbon footprint could also be evaluated for commuting to university based on the ISO 14069 standard, even distinguishing by individual profiles. ISO 14069 [80] provides guidance on the quantification and reporting of greenhouse gas emissions for organisations.
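As a rough illustration of the kind of calculation involved, the sketch below estimates annual commuting emissions as distance times commuting days times a per-mode emission factor. The factors and the number of commuting days are illustrative assumptions only; an ISO 14069-style inventory would take them from an official emission-factor database.

```python
# Minimal sketch of a commuting carbon-footprint estimate per respondent.
# Emission factors (kg CO2e per passenger-km) are assumed values, not from the standard.
EMISSION_FACTOR = {
    "car_gasoline": 0.19,
    "car_diesel": 0.17,
    "bus": 0.09,
    "subway": 0.03,
    "walking": 0.0,
}

def annual_commute_footprint(distance_km: float, mode: str,
                             commuting_days: int = 170) -> float:
    """Return estimated annual commuting emissions in kg CO2e (round trips)."""
    return 2 * distance_km * commuting_days * EMISSION_FACTOR[mode]

print(annual_commute_footprint(12.5, "car_gasoline"))  # car commuter, 12.5 km each way
print(annual_commute_footprint(12.5, "subway"))
```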
This research aims to analyse the mobility of individuals attending a private university on the outskirts of Madrid. Student surveys conducted during the 2017-2018, 2018-2019, and 2021-2022 academic years were used. It must be noted that no surveys were prepared for the 2019-2020 and 2020-2021 academic courses because of the COVID-19 pandemic. This investigation focuses on answering the following questions. (i) What are the most frequently used modes of transport when travelling to university? Is there a pattern of mobility, and has it changed throughout different academic years? What is the perception of trip comfort? (ii) Which are the general trends in relation to private car use? Are car-sharing and car-pooling platforms known and used? (iii) What are the statistical characteristics connected with the closest distance to public transport stops? Is there any relationship between the mode of transport used and the aforementioned distance?
Figure 1: For the Madrid community, individuals travelling to university by postcode in the 2021-2022 course. Some information retrieved from CRTM [35] was used for the construction of the map.
3.1.3. Trip Distances, Travel Times, and Schedules. With regard to trip times, it can be observed that the most likely travel times are 15-30 minutes and 30-60 minutes for the 2017-2018 and 2018-2019 academic courses. In the 2021-2022 academic course, the most likely trip durations are 30-60 minutes and ≥ 1 hour. With respect to schedules, most individuals stay at the university during the morning in all analysed academic courses.
Figure 2: For the Madrid community, for the 2021-2022 course, the number of individuals who used public transport in step 1. Some information retrieved from CRTM [35] was used for the construction of the map. (a) Subway. (b) Light subway. (c) Interurban bus. (d) Urban bus. (e) Madrid urban bus.
Figure displaying the histograms related to car-sharing and car-pooling platforms for all analysed courses.
Figure 3: For the Madrid community, for the 2021-2022 course, the number of individuals who used public transport in step 2. No individual used the commuter trains for trip step 2. Some information retrieved from CRTM [35] was used for the construction of the map. (a) Subway. (b) Light subway. (c) Interurban bus. (d) Urban bus. (e) Madrid urban bus.
Figure 5: Transport mode as a function of ozone concentration in steps 1 and 2. EMT refers to Empresa Municipal de Transportes (municipal transportation company), which manages the urban buses in Madrid.
Figure 6: For the Madrid Community, in each postcode, for the 2021-2022 course, location of stops by transport type. Information retrieved from CRTM [35] was used for the construction of the map. (a) Subway. (b) Light subway. (c) Commuter trains. (d) Interurban bus. (e) Urban bus. (f) Madrid urban bus.
Table 1: The percentage of the use of each transport type for each academic course.
In the 2017-2018 course, 36.80% and 27.15% of students stayed at the university during the morning or the afternoon, respectively, and 29.51% were there during both the morning and the afternoon. In the 2018-2019 course, these percentages were 43.09%, 16.82%, and 33.85%. In the 2021-2022 course, they were 72.12%, 19.35%, and 4.12%. Figures showing the cumulative probability distributions and histograms related to trip times for the 2017-2018, 2018-2019, and 2021-2022 courses are provided in the supplementary material document.
Table 2: The percentage of the use of each transport type, in each step, for the 2021-2022 course.
Table 3: The percentage of the use of each type of car for the 2017-2018 and 2018-2019 courses.
Table 4: The percentage of car motorisation types for the 2021-2022 course.
Table 5: Similarity between cumulative probability distributions of transport utilisation ratios.
Table 6: Correlation between number of stops by postcode and number of individuals by transport mode in trip steps 1 and 2 for the Madrid community in the 2021-2022 course.
Table 7: Statistical quartiles of the travelled distance in kilometres from origin to university for each individual's profile.
Table 8: Function that provides the best fit to the cumulative probability distribution of the approximate distance in kilometres from the origin to the university for each individual's profile. RDOF: residual degrees of freedom for the fit. DOF: degrees of freedom for the fit. GD: global deviance.
Table 9: Parameters corresponding to the function that provides the best fit to the cumulative probability distribution of the approximate distance from the origin to the university in kilometres for each individual's profile.
Table 10: Statistical quartiles of the approximate distance to the nearest public transport stop from the origin in kilometres for each individual's profile.
Table 11: Functions that provide the best fit to the cumulative probability distribution of the approximate distance to the nearest public transport stop from the origin for each individual's profile. RDOF: residual degrees of freedom for the fit. DOF: degrees of freedom for the fit. GD: global deviance.
Table 12: Parameters of the functions that provide the best fit to the cumulative probability distribution of the approximate distance to the nearest public transport stop from the origin for each individual's profile.
Table 13 :
Main statistical parameters corresponding to the number of individuals in private car for each course. | 2023-09-24T15:59:16.381Z | 2023-09-20T00:00:00.000 | {
"year": 2023,
"sha1": "b433b37605fe5572dbce11a3392c182cadc96e82",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jat/2023/1868826.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1519ecee6289c6f723a749069344bd9dd7aa8c09",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
269633777 | pes2o/s2orc | v3-fos-license | Validating interRAI Chinese self-reported carer needs (SCaN) assessment and predicting caregiving distress among informal Chinese caregivers of older adults
Background This study aims to (1) determine the reliability and validity of the interRAI Chinese Self-reported Carer Needs (SCaN) assessment among informal Chinese caregivers of older adults, (2) identify predictors of caregiving distress in Asian regions with long-standing Confucian values of filial piety and family responsibility. Methods This cross-sectional study recruited 531 informal Chinese caregivers of older adults in Hong Kong, Shanghai, Taiwan, and Singapore. The scale reliability was examined using Cronbach’s alphas (α) and McDonald’s omega coefficient (ω). The concurrent validity and discriminant validity were assessed using Spearman rank correlations (rho). To examine the predictors of caregiving distress among informal caregivers of older adults, we employed hierarchical linear regression analyses informed by the Model of Carer Stress and Burden and categorized the predictors into six domains. Results Results revealed good internal consistency reliability (α = 0.83–0.96) and concurrent validity (rho = 0.45–0.74) of the interRAI Chinese SCaN assessment. Hierarchical linear regression analysis revealed that entering the background factors, primary stressors, secondary stressors, appraisal, and exacerbating factors all significantly enhanced the model’s predictability, indicating that the source of caregiving distress is multidimensional. In the full model, caregivers with longer informal care time, lack of support from family and friends, have unmet needs, experience role overload, have sleep problems, and low IADL functioning are at a higher risk of caregiving distress. Conclusions The interRAI Chinese SCaN Assessment was found to be a reliable and valid tool among the Chinese informal caregivers of older adults. It would be useful for determining family caregivers’ strengths, needs, and challenges, and tailoring interventions that address the potentially modifiable factors associated with caregiving distress and maximize support. Healthcare providers working in home and community settings should be aware of the early identification of caregiving distress and routine assessment of their needs and empower them to continue taking care of their needs and providing adequate care to the care recipient. Supplementary Information The online version contains supplementary material available at 10.1186/s12877-024-05014-0.
Validating interRAI Chinese self-reported carer needs (SCaN) assessment and predicting caregiving distress among informal Chinese caregivers of older adults Shicheng Xu 1,2 , Vivian W. Q.Lou 1,2* , Iris Chi 1,3 , Wai Chong Ng 4 , Jing Zhou 5,6 , Lung-Kuan Huang 7,8 , Carol Hok Ka Ma 9 and Moana Jagasia 4 The world population is ageing rapidly, resulting in rising long-term care demands and healthcare costs, which in turn lead to a shift from formal to informal care [1].Supporting informal caregivers thus becomes a crucial public health issue worldwide [2].An informal caregiver is a relative, spouse, partner, or friend who provides care and support to someone at home without pay [3].They are viewed as a valuable extension of the healthcare system and the first line of support for older people with medical, behavioral, disability, or other conditions [4].However, despite the important role of informal caregivers in longterm care provision, their needs are poorly understood and largely remain under-recognized by service providers [5].In the past decade, the complexity of caregiving has increased, requiring informal caregivers to acquire a sophisticated understanding of the care recipients' conditions and new skills, but oftentimes, they are not adequately trained or prepared, and lack of choice in taking on the role [2,6].Meanwhile, informal caregivers of older adults are at an increased risk of experiencing distress, resulting in physical and mental health problems and discontinuation of caregiving, which may negatively impact patient outcomes, leading to hospitalization and nursing home placement [7][8][9][10].
Caregiving distress is the psychological distress associated with caring for a chronically ill individual, which is distinct from the objective and tangible costs indicated by caregiving burden [11].Measuring the subjective experience of caregiving is arguably more predictive of caregiver health and well-being [12].However, to date, the caregiving distress process identified in the literature is mostly specific to caregivers of people with a particular condition, such as cancer and dementia, and not generalizable across conditions and caregiver contexts [8,13].Identifying and addressing potentially modifiable factors contributing to caregivers' distress are much needed to ensure their health, well-being, and ability to remain in the caregiving role [14].
Asia is ageing faster than any other continent [15].By 2030, Asia will be the region with the largest elderly population in the world, exceeding 4.9 billion, and the informal caregiving demand will continue to increase [16].Compared to developed countries, most developing countries in Asia lack structured health schemes for older adults, resulting in a higher prevalence of family caregiving for older adults [17].However, due to the rapid socio-economic and demographic changes, Asia has witnessed an upsurge of nuclear families, and the reliance on the care provided by families has become untenable [18].Caregiving in Asia is further complicated by the long-standing Confucian values of filial piety and family responsibility [19,20].Originating in China, Confucianism is still predominant in many Chinese-majority societies, including mainland China, Hong Kong, Taiwan, and Singapore [12].Filial piety prescribes that adult children provide care, respect, and financial support to their older parents [21].However, such sociocultural values of family caregiving can be burdensome [22].The cultural sense of family obligation may prevent caregivers from seeking help outside of the household despite support is much needed [23].Consequently, their needs are neglected, and health is compromised, and informal caregivers of Chinese ethnicities are susceptible to increased risks of caregiving distress [24].Therefore, it is imperative to validate internationally recognized assessment tools that can measure the needs and distress perceptions of Chinese caregivers, facilitating cross-cultural comparisons, and enabling the development of interventions and support services that are sensitive to cultural differences [25].
In an international effort to support assessment, care planning, outcome evaluation, and resource allocation, interRAI is a collaborative network of researchers and practitioners in over 35 countries committed to improving care for persons who are disabled or medically complex (www.interRAI.org).A family of standardized assessment tools have been developed and rigorously tested for vulnerable older adults in long-term care homes, home care, acute care hospitals, rehabilitation, and palliative care since 1992.These tools have also been used to investigate predictors of caregiver distress using patients' assessments, such as interRAI Home Care and interRAI Palliative Care.However, with only a few items dealing with informal caregivers, most predictors are derived from the patient's conditions [14,26].Therefore, introducing a standardized, self-reported caregiver supplement to the current interRAI suite would help understand their source of distress, needs and challenges, and navigate support services [26].Routine screening and assessment of psychological and physical health needs, as well as preventive measures oriented towards informal caregivers across their caregiving journey, should be core elements of optimal family-centered and communitybased care [2,3,6,10,17,[27][28][29].
In recent years, interRAI developed a comprehensive assessment tool, interRAI Self-Reported Carer Needs (SCaN) assessment, to systematically gather information from the caregivers' perspective about 1) their unmet financial, physical, emotional, or social needs; (b) their emotional and physical functioning; (c) their ability to realistically provide care while still be involved in other activities; and (d) services that best match their unique needs, challenges, and expectations.As part of interRAI's multinational research initiative, four interRAI Asian members, Shanghai (in mainland China), Hong Kong, Taiwan, and Singapore collaborated to explore the applicability of the interRAI SCaN assessment among Chinese caregivers, as the four regions share a similar context of having a Chinese-majority population.To date, there is a lack of standardized assessment to examine the caregivers' strengths, needs, and challenges among Chinese informal caregivers of older adults.The Model of Carer Stress and Burden (MCSB), derived from the classic Pearlin stress process model, is one of the main models used to explain negative caregiving outcomes.It assumes the risk and protective factors identified in caregiving situations can be categorized into six distinct domains (background factors of stress, primary stressors, secondary stressors, appraisal, and exacerbating factors), each with a unique contribution to caregiving distress [30,31].As the predictors of caregiving distress across conditions and caregiver contexts have not been systematically reported in the Asian context, this study will apply MCSB to identify and stratify the level of caregiving distress predictors in the interRAI Chinese SCaN assessment.In summary, the present study aims to (1) translate and determine the reliability and validity of the Chinese SCaN assessment among informal caregivers of older adults in communities with a Chinese-majority population; (2) identify the predictors of caregiver's distress among informal Chinese caregivers of older adults.
Settings
This study involves informal caregivers of older adults in four Asian sites: Hong Kong, Shanghai, Taiwan and Singapore (Singapore, 75.9% of the population is Chinese people, [32]).Despite the diverse sociopolitical structures, the four study sites all comprise a majority of the Chinese population and share Confucian cultural values [33].Families in Hong Kong, Shanghai, Taiwan, and Singapore are all considered central in caring for frail older people [22,[34][35][36].The four sites also face similar challenges in family care provision, such as declining fertility rates, increased life expectancy and old-age dependency ratio, family downsizing and reduced intergenerational cohabitation, and concomitantly increased informal caregiving demands [34].Thus, exploring their needs, challenges, and strengths can provide a comprehensive understanding of the caregiving experience within the broader Chinese cultural context.
Sampling and participants
Power analysis indicates that the optimal sample size in each group is 45 (group k = 4, effect size f = 0.25, alpha = 0.05). To account for missing data, we oversampled by 10%; thus, each region recruited 50 to 100 caregivers, resulting in a final target sample of 200 to 400 survey respondents. The inclusion criteria for informal caregivers were: (1) aged 21 or above; (2) able to understand written Chinese; (3) caring for older adults aged 60 years or above (family or non-relative); (4) primary caregivers, unpaid for their service and providing most of the care. The exclusion criteria were: (1) self-reported cognitive or mental health issues; (2) the care recipient lives in institutional settings, where care is provided by professionals and paid staff. All four study sites used convenience sampling with the same inclusion and exclusion criteria to ensure consistency.
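The reported per-group figure can be checked with a standard power routine, as in the sketch below. A target power of 0.80 is assumed, since the text does not state the power level used; the routine returns the total sample size, which is then divided across the four groups.

```python
# Sketch of the a priori power analysis (one-way design, k = 4 groups,
# effect size f = 0.25, alpha = 0.05, assumed power = 0.80).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, k_groups=4
)
print(round(n_total), round(n_total / 4))  # roughly 180 in total, i.e. about 45 per region
```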
Data collection procedures
This study uses self-reported survey data collected from January to March 2023.Currently, there is only an English version of the interRAI SCaN assessment.Considering the needs of Chinese-speaking caregivers, the original questionnaire was translated into Chinese by two Hong Kong-based bilingual research assistants with social work backgrounds and then using the back-translation procedures proposed by Brislin and Freimanis [37].The Hong Kong version was further slightly revised and modified by Shanghai, Taiwan, and Singapore investigators as the health care system and the use of the Chinese language were different in different Chinese societies.For instance, despite Chinese being the common language across four regions, Cantonese is the main dialect in Hong Kong in the form of traditional Chinese, whereas most Taiwanese speak Mandarin but use traditional Chinese as their written language.In comparison, Shanghai and Singaporean Chinese also speak Mandarin but use simplified Chinese as the written language.Also, a few vocabularies, grammar, and syntax were changed, resulting in three versions of interRAI Chinese SCaN assessments (Hong Kong Cantonese version, Taiwan traditional Chinese version, Shanghai simplified Chinese version).The translated assessments were then piloted among five caregivers in each site and minor adjustments were made to improve understandability and clarity.All principal investigators administered the translation and examined the face validity and content validity of the translated versions.
InterRAI SCaN assessment is designed to be selfreported by caregivers as research has shown that most caregivers can complete the assessment independently (interRAI, 2022, p. 4).Therefore, the translated Chinese version of the interRAI SCaN assessment was completed via Qualtrics, an online surveying tool.The informed consent form was first obtained from all participants at the beginning of the survey.Participants were informed of the study objectives, and they had the right to withdraw from the study without any negative consequences.Only research team members will be granted access to the collected data.
The data collection approach was slightly different in the four study sites.Potential participants were either approached via social media or elderly care centers in the community, where they learned about the study from social workers.In Hong Kong, the research center distributed recruitment posters via social media to reach potentially eligible caregivers across the territory.Those interested and meeting the inclusion criteria could scan the QR code to complete the questionnaire.They were also encouraged to share the posters and links with others.Participants in Shanghai, Taiwan, and Singapore were approached in local elder care centers.Social workers were mainly responsible for introducing the study to caregivers and sending them the Qualtrics link.If the caregivers could not complete the questionnaire independently, the social workers would assist them or print out paper versions to facilitate the process.
The final sample comprises 531 informal caregivers caring for older adults aged 60 years and above. Sixty-four invalid cases were deleted because the care recipients were under 60; thus, the valid response rate is 89.24%. The average time to complete an assessment was about 20 to 30 minutes. In our sample, 30.3% of caregivers were not digitally literate, so interviewers guided them in using the digital platform or administered the assessment as an interview rather than self-report. This study was approved by the university's Human Research Ethics Committee on July 4, 2022 (HREC Reference Number: EA220277).
Measurements for construct validity
Patient Health Questionnaire-4 (PHQ-4), Pearlin Role Overload Measure, and Lubben Social Network Scale (LSNS-6) will be used to measure construct validity.Caregivers are required to complete both the Chinese version of the interRAI SCaN assessment and the validation instruments.
Patient health questionnaire-4 (PHQ-4) This is a valid ultra-brief questionnaire to detect both anxiety and depressive disorders [38]. PHQ-4 has also been validated in the Chinese context as a brief and valid measure of psychological distress [39]. It consists of a 2-item depression scale (PHQ-2) and a 2-item anxiety scale (GAD-2). Each item is rated on a 4-point scale from 0 (not at all) to 3 (nearly every day).
Pearlin role overload measure
The 4-item Pearlin Role Overload Measure assesses the caregiver's stress and captures not only the level of fatigue felt by caregivers but also the relentlessness and uncompromising nature of its sources [40]. The Chinese version was utilized by Cheng, Lam [41] among Hong Kong Chinese caregivers. Each item is rated on a 4-point Likert scale from 1 (not at all) to 4 (completely).
Lubben social network scale (LSNS-6)
This is a validated tool for assessing social networks and isolation among older Chinese by measuring the number and frequency of social contacts with friends and family members and their perceived social support [42]. It consists of 6 items, and each item is scored from 0 (none) to 5 (nine or more).
Independent predictors for caregiving distress
The interRAI Chinese SCaN Assessment offers a multidimensional perspective of the caregiving role.It is comprised of three sections: (1) demographic information; (2) carer health and wellbeing, including memory and cognition, social participation, function/endurance/stamina, mood, and health conditions (e.g., sleeping quality, pain, breath); (3) carer needs, including the supports needed and received from both caregivers and care recipients.It also identifies the challenges they encounter as an unpaid caregiver.Based on MCSB and the supporting evidence, twelve variables in the assessment were included in the multivariable regression models under five domains to predict caregiving distress (see Table S1 for background evidence that supports the inclusion of each factor).It hypothesized that each domain of stressors would significantly improve the predictability of the linear regression model for caregiver distress.
Background factors of stress
The caregiver's year of birth, gender, and relationship to the care recipient were self-reported.The relationship factor includes the caregiver as the care recipient's spouse/partner or child.
Primary stressors Primary stressors include patient characteristics and care situations, and this study includes self-reported co-residence and informal care time as predictors.Co-residence is indicated by answering "We live together" to the time it takes to travel from the caregiver's home to where the care recipient lives.Informal caregiving time is measured by the hours of care provided in the last three days, which was recoded into "less than 36 hours" and "36 hours or more".
Secondary stressors Secondary stressors arise from primary stressors, and this study includes financial difficulty and lack of social support as predictors.Financial difficulty is indicated by answering "yes" to "I have financial difficulties (e.g., have to make trade-offs using funds to cover food, shelter, clothing, or medications)", and lack of social support is indicated by answering "yes" to "I lack enough support from family and friends".
Appraisal Appraisal includes the caregiver's perception of caregiving, situational control, role conflicts, and the meaning of caregiving. In this study, appraisal was measured by three variables: whether the caregiver has unmet needs, whether the care recipient has unmet needs, and the role overload score. The unmet needs perceived by caregivers are measured by whether they received and needed 14 support services for care recipients and 6 support services for caregivers. Role overload is measured by summing whether caregivers report "yes" to managing each of four areas in addition to their role as an informal caregiver: (1) work or job; (2) family or children; (3) attending school; (4) making enough money to live on.
Exacerbating factors Exacerbating factors refer to the caregiver's physical health, including sleep problems, self-rated health, and performance of instrumental activities of daily living (IADL). Sleep problems are measured by the frequency of experiencing difficulty in falling asleep or staying asleep, waking up too early, restlessness, or non-restful sleep in the last three days. The score ranges from 0 (not in the last three days) to 3 (daily in the last three days). Self-rated health is measured by one question: "In general, how would you rate your health?". The score ranges from 0 (excellent) to 3 (poor). IADL is measured by summing the scores of six activities related to independent living for which the caregiver reported the level of assistance needed (meal preparation, ordinary housework, managing finances, managing medications, shopping, and transportation).
Outcomes: caregiving distress
InterRAI Chinese SCaN has five items to assess the caregiver's self-perceived distress: (1) In the last 3 days, how often have you felt little interest or pleasure in things you normally enjoy?(2) In the last 3 days, how often have you felt anxious, restless, or uneasy?(3) In the last 3 days, how often have you felt sad, depressed, or hopeless?(4) In the last 3 days, how often have you felt overwhelmed by the Care Recipient's condition?(5) In the last 3 days, how often have you felt unable to continue caring activities?Responses to each question were rated on a 4-point scale ranging from 0 to 3 points, and a summed total score (range 0-15 points) was calculated.A higher total score corresponds to a higher level of caregiver distress.
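A minimal sketch of how this summed outcome can be computed from item responses is shown below; the column names are hypothetical.

```python
# Minimal sketch: the caregiving-distress outcome is the sum of five items,
# each scored 0-3, giving a total in the range 0-15. Column names are invented.
import pandas as pd

responses = pd.DataFrame({
    "anhedonia": [0, 2], "anxious": [1, 3], "sad": [0, 2],
    "overwhelmed": [1, 3], "unable_to_continue": [0, 1],
})

distress_items = ["anhedonia", "anxious", "sad", "overwhelmed", "unable_to_continue"]
responses["distress"] = responses[distress_items].sum(axis=1)
print(responses["distress"].tolist())  # e.g. [2, 11]; higher = more distress
```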
Statistical analysis
Descriptive statistics were calculated to examine the demographic characteristics. Internal consistency of the interRAI Chinese SCaN scales was assessed using Cronbach's alpha (α), with a value of 0.60 or above indicating acceptable internal consistency and more than 0.70 indicating good internal consistency [43]. Although Cronbach's alpha is a widely used measure of reliability, McDonald's omega coefficient (ω) relies on fewer and more realistic assumptions and has been shown to be more robust than alpha [44, 45]; McDonald's omega coefficient is therefore also reported. Construct validity, including concurrent validity and discriminant validity, was assessed using Pearson's correlation (r) or Spearman rank correlations (rho) by correlating the interRAI Chinese SCaN scales and the validation measures. To determine which scales would be appropriate for comparison, an in-depth evaluation of similar items, scales, and composites was conducted during the content validation stage. A five-step hierarchical regression analysis was applied to examine the predictive role of each domain in predicting caregiver distress. To evaluate multicollinearity among variables, the adjusted generalized standard error inflation factor (aGSIF) was used. All statistical analyses were performed using RStudio. The level of statistical significance was set at the p < .01 threshold (two-tailed).
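For illustration, the sketch below shows how the reliability and concurrent-validity checks could be reproduced in Python (the study itself used R). The input file and column names are assumptions.

```python
# Hedged sketch of the reliability and validity checks on the distress scale.
import pandas as pd
import pingouin as pg
from scipy.stats import spearmanr

df = pd.read_csv("scan_items.csv")          # one row per caregiver (assumed file)
distress_items = [f"distress_{i}" for i in range(1, 6)]

alpha, ci = pg.cronbach_alpha(data=df[distress_items])
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")

# Concurrent validity: SCaN distress total against the PHQ-4 total.
rho, p = spearmanr(df[distress_items].sum(axis=1), df["phq4_total"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```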
Sample characteristics
Table 1 presents descriptive statistics for the 531 caregivers' profiles. The final sample includes 50.47% Shanghai caregivers, 27.68% Hong Kong caregivers, 13.94% Singapore caregivers, and 7.91% Taiwan caregivers. Among all caregivers, 64.15% are female, with a mean age of 53.69 (SD = 15.8). The care recipients are 57.98% female, with a mean age of 77.08 (SD = 9.78). In this sample, 43.88% of caregivers are adult children and 17.7% are spouses or partners.
In the primary stressors, 53.48% of caregivers live together with the care recipient, and 26.55% provided more than 36 h of care in the past three days.In the secondary stressors, 36.35% of caregivers reported financial difficulties and 30.89% lack of support from family and friends.Regarding appraisal of the caregiving role, 67.61% of caregivers reported unmet supportive needs, primarily including episodic relief from caregiving and carer support groups.56.23% of care recipients also have unmet supportive needs, mostly mental health services and physical rehabilitation.The mean score of role overload is 1.66 (SD = 1.59), indicating caregivers usually have difficulties managing one to two responsibilities in addition to the caregiving role.Finally, in terms of the exacerbating factor (physical health), the mean sleep problem score is 0.77 (SD = 0.95), the mean self-rated health score is 1.23 (SD = 0.79), and the mean score for IADL is 1.14 (SD = 2.11).Higher scores indicate worse physical health.
Internal consistency reliability
The internal consistency of the two scales, which are summarized by more than 1 question, is evaluated by Cronbach's alpha and McDonald's omega coefficient.As
Face validity and content validity
Face validity was examined by all principal investigators in the four regions by assessing whether the items were (1) relevant to the Chinese caregiver's needs (2), relevant to the context of informal caregiving in the Chinese population, and (3) suitable for informal Chinese caregivers in terms of acceptability and readability.The research team agreed that the items showed adequate face validity.Content validity was examined by comparing the translated and original items of the interRAI SCaN assessment, and the results suggest that the content validity is at an acceptable level.The scales that were compared were determined during the content validation stage based on the similarity between the definition and approach in developing the scales.Table 2 outlines the scales that were determined to be appropriate for comparison.
Concurrent validity and discriminant validity
The validity coefficients provided evidence of the scales' construct validity.Given the non-normality of the distributions, correlations were determined using Spearman's rho.The results in Table 2 indicate that the concurrent validity of interRAI Chinese SCaN scales were moderate to good (rho ranged from 0.45 to 0.74).The test between the two scales and LSNS-6 was also proved to be unrelated (rho ranged from − 0.23 to -0.2).
Predictors of caregiving distress
Table 3 summarizes the results of the five-step hierarchical linear regression analyses that predicted caregiving distress.The adjusted generalized standard error inflation factor ranged from 1.04 to 1.44, providing evidence that there was no problem related to multicollinearity (see Table S2).
The full model revealed a significant overall model fit, explaining 53.62% of the variance in caregiver distress. Entering the background factors, primary stressors, secondary stressors, appraisal, and exacerbating factors all significantly enhanced the model's predictability, indicating that the source of caregiver distress is multidimensional. In the full model, caregivers who provided more than 36 hours of informal care in the past 3 days (p < .001), lacked support from family and friends (p < .001), had unmet needs (p < .01), experienced role overload (p < .001), had sleep problems (p < .001), and had low IADL functioning (p < .001) are at a higher risk of caregiver distress.
Discussion
Providing care at home is a highly demanding task both emotionally and physically, but informal caregivers are often a neglected and at-risk population [27].This study was the first to translate and validate the Chinese version of the interRAI Self-reported Carer Needs (SCaN) assessment among the Chinese populations.Although the sample sizes in each region are slightly different due to the restraints of resources, we further calculated reliability and validity with sub-samples from each region, and the result shows good consistency (see Table S3).This study also sought to better understand caregiving distress across different patient conditions and caregiver contexts using the MCSB and hierarchical linear regression models.A sensitivity analysis was conducted with PHQ-4 as a measure of caregiving distress.The result shows no significant difference between these two models.It hypothesized that each domain of stressors (background factors of stress, primary stressors, secondary stressors, appraisal, and exacerbating factors) would significantly improve the predictability of the linear regression model for caregiving distress.The result confirmed this hypothesis as entering the five domains all significantly improved the model fit and demonstrated the multidimensionality of distress sources.
In the background factors, our result did not find informal caregiver's age, gender, and relationship to the care recipients as predictors of caregiving distress.Nonetheless, from Model 1 to Model 3, Shanghai caregivers are significantly less distressed, which may contribute to the In the primary stressors, co-residence is not a significant predictor of caregiver distress, which is consistent with previous research showing that Chinese caregivers are more resilient to the stress brought by cohabitation compared to Western caregivers as Chinese family members traditionally live together, and it is perceived as a way to fulfil their care responsibility [46].We also found that informal care time significantly predicts caregiving distress across four models, especially those who provided more than 36 h in the past 3 days.This group comprised 26.55% of the caregivers in our sample, indicating the necessity to identify and provide respite and supportive services to the highly stressed caregivers with high caregiving intensity.
In the secondary stressors, financial difficulty is only significant in Model 3, while lack of support from family and friends remains significant and robust across Models 3 to 5. The findings largely mirror previous studies demonstrating that sufficient family and social support is a basis for caregiver empowerment (e.g., [8]).
In terms of appraisal of the caregiving role, caregivers' unmet needs significantly escalate distress.Informal caregivers often need assistance and support to meet their physical, emotional, social, financial, and mental health needs to enable them to continue the caregiving role, but our result shows that 67.61% of Chinese caregivers have unmet needs.Meanwhile, caregivers with multifaceted roles such as employment, family, and education, in addition to their caregiving role, are at higher risk of experiencing distress.
Lastly, the caregiver's physical health is a robust predictor of caregiving distress, including their sleeping quality (p < .001), IADL performance (p < .001), and self-rated health (p < .05), indicating that caregivers are struggling to maintain their own health while providing care. While previous studies mostly focus on how the care recipient's condition influences caregiving distress, few studies have explored the caregiver's own health conditions. Older adult caregivers (aged over 50 years) particularly suffer from additional health risks due to insufficient time for self-care, the ageing process, and the high prevalence of chronic illnesses [47]. In our sample, the mean age of caregivers is 53.69 (SD = 15.8), and it is important to recognize the physical health needs and caregiving challenges they face to maintain their health and well-being. Otherwise, informal caregivers themselves will increasingly become recipients of care.
Implications and limitations
The validation of the standardized assessment tool provides valid and reliable information and common grounds for international researchers to compare data cross-nationally.Early screening and routine assessment of caregiver distress should be part of the comprehensive care planning to address the collective needs of the care recipient-caregiver dyads.As an online anonymous survey, 70% caregivers completed it independently, demonstrating its feasibility and efficiency of caregiver selfreport in clinical settings.In addition, influenced by the Confucian culture, Chinese caregivers may feel guilty for taking a break from the caregiving tasks and addressing their own needs.Therefore, healthcare practitioners should share the assessment result not only with the caregiver but also with their family, friends, and care recipients to help them validate their role and make their needs visible.In future, more targeted interventions and culturally appropriate support should be developed in alignment with the assessment outcomes.The present study had four limitations.First, the study was cross-sectional; thus, the cause-and-effect relationships were not established, and it was not possible to explore the trajectory of caregiver distress over time.Longitudinal studies are required to overcome this limitation in the future.Second, caregivers were selected by convenience sampling method, which could result in selection bias and limit the generalizability of this study to other populations.The differences in sample sizes across sites can increase the risk of selection bias.Future work should investigate this further in a larger, representative sample to address the potential bias.Third, we did not conduct a test-retest exercise to determine the stability of caregiver responses.Fourth, as the interRAI SCaN assessment captures the caregiver's characteristics, the care recipients' conditions remain largely unknown (e.g., disease types, care dependency level, depressive levels, cognitive abilities).Therefore, we cannot address the care recipients' factors contributing to the caregivers' distress, such as Alzheimer's disease and related dementias (ADRD) and behavioral and psychological symptoms of dementia (BPSD) have been proven major causes of caregiving distress for dementia care recipients (e.g., [48,49]).Nor can we differentiate the caregivers' distress by the acuity and severity of the disease.Future research can link the interRAI SCaN questionnaire and other inter-RAI instruments, such as interRAI Home Care, to examine caregiving distress more comprehensively.Caution is needed in the interpretation of our findings.
Conclusion
In conclusion, our findings suggest that the interRAI Chinese SCaN assessment is a valid and reliable tool among Chinese informal caregivers of older adults. Healthcare professionals should screen early for caregivers who provide long hours of informal care, lack support from family and friends, have unmet needs, experience role overload, have sleep problems, or have low IADL functioning, and should provide supportive services across the caregiving journey.
Note. CG = Caregiver; CR = Care recipient; IADL = Instrumental activities of daily living; PHQ-4 = Patient Health Questionnaire-4; LSNS-6 = Lubben Social Network Scale
Table 2
Internal consistency reliability, concurrent validity, discriminant validity of interRAI Chinese SCaN scales | 2024-05-10T06:17:47.907Z | 2024-05-08T00:00:00.000 | {
"year": 2024,
"sha1": "dcea71c5fce72f9d4ca6837e09d11e4edbc43158",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/counter/pdf/10.1186/s12877-024-05014-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1481f575497f90ad36186adbe21cbcc19876bb87",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155678722 | pes2o/s2orc | v3-fos-license | Intergenerational developments in household saving behaviour
This paper examines the saving behaviour of different generations of households in New Zealand over the period 1984–2010 using data from the Household Economic Survey. The paper employs a life-cycle framework to estimate regression models that identify the influence of age and birth year on household saving rates. The results show that household saving rates exhibit a hump shape over the life cycle and that, from the baby boomers onward, the average saving rates of each generation exceed those of the generation preceding it. These findings suggest that the movement of the baby boomers into their higher saving years has contributed positively to aggregate saving rates, but that future effects of population ageing are likely to be negative. However, it is possible that the lift in saving rates over recent generations will provide an ongoing positive contribution to aggregate saving rates throughout the projection period ending 2030.
Introduction
Elevating household saving rates has been an ongoing focus for policy-makers in New Zealand. In particular, concern is often expressed about the rates of saving among younger generations (e.g., Savings Working Group, 2011). However, owing to the paucity of readily available household-level information, little is actually known about New Zealand households' saving behaviour beneath the aggregate level. This paper uses household-level data from the Household Economic Survey (HES) over the period 1984À2010 to examine the saving rates of different generations of households in New Zealand. The paper employs the life-cycle model as a framework to estimate regression models that identify differences in average household income, consumption expenditure and saving according to birth year (or 'cohort') and age. Knowing these differences is useful for understanding the aggregate saving implications of population ageing and of changes in saving behaviour between older and younger generations. It can also be helpful for assessing the potential effects of various policy interventions designed to raise household saving rates.
The paper extends analysis of HES data for the period 1984À1998 by Scobie (2001a, 2001b) and Scobie and Gibson (2003). This suite of papers represents the only previously published empirical work that examines household saving in New Zealand using household-level saving data. 1 The rest of the paper is organised as follows. Section 2 introduces the dataset and the method for constructing cohorts. Section 3 outlines the approach to estimation, based on a simple life-cycle framework. Section 4 describes the results, followed by sensitivity analysis and extensions in Section 5. Section 6 considers the implications for aggregate saving rates of the foregoing analysis and Section 7 concludes.
Data
This section introduces the HES dataset in Section 2.1, with particular attention to its limitations. Section 2.2 briefly describes the HES sample and the sample restrictions, and the approach to constructing cohorts along with its advantages and disadvantages. Section 2.3 provides the definitions used for saving and the saving rate, and finally Section 2.4 compares the preferred HES aggregate saving rate measure with the national accounts measure of the national household saving rate.
The Household Economic Survey
Household saving rates are calculated in this paper as the difference between household income and expenditure as recorded in HES. Each HES survey provides a rich set of income, expenditure and demographic data for an independent and representative sample of New Zealand resident households. The primary purpose of HES is to provide information for the calculation of inflation measures and for some components of the National Accounts. The survey is analogous to the Living Costs and Food Survey in the United Kingdom, the Consumer Expenditure Survey in the United States, and the Household Expenditure Survey in Australia. Although these surveys are typically the best source of household-level saving data and are frequently used in the international literature, they have some drawbacks. Indeed, while there are no better household-level saving data available for New Zealand, Statistics New Zealand warns that HES is not designed to measure saving and should be used for this purpose with caution (Bascand, Cope, & Ramsay, 2006).
The main drawback of HES is that it underestimates actual household income and expenditure. The problem stems from undercoverage of both the population (for example, those in old-age care institutions are not captured) and the actual total income and expenditure of those households that are surveyed. It is also likely that coverage varies across different types of households, potentially introducing bias into the age and cohort effects identified in this paper. Bascand et al. (2006) provide detail on the coverage and other differences between HES and the national accounts Household Income and Outlay Account (HIOA) saving measure. Fesseau, Wolff, and Mattonetti (2013) show that these differences, between 'micro' and 'macro' income and consumption measures, are common across many countries. The HIOA saving measure is also not free of measurement problems, which has been reflected in substantial historical data revisions over recent years as outlined by Gorman, Scobie, and Paek (2013).
Summing across all survey years, there is a wide range in the total number of observations in each cohort. Cohorts born between 1946 and 1966 each contain more than 1000 households, compared with fewer than 100 households in each of the youngest (birth years 1983-1991) and oldest (1910-1912) cohorts. By age (again summing across all survey years), the range in the number of observations is narrower, although there are still substantially fewer observations at either extreme of the age spectrum compared with mid-life ages.
Defining saving and the saving rate
The preferred measure of household saving in this paper has been chosen to correspond closely to the HIOA definition of household saving. Specifically, saving is defined as

S = Y - C,   (1)

where Y = HES 'total income' - net tax and transfer payments,5 and C = HES 'total household expenditure' - HES 'contributions to savings' - HES 'mortgage principal payments' - HES 'life and health insurance payments' - HES 'purchases of property' + HES 'sale of property' (classified as negative expenditure in HES).
In other words, saving is defined as the difference between household income and expenditure, plus mortgage principal and life and health insurance payments. More detail on this measure, including the New Zealand Household Expenditure codes for the expenditure categories listed, is provided in the Appendix section. Alternative definitions of saving are incorporated as sensitivity analysis in Section 5.1.
Saving rates are defined in the usual way as saving divided by disposable income. For year-cohort cells, saving rates are calculated as the ratio of each cohort's mean (total) saving to mean (total) disposable income,6 i.e.,

sr_bt = Σ_h S_hbt / Σ_h Y_hbt,   (2)

where S_hbt = saving of each household h, belonging to cohort b, in survey year t, and Y_hbt = disposable income of each household h, belonging to cohort b, in survey year t. The aggregation properties of this 'ratio-of-averages' measure (as opposed to an 'average-of-ratios' measure) are useful when considering the implications of results for aggregate household saving, which is similarly calculated as total household saving divided by total household disposable income. In addition, the ratio-of-averages measure reduces the effects of outliers and measurement error. These effects are particularly relevant when using disposable income as the denominator because HES records some households as having very low (or zero) disposable income, which leads to extremely high (or non-defined) negative saving rates for those households and an associated bias in average-of-ratios measures.
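A minimal sketch of this cell-level calculation is given below; the input file and column names ('cohort', 'year', 'saving', 'disp_income') are assumptions.

```python
# Minimal sketch of the ratio-of-averages saving rate for each cohort-year cell:
# sr_bt = sum of household saving / sum of household disposable income in the cell.
import pandas as pd

hes = pd.read_csv("hes_households.csv")     # one row per surveyed household (assumed file)

cells = hes.groupby(["cohort", "year"]).agg(
    total_saving=("saving", "sum"),
    total_income=("disp_income", "sum"),
)
cells["sr"] = cells["total_saving"] / cells["total_income"]
print(cells.head())
```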
A disadvantage of the ratio-of-averages measure is its limited use for understanding behaviour at the household level. Ratios of means are more influenced by higher income households than lower income households, and therefore may be uninformative about households at the median or lower parts of the saving distribution. Ratios of quantiles, such as the median, can be difficult to interpret because the median household by income is not necessarily the same median household by consumption expenditure, so the median saving rate may be derived from two different households. Therefore, to get a better sense of the behaviour of the 'typical' household, household-level data, rather than cohort-year cells, are used as the unit of analysis in the extensions discussed in Section 5.2.
Figure 1 shows the aggregate HES saving rate by year alongside the HIOA gross (i.e., excluding consumption of fixed capital) saving series. The HES saving rate shows a relatively steady upward trend over the sample period. The HIOA saving rate is generally lower than the HES measure and, at least up until 2001, appears to show a downward trend. This divergence in HIOA and HES trends mirrors the divergence in trends seen between analogous measures in the UK and US (Barrett, Crossley, & Milligan, 2010). Although the move to a triennial HES complicates comparison in more recent years, HIOA and HES measures appear to show a similar pattern from 2001, with a dip in the early 2000s and subsequent recovery. Nevertheless, the historic differences between movements in the HES and HIOA measures indicate that caution is needed when drawing conclusions about changes in the latter from changes in the former.7
3. Method: estimating cohort and age effects within a life-cycle framework
The basic life-cycle model assumes that individuals smooth consumption over their lifetimes, saving in one period to consume in another. Because income is typically 'hump shaped' over the life cycle, the life-cycle model predicts that saving will typically exhibit a corresponding hump shape, with individuals saving during working age and dissaving during retirement. Saving behaviour will therefore differ between different individuals at different points in their life cycles. It may also vary over time, and across cohorts, because of the effects of public policy changes, economic growth, and/or fluctuations in the economic cycle that impact different cohorts (at different ages) contemporaneously. This paper generally follows Gibson and Scobie (2001b) by estimating a life-cycle model that allows the separate identification of cohort, age and time effects on saving behaviour, as developed by Deaton (1997).
Comparing Household Economic Survey and national accounts saving rates
In Deaton's model, an individual's consumption expenditure level is proportional to lifetime wealth (W), known with certainty at birth, with a factor of proportionality (determined by preferences) that depends on age. Interest rates are ignored. 8 Therefore, for an individual i born in year b and observed in year t (age = t − b), with preferences represented by the function g, consumption expenditure can be expressed as

$$c_{it} = g(t - b)\, W_i. \qquad (3)$$

This model can be adapted to households by assuming that lifetime wealth is known at the time of household formation, with the function g representing household preferences. Taking logs of Equation (3) and then averaging over all households with a household head in the cohort born in year b and observed at t gives

$$\overline{\ln c}_{bt} = \ln g(t - b) + \overline{\ln W}_{b}, \qquad (4)$$

so that average consumption expenditure is decomposed into the sum of two components, one that depends only on age and one that depends only on cohort. Equation (4) can be estimated by regressing the mean of the logarithms of consumption expenditure for each cohort in each survey year on a set of age and cohort dummy variables. The coefficients on the age dummies recover preferences about intertemporal choice and the coefficients on the cohort dummies capture the lifetime wealth levels of each cohort. There is no need to assume a functional form for preferences or to measure lifetime wealth levels. This model is consistent with some level of uncertainty, provided members of each cohort estimate their future earnings correctly on average. It is also possible to incorporate the effects of 'one-off' macroeconomic shocks that surprise households by adding a function representing time effects. Because of the identification problem caused by the linear relationship between age, cohort and time, it is not possible to capture time effects by simply adding a set of survey-year dummies. To overcome this problem and allow for the separate identification of age, cohort and time effects, a normalisation is used to make the time effects sum to zero and orthogonal to a time trend. 9 This approach effectively attributes any time trends in the data to a combination of age and cohort effects, not to time effects, which capture cyclical fluctuations that average to zero over the long run. The normalisation is restrictive, but can be justified on several grounds, as discussed by Attanasio (1998).
Introducing the normalised time effects together with the age and cohort dummies, and adding variables for the mean number of children n^c (defined as individuals aged 15 or younger) and adults n^a in each year-cohort cell to allow for their different consumption requirements, Equation (4) can be estimated as

$$\ln \bar{c}_{bt} = D_a \alpha^{C} + D_b \beta^{C} + D_t \gamma^{C} + \delta^{C1} \bar{n}^{c}_{bt} + \delta^{C2} \bar{n}^{a}_{bt} + u^{C}_{bt}, \qquad (5)$$

where $D_a$ is a matrix of age dummies, $D_b$ is a matrix of cohort dummies, $D_t$ is the matrix of normalised time dummies, the coefficients $\alpha^{C}$ and $\beta^{C}$ are the age and cohort effects on consumption expenditure, $\gamma^{C}$ represents time effects, $\delta^{C1}$ and $\delta^{C2}$ control for the effect of demographic composition, and $u^{C}_{bt}$ is the error relating to the sample estimate of log mean consumption expenditure for households born in year b and observed in year t. The estimation uses the logarithm of the mean rather than the mean of the logarithms (shown in Equation 4) to better account for the measurement problems discussed in Section 2.1. Household income can be treated in the same way as consumption expenditure. The underlying relationship corresponding to Equation (5) is that income can be expressed as proportional to lifetime resources, where the factor of proportionality depends on age. 10 The corresponding estimated equation for income is thus

$$\ln \bar{y}_{bt} = D_a \alpha^{Y} + D_b \beta^{Y} + D_t \gamma^{Y} + \delta^{Y1} \bar{n}^{c}_{bt} + \delta^{Y2} \bar{n}^{a}_{bt} + u^{Y}_{bt}. \qquad (6)$$

The difference between the logarithm of income and the logarithm of consumption expenditure is a monotone increasing function of the saving-to-income ratio. When the saving ratio is low, this difference is approximately equal to the saving ratio, and so together, the income and consumption expenditure cohort–age–time decompositions provide a decomposition of the saving rate as

$$sr_{bt} \approx \ln \bar{y}_{bt} - \ln \bar{c}_{bt} = D_a (\alpha^{Y} - \alpha^{C}) + D_b (\beta^{Y} - \beta^{C}) + D_t (\gamma^{Y} - \gamma^{C}) + u_{bt}, \qquad (7)$$

where the terms in brackets represent the respective effects on the saving rate of age, cohort and time. Demographic effects are ignored for reasons outlined in the next section. Rather than relying on the approximation of $(\ln Y_{bt} - \ln C_{bt})$ as the dependent variable, Equation (7) can be estimated directly using $sr_{bt}$ as the dependent variable, and this is the main specification adopted for analysing saving behaviour in this paper.
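As an illustration of the normalisation step, the sketch below builds Deaton-style time dummies whose implied effects sum to zero and are orthogonal to a linear trend. The construction and the column names are an illustrative reconstruction under the standard Deaton (1997) formulation, not the paper's own code.

```python
import numpy as np
import pandas as pd

def normalised_time_dummies(years: pd.Series) -> pd.DataFrame:
    """Normalised time dummies in the Deaton style.

    For year index t = 1..T, the regressors included are, for t >= 3:
        d*_t = d_t - [(t - 1) * d_2 - (t - 2) * d_1]
    with d_1 and d_2 dropped, so the implied time effects sum to zero and
    are orthogonal to a linear trend.
    """
    uniq = np.sort(years.unique())
    t_index = {y: i + 1 for i, y in enumerate(uniq)}  # 1-based year index
    d = pd.get_dummies(years).reindex(columns=uniq, fill_value=0).astype(float)
    out = {}
    for y in uniq[2:]:
        t = t_index[y]
        out[f"d_star_{y}"] = d[y] - ((t - 1) * d[uniq[1]] - (t - 2) * d[uniq[0]])
    return pd.DataFrame(out, index=years.index)
```

The resulting columns can be concatenated with ordinary age and cohort dummies to form the design matrix for Equations (5)–(7).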
Results
This section reports regression estimates for the effects of age, cohort and time on saving rates corresponding to the equations for consumption expenditure (5), income (6) and saving (7). 11 Although saving rates are the principal focus, the separate consumption and income analysis provides a useful breakdown of the saving patterns. A constant is included in each equation, and variables for the mean number of children and the mean number of adults are also included in the consumption expenditure and income equations, but excluded from the saving equation. Excluding these demographic variables (which enter in the extensions covered in Section 5) from the main saving equation makes little difference to the pattern of results (implying their effects on income and consumption offset one another), but greatly simplifies the consideration of aggregate implications covered in Section 6. Each equation is estimated on 1064 year-cohort cells using weighted least squares, with the weights equal to the number of household observations in each year-cohort cell. The weighting method provides an efficient way of estimating parameters when using cell averages (as opposed to household-level data), by accounting for the greater variance in cells with fewer observations.
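For concreteness, the weighted least squares step described above might look as follows in Python, with each year-cohort cell weighted by its household count; the DataFrame layout and column names are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.api as sm

def fit_cell_wls(cells: pd.DataFrame, design: pd.DataFrame):
    """Weighted least squares on year-cohort cell data.

    `cells` holds one row per cell with a 'saving_rate' (or log mean income /
    consumption) column and an 'n_households' column; `design` holds the age,
    cohort and normalised time dummies (hypothetical layouts)."""
    X = sm.add_constant(design)
    y = cells["saving_rate"]
    return sm.WLS(y, X, weights=cells["n_households"]).fit()
```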
To provide some context to the regression estimates, Section 4.1 first shows the raw saving rate data by household head age divided into 10-year-birth cohort groups. Sections 4.2 and 4.3 help to clarify the observed patterns in the raw data by reporting regression estimates, in the form of figures, for age and cohort effects. Finally, Section 4.4 reports estimated time effects for the saving Equation (7).
Saving rate age profiles by cohort
Figure 2 provides a first glimpse of the age and cohort patterns in household saving. Each point in the figure shows the mean cohort saving rate, across surveys, at a particular household head age. 12 For example, the saving rate at age 35 for the 1950–1959 cohort is the mean of the rate for 35-year-old-headed households across the 10 surveys from 1985 to 1994. Although the use of 10-year-birth cohorts smoothes the picture considerably, substantial underlying volatility in the data is still evident. Two interesting patterns are nonetheless observable. First, there is a tendency for cohorts born after the 1930s to have successively higher saving rates than previous cohorts. It is less clear whether there is a difference in behaviour between the older three cohorts. Second, there appears to be a hump-shaped profile in saving rates over the life cycle, as predicted by the life-cycle model. Saving rates tend to be lower (or negative) at the younger household head ages, before rising to peak when household heads are in their fifties. Saving rates decline at ages thereafter, but they do not become consistently negative at older household head ages and in fact appear to increase from ages in the mid-to-late sixties.
A rise in saving rates during old age is inconsistent with the predictions of the life-cycle model, and it perhaps reflects some remaining wealth–mortality bias (or household dissolution bias) in the data, despite the truncation of the sample to household heads younger than 74 years old. On the other hand, non-negative saving rates in old age are not surprising here because of the treatment of pension income as income rather than as the drawdown of savings. Jappelli and Modigliani (1998) argue that public pension payments should be treated as dissaving, rather than as transfer income, and that tax and other payments that contribute to these pensions should be treated as saving, rather than as reductions in disposable income. Unless these adjustments are made, they claim that true household saving will be understated during the pre-retirement period and overstated during retirement. Coleman (2006) makes these adjustments for New Zealand and finds a marked impact on measured saving rates in the expected directions. Such tax and pension payment adjustments are not made in this paper. 13

Age effect estimates

It is worth emphasising that because no one cohort is observed across all household head ages, the single age profile for all cohorts is estimated using observations from age ranges that are different for each cohort, controlling for cohort and time effects. The assumption that age effects are constant across cohorts and time is relaxed as part of sensitivity analysis described in Section 5.3. The age profiles in Figure 3 show the estimated mean percentage difference that age makes to disposable income and consumption expenditure. 14 The estimates include controls for the number of adults and children in the household. Income exhibits a clear hump shape, rising through younger ages to peak when household heads are in their 50s, before falling at older ages. Consumption expenditure also exhibits a hump shape, but it is less pronounced. The variation in consumption expenditure over the life cycle, which is consistent with findings for other countries, appears to contradict the predictions of the basic life-cycle hypothesis. However, it is not inconsistent with more sophisticated life-cycle models that incorporate, for example, precautionary motives, as discussed by Browning and Crossley (2001).
After controlling for cohort and time effects, the estimated age profile in Figure 4, which includes 95% confidence intervals, reveals the shape of household saving rates over the life cycle more clearly than Figure 2. Household saving increases sharply when household heads are in their early twenties, before rising more gradually to peak when they are in their mid-to-late fifties. Although they decline at ages thereafter, saving rates for households with older household heads remain generally above those of households with heads in their thirties and forties. The late-age rise in saving rates apparent in Figure 2 is also visible in Figure 4.
Cohort effect estimates
Estimated cohort effects for the consumption expenditure, income and saving equations are shown in Figures 5 and 6. The reference cohort is set to the 1930 birth cohort. As with the estimation of age effects, each cohort effect is estimated using observations from an age range that is different for each cohort, controlling for age and time effects. The assumption that cohort effects are constant across age and time is also relaxed as part of sensitivity analysis described in Section 5.3.
The cohort effects in Figure 5 represent the estimated percentage difference that birth year makes to the mean consumption expenditure or income of a cohort, compared with the 1930 cohort, at any given age between 19 and 74 years old. For example, the mean disposable income of a household headed by someone born in 1960 is estimated to be around 50% higher, at any age, than that of a household headed by someone born in 1930. Figure 5 shows a trend increase in both mean disposable income and consumption expenditure by cohort between birth years in the 1910s and 1950s, with consumption expenditure rising more rapidly than income across the earlier-born cohorts and less rapidly across the later-born cohorts. For cohorts born after the 1950s, consumption expenditure appears to broadly plateau, while disposable income continues generally to increase by cohort up until birth years in the 1980s. There then appears to be some decline in both disposable income and consumption expenditure through cohorts born during the 1980s. However, there is clearly more year-on-year volatility (and lower statistical significance) in the estimates for these youngest cohorts, reflecting a smaller number of household observations in the sample. More caution is therefore required in interpreting the estimates for these youngest cohorts. Figure 6 shows the estimated cohort effects on saving rates, along with 95% confidence intervals. These effects indicate the estimated percentage point difference that birth year makes to the mean saving rate of a cohort. Consistent with the cohort trends in income and consumption expenditure, there is a general decline in the mean saving rate by cohort between those with birth years from 1910 to around 1930. For cohorts born subsequently, there appears to be a near-linear trend increase in mean saving rates, through to those born in the mid-1980s. The rise in estimated mean saving rates by cohorts over this period is substantial. Saving rates of households with a 'baby boomer' head (roughly those born between the mid-1940s and the mid-1960s) are approximately 15 percentage points higher than those of households with heads in the 'silent generation' (born in the 20 years previous). Saving rates of households with heads in Generation X (born between 1965 and 1980) are approximately 12 percentage points higher than those of households with baby-boomer heads. Saving rates of households with heads in Generation Y (born between 1981 and 1991) are approximately eight percentage points higher again.
The widening confidence intervals towards the right (and left) of Figure 6 clearly illustrate the loss of statistical significance that occurs at both ends of the birth-year spectrum, as a result of smaller samples. In addition, because these cohorts are observed at relatively few ages and in few surveys (compared with the middle cohorts), there is a need to be more cautious in interpreting the estimates as being representative of their lifetime behaviour.
Cohort effects interpretation
According to the basic life-cycle model, cohort effects should be zero, because any difference in lifetime income across cohorts is matched by differences in lifetime consumption. An exception, in which cohort effects would be expected to be positive, is possible if intergenerational bequests, as a proportion of income, increase with the level of income (i.e., bequests are a luxury good). Empirically, estimated cohort effects might also be expected to be nonzero if the retirement period is not fully captured in the data. In this case, a positive estimated cohort effect could represent greater accumulation of wealth by that cohort during working life to support consumption during the retirement period. This might occur, for example, if younger cohorts have lower expectations of the level of publicly provided services in old age, or because of increases in life expectancy.
Empirical studies using comparable frameworks to this paper have interpreted variation in estimated cohort effects in different ways. Some, such as Attanasio (1998), Scobie and Gibson (2003) and Chamon and Prasad (2010) assume that they indicate true differences in saving behaviour between cohorts. Others consider nonzero cohort effects as anomalous features of the data (Deaton, 1997) or as representative of measurement error (Dynan, Edelberg, & Palumbo, 2009). In the context of this study, it would seem reasonable to assume that the pattern of cohort effects likely reflects elements of both measurement error and true effects.
The existence of measurement error in the cohort effects is suggested by the difference in trends between the aggregate HES and HIOA saving rate measures shown in Figure 1, together with the fact that the estimated cohort effects account for much of the trend increase in the HES measure, as discussed in Section 6. An example of how measurement error may bias the estimated cohort effects relates to the survey coverage issues discussed in Section 2.1. If a category of expenditure (or income) is underestimated by HES, and the proportion of household expenditure on this category increases systematically with birth-year cohort (adjusting for age), then this would bias estimated cohort effects upward compared with their true value. Further work being undertaken by Statistics New Zealand may shed more light on how measurement error in HES may affect the findings of this study.
On the other hand, as argued by Scobie and Gibson (2003), the identified differences between cohort saving rates are consistent with the evolution of policy and economic conditions during the last century, and may therefore reflect true cohort effects. In particular, the period from 1950 to 1980 was marked by the prevalence of relatively 'favourable conditions' for New Zealand households, with high levels of public sector welfare provision and low unemployment. This benign period would explain why cohorts who were in their peak-earning ages at the time are found to have lower lifetime saving rates than older or younger cohorts. In support of this argument, Talosaga and Vink (2014) provide strong empirical evidence showing that a lift (between 1992 and 2001) in the eligibility age for New Zealand's public pension, New Zealand Superannuation, led to higher saving rates among affected (younger) cohorts. 15 Increases in life expectancy provide another plausible explanation for rising cohort effects over recent generations. For example, cohorts born in 1991 (the youngest in the sample) are projected to have a life expectancy at an age of 50 years old that is more than 10 years longer than cohorts born in 1930 are estimated to have had at the same age (Statistics New Zealand, 2014). The effect of these increases in life expectancies on saving rates will depend on the extent to which households make corresponding adjustments to their expected retirement age. The effects will be lower if longer expected lifetimes are matched with longer expected working lives.

Time effect estimates

Figure 7 shows estimated time effects for saving Equation (7), along with 95% confidence intervals and an indication of the timing of New Zealand's economic recessions. 16 As noted in Section 3, these time effects sum to zero, are orthogonal to a time trend, and can be interpreted as representing macroeconomic shocks. Consistent with the literature, the estimates suggest that recessions are associated with higher saving rates, while booms tend to correlate with lower saving rates. 17 Although the estimated time effects are statistically significant, their exclusion from Equations (5), (6) and (7) does not materially affect the pattern of age and cohort effects discussed above.
Extensions and sensitivity analyses
This section provides estimates for alternative specifications to those used to generate the main results presented in the previous section. These alternative specifications provide a check of the main results, as well as additional insights into household saving behaviour. Section 5.1 considers the analysis using two definitions of saving that have been used in other studies of saving in New Zealand. 18 Section 5.2 changes the unit of analysis to the household and examines the effects of different household characteristics on saving behaviour. Finally, Section 5.3 considers several examples which relax the assumption that age, cohort and time effects are constant.
Alternative measures of saving
As discussed in Section 2.3, the preferred measure of saving in this paper corresponds to the HIOA definition of household saving. However, other definitions may be preferable from an economic perspective. Gibson and Scobie (2001b) use a definition of saving that classifies expenditure on items that provide consumption benefits over more than a year, such as consumer durables, health and education, as saving rather than consumption. The justification for this definition is based on the element of durability of these expenditure items, which means they may be better considered as 'investment items' (and therefore a form of saving) rather than consumption. 19 Gorman et al. (2013) make an adjustment for these investment items and show substantial effects on measured household saving at the aggregate level in New Zealand. They also calculate another household saving measure, which includes an adjustment to incorporate the fact that the inflation component of nominal interest charged on outstanding financial liabilities is an implicit capital repayment (not an income payment) to the lender. If the inflation component of interest payments is considered capital, unadjusted household saving rates overstate (understate) the 'true' saving of lenders (borrowers), especially when inflation is high. Figures 8 and 9 show the results for age and cohort effects of estimating Equation (7) using these two alternative saving measures. The overall pattern in both age and cohort effects is similar for each of the measures. Of the two alternatives, classifying investment expenditure items as saving leads to larger differences from the HIOA measure, with reduced cohort effects (Figure 8) and a less pronounced age profile (Figure 9). These differences reflect the fact that consumer durable spending as a ratio of income is highest for younger age groups and that this ratio has declined substantially over the sample period as the price of consumer durables relative to nondurables has fallen.
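A rough sketch of how the two alternative saving measures could be constructed from household records is given below. The column names, the set of reclassified 'investment items', and the sign convention for net financial liabilities (assumed positive for net borrowers) are illustrative assumptions, not a reproduction of the Gorman et al. calculations.

```python
import pandas as pd

def alternative_saving_measures(df: pd.DataFrame,
                                inflation_rate: float) -> pd.DataFrame:
    """Two alternative household saving measures (hypothetical column names).

    1. 'Investment-item' measure: spending on durables, health and education
       is reclassified from consumption to saving.
    2. Inflation-adjusted measure: the inflation component of nominal interest
       on net financial liabilities is treated as a capital flow rather than
       an income/expense flow (liabilities positive for net borrowers)."""
    out = df.copy()
    invest = out[["durables", "health", "education"]].sum(axis=1)
    out["saving_invest"] = out["disp_income"] - (out["consumption"] - invest)

    infl_component = inflation_rate * out["net_financial_liabilities"]
    out["saving_infl_adj"] = (out["disp_income"] + infl_component
                              - out["consumption"])
    return out
```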
Household-level analysis
As discussed in Section 2.3, the ratio-of-averages saving rate measure used in the paper up until this point provides limited insight into the saving behaviour of typical households. In this section, Equation (7) is re-estimated using household-level saving rates as the dependent variable. Quantile regression is used to reduce the effects of outliers and measurement error, which would lead to substantial bias in least squares regression because of the presence of households with near-zero incomes. 20 Using household-level data also allows the model to be conditioned for household characteristics. These characteristics are captured by including dummy variables representing differences in gender, ethnicity, employment status, dwelling tenure, family structure and education. 21 The influence of these conditioning variables on saving rates is interesting in its own right. Because of the potential for the composition of the sample to change over time, the conditioning variables also provide a useful check on the robustness of the estimates. Figures 10 and 11 compare the estimated age and cohort effects reported in Sections 4.2 and 4.3 with those estimated using median regression, with and without conditioning variables. Overall, the results are similar for the different specifications, lending support to the robustness of the main results. The size of the age and cohort effects is somewhat lower at the median than for the main estimates, suggesting that the influence of age and cohort on saving is most marked at the upper end of the income distribution. The results from quantile regressions at the 25th and 75th percentiles (not shown) corroborate this suggestion, with respectively lower and higher degrees of variation in estimated age and cohort effects than at the median.
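A minimal sketch of the household-level quantile regression, using statsmodels, is shown below. The conditioning variable names are hypothetical stand-ins for the gender, ethnicity, employment, tenure, family-structure and education dummies described above, and the normalised time dummy columns are assumed to be precomputed.

```python
import statsmodels.formula.api as smf

def fit_median_regression(households, time_dummy_cols, conditioned=True, q=0.5):
    """Quantile regression of household-level saving rates on age and cohort
    dummies plus normalised time dummies, optionally with conditioning
    variables (all column names are hypothetical)."""
    terms = ["C(age)", "C(birth_year)"] + list(time_dummy_cols)
    if conditioned:
        terms += ["male_head", "maori_pacific_head", "employed_head",
                  "owns_freehold", "owns_with_mortgage", "sole_parent",
                  "couple_with_children", "school_qual", "vocational_qual",
                  "tertiary_qual"]
    formula = "saving_rate ~ " + " + ".join(terms)
    return smf.quantreg(formula, data=households).fit(q=q)

# Setting q=0.25 or q=0.75 gives the lower- and upper-quartile regressions
# referred to in the text.
```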
The estimated coefficients of the conditioning dummy variables are shown in Figure 12 and discussed briefly below. 22 Gender has a relatively large and statistically significant saving effect, with male-headed households saving nearly six percentage points more than female-headed households. Maori or Pacific ethnicity has a smaller positive but significant effect, with Maori- or Pacific-headed households' saving rates four percentage points higher than those of households with heads of other ethnicities. Owning a house with a mortgage has little effect on saving rates compared with renting, but owning a house freehold is associated with saving rates six percentage points higher than for renting households. Having children has a negative effect on saving rates, lowering saving rates by two percentage points for sole parents and six percentage points for couples. The effects of basic educational qualifications, while small, are surprisingly negative, at two percentage points each for high school and vocational qualifications, while tertiary education has no statistically significant effect. This apparent anomaly is partially explained by the correlation of these variables with employment. Household head employment has the largest effect of the conditioning variables, at 12 percentage points (compared with not working). Reflecting this and employment's correlation with other factors, excluding the employment dummy has a significant effect on several of the other coefficients. Most notably, it reduces the size and significance of the negative saving effects of educational qualifications, raises the negative effect of sole parenthood, raises the positive effect of male gender and reduces the effect of Maori or Pacific ethnicity.
Allowing variation in age, time and cohort effects
The empirical model used in this paper assumes that age, time and cohort effects are constant and independent from one another. This means, for example, the shape of the estimated age profile of saving does not vary across time, in response to policy or other environmental changes, or by cohort. Rather, average differences between the saving behaviour of cohorts are captured by a level shift (the cohort effect) in the age profile, which is constant across all ages. There are limitations in the extent to which these assumptions can be relaxed because no cohorts are observed across all ages and time periods. However, the following subsections provide some insight into how age and cohort effects may have evolved over time (Section 5.3.1), how age effects may differ by cohort (Section 5.3.2) and how cohort effects may differ by age (Section 5.3.3).
Changes in age and cohort effects over time
This section compares estimates for Equation (7) obtained from the earlier surveys with those obtained from the later surveys (1994–2010). Time effects have been excluded from the equations because the shorter sample periods make it difficult to separate trends from fluctuations. This exclusion, together with generally smaller sample sizes, reduces the precision of the estimates. Nonetheless, as shown by Figures 13 and 14, the estimated cohort and age effects show similar overall patterns, with some differences, for the two samples. For cohort effects, there is a sharper rise for middle cohorts in the earlier sample, perhaps reflecting precautionary-type saving among these cohorts who were of prime working age during the turbulent economic years of the late 1980s and early 1990s. For age effects, the profile in the later sample is generally flatter, apart from a steep increase between household head ages of 19 and 25 years old. In addition, in the later sample, the old-age decline in saving occurs at an age around five years older than in the earlier sample. This delayed decline may reflect the influence of the increase in the eligibility age for New Zealand Superannuation, from 60 to 65 years old between 1992 and 2001.
5.3.2. Changes in age effects by cohort: accounting for the change in the age of pension eligibility

The increase in the eligibility age for New Zealand Superannuation only affected cohorts born after 1932. If, as suggested by Figure 14, this increase affected the saving behaviour of these cohorts, it may be distorting the estimated age profile. A potential way to address this distortion is to replace the age dummy variables in Equation (7) with dummy variables based on 'years-until-expected-retirement', calculated as the expected New Zealand Superannuation eligibility age at time t minus age at time t. 23 The expected eligibility ages for New Zealand Superannuation for each birth cohort at time t are assumed to be those of government policy at time t, and these are set out in more detail by Talosaga and Vink (2014). 24 The patterns of estimated cohort and age effects for this alternative specification are very similar to those of the preferred specification presented in Figures 4 and 6. As expected, the lifetime profile has a more pronounced hump shape with a peak just prior to the expected retirement age and a more consistent decline over post-retirement ages.
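The 'years-until-expected-retirement' regressors could be constructed along the following lines; the eligibility-age schedule itself (as tabulated by Talosaga and Vink, 2014) is not reproduced here and must be supplied by the user.

```python
import pandas as pd

def retirement_horizon_dummies(age: pd.Series,
                               expected_eligibility_age: pd.Series) -> pd.DataFrame:
    """Dummy variables based on 'years-until-expected-retirement', defined as
    the expected NZ Superannuation eligibility age under the policy announced
    at the survey date minus the household head's current age.

    Both input Series are assumed to be aligned on the same household index."""
    horizon = (expected_eligibility_age - age).astype(int)
    return pd.get_dummies(horizon, prefix="yrs_to_retirement")
```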
Changes in cohort effects at pre- and post-retirement age
The assumption of constant cohort effects across the life cycle, which is required for the identification of a saving age profile, is a strong one. It implies that differences in saving between cohorts cannot change across time, because of policy changes for example, and that differences in saving between cohorts are never spent within the sample age range. As noted in Section 4.3.2, the second implication is only tenable if it is assumed that higher saving cohorts have higher dissaving at ages above the sample range, or higher bequests. In effect, this assumption involves the reversal of estimated positive and negative cohort effects in later life. The retirement age is a natural point at which a reversal in cohort effects might be expected. One way to roughly test whether this reversal occurs is to split the sample into those households with heads who are eligible to receive New Zealand Superannuation (as a proxy for retirement) and those who are not, and to rerun the regressions on the two subsamples. The estimated cohort effects for the 'pre-retirement subsample' closely match those estimated for the full sample. However, the estimates for the 'retirement subsample' are substantially lower, with negative cohort effect point estimates for more than half of the cohorts in the subsample. The majority of negative point estimates reverses the pattern of nearly all positive estimates for cohorts shown in Figure 6. Although this is a crude test, the results would appear to be consistent with the life-cycle model's prediction that cohorts with higher saving rates during working ages have lower rates of saving (or higher rates of dissaving) during retirement.
Implications for aggregate saving
A useful feature of the life-cycle model outlined in Section 3 is its provision of a simple accounting framework for describing changes in the aggregate saving rate according to changes in population structure and income growth. Using this framework, the trend aggregate saving rate can be predicted as the weighted sum of age effects (or 'life-cycle effects') and cohort effects, where the weights are determined by each cohort's share of aggregate disposable income. Specifically, the trend aggregate saving rate can be estimated as

$$\widehat{sr}_{t} = \frac{\sum_{b} Y_{bt}\,(\hat{\tau} + \hat{\alpha}_{t-b} + \hat{\beta}_{b})}{\sum_{b} Y_{bt}}, \qquad (8)$$

where $Y_{bt}$ is the aggregate disposable income of each birth cohort, $\hat{\alpha}_{t-b}$ and $\hat{\beta}_{b}$ are the respective estimated age and cohort effects on saving for each cohort b as reported in Section 4, and $\hat{\tau}$ is the estimated constant. Growth in the aggregate income of successive cohorts, owing to population and/or economic growth, increases the weighting of younger cohorts in the aggregate, and thereby the relative size of younger cohorts' contribution to the aggregate saving rate. To illustrate, in a 'stripped-down' model where saving occurs pre-retirement and accumulated wealth is spent in retirement (with no cohort effects), economic or population growth leads to an increase in the aggregate saving ratio as the total saving of the young exceeds the dissaving of the elderly. This framework can be used to approximate how changes in New Zealand's population structure, and the ageing of the baby boomers in particular, has affected and might affect the aggregate saving rate through life-cycle effects. Clearly, the precision of such approximations is limited to the extent that some population groups, especially the elderly, are not captured in the sample. Figure 15 shows the distribution of households in the HES survey by age of household head in 1984 and 2010. The figure shows the shift between surveys of the baby boomer bulge, from the low-saving young age groups toward the high-saving middle-aged groups.
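A small sketch of the Equation (8) calculation is given below; the container layouts for the cohort income weights and the Section 4 estimates are assumptions for illustration only.

```python
import pandas as pd

def predicted_aggregate_saving_rate(cohort_income: pd.DataFrame,
                                    age_effects: dict,
                                    cohort_effects: dict,
                                    constant: float) -> pd.Series:
    """Equation (8): income-weighted sum of estimated age and cohort effects.

    `cohort_income` has columns 'survey_year', 'birth_year' and 'income'
    (aggregate disposable income of each cohort); `age_effects` maps age to
    alpha-hat and `cohort_effects` maps birth year to beta-hat (hypothetical
    containers for the Section 4 estimates)."""
    df = cohort_income.copy()
    df["age"] = df["survey_year"] - df["birth_year"]
    df["predicted_rate"] = (constant
                            + df["age"].map(age_effects)
                            + df["birth_year"].map(cohort_effects))
    weighted = (df["income"] * df["predicted_rate"]).groupby(df["survey_year"]).sum()
    total = df.groupby("survey_year")["income"].sum()
    return weighted / total
```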
The contribution to the aggregate saving rate of changes in the population structure through the life-cycle channel is approximated following the approach of Dynan et al. (2009). The income-weighted average of the predicted saving rate of each household head age group in 1984 is multiplied by the change in the share of the population represented by that group in subsequent years. Figure 16 shows this contribution, with projections to 2030 based on Statistics New Zealand population projections (Statistics New Zealand, 2012), alongside the predicted trend (using Equation 8) in the aggregate saving rate over the sample period. 25 The figure shows that changes in the population structure through the life-cycle channel contributed approximately one-quarter of the trend increase in aggregate saving between 1984 and 2010. The greatest increase in contribution occurred over the late 1990s as the baby boomers entered their high-saving fifties.
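One reading of the Dynan et al. (2009) decomposition step could be coded as follows; the data structures and the exact weighting are illustrative assumptions rather than a reproduction of the paper's calculation.

```python
import pandas as pd

def life_cycle_contribution(base_year_rates: pd.Series,
                            base_year_income: pd.Series,
                            pop_shares: pd.DataFrame) -> pd.Series:
    """Approximate contribution of population-structure change to the
    aggregate saving rate, in the spirit of Dynan et al. (2009).

    `base_year_rates` and `base_year_income` are indexed by household-head
    age group in the base year (1984); `pop_shares` holds each age group's
    population share by year (rows = years, columns = age groups)."""
    # Income-weight the base-year predicted saving rate of each age group.
    weights = base_year_income / base_year_income.sum()
    weighted_rates = base_year_rates * weights
    # Multiply by the change in each group's population share since the base year.
    share_change = pop_shares.sub(pop_shares.iloc[0], axis=1)
    return share_change.mul(weighted_rates, axis=1).sum(axis=1)
```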
The total contribution to the aggregate saving ratio through the life-cycle channel combines the contribution from changes in population structure with the contribution from changes in income across cohorts. The latter contribution is almost zero because the low average rate of income growth across birth-year cohorts makes little difference to the distribution of the cohort income weights over time. 26 As a result, the difference between the aggregate trend and the life-cycle contribution shown in Figure 16 can be almost entirely attributed to cohort effects.
Statistics New Zealand's population projections to 2030 suggest that changes in the population structure are unlikely to provide an additional future boost to aggregate saving through the life-cycle channel, assuming that the estimated age profile of saving is unchanged. In fact, they are likely to put downward pressure on the aggregate rate from the 2020s as the baby boomers become increasingly represented among the lower saving elderly. This downward pressure may be greater than that shown in Figure 16 to the extent that the oldest age groups are excluded from the sample. On the other hand, the future contribution from cohort effects is likely to be positive throughout the projection period, assuming that the identified pattern of cohort effects persists and future cohorts (not captured in the sample) save at comparable rates to the youngest cohorts in the sample. This positive contribution will be underpinned by the fact that both the cohort effects and projected population sizes of cohorts reaching retirement generally increase through to 2030.
Conclusion
This paper used household-level data from HES between 1984 and 2010 to characterise the life-cycle saving behaviour of different generations of households. The results can be summarised in two main findings. First, household saving over the life cycle exhibits a hump shape, as predicted by the basic life-cycle model, with a peak when household heads are aged in their mid-to-late fifties. The life-cycle pattern of saving is associated with a more pronounced hump-shaped age profile of disposable income than of consumption expenditure. Second, there are significant differences between the estimated average saving rates of different cohorts over the sample age range of 19–74 years. In particular, after accounting for age and one-off time effects, there is a near-linear trend increase in the average saving rates of cohorts born between the early 1930s and those born in the 1980s. From the baby boomers onward, the saving rates of each generation exceed those of the generation preceding it. This trend increase in saving rates reflects ongoing rises in disposable income accompanied by more moderate-to-flat growth in consumption expenditure across cohorts. One plausible explanation for the rise in saving rates, which is supported by the findings of other research, is that it reflects responses to an 'unfavourable' evolution in the general economic and policy environments faced by successive cohorts born since the 1930s. These two findings are robust to various sensitivity tests, including the use of alternative measures of saving; the introduction of conditioning variables to account for differences in household characteristics; and relaxing the assumption of constant age, cohort and time effects. The findings are consistent with the results of the only previous similar work in New Zealand (Gibson & Scobie, 2001a, 2001b; Scobie & Gibson, 2003), which used the same HES data set for a shorter time period, 1984–1998. However, while no better household-level saving data exist, the potential influence of measurement error in HES presents an important caveat to the analysis. This error is evident in the divergence in trends between the aggregate saving rate based on HES data and the corresponding national accounts measure, and it could bias the estimated effects of age and cohort on saving rates. Ongoing work by Statistics New Zealand may provide more information about the nature of these potential biases.
Although the potential influence of measurement error cautions against making overly strong inferences, the estimates of age and cohort effects may provide some insight into the underlying influence of demography on national household saving trends. In particular, an increase in the proportion of the population in high-saving age groups contributed approximately one-quarter of the overall trend increase in the aggregate HES saving rate between 1984 and 2010. However, population projections suggest that future positive contributions from this source are unlikely, with the ongoing ageing of baby boomers in retirement expected to weigh on the aggregate saving rate from the 2020s onward. The remaining three-quarters of the aggregate trend increase between 1984 and 2010 is attributable to the rise in average saving rates of successive cohorts born since 1930. If the differences in average rates between cohorts persist into the future, cohort effects are likely to continue to make a positive contribution to aggregate saving rates throughout the projection period ending 2030.
year preceding the interview date. This means that the date pertaining to some data may vary by up to 24 months between households in the same survey.
17. A recent example of the literature discussing household saving and economic recessions is Alan, Crossley, and Low (2012).
18. More detail on the construction of these saving measures is included in the Appendix section.
19. The ideal, but unworkable, approach here would be to exclude changes in the stock of durable expenditures from consumption expenditure and to add to consumption expenditure the value of services obtained from the stock.
20. The 79 households with zero recorded income are excluded from the sample, to leave a total sample size of 56,344.
21. Questions related to educational qualifications were not available for some households in the earlier survey years. The conditioned equations are therefore estimated on a reduced sample. Estimates for cohort, age and the other conditioning variables are not materially affected by the reduced sample.
22. It is beyond the scope of this paper to explore these results in detail, which could be a fruitful avenue for future work.
23. The author thanks Andrew Coleman for this suggestion.
24. In addition to the changes announced in the 1991 Budget, the expected eligibility age variable also accounts for the more gradual increase in the New Zealand Superannuation eligibility age announced in 1989, which involved the eligibility age increasing from 60 to 65 years between 2006 and 2026. Obviously, there may be differences between individuals' expectations in relation to the future eligibility age and actual government policy at the time.
25. The life-cycle-related calculations in Figure 16 are made using Statistics New Zealand's population-by-age estimates, not the HES survey data shown in Figure 15, because the population-by-age estimates include future projections. There is little difference in the results of the aggregate calculation using the two alternative data sets.
26. The estimated cohort effects indicate a compound average income growth rate of approximately ¾% per birth year between 1910 and 1990.
27. As discussed in Section 5.3.3, it seems likely that differences between the saving behaviour of cohorts would reverse at some point in the retirement period, but the data limitations prevent a precise estimation of how this occurs. Nevertheless, it seems reasonable to assume that the overall contribution from cohort effects will be positive for as long as both the population size and average saving rate of cohorts reaching retirement exceed those of the (older) cohort before them.
"year": 2016,
"sha1": "3741696a8493bc3ee33fe322b2c5522640e18c6d",
"oa_license": "CCBY",
"oa_url": "https://www.econstor.eu/bitstream/10419/205678/1/twp2014-23.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "887156e2093c32d3f54463949147442ba2f72e0c",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Electrochemical Immunosensors with PQQ-Decorated Carbon Nanotubes as Signal Labels for Electrocatalytic Oxidation of Tris(2-carboxyethyl)phosphine
Nanocatalysts are a promising alternative to natural enzymes as the signal labels of electrochemical biosensors. However, the surface modification of nanocatalysts and sensor electrodes with recognition elements and blockers may form a barrier to direct electron transfer, thus limiting the application of nanocatalysts in electrochemical immunoassays. Electron mediators can accelerate the electron transfer between nanocatalysts and electrodes. Nevertheless, it is hard to simultaneously achieve fast electron exchange between nanocatalysts and redox mediators as well as substrates. This work presents a scheme for the design of electrochemical immunosensors with nanocatalysts as signal labels, in which pyrroloquinoline quinone (PQQ) is the redox-active center of the nanocatalyst. PQQ was decorated on the surface of carbon nanotubes to catalyze the electrochemical oxidation of tris(2-carboxyethyl)phosphine (TCEP) with ferrocenylmethanol (FcM) as the electron mediator. With prostate-specific antigen (PSA) as the model analyte, the detection limit of the sandwich-type immunosensor was found to be 5 pg/mL. The keys to success for this scheme are the slow chemical reaction between TCEP and ferricinum ions, and the high turnover frequency between ferricinum ions, PQQ, and TCEP. This work should be valuable for the design of novel nanolabels and nanocatalytic schemes for electrochemical biosensors.
Introduction
Electrochemical immunosensors have been attractive for a broad range of applications because of their exceptional attributes, including high sensitivity and selectivity, rapid response, low cost, and compatibility with miniaturization [1][2][3][4]. As regards the immunosensing formats, sandwich-type structures are among the most popular schemes, especially for ultralow concentrations of analyte [5,6]. In this assay, the crucial step is to enhance the detection sensitivity by a signal amplification strategy. Traditionally, enzyme-based signal amplification by horseradish peroxidase (HRP) or glucose oxidase (GOx) is the most common approach to improve the sensitivity of electrochemical immunoassays. The natural enzymes allow for fast and selective catalytic reactions; however, their practical applications in electrochemical immunoassays are still limited due to their costly and complicated preparation process and relatively poor stability [1,3,7]. Another drawback of enzyme-based, sandwich-type immunoassays is that the active center embedded in the insulating peptide backbone of the enzyme label cannot approach the electrode sufficiently closely, thus limiting the direct electron exchange with the electrode. For this consideration, nanomaterials as signal labels without enzyme modification have numerous remarkable merits in terms of improving sensitivity [1,3,8,9].
In general, nanomaterials can be exploited as nanocarriers to load signal markers, or can be directly used as signal reporters. Recently, metallic nanomaterials that can accelerate the electron transfer rate and exhibit fascinating catalytic activity have been employed as electrocatalysts to enlarge the electrochemical signal and avoid the shortcomings of enzymes, such as thermal and environmental instability [10][11][12][13][14][15][16]. However, there are still at least two drawbacks with nanoelectrocatalysts as signal labels for fabricating sandwich-type immunosensors: (1) long distances (a few tens of nanometers) between the nanoelectrocatalysts and the electrode may limit the direct electron transfer, and (2) surface modification of nanoelectrocatalysts and sensor electrodes with recognition elements and blockers may depress the electrocatalytic activity and make the direct electron transfer less feasible [1,3,17]. Therefore, there still remains significant room to develop novel nanolabels or nanocatalytic schemes for constructing sandwich-type immunosensors with high simplicity and sensitivity.
To accelerate the electron transfer between redox enzyme labels and electrodes, an electron mediator is usually used to shuttle the electrons. However, for the signal labels of nanocatalysts (in particular metal nanocatalysts), the electrochemical immunoassay promoted by the electron-transfer mediator is rarely reported, since it is hard to simultaneously achieve fast electron exchange between nanocatalysts and redox mediators as well as substrates [18,19]. To obtain excellent analytical performances, three requirements must be met for the mediated electron transfer: (1) the electron mediator should exhibit both a fast electrochemical reaction at the sensing electrode and a fast catalytic reaction with the electrocatalysts; (2) the substrate should undergo a very slow electrochemical reaction at the electrode; and (3) the reaction between the electron mediator and the substrate should be very slow. Previously, several groups have investigated the electron-transfer mediators of "outer sphere to inner sphere" electrochemical-chemical-chemical (ECC) redox recycling, and found that the optimal outer-sphere species (mediator) and inner-sphere species (substrate) are dependent on the working electrode [20][21][22][23][24][25][26][27]. We investigated the ECC redox recycling at a self-assembled monolayer (SAM)-covered gold electrode, and examined the effect of various electron mediators and substrates [26]. It was found that ferrocene derivatives exhibit a reversible electrochemical reaction on the SAM-covered electrode, and ferricinum ions (the oxidation state of ferrocene) show a fast chemical reaction with hydroquinone, aminophenol, and cysteamine. Further investigations demonstrated that TCEP is electrochemically inert at the SAM-covered electrode, and shows relatively slow electron exchange with ferricinum ions.
Although metallic nanocatalysts have been widely exploited as alternative signal labels to native enzymes to amplify electrochemical signals, there are few reports on the use of nonmetallic nanocatalysts as signal labels. Pyrroloquinoline quinone (PQQ) is a redox cofactor adopting an addition-elimination catalytic mechanism in many PQQ-dependent enzymes. Recently, PQQ-decorated nanomaterials and electrodes have been exploited as biomimetic heterogeneous electrocatalysts [28][29][30][31][32][33]. The PQQ tags exposed on the nanomaterials' surface are able to transfer electrons directly to the electrode's surface. For instance, Long et al. suggested that an electrode modified with PQQ-decorated carbon nanotubes (denoted as PQQ-CNTs) exhibits high electrocatalytic activity toward tris(2-carboxyethyl)phosphine (TCEP) oxidation, with a turnover frequency (TOF) of 133 s −1 [29]. Inspired by this study, herein, we developed a sandwich-type immunosensor with PQQ-CNTs as the electrocatalytic label. The electron transfer of PQQ on CNTs was promoted by the redox mediator ferrocenylmethanol (FcM). The analytical performances and applications of the proposed electrochemical immunosensors were investigated. This work may pave the way for a more rational design of nanolabels for electrochemical immunoassays.
Electrochemical measurements were carried out on a CHI660E electrochemical workstation with a three-electrode system comprising a gold working electrode, a Pt counter electrode, and a Ag/AgCl reference electrode. The morphology of functionalized CNTs was characterized using a Hitachi SU8010 scanning electron microscope (Tokyo, Japan) and an FEI Tecnai G2 T20 transmission electron microscope (Hillsboro, OR, USA).
Preparation of PQQ-CNT-Ab 2
CNT-NH 2 was modified with antibody and PQQ through an amino covalent coupling reaction [29]. Prior to the modification, CNT-NH 2 was dispersed in 4 M HCl and heated at 60 °C for 12 h. The treated CNT-NH 2 was centrifuged, washed with ultrapure water, and then dried under vacuum overnight. After that, the purified CNT-NH 2 (1 mg) was dispersed in 1 mL of DMF by ultrasonication for 1 h. Then, 1 mL of DMF solution containing 5 mM PQQ and 15 mM EDC was added to the CNT-NH 2 suspension for shaking overnight. The resulting PQQ-CNTs were purified by centrifugation at 14,000 rpm and washing with DMF and water. The amount of PQQ immobilized on the surface of CNT-NH 2 was calculated by measuring the change in the absorbance of PQQ at 330 nm with a standard curve method (Figure S1) [34]. The immobilization capacity of CNT-NH 2 for PQQ was estimated to be 287 ± 26 nmol/mg. The integration of PQQ on the CNTs was also confirmed by Fourier transform infrared (FTIR) spectroscopy (Figure S2). The second antibody, Ab 2, was modified on the PQQ-CNTs' surface through the EDC/NHS-mediated amino covalent coupling reaction between Ab 2 and PQQ on the CNTs' surface. In brief, the obtained PQQ-CNTs were dispersed in phosphate buffer solution (10 mM, pH 7.4) at a concentration of 1 mg/mL. Then, 0.5 mL of the PQQ-CNT suspension was mixed with 0.5 mL of phosphate buffer solution containing 0.2% Triton X, 0.4 M EDC, and 0.1 M NHS. After shaking for 15 min, the activated PQQ-CNTs were washed three times with the Triton-X-containing phosphate buffer solution. Then, 50 µL of 0.5 µg/mL Ab 2 solution was added to 1 mL of the activated PQQ-CNT suspension. After shaking for 2 h at room temperature, the excess Ab 2 was removed by three centrifugation/washing cycles at 12,000 rpm. The resulting PQQ-CNT-Ab 2 was characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM) (Figure S3). The obtained PQQ-CNT-Ab 2 was dispersed in 1 mL of phosphate buffer solution containing 0.2% Triton X and 0.1% (w/v) BSA and stored at 4 °C for 12 h. Herein, BSA was used to block the unreacted sites and eliminate the nonspecific adsorption.
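A minimal sketch of the standard-curve arithmetic used to estimate the PQQ loading is given below: the amount immobilized is inferred from the drop in PQQ absorbance at 330 nm before and after incubation with the CNTs. The linear calibration parameters and the example numbers are illustrative assumptions, not measured data from this work.

```python
def immobilised_pqq_nmol_per_mg(a_before: float, a_after: float,
                                slope: float, intercept: float,
                                volume_ml: float, cnt_mass_mg: float) -> float:
    """Estimate PQQ loading (nmol per mg CNT) from the decrease in A330.

    slope/intercept describe an assumed linear standard curve
    A330 = slope * C(mM) + intercept; all inputs are illustrative."""
    c_before_mM = (a_before - intercept) / slope
    c_after_mM = (a_after - intercept) / slope
    # mM * mL = umol; * 1000 converts to nmol.
    delta_nmol = (c_before_mM - c_after_mM) * volume_ml * 1000
    return delta_nmol / cnt_mass_mg

# Example with made-up absorbance values for 1 mg CNT in 2 mL; these were
# chosen so the result lands near the ~287 nmol/mg loading reported above.
print(immobilised_pqq_nmol_per_mg(0.90, 0.52, 2.7, 0.01, 2.0, 1.0))
```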
Preparation of Sensor Electrode
Gold disk electrodes were polished with alumina paste down to 0.05 µm, and then sonicated for 30 s in 50% ethanol. The capture antibody, Ab 1, was immobilized on the surface of a gold electrode covered with MU/MUA mixed SAMs through the EDC/NHS-mediated amino covalent coupling. The SAMs were formed by incubating the cleaned gold electrode with an ethanol solution comprising optimized concentrations of 0.7 mM MU and 0.3 mM MUA (Figure S4). After washing with ethanol and water, the SAMs were activated by soaking the electrode in a solution comprising 0.4 M EDC and 0.1 M NHS for 15 min. After rinsing with water, the electrode was immersed in 0.1 mg/mL Ab 1 solution for 12 h, and then incubated in a solution of 0.1 mM ethanolamine for 30 min to block the unreacted sites. To evaluate the stability, the antibody-modified electrode was stored at 4 °C in a clean environment.
Procedure for PSA Detection
A 10 µL aliquot of phosphate buffer solution (10 mM, pH 7.4) containing a given concentration of PSA was cast on the Ab 1 -covered electrode at room temperature. After incubation for 30 min, the electrode was rinsed with ultrapure water. Then, 10 µL of the prepared PQQ-CNT-Ab 2 suspension was cast on the electrode's surface. After incubation for a further 30 min, the sensor electrode was washed with ultrapure water to remove non-specific PQQ-CNT-Ab 2 , and then placed in the phosphate buffer solution (pH 6) containing TCEP and FcM for electrochemical measurements.
For the assays of PSA in real samples, fresh male blood was collected, allowed to stand for 2 h at room temperature, and then centrifuged at 2000 rpm for 5 min. After that, 100 µL of serum was taken out and diluted 10-fold with the buffer. For the electrochemical determination, 20 µL of the diluted sample was mixed with 10 µL of PSA standard sample at a given concentration. The other procedures followed those for the PSA standard sample analysis.
Detection Principle
Amperometric immunosensors mainly include sandwich-like and competitive formats. In general, sandwich-like immunosensors show high sensitivity and specificity because of their low background current and their use of a pair of specific antibodies [5]. Thus, the electrochemical immunoassays were carried out using the classical, sandwich-like format in this work (Scheme 1). PSA was tested as the model target. Carbon nanotubes were employed as the carrier of PQQ and Ab 2 for signal readout. Insulating SAMs are widely utilized to immobilize capture elements, including antibodies, at the electrode-solution interface. The capture antibodies at the SAMs' surface maintain their activity and specificity for target binding. Moreover, the SAMs with long linear molecules can eliminate the nonspecific adsorption, thus improving the specificity of affinity biosensors [35,36]. Herein, the mixed SAMs of MU/MUA with 11 alkyl carbons were used for the immobilization of capture antibodies, in which the MUA molecules allowed for the anchoring of the antibodies via amino covalent coupling. The MU molecules regulated the surface density of MUA to prevent the formation of anhydride and N-acylurea between the neighboring carboxylic acids, and to facilitate the antibody-antigen interaction by weakening the steric hindrance effect [37,38]. In the detection step, FcM as the electron mediator can initiate the redox cycling between TCEP and PQQ-CNT, since the ferrocene derivatives undergo fast electron transfer at the SAM-covered electrode (even with 11 carbon spacers), and the oxidized ferrocene moieties (ferricinum ions) exhibit fast reaction with hydroquinone or PQQ, but show slow reaction with TCEP. In the so-called "outer sphere to inner sphere" ECC redox cycling system, PQQ-CNTs act as the biomimetic heterogeneous electrocatalyst for the oxidation of TCEP. Specifically, PQQ was first reduced to pyrroloquinoline quinol (PQQH 2 ) by TCEP. Once FcM was electrochemically oxidized into FcM + (the oxidation state of FcM), it would be regenerated by PQQH 2 , thus resulting in an increase in the anodic current.
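As a rough back-of-the-envelope check on the amplification this cycling can deliver, the steady-state catalytic current sustainable by a surface-confined PQQ layer can be estimated from i_cat ≈ n·F·A·Γ·k_cat. The electrode area and effective PQQ coverage in the sketch below are illustrative assumptions; the 133 s⁻¹ turnover frequency is the value reported for PQQ-CNTs in the cited study.

```python
F = 96485.0  # C/mol, Faraday constant

def catalytic_current_uA(n_electrons: int, area_cm2: float,
                         coverage_mol_per_cm2: float, tof_per_s: float) -> float:
    """Upper-bound steady-state catalytic current (in microamps) for a
    surface-confined catalyst when the substrate is not limiting:
    i_cat ~ n * F * A * Gamma * k_cat."""
    return n_electrons * F * area_cm2 * coverage_mol_per_cm2 * tof_per_s * 1e6

# Illustrative numbers: 2-electron PQQ/PQQH2 couple, 0.03 cm2 gold disk,
# an assumed effective PQQ coverage of 1e-12 mol/cm2 from captured labels,
# and the ~133 s^-1 TOF reported for PQQ-CNTs; gives a sub-microamp current.
print(catalytic_current_uA(2, 0.03, 1e-12, 133))
```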
Feasibility of the Method
Previous investigation has demonstrated that a carbon-fiber ultramicroelectrode modified with PQQ-CNTs exhibits high catalytic activity toward the electrochemical oxidation of TCEP through single-nanoparticle collision [29]. We first attempted to record the electrochemical signal by the direct electron transfer reaction between PQQ-CNTs and the electrode (Figure 1A). However, in the TCEP-free electrolyte solution, no clear redox wave was observed at the sensor electrode covered with PSA and PQQ-CNT-Ab 2 (curve a). This indicates that the PQQ-CNT label exhibited poor electrochemical signal or electron transfer ability on the sensor electrode. We also monitored the change in the charge-transfer resistance of the sensor electrode before and after the capture of PSA (Figure S5). However, both cyclic voltammetry and electrochemical impedance spectroscopy (EIS) are insufficiently accurate and sensitive to monitor the change in charge-transfer resistance.
These results can be ascribed to the fact that the recognition elements and blockers on the surface of the electrode as well as the CNTs, and/or the long distance between the PQQ and the electrode, limit the direct electron transfer of PQQ or [Fe(CN) 6 ] 3−/4− . Addition of TCEP to the electrolyte solution induced a clear enhancement in the anodic current within the potential range of 0.2-0.6 V (curve b). Note that the sensor electrode without capture of PQQ-CNT-Ab 2 labels had no contribution to this reaction (curve c); thus, the enhancement of the anodic current in curve b can be attributed to the electrocatalytic oxidation of TCEP by PQQ-CNTs [29]. We also found that no significant signal enhancement was observed when the antibody-PQQ conjugate was used as the label instead of PQQ-CNT-Ab 2 for signal output, demonstrating that the signal was amplified by using CNTs to load large quantities of PQQ.
To facilitate the electron transfer of PQQ and amplify the electrochemical signal, the well-known electron mediator FcM was added. Figure 1B depicts the cyclic voltammograms (CVs) of the sensor electrode after the capture of PSA and PQQ-CNT-Ab 2 in the redox-mediator-containing electrolyte solution. The Ab 1 -modified electrode, after the capture of PSA, only exhibited a pair of well-defined redox waves in the TCEP-free (curve a) and TCEP-containing (curve b) solutions of FcM, which are attributable to the oxidation and reduction of FcM. The reversible redox wave in the presence of TCEP suggested that the electro-oxidized FcM (FcM + ) showed slow chemical reaction with TCEP. After the capture of PSA and PQQ-CNT-Ab 2 , the sensor electrode showed an irreversible electrocatalytic wave in the FcM solution (curve c). This indicates that FcM was regenerated by the PQQ-CNT label after its electro-oxidation. A much larger anodic current was obtained after the addition of TCEP to the FcM solution (curve d); this suggests that the electrochemical-chemical (EC) redox cycling between FcM and PQQ was promoted by the chemical-chemical (CC) redox cycling between PQQ and TCEP, thus amplifying the electrochemical signal.
Optimization of Experimental Conditions
The reaction rate between FcM + and PQQH 2 , or between PQQ and TCEP, is constant during the electrochemical scan. However, the electrochemical reaction rate of FcM/FcM + is dependent on the scan rate. Thus, a slow electrochemical scan can facilitate the chemical redox cycling of FcM + /PQQH 2 and PQQ/TCEP, thereby improving the sensitivity of the "outer sphere to inner sphere" ECC redox cycling system. Nevertheless, a slow scan rate may also allow chemical reactions to occur between FcM + and TCEP. For this reason, we investigated the voltammetric characteristics of FcM at the SAM-covered electrode in the TCEP-free and TCEP-containing solutions (Figure S6). It was found that the presence of TCEP did not cause any significant change in the voltammetric characteristics of FcM/FcM + at a scan rate of 20 mV/s or higher. However, TCEP induced a significant increase in the anodic current of FcM/FcM + at a scan rate below 10 mV/s, which was accompanied by a decrease in the cathodic current. Thus, a scan rate of 20 mV/s was used in the ECC redox cycling system.
Ferrocene derivatives, as general electron mediators for many oxidoreductases, including PQQ-dependent redox enzymes, can be modified on the electrode surface or dispersed in the solution to electronically "wire" the redox center upon contact and catalyze the oxidation of the substrate [39,40]. We also investigated the effect of FcM concentration on the peak current, and found that the current increased with increasing FcM concentration. This suggests that high concentrations of electron mediators can facilitate the electron transfer reaction and the electrocatalytic oxidation of TCEP. However, high concentrations of electron mediators may also cause a high background current, thus decreasing the detection sensitivity. For this reason, the effect of FcM concentration on analytical performance was evaluated by measuring the change in the redox-mediated catalytic anodic current (∆i pa ) at 0.4 V. The ∆i pa increased greatly with increasing FcM concentration up to 50 µM (Figure S7), and then began to level off or even decrease slightly. Therefore, the electron mediator concentration was optimized at 50 µM. Moreover, the effect of TCEP concentration on the ∆i pa was also investigated. The ∆i pa increased gradually with increasing TCEP concentration, and began to level off beyond 200 µM (Figure S8). However, aside from the scan rate discussed above, we found that high concentrations of TCEP also had an important influence on the voltammetric characteristics of FcM alone. At a scan rate of 20 mV/s, an increased anodic current accompanied by a decreased cathodic current was observed when the TCEP concentration was higher than 250 µM (Figure S9). This demonstrates that high concentrations of TCEP can facilitate the chemical reaction between FcM + and TCEP, thus increasing the background current and suppressing the redox cycling between FcM + and PQQH 2 . Here, a slight excess of TCEP (250 µM) was used.
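As an illustration of how optimization data of this kind (Figures S7 and S8) can be reduced, the short sketch below (Python) computes the background-corrected catalytic current at 0.4 V for a series of mediator concentrations and flags the concentration beyond which the gain levels off; all numerical values are placeholders, not measurements from the source.

# Sketch: selecting an optimal FcM concentration from delta_i_pa data.
# All numbers below are illustrative placeholders, not measured values.
fcm_conc_uM = [5, 10, 25, 50, 75, 100]               # candidate FcM concentrations
i_catalytic = [0.21, 0.38, 0.55, 0.70, 0.71, 0.69]   # anodic current at 0.4 V with label
i_background = [0.05, 0.09, 0.18, 0.28, 0.41, 0.55]  # anodic current at 0.4 V without label

delta_i_pa = [ic - ib for ic, ib in zip(i_catalytic, i_background)]

# Flag the last concentration before the gain in delta_i_pa drops below 5%:
optimum = fcm_conc_uM[-1]
for k in range(1, len(delta_i_pa)):
    if delta_i_pa[k] <= delta_i_pa[k - 1] * 1.05:
        optimum = fcm_conc_uM[k - 1]
        break

print("delta_i_pa:", [round(d, 2) for d in delta_i_pa])
print("suggested FcM concentration:", optimum, "uM")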
The density of MUA molecules assembled on the electrode surface may play an important role in both the activation of carboxylic acids for the immobilization of the capture antibody Ab 1 and the subsequent capture of PSA and PQQ-CNT-Ab 2 labels. In fact, we first investigated the effect of the MUA/MU ratio on the ∆i pa (Figure S4), and found that the electrode modified with MUA/MU at a ratio of 3:7 exhibited the highest current change. Ab 1 was immobilized on the electrode surface through EDC/NHS-mediated amino covalent coupling. The concentration of Ab 1 was optimized, as shown in Figure S10. To obtain a wide linear range, an excess concentration of Ab 1 was used during the immobilization step. To confirm that the catalytic current was caused by PQQ assembled on the CNTs' surface, the concentration of PQQ used for the preparation of PQQ-CNTs was optimized (Figure S11). To amplify the signal as much as possible, an excess concentration of PQQ was used for the preparation of PQQ-CNTs.
Sensitivity
Linear sweep voltammetry (LSV) is simple, sensitive, and rapid for quantitative analysis. To investigate the analytical merits of the biosensor, the LSV responses for the analysis of various concentrations of PSA were collected. As presented in Figure 2A, the currents were intensified with the increase in PSA concentration in the range of 0-5 ng/mL, demonstrating that the number of PQQ-CNT labels attached on the electrode surface is dependent on PSA concentration. Figure 2B depicts the calibration plot of the ∆i pa at 0.4 V against PSA concentration. The sensor has a linear detection range of 0.005-1 ng/mL. The linear equation is expressed as ∆i pa = 0.702[PSA] + 0.014, with [PSA] in ng/mL. The relative standard deviations (RSDs) obtained with three electrodes prepared in parallel were all below 11.7% for the determination of various concentrations of PSA, suggesting acceptable reproducibility. Moreover, the limit of detection (LOD) was estimated to be 5 pg/mL. The LOD is comparable to or even lower than that achieved by other immunosensors via the signal amplification of nanocatalyst labels (Table S1) [41][42][43][44][45][46]. The high sensitivity can be attributed to the low background signal of the TCEP substrate, the signal amplification with CNTs as the carrier, and the high turnover frequency between FcM + and PQQH 2 as well as between PQQ and TCEP. We believe that the analytical performance, such as detection sensitivity and linear range, may be improved by employing nanomaterial-modified electrodes for the immobilization of the capture antibody, and/or by using other PQQ-decorated nanocatalysts as the signal labels. Although the analytical performance was not as good as that of some reported PSA electrochemical immunosensors, this work presents a novel nanolabel and nanocatalytic scheme for constructing sandwich-type immunosensors. For example, metal-organic frameworks (MOFs) containing a high density of metal centers have received tremendous attention in the fields of electronic and optical sensing. However, the poor electronic conductivity of MOFs and the long distance between the metal active centers and the electrode limit their application as signal labels in electrochemical immunoassays, although some MOFs exhibit high biomimetic catalytic activity. Efforts are being made in our research group to synthesize PQQ-based MOF signal labels and develop electrochemical biosensors based on the "outer sphere to inner sphere" ECC redox cycling scheme.
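A measured ∆i pa can be converted back to a PSA concentration by inverting the reported calibration line; the sketch below also shows one common way such an LOD is estimated (3 × SD of the blank divided by the slope). The 3σ criterion, the current units, and the blank standard deviation used here are assumptions for illustration, since the source does not state how its LOD was derived.

# Sketch: applying the reported calibration, delta_i_pa = 0.702 * [PSA] + 0.014
# ([PSA] in ng/mL; current units assumed). The blank SD below is a placeholder.
SLOPE = 0.702      # signal change per (ng/mL), from the reported fit
INTERCEPT = 0.014  # from the reported fit

def psa_from_current(delta_i_pa):
    """Invert the calibration line to estimate PSA concentration (ng/mL)."""
    return (delta_i_pa - INTERCEPT) / SLOPE

def lod_3sigma(blank_sd):
    """Estimate the limit of detection as 3 * SD(blank) / slope (ng/mL)."""
    return 3.0 * blank_sd / SLOPE

print(psa_from_current(0.365))   # ~0.50 ng/mL for a mid-range signal
print(lod_3sigma(0.0012))        # ~0.005 ng/mL (5 pg/mL) for an assumed blank SD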
Selectivity and Stability
In order to study the selectivity of the immunosensor, the LSV responses for the analysis of four common proteins (IgG, AFP, thrombin, and HSA) were collected. As a result, the proposed biosensor had no obvious response to the four proteins, even when their concentration was nearly 100 times higher than that of PSA (Figure 3). The result showed that the immunosensor exhibited high selectivity for PSA detection. Furthermore, the anti-interference ability was studied by incubating the sensor electrode with a PSA sample containing the four coexisting proteins. Consequently, no remarkable difference in the current change was observed in contrast to that in the presence of PSA only; this indicates that the immunosensor is highly specific for PSA analysis. The high selectivity and good anti-interference ability can be ascribed to the specific antibody-antigen interaction in the sandwich-like detection format, and the low non-specific adsorption of the PQQ-CNT-Ab 2 label to the sensor electrode. Moreover, the stability of the sensor electrode is critical for the practical applications of electrochemical immunosensors. In this work, the stability of the immunosensor was investigated by monitoring the current response of the sensor electrode after storage at 4 °C for a given time. After one week, no significant current change was observed. After one month of storage, the sensor electrode lost only 6.7% of its capture efficiency. This good long-term stability can be attributed to the good stability and high activity of the antibodies attached to the electrode surface by the amino covalent coupling reaction. We also investigated the electrode's regeneration, and found that the sensor electrode can be conveniently regenerated by immersing it in 10 mM NaOH solution (i.e., desorbing PSA and PQQ-CNT-Ab 2 ). After eight regenerations, the current decreased by only 9.2% (Figure S12).
Serum Sample Assays
The concentration of PSA released into the circulatory system by a healthy prostate is less than 4 ng/mL. However, elevated levels of PSA have been found in the sera of prostate patients. To demonstrate the usefulness of our sensing strategy for clinical applications, human serum samples were tested. Figure 4 depicts the LSV responses of the sensor electrode for the analysis of diluted serum spiked with and without PSA standard samples. The current for the serum sample is higher than that for the buffer blank, indicating that the serum contains a certain amount of PSA. Using the standard curve established above, the PSA concentration was calculated to be 0.048 ng/mL in the diluted serum, or 0.72 ng/mL in the original serum; this suggests that the donor's prostate was healthy. The PSA contents found in the PSA-spiked samples were close to the totals of the initial and spiked PSA (Table 1). To evaluate the accuracy of our method, the PSA concentration was further quantified by a standard addition method. After spiking three known concentrations of PSA into the diluted serum, the current was intensified with increasing concentration of PSA. Interestingly, the PSA content in the diluted serum sample spiked without PSA was found to be 0.073 ng/mL; this value is close to that achieved by the standard curve method, and indicates that the PSA concentration change in serum can be readily measured by the immunosensor. To verify the above results, PSA in the serum sample was also determined using a commercial ELISA kit. The values found by the proposed immunosensor and the ELISA kit were of the same order of magnitude; thus, the immunosensor is suitable for the detection of PSA in real samples, and may provide a useful means for clinical research. We noticed that the values obtained by this method were statistically lower than those determined by the ELISA kit. A possible explanation for this result is that biological matrices may adsorb on the electrode surface and decrease the analytical performance.
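The arithmetic linking the diluted-serum reading to the value in the original serum, together with a spike-recovery calculation, is sketched below; the 15-fold dilution factor is inferred from the reported 0.048 and 0.72 ng/mL values, and the spike levels shown are placeholders rather than the concentrations used in the source.

# Sketch: serum dilution and spike-recovery arithmetic for the PSA assay.
# The dilution factor is inferred from the reported values (0.72 / 0.048 = 15).
def original_concentration(diluted_ng_ml, dilution_factor=15.0):
    """Back-calculate the PSA level in the undiluted serum (ng/mL)."""
    return diluted_ng_ml * dilution_factor

def recovery_percent(found_ng_ml, endogenous_ng_ml, spiked_ng_ml):
    """Spike recovery (%) = (found - endogenous) / spiked * 100."""
    return (found_ng_ml - endogenous_ng_ml) / spiked_ng_ml * 100.0

print(original_concentration(0.048))        # 0.72 ng/mL in the original serum
print(recovery_percent(0.55, 0.048, 0.50))  # ~100% for a hypothetical spiked sample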
Conclusions
This work presented a novel strategy for the development of non-enzymatic electrochemical immunosensors, in which nonmetallic PQQ-decorated CNTs were used as the biomimetic heterogeneous nanocatalyst for TCEP oxidation. The mixed SAMs with long linear molecules facilitated the immobilization of capture antibodies and eliminated nonspecific adsorption. Electron transfer was promoted by the added redox mediator FcM. The feasibility and sensitivity of the proposed strategy were demonstrated with PSA as the model protein. The proposed immunosensor was also used to quantify the PSA present in a human serum sample, and the results were consistent with those obtained by a commercial ELISA kit. Although the accuracy and sensitivity of the immunosensor must be improved, this work should be valuable for the design of novel nanocatalysts and electrochemical sensing platforms. Efforts are being made in our group to prepare new PQQ-based nanocatalysts with uniform size, and to develop novel sensing schemes using PQQ-mediated redox cycling.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/nano11071757/s1, Figure S1: UV-Vis spectra of the PQQ/EDC mixed solution before and after incubation with CNT-NH2, Figure S2, Fourier transform infrared (FTIR) spectroscopy of CNT, PQQ and CNT-PQQ, Figure S3: SEM images of the untreated CNT-NH2 and the resulting PQQ-CNT-Ab2, and the TEM image of PQQ-CNT-Ab2, Figure S4: Effect of the MUA/MU ratio on ∆ipa, Figure S5: CVs and EIS of [Fe(CN)6]3− at the sensor electrode before and after capture of PSA, Figure S6: CVs of FcM in the absence and presence of TCEP at a scan rate of 10, 20, 50 or 100 mV/s, Figure S7: Dependence of ∆ipa on FcM concentration, Figure S8: Dependence of ∆ipa on TCEP concentration, Figure S9: CVs of FcM in the presence of different concentrations of TCEP at a scan rate of 20 mV/s, Figure S10: Dependence of ∆ipa on Ab1 concentration, Figure S11: Dependence of ∆ipa on PQQ concentration used for the preparation of PQQ-CNT, Figure S12: Dependence of ∆ipa on the analysis/regeneration number, Table S1: Analytical performances of sandwich-type electrochemical immunosensors with different nanocatalysts as signal labels. | 2021-07-26T05:28:51.427Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "d344f9e6eab5caba12942388098ee5646b5f7932",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/7/1757/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d344f9e6eab5caba12942388098ee5646b5f7932",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233626393 | pes2o/s2orc | v3-fos-license | Sanal Flow Choking in Nanoscale Fluid Flow Systems at the Zero Slip Length: Universal Benchmark Data for 3D in Silico, in Vitro and in Vivo Experiments Running Title: An Exact Prediction of the 3D Blockage Factor in Diabatic Nanoscale Fluid Flow Systems
Although the interdisciplinary science of nanotechnology has advanced significantly over the last few decades, there were no closed-form analytical models to predict the three-dimensional (3D) boundary-layer-blockage (BLB) factor of diabatic flows (flows involving the transfer of heat) passing through a nanoscale tube. As the pressure of the diabatic nanofluid and/or non-continuum flow rises, the average mean free path diminishes and thus the Knudsen number decreases, leading to a zero-slip wall-boundary condition with a compressible viscous flow regime in the nanoscale tubes
The theoretical finding of the Sanal-flow-choking and streamtube-flow-choking 1,2 (Figure 1a) is a methodological advancement in the modeling of the continuum and non-continuum real-world composite fluid flows at the creeping-inflow (low subsonic flow) conditions. The closed-form analytical model conceiving all the conservation laws of nature at the Sanal flow choking condition for diabatic flow is certainly the unique scientific language of the Universe, which we are presenting herein for solving various unresolved problems carried forward over the centuries.
Cognizing the physics of multi-phase and multi-species fluid flows and controlling the composite flow at the nanoscale is vital for inventing, manufacturing, and achieving lucrative performance improvements of nano-electro-mechanical systems (NEMS) for high-precision applications. 3-10 The design of such systems is currently a subject of great interest in the aerospace, chemical, material, biomedical and allied industries. This is particularly true for the design optimization of certain aerospace systems in the International Space Station (ISS) and of nanoscale thrusters. 11 Nanofluid flow is a blend of nano-sized particles in a traditional operating fluid, 10-12 which obeys all the conservation laws of nature. The occurrence of slip in gas flows, due to local thermodynamic non-equilibrium, was originally reported by Maxwell 14,15 and its scale varies with the extent of rarefaction of the gas. It is described in terms of the Knudsen number (Kn), which gives an explicit clue to the type of flow, viz., continuum or non-continuum. Note that numerous modeling efforts have been reported in the open literature for nanoscale flow simulation without authentic code verification using any benchmark data and/or any closed-form analytical solution. [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35] The fact is that generating benchmark data from a nanoscale system is a challenging task, or quite impractical, using conventional in vitro methods and/or in vivo animal models.
It is anticipated that the classical assumptions of the hydrodynamic model will run into difficulties as the composite flow system reaches nanometer (nm) size. 19 Obviously, due to the lack of universal benchmark data for an authentic verification of in silico results, the conclusions drawn using sophisticated models by various investigators across the globe, viz., direct simulation Monte Carlo (DSMC), molecular dynamics (MD), the Burnett equation and hydrodynamic models, will not be endorsed by high-precision industries for the highly expensive nanoscale system designs needed for practical applications. The same is patently true for decision making in health care management without an exact solution for data verification. Note that nanoscale drug delivery devices can be tailored for site-specific therapeutic activity. [36][37][38] Cooper et al. 19 reported that in vitro data well matched with the predicted results using the hydrodynamic Navier-Stokes method with the first-order slip condition for the range of average with credibility, which was an unresolved problem over centuries. The corresponding non-dimensional blockage factor for the two-dimensional case 2 is also given in Table 1. The LCDI is also presented in Table 1. Vigneshwaran's Table (Table 1) gives the exact values of the non-dimensional 3D-BLB factor at the Sanal flow choking condition for ten different working gases and the corresponding CPR and inlet Mach number. It is pertinent to state that, as seen in Table 1, the three-dimensional blockage factor is always lower than the two-dimensional blockage factor of any wall-bounded nanoscale fluid flow system at the Sanal flow choking condition. The average friction coefficient is also given in Table 1. Note that in a vascular system the boundary-layer-induced flow choking leads to shock wave generation and pressure overshoot, leading to memory effect, aneurysm, and hemorrhagic stroke, as the case may be. This is a grey area in nanomedicine, 1,49-55 which needs to be examined in detail through fluid-structure interactive multiphase, multispecies models, which is beyond the scope of this article. The Sanal flow choking for the diabatic condition presented herein is valid for all real-world fluid flow problems in designing various nanoscale fluid flow systems and subsystems, because the model is untied from empiricism and from any type of discretization error. Using Equation 1 and Equation 2, chemical propulsion system designers could easily predict the likelihood of detonation from the given inlet flow Mach number and the lowest value of the HCR of the leading gas coming from the upstream port of the chemical system. 50 In a nutshell, the best choice for increasing the solid fuel loading in a nanoscale thruster design, without inviting any undesirable detonation and catastrophic failures, is to increase the HCR of the working fluid. Further discussion of the nanoscale propulsion system design is beyond the scope of this letter.
We have established herein that, due to the evolving boundary layer and the corresponding area blockage in the upstream port of any internal nanofluid flow system with a sudden expansion or divergent region, the creeping diabatic nanoflow (Mi << 1) originated from the upstream port of

Conceptualization and modeling support.
FUNDING SOURCES
The first author thanks to SERB/DST, the Government of India.
NOTES
The authors declare no competing financial interest. | 2021-05-05T00:08:04.499Z | 2021-03-26T00:00:00.000 | {
"year": 2021,
"sha1": "6a0a58ef523718eda9dfa394930680f2d80535e4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-94450-8.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9fd7a1dd405f752bb2f264c7d4c9fe392fb6db1",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
246772673 | pes2o/s2orc | v3-fos-license | Training‐induced improvements in knee extensor force accuracy are associated with reduced vastus lateralis motor unit firing variability
New Findings What is the central question of this study? Can bilateral knee extensor force accuracy be improved following 4 weeks of unilateral force accuracy training and are there any subsequent alterations to central and/or peripheral motor unit features? What is the main finding and its importance? In the trained limb only, knee extensor force tracking accuracy improved with reduced motor unit firing rate variability in the vastus lateralis, and there was no change to neuromuscular junction transmission instability. Interventional strategies to improve force accuracy may be directed to older/clinical populations where such improvements may aid performance of daily living activities. Abstract Muscle force output during sustained submaximal isometric contractions fluctuates around an average value and is partly influenced by variation in motor unit (MU) firing rates. MU firing rate (FR) variability seemingly reduces following exercise training interventions; however, much less is known with respect to peripheral MU properties. We therefore investigated whether targeted force accuracy training could lead to improved muscle functional capacity and control, in addition to determining any alterations of individual MU features. Ten healthy participants (seven females, three males, 27 ± 6 years, 170 ± 8 cm, 69 ± 16 kg) underwent a 4‐week supervised, unilateral knee extensor force accuracy training intervention. The coefficient of variation for force (FORCECoV) and sinusoidal wave force tracking accuracy (FORCESinu) were determined at 25% maximal voluntary contraction (MVC) pre‐ and post‐training. Intramuscular electromyography was utilised to record individual MU potentials from the vastus lateralis (VL) muscles at 25% MVC during sustained contractions, pre‐ and post‐training. Knee extensor muscle strength remained unchanged following training, with no improvements in unilateral leg‐balance. FORCECoV and FORCESinu significantly improved in only the trained knee extensors by ∼13% (P = 0.01) and ∼30% (P < 0.0001), respectively. MU FR variability significantly reduced in the trained VL by ∼16% (n = 8; P = 0.001), with no further alterations to MU FR or neuromuscular junction transmission instability. Our results suggest muscle force control and tracking accuracy is a trainable characteristic in the knee extensors, which is likely explained by the reduction in MU FR variability which was apparent in the trained limb only.
KEYWORDS
electromyography, firing rate variability, motor unit, muscle force accuracy, neuromuscular function

INTRODUCTION
The human motor unit (MU) is the final component of the neuromuscular system and is fundamental to muscle force generation (Heckman & Enoka, 2012). Each MU comprises a single somatic motor neuron, including its axon, distal axonal branches, neuromuscular junctions (NMJ) and associated innervated skeletal muscle fibres. Motor output is governed by supraspinal commands and spinal reflex pathways which collectively control MU firing rate (FR). Thus, modulation of MU FR contributes to the increase and decrease of muscle force generating capacity.
During muscle contraction, the desired force output fluctuates around an average value rather than being at a constant level (Enoka & Farina, 2021;Pethick & Piasecki, 2022). Variation in MU FR has been identified as a critical determinant influencing the control of muscle force (Enoka & Farina, 2021;Vila-Cha & Falla, 2016), with associations between muscle force and MU FR dependent on single MU force, the input-output function of motor neurons and the frequency response of the muscle to transform an activation signal into force (Enoka & Farina, 2021). Using computational models allowing manipulation of key MU parameters (MU FR and MU FR variability), increasing index finger force resulted in MU FR variability of the flexor dorsal interosseous reducing exponentially, corresponding with improved simulated force fluctuations (Moritz et al., 2005). These simulated data are consistent with experimental (Laidlaw et al., 2000) and other simulated observations supporting evidence that MU FR variability (i.e., the variability of inter-discharge intervals across consecutive MU firings) is a, if not the, key physiological parameter influencing the ability to maintain steady muscle contractions (Vila-Cha & Falla, 2016). Compared to central MU function, much less is known with respect to peripheral MU features (i.e., NMJ transmission instability) and the influence these may have on muscle force control.
Peripheral factors such as the release of acetylcholine at the NMJ, sodium/potassium pump activity, or modification to sodium and/or potassium intracellular and/or extracellular concentrations may alter muscle fibre action potential transmission (Allen et al., 2008). Although this alteration may subsequently impact muscle contraction and thus levels of force control, this has not yet been explored in a longitudinal manner.
The coefficient of variation for force (FORCE CoV ) has been identified as a significant explanatory variable for multiple performance tasks including balance (Zech et al., 2010), walking (Davis, Alenazy et al., 2020), manual dexterity (Keogh et al., 2019;Kornatz et al., 2005), levels of tremor (Kavanagh et al., 2016;Keogh et al., 2019) and the risk of falling in older adults (Carville et al., 2007;Enoka & Farina, 2021). The use of exercise training strategies (e.g., resistance exercise training (RET)) to improve muscle force control is, therefore, of interest for multiple diverse groups of individuals, including athletes, older adults, and those who are clinically vulnerable. It should be noted that findings from such diverging ranges of populations may not be directly comparable; for example, muscle tremor may present differently in varying physiological states (i.e., influence of aged muscle vs. exercise induced fatigue). Although RET is known to improve muscle strength, the effects of such training programmes on muscle force control/accuracy and MU firing properties remain unclear (Elgueta-Cancino et al., 2022).
While improvements in knee extensor maximal voluntary contraction (MVC) force were observed following 8 weeks RET (∼80% 1-repetition maximum (1RM)) in young individuals, neither FORCE CoV nor common drive was altered (Beck et al., 2011). Conversely, performing light-load (30% 1RM) training led to improvements in both knee extensor strength and both knee extensor and elbow flexor muscle force control (Kobayashi et al., 2014), with the greatest RET-induced improvements in isometric FORCE CoV occurring in the least steady subjects (Tracy & Enoka, 2006). Offering a potential explanation for improvements in FORCE CoV with RET, 4 weeks of isometric strength training which significantly increased muscle strength also increased MU FR (+3 ± 2.5 pps) during the plateau phase of submaximal muscle contractions and decreased in the MU recruitment threshold (Del Vecchio et al., 2019). Similarly, an increase in MU FR during the plateau phase of trapezoidal dorsiflexor contractions was observed following strength training (Kim et al., 2019), with conduction of ballistic muscle contractions leading to earlier activation of MUs and increased MU FR in the dorsiflexor muscles post-training (Van Cutsem et al., 1998). Despite RET proving to be mostly an effective training mechanism to improve FORCE CoV , RET may not be accessible to all individuals due to its higher intensity, which may pose physical limitations for older individuals (Barry & Carson, 2004) and those who are injured or present with disability. Resultingly, alternative training modalities, with a focus on light-load/task specific training, still need to be established to circumvent these limitations.
The aim of the current study was, therefore, to investigate the effect of a 4-week low intensity force accuracy training strategy on levels of knee extensor muscle force control/accuracy and any subsequent alterations to central and peripheral MU function in the vastus lateralis (VL) muscle. We hypothesised that muscle FORCE CoV and sinusoidal wave tracking accuracy (FORCE Sinu ), but not muscle strength, would improve with this training strategy alongside reduced MU FR variability, and would be observed in the trained limb only. (Guo et al., 2022), both sexes were included in the current study.
To avoid corrective actions when reaching the target line, the first two passes were excluded from the calculation. From these six contractions, the mean FORCE CoV was subsequently calculated as (standard deviation/mean) × 100, from the plateau phase of the contraction.
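A minimal sketch of this calculation (Python) is shown below, computing FORCE CoV as (standard deviation/mean) × 100 over the plateau of each contraction and averaging across contractions; the force samples are synthetic placeholders, and only two of the six contractions are sketched.

import statistics

def force_cov(plateau_samples):
    """Coefficient of variation of force (%) over the plateau of one contraction."""
    mean_force = statistics.fmean(plateau_samples)
    sd_force = statistics.stdev(plateau_samples)
    return sd_force / mean_force * 100.0

# Placeholder plateau force samples (N) from contractions at 25% MVC:
contractions = [
    [150.2, 151.1, 149.8, 150.5, 150.9],
    [148.7, 149.5, 150.2, 149.1, 148.9],
]

mean_cov = statistics.fmean(force_cov(c) for c in contractions)
print(f"mean FORCE CoV = {mean_cov:.2f} %")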
Participants next completed a series of sinusoidal wave force tracking tasks (using OTBioLab software, OT Bioelettronica, Turin, Italy) at 25% MVC to assess levels of FORCE Sinu . A familiarisation contraction was performed prior to the assessment contraction.
Contractions consisted of eight oscillations at a set amplitude (±4%) lasting for 30 s. A 10-s ramp preceded and followed each oscillating section of the contraction to allow force to steadily increase and decrease to and from the desired contraction intensity ( Figure 1b).
Contractions were exported and analysed in Spike2 (version 9) software, where a virtual channel was created (by subtracting the performed path from the requested path, and rectifying) and the area under the curve (N s) of this channel was representative of the level of deviation from the target line, reflecting muscle force tracking accuracy.
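The tracking-error measure just described reduces to a numerical integration of the rectified difference between the requested and performed force paths; a sketch under the assumption of a fixed sampling rate (the rate and force values are placeholders, not stated in the source) is given below.

import numpy as np

def tracking_error(target_force, performed_force, sample_rate_hz):
    """Area (N*s) under the rectified target-minus-performed force difference."""
    error = np.abs(np.asarray(performed_force) - np.asarray(target_force))
    return float(error.sum() / sample_rate_hz)   # rectangular-rule integration

# Synthetic example: an oscillating target (8 cycles over 30 s) around 25% MVC.
fs = 100.0                                             # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
target = 37.5 + 6.0 * np.sin(2 * np.pi * (8 / 30) * t)  # target force (N)
performed = target + np.random.normal(0, 0.5, t.size)   # imperfect tracking
print(f"tracking error = {tracking_error(target, performed, fs):.2f} N*s")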
Participants also completed physical function tests to assess unilateral balance of both legs pre-and post-training. All balance tests were performed using a Footscan plate (Footscan, 200 Hz, RSscan International, Paal, Belgium) allowing measurement of centre of pressure, and the displacement of this, during static one-legged standing. Participants were asked to visually focus on a fixed point in front of them for the duration of the test (30 s). A 5-s countdown was given before instruction to lift one leg, 2 s before the recording period began. Distance travelled (mm), the displacement of centre of pressure, was recorded for further analysis (Figure 1c).
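The balance outcome described above, the total distance travelled by the centre of pressure, is the summed point-to-point displacement over the trial; a short sketch with placeholder coordinates follows.

import numpy as np

def cop_path_length(x_mm, y_mm):
    """Total centre-of-pressure path length (mm) over a balance trial."""
    dx = np.diff(np.asarray(x_mm, dtype=float))
    dy = np.diff(np.asarray(y_mm, dtype=float))
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

# A 30-s trial at 200 Hz would give 6000 coordinate pairs; a short
# synthetic trace is used here purely for illustration.
x = [0.0, 0.4, 0.9, 0.7, 0.2]
y = [0.0, -0.3, 0.1, 0.5, 0.4]
print(f"distance travelled = {cop_path_length(x, y):.2f} mm")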
Sampling of single motor units during voluntary contractions
Intramuscular electromyography (iEMG) recordings were obtained using disposable concentric needle electrodes with a recording area of 0.07 mm² (model N53153, Teca, Hawthorne, NY, USA), with a grounding electrode on the patella. Participants were asked to relax their muscles to allow insertion of the needle electrode into the VL muscle and sampling of MUs during the series of voluntary isometric contractions used to assess FORCE CoV (as described above).
Following each contraction, the needle electrode was withdrawn 5-10 mm and the bevel rotated 180 • , recording from a total of four to six contractions from spatially distinct areas (Jones et al., 2021). iEMG signals were sampled at 50 kHz and bandpass filtered at 10 Hz to 10 kHz. Signals were digitised with a CED Micro 1401 data acquisition unit (Cambridge Electronic Design). All iEMG and force signals were recorded and displayed in real-time via Spike2 software (version 9).
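Decomposition of the iEMG signal into individual MU discharge times requires dedicated software, but the study's central outcomes, MU firing rate and its variability, reduce to simple statistics on the inter-discharge intervals; a minimal sketch with placeholder discharge times is shown below.

import numpy as np

def firing_rate_stats(discharge_times_s):
    """Mean firing rate (Hz) and firing rate variability (CoV of inter-discharge intervals, %)."""
    isi = np.diff(np.asarray(discharge_times_s, dtype=float))   # inter-discharge intervals (s)
    mean_fr = 1.0 / isi.mean()
    fr_cov = isi.std(ddof=1) / isi.mean() * 100.0
    return mean_fr, fr_cov

# Placeholder discharge times (s) for one motor unit during the contraction plateau:
times = [0.00, 0.11, 0.23, 0.33, 0.45, 0.57, 0.68, 0.81, 0.92]
fr, cov = firing_rate_stats(times)
print(f"mean FR = {fr:.1f} Hz, FR variability (ISI CoV) = {cov:.1f} %")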
Force accuracy training
All participants were required to perform force accuracy training
Statistical analysis
Data are presented as mean ± SD unless stated otherwise. As there were no bilateral leg differences at baseline in any parameter,
Muscle force tracking accuracy
Although the interaction effect was not significant (
DISCUSSION
The aim of the current study was to investigate the effects of a 4-week low intensity force accuracy training strategy on levels of knee extensor muscle force control/accuracy and any subsequent alterations to central and peripheral MU function in the VL muscle. In line with our hypothesis, MVC force remained unchanged in the knee extensors across both legs. However, these results are not in direct agreement with others. For example, in older adults, low intensity RET (30% 1RM) increased knee extensor MVC after both 8 (Kobayashi et al., 2014) and 16 (Tracy & Enoka, 2006) weeks of training, with numerous studies reporting that low intensity RET (i.e., ∼20-40% 1RM) can induce increases in muscle strength (Hortobagyi et al., 2001; Keen et al., 1994; Kobayashi et al., 2014; Tracy & Enoka, 2006). This notion does, however, remain inconclusive with studies such as that by Moore et al. (2004) MVC (Oshita & Yano, 2010), and another for contraction intensities ≤5% MVC (determined via displacement of centre of pressure) (Kouzaki & Shinohara, 2010). FORCE CoV of the hip abductors and dorsiflexors has been shown to be the most significant explanatory variable in sway-area rate during light load contractions, although most of the variance across conditions was unexplained, suggesting other physiological mechanisms important for postural control likely influence unilateral balance (Davis, Allen et al., 2020). The effects of force accuracy training on functional outcomes such as unilateral balance remain, therefore, to be further examined in other muscle groups and populations such as older individuals, who display deterioration of unilateral balance with advancing age (Maki et al., 1990; Izquierdo et al., 1999; Hess & Woollacott, 2005).
The current study utilised two different force tracking tasks, varying in difficulty, to assess levels of FORCE CoV and force tracking accuracy. Our results demonstrate improvements in both FORCE CoV and FORCE Sinu following training. During isometric contractions of fluctuating force (i.e., sinusoidal contractions), the recruitment and subsequent de-recruitment of MUs needs to be aligned to match the desired trajectory to increase and decrease force (Duchateau & Enoka, 2008) and as such, they require different control strategies.
Resultingly, the fluctuation in force during the sinusoidal contractions may contribute to positive alterations of force tracking and control strategies.
Commonly, FORCE CoV has been highlighted as a critical explanatory variable with respect to muscular performance of tasks including walking (Davis, Alenazy et al., 2020), tremor (Kavanagh et al., 2016;Keogh et al., 2019) and risk of falls in older adults (Carville et al., 2007). FORCE CoV is also associated with impaired functional ability in multiple sclerosis (Davis, Alenazy et al., 2020), and progressively deteriorates from middle to older age in highly active males and females (Piasecki, Inns et al., 2021). Therefore, the impact of force Vila-Cha & Falla, 2016), and here we demonstrate improvements in force accuracy and MU FR variability in the trained limb only, which may be a result of reduced antagonist muscle activity and/or inhibitory afferent feedback (Enoka & Farina, 2021). Further, common inputs of descending and sensory signals induce a correlation between low-frequency oscillations in FR of motor neurons, known as common drive, also significantly influence fluctuations in muscle force output (Negro et al., 2009). Interestingly, low-frequency oscillatory components of MU FR were strongly associated to explain most variation in muscle force output during submaximal contractions, with MU FR variability being poorly correlated with FORCE CoV (Negro et al., 2009). Therefore, although not assessed in the current study, the influence and potential alterations to common drive should additionally be considered as a mechanism to aid improvements in FORCE CoV . There is potential for ionic changes (e.g., modification to sodium and/or potassium ion intracellular and/or extracellular concentrations, subsequently altering muscle fibre action potential transmission; Allen et al., 2008), release of acetylcholine at the NMJ, and/or the type/intensity of muscle contraction Carville et al., 2007;Enoka & Farina, 2021) to also influence levels of muscle force control; however, we observed no alterations to NMJ transmission instability, assessed via NF MUP jiggle.
Strengths and limitations
As the training period was only 4 weeks, it offers translational relevance and application to pre/rehabilitation scenarios (Durrand et al., 2019) with, for example, pre-operative colorectal patients in the UK having a 31-day target time frame between decision to treat and operation (Boereboom et al., 2019). The short duration of each training session (∼20 min) also counters one of the most commonly cited barriers to exercise interventions: 'lack of time' (Trost et al., 2002). Secondly, the tasks constituting force accuracy training are arguably more applicable to daily movements (e.g., rising from a chair)
Conclusion
To summarise, we highlight that a 4-week period of targeted force accuracy training leads to improved muscle force control and accuracy in young healthy participants, which is associated with reduced MU FR variability. Importantly, these adaptations and possible mechanisms were evident in the trained limb only. These findings may influence interventional strategies to improve force accuracy, including in older and clinical populations where such improvements may help with independence maintenance via improved performance of activities of daily living. | 2022-02-12T16:22:39.767Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "b4b4c6c412e313cd62b02efd2d68d6d85c037733",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/10074583/Experimental%20Physiology%20-%202022%20-%20Ely%20-%20Training___induced%20improvements%20in%20knee%20extensor%20force%20accuracy%20are%20associated%20with%20(1).pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8263f3966bfa00e00ac94e55f29d4f72f3d139db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253043393 | pes2o/s2orc | v3-fos-license | How Covid-19 restrictions affected young people's well-being and drinking practices: Analyzing interviews with a socio-material approach
Background The Covid-19 restrictions – as they made young people's practices in their everyday life visible for reflection and reformation – provide a productive opportunity to study how changing conditions affected young people's well-being and drinking practices. Methods The data is based on qualitative interviews with 18- to 24-year-old Swedes (n=33) collected in the Autumn 2021. By drawing on the socio-material approach, the paper traces actants, assemblages and trajectories that moved the participants towards increased or decreased well-being during the lockdown. Results The Covid-19 restrictions made the participants reorganize their everyday life practices emphatically around the home and communication technologies. The restrictions gave rise to both worsened and improved well-being trajectories. In the worsened well-being trajectories, the pandemic restrictions moved the participants towards loneliness, loss of routines, passivity, physical barriers, self-centered thoughts, negative effects of digital technology, sleep deficit, identity crisis, anxiety, depression, and stress. In the improved well-being trajectories, the Covid-19 restrictions brought about freedom to study from a distance, more time for significant others, oneself and for one's own hobbies, new productive practices at home and a better understanding of what kind of person one is. Both worsened and improved well-being trajectories were related to the aim to perform well, and in them drinking practices either diminished or increased the participants’ capacities and competencies for well-being. Conclusions The results suggest that material domestic spaces, communication technologies and performance are important actants both for alcohol consumption and well-being among young people. These actants may increase or decrease young people's drinking and well-being depending on what kinds of relations become assembled.
Introduction
In this article, we study how Covid-19 restrictions on social proximity affected young people's well-being and drinking practices in Sweden. Our starting point in the article is that the restrictions on social proximity transformed young people's everyday life practices, making them visible for reflection, and forced young people to reorganize their habitual everyday life practices. By questioning the self-evident nature of their everyday life practices, the restrictions made young people more aware of what is essential for their well-being.
After arriving among us, the Covid-19 virus was soon categorized as a global threat to the continuity of human life, menacing the overall foundations and practices of current neoliberal economic, political and socio-cultural life.

Prior to the inception of the Covid-19 pandemic, changes in the well-being and in the drinking patterns of Swedish youth had already been observed. An increase in mental health problems among young people was noticed in Sweden at the beginning of the 2000s (Socialstyrelsen, 2013), at the same time as a decline in alcohol consumption among youth started. Since then, these two trends have persisted (Socialstyrelsen, 2017). Similar patterns of declining alcohol use and increasing mental health problems have also been identified in other high-income countries (De Looze et al., 2015; Kraus et al., 2018).
These aspects of young people's lives may explain why young people's mental health problems are increasing and why they are drinking less. First, in the performance-oriented environment, excessive, unhealthy, irresponsible, or undisciplined drinking may become categorized as a moral failure of the self ( Goodwin & Griffin, 2017 ). Secondly, when young people are encouraged to enhance their abilities and competencies to become competitive units, lead healthy lives and organize their everyday life to serve the pursuit of success, this may not only increase their mental health problems but also dampen their desire to binge drink ( Törrönen et al., 2020 ). Thirdly, when young people's everyday life activities become tightly scheduled around practices that increase their capacities to compete in the main areas of society, they do not have any more time for (unproductive) drinking with their friends ( Caluzzi et al., 2022 ). Lack of time and possibilities to engage in activities related to spontaneous unstructured time could thus partly be an explanation for young people's decreased well-being.
In view of these considerations, the overall aim of this article is to explore how young people's (18-24 years old) mental health is entangled with drinking practices. As the pandemic period provided laboratory-like circumstances ( Moretti & Maturo, 2021 ) to study what kind of relations young people consider essential for producing wellbeing, the Covid-19 restrictions on social proximity will be the starting point for this analysis. A few articles on how the Covid-19 lockdowns affected drinking habits suggest that they made home ( Callinan & MacLean, 2020 ;Conroy & Nichols, 2021 ) and social media ( Nichols & Conroy, 2021 ) more important actors in alcohol consumption, which have been under-researched topics in previous studies. With a sociomaterial approach, detailed below, we address this lack of research and analyze how the pandemic conditions affected young people's wellbeing and drinking practices. Our analysis shows that Covid-19 restrictions reorganized young people's relations at home, around studies, at work and in terms of drinking and moved their lives either towards decreased or improved well-being.
A socio-material approach
In what follows, we approach well-being and drinking practices by drawing on socio-material approaches, such as assemblage thinking ( Duff, 2014 ), actor-network theory ( Latour, 2005 ) and the new mate-rialism ( Fox & Alldred, 2017 ). In the socio-material approach to wellbeing, human and non-human entities are handled on a par, symmetrically, as actors who can equally mediate, steer and transform action ( Latour, 2005 ). Therefore, when we explore how Covid-19 restrictions affected young people's relations to well-being and drinking practices, we pay attention to what kinds of diverse human and non-human elements they were linked to and what kinds of 'assemblages' ( Fox & Alldred, 2017 ) they then formed.
In this view well-being and drinking practices are approached as processes that get their meaning through their relations to diverse actors and as part of specific assemblages ( Törrönen, 2022 ). From this perspective, improved well-being can be defined as a process of becoming in which actors are able to multiply their relations to human and nonhuman elements in a way that their capacities for a good life increase. Decreased well-being, in turn, can be understood as a process of becoming in which actors' actions become linked to relations that diminish these capacities ( Fox & Alldred, 2017 ). Drinking, again, can be seen as a practice that may increase or decrease actors' capacities for well-being.
Who or what acts in assemblages is called an actant ( Latour, 2005 ). Human and non-human actors become actants when they become linked to action so that they translate, transform, modify or alter its expected course. Since actants inform us what kinds of specific forces affect our participants' well-being by enabling, hindering, blocking, increasing or decreasing it ( Law, 2004 ;Latour, 2005 ), they are one of the main objects in our analysis. We can identify the transformative power of actants by paying attention to uncertainties and matters of concern that make them visible. The Covid-19 pandemic is an example of a disruption that introduced multiple uncertainties and matters of concern to everyday actions. As Covid-19 unsettled our current neoliberal economic, material, political, institutional, and socio-cultural habits and practices, it sent us to build new kinds of actants, relations and assemblages with which we aimed to overcome the difficulties and stabilize new kinds of routines for our everyday lives ( Törrönen, 2021 ).
Besides paying attention to the human and non-human actants that affected our participants' well-being and drinking assemblages, we further trace what kinds of trajectories they became embedded in. We especially pay attention to how these trajectories modified and translated their well-being and drinking practices and how -as their established everyday life practices became transformed in these trajectories into something else -their capacities for well-being in them situationally decreased or increased ( de Vries 2016 ).
By examining how Covid-19 restrictions reorganized our participants' relations to well-being and drinking practices, we aim to clarify what kinds of relational patterns turned out to be damaging or beneficial for our participants' well-being.
Participants and data collection
The 33 participants were recruited through a purposive sampling procedure from secondary and upper secondary schools in central parts of Sweden, as well as from non-governmental organizations and social media platforms. The recruitment took place in 2017 and 2018 as part of a larger longitudinal research project investigating health and declining alcohol consumption among youth (Törrönen et al., 2019). The purposive sampling procedure was guided by the aspiration to recruit participants with varying characteristics in terms of drinking habits, gender, age, and ethnic and socioeconomic backgrounds. The data we analyze in this article constitute the fourth round of interviews for these participants. Based on the participants' wishes, the interviews were held in person (n = 2), over the phone (n = 21) or on a video calling platform (n = 10). After informed consent was given, the interviews lasted for a median of 50 minutes (minimum 37, maximum 81) and were audio recorded. The interviews covered topics of the current living situation but were mainly focused on the influence of the pandemic on the lives of the interviewees. In the interviews, we asked questions such as "How has the pandemic influenced your life?" and "Has your consumption of alcohol and ways of drinking changed due to the pandemic?" Moreover, we asked the participants to specify the effects in terms of family life, relations, friendship, work prospects, studies, travels, leisure time activities and social media practices. The study was approved by the Swedish Ethical Review Authority (protocol codes 2016/2404-31; 2021-02158).
The interviews were mainly conducted during September and October 2021. At the time of the interview, the participants were between 18 and 24 years of age (median 22). The participants were categorized according to their drinking practices into abstainers, moderate drinkers or heavy drinkers. A participant was categorized as a moderate drinker when s/he drank only a little and avoided intoxication, and as a heavy drinker when alcohol was used for intoxication and drinking resulted in drunkenness at least once a month or more often. As displayed in Table 1, the majority of the participants were women and most were university students. A bit more than one-third of the participants had a foreign background, primarily with Middle Eastern or South American ancestry. In terms of their parents' occupations, the participants came from various socioeconomic backgrounds.
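Read purely as a decision rule, the categorization above can be sketched as a short piece of code. The following Python snippet is only an illustrative reading of the stated criteria; the field names and the way intoxication frequency is encoded are our own assumptions and are not taken from the study's actual codebook.

    def categorize_drinker(drinks_alcohol, drinks_to_intoxication, drunk_occasions_per_month):
        # Illustrative reading of the categorization rule; the inputs are assumed, not from the codebook.
        if not drinks_alcohol:
            return "abstainer"
        # Heavy drinker: alcohol is used for intoxication and drinking results in
        # drunkenness at least once a month or more often.
        if drinks_to_intoxication and drunk_occasions_per_month >= 1:
            return "heavy drinker"
        # Moderate drinker: drinks only a little and avoids intoxication.
        return "moderate drinker"

    # Example: a participant who drinks occasionally but avoids intoxication.
    print(categorize_drinker(True, False, 0))  # -> moderate drinker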
Coding and analysis
The data were coded using the NVivo software program. First, we read the whole material to identify the passages where our participants speak about the effects of Covid-19 on their well-being and drinking practices.
After this, we analyzed these passages by drawing on the concepts of 'actant', 'assemblage' and 'trajectory' as explained above. We noticed that Covid-19 restrictions displaced and translated our participants' actions towards both worsened and improved well-being trajectories.
In the worsened well-being trajectories, the Covid-19 restrictions mobilized multiple actants. The changes the restrictions brought about weakened our participants' relations with others, leisure time, home, studies, work and themselves. Moreover, we noticed that performing well in studies emerged as one of the most important goals in these trajectories.
In the improved well-being trajectories, the Covid-19 restrictions became associated with actants and assemblages that increased our participants' personal freedom, mobility and opportunities to be flexible in relation to others, leisure time activities, studies, home, work and themselves, which led to translations that enhanced their overall well-being.
Moreover, we noticed that in both worsened and improved well-being trajectories drinking practices either diminished or increased our participants' capacities and competencies for well-being.
Since in the beginning the public health restrictions on social proximity produced mostly negative outcomes for our participants' well-being, we first analyze what kinds of worsened well-being trajectories, assemblages and translations the Covid-19 restrictions mobilized in our participants' everyday lives. After this, we examine the improved well-being trajectories.
Findings
Our findings below show that the Covid-19 lockdown mobilized processes in which the relations and practices at home, doing studies, working and spending leisure time (by drinking or doing other activities) changed, thereby also transforming our participants' identities as students, workers, family members or drinkers.
Drinking parties helped restore a sense of normal life in decreased well-being trajectories
Mia's account below is an example of a worsened well-being trajectory in which Covid-19 restrictions reorganized her practices into an assemblage whose relations caused her everyday routines to collapse, her sleeping to become excessive, her eating habits to become disordered and her studies to become unfocused. The trajectory describes how, through these translations, Mia's encounters with the world led to a loss of the competencies and abilities to be an active agent who can resist the complications in life and transform them to serve her own well-being. This moved her towards worsened well-being. During the lockdown, Emilia experienced a similar loss of routines and agency to Mia, as her encounters with the world became narrowed down to the home. While Mia tried to regain her agency by sleeping and resting, Emilia used drinking parties as a counterforce against the negative effects of the restrictions. By participating with her friends in drinking parties at each other's homes, she was able to bring normality to her life and momentarily distance herself from the abnormality of the situation.
EMILIA: All I did was sleep.… I started feeling sad and angry. And then I got anxious.… During the fall and winter, I started hanging out with people. But we were not really healthy. We partied every weekend just to drink. And then we did that, from say November until December. We were a gang that met up. Or I met people, although I would rather have stayed home. But I like got my act together and went out anyway since I never met anyone and well, I guess I felt I had to meet people…. I've been drinking more during the pandemic…. There were quite a lot of weeks when I was drinking every Friday and then every Saturday it was party again…. We were usually in someone's home. And sat on like a couch and drank…. We never went out or anything. And we were not that many people…. I started going mad by sitting inside here. (19-year-old woman, heavy drinker, middle class)

In retrospect, Emilia identified that although drinking parties were healthy as a way of breaking her isolation at home, these situations and their assemblages made her drink more than before, and this aspect of her drinking practices was "not really healthy". Emilia's account, with its focus on drinking as an actant for restoring normalcy, is typical primarily of the heavy drinking participants. They usually reacted to the pandemic restrictions by moving their drinking to each other's homes. For some, this meant that home became translated into a never-ending drinking scene, into an assemblage in which the physical boundaries between diverse drinking situations disappeared. This tended to increase their drinking. For example, Oliver (23-year-old man, heavy drinker, middle class) disclosed that during the pandemic his heavy drinking friends shared a lot of pictures on Snapchat of them drinking beer at diverse times and in diverse situations at home.
Non-drinking practices were questioned in decreased well-being trajectories
Arin's account below is an example of a worsened well-being trajectory in which Covid-19 restrictions interrupted the socialization processes in a new place of study. As this assemblage distributed all activities and relations at home, it highlighted the role of digital devices, apps and platforms as possible actants in mitigating or overcoming the effects of isolation and loneliness caused by the lockdown:

ARIN: I was alone in a completely different country, in my own room and was completely lonely there for a year and a half.… I won't lie, I couldn't handle the situation in the beginning.... I tried to distract myself and lie to myself that everything was good, to run away from the truth…. My studies didn't go well either and I felt very bad about that.… I tried to get rid of everything that distracted me. As social media bombarded me with too much information about the pandemic. I stopped using social media…. In this way I was able to concentrate on my studies a little better. Then I asked: "Am I a person who studies better in a group or alone?" and tried to study together with other students, which didn't work. Because as a perfectionist I felt very bad when I noticed that some students were better than me…. Then I developed a technique of studying alone first, before meeting the other students online, which made me feel more competent. And then I started to schedule my day completely [laughs], I wrote down when I should wake up, when I will eat, when I should have leisure time, when I should talk to mom or exercise, and so on. I'm a very autistic person and I need to have a certain time-frame. This worked. I still felt bad and there were some days that were worse than others, but I was able to function. (22-year-old woman, abstainer, lower class)

Through encounters with the adversities the lockdown entailed, this account demonstrates how Arin learnt what kinds of actants she needed to avoid and what kinds of actants to engage with to get through her lonely days. She first stopped using the social media platforms that provided her with too much information about the pandemic. This translation increased her capacity to focus better on her studies, but did not help her loneliness. Then she tried to study in online groups. This translation away from loneliness made her feel worse as she noticed that some students performed better than she did. This concern then made her develop a technique of studying alone before meeting others online. This improved her performance in her own eyes, and she was able to continue her online life with her peer students. Moreover, she managed to transform and standardize her everyday life activities to follow a strict minute-by-minute schedule, which further increased her competencies and abilities to deal with worsened well-being periods in her physically cramped small space.
The following excerpt from Maya's interview illustrates how isolation from one physical environment can also lead to the development of an assemblage of "mental barriers" that blocks the use of digital devices, apps and platforms and prevents the establishment of relations with other actants and settings that would help in overcoming the isolation. As Maya became trapped in herself due to the Covid-19 restrictions, this led to an identity crisis and made her feel bad and unproductive. Moreover, the changing conditions made her question her non-drinking. In her isolated and socially restricted life during the pandemic, when sociability and dancing in public venues were transformed into drinking at home, Maya felt immature due to her decision to abstain from alcohol. As the home was translated into a setting for diverse drinking assemblages that provided possibilities for new drinking rituals, Maya's habit of keeping to "Sprite" at the cozy, intimate home events became an actant that increased her feelings of not being fully included in the fun of others:
Moderate drinking was translated into abstinence in decreased well-being trajectories
The above examples of decreased well-being trajectories display how the lockdown led to diverse affective responses among our participants. Because of it, Mia became "tired and lethargic", Emilia became anxious through sadness and anger, Arin moved towards feeling low, and Maya, by collapsing into herself, became passive. Among these participants, alcohol, social media, television, and sleep grew into actants that helped them survive the restrictions. In our data, there are also participants whom the assemblage of restrictions pushed towards such deep anxiety and increased stress that they were not able to cope without therapy or medication as actants.
For Sophie the pandemic first led to difficulty with sleeping, which then started to act as an actant that increased her anxiety and stress by destroying her motivation to study and by weakening her performance at school. Similarly, for Tara, the assemblage of Covid-19 restrictions led to translations in which her capacities for well-being were diminished in relation to her physical body, exercise, eating, love, friends, studies, sleep, and everyday life routines. These relations accumulated into an assemblage that moved her towards such a state of mental stress and inability to achieve satisfying academic results that she needed to contact healthcare and start to use sleeping pills. As actants, sleeping pills helped mitigate the problem but did not eliminate it. For Sophie and Tara, the stress and depression following the Covid-19 restrictions translated into non-drinking practices. The same was true for Erin, who experienced a deep depression during the pandemic, and for whom being home and feeling low formed an assemblage that decreased her interest in drinking:
INTERVIEWER: What do you miss? What do you want to do now that society is opening up?
ERIN: Before, it was partying. But now, when you can do that, then I don't want to do it anymore. I don't know, there's nothing…. No, I can't think of anything.… But I think that it's because I've been home so much and I've been feeling very low, so I haven't…. I'm not so stoked about going out and stuff like that anymore. (19-year-old woman, moderate drinker, lower class)
Erin's, Sophie's and Tara's shift towards non-drinking practices due to the lockdown exemplifies a typical translation, especially among our moderate drinkers. They described how assemblages of anxiety, depression and loneliness did not move their actions towards drinking. For them, drinking was associated with assemblages of sociability in public areas (e.g., student pubs, clubs) and collective effervescence (e.g., having fun, being "stoked"). The private space at home affected their drinking negatively as an actant that translated it into a "boring" and "unnecessary" activity. What they missed was not the alcohol per se, but rather the sociability of drinking situations, and therefore the pandemic restrictions did not redistribute their drinking practices to the home. This is illustrated well by Sophie:
SOPHIE: When I was home and felt bad it wasn't like "oh, I want to go to a pub", it was like "oh, I want to meet my friends".
On the other hand, Sophie also articulated a link between social drinking practices and stress. To her, drinking together with others in social situations can function as a powerful actant against study-related stress and anxiety. She reasons that because the assemblage of Covid-19 restrictions took away social drinking situations from her life, this could have affected her well-being during the lockdown.
Drinking was reduced in improved well-being trajectories
Even though most of our participants experienced negative outcomes in relation to the pandemic, some of them experienced the Covid-19 pandemic as a positive period. For them, time, digital technology, online communication, domestic material resources and leisure time hobbies grew into pivotal actants that helped in translating the effects of restrictions into improved well-being trajectories. When education, work and bigger leisure time events were transferred online, this enabled them to spend less time commuting and engaging in unwanted social activities and increased their time for themselves, loved ones and "healthy" activities such as exercise. The pandemic further helped them identify what is important in life and what is essential for their well-being. The following quotation from Alice's interview exemplifies this:

In the above quotation, Alice realizes that the main actants for her well-being are her "family and relatives, or loved ones", "time for doing things that feel good", "online teaching", "staying home in … parents' house", "family's mountain cabin" and skiing. For Alice, this realization also moved her towards drinking less even after the restrictions were lifted. Her new assemblages did not include "partying", as the lockdown made her find new ways to "hang out" with her loved ones:
ALICE: People have sort of, I don't know, well, lost interest a little bit in partying. Or, they have found other stuff to do, or other ways to hang out. I have that perception. And I feel that for myself also.
Similarly, for Mario the assemblage of pandemic restrictions provided actants that enabled him to develop practices that better suited his self-image of being introverted. The restrictions made it easier for him to stay away from bigger social events and to reorganize his everyday life routines to follow his own inclinations. Pandemic circumstances gave him excuses and justifications not to take part in drinking parties, to engage in new hobbies that did not require sociality, to decrease his workload, to move away from home and to meet his mother merely online. These translations moved him towards sobriety and empowered him to build an assemblage of relations that made his life less hectic, less social, less work- and cleaning-oriented, more mother-loving and more training-centered.

Emily also benefited from the changes in circumstances initiated by the assemblage of Covid-19 restrictions. When face-to-face teaching was translated into digital encounters, she realized how much better her well-being was without travelling to the university by subway and without being physically there. She describes how the crowded subway journey to the university caused her stress and unpleasant sweating, how uncomfortable she felt walking to the lecture halls in cold and rainy weather, and how she suffered from bad air conditioning and the multitude of students at the university premises. These material, physical, social, and emotional actants formed an assemblage that made her feel bad, and she was happy to have the possibility not to encounter and interact with them daily. As it was for both Mario and Alice, the pandemic moved Emily towards drinking less. Before the pandemic, Emily had a hectic party life, which changed drastically during the pandemic as the restrictions blocked her from participating in bigger drinking events. According to her, this reduced her drinking and made room for her to replace the attachments of drinking with healthier assemblages. Through this process her relations to training, work and studies multiplied and became stronger:
EMILY: [During the pandemic] I drank very little. I think it had a lot to do with the fact that I couldn't participate in bigger social events. I became more interested in training, work, and studies. For example, I was at my cousin's birthday party a while ago and didn't drink…. because I wanted to study the next day…. I nowadays prioritize being able to get up early and be ready … and to exercise. If you drink, a whole day is ruined, and I don't want to waste a day.
Moreover, as the Covid-19 pandemic continued over a long period of time, most of our participants, including those whose lives were first turned towards a worsened well-being trajectory by the restrictions on social proximity, learnt in the process to form relations with actants that bettered their well-being. For example, Arin argues that without encountering the events and relations to which the pandemic restrictions took her, she would not have learnt so early in her life what she wants, what she likes and what she needs to change in herself as a person. By learning to deal with tough moments during the pandemic, she became more confident in herself, and this empowered her to break away from relations that diminished her capacities for well-being:
ARIN: It was during the pandemic that I began to go a little into myself and ask "but what do I want? What do I like?
What is the weakest thing in me that I want to change?" During the pandemic, for example, I broke up with many friends. I started to be a little more confident in myself. I broke up with my boyfriend, and so on. Sure, it was tough, but … now I almost think it was nice, because if the pandemic wouldn't have happened, I think it would have taken me many years to get this understanding. I'm very outgoing and I'm very good at not seeing things, but I know that subconsciously I feel very bad about the things that I don't see. So now in retrospect, I think the pandemic was something I needed. (22-year-old woman, abstainer, lower class)

Like Arin, William (an 18-year-old man, moderate drinker, middle class) describes how the difficult events and relations he encountered during the pandemic made him "know himself better". They enabled him to become a more mature and "responsible" person. Benjamin (22-year-old man, heavy drinker, middle class), in turn, explains how the matters of concern during the pandemic changed his relation to home. As he had been used to doing his studies and training outside the home in specific material settings, at the beginning of the lockdown he was not able to concentrate on his studies and training from home. His home acted as an actant that diminished his abilities to concentrate on his studies and training. But as the pandemic continued, he learned to concentrate on doing these activities at home and even started to feel good about being able to perform them in his domestic space. In this way, his home was translated from an actant that partly diminished his capacities for well-being into an actant that increased them in a versatile way.
Remarks on the socio-material approach
This study shows how mental health is a sensitive and complex process of becoming in which one moves towards or away from well-being in interaction with multiple human and non-human actors. Our socio-material analysis demonstrates how the Covid-19 restrictions on social proximity helped our participants identify the relations essential for their well-being, and how drinking as a practice increases or decreases that well-being.
Our analysis firstly shows how actants such as the physical presence of others, bodily mobility in different material environments, the physical structures and boundaries of different practices and the possibilities for action in private spaces are important for young people's well-being. As the restrictions diminished or blocked our participants' relations, especially to public material actants and face-to-face encounters, they tended to move our participants towards worsened well-being trajectories in which they felt lonely and disconnected from life. Those with ample family-owned material and economic resources were able to build relations that enabled them to reverse the crisis more easily to serve their own well-being. For them, the pandemic could even act as an actant that improved their quality of life. In our higher-social-class participants' accounts, actants such as parents' mountain cabins or spacious homes were assembled into assemblages that created fertile conditions for introspection and provided time and possibilities to identify what is vital in life. For those participants who struggled with economic problems and cramped student housing, such narratives were not as prominent. They rather experienced their homes as prisons in which time and circadian rhythms lost their meaning, or as hostile environments that needed to be tamed by strict minute-by-minute schedules.
Secondly, our analysis demonstrates how the restrictions transferred to the home a number of activities previously carried on outside it, thereby transforming the temporal and spatial contours, functions and meanings of our participants' domestic space. For example, restrictions could make the home into a setting for new kinds of drinking habits, rituals and assemblages (MacLean et al., 2022) that had their origin in public drinking venues but now were fully materialized at home from start to finish. As a result, the home could turn into a café where you can enjoy wine alone or with your friends, into a fine dining restaurant where you share with your guests a full meal with carefully chosen wines, or into a nightclub for dancing and having fun. Moreover, the home could become translated into a never-ending drinking scene where alcohol was present all the time for spontaneous consumption. Here again, spacious private homes provided more possibilities for the translations of domestic spaces into diverse forms of drinking settings and rituals than small student studios that were not even allowed to be used for those kinds of purposes during the lockdown.
Thirdly, our analysis illustrates how, in the pandemic conditions, technological devices, apps and platforms became important actants for our participants' well-being, acting as counterforces against the restrictions on physical mobility and interaction. As digital technology facilitated online education and communication with significant others and friends from a distance, it mitigated the negative effects of the lockdown and helped our participants to overcome their physical isolation virtually, although the virtual encounters did not always feel as rewarding as physical ones.
Fourthly, our analysis elucidates how the lockdown disrupted our participants' biological and habitual daily rhythms, with the result that they could for a period lose their sense of direction and control over their lives.
Moreover, our socio-material analysis reveals that public material environments such as transportation systems and university venues (their construction, design and uses; see Emily's quotation above) may function as important actants for worsened or improved well-being.
Reflecting on results
To sum up, by forcing our participants to reorganize their everyday life practices emphatically around the home, and by channeling their physical mobility into virtual mobility, the Covid-19 restrictions mobilized among our participants both worsened and improved well-being trajectories. In our participants' worsened well-being trajectories, the pandemic restrictions gave rise to translations that moved them towards loneliness, loss of routines, passivity, material and physical barriers, self-centered thoughts, negative effects of digital technology, sleep deficit, identity crisis, anxiety, depression, and stress. This weakened the quality of their attachments to family, friends, leisure time and school and decreased their competencies to perform well in studies, which exacerbated their predicament.
In our participants' improved well-being trajectories, in turn, the Covid-19 restrictions brought about the freedom to study from a distance, more time for significant others, more time to take care of oneself, more time for one's own hobbies, room for new productive practices at home, and a better understanding of what kind of person one is.
As the improved well-being trajectories were common among the participants who came from families with rich material resources, and the worsened ones among those who lacked them, this shows how material elements may act as helping or opposing actants in overcoming adversity. While our participants from a higher social class were able to translate the restrictions into relations that increased their time to focus on attachments that enhanced their agency, mobility and well-being, our participants with fewer material resources were moved by the restrictions towards relations that decreased their time and capacities to take care of themselves and their well-being.
Our results are in line with existing studies in highlighting how young people's well-being is linked to social and physical proximity to others, meaningful studies and hobbies, self-control, positive relations with significant others, and economic and material resources and security (Wyn et al., 2015; McLeod & Wright, 2015).
When we compare our participants' assemblages of worsened and improved well-being, we notice that improved well-being trajectories are also related to the aim to perform well. In both trajectories, young people articulate an understanding of well-being as a process of becoming in which one needs to build and multiply relations that increase one's performance. When the competencies and abilities to do this are blocked or weakened, this produces anxiety and stress. This suggests that young people's becoming processes of well-being align with neoliberal and biopolitical discourses on the importance of growing one's individual human and material capital and developing diverse techniques of taking care of oneself. These are crucial in optimizing one's capacities while competing for success with others (Burns & Davies, 2015; Wilson, 2018). Hence, our participants approach well-being as an individual "project of the self" that requires constant attention and work (Beck & Beck-Gernsheim, 2002) and in which material forces, as described above, act as significant actants. At the same time, as our participants' accounts show that their well-being is moved by multiple cultural, social and material forces, they call into question the neoliberal ideal that individuals alone could be responsible for their success and happiness.
In our participants' becoming processes of well-being, drinking practices also played a role, either in their absence or in their presence. The socially restricted situation during the pandemic seemed to have polarized drinking among our participants, at least in the beginning. In some participants' (primarily heavy drinkers') worsened well-being trajectories, drinking at home with friends acted as an actant that brought normality to their lives by momentarily bringing people together out of the abnormal isolation the pandemic caused. However, it was also common that drinking decreased or disappeared from heavy and moderate drinkers' assemblages, and this made room for new and "healthier" relations (e.g., in training and studies). What was also striking was that in improved well-being trajectories drinking was typically described as a positive force, but in worsened well-being trajectories as a negative force (primarily among moderate drinkers) that amplifies their problems and weakens their mental health (Kuntsche et al., 2005). On the other hand, some participants approached drinking as a helper also in worsened well-being trajectories. They then positioned it as an actant that mitigated the anxiety and stress that the expectations to perform well at school produced. Our participants thus connected drinking in a complex way to both worsened and improved well-being trajectories. Drinking either acted as an actant that diminished young people's capacities for healthy living or as an actant that increased their capacities to engage in relations that facilitated the movement towards well-being.
While the restrictions forced new drinking situations upon the participants and transformed the contours of the home (Moretti & Maturo, 2021), they also made them consider their drinking habits in new ways (Caluzzi et al., 2021; MacLean et al., 2022). For some, this meant questioning their abstinence; for others, it meant that they could find new ways to define what they like about going out (meeting friends rather than drinking alcohol). For some, it meant reducing their drinking and replacing it with other attachments, while for others it meant that they developed new drinking rituals and practices without reducing their drinking, or even by increasing it. Regardless of whether our participants' alcohol consumption increased, decreased or stayed the same during the pandemic, several of them accentuated its presence or absence in their worsened or improved well-being trajectories, and they strongly connected its use or non-use to performance.
The study has some limitations. When we asked our participants how the pandemic affected their well-being, they did not always comment on their drinking habits in their accounts. Therefore, because we later in the interview asked more specifically how the pandemic affected their drinking and how their drinking or abstinence was related to their well-being or feeling bad, we may have made drinking or abstinence more significant actants in their well-being trajectories than they really are. On the other hand, our participants did not have any difficulties producing accounts on these issues. This implies that drinking practices provide one possible entryway into young people's well-being.
Conclusions
Overall, our results suggest that material domestic spaces, communication technologies and performance are important actants for both alcohol consumption and well-being among young people. Domestic spaces can participate in either decreasing or increasing drinking and well-being, depending on how spacious and multipurpose they are and whether their materiality can be translated to serve introspection, relaxation, sociability, and the building of human capital. Social media and communication technologies, by facilitating greater flexibility and interaction between different practices and physical spaces, may increase or reduce drinking and well-being, depending on whether they strengthen young people's sense of connection with others or weaken it by directing their interactions towards anxiety- and stress-generating comparison and competition. How the pressure to perform well is related to drinking and well-being, and what kinds of assemblages they co-constitute, is also a complicated question. Our results suggest that among non-drinkers performance seems to be linked to reduced drinking, on the one hand, and increased mental health problems, on the other. But among drinkers, the relation between decreased drinking and increased mental health problems is constituted in a more complex way. For some drinkers, the anxiety and stress about being successful, for example in university studies, may set limits on their drinking and reduce its frequency and amount so that recovery from a drinking occasion would not lead to the loss of a study day. Therefore, they tend to drink moderately. For other, heavier drinkers, the expectations to perform well may stabilize their drinking at a level that may cause them to lose one of the weekend days but not any of the study days. But there may also be heavy drinkers who feel that they must reset their performance stress once a week with more transgressive drinking, even though they know that it is unhealthy and makes them feel ill for some days. For these drinkers, the relations between heavy drinking, improved well-being and worsened well-being manifest as a delicate matter that requires constant reflection, monitoring and rationalization.
Funding
The study was supported by the Swedish Research Council for Health, Working Life and Welfare, under Grants no. 2016-00313 and no. 2020-00457.
Declarations of Interest
None. | 2022-10-22T05:18:12.700Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "c0b213ae73746b58b20fd412597669b2c7474aeb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.drugpo.2022.103895",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "48bf92539f070e7e66ec028cef1293d7e1c7501e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
149501532 | pes2o/s2orc | v3-fos-license | Cultural adaptation in the form of a mosque roof in the South Konawe District of the Southeast Sulawesi Province
Although it is already embodied in Islamic culture, acculturation can still be traced through elements of the supporting cultures. When the religion of Islam entered Indonesia, its carriers did not bring their original culture with them, so the pre-existing culture was continued within Islam. When the mosque concept is embodied as a cultural product, it takes various forms, and various building traditions are involved in the mosque. Mosques located in the Middle East are different from mosques in Indonesia. The overlapping (tiered) roof form is taken from the roofs of ancient mosques in Indonesia. The authors have traced this tiered roof form at mosques in South Konawe District. This study aims to identify and analyse which cultural elements affect the shape of the mosque roof and what form the acculturation of cultures takes at the mosques of South Konawe District. This research used a qualitative descriptive method. It concluded that Hindu culture, Islamic mysticism, and European culture acculturate in the architecture and are displayed in the roofs of the mosques in South Konawe District. The roof shape of the two- or three-tiered mosque (Hindu culture and the Islamic principles of mysticism) and the roof topped with a mustaka shaped like an onion (European culture) were formed by the acculturation of these three cultures.
Introduction
Our personality is determined by acculturation, that is, the individual's involvement within a cultural milieu (Maran 2000). Architecture is one of the expressions of national cultural values which is derived from a given culture and customs so that it is appropriate and consistently harmonious to its adherents (Budihardjo 1996). Among the challenges in developing a national architecture appropriate for Indonesian culture are invading foreign influences which arguably shift the local cultures (Budihardjo 1996). Acculturation etymologically means the mixing between two or more cultural entities which meet and influence each other. It occurs when a particular culture is faced with or invaded by foreign cultural elements that slowly but surely are accepted and considered to be a part of the original culture without displacing it. Despite having manifested in a particular culture, the cultural elements of the acculturated ones can be traced, since the acculturation does not eliminate the original (Hakim 2011, Koentjaraningrat 2005). Furthermore, a culture in a particular place will undergo the mixing process once there is contact with other cultures. People who are involved in this kind of interaction will influence each other (Handoko 2012). Acculturation is the internalisation of foreign cultural elements as the consequence of inter-cultural relations without eradicating the original characteristics.
The idea of cultural Islam was genealogically initiated by Abdurrahman Wahid, who termed it 'Pribumi Islam' in the 1980s. In Pribumi Islam, Islam is described as a theological normative doctrine which is accommodated into and by a particular culture. It does not change the character of the culture. The point is that Islam is not to be equated with Middle Eastern culture. Equating Islam with Middle Eastern culture potentially means divesting people of their cultural roots. The substance of cultural Islam is not only to avoid the polarisation between culture and religion; it is also to bridge the gap between them. Since the arrival of Islam in Indonesia, its carriers have selectively adopted the existing cultures they encountered, acculturating Islam into them and developing Islam in a different sense to the Middle Eastern one. The early carriers of Islam integrated Islam-ness and Indonesia-ness so that the substance of Islam was successfully incorporated into the existing culture without displacing the original elements. It means that the values of Islam are considered appropriate alongside the local customs, which do not need to be changed according to Arabic taste, tradition, customs and ideology. If they were altered in such a way, it would cause social turbulence such as horizontal conflicts, social disintegration, and so on. Transmitting the values of Islam into the local cultures is more important and effective than replacing them (Naupal 2005).
Islam in Indonesia is inseparable from the local traditions in Indonesian cultures, just as in the Middle East, particularly Saudi Arabia, Arabism and Islamism intertwine with each other as if they are undifferentiated. Furthermore, in order to convey the value of Islam, Islamic preachers in the past were supple and wise in bringing Islam to a culturally heterogeneous society. For example, this is what the Wali Songo did in Java when they disseminated Islamic thought. The Wali Songo easily transmitted the values of Islam into Javanese lives because they did not strictly bring Arabic culture in with the Islam that they brought. Instead, they reformulated Islam according to Javanese society. It means that what they did and exemplified in relation to Islam were the Javanese customs and traditions. Islam does not strictly prescribe physical tradition, which means that Islam does not have strict principles to do with architecture. Islam gives its believers the chance to make their physical choices rest on common sense (Syarah n.d.). Despite there being an instruction to build a mosque, there is no obvious explanation as to how its architectural structure should be shaped. So, when the mosque is manifested as a socio-cultural product, it varies in both shape and features depending on geographical and local knowledge and technological conditions (Haris 2010). When Islam came to Indonesia, the carriers did not bring in their own culture, so Islam tended to continue with the existing culture by combining its relevant aspects with Islamic values (Sidiq 2010).
The development of the mosque in Indonesia began in the seventh century, yet it changed due to the adaptation of Hindu-Buddhist elements nine centuries later. Among its characteristics were a single-pole building and a shielded, multilevel roof, which meant the higher the roof, the higher its sanctity. In Java, the Hindu-Buddhist-like building became the Islamic house of worship. Its features are mostly similar to Hindu temples. This exemplified and simultaneously confirmed that the basic principle of the development of the mosque in Indonesia was to Islamise existing cultures (Tjahjana 2013). The mosque is a unique building, symbolising Islam and the heterogeneity of Indonesian societies. Apparently, Indonesian mosques look different from the ones in the Middle East. The roofs of ancient Indonesian mosques do not look like an Arabic dome. Instead, they adopted the tiered roof, whether double-level (the Agung Mosque, Cirebon), quadruple (the Kedato Mosque, Ternate), or five-level (the Agung Mosque, Banten). The tiered roof model of the ancient Indonesian mosques is considered to be taken from the Meru shape of the Hindu era in Java. This seems relevant because before the coming of Islam to the Indonesian archipelago, people, particularly in Java, followed Hinduism and Buddhism, and their material culture remained. The building structure of the Agung Kudus mosque's tower, for instance, shares similarities with the Kulkul Building in Bali and a Hindu temple in East Java (As'ad 2013).
Despite the similarities in the general roof model, the details reveal the particularity and locality of each area in the mosques located there. In Java, the roof has the form of tajuk or limasan, while in Minangkabau it is shaped bergonjong like the Minangkabaunese traditional house. The roofs of ancient Lombok mosques such as Rambitan, Pujud, and Bayan are structured steeply to ensure that rainwater can fall down quickly so that the leaf-made roof will last longer. A similar model can also be found in several ancient mosques in West Sumatera, Kalimantan, and Ternate. This is the result of cultural adaptations to the tropical climate with high rainfall. The tiered roof model is also associated with the survival of the Meru, which has nothing to do with Islam, like the Meru in Bali. Thus, it seems that the tiered roof model in Javanese mosques gained the Meru shape from the Hindu-Buddhist era. The tiered roof model may be the result of the development of two cultural elements. This includes the roof of the temple, which is basically square and multilevel with the top being a stupa that sometimes looks like an open umbrella. In addition to balancing with the main building of the mosque, which covers a particularly large area, the tiered roof model can promote air circulation through its apertures so that it maintains coolness and comfort inside the building.
To prevent corrosion, the rooftop is covered with a kemuncak, which can have different names such as mustaka or memolo (Java), gegontongan (Banten), katabah (South Sulawesi), or pungki (Lombok). It is made of burnt clay, porcelain, wood, karas stone, iron, copper, or, as on the Agung mosque of Surakarta, gold. In Javanese tradition, the mustaka is construed as the symbol of the Mahameru mount. Several understand it in relation to Islamic concepts; it is usually odd in number, related to concepts such as the Five Principles (Rukun Islam), iman and ihsan, or syariat, tarekat, ma'rifat, and hakikat in the Islamic structure of piety. In further developments, the mustaka featured the elements of a star and moon as symbols of da'wah (Haris 2010). The kubah rooftop model (the convexly curved design) emerged as one of the features of Indonesian mosques. The kubah does not originally derive from Islamic architecture. This is because Islamic tenets do not teach clearly about specific material culture; Islam does not obviously determine any specific architectural design. Islam gives its adherents the chance to choose their physical as well as architectural form according to their common sense. The kubah has been developed for hundreds of years by many different groups all over the world. The historical line of the form of the kubah and its function is very vast and has various meanings, even if it has become the semiotic symbol of a particular religion, culture, and civilization. At a certain level, it is tricky to distinguish among the Islamic, Christian, Jewish, or Pagan kubah due to the fact that the tradition goes back to Ancient Rome and has evolved ever since. The coming of Europeans to Indonesia has influenced local social life. It can be seen in the main gate of the Kauman Mosque of Semarang, which has a curved model like a waru leaf and contains a Persian architectural model (Sidiq 2011).
The mosque is thus a cultural product which is strongly related to the local system of ideas and social activity (Yunianti 2015). As a cultural product, it is not static. Instead, it is dynamic, following the development of society (Istanto 2003). In each region, the mosque has differences and architecturally has its own particular character (Sofyan 2015). This can also be seen in the Konawe Selatan Regency. Among the 22 mosques in Konawe Selatan alone, 20 follow the tiered rooftop model. This urges us to discuss what cultural elements influence this model and how acculturation occurs and affects the mosques of Konawe Selatan. This research study aims to investigate these questions.
Research Method
This research used a qualitative-descriptive method to understand the acculturation process and its impact on mosque architectural design. The area of research was narrowed down to several sub-districts, including Lainea, Laeya, Punggaluku, Wolasi, and Konda. They were chosen because most of the mosques there follow the multilevel-rooftop model covered with a mustaka on top. Data were derived from field observations, in which the authors observed the model of the mosques, and from literature studies to obtain information on the acculturation of Islamic, Hindu, Tasawwuf and European culture and its influence on mosque design. The data were analysed using the tabulation technique and the narrative-descriptive model.
Culture of Indonesia
Approximately 300 ethnic groups exist within Indonesian territory and each has a different cultural expression. Each ethnic group has been influenced by foreign cultures such as those of India, Arabia, China and Europe, as well as other local ones, like the Malay (Soliha 2010). The cultural diversity of Indonesia has been formed by such processes as the coming of foreign migrants to Nusantara and their development during their stay. The migrants came predominantly from China, India, Arabia, and Europe, and this acculturation has resulted in human as well as cultural diversity in Indonesia (Safitri 2015). In the past, the cultural features of Indonesia were influenced by Hindu, Islamic, and European cultures (Koentjaraningrat 2005).
Hindu culture in the mosque appearance: Micro and macro cosmos
The universe began with the egg of Brahma, who stood on his own and grew to be our current world. This egg is divided into two: the upper part, which represents the sky, and the lower one, which is our earth (Budihardjo 2005). Previously, architecture was not for an aesthetic purpose, but was built for utility to support human life cosmically (Mangunwijaya 2009). According to Hindu philosophers, all power which exists in the universe can also be found in humans. The human and the universe are not two different entities; instead, they are united (Taqwin 2009). From the religious point of view, what ensures the success of human behaviour is the imitation of cosmogony, which means that the first living world must be reincarnated (Dewi 2003). The ancient people divided the world into three levels through the concept of Tribuwana, consisting of the upper (the heaven-world), the middle, where humans live, and the lower (the world of death). Our living environmental layout must reflect the order of the universe. The micro cosmos has to describe the macro one. Housing and buildings which are influenced by Hindu philosophy always consist of three main layers. The first one is the vast layer which describes humans with all of their desires, named Kamadatu. The second one is called Rupadatu, located on top of the Kamadatu, which shows a world where humans are shackled by secularity. The third layer is named Arupadatu, which describes the real consciousness, which is also shapeless. A Hindu traditional house also consists of three datu, including the floor as the Kamadatu, the poles and wall as the Rupadatu, and the roof as the Arupadatu (Mangunwijaya 2009).
The word kopula is taken from an uncommon language meaning the long, originating from the big world (the macro cosmos). It consists of: 1) the Brahma egg, which is divided into the upper and lower worlds; and 2) an intercourse between the upper and lower worlds, from which the middle world arises. The upper world is the metaphor for the arupadatu, the heavenly world, the head of the human, and holiness, while the middle one is for the rupadatu, the less holy world, and is described as the human body. The lower world is kamadatu, which means not holy at all, the description of the subordinate, a symbol of disease described as the human foot. Furthermore, our ancestors always illustrate the macro cosmos in the micro one. The micro world is also called a replication of the macro one. The micro cosmos is aligned with the macro one and is symbolised as: a) the mountain; b) the pole, obelisk, sila and lingga; c) the combination between the mountain and the obelisk, which is called poros (like a pagoda and ziggurat); d) the front gate; e) the stupa; h) the umbrella-like formation; i) tumpeng; j) the roof of the house; k) the horizontal pattern in the backroom of the house; l) the human head; and m) the vertical pattern of the roof. The macro cosmos in the middle area is likened to the micro cosmos, as in the building pattern and the human body. The house is divided into the three datu (Figure 1).

Tasawwuf is defined as a chain with each of its elements linked like stairs. The people who want to be Sufi have to begin by cleansing their soul so that they can be one who will always be with Allah. The Sufi study this method to uncover its secrets. They usually talk about pleasure and yearning, fear and hope, love and emotion, the existing, the temporal and the eternal, the beginning and the ending. They look for divinity and love wherever they can find it. They learn about the temporal condition, which is the highest level that can be reached by Al Murid. They can uncover the Al-Hijab and go up to the level of emanation and inspiration. By then, they have the basics of a unity with Allah, which is considered to be the end of the tasawwuf (Madkour 2002). The spiritual way to experience this specific religiosity is called tasawwuf. The first principle in tasawwuf is the single form (wihdah al-wujud). It influences not only the person's way of life but also their art, including the architecture. Syari'at, tarekat, ma'rifat and hakikat are the four elements of tasawwuf (Fanani 2009). Syari'at is called the first attitude that must be accomplished by ordinary people. Furthermore, tarekat is usually understood as the particular attitude in the approaching process. Ma'rifat is the symbol of the approach to the top (God) while Hakikat is called the search for the throne of God (Nurhan 1995).
Syariat, tarekat, ma'rifat, and hakikat are the four principles of tasawwuf, which are united in the tenets of ahwal and maqomat. This sufistic approach can be described as a cone diagram in which the top corner represents the throne of God while the centre point of the circle becomes the projection of the cone's peak. If the peak of the cone is the essence of the truth that is sought and the centre of the circle is its projection, then the stages along this projection line rise up to God's throne.
In the Sufi tradition, this is called ma'rifat. By then, the practice of purification in order to get closer to God is completed in terms of its understanding of tasawwuf, with ahwal as the departure point, passing through syari'at, tarekat, and ma'rifat to reach each maqomat, and then finding and uniting with hakikat. If it is transformed into a material manifestation that can be constructed, then the cone design is simplified into a pyramid of spiritual essence.
In the pyramid-model diagram, it can be described as the steps of spiritual achievement according to the concept of ahwal and maqomat. The number of spiritual levels in the ahwal and maqomat conception varies depending on the Sufi group that is followed. Generally, there are approximately forty levels, with from three up to seven steps. These are the steps which the architectural concept refers to, including the form of the building mass and the hierarchy of the vertical and horizontal space. For one who looks for the 'truth', the mass cluster of the mosque, whether single or multiple, will be seen as the gradual steps toward the 'truth'. From the early footsteps into the mosque, from the outer gates toward the mihrab, up until they are inside among the poles and roof and looking upward to the kubah, it all guides the 'truth seeker' to solemnly remember Allah by zikir. The mosque's architectural design is the medium of zikir, or at least it is the visualisation of zikir itself (Fanani 2009). Islam would never have been diffused without the tarekat, since it mediates Islam so that it becomes more rooted among its adherents. Hence, one of the important aspects of Islam is tasawwuf. In Nusantara, wandering Sufis were the agents who carried out dakwah everywhere. They have successfully Islamised Nusantara, and particularly the Malay people, since the thirteenth century. This success was bolstered by their ability to offer Islam according to the local wisdom. Through Sufism, Islam has been accepted and many of the people have easily adopted Islam into their lives. In addition to Sufism, the quick diffusion of Islam in Nusantara was also due to the fact that Islam is accommodative of the local knowledge, customs, and cultures that existed prior to its coming. Islam has a dynamic and elastic character in response to local culture so long as it does not break the Islamic principles. Islam is not anti-culture, and all cultures can be adapted to Islam. This is why Islamic culture varies (Handoko 2012).
The European culture on the roof of the mosque: Ontological culture (Rational)
One of the sources of European culture is the ancient Greek civilization of approximately the fifth century BC. Grecian culture is understood as being rational. Its thinkers always asked about the nature of what is in the world. It is in architecture that they looked for the nature of the building and expressed it in a material form. The European concept of architecture and the consciousness underlying the will of the builder are manifest in the architectural design. Throughout their history, they have tended to seek new forms instead of solving old ones. As W. Pindler has said, "the aim of us (Western societies) is not to solve but to change" (Mangunwijaya 2009).
Roman culture is among the influential sources related to the development of European architectural design. The conquest of most of the European continent and several parts of Africa and Asia was the main factor behind the diffusion of Roman influence over Europe. The conquest of Macedonia and Greece in 146 BC, in addition to increasing Rome's territories, also became the medium through which Greek culture came to Rome, together with Greek artists whose influences were carried to Roman territories. Roman architectural characteristics can be categorised into two periods: the Etruscan and the Roman. Roman architecture dates back to 700 BC, during the Etruscan era in Italy. The Etruscan people lived in central and western Italy, an area historically considered to have had advanced knowledge and practice of architecture. The Roman people developed the column, beam, and curve model, which is a feature of early Roman architecture. In Roman architecture, the curve model became the most important element, since it replaced the column-and-beam model. Among the Roman architectural features are the following: 1) the curve design, for instance in the main gate of Falerii Novi built in the 3rd century BC, the Arch of Augustus in Perugia built in 11 BC, and the principle of the curve in the aqueduct buildings in Rome; and 2) the curve in the roof, later modified to become the kubah (Figure 3) (Sumalyo 2003). The Roman curve principle later became a tradition which spread out and exists all around the Mediterranean Sea.
The ancient Romans were the founders of the arched-doorway construction technique and of the wide-span room model carrying a heavy load at its top. This technique was disseminated throughout Europe and the European nations' colonies, including Indonesia, because it is considerably strong, safe, and cheaper to construct. Furthermore, it implies grandeur, as if it were eternal and would never become obsolete. Since ancient times, the circle has been understood as the symbol of eternity, which is reflected in its form; it is without beginning or end, and it seems static (Mangunwijaya 2009). It is supposed that the kubah model is interpreted as a human-made design which replicates the span of the sky (Syarah n.d.). Since the ninth century, Arabic people have thought rationally. The Arabic people learnt about the ideal construction of the self from, and together with, sources from Hebraic, Greek, Arabic, Jewish, and Middle Eastern cultures, and also from the Chinese civilization, which is considered to be the pioneer of ontology (Mangunwijaya 2009). What makes Islamic civilization different is that it has experimented with adopting cultural elements of its conquered areas without replacing its own substantial culture. From Damascus to the Mediterranean Sea, Islam accepted the Greco-Roman aspects of Hellenism in its architectural design. Egypt gave its Nile river culture, which had been acculturated with the Roman influence. Muslims did not even hesitate to accept the Christian experience of developing society. Debt and the inheritance system of goods among the communities involved were allowed to continue as long as they did not go beyond the theological principles carried out by the Muslims (Fanani 2009).
In the early Islamic era, the shape of the mosque's roof was flat or saddle-like. The kubah was added later, when there was a need to reshape the mosque. For the first time, a mihrab was installed on the wall as the sign of the kiblat, until it became the space of the maksura. As for the roof, the kubah model was chosen, mimicking the Roman-influenced architectural buildings in Syria. The kubah was then installed on the top of the mosque so that it distinguished the mosque from other buildings. A new tradition was launched, and it developed widely after the use of the kubah in the Nabawi mosque (Fanani 2009). The kubah has various models. As rulers, the Islamic dynasties developed it into several typologies. The Syrian kubah has characteristics such as: 1) its shape is like a half-ball and it is made from wood; 2) the kubah is seated on circular constructed walls; and 3) the kubah consists of outer and inner layers. The Andalusian kubah, which was developed in Iberia and West Africa, has characteristics such as: 1) it is propped up with pilasters; 2) the pilasters are made up of floral garnishes; and 3) the outer kubah is layered with an octagonal roof. Furthermore, the Persian kubah has characteristics such as: 1) the kubah is pointed in shape; and 2) the inner space is designed like a beehive. Another is the Mamalika kubah, which was developed in Egypt; its shape is pointed, with a tiered construction at its neck. The Utsmanian kubah, developed in Anatolia, has characteristics such as: 1) it is constructed imitating mushrooms, with a plural composition tiered hierarchically; and 2) it is built like half of a ball. Furthermore, the Indo-Persian kubah, which is mostly built in India, has several features, such as: 1) it looks like an onion; and 2) its top is covered by an inverse petal-like casing (Syarah n.d.).
Dutch idioms for the Nusantara coastal architectural design came along with the waves of Western culture from the nineteenth century onwards (Mangunwijaya 2009). On 9th May 1831, the Dutch first arrived in Kendari. At that time, the city was under the rule of the Konawe Kingdom. Yet once the Korte Verklaring treaty was signed, the Dutch gained sovereignty over Konawe. Although the Konawe Kingdom kept ruling, it had to acknowledge Dutch sovereignty according to the treaty. Until 1942, the Dutch continued residing in Konawe, and no Islamic education-based institutions functioned there. During the colonial period, the teaching of Islam was not limited to tasawwuf but was also oriented towards understanding, appreciating, and experiencing the syari'at. The syari'at was devoted to the enforcement of worship and Islamic law (Paibeng et al. 2009).
The influencing cultural elements on the mosque roof in Konawe Selatan Regency
Hinduism and Buddhism were the religions recognised by Majapahit. The kingdom also maintained a relationship between the central government and local leaders, who usually preserved their traditional laws. Majapahit culture was oriented toward the palace, in which the king functioned not only as the leader but also as the centre of cultural activities. Within its paternalistic tradition, the Palace of Majapahit was the creator of the whole culture adhered to by the elites and ordinary people. According to one item of Majapahit literature, the Negarakertagama from the mid-fourteenth century, the kingdom's territory spread across Nusantara. It included the main islands such as Java, Sumatera, Kalimantan, the Malay Peninsula, Nusa Tenggara, Sulawesi, Maluku, and Papua. In Sulawesi, the Majapahit influences were centralised in several towns such as Buntayan Luwuk, Udamakatraya, Makassar, Buton, Bengawi, Kunir, Galiyao and Selayar. The relationship between the central government and the local leaders is shown in the annual visit of the local leaders to the central palace, together with the tribute that they paid to the king. Each year, Majapahit's elites and priests were delegated to collect the taxes from the local leaders (Pinuluh 2010). Before Islam, the Konawe people believed in animism and dynamism, and then in Hinduism when the latter arrived there. Their belief in the ruler of the universe is called Sangia (Dewa) (Paibeng 2009). Islam arrived in Konawe by official and spontaneous routes. The official path was the spread of Islam brought by the special envoy of the Kingdom of Buton, while the spontaneous path was the spread of Islam by Bugis and Makassar traders in Ternate, who made a stop on the Konawe coastline. The spread of Islam in Konawe was carried out through Sufism, in which the Sufi taught a theosophy that was adapted to local knowledge the people already possessed, such as magic and traditional healing. Tasawwuf, in Islam, was understandable and socially acceptable. This was due to the local knowledge, which had similarities with the tasawwuf brought in by the Islamic preachers.
The process of Islamisation through Sufism in Konawe was carried out by the Buton, Tiworo and Ternate people, by instilling the teachings of Sufism to allow the locals to know themselves and to know Allah SWT. Islamic Sufism was studied through daily meetings using a social approach, or in congregation. The community began to establish a house of worship called a langgar (surau). This is where Islam developed outward towards the remote areas of Konawe. In Kendari, the Dutch first set foot on 9th May 1831. At that time, Kendari was still in the Konawe Kingdom period. The Dutch became sovereign in Konawe after 1908, following the signing of the Korte Verklaring treaty. The agreement emphasises that the Konawe Kingdom could continue to exercise its rule but had to recognise Dutch sovereignty in Konawe. Until 1942, there was no Islamic teaching institution there. The subsequent teaching of Islam in Dutch colonial times was not limited to Sufism, but was more oriented towards the understanding, appreciation, and experience of shari'a, especially with regard to the implementation of worship and Islamic laws (Paibeng et al. 2009; Blog PNPM MPd Prov Sulawesi Tenggara 2014). In Lainea, there are approximately eighteen mosques, thirteen in Wolasi, three in Laeya and eight in Konda (SIMAS Kemenag 2014).
Our task relates to how we employ the truth and values from other cultures and other eras. Awareness influenced by other cultures does not contradict the effort to find identity and personality; at some point, it determines creativity (Mangunwijaya 2009). The acculturation of Hindu, Buddhist, Islamic and European cultures appears in the roof of the Baitul Yaqin mosque located in Watumeeto, in the Lainea sub-district of Konawe Selatan (Figure 4). Its multi-levelled rooftop symbolises the mountain in the Hindu macro-micro cosmos tradition. It also expresses Islamic tasawwuf, where each level describes the sufistic vertical steps. The first level represents syari'at, the second describes tarekat, the third refers to makrifat, while the highest is hakikat. The European tradition appears in the onion-like kubah and its cover (mustaka). The rooftop is made up of several levels and is covered with the mustaka. Among the many designs, this model is more compatible with the people in Konawe Selatan, particularly in the Lainea, Laeya, Punggaluku, Wolasi and Konda sub-districts.
Conclusion
This research has concluded that the architectural works describe the acculturation that occurs between Hindu, Islamic Tasawwuf, and European cultures. This is expressed in the design of the mosque rooftops found in several regions of Konawe Selatan. The acculturation appears in the multi-level rooftop model, which consists of two or three tiers (Hindu and Islamic Tasawwuf cultures), and in the mustaka and onion-like kubah at its top (European). In Konawe Selatan, it is not only the Hindu, Islamic Tasawwuf, and European cultures that exist, but also the local tradition called Tolaki. Furthermore, this study has the potential to explore further the influence of Tolaki architectural knowledge, in relation to its adaptation in the mosque rooftop model in Konawe Selatan.
Figure 3. The curve construction in the main gate of the Falerii Novi building in Lazio, Rome, built in the 3rd century BC. Source: Giap Roma TV 2014
Figure 4. Islamic principles of Sufism, the principles of macro-cosmos and micro-cosmos, and the principles of Roman curvature in the roof shape of Baitul Yaqin Mosque in Watumeeto Village, Lainea Sub-District
The form of acculturation in the Mosques in Konawe Selatan Regency
Konawe Selatan Regency is one of the regions in the province of Southeast Sulawesi, Indonesia. Its governmental activities are centered in Andoolo. This regency is the result of the split of the Kendari government, which was approved by UU No 4 Tahun 2003, dated 25th February 2003 (Wikipedia 2015). Konawe Selatan Regency consists of 22 sub-districts with 357 rural villages and 10 urban villages (kelurahan). This number includes the preparatory villages, of which there are up to 74. The land area of Konawe Selatan Regency is estimated to be 451,420 hectares, or 11.83 percent of the land area of Southeast Sulawesi, whereas the marine area is estimated to be ± 9.368 km². Konawe Selatan Regency is bordered to the north by the municipality of Kendari; to the east by the Banda and Maluku Seas; to the south by the Bombana and Muna Regencies; and to the west by Kolaka Regency. Based on the projection of SUPAS in | 2019-05-12T14:23:02.255Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "2fc4922d195c9fd83266c5d527794652c75cfd13",
"oa_license": "CCBYNCSA",
"oa_url": "https://e-journal.unair.ac.id/MKP/article/download/4554/5032",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c34901673294103b04bdc0e640f7b79fe9925306",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
237327139 | pes2o/s2orc | v3-fos-license | Advanced Oxidation Processes Coupled with Nanomaterials for Water Treatment
Water quality management will be a priority issue in the near future. Indeed, due to scarcity and/or contamination of the water, regulatory frameworks will be increasingly strict to reduce environmental impacts of wastewater and to allow water to be reused. Moreover, drinking water quality standards must be improved in order to account for the emerging pollutants that are being detected in tap water. These tasks can only be achieved if new improved and sustainable water treatment technologies are developed. Nanomaterials are improving the ongoing research on advanced oxidation processes (AOPs). This work reviews the most important AOPs, namely: persulfate, chlorine and NH2Cl based processes, UV/H2O2, Fenton processes, ozone, and heterogeneous photocatalytic processes. A critical review of the current coupling of nanomaterials to some of these AOPs is presented. Besides the active role of the nanomaterials in the degradation of water contaminants/pollutants in the AOPs, the relevance of their adsorbent/absorbent function in these processes is also discussed.
Introduction
Drinking water scarcity is one of the most problematic crises facing the world today; despite water being one of the most abundant resources in the world, less than 1% is considered clean water available for human consumption [1]. With increasing industrialization, water contamination becomes increasingly evident, essentially due to emerging pollutants such as pharmaceuticals, personal care products and endocrine disrupting compounds [2]. Most conventional water treatment methods do not effectively degrade or remove recalcitrant contaminants or are unable to remove enough to meet increasingly stringent water quality standards, thus indicating the need for alternatives. Additionally, due to the ongoing climate change, there will be an increasing demand for water quality management strategies. In recent decades, research has focused on the development of advanced oxidation processes (AOPs) for water treatment [3].
AOPs are water treatment processes that are promising for the degradation of persistent or toxic organic pollutants, as well as of compounds refractory to other environmental remediation/decontamination treatments. AOPs have gained great importance as alternative treatment processes that effect the degradation of organic species through the action of the hydroxyl radical (OH), oxidizing pollutants present in wastewater and industrial effluents [4]. AOPs are carried out at room temperature and at near-ambient pressure, and involve the formation of very reactive radical species with a high oxidizing capacity, mainly hydroxyl (OH) radicals. These OH radicals are extremely reactive oxidizers (the oxidation potential of the OH radical is approximately E θ = 2.8 V) and are non-selective towards organic pollutants in wastewater [5].
Over the last decades, different advanced homogeneous and heterogeneous oxidation processes have been investigated in the field of wastewater treatment. Different mechanisms can be used for the activation of AOPs: ultraviolet (UV) or visible light; different oxidant species (O2, H2O2, O3); or catalyst materials (TiO2) [8]. The different AOP techniques currently used for the treatment of wastewater can essentially be divided into two large groups, the photochemical methods and the non-photochemical methods. Non-photochemical methods do not require light energy to form OH radicals; these methods include ozonization (O3), ozonization combined with hydrogen peroxide (O3/H2O2), catalytic ozonization (O3/catalysts), the Fenton system (Fe2+/H2O2), and ultrasound, among others. With respect to photochemical methods, complete destruction of organic compounds can be achieved by combining UV radiation with non-photochemical AOPs. UV radiation from most UV lamps has a wavelength between 200 and 300 nm. Photochemical methods include ozonization combined with ultraviolet radiation (O3/UV), ozonization with hydrogen peroxide and ultraviolet radiation (O3/H2O2/UV), the photo-Fenton system (Fe2+/H2O2/UV), and photocatalysis, among others [9,10].
Currently, there are several environmental impacts associated with the different conventional water treatment technologies, and these technologies are sometimes considered inefficient in removing emerging contaminants and endocrine disrupting compounds. Although some current technologies are effective for water treatment, they often do not offer the most economical solution for the removal of various pollutants, in particular for pollutants present in low concentrations, since they require a large amount of energy and produce a considerable amount of hazardous waste [11]. The use of nanomaterials, such as nanoparticles, nanomembranes and nanotubes, has proved to be quite effective for the detection and removal of various chemical and biological substances, including metals, algae, organic substances, bacteria, viruses, nutrients and antibiotics [10]. Nanotechnology describes the characterization, fabrication and manipulation of structures, devices or materials that have one or more dimensions smaller than 100 nanometers [12].
Nanotechnology has been shown to be a promising alternative in water treatment and environmental remediation. Considerable research has been carried out on developing advanced oxidation processes (AOPs) combined with nanotechnology for water treatment. One of the great advantages of using nanotechnology in wastewater treatment is that few by-products that are toxic to the environment are formed. Techniques based on nanotechnology have proved to be extremely important for meeting increasingly demanding water quality standards, namely in the removal of emerging pollutants and of contaminants at lower levels [7].
Although the costs associated with nanotechnology are generally considered high, in some cases they can offer more effective alternatives at lower costs than conventional techniques. In addition, the costs of technologies based on nanotechnology may be reduced by production on an industrial scale and from the development of new methods that use cheaper raw materials and consume less energy. Technologies based on nanotechnology also allow the reuse of many nanoparticles, reducing energy consumption, increasing the efficiency of treatment and also reducing the production of associated waste [3,11].
Currently, the nanomaterials that have proved important for the degradation of organic matter in wastewater are classified as dendritic polymers, metal/metal oxide nanoparticles, zeolites and carbon-based nanomaterials [13]. Dendritic polymers are nanostructures constituted by multi-branched chains. Different dendrimer structures can be produced through the reaction between dendritic polymers with multifunctional properties. These dendritic polymers can be used in the removal of organic pollutants and heavy metals, acting as adsorbents; their characteristics are therefore important for water treatment [10]. Metal/metal oxide nanoparticles are a diverse class of nanomaterials composed of one, two or three metals and/or metal oxides. Silver, gold and palladium nanoparticles are widely studied for wastewater treatment. Metal oxide nanoparticles, such as TiO2, ZnO and CeO2, stand out in the degradation of organic pollutants in aqueous media, due to their high surface contact area and improved photolytic properties, thus being considered the best photocatalysts for water treatment [11]. Zeolites have a porous structure that makes them effective in ion exchange; they are therefore widely used as an ion exchange medium for water treatment and are also effective in removing metal ions present in wastewater [14].
Carbon-based nanomaterials are composed essentially of carbon atoms and have an exceptionally high surface area, making them ideal for the adsorption of pollutants and therefore very useful in the treatment of emerging contaminants; they are considered an economical option for removing pharmaceutical and personal care products and endocrine disrupting compounds. These nanomaterials have a relatively high removal capacity for organic compounds, making them a very promising option, and their cost is not much higher than that of other water treatment methods [13].
The unique properties of nanomaterials, when combined with AOPs, present unique opportunities to revolutionize water treatment. These combined methods of nanomaterials and AOPs are among the most advanced techniques for removing contaminants from wastewater or converting pollutants into more degradable compounds, and are considered a promising alternative to traditional methods of water treatment and environmental remediation [15]. However, most AOP methods combined with nanomaterials are still under study, and it is necessary to further investigate and develop these innovative methodologies in order to increase their potential. The use of nanomaterials in wastewater treatment should be done carefully, taking into consideration all sustainability indexes, and particularly ensuring that they do not result in damage to the environment [10,11].
AOPs and Their Mechanisms
In this section, a review of the most important AOPs is presented, focusing on their mechanisms of action towards the degradation of water contaminants and pollutants.
UV/H2O2 Processes
The AOP based on the UV/H2O2 system consists of the combination of ultraviolet radiation with hydrogen peroxide (H2O2), leading to its photolysis and thereby generating two hydroxyl (OH) radicals [16], as illustrated in the following equation [17]:

H2O2 + hν → 2 OH

In addition to direct photolysis, the UV/H2O2 process can also comprise indirect photolysis, where the compounds react with the OH radical produced by the photolysis of H2O2 to generate less reactive radicals such as HO2, which can in turn react with H2O2, regenerating the OH radical [16,18], as shown in the following equations:

OH + H2O2 → HO2 + H2O
HO2 + H2O2 → OH + H2O + O2

The efficiency of OH radical production depends on the ability of hydrogen peroxide to absorb UV radiation, as well as on the physical and chemical characteristics of the fluid that will be subjected to the oxidation process. UV absorption by H2O2 depends on the wavelength of the UV radiation: the shorter the wavelength, the greater the energy absorbed by H2O2, increasing the production potential of the OH radical [17].
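The dependence of OH production on the UV absorption by H2O2 can be illustrated with a simple Beer-Lambert estimate. The sketch below is only an illustration: the molar absorption coefficient (about 18.6 M−1 cm−1 for H2O2 at 254 nm) is a value commonly quoted in the AOP literature, and the concentration and optical path length are arbitrary assumed inputs; none of these numbers are taken from the works cited in this review.

```python
def fraction_absorbed(epsilon_m_cm, conc_m, path_cm):
    """Beer-Lambert estimate of the fraction of incident UV photons absorbed."""
    absorbance = epsilon_m_cm * conc_m * path_cm   # A = eps * c * l
    return 1.0 - 10.0 ** (-absorbance)

# Assumed illustrative inputs (not from the review):
EPSILON_254 = 18.6   # M^-1 cm^-1, a commonly quoted value for H2O2 at 254 nm
conc = 1.0e-3        # 1 mM H2O2
path = 5.0           # 5 cm optical path

frac = fraction_absorbed(EPSILON_254, conc, path)
print(f"Fraction of 254 nm photons absorbed by H2O2: {frac:.1%}")
```

Even over a several-centimetre optical path only a modest fraction of the photons is absorbed, which helps explain why relatively high H2O2 doses, or coupling with photocatalytic nanomaterials, are often required.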
An advantage of the UV/H2O2 system is that UV radiation acts as a disinfectant, physically inactivating the microorganisms while simultaneously assisting the peroxide photolysis, generating highly reactive OH radical species. Thus, the UV/H2O2 system is currently one of the most promising technologies for wastewater treatment [17,19].
Persulfate Based Processes
The recently developed persulfate (S2O8 2−) oxidation technology is a new AOP which has been shown to be a very promising alternative for water and wastewater treatment [20,21].
Due to its high oxidizing power (E θ = 2.01 V), persulfate is considered an emerging oxidant in the degradation of pollutants present in water; moreover, persulfate is relatively stable at room temperature and behaves non-selectively regarding the degradation of pollutants [22,23]. Persulfate can also be activated to produce the sulfate radical (SO4), an even stronger oxidant (E θ = 2.6 V) than S2O8 2− [24]. There are several methods of activating persulfate; in general, the SO4 radical can be generated through heat, ultraviolet light, alkali, ultrasound, transition metal ions and activation by metal oxides [21,25].
Compared to the OH radical, the SO4 radical has a longer lifetime and a wider pH range of action, and it has proved to be more stable and effective in oxidation for water decontamination [26,27]. Furthermore, some studies suggest that, after its generation, the SO4 radical may react with several species in solution to form other active species, such as the OH radical, which will play a key role in the pollutant degradation process [24,27].
Chlorine and NH2Cl Based Processes
Chlorine is one of the chemical oxidants most used worldwide for the disinfection of drinking water [28]. Currently, the UV/chlorine process is considered an emerging AOP, constituting an alternative to the UV/H2O2 process in water treatment; it is effective in the degradation of a variety of persistent contaminants, such as desethylatrazine, sulfamethoxazole, carbamazepine, diclofenac, benzotriazole, tolyltriazole, iopamidol and 17α-ethinylestradiol [29,30]. Compared to the UV/H2O2 process, the UV/chlorine process has demonstrated greater efficiency in the degradation of some micropollutants under slightly acidic conditions, such as trichloroethylene [31]. In the UV/chlorine process, reactive species such as the OH radical and the Cl radical are formed from the photolysis of free chlorine (HOCl/OCl−), as shown in the following equations [30]:

HOCl + hν → OH + Cl
OCl− + hν → O− + Cl

The Cl formed reacts with chloride, giving rise to Cl2 •−, and both Cl and Cl2 •− are strong oxidizers, with redox potentials of 2.4 and 2.0 V, respectively. The various reactive species, including the OH radical, formed during this process make the UV/chlorine process a promising AOP for controlling a variety of contaminants in water treatment [29]. Furthermore, chlorine that does not react during the UV/chlorine process can provide residual protection in water distribution systems [32].
Recently, the photolysis of monochloramine (NH2Cl) has also attracted significant interest as a new AOP for the degradation of emerging water contaminants, such as carbamazepine, and for efficiently controlling the formation of disinfection by-products [28]. Furthermore, NH2Cl is considered adequate to provide residual disinfection throughout the water distribution system due to its high stability [33]. NH2Cl photolysis generates NH2 and Cl radicals and, due to their strong interaction with water, significant amounts of the Cl radical are converted into OH radicals. As discussed previously, the OH radical is a strong, non-selective oxidizer of the different pollutants present in water. In contrast, the Cl radical is a relatively selective oxidant that reacts with most organic micropollutants. In relation to the NH2 radical, there is still not much information about its reactivity towards pollutants present in water [34]. Primary radicals can further react with water co-solutes to form secondary radicals (such as CO3 •− and Cl2 •−). Although knowledge is still quite limited, the UV/NH2Cl process demonstrates considerable potential as a new AOP for water treatment [28,34].
Fenton Processes
The Fenton process consists of the formation of OH radicals in an acidic medium, from the decomposition of hydrogen peroxide by the action of an iron catalyst [7]. The Fenton process is a very promising AOP due to the high mineralization it promotes under normal conditions of temperature and pressure, and it is very effective in destroying refractory and toxic organic pollutants present in wastewater [35,36]. The main reactions involved in Fenton processes are [37]:

Fe2+ + H2O2 → Fe3+ + OH + OH−
Fe3+ + H2O2 → Fe2+ + HO2 + H+
Organic pollutant + OH → Degraded products

The advantage of Fenton processes is that they do not require sophisticated equipment or expensive reagents; they are considered ecologically viable processes due to their high performance and relatively simple approach, which uses less harmful chemicals and, owing to its cyclical nature, requires a lower concentration of these chemicals [37].
A limitation of this AOP is its restricted pH range: the generation of OH radicals during the Fenton reaction is only efficient under acidic pH conditions (with a pH value close to 3) [35,36].
There are different types of Fenton processes, including Fenton, photo-Fenton, electro-Fenton, photo-electro-Fenton, homogeneous and heterogeneous Fenton, among others. Among the various AOPs, it was the Fenton and photo-Fenton processes that proved to be the most effective, energy efficient and least expensive methods for treating recalcitrant compounds, when used exclusively or in conjunction with other conventional and biological methods [37].
Ozone Based Processes
Ozone (O3) is a strong oxidizer which has been widely used in wastewater treatment and drinking water disinfection. Once dissolved in water, O3 acts as an oxidant due to its high redox potential (E θ = 2.07 V), leading to the destruction or degradation of organic pollutants through different pathways, namely via molecular ozone (direct) or via the hydroxyl radical (indirect) [38]. The main semi-reactions of ozone in water are [39]:

O3 + 2H+ + 2e− → O2 + H2O (E θ = 2.07 V)
O3 + H2O + 2e− → O2 + 2OH− (E θ = 1.24 V)

However, direct oxidation by O3, in addition to being a relatively slow reaction, is also more selective than the indirect reactions [40]. In water, O3 undergoes a series of reactions, decomposing into several oxidative species, including the OH radical, which is a stronger oxidant than the original molecular ozone and which reacts with organic and inorganic compounds in a non-selective way, with very high reaction rates. Thus, the high oxidative power of ozonization is partially due to the generation of OH radicals [39].
In the presence of other oxidants or irradiation, the yield of the OH radical can be significantly improved, namely through the addition of hydrogen peroxide (O3/H2O2), the use of UV irradiation (O3/UV), or the addition of metallic catalysts, increasing the efficiency of the treatment. These processes are called advanced oxidation processes (AOPs). Thus, ozonation and ozone-based AOPs are responsible for the destruction of many recalcitrant organic compounds in water and wastewater, including pharmaceuticals and personal care products, solvents, surfactants and pesticides [38,39].
Heterogeneous Photocatalytic Processes
Heterogeneous photocatalysis is an AOP that has been widely studied in the last two decades. Its principle is associated with the activation of a semiconductor by the action of light; when the semiconductor and the reagent are in different phases, the photocatalytic reactions are classified as heterogeneous photocatalysis [41].
The substrate that absorbs light and acts as a catalyst for chemical reactions is known as a photocatalyst, and photocatalysts are generally semiconductors. When a photocatalyst is exposed to light of the desired wavelength (sufficient energy), photoexcitation occurs and an electron is promoted from the valence band to the conduction band. The absorption of a photon with energy equal to or greater than the band separation (band gap) of the catalyst thus creates a hole in the valence band. The electron and the hole migrate to the surface of the photocatalytic semiconductor, where they act, respectively, as a reducing agent and an oxidizing agent. On the catalyst surface, the adsorbed water molecules are oxidized by the holes to OH radicals, which can subsequently oxidize the organic matter in solution, transforming it into non-toxic or less harmful products, such as CO2 and water [9].
Among the various existing photocatalysts, TiO2 is currently the most studied in heterogeneous photocatalysis processes, essentially due to its physical and chemical characteristics. TiO2 has high chemical and thermal stability, is non-toxic, inexpensive and has a relatively high efficiency [41].
As a green, highly efficient and ecological technology, heterogeneous photocatalysis for the treatment of organic pollutants present in wastewater has been shown to be a promising approach to face future environmental challenges [42].
Nanomaterials Based AOPs
In this section, the successful coupling of nanomaterials with some AOPs in water treatment is presented and briefly discussed. Table 1 shows some examples of nanomaterial-based AOPs for the treatment of dyes in water. The analysis of this table shows that the nanomaterial-based AOPs allow almost complete degradation of the dyes under investigation, and that faster decomposition processes are achieved when compared with the processes without nanomaterials. Indeed, innovative processes are being proposed for the treatment of dye-contaminated wastewaters with relatively reduced costs and with high efficiency [43].
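Comparisons of decomposition speed such as those summarised in Table 1 are frequently expressed through an apparent pseudo-first-order rate constant obtained from ln(C0/C) = kt. The sketch below is a generic illustration of that bookkeeping, with invented concentration values that do not correspond to any study cited in this review.

```python
import numpy as np

def pseudo_first_order_k(times_min, concentrations):
    """Fit ln(C0/C) = k*t (least squares through the origin) and return k in 1/min."""
    t = np.asarray(times_min, dtype=float)
    y = np.log(concentrations[0] / np.asarray(concentrations, dtype=float))
    return float(np.sum(t * y) / np.sum(t ** 2))

# Hypothetical dye concentrations (mg/L) with and without a nanocatalyst:
t = [0, 10, 20, 30, 40]
c_without = [20.0, 16.4, 13.4, 11.0, 9.0]
c_with = [20.0, 9.0, 4.1, 1.8, 0.8]

k_plain = pseudo_first_order_k(t, c_without)
k_nano = pseudo_first_order_k(t, c_with)
print(f"k (AOP alone)          = {k_plain:.3f} 1/min")
print(f"k (AOP + nanomaterial) = {k_nano:.3f} 1/min, ~{k_nano / k_plain:.1f}x faster")
```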
UV/H2O2 Processes
The UV/H2O2 process is a promising technology for the degradation of a wide spectrum of organic contaminants present in water. The coupling of nanomaterials to this process has great potential for water treatment, as the extremely small size of these particles maximizes the surface area exposed to the reagent, allowing more reactions to occur and thus increasing the rate of degradation of these contaminants [3].
The UV/H2O2 process by itself cannot efficiently degrade target pollutants, since the molar absorption coefficient of H2O2 is relatively weak in the UV region [3]. It is therefore beneficial to couple this process with nanomaterials, among which TiO2 is the most common. TiO2 is a semiconductor that exhibits a wide band gap (3.2 eV) corresponding to radiation in the near-UV range. When UV radiation is irradiated on the TiO2 surface, a reactive electron-hole pair is generated, which in turn reacts with H2O, forming a hydroxyl radical [10,44].
Thus, the combination of TiO2 with the UV/H2O2 process increases the degradation rate, forming more active radicals. The UV/H2O2/TiO2 process thus proves to be quite efficient in removing persistent organic contaminants from water [45][46][47].
Zinc oxide (ZnO) nanoparticles, when combined with the UV/H2O2 process, have also proven to increase the effectiveness of this process, with a greater production of hydroxyl radicals, which in turn has a positive impact on the removal of target pollutants [48].
Persulfate Based Processes
Recently, as an alternative to hydroxyl radicals, sulfate radicals have proven to be very effective in removing organic pollutants from aqueous solutions. For the generation of these sulfate radicals, several nanomaterials have been studied in order to make this process more efficient [3,26].
Magnetic iron oxide nanoparticles (MNPs) are a promising technology for the degradation of recalcitrant organic contaminants in wastewater, such as p-nitroaniline (PNA), as these nanoparticles can effectively activate persulfate, generating sulfate free radicals, owing to their relatively wide availability and their specific structural, magnetic and catalytic properties [56]. Furthermore, the excellent ferromagnetic behavior of Fe3O4 makes it a bifunctional material that can be easily separated from the solution. Thus, the use of these nanoparticles as catalysts in persulfate activation demonstrates a high potential in wastewater treatment [26].
In addition to magnetic iron oxide nanoparticles (MNPs), other nanomaterials have also been proposed as promising heterogeneous catalysts in persulfate activation, due to their high efficiency and good dispersion. Namely, nanomaterials such as ferrite-carbon aerogel, cobalt, iron, Co3O4/graphene oxide, CoFe2O4/titanate nanotubes, Co-MnO4 and α-MnO2 have been used for the generation of sulfate radicals, with very interesting results [3].
Fenton Processes
The Fenton process is one of the most efficient processes for removing toxic organic pollutants present in wastewater. Heterogeneous solid catalysts have been widely studied for Fenton-type reactions, as they are more beneficial than their homogeneous analogues, due to their wide operating pH range and the easy separation and recovery of the catalysts [3,57].
Fenton-type reactions using nano-zero-valent iron (NZVI) have recently emerged as a type of heterogeneous Fenton reaction, quite promising in wastewater treatment and groundwater remediation. NZVI is able to effectively transform, degrade and remove hazardous contaminants from water [58,59].
Due to its structure, NZVI has several benefits, such as high activity, non-toxicity and reduced cost. The main advantage of NZVI is its high reactivity and efficiency attributed to its particle size at the nanometer scale and its high specific surface area, which provides excellent properties for removing contaminants in aqueous solutions [59]. NZVI nanoparticles can also be combined with metals such as Cu and Pd or metal oxides such as ceria, acting as efficient catalysts in the degradation of target pollutants [3].
Recently, a wide range of nanomaterials has been studied, acting as catalysts in Fenton-type processes, such as iron nanoparticles, carbon nanocomposites, solid iron oxide, activated carbon and Fe3O4 magnetic nanoparticles. These nanomaterials showed significant results, due to their high surface areas and porosity, which results in a greater production of hydroxyl radicals, increasing the efficiency of Fenton processes in water treatment [3,57].
Ozone Based Processes
Ozone is considered to be one of the most powerful and favorable oxidants for eliminating toxic organic compounds. However, by itself, ozone has a slow reaction rate towards some persistent organic compounds, such as deactivated aromatics. Thus, some catalysts are currently being used to increase the oxidation of persistent organic compounds in wastewater, based on catalytic ozonization processes [49,60].
Recently, some nanomaterials have been applied as heterogeneous catalysts to increase the yield of the O3/H2O2 process; among them, metallic slags stand out. Iron slag emerges as an excellent catalyst due to its availability and reduced cost [59]. The use of these nanomaterials and of nano-zero-valent iron will remarkably reduce the overall cost of wastewater treatment [3,49].
ZnO and TiO2 nanoparticles have been studied as heterogeneous catalysts for the O3/UV process, where promising results were obtained. These nanoparticles were considered excellent catalysts for the generation of oxidizing species in the presence of photons [3]. ZnO has attracted special attention due to its high sensitivity to ultraviolet light, its non-chemical nature, high stability, high surface-to-volume ratio, long service life, and high efficiency in the production of electrons. Thus, the UV/ZnO/O3 process is currently considered an ecologically sound method to treat large volumes of wastewater containing toxic and resistant pollutants [50].
There is also a wide range of nanomaterials which have been studied as heterogeneous catalysts in ozonation, in order to improve their reactivity with different types of organic pollutants; examples include NiFe2O4, Co3O4, Fe3O4, MgO, Mn/γ-Al2O3 and Fe3O4/carbon nanotubes, with very interesting results [3].
Heterogeneous Photocatalytic Processes
Heterogeneous photocatalysis is an emerging branch of AOPs for water treatment. Nanometre-sized photocatalysts are preferentially used in heterogeneous photocatalysis due to their high specific surface area, which makes them more effective in the degradation of persistent pollutants in water [51].
Currently, TiO2 nanoparticles are widely applied in photocatalysis for organic reactions and the degradation of organic pollutants, due to their specific properties, such as their strong oxidative power, high chemical and thermal stability, low toxicity, reduced cost and availability in large quantities [61]. Furthermore, the ability of TiO2 nanoparticles to degrade organic compounds also depends on the particle size: because these particles have nanometric dimensions, they offer larger specific surface areas. TiO2 is thus the most promising photocatalyst for the degradation of contaminants present in water [52].
Recently, the combination of titanium dioxide nanoparticles with other compounds has also been investigated, namely with carbon spheres and nanotubes, zeolites, graphene oxide, mp-MXene/TiO2−x and Au-TiO2/SiO2 nanodots, with the aim of increasing the efficiency of heterogeneous photocatalytic processes in the degradation of target contaminants [3,62].
In addition to TiO2, other semiconductors have also been successfully used in the photocatalytic degradation of pollutants, such as zinc oxide (ZnO), which is the second most studied photocatalyst for the photocatalytic degradation of target pollutants. The main advantage of ZnO is that it absorbs a larger fraction of the solar spectrum than TiO2; therefore, the ZnO photocatalyst is sometimes considered more suitable for the photocatalytic degradation of refractory organics in the presence of sunlight [3,63].
The use of anatase TiO2 nanoparticles as a heterogeneous catalyst in water treatment has one drawback related to its wide band gap of 3.2 eV, which limits its workability to UV light: anatase TiO2 absorbs only wavelengths below about 387 nm [64]. To overcome this constraint and make TiO2-based nanomaterials photosensitive under visible light, the doping of TiO2 with metal and/or non-metal ions has been reported [53,65,66]. This strategy results in an overall (UV and visible light) photocatalytic enhancement as a consequence of the suppression of electron/hole recombination in anatase [67][68][69].
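The 387 nm absorption edge quoted above follows directly from the 3.2 eV band gap through λ = hc/Eg. The short sketch below simply checks this number; the ZnO band-gap value used for comparison (about 3.3 eV) is a typical literature figure assumed here and is not taken from the works cited in this review.

```python
# Planck constant times speed of light in convenient units: hc ~ 1239.84 eV*nm
HC_EV_NM = 1239.84

def absorption_edge_nm(band_gap_ev):
    """Longest wavelength a semiconductor can absorb: lambda = hc / Eg."""
    return HC_EV_NM / band_gap_ev

for name, eg in [("anatase TiO2", 3.2), ("ZnO (assumed ~3.3 eV)", 3.3)]:
    print(f"{name}: band gap {eg} eV -> absorption edge ~{absorption_edge_nm(eg):.0f} nm")
# anatase TiO2 gives ~387 nm, i.e. only the near-UV tail of sunlight, which motivates doping
```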
Nanomaterials as Adsorbents in Water Treatment
In this section, a brief global description is presented of nanomaterials designed to be selective adsorbents of contaminants that exist in fresh water and wastewater. A process unit for the selective adsorption of undesirable substances in the water does not per se constitute an AOP. However, for the nanomaterial-based AOPs described above to be effective in eliminating a degradable substance, an elementary step of adsorption/absorption of the substance onto the nanoparticle may exist, which facilitates the catalytic process due to the proximity to the catalyst or due to a synergistic mechanistic effect between the adsorbent and the catalytic nanomaterials. This can be achieved by using nanocomposite materials constituted by an adsorbent and a catalyst, which may be particularly useful for the degradation of organic substances, such as dyes [70]. Indeed, the removal of dyes from water can be achieved by adsorption and/or degradation of the organic molecules.
The coupling of TiO2 with an intrinsically adsorbent material, polyethersulfone nanofibers (PES NF), resulted in an efficient adsorption-photocatalytic degradation nanosystem for methylene blue dye [54]. Graphene oxide (GO), a common adsorbent nanomaterial with properties similar to those of graphene, was grafted onto titanate nanotubes (TNT@GO), which are produced from the alkaline and thermal treatment of TiO2, and showed higher photocatalytic activity for degrading MB dye [55]. GO is characterized by a high adsorption capacity and stability, and it is a particularly interesting nanomaterial to couple with photocatalytic nanomaterials, such as TiO2 and its derivatives, because it shows some semiconductive properties and can serve as a photosensitizer by suppressing the charge recombination rate of electron-hole pairs [19].
Perspectives
The current worldwide water crisis will become increasingly worse in the coming decades. A critical part of current and future water management will be the treatment of wastewater, with particular relevance to industrial effluents, and of drinking water. Wastewater regulatory frameworks will be increasingly strict to reduce environmental impacts and to allow water to be reused. Drinking water quality standards must be improved in order to account for the emerging pollutants that are being detected in tap water. These tasks can only be achieved if new, improved and sustainable water treatment technologies are developed.
Nanomaterials will definitely be part of the solution for future water management strategies. Indeed, nanomaterials will enhance the performance of current technologies sustainably, because fewer resources are used in their production and greener catalytic processes are implemented. Moreover, when compared with bulk materials, nanomaterials have unique properties that allow the development of new and innovative AOPs. Consequently, the potential for the incorporation of nanomaterials in high-environmental-performance AOPs is enormous and will continue to grow in the near future.
The development of new nanomaterials is necessary for the water treatment industry. Nanomaterials with a multifunctional role are the most desirable, for example combining adsorption, radiation-active properties (UV and visible light), and reactive oxygen species generation. However, multifunctional advanced innovative processes can also be developed by coupling nanomaterials with bulk materials, resulting in tuned systems that can be used in water treatment. | 2021-08-28T06:17:20.780Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "4e342fbefdae39d1d76e9ffc37251075c9055496",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/8/2045/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0dc1a05b65e3fddb97ef3cd82095dab41a917f91",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253259858 | pes2o/s2orc | v3-fos-license | Search for type-III seesaw heavy leptons in leptonic final states in pp collisions at √s = 13 TeV with the ATLAS detector
A search for the pair production of heavy leptons as predicted by the type-III seesaw mechanism is presented. The search uses proton–proton collision data at a centre-of-mass energy of 13 TeV, corresponding to 139 fb−1 of integrated luminosity recorded by the ATLAS detector during Run 2 of the Large Hadron Collider. The analysis focuses on final states with three or four electrons or muons from the possible decays of new heavy leptons via intermediate electroweak bosons. No significant deviations above the Standard Model expectation are observed; upper and lower limits on the heavy lepton production cross-section and masses are derived respectively. These results are then combined for the first time with the ones already published by ATLAS using the channel with two leptons in the final state. The observed lower limit on the mass of the type-III seesaw heavy leptons combining two, three and four lepton channels together is 910 GeV at the 95% confidence level.
Introduction
Neutrino physics presents one of the biggest puzzles yet to be addressed in modern particle physics. The extremely small values of the neutrino masses compared to the masses of the other fermions appear unnatural in the Standard Model (SM) [1]. The seesaw mechanism [2][3][4][5][6] provides an elegant way to give a very small mass m ν to each of the SM neutrinos by introducing a heavy Majorana neutrino with mass M. The spontaneously broken electroweak (EW) symmetry explains the neutrino mass as a Yukawa coupling. Three types of seesaw mechanisms have been proposed and their phenomenology can be tested at collider experiments. The type-III seesaw [7] introduces at least one extra fermionic SU(2) L triplet field coupled to EW gauge bosons. These heavy charged and neutral leptons can in principle be produced by EW processes at the Large Hadron Collider (LHC).
Type-III seesaw heavy-lepton searches have already been performed in various decay channels by both the ATLAS and CMS collaborations. In Run 1, ATLAS excluded heavy leptons with masses below 335 GeV [8] using final states containing two light leptons (electrons or muons) and two jets. This mass limit was then improved to 470 GeV, still using Run 1 data, by adding the three-lepton channel [9] as suggested in Ref. [10]. Using the full Run 2 data sample of proton-proton collisions at √ s = 13 TeV, the CMS Collaboration has excluded heavy-lepton masses up to 880 GeV [11] by analysing three-and four-lepton final states, while ATLAS has excluded heavy-lepton masses up to 790 GeV [12] by using only the two-lepton-plus-jets final state. The analysis presented in this paper searches for a type-III seesaw heavy lepton in three-and four-lepton final states. For the first time, a combination with the two-lepton-plus-jets final state is performed, giving a significant improvement in the sensitivity of the analysis.
The type-III seesaw model targeted in this search is described in Ref. [13]. It assumes the pair production of the neutral Majorana (N0) and charged (L±) heavy leptons proceeds via the s-channel production of virtual EW gauge bosons. N0 pairs are not produced because N0 has T3 = Y = 0 and thus does not couple to the Z [10,14]. The production cross-section depends only on the masses of the N0 and L±, which are assumed to be degenerate, as the mass splitting due to electroweak radiative corrections is expected to be smaller than ∼ 200 MeV [15]. The decays allowed in this model are L± → Hℓ±, Zℓ±, W±ν and N0 → Zν, Hν, W±ℓ∓, where the SM leptons ℓ can be of any flavour, i.e. ℓ = e, μ, τ. The branching ratios Bℓ for the heavy-lepton decays into ℓ plus one SM boson are determined by the parameters Vℓ which govern mixing between the new heavy leptons and the SM leptons. Current bounds on Vℓ can be found in Ref. [16]. In this analysis, we assume the democratic scenario, where the three mixing parameters are equal, so that Be = Bμ = Bτ = 1/3. The branching ratios BZ, BW, BH for heavy-lepton decays into any SM lepton plus Z, W or H, namely L±, N0 → Zℓ, W±ℓ, Hℓ, are independent of the mixing parameters. For N0 masses larger than a few times the H mass, as considered in this analysis, the decays into the different SM bosons are independent of the heavy-lepton mass, and therefore 2BH ≈ 2BZ ≈ BW ≈ 1/2. Examples of Feynman diagrams in three- and four-lepton final states are shown in Fig. 1. These events are characterised by the production of two SM bosons (VV, VH or HH, where V = W, Z) and two charged leptons or neutrinos in the final state. The Majorana nature of N0 allows final states with four leptons and non-null total lepton electric charge, as shown in Fig. 1b. This analysis focuses on events with high light-lepton multiplicity, including light leptons from τ-lepton decays.
Fig. 1 Examples of Feynman diagrams for the considered type-III seesaw model [13] producing three- and four-lepton final states
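Under the democratic-mixing assumption stated above (BW ≈ 1/2, BZ ≈ BH ≈ 1/4 for heavy-lepton masses well above the Higgs boson mass), the relative rates of the different boson pairs in N0 L± events follow from simple products of branching ratios, assuming the two heavy-lepton decays are independent. The short sketch below only illustrates this bookkeeping and is not part of the analysis software.

```python
from itertools import product

# Approximate heavy-lepton branching ratios quoted in the text
# (valid for masses well above the Higgs boson mass).
branching = {"W": 0.5, "Z": 0.25, "H": 0.25}

# Probability of each boson pair in a heavy-lepton pair event,
# assuming the two decays are independent.
pairs = {}
for b1, b2 in product(branching, repeat=2):
    key = "".join(sorted((b1, b2)))
    pairs[key] = pairs.get(key, 0.0) + branching[b1] * branching[b2]

for key, prob in sorted(pairs.items(), key=lambda kv: -kv[1]):
    print(f"{key}: {prob:.3f}")
print("sum =", sum(pairs.values()))  # should be 1.0
```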
This paper is structured as follows. The ATLAS detector is described in Sect. 2, the data and simulated events used in the analysis are outlined in Sect. 3, and the event reconstruc-tion procedure is detailed in Sect. 4. The analysis strategy and background estimation are presented in Sects. 5 and 6, respectively. The systematic uncertainties are described in Sect. 7. Finally, results and their statistical interpretation are presented in Sect. 8, followed by the conclusions in Sect. 9.
ATLAS detector
The ATLAS detector [17] at the LHC is a multipurpose particle detector with a near-4π coverage in solid angle around the collision point and a cylindrical geometry coaxial with the beam axis. It consists of an inner tracking detector surrounded by a thin superconducting solenoid providing a 2 T magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer with superconducting toroidal magnets.
The inner detector (ID) provides charged-particle tracking in the range |η| < 2.5 and, going outwards from the beam pipe, is composed of a high-granularity silicon pixel detector that typically provides four measurements per track, the first hit normally being in the insertable B-layer installed before Run 2 [18,19], a silicon microstrip tracker, and a transition radiation tracker that covers the region up to |η| = 2.0.
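The acceptance ranges quoted here are expressed in pseudorapidity, η = −ln tan(θ/2), with θ the polar angle relative to the beam axis; this is the standard collider-physics definition rather than anything specific to this paper. A small conversion sketch is shown below.

```python
import math

def eta_from_theta(theta_rad):
    """Pseudorapidity eta = -ln(tan(theta/2)) for a polar angle theta (radians)."""
    return -math.log(math.tan(theta_rad / 2.0))

def theta_from_eta(eta):
    """Inverse mapping: polar angle (radians) for a given pseudorapidity."""
    return 2.0 * math.atan(math.exp(-eta))

for eta in (0.0, 1.0, 2.5, 4.9):
    theta_deg = math.degrees(theta_from_eta(eta))
    print(f"|eta| = {eta:>3}: polar angle ~{theta_deg:5.1f} deg from the beam axis")
# |eta| = 2.5 (tracker edge) is ~9.4 deg; |eta| = 4.9 (calorimeter edge) is ~0.9 deg
```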
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap highgranularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadron calorimetry is provided by the steel/scintillatortile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadron endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic energy measurements respectively.
The muon spectrometer (MS) instruments the outer part of the detector and is composed of high-precision tracking chambers up to |η| = 2.7 and fast detectors for triggering up to |η| = 2.4. The MS is immersed in a magnetic field produced by three large superconducting air-core toroidal magnets with eight coils each.
A two-level trigger system is used to select events that are of interest for the ATLAS physics programme [20]. The first-level trigger is implemented in hardware and reduces the event rate to below 100 kHz. A software-based trigger further reduces this to a recorded event rate of approximately 1 kHz.
An extensive software suite [21] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.
Data and simulated events
This analysis uses data collected in proton-proton collisions at √s = 13 TeV with proton bunches colliding every 25 ns. After requiring that all ATLAS subdetectors collected high-quality data and were operating normally [22], the total integrated luminosity amounts to 139 fb−1. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [23], obtained using the LUCID-2 detector [24] for the primary luminosity measurements. Events were collected using dilepton triggers selecting pairs of electrons [25] or muons [26]. The transverse momentum (pT) threshold of the unprescaled dilepton triggers was raised during the data taking, due to the increasing luminosity of the colliding beams, but was never higher than 24 GeV for the leading electrons and 22 GeV for the leading muons.
Signal and background events were modelled using different Monte Carlo (MC) generators, as listed in Table 1. The response of the ATLAS detector was simulated [27] using the Geant4 toolkit [28], and simulated events were reconstructed with the same algorithms as those applied to data [21]. The type-III seesaw signal model was implemented in the MadGraph5_aMC@NLO [29] generator at leading order (LO) using FeynRules [30] and the NNPDF3.0lo [31] parton distribution function (PDF) set. All decays of L± and N0 into the different leptonic flavours and subsequent decays of the W, Z and H are considered. Matrix element (ME) events were interfaced to Pythia 8.230 [32] for parton showering with the A14 set of tuned parameters [33] and the NNPDF2.3lo PDF set [34]. The signal cross-section and its uncertainty at next-to-leading-order (NLO) plus next-to-leading-logarithm (NLL) accuracy were calculated from SU(2) triplet production in an electroweak chargino-neutralino model [35,36]. The calculated cross-sections are compatible within uncertainties with the type-III seesaw NLO implementation [37,38]. The production cross-sections for a 600, 800 and 1000 GeV type-III seesaw heavy lepton are 29.6 ± 3.0, 7.0 ± 0.8 and 1.97 ± 0.25 fb, respectively.
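As a rough orientation, and not a number quoted in the paper, multiplying these cross-sections by the 139 fb−1 integrated luminosity gives the approximate number of heavy-lepton pairs produced in the full Run 2 dataset, before any branching ratios or selection efficiencies are taken into account.

```python
LUMINOSITY_FB = 139.0  # integrated luminosity of the Run 2 dataset, in fb^-1

# NLO+NLL production cross-sections quoted in the text (in fb) for degenerate
# heavy-lepton masses of 600, 800 and 1000 GeV.
cross_sections_fb = {600: 29.6, 800: 7.0, 1000: 1.97}

for mass_gev, sigma_fb in cross_sections_fb.items():
    n_produced = sigma_fb * LUMINOSITY_FB
    print(f"m = {mass_gev:4d} GeV: ~{n_produced:6.0f} pair-production events "
          "before branching ratios and selection")
```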
Simulated SM background samples include diboson processes, which are the dominant ones, followed by processes labelled rare top quark that include multi-top-quark production and top-quark production in association with EW bosons (ttV, ttH, tWZ). Other SM simulated samples are triboson (VVV), tt, single-top, and Drell-Yan (qq → Z/γ* → ℓ+ℓ−, with ℓ = e, μ, τ) production processes. They are mainly used for the estimation of reducible backgrounds as described in Sect. 6. The MEPS@NLO prescription [39] was used in the generation of Drell-Yan processes to match the ME to the parton shower. The generators used in the MC sample production and the cross-section calculations used for MC sample normalisations are listed in Table 1. The normalisations of the dominant backgrounds, the diboson and rare top-quark processes, are extracted from the final likelihood fit, as described in Sect. 8.
Diboson V V and triboson V V V events were simulated with the Sherpa 2.2.1-2 generator [44]. Off-shell effects and Higgs boson contributions were included. ME calculations were matched and merged with the Sherpa parton shower based on the Catani-Seymour dipole factorisation [45,46] using the MEPS@NLO prescription [39,[47][48][49]. The Open-Loops library [50,51] provided QCD corrections. Diboson samples were produced with different lepton multiplicities using MEs at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions. Loop-induced gg → V V processes were generated with LO MEs for emission of up to one additional parton for both the fully leptonic and semileptonic final states. Electroweak production of a diboson pair in association with two jets (V V j j) was also simulated at LO. The PDFs used for the nominal samples were CT14 [52] and MMHT2014 [53].
Samples of events from tt+V (where V stands for γ*, W, Z and H) and tWZ processes were produced using MadGraph5_aMC@NLO [29] at NLO accuracy and were interfaced to the Pythia 8.230 parton shower with NNPDF3.0nlo [31] PDFs. The h_damp parameter, which controls the matching between the ME and the parton shower, was set as in Ref. [54], using a top-quark mass of m_top = 172.5 GeV.
(Table 1 caption: Configurations used for event generation of signal and most-relevant background processes. For the cross-section, the order in the strong coupling constant is shown for the perturbative calculation. If only one parton distribution function is shown, the same one is used for both the ME and parton shower generators; if two are shown, the first is used for the ME calculation and the second for the parton shower. Tune refers to the set of tuned underlying-event parameters used by the parton shower generator. The masses of the top quark and SM Higgs boson were set to 172.5 GeV and 125 GeV, respectively. The samples with negligible impact are mentioned in the …)
All simulated events were overlaid with a simulation of multiple pp interactions occurring in the same or neighbouring bunch crossings. These pp inelastic scattering events were generated by Pythia 8.186 using the NNPDF2.3lo set of PDFs and the A3 set of tuned parameters [55]. Their effects are referred to as pile-up. The simulated events were reweighted such that the distribution of the average number of interactions per bunch crossing is compatible with that observed in the data.
Event reconstruction
Events considered in this analysis are required to have at least one collision vertex reconstructed with at least two tracks with transverse momentum greater than 500 MeV. The primary vertex of the hard-scattering event is the one with the largest sum of the associated tracks' squared transverse momenta. Events have to satisfy the quality criteria listed in Ref. [22], including rejection of events with a large amount of calorimeter noise or non-collision background.
Electrons are reconstructed by matching a charged-particle track in the ID with an energy deposit in the electromagnetic calorimeter. Electron candidates are required to satisfy a Loose likelihood-based identification selection [56] and to be in the fiducial volume of the inner detector, |η| < 2.47. The transition region between the barrel and endcap calorimeters (1.37 < |η| < 1.52) is excluded because it is partially non-instrumented due to services infrastructure. The transverse impact parameter d0 of the track associated with the candidate electron must have a significance satisfying |d0|/σ(d0) < 5. This is required in order to reduce the number of electrons originating from secondary decays. Similarly, the track's longitudinal impact parameter z0 relative to the primary vertex must satisfy |z0 sin(θ)| < 0.5 mm, where θ is the track's polar angle. The electron's transverse energy E_T must exceed 10 GeV. After this preselection, to refine the electron quality, a Tight likelihood-based identification selection and a set of Loose isolation criteria based on both calorimetric and tracking information are applied to primarily select electrons coming from the decays of the heavy leptons or the EW bosons.
Track segments in the MS are matched with ID tracks to reconstruct muons if they are within the η coverage of the ID. Muon candidates with p_T lower than 300 GeV are required to satisfy the Medium muon identification requirements, while for high-p_T muons a specific identification working point is applied [57]. Muon candidates are required to have |η| < 2.5, a transverse impact parameter significance of |d0|/σ(d0) < 3 and a longitudinal impact parameter value of |z0 sin(θ)| < 0.5 mm. The minimum muon p_T is 10 GeV.
After this preselection, an isolation requirement based only on tracking information is applied.
In this analysis, particle-flow objects [58] are formed from energy-deposit clusters in the calorimeters and tracks measured in the ID but not matched to identified leptons. Particle-flow objects are then clustered into jets using the anti-k_t algorithm [59] with a radius parameter R = 0.4. The measured jet p_T is corrected for detector effects to measure the particle energy before interactions with the detector material [60]. Energy within jets that is due to pile-up is estimated and removed by subtracting an amount equal to the mean pile-up energy deposition density multiplied by the η-φ area of the jet. Pile-up can also produce additional jets that are identified and rejected by the jet-vertex tagger (JVT) algorithm [61], which distinguishes them from jets originating from the hard-scattering primary vertex. Only jets with transverse energy greater than 20 GeV and |η| < 2.4 are considered. Jets originating from heavy-flavour quarks are identified with the MV2c10 multivariate b-tagging algorithm using the 77% efficiency working point [62-64], with measured rejection factors of approximately 134, 6 and 22 for light-quark and gluon jets, c-jets, and hadronically decaying τ-leptons, respectively.
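As a simple illustration of the area-based pile-up subtraction described above, the sketch below corrects a jet's transverse momentum by the pile-up energy density multiplied by the jet's catchment area; the function name, inputs and numbers are illustrative assumptions, not ATLAS software.

```python
# Illustrative sketch of area-based pile-up subtraction (not ATLAS code).
# rho is the event's pile-up transverse-energy density (GeV per unit eta-phi area),
# area is the jet's catchment area in the eta-phi plane.
def subtract_pileup(jet_pt_gev: float, rho_gev_per_area: float, area: float) -> float:
    """Return the pile-up-corrected jet pT, floored at zero."""
    return max(jet_pt_gev - rho_gev_per_area * area, 0.0)

# Example: a 30 GeV jet with area ~0.5 in an event with rho = 12 GeV per unit area.
print(subtract_pileup(30.0, 12.0, 0.5))  # -> 24.0
```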
The missing transverse momentum p_T^miss (with magnitude E_T^miss) is calculated as the negative vectorial sum of the p_T of reconstructed jets and leptons in the event. A 'soft term' taking into account tracks associated with the primary vertex but not with any hard object is then added to guarantee the best performance in a high pile-up environment [65]. The E_T^miss significance S(E_T^miss), calculated with a maximum-likelihood ratio method, is used in Sect. 5 to define the various analysis regions, taking into account the direction of the p_T^miss and calibrated objects as well as their respective resolutions.
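The negative vectorial sum underlying E_T^miss can be sketched as follows; this is a simplified illustration with hypothetical inputs, and the actual soft-term construction and calibration are more involved than shown here.

```python
import math

# Simplified sketch of the missing-transverse-momentum calculation (illustrative only).
# Each object is a (pt, phi) pair; hard_objects are calibrated jets and leptons,
# soft_tracks are primary-vertex tracks not matched to any hard object.
def met(hard_objects, soft_tracks):
    px = -sum(pt * math.cos(phi) for pt, phi in hard_objects + soft_tracks)
    py = -sum(pt * math.sin(phi) for pt, phi in hard_objects + soft_tracks)
    return math.hypot(px, py), math.atan2(py, px)  # (magnitude, direction)

met_value, met_phi = met([(55.0, 0.3), (42.0, 2.9)], [(4.0, -1.2)])
print(round(met_value, 1), round(met_phi, 2))
```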
Different objects reconstructed close together in the η-φ plane could in principle have originated from the same primary object. Possible overlaps are resolved by an algorithm that appropriately removes one of the two closely spaced objects to avoid double-counting. If a muon candidate is found to share an ID track with an electron candidate, the electron candidate is rejected. If two electron candidates share an ID track, the one with the lower p_T is rejected. Jets are rejected if they are within ΔR = 0.2 of a lepton candidate, except if the candidate is a muon and three or more collinear tracks are found. Finally, lepton candidates that are within ΔR = 0.4 of any remaining jet are removed.
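A minimal sketch of the ΔR-based overlap-removal logic is given below. The thresholds follow the text; the per-object data structures are assumptions, and the electron-electron shared-track case and the muon collinear-track exception are deliberately omitted for brevity.

```python
import math

def delta_r(a, b):
    """Angular distance between two objects given as dicts with 'eta' and 'phi'."""
    deta = a["eta"] - b["eta"]
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def remove_overlaps(electrons, muons, jets):
    # Electron sharing an ID track with a muon -> drop the electron.
    electrons = [e for e in electrons
                 if not any(e["track_id"] == m["track_id"] for m in muons)]
    leptons = electrons + muons
    # Jets within dR < 0.2 of a lepton are dropped (muon exception omitted here).
    jets = [j for j in jets if all(delta_r(j, l) >= 0.2 for l in leptons)]
    # Leptons within dR < 0.4 of a surviving jet are dropped.
    leptons = [l for l in leptons if all(delta_r(l, j) >= 0.4 for j in jets)]
    return leptons, jets
```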
Analysis strategy
Once events have been classified according to the presence of exactly three or four light leptons (leptons are ordered from the highest to the lowest momentum, ℓ1 to ℓ3, or ℓ4 for four-lepton events), the two lepton-multiplicity categories are refined with dedicated selections. Events with larger lepton multiplicities can be categorized in lower lepton-multiplicity regions if one or more leptons escape detection. Signal regions (SRs) are defined so as to maximise the significance of the signal event count predicted by the targeted model relative to the expected number of SM background events. SM backgrounds are normalised by performing a simultaneous fit in the SRs and in dedicated control regions (CRs). The CRs are defined so as to be enriched in relevant background processes and depleted in events from signal processes. The fit uses a kinematic variable chosen to optimise the sensitivity to the small cross-sections expected for the signal processes. Validation regions (VRs), also depleted in signal events, are used to validate the extrapolation of the SM background expectations obtained from the background-only fit to independent regions kinematically close to the SRs. All CRs and VRs are characterised by a signal contamination below 2%.
Backgrounds are assigned to two broad categories: reducible and irreducible backgrounds. Reducible backgrounds include leptons from misreconstructed objects such as jets, or from light- or heavy-quark decays or, in the electron case, photon conversions. These are called fake or non-prompt (FNP) leptons, and events containing at least one such lepton are referred to as the FNP background. Its contribution is calculated using a data-driven method. Irreducible backgrounds are produced by SM processes with three or four prompt leptons in the final state. Prompt leptons are leptons produced in the decays of W bosons, Z bosons, and τ-leptons, as well as direct decays of the heavy leptons considered as signal in this analysis. The most important sources are diboson and rare top-quark processes, with the latter being primarily tt pairs produced in association with an EW or Higgs boson. For these processes, kinematic distributions are obtained from MC simulation and their normalisation is extracted from the fit. Because low-mass heavy leptons have been excluded by previous searches, this search focuses on higher masses, where signal events are characterised by objects having high momenta.
Details about the three- and four-lepton analysis regions are provided below, while details of the two-lepton analysis regions are given in Ref. [12]. The analysis regions are all orthogonal to one another.
Three-lepton channel
Table 2 summarises the selection criteria used to define the three-lepton SRs, CRs and VRs. (Table 2 caption: Summary of the selection criteria used to define relevant regions in the three-lepton analysis. No selection is applied when a dash is present in the corresponding cell.) The ZL SR is characterised by a leptonically decaying Z boson, and thus an opposite-sign, same-flavour (OSSF) lepton pair compatible with the Z boson mass is required. The SM boson from the decay of the other heavy lepton produced in the event decays hadronically. Signal events are expected to have a large three-lepton invariant mass (m_ℓℓℓ), and the transverse masses of the two highest-p_T leptons, m_T(ℓ1) and m_T(ℓ2), are also expected to be large. An additional requirement is placed on the angular distance between the leading and subleading leptons to further increase the signal-to-background ratio in the SRs.
A complementary ZLVeto SR, targeting signals involving leptonic decays of W bosons and hadronic decays of Z bosons (including those from H bosons), is defined by vetoing events containing OSSF lepton pairs compatible with a leptonic decay of an on-shell Z boson, requiring the invariant mass of the pair to be larger than 115 GeV. The H_T variable is defined as the scalar sum of the p_T of all selected objects in the event; in cases where the scalar sum of the p_T is restricted to only a subset of the objects, this subset is specified. Signal events are characterised by large H_T and E_T^miss values and by a large value of the scalar sum of the momenta of the same-sign leptons, denoted by H_T(SS). Since the presence of same-sign leptons in this region is mainly due to rare top-quark events and FNP leptons, the H_T of this pair is used as a discriminating variable, requiring H_T(SS) ≥ 300 GeV. To account for possible hadronic decays of electroweak bosons from diboson background sources, an upper limit is placed on the invariant mass of the two leading jets, m_jj.
Finally, the JNLow SR targets events where the electroweak bosons decay leptonically, and therefore events with low jet multiplicity, as in Fig. 1a, are selected. A lower bound is imposed on the invariant mass of the OSSF lepton pair, m(OSSF), and a large value of the scalar sum of the p_T of the three leptons, H_T(ℓℓℓ), is required. (Footnote: the generic transverse mass of one or multiple objects N_obj is defined …) Fake-lepton background is further reduced by requiring m_T(ℓ1) and m_T(ℓ2) to exceed a minimum value. The angular separation ΔR(ℓ1, ℓ2) between the two leptons is required to exceed a minimum value to reduce the FNP contribution. Overall selection efficiencies for the production of an 800 GeV type-III seesaw heavy lepton are 0.29%, 0.57% and 0.41% for the ZL, ZLVeto and JNLow SRs, respectively. SM backgrounds in the three-lepton SRs consist of diboson events, which contribute ∼60%, ∼80% and ∼40% in the ZL, JNLow and ZLVeto regions, respectively. Another background in the ZL and ZLVeto regions originates from rare top-quark processes involving one or more top quarks, which contribute ∼40% and ∼50% of the background in those regions, respectively. Therefore, a CR targeting the normalisation of the diboson background is defined by requiring at least two jets and a low transverse mass for the subleading lepton, m_T(ℓ2) ≤ 200 GeV.
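For reference, one widely used convention for the transverse mass of a set of objects combined with the missing transverse momentum, quoted here as an assumed form rather than the analysis's own definition, is

\[
m_{\mathrm{T}} \;=\; \sqrt{\Bigl(\sum_{i} E_{\mathrm{T},i} + E_{\mathrm{T}}^{\mathrm{miss}}\Bigr)^{2} \;-\; \Bigl\lvert \sum_{i} \vec{p}_{\mathrm{T},i} + \vec{p}_{\mathrm{T}}^{\;\mathrm{miss}} \Bigr\rvert^{2}}\,,
\]

which for a single massless lepton reduces to the familiar \(m_{\mathrm{T}}(\ell) = \sqrt{2\, p_{\mathrm{T}}(\ell)\, E_{\mathrm{T}}^{\mathrm{miss}}\,\bigl(1 - \cos\Delta\phi(\ell, \vec{p}_{\mathrm{T}}^{\;\mathrm{miss}})\bigr)}\).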
Two VRs are defined in order to validate background estimates for events containing a Z boson decaying into leptons, both obtained by inverting the ΔR(ℓ1, ℓ2) selection of the ZL SR and applying additional requirements. The DB-VR also requires a b-tag veto, while in the RT-VR the presence of at least one b-tagged jet is required. These VRs validate the predictions and normalisation of the diboson and rare top-quark processes, respectively. An additional JNLow-VR is obtained from the JNLow SR by inverting the transverse mass requirement on the leading lepton, m_T(ℓ1) ≤ 240 GeV. Moreover, a Fake-VR is defined by inverting the S(E_T^miss) selection common to all the other regions, without applying any additional requirement except for the lepton p_T ones. This region is enriched in contributions from FNP backgrounds and is therefore used to validate them.
In the three-lepton channel SRs, the kinematic variable used as the final discriminant in the fit to the data is the transverse mass of the three-lepton system.
Four-lepton channel
In the four-lepton channel, the momentum requirement on the three leading leptons is the same as in the three-lepton channel; the momentum of the fourth lepton is required to be larger than 10 GeV. Events are classified using the sum of the charges of the four leptons in the final state, Σq. The conditions Σq = 0 and |Σq| = 2 identify the zero-charge (Q0) and double-charge (Q2) regions, respectively. A summary of the selection criteria defining the four-lepton regions is shown in Table 3. (Table 3 caption: Summary of the selection criteria used to define relevant regions in the four-lepton analysis. N_Z is the number of leptonically reconstructed Z bosons, using opposite-sign same-flavour leptons. No selection is applied when a dash is in the corresponding cell.) Signal events are characterised by large H_T and a large invariant mass m of the four-lepton system. The presence of possible neutrinos in the final state is taken into account using the H_T + E_T^miss variable; therefore m ≥ 300 GeV and H_T + E_T^miss ≥ 300 GeV are required in both the Q0 and Q2 signal regions. Apart from the common lepton p_T requirements, these are the only kinematic selections applied in the Q2 SR, which has less background than the Q0 SR since it is very rare for a SM process to produce a doubly charged final state. To reduce the ZZ* contribution in the Q0 SR, no more than one OSSF lepton pair in an event is allowed to be compatible with a leptonic Z decay, defined by the invariant mass window 80-100 GeV. Background in the Q0 SR is further reduced by requiring S(E_T^miss) ≥ 5. Overall selection efficiencies for the production of an 800 GeV type-III seesaw heavy lepton are 0.14% and 0.11% for the Q0 and Q2 SRs, respectively.
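The Q0/Q2 classification by summed lepton charge amounts to a simple bookkeeping step; the helper below is a hypothetical illustration, not analysis code.

```python
def charge_category(lepton_charges):
    """Classify a four-lepton event by the sum of the lepton charges.

    Returns 'Q0' for a total charge of 0, 'Q2' for |sum| == 2,
    and None for any other configuration (not used in this analysis).
    """
    total = sum(lepton_charges)
    if total == 0:
        return "Q0"
    if abs(total) == 2:
        return "Q2"
    return None

print(charge_category([+1, -1, +1, -1]))  # -> Q0
print(charge_category([+1, +1, +1, -1]))  # -> Q2
```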
Two CRs and two VRs are defined in the zero-charge Q0 kinematic space. A DB-CR targeting diboson backgrounds is built by requiring a b-jet veto and defining an invariant mass window for the four-lepton system (170 GeV ≤ m < 300 GeV). A RT-CR targeting rare top-quark background is obtained by requiring at least two b-tagged jets and m < 500 GeV. To ensure orthogonality to the CRs, the VRs require events to have exactly one b-jet. To increase the contributions of diboson and rare top-quark backgrounds in the VRs, m must satisfy 170 GeV ≤ m < 300 GeV in DB-VR and 300 GeV ≤ m < 500 GeV in RT-VR. The RT-VR also requires H T + E miss T ≥ 400 GeV and S(E miss T ) ≥ 5. The main sources of background in the Q2 signal region are diboson or rare top-quark events where the electric charge of one of the electrons is mismeasured. As mentioned above, the only additional kinematic selections used to define the Q2 SR are that both H T + E miss T and the four-lepton invariant mass must exceed 300 GeV. A dedicated Q2 VR is obtained by requiring m < 200 GeV or H T + E miss T < 300 GeV in order to validate both the diboson and FNP background estimates.
In the four-lepton channel SRs, the kinematic variable H T + E miss T is used as the final discriminant to fit to the data.
Background composition and estimation
The background estimation techniques used in the analysis, combining simulations and data-driven methods common to all channels, are discussed in this section.
Irreducible-background predictions are obtained directly from simulations, but the normalisation of diboson and rare top-quark processes is obtained from the fit. To avoid double-counting between background estimates derived from MC simulation and the data-driven reducible-background predictions, a specific check is performed: events from irreducible-background MC samples are considered only if generator-level prompt leptons can be matched to their reconstructed counterparts.
There are two sources of reducible background: events in which the charge of at least one lepton is misidentified, and events with FNP leptons. The former source is relevant only in the Q2 four-lepton signal region, where an event in the Q0 category can migrate to the Q2 category if the charge of one of the leptons is mismeasured.
Charge misidentification for muons is well described by the simulation and occurs only for high-momentum muons where detector misalignments degrade the muon momentum resolution. However, electrons are more susceptible to charge misidentification, as a combination of effects from bremsstrahlung and photon conversions that might not be adequately described by the detector simulation. Correction factors (scale factors) accounting for charge misreconstruction are applied to the simulated background events. They are derived by comparing the charge misidentification probability measured in data with the one in simulations and are parameterised as function of p T and η. The charge misidentification probability is extracted by performing a likelihood fit in a dedicated Z → ee data sample, as described in Ref. [56]. The charge misidentification probability increases from ∼10 −4 to ∼10 −1 with increasing p T and |η|.
FNP leptons are produced by secondary decays of light- or heavy-flavour mesons into light leptons embedded within jets. Although the b-jet veto and lepton isolation significantly reduce the number of FNP leptons, a fraction still satisfies the selection requirements. Significant components of FNP electrons arise from photon conversions and from jets that are wrongly reconstructed as electrons. MC samples are not used to estimate these background sources because the simulation of jet production and hadronisation has large intrinsic uncertainties.
Instead, the FNP background is estimated with a data-driven approach, known as the fake factor (FF) method, as described in Ref. [66]. The FF is measured in dedicated FNP-enriched regions where the events satisfy single-lepton triggers without isolation requirements and have low E_T^miss, no b-jets, and only one reconstructed lepton that satisfies the lepton identification preselection described in Sect. 5. In these regions, two kinds of leptons are identified: L leptons, which satisfy looser object selection criteria, and T leptons, which satisfy the criteria used to identify the leptons considered in the analysis regions. Electron and muon FFs are then defined as the ratio of the number of T leptons to the number of L leptons, and are parameterised as functions of p_T and η. The FNP background is then estimated in the SRs by applying the FF as an event weight in a template region defined with the same selection criteria as the corresponding SRs, except that at least one of the leptons must be an L lepton but not a T one.
The prompt-lepton contribution is subtracted from the template region by using the irreducible-background MC samples to estimate the prompt-lepton contamination in the adjacent regions [56]. The Fake-VR, as defined in Table 2, is used to validate the data-driven FNP-lepton estimate.
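A toy sketch of the fake-factor bookkeeping described above is given below; the binning, the region definitions and the prompt-lepton subtraction are simplified assumptions, and all numbers and names are illustrative.

```python
import numpy as np

def fake_factor(n_tight_data, n_tight_prompt_mc, n_loose_data, n_loose_prompt_mc):
    """FF = N_T / N_L per (pT, eta) bin, after subtracting the prompt-lepton
    contamination estimated from irreducible-background MC."""
    n_t = np.asarray(n_tight_data, float) - np.asarray(n_tight_prompt_mc, float)
    n_l = np.asarray(n_loose_data, float) - np.asarray(n_loose_prompt_mc, float)
    return n_t / n_l

def fnp_estimate(template_event_bins, ff):
    """Weight each template-region event (with an L-but-not-T lepton) by the FF
    of the bin that lepton falls into, and sum the weights."""
    return float(sum(ff[b] for b in template_event_bins))

ff = fake_factor([120, 40], [20, 10], [900, 400], [50, 30])
print(ff)                           # per-bin fake factors
print(fnp_estimate([0, 0, 1], ff))  # toy FNP yield estimate in a signal region
```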
Systematic uncertainties
Uncertainties affect several aspects of this analysis. Experimental uncertainties related to the trigger selection, lepton reconstruction, identification, momentum measurement and isolation selection affect both the global selection efficiency and the shape of the kinematic distributions used in the fit. The main contributions come from the electron selection efficiency and the sagitta resolution of the muon spectrometer. Uncertainties are estimated mainly from Z → ℓℓ and J/ψ → ℓℓ processes [56,57]. Uncertainties in the jet energy scale and resolution are evaluated from MC simulations and data, using multi-jet, Z + jets and γ + jets events [67]; they are estimated to be less than 2% in the range of jet transverse momentum of interest and affect both the selection efficiency measurement and the kinematic distributions used in the fit. Uncertainties in the b-tagging efficiencies are evaluated from data using dileptonic tt events [68]. They are estimated to range from 8% at low momentum to 1% at high momentum. These uncertainties affect the analysis region selection efficiencies. The E_T^miss measurement uncertainties are estimated from data-to-MC comparisons in Z → μμ events without jets, as described in Ref. [69]. The E_T^miss uncertainties affect both the selection efficiencies and the kinematic distributions used in the fit. The charge misidentification uncertainty affects only the Q2 analysis regions. The uncertainty in the charge-misidentification scale factor is estimated to be less than 10% from a comparison between same-sign dielectron data and MC events, with the same electron selection as used in this analysis, and with |m_ee − m_Z| < 10 GeV, as described in Ref. [56]. The uncertainty in the pile-up simulation, derived from a comparison of data with simulation, is also taken into account [70]. The limited size of the MC samples is taken into account as an additional uncertainty. The FNP background uncertainty comes from the modelling and normalisation of the prompt leptons subtracted in the FF estimation. The origin of the FNP background is then varied by selecting slightly modified FNP-enriched regions, where the FF is measured. Variations of the FNP-enriched regions are, for example, obtained by varying the jet multiplicity requirement and the E_T^miss selection. The resulting uncertainty of the FF depends on the lepton momentum and pseudorapidity and ranges from 5 to 40% for electrons and from 10 to 30% for muons.
Theoretical uncertainties affect both the signal and background predictions. For both, the uncertainties from missing higher orders are evaluated by independently varying the QCD factorisation and renormalisation scales in the matrix element by up to a factor of two [71]. The PDF uncertainties are evaluated using the LHAPDF toolkit [72] and the PDF4LHC prescription [73]. An additional uncertainty of 10% is added to the diboson cross-section to take into account variations in the level of data-to-MC agreement for VV processes in different jet multiplicity regions. For rare top-quark backgrounds, uncertainties in the ttW cross-section are evaluated to be ±50%, while for the ttH cross-section the uncertainty from varying the QCD factorisation and renormalisation scales is +5.8/−9.2%, with another ±3.6% from PDF+α_s variations. Since the yields of the rare top-quark and diboson backgrounds are derived from the likelihood fit to the data in the CRs, the systematic variations have little impact on the final yields of the background predictions in the CRs and SRs.
Statistical analysis and results
The HistFitter [74] statistical package is used to fit the predictions to the data in the CRs and SRs. The fit considers the m_T,3ℓ and H_T + E_T^miss distributions for the three- and four-lepton channels, respectively, calculating lower limits on the heavy-lepton mass with a test statistic based on the binned profile likelihood in the asymptotic approximation [75], whose validity is tested using a pseudo-experiment approach. The lower limits on the mass are calculated at 95% confidence level (CL) and the binning is chosen to optimise the sensitivity to signal. The various components of the background predictions are validated in the corresponding VRs. Background and signal contributions are modelled by a product of independent Poisson probability density functions representing the likelihood of the fit. Systematic uncertainties are modelled by Gaussian probability density functions centred on the pre-fit prediction of the nuisance parameters, with widths that correspond to the magnitudes of these uncertainties. Four different fitting procedures are performed: the three-lepton channel on its own, the four-lepton channel on its own, the three- and four-lepton channels combined, and finally the two-, three- and four-lepton channels combined, where results for the two-lepton channel are taken from Ref. [12]. All the contributions from the experimental uncertainties in the lepton, jet and E_T^miss selections and reconstruction, pile-up simulation, background simulation, theoretical calculations and irreducible-background estimates are considered correlated among the different multiplicity channels in multi-channel fits. After a background-only likelihood fit in the CRs, the three- and four-lepton channel diboson normalisation factors are found to be 0.80 ± 0.09 and 1.08 ± 0.03, respectively. The normalisation and shape of the m_T,3ℓ and H_T + E_T^miss distributions are validated in the ZL DB-VR and Q0 DB-VR, respectively, by comparing data and SM expectations after the fit. The rare top-quark contribution normalisation is estimated to be 1.3 ± 0.2 in the four-lepton channel Q0 RT-CR and is then extrapolated to all the SRs. The background modelling is validated in the ZL RT-VR and Q0 RT-VR for the three- and four-lepton channels, respectively. Event yields after the likelihood fit for the analysis regions in the three- and four-lepton channels are shown in Fig. 2. Good agreement within statistical and systematic uncertainties between data and SM predictions is observed in all regions, demonstrating the validity of the background estimation procedure as shown in Table 4.
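Schematically, the binned likelihood described above, with Poisson terms per bin and Gaussian constraints on the nuisance parameters, can be written as follows; the notation is generic and is not meant to reproduce the exact HistFitter parameterisation.

\[
\mathcal{L}(\mu, \boldsymbol{\theta}) \;=\; \prod_{i \in \mathrm{bins}} \mathrm{Pois}\bigl(n_i \mid \mu\, s_i(\boldsymbol{\theta}) + b_i(\boldsymbol{\theta})\bigr)\; \prod_{j} \mathcal{G}\bigl(\theta_j^{0} \mid \theta_j, \sigma_j\bigr),
\]

where \(\mu\) is the signal-strength parameter of interest, \(n_i\) the observed yield in bin \(i\), \(s_i\) and \(b_i\) the predicted signal and background yields, and \(\theta_j\) the nuisance parameters with pre-fit central values \(\theta_j^{0}\) and uncertainties \(\sigma_j\).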
The post-fit distributions of the m_T,3ℓ and H_T + E_T^miss variables used in the likelihood fit in the three- and four-lepton channels are shown in Figs. 3 and 4, respectively, for the signal regions, with the binning used in the fit. After the fit, the compatibility of the data and the expected background is assessed. Good agreement is observed. The p-values, evaluated using the distributions in Figs. 3 and 4, are 0.38, 0.090 and 0.25 for the three-lepton, four-lepton and combined three- and four-lepton channels, respectively (the p-value is defined as the probability of observing an excess at least as large as the one observed in data, in the absence of signal). Figure 5 shows the distributions of these discriminating variables in some of the control and validation regions.
The relative uncertainties in the background yield estimates are shown in Fig. 6 for all analysis regions in the threeand four-lepton channels. The dominant uncertainty in the SRs, and in most of the other regions, is the statistical uncertainty of the data, which varies from 20 to 37% depending on the signal region. The MC statistical uncertainty varies from 2 to 7% instead. In the Q2 SR an uncertainty contribution close to the data statistical uncertainty comes from the charge misidentification background, considered in the Experimental category.
In the absence of a significant deviation from SM expectations, 95% CL upper limits on the signal production cross-section are derived using the CLs method [76]. The upper limits on the production cross-sections of the pp → W* → N0 L± and pp → Z* → L± L∓ processes are evaluated as a function of the heavy-lepton mass, using the three- and four-lepton channels with the democratic B scenario. By comparing the upper limits on the cross-section with the theoretical cross-section calculation as a function of the heavy-lepton mass, a lower limit on the mass of the type-III seesaw heavy leptons N0 and L± is derived. The observed (expected) exclusion limit is 870 GeV (900 +80/−80 GeV). The signal hypothesis in the three- and four-lepton channel result is also tested in a combined fit with the similar type-III seesaw search regions in the two-lepton channel [12]. All the CRs, VRs and SRs in the various lepton-multiplicity regions are statistically independent. The reconstruction algorithms and working points are the same in all cases, and the FNP and lepton charge misidentification backgrounds are estimated using the same method. The parameter of interest, namely the number of signal events, and common systematic uncertainties are treated as correlated. Normalisations of the diboson, tt (for the two-lepton multiplicity region) and rare top-quark (for the three- and four-lepton multiplicity regions) backgrounds are treated as uncorrelated, since they account for different physics processes and different acceptances in each final state. The three-lepton channel's limit dominates in the high heavy-lepton mass region, while the two-lepton channel dominates in the lower mass region. The combined observed (expected) exclusion limits on the total cross-section are shown in Fig. 7, excluding heavy-lepton masses lower than 910 GeV (960 +90/−80 GeV) at 95% CL. The combined observed (expected) exclusion limits on the total cross-section restricted to the three-lepton and four-lepton channels are shown in Figs. 8 and 9, respectively.
(Yields-figure caption: The simulated signal contribution was found to be below 2% and is not shown in the figure. The hatched bands include all statistical and systematic post-fit uncertainties with the correlations between various background sources taken into account. The lower panel shows the ratio of the observed data to the predicted SM background. The last bin in the distributions contains the overflow.)
(Fig. 7 caption, truncated at start: … [12]), the three- and four-lepton channels, and the two-, three- and four-lepton channels for the type-III seesaw process with the corresponding one- and two-standard-deviation uncertainty bands, showing the 95% CL upper limit on the cross-section. The theoretical signal cross-section prediction, given by the NLO calculation [35,36], with its corresponding uncertainty band is also shown.)
(Fig. 8 caption: Expected and observed 95% CLs exclusion limits in the three-lepton channel for the type-III seesaw process with the corresponding one- and two-standard-deviation bands, showing the 95% CL upper limit on the cross-section. The theoretical signal cross-section prediction, given by the NLO calculation [35,36], is shown with the corresponding uncertainty bands for the expected limit.)
(Fig. 9 caption: Expected and observed 95% CLs exclusion limits in the four-lepton channel for the type-III seesaw process with the corresponding one- and two-standard-deviation bands, showing the 95% CL upper limit on the cross-section. The theoretical signal cross-section prediction, given by the NLO calculation [35,36], is shown with the corresponding uncertainty bands for the expected limit.)
Conclusion
ATLAS has searched for pair-produced heavy leptons predicted by the type-III seesaw model in 139 fb⁻¹ of data from proton-proton collisions at √s = 13 TeV, recorded during the 2015-2018 data-taking period. A lower limit on the mass of the type-III seesaw heavy leptons N0 and L± is derived for final states with three or four light leptons. No significant deviation from SM expectations is observed. The observed (expected) exclusion limit on the heavy-lepton mass is 870 GeV (900 +80/−80 GeV) at the 95% CL. This result is combined with the result of the two-lepton analysis, which used very similar experimental methodologies and treatment of statistics. In the full combination, heavy leptons with masses below 910 GeV are excluded at the 95% CL, while the expected lower limit on the mass is 960 +90/−80 GeV. This is the most stringent limit to date on the type-III seesaw model from events with light leptons at the LHC.
(Acknowledgements, fragment: … Cantons, facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [77].)
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: All ATLAS scientific output is published in journals, and preliminary results are made available in Conference Notes. All are openly available, without restriction on use by external parties beyond copyright law and the standard conditions agreed by CERN. Data associated with journal publications are also made available: tables and data from plots (e.g. cross section values, likelihood profiles, selection efficiencies, cross section limits, ...) are stored in appropriate repositories such as HEPDATA (http:// hepdata.cedar.ac.uk/). ATLAS also strives to make additional material related to the paper available that allows a reinterpretation of the data in the context of new theoretical models. For example, an extended encapsulation of the analysis is often provided for measurements in the framework of RIVET (http://rivet.hepforge.org/). This information is taken from the ATLAS Data Access Policy, which is a public document that can be downloaded from http://opendata.cern.ch/record/413 [opendata.cern.ch].
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3. SCOAP3 | 2022-11-03T18:35:57.663Z | 2021-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "a6d9b0d841a397d3cdb54728a2bdbd2b52c74512",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-022-10785-0.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "52baa3e3f3cd43921692669cec887a74172a8b74",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
251159096 | pes2o/s2orc | v3-fos-license | Anatomically informed multi-level fiber tractography for targeted virtual dissection
Objectives Diffusion-weighted MRI can assist preoperative planning by reconstructing the trajectory of eloquent fiber pathways, such as the corticospinal tract (CST). However, accurate reconstruction of the full extent of the CST remains challenging with existing tractography methods. We suggest a novel tractography algorithm exploiting unused fiber orientations to produce more complete and reliable results. Methods Our novel approach, referred to as multi-level fiber tractography (MLFT), reconstructs fiber pathways by progressively considering previously unused fiber orientations at multiple levels of tract propagation. Anatomical priors are used to minimize the number of false-positive pathways. The MLFT method was evaluated on synthetic data and in vivo data by reconstructing the CST and comparing it to conventional tractography approaches. Results The radial extent of MLFT reconstructions is comparable to that of probabilistic reconstruction: p = 0.21 for the left and p = 0.53 for the right hemisphere according to the Wilcoxon test, while achieving significantly higher topography preservation compared to probabilistic tractography: p < 0.01. Discussion MLFT provides a novel way to reconstruct fiber pathways by adding the capability of including branching pathways in fiber tractography. Thanks to its robustness, feasible reconstruction extent and topography preservation, our approach may assist in clinical practice as well as in virtual dissection studies. Supplementary Information The online version contains supplementary material available at 10.1007/s10334-022-01033-3.
Introduction
Diffusion MRI fiber tractography provides an opportunity to estimate fiber orientations through the Brownian motion of water molecules. This imaging technique allows for exploring brain connectivity in vivo and non-invasively [1,2], as well as performing virtual dissection [3-6], aiding pre-surgical planning [7] and serving as a reference during surgery [8]. In the case of neurosurgery planning, the extent of resected tissue may need to be limited in order to limit functional deficit, despite maximal tumor resection being one of the key factors for prolonged survival [9,10]. Consequently, fiber bundle reconstructions need to have adequate extent to enable clinicians to estimate a safe resection margin. Despite its promising results, fiber tractography remains challenging, as existing methods have been shown to perform satisfactorily on either sensitivity or specificity, but not both [11-13].
For the purposes of surgery planning and virtual dissection, the sensitivity of tractography plays a key role, as the correct prediction of the extent of resection is essential to avoid functional impairment. The corticospinal tract (CST) is one of the bundles which neurosurgeons and neuroradiologists focus on during surgery planning to prevent motor function degradation [14]. However, the reconstruction of the corticospinal tract and other pathways is often limited by intrinsic flaws of existing tractography algorithms, which by design make it challenging to reconstruct branching configurations, leading to an increased false-negative rate [15].
Multiple approaches have been proposed to reconstruct the organization of fiber pathways from the diffusion signal, with the most common being the estimation of the fiber orientation distribution (FOD) with spherical deconvolution techniques [16-18]. Based on the way tractography methods use the information provided by the FOD, they can be categorized as either deterministic or probabilistic. Deterministic approaches follow either the dominant diffusion (or fiber) direction [19] or the main direction that deviates the least from the orientation of the previous step [16,20]. On the other hand, probabilistic approaches typically sample and propagate orientations based on the FOD in the voxel [21]. Probabilistic methods can potentially reconstruct branching-like configurations and have been shown to reconstruct more true-positive pathways than deterministic methods, but also tend to have a higher false-positive rate [11], which complicates their application in pre-surgical settings. For instance, given that directions are sampled from the orientation distribution, each step introduces a bias in relation to the peaks of the distribution. Consequently, during propagation, the bias may accumulate to the extent that the reconstructed bundle does not follow the known internal topographic organization [22-26], or the reconstruction accumulates the volume of plausible-looking pathways that will influence the safety-margin estimation during tumor resection. In contrast, deterministic methods cannot reconstruct branching configurations and are prone to generating false-negative results, but their results are reproducible by definition and straightforward to interpret. Another approach that has the potential of resolving the tractography issues related to bundle extent is global tractography (GT). GT reconstructs all white-matter fiber bundles at once by optimizing an energy function based on the diffusion data. This group of approaches aims at resolving local fiber orientations by modeling pathways as a chain of connected segments and maintaining or changing the connectivity of the segments based on the underlying data. Despite being computationally expensive and suffering from fiber pathways that sometimes do not reach the cortex, GT can show improved performance in some cases [27].
As briefly mentioned above, certain fiber bundles, such as the optic radiation and the CST, appear to have a specific topographic organization [22-26] that assigns functional duties to parts of these bundles. Maintaining such internal organization appears to be a challenge for probabilistic tractography unless it is specifically taken into account [22]. This creates potential issues when functional data are used for the placement of either a seed region or simply a region of interest, for instance when direct electrical stimulation or transcranial magnetic stimulation is performed, further complicating the interpretation of the tractography results. In such cases the streamline representation becomes more important, given the additional constraint that sub-bundles visit finer white-matter and cortical landmarks.
Incorporating anatomical prior knowledge in the tractography might offer a viable solution to improve the quality of the CST fiber tractography, given that anatomical landmarks are well defined for this tract [6,28,29]. For instance, the bundle-specific approach MAGNET [30] has been previously shown to enhance the reconstruction of the optical pathways by enforcing a specific direction for tract propagation using user-defined regions of interest (ROI). A similar guidance of the fiber tracking can be achieved also using transcranial magnetic stimulation to find the brain regions responsible for specific functionality for the purpose of filtering fiber bundles related to those regions [31].
Most anatomy-aware approaches attempt, thus, to either improve the streamline propagation or to enhance the FOD estimation. However, the aforementioned methods do not exploit all information available in the FOD. For one, the possibility of incorporating branching configurations with high angular deviations along fiber trajectories is not taken into account by most existing approaches. This problem has been first investigated by introducing the concept of pathway splitting [32], but the proposed framework may suffer from a high false-positive rate due to complications of the splitting procedure.
In this work, we propose a novel approach to fiber tractography that adds branches to fiber pathways in a hierarchical, multi-level fashion (Fig. 1). By defining target and seed regions based on anatomical priors, the algorithm imposes additional constraints on the reconstructed streamlines, limiting the number of false-positive reconstructions that might be introduced either by the algorithm or via branching. Additionally, to differentiate crossing from branching configurations, the peaks of the corresponding FODs may be considered as branches only if a pathway does not reach the target. This concept can be integrated into a wide range of tractography algorithms, e.g., any algorithm based on an FOD, both probabilistic and deterministic. In this work, we focus on the proposed multi-level strategy in combination with deterministic constrained spherical deconvolution (CSD)-based tractography [20].
Multi-level fiber tractography
The core of our algorithm is a multi-level fiber tractography (MLFT) strategy that is compatible with a wide range of fiber tractography methods and takes potential branching configurations into account. It is an iterative procedure that is capable of generating multiple spurious pathways and, consequently, requires user-defined starting and target regions as well as stopping criteria to control the false-positive rate. MLFT can be combined with both deterministic and probabilistic methods. For clarity, in this first work, we choose to focus on combining our tractography strategy with deterministic CSD-based streamline tracking [20].
Our algorithm iteratively expands the reconstruction by branching from the pathways not reaching the target region. The set of streamlines visiting the target region and added at one iteration can be considered one level of the overall bundle reconstruction. At each iteration, conventional deterministic CSD-based tractography is performed while storing information on which peaks were chosen for propagation at each point. If a reconstructed streamline does not enter the user-defined target region, its points that correspond to FODs with unused peaks are used as seeds for a new tractography level. Initial directions are defined as the FOD peaks that were not used during the reconstruction of the previous levels. The algorithm runs for a pre-defined number of levels or until a pre-defined convergence criterion is met. Finally, tracts that do not enter the target region at any of the considered levels are discarded (Fig. 1), which is a critical step to prevent the generation of aberrant branches. Co-existence of fiber crossings and fiber branching is facilitated by treating FOD peaks as crossings during propagation and only considering them as potential branches at the following iteration if the corresponding pathway does not reach the target region.
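The multi-level procedure can be summarised with the following simplified sketch. The function and variable names are illustrative rather than the authors' implementation, and the propagation and peak bookkeeping are abstracted into two caller-supplied callables.

```python
def multilevel_tracking(seeds, track_from, reaches_target, n_levels=2):
    """Sketch of multi-level fiber tractography (MLFT).

    'track_from' and 'reaches_target' are hypothetical interfaces:
      track_from(seed)          -> (streamline, unused_peak_seeds)
      reaches_target(streamline) -> bool
    """
    kept, current_seeds = [], list(seeds)
    for level in range(n_levels):
        next_seeds = []
        for seed in current_seeds:
            streamline, unused_peak_seeds = track_from(seed)
            if reaches_target(streamline):
                kept.append((level, streamline))      # accepted at this level
            else:
                next_seeds.extend(unused_peak_seeds)  # branch from unused FOD peaks
        current_seeds = next_seeds
        if not current_seeds:
            break
    return kept  # pathways that never reach the target are discarded
```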
Data
We performed experiments on both simulated and acquired diffusion-weighted images. A numeric phantom was generated using ExploreDTI [34] (v4.8.6; PROVIDI Lab, Utrecht, the Netherlands; http://www.exploredti.com/) with 6 volumes at b = 0 s/mm² and 60 volumes at b = 1200 s/mm², with a resolution of 1 mm isotropic (Fig. S1 in Online Resource). The phantom represented three fiber bundles with two branching spots, conceptually mimicking fiber configurations such as those that can be observed in the CST. The experiments with this phantom were performed without noise and for two signal-to-noise ratio (SNR) levels: 25 and 15. To analyze the performance of our method on in vivo brain images, the MASSIVE [35] dataset was used. The data consisted of 430 volumes at b = 0 s/mm², 250 volumes at b = 500 s/mm², 500 volumes each at b = 1000, 2000 and 3000 s/mm², and 600 volumes at b = 4000 s/mm². The data were acquired with a resolution of 2.5 mm isotropic. The MASSIVE dataset was corrected for signal drift [36], subject head motion, eddy currents and echo-planar imaging distortions [37].
Additionally, we applied our method to the preprocessed data of ten subjects from the Human Connectome Project (HCP). The data had a resolution of 1.25 mm isotropic and contained 18 volumes at b = 0 s/mm² and 90 volumes each at b = 1000, 2000 and 3000 s/mm².
(Fig. 1 caption, fragment: … includes points with multiple FOD peaks, some of which are ignored. b Using these points as seeds with the unused peaks as initial locations, another iteration of CSD-based tracking is performed to obtain a new level of the result. c In the last stage only the tracts that enter the pre-defined target region are retained. The background picture on the left of the whole-brain fiber tractography result is taken from [33] with permission.)
Multi-shell CSD [38] was used for the FOD estimation. The motor cortex was segmented as a combination of the left and right precentral and paracentral gyri (Fig. S2 in Online Resource) with FreeSurfer [39-41] (v6.0.0, Laboratory for Computational Neuroimaging, Charlestown, MA, USA; http://surfer.nmr.mgh.harvard.edu) and was used as a target region.
Experiment 1: Tractography in silico
We evaluated MLFT as well as iFOD2 [42], a popular probabilistic tractography algorithm, using a noiseless phantom. In all the experiments, the implementation of iFOD2 from the MRtrix package [43] was used with all options set to their defaults except for the seeding region. The same seed point was used for tracking in both cases.
The endpoint regions were placed at the separate ends of each sub-bundle ( Fig. S1 in Online Resource) that served as target regions of interest for MLFT. They were also used to select the target fibers from the results of iFOD2, which was run with default parameters. The parameter setup for MLFT was as follows: angle threshold = 45°, maximum order of spherical harmonics L max = 8 , FOD peak value threshold = 0.1, the default value in ExploreDTI. The step size was set to half the voxel size and the number of iterations was set to two.
Experiment 2: Robustness to noise
The sensitivity of the MLFT to noise was tested. Fiber tracking was performed for the phantoms at varying SNR levels with the same settings as in Experiment 1. The target fibers were then compared across SNR levels.
Experiment 3: Tractography in vivo
The MLFT approach was used to delineate the CST with the MASSIVE and HCP brain data described above. The motor cortex area of both hemispheres was used as a target region. The added value of our multi-level strategy was investigated more closely on the fanning projection of the left CST.
To evaluate whether MLFT reconstructs parts of the pathways belonging to the corpus callosum (CC), the bundle was delineated with both deterministic CSD-based whole-brain tractography and MLFT. The overlap of the CST and the CC generated by MLFT and CSD-based whole-brain tractography, respectively, was visually evaluated. The results of our algorithm were evaluated along with the results produced by the conventional deterministic CSD-based tractography from ExploreDTI as well as iFOD2 and GT [44] implemented in MRtrix. The CSD-based tractography, MLFT and iFOD2 used the same seed regions. The streamlines reconstructed by iFOD2 were further selected to include only the tracts that visit the target cortical area. In the case of GT, the masks of the seed and target regions were used to delineate the CST from the whole-brain tractography. To improve the visual interpretation of the results, implausible streamlines were removed using identical exclusion regions for all methods in case of the MASSIVE dataset.
Radial extents of the reconstructed bundles were calculated. To do that, the area covered by the bundles' endpoints in the cortex was calculated, given that the coronal projection of the motor cortex defines a 90° segment. The obtained radial extents were compared per hemisphere using a paired Wilcoxon signed-rank test with a significance level of 0.05. Additionally, density distributions of the endpoints in the motor cortex were evaluated per subject for each algorithm.
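A paired test of this kind can be reproduced with standard tooling, for example as below; the numbers are toy values and SciPy is assumed, not the authors' analysis scripts.

```python
from scipy.stats import wilcoxon

# Toy radial extents (degrees) per subject for two methods, one hemisphere.
mlft_extent  = [71.5, 68.2, 74.0, 70.1, 69.8, 72.3, 67.9, 73.5, 70.6, 71.0]
ifod2_extent = [70.9, 69.0, 73.2, 71.4, 68.5, 72.8, 68.3, 72.9, 71.2, 70.4]

stat, p_value = wilcoxon(mlft_extent, ifod2_extent)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")  # compare against alpha = 0.05
```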
The tractography parameters were set as in Experiment 1. Reconstructions with two iterations (for both MASSIVE and HCP) and three iterations (only for MASSIVE) were performed with MLFT. To obtain the whole-brain reconstruction with GT, the number of iterations was set to 10⁹, the segment length to 1.5 mm, and the maximum spherical harmonics order to L_max = 8. The default values were used for the remaining settings.
To reconstruct the CST with the MASSIVE dataset, the seed regions were placed close to the internal capsule. In case of HCP subjects an axial cross section of the brain stem was used. For seeding 100 points per voxel were evenly distributed at a single slice level. The number of seed points per voxel was selected empirically.
To run iFOD2, the FODs that were used for MLFT and CSD-based reconstructions were converted to MRtrix format using MRIToolKit (Image Sciences Institute, UMC Utrecht, the Netherlands; https:// github. com/ deluc aal/ MRITo olKit). iFOD2 was provided with a mask of the seed region used for MLFT, seed_image option was used. When performing iFOD2 on the HCP data, the number of selected pathways was empirically set to 10,000. In addition, the target regions were provided using include option, while the same function was used for filtering as in MLFT in case of the MASSIVE dataset. During the analysis of the HCP data, a NOT gate was used to remove inter-hemispheric connections, due to the use of the common seed region in the brain stem for both of the CST branches.
Experiment 4: Topographic organization
Previous research has established that both the motor cortex and the internal capsule can be divided into regions corresponding to specific motor functions, and that such organization is preserved within the CST [25,45]. The topography preservation index (TPI) [46] was calculated, which highlights whether pathways that pass in close proximity to each other through the internal capsule also have closely located endpoints in the motor area. This index reflects how well the internal organization is preserved in the bundle reconstruction. The lower the TPI score, the better the topographic organization is preserved in the reconstruction.
To calculate TPI scores, rectangular ROIs were defined around the left and right internal capsules; the longest axis of the ROI was then used to map all the tract points crossing the ROI onto the [0; 1] segment. Consequently, each pathway is assigned a value v_i ∈ [0; 1], where i is the index of the pathway. Afterwards, a triangulation is built using the endpoints in the motor area, and each edge connecting the endpoints of pathways j and k is assigned a weight equal to the distance between their projections in the ROI, i.e., |v_j − v_k|. Finally, the TPI score is the average of the weights. An edge in the calculated triangulation signals close proximity of the endpoints in the motor area, while the weight serves as a penalty if the corresponding pathways' locations in the internal capsule are distant.
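A simplified sketch of this computation is given below. It assumes SciPy's Delaunay triangulation and a 2-D parameterisation of the cortical endpoints; the authors' exact triangulation and ROI projection may differ.

```python
import numpy as np
from scipy.spatial import Delaunay

def topography_preservation_index(endpoints_2d, capsule_projection):
    """TPI-like score (illustrative sketch).

    endpoints_2d       : (n, 2) array, motor-cortex endpoint coordinates
                         (a 2-D parameterisation is assumed for the triangulation).
    capsule_projection : (n,) array, pathway positions v_i in [0, 1] projected
                         onto the long axis of the internal-capsule ROI.
    """
    endpoints_2d = np.asarray(endpoints_2d, float)
    v = np.asarray(capsule_projection, float)
    tri = Delaunay(endpoints_2d)
    weights = []
    for simplex in tri.simplices:                 # each simplex is a triangle of endpoint indices
        for a, b in ((0, 1), (1, 2), (0, 2)):
            j, k = simplex[a], simplex[b]
            weights.append(abs(v[j] - v[k]))      # penalty if nearby endpoints map far apart in the capsule
    return float(np.mean(weights))
```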
The TPI was computed for the left and right CST branches reconstructed by each of the algorithms. To visually appreciate such organization, the CST streamlines were colored according to the part of the motor cortex they reach. This allows a visual check of whether the pathways reconstructed by MLFT and iFOD2 on the MASSIVE data correspond to the anatomical position of the same associated function in the internal capsule. Additionally, statistical testing was performed to compare the obtained TPI scores using a paired Wilcoxon signed-rank test with a significance level of 0.05.
Experiment 5: Anatomical plausibility
As the previous experiment evaluates the topography preservation capability of the algorithms by comparing the relative placement of the endpoints, the coherence of the pathways was evaluated in order to observe whether the geometric similarity between pathways closely located to each other along their length is associated with the calculated TPI scores. We hypothesize that a fiber reconstruction with a lower intrinsic geometric similarity corresponds to a higher TPI, highlighting the effect of the bias on fiber pathway propagation. To this end, the minimum average direct-flip (MADF) distance was employed, which has previously been used in bundle clustering applications [47,48]. This metric represents the average point-to-point distance between two pathways and is invariant to the ordering of the points in each pathway (e.g., which endpoint is considered the start or the end). It is defined in terms of the points a_i and b_i of the pathways A and B, each of length N. The metric requires the compared tracts to contain an equal number of points, which is why all the pathways were uniformly resampled to N = 200 points. Evaluations were performed on the left and right CST bundles of the MASSIVE and HCP data obtained by the tested methods without filtering gates. For each set of the reconstructed pathways of a given subject, an all-to-all distance matrix was calculated. Then, for each pathway, the minimum distance was calculated based on that matrix.
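In the standard formulation used in the bundle-clustering literature [47, 48], given here as a reference form consistent with the description above rather than the authors' exact expression, the distance reads

\[
d_{\mathrm{direct}}(A,B) = \frac{1}{N}\sum_{i=1}^{N}\lVert a_i - b_i\rVert, \qquad
d_{\mathrm{flipped}}(A,B) = \frac{1}{N}\sum_{i=1}^{N}\lVert a_i - b_{N+1-i}\rVert,
\]
\[
\mathrm{MADF}(A,B) = \min\bigl(d_{\mathrm{direct}}(A,B),\ d_{\mathrm{flipped}}(A,B)\bigr).
\]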
Experiment 1: Tractography in silico
Both MLFT and iFOD2 reconstructed all the phantom branches of the noiseless DWI phantom, as shown in Fig. 2.
It can be observed that the results of MLFT follow the underlying simulated directions, whereas iFOD2 produces trajectories oscillating around the ground truth.
Experiment 2: Robustness to noise
The results of MLFT obtained for three different SNRs are presented in Fig. 3. A slight misalignment lower than 10° can be observed at the branching point at SNR = 25, which becomes more evident at SNR = 15 with values up to 30°. The case with the lowest SNR is also characterized by an increased number of branching configurations at the points where the original bundles diverge, as can be seen in the top row in Fig. 3.
Experiment 3: Tractography in vivo
The multi-level structure of the reconstructed left CST bundle can be seen in Fig. 4, which clearly shows the benefit of the proposed algorithm over conventional deterministic CSD-based tractography through the improved extent of the bundle fanning. The addition of an extra layer increases the number of streamlines reaching the motor cortex but brings only a marginal further improvement to the coverage of the motor cortex: the radial extent with 3 levels amounts to 75.66°, while the 2-level reconstruction has an extent of 71.48°. Consequently, in all of the in vivo experiments, the number of levels was set to two. The full reconstructions of the CST segmented by MLFT, iFOD2, GT and deterministic CSD-based tractography in the MASSIVE data are shown in Fig. 5. It can be observed that the pathways obtained with MLFT densely cover most of the motor cortex, unlike the results of deterministic CSD-based tractography. At the same time, both MLFT and iFOD2 cover most of the motor area (Fig. 5). In the iFOD2 reconstruction, pathways traversing into the contralateral hemisphere are present because they bend after visiting the target region, return into the white matter, and propagate through the CC.
Regarding the reconstruction achieved by GT using the MASSIVE dataset, although the CST fanning is quite sparse, it reaches most parts of the motor cortex (Fig. 5). The sparsity allows for a closer comparison of the multi-level and global tractography results, which can be seen in Fig. S3. Unlike in the case of GT, the CST reconstructed by MLFT does not reach the approximate leg-related motor area. In the face area, the pathways generated by GT are aligned with those generated by MLFT, although they do not show any branching but rather follow a smooth curving trajectory.
The CST bundles that were reconstructed for the HCP subjects by the proposed approach, iFOD2, GT and deterministic CSD-based tractography are shown in Fig. 6. An overview of the radial extents achieved by all the employed algorithms can be seen in Fig. 7. Regarding iFOD2, the results have the same characteristics as those obtained using the MASSIVE data described above. Generally, both the MLFT and iFOD2 reconstructions yield bundles with a plausible fanning extent. GT shows a lower radial extent compared to its result on the MASSIVE data.

Fig. 3 (caption): Tracts reconstructed by MLFT on the phantom data (FA map) at multiple SNR levels. Considerable angular errors are only observed at SNR = 15: an increased number of branching configurations and direction perturbations up to 30° (red arrows). At SNR = 25, there is a minor angular deviation below 10° (red arrow). Streamlines are colored using standard orientation color-coding.
Despite iFOD2 and MLFT both showing a high radial extent, in the case of iFOD2 the temporo-lateral part of the motor cortex is covered more sparsely than its superior part (Fig. 8). MLFT, in contrast, provides more uniform coverage of the motor cortex, although coverage of the superior motor cortex is still relatively denser. Given the sparse reconstruction achieved by GT, its density distribution also appears quite uneven, as can be seen in Fig. 8.
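As a rough illustration only (the paper's exact definition of the coverage density is given in the Methods and may differ), the angular coverage profile of the motor cortex shown in Fig. 8 could be approximated as a histogram of endpoint angles in the coronal projection, measured from an assumed temporo-lateral reference direction around an assumed centre point; axis ordering and both reference quantities below are assumptions.

```python
# Illustrative sketch of an angular coverage profile of the motor cortex.
import numpy as np


def angular_coverage(endpoints, centre, ref_direction, n_bins=36):
    """endpoints: (P, 3) endpoint coords; centre: (3,) assumed rotation centre;
    ref_direction: (3,) direction of the temporo-lateral end (angle = 0)."""
    # Work in the coronal projection (drop the anterior-posterior axis, index 1 here).
    pts = np.delete(np.asarray(endpoints, float) - np.asarray(centre, float), 1, axis=1)
    ref = np.delete(np.asarray(ref_direction, float), 1)
    # Angle of each endpoint relative to the reference direction, in degrees.
    ang = np.arctan2(pts[:, 1], pts[:, 0]) - np.arctan2(ref[1], ref[0])
    ang = np.degrees(np.mod(ang, 2 * np.pi))
    # Normalized histogram of angles approximates the coverage density profile.
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, 360), density=True)
    return hist, edges
```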
Statistical testing of the radial extents showed no statistically significant difference between MLFT and iFOD2 (p = 0.21 for the left and p = 0.53 for the right hemisphere), and no significant difference between GT and CSD-based tractography (p = 1 for the left and p = 0.06 for the right hemisphere). All other pairwise comparisons with the Wilcoxon test resulted in p values below the significance level.
The CST and CC bundles reconstructed for a subject from the HCP data are depicted in Fig. S4 for comparison. In the axial view, it is clearly visible that part of the CST fanning does not overlap with the CC pathways, as the CC does not cover the lateral part of the motor cortex.
Experiment 4: Topographic organization
The TPI scores are reported in Table 1. Deterministic CSD-based tractography outperforms the other algorithms, showing lower values of the TPI metric, and thus better-preserved topography, for every subject. Both MLFT and CSD-based tractography achieved TPI scores that are significantly different from the scores of both iFOD2 and GT, with p < 0.001. Despite rather close mean scores (0.03 vs. 0.06 for the left and 0.03 vs. 0.05 for the right hemisphere for the CSD-based and MLFT reconstructions, respectively), the TPI scores of MLFT and the CSD-based reconstructions are significantly different (p < 0.001). MLFT thus achieves TPI scores comparable to those of CSD-based tractography, and consistently lower than the scores of the iFOD2 and GT reconstructions. iFOD2 and GT show comparable performance to each other, without a statistically significant difference (p = 0.17 for the left and p = 0.21 for the right hemisphere). Figure 9 shows the pathways color-coded according to their final locations in the motor cortex. The visualization demonstrates that MLFT maintains the anatomical configuration of the pathways: the organization of tracts connecting specific sub-domains of the motor cortex is maintained throughout the bundle. In contrast, the bundle produced by iFOD2 appears less organized.
Experiment 5: Anatomical plausibility
The normalized histograms of the MADF distance are shown in Fig. 10. The distributions are similar across subjects for each tractography approach and show that the distance between the closest pathways obtained by MLFT is generally smaller than that of iFOD2. The results of GT showed the highest distances, which is attributed to the sparsity of the bundles. The CSD-based reconstructions also appear to be very similar geometrically according to the MADF distance, with the peak of the distribution lying very close to zero for most of the subjects.
Discussion and conclusion
In this study, we presented MLFT, a novel strategy to enhance fiber tractography by reconstructing branching configurations. The strategy we propose achieves anatomically plausible reconstructions of the CST bundles, is robust and reproducible, and maintains topographic organization. Each iteration of the proposed tractography algorithm attempts to branch existing streamlines towards the target region, which may open up new avenues for investigating more complex pathway configurations in the brain [49]. Given that image resolution is usually not sufficient to distinguish branching points, some of the FOD peaks might not only be an indication of crossing fibers, but also of branching ones.
Given the improved extent of the bundles and anatomically imposed control over false positives, our approach is attractive for a number of applications. It can be used to support pre-surgical planning, as it reveals more extensive coverage of the motor cortex than the conventional deterministic CSD-based tractography [50,51], while maintaining clear structure of the reconstructed bundles.
MLFT features
With simulations, we have shown that MLFT can reliably reconstruct branching fiber configurations that are less tortuous compared to a probabilistic algorithm (Fig. 2). Additionally, the results of MLFT are reproducible. Although higher tortuosity of the probabilistic tractography reconstruction is expected behavior, the overcrowded reconstruction makes it more challenging to spot spurious pathways. This can also be connected to the ability of the tractography algorithms to maintain topographic organization, which is relevant in applications involving brain stimulation methods, such as transcranial magnetic stimulation or direct electric stimulation.

Robustness to noise is another important aspect to consider. In order to analyze the sensitivity of our algorithm to noise, the same phantom bundles were simulated with three different SNR levels. The effect of SNR on the reconstructed pathways became clearly visible only at the lowest SNR level (SNR = 15), as reflected by an increased number of branching configurations and occasional perturbations after branching (Fig. 3).

Fig. 7 (caption): Radial extents of the reconstructed CST bundles for both left and right hemispheres. MLFT (blue) is shown to improve the radial extent compared to the conventional deterministic CSD-based tractography (green). iFOD2 (orange) and MLFT (blue) appear to have comparable radial extents. GT (red) achieves a high radial extent on the MASSIVE dataset, while on the HCP data the extent is primarily low.

Fig. 8 (caption): Density of the motor cortex coverage by the reconstructed CST bundles, with the angular coordinate starting at the temporo-lateral point of the coronal projection of the motor cortex and increasing towards the superior motor cortex, separately for each hemisphere. All the algorithms appear to densely cover the superior part of the motor cortex. However, MLFT (blue) consistently covers the most lateral part of the motor cortex, with its density more evenly distributed compared to iFOD2 (orange). GT (red) also occasionally covers the temporo-lateral motor cortex, although the coverage is very sparse. CSD-based tractography (green) primarily covers the superior part of the motor cortex in all subjects.

Table 1 (caption): TPI scores of the left and right CST reconstructions by MLFT, iFOD2 and GT, and also the TPI score of the first level of MLFT only, which is reconstructed by deterministic CSD-based tractography (the lowest score is indicated in bold). According to the TPI values, the CSD-based reconstruction of both CST branches has the best-preserved topography. The scores of MLFT and CSD are comparable and consistently low, in contrast to iFOD2 and GT.
When using the concept of branching for in vivo brain tractography, a well-delineated fanning was observed close to the motor cortex (Fig. 4). Although MLFT and iFOD2 achieved comparable reconstructions of the CST fanning (Fig. 7) without a statistically significant difference, the iFOD2 reconstruction contains multiple spurious tracts (Figs. 5, 6). Apart from that, the MLFT reconstructions show more uniform coverage of the lateral part of the motor cortex, while iFOD2 provides much denser coverage of the superior motor cortex and only sparse coverage of the lateral part (Fig. 8).
Most of the fanning consists of second-level branches, which might often look as if they diverge into another bundle at the branching points by making a sharp turn. Notably, similarly high angular deviations have been observed by Van Wedeen et al. [52]. Similarly, Mortazavi et al. [15] observed axon T-branching as well as sharp turns at the sub-millimeter scale when performing tract-tracing experiments in the area under the motor cortex. Both of those papers present results based on the analysis of the macaque brain, but the statements are likely also valid for the human brain, whose structure is reportedly congruent with that of the macaque brain [15], although it is difficult to provide estimates on the distribution of this type of branching in human CST dissections. Additionally, certain cases can be considered branching from a modeling point of view, given the resolution. For instance, in the case of the CST, the fibers originating in the cortex descend into the trunk of the bundle; at this point, they pass through a "bottleneck" (at the sulcus circularis insulae) and merge [53]. As was presented in [54], up to 7 bundles appear to co-exist in a single voxel in the mentioned area, which would also suggest that some of the peaks may indicate a splitting of a bundle or an overlap of two bundles. Additionally, the angle between the CST trunk and the fanning close to the "bottleneck" area appears to be around 90° (Fig. 5 in [53]), which is usually absent from reconstructions because the angular deviation threshold prevents propagation from making such sharp turns. For this reason, probabilistic approaches struggle to reconstruct the inferior lateral part of the CST without smoothing the angle between the trunk and the fanning of the CST (Fig. 8). However, setting a threshold as high as nearly 90° for probabilistic tractography would flood the result with false positives by allowing sampling away from the FOD extrema.
The validity of the MLFT reconstructions can also be evaluated with Fig. S4. It is known that part of the CC originates from the motor cortex [29,55]. Thus, a successful reconstruction of the motor part of the CC remains prone to ambiguity, as the CST pathways are present in that area as well. The similarity of the shapes of the MLFT-reconstructed bundles to those presented by Wasserthal et al. [56] provides additional confidence in the plausibility of the results obtained by MLFT. Additionally, the comparison to the results of GT (Fig. S3) has shown that this alternative approach reconstructs similar pathways, although with a certain smoothing of the high-angle bifurcations observed in the MLFT results. In general, the resemblance between the second level of the CST and the CC bundle can be explained by the co-alignment of the pathways of different bundles near the motor region, reported by W. Krieg for the macaque brain and for the human brain [57]. This does not necessarily demonstrate that these similar pathways are true positives, but it serves as a reference showing stable delineation of certain structures across algorithms.
Specific topographic organization is a characteristic of a number of brain fiber bundles [23]. Somatotopic organization of the CST [24,45] is one of the established examples of known internal bundle organization. Similarly to [22], we evaluated the ability of the algorithms to maintain topographic organization using the TPI score. According to the observed results, MLFT can preserve the topographic organization of the fiber bundles, as can be seen in Fig. 9. This is also reflected by the TPI scores (Table 1) across all the subjects analyzed in this study. The fact that topography preservation of MLFT is hampered compared to deterministic CSD-based tractography might be a consequence of either obvious false-positive streamlines or precision errors: in some cases, first-level pathways terminate close to the target region and then branch at acute angles to reach it, thus changing the expected endpoint location. By following the FOD peaks, we propagate the streamlines along the most reliable fiber orientation and, consequently, are less affected by noise. This leads to more stable pathway propagation and, in turn, to a more anatomically reliable organization of the bundle. This is also supported by the higher pathway coherence of CSD-based tractography and MLFT compared to iFOD2 and GT (Fig. 10).
Limitations
Some degree of uncertainty propagates into the results from the CSD procedure, as the response function is not voxel-wise perfect and FOD peaks have limited angular resolution. This limitation is, however, inherent to most tractography algorithms. Further, branching along a pathway might generate false-positive reconstructions. In our current implementation, correctly chosen anatomical priors are key to controlling the rate of false-positive pathways.
As revealed by the experiments with the phantom (Fig. 2), deterministic reconstruction from a single point may generate a whole dense branch, leading to an unrealistic density distribution. This results from the current implementation choice of not imposing an upper limit on the number of times a streamline is allowed to branch. We believe that this aspect could be improved in future work, for example, with a microstructure-informed extension of our framework.
Methodological considerations
Given the results of the three-level reconstruction (Fig. 4), it seems that increasing the number of reconstructed levels requires an increasingly accurate delineation of the target region. While error propagation across multiple levels may lead to spurious results, the definition of the seed region also seems to play a key role in the robustness of the reconstruction. In both the HCP and MASSIVE datasets, the seed regions were placed based on specific landmarks (brain stem and internal capsule). Incomplete segmentation of those regions would probably lead to a reduced density of fiber pathways and, as a consequence, reduced quality of the reconstruction.
In this work, the analysis was fully focused on the CST bundle, given the well-defined anatomical landmarks that can be associated with the target and seed regions. Consequently, spurious pathways are often easy to detect, which may not be the case for other bundles. To test the generalizability of MLFT, a reconstruction of the cingulum was performed with the MASSIVE dataset (Fig. S6). Despite the reconstruction being visually similar to the reference bundle from the ISMRM 2015 challenge, certain pathways may well be spurious. For that reason, the MLFT reconstruction should be treated as guidance, unless the target region is defined based on functional data. In any case, general prior knowledge of the anatomical configuration of the fiber bundle of interest is required to disambiguate interdigitating from branching pathways.
We would also like to stress that this approach in its current form is only suitable for bundle-specific applications or other cases that aim to investigate connections between two specific regions. As a consequence, it cannot currently be combined with algorithms for whole-brain reconstruction.
Future work
Although in this work we integrated the MLFT framework with deterministic tractography, it can also be implemented for probabilistic tractography. In some probabilistic methods such as iFOD2, for example, new directions are sampled at each propagation step from the fiber orientation distribution (concentrated around the peaks), considering only peaks with an angular deviation below the pre-defined threshold (Fig. S5 in Online Resource). In this context, MLFT could sample the propagation direction from the part of the distribution outside the area conforming to the angular threshold when branching into the second level (Fig. S5 in Online Resource).
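As a conceptual sketch of this idea (an assumption, not the iFOD2 or MLFT implementation), a branching direction could be drawn from the portion of the FOD lying beyond the angular threshold by weighted sampling over candidate directions; `fod_amplitude` is a placeholder for an FOD evaluator.

```python
# Conceptual sketch: sample a branching direction from the part of the FOD
# outside the angular threshold relative to the current propagation direction.
import numpy as np


def sample_branch_direction(fod_amplitude, current_dir, angle_thresh_deg=45.0,
                            n_candidates=2000, rng=None):
    rng = rng or np.random.default_rng()
    current_dir = np.asarray(current_dir, dtype=float)
    # Draw candidate unit directions uniformly on the sphere.
    cand = rng.normal(size=(n_candidates, 3))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    # Keep only candidates deviating MORE than the threshold from the current direction.
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    cand = cand[cand @ current_dir < cos_thresh]
    if len(cand) == 0:
        return None
    # Sample proportionally to the FOD amplitude over the remaining candidates.
    w = np.maximum(np.array([fod_amplitude(d) for d in cand]), 0.0)
    if w.sum() == 0:
        return None
    return cand[rng.choice(len(cand), p=w / w.sum())]
```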
MLFT has shown promising results in healthy controls, but it remains unclear whether its performance will be maintained in the presence of pathology, especially with routinely acquired clinical data. Thus, evaluation of the algorithm in a clinical setting would also be beneficial.
It must be noted that in this work, we did not focus on devising an approach for estimating the required number of levels based on convergence criteria, which might be useful for clinical translation. For this study, those settings were identified empirically. Experiments were performed with up to 3 levels; however, little change was observed between the results with 2 and 3 levels. One possible future direction of this work could be to introduce microstructural information in analogy to the dynamic seeding approach [58], which may allow for the automatic estimation of the number of levels and could facilitate the identification of valid branches.
"year": 2022,
"sha1": "94a042041661bd5ae244cc7b0505b9d03de3ba7e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10334-022-01033-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9298e973741582a73962c9e60929f6ef1ce64db3",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.