Citizen scientists as data controllers: Data protection and ethics challenges of distributed science

Abstract

Citizen science is a rapidly expanding approach to knowledge production that increasingly involves the collection of personal data in various forms. This processing of personal data invokes relevant data protection laws and, specifically, the designation of data controller, the person(s) or organisation(s) who determine if and how personal data is to be processed and who are hence charged with the legal responsibility for compliance with the General Data Protection Regulation (GDPR). Traditionally, in the context of research, professional researchers would be designated controllers, and research participants whose data was processed would be "data subjects" and hence enjoy the GDPR's protections. Yet citizen scientists adopt a dual role, acting both as participants and as researchers. This paper maps the implications this dual role has from the perspective of data protection law and research ethics. We explain how the data protection concept of controller has been interpreted very broadly. As a result, in their dual role, citizen scientists can be both data subjects entitled to protection and data controllers, sometimes of their own data, tasked with data protection compliance obligations. If citizen scientists share the objectives of research projects they participate in or co-shape those objectives, it is likely that they – together with the professional researchers – will be considered controllers, and held responsible for the processing of personal data in compliance with the GDPR. The paper discusses how this can affect both the quality of protections provided to participants (including participant-researchers), thus undermining the fundamental goal of research ethics generally, as well as the practice of citizen science itself. We analyse this question of citizen scientists as data controllers as both a matter of law and of research ethics. We conclude with policy recommendations that can be applied both on the level of data protection law (to reconsider how the role of controller is assigned) and in research ethics guidelines, which should take a nuanced approach to the circumstances of assignment of the status of data controller in citizen science projects as an important step toward responsible and ethical participatory research.

Introduction

This paper explores an intersection of citizen science, a relatively new and rapidly expanding approach to generating scientific knowledge, and data protection law, to examine the implications of this law for citizen science when the knowledge generation process involves personal data. The particular focus of the analysis lies on the concept of controller. Controller is a crucial concept in the basic mechanics of European data protection law. This term refers to the individuals or organisations who determine if, why and how personal data is collected and used. Under the General Data Protection Regulation ('the GDPR'), the data protection principles and data subject rights are effectuated by the corresponding obligations of controllers. 1 A controller has influence over the purposes and means of data processing, unlike a processor, which does not have data processing purposes of its own but acts on instructions. Therefore, a controller is designated to bear the principal load of data protection compliance. 2 For the GDPR to work in practice, there must be at least one controller so that sanctions can always be imposed and remedies sought. 3
Therefore, the concept of controller has been interpreted broadly by the EU Court of Justice in Luxembourg. The purpose of the broad interpretation was to ensure "complete and effective protection" of data subjects. 4 Yet data protection scholarship has criticized the resulting meaning of controllership as too expansive. 5 That is, in the effort to ensure that data subjects are absolutely protected, the (reading of the) GDPR may be over-inclusive in defining the parties who bear the responsibility of controller. This paper subjects the concept of controller to the stress-test of "distributed science". We apply the current broad interpretation of controller in the context of citizen science projects that involve the processing of personal data. We conclude that under some circumstances citizen scientists will likely be regarded as joint controllers together with the researchers who may be leading a project, as well as controllers of their own personal data and that of each other. This may serve to weaken the protection of the citizen scientists as data subjects and thus reaffirms the already voiced concerns about the current case law of the EUCJ on the concept of controller. Furthermore, this broad understanding of "controller" also leads to an outcome that is at odds with the fundamental principles of conducting ethical research, which hold researchers responsible for the protection of participants. Instead, citizen scientists, who play a double role of "scientists" and research participants, are now charged with responsibility for their own (data) protection under the GDPR. Not only does this lead to a peculiar and counterintuitive outcome. It also yields a concerning result should lay "scientists" fail to comply with the GDPR and be subject to sanctions that are difficult to bear for lay individuals rather than research institutions. While the research ethics literature has addressed some issues of power imbalance and information asymmetries in the context of citizen science, the issues of responsibility for data protection compliance have remained unexplored. This paper aims to address this gap. Among others, we identify three issues that emerge at the point of contact of data protection law and citizen science. First, there is a concern of harm to citizen scientists as research participants. Second, there is a problem of responsibility for harm not corresponding to the actual control over harm. Third, there is a concern of exclusion of the underprivileged from participatory science.

To do this, we first present some background on citizen science, its benefits, contexts of use and various configurations. We then explain the meaning of controller in data protection law, including the authoritative interpretation by the Article 29 Working Party and the European Data Protection Board, the advisory bodies in EU data protection under, respectively, the 1995 Data Protection Directive and the GDPR, as well as the binding case law of the European Court of Justice. The analysis further proceeds to sketch the implications of the broad interpretation of controller for citizen science from the perspective of data protection law and research ethics. The paper concludes with a summary of findings and some policy recommendations.

Background: citizen science

The threshold for generating scientific knowledge is in many ways lower now than it has ever been. 6
The proliferation of digital technologies and mobile devices, as well as the availability of information on the internet, has contributed to the expanded reach of laypersons engaging in the production of science. 7 At its core, citizen science, as a form of participatory research, democratizes the practice of knowledge production 8 and engages lay people who have not undergone the traditional training in the scientific method to discover or produce science, thus taking it out of the exclusive domain of professional scientists. While citizen science made its early inroads largely in the area of environmental phenomena, 9 the scope of participatory science projects by nonexperts has expanded significantly. The prevalence of digital technologies has allowed for broader participation in the "self-quantification" phenomenon, in which individuals can keep track of information pertaining to themselves. This not only allows for tracking data relevant to one's own health, but also enables laypersons to aggregate their personal information to generate new insights pertaining to relevant communities, locations, health status, and effects, to answer questions that they consider to be important. This has led to an increase in participatory science in such fields as epidemiology, genetics and genomics, and specific disease-related research. Citizen science is practiced in other contexts, too, such as environmental activism. 12 Citizen science can take a range of forms, including bottom-up projects, with questions, goals, and methods originating from the community of laypersons, as well as research institution-led initiatives that recruit lay participants to collect or contribute data, with numerous variations along this spectrum. A study could involve any of these forms and seek to collect a variety of personal data, such as citizen scientists' reports of their experiences and observations, collected personally as well as facilitated by technology such as sensors; genetic information; bio-samples; and results of various types of bio-tracking, e.g. heart rate, weight, pulse, and other health or fitness indicators. There is recognized value in these kinds of participatory research that extends beyond the political consideration of democratization. In principle, citizen science can enhance the scientific enterprise by accessing information that may be difficult for traditional research institutions to obtain because of access, lack of priority, or lack of funding. Thus, the conduct of research by nonprofessionals can serve important ends. 13 Issues of responsibility in the conduct of research are complex. A completely bottom-up research project would arguably locate responsibility for risks with the lay researchers, since there is no outside institution or other actor to whom responsibility could be ascribed. However, once the form starts to move further along the spectrum to involve a research institution, the assignment of responsibility is arguably less clear, absent specific agreement. 14
That is, where a research institution or professional researchers collaborate with laypersons to develop the purpose or means, or determine access to the data collected, these actors all operate in a "directive" or "responsible" capacity. It is precisely this shared decision-making, which characterizes much of participatory research and is heralded as among its benefits, that triggers shared responsibility, including responsibility for the handling of any personal data that might be collected. This raises the question of whether a citizen scientist, in addition to being a participant, is also a controller within the meaning of the GDPR.

The meaning of controller under the GDPR

Data protection law knows three key actors: the data subject, a living natural person to whom personal data relates, who would potentially suffer injury should data protection law be violated, and who thus enjoys data protection rights; the data controller, a natural or legal person who alone or jointly with others determines the means and purposes of data processing and carries the main load of the data protection obligations; 15 and the processor, a natural or legal person who processes personal data on behalf of a controller. 16 Traditionally, the stakes in establishing the status of a controller or processor are high, since the status of a controller comes with the data protection obligations, and the boundaries between the two are blurry. The sections that follow review how the concept of controller is understood in the authoritative opinion of the Article 29 Working Party and in the case law of the EU Court of Justice, before and after its judgment in Fashion ID.

Article 29 Working Party and European Data Protection Board guidelines

The Article 29 Working Party, the former EU advisory body under the 1995 Data Protection Directive, produced guidelines for determining the status of controller and processor (WP169). 17 While not formally binding, the guidelines in practice bear undeniable persuasive authority and were a primary reference point for compliance with data protection law. WP169 retained its significance after the GDPR came into effect in place of the 1995 Directive, since the definitions of controller and processor did not undergo any significant changes between the two legislative instruments. According to WP169, the dividing line between controller and processor lies along the factual influence over the purposes and means of processing, arising out of explicit or implicit legal competence (e.g. a competence conferred by a statute vs a competence necessary to fulfil explicit authority, but not explicitly named), contractual arrangements, but also, and importantly, out of other circumstances determining the factual ability to determine the purposes and means of processing, even when these factual circumstances contradict the statutory or contractual arrangements. 18 The European Data Protection Board, which replaced the WP29 after the GDPR came into effect, issued its own guidelines on the concepts of controller and processor under the GDPR. 19 The guidelines – although they update WP169 – do not deviate from the WP29 opinion in the main lines of interpretation, emphasising the importance of the factual influence over the purposes and means of processing. 20

The EU Court of Justice case law pre-Fashion ID

The EUCJ's line of case law on controllership started in 2014 with its decision in Google Spain. 21
The question was if a search engine operator should be considered a controller with regard to personal data published on third-party websites and processed in the context of the activity of a search engine. The Court ruled that the search engine operator determines the purposes and means of data processing in the context of that activity 22 and thus is a controller. To rule otherwise on the ground that the search engine operator does not exercise control over personal data published on the websites of third parties would be contrary to the objective of the relevant provision of the Directive "to ensure, through a broad definition of the concept of 'controller', effective and complete protection of data subjects." 23 A decisive criterion determining the status of a controller was that the data processing carried out in the context of its activity "can be distinguished from and is additional to that carried out by publishers of websites," 24 as the activity of search engines plays "a decisive role in the overall dissemination of those data" in that it makes the data searchable to each user. 25 The main Google Spain legacy relevant for the concept of controller is that the concept ought to be interpreted broadly, in light of the objective of data protection law to ensure effective and complete protection of data subjects.

In the 2018 Wirtschaftsakademie judgment, the Court continued developing its case law on controllership. Wirtschaftsakademie offered some educational services via its fan page set up on Facebook. It was established that, as part of the non-negotiable conditions of use set by Facebook, administrators of fan pages receive anonymous statistical information on the page visitors, collected by means of cookies installed on visitors' devices and containing a unique user code, making the data processed personal (the Facebook Insights function). 26 The page visitors were not notified of the placement and functioning of the cookie and the subsequent data processing, which was in violation of the data protection rules. 27 The national courts went back and forth between ruling Wirtschaftsakademie a controller jointly with Facebook, or Facebook alone, 28 since the latter "alone decided on the purpose and means of collecting and processing personal data used for the Facebook Insights function" while the former only received anonymous statistics. 29 The Court ruled in favour of considering Wirtschaftsakademie a controller, jointly with Facebook. The Court reaffirmed the purpose of data protection law "to ensure a high level of protection of the fundamental rights and freedoms of natural persons" 30 and cited the need to interpret the meaning of controller broadly in view of the goal of the definition to ensure effective and complete protection of the data subjects. 31
The Court acknowledged that while Facebook is indeed the actor "primarily determining the purposes and means of processing", 32 using it to serve its system of advertising, 33 Wirtschaftsakademie itself "must be regarded as taking part" in determining the purposes and means of data processing, and hence is a joint controller. 34 This follows from the examination of the contribution of Wirtschaftsakademie "to determining, jointly with Facebook ..., the purposes and means of processing". 35 Any administrator of a fan page on Facebook concludes a contract with Facebook and thereby subscribes to the conditions of use, including the cookie policy. 36 By creating a fan page, its administrator enables Facebook to install cookies on the devices of the page visitors, including those without a Facebook account. 37 When setting up a fan page, its administrator, for its own objectives of managing and promoting its activities, can set parameters determining the production of statistics, 38 e.g. request demographic and other data of its target audience. 39 Finally, while the administrator has no access to the personal data collected by Facebook and only receives anonymised audience statistics, one does not have to have access to the personal data in order to be a controller. 40 Importantly, the Court – for the first time – brought up the issue of the distribution of responsibility between joint controllers. It ruled that responsibility between joint controllers does not have to be equal, but needs to be assessed on a case-by-case basis, 41 since joint controllers may be involved at different stages of processing and to different degrees. 42 This issue played an important role in its subsequent jurisprudence.

In the same year the Court had to deal with another case regarding the definition of controller. The relevant dispute in the Tietosuojavaltuutettu v Jehovan todistajat case concerned, among others, whether or not the Jehovah's Witnesses Community (Jehovan todistajat), even though it had no access to the relevant data, should be regarded as a joint controller along with its members who, in the course of their door-to-door preaching, made notes containing names, addresses and other personal data relating to the people they visited. The Court answered in the affirmative. It reaffirmed that, in view of the objective to provide effective and complete protection of the data subjects, the meaning of controller should be construed broadly, and that joint responsibility does not mean equal responsibility. 43 Similarly to the WP29 position, the Court noted that the determination of the purposes of processing does not have to take the form of written guidelines or instructions. 44 The Court restated that "[t]he joint responsibility … does not require each of [the multiple controllers] to have access to data". 45 According to the Court, while the Jehovan todistajat members, and not the Jehovan todistajat itself, decide if and when they collect the data, the preaching is "organised, coordinated and encouraged" by the Community. 46 The data collected serves as a memory aid for further preaching. The community members engage in preaching for the purposes of the Jehovan todistajat. The Jehovan todistajat is also generally aware of the data processing taking place. It organizes and coordinates the preaching, 47 and hence "encourages its members who engage in preaching to carry out data processing". 48
Thus, "by organising, coordinating and encouraging the preaching activities of its members, … [the Community] participates, jointly with its members … in determining the purposes and means of processing" 49 and should be considered a controller. 50 The resulting approach of the Court to understanding controllership has been described as "sweeping", 51 potentially making everyone a controller, 52 and thus laden with undesirable consequences, 53 including the "actual impossibility for a potential joint controller to comply with valid legislation". 54

Fashion ID

In Fashion ID, the latest occasion on which the Court dealt with the meaning of controllership, the Court tempered its broad approach somewhat. The case is significant for the issue of the degree of responsibility of joint controllers, first raised in Wirtschaftsakademie. It involved a clothing retailer who placed the Facebook "like" button on its website, resulting in personal data of the website visitors being transmitted to Facebook. 55 The question was if the operator of a website that embeds a social plugin causing the personal data of the visitor to be transmitted to e.g. Facebook is a controller, even though this operator is unable to influence the processing of the data transmitted to that provider as a result. 56 The Court again answered affirmatively. The Court restated the existing case law on controllership. 57 It reaffirmed that multiple controllers can be involved in different stages of processing and to different degrees, and hence that joint responsibility does not mean equal responsibility, and that the level of liability of each controller has to be assessed in light of all the circumstances of each case. 58 The case's significance lies in how it developed the latter point.

The Court appears to have seen the problems with the broad meaning of controller that resulted from its previous case law and pursued the path of narrowing it down, as laid out in the opinion of AG Bobek. The Court noted that the meaning of data processing includes a variety of operations, 59 and that a processing instance may consist in one or a number of operations, relating to one of the different processing stages. 60
An actor "may be a controller, … jointly with others only in respect of operations … for which it determines jointly the purposes and means. By contrast, … that natural or legal person cannot be considered to be a controller … in the context of operations that precede or are subsequent in the overall chain of processing for which that person does not determine either the purposes or the means." 61 In the case at hand, the Court considered that Fashion ID was only able to jointly determine the means and purposes of processing for the stage of collection and disclosure by transmission of the personal data of visitors to its website, and not for the processing by Facebook that occurred later. Hence, while Fashion ID is certainly a joint controller for the operation of data transfer, it cannot be considered to be a controller in respect of the subsequent operations, such as processing by Facebook for the purposes of advertising. 62 This "chain of processing" or "processing stages" approach to joint controllership has narrowed down the application of the concept of controller in the context of complex data processing involving multiple actors, such as social networks and digital service providers. Yet it has already received criticism for "creating more problems than it solves" 63 by "losing sight of the bigger picture", in particular, of "the societal risks posed by complex, networked, personal data processing systems such as … Facebook." 64 Indeed, the risks of data processing in such systems are more than the sum of the risks of the individual stages of processing, yet the responsibilities of (joint) controllers, among others to inform about those risks, as well as to provide for data subjects' protection, are reduced to the latter. 65

Citizen scientists as controllers (of their own data)

How is the role of a controller – given the current state of law – assigned in the context of citizen science, and in particular, what role does a citizen scientist have? As discussed earlier, citizen science research can take a number of configurations, ranging from absolutely centralized to absolutely decentralized, with many degrees of decentralization in between. In the scenario of absolute centralization, the "professional scientists" lead and the citizen scientists follow their instructions and have no influence on the course of a study, including the purposes and means of (personal) data processing. The opposite scenario is one of absolute decentralization, where citizen scientists are the true drivers of research, determining among others the purposes and means of data processing, and the professional scientists are not involved. Based on the current state of the law on the concept of controller, citizen scientists will likely be considered controllers in all these scenarios, although the range of stages of processing for which they are responsible may differ and be more limited in some circumstances. This being said, the concrete outcomes will depend on the circumstances of each particular case.

In all contexts, the citizen scientists will be considered joint controllers for the data processing in the entire project if, as commonly practiced and required by the standards of ethical research with human participants in terms of the responsibility of researchers for protections, 66 at the stage of being recruited they are informed about the purposes of the project and data processing and agree with them, similar to a Facebook page administrator who subscribes to Facebook's conditions of use, including the cookie policy. 67
In decentralized distributed science projects, citizen scientists are by design given a real role and influence over the project design, for instance when research is closely linked to their interests and living environments, e.g. research into pollution, and can benefit from the citizen scientists' knowledge of the situation. In this case, citizen scientists are given influence over the purposes and sometimes the means of processing personal data. For instance, they may co-determine what types of data will be collected, and participate in discussions about and co-steer the purposes of data processing.

Moreover, as was the case with the search engine provider in Google Spain, the Facebook page administrators in Wirtschaftsakademie, and the administrators of a website with a Facebook "like" button in Fashion ID, when joining distributed science projects citizen scientists will often have their own purposes different from those of the professional scientists, e.g. using gathered data to understand a phenomenon of relevance in their personal lives, like environmental conditions or online tracking, 68 to support their position or defend their interests in their relations with public authorities, 69 and others. Even when a project is to a larger or lesser degree coordinated and steered by a professional scientist, and the factual influence over the purposes of data processing is varying but present, they will likely be joint controllers together with the coordinating professional scientists (if involved) and their fellow citizen scientists.

Even in the case of an absolutely centralized citizen science project, it is fairly certain that the professional scientists who control the project, including determining the purposes and means of processing personal data, will under some circumstances not be the only controllers, and citizen scientists will sometimes be considered joint controllers too. This will be the case if they have their own purposes served by processing personal data in the project as described earlier, e.g. investigating and documenting pollution and its impact, e.g. on the health of the citizen scientists themselves, as well as of other community members. This will also most likely be the case where citizen science is a form of citizen activism, and its results serve the purposes of civil initiatives pursued by the citizen scientists.

Even more so, in all these scenarios, where the personal data processed is their own, citizen scientists will be data subjects and (joint) controllers of their own data at the same time. While this may seem counterintuitive, there is nothing in the GDPR that explicitly prevents data subjects from being controllers with regard to their own data. The former Article 29 Working Party has alluded to the possibility of data subjects being controllers with respect to their own data in the context of mobile health apps, 70 and the French data protection authority CNIL has explicitly recognized such a possibility in the context of blockchain. 71 The ethical implications of a citizen scientist being a data subject and controller with regard to his or her data will be explored further in the paper. Here it suffices to say that this state of affairs may be morally quite controversial, as the aspiration behind data protection law is to protect the data subject from harm. The dual role of participant and researcher shifts the responsibility for protections to the person to be protected.
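How the per-operation logic of the case law could distribute controllership across such a project can be made concrete with a small sketch. The following Python toy model is purely illustrative – the project, the actors and the stage assignments are assumptions of ours, not a prediction of how a court or data protection authority would rule – and it encodes only the rule, discussed above, that an actor is a (joint) controller solely for those operations whose purposes and means it (co-)determines.

    # Toy model of the per-operation ("stages of processing") controllership
    # test from Fashion ID. All names and stage assignments are hypothetical.
    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class Operation:
        """One stage in the overall chain of processing."""
        name: str
        # Actors who factually (co-)determine the purposes and means of this stage.
        determiners: Set[str] = field(default_factory=set)

    def controllers_for(op: Operation) -> Set[str]:
        # On the per-operation reading, (joint) controllership attaches only
        # to the stages whose purposes and means the actor co-determines.
        return op.determiners

    # Hypothetical pollution-monitoring project: citizen scientists co-shape
    # collection, transmission and analysis, but the institution alone
    # determines the archiving of raw data required for scientific integrity.
    chain: List[Operation] = [
        Operation("collection of sensor readings", {"citizen scientist", "institution"}),
        Operation("transmission to the institution", {"citizen scientist", "institution"}),
        Operation("analysis for the research purpose", {"citizen scientist", "institution"}),
        Operation("archiving of raw data for verification", {"institution"}),
    ]

    for op in chain:
        print(f"{op.name}: controllers = {sorted(controllers_for(op))}")
    # On this toy reading, the citizen scientist is a joint controller for the
    # first three stages, but not for archiving, whose purpose they did not set.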
Following the "stages of processing" approach in Fashion ID, the degree of responsibility of citizen scientists may be limited in a few cases. For instance, in cases where professional researchers are subject to obligations not applicable to citizen scientists, such as archiving raw research data to enable verification of study results and ensure scientific integrity, 72 and process personal data for these purposes, the citizen scientists have no influence over such processing, which has to take place regardless of their wishes, and thus will likely not be considered controllers for this processing. In the rare cases of fully centralized citizen science projects where the involved citizen scientists do not have any influence over the purposes and means of data processing within the project, e.g. they were not informed of the project purposes and have no interest in the study outcomes, they will likely be considered controllers only for the stage of transferring personal data to the professional researchers, as they enabled the data transfer to the professional researchers. 73 They will be fully responsible for the data processing in the project when their interests align with the purposes of the project, as described above.

Finally, the so-called "household exemption" under Art. 2 GDPR, which often exempts from the GDPR data processing "by a natural person in the course of a purely personal or household activity", will not apply here and creates no exceptions for the citizen scientists. This is because, to qualify for such an exemption, the processing must be carried out "in the course of private or family life of individuals," e.g. the data must not be shared with an indefinite number of people, 74 and the processing must not be "directed outwards from the private setting of the person processing the data". 75 Data processing by the citizen scientists meets neither requirement, as participating in research projects does not fall within their private or family life, and takes place outside of private settings.

Data protection implications

The major implication of assigning the role of controller to the citizen scientists is that they will bear responsibility for compliance with data protection law, including respecting data protection rights and bearing data protection sanctions, jointly with the professional researchers, if they are involved, and with their fellow citizen scientists. This creates serious difficulties, both from the perspective of the practicality of compliance and from the perspective of the protection of a citizen scientist as a data subject.

Among others, in his capacity of controller, a citizen scientist is expected to determine an appropriate legal ground for data processing (e.g. whether a legitimate interest would be appropriate or whether the affected rights and interests of the data subjects outweigh it), and to carry out complex balancing exercises and make normative calls to establish, e.g., whether the data processing is fair, lawful, and proportionate to what is necessary for the purpose of processing, and whether that purpose is legitimate. The citizen scientists and professional researchers as joint controllers have to agree on and inform the data subjects of the division of their respective compliance responsibilities, in particular as regards the data subject information rights, and of their roles and relationships as joint controllers vis-à-vis the data subjects. 76
These are not easy tasks, particularly from the perspective of the citizen scientists. While the professional researchers – although often ignorant of their data protection responsibilities – are expected to be aware of this aspect of their profession, especially when personal data processing is a core part of their research, this is less so for the citizen scientists. Indeed, while acting outside of the household or domestic context, they are not professional players in the research field and lack the necessary resources, institutional support and expertise. Their knowledge of the data protection requirements, and especially of their role as a responsible controller, will more often than not depend on the information provided by the professional scientists, e.g. in the course of recruitment. Their compliance effort will be heavily defined by the support of the professional researchers. Given the complexity of this data processing context and the slow uptake of data protection expertise outside of big tech, it is highly likely that such information and support will be lacking or inadequate. The situation is further exacerbated where citizen scientists do not have access to the personal data, e.g. the personal data of their fellow research participants supplied directly to the professional researchers, and hence are even less aware of the nature of the data processing and the associated obligations, and yet are considered controllers. 77 What results is high expectations of data protection on paper, met by the reality of non-professional actors such as citizen scientists being unable to deliver on those expectations in the context of a highly complex data protection regime where compliance demands institutional effort, resources and expertise. 78

At the same time, even in cases where the professional scientists do provide the necessary information and support and take the bulk of the data protection obligations on their account, a data subject may exercise his or her rights with regard to any of the joint controllers, regardless of any agreements made. 79 Among others, a data subject has a right to claim compensation for damages suffered as a result of GDPR infringements against each and any joint controller, and one joint controller can be held liable for the entire damage, 80 with a right, upon payment of full compensation, to claim back from the other controllers their respective parts. 81 Paying such compensation for damage resulting from processing involving many fellow citizen scientists, professional scientists and research institutions, especially when the violations are serious and the damages significant, may prove impossible for a single citizen scientist, as well as disproportionate in light of the unequal relationship between the lay citizen scientists and the professional researchers. This provision was clearly written with professional controllers in mind and did not account for situations where complex distributed data processing involves controllers who are regular citizens. While data subjects wishing to claim such compensation may strategically opt to go after the professional researchers and the research institutions with deeper pockets, 82 this will provide little relief in case a citizen scientist is the only controller known to the data subject. The only exemption from liability is if the controller "proves that it is not in any way responsible for the event giving rise to the damage." 83
The extent to which this will mitigate a citizen scientist's liability depends on what "responsible for the event giving rise to the damage" means. The EUCJ case law on controllership seems to equate responsibility for processing with the status of a controller. 84 Thus, it seems impossible for a citizen scientist considered a controller with regard to a certain data processing operation to obtain an exemption from liability in case such an operation results in damages.

Finally, considering a citizen scientist as a joint controller with regard to his or her own data raises the question of whether this broad reading of the concept indeed leads to "effective and complete protection of data subjects" as intended. 85 Indeed, the status of a controller implies that the data subject carries at least some of the data protection obligations towards himself (in the case of joint control, responsibilities for the obligations, in particular to inform data subjects, should be divided between the controllers). As far as obtaining compensation for damages caused by data protection violations is concerned, citizen scientists who are controllers with regard to their own data jointly with professional scientists should still be able to obtain some compensation from the professional scientist. This is because liability under the GDPR is joint and several (Art 82(4) GDPR), i.e. it applies to the professional scientist also. Yet the liability of the latter may be limited if the professional scientist demonstrates he was not responsible for the event leading to the damage (Art 82(3) GDPR), e.g. if some of the "fault" may be attributed to the citizen scientist himself. Considering a data subject as a joint controller will thus likely enable the professional scientists to avoid some responsibility for the consequences of their professional activity, which, from the perspective of the ethical standards of research, should be their concern, as discussed next.

Implications for ethical research

This section explores the problems of research ethics that occur as a result of the broad approach to the notion of controller in the context of participatory science. Traditionally, research ethics is based on a clear distinction between a researcher and a research participant. Research ethics has as its main focus the protection of research participants from risks of harm, and charges the researcher with providing these protections. Generally, this responsibility is allocated to the "investigator", 86 researcher, or physician or scientist in the case of medical research. 87 This responsibility essentially aims to ensure the integrity of the research. 88 Citizen science challenges this distinction, as citizen scientists assume both roles. The question of whether a citizen scientist is a controller and, as such, bears corresponding responsibilities arises from the very aspect of participatory research that delivers many of its most valued benefits – the layperson as both participant and researcher. However, research ethics has yet to provide for how to fulfil the necessary obligations regarding protections when participants assume this dual role. 89
Scholarship in the field of research ethics has already addressed some issues raised by the shift in the role of research participants that participatory science brings. Resnik and colleagues have devised a framework for addressing ethical issues in citizen science that aims to address the imbalance in expertise, education, and relevant knowledge about research practices between professional and lay researchers. This useful framework targets 1) data quality and integrity, 2) data sharing and intellectual property, 3) conflicts of interest, and 4) exploitation. 91 This list captures many of the otherwise unaddressed risks associated with the conduct of citizen science research. However, it does not adequately reflect the ethics challenges posed by the status of a citizen scientist as a (joint) controller. Some researchers have compared the role of the participant-researcher to that of "research assistants", 92 which recognizes the dual role of participant and agent. This, of course, raises issues of unequal positioning that may be captured by Resnik et al.'s issue of exploitation. While exploitation casts a wide net over practices that take advantage of unequal positioning, the specific issue of responsible parties in the form of controller raises a different type of dilemma for actors engaging in participatory research.

First and foremost, there is a concern of harm to citizen scientists as research participants. The assignment of the joint controller role to citizen scientists in circumstances of shared purposes essentially imposes responsibility for protection from harms arising from the processing of personal data on persons who may be least knowledgeable about what those obligations are or how to execute them in compliance with the law. There are two disturbing ethical reverberations from this. One, the lack of knowledge can result in an absence of protections for citizen scientists, in that if a lay person is the party responsible for the handling of data in compliance with the GDPR, but does not know the law or does not sufficiently understand how to apply it, the personal data of participants may not be properly protected (including their own). This consequence may actually serve to negate the most fundamental purpose of research ethics of protecting humans from harm as a result of participating in research. Two, citizen scientists may be held liable for damages caused by data protection violations, which can also amount to harm.

Second, there is a problem of responsibility for harm not corresponding to the actual control over harm. Given the distributed nature of many types of citizen science projects, the joint controller role may be assigned where the lay researchers actually have very little or no control. This not only can result in diminished protections for participants (or participant-researchers), but also results in the imposition of responsibility on persons who have no means by which to actively assume that responsibility. In other words, mere participation in a project with certain shared features (e.g. purpose) can result in the imposition of liability on a lay participant who may have no knowledge of the scope of
his or her obligations or the ability to execute them effectively. The flip side of this is that, where this takes place in the context of a professional researcher-layperson collaborative research project, the professional researchers may escape sole liability for a responsibility which they alone may have the ability to bear effectively.

Third, there is a concern of exclusion of the underprivileged from participatory science. The characterization of citizen scientists as joint controllers may ultimately have an impact on who participates in participatory research. That is, the persons least likely to be able to bear or execute the responsibility of a joint controller, in terms of sufficient knowledge about the law and what it requires of processing in a particular project, or the financial wherewithal to bear any sanctions ensuing from a breach, may refrain from participation. This privileges the elite in society, already a concern in citizen science participation, 93 in an enterprise that claims to "democratize" the generation of knowledge. This is problematic on multiple levels, not least for this undermining effect, but also for the robustness of research results, to the extent that a research-participant base does not sufficiently represent the relevant demographic or the demographic that may be affected by any research results.

The existing standards of ethical research need to be updated to be able to adequately address these dilemmas.

Conclusions and policy recommendations

In this paper we examined how data protection law and research ethics interact in the context of participatory science and the assignment of responsibilities for the protection of personal data between professional researchers and citizen scientists. We have demonstrated that – in order to ensure complete and effective protection of the data subject – the meaning of controller in data protection law has been construed very broadly, both in non-binding authoritative interpretations and in binding case law. Even a small part in determining if and how personal data will be collected and used renders an individual or an organisation a (joint) controller. This state of affairs has already received criticism in the data protection field for making "everyone a controller" and assigning responsibilities for processing personal data where they do not always belong or are not practically possible to respect, e.g. in distributed computing or in interactions of individuals and small organisations with technology giants on their platforms. The context of citizen science presents yet another case where this unbalanced distribution of data protection responsibilities manifests itself and creates dilemmas of research ethics so far not addressed by the ethics literature or guidelines.
We identified three such dilemmas. First, as long as it is possible to designate citizen scientists as (joint) controllers, participatory science may cause harm to them. Assigning the status of a data controller to a citizen scientist, who is a lay player in the field of research, makes the citizen scientist responsible for the protection of his or her own rights and thus jeopardizes the level of protection of the fundamental right to data protection. This may also lead to the citizen scientist being held liable for data protection violations when other data subjects are harmed, which is another form of harm. Second, the degree of responsibility for data protection compliance and violations will often not correspond to the degree of actual control citizen scientists have over the processes of compliance and violation. The Fashion ID judgment addressed some of this problem, but not the part where citizen scientists are fully involved in the formulation of the research objectives, or have their own objectives for the research outcomes. Third, if the citizen scientists are adequately informed about their potential roles and responsibilities as controllers during recruitment, this may discourage many of them, especially those coming from less privileged groups, from taking part in participatory science and make participation in this form of science accessible only to the elites. It is well beyond the scope of this paper to map out a detailed plan of action to address these dilemmas. We will only sketch some general recommendations here, addressed to the EU legislator and courts, as well as to local research ethics committees, leaving more thorough explorations to further research.

The problem of the citizen scientist as a controller regarding personal data of him/herself and others is a consequence of a more general problem that data protection law is facing, namely, how to balance, on the one hand, complete and effective protection of the data protection rights, not allowing responsible actors to avoid responsibility for data protection violations, while, on the other hand, still assigning responsibility where it belongs. The pre-Fashion ID approach has been criticized as over-inclusive, making "everyone a controller". The more restrictive approach the Court of Justice took in Fashion ID has been criticised for drawing the boundary according to the "stages of processing", thereby oversimplifying modern data processing, which is more often than not a complex multi-stage and multi-actor phenomenon. Reducing responsibility for any impacts of processing on the data subject to individual stages of processing neglects this complexity and thus reduces rather than promotes the protection of the data subject. This reductive approach does not significantly help the case of citizen scientists either. The Court should consider assigning responsibility based on a different criterion, e.g.
the purpose of processing. To illustrate the impact in the context of participatory science: a citizen scientist (and any other joint controller) would be held responsible for data processing only to the extent that it was done for the purposes the citizen scientist formulated. If professional researchers – in addition to the research purposes – also archive personal data for an extended period of time in order to ensure the verifiability of the research results, the citizen scientists should not be considered joint controllers for this processing, since – although they may be aware of and accept this purpose – they did not formulate it. At the same time, if the citizen scientists process personal data for purposes other than research, e.g. to support legal claims in court or other forms of civil activism, the professional researchers should not be joint controllers and responsible for the data processing done for these purposes.

Yet a change in (case) law may take a long time. In the meantime, action can be taken on the level of self-regulation and ethics guidelines for professional researchers engaging citizen scientists. Such guidelines should at least address the following points.

The first and foremost point of attention for such guidelines is an obligation of the professional researchers to fully inform the potential citizen scientists of their possible role as joint data controllers at the stage of recruitment, as well as of the corresponding obligations and possible liability.

Second, professional researchers should run a data protection impact assessment of their envisaged research projects to fully map and become aware of the data processing for which citizen scientists will be (joint) controllers, along with the data protection obligations associated with such processing.

Third, under the law, the citizen scientists and professional researchers, when acting as joint controllers, have to agree on and inform the data subjects of the division of their respective compliance responsibilities, in particular as regards the data subject information rights, and of their roles and relationships as joint controllers vis-à-vis the data subjects. The ethics guidelines should include a model agreement between professional researchers and citizen scientists in which the responsibilities are divided between the two parties based on their actual roles and capacities to comply with data protection obligations. 94 Because of the gravity of both the responsibility and the consequences of non-fulfilment, these points should be addressed formally. For instance, such a model agreement can contain a clause on the obligation of the researchers to indemnify citizen scientists against any data protection liability that may arise as a result of the project, in the context of the jointly formulated research purposes. This can partially address the hesitance of less well-off citizen scientists to join participatory science projects. Another model clause could contain an obligation for the professional researchers and their institutions to provide secure infrastructures through which personal data will be collected and stored.
Fourth, where citizen scientists process data beyond the joint research purposes, they alone will carry the responsibilities for such processing. However, the researchers must inform the citizen scientists of this.

Fifth, where professional scientists are aware of the citizen scientists' own purposes of data processing, they may wish to take measures to ensure that no personal data in an identifiable form is shared with the citizen scientists for those purposes, both to facilitate their own compliance and to shield the citizen scientists from possible liability.

Sixth, in cases where citizen scientists are joint controllers of their own personal data together with professional researchers, the model agreement should vest the professional researchers with the data protection obligations.

Finally, to mitigate any data protection violation risks, both to the citizen scientists and to data subjects, professional researchers should offer citizen scientists training on the basics of data protection law and data security.

There is growing recognition that citizen science presents a potentially valuable activity, some of which takes place in the lacunae of traditional research ethics. The question of whether citizen scientists are data controllers under the GDPR is not an insignificant matter. Such a designation assigns substantial responsibility for the protection of fundamental rights under the law to persons who may not fully appreciate or be equipped to execute these obligations. Given the recognized benefits of citizen science, in principle, attention to nuance in the allocation of rights and responsibilities may serve to promote the positive yields of this form of research, including awareness about data protection. That citizen science may further the democratization of knowledge generation and enhance the broader scientific enterprise through local and lay lenses are reasons to promote sensible, ethical, and responsible compliance with the law. Taking a nuanced approach to the circumstances of assignment of the status of data controller in citizen science projects is an important step toward responsible and ethical participatory research.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Notes

16. As defined in Art 4(1), (7) and (8) GDPR respectively.
17. Article 29 Working Party, "Opinion 1/2010 on the concepts of 'controller' and 'processor'", WP169, adopted on 16 February 2010.
18. WP169, 8-9 (e.g. "[b]eing a controller is primarily the consequence of the factual circumstance that an entity has chosen to process personal data for its own purposes.").
19. European Data Protection Board, "Guidelines 07/2020 on the concepts of controller and processor in the GDPR", adopted on 7 July 2021.
20. Ibid, e.g. p.
21. Google Spain SL, Google Inc v Agencia Española de Protección de Datos and Mario Costeja González, Case C-131/12 [2014] ECLI:EU:C:2014:317.
22. Google Spain 33.
23. Google Spain 34; see also 58, where the Court says that "ensuring effective and complete protection of the fundamental rights and freedoms of natural persons, and in particular their right to privacy, with respect to the processing of personal data" is an objective of the 1995 Directive and not only of the definition of controller.
24. Google Spain 35.
25. Google Spain 36.
26. Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH (Judgment, 5 June 2018) 15.
53. Lilian Edwards, Michèle Finck, Michael Veale and Nicolo Zingales, "Data subjects as data controllers: a Fashion(able) concept?" (2019) Internet Policy Review, published on 13 June 2019, available online at https://policyreview.info/articles/news/data-subjects-data-controllers-fashionable-concept/1400, accessed 2 October 2023, pointing to the risk of considering data subjects as controllers.
54. Opinion of AG Bobek (n 51), 84.
55. Fashion ID 26, 27.
56. Fashion ID 64.
57. The Court referred to the data protection law objective to ensure a high level of protection of the fundamental rights and freedoms and the broad interpretation of controller in view of the effective and complete protection of data subjects. Several actors can be controllers and bear data protection obligations at the same time. The status of a controller does not require access to personal data, and a person with influence over data processing for his own purposes may be considered a controller. Fashion ID 65-68.
66. See e.g. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, DHEW Publication No. (OS) 78-0012; World Medical Association, Declaration of Helsinki: ethical principles for research involving human subjects (2008) (in the case of health research); and Resnik et al. (n 13) (for an ethical framework drawing on established ethical principles).
82. Indeed, those institutions will most likely be joint controllers as per Jehovan todistajat. Although they do not directly determine the purposes and means of each data processing operation taking place within them, they have research as the core of their business, encourage and organize it, and are aware that it often involves personal data processing (Jehovan todistajat 70).
83. Art 82(3) GDPR.
84. e.g. Wirtschaftsakademie 43, Fashion ID 70.
85. Google Spain 34; see also 58 (n 23).
86. See e.g. The Belmont Report (n 66).
87. World Medical Association, Declaration of Helsinki (n 66).
93. Prainsack (n 6).
94. Resnik and colleagues raise a similar point in Resnik et al. (n 13): that there should be an understanding between the researchers and participants at the outset about how certain rights and responsibilities are distributed.
The complete chloroplast genome of Aucuba obcordata (Rehder) Fu ex W. K. Hu et Soong (Garryaceae) from China
Abstract
Aucuba obcordata is an endemic species and traditional Chinese medicine in China. The complete chloroplast genome sequence of A. obcordata was generated by de novo assembly using whole-genome next-generation sequencing. The complete chloroplast genome was 157,993 bp in length and comprised four parts: a large single copy (LSC) region of 87,317 bp, a small single copy (SSC) region of 18,483 bp, and two inverted repeat (IR) regions of 26,094 bp each. The genome annotation contained a total of 113 genes, including 79 protein-coding genes, 30 tRNA genes, and four rRNA genes. The overall GC content was 37.8%. Phylogenetic analysis of chloroplast genomes clustered A. obcordata with Eucommia ulmoides Oliver.

Aucuba obcordata (Rehder) Fu ex W. K. Hu et Soong, a member of Garryaceae (APG IV 2016), is endemic to China (Xiang and Boufford 2005). The leaves of A. obcordata can be used as a traditional Chinese medicine for treating dysmenorrhea, irregular menstruation, traumatic injury, and scald (Editorial Committee of Chinese Materia Medica 1999). Meanwhile, A. obcordata is cultivated in gardens because of its evergreen habit, shiny leaves and brightly colored fruits. Aucuba Thunb. is phylogenetically close to the American genus Garrya Lindl. (Bremer et al. 2001; APG IV 2016). It contains only ten species in the world, with seven of them being endemic to China (Xiang and Boufford 2005). The taxonomy and genetic structure of A. obcordata are complicated due to its wide distribution, diverse leaf form, dioecy and ambiguous interspecific boundary with A. albopunctifolia Wang. In the present study, we sought to sequence the complete chloroplast genome of A. obcordata and expand our understanding of the diversity of Aucuba chloroplast genomes. Our study could also provide basic data for the conservation of this medicinal species and is essential for studying the phylogeny and evolution of the genus Aucuba, the family Garryaceae and the order Garryales. Total genomic DNA was extracted from leaves of Aucuba obcordata (Yi Tong TY2934, GUCM) collected from the wild in Guangdong Province (E113°54′41.84″; N23°44′35.26″), China, using the DNeasy Plant Maxi kit (Qiagen, Valencia, CA). An Illumina paired-end library was constructed and sequenced on the Illumina HiSeq 2500 platform (Illumina Inc., San Diego, CA). The paired-end reads were assembled using the GetOrganelle pipeline (Jin et al. 2018). The filtered plastid reads were transferred to the Bandage software (Wick et al. 2015) for visualization. The complete plastid genome was annotated and compared with the reference sequence of Cornus controversa Hemsl. in Geneious 7.1.4 (Kearse et al. 2012). The final complete plastome was deposited in GenBank (accession no. MN015608). The complete chloroplast genome of A. obcordata is 157,993 bp in length and has a typical quadripartite structure, consisting of a large single copy (LSC) region of 87,317 bp, a small single copy (SSC) region of 18,483 bp, and two inverted repeat regions (IRa and IRb) of 26,094 bp each. Genome annotation revealed 113 functional genes in the chloroplast genome, including 79 protein-coding genes, four ribosomal RNA (rRNA) genes, and 30 transfer RNA (tRNA) genes. Seventeen of the genes are duplicated in the IR regions. The overall GC content of the circular genome is 37.8%, while the corresponding values for the LSC, SSC, and IR regions are 35.9%, 31.5%, and 43%, respectively.
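To make summary statistics of this kind concrete, a minimal sketch is given below; it assumes Biopython is installed and that the GenBank record has been downloaded to a hypothetical local file named MN015608.gb (the filename is ours, not part of the study).

```python
from Bio import SeqIO  # Biopython

# Sketch: recompute the overall length and GC content of the plastome from
# its GenBank record. "MN015608.gb" is a hypothetical local filename for the
# accession reported in the text; download the record first (e.g. from NCBI).
record = SeqIO.read("MN015608.gb", "genbank")
seq = str(record.seq).upper()
gc = (seq.count("G") + seq.count("C")) / len(seq) * 100
print(f"length = {len(seq)} bp, GC = {gc:.1f}%")  # expected ~157,993 bp, ~37.8%
```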
Based on the chloroplast genomes of 17 species, we constructed a maximum-likelihood (ML) tree with RAxML-HPC2 on XSEDE (8.2.12) on the CIPRES Science Gateway (https://www.phylo.org/). The ML tree (Figure 1) revealed that the phylogenetic placement of A. obcordata was sister to Eucommia ulmoides Oliver, and the phylogenetic relationships of the analysed species were consistent with previous results (APG IV 2016). Our results can provide a reference for other Aucuba species and can subsequently be used for species identification, phylogenetic analysis and chloroplast genomic studies of the genus Aucuba.
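For readers who wish to inspect the resulting tree, a minimal sketch using Biopython's Phylo module is given below; it is not part of the study's pipeline, and the Newick filename is a hypothetical placeholder for whichever tree file the RAxML run produces.

```python
from Bio import Phylo  # Biopython

# Sketch: load and display an ML tree exported from a RAxML run. The filename
# is hypothetical; RAxML names its outputs after the run label supplied to it.
tree = Phylo.read("RAxML_bestTree.aucuba", "newick")
tree.root_at_midpoint()  # midpoint rooting for display purposes only
Phylo.draw_ascii(tree)   # quick text rendering of the topology
```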
Particle Breakage and Attrition
The attrition of particulate materials during their handling and processing may result in losses of material, poor product quality, poor flowability and environmental pollution caused by the generation of dust. It is not surprising therefore that so much research has been devoted to studying the breakage mechanism of particles, in order to reduce the attrition in conveying and handling systems. In this paper, an extensive literature review on the above topics is presented. The literature review also covers the experimental systems that are commonly used to evaluate these phenomena. Moreover, some new experimental results are presented to clarify future trends, to better understand the complex mechanisms at work, to reduce the required number of 'standard indices' and to enable better engineering design.

Introduction
Particle breakage is a common occurrence in science as well as in engineering. Depending on the situation, the same phenomenon is referred to as crushing, grinding, fracture, partition, division, disintegration, shattering, scission, fragmentation, degradation, and abrasion. However, while "comminution" is used to describe the desired breakage of particles, "attrition" is the undesirable damage of particles. Both comminution and attrition are a result of either impact, compression, frictional or shear forces, or sometimes a combination of these forces. In the chemical processing industry, breakage can strongly influence the operation and economics of a manufacturing process [1]. Particle breakage can occur in a variety of modes depending on material characteristics and the level of applied stress. Such stresses are encountered in nearly all handling and processing systems and lead to attrition, which, physically, is identical to comminution except that it represents losses due to undesired particle breakage. Attrition is not restricted to any particular type of process, even though there are some processes where it occurs more readily or where its effects are quite serious. The effects of attrition can be a loss of product by removal of undersized particles from the process streams, the need for recycling lost products, the requirement for additional filtration, loss of flowability and environmental pollution caused by large quantities of dust. Conveying systems that increase attrition are: free fall systems, chutes, screw conveyors, pneumatic and hydraulic conveyors, fluidised bed systems, cyclones, silos, etc. [3]. Massive fracture usually occurs when the overall stress acting on a particle exceeds a critical value, and results in disintegration of the particle into a large number of fragments, all significantly smaller than the parent. Degradation is probably associated with smaller applied stresses for which the critical value is only exceeded locally, at the edges, for example. Here, the original particle retains its identity but experiences a slow, more or less continuous loss of mass. At the same time, there is a continuous production of fragments much smaller than the parent [4]. Damage to the particles can occur immediately or after repeated loadings below the critical value, a phenomenon known as fatigue [5]. In chemical engineering practice, attrition may be an important factor in the design of plants with solids processing steps [6,7]. However, a complete mechanistic understanding of solids breakage is not yet available despite numerous fundamental studies [8,9].
To quantify breakage for process design and optimisation, three pieces of information are needed: the stress distribution, the rate at which breakage occurs, and the number and sizes of the "daughter" particles resulting from breakage of a parent particle. The problem is further compounded by the fact that breakage may be history-dependent, with the solid fracturing along existing cracks [1]. This may partially explain the complexity of the problem, which involves simultaneous fractures growing within a large number of interacting particles. As the breakage proceeds, the stress distribution and load transfer between particles change significantly. Detailed theoretical studies are mostly limited to the propagation of single cracks inside a single particle, and studies that are more global are largely statistical in nature. Experimental studies provide only very superficial data such as fragment size distributions. The existing continuous numerical schemes such as finite element methods can also handle only the propagation of one or two cracks inside a single particle, and do not appear to be able to handle massive fracture [2]. Since a theoretical or numerical analysis of comminution or attrition processes up to a level that could be applied to practical design is impossible, it is common to evaluate the strength of particles and their damage (breakage and degradation) by measuring various indices of friability in a variety of standard systems. Another approach is to conduct simulation experiments with the system in question or a similar one. Many different types of tests have been described by the British Materials Handling Board [3] and in studies such as those by Bemrose and Bridgwater [10] to assess the breakage and attrition tendency of particulate materials. In this paper, some recently published papers concerning both experimental and theoretical approaches to the breakage and attrition of particles are reviewed. The discussion is limited to compression of individual particles and particulate beds and to impact loads. Examples of comminution and attrition systems are given to emphasise the practical application. Throughout the paper, the fatigue phenomenon is emphasised. Some original experimental results by the present author are also shown.

Compression of single particles
It is often necessary to monitor the strength of a product rapidly and reproducibly. This is particularly important if the breakdown of particles in subsequent handling is to be avoided. The conventional method is the so-called 'Brazilian test', in which single particles are crushed between two platens and the load required for fracture is recorded. However, in any batch of particles formed under nominally identical conditions, there is always a wide variation in the fracture loads measured in this way. Consequently, many particles must be tested before a reliable average can be obtained [11]. An example of compressive strength measurements is shown for potash in Figure 1. For only 20 measured particles, the strength range was found to be from 59 to 175 N. Many researchers have conducted experiments to evaluate the compressive strength of single particles of various materials: Shipway and Hutchings [12] tested brittle spheres, Adams et al. [11] tested agglomerates, and Kalman and Goder tested potassium sulphate [5]. It has been shown that the fracture behaviour of a single grain depends generally on its size, shape, material properties and the loading conditions [13].
Danjo et al. [14] measured the particle diameter and compressive load for 140 particles individually, and then the single particle strength was calculated. They found that the compressive strength decreased exponentially with an increase in particle diameter up to about 500 µm, and thereafter showed a constant value. The breakage behaviour of fine single brittle particles of five minerals and two coals in a size range from 88 µm to 1 mm was investigated by Sikong et al. [15]. They found that the relation between the particle strength and the particle size of these fine particles is similar to that of coarser particles. Gundepudi et al. [16] studied brittle spheres using a different compression method, namely three-point in-plane loading. They found that there are maximum tensile stresses that correlate well with failure, and which are partially responsible for attrition in particulate systems. In the case of tablets, axial and radial compression yields compressive and tensile stresses. However, Kalman et al. [17] showed that the two stresses are correlated for large tablets (2.5 cm in diameter). The fracture of brittle materials is controlled by the propagation of a primary crack in a tensile field. Many research studies have been conducted to develop theoretical models for crack propagation. Mecholsky et al. [18] found three local maximum tensile stress regions in particle-particle or particle-wall contact situations. The first is the region just outside the contact circle, the second is far-field surface stresses near the meridian of the sphere relative to the contact point, and the third is a region of internal stress below the contact surface and the compressive zone. Tsoungui et al. [13] proposed a theoretical model to define the failure criterion on an individual grain subjected to an arbitrary set of contact forces. The model is implemented in a two-dimensional computer simulation code based on the molecular dynamics method to study the crushing mechanisms of grains inside a granular material under compression. Song et al. [19] proposed a direct stochastic simulation method, which is applicable for general population balances with particle break-up, based on the analysis of the dynamic break-up process of particles in particulate systems. They suggested two steps: 1) to determine whether a particle breaks up using the breakage frequency function; and 2) to determine the volume of daughter particles for one breakage event using the probability distribution function. Another method for generating theoretical breakage distribution functions for multiple particle breakage is presented by Hill and Ng [1]. Mecholsky et al. [18] also found a correlation between the number of pieces generated during fracture, the stress at fracture and the fractal dimension for a particular loading geometry. However, in the case of an individual grain with an arbitrary number of contact forces, the calculation of the stress distribution inside the grain and the prediction of its fracture condition remains a difficult task [13].
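To make the two-step stochastic scheme of Song et al. concrete, a deliberately simplified Monte Carlo sketch is given below; the linear selection function and the uniform binary split are our illustrative assumptions, not the models proposed in the cited studies.

```python
import random

# Simplified Monte Carlo sketch of the two-step stochastic breakage scheme:
# step 1 decides whether each particle breaks during a time step via an
# assumed breakage frequency S(v) = K * v; step 2 draws daughter volumes
# from an assumed uniform binary split. Total volume is conserved.
K, DT = 1e-3, 1.0            # assumed rate constant and time step

def evolve(volumes):
    out = []
    for v in volumes:
        if random.random() < min(1.0, K * v * DT):  # step 1: breakage event?
            f = random.uniform(0.2, 0.8)            # step 2: daughter split
            out += [f * v, (1.0 - f) * v]
        else:
            out.append(v)
    return out

particles = [1000.0] * 100   # monosized feed, arbitrary volume units
for _ in range(50):
    particles = evolve(particles)
print(len(particles), "particles; mean volume =", sum(particles) / len(particles))
```

Because volume is conserved at each split, the sketch reproduces the qualitative behaviour discussed above: the number of fragments grows while the mean fragment size falls.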
However, before making any attempt to use these models in practical systems, several further steps should be defined and developed:
1. Crack propagation is a stochastic phenomenon that depends on the structure of the particle, pores, stresses and contaminant distributions. The breakage models should therefore be developed to a stage where a reasonable average for further practical use can be defined despite the variety in the structure of individual particles.
2. The models should be suitable for irregular particle shapes and various sizes.
3. Since particles in the size order of a few millimetres and lower are usually not crushed as single particles, the model should relate the breakage to multiple contact points, with variations of stress distribution at the boundaries being accounted for.
4. Finally, the crack propagation models should yield the number and size of daughter particles.
In a particulate bed, the daughter particles of a broken particle will affect the remainder of the unbroken particles by increasing the contact points and changing the stress distribution at the particle boundaries. Since the above requirements are quite complex, more research effort should be devoted to developing empirical or semi-empirical correlations.

Compression of particulate beds
A simple alternative method to the single particle compression test consists of replacing the single particle with a confined bed of similar particles and then inferring an average single particle strength parameter from the behaviour of the whole bed under compression. This is most easily achieved experimentally using a piston in a cylinder, in which the test becomes one of uniaxial confined compression [11]. Kanda et al. [20] studied the compressive crushing of powder beds (quartz) for a roller mill application. They studied the effect of applied load, the mass of the feed and the particle size on the probability of crushing and on the crushing resistance. Holman [21] showed, for these and similar handling systems, that the percolation theory in combination with the principles of mechanics adequately describes the relationship between the normalised solids fraction and the logarithm of the applied pressure. According to this theory, the materials can be classified by their softness or rigidity on the one hand, and their flexibility or brittleness on the other, depending on whether a rigidity threshold, a brittle-ductile transition or a percolation threshold is exceeded. The pragmatic approach used by Liu and Schönert [22] for modelling interparticle breakage has proved its ability to predict the size reduction of an arbitrary feed distribution within a technically reasonable range and with a good degree of accuracy. It should therefore be possible to use it for modelling closed-circuit comminution systems with high-pressure roller mills. In an additional experimental study, this application was tested and it was found that model calculations were able to predict the experimental results well [22]. By carrying out confined uniaxial compression tests with monosized materials, the percentage of broken particles versus the pressure can be determined. A compression test of potash is presented in Figure 2. The experiment was conducted up to a pressure where bonds between particles were observed. The percentage of broken particles was determined by the ratio of undersized particles to the initial weight during sieving, without any distinction being made between fragmentation and degradation. In the presented range of compression loads, the percentage of broken particles is linearly dependent on the compression load. However, for other materials and higher compression loads, the breakage was found to level off at a certain load [5].
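The linear regime just described can be summarised by an ordinary least-squares line; the following sketch uses invented load/breakage pairs rather than the potash data of Figure 2.

```python
# Least-squares sketch for the linear regime described above:
# percent broken particles ~ a + b * load. All numbers are invented
# for illustration and are not the potash data of Figure 2.
loads  = [5.0, 10.0, 15.0, 20.0, 25.0]   # hypothetical compression loads [kN]
broken = [2.1, 4.0, 6.2, 7.9, 10.1]      # hypothetical % broken particles

n = len(loads)
mx, my = sum(loads) / n, sum(broken) / n
b = sum((x - mx) * (y - my) for x, y in zip(loads, broken)) \
    / sum((x - mx) ** 2 for x in loads)
a = my - b * mx
print(f"percent broken ~ {a:.2f} + {b:.3f} * load")
```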
3. Impact Strength
Impact tests are common and are aimed at subjecting materials to forces that are similar to those they would encounter during handling (dilute-phase pneumatic conveying, chutes, etc.) and during comminution in a jet mill. It is believed that by performing tests on either single particles or groups of particles that collide with walls or with other particles, a representative measure of the particle friability can be obtained [3]. Therefore, much effort has been devoted to improving the test rigs and the measuring systems. Guigon et al. [23] presented a comprehensive literature survey of studies that investigate the impact of particles on various targets. The reported experimental velocities varied from 1.1 up to 600 m/s. Tavares et al. [24] and Tavares [25] used sophisticated measurement equipment. The test rig consists of a long steel rod equipped with strain gauges on which a single particle or a bed of particles is placed and then impacted by a falling steel ball. Until now, this system has been used to investigate the deformation and fracture of single particles subject to impact crushing. According to many investigations, the following parameters were found to affect the attrition rate due to impact:
1. Particle velocity: the fundamental studies of Salman et al. [26-29] and Ghadiri and Papadopoulos [30] demonstrate that the attrition rates increase with increasing velocity. Gorham and Salman [31] found that at the lowest velocities, fracture is mainly due to a brittle-elastic response. At higher velocities, inelastic deformation under the impact site leads to characteristic patterns of fragmentation arising from radial, lateral and meridian cracks.
2. Particle size: the particle strength decreases as the particle size increases [24,32,33] because larger particles have more micro-cracks and impurities. Nevertheless, the length and number of micro-cracks may vary; therefore, some results showed that particle attrition is independent of particle size [34,35].
3. Particle shape: Vervoorn and Scarlett [33] found that particle shape is an important factor in attrition, especially when the presence of sharp edges and corners allows large local stresses to be created that easily cause particle attrition. Tavares et al. [24,25] also showed the effect of particle shape on the particle fracture energy, the particle strength and the particle stiffness.
4. Target rigidity: Salman et al. [26-29] and Mebtoul et al. [36] established the influence of target nature, thickness and orientation on the attrition rate of particles colliding with targets.
5. Particle orientation: Cleaver et al. [32] found that the attrition mechanism depends also on particle orientation. For impacts on sharp corners and edges, particle damage appears to result from semi-brittle failure at all velocities tested. For impacts on crystal faces, however, a threshold velocity was identified, above which brittle fracture occurred, and below which no visible damage was detected.
6. Impact angle: the studies of Salman et al. [26-29] showed that the probability of particle failure varies only slightly from normal impact to about 50°.
The mechanical strength of agglomerate materials under impact was investigated by means of computer simulation using Distinct Element Analysis. The breakage of agglomerates upon impact is shown to increase with impact velocity until a certain limit is reached, beyond which the damage seems to approach an asymptote [37]. An examination of the mechanisms that lead to the pattern of impact breakage in two-dimensional discs was presented by Potapov and Campbell [38].
It should be noted at this stage that most of the experiments reported above with high velocities were conducted with systems where the particles are accelerated in an air stream (air gun) and oriented towards a target. These experimental systems are limited to large particles with high densities. Otherwise, the particles tend to follow the air stream, and do not collide with the target, or are deflected by the air stream to collide at a different angle. These systems should, therefore, be modified to enable fine powders to also be tested.

Comparison Between Methods
Comparison between the various methods of measuring particle strength is very important. Although over the years similarities have been found between the comminution and attrition behaviour in practical systems and some strength measurement methods, all methods relate, in one way or another, to the strength of the particles. Reliable theoretical models or empirical correlations relating the strengths measured by various methods could significantly reduce the number of required measurements and strength indices in current use. Therefore, research and investigation comparing various strength measurement methods is very important.

Compression of individual particles and particulate beds
The bulk crushing test is commonly used in industrial applications to assess the attrition resistance of particles. A small quantity of particles is placed in a rigid container and loaded quasi-statically by a piston to a pre-specified level of stress. The extent of breakage is then analysed after the unloading stage. However, despite the simplicity of the test procedure, the analysis of particle breakage is very difficult because the test is carried out on an assembly in which not all particles are uniformly loaded. It is therefore difficult to relate the test results to particle properties, a task that is highly desirable for the optimisation of production as it enables the particle properties to be tailored for improved performance [39]. A full solution to this problem would require the formulation of an assembly model relating the distribution of contact stresses to the distribution of single particle failure stress within the bed. In practice, this problem is so complex that it can only be addressed by computer simulation [11]. Danjo et al. [14] found experimentally that a linear relationship existed between the compressive load and the number of particles. The average particle strength was found to be lower than the single particle strength in every sample. This is due to a variety of factors such as the distributions of particle size, shape, and compressive strength in multiple particle systems. They also examined the particle strength evaluated from the inflection of the compression curve of a particulate bed. Particle strengths obtained from the inflection points were closely related to the single particle strength. Adams et al. [11] presented a simple theory that provides a means by which the average shear strength of a single agglomerate can be obtained by experiments on a bed of agglomerates, and this value is related to the single particle crushing strength through a single empirical proportionality factor. Numerical methods such as the Distinct Element Method [39] and Distinct Element Analysis [40,41] were also applied to find the relationship between bulk compression and single particle compressive strengths.
Single particle mechanical properties such as Young's modulus and compressive strength distribution have been characterised by Couroyer et al. [39] and used in the simulation to predict the bulk crushing behaviour. The results show that an increase in the value of Young's modulus and the coefficient of friction leads to a significant increase of breakage in the assembly, and that a decrease in the loading rate leads to a lower extent of breakage. The strength of three samples with different levels of macroporosity was compared under quasi-static loading by Couroyer et al. [41]. The experimental data were used to test the DEA. In order to develop a reliable model, the individual particle compressive strength should be related to particulate bed experiments through the distribution of contact points that transfer the loads and the compression level. The distribution of contact points depends on the size distribution of particles and their orientation and structure in the bed. The problem then becomes more complicated since at low loads and low bed compaction, particles fracture when the area-averaged load exceeds their single-particle fracture strength. At higher loads and degrees of compaction, fine particles begin to transmit force, and distribute the force flux over the surface of large particles [42]. All of the above-described models assume the compression stress within the die to be constant. However, preliminary results obtained by the present author show that the pressure varies along the die height and sometimes also radially at the pressing piston. The single particle load at the die walls was measured by indentation. In order to increase the indentation sensitivity, a thin copper plate was used to replace the die walls. In these tests, very hard zirconium spheres of 1-1.2 mm in diameter were used in a 25-mm-diameter die. An example of the indentation at the die walls is shown in Figure 3. The indentations have a "droplet" shape, which indicates that the bed deforms during application of the load. The indentation area is, however, proportional to the force that presses the spherical particle against the die wall. The force and indentation area were calibrated in a manner similar to that used for the hardness measurement of surfaces, and some typical results are shown in Figure 4. Figure 5 is presented to show the indentation area at the die wall as a function of height from the bottom surface (the bottom surface was kept stationary while the load was applied through the upper surface). It is clear that the average indentation area, and consequently the pressure, increase towards the upper surface. This emphasises the need to describe the pressure distribution within the die in a more accurate way prior to any attempt to relate it to the stress experienced by individual particles within the die for their breakage analysis.
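The calibration step lends itself to a simple regression; the sketch below, with invented numbers rather than the data of Figure 4, converts a measured indentation area on the copper liner into an estimated wall-normal force.

```python
# Calibration sketch (numbers invented, not the data of Figure 4): fit a
# linear force-vs-indentation-area relation from calibration points, then
# use it to read the wall force from a new indentation measurement.
areas  = [0.02, 0.05, 0.09, 0.14, 0.20]  # hypothetical indentation areas [mm^2]
forces = [5.0, 12.0, 22.0, 35.0, 50.0]   # hypothetical calibration forces [N]

n = len(areas)
ma, mf = sum(areas) / n, sum(forces) / n
slope = sum((a - ma) * (f - mf) for a, f in zip(areas, forces)) \
        / sum((a - ma) ** 2 for a in areas)
intercept = mf - slope * ma

measured_area = 0.11                     # a new indentation on the liner [mm^2]
print(f"estimated wall force ~ {slope * measured_area + intercept:.1f} N")
```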
Compression and impact
Shipway and Hutchings [12] presented results of a theoretical and experimental study of the fracture of single brittle spheres by uniaxial compression between opposed platens and by free impact against plane targets. They found that the stress distributions in elastic spheres are broadly similar under both types of loading, with significant tensile components inside the sphere on the axis of the system and on the surface of the sphere, around the equator for the case of compression.

Fatigue Strength
In most of the handling and conveying systems involving attrition, each particle experiences more than one event of loading. The loading might be of compression and shear force, as in silos, or of impact force, as in pneumatic conveying systems. Many research studies related the attrition rates to the residence time in continuous systems or to the test time in batch systems [45]. However, these tests should be related to the number of loading events, which is a more fundamental parameter as it relates to fatigue [5]. The kinetics of batch milling was proposed to be expressed with respect to energy instead of time [46]. Moreover, Potapov and Campbell [47] noted that the attrition rate is simply related to the total work performed on the system. This appears to be independent of the mechanisms of breakage, how the work is applied, and even whether the material is experiencing quasi-static or rapid flow behaviour. Comminution processes involve a combination of discrete breakage events by particle fracture and continuous degradation. A simplified model for a mathematical description of the overall process, as well as process simulations used to illustrate the effects of the different mechanisms on grinding kinetics and product size distribution, are described by Hogg [4]. He also pointed out that product size distributions assume an increasingly bimodal character as the relative contribution from degradation increases. In order to evaluate a continuous grinding process, a model is proposed which combines experimentally determined breakage kinetics [48]. Berthiaux and Dodds [49] developed a methodology for characterising grinding kinetics based on a new criterion, the so-called "residual fraction", to represent the performance of a grinding process. Similar analyses could be applied to attrition in the handling and processing of particulate materials.

Compression
Although in storage and handling systems particles may experience repeated compressive loadings, almost no previous reports concerning fatigue behaviour were found. Experimental results concerning the repeated compressive loading of particulate beds were reported by Kalman and Goder [5,50,51]. They used a testing rig that is shown in Figure 6 (experimental apparatus for repeated compression cycles). A sample of the tested material is compressed in a cylindrical die under repeated compressive forces. Both the rate and the maximum value of the compressive force are adjustable. The number of cycles is pre-set for every test. The test rig was designed in such a way as to enable the application of compressive stresses to a bulk material inserted into a cylindrical die. After filling, the matrix is tapped to achieve better repeatability of the initial conditions. The upper piston is loaded by a pneumatic piston through a beam that compresses the material. By varying the location of the pneumatic piston, its air pressure, and the die dimensions, one is able to control the compression stress. A pressure switch controls the load of the pneumatic piston between the pre-set upper and lower pressures. The frequency of operation is controlled by a needle valve. With this arrangement, the pneumatic piston increases the load until the upper set point of pressure is reached and then the pressure is reduced to zero, and so on. The frequency was set to very low values to avoid impact effects. An electrical counter that enables long experimentation overnight for thousands of pre-set cycles was incorporated into the system. After each test was terminated, the percentage of material under a certain size was measured. For engineering use, a fatigue curve is useful.
The "fatigue curve" is a term taken from mechanical engineering, and which is applied to metals to describe the load versus the loading cycles for damage to a standard specimen. Since the tested material is a single specimen, a single curve is plotted for any probability of occurrence. In fatigue experiments of particulate assemblies, however, many specimen particles exist in a single test. It would be impossible to plot a single curve. However, several curves can be plotted to describe an amount of damage by each curve as shown in Figure 7. From an engineering point of view, the compression load and the number of loading cycles can be found for a postulated amount of dam- age from such curves. Each curve is expected to stabilise, as mentioned earlier, because the stress distribution on each undamaged particle is moderated due to the broken particles that provide more contact points. The fatigue curve of potassium sulphate is shown in Figure 7. The percentage of undersize particles is an uncontrolled parameter in the experiments. It is a result of the compression stress and number of cycles and is measured only after the test is terminated. Therefore, it was impossible to conduct experiments with a constant amount of damage. The percentage of undersize particles is indicated in the figure for each experiment. From these, the fatigue curves were estimated and plotted manually. As expected, the amount of damaged particles increases with both the compression stress and number of cycles. As expected, the curves also stabilise after a certain number of compression cycles. In order to complete the insight into the fatigue behaviour, many more tests with various materials must be conducted. Furthermore, new experiments on the fatigue of individual particles subjected to repeated compression loads will be conducted. Finally, a correlation between single particle strength and particulate bed strength during cycled loading can be determined based on the modified or existing empirical models that relate the single particle and particulate bed strength in a single static compression. Impact Most experiments published in the literature and the standard available equipment are related to repeated impact loads. Salman eta!. [27][28][29], Cleaver et a!. [32] and Ghadiri and Papadopoulos [30] showed that attrition rates increase sharply with the number of impacts. Also experiments conducted in fluidised beds, which are common test rigs for attrition measurements [34,52], show that attrition rates increase with time. The time of fluidised bed operation can be converted, although in a very complex manner, to the number of impacts. A new test method that allows the characterisation of granules by their attrition resistance, fatigue lifetime and breaking mechanism was presented recently by Beekman eta!. [53]. The following sections describe impact strength measurements in various experimental rigs and their relation to attrition in industrial systems. The impact velocity is divided into two ranges: low velocity (1-10 m/s) that is applicable in chutes, dense-phase pneumatic conveying, fluidised beds and some processes conducted in rotating drums such as coating and granulation; and medium velocity (10-40 m/s) that is KONA No.18 (2000) applicable in dilute-phase pneumatic conveying. A high impact velocity (80-300 m/s) may occur in comminution systems such as jet mills and pin mills. 
Low velocity - rotating drum
Rotating drums are used as a means of conducting some processes such as coating, granulation and mixing, as well as for characterising the attrition and strength of particles. This apparatus is widely used in the pharmaceutical industry to characterise the strength of tablets. This method was also used for large tablets (1 inch in diameter) by Kalman and Targan [22]. They found that the attrition rate of tablets is well correlated to the compression or tensile strengths. Figure 8 presents an example of several results obtained by Grant and Kalman [54] with a rotating drum made of steel, 285 mm in length and 265 mm in diameter. One shelf of 40 mm in width was used. Multiplying the rotation speed by the period of operation provided the number of rotations. After a predetermined period of operation, the sample was sieved to provide a size distribution, and in some cases the strength of the particles was measured by a Crush Strength Analyser. An example of the size distribution variation during a test with potash is shown in Figure 8. The test was conducted at 30 rpm with a sample weight of 100 g. The initial size of the sample was in a narrow range between 2 and 4 mm, so the sample could be considered a monosized material. The figure shows the total weight percentage under the size indicated at each line (cumulative undersize). The upper line, showing the cumulative weight under 2 mm, shows the total weight percentage of the particles that were found to be smaller than the initial lower size limit. As expected, the percentage of damaged particles increases as the number of rotations increases. This is probably due to fatigue. However, it seems that the rate of attrition of the particles decreases.
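Cumulative-undersize curves like those of Figure 8 reduce a sieve analysis to a few numbers; the sketch below, with invented sieve data, builds the cumulative curve and interpolates the weight median size d50, the same quantity tracked later for the conveying tests.

```python
# Sieve-analysis sketch (data invented, not Figure 8's): turn per-class mass
# fractions into a cumulative-undersize curve, then interpolate the weight
# median size d50 at 50 % passing.
openings = [0.5, 1.0, 2.0, 3.0, 4.0]      # sieve openings [mm], ascending
mass_pct = [3.0, 9.0, 33.0, 33.0, 22.0]   # % of mass in each size class

cum, total = [], 0.0                      # cumulative % undersize per opening
for m in mass_pct:
    total += m
    cum.append(total)

def d50(sizes, cum_under):
    """Linear interpolation of the 50 % passing size."""
    pts = list(zip(sizes, cum_under))
    for (s0, c0), (s1, c1) in zip(pts, pts[1:]):
        if c0 <= 50.0 <= c1:
            return s0 + (50.0 - c0) * (s1 - s0) / (c1 - c0)
    return None

print(f"d50 ~ {d50(openings, cum):.2f} mm")   # -> ~2.15 mm for these numbers
```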
In order to gain an insight into the breakage mechanism, the compressive strength of the particles was also measured. In these experiments, the sample was sieved after a postulated number of rotations and the compressive strength of the surviving particles in the initial size range was measured. Each average compressive strength shown in Figure 9 is an average of at least 20 particles, and in some cases even more than 60 particles. Although the compressive strength distribution at any shown point was significant, the average values make the attrition mechanism somewhat clearer. The initial particles are the strongest, and become weaker after experiencing some impact loads in the drum test. The compressive strength first decreases with the number of rotations to a minimum, then increases slightly to a local maximum, and finally decreases again until the strength stabilises. The behaviour is too complicated to permit a complete explanation at this stage, but could be the result of two different effects:
1. The strength of each particle could decrease due to fatigue. Thus, repeated impact loadings enlarge the micro-cracks within the particle until it breaks.
2. Since there is initially a strength distribution of particles, the weaker ones break first. Therefore the average strength of the remainder increases.
If we also take into account that different materials display different sensitivity levels to repeated loading, then we might have an explanation for the behaviour shown in Figure 9. The micro-cracks grow faster than the breakage during the first rotations, and result in a decrease in strength. Then, most particles that were initially weak or suffered from fatigue are damaged, which results in an increase of the compressive strength of the surviving particles. At the end of the process (after 2000 rotations), fatigue slightly influences the breakage until both effects stabilise to a steady-state condition, i.e. the rate of fatigue equals the rate of attrition.

Medium velocity - dilute-phase pneumatic conveying
The main parameters affecting the breakage and chipping of particles in pneumatic conveying systems are air and particle velocities, loading ratio and particle properties such as size distribution, shape and material. Since it was noticed a long time ago that the main attrition occurs at the bends, most studies were dedicated to flow and attrition mechanisms at various bends. McKee et al. [55] showed that particle breakage is inversely related to the solids loading factor. Hilbert [56] examined three bends experimentally: a long-radius bend, a short-radius elbow and a blind-tee. He found that, regarding wear, the blind-tee is the best device (less attrition), with the short-radius elbow coming a close second and the long-radius bend coming in third. A comprehensive study was carried out by Agarwal et al. [57] on a long-radius bend. They studied the acceleration length caused by bends and the effects of phase density, conveying velocity and the use of inserts on the wear of the bends, particle degradation and depth of penetration. Vervoorn [45] carried out pneumatic conveying experiments for dilute-phase alumina flow, changing parameters such as particle velocity and bend structure. Recently, Bell et al. [58] and Papadopoulos et al. [59] presented attrition experiments with salt in which the size distribution was measured on-line. They have also shown that the air velocity has a prime effect on the attrition rate, although the effect of loading ratio and the bend structure cannot be ignored. Kalman and Goder [60] measured the pressure drop, attrition rate, wear of the bend and build-up on the bend walls for four types of bends: long-radius (three construction materials), short-radius elbow, blind-tee, and a turbulence drum. Aked et al. [61] showed that even fine powders (15 µm) suffered significant attrition under certain conditions. Kalman and Aked [62] presented a comparison between different attrition measurement methods and analysed the attrition of various materials. Kalman [63] discussed the possibilities of controlling the attrition rate and of using it for useful processes such as particle rounding, consequently reducing the dust generation during the downstream handling and conveying stages. Most of the parameters affecting attrition in pneumatic conveying pipelines for a variety of materials were summarised by Kalman [64]. As for dense-phase flow, measurements were made of the specific energy consumption and particle attrition for a limited range of particulate materials by Taylor [65]. Coppinger et al. [52] presented experimental evidence to show that a fluid bed attrition test performed on small amounts of a wide variety of powders and granules gives a very good indication of both total attrition and bulk density changes for the materials transported in dense-phase conveying loops. The standard compression test did not yield additional information beyond that gathered from the fluid bed attrition test as far as particle breakability was concerned.
It was also found that there is a good correlation between attrition in the dilute-phase conveying system and the mechanical sieving test, which seems to indicate that breakage and attrition in the two systems are somewhat similar. New results on the attrition of potash at 30 m/s in a 1-inch-diameter pipe, caused by only one bend of different types, are presented in Figure 10. The results are shown in terms of the decrease of the weight median size. It is clear that different bends cause different attrition rates, although in the presented results the difference is not significant. However, the number of times that the material passed through the bend significantly affects the attrition rate.

Fatigue curve
By analysing Figures 8 and 10, it looks as though a fatigue curve for impact might be developed. Disregarding other effects, the impact velocity defines the impact load for a certain particle and a certain target, i.e. higher impact velocities reflect higher impact loads. Therefore, at lower impact velocities, more collisions cause the same damage as fewer collisions at higher impact velocities. A further investigation and analysis should be conducted in order to develop a fatigue curve, similar to the one shown in Figure 7. This could have a significant benefit, since it might unify a number of practical comminution and attrition systems into a common class. Obviously, the picture should be completed with tests with jet mills that give the highest range of impact velocities. In order to enable the comparison of results gained in various systems, other influencing parameters, such as the collision angle and target rigidity, should be converted to the effect of a normal impact load.

Conclusions
A literature review concerning breakage models in comminution and attrition systems is presented. The common strength-characterisation systems for compression and impact are reviewed in detail. The difficulties concerning the application of theoretical models for crack propagation to practical problems are discussed. The review and the results presented in the paper can be summarised as follows:
1. In order to reduce the number of required tests and strength indices, future investigations should be devoted to developing empirical correlations between various measurements, such as individual and particulate bed compression, particle compression and impact, etc.
2. The stress distribution in the die used for particulate bed compression should be taken into account for comparison with single particle compressive strengths. The indentation method shown in this paper might provide the required means of measurement.
3. Fatigue curves for compression and impact loads could improve design tools for comminution and attrition systems.
4. Industrial impact systems such as rotating drums, ball mills, pneumatic conveying, pin and jet mills might be incorporated as an integral part of a common fatigue curve to characterise most systems where degradation and breakage occur.
Quality of life following medial patellofemoral ligament reconstruction combined with medial tibial tubercle transfer in patients with recurrent patellar dislocation: a retrospective comparative study
Abstract
Background: Because the patients undergoing medial patellofemoral ligament reconstruction (MPFLr) combined with medial tibial tubercle transfer (TTT) are usually young and active, quality of life (QoL) is also an important prognostic factor for patients with recurrent patellar dislocation. Assessing QoL can provide more useful and accurate evidence for the effects of this procedure. This study aimed to evaluate QoL following MPFLr combined with TTT, compared with isolated MPFLr (iMPFLr).
Methods: Fifty-one patients who underwent MPFLr + TTT and 48 patients who underwent iMPFLr were included. Clinical evaluation included QoL (EQ-5D-5L and EQ-5D VAS), functional outcomes (Kujala, Lysholm and Tegner activity scores), physical examinations (patellar apprehension test and range of motion) and redislocation rates. Radiological evaluation included patellar tilt angle and bisect offset. These preoperative and postoperative results were compared between groups at baseline and the final follow-up. The paired and independent t tests were used for data following a normal distribution; otherwise, the Wilcoxon and Mann-Whitney U tests were used to analyze the differences. Categorical variables were compared by chi-square or Fisher's exact test.
Results: All of the QoL (EQ-5D-5L and EQ-5D VAS), clinical results and radiological outcomes significantly improved in both groups at the final follow-up, with no significant differences between groups. There was no significant difference in the five dimensions of EQ-5D at the final follow-up, although the percentages of people with problems of mobility and pain/discomfort were higher in the MPFLr + TTT group. Female patients had a lower EQ-5D index and EQ-5D VAS compared with male patients in both groups at the final follow-up, but there was only a significant difference in the EQ-5D VAS.
Conclusions: Both the MPFLr + TTT and iMPFLr groups obtained similar and satisfactory improvements in QoL, clinical results and radiological outcomes, indicating that MPFLr combined with TTT is a safe and effective procedure, which can significantly improve QoL for patients with recurrent patellar dislocation in cases of a pathologically lateralized TT. However, female patients obtained lower QoL than males.

The MPFL is the primary static soft tissue restraint against lateral subluxation and dislocation of the patella, especially between 0° and 30° of knee flexion [9]. Therefore, injury or deficiency of the MPFL is one of the predisposing factors for RPD. In most cases, an acute traumatic patellar dislocation leads to MPFL rupture [8,11]. Therefore, an anatomical MPFL repair is necessary to prevent redislocation of the patella [11]. Although various procedures have been reported for the treatment of RPD, MPFL reconstruction (MPFLr), which is one of the proximal realignment surgical techniques, has become an increasingly common and popular procedure in the last decade, with satisfactory clinical outcomes, reduced redislocation rates and improved quality of life (QoL), although surgical techniques and graft types vary [5,8,9,12-14]. However, it is necessary to consider the potential risk factors before isolated MPFLr (iMPFLr) [15-17].
The lateralized TT, which can be measured by the TT-trochlear groove (TT-TG) distance, leads to an increase in the Q angle and a lateral force on the patella, thus damaging normal patellar tracking [8,18]. The iMPFLr could not achieve promising results due to increased graft tension and potential failure caused by the TT lateralization, which produces anisometry in the MPFLr [19]. Therefore, a combination of MPFLr and TT transfer (TTT) for patients with an increased TT-TG distance, especially when the TT-TG distance is greater than 20 mm, should be taken into consideration, with the purpose of addressing both patellar dislocation and patellar maltracking at the same time to restore the optimal position of the patella relative to the femoral trochlea [5,18,20]. A systematic review showed that MPFLr combined with TTT is a safe and effective surgery, with a low to moderate risk of complications and overall good results [21]. However, flexion deficits, strength deficiencies, a slower recovery process and a prolonged return-to-sport time after MPFLr with TTT compared with iMPFLr have been reported [12,22]. In addition, the additional TTT increases the operative time and the risk of tibial fracture and reoperation because of symptomatic hardware removal [23,24]. Overcorrection of the TT-TG distance can lead to medial cartilage wear and instability, thus promoting medial osteoarthritis [25]. Because the patients undergoing the MPFLr with TTT procedure are usually young and active, QoL is also an important prognostic factor for RPD. Assessing postoperative QoL, together with clinical and radiological outcomes, can further provide more useful and accurate evidence for the beneficial effects of this procedure. However, to our knowledge, no study has evaluated QoL following combined MPFLr and TTT for RPD. The five-level EuroQol five-dimensional questionnaire (EQ-5D-5L) is a generic and standardized instrument for describing and valuing health-related QoL on five dimensions of health: mobility, self-care, usual activities, pain/discomfort and anxiety/depression [26,27]. The visual analogue scale (EQ-VAS) was also used in this study [26]. The EQ-5D-5L has increased reliability and sensitivity and decreased ceiling effects compared with the previous EQ-5D-3L [28]. The main aim of the present study was to evaluate postoperative QoL and clinical and radiological outcomes following combined MPFLr and TTT, compared with iMPFLr, in patients with RPD in cases of a pathologically lateralized TT. It was hypothesized that combined MPFLr and TTT would significantly improve both QoL and other results for RPD.

Patient selection
This retrospective study was approved by the ethics committee of the Third Hospital of Hebei Medical University and informed consent was obtained from the patients. All patients with RPD who underwent MPFLr from January 2017 to April 2020 were reviewed. The inclusion criteria were: (1) two or more episodes of patellar dislocation; (2) a history of recurrent patellar instability with symptoms of patellar instability and a positive apprehension sign; (3) lateral subluxation or dislocation of the patella on computed tomography (CT) or magnetic resonance (MR) images; (4) skeletal maturity. The exclusion criteria were:
(7) patella alta with a Caton-Deschamps index greater than 1.2; (8) generalized or localized joint laxity; (9) rheumatoid arthritis or osteonecrosis; (10) incomplete medical records or imaging data and refusal to take part in this study. Patients with criteria (2), (5) and (7) had to receive both MPFLr and an additional bony procedure, including trochleoplasty, derotational distal femoral osteotomy and distal TTT, and thus were not included. Based on these criteria, 102 MPFLr procedures in 102 patients, with a mean follow-up of 25.8 ± 7.6 months (range 12.5-33.2 months), were included. All these patients had failed conservative treatment and were followed up for at least one year after surgery. Three patients were lost to follow-up. A simultaneous medial TTT was performed if the preoperative TT-TG distance was greater than 20 mm. Patients were divided into two groups according to whether they had undergone TTT or not. The MPFLr + TTT group consisted of 51 patients (51 knees) who underwent MPFLr combined with TTT. The iMPFLr group comprised 48 patients (48 knees) who underwent iMPFLr (Fig. 1). Preoperative radiographic examination consisted of anteroposterior, lateral and axial radiographs and axial CT scans in all patients. The Caton-Deschamps index, defined as the ratio of the shortest distance from the lowest point of the patellar articular surface to the anterior upper corner of the tibial plateau contour to the length of the patellar articular surface, was measured on the lateral radiograph [30]. A Caton-Deschamps index greater than 1.2 was considered patella alta. Trochlear dysplasia was evaluated on the lateral radiograph. According to Dejour's classification, types B, C and D were regarded as high-grade trochlear dysplasia [29]. The TT-TG distance was measured on two overlapped axial CT images, including the deepest point of the trochlear groove and approximately the proximal one-third of the TT. Two lines were drawn through the deepest point of the trochlear groove and the center of the TT, respectively, perpendicular to the posterior condylar line. The distance between these two lines was measured as the TT-TG distance [8].
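The TT-TG construction just described amounts to projecting two landmarks onto the posterior condylar line; the sketch below illustrates the arithmetic with hypothetical coordinates and is not a validated measurement tool.

```python
import math

# Geometry sketch of the TT-TG measurement described above: project the
# deepest trochlear-groove point and the tibial-tubercle center onto the
# posterior condylar line (PCL) and take the separation. All coordinates
# are hypothetical landmarks, in mm, on overlapped axial slices.
def tt_tg(pcl_a, pcl_b, groove, tubercle):
    ux, uy = pcl_b[0] - pcl_a[0], pcl_b[1] - pcl_a[1]
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm                   # unit vector along PCL
    proj = lambda p: (p[0] - pcl_a[0]) * ux + (p[1] - pcl_a[1]) * uy
    return abs(proj(tubercle) - proj(groove))

# Hypothetical landmarks: PCL endpoints, trochlear groove, tibial tubercle.
print(tt_tg((0, 0), (60, 0), (28, 40), (50, 38)))   # -> 22.0 mm
```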
The three ends of the graft were fixed with absorbable screws. At the end of the procedure, patellar tracking, range of motion (ROM) and lateral displacement of the patella were examined. If tightness of the lateral patellar retinaculum produced tension in the MPFLr or prevented the patella from returning to normal tracking, an additional release of the lateral retinaculum was performed.

Postoperative rehabilitation
All patients followed the same standard postoperative rehabilitation program. After the procedure, the knee was protected in a brace. In the first 2 weeks postoperatively, passive ROM up to 60° and partial weight-bearing with the knee held in extension were permitted. Over the next 4 weeks, ROM was gradually increased to 90°. After 6 weeks, ROM was increased without restriction and full weight-bearing exercise was allowed. Quadriceps femoris strength training and straight-leg-raising exercises were encouraged early after surgery to strengthen the muscle. Patients who achieved full ROM, normal muscle strength and stability were allowed to return to normal sports activities at 6 months; strenuous high-risk exercise required a longer rehabilitation period according to the individual situation.

Radiological evaluation
All radiological evaluation was performed before and after surgery and included the patellar tilt angle (PTA) and bisect offset (BO), measured on two superimposed axial CT images through the widest patellar axis and the deepest point of the trochlear groove, respectively. CT was performed in a standardized manner, with the patient supine and the knee in full extension. The PTA was defined as the angle between the widest patellar axis and the posterior condylar line (Fig. 2). The axial slice with the widest patella was identified, and the line connecting the medial and lateral edges of the patella on that slice served as the widest patellar axis. The posterior condylar line was defined as the line through the most posterior points of the medial and lateral femoral condyles on the axial slice showing the posterior condyles with the "Roman arch" appearance [31]. A positive value represented lateral patellar tilt. The BO indicates the lateral displacement of the patella relative to the trochlear groove. A line was drawn through the deepest point of the trochlear groove, perpendicular to the posterior condylar line, and the BO was measured as the portion of the patellar width lying lateral to this line (Fig. 3) [32]. A higher percentage represents a more lateral displacement of the patella (see the illustrative sketch below).

Clinical evaluation
All clinical evaluation was performed before and after surgery and included QoL, functional outcomes, physical examination and the redislocation rate. QoL was evaluated using the EQ-5D-5L, which is based on five dimensions [26]; each dimension has five response levels: no problems, slight problems, moderate problems, severe problems and extreme problems [27]. Patients also rated their overall health on the vertical EQ-VAS, ranging from 0 to 100 [26]. Functional outcomes included the Kujala, Lysholm and Tegner activity scores [33-35]. Kujala or Lysholm improvement was defined as the mean change, i.e., the difference between the postoperative and preoperative Kujala or Lysholm scores. Functional failure was defined by a positive apprehension sign, recurrent patellar subluxation or dislocation, subjective instability or complications.
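To make the geometry of the two radiological measurements concrete, the following minimal Python sketch computes the PTA and BO from landmark coordinates on a single axial slice. All coordinates, units and resulting values are illustrative assumptions, not data from this study.

```python
# Illustrative geometry for PTA and bisect offset (BO) on an axial CT slice.
# Landmark coordinates are hypothetical (mm, image plane); NOT study data.
import numpy as np

def angle_between(v1, v2):
    """Unsigned angle in degrees between two 2-D direction vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Posterior condylar line: most posterior points of medial/lateral condyles.
post_med, post_lat = np.array([10.0, 0.0]), np.array([60.0, 2.0])
# Widest patellar axis: medial and lateral patellar edges.
pat_med, pat_lat = np.array([18.0, 45.0]), np.array([58.0, 55.0])
# Deepest point of the trochlear groove.
groove = np.array([34.0, 30.0])

pcl_dir = post_lat - post_med                  # posterior condylar line direction
pta = angle_between(pat_lat - pat_med, pcl_dir)

# BO: project the patellar edges and the groove point onto the posterior
# condylar line; the groove's perpendicular defines the dividing position.
u = pcl_dir / np.linalg.norm(pcl_dir)
t_med, t_lat, t_gr = (np.dot(p - post_med, u) for p in (pat_med, pat_lat, groove))
bo = (t_lat - t_gr) / (t_lat - t_med) * 100    # % of patellar width lateral to groove

print(f"PTA = {pta:.1f} deg, BO = {bo:.1f} %")   # ~11.7 deg and ~62 % here
```

With these made-up landmarks the sketch returns a PTA of about 12° and a BO of about 62%, both within the range of values ordinarily reported for this anatomy.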
The physical examination consisted of the patellar apprehension test and the ROM. The patellar apprehension test was performed with the patient supine and the quadriceps femoris relaxed at 30° of knee flexion; the experienced examiner placed a thumb on the medial edge of the patella and pushed the patella laterally. Apprehension during this passive patellar glide was considered a positive apprehension sign. The ROM was assessed with the patient supine and without weight bearing; an experienced examiner used a standard handheld goniometer to measure the maximum active flexion and extension angles. Redislocation rates were also recorded.

Statistical analysis
Descriptive statistics are reported as mean ± standard deviation (SD) for continuous variables. Statistical comparisons were performed using SPSS Statistics software (version 21, SPSS Inc., Chicago, IL, USA), and p < 0.05 was considered significant. The Shapiro-Wilk test was used to examine the normality of the data. Paired and independent t tests were used for normally distributed data; otherwise, the Wilcoxon and Mann-Whitney U tests were used. Categorical variables were compared by the chi-square or Fisher's exact test (a sketch of this normality-gated testing pipeline is given below). The sample size was estimated using G*Power 3 (Heinrich Heine Universität Düsseldorf, Germany), based on the EQ-5D data collected from the included patients; an a priori analysis indicated a minimum required sample size of 90 for an effect size of 0.6, α of 0.05 and a power of 0.80. To evaluate intra-observer and inter-observer reliability, intra-class correlation coefficients (ICCs) were calculated; measurements were either repeated by one researcher at a two-week interval or performed independently by two different researchers.

Demographic data
The two groups were comparable for gender, age and BMI (p = 0.640). Demographic data and knee characteristics are shown in Table 1.

QoL
The EQ-5D index and EQ-5D VAS improved significantly between baseline and the final follow-up in both groups (p < 0.001 and p < 0.001, respectively). There was no significant difference between the two groups in the EQ-5D index at baseline or at the final follow-up (p = 0.474 and p = 0.502, respectively), nor in the EQ-5D VAS at baseline or at the final follow-up (p = 0.301 and p = 0.142, respectively) (Table 2). The five dimensions of the EQ-5D at the final follow-up were also compared between the groups; no significant difference was found, although the percentages of patients reporting problems with mobility and pain/discomfort were higher in the MPFLr + TTT group (Fig. 4). Female patients had a lower EQ-5D index and EQ-5D VAS than male patients in both groups at the final follow-up, but the difference was significant only for the EQ-5D VAS (Table 3).

Functional outcomes and clinical results
Between baseline and the final follow-up, significant improvements in the Kujala, Lysholm and Tegner activity scores were observed in both groups (p < 0.001), with no statistically significant difference between the two groups (Table 4). Although the Kujala improvement was higher in the MPFLr + TTT group than in the iMPFLr group, the difference was not significant (p = 0.534); nor were there significant differences in the Lysholm and Tegner activity improvements between the two groups (p = 0.813 and p = 0.723, respectively) (Table 4). The apprehension sign was negative in all patients.
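As a concrete rendering of the statistical plan described above, the sketch below reproduces the normality-gated choice between the independent t test and the Mann-Whitney U test, together with the a priori sample-size calculation, in Python (scipy/statsmodels standing in for SPSS and G*Power). The example arrays are placeholders, not study data.

```python
# Minimal sketch of the analysis plan: Shapiro-Wilk normality gate, then an
# independent t test or Mann-Whitney U test; plus the a priori power analysis.
# scipy/statsmodels stand in for SPSS and G*Power; data below are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
mpflr_ttt = rng.normal(0.85, 0.10, 51)   # hypothetical EQ-5D indices, group 1
impflr = rng.normal(0.86, 0.10, 48)      # hypothetical EQ-5D indices, group 2

def compare_groups(a, b, alpha=0.05):
    """Use an independent t test if both samples pass Shapiro-Wilk,
    otherwise fall back to the Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_groups(mpflr_ttt, impflr)
print(f"{test}: p = {p:.3f}")

# A priori sample size for effect size d = 0.6, alpha = 0.05, power = 0.80:
n_per_group = TTestIndPower().solve_power(effect_size=0.6, alpha=0.05, power=0.80)
print(f"minimum total n = {2 * int(np.ceil(n_per_group))}")  # 90, as reported
```

The power calculation returns approximately 45 patients per group, i.e., a minimum total sample of 90, which matches the figure stated above.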
There was no significant difference in postoperative ROM between the two groups (extension: p = 0.160; flexion: p = 0.098). The average extension angle was 1.8° ± 1.2° and the average flexion 130.6° ± 4.9° in the MPFLr + TTT group, versus 1.3° ± 0.9° and 133.4° ± 5.2°, respectively, in the iMPFLr group (Table 4). No extension or flexion deficit of 5° or more was recorded in either group at the final follow-up. Two patients (3.9%) in the MPFLr + TTT group and three (6.3%) in the iMPFLr group reported redislocation without apprehension, but only two patients sought medical treatment; they were managed conservatively because there was no surgical indication. Forty-four patients (86.3%) in the MPFLr + TTT group and 42 (87.5%) in the iMPFLr group returned to their pre-injury activity level, at a mean of 4.6 ± 2.2 months and 4.4 ± 1.9 months, respectively (p = 0.341). A total of 90.2% (n = 46) of patients in the MPFLr + TTT group and 93.8% (n = 45) in the iMPFLr group were satisfied with the clinical outcome and would recommend their surgery to others.

Radiological results
The PTA and BO improved significantly between baseline and the final follow-up in both groups (p < 0.001 and p < 0.001, respectively). There was no significant difference in the PTA or BO between the two groups at baseline or at the final follow-up, although the PTA in the iMPFLr group was lower than in the MPFLr + TTT group (p = 0.051 and p = 0.381, respectively) (Table 5). Five patients in the MPFLr + TTT group and seven in the iMPFLr group still had pathological patellar tilt postoperatively; all of them had shown a preoperative pathological patellar tilt greater than 30°. The intra-observer and inter-observer reliability of the radiological measurements was good to excellent, with all ICCs greater than 0.8.

Discussion
In this retrospective comparative study, the clinical, functional and radiological results of 51 patients (51 knees) who underwent MPFLr + TTT and 48 patients (48 knees) who underwent iMPFLr for RPD were compared. The most important findings were the significant improvements in QoL after both MPFLr + TTT and iMPFLr, with no significant difference in the EQ-5D index or EQ-5D VAS at baseline or at the final follow-up between the two groups. Female patients had a lower EQ-5D index and EQ-5D VAS than male patients in both groups (Table 3). In addition, no significant differences were found in functional outcomes, physical examinations, redislocation rates or radiological results between the two procedures.

Decision making is complex, and clear treatment guidelines for patients with RPD are still lacking. MPFLr, the standard treatment for RPD based on the consensus of the International Patellofemoral Study Group, has been widely performed to restore the length and stiffness of the medial soft tissue [36]. However, patellar dislocation is a multifactorial condition that depends on bony variables besides ligament laxity, including lateralization of the TT [5,6]. In patients with an abnormally lateralized TT, iMPFLr is not sufficient to compensate for the lack of bony restraint and to restore patellofemoral pressure and dynamics [37]. Wagner et al.
reported that patients with an increased TT-TG distance who underwent iMPFLr had lower Kujala scores, and recommended medializing the TT when the TT-TG distance was greater than 20 mm [25]. Therefore, TTT may be necessary in these patients to decrease the pressure on the lateral patella and trochlea. The purpose of TTT is to restore the relationship between the femoral trochlea and the TT, which realigns the extensor mechanism and increases patellofemoral stability [11,37]. A systematic review demonstrated significant improvements in overall clinical results, with low functional failure and reoperation rates, following MPFLr + TTT [38]. However, no consensus has been reached regarding the TT-TG distance threshold that should indicate TTT. In this study we took a TT-TG distance greater than 20 mm as the cutoff value for TTT, which Dejour et al. considered abnormal [29].

Numerous studies have reported significantly improved QoL after iMPFLr using different instruments, including the EQ-5D-3L, the Knee Injury and Osteoarthritis Outcome Score (KOOS), the Pediatric International Knee Documentation Committee Form (Pedi-IKDC), the Banff Patella Instability Instrument and the Short Form 12 (SF-12), but none of them addressed psychometric effects [14,36,39,40]. Bouras et al. used the EQ-5D-3L questionnaire to evaluate QoL after iMPFLr and noted that the EQ-5D index and EQ-5D VAS increased significantly at the last follow-up [14]. Biesert et al. also reported overall good health with the EQ-5D-3L [41]. Erickson et al. found a statistically significant improvement in mean KOOS-QoL [39]. However, to our knowledge no study has specifically investigated QoL and health-state preference values following MPFLr + TTT for RPD with the EQ-5D-5L. In this study, the EQ-5D index and EQ-5D VAS improved significantly in both groups, with no significant difference between the groups at baseline or at the final follow-up. The five dimensions of the EQ-5D at the final follow-up were also compared between the groups, and no significant difference was found, although the percentages of patients with problems of mobility and pain/discomfort were higher in the MPFLr + TTT group. The smaller incision and the reduced hardware-related pain, which benefit postoperative rehabilitation and activity, could account for the slightly higher QoL in the iMPFLr group, but this does not detract from the significant improvement in QoL following MPFLr + TTT. Our scores were slightly lower than previously reported EQ-5D-3L values, which could reflect the reduced ceiling effect of the EQ-5D-5L and the small sample size of this study [14,41].

The reasons for the improved QoL are multifactorial. One possible reason is the standard postoperative rehabilitation protocol adopted in this study, including early postoperative activity and quadriceps femoris strength training, which prevented the muscle atrophy caused by immobilization. Another is that MPFLr and TTT re-establish the normal anatomy of the knee, promote normal joint reaction forces and restore patellofemoral stability, thereby relieving the pain and related symptoms that affect general health, including mental health [40]. Postoperative improvements in the clinical evaluation, including functional outcomes, and in the radiological evaluation therefore led to the improvement in QoL. This could also explain the high postoperative satisfaction of patients.
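For readers unfamiliar with how an EQ-5D-5L profile becomes the single "EQ-5D index" compared above: each patient's five dimension levels are mapped to an index through a country-specific value set. The sketch below illustrates only the mechanics, using invented decrement coefficients; a real analysis must use a published national value set, and none of the numbers here are from this study.

```python
# Illustrative EQ-5D-5L index calculation. The decrements below are invented
# placeholders; a real analysis uses a published, country-specific value set.
DIMENSIONS = ["mobility", "self-care", "usual activities",
              "pain/discomfort", "anxiety/depression"]

# Hypothetical utility decrement per dimension for levels 1..5 (level 1 = 0).
DECREMENTS = {dim: {1: 0.000, 2: 0.040, 3: 0.075, 4: 0.170, 5: 0.270}
              for dim in DIMENSIONS}

def eq5d_index(profile):
    """profile: dict mapping dimension -> level (1..5); full health = 1.0."""
    return round(1.0 - sum(DECREMENTS[d][lvl] for d, lvl in profile.items()), 3)

# Example: slight pain/discomfort, otherwise no problems (state "11121").
patient = {"mobility": 1, "self-care": 1, "usual activities": 1,
           "pain/discomfort": 2, "anxiety/depression": 1}
print(eq5d_index(patient))  # 0.96 with these placeholder coefficients
```

In published value sets the decrements differ across dimensions and countries (and some include interaction terms), which is why EQ-5D index values are not directly comparable between studies that used different value sets.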
The TT-TG distance is used as a decision-making parameter for TTT surgery. However, the validity of the TT-TG distance has been questioned. The TT-TG distance is significantly associated with knee rotation, but it does not capture patellofemoral alignment or provide accurate information about the congruence between the patella and the trochlea [42,43]. Moreover, the TT-TG distance does not localize the patellofemoral malformation [44]. Chen et al. reported that the TT-TG distance was influenced more by knee rotation and trochlear groove medialization than by TT lateralization, so a high TT-TG distance might not be an appropriate indication for surgical planning [45]. This could also explain why our results showed no significant difference in QoL between the two groups.

We found worse QoL, as assessed by the EQ-5D index and EQ-5D VAS, in females than in males in both groups, although the difference was significant only for the EQ-5D VAS. Bouras et al. also found that the EQ-5D index and EQ-5D VAS of female patients at baseline and final follow-up were lower than those of males following iMPFLr, but they used the EQ-5D-3L and did not include an MPFLr + TTT group [14]. Other studies have demonstrated poorer functional outcomes for females after MPFLr + TTT: Allen et al. reported that female gender was a risk factor for lower International Knee Documentation Committee (IKDC) and Kujala scores following MPFLr + TTT [46], and the reoperation rate of females was higher than that of males following MPFLr + TTT [20]. However, Watanabe et al. studied the efficacy of MPFLr with or without TTT and found no effect of gender on the results [12], and Migliorini et al. also reported that gender had no influence on surgical outcomes following iMPFLr [47]. One possible reason for the lower QoL in females is their higher probability of dysplasia and greater joint laxity compared with males [12,48]. In addition, pain levels following cartilage injury are usually more severe in females, and the incidence of patellofemoral pain syndrome is also relatively high in females [46,48]. This study supports the view that gender is an important risk factor in patellar instability.

Functional outcomes in this study, including the Kujala, Lysholm and Tegner activity scores, improved significantly in both the MPFLr + TTT and iMPFLr groups, with no significant difference between them. These findings are consistent with previously reported results. Schottle et al. showed no significant difference in Kujala score or subjective assessment between patients who underwent MPFLr with or without TTT [49]. Watanabe et al. reported no significant difference in the Lysholm score, which improved from 70 to 92 in the iMPFLr group and from 72 to 90 in the MPFLr + TTT group [12]. Neri et al. showed no difference in Kujala score [13], and Kim et al. also found no significant difference in Kujala and Tegner activity scores between the two groups [36]. In contrast, Franciozi et al. reported Kujala and Lysholm improvements from baseline favoring MPFLr + TTT over iMPFLr, but their TTT included anteriorization, which provides some biomechanical advantage for improving patellar tracking [23].

This study found a low redislocation rate following MPFLr + TTT: two patients in the MPFLr + TTT group and three in the iMPFLr group reported redislocation without apprehension. This corresponds with the findings of other studies. Kim et al.
reported no difference in functional failure or complications, with two cases of subjective instability and one repeat dislocation in the MPFLr + TTT group and two cases of subjective instability in the iMPFLr group [36]. Allen et al. reported one dislocation and one subluxation after MPFLr + TTT [46]. Cossey et al. showed no recurrence of subluxation or dislocation and excellent objective stability after MPFLr with TTT [50]. Although iMPFLr provides a restraint against redislocation, an uncorrected TT-TG distance can still cause episodes of instability or dislocation during activity.

CT scans demonstrated an improvement in the PTA and BO in both the MPFLr + TTT and iMPFLr groups, with no significant difference between the two groups. Similar findings have been reported by others: Kim et al. found significant improvement in the PTA with no significant difference between the two groups [36], and Neri et al. reported similar results for the PTA [13]. A biomechanical study showed that MPFLr significantly reduced the BO, especially at low flexion angles [51]. The PTA and BO improved because of the dorsal tension exerted on the medial edge of the patella by the graft and the medial tension on the patella produced by medialization of the TT [48]. However, overcorrection of patellar tilt due to overtension of the graft could increase the medial patellofemoral contact pressure and damage the medial patellofemoral cartilage, leading to osteoarthritis [13]. Whether these radiological improvements translate into better patellar tracking and patellofemoral pressure, and thereby less long-term cartilage wear and osteoarthritis, remains to be studied.

This study has some limitations. First, the sample size was relatively small, and the follow-up was not long enough to evaluate longer-term clinical and radiological results. Second, the TT-TG distance was measured on CT images with the knee extended, so the results could differ from those obtained from CT or MR images at other knee angles. Third, because follow-up intervals varied among patients, outcomes were assessed at heterogeneous time points. Fourth, the additional incision of the TTT procedure and the presence of hardware made blinding difficult during clinical and radiological evaluation, which could introduce measurement bias. A prospective controlled study with a large sample and long follow-up is required to confirm our findings.

Conclusion
In the current study, both the MPFLr + TTT and iMPFLr groups obtained similar and satisfactory improvements in QoL, clinical results and radiological outcomes. There was no significant difference in the EQ-5D index or EQ-5D VAS at baseline or at the final follow-up between the two groups, indicating that MPFLr combined with TTT is a safe and effective procedure that can significantly improve QoL for patients with RPD and a pathologically lateralized TT. However, female patients obtained lower QoL than males.
Efficient Delivery of Hydrophilic Small Molecules to Retinal Cell Lines Using Gel Core-Containing Solid Lipid Nanoparticles

In this study, we developed a novel solid lipid nanoparticle (SLN) formulation for the delivery of small hydrophilic cargos to the retina. The new formulation, based on a gel core and a composite shell, allowed an up to two-fold increase in encapsulation efficiency. The type of hydrophobic polyester used in the composite shell mixture affected the particle surface charge, colloidal stability and cell internalization profile. We validated the SLNs as a drug delivery system by encapsulating a hydrophilic neuroprotective cyclic guanosine monophosphate analog previously demonstrated to hold retinoprotective properties; the best formulation yielded particles with a size of ±250 nm, a zeta potential more negative than −20 mV, and an encapsulation efficiency of ±60%, criteria suitable for retinal delivery. In vitro studies using the ARPE-19 and 661W retinal cell lines revealed the relatively low toxicity of the SLNs, even at high particle concentrations. More importantly, the SLNs could be taken up by the cells, and the release of the hydrophilic cargo in the cytoplasm was visually demonstrated. These findings suggest that the newly developed SLN, with a gel core and a composite polymer/lipid shell, holds all the characteristics suitable for the delivery of small hydrophilic active molecules into retinal cells.

Nanoparticle Synthesis
The SLN formulation is illustrated in Figure 1A. A stock solution for the W1 phase without a gel core was prepared by dissolving RhoB in deionized water to a concentration of 10 mg/mL. For the W1-phase stock with a gel core, poloxamer 407 was added to the RhoB solution to reach 40% w/v; to ensure complete dissolution, the poloxamer 407 solution was kept at 4 °C for 48 h. The W2-phase stock solution was prepared by dissolving poloxamer 188 in deionized water to a concentration of 2% w/v. The O-phase solution was prepared by dissolving GTP, LCT, SA, and PCL or PLGA in 1 mL of dichloromethane according to the formulation codes listed in Table S1. For the preparation of blank and drug-loaded particles, deionized water or CN03, respectively, was added to the W1 phase instead of RhoB. Similarly, for assays of compound release from the SLN inside the cell, ANTS (25 µM) and DPX (90 µM) were co-encapsulated in the W1 phase instead of RhoB; ANTS and DPX, which form a fluorescence tracer-quencher pair, were chosen based on previously published studies [22].

The synthesis was performed by adding 200 µL of W1-phase stock solution, kept at 4-7 °C in an ice bath, to 1 mL of O-phase solution, followed by sonication with a Q55 ultrasound probe (Qsonica, Newton, CT, USA) at 30% amplitude for 60 s without a pulse to form the primary W1/O emulsion. Then, 4.8 mL of W2-phase solution was added to the primary emulsion and sonicated to form the W1/O/W2 emulsion (amplitude: 40% for 10 s, followed by 20 s at 20% amplitude, without a pulse). The solution was further diluted with 10 mL of W2-phase solution and sonicated again (amplitude: 20% for 30 s without a pulse). The organic solvent was then removed by vacuum evaporation (P: 650 mmHg) at room temperature for 20 min to form the nanoparticles. The resulting colloidal solution was stirred for 4 h to ensure complete removal of dichloromethane.
Dynamic Laser Scattering
Particle size distribution and hydrodynamic diameter were measured using NanoPhox DLS equipment (Sympatec GmbH, Clausthal-Zellerfeld, Germany). Before measurement, the concentrated particle solutions were diluted 5-fold in 2% w/v poloxamer 188 solution. The analyses were performed using the non-negative least squares (NNLS) algorithm integrated in the Windox5 software (Sympatec GmbH, Clausthal-Zellerfeld, Germany). The viscosity settings were calibrated and validated using polystyrene bead standards. For each sample, the measurement was repeated 3 times, each lasting 200 s.

Zeta Potential
The zeta potential of the particles was measured with a Zetasizer (Malvern Panalytical Ltd., Malvern, UK). Samples were diluted 5-fold in 10 mM phosphate buffer, pH 7.4. For each sample, the measurement was repeated 3 times.

Morphological Analysis
SLN solutions, as described in Table S1, were synthesized without hydrophilic cargo (i.e., blank particles) and analyzed using a transmission electron microscope (TEM). The colloidal nanoparticle solutions were stained with 7% (w/v) phosphotungstic acid for negative contrast. The morphological characterization of the particles was performed at a TEM acceleration voltage of 120 kV.

Encapsulation Efficiency
Encapsulation efficiency (EE) was measured by an indirect method, in which the amount of unencapsulated cargo outside the particles was quantified. Sample solutions were filtered using a 100 kDa microcentrifuge membrane filter (Sartorius, Brno, Czech Republic) (3 × 5 min, at 5000× g). The filtrate was collected, and the amount of cargo was quantified using Equation (1). As some W1 phase may be lost in the pipette tips during synthesis, this cargo loss was also quantified to avoid overestimating the encapsulation efficiency. RhoB and CN03 concentrations were quantified by measuring absorbance at 550 nm with a UV spectrophotometer (Biomolecular device) and at 254 nm with a high-pressure liquid chromatography (HPLC) system (Dionex Ultimate 3000, ThermoFisher, Gothenburg, Sweden), respectively. For HPLC, mobile phases A and B were 5 mM ammonium acetate buffer and acetonitrile, respectively. Five microliters of sample was injected into the column (Waters® XBridge C18 XP column, 50 × 3 mm). The HPLC quantification was performed with the Chromeleon software (v7.2, ThermoFisher, Gothenburg, Sweden) by calculating the peak area at a retention time of around 3.1 min. The experiment was replicated 3 times for both UV spectroscopy and HPLC.

Stability Study
Colloidal stability was assessed by monitoring the change in particle size over time. Concentrated SLN solution was diluted in 10 mM PBS to a concentration of 200 µg/mL and stored in glass vials at either 25 °C or 4 °C for a period of 4 weeks. At each time point, the vials were gently shaken by hand and brought to room temperature prior to dynamic laser scattering analysis.
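Equation (1), referenced in the Encapsulation Efficiency subsection above, did not survive the text extraction. Based on the description given there — an indirect method that measures unencapsulated cargo in the filtrate and corrects for cargo lost during synthesis — a standard form of the calculation would be the following. This is a reconstruction under stated assumptions, with our own notation, not the authors' verbatim equation:

```latex
% Reconstructed form of Equation (1); notation is ours, not the authors'.
\mathrm{EE}\,(\%) \;=\;
\frac{m_{\mathrm{total}} - m_{\mathrm{loss}} - m_{\mathrm{filtrate}}}
     {m_{\mathrm{total}} - m_{\mathrm{loss}}} \times 100
```

Here $m_{\mathrm{total}}$ would be the cargo initially added to the W1 phase, $m_{\mathrm{filtrate}}$ the unencapsulated cargo recovered in the filtrate, and $m_{\mathrm{loss}}$ the cargo retained in the pipette tips during synthesis.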
Cell Culture
The cells primarily affected in retinal degeneration are RPE cells and photoreceptors. For this study we chose two retinal cell types: (i) ARPE-19, a spontaneously arising human RPE cell line with a normal karyotype [23]; and (ii) 661W photoreceptor-like cells, derived from a mouse retinal tumor generated in a transgenic mouse expressing the SV40 large T-antigen under the control of the IRBP (interphotoreceptor retinoid-binding protein) promoter [24]. ARPE-19 cells were cultured in DMEM/F12 supplemented with 10% FBS and 1% penicillin-streptomycin in an incubator at 5% CO2 and 37 °C. The murine photoreceptor 661W cell line was cultured in low-glucose (1 mg/mL) DMEM supplemented with 10% FBS, 2 mM glutamine, and 1% penicillin-streptomycin in an incubator at 5% CO2 and 37 °C. Approximately every three days, cells reached 70-80% confluence and were sub-cultured.

Fluorescence Microscopic Analysis and Immunofluorescence
Cells were seeded on glass coverslips in a 24-well plate at a density of 4 × 10^4 cells/well. After 24 h, cells were exposed to RhoB-loaded SLN (RhoB-SLN). As a control, cells were treated with free RhoB solution at the same concentration (20 µM) as the RhoB present in the nanoparticle solution. A blank was prepared by incubating the cells with nanoparticles containing no fluorophore. For compound release assays, 24 h after seeding, ARPE-19 cells were treated with medium containing 200 µg/mL blank SLN (blank); freely dissolved tracer (ANTS); free tracer together with quencher (free ANTS/DPX); or 200 µg/mL SLN loaded with ANTS/DPX (ANTS/DPX-SLN). After 5 h, the medium was replaced with fresh medium without particles or fluorophores, and incubation was continued for 24 h, 48 h or 72 h. After incubation, cells were rinsed with phosphate-buffered saline (PBS), fixed with 2% PFA for 10 min, and nuclei were stained with 0.1 µg/mL DAPI. For immunofluorescence, cells were incubated with anti-ZO-1 (1:100) primary antibody overnight at 4 °C. After three washes with PBS, cells were incubated with Alexa Fluor® 488 goat anti-rabbit secondary antibody (1:1000) and 0.1 µg/mL DAPI for 40 min at room temperature. Slides were mounted with Mowiol 4-88, and cells were observed with a Zeiss Axio Imager A2 fluorescence microscope. The mean fluorescence intensity (MFI) of single cells was quantified with the ImageJ software (n cells ≥ 10).

Cell Viability Assay
Cell viability was assessed by the colorimetric methyl-thiazolyl diphenyl-tetrazolium bromide (MTT) assay previously published for 661W cells [25]. Cells were seeded on 96-well plates at a density of 6000 cells/well. After treatment with SLN for various times, the medium was aspirated and the cells were incubated with 50 µL of 1 mg/mL MTT solution for 90 min at 37 °C. The supernatant was removed, and the purple formazan crystals were dissolved in 100 µL isopropanol. The plate was shaken for 10 min and read at 570 nm using a microplate reader (Labsystems Multiskan MCC/340, Fisher Scientific, Rodano, Italy) (see the illustrative sketch at the end of this section).

Flow Cytometry Analysis
ARPE-19 and 661W cells were seeded on 12-well plates at a density of 1 × 10^5 cells/well. After treatment with control or SLN, cells were detached with 500 µL Accutase® and collected by centrifugation at 300× g for 5 min at room temperature. The cells were washed three times with 500 µL PBS and collected by centrifugation at 300× g for 5 min at room temperature. The cell pellet was resuspended in 500 µL of PBS, and RhoB fluorescence was immediately analyzed using an Attune® NxT Acoustic Focusing Cytometer (ThermoFisher, Rodano, Italy). The channel voltage and gain were kept constant throughout the analysis.

Statistical Analysis
Data are presented as means ± SEM (standard error of the mean). Student's t-test was applied to compare two groups; analysis of variance (ANOVA) was used for comparisons of more than two groups, with post hoc comparisons performed using the Bonferroni test. Significance was set at * p < 0.05, ** p < 0.01, and *** p < 0.001. All statistical analyses were performed using SPSS (Statistics 21; IBM Inc., Bentoville, AR, USA). Data for each statistical analysis were obtained from at least three independent experiments, or three biological replicates for studies on cells.
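As a concrete illustration of how the MTT readout described above is typically turned into the percent-viability values plotted in Figure 4, the following Python sketch normalizes background-corrected 570 nm absorbances to the untreated control. The plate readings are invented placeholders, and the exact normalization used by the authors is not specified in the text.

```python
# Illustrative MTT viability normalization (not the authors' exact pipeline).
# A570 readings are hypothetical; blank = medium + MTT without cells.
import numpy as np

a570_blank = np.array([0.06, 0.05, 0.06])            # no-cell background
a570_control = np.array([0.92, 0.88, 0.95])          # untreated cells
a570_treated = np.array([0.71, 0.66, 0.69])          # e.g., 200 ug/mL SLN, 24 h

def viability_percent(treated, control, blank):
    """Background-correct, then express the treated signal relative to control."""
    t = treated.mean() - blank.mean()
    c = control.mean() - blank.mean()
    return 100.0 * t / c

print(f"viability = {viability_percent(a570_treated, a570_control, a570_blank):.1f}%")
# -> ~73% with these placeholder readings
```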
Generation and Characterization of Solid Lipid Nanoparticles Containing a Gel Core
The aim of this study was to develop a drug delivery system (DDS) to facilitate the uptake of hydrophilic molecules by retinal cells. Among common thermoresponsive gels previously studied, such as poloxamer 407, chitosan and hydroxypropyl methylcellulose (HPMC) [19], we chose poloxamer 407 as the gel-core material because gelling can easily be induced by increasing the temperature. For the lipidic shell, we used a mixture of lecithin (LCT), tripalmitin (GTP) and stearic acid (SA), since this mixture has previously been reported to significantly enhance nanoparticle cellular uptake [26]. In addition, we added a hydrophobic polyester to the lipid mixture to create a composite shell, using biocompatible PCL or PLGA (Figure 1B). To evaluate encapsulation and cellular uptake, we chose rhodamine B (RhoB; 479.02 g/mol) as a small hydrophilic cargo that can easily be tracked during experiments. A summary of the generated SLN components is reported in Table S1.

The addition of hydrophobic polyesters, such as PCL and PLGA, may improve the particle polydispersity index (PDI) to less than 0.4 when used in combination with a gel core. Moreover, all of the produced nanoparticles were anionic, as characterized by their zeta potential. The addition of the gel core did not significantly affect the surface charge, e.g., SLN.03 (−27 ± 2.3 mV) versus SLN.06 (−24 ± 1.5 mV). On the other hand, the presence of the gel core improved the encapsulation efficiency, e.g., SLN.02 (24 ± 0.8%) versus SLN.05 (48 ± 0.44%). Both types of particles with a gel core encapsulated more than 40% of the RhoB, whereas particles with an aqueous core had a RhoB encapsulation efficiency of around 20%, regardless of shell type. The addition of hydrophobic polyester to the shell formulation had no detectable effect on the encapsulation efficiency (Figure 2A). The SLN morphology, observed by TEM, confirmed that all the produced particles were smaller than 500 nm, fulfilling the basic size requirement for mobility in the vitreous (Figure 2B). Based on these analyses, we chose to focus on SLN.05 and SLN.06 for further studies, since they showed improved PDI and encapsulation efficiency compared with a conventional SLN with a pure lipid shell and no gel core (SLN.01).

We first validated the encapsulation capability of SLN.05 and SLN.06 using a compound previously shown to have neuroprotective properties in the retina, CN03 [27]. Both SLNs were able to encapsulate CN03 and yielded negatively charged particles of 200-250 nm. We noticed a higher encapsulation efficiency (±15% increase) with CN03 than with RhoB (Figure 2A). A colloidal stability study was then performed with CN03-loaded SLN dispersed in PBS. The samples were stored at different temperatures for 4 weeks. An increase in size (±30 nm) was observed in the SLN.05 colloidal solution after one week of storage (data not shown). In comparison, the SLN.06 colloidal solution showed better size stability over the study period. More importantly, the particle size remained below 300 nm, regardless of storage temperature, over 1 month of storage (Figure 3).
Taken together, all the physicochemical characterization data show that SLN.05 and SLN.06 have suitable properties for further in vitro validation as a potential DDS for retinal cells.

Evaluation of SLN.05 and SLN.06 Toxicity to ARPE-19 and 661W Retinal Cell Lines
We used ARPE-19 cells (a human retinal pigment epithelium cell line) and 661W cells (a mouse photoreceptor-like cell line) to evaluate the toxicity of the SLNs to retinal cells. We exposed ARPE-19 and 661W cells to either SLN.05 or SLN.06 at different concentrations and evaluated toxicity by the MTT cell viability assay at different time points. Both SLN.05 and SLN.06 showed toxicity that increased in a dose- and time-dependent manner (Figure 4A,B). Overall, SLN.05 showed higher toxicity in both cell types. Toxicity of SLN.05 to ARPE-19 cells was first detected at 200 µg/mL after 5 h of exposure. Interestingly, 661W cells showed higher resistance to SLN.05: 200 µg/mL of SLN.05 did not significantly reduce 661W viability even after 24 h of exposure, and toxicity only appeared at 500 µg/mL. SLN.06 was less toxic to both cell lines, especially to 661W cells: ARPE-19 cells tolerated up to 500 µg/mL of SLN.06 within 5 h of exposure, while 661W cells tolerated up to 800 µg/mL of SLN.06 within 24 h of exposure. Both SLNs reduced the viability of ARPE-19 and 661W cells after 48 h of exposure.
To further confirm and quantify internalization efficiency of SLNs, we exposed 661W cells to 200 μg/mL of RhoB-SLN.05 and RhoB-SLN.06 and quantified the fluorescence signal at different time points by flow cytometry. Since free RhoB can also penetrate the cells, we used cells treated with free RhoB suspension for 5 h as the control. Only 0.08% of 661W cells were positive for RhoB after treatment with free RhoB for 5 h, indicating that RhoB diffusion inside the cells was very low. One hour exposure to RhoB-loaded SLN was sufficient to detect 3.28% of RhoB-positive 661W cells after incubation with RhoB-SLN.05 and 3.35% of positive 661W cells after incubation with RhoB-SLN.06. The percentage of RhoB-positive cells increased with longer incubation time ( Figure 5C). Similarly, we observed the same trend in SLN uptake by ARPE-19 cells ( Figure S1). Taken together, these data indicate that SLN.05 and SLN.06 can be internalized by both photoreceptor and RPE cell types. SLN.05 and SLN.06 Internalization by Retinal Cell Lines To deliver small hydrophilic molecules to retinal cells, SLNs need to be efficiently internalized by cells. To visualize SLN uptake, we used RhoB-loaded SLNs (RhoB-SLN). Both RhoB-SLN.05 and RhoB-SLN.06 could be efficiently internalized by 661W cells, where RhoB intensity in the cytosol increased in a concentration-dependent manner ( Figure 5A,B). To further confirm and quantify internalization efficiency of SLNs, we exposed 661W cells to 200 µg/mL of RhoB-SLN.05 and RhoB-SLN.06 and quantified the fluorescence signal at different time points by flow cytometry. Since free RhoB can also penetrate the cells, we used cells treated with free RhoB suspension for 5 h as the control. Only 0.08% of 661W cells were positive for RhoB after treatment with free RhoB for 5 h, indicating that RhoB diffusion inside the cells was very low. One hour exposure to RhoB-loaded SLN was sufficient to detect 3.28% of RhoB-positive 661W cells after incubation with RhoB-SLN.05 and 3.35% of positive 661W cells after incubation with RhoB-SLN.06. The percentage of RhoB-positive cells increased with longer incubation time ( Figure 5C). Similarly, we observed the same trend in SLN uptake by ARPE-19 cells ( Figure S1). Taken together, these data indicate that SLN.05 and SLN.06 can be internalized by both photoreceptor and RPE cell types. We visually confirmed the intracellular localization of SLNs after being internalized by the ARPE-19 cells by staining the membrane of the cells with an anti-ZO-1 antibody (specific antibody that recognize a peripheral membrane protein in epithelial cells), and we observed that RhoB-SLNs are localized inside the cytosol after the internalization process ( Figure 6A). To elucidate the mechanism of the internalization of SLN.05 and SLN.06 by the photoreceptor cells, we exposed 661W cells to 200 µg/mL of RhoB-SLN.05 and RhoB-SLN.06 for 1 h at either 37 • C or 4 • C. We observed that incubation at 4 • C highly limited the uptake of Rho-SLNs, indicating an energy-dependent process rather than passive membrane passage ( Figure 6B). Based on the knowledge that most of the nanoparticles are internalized by cells through endocytosis [28], these data confirmed that SLN.05 and SLN.06 were taken up via an endocytic process rather than membrane permeation. 
Encapsulated Cargo Release Inside the Cells
To evaluate whether SLN.05 and SLN.06 can successfully release their cargo after uptake by the cells, we performed a fluorescence leakage assay using ANTS/DPX, a pair widely used to study vesicle leakage [29]. We encapsulated the ANTS fluorescent dye together with its quencher DPX. Once the SLN shell breaks and releases the cargo inside the cells, DPX can no longer quench ANTS because of the increased molecular distance between ANTS and DPX, allowing the free ANTS inside the cells to emit green fluorescence (Figure 7A). In this experiment, cells exposed to free ANTS or DPX, which cannot penetrate the cells, served as controls. Only cells exposed to SLN.05 and SLN.06 loaded with ANTS/DPX showed fluorescence, demonstrating that the SLNs successfully delivered ANTS/DPX into the cells and released the cargo (Figure 7B,C). A faint signal could be detected at 24 h, and a full signal was easily detected after 48 h of exposure (Figure 7C). Taken together, these data demonstrate that the newly formulated SLNs are able to release a hydrophilic molecule inside a retinal cell and can serve as an efficient drug delivery system for the retina.

Figure 7. Micrographs of ARPE-19 cells exposed to ANTS/DPX-loaded SLN.05 and SLN.06. Released ANTS (green) was detectable only in cells exposed to ANTS/DPX loaded into SLNs, and not in cells exposed to free ANTS and/or DPX. Nuclei were stained with DAPI (blue). Scale bar: 10 µm.

Discussion
The delivery of a drug to the neural retina is challenging owing to the different barriers that must be crossed and to the physicochemical environment of the vitreous, which may affect the passage of the drug to the target cells. In this study we presented new nanoparticle formulations that can enter retinal cells while having features that may facilitate navigation across the vitreous (e.g., size < 500 nm and anionic charge). The key findings from the formulation development studies were: (i) the gel core improved the encapsulation efficiency by up to 2-fold; and (ii) the hydrophobic polymer added to the shell could be used to tailor the surface charge of the final DDS. Our encapsulation efficiency results suggested that the gel core improves the retention of small hydrophilic cargo during formulation, in agreement with previous reports that used large macromolecules as cargo [19]. Most likely, the improved retention came from solidification of the poloxamer 407 emulsion droplets, which formed a nanogel thanks to the local increase in temperature during sonication. Particle surface charge should also be considered in terms of cellular uptake.
The cellular membrane is generally negatively charged; thus, a strongly anionic particle has more difficulty entering cells than a cationic particle [30]. However, a cationic particle tends to aggregate in the vitreous [11]. There is therefore a need to tailor the particle surface charge during DDS development. The addition of polyester to the shell formulation reduced the strong negative charge of the pure lipid SLN shell (SLN.01; −39 mV). The extent of this reduction depended on which hydrophobic polymer was used as a filler in the composite SLN shell (PCL or PLGA in this study): shells containing PCL (SLN.02; −15 mV) showed a greater reduction in zeta potential than those containing PLGA (SLN.03; −27 mV). The charge reduction observed on adding PCL or PLGA to create a composite shell may indicate that the hydrophobic polymer chains are well distributed on the surface, and the magnitude of the surface charge is most likely related to the inherent surface charge of the polymer used. Based on this finding, the choice of hydrophobic polymeric component in the composite SLN shell may be used to tailor specific surface charges in later stages of DDS development.

The SLN formulation initially developed with RhoB as the hydrophilic cargo was validated with an actual drug candidate for retinal degeneration (CN03). The size of freshly synthesized CN03-loaded SLN particles remained in the range of 200-250 nm. There was a significant change in surface charge when CN03 salts were used instead of RhoB for SLN.05, but not for SLN.06. Without CN03 salts, SLN.05 (−13 mV) had a less negative charge than SLN.06 (−24 mV); thus, unencapsulated CN03 salts had a weaker influence on, or less surface absorption to, the more negatively charged SLN. We also observed an increase in polydispersity when CN03 salts were used as the cargo. A colloidal system is delicate and strongly influenced by the salts and pH of the dispersing medium, and the increase in polydispersity may stem from the effect of unencapsulated CN03 salts during synthesis. Finally, the encapsulation efficiency of CN03 was about 15% higher than that of RhoB. This may be because CN03, which is in a sodium salt form, has a much lower solubility in dichloromethane than RhoB; RhoB can therefore leak out of the W1 phase during DDS preparation more readily than CN03. Based on this finding, we surmise that this DDS may also work for other hydrophilic cargos with low solubility in the organic solvent (e.g., DNA) in different pharmaceutical applications.

The colloidal stability study, the last checkpoint in this work before the in vitro studies, showed a ±30 nm increase in particle size for SLN.05 after the first week of storage. This increase may be caused by the high salt concentration of PBS, which disrupts the stability of the colloidal solution. However, SLN.06, which had a more negatively charged surface than SLN.05, showed a statistically better stability profile throughout the study. This observation may reflect the fact that the magnitude of particle-to-particle repulsion, which helps prevent aggregation, is proportional to the intensity of the particle surface charge.
While SLN.06 may perform better than SLN.05 in terms of prolonged colloidal stability in salt solution, the size of both SLNs remained below 300 nm throughout the stability study regardless of storage temperature and duration. Based on these characteristics, both SLN.05 and SLN.06 were selected for the in vitro studies in 661W and ARPE-19 cells.

While both SLN.05 and SLN.06 caused time- and dose-dependent toxicity, the in vitro cytotoxicity studies showed that the two retinal cell lines had different sensitivities to these SLNs. This agrees with previous studies reporting that distinctive cell physiology, proliferation rate, metabolic activity, membrane and phagocytosis characteristics are responsible for different sensitivities to external factors [30,31]. The physicochemical properties of nanoparticles can also affect cytotoxicity [30]; specifically, the distinct shell compositions of SLN.05 and SLN.06 may affect the viability of retinal cells differently. With regard to cytotoxicity, SLN.06 seemed to perform better than SLN.05 as a DDS.

The internalization studies in the two retinal cell types demonstrated that: (i) the SLN formulation aided the internalization of small hydrophilic compounds; and (ii) the SLN shell composition might be used to tailor the uptake rate in different cell types. We observed that ARPE-19 cells showed better uptake of the SLN containing PCL in the shell (SLN.05). This might be because SLN.05 is less negatively charged than SLN.06 (Figure 2A), and the uptake level is directly affected by the physicochemical properties of the SLN, such as shape, size and surface charge [32]. In 661W cells, the uptake profile of SLN.06 was similar to that measured in ARPE-19 cells and was limited to a low percentage of cells internalizing the nanoparticles. For SLN.05, lower uptake was observed in 661W cells than in ARPE-19 cells. This difference may be attributed to the fact that uptake rates are also specific to each cell type [33]. It is not surprising that photoreceptor cells have a lower uptake rate than ARPE-19 cells, because RPE cells are characterized by a high rate of phagocytosis, one of their daily functions being removal of the apical part of the photoreceptor outer segments [34]. The reduced uptake at 4 °C suggested that the SLNs mainly enter cells via endocytosis, since energy-dependent endocytosis is largely inhibited at this temperature [35-37]. We also demonstrated that the SLNs could release their cargo after internalization by the cells. This result highlights that the newly developed DDS is appropriate for the encapsulation of small hydrophilic drugs and for their release into target cells.

Overall, both SLN.05 and SLN.06 successfully improved the uptake of small hydrophilic cargos into retinal cell lines in vitro, with SLN.06 performing slightly better as a DDS given its advantages in stability and cytotoxicity. Finally, while the relatively simple cell culture environment yielded interesting data, full drug/DDS efficacy testing will likely require more complex test systems. More advanced testing using in vivo injections or organotypic retinal explant cultures, in which the normal histotypic context of the retina is preserved [38], will further characterize the suitability of the new SLN for delivery to the retina.
The fate of the SLN materials after breakdown inside the cells, and the specific mechanisms by which they are metabolized, will be the focus of further research. Nevertheless, based on these development and initial validation studies, our work may open new perspectives for developing treatments for retinal diseases based on SLNs carrying small hydrophilic cargos.

Conclusions
This study presents an SLN formulation capable of encapsulating a small hydrophilic cargo and delivering it to retinal cells in vitro. The study highlighted that a gel core can significantly increase the encapsulation efficiency of a small hydrophilic cargo inside the SLN (up to ±60% with a gel core, compared with the initial ±20% with an aqueous core only). We also observed that the type of hydrophobic polymer used in the composite shell may affect the particle surface charge, a key factor for intravitreal drug delivery systems. The physicochemical properties of the DDS developed with RhoB were retained when the neuroprotective cGMP analog CN03 was used as the cargo, and the SLN maintained a particle size below 300 nm after 1 month of storage in PBS. The in vitro studies demonstrated that the DDS can be taken up by model retinal cell lines (ARPE-19 and 661W), with uptake rates depending on the particle shell composition and cell type, and, equally importantly, that the DDS releases its cargo inside the cells. While the current results are promising for an early-stage formulation development study, more complex in vivo studies are needed to demonstrate the clinical relevance of the newly developed DDS.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/pharmaceutics14010074/s1, Table S1: Formulation code and component mass dissolved in the O-phase for each formulation; Figure S1.

Data Availability Statement: The raw data supporting the conclusions of this article will be made available upon request.
Tomato DCL2b is required for the biosynthesis of 22-nt small RNAs, the resulting secondary siRNAs, and the host defense against ToMV

The tomato encodes four functional DCL families, of which DCL2 is poorly studied. Here, we generated loss-of-function mutants for a tomato DCL2 gene, dcl2b, and identified its major role in defending against tomato mosaic virus under both natural and manual infection. Genome-wide small RNA expression profiling revealed that DCL2b is required for the processing of 22-nt small RNAs, including a few species of miRNAs. Interestingly, these DCL2b-dependent 22-nt miRNAs function similarly to the DCL1-produced 22-nt miRNAs in Arabidopsis and can serve as triggers for a class of secondary siRNAs. In particular, the majority of secondary siRNAs were derived from plant defense genes when the plants were challenged with viruses. We also examined differentially expressed genes in dcl2b through RNA-seq and observed that numerous genes were associated with mitochondrial metabolism and hormone signaling under virus-free conditions. Notably, when the loss-of-function dcl2b mutant was challenged with tomato mosaic virus, a group of defense response genes was activated, whereas genes related to lipid metabolism were suppressed. Together, our findings provide new insights into the roles of tomato DCL2b in small RNA biogenesis and antiviral defense.

Introduction
RNA silencing plays key roles in regulating endogenous gene expression, suppressing transposon activity, silencing transgenes, responding to environmental stimuli and combatting viral infection [1,2]. Small RNAs (sRNAs), including miRNAs and siRNAs, are loaded into an Argonaute (AGO) effector protein to form RNA-induced silencing complexes that repress complementary target RNAs at the post-transcriptional gene silencing (PTGS) level through cleavage and/or inhibition of translation. RNA-induced silencing complexes can also repress gene expression at the transcriptional gene silencing (TGS) level [3,4].

sRNAs are generated by DCL proteins. In Arabidopsis, four DCL family proteins cleave stem-loop or double-stranded (ds) RNA precursors into miRNAs or siRNAs of specific sizes [5]. DCL1 primarily processes hairpin RNA into the 21-nt miRNAs involved in PTGS [6,7]; however, DCL1 can also produce 22-nt miRNAs from bulged precursors, which in turn lead to the production of secondary siRNAs. DCL2 is required for the biogenesis of 22-nt siRNAs from endogenous inverted-repeat loci [8,9]. DCL3 produces TGS-engaged 24-nt siRNAs from RNA-dependent RNA polymerase 2 (RDR2)-dependent dsRNAs [4,10]. DCL4 produces 21-nt siRNAs from RDR6-dependent dsRNA in the PTGS pathway [11,12]. In addition, DCL3 and DCL2 can also generate secondary siRNAs (sec-siRNAs) from RDR6-dependent dsRNA in a dcl4 mutant [13]. Sec-siRNAs typically exhibit a phased pattern; they have recently been shown to derive from a large set of loci, and they play important roles in plant development and disease resistance [14,15]. Normally, sec-siRNAs are processed from 3′ cleaved transcripts targeted by DCL1-generated 22-nt miRNAs and trans-acting siRNAs (tasiRNAs) [16,17], with the exception of the 21-nt miR390 [18]. The 3′ cleaved transcripts are amplified into dsRNAs by RDR6 and then cleaved by DCL4 into 21-nt, 'head-to-tail'-phased sec-siRNAs [14,18], or by DCL3 into 24-nt sec-siRNAs [12].
These sec-siRNAs are called phased siRNAs (phasiRNAs) if they are generated from coding transcripts, but they are known as tasiRNAs if they come from noncoding transcripts. Many miRNAs can trigger the production of sec-siRNAs from various RNA transcripts, such as the pairs miR173 and TAS1/2, miR390 and TAS3, miR393 and TIR/AFB, miR7122 and PPR, and miR482 and NBS-LRR 14,15. Over the past few decades, the biological roles of DCL1, DCL3, and DCL4 have been well studied, and DCL2 was considered a substitute for DCL4 in defense against viruses 19,20. Importantly, DCL2 plays a primary role in transgene silencing, especially in sense transgene-induced silencing and the transitivity of hairpin-induced transgene silencing 21,22. DCL2 is also engaged in plant development and systemic RNA silencing, and DCL4 attenuates systemic PTGS 8,23,24. When the DCL4 function is impaired, endogenous genes (SMXL4 and SMXL5) are excessively silenced by 22-nt siRNAs that are produced from the DCL2- and RDR6-dependent transitive PTGS pathway. In wild type (WT) plants, DCL4 outcompetes DCL2 for the same dsRNA templates, which prevents or limits the deleterious effects of this endogenous silencing by DCL2 8. DCL2 can enhance the recruitment of RDR6 to target transcripts and the production of sec-siRNAs. The DCL2-produced 22-nt siRNAs can also trigger the production of 21-nt sec-siRNAs from target mRNAs 9,23,24. This type of siRNA amplification is essential for enhancing the PTGS efficiency. It is now proposed that there is a dual-defense strategy for plant viruses to break: DCL4 is the primary defender that attacks viruses in initially infected cells through cell-autonomous virus-induced gene silencing. Once the DCL4 activity is inhibited by viral suppressors of RNA silencing, DCL2 and DCL2-processed/dependent siRNAs are required to trigger non-cell-autonomous virus-induced gene silencing and protect the recipient cells from further invasion by plant viruses 23,24. The tomato (Solanum lycopersicum) is the seventh-most important crop species and the second-most consumed vegetable in the world 25. An analysis of tomato DCL1- and DCL3-silencing mutants indicates that DCL1 produces canonical miRNAs and a few 21-nt siRNAs 7, and DCL3 is involved in the biosynthesis of heterochromatic 24-nt siRNAs and long miRNAs 10. DCL4 is required for the production of 21-nt tasiRNAs that in turn target the ARFs to alter tomato leaf development 11. However, the functions of tomato DCL2 remain unknown. Previously, we cloned the full-length cDNA sequences of four tomato DCL2 subfamily members, DCL2a, DCL2b, DCL2c, and DCL2d, for expression pattern analysis, and we showed that the DCL2b expression is much higher than that of the other DCL2s, implying its predominant role in biology 26. Here, we generated loss-of-function dcl2b mutants using the CRISPR/Cas9 genome-editing system. The dcl2b mutants did not show developmental defects under normal conditions. When infected by tomato mosaic virus (ToMV), however, the dcl2b mutants displayed more severe developmental defects, consisting of strange narrow patterns on the leaves, flowers, and fruits, compared to the WT. This occurred even though DCL4 was still functional, indicating that DCL2b played a major role in the defense against ToMV. We performed genome-wide sRNA expression profiling and found that DCL2b was required for the biogenesis of 22-nt sRNAs, including some 22-nt miRNAs that would otherwise be produced by DCL1 in Arabidopsis, leading to sec-siRNA production.
The RNA-seq analysis showed that numerous genes associated with mitochondrial metabolism and hormone signaling changed significantly under virus-free conditions. When ToMV infection occurred, the tomato plants activated genes involved in the response-to-stimulus pathway and suppressed a lipid metabolism pathway. Collectively, our results demonstrated that tomato DCL2b played a critical role in antiviral defense by regulating the biogenesis of a group of 22-nt miRNAs and, subsequently, sec-siRNAs.

Results

Generation of dcl2b mutants using the CRISPR/Cas9 gene-editing system

To generate null, loss-of-function alleles, we used the CRISPR/Cas9 gene-editing system to knock out DCL2b in tomatoes. Four genomic sites were targeted for cleavage (Figure S1a), and transgenic plants were then genotyped through the direct sequencing of PCR products from genomic DNA flanking the target sites. The transgenic lines were grown in two greenhouses (I and II) under comparable cultivation environments. Two lines from each greenhouse carrying homozygous 5 bp deletions in Target1 were predicted to be null mutants and selected for further research (Figure S1b). The genome editing caused a premature stop codon in the first conserved domain of the DCL2b protein (Figure S1c). We next predicted four potential off-target genes for each edited target in the tomato genome using CRISPR-GE 27. No off-target events were detected at any of these sites (Figure S2).

dcl2b mutants displayed different phenotypes in two greenhouses

In greenhouse I, the tomato WT and dcl2b mutants grew normally without any visible differences (Fig. 1a). However, all the dcl2b plants in greenhouse II displayed a strange morphological phenotype compared with the WT: the adult leaves were abnormally long, narrow, and twisted, and the secondary leaflets disappeared. A scanning electron microscope (SEM) analysis of these long leaflets showed that the cells were distinct in size and shape; they became long and columnar rather than irregular polygons (Fig. 1a). In addition to the abnormal leaf development, we observed a similar phenotype in the flowers and fruits of dcl2b mutants from greenhouse II. The flowers had more spindly petals and sepals (Fig. 1b, c). We compared the fruits of the WT and dcl2b mutant at three stages: 25 days after pollination (DPA), 35 DPA, and 45 DPA. The mutant fruits exhibited an elongated shape, whereas the WT fruits were almost round (Fig. 1d). Notably, the fruits of dcl2b had few well-developed seeds (Fig. 1d). Furthermore, the fruit-setting ratio of the first three branches in the mutant was much lower (Fig. 1e), which might partially result from developmental defects in the stamen and pistil (Fig. 1b).

Strange narrow pattern phenotype in the dcl2b mutant resulting from ToMV infection

The strange phenotype was reminiscent of plant symptoms that occur during viral infection 28. We therefore wondered whether the morphological phenotype of the dcl2b mutants grown in greenhouse II was caused by plant virus(es). Since the deep sequencing of virus-derived small-interfering RNAs (vsiRNAs) has been shown to be an efficient approach for virus discovery in plants and animals 29, we conducted sRNA-seq with total RNA prepared from the following samples: normal adult leaves from the WT and dcl2b from greenhouse I, and normal adult leaves from the WT and "shoestring-like" adult leaves of dcl2b from greenhouse II. More than 14 million clean reads ranging from 18 to 26 nt were obtained.
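As a side note on how such sRNA-seq summaries can be produced, the sketch below tallies per-virus hit counts and the read-length distribution from a Bowtie (v1) default-format alignment of cleaned sRNA reads against viral genomes, mirroring the mapping ratios and vsiRNA size profiles reported next. This is a minimal illustration, not the authors' pipeline: the file name, the total-read figure, and the column positions are assumptions based on Bowtie's standard tabular output.

```python
# Minimal sketch (not the authors' pipeline): summarise a Bowtie (v1)
# default-format alignment of sRNA reads against viral genomes.
# Assumed columns: read name, strand, reference name, offset, sequence, ...
from collections import Counter

def summarise_vsirnas(bowtie_out="vsirna_hits.bwt", total_reads=14_000_000):
    per_virus = Counter()   # reads mapped per viral reference
    size_dist = Counter()   # length distribution of mapped reads (18-26 nt)
    mapped = 0
    with open(bowtie_out) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            ref, seq = fields[2], fields[4]
            per_virus[ref] += 1
            size_dist[len(seq)] += 1
            mapped += 1
    print(f"viral mapping ratio: {mapped / total_reads:.2%}")
    for length in sorted(size_dist):
        print(f"{length} nt: {size_dist[length] / mapped:.2%}")
    for virus, n in per_virus.most_common(3):
        print(f"{virus}: {n / mapped:.2%} of vsiRNAs")

summarise_vsirnas()
```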
Notably, the tomato genome mapping ratios of the WT and dcl2b from greenhouse II were obviously lower than those of greenhouse I (Figure S3), suggesting that part of the sequence reads might map to genomes other than that of the tomato. Next, we aligned all the cleaned sRNA-seq reads to 8605 viral genome sequences mined from the NCBI (https://www.ncbi.nlm.nih.gov/genome/viruses/). Less than 0.01% of the reads from the WT and dcl2b from greenhouse I could be mapped to virus genomes. However, 12.5-35.7% of the reads from greenhouse II matched viral genomes (Fig. 2a). We analyzed the size distribution of vsiRNAs in the WT and dcl2b mutant from greenhouse II and found that more than half of these reads were 21 nt long (Fig. 2b). This result was consistent with previous studies in which the majority of vsiRNAs are 21 nt long in virus-infected plants 20,30,31. Among the vsiRNAs, over 90% of the reads matched ToMV, whereas 3.8% and 1.5% belonged to tomato mottle mosaic virus (ToMMV) and tomato brown rugose fruit virus (ToBRFV), respectively, with residual reads aligning to the other ~8000 viruses (Fig. 2c). The three species are positive-sense single-stranded RNA (ssRNA) viruses that belong to the Tobamovirus genus 32. To examine whether the dcl2b mutant was infected with a single ToMV virus or multiple viruses, we checked the distribution of vsiRNAs against all 34 types of tobamovirus RNA genomes from the NCBI GenBank database. The protein sequences of the replicases from the 34 viruses were aligned, and an unrooted neighbor-joining tree was constructed (Fig. 2d). It was obvious that the three tomato viruses, ToMV, ToMMV, and ToBRFV, had closer evolutionary relationships. Furthermore, compared with ToMMV and ToBRFV, the vsiRNAs were distributed over the entire ToMV genome; the few vsiRNA-matched loci in ToMMV and ToBRFV most likely resulted from their high sequence similarity with ToMV. Collectively, these results suggested that the WT and dcl2b from greenhouse II were both infected naturally by ToMV. This conclusion was further validated by reverse transcription-PCR (RT-PCR), which showed that a ToMV-specific fragment was detected in the WT and dcl2b from greenhouse II, but not in plants from greenhouse I (Fig. 2e). Taken together, these data indicated that the loss of tomato DCL2b increases a plant's susceptibility to ToMV infection in a natural environment. To validate the result above, we challenged the tomato WT and dcl2b mutants at the 2-, 3-, and 4-week stages with ToMV in a virus-free chamber. Again, the virus-infected dcl2b mutants all displayed a strange narrow phenotype, while the WT grew normally (Figure S4). This result was fully reproducible in three independent experiments. In addition, an RNA blot showed that the viral RNA accumulation was clearly higher in dcl2b than in the WT plants (Figure S4b). Altogether, the developmental defects observed in the dcl2b mutant in both the greenhouse and the controlled growth chamber did result from ToMV infection, further indicating that our analysis of datasets collected from naturally and unbiasedly infected plants is physiologically relevant.

DCL2b was required for the biogenesis of 22-nt sRNA

To investigate the functions of DCL2b, we removed the vsiRNAs and focused on the endogenous tomato sRNA data for further analysis. Genome-wide profiling by sRNA-seq showed that the tomato sRNAs were not evenly distributed across the chromosomes of the WT and mutants (Fig. 3a).
Unlike the tomato long non-coding RNAs 33, the sRNAs showed higher densities in the pericentromeric heterochromatin regions than in the euchromatin (Fig. 3a). Furthermore, the knockout of DCL2b did not influence the overall sRNA distribution (Fig. 3a). Since DCL2 was previously reported to process dsRNA into 22-nt sRNAs 22, we counted the reads of each sRNA species and found that 22-nt sRNAs did not experience a significant decrease in the virus-free dcl2b mutant (Fig. 3b). Compared with plants grown in greenhouse I, the populations of 21-nt sRNAs were markedly enhanced in the WT and dcl2b from greenhouse II (Fig. 3b). These results were consistent with previous findings that part of these 21-nt sRNAs are novel siRNAs induced by viral infection, designated virus-activated siRNAs (vasiRNAs) 34. We then compared the differentially expressed endogenous sRNAs (DE sRNAs) in the two groups. Compared with the WT, 750 sRNAs showed significant changes in the dcl2b mutant; among them, 242 (32.3%) were upregulated and 508 (67.7%) were downregulated (Fig. 3c). Under viral infection, the number of DE sRNAs rose to 4188, with 3130 (74.7%) increased and 1058 (25.3%) decreased, respectively.

[Fig. 3 DCL2b-affected biosynthesis of tomato 21- and 22-nt sRNAs. a A genome-wide heat map of the total sRNA levels. b The length distribution of total reads mapped to the tomato genome in the WT and dcl2b. c Volcano diagrams of differentially expressed sRNAs; red and blue dots indicate differentially expressed sRNAs in dcl2b mutants. d, e The percentage of tomato 21-24 nt sRNAs that were differentially expressed in dcl2b mutants.]

These results suggested that the presence of the virus influenced large numbers of sRNAs. To illustrate the role of DCL2b in the accumulation of these sRNAs, we separated the 21-, 22-, 23-, and 24-nt DE sRNAs and calculated their respective up- and downregulation proportions. Intriguingly, almost 65% of the 21- and 22-nt DE sRNAs were decreased in the dcl2b mutant, whereas the 23- and 24-nt sRNAs showed comparable bilateral changes (Fig. 3d). It is known that 22-nt sRNAs can trigger the biogenesis of 21-nt sec-siRNAs, which are typically produced by DCL4 in plants 15,16. We interpret this to mean that the production of 22-nt triggers was largely impaired in the dcl2b mutant, compromising the generation of 21-nt sec-siRNAs under virus-free conditions. However, this scenario was not observed under the virus-infection condition, since an overwhelming number of DE sRNAs were produced in the absence of DCL2b (Fig. 3e).

DCL2b affected miRNA accumulation and sec-siRNA production

Sec-siRNAs can be triggered by 22-nt miRNAs. We next examined the impact of tomato DCL2b on the regulation of miRNA expression. To this end, we calculated the enrichment levels of 110 mature tomato miRNAs from miRBase. We first examined the miRNA changes under the virus-free condition. Compared with the WT, only five miRNAs exhibited significant changes in the dcl2b mutant (Fig. 4a). Notably, the expression levels of 21-nt miR399 and 22-nt miR6026 were reduced. A legitimate target site for miR6026 was predicted in the 5′UTR region of DCL2a, 2b, and 2d, raising the possibility that miR6026 might act as a trigger for sec-siRNA biogenesis from the three DCL2 genes 7. In the virus-infected dcl2b mutant, the number of DE miRNAs was elevated to 31, and the expression of the DE miRNAs was readily validated by a small RNA blot analysis (Fig. 4b).
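A small illustration of the size/direction tally behind Fig. 3d, e, as described earlier in this passage: given differentially expressed sRNAs with their fold changes, count the up/down split per size class. The three records below are illustrative placeholders, not the paper's data.

```python
# Hedged sketch: given differentially expressed sRNAs as
# (sequence, log2 fold change dcl2b vs WT), tally the up/down
# proportions per size class. Records are illustrative placeholders.
from collections import defaultdict

de_srnas = [
    ("TCGGACCAGGCTTCATTCCCC", -1.8),    # 21 nt, down in dcl2b
    ("TTTGGATTGAAGGGAGCTCTAC", -2.3),   # 22 nt, down in dcl2b
    ("ACGTATGTAGCTTAGGTCAAGAAT", 1.2),  # 24 nt, up in dcl2b
]

split = defaultdict(lambda: {"up": 0, "down": 0})
for seq, log2fc in de_srnas:
    split[len(seq)]["up" if log2fc > 0 else "down"] += 1

for size in sorted(split):
    total = sum(split[size].values())
    print(f"{size} nt: {split[size]['down'] / total:.0%} down, "
          f"{split[size]['up'] / total:.0%} up")
```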
Importantly, 9 of the 31 DE miRNAs were 22 nt long (Fig. 4c). Interestingly, we aligned six 22-nt DE miRNA sequences and found that they all had a U at the 5′ position and three miRNAs had a C at the 3′ terminal nucleotide (Fig. 4d), indicating that these miRNAs were likely associated with the initiation of sec-siRNA production 16. To investigate whether DCL2b contributed to sec-siRNA production through 22-nt miRNAs, we aligned all the sRNAs measuring 18-26 nt to tomato transcripts (genome annotation ITAG3.2), allowing only perfect matches, using Bowtie and an in-house Perl program. We termed the transcripts that served as the precursors of sec-siRNAs "templates", and then extracted the templates from which the dcl2b mutants generated less than half the sec-siRNA reads of their own WT controls (a minimal sketch of this filtering rule appears below). Here, we predicted 96 and 197 potential templates in virus-free and virus-infected dcl2b, respectively, with 17 overlaps (Fig. 4e and Table S1). In previous studies, DCL4, together with AGO1, AGO4, AGO7, SUPPRESSOR OF GENE SILENCING3 (SGS3), RDR6, and DOUBLE-STRANDED RNA BINDING FACTOR 4 (DRB4), was found to participate in different steps of sec-siRNA production 15,35. To investigate whether these genes had any impact on sec-siRNA production from the filtered templates, we measured their abundance using RNA-seq data obtained from the same sets of samples as the sRNA-seq above. We obtained ~20 million clean reads from RNA-seq, over 80% of which could be aligned to the tomato genome (allowing two mismatches) (Figure S3). We found that none of these sec-siRNA-related genes displayed obvious changes in transcription when DCL2b was knocked out (Figure S5a). Thus, the differential accumulation of sec-siRNAs is unlikely to be caused by indirect effects on these sec-siRNA-related genes. In the virus-free WT and dcl2b mutant, we hypothesized that the 22-nt miR6026 might lead to the formation of sec-siRNAs from targeted mRNAs such as DCL2a, 2b, and 2d (Figure S5b). The expression level of miR6026 in the dcl2b mutant decreased significantly (Fig. 4b and Figure S5c). The sec-siRNA abundance from the three DCL2 transcripts was reduced two- to seven-fold (Fig. 5a and Figure S5d). Intriguingly, over 70% of the total sec-siRNAs measured 21 nt (Fig. 5b), suggesting that DCL4 might play a major role in 21-nt sec-siRNA biosynthesis from the DCL2 transcripts. It is noteworthy that the formation of 22-nt sec-siRNAs was suppressed in the dcl2b mutant, suggesting that DCL2b is also involved in the biogenesis of 22-nt sec-siRNAs (Fig. 5b). Next, we used the Integrative Genomics Viewer (IGV) to illustrate the above results. Normalized sRNAs were mapped to the three DCL2 loci, and the IGV screenshots clearly showed that sec-siRNAs were produced in a phased pattern at each locus (Fig. 5a). We could also note that the sec-siRNAs were less abundant in the dcl2b mutant (Fig. 5a). The miR6026 expression did not change significantly in the virus-infected dcl2b mutant (Figure S5c). As a consequence, the abundance variations in DCL2-derived sec-siRNAs were not as obvious as those in the virus-free samples (Fig. 5a and Figure S5d). In contrast to virus-free plants, more 22-nt miRNAs experienced significant decreases in the virus-infected dcl2b mutant (Fig. 4a, c). We selected three downregulated ones for analysis to determine whether they acted as triggers of sec-siRNAs.
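A minimal sketch of the template-filtering rule mentioned above, under stated assumptions: counts are normalized (e.g., RPM), and the transcript IDs and values are illustrative placeholders rather than the paper's data.

```python
# Sketch of the template-filtering rule: a transcript is kept as a
# DCL2b-dependent sec-siRNA template when its sec-siRNA reads in the
# dcl2b mutant fall below half the matched WT level. Values are
# illustrative placeholders, not measured data.
wt_counts = {"transcript_A": 412.0, "transcript_B": 388.0, "transcript_C": 95.0}
dcl2b_counts = {"transcript_A": 130.0, "transcript_B": 61.0, "transcript_C": 88.0}

templates = [tx for tx, wt in wt_counts.items()
             if dcl2b_counts.get(tx, 0.0) < 0.5 * wt]
print(templates)  # sec-siRNA output dropped more than 2-fold in dcl2b
```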
Notably, three, four, and two genes had target loci for miR6027, miR482b, and miR482e, respectively, all of which are disease-related genes. The sec-siRNAs generated from these templates were remarkably reduced in the tomato dcl2b mutant (Figure S5e). The IGV screenshots also illustrated the phased patterns of the sec-siRNAs (Fig. 5c), indicating that the three miRNAs caused target slicing and phased siRNA production. Furthermore, when we checked the annotations of the sec-siRNA precursors from dcl2b, we found that 66 out of 197 precursors belonged to the "disease resistant" category (Table S1), revealing DCL2b's critical function in sec-siRNA biogenesis after ToMV infection. From the above results, we conclude that DCL2b plays an important role in sec-siRNA production. When tomato plants are infected by ToMV, DCL2b has an even broader impact on disease-resistance gene-derived sec-siRNAs, at least partially through their 22-nt sRNA triggers.

Function analysis of DCL2b-activated and -repressed genes

To further investigate the function of tomato DCL2b, we examined the global expression profiles of the WT and dcl2b plants through RNA-seq.

[Fig. 4 The miRNA levels were influenced in dcl2b mutants. a, c Differentially expressed miRNAs in dcl2b mutants; the 22-nt miRNAs are colored dark blue. b A small RNA blot analysis of miRNAs. d The alignment of sec-siRNA triggers; identical and conserved nucleotides among sec-siRNA triggers are marked in red and blue, respectively. e Venn diagram of the numbers of downregulated precursors of sec-siRNAs in dcl2b mutants.]

Compared with the WT, we identified 3435 and 3363 differentially expressed genes (DEGs) (|log2(FoldChange)| > 1, p-value < 0.05) in virus-free and virus-infected dcl2b mutants, respectively, of which 916 transcripts overlapped (Fig. 6a, middle). In the virus-free dcl2b mutant, 1685 transcripts were elevated, whereas 1750 were reduced (Fig. 6a, left); these DEGs might merely represent the DCL2b function under normal growth conditions, whereas DEGs in the virus-infected dcl2b would partially result from the virus infection. If we excluded the 916 overlapping transcripts from the total DEGs, the remaining 2447 DEGs were more likely related to defense against viruses. Among the 2447 DEGs, there were 1251 upregulated genes and 1196 downregulated transcripts in the dcl2b mutant (Fig. 6a, right). To understand the biological processes associated with DCL2b-related or virus-affected genes, the DEGs were divided into four clusters (upregulated in virus-free dcl2b (1), downregulated in virus-free dcl2b (2), upregulated in virus-infected dcl2b (3), and downregulated in virus-infected dcl2b (4)) and then subjected to Gene Ontology (GO) analysis separately (Table S2). For Cluster 1, genes involved in the mitochondrial RNA metabolic process and the defense response were highly enriched (Fig. 6b). We tested the expression of nine genes in the mitochondrial RNA metabolic process pathway and found that all of them belonged to the pentatricopeptide repeat (PPR) protein family (Fig. 6c). These PPR genes have a range of essential functions in post-transcriptional processes within the mitochondria and chloroplasts 36. Our GO analysis showed that Cluster 2 genes were highly enriched in a hormone metabolism pathway (Fig. 6d). We examined 41 genes related to auxin, cytokinin, and other hormones such as gibberellic acid (GA) and found that they were reduced significantly (Fig. 6e).
[Fig. 6, partial caption: c, e, g, i Heat maps of genes enriched in the mitochondrial RNA metabolic process, hormone signaling, response to stimulus, and lipid and fatty acid metabolism pathways, respectively.]

Next, we performed a GO enrichment analysis of the virus-induced genes in Cluster 3. As shown in Fig. 6f, genes associated with responses to stimuli were highly enriched. Of all 145 transcripts, 12, 27, and 22 genes belonged to disease-, kinase-, and hormone-related categories, respectively (Fig. 6g), suggesting that these genes could play an important role in responding to ToMV infection. For Cluster 4, many DEGs were related to the lipid and fatty acid metabolic pathway (Fig. 6h). When viruses invade a cell to complete the replicative cycle, they express their own proteins and co-opt host cell factors for multiplication, including lipids 37. The initial steps include the attachment to a specific receptor, in some cases a specific lipid. The replication of the viral genome then takes place, either in association with cellular membranes or with other lipid structures. Next, new virus genomes are enclosed inside newly synthesized viral particles, in which many lipids also play an important role 38. Since host lipids are essential for multiple steps of the viral replication cycle, plants can use different strategies to interfere with viral infection. One of these strategies is to reduce lipid metabolism and biogenesis, which might explain why the lipid-related genes were downregulated in dcl2b (Fig. 6i).

Discussion

Here, we systematically studied the function of tomato DCL2b. Generally, the DCL2b protein might have a role comparable to that of Arabidopsis DCL2 in producing 22-nt sec-siRNAs. However, this protein also has some unique functions in the biogenesis of 22-nt miRNAs, which in turn cause the production of sec-siRNAs; this function has typically been assigned to DCL1 in Arabidopsis. In addition, the tomato DCL2b appears to play a more important role than Arabidopsis DCL2 in the defense against viruses.

Tomato DCL2b was responsible for sec-siRNA biogenesis

In Arabidopsis, 22-nt sRNAs are the key determinant of sec-siRNA triggers 16. DCL2 usually generates 22-nt siRNAs from perfectly duplexed precursors 21. For a 22-nt miRNA, its precursor contains a bulge, so DCL1 generates a 22-nt miRNA and a 21-nt miRNA* 16,39. In our research, we found that tomato DCL2b was required for the biogenesis of 22-nt siRNAs, which served as triggers to generate sec-siRNAs (Fig. 3d and Table S1). Notably, the level of 22-nt miR6026 decreased significantly in the dcl2b mutant (Fig. 4a, b). In a previous report, miR6026 was not processed by tomato DCL1 7. Moreover, its precursor does not contain asymmetric bulges (miRBase: http://www.mirbase.org/), raising the possibility that miR6026 is DCL2b-dependent. Additionally, miR6026 was predicted to target tomato DCL2a, 2b, and 2d 7. In our study, we also discovered that miR6026 could trigger sec-siRNAs from these three DCL2 transcripts (Fig. 5a), suggesting that there may be a feedback regulation mechanism between DCL2b and miR6026. When tomato plants were infected by ToMV, a large number of 21-nt sRNAs were markedly enhanced (Fig. 3b, c). The expression level of DCL2b increased as well, both in the WT and the dcl2b mutant (Figure S4c and S6). The above feedback loop was disrupted, and miR6026 was not decreased in the virus-infected dcl2b mutant (Fig. 4b). Other 22-nt miRNAs showed downregulation (Fig. 4c).
DCL2b might not participate in their processing directly but could influence their expression when the tomato is infected by a virus. These 22-nt miRNAs acted as sec-siRNA triggers, and all of them targeted disease-related genes (Fig. 5c). In the tomato, Tobacco mosaic virus resistance-1 (Tm-1) encodes a protein that can bind to ToMV replication proteins and inhibit ToMV replication 40,41. Due to its low expression (RPM < 1), we could not determine whether Tm-1 serves as a sec-siRNA precursor. However, when we identified the precursors that generated fewer sec-siRNAs in dcl2b, we found 197 transcripts, and one-third of them belonged to the "disease-resistant" category (Fig. 4e and Table S1), suggesting that tomato DCL2b regulates disease-related genes through sec-siRNA processing while combating ToMV.

Tomato DCL2b played a key role in defending against ToMV

In Arabidopsis, DCL4 and DCL2 act hierarchically in antiviral resistance 24. DCL4 is considered the leader and is primarily responsible for the processing of 21-nt vsiRNAs from RNA viruses 20. Only when DCL4 is absent or its activity is suppressed by viruses does DCL2 produce 22-nt vsiRNAs, serving as a backup to functionally compensate for DCL4 19,42. Turnip crinkle virus (TCV) is an example: this virus encodes a suppressor, P38, that specifically inhibits DCL4 activity, and in this scenario DCL2 contributes the majority of vsiRNAs 19. For other viruses such as Potato virus X (PVX), DCL4 alone is sufficient to inhibit PVX accumulation, and the infected Arabidopsis dcl2 mutant does not show viral symptoms 43. During Turnip mosaic virus (TuMV) infection in Arabidopsis, DCL4-dependent siRNAs are necessary to prevent initial infections, whereas DCL2 is neither necessary nor sufficient to limit infections 20. In our study, when the tomato DCL2b itself was knocked out, the plants showed severe developmental defects in both natural and manual ToMV infection environments (Fig. 1a, S4a and S4d). This scenario has not been observed with Arabidopsis DCL2, implying that tomato DCL2b has more important roles in virus defense than its Arabidopsis counterpart. When the tomato was infected with ToMV, the DCL4 expression remained unchanged (Figure S4c and S6). The 21-nt vsiRNAs and tomato endogenous sRNAs also did not decrease (Fig. 2b), suggesting that DCL4 was still functional in antiviral silencing. The ToMV virulence in tomatoes is very strong, and plants need both DCL2b and DCL4 to combat the virus. Together, these results indicate that plants might be regulated by unique mechanisms during different species-virus interactions.

Materials and methods

Plant materials and growth conditions

The tomato wild type (cv. Ailsa Craig) and mutants (in the AC background) were planted in commercial tomato cultivation soil. All the plants were grown in two greenhouses under the same conditions, at 20-25 °C under a 16 h light/8 h dark cycle.

CRISPR/Cas9 gene knockout and mutation analysis

Four sgRNAs targeting four exons of DCL2b were designed using the online tool CRISPR-GE (http://skl.scau.edu.cn/home/). These 20 bp oligos were cloned into AtU3d, AtU3b, AtU6-1, and AtU6-29 vectors, respectively. The sgRNA expression cassettes were then assembled into the pYLCRISPR/Cas9-Ubi-H binary plasmid by Golden Gate ligation 44. Tomato tissue culture was performed according to the established protocol using the Agrobacterium infection method 45. For the mutation analysis, genomic DNA was extracted from young tomato leaves using a Plant Genomic DNA Kit (Tiangen, China).
The DNA was used as a template to amplify the DCL2b fragment by PCR, and the fragments were then sequenced. The primers used in the vector construction and mutation analyses are listed in Table S3.

Virus inoculation procedures

Viral inoculations were performed as described 46. The tomato plants were infected at the 2-, 3-, and 4-week stages, and the inoculated plants were placed in an incubator at 22 °C.

High-throughput sequencing of RNAs and sRNAs

The total RNA samples were prepared from WT and dcl2b mutant adult leaves using TRIzol reagent (Invitrogen, USA). Paired-end mRNA libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, USA) according to the manufacturer's recommendations and were sequenced on an Illumina HiSeq 4000 platform, generating 150 bp reads. The quality of the clean reads was checked using the FastQC program (v0.11.3). The Fastq data were then aligned to the tomato genome (SGN release version SL3.0) using TopHat (v2.1.0). The mapped reads were counted by HTSeq (v0.6.1), and differential expression analysis was performed using the DESeq2 package. sRNA libraries were prepared using the NEBNext® Multiplex Small RNA Library Prep Set for Illumina® (NEB, USA) and sequenced on an Illumina HiSeq 2500 platform, generating 50 bp single-end reads. The quality of the clean reads was also checked by FastQC. Subsequently, the FASTA data were aligned to the virus genomes from NCBI (ftp://ftp.ncbi.nlm.nih.gov/refseq/release/viral/) using Bowtie (v1.1.2). The unmapped reads were then aligned to the tomato genome. The differential expression analysis was processed with an in-house Python script.

RNA extraction, real-time PCR analysis, and northern blot

The total RNA was extracted from the tomato leaves with TRIzol reagent (Invitrogen, USA). For reverse transcription, 1 μg of RNA and oligo dT primers were used to synthesize cDNA using a TransScript One-Step gDNA Removal and cDNA Synthesis SuperMix kit (Trans, China). Real-time PCR was then performed on a CFX96 Real-Time PCR Detection System (Bio-Rad, USA) with SYBR Green PCR Master Mix (Trans, China). Actin was used as an internal control. Each experiment included three independent biological repeats and three technical replicates. For the ToMV northern blot, 10 μg of total RNA was used in each lane. Hybridizations were performed with biotin-labeled probes complementary to the ToMV sequence, and the loading RNA served as the control. For the small RNA northern blot, 10 μg of total RNA was used in each lane. Hybridizations were performed with 32P-radiolabeled probes complementary to miR159, miR390a, and miR6022; U6 served as the loading control 6. The primers and probes used in the real-time PCR analysis and northern blot are listed in Table S3.

SEM imaging

For the SEM analysis, leaf samples were first fixed in 2.5% glutaraldehyde buffer with 0.1 mol/L sodium phosphate (pH 7.2) for 2 h. After the samples were dehydrated with serial ethanol washes, they were dried with a critical point dryer and then coated with gold particles. The samples were examined with an SEM (HITACHI S-3400N, Japan) 47.

Gene Ontology analysis

Gene Ontology enrichment analysis was performed using the Gene Ontology Consortium online tools 48. The genes were analyzed with the PANTHER overrepresentation method. Binomial tests with Bonferroni correction were used to calculate the p-values, and terms with p-values < 0.05 were considered enriched.
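The real-time PCR above names Actin as the internal control but does not state the quantification model; a common choice is the 2^(−ΔΔCq) (Livak) method, sketched below with hypothetical Cq values. The function and numbers are illustrative assumptions, not the paper's data.

```python
# Illustrative 2^(-ddCq) (Livak) relative-expression calculation for the
# real-time PCR described above; Actin is the internal control per the
# text, but the quantification model itself is an assumption here.
def relative_expression(cq_target, cq_actin, cq_target_ctrl, cq_actin_ctrl):
    d_cq_sample = cq_target - cq_actin            # normalise to Actin
    d_cq_control = cq_target_ctrl - cq_actin_ctrl
    dd_cq = d_cq_sample - d_cq_control
    return 2 ** (-dd_cq)

# Hypothetical Cq values: a transcript in infected vs mock plants.
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up
```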
Phylogenetic analysis

Protein sequences from 34 tobamoviruses were aligned using ClustalX 2.1 with default parameters 49, and a phylogenetic tree was constructed in MEGA 6.06 using the neighbor-joining method with 1000 bootstrap replicates and visualized with the online tool Evolview 50.

Statistical analysis

SPSS Statistics (v19.0) was used for the statistical analysis. Statistical significance was computed using Student's t test, and significant differences (p < 0.05) are indicated by asterisks. All data are presented as the means ± standard errors (SEs).

Data availability

Raw data from RNA-seq and sRNA-seq have been submitted to the Sequence Read Archive (SRA) at NCBI (http://www.ncbi.nlm.nih.gov/sra/) under the accession number SRP136048.
2018-09-14T14:05:13.045Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "cd4198bae07ffdaed74cc81e330ec657cc01446f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41438-018-0073-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd4198bae07ffdaed74cc81e330ec657cc01446f", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
263817844
pes2o/s2orc
v3-fos-license
Qualitative and Quantitative Detection of CRISPR-Associated Cas Gene in Gene-Edited Foods

Effective regulation of gene-edited products and the resolution of public concerns are prerequisites for the industrialization of gene-edited crops and their derived foods. The CRISPR-associated protein, the core element of the CRISPR system, needs to be regulated; thus, there is an urgent need to establish qualitative and quantitative detection methods for the Cas gene. In the present study, primers and probes were designed and screened for Cas12a (Cpf1), one of the most commonly used target sites in gene editing; we performed PCR system optimization, determined the optimal primer concentration and annealing temperature, and established qualitative PCR and quantitative PCR (qPCR) assays for detecting Cpf1 in gene editing through specificity and sensitivity tests. In specificity testing, the qualitative PCR and qPCR methods detected 100% of the samples containing Cpf1 DNA, while the detection rate for samples without Cpf1 was 0%. In the sensitivity test, the limit of detection of qualitative PCR was 0.1% (approximately 44 copies), and the limit of detection of the qPCR method was 14 copies. In the stability test, both the qualitative PCR and qPCR methods were repeated 60 times at their corresponding lowest detection limit concentrations, and all results were positive. Thus, the qualitative and quantitative assays for Cpf1 are specific, sensitive, and stable. The method provides technical support for the effective monitoring of gene-edited products and their derived foods in the future.

Introduction

Gene editing technology can effectively edit target genes and is therefore a valuable tool in biological research [1][2][3]. The third-generation gene editing technology, namely clustered regularly interspaced short palindromic repeats (CRISPR), is derived from a defense system in bacteria. The presence of these repeats was first discovered in Escherichia coli as early as 1987 by Ishino et al. [4], and, in 2000, Mojica et al. used computer analysis to find their prevalence in bacteria [5]. The CRISPR/Cas system has become one of the most popular gene editing techniques because of its simple vector construction process and high editing efficiency [6]. The Cas (CRISPR-associated) proteins of CRISPR/Cas systems bind to the transcription products of CRISPR to form complexes that function to cleave DNA sequences [7]. CRISPR/Cas systems are broadly classified into two major classes: class 1 and class 2. Class 1 includes types I, III, and IV, which require multiple Cas proteins; class 2 includes types II, V, and VI, which require only a single scissor protein [8]. The common Cas9, Cas12a (Cpf1), and Cas13a belong to class 2; however, the Cas12 system substantially differs from the Cas9 system in several aspects [9,10], for example, Cas12a (Cpf1) requires only CRISPR RNAs (crRNAs), while Cas9 requires trans-activating crRNA (tracrRNA) in addition to crRNA.

DNA Extraction

We weighed 100 mg each of various transgenic mixtures, non-gene-edited cotton, and gene-edited cotton powder with mass fractions of 100%, 10%, 1%, 0.1%, and 0.05%, respectively, and extracted DNA using a plant DNA extraction kit according to the manufacturer's instructions.
A 25 µL reaction system was used for qPCR: FastStart Essential DNA Probes Master 12.5 µL; 1 µL each of the 10 µmol/L forward and reverse primers, at a final concentration of 400 nmol/L; 0.5 µL of probe (10 µmol/L), at a final concentration of 200 nmol/L; and 2 µL of DNA solution; the reaction mixture was supplemented with ddH2O to a final volume of 25 µL. The reaction conditions were as follows: pre-denaturation at 95 °C for 10 min, followed by 40 cycles of denaturation at 95 °C for 15 s and annealing/extension at 60 °C for 1 min, with the PCR amplification fluorescence signal collected at 60 °C.

Primer Design and Screening

Based on the Cpf1 sequence in gene-edited cotton provided by the developer, three pairs of primers were designed using Primer Premier 5.0 (Table S1); the predicted amplicon lengths were all in the range of 260-360 bp. For screening the three primer pairs with the gene-edited cotton DNA solution, the annealing temperatures were set to 52 °C, 53 °C, 54 °C, 56 °C, 57 °C, 59 °C, 61 °C, 63 °C, 65 °C, 66 °C, 67 °C, and 68 °C. The qPCR primers and probes were developed using Primer Express 3.0 software (Applied Biosystems, Waltham, MA, USA) (Table S2), and all primers were synthesized by Sangon Biotech (Shanghai) Co., Ltd. The DNA solution of gene-edited cotton was diluted in six gradients (190, 38, 7.6, 1.52, 0.304, and 0.0608 ng/µL), and standard curves were constructed for the designed primers to verify the primer efficiency.

Qualitative PCR System Optimization

The qualitative PCR system was optimized by setting the primer concentrations to 0.1, 0.2, 0.3, 0.4, and 0.5 µmol/L. The amplification products were detected by 2% agarose gel electrophoresis, and the best primer concentration was taken as the one giving the brightest specific amplification band.

Specificity Analysis

PCR amplification was performed using DNA from 1% gene-edited cotton, 6 other transgenic rice mixes, 6 common transgenic soybean mixes, 14 common transgenic maize mixes, 8 common transgenic oilseed rape mixes, 5 common transgenic cotton mixes, and non-gene-edited cotton as templates. The qualitative PCR amplification products were electrophoresed on a 2% agarose gel at 160 V for 30 min, and the results were observed with a gel imaging system; the qPCR amplification results were assessed from the amplification curves.

Sensitivity Analysis

Qualitative PCR sensitivity assay: DNA of gene-edited cotton with mass fractions of 100%, 10%, 1%, 0.1%, and 0.05% was used as templates and amplified with the primers; the amplification products were electrophoresed on a 2% agarose gel at 160 V for 30 min, and the results were observed using a gel imaging system. qPCR sensitivity assay: 100% genomic DNA of gene-edited cotton was diluted to 190, 38, 7.6, 1.52, 0.304, and 0.0608 ng/µL for qPCR amplification, and the amplification curves were assessed.

Designing and Screening of Primers

Based on the sequence of Cpf1 (GenBank Accession No.: OK557998.1) shown in Figure 2A, three pairs of primers were developed using Primer Premier 5.0 to assess the specificity, sensitivity, and efficiency of the method.
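As a practical aside, the sketch below scales the 25 µL qPCR recipe quoted at the start of this passage into a master mix for n reactions. The per-reaction volumes come from the text; the 10% pipetting overage and the convention of adding the 2 µL template separately per well are assumptions, not the paper's protocol.

```python
# Convenience sketch (not from the paper): scale the 25 uL qPCR recipe
# for n reactions with a 10% pipetting overage. Template DNA (2 uL) is
# assumed to be added separately to each well, so it is excluded here.
PER_RXN_UL = {
    "FastStart Essential DNA Probes Master": 12.5,
    "forward primer (10 umol/L)": 1.0,
    "reverse primer (10 umol/L)": 1.0,
    "probe (10 umol/L)": 0.5,
}
TEMPLATE_UL, TOTAL_UL = 2.0, 25.0

def master_mix(n_reactions, overage=1.1):
    mix = {name: vol * n_reactions * overage for name, vol in PER_RXN_UL.items()}
    water_per_rxn = TOTAL_UL - TEMPLATE_UL - sum(PER_RXN_UL.values())
    mix["ddH2O"] = water_per_rxn * n_reactions * overage
    return mix

for reagent, vol in master_mix(8).items():
    print(f"{reagent}: {vol:.1f} uL")
```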
In the experiment, the gene-edited cotton DNA containing Cpf1 was used as the template, and an annealing temperature gradient experiment was performed to determine the optimum annealing temperature. The results of PCR amplification showed that some of the products amplified by primer Cas12-1 contained nonspecific amplification products, and a single band was present at annealing temperatures of 59-68 °C; however, the bands were weak. The product amplified by primer Cas12-2 contained more nonspecific amplification products, and, although there was a single band at 66-68 °C, this band was weak. The amplification product of primer Cas12-3 showed a single, clear band with no primer dimer formation, and the highest amplification efficiency was achieved at annealing temperatures of 59-63 °C (Figure 2B). On the basis of these experimental results, the final selected primer pair was Cas12-3, with an amplified band size of 271 bp and an optimal annealing temperature range of 59-63 °C.

Optimization of the Qualitative PCR Reaction System and Amplification Conditions

System optimization experiments were performed with different primer concentrations. The results (Figure 2C) showed that the amplified bands became significantly stronger with increasing primer concentrations at the annealing temperature of 60 °C; however, nonspecific amplification products appeared at higher primer concentrations.
The sensitivity and specificity of the assay were evaluated, and a primer concentration of 0.3 µmol/L was found to yield high amplification efficiency without nonspecific amplification products or primer dimer formation. The optimum annealing temperature of 59-63 °C was determined in the primer design and screening experiments mentioned in Section 2.1. The PCR reaction mixture contained PCR buffer (Mg2+ Plus) 2.5 µL, dNTP mixture 2 µL, Cas12-F3 and Cas12-R3 (10 µmol/L) 0.75 µL each at a final concentration of 300 nmol/L, rTaq DNA polymerase (5 U/µL) 0.15 µL, and DNA solution (50 ng/µL) 2 µL; the reaction mixture was supplemented with ddH2O to a final volume of 25 µL. The reaction conditions were as follows: pre-denaturation at 95 °C for 5 min; 35 cycles of denaturation at 95 °C for 30 s, annealing at 59-63 °C for 45 s, and extension at 72 °C for 30 s; a final extension at 72 °C for 7 min; and storage at 4 °C.

Specificity Analysis of the Qualitative PCR Method

To determine the specificity of the established qualitative PCR assay, the DNA of mixed samples of other transgenic crops, the DNA of rice samples edited by the Cas9 system, and the DNA of non-gene-edited cotton were used as templates for PCR amplification [26], and the experimental results are shown in Figure 3A. Only the 1% gene-edited cotton sample amplified the expected DNA fragment, while no bands of the expected size were amplified in the other samples. This finding indicates that the established method to detect Cpf1 is highly specific. Next, four different gene-edited cotton samples were amplified using primer Cas12-3, and the experimental results are shown in Figure 3B. Except for the blank control and the negative control, all the gene-edited cotton line samples amplified the target fragment of the expected size.
[Figure 3, partial legend: (C) Sensitivity test of the PCR detection method for gene-edited cotton. 1: Blank control; 2: Negative control; 3-4: Gene-edited cotton with mass fraction of 10%; 5-6: 5%; 7-8: 1%; 9-10: 0.1%; 11-12: 0.05%. (D) Stability test of the PCR detection method for gene-edited cotton. 1: Blank control; 2: Negative control; 3-62: Results of 60 independent amplifications of gene-edited cotton DNA with 0.1% mass fraction.]

Sensitivity Analysis of the Qualitative PCR Method

In the sensitivity analysis of the qualitative PCR method, DNA from gene-edited cotton samples with mass fractions of 10%, 5%, 1%, 0.1%, and 0.05% was used as a template for qualitative PCR amplification. The results (Figure 3C) showed that the PCR products gave weaker bands as the content of gene-edited cotton DNA decreased; however, bands still appeared at levels as low as 0.05%, indicating that the sensitivity of the method could reach at least 0.1%. To further confirm 0.1% as the lower limit of stable detection, 60 qualitative PCR amplifications were performed using gene-edited cotton genomic DNA with a mass fraction of 0.1% as the template. The results are shown in Figure 3D. All 60 repeated qualitative PCR experiments yielded the targeted amplification products, thus meeting the requirements for determining the qualitative detection limit of gene editing components [27]. Therefore, the detection limit of the method was taken as 0.1%. The concentration of the DNA template in the PCR reaction system was 50 ng/µL, and the size of the cotton haploid genome is 2118 Mbp; thus, the limit of detection (LOD) of this method was approximately 44 copies.

Primer Design and Screening for qPCR

The present study further developed the quantitative detection method for Cas12a (Cpf1). On the basis of the Cpf1 sequence (Figure 4A), three pairs of primer and probe combinations were designed using Primer Express 3.0 software (Applied Biosystems, USA) (Table S2). The three pairs of primers and probes were used to amplify gene-edited cotton genomic DNA with a mass fraction of 100%, and the amplification templates were diluted to different concentrations. Standard curves were constructed for the three pairs of primers and probes to verify the primer amplification efficiency, and the results are shown in Figure 4B-D. The amplification efficiency was calculated from the slope of the standard curve: E = 10^(−1/slope) − 1.
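A minimal sketch of the two calculations just described: the amplification efficiency derived from the standard-curve slope, and the mass-to-copies conversion behind the ~44-copy qualitative LOD (the same arithmetic yields the 14-copy qPCR LOD reported below). The Cq values are hypothetical; the dilution series, the efficiency formula, the 2118 Mbp genome size, and the standard 660 g/mol per base pair are taken from, or consistent with, the text.

```python
# Sketch: (i) amplification efficiency from the standard-curve slope,
# E = 10^(-1/slope) - 1, for the 5-fold dilution series quoted above;
# (ii) template mass converted to haploid genome copies. Cq values are
# hypothetical placeholders, not the paper's measurements.
import numpy as np

conc = np.array([190, 38, 7.6, 1.52, 0.304, 0.0608])   # ng/uL, 5-fold series
cq = np.array([22.1, 24.4, 26.8, 29.1, 31.4, 33.8])    # hypothetical Cq values

slope, intercept = np.polyfit(np.log10(conc), cq, 1)
efficiency = 10 ** (-1 / slope) - 1
r2 = np.corrcoef(np.log10(conc), cq)[0, 1] ** 2
print(f"slope={slope:.3f}, E={efficiency:.1%}, R^2={r2:.4f}")
# acceptance window from the text: -3.6 <= slope <= -3.1 and R^2 >= 0.98

def genome_copies(dna_ng, genome_bp=2118e6):
    """Haploid genome copies in dna_ng of dsDNA (660 g/mol per bp)."""
    return dna_ng * 1e-9 * 6.022e23 / (genome_bp * 660.0)

print(f"{genome_copies(100 * 0.001):.0f}")  # 0.1% of 100 ng template: ~43-44
print(f"{genome_copies(0.016 * 2):.0f}")    # 2 uL of 0.016 ng/uL: ~14
```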
The quantitative detection of gene-editing exogenous components requires the slope of the standard curve to be in the range of −3.6 ≤ slope ≤ −3.1, which implies an amplification efficiency of 90-110%, and the correlation coefficient (R²) to be ≥0.98 [28]. The results showed that the slope for Cas12-real-1 was −3.340, with an amplification efficiency E of 99.3% and a correlation coefficient R² of 1. The slope for Cas12-real-2 was −3.426, with an amplification efficiency E of 95.8% and an R² of 0.998. The slope for Cas12-real-3 was −3.427, with an amplification efficiency E of 95.8% and an R² of 0.999. Thus, all indicators of the three primer pairs met the requirements of the standard curve for the quantitative detection of exogenous components of gene editing. However, according to the extent of amplification, the primers and probes of Cas12-real-1 were superior to those of Cas12-real-2 and Cas12-real-3, and the primer pair for the qPCR assay was finally determined to be Cas12-real-1.

Specificity Assay for the qPCR Method

To determine the specificity of the established qPCR assay for gene-edited cotton, the DNA of mixed samples of other transgenic crops, the DNA of rice samples edited by the Cas9 system, and the DNA of non-gene-edited cotton were used as templates for qPCR amplification. Using 1% gene-edited cotton as a positive control together with negative and blank controls, no specific amplification curve was obtained in any of the samples except the positive control. As shown in Figure 5A, the amplification results indicate that Cas12-real-1 has a high specificity for detecting Cpf1. Next, four different gene-edited cotton samples were selected and amplified with Cas12-real-1. Amplification curves were obtained in all positive samples (Figure 5B).
Sensitivity Analysis of the qPCR Method

In the sensitivity assay of the qPCR method, 100% genomic DNA of gene-edited cotton was diluted to different concentrations for qPCR, and the standard curve was then plotted. As shown in Figure 5C, the slope of the standard curve was −3.407, the amplification efficiency E was 96.6%, and the correlation coefficient R² was 0.999. All values were within the required ranges (−3.6 ≤ slope ≤ −3.1, amplification efficiency E of 90-110%, and R² ≥ 0.98), indicating that the method has good linearity in this template concentration range. The qPCR method detected 0.016 ng/µL of the genomic DNA of gene-edited cotton, which corresponds to 14 copies of the gene-edited cotton genome. We repeated the experiment 60 times with 0.016 ng/µL DNA, and typical amplification curves were obtained (Figure 5D). The results showed that the Cq value of the 60 replicates was 35.53 ± 0.43, the SD value was 0.43, and the relative standard deviation (RSD) value was 1.2%, which determined the detection limit of the qPCR method as 14 copies of the gene-edited cotton genome.

Discussion

Presently, with a rapidly growing population and a surge in demand for food, modern agriculture needs to develop sustainably to produce crops of high quality and quantity that can withstand global climatic change and other biological factors. In the past few years, CRISPR technology has greatly accelerated the pace of crop research and breeding [6]. The Cas9 system has been widely used in various fields. The advantages of the Cas12 system can overcome the limitations of the Cas9 system, and it is thus expected that the Cas12 system could become a substitute for the Cas9 system. CRISPR/Cpf1 technology enables breeders to improve crop yield and quality in an efficient and accurate manner [29,30]. Currently, researchers have used the Cas12 system to edit various plants, such as Arabidopsis thaliana [31], cotton [23], rice [30], maize [6], Glycine max var.
[29], tomato [32], Chlamydomonas reinhardtii [33], and citrus [34]. The CRISPR/Cas12 system also has strong advantages in editing bacteria, and many bacterial species are widely used in life science industries, such as the pharmaceutical [35], biotechnology, and cosmetics industries [36]. Many byproducts of microbial metabolism, such as enzymes and antibiotics, are used by humans [37]; hence, studying and modifying the genome to meet research needs is required to fully utilize the true potential of microorganisms. In gene-edited bacteria, the Cas12a (Cpf1) system has many advantages, enabling nucleotide substitutions, deletions, and insertions in the genome with greater efficiency [38,39]. In research on the application of the Cas12a (Cpf1) system for editing mammalian cell lines, Cas12a (Cpf1) can reverse the disease status of cell lines [40][41][42], and this system is expected to be a safe and effective gene editing tool that can be used clinically. The nucleic acid detection function of Cpf1 is also highly valued by researchers: the one-HOur Low-cost Multipurpose highly Efficient System (HOLMES) can use Cpf1 to efficiently and rapidly detect target DNA if it is present in the sample. In HOLMES, Cpf1/crRNA forms a ternary complex with the target DNA and triggers the trans-cleavage of non-targeted ssDNA; the resulting breakage of the reporter DNA causes the fluorescent group to generate fluorescent signals. This detection tool opens a new window for diagnosing human diseases. Previous studies have shown that HOLMES can be used to detect DNA and RNA viruses, such as pseudorabies virus and Japanese encephalitis virus, and for food and environmental monitoring [43]. In the era of rapid development of gene editing, the Cas12a (Cpf1) system is expected to become a more popular scissor protein; however, the effective monitoring and regulation of gene editing technology remains a critical issue, and thus far no detection method for Cpf1 has been reported. In the present study, the presence or absence of exogenous DNA such as Cpf1 in a sample was detected, based on the principle of gene editing, to identify whether the sample was a gene-edited product or a derived food. In this study, Cpf1 introduced into gene-edited cotton was used as the target site to design primer pairs for specific qualitative PCR and qPCR amplification, ensuring the detection of exogenous sequences. Two detection methods were established: qualitative PCR and qPCR. The advantages of qualitative PCR are low instrumentation requirements, simple operation, and low cost; however, the method is time-consuming. The qPCR method has the advantages of real-time analysis, high sensitivity, less time consumption, and detection of low levels of gene-edited components. Thus, the combination of the two methods in a complementary assay could yield accurate and rapid detection [28]. It will also provide technical support for establishing a detection and identification system for gene-edited foods in the future. The Cpf1 method targets a different exogenous gene than the previously studied Cas9 assay. In primer design and screening, sensitivity, and detection limit experiments, the Cpf1 assay is more sensitive and has a lower detection limit than the Cas9 assay. In terms of primer selection, the analysis of Cpf1 yielded more candidate primer sequences, of which only the optimal primers and probes were presented in the manuscript; in fact, all of these primers and probes can detect Cpf1.
Secondly, in qualitative PCR experiments, the sensitivity of the Cpf1 assay (44 copies) was higher than that of the Cas9 assay (65 copies). In the qPCR method, the amplification efficiency E of the optimal primer for Cpf1 was 99.3% with a correlation coefficient R² of 1, whereas for Cas9 the amplification efficiency E was 95.2% with R² = 0.999. The detection limit for Cpf1 was 14 copies, while that for Cas9 was 16 copies [22]. In summary, the Cpf1 detection method is highly sensitive and broadly applicable.

Conclusions

In the present study, qualitative PCR and qPCR assays were established to specifically detect Cpf1, using the Cpf1 sequence introduced into gene-edited cotton as the target site. The detection limit of the qualitative PCR assay was 0.1% (approximately 44 copies), and the LOD of the qPCR method was 14 copies. The two methods established in this study have high specificity, good sensitivity, and reproducibility; yield stable and reliable results; and are suitable for qualitative and quantitative analysis of exogenous Cpf1 components in gene-edited foods and for primary screening in Cpf1 detection.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12193681/s1, Table S1: The information of primers used in qualitative PCR; Table S2: The information of primers and probes used in qPCR.

Author Contributions: Conceived and designed the experiments: L.D., C.P., and J.X. Performed the experiments: X.X. and X.W. Analysed the data: X.C. and Y.L. Contributed reagents/materials/analysis tools: C.P. and L.D. Wrote the manuscript: L.D. and C.P. Read and provided suggestions on the manuscript: L.D., X.X., X.W., X.C., Y.L., C.P. and J.X. All authors have read and agreed to the published version of the manuscript.
2023-10-11T15:04:32.697Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "09e9acb9d9cbf9b9e681b0b79355d8f36a248eb7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/12/19/3681/pdf?version=1696673743", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ba012831c6562b9a4c4ed41e2f59663f49467e35", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
259994761
pes2o/s2orc
v3-fos-license
Hypoglycaemic and hypolipidemic activities of total flavonoids from Nymphaea candida flowers on diabetic mice

Abstract

This study aimed to investigate the hypoglycaemic and hypolipidemic activities of total flavonoids from Nymphaea candida (NCTF). The results showed that NCTF could significantly ameliorate various indicators such as FBG, OGTT, TC, TG, LDL-C, and HDL-C in ALX-induced diabetic mice compared with the model group. Meanwhile, in the study of the therapeutic effect of NCTF on T2DM mice induced by high-fat and high-sugar diets combined with STZ, all diabetes-related parameters, including RBG, INS, and INSR, were significantly improved by NCTF, as were the serum ALT, AST, ALP, CR, MDA, and IL-6 activities. NCTF could significantly increase the expression levels of SOD and PPAR-γ in T2DM. Pathological observation showed that NCTF could improve the damage to pancreatic and liver tissues in T2DM mice. In conclusion, NCTF has good hypoglycaemic and hypolipidemic effects, and its mechanism may be related to its antioxidant activity, regulation of PPAR-γ, and inhibition of inflammatory cytokine expression.

Graphical Abstract

Introduction

Waterlilies (plants in Nymphaea), with more than 70 species, are widely distributed all over the world, and all show significant ornamental and medicinal value; some waterlilies can be used to prevent diabetes and its complications (Wiersema 2001; Ishrat et al. 2021). For example, N. stellata is used in South Asia to treat diabetes, and its extract exhibited hypoglycaemic and hypolipidemic effects on alloxan (ALX)-induced diabetic rats as well as α-glycosidase-inhibiting activity (Dhanabal et al. 2007; Huang et al. 2010); N. rubra extract could reverse insulin resistance by inhibiting c-Jun NH2-terminal kinase and NF-κB activities (Gautam et al. 2014); N. nouchali extract could activate the PPAR-γ signalling pathway, increase the expression of GLUT4, and then enhance insulin sensitivity in tissues, promoting fat metabolism and glucose consumption and utilisation (Parimala et al. 2015). These studies indicated that there are constituents with significant hypoglycaemic effects in waterlily extracts. As a member of the genus Nymphaea, N. candida (snow-white waterlily) is mainly distributed in the Central Asia region, and its flower buds have multiple efficacies such as reducing heat, nourishing the liver, relieving inflammation, moistening the throat, and relieving thirst (Zhao et al. 2011). Previous studies showed that N. candida has a variety of biological activities including anti-oxidation, anti-hepatitis, and neuroprotection (Wang et al. 2021; Zhao et al. 2021). Flavonoids (nicotiflorin, astragalin, etc.) are the main characteristic components of this plant (Zhao et al. 2008). However, systematic studies on the hypoglycaemic and hypolipidemic effects of flavonoids from N. candida and their mechanism are rarely reported. Therefore, this study aimed to investigate the preventive and therapeutic effects of total flavonoids from N. candida (NCTF) on diabetic mice and associated diseases, to provide a data reference for the development and utilisation of this plant.

Result and discussion

As shown in Figure S1, the characteristic compounds of NCTF were isostrictiniin, nicotiflorin, and astragalin; their contents in NCTF were 2.21%, 11.63%, and 4.68%, respectively. Two diabetic animal models (ALX-induced, and high-sugar high-fat diet combined with streptozotocin (STZ)-induced) were used to evaluate the hypoglycaemic and hypolipidemic activities of NCTF.
ALX is a β-cytotoxin that destroys pancreatic β cells, resulting in reduced endogenous insulin release and a rise in blood glucose concentration; this model is therefore commonly used to evaluate the hypoglycaemic activities of natural products. Type II diabetes (T2DM) is a disease caused by various factors such as genetics, diet, age, and pregnancy. The disease not only damages islet cells but also produces insulin resistance and disorders of glucose and lipid metabolism. High-sugar and high-fat diets can induce insulin resistance in mice. STZ can selectively destroy islet β cells in the pancreatic tissue of animals, leading to insufficient insulin secretion and causing hyperglycaemia. Therefore, mice injected with STZ after being fed a high-sugar, high-fat diet for a period of time develop a T2DM model with stable insulin resistance.

Effects of NCTF on ALX-induced diabetic mice

The diabetic mouse model was established by intraperitoneal injection of ALX according to a previous report (Tang et al. 2020). The fasting blood glucose (FBG) levels in ALX-induced diabetic mice were ≥11.1 mmol/L, accompanied by various symptoms: food intake, water intake, and urine output increased significantly; body weight decreased significantly; and the spleen and kidney indices increased significantly (Table S2-1). These results indicated that the diabetic mouse model was successfully established. NCTF (100, 200 mg/kg) significantly reduced FBG, improved glucose tolerance, and effectively alleviated the "three mores" symptoms (polydipsia, polyuria, polyphagia) in diabetic mice (p < 0.05, Tables S2-2 and S2-3). Diabetes is a metabolic disease, and patients are often affected by varying degrees of elevated blood lipids or lipid metabolism disorders (Yan et al. 2019). In this study, serum TG, TC, and LDL-C levels were significantly increased and HDL-C levels significantly decreased in the model group (p < 0.05), whereas TC, TG, and LDL-C levels were significantly decreased and HDL-C levels increased in the NCTF (100, 200 mg/kg) intervention groups (p < 0.05, Figure S2-1). Moreover, NCTF (50, 100, 200 mg/kg) also markedly ameliorated the CR level in ALX-induced diabetic mice (p < 0.05, Figure S2-1). These results show that NCTF has good hypoglycaemic, hypolipidemic, and renoprotective effects in ALX-induced diabetic mice.

Effects of NCTF on high-sugar and high-fat combined STZ-induced T2DM mice

In this study, mice were intraperitoneally injected with STZ (60 mg/kg, twice, three days apart) after being fed a high-sugar, high-fat diet for 40 days (Tang et al. 2020). Seven days after STZ treatment, the random blood glucose (RBG) of the mice exceeded 16.7 mmol/L, indicating that the T2DM model was successfully established. Subsequently, NCTF (50, 100, 200 mg/kg) and metformin (MET, 260 mg/kg) were administered by gavage for 28 days to treat T2DM in mice. T2DM mice showed less activity, slower reactions, sparse hair, and the "three mores" phenomenon (polydipsia, polyuria, polyphagia) compared with normal mice. After treatment with NCTF and MET, the mental status of T2DM mice was clearly improved. NCTF (200 mg/kg) also significantly improved the spleen and kidney indices (p < 0.05, Figure S3-1). However, there was no significant difference in body-weight change between the groups (Table S3-1).
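The OGTT improvements reported above are typically summarized as the area under the glucose-time curve. A minimal sketch of that calculation, using hypothetical time points and glucose values rather than the study's data:

```python
# Trapezoidal area under the glucose curve (AUC) for an OGTT -- a common way
# to summarize glucose-tolerance curves like those reported above. The time
# points and glucose values are hypothetical placeholders, not study data.
import numpy as np

t_min = np.array([0, 30, 60, 120])            # typical OGTT sampling times
model = np.array([14.2, 24.8, 22.1, 18.5])    # mmol/L, hypothetical
treated = np.array([13.8, 20.3, 16.9, 14.1])  # mmol/L, hypothetical

auc_model = np.trapz(model, t_min)
auc_treated = np.trapz(treated, t_min)
print(f"AUC model:   {auc_model:.0f} mmol/L*min")
print(f"AUC treated: {auc_treated:.0f} mmol/L*min")
```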
It was found that NCTF significantly reduced the blood glucose and fasting insulin (FINS) levels of T2DM mice (p < 0.05), improved the insulin resistance index (IRI) (p < 0.05), and effectively alleviated the "three mores" symptoms (Table S3-2, Figure S3-2). Histopathological observation showed that, compared with the model group, NCTF significantly improved pancreatic tissue injury in T2DM mice: pancreatic islet atrophy and vacuolisation were reduced, the structure was clearer, and the number of islets increased (Figure S3-3). These results indicate that NCTF not only lowers blood glucose but also has a certain alleviating effect on pancreatic pathological lesions in T2DM mice.

Long-term insulin resistance can cause adipocytes to produce large amounts of free fatty acids (FFA), leading to dyslipidemia. Continuous dyslipidemia may further induce non-alcoholic fatty liver disease and other complications. In this experiment, serum TG, TC, and LDL-C levels in T2DM mice were significantly increased (p < 0.05), indicating a disorder of glucose and lipid metabolism. These indicators were significantly decreased by NCTF administration through improvement of insulin resistance (p < 0.05, Figure S3-4). PPAR-γ is a key transcription factor involved in lipid metabolism and plays a crucial role in regulating insulin resistance and glucose and lipid metabolism. Compared with the model group, NCTF (200 mg/kg) significantly increased PPAR-γ protein expression in the liver of T2DM mice (p < 0.05, Figure S3-5), indicating that NCTF had a protective effect on liver injury in T2DM mice that may be related to up-regulation of hepatic PPAR-γ expression.

In addition, disorders of glucose and lipid metabolism can induce non-alcoholic fatty liver disease, which can subsequently lead to liver damage. In this study, compared with the normal group, liver homogenate ALT, AST, and ALP levels in T2DM mice were significantly increased (Figure S3-6). Pathological changes in hepatocytes of T2DM mice were also observed microscopically, including glycogen degeneration with loose cytoplasm, lipid vacuolar degeneration, basophilic change, punctate necrosis foci, and lobular hepatocyte hypertrophy. These results indicated that the liver of T2DM mice was damaged. After treatment with NCTF, liver homogenate ALT, AST, and ALP levels in T2DM mice were remarkably decreased (p < 0.05, Figure S3-6), and the pathological changes in liver tissue were significantly improved (p < 0.05, Figure S3-7). Moreover, NCTF (100, 200 mg/kg) remarkably decreased the elevated MDA levels and improved SOD activities compared with the model group (p < 0.05, Figure S3-8). Interleukin-6 (IL-6) plays an important role in the development of liver injury in T2DM mice, and NCTF (50, 100, 200 mg/kg) markedly decreased the IL-6 levels elevated by T2DM (p < 0.05, Figure S3-9). T2DM leads to renal disease, and renal injury reduces the excretion of CR, raising its blood concentration. The experimental results showed that NCTF (200 mg/kg) reduced the CR value and renal index of T2DM mice (p < 0.05, Figure S3-10), indicating that NCTF can protect kidney function in T2DM mice.
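The excerpt does not state how the insulin-resistance index (IRI) was computed; HOMA-IR is one common choice and is shown below purely as an illustration with made-up values:

```python
# HOMA-IR, a common insulin-resistance index: FBG (mmol/L) x FINS (mIU/L) / 22.5.
# Shown for illustration only; the paper does not state which index it used,
# and the values below are hypothetical.

def homa_ir(fbg_mmol_l: float, fins_miu_l: float) -> float:
    return fbg_mmol_l * fins_miu_l / 22.5

print(f"model:   {homa_ir(18.0, 14.0):.2f}")   # hypothetical T2DM mouse
print(f"treated: {homa_ir(11.0, 10.0):.2f}")   # hypothetical NCTF group
```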
Conclusion

Through intervention in two diabetic animal models, NCTF showed significant effects in lowering blood glucose, decreasing serum lipids, and improving insulin resistance, as well as alleviating liver and kidney injury. Therefore, NCTF is expected to be a potential therapeutic agent for diabetes and its complications, although its mechanism needs further study.

Disclosure statement

No potential conflict of interest was reported by the author(s).
2023-07-21T06:17:50.728Z
2023-07-20T00:00:00.000
{ "year": 2024, "sha1": "bdf34e3dd8ac9f70bf30315a15e30916279d086f", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Hypoglycaemic_and_hypolipidemic_activities_of_total_flavonoids_from_i_Nymphaea_candida_i_flowers_on_diabetic_mice/23716945/1/files/41624431.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "672c5b2ba3c575742052194be11dafa63439eb98", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
253141557
pes2o/s2orc
v3-fos-license
Non-Reproducibility of Oral Rotenone as a Model for Parkinson's Disease in Mice

Oral rotenone has been proposed as a model for Parkinson's disease (PD) in mice. To establish the model in our lab and study complex behavior we followed a published treatment regimen. C57BL/6 mice received 30 mg/kg body weight of rotenone once daily via oral administration for 4 and 8 weeks. Motor functions were assessed by RotaRod running. Immunofluorescence studies were used to analyze the morphology of dopaminergic neurons, the expression of alpha-Synuclein (α-Syn), and inflammatory gliosis or infiltration in the substantia nigra. Rotenone-treated mice did not gain body weight during treatment, compared with about 4 g in vehicle-treated mice, which was however the only robust manifestation of drug treatment and suggested local gut damage. Rotenone-treated mice had no deficits in motor behavior, no loss or sign of degeneration of dopaminergic neurons, no α-Syn accumulation, and only mild microgliosis, the latter likely an indirect remote effect of rotenone-evoked gut dysbiosis. Searching for explanations for the model failure, we analyzed rotenone plasma concentrations via LC-MS/MS 2 h after administration of the last dose to assess bioavailability. Rotenone was not detectable in plasma at a lower limit of quantification of 2 ng/mL (5 nM), showing that oral rotenone had insufficient bioavailability to achieve sustained systemic drug levels in mice. Hence, oral rotenone caused local gastrointestinal toxicity, evident as a lack of weight gain, but failed to evoke behavioral or biological correlates of PD within 8 weeks.

Introduction

Parkinson's disease (PD) is the second-most common neurodegenerative disease worldwide after Alzheimer's disease, with increasing prevalence with advancing age [1]. Symptoms of the disease are mainly tremor, movement disorders, and rigidity. Less common symptoms include motor disorders such as dystonia and dysphagia, and non-motor problems such as dementia, anxiety, depression, and pain [1][2][3]. A hallmark feature of PD is a loss of dopaminergic neurons, particularly in the substantia nigra but also in other brain areas, and the accumulation of Lewy bodies in the surviving neurons. These consist mainly of aggregated alpha-synuclein (α-Syn, Snca). In Lewy bodies, aggregated α-Syn is present in a fibrillar form whose formation is promoted by oxidative stress or mitochondrial malfunctions and defective post-translational modifications of the native protein [4,5]. In the brains of Parkinson's disease patients and of rodents in Parkinson's disease models, dysregulation of α-Syn, including oligomerization and fibril formation, is a common manifestation contributed to by lysosomal defects of aggregate removal [6][7][8]. It has been suggested that α-Syn proteins may spread from peripheral sensory and autonomic neurons and behave like prions [9,10]. To date, there is no cure for the disease and its molecular mechanisms are still incompletely understood, in part owing to limitations of available rodent models. Several animal models of PD have been developed in rats and mice. They are mostly genetic models of human mutant synuclein or knockouts of key Parkinson's-associated genes such as Pink1 or Parkin [11][12][13][14]. Alternatively, local or systemic MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) or other dopaminergic neurotoxins have been widely used [15,16].
All of these models replicate parts of human PD pathology, but the behavioral manifestations are mostly subtle and the predictive value for human PD is still limited. Rotenone is a natural ingredient of Leguminosa plants that inhibits complex I of the mitochondrial respiratory chain. It is commercially used as a pesticide and is able to cross the blood-brain barrier owing to its high lipophilicity. The rotenone PD model was originally developed in rats, which showed PD-like phenomena associated with dopaminergic neuron degradation and the formation of Lewy body-like structures in the brain after intravenous administration of the substance [17]. A problem with the model is its high mortality, which hampers reproducibility of the results and conflicts with 3R criteria. To reduce animal suffering and mortality, alternative methods of administration including oral, subcutaneous, and intraperitoneal routes have been investigated and were described as being reproducible and reliable [18][19][20][21][22]. Oral administration has also been established in mice [21,22] and was used as a model in multiple experimental studies investigating novel therapeutic compounds against PD [23][24][25][26][27][28][29][30]. Therefore, we adopted the described high-dose treatment regimen of oral rotenone in mice, exactly following the published protocol, with RotaRod running as the behavioral and tyrosine hydroxylase immunostaining as the biological readouts [21,22]. Our aim was to study long-term behavioral and morphologic outcomes. However, plasma pharmacokinetic studies revealed that the drug was not sufficiently bioavailable, and accordingly mice showed no PD-like phenomena but only a lack of weight gain, likely resulting from high local intestinal drug exposure and toxicity on the gut and microbiome [31][32][33][34][35].

Weight Gain and Health in Rotenone and Vehicle Group

Mice were treated with rotenone (30 mg/kg body weight, p.o.) and vehicle 5 times per week. The health score, including general appearance and eating and drinking behavior, as well as movement and weight gain, were monitored daily with the exception of weekends. Rotenone- and vehicle-treated mice did not show any impairments of well-being throughout the observation time. All mice were healthy and there were no drop-outs, in contrast to other studies showing considerable mortality (up to 50%) under rotenone treatment in mice. Mice were 6-8 weeks old at the onset of treatments, and as expected vehicle-treated mice constantly gained weight during the observation period. In contrast, rotenone-treated mice remained at their starting body weight. Time courses differed significantly between groups, but one-way ANOVA of final body weights did not reach statistical significance, owing to high variability in the rotenone group (Figure 1, Table 1).

Motor Function

Motor function was assessed two times per week during the treatment period using the RotaRod test. The running times (fall-off latencies) did not differ between groups at any time point, indicating that the rotenone treatment did not evoke impairments of motor coordination, endurance, or running motivation. Figure 2 shows the individual running times for both groups during the complete observation period (mean of two runs per test day) as well as the running on the last day of the treatment period of 28 days or 54 days.
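Because the RotaRod was run in accelerating mode (4 to 40 rpm over 5 min, per the Methods below), each fall-off latency maps to a rod speed at failure. A small sketch of that mapping, assuming a linear ramp and an illustrative latency:

```python
# Map a RotaRod fall-off latency to rod speed, assuming the linear 4->40 rpm
# ramp over 5 min described in the Methods. The linear-ramp assumption and
# the example latency are illustrative.

def rpm_at_fall(latency_s: float, v0: float = 4.0, v1: float = 40.0,
                ramp_s: float = 300.0) -> float:
    return min(v1, v0 + (v1 - v0) * latency_s / ramp_s)

print(f"{rpm_at_fall(180):.1f} rpm")  # a mouse falling at 180 s -> 25.6 rpm
```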
Supplementary Figure S1 further depicts the individual time courses of each single run sequentially, and the distribution of all pooled runs in comparison with a former control group of mice, which had similar ages at observation start, also performed replicate runs per test day, and were observed over a similar period. The previous age-matched control mice showed running behavior similar to that of vehicle- and rotenone-treated mice.

Immunofluorescence

In addition to the behavioral analysis, we assessed the effects of rotenone on dopaminergic neurons and potential accumulation of α-Synuclein using immunofluorescence studies of brain sections. Potential changes in astrocytes and microglia in the brain were also investigated by immunofluorescence. There was no accumulation of α-Syn in the substantia nigra or other brain regions in rotenone-treated mice. There was also no loss of dopaminergic neurons in the substantia nigra (as revealed by immunofluorescence analysis of tyrosine hydroxylase) and no immune cell infiltration, in agreement with previous reports [21]. Morphologic features of microglia in the substantia nigra were alike in both groups and mostly agreed with resting-state morphology, but quantitative analysis of CD11b immunoreactivity revealed higher numbers of CD11b-positive cells in rotenone-treated mice, suggesting mild microgliosis (Supplementary Figure S2). Considering the absence of α-Syn and the absence of TH differences in the SN (Figure 3, Supplementary Figure S2), mild microgliosis would agree with a subtle gut-to-brain effect in response to rotenone-mediated disruption of gut homeostasis [36,37].

Rotenone Plasma Concentrations

Since we did not observe differences between the treatment groups concerning health, RotaRod performance, or histology, we assessed plasma concentrations 2 h after the last rotenone dose to reveal a putative pharmacokinetic (PK) failure. Low bioavailability of oral rotenone has been described before [15].
Rotenone concentrations in the plasma of mice were determined by liquid chromatography combined with tandem mass spectrometry. The lower limit of quantification (LLOQ) was 2 ng/mL (5 nM). Concentrations of rotenone were below the detection limit (10× lower than the LLOQ) for all samples except one, which showed a measurable concentration of 0.964 ng/mL, still below the LLOQ. The PK results show low bioavailability of oral rotenone irrespective of its high lipophilicity, owing either to low absorption (unlikely) or to very fast and near-complete first-pass metabolism. Drug levels in plasma were too low for any systemic effect. Considering the large volume of distribution, it cannot be excluded that small amounts might have accumulated in tissue, including the brain, but our "no-weight-gain" data suggest that oral rotenone underwent fast and near-complete first-pass metabolism in the gut and liver, where it caused GI toxicity as described [31,32,34], preventing mice from gaining body weight; it is very unlikely that orally administered rotenone reached the central nervous system.
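A quick cross-check of the unit conversion used above (2 ng/mL ≈ 5 nM), based only on rotenone's molecular weight:

```python
# Cross-check of the stated unit conversion: an LLOQ of 2 ng/mL rotenone
# (C23H22O6, MW 394.42 g/mol) corresponds to roughly 5 nM.
MW_ROTENONE = 394.42  # g/mol

def ng_per_ml_to_nm(conc_ng_ml: float, mw_g_mol: float) -> float:
    # c ng/mL = c ug/L = c*1e-6 g/L, so concentration in nM = 1e3 * c / MW
    return conc_ng_ml * 1e3 / mw_g_mol

print(f"{ng_per_ml_to_nm(2.0, MW_ROTENONE):.2f} nM")    # ~5.07 nM
print(f"{ng_per_ml_to_nm(0.964, MW_ROTENONE):.2f} nM")  # the one sub-LLOQ sample
```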
Figure 3. Immunofluorescence studies showing (from left to right) staining for alpha-Synuclein (α-Syn, red), tyrosine hydroxylase (TH, green) for dopaminergic neurons, glial fibrillary acidic protein (GFAP, green) for astrocytes, and CD11b (green) for microglial cells and myeloid-derived immune cells in the substantia nigra of vehicle- and rotenone-treated mice (54 d). The brain picture indicates the brain region investigated (substantia nigra pars compacta, SNpc), Allen Brain Atlas (https://mouse.brain-map.org/static/atlas, accessed on 3 August 2022). The diagram shows the quantitative analysis of the different proteins. Representative staining from at least n = 3 in the respective groups. Scale bar: 50 µm. * Statistically significant difference between vehicle and rotenone group, p < 0.05.

Discussion

The aim of the study was to establish a temporally well-controlled, dose-dependent, reliable, and reproducible model for Parkinson's disease in mice that would be compatible with 3R criteria and would replicate, at least in part, PD-typical pathology and the motor-function deficits of human PD. Systemic administration of rotenone has been described in several publications as a promising, relatively novel approach to phenocopying the slowly progressive course of the disease in rats and mice. In particular, studies used i.p., s.c., and p.o. administration as well as intranasal and dermal exposure to the drug, with a number of different readouts to assess histologic, biochemical, and in vivo correlates of human Parkinson's disease [21,22,[38][39][40][41][42]. Comparison of the results of these studies reveals high within- and between-study variability even with very similar protocols, but particularly high-dose oral rotenone looked promising and has been used as a PD model in multiple studies exploring drug, diet, stress, or LRRK2 effects in mice [23,[25][26][27][28][29][30]36,39,[43][44][45]. Interestingly, studies addressing drug or diet effects found quite stable reductions in RotaRod running of about 50% and restoration with the candidate drug [23,[25][26][27][28][29][30], whereas studies addressing add-on effects of stress or LRRK2 mutation found minor or no effect of rotenone alone but a serious drop in combination [36,39,45], suggesting aim-dependent biases. Mortality rates are not always reported, but high numbers of dropouts may preclude the most severe cases from final analysis. In addition, pharmacokinetic features likely depend on the route of drug administration, leading to strong variations in rotenone concentrations in plasma or brain. Rotenone's PK parameters have not been systematically compared in rodents, and most rotenone PD studies did not analyze plasma levels of rotenone. In one study where plasma concentrations were determined, levels were undetectable in some cases at the onset of in vivo symptoms [46]. A further study found no rotenone in the brain at 10 mg/kg/d p.o. [36]. It is not clear how PD-like pathology may arise in the absence of measurable plasma and brain concentrations. It has been suggested that oral rotenone disrupts the gut microbiome and intestinal barrier [30,[32][33][34][35] and leads to accumulation of α-Syn in the enteric nervous system (ENS) [31,43], from where it is supposed to spread to the brain via the vagus nerve, which has been experimentally demonstrated by direct injection of α-Syn into the vagus nerve [47][48][49].
Our "no-weight-gain" data agree with rotenone-triggered gut pathology and suggest that rotenone undergoes first-pass metabolism in the gut and liver, in agreement with case reports of rotenone fatalities in humans [50] and liver toxicity in rats [51]. The toxicity of the drug in the gastrointestinal tract and liver likely causes sickness behavior in mice that might manifest as reduced RotaRod running in some studies. Gut dysbiosis, barrier leakage, and local mitochondrial damage may promote α-Syn accumulation in the ENS and may cause gut-to-brain proinflammatory signaling, resulting in microgliosis. According to the hypothesis of Braak [52,53], rotenone-evoked PD-like phenomena are a result of gastrointestinal accumulation and retrograde transmission of α-Syn via the ENS to the brain, which would agree with α-Syn prion-like spreading [54], and the lack of weight gain in our rotenone-treated mice agrees with the autonomic non-motor symptoms of PD. Beyond these effects on body weight, there was no evidence of drug-evoked toxicity in our mice except mild quantitative microgliosis. None of the mice had health problems or died, which is in accordance with previous toxicology studies [55] but does not agree with previous rotenone-PD studies in which up to 50% of animals dropped out [21,56]. Our data further indicate that orally administered rotenone fails to produce measurable plasma concentrations, likely owing to fast first-pass metabolism. Considering its lipophilicity, non-absorption is less likely. In our mice, the toxic effects on the gastrointestinal tract were however not associated with α-Syn accumulation or toxicity to dopaminergic neurons in the brain, either because the effect was too weak or because the time frame was insufficient to allow for spreading of α-Syn to the brain. Consequently, there was also no effect of rotenone on motor behavior in the RotaRod test. This lack of effect after oral rotenone has also been described in another study, which focused mainly on the anticarcinogenic effects of rotenone in a rat model. Similar to our results, the authors described retarded weight gain in rotenone-treated animals but no evidence of rotenone-induced neurotoxicity after oral administration (52 mg/kg body weight for 14 d), and they pointed to low oral bioavailability as the most likely explanation [57]. Further studies found that rotenone-evoked manifestations of neuropathology require add-on stressors such as restraint stress, aging, or LRRK2 mutations [36,39]. It is a weakness of our study that sample sizes were low, particularly for the treatment period from day 28 to day 54. Based on the previous protocol, the study was powered for an observation time of 28 d and was limited by ethical restrictions resulting from assumptions of high mortality. To assess biological effects at 28 d (the previous endpoint), half of the animals had to be euthanized at 28 d, leaving only small groups for observation up to 54 d. Hence, statistical comparisons of behavioral data of the final period are hampered by low sample sizes. Nevertheless, comparison of individual behavioral RotaRod data suggests a mild advantage of rotenone-treated mice versus vehicle-treated mice, possibly owing to the lower body weight, and no difference in comparison with a former control group. Irrespective of the low sample size, our pharmacokinetic studies clearly reveal that rotenone is not bioavailable via the oral route, so the CNS is not directly exposed.
Nevertheless, the majority of oral rotenone-PD studies claimed the model as suitable [21,23,[25][26][27][28][29][30][33][34][35]37,43,44,58,59]. Some of these studies used different solvents, such as sunflower oil 4% or 2% carboxymethylcellulose with 1.25% chloroform, which likely affects absorption [59][60][61][62] but not so much first-pass metabolism. According to SwissADME prediction (http://www.swissadme.ch, accessed on 6 September 2022), GI absorption is per se high, and rotenone is a substrate and inhibitor of cytochrome P450 enzymes. Therefore, the PK is difficult to predict, particularly with once-daily treatment. Nevertheless, other reports used exactly the same treatment protocol as chosen in our study. It is not clear why oral rotenone produces dopaminergic neuron degeneration, α-Syn accumulation, and motor deficits in some studies but not in others. Details of methodology, sample sizes, and methods of randomization and blinding may contribute to the high inter-study variability and low reproducibility. These problems and differences further emphasize the need for more standardized protocols in which the vehicles, the sources of drugs, the composition of drug and vehicle solutions/suspensions, dosing schedules, and the genetic background of the animals are described as precisely as possible to increase comparability and reproducibility of the studies. In addition, rotenone-standardized diets (food pellets) may overcome the problem of fluctuating concentrations and rapid pre-systemic metabolism, and oral rotenone may indeed be useful to study PD-associated autonomic neuropathy of the ENS.

Animals

Male C57BL/6J mice were obtained from Charles River, Sulzfeld, Germany at the age of 6-8 weeks. Animals had free access to food and water and were maintained in climate- and light-controlled rooms (24 ± 0.5 °C, 12/12 h dark/light cycle). All behavioral experiments were performed by an observer blinded to the treatment in a dedicated room with restrictions on sound level and activity. Ethics Statement: Animal experiments adhered to the ethical guidelines for investigations in conscious animals, and the procedures were approved by the local Ethics Committee for Animal Research (Regierungspräsidium Darmstadt, Germany, permit no. FK1136). All efforts were made to minimize animal suffering and reduce the number of animals according to 3R principles.

Reagents

Rotenone was purchased from Sigma-Aldrich (Darmstadt, Germany). For animal experiments, it was suspended in 0.5% carboxymethyl cellulose sodium salt (Sigma-Aldrich, Darmstadt, Germany) with 0.1% Tween-20 (Carl Roth, Karlsruhe, Germany). The suspension was freshly prepared every day. The rotenone and the 0.5% carboxymethyl cellulose sodium salt/0.1% Tween-20 solution were stored in the dark at room temperature.

Animal Treatment

Mice were treated orally with vehicle or rotenone (30 mg/kg body weight) for 5 consecutive days a week and were drug-free on weekends. The treatment period was 28 days for 10 mice in the rotenone group and 6 mice in the vehicle group. At day 28, half of the mice were euthanized for histological analysis and the other half was continued up to 54 days. Based on previous studies, the primary endpoint was 28 d and the study was powered for this endpoint. Owing to the expectation of high mortality, only low sample sizes were ethically approved.
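For orientation, the back-of-envelope dosing arithmetic behind the 30 mg/kg regimen; the 10 mL/kg gavage volume and 25 g body weight are assumptions for illustration, as the protocol does not state them:

```python
# Dosing arithmetic for 30 mg/kg oral rotenone. The 10 mL/kg gavage volume
# and the 25 g body weight are assumptions for illustration only.

dose_mg_per_kg = 30.0
gavage_ml_per_kg = 10.0          # common mouse gavage volume (assumed)
bw_kg = 0.025                    # 25 g mouse (assumed)

dose_mg = dose_mg_per_kg * bw_kg           # 0.75 mg rotenone per animal
volume_ml = gavage_ml_per_kg * bw_kg       # 0.25 mL per animal
suspension_mg_ml = dose_mg / volume_ml     # 3 mg/mL working suspension
print(f"{dose_mg} mg in {volume_ml} mL -> {suspension_mg_ml} mg/mL")
```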
Behavioral data of the vehicle group were supported with former behavioral data of a control group, which was similar in age, RotaRod testing frequency, and observation period, and had an identical C57BL/6 genetic background. The data were available from a previous study investigating motor behavior not related to PD. General health and weight gain were monitored daily before drug administration. Twice a week, the mice were subjected to a RotaRod test to investigate motor functions and coordination.

RotaRod Test

Motor coordination, endurance, and motivation were assessed with an accelerating-speed RotaRod for mice (Ugo Basile, Comerio, Italy) in an accelerating mode with speed increasing from 4 to 40 rpm over 5 min. All mice had four training sessions before the first day of the experiment. Two running series were performed every test day.

Determination of Rotenone Plasma Concentrations

Animals were killed by CO2 and cardiac puncture 2 h after the last rotenone dose. For plasma preparation, blood was collected in EDTA tubes and centrifuged at 2000× g for 90 s. After centrifugation, the plasma was transferred to a fresh tube and stored at −80 °C until further analysis. Rotenone was quantified in plasma samples using an LC-MS/MS method. All solvents were LC-MS grade. A gradient elution with 10 mM ammonium formate + 0.1% formic acid (A) and acetonitrile with 0.0025% formic acid (B) was run on an Agilent 1200 LC system (Agilent, Waldbronn, Germany) with a flow rate of 400 µL/min using a Zorbax C8 Eclipse Plus RRHD column (50 × 2.1 mm, 1.8 µm, Agilent, Waldbronn, Germany) with a precolumn for analyte separation in 7.5 min. MS/MS analysis was performed on a QTRAP 5500 triple quadrupole mass spectrometer (Sciex, Darmstadt, Germany) in positive ion mode, using the following transitions: m/z 395.1 > 213.0 (quantifier) and 395.1 > 192.0 (qualifier) for rotenone and m/z 247.2 > 204.1 (quantifier) and 247.2 > 202.0 (qualifier) for carbamazepine-D10 as the internal standard. MRM and MS parameters were optimized to achieve the highest signal yield. The internal standard (20 µL, 5 ng/mL in MeOH) was added to 20 µL of thawed plasma, which was then purified using liquid-liquid extraction with ethyl acetate. The organic phase was dried under nitrogen at 45 °C and reconstituted with 50 µL methanol, and 10 µL were injected into the LC system. Calibration standards covering a range from 2.0 up to 200.0 ng/mL were created by adding appropriate working solutions to K3EDTA plasma not containing any rotenone. The absence of rotenone in the plasma used for the calibration curve was verified by analyzing blank (no analyte or IS) and zero (no analyte) samples. Method verification included analysis of spiked plasma samples as a calibration curve at levels from 2 ng/mL to 200 ng/mL in combination with quality control samples at low, medium, and high levels. Quality control measures were performed during every sample run and included two sets of low-, medium-, and high-level quality control samples. The acceptance criterion for accuracy was set to 20%. The lower limit of quantification (LLOQ) in 20 µL plasma, defined as a signal-to-noise ratio of ≥10, was 2 ng/mL (S/N ratio determined for the LLOQ: 32.8). The limit of detection (LOD) was defined by a signal-to-noise ratio of ≥3.
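A minimal sketch of the calibration workflow described above, fitting analyte/internal-standard peak-area ratio against nominal concentration and checking back-calculated QCs against the ±20% accuracy criterion; the area ratios below are hypothetical:

```python
# Linear calibration fit and QC back-calculation for LC-MS/MS quantification.
# Nominal levels mirror the 2-200 ng/mL range described above; the peak-area
# ratios are hypothetical placeholders, not measured data.
import numpy as np

nominal = np.array([2.0, 5.0, 20.0, 50.0, 100.0, 200.0])        # ng/mL
ratio = np.array([0.021, 0.052, 0.209, 0.515, 1.04, 2.06])      # hypothetical

slope, intercept = np.polyfit(nominal, ratio, 1)

def back_calc(area_ratio: float) -> float:
    return (area_ratio - intercept) / slope

for qc_nominal, qc_ratio in [(6.0, 0.063), (80.0, 0.82), (160.0, 1.66)]:
    found = back_calc(qc_ratio)
    accuracy = 100.0 * found / qc_nominal
    print(f"QC {qc_nominal:>5.0f} ng/mL -> {found:6.1f} ng/mL "
          f"({accuracy:.0f}%, pass={abs(accuracy - 100.0) <= 20.0})")
```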
Data acquisition and evaluation were performed using Analyst 1.7.1 and MultiQuant software 3.0.3 (both Sciex, Darmstadt, Germany).

Immunofluorescence

Mice were euthanized with CO2, blood was collected by cardiac puncture, and mice were then cardially perfused with 1× PBS followed by 2% PFA in 1× PBS for fixation. The brains were collected for immunofluorescence staining. They were placed in 2% PFA in 1× PBS for 24 h for post-fixation, then transferred to 20% sucrose solution for at least five hours and stored in 30% sucrose solution overnight at 4 °C for cryoprotection. The tissues were then embedded in cryomedium (Tissue-Tek O.C.T. Compound, Sakura Finetek Europe B.V., Alphen aan den Rijn, The Netherlands), frozen on dry ice, and cut into 16 µm thick cryosections (cross sections) at −21 °C. The slides were stored at −80 °C until histological staining. Images were analyzed in FIJI ImageJ. RGB images were converted to 8-bit images. Brightness and contrast were adjusted if necessary. The pseudo-flat-field correction plugin was used to adjust uneven illumination, followed by background subtraction. Images were converted to binary images using the IJ-IsoData threshold algorithm for tyrosine hydroxylase and CD11b and Yen's algorithm for α-Syn and GFAP. Immunofluorescent particles were analyzed using the particle analyzer. Binary masks are presented as Supplementary Figure S2. The percentage area was used for group-wise comparisons (a rough open-source re-expression of this pipeline is sketched at the end of this section).

Data Analysis

Statistical evaluation was performed using GraphPad Prism 9 (GraphPad Software Inc., San Diego, CA 92108, USA). Data are presented as mean ± SD. Data were compared either by univariate analysis of variance (ANOVA) with subsequent t-tests employing a Dunnett's correction for multiple comparisons versus vehicle-treated mice or baseline, or by Student's t-test. Non-parametric alternatives were used for small sample sizes (immunofluorescence). Time courses of body weights were submitted to ANOVA for repeated measurements (rmANOVA). For all tests, a multiplicity-adjusted probability value of p < 0.05 was considered statistically significant.

Funding: This study was supported by the Deutsche Forschungsgemeinschaft (CRC1039 A03 to IT). The funder had no role in the study design, data analysis, or decision to publish.

Institutional Review Board Statement: Animal experiments adhered to the ethical guidelines for investigations in conscious animals, and the procedures were approved by the local Ethics Committee for Animal Research (Regierungspräsidium Darmstadt, Germany, permission no. FK1136).

Conflicts of Interest: The authors have no financial or other conflicts of interest.
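As referenced above, a rough open-source re-expression of the immunofluorescence quantification pipeline (illumination/background correction, IsoData thresholding, percentage-area measurement) using scikit-image instead of the FIJI/ImageJ plugins actually used; the file name is a placeholder and the function choices are analogous, not identical:

```python
# Analogous quantification pipeline in scikit-image: background subtraction,
# IsoData-style thresholding (as used for TH and CD11b), and percentage-area
# measurement. "section_TH.tif" is a placeholder file name.
from skimage import io, filters, restoration

img = io.imread("section_TH.tif", as_gray=True).astype(float)

# Rolling-ball background subtraction, analogous to ImageJ's implementation
background = restoration.rolling_ball(img, radius=50)
corrected = img - background

# IsoData threshold -> binary mask of immunoreactive pixels
thresh = filters.threshold_isodata(corrected)
mask = corrected > thresh

percent_area = 100.0 * mask.sum() / mask.size
print(f"immunoreactive area: {percent_area:.2f}%")
```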
2022-10-27T15:23:33.503Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "0ca0760445ab917610eaf88412a60c9b2d2e77f3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7b9f8a7d00e18c2779ce1214a5d976a7e66f1996", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
263231996
pes2o/s2orc
v3-fos-license
Construction of a plasmid-free L-leucine overproducing Escherichia coli strain through reprogramming of the metabolic flux

Background

L-Leucine is a high-value amino acid with promising applications in the medicine and feed industries. However, the complex metabolic network and intracellular redox imbalance in fermentative microbes limit their efficient biosynthesis of L-leucine.

Results

In this study, we applied rational metabolic engineering and a dynamic regulation strategy to construct a plasmid-free, non-auxotrophic Escherichia coli strain that overproduces L-leucine. First, the L-leucine biosynthesis pathway was strengthened through multi-step rational metabolic engineering. Then, a cooperative cofactor utilization strategy was designed to ensure redox balance for L-leucine production. Finally, to further improve the L-leucine yield, a toggle switch for dynamically controlling sucAB expression was applied to accurately regulate the tricarboxylic acid cycle and the carbon flux toward L-leucine biosynthesis. Strain LEU27 produced up to 55 g/L of L-leucine, with a yield of 0.23 g/g glucose.

Conclusions

The combination of strategies can be applied to the development of microbial platforms that produce L-leucine and its derivatives.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13068-023-02397-x.

Background

L-Leucine is a valuable functional amino acid that is involved in many processes of cellular physiology and metabolism, including signaling in protein metabolism, the maintenance of glucose homeostasis, and the regulation of lipid metabolism [1,2]. The metabolic functions of L-leucine in regulating muscle protein synthesis and insulin release have high commercial value in the feed industry [3,4]. Therefore, the increasing market demand for this amino acid has stimulated interest in the development of cost-effective approaches for its production on an industrial scale [5]. Currently, mutagenesis and metabolic engineering strategies are the common methods used to engineer cellular factories for L-leucine production [6]. However, existing L-leucine fermentation systems are still too inefficient to achieve large-scale titers and economic competitiveness. Therefore, the construction of a superior microbial cell factory for L-leucine production is urgently required to meet future market demands for a sustainable supply. By applying the strategies of systems metabolic engineering, researchers have made some progress in constructing L-leucine-producing strains with Corynebacterium glutamicum as the "microbial chassis" [7]. Stemming from the precursor pyruvate, the L-leucine biosynthesis pathway involves seven reactions that are regulated by different mechanisms, including transcriptional attenuation and substrate inhibition [8]. In general, the superior L-leucine-producing performance of C. glutamicum is achieved through the promotion of glucose uptake, deletion of competitive consumption, and enhancement of the L-leucine biosynthesis pathway [9,10]. Vogt et al.
achieved an L-leucine titer of 23.7 g/L in approximately 72 h by increasing both the supply of the precursor and the feedback resistance of 2-isopropylmalate synthase [11]. However, despite the accumulation of L-leucine achieved with these strategies, the low titer and/or yield still limits translation of the process to the industrial scale. In addition, NADPH is required in the L-leucine biosynthesis pathway, and cofactors must be balanced to ensure efficient synthesis of L-leucine [12]. Wang et al. achieved an L-leucine production of 23.31 g/L by converting the cofactor requirements of L-leucine biosynthesis and glutamate dehydrogenase from NADPH to NADH [13]. However, by-products usually accumulated along with L-leucine, indicating that although redox balance is essential for maintaining strong flux in the L-leucine biosynthesis pathway, the by-products inevitably result in a loss of productivity. Because the L-leucine biosynthesis pathway is long and sophisticated, and is interlaced with intracellular redox imbalance, there is an urgent need for more efficient approaches to constructing redox-balanced strains.

In addition to optimizing the carbon flux of the L-leucine biosynthesis pathway, enriching the precursor pool is another pivotal requirement for overproduction of the amino acid [14]. Because pyruvate and acetyl-CoA, the precursors of L-leucine, are synthesized in a manner coupled to cell growth, an undesired trade-off between biomass and product might be a crucial issue affecting the amino acid yield [15]. The promotion of L-leucine biosynthesis could be achieved by blocking or weakening the metabolic flux in the tricarboxylic acid (TCA) cycle. Prior efforts to inhibit the TCA cycle largely focused on reducing the activity of citrate synthase [11]. However, such a metabolic valve regulates the TCA cycle statically, resulting in the premature loss of cell biomass, which is not the best strategy for L-leucine production. Recently, a quorum sensing (QS) circuit independent of inducers was applied for the efficient production of a variety of chemicals and dynamically changed the distribution of the carbon flux in the metabolic process [16]. In another study, inositol production was effectively increased through the design and use of a pathway-independent genetic control module to dynamically regulate the metabolic flux of glycolysis and redistribute the cellular metabolic network [17]. In addition, Jiang et al. (2019) successfully reduced the production cost of L-citrulline by combining modular engineering strategies with a dynamic regulatory circuit to decouple the "growth mode" and "production mode" [18]. However, there are not many published studies on the use of dynamic regulation to increase the pyruvate supply and thereby the L-leucine yield.

Escherichia coli, which carries the advantages of being easy to cultivate and having a short fermentation process, is another potential host for the industrial production of chemicals [19,20]. In the present study, the combination of systems metabolic engineering and a dynamic regulation strategy was successfully implemented in E.
coli to construct a strain capable of producing a high titer of L-leucine. First, the metabolic flux toward L-leucine biosynthesis was improved through the introduction of a 2-isopropylmalate synthase variant and the overexpression of biosynthetic genes. The L-leucine transport system (including importer and exporter) was reprogrammed to achieve efficient L-leucine efflux. Next, redox balance in the metabolic network was achieved through the regeneration of NADPH and by changing the cofactor preference of L-leucine dehydrogenase, which resulted in high-level synthesis of the target product. Finally, genetic control of the alpha-ketoglutarate dehydrogenase (sucAB) operon by QS was designed to dynamically regulate the glycolytic flux and reconstruct the metabolic flux toward L-leucine. As a result of this modular combination strategy, the engineered E. coli strain (LEU27) produced 55 g/L of L-leucine in a 5 L bioreactor, with a maximum yield of 0.23 g/g glucose. This is the highest reported level of L-leucine biosynthesis in E. coli to date and should be invaluable for the industrial-scale production of this high-value amino acid.

Multi-step rational metabolic engineering for optimizing the L-leucine biosynthesis pathway

In E. coli, L-leucine biosynthesis starts from the precursors 2-ketoisovalerate and acetyl-CoA, and proceeds through a series of reactions catalyzed by four enzymes encoded by the genes leuA, leuB, leuC, and leuD, respectively (Fig. 1). Among them, 2-isopropylmalate synthase (IPMS), the key enzyme encoded by leuA, has been proven to be subject to feedback inhibition by L-leucine [21]. Different feedback-resistant (fbr) IPMS mutants have been reported, including those of C. glutamicum and E. coli [11,22]. To eliminate the inhibitory effect of L-leucine on IPMS, the leuA fbr -cgb gene derived from C. glutamicum and the leuA fbr -ecj gene derived from E. coli were transferred into E. coli W3110 using the trc promoter (P trc )-driven expression system, resulting in strains LEU01 and LEU02, respectively. These strains were tested by shake-flask fermentation, and their growth and accumulation of L-leucine are shown in Fig. 2A. Compared with the wild-type strain, which produced almost no L-leucine, the amounts produced by the LEU01 and LEU02 strains were higher, at 3 and 3.55 g/L, respectively. These results suggest that the introduction of the leuA fbr -cgb and leuA fbr -ecj genes into the host played a positive role in releasing the feedback regulation by L-leucine, and this strategy provides a feasible solution to alleviate the bottleneck in its metabolic flux. Compared with gene overexpression through plasmid cloning, chromosomal integration avoids the need for antibiotics and provides more stable gene expression [23]. Because strain LEU02 had accumulated slightly more L-leucine than strain LEU01, the feedback-resistant leuA fbr -ecj gene driven by P trc was introduced into E. coli W3110 ΔlacI, resulting in strain LEU03. As expected, the L-leucine titer of strain LEU03 reached 1.3 g/L (Fig. 2B). Optimization of the enzyme expression level can effectively balance metabolism by optimizing gene copy number, thus improving the titer of target chemicals [24]. To enhance the expression of the rate-limiting enzyme, we attempted to integrate multiple copies of leuA fbr -ecj into the E.
coli genome. Therefore, one or two additional copies of leuA fbr -ecj were introduced into LEU03 successively, resulting in strains LEU04 and LEU05. The accumulation of L-leucine showed an increasing trend with increasing gene copy number, with strain LEU05, carrying three copies of leuA, producing slightly more of the amino acid (up to 2.1 g/L) than strain LEU04. The shake-flask results suggested that leuA fbr -ecj overexpression could increase the metabolic flux toward L-leucine synthesis, laying a foundation for further improvement of the amino acid titer.

The non-rate-limiting genes in the L-leucine biosynthesis pathway of E. coli are organized in the leuBCD operon [25]. To strengthen the metabolic flux to L-leucine, we integrated the natural leuBCD operon into the genome of strain LEU05, generating strain LEU06. The amount of L-leucine accumulated by strain LEU06 increased from 2.1 to 4.8 g/L (Fig. 2C). These results showed that strengthening the natural metabolic flux was effective in improving the synthesis of L-leucine. 2-Ketoisovalerate, an immediate precursor of L-leucine biosynthesis, is generated from the pyruvate flux by acetolactate synthase [26,27]. The acetohydroxy acid synthase is feedback-inhibited by 2-ketoisovalerate [28]; therefore, releasing this substrate feedback inhibition might be a favorable method to achieve better L-leucine production efficiency. The ilvIH fbr operon (genes encoding feedback-resistant acetolactate synthase) was introduced into the yjiT locus of LEU06, generating strain LEU07. In addition, to increase the level of 2-ketoisovalerate, the ilvIH and ilvEDC genes (encoding branched-chain-amino-acid aminotransferases) were integrated into the ylbE and yjiV loci of strain LEU07, resulting in strains LEU08 and LEU09, respectively. Consequently, the L-leucine titer reached up to 6.8 g/L, 41.67% higher than that produced by LEU06 (Fig. 2C), indicating that enriching the 2-ketoisovalerate pool promotes L-leucine biosynthesis. Overexpression of key enzyme genes in the biosynthesis pathway from pyruvate to L-leucine, together with optimized gene expression, effectively achieved the accumulation of the target product.

Removal of the transcriptional attenuation of leuABCD expression

In addition to the overexpression of crucial enzyme genes in the L-leucine biosynthesis pathway, removal of the control exerted by transcriptional attenuation is also an effective strategy for regulating the distribution of the carbon flux. The leuABCD operon is regulated by leucine-mediated transcriptional attenuation in C. glutamicum, and accumulation of the amino acid could be increased by replacing the promoter and attenuator sequences [29]. To evaluate the feasibility of this strategy in E. coli, the promoter and attenuation regions of the leuABCD operon in strain LEU09 were replaced with P trc , resulting in strain LEU10. The results showed that removal of the attenuation of leuABCD raised the L-leucine titer to 7.35 g/L, implying that replacement of the attenuator sequence and further enhancement of the L-leucine biosynthesis genes were effective in improving synthesis of the amino acid (Fig. 2D).
Modification of the L-leucine transport system

Eliminating the reabsorption of the product and promoting its efflux enables its continuous synthesis in the cell, which is essential for the efficient production of chemicals [30]. It is also worth noting that efficient L-leucine efflux further alleviates feedback inhibition by the intracellular product. Previous studies have demonstrated that the L-leucine importers are controlled by the leucine-specific-binding protein (LivK) and the Leu/Ile/Val-binding protein (LivJ), and that the gene coding for the leucine efflux protein (yeaS) participates in the export system of E. coli [31][32][33]. Therefore, we sequentially deleted the livK and livJ genes in strain LEU10 to generate strains LEU11 and LEU12. The L-leucine production of strain LEU12 was slightly higher than that of the other strains, at 8.45 g/L, indicating that eliminating reabsorption of the target product had a positive effect on its accumulation (Fig. 3A).

In addition, the branched-chain amino acid exporters BrnF and BrnE are the carriers of the L-leucine export system in C. glutamicum [34]. To evaluate the effects of yeaS, its homolog azlC (encoding a branched-chain amino acid permease from C. glutamicum), and the brnFE genes in E. coli, these P trc -driven genes were, respectively, integrated into the livJ locus of strain LEU11, resulting in strains LEU13, LEU14 and LEU15. Compared with strain LEU12, integration of yeaS brought no further improvement in L-leucine production, whereas integration of the brnFE genes led to a 20.12% increase in the titer, which reached 10.15 g/L. Meanwhile, overexpression of the heterologous azlC gene resulted in a slight improvement in the L-leucine titer (Fig. 3B). brnQ, which encodes a branched-chain amino acid transporter, is responsible for the uptake of extracellular branched-chain amino acids. Therefore, we further inserted the brnFE genes into the brnQ locus of strain LEU14 to construct strain LEU16 for subsequent genetic modification to increase the exporters. The above results reveal that an efficient transport system plays a key role in further improving the L-leucine titer. By eliminating the reuptake of L-leucine and introducing efficient exporters, the loss of carbon source was effectively avoided and intracellular substrate inhibition was further reduced.

Fig. 3 Modification of the L-leucine transport system. A Elimination of L-leucine reabsorption; B overexpression of the L-leucine export system genes

Cooperative utilization of cofactors for efficient L-leucine biosynthesis

Intracellular redox balance is a crucial factor in the overproduction of L-leucine, as two molecules of NADPH are consumed in the biosynthesis of one molecule of the amino acid in E. coli, while the glycolysis pathway produces excess NADH to maintain cell metabolism [35]. Overexpression of the pntAB genes (coding for NAD(P) transhydrogenase), which increases intracellular NADPH levels, has been successfully applied to the efficient synthesis of NADPH-dependent products [36,37]. Besides improving the availability of NADPH, changing the cofactor demand from NADPH to NADH would also be an effective strategy to ensure a balance of intracellular cofactors [38]. We speculate that this reaction is a crucial rate-limiting step for the synthesis of L-leucine. Therefore, we hypothesized that a cooperative
Accordingly, the pntAB genes were introduced into the yjiP locus of strain LEU16 to construct strain LEU17, and the l-leucine titer increased to 11.8 g/L. Wang et al. replaced the endogenous glutamate dehydrogenase in C. glutamicum with the NADH-dependent glutamate dehydrogenase from Bacillus subtilis to improve the cofactor balance of the biosynthetic pathway, thereby increasing the yield of l-leucine [13]. Following that precedent, the rocG gene from B. subtilis was inserted at the gltB locus of strain LEU17, generating strain LEU18. Unfortunately, this substitution of glutamate dehydrogenase in E. coli did not increase the l-leucine titer, showing that modifying the coenzyme requirement of glutamate dehydrogenase had no significant effect on l-leucine synthesis. The reason may be that the carbon flux toward l-glutamic acid synthesis in the metabolic network is relatively weak, so that only a small amount of NADH is consumed; this does little for the balance of intracellular cofactors and therefore has no obvious promoting effect on l-leucine biosynthesis. It has previously been reported that engineering of leucine dehydrogenase can improve chemical production [39]. To test the rationale of balancing cofactors to improve l-leucine production, we introduced heterologous NADH-dependent l-leucine dehydrogenases: the Esldh gene from Exiguobacterium sibiricum and the Bcldh gene from Bacillus cereus were inserted at the ilvE locus of strain LEU17, generating strains LEU19 and LEU20, respectively. The shake flask fermentation results showed that the l-leucine titer of strain LEU19 did not increase. Surprisingly, the l-leucine titer of LEU20 increased significantly, to 16.1 g/L, which was 36.44% higher than that of the control strain (Fig. 4A). Furthermore, the NADPH/NADP+ level of strain LEU20 was slightly higher than that of strain LEU16 (Fig. 4B). These results indicate that overexpression of the pntAB genes and the substitution of the l-leucine dehydrogenase from B. cereus significantly improved the l-leucine titer and balanced the intracellular redox state.
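To keep the cofactor-engineering lineage straight, here is a minimal sketch that records the strain derivations described above as a plain data structure; the strain names, loci, and genes come from the text, while the dictionary layout and helper function are our own illustration:

# Parent strain and modification for each cofactor-engineering derivative
# described in the text (illustrative bookkeeping only).
lineage = {
    "LEU17": ("LEU16", "pntAB (NAD(P) transhydrogenase) integrated at yjiP"),
    "LEU18": ("LEU17", "rocG from B. subtilis inserted at gltB"),
    "LEU19": ("LEU17", "Esldh from E. sibiricum inserted at ilvE"),
    "LEU20": ("LEU17", "Bcldh from B. cereus inserted at ilvE"),
}

def history(strain):
    """Print the construction path of a strain back to its recorded root."""
    while strain in lineage:
        parent, change = lineage[strain]
        print(f"{strain} <- {parent}: {change}")
        strain = parent

history("LEU20")
# LEU20 <- LEU17: Bcldh from B. cereus inserted at ilvE
# LEU17 <- LEU16: pntAB (NAD(P) transhydrogenase) integrated at yjiP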
Suppression of the TCA cycle with a toggle switch
Although the strain constructed to this point achieved a considerable level of l-leucine accumulation through systems metabolic engineering, carbon source was still being diverted to cell growth alongside product synthesis, which lowers the yield and increases industrial production cost. Traditionally, an engineering strategy for increasing acetyl-CoA is to attenuate citrate synthase activity using a weaker promoter. However, statically regulated protein expression inevitably imposes a metabolic burden on cell growth [11]. To address the low l-leucine yield, we attempted to regulate the TCA cycle through toggle-switch control of sucAB gene expression, based on the dynamic regulatory circuit of the Esa quorum-sensing (QS) system of Pantoea stewartii, thereby pulling carbon flux toward the biosynthetic pathway of the target product. The transcriptional regulator EsaR I70V bound the PesaS promoter and activated transcription. With the accumulation of the signal molecule 3-oxohexanoyl-homoserine lactone (AHL), the PesaS promoter was inactivated; that is, sucAB transcription was arrested. The rate of AHL accumulation in the regulatory circuit is determined by the strength of esaI gene expression, so promoters of different strengths set the time at which the regulatory system switches and, in turn, the appropriate level of sucAB expression. To prevent the carbon overflow from pyruvate that can occur while the TCA cycle is being weakened, we deleted the poxB, pflB, and ldhA genes in strain LEU20, generating strains LEU21, LEU22 and LEU23 in turn.

Subsequently, the esaR gene (controlled by the strong promoter PapFAB104) was introduced into the yeeP locus of strain LEU23, and the promoter of sucAB was separately replaced with PesaS, generating strains LEU24 and LEU25, respectively. Next, esaI genes driven by promoters and RBSs of different strengths (PbS1, PbS2, PbS3, PbS4 and PbS5) were introduced into strain LEU25 to tune AHL production, generating strains LEU26-30 [40]. As a result, the l-leucine production of strain LEU27 increased slightly, to 16.25 g/L, and, surprisingly, its conversion rate increased significantly, by 42.86%, to 0.2 g/g glucose (Fig. 5). The LEU27 strain showed a decrease in cell density at 24 h, which eliminated the unwanted transfer of carbon flux from pyruvate into the TCA cycle. The increase in the yield of strain LEU27 indicates that cell growth and l-leucine production can be balanced by turning off the expression of the sucAB genes at a more appropriate switching time.

It is well known that l-leucine production is coupled to growth, so balancing the carbon flux between the two is an important and difficult point in the construction of l-leucine production strains. In the later stage of cell growth, the sucAB switch was turned off, which effectively inhibited the metabolic flux through the TCA cycle and redirected the excess carbon toward l-leucine synthesis.
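The reported yield jump can be back-calculated from the figures quoted above; a short sketch (our own arithmetic, not the authors'):

# Reported for LEU27: conversion rate increased by 42.86% to 0.20 g/g glucose.
improved_yield = 0.20      # g l-leucine per g glucose
relative_gain = 0.4286     # 42.86%

baseline_yield = improved_yield / (1 + relative_gain)
print(f"implied pre-switch yield: {baseline_yield:.3f} g/g")   # ~0.140 g/g

# Sanity check that the recovered baseline reproduces the quoted gain:
print(f"relative gain: {(improved_yield - baseline_yield) / baseline_yield:.2%}")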
Fed-batch production of l-leucine in a bioreactor
To further evaluate the fermentation performance of the engineered strains, LEU23 and LEU27 were separately cultivated in 5 L bioreactors for fed-batch fermentation. As shown in Fig. 6, strain LEU27 produced 55 g/L of l-leucine, with a yield of 0.23 g/g glucose. By comparison, the l-leucine titer and yield of strain LEU23 were 54.6 g/L and 0.17 g/g glucose, respectively. These fed-batch fermentation data demonstrated that strain LEU27 accumulated slightly more l-leucine than strain LEU23. Notably, the fermentation supernatant of strain LEU27 contained almost no other detectable branched-chain amino acids, which should effectively reduce the cost of separation and extraction in the downstream processes of large-scale industrial production.

Conclusion
This research article outlines the methods we used to construct a plasmid-free, nonauxotrophic, l-leucine-overproducing E. coli strain. The metabolic engineering strategies included (1) strengthening of l-leucine biosynthesis; (2) enrichment of precursor pools; (3) optimization of the l-leucine transport system; and (4) cooperative utilization of cofactors. A dynamic switch regulating the sucAB node was designed to weaken the metabolic carbon flux of the TCA cycle and further increase the l-leucine yield. To the best of our knowledge, strain LEU27 has produced the highest l-leucine titer (55 g/L) reported to date. Our strain-engineering strategy provides a methodology for the construction of microbial cell factories capable of producing high titers of l-leucine or related products.

Fig. 5 Effects of the dynamic regulation of the TCA cycle. A Knockout of by-products; B Effects of the dynamic regulation of sucAB operon expression on l-leucine production and biomass

Bacterial strains and plasmids
The strains constructed in this study are listed in Table 1. E. coli W3110 was used as the starting strain, whereas E. coli JM109 was used as the cloning host for plasmid construction. The pREDCas9 and pGRB plasmids and the CRISPR/Cas9 gene editing system were used for constructing the various E. coli strains.
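Before the editing protocol detailed in the next section, a minimal sketch of the spacer-selection step that underlies pGRB construction: Cas9 from S. pyogenes cuts where a 20 nt protospacer is immediately followed by an NGG PAM, and the 20 bp spacer cloned into pGRB matches that protospacer. The demo sequence below is a made-up placeholder, not the real ldhA locus:

def find_spacers(seq, length=20):
    """Return (position, spacer, PAM) for every NGG PAM with room for a spacer."""
    seq = seq.upper()
    hits = []
    for i in range(length, len(seq) - 2):      # i indexes the N of the N-G-G PAM
        if seq[i + 1:i + 3] == "GG":
            hits.append((i - length, seq[i - length:i], seq[i:i + 3]))
    return hits

# Hypothetical target-gene fragment (placeholder sequence).
demo = "ATGCTGGATAAACAGCGCATTGCGTTAGTTACCGGCGCAAGGTTTCTGA"
for pos, spacer, pam in find_spacers(demo):
    print(f"pos {pos}: spacer {spacer}  PAM {pam}")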
Genetic manipulations and culture conditions
Gene deletion and integration in E. coli were performed using standardized protocols of the CRISPR/Cas9 gene editing method [41,42]. The primers for gene manipulation are listed in Additional file 1: Table S1. Herein, we describe the deletion of the lactate dehydrogenase A (ldhA) gene as an example. First, the primers (gRNA-ldhA-S and gRNA-ldhA-A) were annealed to form dsDNA, which included a 20 bp complementary sequence and flanking sequences homologous to the pGRB backbone. Then, the pGRB-ldhA plasmid was constructed through homologous recombination of the dsDNA and the linearized vector. The donor DNA-ldhA fragment was obtained by fusing the upstream homologous arm (amplified with primers UP-ldhA-S and UP-ldhA-A) and the downstream homologous arm (amplified with DN-ldhA-S and DN-ldhA-A). The DNA-ldhA fragment and pGRB-ldhA were introduced into pREDCas9-containing cells via electrotransformation, and the transformed cells were cultured on Lysogeny Broth (LB) agar plates (supplemented with spectinomycin and ampicillin) at 30 °C. The bacterial suspension was cultured in LB medium for 16-18 h at 30 °C, and positive single colonies were verified by colony PCR. To cure the plasmid expressing the targeted gRNA, the positive recombinant was cultured in LB containing 0.2% l-arabinose for 14 h. Then, the bacterial culture was incubated for a further 10 h in a 42 °C shaking incubator to eliminate the pREDCas9 plasmid. Finally, the donor DNA fragment carrying the target modification was incorporated into the host genome by chromosomal integration. The same procedures were used for the construction of all the other strains.

Fermentation in shake flasks
The engineered strains were first cultivated on agar slants and then transferred into 30 mL of seed medium in a shake flask and cultured at 37 °C with shaking at 200 rpm. The seed medium was composed of (per liter): KH2PO4 1.2 g, yeast extract 10 g, peptone 5 g, MgSO4·7H2O 0.5 g, MnSO4 10 mg, FeSO4·7H2O 10 mg, VH 0.3 mg, VB1 1.3 mg, and glucose 20 g. Then, a 15% inoculum of the seed culture was transferred to a 500 mL baffled flask containing fermentation medium composed of (per liter): KH2PO4 2 g, yeast extract 2 g, peptone 4 g, sodium citrate dihydrate 1 g, MgSO4 0.7 g, MnSO4 0.1 g, VH 0.2 mg, FeSO4 0.1 g, VB1 0.8 mg, and glucose 20 g. Using phenol red as the pH indicator, NH4OH (25%, v/v) was added to the culture medium whenever its pH fell below 7. When the culture reached a state of sugar depletion, a glucose solution (60%, w/v) was provided intermittently under aseptic conditions.

Fermentation in a 5 L bioreactor
The activated strains were first cultured in a bioreactor containing 2 L of seed medium. When the absorbance at 600 nm (OD600) of the seed culture broth reached 11-15, a 15% inoculum was transferred to a 5 L bioreactor, and the temperature was set to 37 °C. The fermentation medium of the bioreactor contained (per liter): K2HPO4 7 g, yeast extract 2 g, citric acid 2 g, (NH4)2SO4 3 g, MnSO4·7H2O 10 mg, FeSO4·7H2O 30 mg, l-methionine 1 g, MgSO4·7H2O 1 g, VBx 0.5 mg, VH 1 mg, and glucose 10 g. During the fermentation process, the dissolved oxygen level was maintained at 20% by controlling the aeration rate and agitation speed. The pH was controlled at 6.5 through the automatic feeding of ammonium hydroxide (25%, v/v). When the sugar in the medium was exhausted, glucose (80%, w/v) was added automatically so that its concentration did not exceed 3 g/L.
Analytical methods
During the fermentation process, cell density was monitored by measuring the OD600 with an ultraviolet spectrophotometer. The glucose concentration was measured using an SBA-40C biosensor (Shandong Province Academy of Sciences, China). The l-leucine content was determined by high-performance liquid chromatography, with an acetonitrile/water mixture (50:50, v/v) and 50 mM sodium acetate used to prepare the mobile phase. The data presented in this study represent the mean and standard deviation of three independent cultures.

Fig. 2 Optimization of the pathway for l-leucine biosynthesis and cell growth. A Introduction of 2-isopropylmalate synthase (IPMS)-encoding genes from Corynebacterium glutamicum and Escherichia coli; B Overexpression of the leuAfbr-ecj gene in the chromosome; C Overexpression of l-leucine operon genes; D Removal of the transcriptional attenuation of leuA
Fig. 4 Effects of the cooperative utilization of cofactors. A Biomass and l-leucine titer of strains LEU16-20; B NADPH/NADP+ levels of strains LEU16 and LEU20
Table 1 Escherichia coli strains used and constructed in this study
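As a footnote to the triplicate statistics mentioned in the Analytical methods above, a minimal sketch of how the mean and standard deviation of three independent cultures would be computed; the titer values are placeholders, not measured data:

from statistics import mean, stdev

titers = [16.0, 16.3, 15.9]   # hypothetical triplicate l-leucine titers (g/L)

# stdev() uses the sample formula (n - 1 denominator), the usual choice
# for a small number of independent cultures.
print(f"mean = {mean(titers):.2f} g/L, sd = {stdev(titers):.2f} g/L")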
Predictors of virological treatment failure among adult HIV patients on first-line antiretroviral therapy in Woldia and Dessie hospitals, Northeast Ethiopia: a case-control study

Background
Virological treatment failure is a problem that a Human Immunodeficiency Virus (HIV) patient can face after starting treatment, due to different factors. However, few studies have been done on the predictors of virological treatment failure among adult patients on first-line antiretroviral therapy in Ethiopia in general, and no study had been done in the study area in particular. Therefore, the aim of the study was to identify predictors of virological treatment failure among adult patients on first-line antiretroviral therapy in Woldiya and Dessie Hospitals, Northeast Ethiopia.

Method
A hospital-based case–control study was conducted in Woldia and Dessie Hospitals from 12 August 2016 to 28 February 2018 on 154 cases and 154 controls among adult patients on first-line antiretroviral treatment. All cases were included, and comparable controls were selected using a stratified random sampling technique. Data were collected by document review using checklists, entered into EpiData version 3.1, and analyzed with SPSS version 21. Multivariable logistic regression analysis was done to identify the independent predictors of virological treatment failure.

Results
In this study, statistically higher odds of virological failure were observed among patients who had a current CD4 T-cell count of < 200 cells/mm3 (AOR = 2.4, 95% CI: 1.35, 4.18) compared with a CD4 T-cell count of > 200 cells/mm3; a current body mass index (BMI) of < 16 kg/m2 (AOR = 4.2, 95% CI: 1.85, 9.51) compared with a BMI of > 18.5 kg/m2; a BMI between 16 and 18.5 kg/m2 (AOR = 3.72, 95% CI: 1.75, 7.92) versus a BMI of > 18.5 kg/m2; and poor adherence to antiretroviral therapy (AOR = 5.4, 95% CI: 2.95, 9.97) compared with good adherence.

Conclusion
This study showed that a low current CD4 T-cell count, a low current body mass index, and poor adherence to ART treatment predict virological failure. Therefore, deliberate efforts are urgently needed in HIV care, both to improve patients' nutritional status by enhancing nutritional education and support and to strengthen enhanced adherence counseling.

Background
The global scale-up of antiretroviral treatment (ART) under the public health approach of standardized and simplified regimens has registered significant gains, with increasing access to treatment for millions of people and a reduction in new infections and HIV-associated morbidity and mortality [1]. Globally, 82% of people on treatment had suppressed viral loads. Similarly, in Eastern and Southern Africa, 83% were virally suppressed at the end of 2016, contributing to a 29% reduction in new HIV infections between 2010 and 2016 [2].

The primary goal of ART is to prevent HIV-associated morbidity and mortality, and effective ART can reduce viremia and the transmission of HIV to sexual partners by more than 96% [3]. Therefore, monitoring people on ART is important to ensure successful treatment, identify adherence problems, and determine whether ART regimens should be switched in case of treatment failure, which can be assessed in three ways: clinically, immunologically, and virologically, the last of which provides an early and more accurate indication of treatment failure [4].
The World Health Organization (WHO) and the Ethiopian national guideline define virological treatment failure as a plasma viral load above 1000 copies/mL (based on two consecutive viral load measurements taken 3 months apart, with enhanced adherence support) after at least 6 months of ART [4][5][6].

Different studies have shown that virological failure is a problem an HIV patient may face after starting treatment, and the magnitude of the problem is apparent in different countries: 20.8% in China [7], 16% in Swaziland [8], 24.6% in Kenya [9], 24% in Mozambique [10], 41.3% in Gabon [11], 11.9% in Rwanda [12], 11% in Uganda [13], and 10.7% in Bahirdar, Ethiopia [14].

At the individual patient level, a failed ART regimen or HIV drug resistance limits treatment options, complicates subsequent therapy, and puts the patient at increased risk of drug toxicity [15,16], which in turn has both human and financial consequences [6]. Mathematical modeling predicts that if levels of non-nucleoside reverse transcriptase inhibitor (NNRTI) drug resistance exceed 10% in sub-Saharan Africa, drug resistance will be responsible for an additional 105,000 new HIV infections, 135,000 AIDS deaths, and US$650 million in antiretroviral drug costs between 2016 and 2020 [6].

Several studies have shown that various factors are positively associated with virological treatment failure, such as poor adherence to treatment [9,10,12,13,17,18], low baseline CD4 count [8,10,17,19,20], younger age [9,10,12,17], longer time on ART [17,21], male gender [21], advanced WHO staging [10,20], and lower current CD4 count [17]. On the other hand, disclosed HIV status and higher baseline weight have been negatively associated with virological failure [19,22].

However, in Ethiopia there have been few studies on the predictors of virological treatment failure among adult patients on first-line antiretroviral therapy using routine viral load testing as a measure of treatment failure in general, and none in the study area in particular. Identifying and intervening on the determinants of virological treatment failure is important to achieve a high treatment success rate. The aim of this study was therefore to assess the predictors of virological treatment failure among adult patients on first-line antiretroviral therapy at Woldiya and Dessie Hospitals, Northeast Ethiopia.

in Dessie hospital) of them had viral load results of above 1000 and ≤ 1000 copies/mL, respectively. All HIV-infected patients aged 15 years and above who had taken first-line ART for at least 6 months, with two consecutive documented viral load test results, were the source population. All cases and selected controls who had documented viral load test results from 12 August 2016 to 28 February 2018 were the study population.

HIV-infected patients aged 15 years and above whose plasma viral load was > 1000 copies/mL in two consecutive viral load measurements taken at a 3-month interval with enhanced adherence support, after at least 6 months on a first-line ART regimen, were defined as cases (virological treatment failure), whereas HIV-infected patients aged 15 years and above whose plasma viral load was < 1000 copies/mL in two consecutive viral load measurements after at least 6 months on a first-line ART regimen were defined as controls (without virological failure).
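The case and control definitions above are mechanical enough to express directly in code. A minimal sketch with field names of our own choosing; the 3-month enhanced-adherence interval between the two measurements is assumed rather than checked:

def is_virological_failure(months_on_art, viral_loads, threshold=1000):
    """Two consecutive viral loads above threshold after >= 6 months on ART."""
    if months_on_art < 6 or len(viral_loads) < 2:
        return False
    return all(vl > threshold for vl in viral_loads[-2:])

# A case: still > 1000 copies/mL on the confirmatory measurement.
print(is_virological_failure(14, [25000, 8400]))   # True
# A control: suppressed on both measurements.
print(is_virological_failure(14, [400, 150]))      # False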
Sample size estimation and sampling technique
The sample size was determined using Epi Info version 3.5.3, taking age less than 35 years as the predictor of virological treatment failure that gave the largest sample size [17], with a 1:1 case-to-control ratio, a 95% confidence interval (CI), and 80% power, which yielded 288 participants (144 cases and 144 controls). However, to improve the power of the study, all of the cases (154) and an equal number of comparable controls (154) were included. A stratified random sampling technique was employed to select the controls. First, a sampling frame for controls was prepared from each hospital's records based on patient medical record numbers (MRN). Then, the total control sample size was allocated to each hospital in proportion to its number of controls. Finally, systematic random selection was applied to choose the allocated controls (68 from Woldiya hospital and 86 from Dessie hospital) using the respective sampling intervals.

Data collection instrument and quality control
Data were extracted by document review using a structured checklist prepared in English and adapted from the Ethiopian Federal Ministry of Health ART clinic intake and follow-up form. Data collectors and supervisors were trained for two days on the objectives of the study, the contents of the tools, and how to collect the data before data collection began. The data were collected by six ART-trained nurses working at the ART clinics, and two runners were used to bring cards from the card room. The principal investigator and the supervisors closely monitored the whole data collection process on a daily basis, including observation of how the collectors extracted the recorded data. Moreover, data quality was also ensured during collection and entry.

Data processing and analysis
Data were checked for completeness, coded, and entered into EpiData version 3.1, then cleaned and analyzed using SPSS version 21. Descriptive statistics, including frequencies, means, and percentages, were used to describe the demographic, clinical, and treatment-related characteristics of patients. Bivariable logistic regression analysis was carried out for each independent variable against the outcome variable to select candidate variables for multivariable analysis. Variables with a p-value < 0.25 in the bivariate analysis were included in a multivariable logistic regression analysis using the backward likelihood ratio method to identify the independent predictors of virological treatment failure. The final model was assessed for goodness of fit using the Hosmer-Lemeshow test; no evidence indicating lack of fit was found (p-value = 0.298). Finally, variables that had significant associations with virological treatment failure were identified based on the adjusted odds ratio (AOR) with a 95% CI and a p-value < 0.05. Effect modification among the independent predictors was assessed using interaction terms, and no effect modification was found.
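To make the multivariable step concrete: an adjusted odds ratio is the exponentiated coefficient of a fitted logistic model, and its 95% CI is the exponentiated confidence interval of that coefficient. A minimal statsmodels sketch on synthetic data; the variable names and effect sizes are illustrative, not the study dataset:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 308                                        # same size as this study
poor_adherence = rng.integers(0, 2, n)         # synthetic binary predictors
low_cd4 = rng.integers(0, 2, n)

# Simulate case status from an arbitrary "true" logistic model.
logit = -1.0 + 1.7 * poor_adherence + 0.9 * low_cd4
case = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([poor_adherence, low_cd4]))
fit = sm.Logit(case, X).fit(disp=0)

print("AOR:", np.round(np.exp(fit.params), 2))        # exponentiated coefficients
print("95% CI:", np.round(np.exp(fit.conf_int()), 2)) # CI on the odds-ratio scale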
Socio-demographic characteristics of respondents
A total of 308 participants (154 cases and 154 controls) were included in the study. At baseline, the mean ages of the cases and controls were 30 years (SD: 8 years) and 31 years (SD: 9 years), respectively. About 55.2% of cases and 53.9% of controls were female; likewise, 47.4% of cases and 52.6% of controls were married. Similarly, 38.3% of cases and 42.9% of controls had attained primary education. By occupation, 33.8% of cases and 30.5% of controls were farmers. About 69.5% and 74% of the cases and controls, respectively, had disclosed their serostatus (Table 1).

Treatment-related characteristics of respondents
This study revealed that 80.5% of cases and 77.3% of controls had been on ART for more than 48 months. About 42.2% of cases and 12.3% of controls had a history of poor adherence to ART. Moreover, 90.3% of cases and 82.5% of controls had a history of cotrimoxazole prophylaxis therapy, while 5.2% of cases and 10.4% of controls had a history of isoniazid prophylaxis therapy. At the start of ART, 29.9% of cases and 37% of controls were on the 1c regimen (AZT+3TC+NVP). The study also showed that 29.2% of cases and 11.7% of controls had a history of regimen/individual drug change (see Table 3 below).

Predictors of virological treatment failure
In the bivariate logistic regression analysis, factors such as marital status, WHO staging, baseline BMI, current BMI, current CD4 T-cell count, adherence to ART treatment, history of cotrimoxazole prophylaxis therapy, and history of isoniazid prophylaxis therapy were associated with virological failure at a p-value of < 0.25. When the variables associated with virological failure in the bivariate analysis (p-value < 0.25) were all included in a multivariable logistic regression model using the backward likelihood ratio (LR) method, current BMI, adherence to ART treatment, and current CD4 T-cell count were found to have statistically significant associations with virological failure (p-value < 0.05). In this study, the odds of virological failure were 2.4 times higher (AOR = 2.4, 95% CI: 1.35, 4.18) among those who had a current CD4 T-cell count of ≤ 200 cells/mm3 compared with those who had a current CD4 T-cell count of > 200 cells/mm3 (Table 4).

Discussion
This study aimed to assess the predictors of virological failure among first-line ART users and showed that a low current CD4 T-cell count (≤ 200 cells/mm3), a low current BMI (< 16 kg/m2 or 16-18.5 kg/m2), and poor adherence to ART were associated with increased odds of virological failure. In this study, the odds of virological failure were 2.4 times higher among those who had a current CD4 count of ≤ 200 cells/mm3 compared with those who had a current CD4 T-cell count of > 200 cells/mm3. This finding is supported by studies conducted in Northwestern Uganda [22] and in Gonder, Ethiopia [17]. It is well known that the CD4 T-cell count has an inverse relationship with viral replication and load. As a patient's immune status declines, the rate of viral replication increases compared with that in immunocompetent counterparts. In addition, clients with compromised immunity are more vulnerable to different opportunistic infections, which sustain the vicious cycle of declining immunity and viral replication [3].

The odds of virological failure were 4.2 times higher among those who had a current BMI of < 16 kg/m2 compared with those who had a current BMI of > 18.5 kg/m2. Likewise, the odds of virological failure were 3.7 times higher among those who had a current BMI between 16 and 18.5 kg/m2 compared with those who had a current BMI of > 18.5 kg/m2. This finding is supported by a study conducted in Northwestern Uganda [22].
It is evident that a low BMI correlates significantly with a decrease in CD4 count and an increase in viral load, both through progression to the advanced stage of the disease and through patients not taking their ART medication (poor adherence) [23,24]. In addition, the odds of virological failure were 5.4 times higher among those who had poor adherence compared with those who had good adherence to antiretroviral treatment. This finding is supported by studies conducted in Uganda [13], Zimbabwe [20], Rwanda [12], Kenya [9], and Gonder, Ethiopia [17]. It is well established that poor adherence to medication reduces the treatment response through suboptimal drug concentrations that permit continued viral replication, which leads to virological failure [25,26]. This confirms that achieving long-term good adherence is indeed the Achilles' heel of successful virologic outcomes.

A limitation of this study was its record-based method of data collection, which restricted the number of variables that could be studied, such as psychosocial factors (depression, stigma) and differences in the quality of care and service at each hospital.

Conclusion
The present study revealed that the key predictors of virological failure were a low current (recent) CD4 count, a low current (recent) body mass index, and poor adherence to antiretroviral treatment. Therefore, deliberate efforts are urgently needed in HIV care by concerned bodies, such as ART case managers and adherence counselors in the hospitals, for patients with a low body mass index or a low current CD4 count (through improving their nutritional status by enhancing nutritional education and support), and to address poor adherence to ART treatment by strengthening enhanced adherence counseling.
The Words Made Fresh: Transforming the Language and Context of Faculty Development
University of Hawaii, Manoa

One of the main challenges faced by those who create, implement, or revitalize faculty development programs is making the language of development palatable, even positive, to faculty. Allan Tucker (1988) reminds us that the term "faculty development," for all its good intentions, "often offends faculty who see the term as demeaning to their hard-earned Ph.D.'s," and that a "faculty development director underscores the remediation stigma in many minds." Since Tucker is only articulating an attitude we all face at one time or another, what is the sensitive and semantically adept faculty developer to do? There are the usual euphemisms to try, such as "professional growth," "revitalization," "instructional enhancement," "holistic renewal," and "career redirection," most of which elicit cynicism and charges of semantic manipulation. At the University of Hawaii, we tested them all and found little agreement on any specific language. Faculty members, by virtue of our education and political instincts, are an individualistic lot especially given to debate and criticism. Questions of semantics appear to bring out particularly strong academic passions. Therefore, the effective faculty developer must recognize the passion, identify with the faculty proclivity to examine and debate, attempt to build an ethos receptive to development, go on to transform the institution, and thus bring fresh meaning to the language of "development" within the institutional context.

A weed by another name can be an exotic flower, and gardeners know that the best results depend on when and where the manure is put out. If a faculty development program is to flower, the needs and aspirations of the faculty must be met. The terminology becomes merely the reference point. If transformation of the institution is the goal, it will drive the transformation of the language.
Institutional transformation, while a visionary concept, can come to fruition with systematic planning and semantic support.

Combining Faculty Development and Faculty Support
The faculty development program at the University of Hawaii came about through a systematic and comprehensive two-year planning process that sought extensive faculty and administrative participation. The process began with the traditional needs assessment survey, which allowed respondents to identify those aspects of campus policy, infrastructure, and instructional environment that hindered their work as teachers and scholars. After identifying problems, the faculty could place a priority on which of these should be addressed during the first three years of a program. To ensure sensitivity to responses from all of the diverse groups that compose a comprehensive research university faculty, twelve open forums were held with various constituencies, including senior, junior, women, minority, foreign and visiting faculty, lecturers, and graduate research and teaching assistants. Each group had its particular interests or specialized needs for development and support. More than 385 faculty members played an active role in this assessment and development phase; we interpreted this strong participation as a sign of extraordinary interest and success early in the program.

The issue of "development" language was a recurrent theme throughout the forums. Predictable responses centered on the implication that "development" was associated with "remediation," "failure to meet expectations," "incompetent." A second pattern emerged, however, which interpreted "development" as a support mechanism. Faculty members constantly assess and fine-tune their teaching and scholarly skills. What often requires development, therefore, is not the faculty member, but the faculty support system, which should encourage, fund, and reward productivity. As a result of these insights, the University of Hawaii administration proposed establishing an Office of Faculty Development and Academic Support. Under the leadership of the Vice President for Academic Affairs, the portfolio of faculty development was broadened to include traditional academic support activities, such as the media resources center, computer-assisted instruction, classroom maintenance and improvement, a research unit on multicultural higher education, curriculum development grants, faculty workshops, convocations, award ceremonies, and administrative development activities for department chairs.

Since the program is a probationary one, a formal evaluation will be required by the University after five years. During the initial two-year developmental period, we have evaluated the program based on the number of faculty members participating in specific activities, such as the workshops, seminars, convocations, inquiry groups, and diagnostic evaluations, and on participants' ratings of these activities. Both attendance and ratings have been very high. A key strategy for institutionalizing and securing faculty development and support is to ensure that the language of development is used consistently in regular institutional policy and procedural documents, such as the academic development plan, the legislative budget requests, the repair and maintenance plans, and the guidelines for faculty evaluation, promotion, and tenure. It then becomes clear that development goes beyond what we do to faculty, to what we do for and with them as an institution.
This approach is more inclusive than the early models for faculty development programs cited by Bergquist and Phillips (1975). These early models focus on instructional development and individual skill enhancement; our approach is more closely related to the broader concepts advocated by Clark, Corcoran, and Lewis (1986), who interrelate faculty and institutional vitality and make a case for an institutional perspective on faculty development that is based on a human resource development model. Our good fortune came in starting a program late in relation to other comparable universities and in having many excellent program models to consider.

Linking Faculty Development and Faculty Evaluation
Experienced faculty developers almost universally advise dissociating evaluation for merit, promotion, and tenure from development. Within most institutional contexts, this is excellent advice. We have found, however, that the two functions can be linked as mutually beneficial activities in the case of periodic review of tenured faculty. While only a few research universities have as yet instituted formal post-tenure review programs, a growing literature supports the view that evaluation can act as a catalyst for faculty development. The key is to integrate both in a way that nurtures faculty growth and institutional excellence (Andrews, 1985; Arreola, 1983; Licata, 1986). Indeed, evaluation practices that exist in isolation from development may cause deeper alienation, especially among tenured faculty. And if evaluations include feedback on performance, then a professional development plan that draws on the resources of a campus development program linking assessment with development may be a very viable avenue to improvement.

The linkage between evaluation and development on our campus evolved as a result of an event that occurred at about the same time we were working to design and implement the faculty development program. After a three-year process that took the University Board of Regents and the faculty union, the University of Hawaii Professional Assembly (UHPA), to the Hawaii Supreme Court, a ruling confirmed that management had the right to evaluate faculty and that the evaluation process was not a negotiable item. The ruling reactivated a long-standing, but not fully implemented, Board of Regents policy requiring that all faculty be formally evaluated every five years. Each academic department was required to agree on expectations for faculty within the department mission and goals. Under these guidelines, the department chair served as the primary evaluator. The only group not routinely subject to review under the new procedures was senior tenured faculty. The evaluation thus took on the aura of a "post-tenure review." Given this situation, the fledgling faculty development program risked being viewed by some faculty members and administrators as a ruse to soften the effects of the review or to focus on "deadwood" not meeting expectations. Some department chairs thought that they would be able to remand faculty to the program for required "fix-ups." Transformational language, however, can influence even such a potentially negative situation. Rather than take an adversarial stance, UHPA and the administration agreed to develop a mutually acceptable procedure for the review. The university's commitment to the faculty is thus reflected in the language of the procedures, as is the union's advocacy of academic freedom.
The Preface (1987) states:

Evaluation can be a positive force when used to encourage members of the University community to continue their professional growth and thereby improve the delivery of their professional services. To this end, the institutional resources must be committed to incentive programs which support faculty development in the areas of teaching, research, and service. Evaluation of faculty must not undermine the concepts of academic freedom and tenure ... there is a presumption of competence on the part of each tenured faculty member ... the Manoa Faculty Development Committee was established by the procedures to assist individual faculty members who do not meet expectations in arriving at a successful development plan for meeting the expectations of their department.

The document goes on to say "that the interaction of the Committee with faculty members is intended to be positive and supportive." During the 1987-88 cycle of review, 245 tenured faculty were subject to evaluation. Of this group, 46 who did not meet expectations formulated approved professional development plans with their department chairs or deans or with the counsel of the Faculty Development Committee. During 1988-89, all are actively carrying out these plans, a measure of success we find encouraging. UHPA conducted a survey of the faculty at the end of the first cycle and concluded that the faculty judged the process to be just, equitable, and humane. We are keenly aware, however, that our experiment is at an early stage and that the operational efficiency of the evaluation side must be matched by continued moral and financial support on the development side. Linking faculty evaluation directly with faculty development has therefore become a way of enhancing a positive institutional climate whereby faculty are encouraged and supported even during periods of their careers when they may not be fully productive. A supportive academic culture requires that faculty be treated with respect and dignity, and it recognizes that development is a highly complex process that demands attention to individual needs within the department mission and is attuned to the stages of faculty careers.

Cultivating Culture, Climate, and Community
In Academic Culture and Faculty Development (1979), Mervin Freedman proposes that faculty development be seen "as a process of unfolding, of making latent processes active, of increasing amplitude, of evolving to a higher state," and argues that good teaching and scholarship "depend on the inner state of mind, on faculty values and attitudes." Inherent in the success of a faculty development program are the shared vision, the values, and the attitudes of the participants and advocates. For example, while collegiality clearly is essential in the planning and implementing of a program, the very support of the administration may arouse suspicion and cynicism among faculty on a campus with a history of conflict over evaluation and development. In this situation, the language of development may sharpen the double-edged sword. Faculty who most need improvement may distrust even those publicly pledged to support them and see the development program as a means of eroding their individual choice. A development program thus needs the widest base in order not to be identified only with the needy.
Among the strategies supporting faculty development within an institutional context must be an emphasis on enhancing, recognizing, rewarding, and highlighting the vitality of successful and productive faculty members. If a faculty development program is presented as a group of interrelated services providing a menu of possibilities that enhance the working environment, we can create a climate and sense of community. Traditional activities such as convocations, orientations, open inquiry groups, colloquia, ceremonies and rituals honoring accomplishment, and workshops on teaching, student learning, grantsmanship, scholarly writing, and classroom research announce support for these expectations of the Academy. While some of these activities are ongoing in specific departments and colleges of the University of Hawaii, the creation of the Center for Teaching Excellence and an Office of Faculty Development and Academic Support has brought together faculty across disciplines to focus on common concerns. For example, a workshop and classroom presentation on the professional voice by a former Shakespearean actor and professor emeritus garnered an audience of 96 faculty members. All who share in its ranks are responsible for nurturing the academic community. Developing a climate of vitality, renewal, and transformation can only enhance and legitimize the development of faculty.

On an urban commuter campus like ours, establishing a sense of community among faculty is as difficult as managing one for students. To solidify faculty attachment to the University of Hawaii through identification with the larger Academy, we re-established a series of ritual events with appropriate pomp and circumstance. Faculty members were invited to and did attend the Student Opening Year Convocation in academic regalia (no small factor in the 90-degree Honolulu afternoon). Colleges carried medieval-style banners designed with their own motifs to create what Harvey Cox calls "a festival of the spirit." Faculty also came together in a separate Faculty Convocation, where, in welcoming remarks by the President, members of the Board of Regents, the Faculty Senate, and a keynote faculty speaker, they were reminded that congregating together in academia is a long and honored tradition for scholars and students. The complementary event that ends the academic year is a ceremony honoring faculty members selected for excellence in teaching and research awards. The recipients report that it is especially gratifying to be recognized by their peers in a public event. The awards, consisting of medals and a cash prize, are given by the Board of Regents upon recommendation by the President. Nominations may be made by students, faculty, staff, and alumni and are screened by college-level committees and the campus Honors and Awards committee. The awards remain prestigious because of the rigorous selection and peer review process. As Bellah et al. reminded us in Habits of the Heart (1985), "a real community is a community of memory ... that must tell its story often ... and offer examples of its vision through the men and women who embody and exemplify the collective dreams ... tradition is central to the community of memory." Creating and nurturing a sense of community is the broadest mission of faculty development and is a goal that can bring out the best spirit as well as the most productive work from the community's members.
In the gardens that we cultivate, "faculty development" and the language that surrounds, supports, and symbolizes it can take on a proud and visible legitimacy. "Development" can be a word that suggests potential, regeneration, recognition, and resurgence of the best within us. Faculty will come to live with the local definition of development programs if these programs are institutionally integrated and supported, and complex enough to recognize faculty diversity and values. When faculty development is combined with faculty support and faculty recognition, the combination is powerful in both its symbolism and semantics, and faculty can become its greatest advocates.
Partial Characterization of Novel Bacteriocin SF1 Produced by Shigella flexneri and Their Lethal Activity on Members of Gut Microbiota

A strain of Shigella flexneri producing a bacteriocin was isolated from a patient with diarrhea. The main objective of this study was to isolate and partially characterize the bacteriocin. The producing microorganism was identified using biochemical, serological, and molecular methods. The lethal activity of the S. flexneri strain was studied using the drop method. This bacterial strain showed activity against different strains of E. coli and B. fragilis. Using immunological techniques, it was determined that the S. flexneri strain belongs to serotype 2a, and the presence of the ipaH gene was determined by PCR. By chromatographic techniques, it was determined that the bacteriocin is a peptide of high purity with a molecular weight of 66294.094 Da. The amino acid composition and sequence were determined by the Edman reaction, and a sequence of 619 amino acid residues was obtained. At only five positions of this sequence does the amino acid glutamine change to glutamic acid with respect to colicin U produced by S. boydii. From an ecological point of view, it could be assumed that bacteriocin SF1 contributes to eliminating some members of the normal microbiota of the human intestine, facilitating colonization and then the invasion process that characterizes the pathogenicity of Shigella.

Introduction
Bacteriocins are proteins or antimicrobial peptides produced by different bacterial species that have a broad or narrow spectrum of lethal action [1,2]. Usually, these products exert their antagonistic role on other bacterial species competing for the same ecological niche [3]. In vitro investigations related to the detection and characterization of bacteriocins have shown that their biosynthesis is altered by various physical factors. Furthermore, it has been proposed that the production of bacteriocins can be induced by unfavorable bacterial growth conditions or by chemical agents such as mitomycin C [4][5][6]. Some members of the Enterobacteriaceae family have genetic determinants that encode bacteriocins, which are frequently located on plasmids [2]. These antibacterial products show a broad spectrum of action and have molecular weights between 25 and 80 kDa (colicins) or below 9 kDa (microcins) [7,8]. E. coli is noted for its colicin production, very similar to the products synthesized by S. sonnei, S. boydii, and S. dysenteriae [8]. Few studies have shown bacteriocinogenic activity produced by S. flexneri. Preliminary studies demonstrated for the first time the bacteriocinogenic activity of S. flexneri on E. coli and B. fragilis strains isolated from the feces of healthy humans [9,10].

Shigella is the most common etiologic agent of dysenteric diarrhea. The species of this genus have the ability to invade and multiply in the human intestinal epithelium, causing an acute inflammatory response and tissue destruction. The infection usually spreads from person to person through the faecal-oral route, and a very small inoculum (10-100 bacterial cells) is enough to cause disease [8]. Previous studies demonstrated the presence of one strain of S. flexneri with antibacterial capacity. In addition, it was reported that the antibacterial product is a bacteriocin with antagonistic activity on E. coli and B. fragilis [9].
In order to deepen the knowledge about this interesting antibacterial substance, the main objective of this study was to purify and perform a partial characterization of the bacteriocin produced by the S. flexneri strain.

Bacterial Strains and Antimicrobial Spectrum of Bacteriocin SF1. A bacteriocin-producing strain of S. flexneri was isolated from a 31-year-old patient with dysenteric diarrhea at the Regional Hospital of Talca, Chile. The patient signed an informed consent for the use of the isolated strain. The bacterium was identified using microbiological and biochemical methods described in Bergey's Manual [11]. Furthermore, serological identification was performed by means of an agglutination test with polyvalent and monovalent sera for the somatic O antigen (Denka Seiken, Japan) according to the manufacturer's instructions. The Scientific Ethics Committee of the University of Talca approved this study. The target species included E. coli and B. fragilis; three strains were tested from each bacterial species. All target strains were grown in Tryptic Soy Broth (Merck, Darmstadt, Germany) at 37°C for 24 h to the early exponential growth phase (OD of 0.4 at 600 nm, UV-visible spectrophotometer, Shimadzu, Japan), except for B. fragilis, which was grown on Blood Agar Base (Merck) supplemented with vitamin K1 and hemin in an anaerobic jar (Genbox Anaer, Biomerieux, France). Subsequently, this culture was collected and diluted in distilled water until the same OD was reached. After that, the target strains were plated on Mueller-Hinton agar (Merck, Germany), except B. fragilis, which was sown on the anaerobic medium mentioned above and incubated in an anaerobic system. Meanwhile, the bacteriocinogenic S. flexneri strain was cultivated overnight in Tryptic Soy Broth and subsequently centrifuged at 10,000g for 20 min. The antibacterial activity of S. flexneri was determined using the drop method. Specifically, all the dishes with the target strains were dried for 10 min at 37°C, and then 5 μL of the S. flexneri supernatant was spotted on the lawn [9]. The dishes were incubated at 37°C for 5 h, and the inhibitory zones were then observed.

The concentration of DNA in solution was determined spectrophotometrically at a wavelength of 260 nm, and the purity was measured at 280 nm as previously described [12]. The integrity of the template DNA was tested on a 1% (w/v) agarose gel stained with GelRed (Biotium Inc., USA). A PCR assay was performed in a final volume of 25 μL, with a reaction mixture containing 0.25 μg/μL of template DNA, 50 pmol of each oligonucleotide, 1X PCR master mix, and DNase/RNase-free distilled water. The primers IpaH-F 5′-CCTTGACCGCCTTTCCGATA-3′ and IpaH-R 5′-CAGCCACCCTCTGAGGTACT-3′ [13,14] were used. The amplifications were performed using a DNA Engine Thermal Cycler (Bio-Rad Laboratories, USA). PCR conditions were as follows: initial denaturation at 94°C for 2 min, followed by 35 cycles of denaturation at 94°C for 1 min, annealing of the oligonucleotides at 62°C for 1 min, and extension at 72°C for 2 min. This was followed by a final incubation for 10 min at 72°C, and the reactions were then held at 4°C. In addition, a negative control without template DNA was used. PCR-amplified DNA fragments were separated by electrophoresis on a 1.5% agarose gel, with a wide-range 100 bp molecular weight ladder (Invitrogen, Waltham, Massachusetts, USA) used as the standard. The gel was stained with GelRed (Biotium Inc). PCR products were visualized and images captured using a Gel Documentation System Doc 1000 (Bio-Rad Laboratories).
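The cycling conditions above amount to a small program table; as an aside, here is our own representation of them in code, with a rough (ramp-free) estimate of the run time:

# ipaH PCR program as stated in the text.
program = {
    "initial_denaturation": (94, 120),   # (temperature in C, seconds)
    "cycles": 35,
    "per_cycle": [("denature", 94, 60), ("anneal", 62, 60), ("extend", 72, 120)],
    "final_extension": (72, 600),
    "hold_temperature": 4,
}

cycle_seconds = sum(sec for _, _, sec in program["per_cycle"])
total = (program["initial_denaturation"][1]
         + program["cycles"] * cycle_seconds
         + program["final_extension"][1])
print(f"approximate run time, ignoring ramping: {total / 60:.0f} min")  # 152 min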
Partial Purification of Bacteriocin SF1. The S. flexneri strain was grown in 500 mL of BHI broth at 37°C for 24 h. Subsequently, the culture was centrifuged at 14,000g for 35 min at 4°C. The supernatant was treated by progressive addition of ammonium sulfate to 30% saturation, and the mixture was kept overnight at 4°C with constant stirring. Then, it was centrifuged at 10,000g for 30 min at 4°C, and the precipitate was suspended in 50 mM Tris-HCl (pH 8.0) buffer. The resulting supernatant was adjusted to 95% saturation with ammonium sulfate, as described above. Both samples were centrifuged at 10,000g for 30 min at 4°C, and the final pellet containing the bacteriocin SF1 was suspended in a minimal volume of 50 mM Tris-HCl (pH 8.0). The obtained products were dialyzed separately in MEMBRA-CEL™ MC-18 (Viskase, USA) membranes at 4°C for 48 h against the same buffer. The resulting dialysate was loaded onto an FPLC (fast performance liquid chromatography) ion-exchange column, a Mono-Q™ 5/50 GL (GE Healthcare, Sweden), previously equilibrated with 50 mM Tris-HCl (pH 8.0), and eluted with a gradient of 0 to 1.0 M NaCl in the same buffer. Aliquots of 2 mL were collected and tested for antimicrobial activity. Active fractions were pooled, concentrated, and lyophilized. Subsequently, gel filtration chromatography was performed by FPLC on a Superose 12 HR 10/30 column (GE Healthcare Life Sciences, UK) equilibrated with 50 mM Tris-HCl (pH 8.0) containing 0.2 M NaCl; the samples were eluted with the same buffer. Active aliquots were then processed by HPLC (high-performance liquid chromatography) using a LiChroCART® C18 column (250 × 4.0 mm) (Merck). The mobile phase consisted of 0.1% (v/v) trifluoroacetic acid (TFA) and 80% (v/v) aqueous acetonitrile containing 0.1% (v/v) TFA. Fractions were assayed for bacteriocin activity, and all positive fractions were pooled, lyophilized, and suspended in Milli-Q® water, yielding the partially purified bacteriocin [15]. For all bacteriocinogenic activity assays during purification and partial characterization, the E. coli strain EC-7, which has a high sensitivity to the bacteriocin studied, was used; this strain showed the highest sensitivity to the bacteriocin among all the strains tested.

Molecular Weight Determination of Bacteriocin SF1. The molecular weight of the bacteriocin SF1 was determined by glycine SDS-PAGE with a 5% stacking gel and a 12% separating gel, using the Strep-Tag II Perfect Protein molecular weight standard (Merck) [16]. The gel was stained with Coomassie blue R-250 and washed at room temperature with a 5% acetic acid solution to remove excess stain.

Effect of Enzyme Action, pH, and Temperature on Bacteriocin SF1. The bacteriocin SF1 was diluted fivefold in buffer containing each of the enzymes analyzed (Sigma, USA), at a final enzyme concentration of 1 mg/mL. All enzymes were sterilized by filtration through a 0.22 μm membrane filter (Merck, Germany). For the assays, trypsin, α-chymotrypsin, and pepsin were used in 20 mM Tris-HCl buffer, pH 8.0; proteinase K in 20 mM Tris-HCl buffer, pH 7.2; and papain in 50 mM phosphate buffer, pH 7.0. Enzyme action was assessed by incubating each sample at 37°C for 30 min, 1 h, and 4 h. Untreated bacteriocin samples were used as controls. The bacteriocin activity was also assessed at different pH values.
The pH of the filter-sterilized bacteriocin was adjusted using the following buffers: KCl-HCl (pH 2.0 and 3.0), acetate (pH 4.0 and 5.0), citrate (pH 6.0), and Tris-HCl (pH 7.0, 8.0, 9.0, 10.0, 11.0, and 12.0). The bacteriocin was diluted twice in the different buffers. The resulting mixtures were incubated at 37°C for 30 min. The assay previously described was then performed to detect bacteriocinogenic activity. Separately, the bacteriocin was treated at −76, 4, 25, 37, 60, and 80°C for 30 min, as well as at 100 and 121°C for 15 min. After the treatment, the samples were diluted 1:2, 1:4, and 1:10 and kept at 4°C for 2 h. Later, tests were performed to determine the biological activity as described above. Analysis of Antimicrobial Activity in Polyacrylamide Gel under Nondenaturing Conditions. The bacteriocin SF1 was run in a 7.5% polyacrylamide gel. The gel was subsequently washed with distilled water and placed in a sterile Petri dish. Then, it was covered with a thin mixture of 0.8% Brain Heart Infusion (BHI) agar (Merck) and 10^4 CFU/mL E. coli EC-7. Afterwards, the plate was incubated for 5 h at 37°C to observe the inhibitory zone. Amino Acid Composition and Sequence Analysis of Bacteriocin SF1. The lyophilized, partially purified bacteriocin SF1 was used to obtain the amino acid composition, and the sequence was determined by the Edman reaction in an automated PPSQ-31A sequencer (Shimadzu, Japan) [17]. Identification of the S. flexneri Strain. Bacteriological identification of the S. flexneri strain was performed by biochemical and serological methods. The strain belongs to serotype 2a. In addition, molecular identification showed the presence of the 606 bp ipaH plasmid gene fragment, confirming that the strain studied is S. flexneri. Partial Purification of Bacteriocin SF1. The specific activity of bacteriocin SF1 was increased 23 times during the purification process; sixty-nine percent of the antimicrobial activity was recovered (Table 1). The SDS-PAGE analysis showed a band of approximately 66 kDa in a triplicate assay (Figure 1(a)). The antibacterial activity analysis showed the presence of an inhibitory zone at the same level as the detected band (Figure 1(b)). The chromatogram of purified bacteriocin SF1 was obtained by HPLC (see Figure S1 in the Supplementary Material for analysis of the purity of bacteriocin SF1). Antimicrobial Spectrum of Bacteriocin SF1. The antimicrobial spectrum of the bacteriocin SF1 was determined on different Gram-positive and Gram-negative species. The bacteriocin was active only against the three target strains each of E. coli and B. fragilis tested in this research (Table 2). Effect of Enzymes, pH, and Temperature on Bacteriocin SF1. Only the enzymes proteinase K and papain inactivated the bacteriocin SF1. It was observed that the bacteriocin SF1 lost biological activity only when exposed to 100°C and 121°C. Moreover, alkaline pH inhibited the antibacterial action (see Table S1 in the Supplementary Material for a comprehensive analysis of the effects of enzymes, temperature, and pH on bacteriocin SF1). Amino Acid Composition and Sequence Analysis of Bacteriocin SF1. The bacteriocin SF1 obtained from the S. flexneri strain showed a sequence of 619 amino acid residues (Figure 2). Its amino acid composition is shown in Table 3. Its molecular weight was 66294.094 Da. Discussion Species of the genus Shigella are among the bacterial pathogens most frequently isolated from patients with diarrhea.
Five to fifteen percent of all diarrheal episodes worldwide can be attributed to an infection with Shigella, including 1.1 million fatal cases [18,19]. In this research, the novel bacteriocin produced by the S. flexneri strain was named SF1. This substance was sensitive to treatment with proteolytic enzymes, particularly proteinase K and papain, showing its peptidic nature. According to the results, bacteriocin SF1 maintains its antibacterial capacity at 80°C, but not at 100°C. The thermolability of this antagonistic substance at 100 and 121°C is also consistent with its chemical composition and corroborates data reported by other authors, who have demonstrated that colicins produced by Shigella are generally heat labile [7,20]. By SDS-PAGE, a single band was obtained for bacteriocin SF1. This band was taken and used to verify the protein purity by HPLC. Also, it is important to note that the band in the gel under nondenaturing conditions was responsible for the antagonistic activity in the biological assay. It was interesting to observe that the amino acid composition of bacteriocin SF1 is similar to that of colicin U produced by S. boydii [21]. The specific results of this research showed a small variation affecting only the proportions of glutamine (an amino acid amide) and glutamic acid in bacteriocin SF1 with respect to colicin U. In comparison with bacteriocin SF1, colicin U shows 38 residues of glutamic acid and 28 of glutamine. This result was confirmed by means of the amino acid sequencing performed in this research. The amino acid variation occurs at only five positions in the sequence, in which glutamine residues are replaced by glutamic acid. The molecular weight calculated for bacteriocin SF1 is 66294.094 Da, compared to the molecular weight of colicin U, which is 66289.1719 Da. Also, according to the results, it is possible to argue that the bacteriocin of S. flexneri could be the product of mutations, explaining the differences detected between colicin U and bacteriocin SF1. In addition, the bacteriocinogenic S. flexneri strain might present a selective advantage during the colonization process and before the development of its invasive capacity. Thus, the bacteriocinogenic activity of S. flexneri against E. coli and B. fragilis would help explain how a low infectious dose of S. flexneri is capable of displacing these members of the gut microbiota and prevailing in this ecological niche, facilitating colonization and later starting the invasiveness process [22]. Therefore, the possible role of bacteriocin SF1 as a virulence factor should be studied, and further microbiological and molecular studies on the bacteriocin SF1 are necessary to understand its ecological role in depth. (Table 2 note: Three strains were tested from each bacterial species. +: bacterial species sensitive to bacteriocin; −: bacterial species not sensitive to bacteriocin.) Conclusions A novel bacteriocin of 619 amino acid residues and 66294.094 Da molecular weight, produced by S. flexneri and named bacteriocin SF1, has been detected and partially purified for the first time. Bacteriocin SF1 shows lethal activity against E. coli and B. fragilis, important members of the normal microbiota of the human gut. Data Availability The experimental data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments This work was supported by a grant from Dirección de Investigación, Universidad de Talca (VAC 600500). Figure S1 shows the purity of bacteriocin SF1 as assessed by HPLC. Active aliquots of bacteriocin SF1 were processed by HPLC using a LiChroCART C18 reverse-phase column. HPLC conditions: mobile phase A, 0.1% trifluoroacetic acid (TFA); mobile phase B, 80% aqueous acetonitrile solution containing 0.1% TFA; linear gradient 0-100% of solution B in 30 min; flow rate 1 mL/min; temperature 35°C; active fraction at 34.6 min retention time. Table S1 shows the effects of enzymes, temperature, and pH on bacteriocin SF1. Initially, the arbitrary units per mL (AU/mL) of the untreated bacteriocin were calculated, estimating 25,600 AU/mL. The arbitrary units were calculated as the reciprocal of the highest dilution with biological activity, multiplied by 100 (dilution factor). E. coli EC-7 was used as the target strain for the lethal action of the bacteriocin SF1. (Supplementary Materials)
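For clarity, the titre arithmetic described above can be written out explicitly. This is a minimal sketch; the 1:256 dilution below is an illustrative assumption chosen only because it reproduces the reported 25,600 AU/mL, not a value taken from the study.

```python
def arbitrary_units_per_ml(highest_active_dilution, dilution_factor=100):
    """Bacteriocin titre in AU/mL: the reciprocal of the highest serial
    dilution that still shows biological activity, multiplied by the
    dilution factor (100 in this study)."""
    return (1.0 / highest_active_dilution) * dilution_factor

# Illustrative: a 1:256 dilution as the last active one reproduces the
# reported titre of 25,600 AU/mL.
print(arbitrary_units_per_ml(1 / 256))  # 25600.0
```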
2019-05-17T13:46:35.303Z
2019-05-06T00:00:00.000
{ "year": 2019, "sha1": "e25a53dd4d4380864405d9f10612dc8a18cf2af3", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ijmicro/2019/6747190.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69eb3c694370c51a7751b90caec49b757bf35b0e", "s2fieldsofstudy": [ "Biology", "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
235781557
pes2o/s2orc
v3-fos-license
Non-pharmacological interventions and corticosteroid injections for the management of the Achilles tendon in inflammatory arthritis: a systematic review Background Achilles tendon (AT) pathologies, particularly Achilles enthesitis, are common in inflammatory arthritis (IA). Although there are various non-pharmacological interventions and injection therapies available, it is unknown if these interventions are effective for people with IA, as this population is often excluded from studies investigating the management of AT pathologies. This study aimed to identify and critically appraise the evidence for non-pharmacological interventions and corticosteroid injections in the management of AT pathology in those with IA. Methods All studies which met the inclusion criteria (AT interventions in adults with a working clinical diagnosis of IA, English language) were identified from the following databases: Medline, Embase, CINAHL and the Cochrane Library. The search strategies used the search terms 'spondyloarthropathies', 'inflammatory arthritis', 'achilles tendon', 'physical therapy', 'conservative management', 'injections', and related synonyms. Studies included were of quantitative longitudinal design, such as randomised controlled trials, pseudo-randomised and non-randomised experimental studies, observational studies, cohort studies, and case control studies. All outcome measures were investigated, quality assessment to determine the internal and external validity of included studies was undertaken, and qualitative data synthesis was conducted. Results Of the 10,911 articles identified in the search strategy, only two studies, which investigated the efficacy of corticosteroid injections for the management of the AT in IA, met the inclusion criteria, and no studies were identified for non-pharmacological interventions. Both injection studies had low quality ratings for internal and external validity, and thus overall validity. The included studies only investigated two outcome domains: pain and ultrasound (US) (B-mode and Doppler) identified abnormalities and vascularity in the AT. There is weak evidence suggesting a short-term improvement (6-12 weeks) in pain and a reduction in some abnormal US (B-mode and Doppler) detectable features (entheseal thickness, bursitis, and entheseal vascularity) at the AT and surrounding structures post-corticosteroid injection. Conclusion Weak evidence is available regarding the efficacy of corticosteroid injections in reducing pain, and inconclusive evidence for the improvement of abnormal US detectable features. No studies were identified for non-pharmacological interventions. It is evident from the lack of relevant literature that there is an urgent need for more studies assessing non-pharmacological interventions for the AT in people with IA. Supplementary Information The online version contains supplementary material available at 10.1186/s13047-021-00484-6. Background Inflammatory arthritis (IA) refers to a group of conditions including rheumatoid arthritis (RA), ankylosing spondylitis (AS), psoriatic arthritis (PsA), and other spondyloarthropathies (SpAs) [1]. These conditions are progressive and characterised by joint destruction and pain, eventually leading to decreased function [2].
IA has been shown to have a profound impact at both an individual level (negative impact on quality of life of those afflicted) and at a societal level (medical expenditure and work disability) [1-3]. Enthesitis (inflammation occurring at the attachment site of tendons and ligaments to bone) is regarded as a hallmark feature of the SpAs [3], and a known predilection for the insertion of the Achilles tendon has been reported in the literature [4]. The prevalence of Achilles enthesitis in IA, in particular the SpAs, is much higher than in the general population [5,6]. Anecdotal evidence suggests that enthesitis in the SpAs may be largely unresponsive to standard pharmacological regimens, with treatment guidelines not recommending conventional disease-modifying anti-rheumatic drugs (DMARDs) due to reported failed efficacy on peripheral enthesitis [7]. The use of biological drug therapies has been described in the literature for those with NSAID-refractory persistent heel enthesitis [8]. However, there is evidence of the progression of pain and disability even when low disease activity has been achieved with the use of biological drug therapies [9]. An association between enthesitis and the presence of higher disease activity, increased fatigue, worse functional status, reduced disease duration, and body mass index was reported in patients with AS [10]. In contrast to the accepted view that general chronic AT pathologies are primarily degenerative in nature in non-IA populations, the involvement of a greater inflammatory component in those with IA has been acknowledged [11]. This highlights the need for targeted therapies to address both inflammatory and biomechanical features affecting the AT, as proposed by Woodburn et al. [12] for the management of rheumatoid arthritis in patients with low, moderate and high disease activity. Currently, high quality reviews that can be applied in clinical practice for the management of the AT are unavailable. The available Cochrane review by MacLauchlan and Handoll [13] was withdrawn from the Cochrane Database due to lack of recency and the need for updating, and the Cochrane review protocol developed by Wilson et al.
[14] in 2011 was subsequently withdrawn due to lack of progress. Recent systematic reviews proposing management options for the AT have been published, and include many conservative interventions and injection therapies [15-17]. Physical therapies, such as progressive heavy-load eccentric exercises, gained popularity from the Alfredson protocol, and have shown promising efficacy in reducing pain in recent literature [16,18]. Extracorporeal shockwave therapy (ESWT) has also been reported as being as effective as eccentric exercises [16]. Other conservative management options include orthoses, splints, low-level laser therapy, and microcurrent therapy. However, these have not been proven to be as effective as eccentric exercises in studies [15,16,19]. Corticosteroid injections are powerful anti-inflammatories, and can be useful in patients who have previously experienced adverse reactions to nonsteroidal anti-inflammatory drugs (NSAIDs) [20]. These are injected locally at the AT to decrease inflammation [17]. Although the mechanism through which corticosteroids have an effect is unknown, there is evidence to suggest corticosteroids can be effective in chronic tendinopathy at relieving pain and reducing swelling and inflammation [11,17]. However, a risk of potential adverse effects, such as local infection, bleeding, swelling, and tendon rupture, is associated with corticosteroid injections. The potential adverse effects on the tendon itself are local bleeding and weakening of the tendon [17]. Although enthesitis and inflammatory AT pathologies are a hallmark of IA, those with IA are excluded from studies focused on the management of AT pathologies. Consequently, it is currently unknown if treatments for AT pathologies are safe and effective for people with IA. This is an important consideration, as there is an increased risk of adverse events due to immunosuppression, and studies have shown different pathological processes at the AT for people with and without IA. Therefore, the aim of this review was to identify and critically appraise the evidence for the effectiveness of non-pharmacological interventions and corticosteroid injections for the management of the AT in people with IA. Search strategy A detailed electronic database search of the literature was performed on August 3rd, 2020 using the following databases: Medline, Embase, CINAHL, and the Cochrane Library. Reference lists of eligible studies were hand searched by the reviewers (SM and KH) to identify any relevant studies that were not identified through the initial electronic database search. The following search terms 'spondyloarthropathies', 'inflammatory arthritis', 'achilles tendon', 'physical therapy', 'conservative management', 'injections', and related synonyms were used to develop a comprehensive pragmatic literature search strategy. Standard MeSH or medical headings and appropriate keywords were utilised where appropriate according to each database. Boolean operators ('OR' and 'AND') and truncation ('*') were used as appropriate; an illustrative sketch of how such a query is assembled is given below. The full search strategies are available in Additional File 1.
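As an illustration of how a strategy of this kind combines concept blocks, the sketch below joins synonyms with OR within each concept and intersects the concepts with AND. The term lists are abbreviated examples built only from the keywords named above; the authors' full, database-specific strategies (including MeSH headings and additional synonyms) are in Additional File 1.

```python
# Illustrative only: abbreviated concept blocks built from the stated keywords.
population = ['spondyloarthropathies', 'inflammatory arthritis']
anatomy = ['achilles tendon']
intervention = ['physical therapy', 'conservative management', 'injection*']

def or_block(terms):
    # Synonyms for one concept are joined with OR; '*' is a truncation wildcard.
    return '(' + ' OR '.join(terms) + ')'

# Concept blocks are then intersected with AND to form the final query.
query = ' AND '.join(or_block(b) for b in (population, anatomy, intervention))
print(query)
# (spondyloarthropathies OR inflammatory arthritis) AND (achilles tendon)
# AND (physical therapy OR conservative management OR injection*)
```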
Study inclusion criteria Studies included were quantitative longitudinal study designs, such as randomised controlled trials (RCTs), pseudo-randomised and non-randomised experimental studies, cohort studies, and case control studies. Single case studies were excluded. Only full-text studies published in the English language were included, due to the language barriers of the independent reviewers. There were no limitations on the year of publication. Studies reporting participants aged ≥18 years with a working diagnosis of IA made by a rheumatologist, and AT pathology, were included. Studies investigating non-pharmacological interventions (such as orthoses and physical therapy) and/or site-specific corticosteroid injections were selected for further analysis. Limitations were not placed on which qualified health professional prescribed and/or administered the non-pharmacological interventions or corticosteroid injections at the AT. All outcome measures were selected for further analysis. Studies were excluded if they were not an intervention study or focused solely on pharmacological treatments, such as biologic drugs as a primary treatment. Study selection All titles and abstracts of the studies found electronically and through reference list reviews were cross-referenced, and any duplicates were removed. Two reviewers (SM and KH) independently screened the titles and abstracts of the studies for information fulfilling the eligibility criteria as described above. Any discrepancies in opinions were resolved through discussion. Full-text articles were retrieved from the selected abstracts and compared to the inclusion criteria. Two reviewers (SM and KH) independently screened the full-text articles to determine if they met the inclusion criteria. Any discrepancies were resolved through discussion. Quality assessment The quality of the studies was assessed using criteria that assessed the internal validity (i.e. how well the study was conducted, including method errors or risk of bias) and external validity (i.e. generalisability of results to the wider population of people with IA and affected AT). The quality assessment criteria used were adapted from the Cochrane Collaboration tool [21], as previously reported by Hennessy et al. [22]. They included internal validity criteria of sequence generation and allocation concealment; blinding of participants, personnel and outcome assessors; incomplete outcome data; selective reporting and statistical issues; and interventions. The external validity criterion assessed the representativeness of the sample population to the general population with IA and affected AT, and the restrictiveness of the inclusion and exclusion criteria [21,22]. Quality assessment was conducted by two independent reviewers (SM and KH). Any disagreements between the two reviewers were resolved through discussion. For a high quality rating, all applicable domains had to be scored as high quality. Any study with ≥1 domain scoring low quality resulted in an overall low quality rating for the study; this decision rule is sketched below. The quality rating was agreed upon by the co-authors a priori, and has been used in previous systematic reviews [22,23].
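The a priori overall-rating rule lends itself to a one-line check. The sketch below is only a restatement of the rule described above, with hypothetical domain names used for illustration.

```python
def overall_quality(domain_ratings):
    """Overall study rating per the a priori rule: every applicable domain
    must be high quality, otherwise the whole study is rated low quality."""
    return 'high' if all(r == 'high' for r in domain_ratings.values()) else 'low'

# Hypothetical example: a single low-quality domain drags the study down.
ratings = {'sequence_generation': 'high', 'blinding': 'high',
           'selective_reporting': 'low', 'external_validity': 'high'}
print(overall_quality(ratings))  # 'low'
```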
Data extraction/evidence grading For outcome measures, random-effects model meta-analyses would have been conducted if multiple RCTs had been available. As multiple RCTs were not available, qualitative data synthesis was conducted. An evidence rating was assigned to the extracted data that was analysed and synthesised. Once the studies were rated for quality, they were grouped according to intervention and associated outcome measures, and evidence ratings were assigned according to the criteria adapted from Ariens et al. [24] (Table 1). The interpretations of findings were based on combining the qualitative data synthesis and the evidence grading system. Results Using the detailed search strategy, a total of 10,909 articles were retrieved from Medline, Embase, CINAHL and the Cochrane Library. Two articles were also identified through reference list reviews. Figure 1 outlines the flowchart displaying how the relevant studies were found [25]. The inclusion criteria were met by two studies (described in Table 2), which were both observational studies that assessed the effects of injecting corticosteroid around the site of the AT [26,27]. No studies assessing non-pharmacological interventions (such as orthoses and physical therapy) for the AT in patients with IA were found. The study by Huang et al. [26] compared the efficacy of ultrasound (US) guided injections of etanercept and betamethasone (corticosteroid) when injected into the entheses of patients with AS and refractory Achilles enthesitis. The study by Srivastava & Aggarwal [27] investigated the efficacy of US guided corticosteroid (methylprednisolone) injections at the Achilles enthesis in patients with SpAs. In the case of Huang et al. [26], only the outcomes reported for the seven patients who were injected with betamethasone were extracted, as the inclusion criteria of our systematic review did not include biologic pharmacological agents such as etanercept. The sample sizes in both studies were small, lacked a robust sample size calculation, and had a male gender predominance (M:F = 6:1 [26], M:F = 8:1 [27]), thus making it hard to deduce any potential gender-specific differences in results. Heterogeneity in the type of IA existed across the two studies, with Huang et al. [26] solely including patients with AS, whereas Srivastava & Aggarwal [27] represented subcategories of SpA, including those diagnosed with AS, juvenile SpA, PsA, inflammatory bowel disease-associated arthritis, and undifferentiated SpA. The two studies were rated as having overall low internal and external validity during quality assessment (Table 3). Two outcome domains were identified: pain, and US (B-mode and Doppler) identified abnormalities and vascularity in the AT. Considerable variation in the study duration and the timing of outcome assessments was noted in the two studies. Huang et al. [26] measured clinical parameters at baseline, with follow-up outcome measures reported at 2, 4, 8 and 12 weeks. Srivastava & Aggarwal [27] had a shorter follow-up duration, with the assessment of clinical parameters only at baseline and 6 weeks. Additionally, Huang et al. [26] investigated patients who had unilateral AT enthesitis (only one foot investigated), whereas Srivastava & Aggarwal [27] investigated symptomatic ATs, which meant an individual participant may have had both ATs assessed if they were symptomatic. The loss to follow-up varied across the studies, with no loss to follow-up reported by Huang et al. [26], but considerable attrition (38% of study participants) was reported by Srivastava & Aggarwal [27], with a failure of the authors to offer any explanatory cause for the high levels of attrition in the study.
Due to the lack of RCTs and/or sufficient studies/data for analysis, meta-analyses could not be conducted, as was the original intention of this work. A qualitative synthesis of the results is outlined below and described in Table 4. Pain Both studies used a visual analogue scale (VAS) as an outcome measure for pain. Neither of the two studies clarified the nominal scale used for the VAS. Pain was reported to be reduced in both observational studies [26,27]. Huang et al. [26] only included patients who had a VAS score > 4.0 in the affected heel. The baseline VAS before betamethasone was injected was 5.3 ± 0.7 (mean ± SD), which reduced to 1.5 ± 0.8 at the week 12 follow-up. However, it should be noted that the VAS reduced from baseline at the week 2 (0.8 ± 1.0) and week 4 (0.5 ± 0.6) follow-ups, but then increased at the next two follow-ups at week 8 and week 12. Overall, the VAS remained reduced in comparison to baseline measures [26]. Srivastava & Aggarwal found that patients injected with methylprednisolone at the local site of the AT reported a mean VAS score of 7 (range 4-10) prior to injection, and a mean score of 3 (range 0-7) at 6 weeks post-injection [27]. According to the evidence rating criteria of Ariens et al. [24], it can be concluded that there is weak evidence suggesting corticosteroid injections at the AT may reduce pain in the short term (6-12 weeks), as both studies reported consistent findings. Ultrasound evaluation (B-mode and Doppler) Both Huang et al. [26] and Srivastava & Aggarwal [27] evaluated the Achilles enthesis and surrounding structures with B-mode US imaging and Doppler US at baseline and at follow-up visits. B-mode US imaging evaluated morphological changes, and Doppler US assessed vascularisation at the site of the AT. Both studies reported a small reduction in entheseal thickness and in the presence of retrocalcaneal bursitis [26,27]. Srivastava & Aggarwal [27] reported a reduction of entheseal thickness from 6.9 mm to 6.1 mm and a reduction in bursitis (n) from 26 at baseline to 15 at the 6-week follow-up. Huang et al. [26] reported a reduction of entheseal thickness from 7.6 mm to 6.2 mm and a reduction in bursitis (n) from 7 at baseline to 3 at the 12-week follow-up. The values for bursitis and entheseal thickness reduced initially and remained the same from the four-week follow-up to the twelve-week follow-up [26]. No changes in the number of bone erosions and enthesophytes measured with B-mode US from baseline to follow-up were reported in either study [26,27]. This result is not unexpected, due to the irreversible nature of these features. Srivastava & Aggarwal [27] also reported reductions in entheseal hypoechogenicity (n) from 27 to 19 and peritendinous oedema (n) from 17 to 5. Doppler US examination was undertaken in both studies to assess vascularity at the AT at baseline and follow-up. Both studies utilised different grading systems to measure vascularity (Table 5; note that a severe flow signal refers to the presence of vessels involving more than half of the enthesis). The two studies reported a reduction in entheseal vascularity at follow-up. Huang et al. [26] reported a reduction in the number of entheses with the highest grade from 2 to 0, and Srivastava & Aggarwal [27] reported a reduction in mean grade from 2 to 0.
Srivastava & Aggarwal [27] also reported a reduction in retrocalcaneal bursa vascularity.Overall, there is weak evidence to suggest that US guided corticosteroid injections can improve some of the B mode and Doppler US detected features at the AT in the short term (6-12 weeks).There is weak evidence to suggest an improvement in entheseal thickness (B Mode), bursitis (B Mode), and entheseal vascularity (Doppler) as both studies reported consistent findings.Weak evidence is also present to indicate no improvement in bone erosion and enthesophyte formation was observed in both studies following US guided corticosteroid injections.There is inconclusive evidence regarding entheseal hypoechogenecity (B Mode), peritendinous oedema (B Mode), and bursal vascularity (Doppler), as these features were only investigated in one study [27]. Discussion The aim of this study was to determine the effectiveness of non-pharmacological and corticosteroid injections in the management of the AT in people with IA.Only two studies met the inclusion criteria.These two studies investigated the effectiveness of corticosteroid injection therapy.No other relevant studies investigating the use of non-pharmacological interventions in this population were identified.A weak level of evidence was the highest possible irrespective of study quality, which in this case were low quality observational studies.To that end, weak levels of evidence were found for corticosteroid injections decreasing pain and some US (B-mode and Severe flow signal refers to the presence of vessels involving more than half of the entheses N/A Doppler) detected features, such as bursitis, entheseal and entheseal vascularity.All other US detectable features were either weak evidence for no improvement, which was not unexpected due to the irreversible nature of these features, or inconclusive due to a lack of studies. The included studies, Huang et al. [26] and Srivastava & Aggarwal [27], both had low quality for internal and external validity, and thus overall low quality.The low quality ratings for internal validity were primarily attributed to the domains of selective outcome reporting/statistical issues and interventions.An inadequate sample size was present in both studies with only seven patients injected with betamethasone in Huang et al. 
[26], whilst 27 ATs (18 patients) were injected with methylprednisolone in Srivastava & Aggarwal [27]. Additionally, a risk of attrition bias was observed in the study by Srivastava & Aggarwal [27]. The inclusion of 40 symptomatic AT entheses (in 19 patients) was initially reported; however, only 27 symptomatic AT entheses (in 18 patients) were reported at the 6-week follow-up [27]. The reason for participant drop-out was not addressed, and it is possible that some participants did not return due to adverse events or lack of improvement [28]. The interventions domain was also a concern for both studies, as the systemic management of participants was not recorded. The impact of systemic management on the effectiveness of localised pharmacological intervention should be considered, as any localised improvements may be due to improvement in global disease activity rather than the effectiveness of the localised intervention [2]. Additionally, non-pharmacological interventions are adjunct management strategies in this population, and can have limited effect if global disease activity is not addressed [29]. Therefore, the pharmacological profile of participants with IA should always be accounted for when investigating non-pharmacological and localised pharmacological interventions. External validity was also a focus of the quality assessment, as intervention studies need to have a pragmatic approach and be generalisable to the wider population, with the aim of informing clinical practice [30]. Our findings revealed that both included studies lacked external validity. This was due to the minimal representation of females in comparison to males in both studies, with a male to female ratio of 6:1 for Huang et al. [26] and 8:1 for Srivastava & Aggarwal [27], and the lack of representation of common IA subtypes (such as RA) when IA was being investigated collectively [27]. Therefore, future intervention study designs should consider population ratios for gender and the prevalence of IA subtypes to allow for greater generalisability to the wider population. Whilst methodological concerns were found, both studies did demonstrate that corticosteroid injections may be an effective localised management strategy for the AT in this population. However, as the study duration from baseline to final follow-up was short (6-12 weeks), long-term efficacy or risks and adverse effects may not have been fully determined. Additionally, whilst there is evidence of a decrease in pain and some radiographic features at the AT, the data reviewed did not give insight into any improvement provided by corticosteroid injections in terms of disability and quality of life in people with IA.
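Returning to the evidence grades applied above, the decision logic can be restated procedurally. The sketch below is an inference from how the grades are used in this review (weak evidence when two or more low-quality studies report consistent findings, inconclusive when findings are inconsistent or a feature is investigated in only one study), not a verbatim encoding of the full Ariens et al. [24] criteria in Table 1.

```python
def evidence_grade(n_studies, consistent, all_low_quality=True):
    """Approximate grading as applied in this review. With only low-quality
    observational studies available, 'weak' is the highest attainable grade;
    higher grades require high-quality studies (see Table 1)."""
    if n_studies < 2 or not consistent:
        return 'inconclusive'
    return 'weak' if all_low_quality else 'see Table 1 for higher grades'

# Pain: two low-quality studies with consistent findings -> weak evidence.
print(evidence_grade(2, consistent=True))   # 'weak'
# Bursal vascularity: investigated in only one study -> inconclusive.
print(evidence_grade(1, consistent=True))   # 'inconclusive'
```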
As far as the authors are aware, this is the first systematic review to investigate the efficacy of non-pharmacological interventions and corticosteroid injections for the management of AT pathologies in people with IA. The identified studies both addressed site-specific corticosteroid injections for the management of the AT in IA. The use of corticosteroid injections for the AT is still controversial, due to inconclusive evidence regarding their efficacy and potential risks [17]. This is because corticosteroid injections tend to have short-term benefits, with the potential risk of weakening the structural integrity of tendons in the long term. Repeated injections and possible puncture of the tendon substance can increase the potential risk of AT rupture [11]. Although evidence is limited, there are reported cases of AT rupture following corticosteroid injection in otherwise healthy populations with AT pathologies [17,31]. However, it is unknown if this is due to the injection technique or the agent injected [11,17,32]. Notwithstanding, there is also a risk of tendon rupture in the IA population when there is too much active inflammation at the tendon: either primary inflammation (specifically within the tendon) or secondary inflammation (from an adjacent location) [33,34]. Therefore, the greater risk between administering an injection or not administering an injection needs to be established, and managed accordingly [17]. Consequently, the European League Against Rheumatism (EULAR) advises glucocorticosteroid injections as an adjunctive therapy for localised disease, such as enthesitis in PsA [35]. Additionally, the risk of rupture due to injection technique may be mitigated by using US image guidance when injecting [36]. However, this has not been firmly proven [11]. Interestingly, both included studies administered injections under US guidance, and did not report any cases of AT rupture. Non-pharmacological interventions for the AT, such as physical therapy consisting of eccentric exercise and ESWT, have shown positive results in the symptomatic management of the AT [16]. Additionally, non-pharmacological interventions can minimise the potential risks involved with local injections, such as rupture, infection, skin hypersensitivity, and skin depigmentation [37]. However, no studies investigating non-pharmacological interventions for the management of the AT in people with IA were found during our comprehensive search. Due to the lack of evidence, there is currently no data available regarding the efficacy of non-pharmacological interventions in patients with IA. The reason is most likely that people with IA are excluded from studies on the management of the AT [17].
Exclusion may be attributed to the pathogenesis of AT pathology, which may differ between non-IA and IA populations, with active inflammation observed through US in patients with IA [11]. Additionally, systemic medications (such as biological drugs or DMARDs) that patients may be taking, combined with a disease course of variable nature (for example, flares of disease activity and remissions), could impact the findings. If systemic management is working effectively, with subsequent low disease activity, it may enhance the efficacy of results. Conversely, if systemic management is not adequately controlling disease activity, it could lead to less efficacious results for non-pharmacological or localised pharmacological interventions, and could impact adherence to interventions and study attrition rates [2]. As such, the impact of pharmacological interventions should be carefully considered in methodological design and the interpretation of study outcome measures. Therefore, due to the exclusion of people with IA from studies investigating AT management, further studies investigating non-pharmacological interventions for this population are required. Smolen et al. [38], who made recommendations for treating SpAs, also highlighted the need for more research into the management of musculoskeletal involvement. They recommended that inactive disease of musculoskeletal involvement, such as enthesitis, should be a foremost treatment target to optimise quality of life for patients. Current guidelines also highlight the importance of non-pharmacological management in the overall management of axial SpA [39]. These guidelines also recommend that glucocorticoid injections at the localised site of musculoskeletal inflammation could be considered to treat enthesitis, despite the lack of evidence [39]. Unfortunately, guidelines with recommendations for the non-pharmacological management of IA-specific conditions were not found, which further emphasises the need for more research in this area. It should also be noted that the potential mechanisms of action of glucocorticoids in tendinopathy include decreased inflammation, inhibition of cellular proliferation, scarring and adhesion, antiangiogenic activity, antinociceptive action, or some combination of these factors. The results can be positive in cases where excessive inflammation is prevalent [40,41]. This might explain the positive results from the studies included in this systematic review, given the high level of inflammation arising from the disease process of IA. There were a number of limitations to this systematic review that need to be acknowledged. Language was limited to English due to the language restrictions of the reviewers. This is generally not advised, but is difficult to overcome [42]. The number of studies that met the inclusion criteria may have been limited by this, and the results of studies excluded on the basis of language could have impacted the findings of this systematic review. The second limitation of the study was the assumption that outcome measures within each domain were equivalent even if the methods of determining these outcomes were different; for example, reduced vascularity in the enthesis was treated as reduced regardless of the differing grading systems used by the two studies. A further assumption was that vascularity was considered decreased overall even if it may have increased between follow-ups, since the vascularity grade was still reduced at the final follow-up in comparison to the baseline grade of vascularity.
The findings of this systematic review highlight the urgent need for high quality research to be conducted to establish the efficacy of non-pharmacological interventions and injection therapies for the AT in people with IA, to better guide those responsible for delivering care. Future research should consider how study outcomes may be interpreted in the context of co-interventions, such as pharmacological management, and a variable disease course and progression, and should consider analysis for specific subtypes of IA to allow applicability of results in wider clinical practice. Conclusion There is some weak evidence for the efficacy of corticosteroid injections in reducing pain and improving some US detectable features in the AT in people with IA. The efficacy of non-pharmacological interventions could not be assessed due to a lack of relevant literature. There is an urgent need for more research in this field. Future research should address the efficacy of non-pharmacological interventions and injection therapies for the AT within the IA population, specifically addressing different subtypes of IA. An emphasis should also be placed on external validity to allow for greater applicability in clinical practice.
Table 1 - Evidence Rating Criteria [24]
Figure 1 - Search flowchart for non-pharmacological interventions and corticosteroid injections for the Achilles tendon in inflammatory arthritis
Table 3 - Quality Assessment of Included Studies
Table 4 - Qualitative synthesis of results and overview of evidence
2021-07-10T13:59:37.187Z
2021-07-10T00:00:00.000
{ "year": 2021, "sha1": "995433b33c22983a604c042c0bfe37110fb260ee", "oa_license": "CCBY", "oa_url": "https://jfootankleres.biomedcentral.com/track/pdf/10.1186/s13047-021-00484-6", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "995433b33c22983a604c042c0bfe37110fb260ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214504848
pes2o/s2orc
v3-fos-license
System Design Of An Infrared Dual-band Seeker An infrared imaging seeker is an important development direction for ground-to-ground ballistic missile terminal guidance technology, but its detection probability is vulnerable to target/background characteristics, detector performance, field of view and other factors. In order to improve the ability of the infrared seeker to adapt to a complex battlefield environment, a new design principle for an infrared dual-band seeker is introduced. The whole system adopts a dual-band design and uses image fusion technology to enrich the information difference between target and background features. In addition, the design of large and small dual fields of view can further improve the target interception and the accurate recognition and tracking ability of the seeker. The working principle of the infrared seeker is analyzed, and the optical design ideas and image fusion method are introduced. Through index analysis and calculation, the infrared seeker meets the system requirements, which can provide a reference for future ballistic missile seeker design. Introduction The key targets in war are high-level command headquarters, airports, power plants, bridges and other strategic objectives. It is difficult for general subsonic weapons to attack and destroy them effectively. Ground-to-ground ballistic missiles with penetration capability have the function of attacking these strategic targets. They have the characteristics of high speed, long range, maneuverable orbit change and difficulty of interception by existing weapons, and they carry warheads large enough to destroy aircraft carriers and deep underground shelters. According to the need for precise attack, the development trend of ground-to-ground ballistic missiles is to install a precise terminal guidance seeker on the basis of the original inertial guidance, and to achieve a precise hit by map matching or terrain matching systems. Infrared imaging terminal guidance has become an important development direction of terminal guidance technology because of its high sensitivity, high spatial resolution, strong anti-jamming ability and quasi-all-weather characteristics. With the development of modern war, jamming and anti-jamming technology is progressing in mutual confrontation. The battlefield environment faced by various tactical missiles is becoming more and more complex, and the confrontation is becoming more and more fierce. The operational effectiveness of single-band infrared imaging guidance weapons will be weakened day by day as the effective information they can obtain decreases. They will not be able to strike accurately in future wars, which also forces people to seek further development of infrared imaging guidance technology. At present, there are many kinds of tactical missile equipment using infrared imaging guidance at home and abroad. Some of them adopt dual-band or dual-field-of-view designs. They can distinguish targets and interference by using the characteristics of energy, shape, trajectory and spectrum, which greatly improves the detection ability and gives strong resistance to infrared decoys. For example, France's MICA-IR and South Africa's A-Darter use two-color line detectors to form a scanning imaging system. The US Navy's Standard-3 interceptor uses a dual-band staring infrared detector. The Chaparral missile developed by Lall Company in the United States uses a long-wave detector with wide and narrow dual fields of view.
Norway's new-generation anti-ship missile NSM uses two detectors, medium-wave and long-wave, and has wide and narrow dual fields of view. These seekers are more technologically advanced. They can adopt dual-band or dual-field-of-view designs for different application requirements, further enhancing their ability to resist artificial and complex background interference and improving target detection, interception and tracking performance. They represent one of the development directions of infrared imaging guidance. The infrared dual-band/dual-field-of-view seeker designed in this paper can detect and acquire the scene information distribution at long range using the large field of view after dual-band image fusion, and can accurately recognize and track targets at short range using the small field of view. The two modes work asynchronously in series. The results of long-range, large-field-of-view infrared imaging support the subsequent recognition and tracking of infrared targets in the small field of view, and improve the anti-jamming ability and the confidence of infrared target recognition. Fig. 1 Aircraft target imaging diagram: (a) target detection in the large field of view; (b) accurate recognition in the small field of view. Summary of principles Both the medium-wave and the long-wave channels adopt 640 × 512 cooled focal-plane detectors. After photoelectric conversion of the target's infrared energy, each detector outputs an electrical signal containing target information. The detector processing unit amplifies and filters the signal, performs frame processing, and carries out analog-to-digital conversion. Finally, two infrared digital image signals are output. The target information processing module fuses the two images to complete image signal selection decisions, feature extraction, intelligent recognition, judgment and tracking processing. The central processor can drive the field-of-view switching mechanism to complete the magnification switch between the large and small fields of view. Finally, the central processor acts as the main control device of the seeker, receives commands and data from the missile-borne computer, and reports the target deviation in real time. Optical design ideas The infrared dual-band seeker has a strapdown structure and uses a common-path optical design: the front end shares an objective lens, and the back end uses a prism beam splitter that transmits the long-wave band and reflects the medium-wave band, separating the two bands; the back-end medium-wave and long-wave sub-paths are then optimized separately, each imaging onto the detector for the corresponding band. In order to achieve fast switching between the large and small fields of view, the method of switching optical elements into and out of the optical system is adopted to change the focal length and field of view of the system. This method is fast and requires little axial space. In order to ensure high imaging quality, a cooled optical system should achieve 100% cold-shield matching. In the optical design process, it is necessary to set the system stop at the detector cold shield. If the aperture stop is far away from the front lens, the radial size of the front lens of the optical system will be greatly enlarged; if cold-stop matching is to be satisfied, the optical aperture can be effectively reduced by using a secondary imaging method, which is conducive to the miniaturization of the system. The optical structure of the secondary imaging is shown below.
Fig. 3 Secondary imaging optical structure. Infrared optical systems have fewer kinds of materials available, and the performance in different bands varies greatly, which makes chromatic aberration correction difficult to design. Because the temperature coefficient of the refractive index of infrared optical materials is large, the changes of curvature and thickness of the infrared system caused by temperature change will cause the image plane of the optical system to drift, so that the focal plane of the detector and that of the optical system no longer coincide, which will seriously affect the performance of the system. In order to meet the requirements of high- and low-temperature environments, the system adopts an optically passive athermal design. By choosing lens materials reasonably, distributing the optical power, and making use of mutual compensation between the lens materials and the lens mechanical structure materials, the dual-band infrared optical system achieves simultaneous athermalization and achromatization. The material parameters of the lenses should simultaneously satisfy the optical power, achromatic and athermal conditions, which can be written as $\phi = \sum_i (h_i/h_1)\,\phi_i$, $\sum_i (h_i/h_1)^2\, C_i\, \phi_i = 0$, and $\sum_i (h_i/h_1)^2\, T_i\, \phi_i = -\alpha\,\phi$. In the formulas, $\phi$ is the total optical power of the system; $\phi_i$ is the optical power of the $i$th lens; $h_i$ is the incident height of the first paraxial ray on the $i$th lens; $C_i$ is the chromatic aberration coefficient of the $i$th lens, which is the reciprocal of the Abbe number; $T_i$ is the thermal difference coefficient of the $i$th lens; and $\alpha$ is the thermal expansion coefficient of the barrel material. $T_i$ is defined as $T_i = \frac{1}{n_i - 1}\frac{dn_i}{dt} - \alpha_i$. In the formula, $n_i$ is the refractive index of the $i$th lens material, $dn_i/dt$ is the rate of change of the refractive index with temperature, and $\alpha_i$ is the thermal expansion coefficient of the lens material. The machinability, chemical stability and coating properties of optical materials were comprehensively analyzed. Finally, germanium, zinc selenide and zinc sulfide were selected as the dual-band optical materials. The working waveband of the long-wave detector is 7.7-10.3 μm, with a 640 × 512 array and a 15 μm pixel size; the working waveband of the medium-wave detector is 3.7-4.8 μm, with a 640 × 512 array and a 15 μm pixel size. Using CODE V software to optimize the design, the system F-number is 2; the large field of view is 25° × 20°, with a focal length of 22 mm and an aperture of 11 mm, while the small field of view is 10° × 8°, with a focal length of 55 mm and an aperture of 27.5 mm. The design results have high optical transmittance and transfer function, which meet the requirements of the system. Image fusion method Image fusion should capture both the common features and the unique features of the two bands. According to the fusion level, image fusion can usually be divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. This system uses pixel-level fusion. The image fusion method proposed by Li et al. [1] decomposes each image into a detail layer and a base layer, and applies guided filtering to the fusion weights of the detail layer and the base layer respectively. The fusion method retains the complementary information of multiple source images [2]. The flow chart of the image fusion algorithm based on guided filtering is shown in the following figure. The base layer of each source image $I_n$ is obtained by mean filtering, $B_n = I_n * A$, where $A$ is a mean filter and $*$ is a convolution operation. Further, the detail layer image is obtained as the difference between the original image and the base layer image:
$D_n = I_n - B_n \quad (6)$. In this way, through image decomposition, it is easy to obtain the base layer and detail layer of each image. The base layer expresses the general picture of the image and the gray-level changes at a larger scale, while the detail layer contains the details at a smaller scale. Fusion coefficient The fusion coefficients of the base layer and the detail layer are constructed respectively. First, the construction of the base-layer fusion coefficient is introduced. A high-pass image is obtained by filtering the source image with the Laplace operator [3], $H_n = I_n * L$, where $L$ is a $3 \times 3$ Laplacian operator. The local saliency of the pixels is represented by the absolute value of the high-pass image: $S_n = |H_n| \quad (9)$. By comparing the saliency maps, the weight map $P_n$ of each source image is obtained. Obviously, a fused image formed directly from these weight maps would produce sawtooth artifacts and gray-level jumps at the boundaries, making it difficult to maintain the spatial structure of the image and to ensure smooth gray-level transitions. Therefore, using the characteristics of guided filtering, the source image $I_n$ is used as the guide image, and the weight map $P_n$ is filtered: $W_n = \mathrm{GF}_{r,\varepsilon}(P_n, I_n)$, where $\mathrm{GF}$ denotes the guided filtering operation, $r$ is the guided-filter window size and $\varepsilon$ is the regularization parameter. For the detail layer, because the detail-layer image itself belongs to the high-frequency information of the source image, it has high-frequency characteristics [4]. The local saliency of detail-layer pixels is represented directly by the absolute value of the detail-layer image itself. Image fusion According to the fusion coefficients obtained above, image fusion is performed at the base level and the detail level respectively: $B = \sum_n W_n^B \circ B_n$ and $D = \sum_n W_n^D \circ D_n$, where $\circ$ is point-wise multiplication. Then, the fused base layer and detail layer are added together to obtain the final fusion result: $F = B + D$. The simulation results of dual-band image fusion are shown in the following figure. It can be seen that the fused image retains important details of the two original images, so it is more advantageous for target interception. Spatial Resolution Analysis The imaging schematic diagram of the strapdown infrared seeker is shown in the following figure. Its imaging model is a perspective imaging model, so the resolution of the pixels corresponding to different positions in the image plane differs. For large surface ships and other targets, they can be approximated as planar targets at higher altitudes, and the size of the target in the image plane from different viewing angles also differs. The imaging field of view and resolution are analyzed separately below. The sea surface area corresponding to the field of view is calculated as follows:
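The source record breaks off at this point. Looking back at the image-fusion method described above, here is a minimal Python sketch of the guided-filtering fusion pipeline, assuming two co-registered single-channel float images in [0, 1]. The window sizes and regularization values are illustrative assumptions, not parameters from the paper, and the guided filter used here comes from the opencv-contrib-python package.

```python
import cv2
import numpy as np

def guided_filter_fusion(i1, i2):
    """Pixel-level two-band fusion via guided filtering (after Li et al. [1]).

    i1, i2: co-registered single-channel float32 images scaled to [0, 1].
    All window sizes and epsilons below are illustrative assumptions.
    """
    imgs = [i1, i2]
    # Base layer B_n = I_n * A (mean filter); detail layer D_n = I_n - B_n.
    bases = [cv2.blur(im, (31, 31)) for im in imgs]
    details = [im - b for im, b in zip(imgs, bases)]
    # Saliency S_n = |I_n * L| (Laplacian high-pass), lightly smoothed.
    sal = [cv2.GaussianBlur(np.abs(cv2.Laplacian(im, cv2.CV_32F)), (11, 11), 5)
           for im in imgs]
    # Binary weight maps P_n: 1 where that band is the more salient one.
    p0 = (sal[0] >= sal[1]).astype(np.float32)
    p = [p0, 1.0 - p0]
    # Refine the weight maps with guided filtering (guide = source image) to
    # avoid jagged seams; requires the opencv-contrib-python package.
    gf = cv2.ximgproc.guidedFilter
    wb = [gf(im, pn, 45, 0.3) for im, pn in zip(imgs, p)]    # base weights
    wd = [gf(im, pn, 7, 1e-6) for im, pn in zip(imgs, p)]    # detail weights
    # Normalise the weights per pixel, then recombine: F = B + D.
    b = (wb[0] * bases[0] + wb[1] * bases[1]) / (wb[0] + wb[1] + 1e-8)
    d = (wd[0] * details[0] + wd[1] * details[1]) / (wd[0] + wd[1] + 1e-8)
    return np.clip(b + d, 0.0, 1.0)
```

Using different (window, epsilon) pairs for the base and detail weights mirrors the two-scale design: large, strongly regularized windows keep the base-layer weights spatially smooth, while small windows preserve edge-aligned detail weights.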
2020-01-09T09:14:53.252Z
2020-01-07T00:00:00.000
{ "year": 2020, "sha1": "531c31b78ddea6ab4837a46996fe9a3df0ce65ce", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/711/1/012092", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "166a630e72db391007fb7ac3648d85eae212d133", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
225070298
pes2o/s2orc
v3-fos-license
Assessment of stakeholders' contributions to livestock development in Delta State, Nigeria: Rural infrastructure intervention Abstract: The study assessed the contributions of stakeholders to livestock development through the provision of infrastructure to rural areas of Delta State, Nigeria. The objectives were to describe the socio-economic characteristics of respondents, appraise the role of external stakeholders in livestock development, verify any existing relationship between livestock development and rural development indicators, and identify the challenges faced by respondents. Purposive and simple random sampling techniques were used to select the three major towns and 180 respondents. Data were collected by questionnaire and subjected to descriptive and inferential statistics. The results showed that the majority of the respondents were males (68.3%) with a Higher National Diploma (HND)/first degree (33.3%) and a mean age of 42 years. The four highest-ranked external stakeholders were the skills training and entrepreneurship programme (94%), the youth agricultural entrepreneurs programme (90.6%), the job creation agency (89.4%) and FADAMA (80.0%), which promoted livestock development. A significant relationship was observed based on infrastructural contributions to livestock development (p < 0.05) among the variables: market, water project and roads. Serious challenges included the high cost of feed facilities (mean = 3.69) and insufficient power supply (mean = 3.49). The study concluded that the more available the rural infrastructure intervention, the more developed the livestock sector. The study recommended that stakeholders should make their extension agents available to livestock farmers. Introduction It is affirmed that sustainable rural development approaches, particularly in relation to agriculture, agro-industrial and agro-allied value chains, and business, if satisfactorily adopted and adapted in Nigeria, could transform rural communities to desirable levels of human and socio-economic development (Ndukwe and Omeji 2015). The livestock sector has experienced notable growth in recent years, principally powered by a worldwide increase in demand for food of animal origin. This has been attributed mostly to population growth, urbanization and returns on investment, and has been likened to a livestock revolution (Delgado et al. 1999). The positive relationship between agriculture and development, principally in sub-Saharan Africa, is seen as a yardstick for achieving sustainable development. The adoption of Sustainable Development Goal (SDG) II was one of the community development tools that encouraged rural infrastructural development in African countries; 70% of its focus target group lives in rural and sub-urban areas and is reliant on agriculture for a living (International Livestock Research Institute 2004). Invariably, reducing poverty, improving the quality of food intake and the well-being of the people would mean improving the livelihood of the vast majority, and this depends heavily on the achievements of the agriculture sector. For example, using World Development Indicator data for Nigeria over selected periods, a strong positive connection is found among food production, the primary school enrolment ratio and gender equality, while there is a strong negative relationship between food production and child mortality rates.
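The computation behind statements of this kind is a simple bivariate correlation across the selected periods. The sketch below uses hypothetical, illustrative values and column names only; in practice the corresponding World Development Indicator series for Nigeria would be loaded.

```python
import pandas as pd

# Hypothetical, illustrative values only.
df = pd.DataFrame({
    "food_production_index": [88, 92, 95, 101, 104, 108],
    "primary_enrolment_ratio": [61, 64, 66, 70, 72, 75],
    "child_mortality_rate": [187, 179, 170, 158, 149, 140],
})

# Pairwise Pearson correlations of food production with the other series:
# positive with enrolment, negative with child mortality, as described.
print(df.corr()["food_production_index"].round(2))
```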
Since it has been established that developing countries like Nigeria have a comparative advantage over industrialized countries in the production of agricultural output, it is imperative to highlight that such an economy needs to focus its attention on agricultural sector development so as to encourage the progress of the nation (FAO 2002). This is the only sector that provides a ready-made means for a country like Nigeria to smooth the progress of industrial development, since all the other sectors depend directly on agriculture either for food to sustain their workforce or as a decisive input in their production processes. In the meantime, the sector can supply comparatively cheap food to the urban industrial sector to check the inflationary tendency of workers' wages, where insufficient food supply may lead to rising food prices and, as a result, industrial turbulence as workers continue to demand wage increases to meet the indispensable needs of life. Food importation is not feasible for any determined country for numerous reasons, which might be governmental, commercial or tactical. It is therefore demanded of agriculture that it offer food above the subsistence level (a surplus). In a nutshell, the outstanding role expected from the agricultural sector in a developing country like Nigeria cannot be overstated (FAO 2020).

Steinfeld (2014) reported that crop production activities attracted more developmental effort than livestock production. Animals themselves are the main resource, but their mobility makes them a difficult resource to measure. Livestock ownership is more skewed than ownership of, or access to, land, and, as a consequence, livestock development, particularly where it concerns larger and more costly species such as cattle, tends to produce benefits of low equity (Rahman et al., 2020). For the livestock planner, land ownership has over the years been a serious dilemma for livestock development. These are predominantly multifaceted and sensitive problems that need to be addressed. Eisler (2018b), in agreement with other researchers, confirmed that livestock includes the raising of sheep, goats, pigs, chickens, rabbits, cattle and horses. The importance of livestock development was highlighted by Eisler (2018a), who explained that livestock farms have been benefiting us in many ways for ages: they provide us with eggs, meat and milk. Even so-called animal wastes are not waste per se; they are recycled and make admirable organic fertilizers.

The nexus between livestock development and rural development is one of interdependence. Functional roads are a great priority in rural development and thus a prerequisite for livestock development. Investment in infrastructure accounts for more than half of the recent enhancement in economic growth in Africa and has the potential to achieve even more. Good infrastructure has other ancillary and equally important effects. Water projects and access to water, particularly in the extensive rangeland systems, are elementary requirements for livestock production, but inadequate access to water has been a great issue during dry-season grazing (Steinfeld 2014). The failure and prevalence of dilapidated infrastructure such as water projects, power supply and good roads in the communities constitute an unfriendly environment for agricultural development (Kessides 1993; Egbetokun 2009).
They further stressed that the provision of rural infrastructure not only contributes to agricultural development but also increases commercial activity involving both the public and private sectors. Peng (2002) affirmed that neglect or non-development of rural areas contributes to generally poor economic development. This study sought to bridge the gaps mentioned above by estimating the contributions of stakeholders to livestock development through the provision of infrastructure. The study also tried to identify the existence of a long-run relationship between the agricultural sector and rural development. The intention is to make this topic more researchable in Delta State, so that the government and interventionist agencies will focus more on this area as a strategy for solving societal problems ranging from poverty and low employment creation to poor infrastructure and weak socio-economic development. Given that entrepreneurship in agriculture would stimulate the economic growth and development of the country, it could help rural areas develop into urban areas, thereby developing the country in the near future.

Objectives of the study
The specific objectives of the study were to: (i) describe the socio-economic characteristics of the livestock farmers, (ii) appraise the role of external stakeholders in the livestock development sector, (iii) verify any existing relationship between various sectors of livestock development and rural development indicators, and (iv) identify the challenges faced by livestock development practitioners.

Hypotheses
H01: There is no existing relationship between poultry sector development and rural development indicators.
H02: There is no existing relationship between piggery sector development and rural development indicators.
H03: There is no existing relationship between rabbitry sector development and rural development indicators.
H04: There is no existing relationship between goat sector development and rural development indicators.

Conceptual framework for the study
The assessment of the contributions of stakeholders to livestock development in Delta State, Nigeria, through rural infrastructure intervention is similar to the findings of Ovharhe (2019). He opined that the establishment of a framework analysis is necessary for the successful implementation of agricultural projects in the Niger Delta area. Arising from Figure 1, it is observed that a nexus exists between rural infrastructure interventions (market, water project, roads, etc.) and livestock development practitioners (poultry, piggery, rabbitry and goat rearing) amid various challenges or limitations. This is to say that external stakeholders' contributions to rural infrastructural development in various communities support the advancement of livestock development by overcoming the challenges posed to the industry and assuring the food security and income generation of farmers.

The nexus between project performance and rural infrastructural development
It was established that project performance, whether at a high or low level (Figure 2), was impacted by increases or decreases in the following factors: farmers' knowledge, aspirations, skills and attitude; innovation adoption; poverty reduction; objectives achievement; farmers' constraints; natural environment sustainability; and government policy implementation (Ovharhe 2019).
Similarly, this phenomenon of high or low performance of agricultural projects is associated with the nexus between livestock project performance and rural infrastructural development indicators. In essence, government infrastructural policy implementation through interventional measures in the provision of roads, pipe-borne water, health facilities, power supply, etc., contributes to agricultural development, particularly among livestock enterprises. Achievement of livestock objectives and goals, by managing farmers' constraints and harnessing the natural environment, contributes to the well-being of livestock farmers and vice versa, as demonstrated in Figure 2.

Methodology
The study was carried out in Delta State. Delta State is made up of three agricultural zones (Delta North, Delta Central and Delta South) and 25 local government areas (LGAs) in the south-south geographical region of Nigeria. The state is endowed with oil mineral deposits and agricultural resources in crop and animal husbandry, fisheries and forestry. A purposive sampling technique was used to select the three major towns from three LGAs majorly involved in livestock production, one in each of the three agricultural zones: Ughelli in Ughelli North, Igbide in Isoko South and Ejeme-Aniogor in Aniocha South. The livestock farmers' records were obtained from the Delta Agricultural and Rural Development Authority (DARDA) (formerly the Agricultural Development Programme, ADP). This information enabled a sample size of 180 farmers (66%) to be selected from a population frame of 272 registered livestock farmers in the study areas.

Measurement of variables
The role of external stakeholders in livestock development was measured using a dichotomous scale of "yes" or "no." The respondents were asked to tick "yes" or "no" for each external stakeholder in livestock development. Examples of the external stakeholders include the skills training and entrepreneurship programme (STEP), FADAMA, the state employment and expenditure for results project (SEEFOR), ADP, non-governmental organizations (NGOs) and SDG programmes, among others. The existing relationship between livestock and rural development was measured by ticking the rural development projects that contribute to the livestock agriculture component. Examples of rural development projects include water project, power supply, town hall, school building, market, hospital, road network, hotel, banks and church, while the livestock enterprises include poultry, piggery, rabbitry and goat. The constraints faced by livestock farmers were measured by making a list of possible challenges and requesting the respondents to rate them. Examples of the challenges include non-functional government water project, high cost of feed, insufficient power supply, non-modern livestock housing, absence of young farmers club, distant market location, inadequate veterinary facilities and poor road network. A Likert-type scale was used to assess the challenges faced by livestock farmers, with scores of 1-4 assigned to strongly disagreed, disagreed, agreed and strongly agreed, respectively. The cut-off mean value is 2.50 ((1 + 2 + 3 + 4)/4 = 2.50). The data generated were analysed using SPSS statistical software (version 20).

Operationalization of variables
A correlation coefficient was used to analyse the hypotheses. The variables were pairwise ranked in comparative analysis.
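To make the scoring and hypothesis-testing procedures concrete, a minimal sketch in Python is given below; the response arrays are hypothetical placeholders rather than the survey data, and NumPy/SciPy stand in for the SPSS analysis actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert responses for one challenge item
# (1 = strongly disagreed ... 4 = strongly agreed); the cut-off mean is 2.50.
feed_cost = np.array([4, 3, 4, 4, 2, 3, 4, 3])
mean_score = feed_cost.mean()
verdict = "serious challenge" if mean_score >= 2.50 else "not serious"
print(f"mean = {mean_score:.2f} -> {verdict}")

# Hypothetical paired ratings of a livestock indicator and an infrastructure
# indicator, tested with a correlation coefficient as in hypotheses H01-H04.
poultry = np.array([3, 4, 2, 4, 3, 1, 4, 2])
water_project = np.array([3, 4, 1, 4, 2, 1, 4, 3])
r, p = stats.pearsonr(poultry, water_project)
print(f"r = {r:.3f}, p = {p:.3f} ->",
      "reject H0" if p < 0.05 else "fail to reject H0")
```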
The dimensions and indicators of the variables were reported in the vertical and horizontal dimensions, with each livestock enterprise paired or associated with the concerned stakeholder indicators as listed. A positive correlation was recorded when both the vertical and horizontal axes (dimensions) increased, while a negative correlation was associated with an increase in one variable and a decrease in the other. This approach was adopted by Ofuoku (2017) in the operationalization of correlation variables in a study of rural-urban migrants' remittances and household food security in Delta State, Nigeria.

Results and discussion

Socio-economic characteristics of the respondents
Entries in Table 1 show that the respondents' mean age was 42 years, in line with the study of Mandal (2020). The largest group of respondents (31.1%) had secondary education as their highest attainment. This is in agreement with the study by Alabi and Aruma (2006), who found that the level of training determines the quality of farmers' skills, their allocative capabilities and how well versed they are in the innovations and technologies around them. Mandal (2020) also stated that backyard livestock farmers had a household size of 6-10 persons. Respondents' annual farm income was observed in categories, with ₦1,500,000-₦1,699,000 as the highest (1.1%) and ₦50,000-₦249,000 as the lowest (38.3%). Regarding contact with extension agents, the cumulative analysis showed that only 40% of livestock farmers had such contact. This result indicates that contact with extension agents is very poor and needs to be attended to. According to the findings of Ovharhe et al. (2020), extension workers are not effective in value-addition training programmes for maize cropping as a component of livestock feeds in Delta State.

Roles of external stakeholders in livestock development
As presented in Table 2, the majority of the stakeholders recorded good support for livestock development. However, the extent of support varied from organization to organization, with STEP, YAGEP and the job creation agency ranked as the three highest contributors (Table 2), whereas DARDA, institutions and local NGOs ranked as the three lowest contributors. In view of this, Oyibo (2020) affirmed that DARDA outreach activities are very weak in Delta State, Nigeria. Tables 3-6 show the relationship between the various livestock sectors and rural development. The rural development intervention indicators were water project, power supply, town hall, school building, market, hospital, road network, hotels, banks and church. The livestock enterprises in this study were poultry, piggery, rabbitry and goat. It is important to state here that, in the majority of cases, the correlations existing among the various enterprises were significantly positive (p < 0.05). On a general statistical note, where negative correlations existed, the implication is that the variables with a negative impact on the assessment rating need to be developed for better contributions to livestock development. Ndukwe and Omeji (2015) asserted that agricultural development suffers serious setbacks because of poor infrastructural development as related to pipe-borne water availability, tarred road coverage, electricity functionality and communication accessibility. The rural infrastructural development intervention by stakeholders in the livestock development sectors of Delta State, Nigeria, is specifically assessed in the following nexus.
Relationship between the poultry sector and rural development
The correlation coefficients for poultry are shown in Table 3. The existing positive correlations here are in consonance with a FADAMA III survey, in which it was reported that rural areas with FADAMA III infrastructural support and intervention tend to advance further in agricultural development than communities without such assistance. Again, Ofuoku et al. (2016) reaffirmed that more poultry farmers drifted from rural to urban areas because of poor infrastructural development and the low input support given to livestock farmers.

Relationship between the piggery sector and rural development
The correlation coefficients for the piggery sector are shown in Table 4.

Relationship between the rabbitry sector and rural development
Ovharhe et al. (2020) discovered that backyard rabbit farming contributed to food security in Delta State, Nigeria. In addition, the infrastructural contribution of stakeholders to housing in the rabbitry sector of livestock production increased the livelihood of rural farmers in northern Nigeria (Mailafia et al. 2011).

Relationship between the goat sector and rural development
The correlation coefficients for goat are shown in Table 6. They indicate that most indicators of the goat sector showed positive correlations among some variables, an indication that they are a good measure of fitness. The goat sector correlated negatively with school building (r = −0.073, 0.05), market (r = −0.020, 0.05), hospital (r = −0.153, 0.05), road network (r = −0.012, 0.05) and church (r = −0.141, 0.05). Jabir (2007) opined that the goat sector in livestock development contributes to poverty alleviation. Thus, infrastructural development is a key factor for livestock development. Fan and Zhang (2004) also maintained that economic development cannot be holistic until rural and urban areas are synergized in development.

Challenges faced by livestock development practitioners
The results are presented in Table 7.

Conclusion
It was noticed that the majority of the livestock farmers were males and had considerable experience in farming. Based on the findings of this research, it is believed that stakeholders' intervention through rural infrastructural development has positive contributory roles and impacts on livestock development in Delta State. The study revealed that external stakeholders' programmes (such as STEP, YAGEP and the job creation entrepreneurs programme) recorded good support for livestock development. It was observed that a positive correlation existed between the various enterprises of the livestock industry and rural development. Thus, the more available the water project, power supply, town hall, school building, market, hospital, road network, hotels, banks and church, the greater the livestock development in the study area, including increases in the population of livestock and in the practice of livestock farming in poultry, piggery, rabbitry and goat production. Among the challenges facing livestock development, the serious issues are high cost of feed, non-functional government water project, insufficient power supply and non-modern livestock housing, among others.

Recommendations
The findings of this study led to the following recommendations: (i) Stakeholders should make their extension organs more available to livestock farmers, since it was discovered that there was a limitation in livestock extension activities. (ii) Project donors and stakeholders should have a functional policy of monitoring and evaluating livestock empowerment programmes such as YAGEP.
This will ensure project sustainability. (iii) High cost of feed, insufficient power supply and inadequate veterinary facilities were serious challenges that need to be dealt with; external stakeholders should therefore intervene appropriately, as this will help farmers achieve good productivity.
Pressure-Induced Melting of Confined Ice

The classic regelation experiment of Thomson in the 1850s deals with cutting an ice cube, followed by refreezing. The cutting was attributed to pressure-induced melting but has been challenged continuously, and only lately has consensus emerged through the understanding that compression shortens the O:H nonbond and lengthens the H–O bond simultaneously. This H–O elongation leads to energy loss and lowers the melting point. The hot debate survived well over 150 years, mainly due to a poorly defined heat exchange with the environment in the experiment. In our current experiment, we achieved thermal isolation from the environment and studied the fully reversible ice–liquid water transition for water confined between graphene and muscovite mica. We observe a transition from two-dimensional (2D) ice into a quasi-liquid phase by applying a pressure exerted by an atomic force microscopy tip. At room temperature, the critical pressure amounts to about 6 GPa. The transition is completely reversible: refreezing occurs when the applied pressure is lifted. The critical pressure to melt the 2D ice decreases with temperature, and we measured the phase coexistence line between 293 and 333 K. From a Clausius–Clapeyron analysis, we determine the latent heat of fusion of two-dimensional ice at 0.15 eV/molecule, twice as large as that of bulk ice.

Water at atmospheric conditions exists in several states of aggregation, such as vapor, liquid, and several amorphous and crystalline solid phases.1−4 Understanding the vast number of ice phases and phase transitions is essential for many fields, including environmental, life, and planetary sciences.5,6 The most important phase transitions are those of melting and freezing of water, because they define the sea level and dominate life on Earth.7 One of the anomalous thermodynamic properties of water is that its melting point decreases as the pressure increases.8−11 This effect is of particular importance because it can define water flow under large compressive forces. Pressure-induced melting plays a prominent role in glacial motion.12−14 The weight of massive glaciers can cause internal deformations in the ice structure. The effect is strongest near the glacier/terrain interface, where pressures are highest. At these locations, ice melts even at temperatures below its bulk melting point, and the resulting liquid water allows the glacier to slide over the terrain. It was initially believed that moderate pressures were sufficient to form a thin water layer on ice, in an attempt to explain the anomalous friction behavior of ice, for example, during ice skating.15 However, this idea was already challenged early on by Faraday.16 Slipperiness of ice (for example, in ice skating applications) is the result of the presence of a liquid-like film of water on the ice surface, even at temperatures below its freezing point.17−19 Pressure-induced melting of ice requires far greater pressures than those encountered in common slippery situations. Another example that is often associated with ice skating is Thomson's 19th century experiment that involves the sinking of a wire through an ice cube (or a large block of ice):16,20 the wire cuts through the ice by melting it through the application of an external force. As the wire moves through, the water behind it immediately refreezes. Ice melting due to the application of a high external pressure, and refreezing when the pressure is relieved, is known as regelation.
Thomson's experiment is often used as a textbook paradigm for pressure-induced melting and regelation.21,22 However, even though pressure-induced melting is real when sufficiently high pressures are applied (on the order of hundreds of MPa or a few GPa), the wire that cuts through a block of ice is a far more complicated experiment, and several other parameters contribute to the melting process. Among those, heat conduction through the wire, friction heating, and wire wettability contribute the most.11,23,24 Even though it is difficult to experimentally decouple pressure-induced melting from other effects, it still plays a prominent role in several physical processes. It is most prominent in systems in which large pressures prevail. Such systems are difficult to access experimentally, and knowledge of the molecular dynamics comes only from theoretical investigations.11,25,26 It is thus highly desirable to find a way to access pressure-induced melting experimentally. We have designed an experiment that allows, for the first time, exploration of the microscopic behavior of ice layers under an external pressure. Our solution suppresses possible disturbing thermal influences from the environment. We use graphene as an ultrathin coating to trap water structures on a supporting mica surface. Because of graphene's unusual properties, such as impermeability to small molecules, mechanical flexibility, and chemical stability, it allows for the direct visualization of confined water structures by scanning probe techniques.27 The anisotropy in the thermal conductivity of graphene28 and mica,29 with a high/low conductivity parallel/perpendicular to the sheets, allows one to investigate the intrinsic properties of the ice network, isolated from thermal fluctuations during imaging. A sharp atomic force microscopy (AFM) tip is used to raster-scan the graphene surface on top of ice crystals on mica. By regulating the tip load, we can directly control the locally applied pressure at the graphene/ice/mica interface with nanometer precision and high accuracy. Any heat induced by the scanning AFM tip is quite rigorously led away from the ice crystals due to the extremely low thermal conductivity perpendicular to the graphene sheets, as the in-plane thermal conductivity by far outweighs the out-of-plane thermal conductivity (2000−4000 and 6 W m−1 K−1, respectively).28 The system is therefore a viable candidate to investigate pressure-related phase transitions of ice networks decoupled from thermal effects. Graphene coating of water has provided useful insight into intercalation effects and the physical properties of confined water structures.27,30−41 In principle, when water is confined between two flat surfaces, its structure and dynamics depend heavily on the molecular structure of the confinement walls, the confinement dimensions, temperature, and pressure.42−48 Often, confined water structures display perpendicular order due to stratification effects in the vicinity of the surface.49−51 In particular, water confined between graphene and mica forms flat islands with faceted edges and well-defined thickness, close to the interlayer distance of I_h ice.27 These water structures are in equilibrium with the environmental water pressure, and they communicate with the environment through defects located at the graphene/mica interface.33 At ambient relative humidity (∼50%), the graphene/mica confinement contains a thin water film with a thickness that corresponds to two water layers.52,53
Interestingly, at low relative humidity (<1%), ice crystals grow at the interface, induced by the heat extracted from the system by the evaporation of water molecules from the intercalated water film.53 Because of diffusion and rotational limitations of the water molecules that want to incorporate into the ice crystal, the crystallites acquire a fractal shape (see Figure 1a).54 The mica is hydrophilic and defines the structure of the ice crystal, whereas the graphene is slightly hydrophobic and acts as a neutral confinement. First-principles molecular dynamics (MD) simulations revealed that the first water monolayer is a fully connected hydrogen-bonded network epitaxially grown on mica.55,56 The first ice layer on mica (in contrast to multilayer films) has no free O−H bonds sticking out of its surface.55,57,58 The ice layer possesses a net dipole moment whose positive side points toward the mica surface. A schematic of this confined ice network is shown in Figure 1b, and the structure is based on ref 55. Owing to the absence of uncoordinated O−H bonds on the surface of the ice layer and the appearance of a net dipole moment, a graphene layer covering these ice films is p-type doped.53,59 Here, we report on ice melting induced by the application of an external pressure. We show that the ice crystals melt when subjected to high external pressures and refreeze when the pressure is lifted, a process known as regelation. For local pressures higher than 6 GPa, a solid to quasi-liquid transition takes place. The water molecules of the ice crystal become dynamic, and the layer loses its net dipole moment, indicative of disorder. The ice crystals start to melt initially at their edges, and the quasi-liquid layer expands toward their interior. The process is fully reversible: when the applied pressure is released, the water molecules immediately refreeze and re-form a polarized ice layer. Our experiments are of interest for water (flow) in biological and geological systems. They also expand on the complex phase diagram of confined ice between graphene and mica.

RESULTS AND DISCUSSION
Pressure-Induced Solid to Quasi-liquid Phase Transition. When the graphene/water/mica system is exposed to low relative humidity, ice crystals are formed at the graphene/mica interface, induced by the heat extracted from the system due to water evaporation into the environment.53 An example is shown in the AFM topographic image in Figure 1a, where the ice crystals (shown as bright areas) have a fractal shape. The surrounding brighter area is a double-layer water film (see Figure 1c); the height difference between the two levels in this image amounts to 0.36 ± 0.02 nm, a value very close to the interlayer distance of I_h ice (hexagonal ice). The structure of the ice crystal is shown schematically in Figure 1b. Besides the ice crystals and the water bilayer, small droplets of water are occasionally present on top of the water double layer, as shown, for instance, in Figure 2a. The simultaneously recorded lateral force microscopy (LFM) image displays a difference in roughness between graphene on top of the ice layer and the surrounding water double layer. The higher roughness of graphene on top of the ice fractal was attributed to the presence of potassium ions and ionic domains on the air-cleaved mica surface (we note that the topography is featureless).59
The same structure is also present in the double water layer, albeit less pronounced as a result of convolution by the second water layer. These images were obtained in contact mode AFM with a tip load of approximately 0.8 nN. Considering a p-doped diamond tip with a radius of curvature of about 5 nm, the pressure applied on the graphene/ice/mica system by the tip is approximately 4.5 GPa, calculated using the Hertz model.60−62 Therefore, by scanning the surface in contact mode and varying the tip load in a controlled way, we can obtain spatial information about the aggregation state of the confined water structures as a function of the applied pressure. We find that when the pressure exceeds a critical value (P_c), the ice/water edges become fuzzy and dynamic. These edges change from frame to frame even when the pressure is kept constant and higher than P_c. In addition, a quite faint but strongly persistent contrast appears that propagates from the edges of the ice crystal, visible in both topographic and LFM images (Figure 2b,c and their insets). The contrast is stronger in the LFM images. Under these conditions, the edges of this region are dynamic (see Figure 2h and compare it to Figure 2g, i.e., a zoom-in of Figure 2a) and change shape in every consecutive image, even when the pressure remains at a constant value. Furthermore, the area it occupies strongly depends on the applied pressure. As the pressure increases, this region propagates further toward the interior of the (dark) ice layer; this becomes clear upon comparing, for example, panels c and d of Figure 2, and the total area that it occupies increases further (see the movie in the Supporting Information for more details). The dynamic nature of this region suggests that the water molecules at this location are mobile. Based on this dynamic behavior, we will hereby refer to this region as a quasi-liquid water layer. Additional proof will be presented further below. When the pressure is reduced, the area of the quasi-liquid layer shrinks, starting first from the interior of the ice crystal (Figure 2d,e). The process is fully reversible, meaning that the molten area disappears completely when the pressure drops below a certain threshold (<6 GPa). In addition to the disappearance of the quasi-liquid layer, the edges between the ice and the double water layer become stable and smooth when the pressure is decreased; see the red arrows in Figure 2f. Moreover, the density of small water droplets found on top of the surrounding water layers has increased, suggesting mass transport (see Figure 2f and compare it to 2a). Note that after the pressure is lifted, the total ice area has increased, accompanied by a decrease of the double water layer area. The excess amount of water molecules forms a third water "layer" or droplets on top of the double layer (mass conservation). Regions that were not scanned with a high tip load remained unaltered (see Figure 2a,f, outside the white dashed square). We emphasize that the melting of the ice crystals is heterogeneous, as it only occurs locally in the region of high pressure just below the surface of the AFM tip. As the tip moves across the graphene surface, the water is expected to refreeze at the locations left by the tip. At the locations where the pressure is lifted, refreezing should occur with a finite speed, that is, slower than the AFM scanning speed (the refreezing rate is low compared to the scan speed for a single line, 0.5 s, but faster than the acquisition time of one image, 256 s).
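As an aside, the Hertz-model pressure estimate quoted above can be reproduced with a short calculation. The sketch below assumes an effective elastic modulus E* for the tip/graphene/ice/mica contact; this value is not quoted in the text, so it is an illustrative assumption only.

```python
import math

def hertz_pressures(force_N, tip_radius_m, e_star_Pa):
    """Hertzian sphere-on-flat contact: contact radius, mean and peak pressure."""
    a = (3 * force_N * tip_radius_m / (4 * e_star_Pa)) ** (1 / 3)  # contact radius
    p_mean = force_N / (math.pi * a ** 2)   # mean contact pressure
    p_max = 1.5 * p_mean                    # peak pressure at the contact centre
    return a, p_mean, p_max

# Values from the text: 0.8 nN load, ~5 nm diamond tip radius.
# E* = 100 GPa is an assumed effective modulus, not a quoted value.
a, p_mean, p_max = hertz_pressures(0.8e-9, 5e-9, 100e9)
print(f"a = {a*1e9:.2f} nm, mean p = {p_mean/1e9:.1f} GPa, peak p = {p_max/1e9:.1f} GPa")
```

With these assumed inputs, the peak pressure comes out at roughly 4 GPa, of the same order as the 4.5 GPa quoted in the text.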
Figure 3a shows topographic information on an area consisting of the quasi-liquid layer, the ice layer, and the double water layer. [Displaced caption of Figure 2: The applied pressures are approximately 8.5, 9.7, 10, and 8.5 GPa. Besides the ice layer (white arrow) and the double layer of water (black arrow), a third layer is present which grows with increasing pressure (blue arrow); this is the quasi-liquid water layer. (f) Same region as in (a), after several images were recorded with higher applied pressures within the white dashed box. The edges between the ice crystal and the double water layer become smooth after the pressure is lifted (red arrows). The fractal region increased, and a higher density of water droplets is found on top of the double water layer. The area outside the white dashed borders is unaffected, indicating that the changes are induced only by the pressure applied by the AFM tip. (g,h) Zoom-in topography images (80 × 80 nm2) of a boundary between the ice crystal and the double layer of water at 4.5 GPa (g) and of a boundary between the quasi-liquid layer and the double layer of water at 10 GPa (h). A clear difference is observed in the fuzziness of the boundary.]

Marked with the white dashed line in Figure 3a, the line profile provides a quantitative measurement of the depth of the fractal with respect to the double layer of water, i.e., 0.36 ± 0.02 nm, which is in good agreement with previous studies.34,52,53 A histogram of the line profile is shown in Figure 3c and reveals a third peak, which corresponds to the evolved quasi-liquid layer. This layer is approximately 70 ± 5 pm higher than the ice crystal. The increase in height is a result of the disordered water network, which is in direct contrast to the H-down network of the ice crystal (Figure 1b). The disorder results in several OH bonds that point away from the mica surface and increase the average thickness of the quasi-liquid layer. Supporting evidence that the evolved dynamic region is a quasi-liquid layer of water is obtained from LFM measurements (see insets in Figure 2b−g). Strikingly, the lateral deflection of the cantilever as measured by the LFM signal (which is proportional to tip−surface friction) increases by about 10% in the regions where graphene covers the dynamic water layer. This is rationalized by the fact that in these regions it is easier to deform the graphene cover vertically (owing to the mobile nature of the water molecules at these locations). These indentations then give rise to enhanced resistance when the tip is moved parallel to the surface. On the other hand, graphene in contact with ice can be less easily indented, which gives rise to lower friction forces, in line with the observations.63,64 We emphasize that the LFM images very clearly show the existence of melted ice and the extension of its area. These regions are also visible in topography images, but the contrast is rather weak, and the exact area of the melt is sometimes harder to detect due to strong contrast-enhancement operations.

Emerging Disorder in the Quasi-liquid Layer. In a previous study, we showed that the graphene cover can be doped by the underlying ice/water structures.59 That investigation made it possible to gain information about the structure of the ice crystal and the double water layer. The graphene on top of an ice crystal is p-doped, whereas the double water layer does not induce any significant charge doping of the graphene cover because of disorder.
The p-doping is the consequence of the crystalline structure of the ice, which has an H-down configuration with a net dipole moment,55,56,58 whereas the net dipole moment is absent in the water double layer. In essence, the ice surface is electronegative, and the graphene is doped due to charge transfer.65 This difference in charge can be measured using a conductive AFM.59 Figure 4a and its inset show a topography and a conductive AFM image of an ice crystal intercalated between graphene and mica, under 4.5 GPa of applied pressure (the pressure is small enough that it does not induce any changes in the ice crystal) at room temperature. No bias was applied between the conductive AFM tip and the substrate. Instead, only charges that are present on or near the surface can be detected by the AFM tip and measured in the current signal. A distinct correlation is found between the topography and the current image. The graphene layer on top of the ice layer displays a significant amount of current (yellow), whereas graphene above the double layer shows almost no current (blue). To bring out the correlation more clearly, the topography and the current images are overlaid in Figure 4b. Clearly, all the yellow parts (high current) are located within the borders of the ice fractal. When pressures larger than 6 GPa are applied on the ice crystals, a phase transition takes place and a quasi-liquid layer is formed (see Figure 4c: slightly brighter areas within the ice fractal). Exactly at the places where the quasi-liquid layer is formed, the current measured with the conductive AFM (C-AFM) vanishes. When the two images are overlaid, the change in charge density becomes even more prominent (as shown in Figure 4d). This can only be explained by a change in the structure of the underlying ice crystal. As mentioned earlier, the water network in the ice has an H-down configuration, which results in a net dipole moment (Figure 4f).55 The quasi-liquid layer, as a result of disorder, loses its net dipole moment, and therefore no current/charge is measured on the graphene cover (Figure 4g).56 In Figure 4e, the cross-correlation between the topography and the current image is shown. A distinct decrease of the correlation is observed as a function of the applied pressure. This decline in overlap is expected because, with increasing pressure, an increasing fraction of the ice layer is melted. As explained above, the melted regions do not contribute to the conduction. To summarize the above observations, when the pressure exerted on the confined ice exceeds a specific threshold, the ice/water edges become very dynamic and a dynamic layer appears at the interface. This layer is thicker than the ice layer by 70 ± 5 pm. This region increases in lateral size with increasing pressure. Owing to the dynamic nature of this layer, the observed disorder, and the high apparent mobility of the water molecules, we refer to it as a quasi-liquid water layer. We expect that this layer preserves some slight order stemming from the underlying mica due to stratification effects (see Figure 4g). We have thus shown that pressure variations can induce morphological changes in confined ice nanocrystals. The ice crystals melt when a high pressure is exerted at the interface by an AFM tip. When the pressure is lifted, the newly formed quasi-liquid layer refreezes.
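The overlap metric of Figure 4e can be illustrated with a normalized cross-correlation of two images. A minimal sketch follows; the arrays are random placeholders standing in for the topography and C-AFM current maps, not the measured data.

```python
import numpy as np

def image_correlation(img_a, img_b):
    """Pearson correlation of two equally sized images (flattened)."""
    a = (img_a - img_a.mean()) / img_a.std()
    b = (img_b - img_b.mean()) / img_b.std()
    return float((a * b).mean())

# Placeholder arrays standing in for topography and current maps; the second
# map is built to be partially correlated with the first.
rng = np.random.default_rng(0)
topo = rng.random((256, 256))
current = 0.7 * topo + 0.3 * rng.random((256, 256))
print(f"cross-correlation = {image_correlation(topo, current):.2f}")
```

In this picture, a fully frozen (polarized) crystal would yield a high coefficient, while progressive melting, which removes the current signal locally, drives the coefficient down, as observed.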
Our experiments provide the first-ever example of regelation fully decoupled from thermal effects, owing to graphene's large anisotropy in the thermal conductivity, which warrants very good isolation from the environment. Heat that might be induced by frictional forces is immediately transported away from the underlying water structures, owing to the high in-plane thermal conductivity. This leaves pressure as the sole parameter responsible for the observed phase transitions. The observed quasi-liquid layer shows similarities with the structure found by Li et al. for water on mica.56 The authors performed an ab initio molecular dynamics study of the structural and dynamic properties of water adlayers on the mica surface56 at different temperatures. They found that at room temperature the molecules that are bonded to the mica form an ice network, in which the water molecules bridging the K+ ions are slightly more weakly bonded than those bonded directly to the mica oxygen ions. When the system is brought to elevated temperatures, the structure starts to show melting behavior. Even though the hydrogen network collapses at these temperatures, the hydrogen bonds between the water and the supporting mica can remain. The bridging water molecules can easily rotate and diffuse, resulting in a liquid-like layer. Of course, in our system, the temperature remains constant and cannot account for the observed phase transition. We thus propose that the H2O molecules behave similarly when an external pressure is applied. When water is compressed, the O:H hydrogen bond shortens and stiffens; on the other hand, the O−H covalent bond elongates and softens via O−O repulsion.26 The elongation of the covalent bond and its energy loss lower the melting point. Once the pressure is reduced, the H:O−H bond fully recovers to its original state.25

Graphene Thickness Dependence. It is evident from the AFM images in Figure 2 that the quasi-liquid layer emerges at approximately 6 GPa, and its area increases in size when the external pressure is increased. The area of the quasi-liquid layer (A_QL) is measured for each frame and plotted as a function of the applied pressure in Figure 5a. The quasi-liquid layer area increases with increasing pressure until a maximum of 0.25 μm2 (∼95% of the total ice area) at a pressure of approximately 10 GPa. We note here that melting does not occur instantaneously everywhere in the image; these variations might originate from the non-homogeneous distribution of the potassium ions on the mica surface59 that could influence the bonding of the water network.56 When the pressure is decreased, the quasi-liquid area decreases until it completely vanishes. The molecules immediately refreeze and resume their positions in a polarized ice layer (see Figure 4a). The same behavior is observed when the ice crystals are covered with thicker graphene covers. However, the applied force needed to create the quasi-liquid layer increases with the graphene thickness. For example, in order to melt an ice crystal covered by bilayer graphene, a ∼25% larger force is needed compared to the monolayer graphene case (see Figure 5b). For three layers of graphene, forces larger than 10 nN are needed to form the quasi-liquid layer. We attribute this behavior to the increase of the effective tip−graphene contact area on the ice surface.
Thicker graphene cover sheets convolute the indentation by the tip more and lead to an increase of the effective contact area, due to their higher bending modulus66 compared to that of single-layer graphene. Therefore, higher forces are required in order to reach the pressure needed to melt the ice crystal (i.e., 6 GPa). The curves perfectly overlap with each other when the increase of the contact area due to the thicker graphene cover is compensated for (inset of Figure 5b). This reveals that the mechanism leading to the observed phase transition is purely pressure-driven. The extracted contact areas for bilayer and trilayer graphene have increased by about 2 and 3 times, respectively, compared to single-layer graphene. Our information is obtained from friction forces during scanning, and the possibility that related heat effects may interfere with the inherent properties of the considered system needs attention. For this purpose, we have conducted experiments with different tips (different radii of curvature and materials). The results of the most deviating measurements are shown in Figure 5a, obtained with a sharp diamond tip (radius of curvature <5 nm), and in Figure 6a, obtained with a blunt Si tip (radius of curvature of 20 nm), both from data gained at room temperature. When the different contact areas and the consequently larger forces required to melt the ice are accounted for, the same melting characteristics are observed, and the influence of friction-induced heating on the melting of the ice crystals is thus clearly excluded. The clearest effect is expected from the variation of the contact area: a larger contact area leads to a higher friction force,67 and therefore enhanced heat generation should be expected. Still, no differences are observed for the results obtained with tips with clearly different radii of curvature, and apparently the graphene cover sheet warrants sufficient thermal insulation due to its anisotropic thermal conductivity, discussed above. We can safely conclude that the melting of the ice is the result of the exerted pressure only.

Temperature Influence on the Melting Pressure. When the temperature of the substrate is increased, the applied pressure required to melt the ice crystals decreases (see Figure 6a). Because of the higher substrate temperature, the water molecules gain energy and therefore become dynamic even at lower pressures.56 As a result, less pressure is needed to melt the ice crystals. The magnitude depends strongly on the substrate temperature. For example, at 60 °C, an external pressure of only ∼2 GPa is needed in order to melt the ice crystals. Li and Zeng56 predicted that for a monolayer of ice on mica without a graphene cover, the interfacial hydrogen bonds, that is, the bonds between the mica and the water molecules, are broken at temperatures around 100 °C. At these temperatures, the ice layer loses its structure and its net dipole and acts as a liquid. These observations explain the coarser and smoother shaped fractals observed after heating at 100 °C for 1 h in a recent study.53 When the temperature is increased, the fractals undergo edge melting, and the water molecules at the edges rearrange, resulting in a smoother and coarser fractal. It is noted that the critical pressure for melting, P_M, is equal to the sum of the van der Waals adhesion pressure, P_W, and the critical exerted pressure, P_c.
The van der Waals adhesion pressure is calculated using P_W = E_W/d, where E_W is the adhesion energy per unit area and d is the distance between the graphene cover and the supporting mica.68,69 P_W is estimated to be approximately 150 MPa for E_W ≈ 0.075 J m−2,36,54 and is thus negligibly small. P_M as a function of temperature denotes the coexistence curve between the solid and the quasi-liquid phases. The functional shape of this curve is given by the Clausius−Clapeyron relation70

P_M = P_0 exp[(L/k)(1/T − 1/T_0)]     (1)

where L is the specific latent heat of fusion, k is Boltzmann's constant, and P_0 is the equilibrium pressure at some temperature T_0. We have plotted the ln(P_M) values (P_M in Pa) obtained from the data of Figure 6a as a function of the corresponding reciprocal temperatures in Figure 6b. From a first-order polynomial fit and by using eq 1, we have extracted the specific latent heat of fusion of water molecules from the quasi-liquid water into the solid ice and found it equal to 0.15 ± 0.04 eV per water molecule. We emphasize here that this value is independent of the uncertainty regarding the critical pressure of melting, as the ΔA(P,T) curves have been obtained with the same tip. This value is only two times larger than the bulk latent heat of fusion at 0 °C (i.e., 0.062 eV per water molecule). This correspondence is a strong confirmation of the suggested mechanism: the phase transition is clearly related to melting of the confined ice. The difference in the latent heat of fusion in two dimensions compared to the three-dimensional case is hard to explain and needs a specialized theoretical consideration, which is lacking at this moment. This difference is quite subtle in view of the fact that already in three dimensions the heat of fusion is about 1 order of magnitude smaller than the heat of vaporization. Interestingly, from our data we can extrapolate that at about 100 °C the confined ice layer undergoes melting at a pressure of ∼1 atm, in good agreement with the result of ref 53. We finally note that the exact value of P_M depends on the absolute size of the contact area. Possible margins have no consequence for the obtained heat of fusion of two-dimensional ice.

CONCLUSIONS
To conclude, the pressure-induced solid to quasi-liquid phase transition of confined ice has been explored in situ and in real time using scanning probe microscopies. Two-dimensional ice crystals trapped between graphene and mica melt and form a quasi-liquid layer of water when a critical pressure beyond 6 GPa (at room temperature) is exerted locally on the system. The H-down ice network loses its order, and the molecules become dynamic and mobile. The process is fully reversible; when the applied pressure is lifted, the water molecules immediately refreeze and resume a polarized H-down network. We were able to determine the heat of fusion of 2D ice at 0.15 ± 0.04 eV per water molecule. The protective graphene cover effectively transports the energy dissipated by the probing tip away from the ice crystals, such that the melting and refreezing processes are governed only by pressure. The graphene cover warrants powerful thermal protection from the environment. Our results are crucially important for understanding the phase behavior of confined water, and they provide an example of intrinsic regelation of 2D ice.
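As a numerical illustration of the Clausius−Clapeyron extraction described above, the following sketch fits ln(P_M) against 1/T to recover L. The coexistence points below are hypothetical placeholders chosen to be consistent with the quoted pressure range, not the measured data.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical (T, P_M) coexistence points spanning the reported 293-333 K range.
T = np.array([293.0, 303.0, 313.0, 323.0, 333.0])      # K
P_M = np.array([6.0e9, 5.0e9, 4.2e9, 3.5e9, 3.0e9])    # Pa

# ln(P_M) = ln(P_0) + (L/k)(1/T - 1/T_0): the slope of ln(P_M) vs 1/T is L/k.
slope, intercept = np.polyfit(1.0 / T, np.log(P_M), 1)
latent_heat = slope * k_B
print(f"latent heat of fusion ~ {latent_heat:.2f} eV per molecule")
```

With these placeholder points, the fitted slope corresponds to a latent heat of about 0.15 eV per molecule, matching the order of the reported value.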
EXPERIMENTAL SECTION
The graphene flakes were obtained by the micro-exfoliation process from freshly cleaved HOPG (ZYA grade, MikroMasch) and immediately deposited on a freshly cleaved mica surface (SPI, V1) at ambient conditions. The number of graphene layers was determined by optical microscopy with a DM2500H materials microscope (Leica, Germany) and tapping-mode atomic force microscopy (Agilent 5100 atomic force microscope).53,71 All the experiments were performed inside an environmental chamber in which the relative humidity (RH) can be controlled. The RH was measured using a humidity sensor (SENSIRION EK-H4 SHTXX, Humidity Sensors, Eval Kit, SENSIRION, Switzerland) with an accuracy of 1.8% between 10 and 90% RH, and was controlled by purging the environmental chamber with an adjustable N2 flow. The sample was heated using a Peltier element and a Lakeshore 332 temperature controller. Lateral force microscopy and conductive AFM imaging of the graphene−mica system were performed at room temperature and in contact mode using AD-E-0.5-SS tips (diamond tips, Adama Innovations) with a nominal spring constant of 0.3 N/m and a resonance frequency of 30 kHz, and PtSiCont tips (NanoSensors) with a nominal spring constant of 0.3 N/m and a resonance frequency of 15 kHz. In order to make electrical contact with the graphene flakes for the C-AFM measurements, the graphene flakes were mechanically connected with a bigger graphite flake acting as an electrode.59

ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsnano.7b07472. Movie of the topographic changes of confined water between graphene and mica when an external pressure is applied; when the pressure is increased, the quasi-liquid layer propagates from the edges of the ice crystal toward the interior; with decreasing pressure, the quasi-liquid layer shrinks until it completely disappears when the pressure drops below the critical melting pressure (AVI). Movie of a sequence of lateral force images, recorded simultaneously with the topography movie, of confined water between graphene and mica when an external pressure is applied (AVI).
TRAINING PRE-SERVICE TECHNOLOGY TEACHERS TO DEVELOP SCHOOLCHILDREN'S TECHNICAL LITERACY

Technical literacy is a component of the professional competence of the pre-service technology teacher. However, the course content of technical disciplines in the pedagogical universities of Ukraine is not consistent with the content knowledge subsequently used in the teaching practice of a technology teacher. There is also a need for general technical literacy among the students, yet it is developed only in its engineering design aspect. In this paper, it was shown that the basic concepts for the general technical literacy of pre-service technology teachers are the following technical phenomena: motion transmission, changes in the kinematic parameters of motion, and changes in the force parameters of motion. The natural-science foundations of machine drives were used as the basic topic-specific knowledge. It was hypothesized that the effectiveness of teaching technical literacy to children would increase if narratives about technical phenomena were included in the content of the "Utility machinery" course for pre-service technology teachers. The pedagogical experiment was performed at the Vinnytsia Mykhailo Kotsiubynskyi State Pedagogical University (Ukraine). It included ascertaining, formative, and control stages. At the ascertaining stage of the pedagogical experiment, the students' readiness level to study technical phenomena was determined. At the formative stage, the students' readiness to develop children's technical literacy was measured. At the control stage, the students' readiness level to develop technical literacy was estimated in the experimental and control groups. The theoretical value of the results lies in substantiating technical topic-specific content knowledge as necessary for pre-service technology teachers. The practical significance of the results lies in the implementation of narratives about technical phenomena in the learning practice of students of a pedagogical university.

Introduction
As part of the curricula of Ukrainian secondary general-education institutions, the branch "Technology" is aimed at developing children's understanding of the scientific foundations of production. In Ukraine, teachers are trained to teach this branch under the specialization "Secondary education (Handicraft and technologies)". In the core curriculum course, "Technologies", children's learning projects are organized, based on the principles of congruity with nature, congruity with culture, integration, consistency, and creativity (Steshenko, 2019). Learning technical knowledge is a didactic condition for entering a profession in the sphere of contemporary production (UNESCO, 2016). The concepts of technical education are integrity, humanization of knowledge, and sustainable development, while course content is formed on the basis of integrated and interdisciplinary approaches (UNESCO, 1989; UNESCO, 2016). The development of factual, procedural, conceptual and meta-cognitive knowledge in the relevant areas of technology (Barak, 2013) is a result of technical education. The content of technical knowledge unfolds in the process of children completing learning projects, which are grounded in the interconnection of technical phenomena and the structure and functions of technical systems (Mitcham, 1994). In practice, however, the content of the general technical training of pre-service technology teachers is selected from engineering theory and not from the content of the project-based learning process.
The contradiction between the need for general technical knowledge and the scarce technical knowledge of students at the pedagogical universities of Ukraine raises the problem of the practical use of technical knowledge in the branch "Technology". The drawback of the technical training of pre-service teachers in Ukraine is its orientation toward engineering design for the machine-building industry (Yurzhenko & Yurzhenko, 2017). However, machine-building is not relevant to the practical activities of technology teachers. The organization of children's technological activities is grounded in project-based learning. However, the objects of children's learning projects are usually household items and works of decorative and applied arts. For these objects, the engineering and technical training of pre-service technology teachers is redundant and becomes devalued. Globally, in order to solve the problem of developing the technical literacy of pre-service technology teachers, researchers suggest the following methodological approaches: cultural research (Kelley & Knowles, 2016) and the axiological approach (NAE & NRC, 2009). Hence, technical objects are viewed as cultural artifacts, and the technical worldview as the basic component of students' value orientations. The basic functions of schoolchildren's technical literacy are understanding and the practical use of technical knowledge (ITEA, 1996). Technical knowledge is acquired provided that the natural-science basis of technical phenomena and machinery operation is explained. To achieve this, traditional model-making and craft projects should be complemented with informational, research, managing, game, and creative projects. Different types of learning projects broaden the sphere of practical use of technical knowledge. For instance, the meanings of the natural-science foundations of technical knowledge are the objects of inquiry in informational and research learning projects. Thus, at the local level, the problem is solved by shifting the accents in informational, research, managing, creative, and game projects, which requires reorienting the technical training of pre-service technology teachers from engineering design toward the study of the technological machine. A generalized structural and functional machine diagram was used as a formative component of general technical literacy in pre-service technology teachers' training (Ivanchuk, 2018). The basic components of the technical knowledge of the pre-service technology teacher are mechanical energy and the transfer and transformation of energy. The basic technical phenomena involved in the machine drive are: motion transmission and torque, changes in the kinematic and force parameters of motion, and conversion of motion types. Therefore, pre-service technology teachers' understanding of the natural-science foundations of technical phenomena is the basis of their technical literacy.

Research Problem
In the study, the concept of practical knowledge (Stohlmann et al., 2012; Wang et al., 2011), the interdisciplinary didactic principle (Purzer et al., 2015), and the concept of practice-based learning (Novoa, 2018) are considered. Steshenko and Kilderov (2017) described the core of the technical competence of the pre-service technology teacher as rooted in activity-oriented and cultural-studies methodological approaches. Putnam and Borko (2000) associated the technical literacy of technology teachers with the ability to include technical knowledge in the technological education of children.
Thus, the specifics of teaching technical knowledge to students of pedagogical universities require the use of technical topic-specific content knowledge for solving the practical challenges of children's education. In defining the content of technical knowledge as a component of the professional competency of students of the pedagogical universities of Ukraine, it was assumed that project-based learning is basic to the technological education of children; hence, engineering design would be the main technical activity of a technology teacher. However, the content of an engineer's design work differs radically from the engineering design carried out by a technology teacher. There is a contradiction between the machine-building engineering-design training of pre-service technology teachers and their practical design activities in crafts and modeling; as a result, this knowledge finds no practical application. This problem could be solved in the context of developing the scientific mindset of students. For example, Serheev et al. (2017) emphasized the worldview function of technical knowledge as general guidance in the technological sphere. In the technological education of children, this worldview function is implemented in learning projects (Serheev et al., 2018). Yurzhenko (2019) proposed solving the problem of the practical application of technical knowledge on the basis of the paradigm of the use and transmission of mechanical power; a mere illustration of the use and transmission of mechanical power in engineering is thus not sufficient. Dannyk (2012) proposed using a generalized description of the principles of operation of technical facilities (the natural-science knowledge and the engineering implementation of these principles). Varnavskyh (2006) suggested developing content for students' technical training rooted in a generalized structural and functional machine diagram. All the mentioned approaches produce new problems to be solved. The idea of using technical worldview knowledge to develop the scientific mindset of students of pedagogical universities raises the problem of securing the integrity of such a mindset. The idea of illustrating the use and transmission of mechanical power in engineering requires substantiating the choice of a certain engineering object in the technological education of children. The idea of using a generalized description of the principles of operation of technical objects is closer to the professional activity of an engineer than to that of a technology teacher. The idea of using a generalized structural diagram of a machine addresses the drawbacks of the paradigm of mechanical power use and transmission; however, it requires specification. Such specification is possible if drive transmission mechanisms are chosen as the object of study. Accordingly, the technical phenomena engaged in drive operation become the basic concepts of the process of developing technical literacy in children, which corresponds to Yurzhenko's (2019) idea of unfolding the meanings of technical phenomena. The meanings of the technical phenomena engaged in machine drives enable children to understand the practical use of natural-science knowledge. Considering schoolchildren's gaps in fundamental science knowledge and the need to actualize basic natural-science knowledge, narratives about technical phenomena were chosen as an effective didactic means.
Research Focus

The study centered on improving the training of pre-service technology teachers for developing children's technical literacy in the process of learning the "Utility machinery" course.

Research Aim

The aim of the study was to critically assess the results of including narratives in the training of pre-service technology teachers to develop schoolchildren's technical literacy. The objectives of the research were: 1) to develop criteria and indicators for assessing the technical literacy of schoolchildren; 2) to develop narratives about the natural-science essence of technical phenomena and to experimentally prove the effectiveness of such narratives in the real-life learning process.

General Background

The pedagogical research included ascertaining, formative, and control stages. The following technical phenomena were used as the basic concepts of the narratives: motion transmission, changes in the kinematic parameters of motion, and changes in the force parameters of motion. At the ascertaining stage of the pedagogical experiment, the students' readiness level to study technical phenomena was established. At the formative stage, the students' readiness level to develop children's technical literacy was determined. At the control stage, the students' readiness levels to develop children's technical literacy were compared between experimental and control groups. An eight-year (2012-2020) natural teaching-and-learning experiment was carried out in the "Utility machinery" course for BA undergraduates at the Vinnytsia Mykhailo Kotsiubynskyi State Pedagogical University (Ukraine).

Sample

In total, 636 students participated in the pedagogical experiment (245 at the ascertaining stage and 391 at the formative stage). Over the course of eight years, four series of pedagogical experiments were conducted (two years each), and the common data set was processed with the methods of mathematical statistics in 2020. The following methods of equalizing the conditions of the experiment were used: equal composition of participants (pre-service technology teachers); a permanent lecturer (Associate Professor A. V. Ivanchuk); the same content of technical knowledge (drive operation); equally complicated technical problems; and equal technical means of learning.

Instrument and Procedures

At the ascertaining stage, a diagnostic test determining the students' readiness to study technical phenomena was administered. The questions of the test were assessed for their diagnostic value, which did not exceed the critical value of 1.5 (Mayboroda et al., 2015), and the content of the questions was validated by experts. The following criteria were used: value, basic features, and functional. The indicator of the value criterion, the index of the need to study technical phenomena (C_n), was calculated by the formula (Kyveryalg, 1980; Volovyk, 1969), where n_i is the number of correct answers and N is the total number of students taking the test. The indicator of the basic-features criterion, the index of the need to define the basic features of technical phenomena (C_b), was calculated by the same formula (Kyveryalg, 1980; Volovyk, 1969).
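The printed formula is not reproduced in the source, but the variable definitions (n_i, number of correct answers; N, number of test takers) suggest a proportion-type completeness index. A minimal sketch in that spirit; the function names, the toy data, and the aggregation of the readiness index C_gi as a plain mean are illustrative assumptions, not the authors' exact formulas:

```python
from statistics import mean

def completeness_index(correct_counts, n_questions):
    """Proportion-type index: share of correct answers across students.

    correct_counts: one entry per student (n_i, correct answers in the
    section); n_questions: questions in the section. Assumed form
    C = sum(n_i) / (n_questions * N); the exact Kyveryalg/Volovyk
    formula is not given in the text.
    """
    N = len(correct_counts)
    return sum(correct_counts) / (n_questions * N)

# Indices for the three criteria (value, basic features, functional),
# each diagnosed with a 10-question section; counts are toy data.
c_n = completeness_index([7, 9, 5, 8], n_questions=10)
c_b = completeness_index([6, 8, 7, 7], n_questions=10)
c_f = completeness_index([5, 6, 8, 9], n_questions=10)

# Readiness is evaluated by the mean value of the readiness index C_gi.
c_gi = mean([c_n, c_b, c_f])
print(f"C_n={c_n:.2f}, C_b={c_b:.2f}, C_f={c_f:.2f}, C_gi={c_gi:.2f}")
```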
The questions of the diagnostic test were divided into the following sections: the use of natural-science knowledge (10 questions); kinematic and power parameters of mechanical transmissions (10 questions); and features of the transformation of the kinematic and power parameters of mechanical transmissions (10 questions illustrated with images of mechanical transmissions and a description of the phenomenon of angular velocity). In the first section, the students identified the technical phenomena. In the next section, a combination of mechanical transmissions and descriptions of technical phenomena was used, and the students identified the signs of increasing speed and decreasing torque. In the third section, a rough torque diagram was used, and the students identified the indicators of the increase of torque and the decrease of angular velocities. The students' readiness to study technical phenomena was evaluated by the mean value of the readiness index (C_gi).

At the formative stage, the narratives about technical phenomena were used as the independent variable, and the students' readiness level to develop schoolchildren's technical literacy was the dependent variable. The following working hypothesis was tested: "The students' level of readiness to form schoolchildren's technical literacy will increase if the educational material about technical phenomena in machine drives is presented as narratives". Students were not informed about their participation in the experiment, and the course content was consistent with the curriculum of the "Utility machinery" course. The levels of students' knowledge and skills were assessed with methods familiar to them. In the control groups, methodological guidance for the practical work was used; in the experimental groups, narratives about technical phenomena were used, as well as diagnostic tests. The diagnostic value of the tests did not exceed the critical value of 1.5 (Mayboroda et al., 2015), and the content was validated by experts. The students' readiness to form children's technical literacy was determined according to cognitive and processual components. For evaluating the level of development of the cognitive component, the criterion of completeness of cognitive skills (C_gc) was used, with the following indicators: the index of the skill to perceive the features of technical phenomena (C_a), the index of the skill to identify natural-science knowledge in a description (C_d), and the index of the skill to analyze the mechanics of drives (C_ch). Diagnostic tasks for (C_a) covered two areas: figurative thinking (15 questions) and technical thinking (15 questions). Images of mechanical transmissions and their combinations were used: students determined the direction of motion transmission (5 questions), the diagram of the increase of torque (5 questions), and the diagram of the decrease of torque (5 questions). In the technical-thinking tasks, formulas of the overall gear ratio were used: for power transmissions (5 questions), for speed transmissions (5 questions), and for combined transmissions (5 questions). To diagnose (C_d), functional descriptions of a lathe and of drives were used (20 questions); students identified natural-science knowledge in the functional descriptions. To diagnose (C_ch), diagrams of single-reduction gear units (5 questions), two-stage speed reducers (5 questions), and combined reducers (5 questions) were used; students determined the gear ratio and torque. For the cognitive component, the mean value of the (C_gc) index was calculated.
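The gear-ratio tasks above rest on standard drive arithmetic: the overall ratio of a multi-stage transmission is the product of the stage ratios, output speed divides by the ratio, and output torque multiplies by it. A minimal sketch of that arithmetic; the stage ratios and the efficiency value are illustrative assumptions, not figures from the test:

```python
from math import prod

def overall_ratio(stage_ratios):
    """Overall gear ratio of a multi-stage drive: product of the stages."""
    return prod(stage_ratios)

def output_speed_rpm(input_rpm, ratio):
    return input_rpm / ratio  # speed decreases by the ratio in a reducer

def output_torque_nm(input_nm, ratio, efficiency=0.95):
    return input_nm * ratio * efficiency  # torque increases by the ratio

i = overall_ratio([3.0, 2.5])  # two-stage reducer, illustrative stages
print(f"i = {i:.1f}")
print(f"n_out = {output_speed_rpm(1450, i):.0f} rpm")
print(f"T_out = {output_torque_nm(12, i):.1f} N*m")
```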
For evaluating the level of development of the processual component, the criterion of completeness of processual skills (C_cp) was used, with the following indicators: the index of the skill to analyze morphological and functional descriptions of drives (C_mf), the index of the skill to define the aim of functional descriptions (C_g), the index of the skill to determine the basic concepts of functional descriptions (C_i), and the index of the skill to prove the main point of the functional descriptions of mechanical transmissions (C_c). To diagnose (C_mf), morphological descriptions of drives (15 questions) and functional descriptions of drives (15 questions) were used; students determined the elements of drives and their functions. To diagnose (C_g), descriptions of a lathe and of drives were used (20 questions); students determined the aim of the functional descriptions. To diagnose (C_i), functional descriptions of mechanical transmissions were used; students determined the basic concepts in the descriptions. To diagnose (C_c), descriptions of mechanical transmissions were used; students established whether the main point of a description was well grounded.

At the control stage, a final examination was carried out. The diagnostic value of its tasks was evaluated according to the formula (Kyveryalg, 1980), where K is the overall number of tasks, N is the number of students in the "strong" group, V_N is the number of mistakes made by the "weak" group, and V_T is the number of mistakes made by the "strong" group. Preliminarily, a preparatory test with 52 students of the experimental groups was carried out. Its tasks were grouped into sections: motion transmission (6 questions), changes in the kinematic parameters of motion (6 questions), and changes in the force parameters of motion (6 questions). First, students determined the directions of motion transmission; in the next section, they determined gear ratios; in the third section, they calculated torques according to diagrams of power and speed transmissions. The results of the preliminary test were ranked and the median established. According to this median, "strong" (28 persons) and "weak" (24 persons) students were selected; they participated in the final examination. Its tasks were grouped into the following sections: geometric attributes of technical phenomena (6 questions), kinematic chain (6 questions), and power circuit (6 questions). At the initial stage, students determined the direction of motion transmission, the geometric attributes of changes in speed and torque, and force levels. In the next section, students determined gear ratios and basic geometric attributes. In the third section, students calculated torques. The diagnostic value of each task was within the normative range between 16% and 84% (Kyveryalg, 1980).

Data Analysis

The overall database of the pedagogical experiment covered 1/17 of the total number of pre-service technology teachers in Ukraine (Vuzy Ukrayiny. Dovidnyk, 2021). In the analysis, the indices of completeness of the cognitive and processual skills of the students to develop children's technical literacy were used. The obtained results were checked for randomness by means of the χ² criterion of homogeneity. Two statistical hypotheses, the null and the alternative, were compared: according to the null hypothesis, the obtained results contain no differences; according to the alternative hypothesis, they do, with the probability of error not exceeding 5% (significance level .05).
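As a sketch of the homogeneity check described above, the following compares the level distributions of the experimental and control groups with a χ² test; a five-category split is assumed here only because it reproduces the reported df = 4, and the counts are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Illustrative counts: students per readiness category (5 categories,
# giving df = (2 - 1) * (5 - 1) = 4) for the two groups.
observed = np.array([
    [12, 25, 48, 31, 9],   # experimental group
    [30, 41, 35, 14, 5],   # control group
])

stat, p_value, df, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, df)  # 9.49 for df = 4, alpha = .05

print(f"chi2 = {stat:.2f}, df = {df}, critical = {critical:.2f}")
# The null hypothesis (no difference between groups) is rejected
# when the experimental statistic exceeds the critical value.
print("reject H0" if stat > critical else "fail to reject H0")
```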
Research Results

The results of the final test are presented in Table 1 (Table 1. The Results of the Final Test), and the results of the ascertaining stage are shown in Figure 1 (Figure 1. The Results of the Ascertaining Stage of the Pedagogical Experiment). At the ascertaining stage, the third-year students were allocated to the control groups, while the fourth-year students were allocated to the experimental groups. The results for the cognitive component of the students' readiness to form schoolchildren's technical literacy at the formative stage are presented in Table 2 (Table 2. The Results of the Formative Stage of the Pedagogical Experiment). Comparison of the distributions of (C_a) and (C_d) in the experimental and control groups shows that the impact of narratives on the students' ability to perceive the attributes of technical phenomena was not confirmed; however, their ability to identify natural-science knowledge was significantly influenced. The distribution of (C_ch) characterizes the limited potential of the narratives for learning the mechanics of drives. The results of the formative experiment for the cognitive component are illustrated in Figure 2 (Figure 2. The Results of the Formative Stage of the Pedagogical Experiment, %). The results for the processual component of the students' readiness at the formative stage are presented in Table 3 (Table 3. The Results of the Formative Stage of the Pedagogical Experiment). Differences in the (C_mf) index characterize the limited potential of the narratives for forming figurative and technical thinking in students. Differences in the (C_g), (C_i), and (C_c) indices give grounds to assert that the use of narratives in the educational process contributes to a higher level of analysis of the functional descriptions of technical objects by students. The results of the formative experiment for the processual component are illustrated in Figure 3. The results of the final test in the experimental and control groups are presented in Table 4, and the calculation of the experimental χ² criterion is presented in Table 5. The results were verified for randomness: χ²_experimental = 45.46, while the critical value χ²_critical for α = 0.05 and df = 4 is 9.49 (Kyveryalg, 1980), i.e., χ²_experimental > χ²_critical. Hence, the null hypothesis is rejected, and the verification of statistical significance confirmed the validity of the results.

Discussion

The readiness of pre-service technology teachers to form schoolchildren's technical literacy was evaluated according to its cognitive and processual components, which is consistent with the idea of the practical use of technical knowledge (Utesch, 2019) and dovetails with Melezinek's (1999) recommendations regarding the practical application of technical knowledge by students. For each component of the studied phenomenon, the results show a decline in the number of students with a low ability to form schoolchildren's technical literacy, while the number of students with average and high levels of this ability increased. For the cognitive component, the lowest level of the students' ability to form schoolchildren's technical literacy in the experimental groups was identified for the index of the skill to analyze the mechanics of drives. This reveals the lack of connection between the formation of students' spatial thinking and graphic knowledge and the narratives about the mechanics of drives.
For this component, the highest level of the students' skill to form schoolchildren's technical literacy was detected for the index of the skill to identify natural-science knowledge in the functional descriptions of drives. This confirms that unfolding the content of natural-science knowledge in the plots of the narratives about the functional descriptions of machines facilitates students' understanding of technical knowledge. Students worked on practical tasks in the "Utility machinery" course, which were presented in the form of narratives. For the processual component, the lowest level of the students' ability to form schoolchildren's technical literacy in the experimental groups was identified for the index of the skill to analyze morphological and functional descriptions of drives. The reason is the multidisciplinary content of the analysis of morphological and functional descriptions of drives, with the core knowledge of natural-science and technical courses underlying the algorithm of actions. For this component, the highest level of the students' skill was detected for the index of the skill to identify the basic concepts of the functional descriptions, which demonstrates that the attributes of the basic concepts in the functional descriptions can be effectively formed with narratives. In the control groups, for the cognitive component, the lowest level of the students' skill to form schoolchildren's technical literacy was detected for the index of the skill to identify natural-science knowledge in the functional descriptions of drives; this result illustrates the need for such narratives. For this component, the highest level was detected for the index of the skill to identify the attributes of technical phenomena, which confirms the effectiveness of the traditional technique for developing certain components of students' readiness. For the processual component, the lowest level was identified for the index of the skill to determine the aim of the functional descriptions; this result also demonstrates the need for the narratives. For this component, the highest level was detected for the index of the skill to identify the basic concepts of the functional descriptions. This result, contrary to expectations, matches the result in the experimental groups and again confirms the effectiveness of the traditional technique for developing certain components of students' readiness. The experiment proved the expedience of including technical phenomena, as the basic concepts of schoolchildren's technical literacy, in the course content (Ivanchuk, 2018; Varnavskyh, 2006). The results of the research comply with the conclusion about the positive impact of narratives on students' learning of technical knowledge (Afanasyev, 2013). The limited potential of narratives about technical phenomena for forming consistent technical thinking in students, as Valisova (2020) and Franus (2003) previously suggested, was also revealed.
Finally, the results of the pedagogical experiment show a positive impact of narratives on students' perception and learning of technical knowledge, which substantiates the recommendations to widen access to knowledge as an aspect of the overall modernization of the content of technical education (Levine & Marcus, 2010; Melezinek, 1999; Ruutmann & Kipper, 2016).

Conclusions and Implications

It was established that explaining the meaning of natural-science concepts in descriptions of technical phenomena increases the level of professional competence of pre-service technology teachers. The criteria of the cognitive and processual readiness of students to develop schoolchildren's technical literacy were determined. It was confirmed that the indices of the cognitive and processual criteria characterize the students' level of readiness to form schoolchildren's technical literacy, which is based on the conceptual type of technical thinking. A positive, statistically significant change in the level of formation of the components of the readiness of pre-service technology teachers to form schoolchildren's technical literacy was established. The main result of the research lies in revealing the content of technical knowledge for students of pedagogical universities; the method of narratives was also tested. The shift from the traditional formation of the engineering-design type of technical thinking to the formation of the conceptual type of technical thinking creates conditions for the practical use of technical knowledge. Consequently, schools would be sufficiently staffed and able to systematically form children's technical literacy. Well-developed technical literacy would allow high-school graduates to make an independent decision regarding their further professional education in the relevant sphere of contemporary production; for pre-service workers of the service industry, it would be a baseline for understanding the mechanics of machines. Further studies should focus on the integration of the narrative method and problem-based learning in the development of schoolchildren's technical literacy. Investigating the use of narratives in forming the figurative component of technical thinking and developing a system of technical training tasks would also be relevant.

Declaration of Interest

The authors declare no competing interests.
The interaction impact of compost and biostimulants on growth, yield and oil content of black cumin (Nigella sativa L.) plants

Abstract

This study was conducted during the 2020/2021 and 2021/2022 seasons to investigate the effect of the interaction between compost and biostimulants on the growth measurements, seed yield, and oil production of black cumin (Nigella sativa L.). Four compost levels (0, 6, 12, and 18 tons/ha) were used, while the biostimulants were ascorbic acid (AS) at 100 ppm, yeast extract (YE) at 8 g/L, and AS at 100 ppm + YE at 8 g/L; control plants received no foliar biostimulant. The results showed that compost at all levels, as well as foliar spraying with the tested biostimulants, significantly increased the growth parameters, number of capsules, seed production, and fixed and volatile oil production, and that plants treated with compost at the high level (18 tons/ha) recorded the highest values for the traits under study. The combined foliar treatment of AS at 100 ppm + YE at 8 g/L was the most effective in increasing the studied variables. All interactions were significant, and most combined treatments significantly increased all studied traits; the application of compost at the high rate (18 t/ha) together with AS at 100 ppm + YE at 8 g/L was the best treatment. GC-MS analysis of the volatile and fixed oils showed that their main constituents were also affected by the organic fertilizer and biostimulant applications: the combination of the high compost rate (18 t/ha) plus AS at 100 ppm + YE at 8 g/L improved the main oil components compared with untreated plants.

1. Introduction

Medicinal plants have been used since time immemorial in foods, spices, and treating diseases. Black cumin (Nigella sativa L.) is a winter annual flowering species of the Ranunculaceae family. This spice seed crop is native to the Mediterranean region and grows widely throughout the Middle East, Europe, and Asia; the plant is cultivated all over the world (Aggarwal et al., 2008; Bayram et al., 2010; Mohamed et al., 2017). The main producers of Nigella sativa are India, Sri Lanka, Bangladesh, Pakistan, Afghanistan, Egypt, Iraq, Iran, Turkey, Syria, and Ethiopia. Ripe black cumin seeds contain about 7% moisture, 4.34% ash, 23% protein, 0.39% fat, 4.99% starch, and 5.44% crude fiber. The seeds are rich in fats, fiber, minerals such as iron, sodium, copper, zinc, phosphorus, and calcium, and vitamins such as ascorbic acid, thiamin, niacin, pyridoxine, and folic acid (Takruri and Dameh, 1998; Mozaffari et al., 2000; Sultana et al., 2018). Moreover, Nigella sativa seeds contain 30-35% fixed oil and 0.5-1.5% volatile oil, which have many uses in the pharmaceutical and food industries. Black cumin seeds also contain protein, alkaloids (nigericin and nigellone), saponins, and essential oil (Ashraf et al., 2005; Ozel et al., 2009).

Organic fertilizers have recently gained popularity as a useful way for sustainable agriculture to provide the nutritional needs of crops. Although organic fertilizers contain trace amounts of nutrients, they enhance soil fertility and production because they contain growth-promoting factors, including enzymes and hormones. Applying compost to the soil improves its water-holding capacity, which enhances crop access to nutrients. Root-zone conditions (structure, moisture, etc.) are also greatly improved by compost, which enhances plant growth by increasing the number of microorganisms (Puma, 2001; Shaheen et al., 2007). The nutrients in compost are released gradually and are retained in the soil for a longer period, ensuring residual benefits for subsequent crops (Ginting et al., 2003). In addition, compost is available locally in considerable quantities and is a less expensive way to increase soil fertility. Adding organic manures increased crop yield, especially in sandy soil, which is deficient in organic matter, has undesirable physical and biological characteristics, and shows higher N leaching (Awosika et al., 2014). According to Norman (2004), compost can represent organic manure from both plant and animal sources. Plant sources include green manures, seaweeds, cover plants, crop residues, nitrogen fixed by microorganisms, mulch, and compost; animal sources include the dung of sheep, goats, cattle, horses, and poultry. Both major and minor nutrients can be found in compost from vegetable sources. We used compost and poultry manure in this investigation. Ultimately, environmentally friendly agricultural practices for sustainable food production use organic and bio-fertilizers (Islam et al., 2017).

Ascorbic acid (vitamin C) is a proven antioxidant and biostimulant that can protect plants from the damage caused by aerobic metabolism and a range of pollutants, and it acts as an enzyme cofactor. Further, ascorbic acid is highly effective in plant resistance to many plant pathogens such as fungi, bacteria, nematodes, and parasitic plants (Oertli, 1987; Mahdy, 1994). Ascorbic acid also has many other important roles, such as antioxidant defense and photoprotection, as well as the regulation of growth and photosynthesis (Blokhina et al., 2003). Yeast extract (Saccharomyces cerevisiae) has long been used as a biofertilizer and biostimulant in the production of horticultural crops, owing to the positive biological and physiological roles of yeast described in studies such as Nagodawithana (1991), which indicated that yeast extract is a good source of many nutrients, B vitamins, proteins, carbohydrates, enzymes, nucleic acids, and plant hormones, making it suitable for foliar application. Yeast extract plays an important role in providing many nutrients to plants (Khalil and Ismael, 2010). This study was planned to examine the response of the growth, seed yield, and oil production of black cumin plants (Nigella sativa L.) to compost and some biostimulants (ascorbic acid and yeast extract).

2. Materials and Methods

2.1. Description of the study site

This experiment was conducted at the farm of the Muhammadiyah project, Ma'an, Jordan, during the two consecutive 2020/2021 and 2021/2022 seasons, to determine the effect of compost (control, 6, 12, and 18 t/ha), foliar application of biostimulants (ascorbic acid (AS) at 100 ppm, yeast extract (YE) at 8 g/L, and AS at 100 ppm + YE at 8 g/L), and the interaction between the two factors on the growth characteristics, seed yield, and fixed and volatile oil (percentage and yield) of black cumin plants (Nigella sativa L.).

2.2. Experimental design and tested treatments

A split-plot design with three replications was used; compost (0, 6, 12, and 18 tons/ha) was assigned to the main plots. Table 1 presents the physical and chemical analysis of the soil used, while Table 2 presents the physical and chemical properties of the compost used in this study. The biostimulants (ascorbic acid (AS) at 100 ppm, yeast extract (YE) at 8 g/L, and AS at 100 ppm + YE at 8 g/L) were assigned to the subplots. Black cumin seeds were planted on November 10th in both seasons. The experimental plot was 3.0 × 2.5 m and contained 4 rows, 60 cm apart; the distance between hills was 25 cm, and the plants were thinned 35 days later to two plants per hill. Compost was added at its three rates before sowing, during soil preparation. The plants, except the control, were foliar sprayed with the two tested biostimulants, either separately or in combination, three times: 60, 75, and 90 days after sowing, in both seasons. The plants were sprayed to runoff. All agricultural practices were performed as usual. At the end of the experiment in May, the following measurements were taken: plant height, number of branches per plant, shoot fresh and dry weight (g/plant), number of capsules per plant, seed yield (g/plant) and seed yield (kg/ha); fixed oil percentage, fixed oil yield (mL/plant), and fixed oil yield (L/ha); volatile oil percentage, volatile oil yield (mL/plant), and volatile oil yield (L/ha); and the volatile and fixed oil components.

2.3. Time and method of treatments

N, P, and K fertilizers (control) were added to the soil at half of the recommended dose as follows: ammonium nitrate (33.5% N) at 357 kg/ha, calcium superphosphate (15.5% P2O5) at 476 kg/ha, and potassium sulfate (48% K2O) at 89.25 kg/ha. The phosphorus fertilizer was added during soil preparation. The nitrogen and potassium fertilizers were divided into two equal doses: the first was applied 30 days after sowing and the second 30 days after the first.

2.4. Volatile and fixed oil percentage

The percentage of volatile oil in the air-dried seeds was determined according to the method of the British Pharmacopoeia (1963). The fixed oil percentage was determined with a Soxhlet apparatus using petroleum ether (b.p. 40-60 °C) as solvent, according to the Association of Official Agricultural Chemists (AOAC, 1980).

2.5. Statistical analysis

All acquired data were recorded in tables and statistically analyzed with MSTAT-C (1986), using the L.S.D. test at 5% for differences between treatments according to Mead et al. (1993). E.C. (mmhos/cm) denotes electrical conductivity, and the C/N ratio denotes the ratio of carbon to nitrogen.
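As an illustration of the mean-separation step above, here is a minimal sketch of Fisher's L.S.D. at the 5% level. The MSE, error degrees of freedom, and replicate number are illustrative placeholders (not values from the paper's ANOVA), and a full split-plot analysis would use separate error terms for main-plot (compost) and subplot (biostimulant) comparisons:

```python
from math import sqrt
from scipy.stats import t

def lsd_5pct(mse, df_error, n_reps):
    """Fisher's least significant difference at alpha = .05:
    LSD = t(.975, df_error) * sqrt(2 * MSE / r)."""
    return t.ppf(0.975, df_error) * sqrt(2.0 * mse / n_reps)

# Illustrative values only:
mse, df_error, r = 1.84, 24, 3
lsd = lsd_5pct(mse, df_error, r)

# Two treatment means differ significantly if their absolute
# difference exceeds the LSD value.
mean_control, mean_com3 = 41.2, 47.9
print(f"LSD(5%) = {lsd:.2f}; significant: {abs(mean_com3 - mean_control) > lsd}")
```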
3. Results and Discussion

3.1. Growth measurements

The data recorded in Table 3 indicate a significant increase in the plant height, number of main branches per plant, and fresh and dry weight (g) of black cumin (Nigella sativa L.) plants fertilized with compost at all levels in both seasons. These parameters increased gradually with increasing compost levels compared with untreated plants. The high compost level (18 t/ha) gave the best vegetative growth, with increases of 65.79, 68.15, 44.74, 42.86, 47.38, 56.62, 43.40, and 48.17% over the control across the two seasons, respectively. The efficiency of organic manures in improving growth parameters was shown by Sanjeeva et al. (2018), Ali and Hassan (2014), Badran et al. (2012), Sayed and Hossein (2011), and Hassan et al. (2009) on Nigella sativa L. plants, and by Al-Fraihat et al. (2011).

Regarding the foliar sprays of AS and YE, the data in Table 3 show that foliar application of the two materials to black cumin plants, whether alone or in combination, led to a significant improvement in all growth traits in both experimental seasons compared with non-sprayed plants. Spray application of AS at 100 ppm + YE at 8 g/L was more efficient in improving plant height, branch number per plant, and the fresh and dry weights of plants (g) than the control and the other treatments, improving these traits by 28.60, 27.66, 14.94, 13.54, 8.44, 8.16, 16.18, and 17.22% over the control for the two experimental seasons, respectively. These results on the effect of ascorbic acid and yeast extract on growth parameters are in line with the findings of Ali (2001) on Calendula officinalis and Youssef and Talaat (2003). Regarding the interaction between the compost and biostimulant treatments, the effect was highly significant for all studied growth traits in both seasons. The data indicate that the best results were obtained with the high compost level plus AS at 100 ppm + YE at 8 g/L, compared with the other treatments, in both experimental seasons (Table 3).

3.2. Yield measurements

The data in Table 4 show that the number of capsules per plant and the seed yield per plant (g) and per hectare (tons) of Nigella sativa L. plants were significantly affected by all compost levels during the two study seasons: all levels significantly increased the yield measurements. The highest numbers of capsules per plant and total seed yields per plant (g) and per hectare (tons) were obtained when black cumin plants were supplied with compost at the high rate (18 tons/ha), with increases of 29.98, 25.08, 45.51, 45.40, 45.36, and 45.23% over the control in the two seasons, respectively. The effectiveness of compost in increasing yield measurements was reported by Sanjeeva et al. (2018), Ali and Hassan (2014), Badran et al. (2012), Sayed and Hossein (2011), and Hassan et al. (2009) on Nigella sativa L. plants, and by Hassan et al. (2015).

Concerning spraying with the biostimulants, the data in Table 4 show that their influence on the yield parameters of black cumin was significant in both seasons. The recorded data show that foliar spraying with ascorbic acid and yeast extract together (100 ppm AS + 8 g/L YE) was better at increasing the number of capsules per plant, seed yield (g/plant), and seed yield (tons/ha) than the other treatments in both seasons; numerically, this treatment increased these measurements by 11.37, 10.36, 18.64, 19.78, 17.84, and 19.23% over the unsprayed plants for the two experimental seasons, respectively. The improvement of yield parameters due to ascorbic acid treatment was explored by Khalil et al. (2010) on sweet basil and by Ahmad Al-Fraihat et al. (2023) on rosemary.
Concerning the use of yeast extract, similar results were reported by Salman (2006), El-Keasy et al. (2011), Abd El-Salam Nora (2014), and Abo kutta (2016) on fennel, Abdou and Badr (2022) on caraway plants, and Mohamed et al. (2022) on basil plants. The yield increases due to the interaction effect between compost and biostimulant treatments were significant for all black cumin yield measurements in both seasons. The most effective treatment was obtained by treating the plants with the high rate of compost (18 tons/ha) plus ascorbic acid (AS) at 100 ppm and yeast extract (YE) at 8 g/L, as listed in Table 4.

3.3. Fixed oil production

The data in Table 5 show that the fixed oil production (percentage, yield per plant (mL), and yield per hectare (L)) of Nigella sativa L. plants was greatly affected by the compost treatments in both experimental seasons. Fertilizing the plants with the high compost rate (18 tons/ha) gave the best values, with increases of 18.39 & 19.27, 71.91 & 73.23, and 71.67 & 73.21% over the untreated plants in the two seasons, respectively. This role of organic fertilizer agrees with the results obtained by Ali and Hassan (2014) on black cumin and Hassan et al. (2015) on rosemary.

Regarding the ascorbic acid (AS) and yeast extract (YE) treatments, the results in Table 5 show that foliar spraying with them, either alone or together, significantly increased the fixed oil percentage and the fixed oil yield (mL/plant and L/ha) in both experimental seasons compared with unsprayed plants. Spraying the plants with 100 ppm AS + 8 g/L YE was the best treatment, giving increases of 10.67 & 11.24, 30.79 & 32.58, and 30.99 & 32.70% over the unsprayed plants in the two seasons, respectively. Regarding the interaction between the two factors under study (compost and biostimulant treatments), the effect on the fixed oil characteristics of black cumin was significant in both seasons; the best results were obtained when the high compost rate was combined with 100 ppm AS + 8 g/L YE, compared with the other combined treatments, in the two growing seasons (Table 5).

3.4. Volatile oil production

The measurements recorded in Table 6 show that the volatile oil production (volatile oil percentage and volatile oil yield, mL/plant and L/ha) of black cumin (Nigella sativa L.) plants was significantly affected by compost addition at all levels in the two growing seasons. These measurements increased gradually with increasing compost levels, with increases of 19.89, 17.31, 75.86, 70.59, 72.99, and 69.36% over the untreated plants in the two seasons, respectively. These results regarding organic fertilization are similar to those of Ali and Hassan (2014) on black cumin, and Abdullah et al. (2012) and Hassan et al. (2015) on rosemary.
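The per-plant and per-hectare oil yields reported above follow from the oil percentage and the seed yield by simple proportion. A minimal sketch of that conversion; the oil density used to express the yield by volume, and the seed-yield and percentage figures, are illustrative assumptions rather than values from the paper:

```python
def oil_yield(seed_yield_kg_ha, oil_pct, oil_density_g_ml=0.92):
    """Convert seed yield and oil percentage into oil yield in L/ha.

    Oil mass (kg/ha) = seed yield * oil fraction; the volume follows
    from an assumed density (~0.92 g/mL for fixed plant oils).
    """
    oil_mass_kg = seed_yield_kg_ha * oil_pct / 100.0
    return oil_mass_kg / oil_density_g_ml  # kg / (kg/L) = L/ha

# Illustrative numbers: 1,200 kg seed/ha at 32% fixed oil.
print(f"fixed oil: {oil_yield(1200, 32):.0f} L/ha")
# Volatile oil at 1.2% of the same seed yield:
print(f"volatile oil: {oil_yield(1200, 1.2):.1f} L/ha")
```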
Regarding the biostimulant treatments, the results presented in Table 6 show that foliar spraying of ascorbic acid (AS) and yeast extract (YE), either alone or together, gave a significant increase in the volatile oil percentage and the volatile oil yield (mL/plant and L/ha) in both seasons compared with unsprayed plants. Spraying the plants with 100 ppm AS + 8 g/L YE was the most effective in increasing the volatile oil yield (mL/plant and L/ha), with increases of 17.02, 14.42, 38.24, 33.33, 37.17, and 35.94% over the control in the first and second seasons, respectively. As for the interaction between the two factors under study, the effect on all volatile oil measurements of black cumin plants was significant in both seasons; the most effective treatments were obtained by adding the high compost rate with 100 ppm AS + 8 g/L YE, compared with the other treatments, in both seasons (Table 6).

3.5. Fixed oil composition

The GC-MS analysis of the fixed oil showed that the components of the fixed oil extracted from black cumin seeds are fatty acids, namely myristic acid, palmitic acid, stearic acid, oleic acid, linoleic acid, linolenic acid, and arachidic acid, as shown in Table 7. The highest percentages among these fatty acids were those of oleic acid, followed by stearic acid and arachidic acid. The compost and biostimulant treatments increased these fatty acids in the fixed oil compared with untreated plants, especially when compost at the high rate plus ascorbic acid (AS) and yeast extract (YE) was applied.
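Both oil analyses summarize GC-MS peaks as area percentages, and the class totals (such as the MH and SCH values given below for the volatile oil) are simply sums of the individual component percentages. A minimal sketch of that aggregation; the component names and percentages are placeholders, not the values of Tables 7 and 8:

```python
from collections import defaultdict

# Illustrative GC-MS peak table: component -> (class, area %).
# MH = monoterpene hydrocarbons, SCH = sesquiterpene hydrocarbons.
peaks = {
    "p-cymene":      ("MH", 35.89),   # reported major component
    "alpha-pinene":  ("MH", 12.40),   # placeholder value
    "beta-pinene":   ("MH", 6.10),    # placeholder value
    "longifolene":   ("SCH", 4.70),   # placeholder value
    "caryophyllene": ("SCH", 3.90),   # placeholder value
}

totals = defaultdict(float)
for name, (chem_class, pct) in peaks.items():
    totals[chem_class] += pct

for chem_class, pct in sorted(totals.items()):
    print(f"{chem_class}: {pct:.2f}%")
```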
3.6. Volatile oil composition

The interaction effect of compost, ascorbic acid, and yeast extract on the volatile oil components of black cumin plants during the 2021/2022 season is shown in Table 8. The largest values of the main components were obtained with the COM(3) + AS and COM(3) + AS + YE treatments, with 35.89% p-cymene. All detected components belong to two chemical groups, monoterpenes and sesquiterpenes. Plants given compost at the high level (18 tons/ha) plus ascorbic acid (AS) at 100 ppm, yeast extract (YE) at 8 g/L, or AS at 100 ppm + YE at 8 g/L had the highest percentage of monoterpene hydrocarbons (MH, 91%) and produced the highest value of sesquiterpene hydrocarbons (SCH, 19.04%) across all components and chemical classes.

4. Conclusion

The results of this study show that the highest values of the studied characteristics (growth characteristics, yield, and oil production and its components) were recorded when compost was added at the high level (18 tons/ha). Foliar spraying with AS at 100 ppm + YE at 8 g/L also increased the proportions of the major components compared with untreated plants. GC-MS analysis of the volatile and fixed oils showed that the major constituents were likewise affected by the organic fertilizer and biostimulant applications. In general, the combination of the high rate of organic fertilizer (18 t/ha) with foliar AS at 100 ppm + YE at 8 g/L resulted in higher proportions of the major components compared with untreated plants.

Table captions:
Table 1. Physical and chemical analysis of the soil used during the 2020/2021 and 2021/2022 seasons.
Table 2. The physical and chemical properties of the compost used in this study.
Table 3. Effect of compost, ascorbic acid (AS), and yeast extract (YE) on growth measurements of Nigella sativa L. plants during the 2020/2021 and 2021/2022 seasons.
Table 4. Effect of compost, ascorbic acid (AS), and yeast extract (YE) on yield measurements of Nigella sativa L. plants during the 2020/2021 and 2021/2022 seasons.
Table 5. Effect of compost, ascorbic acid (AS), and yeast extract (YE) on fixed oil percentage and yield of black cumin plants during the 2020/2021 and 2021/2022 seasons.
Table 6. Effect of compost, ascorbic acid (AS), and yeast extract (YE) on volatile oil percentage and yield of black cumin plants during the 2020/2021 and 2021/2022 seasons.
Table 7. Effect of compost and biostimulants on fixed oil components of black cumin plants during the 2021/2022 season.
Table 8. The interaction effect of compost, ascorbic acid, and yeast extract on volatile oil components of black cumin plants during the 2021/2022 season.
Breast Metastasis From a Combined Hepatocellular-Cholangiocarcinoma

ABSTRACT

Combined hepatocellular-cholangiocarcinoma (cHCC-CC) is a unique entity that contains mixed elements of both hepatocellular carcinoma and cholangiocarcinoma. We report a 62-year-old woman with alcoholic cirrhosis with an elevated α-fetoprotein of 25.3 ng/mL. Abdominal computed tomography showed a poorly defined subcapsular nodular lesion in segment VIII, showing enhancement during the arterial phase and washout in the delayed phase. Histological examination of the hepatic segmentectomy specimen revealed a malignant epithelial neoplasia constituted by 2 distinct components, consistent with the diagnosis of cHCC-CC, classical type. One year after surgical resection, the patient noticed a nodule in the right breast. Histological examination of a core needle biopsy was compatible with a breast metastasis of the previously diagnosed liver cancer. To our knowledge, this is the first report of breast metastasis from a cHCC-CC, denoting disseminated metastatic disease and poor prognosis.

INTRODUCTION

Combined hepatocellular-cholangiocarcinoma (cHCC-CC) is a unique entity that contains mixed elements of both hepatocellular carcinoma (HCC) and cholangiocarcinoma (CC).1 This rare form of primary liver cancer accounts for 1%-5% of all primary liver cancers and has a poor prognosis.1 cHCC-CC was described for the first time in 1949 by Allen and Lisa.2 However, its diagnosis, biological behavior, prognosis, and treatment remain poorly understood compared with HCC or CC. Nonetheless, cHCC-CC has been increasingly recognized, partly because of the extensive sampling of explants and surgical resection specimens.1 cHCC-CC is currently defined as the presence of unequivocal mixed components of both HCC and CC according to the recent WHO definition and is divided into 2 subcategories: classic cHCC-CC and cHCC-CC with "stem cell features", when morphological and/or immunophenotypical features of stem/progenitor cells within the tumor predominate.3 Aggressive multimodal treatment is strongly recommended for recurrent cHCC-CC tumors. The current therapeutic management is based on surgical resection.1 Liver transplant, transarterial chemoembolization, radiofrequency ablation, and percutaneous ethanol injection are other available management options. However, the response to treatment is often poor, especially in patients with multiple or extrahepatic metastases.

CASE REPORT

A 62-year-old woman with alcoholic cirrhosis was regularly observed for 10 years in the Hepatology Department of Centro Hospitalar de São João. She denied alcohol consumption; she was not taking any medications, and there was no family history of liver disease. On biochemical screening, a slight elevation of α-fetoprotein (AFP) of 25.3 ng/mL (normal range, 1-8 ng/mL) was observed, without other abnormalities in liver function tests. Abdominal ultrasound was normal. Therefore, abdominal computed tomography (CT) was performed, which showed a poorly defined subcapsular nodular lesion (4.7 × 4.6 cm) in segment VIII, showing enhancement during the arterial phase and washout in the delayed phase (Figure 1). There was no evidence of distant or nodal metastasis on staging thoraco-abdominopelvic CT or on bone scintigraphy. The case was discussed in a multidisciplinary meeting, and it was decided to perform hepatic segmentectomy of segment VIII.
Grossly, the surgical specimen measured 10.0 × 8.5 × 5.0 cm, weighed 133 g, and was partially covered by the liver capsule. On the cut surface, a well-circumscribed, heterogeneous nodule (4.2 cm in diameter) was seen. Histological examination revealed a malignant epithelial neoplasia constituted by 2 distinct components: one with HCC-like features and the other with an adenocarcinomatous pattern displaying desmoplastic stroma. In the latter, cellular atypia was prominent, and mitoses were frequent. Phenotypical features of stem cells were not identified. Immunohistochemical studies revealed, in the HCC component, expression of HepPar-1 (focal), arginase-1 (focal), glypican-3 (diffuse), glutamine synthetase (diffuse), and CD34 (sinusoidal pattern). In the adenocarcinomatous component, there was diffuse expression of cytokeratins 7 (CK7) and 19 (CK19), in keeping with biliary differentiation. On the basis of these findings, the diagnosis of cHCC-CC, classical type, was made (Figure 2). Imaging revaluation was performed at 3 months with abdominal CT, without evidence of residual disease or recurrence. One year after surgical resection, the patient noticed a nodule in the right breast (1.5 cm in diameter), localized in the upper inner quadrant and characterized as hard and irregular. Mammography revealed a single nodule (1.1 cm) with characteristics compatible with BI-RADS 5. Core needle biopsy of the breast nodule displayed a solid neoplasia with necrosis, composed of polygonal cells with prominent nucleoli and marked anisokaryosis. The cells were immunoreactive for glypican-3 and glutamine synthetase, with no expression of estrogen receptor, progesterone receptor, HER-2, GATA-3, GCDFP-15, mammaglobin, arginase-1, and HepPar-1. These findings were compatible with a breast metastasis of the previously diagnosed liver cancer (Figure 3). At that time, another abdominal CT was performed, which showed 2 new hepatic lesions (4.3 and 3.7 cm) associated with many smaller nodules, suggesting multifocal cHCC-CC. The patient was referred to best supportive care and died 2 months later because of disease progression.

DISCUSSION

Combined HCC-CC (variously referred to as mixed HCC-CC or biphenotypic hepatobiliary carcinoma) clearly represents a distinct subtype of liver carcinoma, histologically characterized by the intermingling of both HCC and CC elements.1 According to the WHO classification, cHCC-CC is classified into 2 subtypes: the classical type and cHCC-CC with stem cell features.3 The latter is additionally subcategorized into the following 3 subtypes: typical, intermediate, and cholangiocellular.3 Furthermore, in some cHCC-CCs, there are foci of intermediate morphology at the interface of the HCC and CC components, showing biphenotypic differentiation; cells that have phenotypical or immunophenotypical features of stem/progenitor cells may also be present.3 The presence of these stem/progenitor cells is believed to be the reason why these tumors exhibit aggressive biological behavior and poor prognosis, with a 5-year survival of 36%.4 Although the histopathological characteristics of cHCC-CC are well known, its risk factors, imaging characteristics, and clinical behavior are still poorly understood. A few studies have demonstrated that some risk factors are similar to those for HCC and CC, such as liver cirrhosis, chronic hepatitis B and C, alcohol intake, or dioxin exposure.5
Our patient had alcoholic liver cirrhosis; cHCC-CC was diagnosed 10 years after the initial diagnosis of cirrhosis, during a regular follow-up screening. The diagnosis was suspected because of a slight elevation of AFP levels; Yano et al observed that an AFP level >400 IU/L is an independent prognostic factor in cHCC-CC.6 Other authors reported that the AFP level in cHCC-CC was lower than that in HCC (not reaching the threshold of statistical significance).7 cHCC-CC is characterized by aggressive biological behavior and a dismal prognosis compared with HCC or CC, and extrahepatic metastases commonly occur, with the stomach being the most common metastatic site.8 In our case, distant metastases were detected 1 year after surgery, localized in the breast, in keeping with the aggressiveness of these tumors as described in the literature. To our knowledge, this is the first report of breast metastasis from cHCC-CC, denoting disseminated metastatic disease and poor prognosis. An accurate diagnosis differentiating primary from metastatic breast carcinoma is important for appropriate treatment, to avoid unnecessary or even harmful therapy. In the literature, there are 5 cases of breast metastasis from HCC, and no cases have been described from combined HCC-CC.8-10

DISCLOSURES

Author contributions: M. Silva and R. Coelho contributed equally to this work. M. Silva collected data, wrote the manuscript, and is the article guarantor. R. Coelho collected data and wrote the manuscript. E. Rios performed the histopathologic examination and wrote the manuscript. S. Gomes collected data and revised the manuscript. F. Carneiro performed the histopathologic examination and approved the manuscript. G. Macedo revised and approved the manuscript.

Financial disclosure: None to report.

Informed consent was obtained for this case report.
Upregulation of TLR4/MyD88 pathway in alcohol-induced Wernicke's encephalopathy: Findings in preclinical models and in a postmortem human case

Wernicke's encephalopathy (WE) is a neurologic disease caused by vitamin B1 or thiamine deficiency (TD), alcohol use disorder being its main risk factor. WE patients present limiting motor, cognitive, and emotional alterations related to a selective cerebral vulnerability. Neuroinflammation has been proposed to be one of the phenomena that contribute to brain damage. Our previous studies provide evidence for the involvement of the innate immune Toll-like receptor (TLR) 4 in the inflammatory response induced in the frontal cortex and cerebellum in TD animal models (animals fed a TD diet [TDD] and receiving pyrithiamine). Nevertheless, the effects of the combination of chronic alcohol consumption and TD on TLR4, and their specific contribution to the pathogenesis of WE, are currently unknown. In addition, no studies on TLR4 have been conducted on WE patients, since brains from these patients are difficult to obtain. Here, we used rat models of chronic alcohol (CA; 9 months of forced consumption of 20% (w/v) alcohol), TD hit (TDD + daily 0.25 mg/kg i.p. pyrithiamine during 12 days), or combined treatment (CA + TDD) to check the activation of the proinflammatory TLR4/MyD88 pathway and related markers in the frontal cortex and the cerebellum. In addition, we characterized for the first time the TLR4 and its coreceptor MyD88 signature, along with other markers of this proinflammatory signaling such as phospho-NFκB p65 and IκBα, in the postmortem human frontal cortex and cerebellum (gray and white matter) of an alcohol-induced WE patient, comparing it with negative (no disease) and positive (aged brain with Alzheimer's disease) control subjects for neuroinflammation. We found an increase in cortical TLR4 and its adaptor molecule MyD88, together with an upregulation of the proinflammatory signaling molecules p-NFκB and IκBα, in the CA + TDD animal model. In the patient diagnosed with alcohol-induced WE, we observed cortical and cerebellar upregulation of the TLR4/MyD88 pathway. Hence, our findings provide evidence, both in the animal model and in the human postmortem brain, of the upregulation of the TLR4/MyD88 proinflammatory pathway in alcohol consumption-related WE.
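As a side note on the model parameters quoted above, the 20% (w/v) alcohol solution and the weight-adjusted pyrithiamine dose reduce to simple arithmetic. A minimal sketch of those two conversions; the rat body weight and the ethanol stock purity are illustrative assumptions, not values from the paper:

```python
ETHANOL_DENSITY_G_ML = 0.789  # pure ethanol, g/mL

def wv_solution(percent_wv, final_volume_ml, stock_purity_vv=0.96):
    """Grams of ethanol and mL of stock needed for a % (w/v) solution.

    20% (w/v) means 20 g of ethanol per 100 mL of final solution.
    """
    grams = percent_wv / 100.0 * final_volume_ml
    stock_ml = grams / (ETHANOL_DENSITY_G_ML * stock_purity_vv)
    return grams, stock_ml

def ip_dose_mg(dose_mg_per_kg, body_weight_g):
    """Per-animal dose for a mg/kg i.p. injection."""
    return dose_mg_per_kg * body_weight_g / 1000.0

g, stock = wv_solution(20, 1000)  # 1 L of 20% (w/v) drinking solution
print(f"{g:.0f} g ethanol -> {stock:.0f} mL of 96% stock per litre")
print(f"pyrithiamine: {ip_dose_mg(0.25, 300):.3f} mg for a 300 g rat")
```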
Introduction

Wernicke's encephalopathy (WE) and Korsakoff's syndrome are considered different stages of the same disease caused by vitamin B1 or thiamine deficiency (TD), where WE represents the acute and reversible (when treated with thiamine) form of the disease and Korsakoff's syndrome is an advanced and irreversible state characterized by neuronal death. This neurologic disease, also named Wernicke-Korsakoff syndrome (WKS), is characterized by ocular abnormalities (nystagmus and/or ophthalmoplegia), mental status changes, and gait disturbances (Kohnke and Meek, 2021). Because of limiting motor, cognitive, and emotional alterations, these patients become heavily dependent on others to complete daily life activities. Alcohol use disorder (AUD) is the main risk factor for this disease, although causes unrelated to alcohol dependence may also induce the pathology, such as repetitive vomiting, gastric disorders, or bariatric surgery (Kopelman, 1995; Deb et al., 2001). The nutritional TD in AUD is associated with malnourishment and decreased absorption of thiamine, due to the direct effects of alcohol on its metabolism, besides reduced storage in the liver because of alcoholic liver disease (Arts et al., 2017). The bioactive form of thiamine (thiamine diphosphate) is necessary for energy metabolism in all cells. Therefore, the brain is the main site of TD-induced damage because of its immense energy requirement in comparison with the rest of the body (Clarke and Sokoloff, 1999).

Brain damage has been extensively described in several brain regions in WE, mainly including diencephalic regions such as the thalamus and mammillary bodies (Manzo et al., 2014). However, some authors have pointed out the presence of damage in other structures less studied in this pathology, such as the frontal cortex (Jacobson and Lishman, 1990; Jernigan et al., 1991; Paller et al., 1997; Aupée et al., 2001; Gibson et al., 2016) and the cerebellum (Mulholland, 2006; Manzo et al., 2014). In our previous preclinical studies of WE, we selected these less studied structures, the frontal cortex and cerebellum (Moya et al., 2021, 2022), as regions of great interest to be investigated, since both participate in motor function control, cognition, and emotional responses (Baillieux et al., 2008; Molinari et al., 2008; Rudebeck et al., 2008; Leggio et al., 2011; Clausi et al., 2017). Indeed, the frontal cortex is particularly important in executive control tasks and behavioral inhibition, including cognitive processes, social behavior, and inhibition of motor responses.
Within the cerebellum, the mapping of associative learning with emotional, motor, and cognitive functions follows a medial-to-lateral cerebellar distribution: the sensorimotor functions are distributed more toward the midline, whereas the cognitive functions are located more laterally in the cerebellar hemispheres. Executive functions, including verbal working memory, are related to both cerebellar hemispheres, whereas affective functions are primarily midline in the so-called "limbic cerebellum." Of interest, the left cerebellar hemisphere, the region analyzed in the present study, also appears to be involved in visuospatial functions and in linguistic processes (Klein et al., 2016; Amore et al., 2021). Therefore, the cerebellum and the frontal cortex are two brain areas directly involved in the behavioral alterations manifested in WE, and they deserve further investigation.

The exact cause of brain damage in WE is unclear, but neuroinflammation has been proposed as a contributing factor (Neri et al., 2011; Zahr et al., 2014; Toledo Nunes et al., 2019). Proinflammatory cytokines, enzymes, and other constituents of this process have been reported, but how the inflammatory response is activated in the brain tissue remains unknown. Recently, our research group reported for the first time the involvement of the innate immune Toll-like receptor (TLR) 4 in the pathogenesis of nonalcoholic WE, showing a selective vulnerability of the frontal cortex and cerebellum, the two brain structures understudied in comparison with diencephalic regions, in this pathology over time (Moya et al., 2021). Activation of the canonical proinflammatory TLR4 pathway induces, via myeloid differentiation factor 88 (MyD88), the recruitment of downstream signaling molecules that trigger the stimulation of transcription factors, such as nuclear factor κB (NF-κB), which lead to the induction of genes encoding inflammation-associated molecules and cytokines. In addition to cytokines, NF-κB transcriptional activity induces the expression of other proinflammatory markers that lead to oxidative and nitrosative stress, such as the inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) enzymes, and different caspases, generating lipid peroxidation and apoptotic cell death, respectively. Some other molecules can be released in response to injured tissue, such as heat shock proteins and the high mobility group box 1 protein (HMGB1), inducing more neuroinflammation in a vicious cycle [reviewed in Orio et al. (2019)]. The TLR4-induced neuroinflammatory pathway has been extensively studied in the context of AUD (Pascual et al., 2011; Crews et al., 2013; Montesinos et al., 2016; Antón et al., 2017), and we recently reported that TLR4-induced neuroinflammation in the frontal cortex and cerebellum of TD animals could be related to the cognitive and motor deficits, respectively (Moya et al., 2021). However, the specific contributions of TD and chronic alcohol (CA) use to TLR4 signaling, and their role in the pathogenesis of WE, are currently unknown.

In the present study, we aimed to further characterize the role of TLR4 in WE by using combined models of TD and CA exposure, the two main known contributing factors of the pathology, and we also explored TLR4 activation and signaling in the frontal cortex and cerebellum of a postmortem alcohol-induced WE brain. The presence of postmortem brains of WE-diagnosed patients in biobanks is extremely scarce.
Here, we report an in-depth analysis (of white and gray matter in the frontal cortex and cerebellum) of a single case, using a matched control subject and a positive control in which TLR4-induced neuroinflammation has been extensively reported, namely an aged brain with Alzheimer's disease.

Rodent studies

Animals and housing

Male Wistar rats (Envigo©, Barcelona, Spain) (n = 50), weighing 100-125 g at arrival, were used. Animals were housed in groups of 2-3 per cage and maintained at a constant room temperature (21 ± 1 °C) and humidity (60 ± 10%) on a reversed 12 h dark-light cycle (lights on at 8:00 p.m.). Standard food and tap water were available ad libitum during an acclimation period of 12 days prior to experimentation; then, rats were randomly assigned to the experimental groups. All procedures followed the ARRIVE guidelines and adhered to the guidelines of the Animal Welfare Committee of the Complutense University of Madrid, in compliance with the Spanish Royal Decree 118/2021 and following the European Directive 2010/63/EU on the protection of animals used for research and other scientific purposes.

Experimental groups

The experimental design and all procedures of this animal study are described in detail in Moya et al. (2022). In brief, to explore the different conditions that contribute to developing WE, the following experimental groups were used. CA: animals exposed to forced consumption of 20% (w/v) alcohol for 9 months (n = 9). TD diet (TDD): TD hit (TDD* + pyrithiamine 0.25 mg/kg dissolved in saline (0.9% NaCl), daily i.p. injections during the last 12 days of experimentation; *the specific TDD composition is detailed in the Supplementary Material) (n = 9). CA + TDD: chronic alcohol combined with TDD in the last days of treatment (n = 10). These groups were compared with the corresponding control group (C): animals drinking water with standard chow (n = 8). During the last 12 days of the TDD protocol, the remaining animals (C and CA) received equivalent daily injections of vehicle (saline, i.p.). The number of animals in the alcohol and TDD groups was slightly higher than in the control group to allow for the possible loss of experimental subjects. We consider the group with the combined CA + TDD treatment to be the most relevant in this translational study, since it is the animal model that most closely approximates alcohol-related WE.

Tissue sample collection

On day 12 of the TDD protocol, at least 1 h after treatment administration, all animals were killed via rapid decapitation after an anesthesia overdose of sodium pentobarbital (320 mg/kg, i.p., Dolethal®, Vétoquinol, Spain). Brains were immediately isolated from the skull, discarding meninges and blood vessels, and the frontal cortex (area between Bregma +4.7 and +1.2 mm approx.) and the left cerebellar hemisphere were dissected on ice and frozen at −80 °C until assayed. The liver was also immediately taken out and kept at −80 °C for other assays.

Western blot analysis

Frontal cortex and cerebellar hemisphere samples were processed and analyzed using western blotting following the methodology previously detailed in Moya et al. (2022). In brief, the tissue samples were homogenized at a ratio of 1:3 (w/v) in ice-cold lysis buffer with protease inhibitors, followed by centrifugation to obtain the supernatants. Protein levels were measured using Bradford's method (Bradford, 1976).
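Since the Bradford determination reduces to fitting a standard curve and interpolating sample readings against it, the short Python sketch below illustrates that arithmetic. It is a generic illustration, not the laboratory's actual script: the BSA concentrations, absorbance readings, and the 1:10 dilution in the example are all hypothetical values.

```python
import numpy as np

# Hypothetical BSA standard curve for a Bradford assay:
# absorbance at 595 nm measured for known BSA concentrations (mg/ml).
bsa_conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])       # mg/ml
bsa_a595 = np.array([0.00, 0.12, 0.24, 0.46, 0.68, 0.90])  # absorbance units

# Linear fit through the standards: A595 = slope * conc + intercept.
slope, intercept = np.polyfit(bsa_conc, bsa_a595, deg=1)

def protein_conc(a595: float, dilution_factor: float = 1.0) -> float:
    """Interpolate a sample's protein concentration (mg/ml) from its A595,
    correcting for any dilution applied before the reading."""
    return dilution_factor * (a595 - intercept) / slope

# Example: a lysate supernatant diluted 1:10 that reads A595 = 0.35.
print(f"Estimated concentration: {protein_conc(0.35, dilution_factor=10):.2f} mg/ml")
```

An estimate of this kind is what would then be used to adjust each sample to a common working concentration before gel loading, as described next.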
The samples were adjusted with loading buffer to a final concentration of 1 mg/ml, and 15-20 µg of total protein were separated on SDS-polyacrylamide gels and transferred to nitrocellulose membranes. Blots were incubated with specific primary and secondary antibodies, using the housekeeping β-actin protein as a loading control (see Table 1 for a complete list of antibodies and their details). Bands were visualized using an ECL kit and quantified via densitometry using ImageJ software (NIH, United States).

Liver damage

The status of the liver in the animals was checked by measuring the hepatic nitrite and malondialdehyde (MDA) levels, given the major role of these processes in the pathogenesis of alcoholic liver disease (ALD) (McKim et al., 2003; Galicia-Moreno and Gutiérrez-Reyes, 2014; Pérez-Hernández et al., 2017; Tan et al., 2020). For details, see Supplementary Data S1.5.

Postmortem human studies

Cases

Three cases were selected from brains donated to the Biobank of the Hospital Universitario Fundación Alcorcón (HUFA), Madrid, Spain. Case diagnosed with WE: woman, 62 years old, with a history of chronic alcohol consumption of at least one bottle of wine per day for 10-15 years. The patient showed classical symptoms of WE: altered mental state, including confusional syndrome, disorientation in space and time, and scarce and incoherent spontaneous language; ocular signs (horizontal nystagmus); and motor disturbances (extrapyramidal symptoms, decreased reflexes). The liver enzyme (transaminase) values and other information are reported in Supplementary Section S1.1. WE was diagnosed on the basis of the clinical presentation together with confirmation by postmortem neuropathological analyses. Negative control case: woman, 53 years old; cause of death unrelated to neurological disease or psychiatric disorder. Positive control case: aged brain with Alzheimer's disease (AD): woman, 76 years old. Primary progressive aphasia, logopenic subtype; frontotemporal lobar degeneration; neuropathological diagnosis with changes of advanced-stage AD. This case served as the positive control for neuroinflammation, since it carried a double hit: an aged brain with AD. (It has already been demonstrated that TLR4-mediated neuroinflammation is involved in AD pathology (Zhang et al., 2012; Fiebich et al., 2018; Miron et al., 2018; Calvo-Rodriguez et al., 2020), and an aged brain is also susceptible to greater neuroinflammation.) The extended clinical history of each case can be found in Supplementary Section S1.1.

Sample processing

Postmortem proceedings were carried out at the HUFA, Madrid, Spain. All studies were performed in compliance with national ethical and legal regulations and were approved by the Drug Research Ethics Committee of the HUFA (ref. 62-2018). According to the brain bank protocol, in conventional donation cases, immediately after extraction, the left half of the brain was fixed via immersion in phosphate-buffered 4% formaldehyde for at least 3 weeks. The brain was then processed in coronal slices, except for the cerebellum, which was sectioned sagittally. Brain samples from the dorsolateral frontal cortex (Brodmann area 9) and the left cerebellar hemisphere (corresponding to the area from the superior cerebellum to the dentate nucleus) were selected for this study. The tissue was embedded in paraffin, and 4 μm sections were obtained via microtomy for subsequent immunostaining.
Immunohistochemistry

A detailed description of the immunohistochemistry (IHC) protocol is provided in Supplementary Section S1.2. In brief, slides were incubated with specific primary antibodies against TLR4, MyD88, p-NFκB p65, and IκB-α and were developed using diaminobenzidine (DAB), with Carazzi's hematoxylin as counterstaining. To evaluate the specificity of the staining, several technical controls were run, including, on the one hand, the omission of the primary antibody and, on the other hand, the omission of the secondary antibody. These technical controls resulted in an absence of staining, and they were performed in both the frontal cortex and cerebellar tissue from the control, alcohol-induced WE, and AD cases. In addition, the specificity of the TLR4 and MyD88 antibodies selected for this study had previously been demonstrated in human brain tissue by IHC and western blotting (Zurolo et al., 2011; MacDowell et al., 2017; Martín-Hernández et al., 2018).

Imaging and quantification

Immunostained slides of the frontal cortex and cerebellar hemisphere were observed under light microscopy (Zeiss Axioplan Microscope, Germany). The microscope had a high-resolution camera attached (Zeiss Axioplan 712 color, Germany), which was used for capturing the images; these were then processed using Axiovision 40V 4.1 (Carl Zeiss Vision, Germany) and ZEN2 software (Carl Zeiss AG, Oberkochen, Germany). Light, brightness, and contrast conditions were kept constant during the capture process. For the study of each tissue section per patient, a total of 16 visual fields, 8 within the gray matter and another 8 within the white matter, were examined. An image of each visual field was taken at 40× magnification for the frontal cortex and at 20× magnification for the cerebellum, to capture its three layers. In addition, manual neuronal counting of each image was performed, yielding a total-count range that allowed an accurate comparison between the cases. Positive signals (brown, due to DAB) on the immunohistochemically stained tissues were semiquantitatively evaluated via visual and automatic scoring, comparing both methods to achieve the most reliable results. Images were always evaluated in a blinded manner, without prior knowledge of the clinical information.

Visual/observational analysis: immunopositivity of the images was visually assessed by the investigator using a scoring system adapted to our study. The modified immunoreactivity score (IRS) is a composite score assigned to the distribution and intensity of immunostaining, based on Wang et al. (2011) and Meyerholz and Beck (2018) (see Supplementary Section S2.1 and Figure 1). In brief, the observer assigns subscores for immunoreactive distribution (on a 0-4 scale) and intensity (on a 0-3 scale), multiplying them to calculate the total score for each image (ranging from 0 to 12). The final IRS was obtained by averaging the values of the eight fields for each section.

Automated analysis: a semiquantitative analysis of the images was carried out using the ImageJ Fiji software, following the color deconvolution protocol previously described by Crowe and Yue (2019). In brief, after deconvolution of the images, a threshold value was set to remove the background signal, followed by quantification of the DAB signal within the image. The average intensity of the DAB signal in the IHC images was calculated. Lastly, the mean value of the eight images was taken to represent the specific immunoreactivity of each target protein.
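To make the two scoring routes concrete, the sketch below implements both in Python: the modified IRS arithmetic (a 0-4 distribution subscore multiplied by a 0-3 intensity subscore per image, averaged over the eight fields) and an automated DAB mean-intensity measure via color deconvolution. This is a minimal sketch, not the study's pipeline: scikit-image's rgb2hed stands in for the Fiji color deconvolution plugin, and the file path, background threshold, and example subscores are hypothetical.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed

def modified_irs(distribution: int, intensity: int) -> int:
    """Modified IRS for one image: distribution (0-4) x intensity (0-3) -> 0-12."""
    assert 0 <= distribution <= 4 and 0 <= intensity <= 3
    return distribution * intensity

def section_irs(field_subscores) -> float:
    """Final IRS for a section: mean of the per-image scores of its 8 fields."""
    return float(np.mean([modified_irs(d, i) for d, i in field_subscores]))

def dab_mean_intensity(image_path: str, threshold: float = 0.02) -> float:
    """Approximate automated DAB quantification: deconvolve an RGB IHC image
    into hematoxylin-eosin-DAB stain channels, drop background below a
    threshold, and return the mean of the remaining DAB signal."""
    hed = rgb2hed(io.imread(image_path))   # channel 0 = hematoxylin, 2 = DAB
    dab = hed[:, :, 2]
    signal = dab[dab > threshold]          # crude background removal
    return float(signal.mean()) if signal.size else 0.0

# Example: eight (distribution, intensity) pairs scored by a blinded observer.
fields = [(3, 2), (4, 2), (3, 3), (2, 2), (3, 2), (4, 3), (3, 2), (2, 1)]
print("Section IRS:", section_irs(fields))
# print("Mean DAB signal:", dab_mean_intensity("field_01.tif"))
```

Under either route, the per-field values are averaged per section and then expressed as a percentage of change versus the control case, as in the figure legends that follow.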
Since valuable information might be neglected by the abovementioned scoring systems, we also included a brief description of the cell types and tissue components positively marked, as well as the intensity and characteristics of the staining (see Supplementary Section S1.3). Both the visual IRS and the automatic Fiji methods were employed for the analysis of the frontal cortex images. Since both procedures gave comparable and reliable results (see the Results section), the cerebellar hemisphere was subsequently analyzed using only the Fiji method.

Statistical analysis

Data are expressed as mean ± S.E.M. In the animal study, two-way ANOVAs were used to assess the overall effects of, and interactions between, two factors: CA and TDD. In addition, an unpaired Student's t-test was used to compare the CA + TDD group with the control group. Regarding the immunohistochemistry analyses, automated Fiji measures were analyzed using one-way ANOVA. For manual IRS data, the nonparametric Kruskal-Wallis test followed by paired comparisons with the Mann-Whitney test was used. The comparison between manual and automated IHC measurements was performed using Pearson's correlation and linear regression analyses. Parametric tests were performed when normality and homoscedasticity were verified (checked by the Kolmogorov-Smirnov and Bartlett's tests, respectively); otherwise, data were transformed, or the alternative nonparametric analysis was applied. In the ANOVAs, the Bonferroni post hoc test was used when appropriate. Outliers were analyzed using Grubbs' test. A p value of <0.05 was set as the threshold for statistical significance in all statistical analyses. The data were analyzed using GraphPad Prism version 8.0 (GraphPad Software, Inc., La Jolla, CA, United States).

Results

Frontal cortex findings

CA + TDD-treated rats showed increased expression of the TLR4, MyD88, p-NF-ĸB, and IĸBα proteins in the frontal cortex

CA increased TLR4 expression levels (Figure 1A; overall effect F(1,31) = 13.7, p = 0.0008). Regarding its adaptor, rats exposed to TDD showed a significant increase in MyD88 protein levels compared with controls (p = 0.0434), which was higher in the CA group (p = 0.0007 compared to C). Likewise, the combined CA + TDD treatment induced MyD88 upregulation with respect to control animals (p = 0.0315) (Figure 1B; differences between groups H = 15.82, p = 0.0012).

FIGURE 1. Effects of thiamine deficiency diet (TDD), chronic alcohol (CA), and CA + TDD treatment on the TLR4 signaling pathway in the frontal cortex of rats. Graphs indicate protein levels of the (A) TLR4, (B) MyD88, (C) p-NF-ĸB, and (D) IĸBα markers measured via western blotting; data for the respective bands of interest (upper bands) were normalized to β-actin (lower band) and expressed as a percentage of change in comparison with the control group. Some blots were cropped from the original (black lines) to improve the clarity and conciseness of the presentation. Mean ± S.E.M. (n = 8-10). Two-way ANOVA or nonparametric Kruskal-Wallis test. Overall alcohol effect: &&&p < 0.001; different from control group: *p < 0.05, **p < 0.01, ***p < 0.001. Since the combined CA + TDD treatment better mimics the human case of alcohol-induced Wernicke's encephalopathy (WE), this group was also compared with the C group using an unpaired Student's t-test or the Mann-Whitney test (CA + TDD vs. C): a p < 0.05, aa p < 0.01, aaaa p < 0.0001.
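As a concrete illustration of the statistical workflow described in the Statistical analysis section above (assumption checks, the two-way CA × TDD ANOVA, the planned CA + TDD vs. control comparison, and the manual-versus-automated agreement analysis), here is a minimal Python sketch with scipy and statsmodels standing in for GraphPad Prism. All data in it are synthetic and every numerical value is made up; it mirrors the structure of the analyses, not the study's actual data.

```python
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Synthetic tidy table: one row per rat; 'level' stands for band densitometry
# normalized to beta-actin and expressed as % of the control mean.
df = pd.DataFrame({
    "CA":  np.repeat([0, 0, 1, 1], 9),        # chronic alcohol factor
    "TDD": np.tile(np.repeat([0, 1], 9), 2),  # thiamine-deficient diet factor
    "level": rng.normal(100, 15, 36),
})

# Assumption checks before choosing parametric tests.
groups = [g["level"].to_numpy() for _, g in df.groupby(["CA", "TDD"])]
print("Bartlett homoscedasticity p:", st.bartlett(*groups).pvalue)
print("Kolmogorov-Smirnov normality p:",
      st.kstest(st.zscore(df["level"]), "norm").pvalue)

# Two-way ANOVA: overall CA and TDD effects and their interaction.
model = ols("level ~ C(CA) * C(TDD)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Planned comparison: combined CA + TDD group vs. control group.
control = df.query("CA == 0 and TDD == 0")["level"]
ca_tdd = df.query("CA == 1 and TDD == 1")["level"]
print("t-test:", st.ttest_ind(ca_tdd, control))
print("Mann-Whitney:", st.mannwhitneyu(ca_tdd, control))

# Agreement between manual IRS and automated Fiji measurements (synthetic).
irs = rng.normal(6.0, 2.0, 24)
fiji = 10.0 * irs + rng.normal(0.0, 8.0, 24)
print("Pearson r:", st.pearsonr(irs, fiji))
print("Linear regression:", st.linregress(irs, fiji))
```

The nonparametric branch (Kruskal-Wallis with pairwise Mann-Whitney comparisons) follows the same pattern with scipy.stats.kruskal when the assumption checks fail.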
CA exposure induced an increasing trend in the phosphorylation of NF-ĸB that did not reach significance by ANOVA (Figure 1C; overall effect F(1,29) = 2.995, p = 0.0941). We report the results for the phosphorylated NF-κB protein normalized to the structural protein β-actin, as for the rest of the markers, in accordance with other authors (Yang et al., 2021), since increased phosphorylation of the p65 subunit is indicative of NF-κB activation to mediate inflammatory gene transcription. The levels of total NF-κB were measured, with no changes (see Supplementary Section S2.3 and Figure 5). We also analyzed the IĸBα protein as a reporter of NF-ĸB activity, finding an increase in its levels as an effect of the TDD (p = 0.042) and CA (p = 0.0039) treatments relative to controls (Figure 1D; differences between groups H = 12.75, p = 0.0052). The increased expression of the NF-κB inhibitory protein IκBα can be considered an autoregulatory mechanism switched on by NF-κB to block its own stimulation. Moreover, the COX-2 enzyme was studied in the frontal cortex, showing an interaction between the CA and TDD factors (F(1,25) = 7.407, p = 0.0117); post hoc analysis revealed no statistical differences among groups (Supplementary Section S2.3 and Figure 4A). In addition, seeking the best approximation between the animal model and the human case, we consider the combined CA + TDD treatment to be the animal model that best mimics alcohol-related WE. Accordingly, we analyzed the CA + TDD group separately via Student's t-test or the Mann-Whitney test. The CA + TDD group showed higher protein levels of TLR4 and MyD88 compared with controls (Figures 1A, B; U = 13, p = 0.0274; t = 5.208, df = 16, p < 0.0001, respectively). In addition, an elevation in p-NF-ĸB and IĸBα protein expression was observed in this group with respect to the control (Figures 1C, D).

Postmortem human frontal cortex of alcohol-induced Wernicke's encephalopathy showed increased expression of TLR4, its adaptor MyD88, and phospho-NFκB p65

Prior to the visual and automatic analysis of the images, the results of the manual counting of the total number of neurons (mainly pyramidal) showed that the three cases studied were within the same range and were thus comparable (data not shown). The findings reported below were obtained using the automatic ImageJ Fiji software and were confirmed via comparison with the manual IRS analyses. Correlations between both measurements were high (Supplementary Section S2.2 and Figure 3; Table 1; for TLR4: r = 0.6375; for MyD88: r = 0.7958; both p < 0.0001), supporting that the Fiji protocol used here is a robust automated measure for TLR4 and MyD88 IHC staining in brain tissue. In addition, the Fiji data were chosen as the representative results since this method is the most objective. Manual IRS results for the frontal cortex images can be found in Supplementary Section S2.2 and Figure 2.

In the cortical gray matter of the control case, weak TLR4 immunoreactivity was occasionally detected in a few pyramidal neurons and glial cells (Figure 2A). In the same brain area of the WE patient, we observed strong TLR4 expression in most pyramidal neurons (with cytoplasmic localization; Figure 2Ba), as well as in glial cells (Figure 2Bb) and, slightly, in the neuropil. Tissue edema was also evident, with parenchymal distension ("gaps" or empty spaces) (Figure 2B). Some endothelial cells in the blood vessels also appeared to be TLR4 positive (Figure 8).
FIGURE 2. Representative images of immunohistochemical detection for TLR4 and MyD88 in the gray matter of the frontal cortex of control, alcohol-related WE, and Alzheimer's disease (AD) cases. Regarding the TLR4 and MyD88 results in the WE case (B), pyramidal neurons were strongly stained, especially in the cytoplasm (arrows; high magnification in insets (Ba; Ec)), along with glial immunoreactivity (arrowheads; high magnification in insets (Bb; Ea, b)); this staining pattern was also found in the AD case (C). MyD88 was also found around the endothelial cells in blood vessels (bv). Tissue edema (parenchymal distension/"gaps") is prominent (B, E). Images taken with a 40× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change in comparison with the control group. Different from control: *p < 0.05, **p < 0.01, ****p < 0.0001; different from WE: ##p < 0.01.

Likewise, in the AD case, we found heavy TLR4 expression in the cytoplasm of the pyramidal neurons and glial cells and in the vicinity of some blood vessels (Figures 2C, 8). Thus, we found significant differences between the cases (F(2,21) = 6.758, p = 0.0054), with an increased TLR4 signal in the WE (p = 0.0066) and AD (p = 0.0358) cortical gray matter compared with the TLR4-positive staining in the control case. In addition, we found higher MyD88 expression in the WE (p = 0.0086) and AD (p < 0.0001) cases compared with the MyD88 staining in the control case (Figures 2E, F; F(2,20) = 24.62, p < 0.0001). MyD88 immunoreactivity was observed in the cytoplasm of pyramidal neurons (Figure 2Ec), in some glial cells (Figure 2Ea, b), and around the endothelial cells of blood vessels in the WE tissue (Figures 2E, 8). A similar pattern of greater intensity was observed in the AD case (Figures 2F, 8; p = 0.0039).

Regarding the results in the cortical white matter, the control and WE cases showed faint TLR4 staining (Figures 3A, B), but higher TLR4 immunoreactivity was detected in the AD patient, mostly between the fibers and in glial cells, showing that the AD case had a higher TLR4 expression than the WE case (p = 0.0028) (Figure 3C; F(2,20) = 7.53, p = 0.0036). It is worth noting that we found an increase in MyD88 expression in the cortical white matter of both the WE and AD patients when compared with the control case (p < 0.0001 and p = 0.0018, respectively), and this increase was particularly prominent in the WE case (p < 0.0001, compared with the AD case) (Figures 3D-F; F(2,21) = 139.4, p < 0.0001). This pronounced positive WE signal appears to arise from the surrounding fibers and glial cells (Figure 3E, magnified box).

FIGURE 3. Representative images of immunohistochemical detection for TLR4 and MyD88 in the white matter of the frontal cortex of control, alcohol-related WE, and AD cases. The WE patient showed the highest elevation in MyD88 expression (E). Images taken with a 40× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group. Different from control: **p < 0.01, ****p < 0.0001; different from WE: ##p < 0.01, ####p < 0.0001.

In addition, we checked the p-NFκB p65 and IκB-α markers in the human cases. In the cortical gray matter, we noticed comparatively elevated immunoreactivity of p-NFκB p65 in the WE case relative to the control case (p = 0.0204), finding a mostly nuclear localization of this mediator of inflammation, observed mainly in neurons (especially pyramidal ones) (Figure 4Ba, b). A very similar staining pattern was also found in the positive control for neuroinflammation, the AD case, which was significantly different from the control (p = 0.0003) (Figure 4Cc) (Figure 4; p-NFκB p65: U = 16.14, p = 0.0003). Regarding IκB-α in the gray matter of the frontal cortex, we found certain differences between the cases (Figure 4; F(2,21) = 4.406, p = 0.0253). In the WE patient, slight staining of the cell cytoplasm was observed in some neurons, although it was not significant compared with the control (Figure 4E; p > 0.05, n.s.). Likewise, the AD positive control showed no differences in IκB-α staining versus the control (Figure 4E; p > 0.05, n.s.). However, although lower total levels of immunoreactivity were detected in the AD subject than in the WE case (p = 0.0221), it is noteworthy that striking IκB-α labeling was observed in the astrocytes (Figure 4Fe, f), which was also found, with less intensity, in the WE case, with some astrocytes reacting in the same way to this marker (Figure 4Ed). In the cortical white matter, we found no significant differences between cases for either p-NF-κB or IκB-α (Figure 5; p > 0.05, n.s.).

Cerebellar findings

Unaffected expression of TLR4 signaling markers in the cerebellar hemisphere of thiamine deficiency diet, chronic alcohol, and CA + TDD-treated rats

None of the experimental conditions induced significant changes in the markers studied (Figures 6A-D; total NF-κB: Supplementary Section S2.3 and Figure 5; p > 0.05, n.s.). The iNOS enzyme and heat shock protein 70 (HSP70) were also analyzed, showing no alterations in their levels under any of the treatments (p > 0.05, n.s.; Supplementary Section S2.3 and Figures 4B, C, respectively). Likewise, no significant differences were found when comparing the CA + TDD group with the control for any of these markers (p > 0.05, n.s.).

Postmortem human cerebellar hemisphere of alcohol-induced Wernicke's encephalopathy showed increased expression of TLR4 and its adaptor MyD88 and increased IκB-α immunoreactivity

In the cerebellum sections, the three cellular layers of the cerebellar cortex (the molecular layer, the Purkinje cell layer, and the granular layer) were observed and analyzed together as cerebellar gray matter. The control case showed only occasional and low TLR4 immunoreactivity, mainly in some cells at the transition between the molecular and granular layers (Figure 7A). By contrast, the WE case showed more intense TLR4 staining, especially in the granular layer, in the cells and between the branching or neuropil; endothelial cells of blood vessels also showed TLR4 staining (Figures 7B, 8). Likewise, TLR4 in the AD patient was found mostly throughout the granular layer and in blood vessels (Figures 7C, 8). Accordingly, the semiquantitative analysis demonstrated a significant increase in TLR4 expression in the cerebellar hemisphere gray matter (F(2,20) = 13.81, p = 0.0002) of the WE (p = 0.0006) and AD (p = 0.0006) patients compared with the control case. In the cerebellar cortex, MyD88 staining was predominant in the WE patient, with a main distribution within the granular layer, as for TLR4 (Figure 7E), whereas weak MyD88 immunoreactivity was found in both the control case
and the AD patient (Figures 7D, F, respectively). MyD88 expression intensity was significantly increased in the cerebellar hemisphere gray matter (F(2,21) = 54.03, p < 0.0001), with the WE patient showing the highest levels compared with the AD patient (p < 0.0001) and the control case (p < 0.0001). In contrast, the cerebellar hemisphere white matter did not show any differences in either TLR4 or MyD88 immunostaining among the human cases analyzed (Figure 9).

In addition, p-NF-κB and IκB-α immunoreactivity were also analyzed in the cerebellar hemisphere. The p-NF-κB results in the gray matter showed no differences in the WE case compared with the control (Figure 10; p > 0.05, n.s.). There was a slight difference between the cases (Figure 10; F(2,20) = 12.27, p = 0.0003), with an apparently lower level of labeling in the AD case compared with the control (p = 0.0002) and the WE subject (p = 0.0369). With regard to IκB-α, we observed significant differences between the patients (Figure 10; F(2,20) = 5.466, p = 0.0128), finding increased IκB-α immunoreactivity in the WE case compared with the control (p = 0.0462) and the AD subject (p = 0.0207). This IκB-α labeling appeared mainly throughout the granular layer, as was the case for TLR4 and MyD88. In agreement with the TLR4 and MyD88 results in the white matter of the cerebellar hemisphere, we observed no significant changes between cases in this area for the p-NF-κB and IκB-α markers (Figure 11; p > 0.05, n.s.).

Thiamine levels

Plasma thiamine levels were measured in all the animals in our study. In brief, after 9 months of alcohol exposure and after the TDD treatment, we found a trend toward a decrease in total thiamine levels due to an alcohol effect [for detailed results, see Moya et al. (2022)]. In the case of the WE patient studied here, it was not possible to perform thiamine determinations, since she died very quickly. Nevertheless, neuropathological analyses confirmed the diagnosis of WE without comorbidity with other pathologies such as hepatic encephalopathy (HE), as already explained.

Liver status

We checked the status of the liver in the animals by measuring the hepatic nitrite and MDA levels, given the major role of these processes in the pathogenesis of ALD (McKim et al., 2003; Galicia-Moreno and Gutiérrez-Reyes, 2014; Pérez-Hernández et al., 2017; Tan et al., 2020) (Supplementary Section S2.3 and Figure 6). Regarding the human case, the liver enzyme (transaminase) values were at the upper limit but did not exceed the reference levels (see Supplementary Section S1.1). The patient exhibited no other symptoms or clinical signs of liver disease, suggesting that she was an alcohol-induced WE patient without ALD.

FIGURE 4. Representative images of immunohistochemical detection for p-NFκB p65 and IκB-α in the gray matter of the frontal cortex of control, alcohol-related WE, and AD cases. p-NF-κB exhibited nuclear localization in the WE and AD cases (see especially the pyramidal neurons: arrows; high magnification in insets (Ba, b; Cc)). IκB-α showed cytoplasmic localization, with striking immunoreactivity in the astrocytes, mainly in the AD case (arrowheads; high magnification in insets (Fe, f)) but also, to a lesser extent, in the WE case (high magnification in inset (Ed)). Images taken with a 40× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group. Different from control: *p < 0.05, ***p < 0.001; different from WE: #p < 0.05.
FIGURE 5. Representative images of immunohistochemical detection for p-NFκB p65 and IκB-α in the white matter of the frontal cortex of control, alcohol-related WE, and AD cases. Images taken with a 40× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group.

FIGURE 6. Unaffected expression of the TLR4 signaling pathway in the cerebellar hemisphere of TDD-, CA-, and CA + TDD-treated rats. Graphs indicate protein levels of the (A) TLR4, (B) MyD88, (C) p-NF-ĸB, and (D) IĸBα markers measured via western blotting; data for the respective bands of interest (upper bands) were normalized to β-actin (lower band) and expressed as a percentage of change versus the control group. Some blots were cropped from the original (black lines) to improve the clarity and conciseness of the presentation. Mean ± S.E.M. (n = 8-10). Two-way ANOVA. Since the combined CA + TDD treatment better mimics the human case of alcohol-induced WE, this group was also compared with the C group using an unpaired Student's t-test or the Mann-Whitney test (CA + TDD vs. C).

Discussion

Little is known about the possible role of TLR4 in WE, since there are no studies in humans and, to our knowledge, only our previous work with TD animal models provides evidence of the contribution of this receptor to this pathology (Moya et al., 2021). In the present study, we further characterized the role of this receptor in WE by studying animal models with combined TD and CA use. Thus, we report here the importance of the double hit (TD and CA use) for the magnitude of the expression of the proinflammatory TLR4 signaling cascade in the frontal cortex, but not in the cerebellum. We also describe the presence of upregulated cortical and cerebellar TLR4 and its adaptor molecule MyD88, along with specific changes in the signaling molecules phospho-NFκB p65 and IκBα, in a single case of alcohol-induced WE, using postmortem brain tissue.

WE patients show neuropsychological symptoms such as memory alterations, apathy, executive deficit, and disinhibition, which suggest dysfunction of frontal structures. The vulnerability of the frontal lobe to CA consumption, with or without TD, is widely accepted based on neuropathological and neuroimaging studies [reviewed in Jung et al. (2012)]. Inflammation, among other processes, may contribute to the WE symptomatology, as it increases cell damage and causes neuronal death. The innate immune receptor TLR4 plays a critical role in determining the pathological outcomes in several neurological and neuropsychiatric disorders, including AUD, AD, depression, schizophrenia, and trauma (Crews et al., 2013; García Bueno et al., 2016). By using WE animal models resulting from TD exposure, we were able to identify TLR4 as a key molecule in the emotional, cognitive, and motor disturbances associated with these models in a previous study (Moya et al., 2021). To our knowledge, there are no previous works examining TLR4 signaling in the brain in the context of WE, either in animals or in human subjects. However, since the main documented cause of WE is alcohol consumption, more complex animal models are needed, in which TD and CA consumption can be combined and the specific contribution of each factor to the pathophysiology of the disease explored.
Indeed, in a very recent publication of our group, we described the contribution of both factors (TD and alcohol abuse) to the induction of neuronal damage in the frontal cortex and how the combination of both (CA + TDD) correlates with disinhibition-like behavior in animals (Moya et al., 2022), which is a core symptom of the pathology. Likewise, in the present study, we used the same combined animal models to explore the specific role of TLR4 in the induction of a neuroinflammatory cascade in the frontal cortex and cerebellum, and we added the description of the TLR4/MyD88 upregulation in the postmortem brain of an alcohol-induced WE case. Among the animal models investigated here, the CA + TDD model is the one that best represents the pathology, as expected (Moya et al., 2022), and it is also the best approach for comparison with the alcohol-related WE case (postmortem human brain) studied here.

In the animal model, we observed a significant upregulation in the protein expression of TLR4, MyD88, p-NF-κB, and IĸBα in the frontal cortex (CA + TDD versus the control group). Of note, neuroinflammation is a very complex response that involves the activation of several factors. The upregulation of phosphorylated NF-κB p65 is indicative of activation of this nuclear factor, since phosphorylation of Ser536 in cytosolic p65 promotes nuclear translocation and facilitates p65 binding to specific promoter sequences, activating inflammatory gene expression (Giridharan and Srinivasan, 2018). In addition, the gene of its inhibitor, IκBα, contains NF-κB binding sites in its promoter, so NF-κB is able to autoregulate the transcription of its own inhibitor; that is, the "NF-κB-IκBα autoregulatory feedback loop" would be trying to suppress a prolonged activation of NF-ĸB to limit the inflammatory response (Doremus-Fitzwater et al., 2015; Gano et al., 2016; Toledo Nunes et al., 2019; Moya et al., 2021). Indeed, this autoregulatory mechanism switched on by NF-κB to block its stimulation is widely known in studies of neuroinflammation induced by LPS (Sayd et al., 2015), alcohol (Doremus-Fitzwater et al., 2015; Gano et al., 2016), TD (Moya et al., 2021), or combined TD and alcohol (Toledo Nunes et al., 2019). Hence, IĸBα protein levels are useful reporters of NF-ĸB activity and of an increased neuroinflammatory status.

FIGURE 7. Representative images of immunohistochemical detection for TLR4 and MyD88 in the gray matter of the cerebellar hemisphere of control, alcohol-related WE, and AD cases. Gray matter with molecular layer (ML), Purkinje cell layer (PL), and granular layer (GL). In the WE (B) and AD (C) cases, increased TLR4 immunoreactivity was detected, especially in the granular layer (high magnification in the inset in B; Purkinje cell, PG; arrows pointing to positive cells) and blood vessels (bv), compared with the control (A). The WE patient also showed the highest elevation in MyD88 expression, mainly in the granular layer (high magnification in the inset in E), compared with the AD case (F) and the control (D). Images taken with a 20× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group. Different from control: ***p < 0.001, ****p < 0.0001; different from WE: ####p < 0.0001.
It could seem surprising that TLR4 was not upregulated in the TD model, in contrast to our previous studies (Moya et al., 2021). It is known that neuroinflammation is a complex response in which markers peak at different time points, so the lack of a significant effect on TLR4 in the TD animals of this study could indicate that we did not catch the TLR4 peak at the precise moment of sample collection. This is probably related to the age difference between the animals in the two studies. Whereas in the previous study the animals were 8-9 weeks old, in the current study the animals were approximately 10 months old. The younger animals may react differently (they show a particular timing of the TLR4 upregulation profile) than the older animals. Nevertheless, there is certain evidence of an overactivation of the TLR4 signaling pathway in these animals, as some other inflammatory signals were observed (MyD88, IκBα) in the frontal cortex. Indeed, the upregulation of the TLR4 adaptor MyD88 could be interpreted as a sign of TLR4 signaling overresponse in the absence of significant receptor overexpression. However, TLR4 is upregulated in the combined animal model, CA + TDD, although the increase is moderate compared with controls. In this regard, it should be noted that the animals were exposed to a chronic treatment of moderate alcohol intake, so we are facing a process of chronic neuroinflammation, where we cannot expect elevations as pronounced as in, for example, a binge drinking model, where there is a peak of acute neuroinflammation with a prominent increase in cortical levels of the TLR4 pathway (Antón et al., 2017). In addition, there is an absence of a synergic effect of the combination of CA and TDD, since the elevations in the TLR4 pathway proteins were not higher than with CA and/or TD exposure alone, which is perhaps a consequence of the long alcohol exposure (9 months) rendering cells less sensitive to the TDD response. Of note, together with the upregulation of the TLR4 neuroinflammatory pathway found in the CA + TDD animals in this study, we have already described, as complementary mechanisms, that other processes associated with TLR4 activation, such as oxidative and nitrosative stress, lipid peroxidation, apoptotic death, and cell damage, are upregulated in the frontal cortex of the same animals and correlate with disinhibition-like behavior (Moya et al., 2022). All these markers are more representative of the later stages of a neuroinflammatory response, traditionally linked to neurotoxicity (Moya et al., 2022). Thus, altogether, these studies shed light on the relative contributions of each factor (alcohol and TD, either isolated or in interaction) to the potential disease-specific mechanisms involved in the WKS pathophysiology, resulting in brain damage and behavioral problems.

In a complementary way, we show here, for the first time, a case of WE associated with CA consumption in which there is an upregulation of TLR4 and MyD88 protein expression in the postmortem frontal cortex and cerebellum. Neuroinflammation involves all the cell types present within the central nervous system (Shabab et al., 2017).

FIGURE 8. Details showing the endothelial cells of blood vessels with immunohistochemical reactivity for TLR4 and MyD88 in the gray matter of the cerebellar hemisphere and frontal cortex of the alcohol-related WE and AD cases. High magnification from 20× (cerebellar hemisphere) and 40× (frontal cortex) images.
In this way, microglia and astrocytes, as well as neurons and oligodendrocytes, all contribute to innate immune responses in the CNS through the expression of TLR4, among other TLRs. Thus, TLR4 is expressed in human brain cells, including neurons, microglia, astrocytes, and oligodendrocytes (Vaure and Liu, 2014; Stephenson et al., 2018; Frederiksen et al., 2019; Kumar, 2019; Leitner et al., 2019). In the WE cortical gray matter, immunohistochemical analysis showed increased TLR4 staining in glial cells and pyramidal neurons, mainly in the cytoplasm, since TLR4 can signal both at the plasma membrane and intracellularly (Gangloff, 2012). TLR4 immunoreactivity was also observed in endothelial cells of the blood vessels, in agreement with Nagyoszi and colleagues, who demonstrated the expression of TLR4 on rat and human cerebral endothelial cells induced by inflammatory stimuli or oxidative stress (Nagyoszi et al., 2010). MyD88 expression in the WE patient showed a staining pattern similar to that observed for TLR4: it was mainly detected in pyramidal neurons and glial cells. This result may suggest that the upregulation of TLR4 has functional consequences for the associated signaling pathway. Likewise, endothelial cells of blood vessels showed immunoreactivity, which would fit with a study reporting that activation of the MyD88 pathway in endothelial cells of the cerebral microvasculature is involved in the regulation of inflammatory events (Gosselin and Rivest, 2008). Similar findings have previously been reported in the AD brain, considered here as a positive control, showing an activation of TLR4 both in patients diagnosed with AD and in AD animal models (Fiebich et al., 2018; Calvo-Rodriguez et al., 2020; Zhou et al., 2020). The increase in TLR4 expression was particularly observed in the frontal cortex of AD subjects when compared with age-matched controls (Miron et al., 2018). Likewise, MyD88 levels were also reported to be elevated in the cortex of patients with AD and in a mouse model of AD (Rangasamy et al., 2018).

Preclinical and human studies have demonstrated that severe alcohol exposure, alone or combined with TD, leads to white matter damage in the cortex (Kril et al., 1997; Harper, 2009; de la Monte and Kril, 2014; Chatterton et al., 2020), suggesting that neuroinflammation participates in the myelin and white matter disruptions (Alfonso-Loeches et al., 2012; Toledo Nunes et al., 2019). Here, we found a prominent increase in MyD88 immunoreactivity in the WE cortical white matter, and this excessive signaling could be leading to lower TLR4 levels in this cortical area through a compensatory downregulation mechanism. Indeed, depending on the temporal status at which these parameters are measured, the balance between TLR4 and MyD88 upregulation can be differentially affected.

FIGURE 9. Representative images of immunohistochemical detection for TLR4 and MyD88 in the white matter of the cerebellar hemisphere of control, alcohol-related WE, and AD cases. Images taken with a 20× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group.
Thereby, the concurrence of TLR4 and MyD88 immunoreactivity suggests an activation of the TLR4-MyD88 signaling pathway, although we cannot exclude other alternative pathways. It is possible that other MyD88-independent TLR4 signal transduction pathways, such as the TRIF-dependent pathway, could also be activated. Thus, the TLR4 immunoreactivity detected in our study could be indicative of signaling both from the membrane through MyD88 and internally through TRIF, and the MyD88 immunoreactivity could be somewhat nonspecific for TLR4 and may include other TLRs (Biswas, 2018). Nevertheless, even if different pathways are activated, all of them converge on and activate the NF-κB factor, in which we are particularly interested because it is the foremost transcriptional regulator of inflammation-associated genes (Marongiu et al., 2019; Ciesielska et al., 2021; Lin et al., 2021; Duan et al., 2022). Indeed, we found an increase in the proinflammatory mediator p-NF-κB in the cortical gray matter of the WE case compared with the control, as well as in the positive AD control, with a predominantly nuclear localization, indicating that this proinflammatory factor is active, which is, presumably, a direct consequence of the activation of TLR4 signaling in the frontal cortex. Results regarding IκB-α are sometimes difficult to explain, as both factors regulate their levels through compensatory mechanisms, as explained above. In this study, we found an interesting, striking pattern of IκB-α labeling in glial cells such as astrocytes in the AD case, which was reproduced, to a lesser degree, in the WE patient.

Regarding the cerebellum, damage induced by alcohol and TD has been previously reported (Mulholland, 2006; Manzo et al., 2014). Moderate shrinkage of the vermis and cerebellar hemispheres was observed at postmortem examination in patients diagnosed with alcohol abuse and with WE (Harper, 1979). It is of interest that our analyses of the postmortem cerebellar hemisphere of the WE patient showed an increase in TLR4 expression compared with the control brain, mainly detected in the granular layer and in endothelial cells of blood vessels. In addition, MyD88 and IκBα were also upregulated and observed mostly in the granular layer of the WE subject. Moreover, the cerebellum is also considered a vulnerable region in AD pathology (Hoxha et al., 2018). AD patients showed severe astrocytosis in the cerebellar granular layer (Fukutani et al., 1996), and studies with cerebellar granule cells reported increased secretion of β-amyloid related to the neurodegeneration of nearby cells (Galli et al., 1998; reviewed in Hoxha et al., 2018).

FIGURE 10. Representative images of immunohistochemical detection for p-NFκB p65 and IκB-α in the gray matter of the cerebellar hemisphere of control, alcohol-related WE, and AD cases. Gray matter with molecular layer (ML), Purkinje cell layer (PL), and granular layer (GL). In the WE case (E), increased IκB-α immunoreactivity was detected, especially in the granular layer, compared with the control (D). Images taken with a 20× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group. Different from control: *p < 0.05, ***p < 0.001; different from WE: #p < 0.05.
FIGURE 11. Representative images of immunohistochemical detection for p-NFκB p65 and IκB-α in the white matter of the cerebellar hemisphere of control, alcohol-related WE, and AD cases. Images taken with a 20× objective. On the right panel, the semiquantitative analysis of DAB images using the ImageJ Fiji software is shown. Data represent the mean of eight images/fields per section ± S.E.M. and are expressed as a percentage of change versus the control group.

Here, we also found an elevated TLR4 signal in the cerebellar hemisphere of the AD patient compared with the control case, although this increase was not observed for MyD88. As explained above, we cannot exclude the implication of other pathways independent of MyD88; thus, TLR4 could mediate its effects through different inflammatory mediators in this brain area in AD. In contrast to the data found in the WE patient, none of the treatments appeared to affect TLR4 signaling in the cerebellar hemisphere of the rats compared with controls. In our previous study (Moya et al., 2021), the 12-day TD-induced model did not show a neuroinflammation signature in the cerebellum, coincident with an absence of motor impairment. However, we found the neuroinflammatory markers increased in the cerebellum in another model with a deeper degree of TD, due to a severe 16-day TDD treatment, where the decline in the animals' motor performance positively correlated with an upregulation of p65 NF-κB in this brain region (Moya et al., 2021). In the present study, we observed no evidence of motor dysfunction, such as ataxia, in any of the animal models tested, thus explaining the lack of changes observed in the cerebellum. However, it should be noted that the clinical history of the WE patient showed hypotonia, hyporeflexia, oculomotor deficits such as nystagmus and saccadic intrusions, and altered speech, which are presumably signs of cerebellar dysfunction (Bodranghien et al., 2016; Jafar-Nejad et al., 2017) and may explain the changes observed in the cerebellum of the postmortem WE brain. Thus, the TLR4 signature in the cerebellum appears to coincide precisely with the manifestation of cerebellar symptoms, as we reported previously (Moya et al., 2021).

Lastly, it is noteworthy to take into account possible hepatic alterations when this pathology is induced by alcohol consumption. It is known that the hepatotoxic properties of alcohol abuse may lead to ALD. However, despite alcohol consumption being the main cause of WE, the prevalence and characteristics of the relationship between ALD and WE remain unclear because of the lack of available data (Chamorro Fernández et al., 2011). ALD is a possible comorbidity in WE patients, which presents specific clinical, analytical, and radiological characteristics and a poorer prognosis compared with alcoholic WE patients without ALD (Novo-Veleiro et al., 2022). Hepatic encephalopathy (HE) induced by CA consumption is an extreme example of brain and liver interaction. Although both HE and WE occur frequently in the setting of alcoholism, HE is due to liver disease and/or shunting of portal blood around the liver resulting in altered metabolism of nitrogenous substances, whereas WE is due to a deficiency of thiamine (Schenker et al., 1980). Three types of HE are traditionally differentiated (A, B, and C) (Weissenborn, 2019), but, in short, HE occurs when toxins that are normally cleared from the body by the liver accumulate in the blood, eventually traveling to the brain.
Elevated levels of ammonia appear to play a central role in this disorder, primarily by acting as a neurotoxin that generates astrocyte swelling, resulting in cerebral edema and intracranial hypertension. Other factors, such as oxidative stress, neurosteroids, systemic inflammation, increased bile acids, impaired lactate metabolism, and altered blood-brain barrier permeability, likely contribute to the process of HE (Liere et al., 2017; Hadjihambi et al., 2018). In patients with underlying liver cirrhosis, distinguishing between HE and WE can sometimes be a tough problem (Novo-Veleiro et al., 2022). HE is characterized by a wide spectrum of nonspecific neurological, psychiatric, and motor disturbances; hence, most of them may coincide with those of WE, since no mental alteration is unique to either disorder. Notwithstanding, mental alteration is usually the most noticeable symptom of WE (Zhao et al., 2016). The recent ISHEN (International Society for Hepatic Encephalopathy and Nitrogen Metabolism) consensus uses the onset of disorientation or asterixis as the initial sign of overt HE (Ferenci, 2017). Ultimately, the diagnosis of HE is based on the history and physical examination, the exclusion of other causes of altered mental status, and the laboratory and clinical findings, and it is sometimes confirmed through a trial of therapy for this disorder. Therefore, when difficulties exist in distinguishing between HE and WE, intravenous vitamin B1 can be considered as a discriminative method or a preemptive treatment (Zhao et al., 2016). Nevertheless, although most heavy drinkers develop fatty liver, only a 20-40% subset of patients progresses to alcoholic hepatitis, and approximately 10-15% develop frank cirrhosis (Ghosh Dastidar et al., 2018). The WE case studied here was an alcoholic patient with WE but without ALD, since she did not develop severe liver injury, and was thus far from being comorbid with HE.

Likewise, regarding our animal model of CA consumption, the existing literature indicates that rodent models exposed to a CA administration equivalent to the one performed in this study develop mild or moderate steatosis but no inflammation, fibrosis, or portal hypertension (Nevzorova et al., 2020). Steatosis, or fatty liver, is relatively benign and represents the initial stage in the ALD spectrum. To achieve greater damage to the liver, the alcohol drinking model is combined with other stressors to stimulate inflammation, fibrosis, or hepatocellular carcinoma. These second-hit models include additional factors such as dietary, chemical, or genetic manipulations, or single or multiple alcohol binges, to facilitate progression to advanced ALD (Ghosh Dastidar et al., 2018; Lamas-Paz et al., 2018; DeMorrow et al., 2021). Notwithstanding, we checked the status of the liver in the animals by measuring the hepatic nitrite and MDA levels, because of the major role of these processes in the pathogenesis of ALD (McKim et al., 2003; Galicia-Moreno and Gutiérrez-Reyes, 2014; Pérez-Hernández et al., 2017; Tan et al., 2020). The results suggest that the protocol of CA consumption used in this study did not produce long-term oxidative damage in the liver, since both the nitrite and the MDA levels, indicative of nitrosative stress and lipid peroxidation, respectively, showed no significant changes in the CA and CA + TDD animals versus controls.
We cannot rule out that CA consumption produced some mild to moderate alterations in the liver of our animals; however, in that case, they apparently did not progress to a state of inflammatory/oxidative injury. Thus, these results suggest that the brain inflammatory response found in this study was achieved in the absence of deep liver alterations.

Limitations, strengths, and future perspectives

Several limitations of our study should be acknowledged. On the one hand, our three human cases were all women and only male rats were used, which does not represent the full population affected by the disease. Further studies are needed to investigate potential sex differences in this TLR4 pathway. On the other hand, in the control human case, ischemic anoxia was detected in the postmortem neuropathological diagnosis. However, signs of ischemia or hypoxia were not observed in the samples employed here. Moreover, since there is evidence for an involvement of inflammatory pathways, including TLR4 upregulation, after ischemia or hypoxia (Paschon et al., 2016; Mohsin Alvi et al., 2020), the present results may underestimate the increase in the TLR4 inflammation pathway that would have been observed had the data been compared with another, fully healthy control. In addition, there is a difference of 9 years of age between the WE patient and the control. Nevertheless, we consider this to be within a comparable valid range, as observed in other studies where the difference between controls and patients also ranges from 8 to 10 years (Dabos et al., 2015; Ishiki et al., 2015; Ivanski et al., 2018). We are also aware that the sample size in the postmortem brain study is very limited; however, owing to the poor records of WE cases, access to postmortem tissue is very complicated, which makes the results obtained in this study all the more noteworthy. The postmortem study is descriptive, and the results should be considered a pilot study and not firmly conclusive. Furthermore, we are aware that the results in humans and animals have been obtained by different methodological techniques, so in future studies we will verify these findings both by these and by other methods. Nevertheless, obtaining results pointing in the same direction from two different techniques is also interesting and noteworthy; it indicates that both methodologies complement each other, supporting common conclusions. Despite these limitations, this study is the first to characterize TLR4 in the frontal cortex and cerebellum of a human subject with WE. Moreover, we demonstrated the utility of the automatic Fiji and visual IRS methods for analyzing DAB-based IHC images in particular, since a strong correlation between the two sets of results was observed. Notwithstanding, automated analysis was chosen for reporting the results because it reduces subjective bias and allows the detection of signals that are not easily identifiable to the naked eye. Future research is required to analyze more markers of this TLR4 signaling pathway in postmortem cerebral tissue from WE patients, including the exploration of other vulnerable brain regions. Moreover, the study of WE cases with other etiologies, such as nonalcoholic patients, is also needed.

Conclusion

Taken together, our study sheds light on the relative contributions of alcohol consumption and TD, either isolated or in interaction, to the activation of TLR4/MyD88 signaling, which may act as an underlying mechanism in the pathogenesis of WE.
The findings provided here using animal models, along with complementary results (Moya et al., 2022) and our previous work (Moya et al., 2021), suggest that TLR4/MyD88 signaling may be a potential disease-specific mechanism involved in WE pathophysiology, resulting in brain damage and behavioral problems. We also provide the first preliminary evidence of TLR4/MyD88 upregulation in the postmortem brain tissue of a human case of WE. Our results offer valuable information to guide future studies to further investigate these specific inflammatory mechanisms in the context of WE. Knowledge about how the inflammatory response is triggered in the WE brain, and its relationship with the course of the disease, is critical to understanding this disabling disorder and developing new therapeutic strategies.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Drug Research Ethics Committee (Comité Ético de Investigación con Medicamentos/Investigación Clínica, CEIm del HUFA, ref. 62-2018). Written informed consent was not obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. The animal study was reviewed and approved by Animal
Flaring activity of the SFXT IGR J16418−4532

Supergiant fast X-ray transients (SFXTs) are a sub-class of wind-fed High Mass X-ray Binaries (HMXBs) in which the normal companion is a supergiant. These systems were grouped into a sub-class because of short flares (a few hours in duration) in which the X-ray luminosity increases by a few orders of magnitude. One of the members of the SFXTs is the 1212 s X-ray pulsar IGR J16418−4532, which is characterized by a high quiescent X-ray luminosity and flaring on a short timescale. We show that the degenerate component of the system is either a magnetar which accretes matter from a Keplerian disk or quasi-spherical flow, or a regularly magnetized neutron star which rotates near spin equilibrium and accretes matter from a non-Keplerian magnetic disk.

Introduction

IGR J16418−4532 is a High Mass X-ray Binary system (HMXB) with a short eclipsing orbit of $P_{\rm orb} \sim 3.7$ days [1] which belongs to the supergiant fast X-ray transient (SFXT) subclass of wind-fed HMXBs [2,3]. The optical component of this system is an O8.5 I supergiant of mass $M_{\rm OB} = 31.5\,M_{\odot}$, radius $R_{\rm OB} = 21.4\,R_{\odot}$, luminosity $L_{\rm OB} = 4.47 \times 10^{5}\,L_{\odot}$, and effective temperature $T_{\rm eff} = 32274$ K [4], which underfills its Roche lobe [3]. The distance to the system is $\sim 13$ kpc [5]. The neutron star (NS) in this system manifests itself as an X-ray pulsar with a period of $1212 \pm 6$ s [6]. The X-ray luminosity of the system in quiescence, $L_{\rm qt} \sim 10^{36}\,{\rm erg\,s^{-1}}$, is unusually high for SFXTs.

Flaring activity

The X-ray luminosity of the system in the 18−100 keV band during outbursts lies in the range $(2-40) \times 10^{36}\,{\rm erg\,s^{-1}}$ [6]. The orbital phase distribution of the outbursts is uniform, apart from the eclipse region, where no outbursts were detected. The duration of the outbursts varies from a few minutes up to a few hours [3,7]. The total duration of these outbursts places the active duty cycle of IGR J16418−4532 at 6.14% [3]. The rise/decline time of the outbursts lies in the range 100−1000 s [3].

Equilibrium rotation

As recently shown by Ikhsanov & Mereghetti [8], the equilibrium period of a NS in a wind-fed HMXB lies within the range $P_{\rm eq}^{\rm min} \leq P_{\rm eq} \leq P_{\rm eq}^{\rm max}$. Here $P_{\rm eq}^{\rm min} \simeq 14\,{\rm s} \times \mu_{30}$, to within factors of the remaining parameters defined below (the full expressions for both limits are given in [8]), and $P_{\rm eq}^{\rm max}$ is the maximum possible equilibrium period, corresponding to a situation in which the angular velocity of the accreting matter is smaller than the Keplerian angular velocity at the magnetospheric radius, $\omega_{\rm s}(r) < \omega_{\rm k}(r_{\rm m}) = \left(GM_{\rm ns}/r_{\rm m}^{3}\right)^{1/2}$. Here $\mu_{30}$ is the dipole magnetic moment $\mu = (1/2)B_{\rm ns}R_{\rm ns}^{3}$ of a NS with surface magnetic field $B_{\rm ns}$ and radius $R_{\rm ns}$, in units of $10^{30}\,{\rm G\,cm^{3}}$, and $m$ is its mass, $M_{\rm ns}$, in units of $1.4\,M_{\odot}$. The parameter $\dot{M}_{15}$ represents the mass accretion rate onto the surface of the NS in units of $10^{15}\,{\rm g\,s^{-1}}$, which can be evaluated from the observed X-ray luminosity, $L_{\rm x}$, of the pulsar as $\dot{M} = L_{\rm x}R_{\rm ns}/GM_{\rm ns}$. $P_{\rm orb(d)}$ is the orbital period of the binary system measured in days. $\xi_{0.2} = \xi/0.2$ is a dimensionless parameter accounting for the dissipation of angular momentum due to density and velocity gradients in a gas with no magnetic field which falls freely towards the NS in a quasi-spherical fashion [9]. $c_{6}$ is the speed of sound, $c_{\rm s,0}$, in the gas captured by the neutron star from the stellar wind of its massive companion at the Bondi radius, $r_{\rm G} = 2GM_{\rm ns}/v_{\rm rel}^{2}$, in units of $10^{6}\,{\rm cm\,s^{-1}}$ (here $v_{\rm rel}$ is the velocity of the NS in the frame of the surrounding material).
Finally, $\beta_{0} = \beta(r_{\rm G}) = 8\pi\rho_{0}c_{\rm s,0}^{2}/B_{\rm f0}^{2}$ is the ratio of thermal to magnetic pressure in the material of density $\rho_{0}$ which the NS captures at its Bondi radius, and $B_{\rm f0} = B_{\rm f}(r_{\rm G})$ is the magnetic field of the accreting matter itself. The lines $P_{\rm eq}^{\rm min}(P_{\rm orb})$ and $P_{\rm eq}^{\rm max}(P_{\rm orb})$ are shown in Figure 1, which is the $P_{\rm s}$ vs. $P_{\rm orb}$ diagram. The position of IGR J16418−4532 on the diagram is marked by an asterisk. It is located well above the line of the minimum possible period and is rather close to the maximum period of a NS which accretes from a magnetized non-Keplerian disk (a so-called Magnetically Levitating disk, or ML-disk) in which the material is confined by its own magnetic field. Formation of the ML-disk can proceed if $R_{\rm sh} > \max\{r_{\rm A}, r_{\rm circ}\}$, where $r_{\rm circ}$ is the circularization radius and $r_{\rm A}$ is the Alfvén radius. This is valid if the NS velocity relative to the wind of its optical component satisfies the inequality $v_{\rm kd} < v_{\rm rel} < v_{\rm ma}$, where $v_{\rm kd} \simeq 430\,{\rm km\,s^{-1}} \times \xi_{0.2}^{3/7}$, to within factors of the remaining system parameters (the full expressions for $v_{\rm kd}$ and $v_{\rm ma}$ are given in [8]). This indicates that the accretion process in IGR J16418−4532 can unlikely be explained in terms of the Keplerian disk accretion scenario. For this scenario to be realized, the surface magnetic field of the neutron star should satisfy the condition $B_{\rm ns} \geq B_{\rm kd}$, where $B_{\rm kd}$ is the solution of the equation $P_{\rm s} = P_{\rm eq}^{\rm min}$ for the parameters of IGR J16418−4532. Also, one can obtain a similar condition, $B_{\rm ns} \leq B_{\rm ml}$, for the Magnetic Levitation accretion scenario, where $B_{\rm ml}$ is the solution of the equation $R_{\rm sh} = r_{\rm A}$ for the parameters of IGR J16418−4532. Here $R_{\rm sh}$ is the Shvartsman radius [10]. These conditions indicate that the accretion process in this pulsar should be described within the quasi-spherical or Magnetic Levitation accretion scenarios. The possibility that the NS in IGR J16418−4532 is far from equilibrium rotation can be rejected, since the spin-up time of the NS in this case would be less than 30 years.

Figure 1. $P_{\rm s}$ vs. $P_{\rm orb}$ diagram. The solid line corresponds to the minimum equilibrium period, $P_{\rm eq}^{\rm min}(P_{\rm orb})$, for $\mu = 10^{30}\,{\rm G\,cm^{3}}$ and $\dot{M} = 5 \times 10^{15}\,{\rm g\,s^{-1}}$. The dash-dotted lines indicate the maximum equilibrium period, $P_{\rm eq}^{\rm max}(P_{\rm orb})$, for $\beta_{0} = 1$ and $c_{\rm s,0} = 20\,{\rm km\,s^{-1}}$. The position of IGR J16418−4532 is marked by the asterisk; the crosses denote several other presently known pulsing SFXTs.

Flaring timescale

The observed flaring in SFXTs is associated with sporadic variations of the mass accretion rate onto the surface of the NS. These can arise from variations of the mass capture rate by the NS from the wind of its companion, and/or instabilities of the accretion flow inside the Bondi radius, and/or variations of the rate at which the accreting material enters the magnetic field of the NS at its magnetospheric boundary. All of these possibilities imply variations of the radius of the NS magnetosphere. The magnetospheric radius decreases during flares and increases back to its initial value as the mass-transfer rate towards the magnetospheric boundary decreases and the system switches back to quiescence. Within this approach, the characteristic time on which the system can change its state from quiescence to flaring is limited to $t_{\rm rf} \geq \tau_{\rm a}(r_{\rm m})$, where $\tau_{\rm a}(r_{\rm m}) = r_{\rm m}/v_{\rm a}(r_{\rm m})$ is the Alfvén time, $v_{\rm a}(r_{\rm m}) = 2\mu / \left(r_{\rm m}^{3}\sqrt{4\pi\rho(r_{\rm m})}\right)$ is the Alfvén velocity, and $\rho(r_{\rm m})$ is the density of matter in the magnetopause at the magnetospheric boundary.
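For orientation, the following minimal sketch evaluates this bound numerically, using the expressions just given, $\tau_{\rm a}(r_{\rm m}) = r_{\rm m}/v_{\rm a}(r_{\rm m})$ and $v_{\rm a}(r_{\rm m}) = 2\mu/\left(r_{\rm m}^{3}\sqrt{4\pi\rho(r_{\rm m})}\right)$, together with two ingredients that are assumptions of the sketch rather than expressions quoted above: the magnetopause density is estimated from mass continuity of a spherical free-fall flow, and the standard Alfvén radius is used as a proxy for $r_{\rm m}$. The parameter values $\mu_{30} = 1$ and $\dot{M} = 5 \times 10^{15}\,{\rm g\,s^{-1}}$ are those used for the lines in Figure 1.

```python
import math

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g

mu = 1.0e30         # NS dipole magnetic moment, G cm^3 (mu_30 = 1)
m_ns = 1.4 * M_SUN  # NS mass, g
mdot = 5.0e15       # mass accretion rate, g s^-1

# Standard Alfvén radius for a spherical free-fall flow (magnetic
# pressure balancing ram pressure); used here as a proxy for r_m.
# Order-unity factors depend on the adopted field-geometry convention.
r_m = (mu**2 / (mdot * math.sqrt(2 * G * m_ns))) ** (2.0 / 7.0)

# Magnetopause density from mass continuity of the free-fall flow.
v_ff = math.sqrt(2 * G * m_ns / r_m)
rho = mdot / (4 * math.pi * r_m**2 * v_ff)

# Alfvén velocity and Alfvén time at the magnetospheric boundary.
v_a = 2 * mu / (r_m**3 * math.sqrt(4 * math.pi * rho))
tau_a = r_m / v_a

print(f"r_m   ~ {r_m:.2e} cm")     # ~7e8 cm
print(f"v_a   ~ {v_a:.2e} cm/s")
print(f"tau_a ~ {tau_a:.2f} s")    # a fraction of a second
```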
If a NS accretes matter from a quasi-spherical flow and/or from a hot turbulent envelope [11], then the Alfvén time in the magnetopause is comparable to the dynamical time $t_{\rm ff}(r) = r^{3/2}/(2GM_{\rm ns})^{1/2}$, which under the conditions of interest is only a fraction of a second. In this case the rise time of flares reflects the characteristic time of variations of the mass-transfer rate in the accretion flow beyond the magnetospheric boundary. If a NS accretes matter from a disk, the Alfvén velocity in the magnetopause is comparable to the speed of sound in the plasma at the magnetospheric boundary, which is significantly smaller than the dynamical (free-fall) velocity. This indicates that the characteristic time on which the system can switch into flaring is limited to $t_{\rm rf} \geq r_{\rm m}/c_{\rm s}(r_{\rm m})$. The magnetospheric radius of a NS accreting from a disk can be bracketed as $r_{\rm ma} < r_{\rm m} < r_{\rm A}$, where $r_{\rm ma}$ (see [8]) is the radius of the magnetosphere of a NS accreting from the ML-disk, and $r_{\rm A}$ is the Alfvén radius, which is defined by equating the magnetic pressure of the dipole magnetic field of the NS with the ram pressure of the free-falling spherical flow. In the full expressions for these radii (given in [8]), $\alpha_{\rm B}$ is an efficiency parameter, $L_{36}$ is the X-ray luminosity of the pulsar in units of $10^{36}\,{\rm erg\,s^{-1}}$, and $T_{6}$ is the temperature of the material in the magnetopause at the magnetospheric boundary in units of $10^{6}$ K, normalized following Hickox et al. [12]. Consequently, if the NS in IGR J16418−4532 accretes matter from a disk, the rise time of flares can hardly be less than a minute, and is about 100 s if the temperature of the material in the disk at its inner radius is about $10^{4}$ K.

Conclusion

Assuming that the NS in IGR J16418−4532 rotates close to the equilibrium period, we have shown that accretion in this system can hardly be explained by accretion from a Keplerian disk, since this scenario requires a supercritical NS magnetic field. The obtained restrictions on the characteristic time on which the system can switch into flaring are consistent with the observed flaring properties. This motivates investigating the mechanism of flare formation in IGR J16418−4532 within the Magnetic Levitation accretion scenario.
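As a rough numerical cross-check of the disk-accretion estimate above (a sketch under stated assumptions, not a result quoted in the text): taking an isothermal sound speed for gas at $T \approx 10^{4}$ K with an assumed mean molecular weight, and an assumed magnetospheric radius of order $10^{8}$ cm, toward the lower ($r_{\rm ma}$) end of the allowed range, the bound $t_{\rm rf} \geq r_{\rm m}/c_{\rm s}(r_{\rm m})$ indeed comes out at the ${\sim}100$ s level.

```python
import math

K_B = 1.381e-16   # Boltzmann constant, erg K^-1
M_H = 1.673e-24   # hydrogen mass, g

T = 1.0e4         # temperature at the inner disk radius, K
mu_mol = 0.6      # mean molecular weight, ionized plasma (assumed)
r_m = 1.2e8       # magnetospheric radius, cm (assumed; r_ma < r_m < r_A)

# Isothermal sound speed and the lower bound on the flare rise time
# for disk accretion, t_rf >= r_m / c_s(r_m).
c_s = math.sqrt(K_B * T / (mu_mol * M_H))
t_rf_min = r_m / c_s

print(f"c_s      ~ {c_s:.2e} cm/s")    # ~1.2e6 cm/s
print(f"t_rf_min ~ {t_rf_min:.0f} s")  # of order 100 s
```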
Validation of the Ottawa knee rule in adults: A single centre study

Abstract

Introduction: This clinical audit aimed to evaluate the performance of the Ottawa Knee Rule (OKR) and the degree of compliance by emergency referrers for acute knee injuries in adults.

Methods: Knee radiography requests were analysed retrospectively for eligibility. Data were extracted from eligible requests under headings describing the OKR criteria, patient history, diagnosis, and referrer profession. Sensitivity, specificity, negative likelihood ratio, and positive likelihood ratio were calculated with 95% CI for the entire sample and for each profession (consultant doctors, resident medical officers [RMOs], physiotherapists, and triage nurses) individually. The frequency of each OKR criterion and its correlation with fracture, referrer compliance with the rule, and the relative reduction in radiography were also calculated.

Results: Of 713 patients identified, 149 met the eligibility criteria. The overall sensitivity, specificity, negative likelihood ratio, and positive likelihood ratio of the OKR for knee fracture were 71% (95% CI, 49-87%), 46% (95% CI, 37-55%), 0.64 (95% CI, 0.33-1.22), and 1.3 (95% CI, 0.96-1.76), respectively. Physiotherapists and triage nurses demonstrated better rule performance than consultant doctors and RMOs, with a sensitivity of 100% and a negative likelihood ratio of 0.0. Physiotherapists were most compliant at 73% (19/26). Only 85 requests were OKR positive and, had the rule been followed, radiography would have been reduced by 43% (64/149).

Conclusions: In this first Australian study, moderate OKR performance and variable compliance by emergency referrers were observed. This led to unnecessary irradiation of patients without a fracture. The findings suggest emergency referrers could benefit from education on applying and documenting the OKR on radiography requests.

Introduction

Acute knee injury is a common presentation in the emergency department and accounts for a significant use of plain radiography.1,2 As per the American College of Radiology Appropriateness Criteria, plain radiography is the standard diagnostic tool for establishing bony injury.3 The Ottawa Knee Rule (OKR) was derived in 1995 to provide a clinical decision aid for emergency referrers in ruling out knee fractures.2 The rule assesses five criteria: age 55 years or older, tenderness of the fibular head, isolated tenderness of the patella, inability to flex to 90 degrees, and inability to weight bear for four steps both immediately after the injury and in the emergency department (ED).2 Patients with positive OKR findings are highly likely to have sustained a fracture and subsequently require radiographic investigation.2 Several studies have proven the value of the OKR in knee fracture assessment, with a sensitivity approaching 100%.4-8 A recent systematic review and meta-analysis by Sims, Chau and Davies demonstrated the rule to have a sensitivity of 99% and a negative likelihood ratio of 0.07.9 Results of the review by Bachmann and colleagues were similar, with a sensitivity of 98.5% and a negative likelihood ratio of 0.05.10 This alone boosts the efficiency of patient care in the ED and limits the costs of unnecessary radiography and radiation exposure.1,2,4,11 Other decision guidelines exist, such as the Pittsburgh Rule12 and those by Weber13 and Bauer and colleagues,14 but these do not possess the same degree of validation in the literature.
However, referrer compliance with the OKR has been described as poor owing to several patient, medico-legal, and confidence barriers.1,4,11,15,16 The generalisability of results is also limited, particularly in Australia. To our knowledge, no audit has been conducted in Australia which evaluates performance of the OKR and referrer compliance with the rule without prior formal training. The primary objectives of our audit were to assess consultant doctor, resident medical officer (RMO), physiotherapist, and triage nurse compliance with the OKR and to calculate rule performance for each profession audited. The secondary objective of this study was to determine whether use of the rule would reduce the number of knee radiographs ordered.

Ethics

The research study was reviewed and given exemption by both the Southern Adelaide Local Health Network and University of South Australia Human Research Ethics Committees.

Study design and setting

A retrospective audit was conducted at a tertiary hospital in South Australia, where patients are referred for radiography typically by consultant doctors, RMOs, physiotherapists, and triage nurses. At our centre, triage nurses trained in NIXR (nurse-initiated X-ray) and physiotherapists with appropriate training and qualifications may request radiography. For the purposes of this study, RMOs included emergency registrars and excluded medical interns and students. Eligible patients and reports were identified using the Picture Archiving and Communication System (PACS), and the Radiology Information System (RIS) was used to document the profession of the referrer.

Participants

Our study population consisted of patients 18 years of age and older who presented with acute knee trauma to the institution's emergency department in an 8-month period between May 2019 and December 2019.

Procedure

All emergency requests for knee radiography were screened for eligibility, and all ineligible requests were categorised based on the exclusion criteria: paediatric patient; multi-trauma or multiple areas requested; injury occurring more than 7 days prior to presentation; relevant pre-existing disease or previous injury; and no history of trauma. As the setting was acute, the study did not identify or include any requests for follow-up radiography of confirmed fractures. Data were extracted from the requests into a Microsoft Excel spreadsheet (Microsoft Office 16). Headings included the specific OKR (including the five criteria), patient history, patient ID, date of birth, gender, date, examination, referrer profession (consultant doctor, RMO, physiotherapist, or triage nurse), and diagnostic outcome.

Data analysis

For our primary objective, compliance of the referrer with the rule was assumed if at least one of the criteria was met or, if none were met, the referrer specifically indicated a negative OKR result. This was expressed as a percentage of the total requests completed by the referrer. Evidence of other decision rules was noted separately from the OKR. The performance of the rule for identifying patients with a fracture was examined in the study cohort by calculating sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR−) with 95% CI. This was performed by constructing 2×2 contingency tables for the entire sample and for each profession individually. As the OKR is solely used to rule out fracture and is therefore not a definitive diagnostic tool, we did not calculate diagnostic accuracy.
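To illustrate the calculation described here, the sketch below computes sensitivity, specificity, LR+, and LR− with 95% CIs from a 2×2 contingency table, together with the relative reduction in radiography. The cell counts are reconstructed from the totals reported in the Results (149 requests, 85 OKR-positive, seven false negatives, 16.1% fracture prevalence) and are therefore only approximate; the Wilson and log-method intervals used here are standard choices and may differ slightly from the CI method used in the audit.

```python
import math

Z = 1.96  # z-value for 95% confidence

def wilson_ci(k, n):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + Z**2 / n
    centre = (p + Z**2 / (2 * n)) / denom
    half = Z * math.sqrt(p * (1 - p) / n + Z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def lr_ci(lr, a, m, b, n):
    """95% CI for a likelihood ratio via the log method (Simel et al.)."""
    se = math.sqrt(1 / a - 1 / m + 1 / b - 1 / n)
    return lr * math.exp(-Z * se), lr * math.exp(Z * se)

# 2x2 table reconstructed from the reported totals.
tp, fp = 17, 68   # OKR positive: fracture / no fracture
fn, tn = 7, 57    # OKR negative: fracture / no fracture

sens = tp / (tp + fn)
spec = tn / (tn + fp)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {spec:.0%} (95% CI {lo:.0%}-{hi:.0%})")
lo, hi = lr_ci(lr_pos, tp, tp + fn, fp, fp + tn)
print(f"LR+ {lr_pos:.2f} (95% CI {lo:.2f}-{hi:.2f})")
lo, hi = lr_ci(lr_neg, fn, tp + fn, tn, fp + tn)
print(f"LR- {lr_neg:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Relative reduction in radiography had the rule been followed.
print(f"relative reduction: {(149 - 85) / 149:.0%}")  # 64/149 = 43%
```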
For our secondary objective, the relative reduction in radiography was calculated as the difference between the total sample size and the number of OKR-positive cases. The documented frequency of each OKR criterion and its associated fracture correlation were also calculated (%).

Results

A total of 713 knee radiography requests were gathered from May to December 2019. Of these, 149 met the inclusion criteria (20.9%). Our pre-set criteria (adult, localised injury to the knee, injury within 7 days, no relevant pre-existing disease or injury, and a history of trauma), and additional exclusions determined during data collection, are outlined in Figure 1. The included age range was 18-93 years, with a mean age of 44.3 years. Approximately half (48.9%) of the patients were female.

Profession-specific performance of the OKR

A total of 26 referrals were completed by physiotherapists, 20 by triage nurses, eight by consultant doctors, and 95 by RMOs. Physiotherapists and nurses performed best, with a sensitivity of 100% and an LR− of 0.0. No false-negative results were noted for these professions, while five were seen for RMOs and two for consultant doctors. Table 2 summarises the specific results for each profession.

Compliance with the OKR

Compliance with the OKR varied between the professions. 73% (19/26) of the requests completed by physiotherapists and 65% (13/20) of those completed by nurses demonstrated compliance. The compliance for consultant doctor and RMO requests was 37.5% (3/8) and 48.4% (46/95), respectively. Moreover, 35% (9/26) of physiotherapist requests and one nurse request utilised the rules by Bauer and colleagues,14 namely medial or lateral 'joint line tenderness'. In five cases, RMOs used these rules, and four cases demonstrated evidence of both the Bauer and colleagues rules and the OKR.

Relative reduction in radiography

The secondary objective of this study was to determine whether use of the OKR would reduce the number of knee radiographs ordered.

Discussion

Since the development of the OKR in 1995,2 its excellent sensitivity (near 100%) has been demonstrated repeatedly across many countries.1,4-8,17 However, few published studies have investigated the local generalisability of the rule, particularly in Australia. Our retrospective clinical audit evaluated OKR performance and referrer compliance in a tertiary hospital in South Australia. Our results demonstrate the OKR to have an overall sensitivity, specificity, LR−, and LR+ of 71% (95% CI, 49-87%), 46% (95% CI, 37-55%), 0.64 (95% CI, 0.33-1.22), and 1.3 (95% CI, 0.96-1.76), respectively. In general, the physiotherapy and nursing professions performed best, both demonstrating 100% sensitivity and an LR− of 0.0. This indicates that a positive OKR has a high probability of identifying a fracture and a negative OKR has good odds of the absence of a fracture. Requests completed by physiotherapists and nurses also demonstrated 73% and 65% compliance with the rule, respectively. Consultant doctors and RMOs showed poorer performance of the rule, with sensitivities and LR− less than 65% and 0.8, respectively. In particular, the false negatives (n = 7) and the lower compliance of 37.5% for consultant doctors and 48.4% for RMOs were of concern. Had the OKR alone been used to justify radiography, such false-negative results would have led to missed fractures and risked further injury. According to the present study, where 85 requests were OKR positive, implementing the OKR would have resulted in an overall 43% reduction in radiograph use (64/149).
Nevertheless, the true reduction in radiographs cannot be established unless a prospective study or an implementation trial is performed with follow-up of patients who did not undergo radiography. Follow-up is essential because, although the radiologist was a board-certified and experienced practitioner, it is still possible that some fractures were missed. The implementation trial could also involve interpretation by two independent musculoskeletal radiologists to ensure a more robust methodology. In general, our findings suggest moderate performance of the OKR for identifying a knee fracture. Unsurprisingly, this is much lower than in previous studies, as the majority of their methodologies included formal training in the OKR and solely evaluated medical officers. In their systematic review, Bachmann and colleagues (2004) demonstrated the rule to be 98% sensitive and 49% specific for knee fracture.10 However, Atkinson and colleagues, who investigated radiography requesting patterns prior to teaching the OKR, observed only 80% sensitivity.18 Their study is consistent with our results and, together, these findings suggest poor referrer awareness of the rule. In our study, the 43% potential reduction in radiography if the OKR were correctly applied is consistent with studies performed in Canada, Iran, and Spain, which calculated reductions of 31.2%, 41%, and 49%, respectively.6,17,19 Our value is, however, higher than the rate in the original report by Stiell and colleagues (26.4%).1 Finally, although the patients' time spent in the ED was not specifically examined in our study, using the OKR could shorten the waiting time for our patients in the ED. The original publications of the rule1,2 found that adults who underwent knee radiography spent an average of 127 minutes in the ED compared with 83 minutes for those who did not need radiography, and similar results were calculated in the implementation trial.1,2 As with any study, there are several opportunities for bias to impact the results. In our measures of compliance, requests with no written evidence of rule application were assumed 'non-compliant'. It is possible, however, that referrers implicitly applied the OKR and still requested radiography. If so, the indicator for requesting radiography remains unknown. Furthermore, as each referrer requested radiography independently, we could not assess interobserver agreement for rule application. Given our retrospective methodology, we do not expect this to have introduced poor reliability. Unlike previously established methodologies, we did not limit our study to clinically important fractures (>5 mm fracture length) and included all bony injuries, which may account for the seven false-negative results. Furthermore, with a restricted timeframe (8 months) and an extensive exclusion list, we did not include all patients presenting with acute knee injury and achieved only a small sample size (n = 149). This led to an uneven distribution of patients between referrers and may be the cause of the high fracture prevalence in our sample (16.1%) when compared with other studies.1,4-8 The high prevalence may also be attributed to the nature of the hospital as a tertiary centre. Although some referrers utilised the decision rules by Bauer and colleagues,14 this was less commonly observed in our study than the OKR.
Another decision guideline, the Pittsburgh rule, consists of two criteria identical to the OKR (over 55 years of age, inability to weight bear),5 and it was therefore impossible to ascertain whether this was applied instead of, or in conjunction with, the OKR. Hence, our estimates of compliance may be skewed in favour of the OKR. Finally, as this was only a clinical audit, we could not definitively comment on the causes of the performance discrepancy between the different professions. Referrers were not made aware of this quality improvement study, and documentation of the use of the OKR on the request was not required. We suspect that consultant doctors and RMOs subconsciously used the OKR but did not document it on the request form owing to time constraints, workload, and ambivalence about whether the radiologist/radiographer would require full documentation of the OKR. Additionally, the sample sizes for consultant doctors (8) and RMOs (95) are markedly uneven compared with those for physiotherapists (26) and nurses (20), which may bias comparisons between the professions. We also expect that the specialist training available to nurses and physiotherapists for requesting radiography would include clinical decision rules. To fully understand the reasons behind these discrepancies, further qualitative research studies are required. Considering the low rates of compliance, we recommend a local survey of all emergency referrers to gauge the level of OKR awareness and the referrer-perceived barriers to its application. Analysis of the compliance rates within a single profession could also be performed to correlate level and/or recency of training with rule awareness. Using this information, a formal training model should be developed to educate referrers, improve compliance, and reduce radiography requests. Hospitals should also investigate installing decision support tools and prompts on computer systems, as suggested by Beutel and colleagues.15 Re-auditing post-intervention is advised. Future implementation techniques may also involve educating radiographers on the OKR to assist their assessment of unjustified requests.

Conclusion

Although the OKR has been validated internationally, this is the first study to investigate its performance and referrer compliance in Australia. Our audit demonstrated moderate rule performance and variable compliance between the emergency referrers and the OKR. As a result, patients presenting to this centre received unnecessary radiation exposure. When implemented appropriately, the OKR can effectively rule out knee fractures and reduce patient waiting time in the emergency department. Hence, the findings of this study indicate that all emergency referrers could benefit from local education on how to apply and document the OKR in radiography requests. Periodic audits to monitor compliance are also recommended.
Experimental and analytical evaluation of stress-strain behavior of basalt fibred concrete

The aim of this study is to determine the stress-strain behavior of basalt fibred concrete experimentally. Cylinders of standard size 150 × 300 mm were cast with and without basalt fibres and tested in uni-axial compression under strain control as per IS: 516-1999 to understand the stress-strain behavior of basalt fibred concrete. After developing empirical equations for the stress-strain curves of basalt fibred concrete, theoretical values of stresses were calculated at different values of strain based on the developed empirical equations, and theoretical stress-strain curves were plotted. These theoretical stress-strain curves were compared with the experimental stress-strain curves, and it was found that the theoretical curves show good correlation with the experimental curves for all concrete mixes.

Introduction

The purpose of this experiment is to examine the stress-strain behaviour of basalt fibred concrete. To study this behaviour, cylinders of standard dimension 150 × 300 mm were cast with and without basalt fibres and tested in uni-axial compression under strain control as per IS: 516-1999. The average stress-strain curve for M30-grade basalt fibred concrete was drawn from the values of stresses and strains, using the average values of the results from three cylinders.

Mathematical Modeling for Stress-Strain Behaviour

Following the experimental determination of the stress-strain behaviour of basalt fibred concrete, an attempt was made to derive analytical stress-strain curves for the aforementioned mix. A variety of empirical equations have been proposed to characterise the uni-axial stress-strain behaviour of ordinary concrete; however, most of them can only be utilised for the ascending portion of the curve. In 1985, Carreira and Chu extended Popovics' empirical equation to cover both the ascending and descending portions of the full stress-strain curve. To compare the behaviour of basalt fibred concrete, the stress-strain diagram is given in non-dimensional form along both axes: the stress at any level is divided by the peak stress, and the strain at any level by the peak strain. As a result, all stress-strain curves pass through the same point (1, 1) at peak stress. Non-dimensionalizing the stresses and strains in this way allows the behaviour to be expressed as a general behaviour. The stress-strain curves obtained experimentally for basalt fibred concrete were normalised as described above, and normalised stress-strain values were computed. To capture the entire stress-strain behaviour, several equations in various forms were tested, and, among the candidates, Saenz's model was used to fit the normalised stress-strain curves with analytical equations. The developed equation is of the form

Y = AX / (1 + BX²)

where X is the normalized strain and Y is the normalized stress; A and B are constants for the ascending portion, and C and D are constants for the descending portion of the normalized stress-strain curves. A, B, C, and D form a set of constants for the basalt fibred concrete mix, determined from the boundary conditions of the normalized stress-strain curves. The boundary conditions for the ascending and descending portions of the stress-strain curves are:
(i) at the origin, (ε/ε0) = 0 and (σ/σ0) = 0, where ε0 is the strain at peak stress and σ0 is the peak stress;

(ii) at the peak of the non-dimensional stress-strain curve, the strain ratio and the stress ratio are unity, i.e. at (ε/ε0) = 1, (σ/σ0) = 1;

(iii) the slope of the non-dimensional stress-strain curve at the peak is zero;

(iv) at a stress ratio of 0.85 on the descending portion, the corresponding strain ratio is 1.3, i.e. at (σ/σ0) = 0.85, (ε/ε0) = 1.3.

The constants in the ascending section of the normalised stress-strain curve are determined from boundary conditions (i) and (ii), whereas the constants in the descending portion of the curve are determined from boundary conditions (ii), (iii), and (iv). The constants for basalt fibred concrete are derived by applying these boundary conditions to the non-dimensional stress-strain curves, and the equations are built from there. Finally, analytical equations that describe the entire stress-strain behaviour are created. The suggested equation for basalt fibred concrete is Y = AX/(1 + BX²). Further research will be conducted using these normalised stress-strain curves. The suggested empirical equations may be utilised as a stress block to analyse the flexural behaviour of concrete structural components.

Calculation of Theoretical Stresses Using Proposed Analytical Equations

Theoretical stresses have been calculated using the proposed empirical equations for basalt fibred concrete, which are derived from Saenz's model in the form

Y = AX / (1 + BX²),

where Y = (σ/σ0) and X = (ε/ε0). Substituting, and letting A1 = Aσ0/ε0 and B1 = B/ε0², the dimensional form becomes

σ = A1·ε / (1 + B1·ε²),

where ε0 is the strain corresponding to the peak stress σ0, σ is the stress corresponding to any strain ε, and A and B are the constants of the normalized stress-strain curves. σ0 corresponds to the cylinder strength (taken as 0.8 f_ck), and ε0.85 is the strain corresponding to 85% of the peak stress on the descending portion of the stress-strain curve. If the values of A, B, σ0, and ε0 are known, the constants A1 and B1 (the constants of the dimensional stress-strain curve) are determined using the relationships A1 = Aσ0/ε0 and B1 = B/ε0². Substituting values of ε, i.e. the strain at the extreme fibre of the concrete, theoretical stress values at different values of ε are determined using the relationship σ = A1·ε/(1 + B1·ε²). After generating the empirical equations for the stress-strain curves of basalt fibred concrete, theoretical values of stresses were calculated at different strains in the concrete, and theoretical stress-strain curves were plotted. These theoretical stress-strain curves were compared with the experimental stress-strain curves, and it was found that, for all concrete mixes, the theoretical curves show strong agreement with the experimental ones.

Theoretical Stress-Strain Behaviour

Empirical equations for the stress-strain behaviour of the concrete mixes were established after experimentally obtaining the stress-strain behaviour of basalt fibred concrete. Stresses were computed using the empirical formulae, and stress-strain curves were plotted from the theoretical stress values. The experimental stress-strain curves were then compared with the theoretical stress-strain curves. By non-dimensionalizing the experimental stresses and strains, a general behaviour of basalt fibred concrete may be described.
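The following short sketch carries out the procedure just described. For the ascending branch, the peak-value and zero-slope conditions give closed-form constants for this functional form (A = 2, B = 1). For the descending branch, the two-parameter form cannot satisfy conditions (ii), (iii), and (iv) simultaneously, so the sketch, as one reasonable reading, fits conditions (ii) and (iv). The peak stress uses σ0 = 0.8 f_ck with f_ck = 30 MPa for an M30-grade mix, as stated above, while the peak strain ε0 = 0.002 is an assumed typical value, not a measurement from this study.

```python
import numpy as np

# Ascending branch Y = A*X / (1 + B*X**2):
# Y(1) = 1        ->  A = 1 + B
# dY/dX|_{X=1} = 0 ->  B = 1, hence A = 2.
A_ASC, B_ASC = 2.0, 1.0

# Descending branch Y = C*X / (1 + D*X**2), fitted to Y(1) = 1 and
# Y(1.3) = 0.85: from C = 1 + D and (1 + D)*1.3 = 0.85*(1 + 1.69*D),
D = (0.85 - 1.3) / (1.3 - 0.85 * 1.69)   # ~3.30
C = 1.0 + D                              # ~4.30

def normalized_stress(X):
    """Non-dimensional stress ratio Y = sigma/sigma0 vs X = eps/eps0."""
    X = np.asarray(X, dtype=float)
    asc = A_ASC * X / (1 + B_ASC * X**2)
    dsc = C * X / (1 + D * X**2)
    return np.where(X <= 1.0, asc, dsc)

# Dimensional curve sigma = A1*eps / (1 + B1*eps**2), with
# A1 = A*sigma0/eps0 and B1 = B/eps0**2, via the normalized form.
sigma0 = 0.8 * 30.0   # peak stress, MPa (0.8 f_ck, M30-grade mix)
eps0 = 0.002          # strain at peak stress (assumed typical value)

eps = np.linspace(0.0, 0.004, 9)
print(np.round(sigma0 * normalized_stress(eps / eps0), 2))
```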
As a result, at peak stress, all stress-strain curves pass through the same point (1, 1), which is obtained by dividing the stress at any level by the peak stress and the strain at any level by the peak strain. The experimentally acquired stress-strain curves for all concrete mixes were normalised, and normalised stress-strain values were computed. Several equations were tested to capture the entire stress-strain behaviour of the concrete mixtures including basalt fibres.

Conclusions

From the observations made from the stress-strain curves, the following conclusions are drawn:

1. When compared to regular concrete without fibres, concrete with the optimal basalt fibre content exhibited higher stress values at the same strain levels.

2. Because the degree of internal micro-cracking in basalt fibred concrete is reduced, the strain at peak stress is somewhat higher and the slope of the descending portion is steeper.

3. The model suggested for predicting the stress-strain behaviour is found to be reasonable, as the experimental values correlate with the theoretical values, validating the developed model.

4. For similar strains in basalt fibre concrete and normal concrete, the peak stress in basalt fibre concrete is higher, indicating a greater ultimate load-carrying capacity.

5. For similar stresses in basalt fibre concrete and normal concrete, the strains are greater owing to the inclusion of basalt fibres.
SWIR Cameras for the Automotive Field: Two Test Cases

This paper presents the results obtained by the 2WIDE SENSE Project, an EU-funded project aimed at developing a low-cost camera sensor able to acquire the full spectrum from the visible bandwidth to the Short Wave InfraRed (SWIR) one (from 400 to 1700 nm). Two specific applications have been evaluated, both related to the automotive field: one regarding the possibility of detecting icy and wet surfaces in front of the vehicle, and the other regarding pedestrian detection capability. The former application relies on the physical fact that water strongly absorbs electromagnetic radiation in the SWIR band around 1450 nm, so that an icy or wet pavement should appear dark; the latter is based on the observation that the amount of radiation in the SWIR band is quite high even at night and in poor weather conditions. Results show that, even though the combined use of the SWIR and visible spectrum seems a promising approach, its use in outdoor environments is not always effective.

Introduction

Increasing road safety is an objective of major importance for every political institution, and great improvements are possible with the development of more intelligent vehicles. The ability to properly analyze the context in which the vehicle is moving, under hard real-time constraints, is strongly influenced by the availability of powerful sensors. However, such sensors are usually quite expensive, which makes the development of affordable intelligent vehicles a difficult task. Much research effort is therefore spent on building cheap smart sensors that can provide data to better analyze an environment as complex as the automotive one. The SWIR sensor presented here is such a smart, low-cost device. To validate its usefulness, this paper presents the results obtained for two different functionalities: detecting pedestrians, and discriminating among wet, dry, and icy pavement. These functionalities were selected because the additional use of the SWIR bandwidth should, theoretically, improve the results. Similarly to what happens with visible light, in standard automotive applications this band is mainly populated by the light reflected by objects in the scene rather than by their thermal blackbody radiation. The applications served by SWIR are therefore those which benefit from the reduced scattering of longer wavelengths and from illumination by invisible sources, such as the passive illumination provided by the night glow originating in the upper atmosphere, active illumination from eye-safe lasers, or, alternatively, the thermal emission of objects with temperatures above 150 °C. Illumination is thus an important issue when dealing with SWIR images.

Applications. Adverse weather conditions are dangerous for driving. Rain both reduces visibility and makes roadway surfaces dangerous. Wet brakes are less effective too. Snow and ice cause roads to become even more slippery, especially when the temperature is at or below freezing. Slush makes it difficult to steer, hard-packed snow increases the danger of skidding, and black ice makes driving extremely dangerous. Stopping distances on slippery pavement are two to ten times longer than on dry pavement, so that for a vehicle travelling at 30 km/h they can reach 52 m on black ice.
Moreover, anti-lock braking systems (ABS) are usually tuned for the most slippery scenario and are therefore less effective than they could be in normal situations. The detection of the general road status, or of the presence of slippery spots in front of the vehicle, can therefore significantly improve driving safety. It can be noted that in Europe (EU-18) around 3800 casualties are due to wet, icy, or snowy conditions [1]. Most of the proposed solutions to this problem are not based on a true prediction but are focused on the estimation of road friction, namely the monitoring of tyre slippage. These approaches are mainly based on the use of inertial sensors or GPS, or on the monitoring of tyre noise [2][3][4]. Conversely, different perception approaches have been proposed for a true prediction, such as the use of radars [5] or lasers [6]. The use of standard cameras has been proposed as well [7][8][9][10], exploiting the different polarization of the light reflected from the road surface. Anyway, the most promising approach seems to be the analysis of the different spectral content of the light reflected from the asphalt in dry, wet, icy, or snowy conditions [11]. More precisely, the Short Wave InfraRed (SWIR, 0.9 μm to 1.7 μm) bandwidth shows different light reflection patterns depending on the road status (see Figure 1) [12]. Following this result, some solutions based on custom spectrometers have already been implemented, for example Volvo's Road Eye or Vaisala's Road Weather Sensors family. While the use of a spectrometer can be effective, the proposed solutions are not suitable for on-board installation on vehicles. Road condition assessment is not the only application in which SWIR technology could be used: pedestrian detection could be another promising field of application. Although several improvements in vehicle safety have been achieved in the last 25 years (i.e., crash tests, passive safety measures, new energy-absorption materials, etc.), further reductions in road fatalities and injuries must be achieved. The development of active video-based driver assistance systems to preemptively detect dangerous situations involving vulnerable road users (VRUs) such as pedestrians is thus of fundamental importance for warning the driver or automatically taking control of the vehicle (i.e., braking), and becomes particularly valuable in case of driver distraction or poor visibility conditions. Yet vision-based pedestrian detection is a difficult problem for a number of reasons [13,14]. The objects of interest appear against highly cluttered backgrounds and have a wide range of appearance, due to body size and pose, clothing, and outdoor lighting conditions. Because of the moving vehicle, one cannot use simple background subtraction methods (such as those used in surveillance applications) to obtain a foreground region containing the human shape. Furthermore, pedestrians can exhibit highly irregular motion, making prediction and situation analysis difficult. Finally, there are hard real-time requirements and tight performance criteria. A peculiar characteristic of the SWIR spectrum is that human skin, having a very high water content, absorbs much of the longer wavelengths and appears very dark, if not almost black, in SWIR images (see Figure 2).
Previous works within the IR bandwidth have dealt with skin detection both for face recognition [15] and for people detection [16][17][18], but they are ineffective for an automotive pedestrian detector, where very little skin area usually shows from the clothes. We have therefore applied a classic approach for pedestrian detection, an SVM classifier based on deformable part models [19,20].

Hardware Equipment

Different solutions have been developed and used to collect data. Solution 1 consists of a specific sensor, for which a large-bandwidth lens has been developed. In addition, the camera features a filter on the sensor that enables the independent acquisition of four different spectral bandwidths. The sensor of the 2WIDE SENSE camera module has been mainly developed by Alcatel-Thales III-V Lab and is an uncooled InGaAs and InP-based 640 × 512 pixel array with a 15 μm pitch and a MAGIC logarithmic readout circuit (see Figure 3). The two main features of the sensor are the large spectral sensitivity (400 nm to 1700 nm) and the logarithmic gain, which enables saturation effects to be avoided. Furthermore, a specific microlens module has been developed by OPTEC S.p.A. (see Figure 3) to let the camera exploit the full spectrum capabilities. The OB-V-SWIR 16 apochromatic lens is based on a combination of six elements produced using a specific moldable glass. The lens transmittance is nearly constant across the whole functioning band: 98% in the 400 nm-1550 nm interval, decreasing to 96% in the 1550 nm-1700 nm interval. The most interesting feature of the developed camera is the presence of a Bayer-like filter to independently acquire specific spectral bandwidths, which have been selected according to the needs of the automotive world. More precisely, four different sapphire substrates have been grown on the pixel array, and fourteen layers of TiO2 and SiO2 have been deposited on the substrates, obtaining a 4 × 4 pixel mask pattern (see Figure 4). Each pixel filter is a high-pass filter with the following bandwidths: C, clear (no filter, full bandwidth); F1, 1350 nm-1700 nm (SWIR); F2, 1000 nm-1700 nm (SWIR); and F4, over 540 nm (Red, NIR, and SWIR) (see Figure 5). The bandwidths of each filter have been selected according to the most used ADAS functions, such as Traffic Sign Recognition and High Beam Assist. Moreover, other bandwidths can easily be obtained by combining different contributions; as an example, the blue and green bandwidths can be obtained as the difference between the C component and the F4 contribution. The large-bandwidth camera module developed during the project became available, and was therefore tested, only during the final stage of the 2WIDE SENSE experiments. This paper reports on the preliminary tests done using a state-of-the-art InGaAs camera module with the OB-V-SWIR 16 microlens and high-pass SWIR filters applied on the lens (transmission bands as F1 and F2). Solution 2 consists of a multispectral camera module. It was developed during the project and likewise became available, and was therefore tested, only during the final stage of the experiments. A detailed description of the camera sensor, the filter pattern, and the large-bandwidth lens has been provided above. Conversely, most of the tests have been carried out using a state-of-the-art InGaAs camera module equipped with a SWIR high-transmission lens. In order to mimic and evaluate the most suitable filters for the final prototype, a number of different filters have been used and tested (see Figure 3).
The camera used for the tests is the OWL SW1.7 high-sensitivity InGaAs FPA produced by Raptor Photonics and equipped with a sensor developed by Alcatel-Thales III-V Lab, both partners of the project consortium. The camera has a sensitivity bandwidth in the 400 nm-1700 nm interval, covering the whole spectrum from the visible to the SWIR, and acquires 320 × 256 14-bit images within a 500 ns-500 μs exposure interval. The lens used is the OB-SWIR25/2, developed and produced by Optec SpA. It is a high-transmission lens featuring a transmission rate > 94% in the 900 nm-1700 nm interval. The focal length is 25 mm, with a 35.5 deg angle of view. In order to test a number of spectrum bandwidths and to compare the quantity of light reflected by the asphalt for different conditions and wavelengths, several filters have been used. In the preliminary phase of the project, tunable liquid crystal filters were employed to perform several temporally sequential acquisitions. These tunable filters allowed different wavelengths to be chosen, with a 20 nm bandwidth resolution from 850 nm to 1800 nm and a transmittance around 60%. In the following phase, a filter wheel (see Figure 6(b)) with 12 filters was installed between the lens and the camera, allowing selection between the available filters. This is a manual operation and therefore limits the use of the filters to still objects.

Results for Road Safety. Outdoor tests have been performed using both the state-of-the-art InGaAs camera with the OB-SWIR25/2 lens and the filter wheel, as shown in Figure 6(a). The acquisition sessions for this activity were done in the daytime, with sunny and cloudy weather conditions and with the road surface dry, wet, or iced in some areas, as shown in the examples reported in Figure 7. All combinations of gain and integration time values were also investigated to find the most suitable acquisition parameters for the RSM function. Some examples of these tests are shown in Figure 8. Dry, wet, and icy road conditions at daytime have been investigated. In the following, two scenes showing different illumination and road conditions have been selected (see Figures 9 and 10). The spectral analysis has been done by measuring the intensity values of the selected ROIs using the filters included in the filter wheel operating in the SWIR bandwidth only. The resulting ratios, shown in Figures 9 and 10, exhibit a behavior comparable to the indoor data, although some relevant differences are noticeable: (i) the ratio values differ from the laboratory ones owing to the different source spectrum (a halogen lamp in the lab, the sun outdoors); (ii) owing to changes in illumination conditions (clouds, etc.) during the acquisition (spectra are collected by means of temporally sequential measurements with the filter wheel), it is not possible to find a ratio that serves as a good indicator of road condition. Taking these considerations into account, several measurements have been performed to characterize how the presence of clouds could affect the ratios. Spectra on a day with changeable weather, initially cloudy and then clear, have been collected using a calibrated spectrometer. The temporal spectral evolution was compared to the theoretical solar spectra at sea level. In order to understand the contribution of the clouds, we collected some outdoor spectra during a cloudy day. The table of Figure 9 shows the variations of different spectral ratios during the acquisition.
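To illustrate the ratio analysis described above, the sketch below computes a band-intensity ratio from mean ROI intensities in two sequentially filtered frames and flags a candidate wet or icy patch when the water-absorption band around 1450 nm is strongly attenuated relative to a reference band. The filter wavelengths, ROI, frames, and threshold are illustrative placeholders rather than calibrated values from the project; as discussed next, a fixed threshold is precisely what changing outdoor illumination makes hard to maintain.

```python
import numpy as np

def roi_mean(image, roi):
    """Mean intensity inside a rectangular ROI given as (x, y, w, h)."""
    x, y, w, h = roi
    return float(image[y:y + h, x:x + w].mean())

def band_ratio(img_absorption, img_reference, roi):
    """Ratio of ROI intensities from two sequentially filtered frames,
    e.g. I(1450 nm) / I(1100 nm) from the filter-wheel acquisitions."""
    return roi_mean(img_absorption, roi) / roi_mean(img_reference, roi)

# Hypothetical 14-bit frames at the camera's 320 x 256 resolution.
rng = np.random.default_rng(0)
frame_1450 = rng.integers(0, 2**14, size=(256, 320))
frame_1100 = rng.integers(0, 2**14, size=(256, 320))

roi = (100, 180, 40, 20)   # patch of road ahead of the vehicle

# Water and ice absorb strongly around 1450 nm, so a low ratio
# suggests a wet or icy patch; the threshold would need per-scene
# calibration, which changing outdoor illumination makes difficult.
THRESHOLD = 0.6            # illustrative, uncalibrated
ratio = band_ratio(frame_1450, frame_1100, roi)
print(f"I(1450)/I(1100) = {ratio:.2f} ->",
      "possibly wet/icy" if ratio < THRESHOLD else "likely dry")
```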
It has not been possible to evaluate the I(1500)/I(1100) ratio due to the spectral sensitivity limitation of the spectrometer. From these measurements it can be noticed that illumination changes not only affect the intensity levels at all wavelengths but, due to the extra absorption by water molecules in clouds, some wavelength ranges, e.g. around 1500 nm, are more strongly depleted. Our tests have shown that for indoor acquisitions the lamp spectrum affects the results only by a multiplicative factor; outdoors, the unpredictable changes in illumination not only affect the intensity levels at all wavelengths but, due to the extra absorption by cloud water molecules, also affect different wavelength ranges differently. Image processing techniques applied to satellite and airborne pictures have also been considered in the search for a procedure to limit this unwanted behavior, but all such spectral analysis techniques are applied to images clear of clouds, a hard restriction which is totally unsuitable for functions to be applied in the automotive field.

Results for Pedestrian Detection. A database of more than 10,000 images in different illumination and weather conditions, with varied combinations of gain and exposure time, has been collected, paying special attention to cases of reduced visibility caused by haze and fog (see Figures 11 and 12). A thorough investigation of images acquired in the SWIR bandwidths, and comparisons with images acquired in the visible spectrum, have been carried out. In the following subsections three visibility conditions will be dealt with: clear sky, haze, and fog. To detect pedestrians in the SWIR spectrum, the object detection method based on deformable part models illustrated in [19,20] has been employed. Because the method is based on both contrast-sensitive and contrast-insensitive HOG features, we found that a classifier trained on visible images only was also suitable for detection on SWIR images, with comparable detection rates. The following results have therefore been obtained by training the classifier on datasets publicly available on the web (the PASCAL datasets [21]), featuring images acquired in the visible spectrum only.

Clear. Images acquired in the SWIR spectrum under clear sky conditions show that high-water-content objects like human skin appear much darker than in visible-only images. Nonetheless, this peculiar characteristic is not very useful for effectively detecting pedestrians, as skin areas emerging from the clothes are of variable size and not in a fixed position; even the face (not always visible, e.g. in a rear view) could be partially covered by sunglasses, a scarf, etc. In addition, image contrast may change significantly in different seasons with light changes or particular atmospheric conditions, such as high humidity levels and other sorts of absorption phenomena, making the skin color an unreliable indicator for automotive applications (see Figure 13). Through the classification process, correct detection values comparable to those obtained on visible-only images are achievable, but with no practical advantage in employing a SWIR sensor over a standard visible-only one (see Figure 14).

Haze. Haze is an atmospheric phenomenon in which dust, smoke, and other wet or dry particles obscure the sky's clarity.
Haze. Haze is an atmospheric phenomenon in which dust, smoke, and other wet or dry particles obscure the clarity of the sky. The SWIR wavelengths are able to penetrate this particle layer, making visibility at a distance clearer (see Figure 15).

[Figure 15 caption: Acquisitions have been carried out under clear, hazy, and foggy visibility conditions, as indicated by the column titles, and during different seasons, with correspondingly different gain and exposure time settings to avoid image saturation. In particular, the images shown in the first column appear quite dark with respect to the others because, on a very bright winter day, the exposure time had been set to 1.0 ms. Note that, as predicted by the theory of strong water absorption in the SWIR spectrum, in these images the snow (lower right, adhering to the footpath curb) gets darker as the bandwidth narrows from the whole visible-to-SWIR spectrum to the pure SWIR bands.]

However, due to the space between particles, haze becomes perceptible only from kilometers afar, making any pedestrian detection application for the automotive field of questionable utility.

Fog. Acquisitions carried out in foggy conditions have shown that, despite the capability of longer wavelengths to penetrate suspensions of water particles, clear visibility is not achievable with a SWIR sensor in the presence of fog (see Figure 17). Due to the peculiar nature of this atmospheric phenomenon, the scattering effect, which occurs predominantly in the forward direction, affects the SWIR wavelengths and makes imaging at a distance impossible. The classifier returns correct detections only when the pedestrian is close enough to the camera (see Figures 16 and 18).

Discussion. The experiments carried out on the previously described sensor, both for icy and wet pavement conditions and for pedestrian detection in a real-world context, have not been fully satisfactory. Both applications have proven to be very sensitive to environmental conditions, in terms of both weather and illumination. The idea of adopting a SWIR sensor in the automotive field is nonetheless not ill-posed: the SWIR spectrum presents very interesting physical properties, but in order to exploit them effectively in real-world applications it is essential to define proper strategies to address the issues mentioned above.
2019-03-09T14:07:24.567Z
2014-04-06T00:00:00.000
{ "year": 2014, "sha1": "e5a2e160243946a2c34422ec1776a50212e8120a", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/2014/858979.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1e3849502de2fff98477b9a7347984285db5615b", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
23904686
pes2o/s2orc
v3-fos-license
Acute Kidney Injury Recognition in Low- and Middle-Income Countries

Acute kidney injury (AKI) is increasingly common around the world. Because of the low availability of effective therapies and resource limitations, early preventive and therapeutic measures are essential to decrease morbidity, mortality, and cost. Timely recognition and diagnosis of AKI requires a heightened degree of suspicion in the appropriate clinical and environmental context. In low- and middle-income countries (LMICs), early detection is impaired by limited resources and low awareness. In this article, we report the consensus recommendations of the 18th Acute Dialysis Quality Initiative meeting in Hyderabad, India, on how to improve recognition of AKI. We expect these recommendations will lead to an earlier and more accurate diagnosis of AKI, and improved research to promote a better understanding of the epidemiology, etiology, and histopathology of AKI in LMICs.

The incidence of acute kidney injury (AKI) is increasing around the world. [1][2][3][4] The ongoing search for supporting procedures and interventions has produced improved guidelines and recommendations. 5,6 Demonstration of increasing AKI incidence has led to an emphasis on prevention or early intervention, 5 but unfortunately, analytical methods that predict AKI, and preventive and therapeutic approaches to accelerate recovery or prevent progression to chronic kidney disease (CKD), are only beginning to be understood. [7][8][9] Early recognition of AKI is essential to ensure prompt and appropriate management, and to avoid progression to deadlier stages of the disease 10,11 (Figure 1). In the appropriate context, early detection requires a high degree of suspicion that AKI is occurring. Diagnosis requires a combination of a clinical history, a thorough physical examination, an accurate assessment of kidney function, appropriate imaging, and, when indicated, a kidney biopsy. In low- and middle-income countries (LMICs), early detection is impaired by limited resources and poor understanding of the condition. 1,2,9,[12][13][14][15] Such limited understanding, to a large extent determined by inadequate reporting and education, limits awareness and early recognition, and delays the implementation of measures that permit early and adequate management. 16

To address this goal, the steering committee of the 18th Acute Dialysis Quality Initiative (ADQI) conference dedicated a work group to the task of identifying what elements affect the recognition of AKI within the limited resource constraints prevalent in LMICs. Using a modified Delphi process, this group reached consensus regarding strategies to recognize and diagnose AKI, focusing on low-resource countries. The group addressed the following 3 questions, which served as the basis for the accompanying consensus statements:

1. When should AKI be suspected?
2. What tests are needed when AKI is suspected?
3. How do we confirm the diagnosis of AKI in patients with an initially elevated serum creatinine (SCr) level?

Methods
The ADQI process has been described previously. 17,18 A complete description of the ADQI methodology is available at www.adqi.org and in the editorial accompanying the ADQI 18 conference papers. 19 The broad objective of ADQI is to provide expert-based statements and interpretation of current knowledge for use by clinicians according to professional judgment, and to identify clinical research priorities to address these gaps.
The 18th ADQI Consensus Conference Chairs convened a diverse panel representing relevant disciplines (i.e., adult and pediatric nephrology, critical care, and renal pathology) from several continents (e.g., Africa, Asia, North America, Latin America, and Europe) around the theme of "Management of Acute Kidney Injury in the Developing World" for a 2-1/2-day consensus conference in Hyderabad, India, from September 27 to 30, 2016.

The preconference activities involved a search of the literature for evidence on the epidemiology, recognition, and management of AKI in developing countries and their differences from developed countries. A literature search was conducted in PubMed using the following terms: recognition; awareness; diagnosis; point of care; and low income countries or developing countries, together with either acute kidney injury or acute renal failure. This work group was also tasked to summarize the scope, implementation, and evaluative strategies for AKI recognition and diagnosis based on location, resource availability, and a critical evaluation of the relevant literature. A series of phone conferences and emails involving work group members before the meeting identified current knowledge to enable the formulation of the main questions from which discussion and consensus would be developed. A formal systematic review was not conducted. During the conference, the work group developed consensus positions, and plenary sessions involving all ADQI contributors were used to present, debate, and refine these positions. Following the meeting, this summary report was generated, revised, and approved by all ADQI participants. All the participants interacted throughout the meeting in the general session, and all group deliberations were subjected to review and consensus agreement in the final versions. In addition, all participants discussed and approved the contents of this paper. The participants did not represent specific societies, but were invited because they had domain knowledge expertise. Their affiliations are provided in the Supplementary Appendix.

For the purposes of all work group discussions, we used the current Kidney Disease: Improving Global Outcomes (KDIGO) definitions for AKI and stages of AKI, which define AKI as an episode that occurred within a 7-day timeframe. 5 Community-acquired AKI was defined as an episode of AKI in which the initial event occurred outside of the hospital setting and the patient was admitted to the hospital with AKI; hospital-acquired AKI was defined as an episode of AKI due to a kidney insult occurring in hospitalized patients who developed de novo AKI during their hospital stay. 15

Q1: When Should AKI Be Suspected?

Consensus Statement
1. In the appropriate clinical context, AKI should be suspected in patients who present with the signs and symptoms listed in Table 1.

During the initial interaction of a patient with the health care system, the diagnosis of AKI is influenced by the clinical presentation and the context of the encounter 11,20 (Figure 2). Improved awareness that the presenting symptoms and signs might correspond to AKI is the first step toward timely recognition. Unfortunately, AKI is frequently not recognized, or is recognized too late, at a more severe stage. 21 Failure to recognize early AKI is frequently associated with disease progression that requires more aggressive therapies and support, when recovery is less likely and mortality is heightened. 22
[Figure 1 caption: Acute kidney injury (AKI) recognition: the process and its modifiers. In addition to the usual AKI trajectory from clinical suspicion to confirmation to diagnosis, other factors modify the process. The degree of AKI awareness, the context in which the patient is encountered, and the available diagnostic resources may facilitate, delay, or impede the achievement of early AKI diagnosis. CKD, chronic kidney disease; KDIGO, Kidney Disease: Improving Global Outcomes; POC, point of care.]

In LMICs, because of the common absence of access to specialized nephrology care, AKI, the clinical situations associated with it, and the implications of failing to detect it must be better understood at all levels of the health care system. 23 A practical and easily accessible educational strategy focused on providers at the forefront of health care delivery is indispensable to achieve this goal. Providers must be trained to consider AKI in patients who present with certain signs and symptoms (Table 1) 24 in the right clinical context. For example, in areas where infectious diseases (e.g., severe malaria, leptospirosis, or dengue) are endemic and associated with high rates of AKI, 20,25-27 a febrile patient should elicit concern for renal injury. Similarly, in patients with severe volume depletion due to gastrointestinal loss, volume resuscitation is central to care and to the prevention of renal injury, preferably before the onset of persistent oliguria. 15 Management must be appropriate to the clinical condition. 49,50

The development of AKI as a maternal and neonatal complication deserves special consideration in the LMIC environment, 51-62 because failure to recognize renal injury frequently leads to significant consequences for both mother and child. Successful efforts to improve early recognition have clearly demonstrated benefit, especially by reducing some of the more dreaded consequences, such as cortical necrosis. 63,64 In some areas of the world, exposure to snake venom represents a frequent cause of AKI. 65,66 Administration of herbs by traditional healers has been associated with nephrotoxicity and must be considered when confronted with AKI of unclear etiology. 12,21,67 Increased availability and use of over-the-counter allopathic medications (e.g., nonsteroidal anti-inflammatory drugs) significantly contribute to a rising incidence of AKI. In LMICs, recognition of AKI in the hospital faces challenges akin to those seen in the developed world; hospitalized patients demonstrate a high incidence of AKI related to exposure to nephrotoxic medications, antibiotics, intravascular administration of iodinated radiocontrast, and surgical procedures. 68

Consensus Statement
2. Evaluation for AKI should be incorporated into the diagnosis and management of specific endemic conditions associated with a high AKI risk (e.g., severe malaria, leptospirosis, dengue, and HIV).

Endemic infections contribute significantly to the burden of AKI in LMICs. Much remains to be learned about the prevalence of AKI, the clinical characteristics that predispose to the onset of AKI, and the impact of AKI on the management of patients with those infections. Thus, the HIV epidemic in Sub-Saharan Africa has contributed to the rising burden of AKI, either as a direct result of the viral infection or as an unintended consequence of antiretroviral therapy.
16,26,[70][71][72][73] Other infectious diseases in LMICs have not received the same level of attention, and much remains to be understood about the nature of AKI associated with these conditions. 48,49,74,75

Research Recommendation
In LMICs, efforts must be directed toward a better understanding of the epidemiology and management of infection-related AKI.

Q2: What Tests Are Needed When AKI Is Suspected?

Consensus Statements
1. We recommend that patients suspected of having AKI should have an estimation of urinary output, a measurement of SCr levels, and a thorough urinalysis.
2. Whenever possible, the performance of urine microscopy and urine biochemistry is essential to elucidate the underlying etiology and to assess severity.
3. We recommend that point-of-care testing (POCT) technologies should be made available for the diagnosis of AKI in low-resource settings.
4. In hospitalized patients, we recommend additional testing, including renal imaging and renal biopsy, as indicated. The use of newer biomarkers of structural injury in economically constrained environments should await demonstration of efficacy.

Confirmation of AKI
The diagnosis and staging of AKI using the current KDIGO definitions rest upon changes in serum creatinine and/or urinary output. 5 Additional testing and urinary microscopy are necessary to identify the underlying etiology.

Urinary Output
In patients with developing AKI, urine output is a sensitive functional marker of kidney dysfunction. [76][77][78][79][80] Unfortunately, oliguria may be easily confounded in its significance 79 and can be difficult to record accurately, thereby limiting its reliability as a marker of AKI. In the community setting, diuresis is often unknown or inaccurately recorded, which limits its usefulness. 5 In LMICs, oliguria is usually an accurate marker of AKI severity in children and neonates, and is associated with patient outcomes. 39,81-83

We recommend that training in microscopic urine examination, and the availability of basic examination equipment for such testing, be promoted as a key, low-resource test for the detection of AKI in LMICs. Although the usefulness of urinary indices (Table 3) in the critically ill patient with sepsis has been questioned, 86,99 and may be confounded by the use of diuretics, the combination of these tests with a thorough patient history, physical examination, and urinalysis will increase the sensitivity and specificity of AKI prediction and severity assessment. 100

Serum Creatinine
Despite limitations in the use of serum creatinine as a marker of renal function, changes in SCr and/or urine output form the basis of all AKI diagnostic criteria. SCr is a frequently inaccurate biomarker due to the need for a baseline and/or historical value to provide context [101][102][103][104] and the limitations of a delayed diagnosis. [105][106][107][108] Serum creatinine concentrations are affected by age, sex, and muscle mass 109; they can change in response to certain drugs and are unreliable in patients with liver dysfunction or fluid overload. Serum levels take 24 to 36 hours to rise after a definite insult. [110][111][112][113] In addition, although changes in creatinine concentration remain central to the diagnosis of AKI, differences in individual body composition that result in differences in creatinine production and volume of distribution across populations, as well as variations in dietary composition, have largely been ignored, 102 and may differ from current estimates originating in the developed world.
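As a concrete illustration of the creatinine arm of the KDIGO criteria on which the definitions above rest, the following minimal Python sketch (not part of the ADQI report) stages an episode from baseline and current SCr values. The thresholds follow the published KDIGO 2012 staging; handling of the time windows (a >= 0.3 mg/dl absolute rise within 48 hours, a >= 1.5-fold rise within 7 days) and of the urine-output criteria is assumed to be done by the caller.

```python
def kdigo_stage_by_creatinine(scr_baseline, scr_current, on_dialysis=False):
    """Stage AKI from serum creatinine (mg/dl) per the KDIGO 2012 criteria.

    Returns 0 when the creatinine criteria for AKI are not met, else 1-3.
    Time-window checks and urine-output criteria must be applied separately.
    """
    rise = scr_current - scr_baseline
    ratio = scr_current / scr_baseline
    has_aki = ratio >= 1.5 or rise >= 0.3
    if not has_aki and not on_dialysis:
        return 0
    if on_dialysis or ratio >= 3.0 or scr_current >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    return 1

# Example: a baseline of 1.0 mg/dl rising to 1.4 mg/dl within 48 hours
# meets the absolute-rise criterion and corresponds to stage 1.
assert kdigo_stage_by_creatinine(1.0, 1.4) == 1
```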
Until recently, the most common assay for measuring creatinine was the alkaline picrate (Jaffé) assay. However, chromogens other than creatinine interfere with the assay, giving rise to errors of up to 20% in subjects with a normal glomerular filtration rate (GFR). Modern assays do not detect noncreatinine chromogens and yield lower creatinine levels. The lack of standardization to adjust for this interference affects the ability of different laboratories to estimate kidney function based on SCr concentration, especially at higher levels of estimated GFR. Standardization will reduce, but not completely eliminate, this error. 114

Blood and Saliva Urea Nitrogen
Serum urea and blood urea nitrogen (BUN) levels must be carefully interpreted as markers of kidney function in view of the numerous non-GFR factors that influence their blood concentrations. Levels of urea and/or BUN depend on protein intake, endogenous urea production, and tubular reabsorption. Reduced kidney perfusion in the setting of volume depletion enhances reabsorption of urea, which may lead to an elevation of BUN disproportionate to the concomitant decrease in GFR. Conversely, decreased protein intake or underlying liver disease can prevent the expected rise in BUN, whereas increased urea production (gastrointestinal bleeding, hypercatabolic states) or impaired protein anabolism (corticosteroid administration) can increase BUN in the absence of increased urea reabsorption. 84,115 Because of these multiple confounders, the use of BUN as an isolated marker of kidney injury may be unreliable. Additional POCT tools such as saliva urea nitrogen have recently been proposed and may be effective for screening patients with elevated urea nitrogen levels when blood tests are unavailable or unaffordable. 116

Serum Cystatin C
Currently, cystatin C is not widely used. The absence of a relationship with body composition makes this marker an interesting alternative, but its value is limited by changes in concentration in response to inflammation, lung disease, and cigarette smoking. 117

Point-of-Care Testing
POCT for creatinine measurement occurs close to the patient instead of in a central laboratory (Table 4). It can be performed by individuals without laboratory training, thus eliminating delays in testing and in the reporting of results. 118 Although POCT is a particularly attractive option in remote and low-resource environments, it requires the implementation of a quality assurance program to ensure accurate and reliable results. Several POCTs for SCr are available on the market across the world 116,118-124 and can be classified into blood gas analyzers and non-blood gas analyzers. They also vary with respect to the types of samples that can be processed: whole blood, plasma, or serum. Other specific requirements include a power source, availability of deionized water, specific consumables (which sometimes require refrigeration), space, and requirements for calibration and for disposal as biohazard waste. As a result, most POCTs for SCr are not yet cost-effective and must be further tested for their usefulness in the detection of AKI. 119 The failure of most POCT creatinine devices to be in full alignment with isotope dilution mass spectrometry equivalent standards is another limitation. 118,125 Definitive studies to determine the best practices for incorporating POCTs into low-resource health care settings are needed.

Novel Biomarkers
As discussed, SCr, the current gold standard, remains a flawed marker of renal dysfunction.
Newer biomarkers are being developed, but even in high-income countries their use is yet to become a standard of care; their application in the developing world is even more challenging. 126 Because of their simplicity of use and limited requirement for technological support, dipsticks are one of the most widely used tools to assess renal injury. Although traditional dipsticks allow the assessment of renal injury primarily by testing glomerular integrity (albuminuria and/or proteinuria), newer devices have more recently been adapted as markers of renal dysfunction by estimating elevated BUN using saliva, or by measuring novel blood or urine markers of tubular injury such as kidney injury molecule-1 and neutrophil gelatinase-associated lipocalin. 123,127 Recently, newer biomarkers in dipstick format have been made commercially available. 112

AKI etiologies in low-resource rural areas, where volume depletion, infection, and nephrotoxic agents are leading causes of AKI, 12,26 are usually different from those seen in the developed world. 12 Such differences pose a challenge to our understanding of how potential novel biomarkers can be deployed. The ideal biomarker would facilitate the distinction between AKI due to volume depletion and AKI due to intrinsic kidney injury, and must be able to distinguish transient elevations in SCr from persistent changes consistent with injury. Such markers should allow early detection of the most likely cause of AKI, facilitate a diagnosis in the absence of historical information on baseline renal function, and support early therapeutic intervention. 12,127 Unfortunately, novel AKI biomarkers remain poorly studied in the clinical conditions commonly associated with AKI in LMICs; such limitations raise questions about their potential usefulness and practical implementation in those areas. 128

Newer AKI Definitions, Staging Criteria, and Recent Uncertainties
Although newer AKI definitions and staging criteria such as KDIGO; the Acute Kidney Injury Network (AKIN); and risk, injury, failure, loss, and end-stage kidney disease (RIFLE) 5,129-131 are appropriate for defining AKI epidemiology and designing clinical trials, questions have been raised about their clinical application to the individual patient. 111,112 The classification of AKI and its various stages has been validated in multiple hospitalized populations by demonstrating a strong association with short- and long-term outcomes, 13,132 but significant problems in the usefulness of this classification persist. 110 Because they rely on changes in renal function, current AKI definitions permit only a relatively late diagnosis, hours or days after the risk of injury arose or the actual lesion began. As discussed previously, efforts to achieve an earlier diagnosis have led to the development of biomarkers of injury and are currently in progress. 133 It is expected that newer biomarkers may detect kidney damage before the SCr and GFR become abnormal, but it is unclear how accurately those biomarkers will measure kidney damage rather than the severity of disease. [134][135][136][137][138][139] Because of current uncertainties about the correlation among AKI definitions, biomarker data, and histopathology, 140 better availability of histopathologic data in LMICs provides a unique opportunity to probe this correlation and to begin closing the gap between our understanding of actual human histopathology, the pathogenesis of AKI, and the current, strictly functional KDIGO, AKIN, and RIFLE definitions. 5,[129][130][131]
Histopathology in AKI
A better understanding of the histopathology and pathogenesis of AKI is indispensable to continue to unveil the process of kidney injury, 141 and, by developing bench-to-bedside processes, to foster a better understanding of how to avoid and how to treat kidney injury. 142 During the evaluation of patients with renal injury, a diagnosis based on histopathology remains important because it not only provides insight into the injury pattern, but often guides patient management. Multiple causes of AKI require histopathological diagnosis, but unfortunately, the number of biopsies and publications on the histopathology of AKI is declining. 110,143 Concerns about procedural complications, including the risk of bleeding, and the perception that AKI is commonly the result of acute tubular necrosis appear to contribute to the reluctance to perform biopsies in the acute setting, despite evidence to the contrary. [144][145][146][147][148]

Kidney biopsies are indicated when: (i) the clinical presentation suggests that biopsy findings will likely lead to important therapeutic changes, an improved probability of recovery, and avoidance of further injury; (ii) the magnitude of benefit is assessed to be greater than the risk of the procedure; and (iii) the temporal course of the disease and delayed recovery dictate the need for further ascertainment of histopathologic diagnosis and prognosis. Multiple old and new studies have reviewed the indications for, and attested to the safety and usefulness of, percutaneous kidney biopsies in the management of kidney disease. [149][150][151][152][153][154][155][156][157][158][159][160][161] Currently, kidney biopsies in patients with AKI are more common in LMICs than in high-income countries; thus, there is a greater appreciation of the relative incidence of multiple etiologies and of the value of a renal biopsy to guide management. 1,15,20,21,162 Although results from biopsy series are likely confounded by indication bias, those studies suggest that the role of the renal biopsy must be reconsidered in the diagnosis and management of AKI of unclear etiology, such as: unexplained AKI; acute interstitial nephritis 60,163,164; acute or chronic glomerulonephritis, or rapidly progressive glomerulonephritis 165; interstitial or tubular injury due to drug toxicity or exposure to traditional herbal remedies 21,[166][167][168][169][170]; thrombotic microangiopathies 171; or leptospirosis. [172][173][174][175][176]

Because of current uncertainties about the relationships among AKI definitions, biomarker data, and renal histopathology, and their effects on treatment and prognosis, 140 we strongly recommend that kidney biopsies be considered in patients with AKI whenever appropriate and feasible. We further recommend that in LMIC settings, basic training in renal histopathology be provided to local pathologists, understanding that even the limited information provided by light microscopy may offer invaluable guidance in patient management. Training of members of the health care team in simple imaging, including ultrasonography, when feasible, is also desirable.

Research Recommendation
We recommend the development, validation, and standardization of POCT to facilitate the diagnosis of AKI in the community.

Q3: How Do We Confirm the Diagnosis of AKI in Patients With an Initially Elevated SCr Level?

Consensus Statement 1.
We recommend that patients with an isolated (single) elevated creatinine or oliguria be considered to have AKI until proven otherwise, to ensure rapid implementation of effective treatment measures.

Concern that the initially elevated SCr may be due to CKD may unnecessarily delay the initiation of urgent therapeutic measures. We strongly recommend that patients with apparently acute, severe dysfunction be emergently treated as if they had AKI, until proven otherwise (see the following).

Consensus Statements
2. We recommend that the presence of CKD be evaluated using clinical history, urinalysis, renal imaging, and biopsy when indicated.
3. We recommend that the diagnosis of AKI be confirmed by repeat assessment of renal function no later than 7 days.
a. We recommend that the frequency of repeat assessment of renal function be guided by the clinical context and the response to intervention.

Differentiation Between AKI and CKD
When a patient without historical information presents at the community center with clinical features and/or an elevated creatinine consistent with a diagnosis of kidney injury, distinguishing isolated AKI from AKI superimposed on CKD, or from baseline CKD, can be challenging. We believe this distinction should not be immediately relevant to the initial management, which should focus on the amelioration of urgent metabolic and/or volume imbalances and on the correction of all known precipitating factors (Table 5). We suggest that all patients without a known history of renal disease who present with a first episode of kidney injury be presumed to have potentially reversible AKI, until proven otherwise. Moreover, even when the presence of CKD is demonstrated, modifiable factors that could have led to a potentially reversible acute deterioration of renal function should be identified and corrected. This distinction becomes very relevant in certain regions of the world where resource-allocation decisions mean that public health care systems offer support only for the dialytic management of potentially reversible AKI, and frequently deny dialysis if renal failure is irreversible.

In patients presenting with kidney failure, all attempts should be made to explore whether previous measures of kidney function are available. This information can come from previous encounters with the health care system, such as during pregnancy, presurgical screening, or evaluation during an unrelated illness, or from medical screening before employment or insurance, or during school, corporate, or community health checks. In fragmented LMIC health care systems, records are often unavailable, so patients should be encouraged to bring to the consultation all records of previous encounters with the health care system, as is common practice in LMICs. Certain symptoms, signs, and laboratory or imaging findings (Table 6) can increase the suspicion of preexisting kidney disease, but should not be used to exclude the presence of coexisting AKI.

In high-income countries, the first 48 hours of the SCr trajectory of patients hospitalized with an initially elevated SCr have been used to evaluate the rate of AKI development and to assess whether kidney injury is transient or persistent. 177 In this approach, the attainment of peak SCr after the initial creatinine elevation is considered an indication of persistent AKI. In LMICs, when community patients reach hospitals with established AKI, such time-course information is usually not available. In those situations, excluding the possibility of preexisting CKD on a clinical basis may not be possible. The diagnosis may require a kidney biopsy, or may be made retrospectively when kidney function fails to improve despite appropriate supportive therapy.
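To illustrate the 48-hour trajectory approach mentioned above, a minimal Python sketch follows. It is not from the ADQI report; the rule encoded here, flagging an episode as persistent when SCr has not begun to fall from its peak by the end of the window, is one plausible reading of the approach, and the data points are hypothetical.

```python
def aki_course(scr_series):
    """Classify an AKI episode as 'transient' or 'persistent' from a list of
    (hours_since_admission, scr_mg_dl) pairs covering the first 48 hours.

    Illustrative rule only: if the last value is still at or above the peak
    reached during the window, the creatinine is judged to still be rising
    and the episode is flagged persistent; a clear decline from the peak is
    taken as transient.
    """
    values = [scr for _, scr in sorted(scr_series)]
    peak = max(values)
    return "persistent" if values[-1] >= peak else "transient"

# Hypothetical trajectories sampled over the first 48 hours of admission.
print(aki_course([(0, 2.1), (24, 2.6), (48, 3.0)]))  # persistent (still rising)
print(aki_course([(0, 2.1), (24, 2.6), (48, 2.2)]))  # transient (falling from peak)
```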
Limitations
The recommendations in this paper should not be limited to LMICs, but extended to all areas where nephrology resources are not widely available for a variety of reasons, including cultural, geographic, or religious limitations. The World Bank country economic classification does not necessarily reflect either the health care structure or the health care investment of each country. Many countries included in the LMIC category offer universal health care coverage, whereas some subpopulations in high-income countries, such as refugees, minorities, aboriginal peoples, or persons without health care coverage, may not have access to primary care. Efforts should be directed toward a more granular analysis of the impact of health care investment and delivery on the recognition and management of AKI. The epidemiology of AKI in LMICs is only beginning to be understood; continuously improving information will be necessary to enable the development of more accurate recommendations.

Conclusions
Measures to increase AKI awareness and recognition are essential to improve the treatment and prognosis of AKI in all regions of the world. To ensure a prompt response to potentially reversible AKI, once a preliminary diagnosis has been obtained by the demonstration of an elevated SCr, patients must be managed as if they had AKI until proven otherwise. Whenever possible, we recommend the pursuit of a diagnostic strategy geared toward the identification of the etiology of AKI to guide therapeutic options. This is particularly important in LMICs, where various endemic infections and toxicities often underlie renal damage. AKI is potentially treatable and reversible, and treatment is often specific to the underlying condition. To enhance AKI recognition, it is necessary to promote a better understanding of the epidemiological association of AKI with highly prevalent conditions, including endemic diseases, and to promote widespread education on AKI at all levels and to all members of the health care system.

DISCLOSURE
All the authors declared no competing interests.

AUTHOR CONTRIBUTIONS
JC, SM, GG, VJ, SS, SG, RC, and RM all participated in the consensus-building process and the drafting of this paper. RM, RC, and AB provided a critical review of this paper.
2018-04-03T02:00:02.473Z
2017-04-26T00:00:00.000
{ "year": 2017, "sha1": "24e10d44f50f02d835c6d375bacb21a4160e8144", "oa_license": "CCBYNCND", "oa_url": "http://www.kireports.org/article/S2468024917301043/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9d7eefa3e3500768f188025c7020d03a6eda3a4d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3604074
pes2o/s2orc
v3-fos-license
Cortical thickness in adolescent marijuana and alcohol users: A three-year prospective study from adolescence to young adulthood

Highlights
• Adolescent marijuana and alcohol users show thicker cortices compared to controls.
• More cumulative marijuana use is associated with increased cortical thickness.
• More cumulative alcohol use is associated with decreased cortical thickness.
• Regular marijuana and alcohol use may have a deleterious impact on adolescent brain development.

Introduction
Adolescence is a unique developmental period characterized by major physiological, psychological, and neurodevelopmental changes. These changes typically coincide with escalation of alcohol and marijuana use (Brown et al., 2008), which continues into early adulthood (Sartor et al., 2007). The comorbid use of alcohol and marijuana among teens continues to rise subtly as the perception of harm declines. Fifty-eight percent of alcohol-drinking adolescents report using alcohol and marijuana simultaneously (Agosti et al., 2002), 45% of youth endorse a lifetime prevalence of marijuana use by the 12th grade, and 22% of these youth endorse use in the past 30 days (Johnston et al., 2015).

The adolescent brain undergoes considerable maturation, including changes in cortical volume and refinement of cortical connections (Huttenlocher and Dabholkar, 1997). These neural transformations (e.g., maturing neural circuitry, cortical thinning, and fiber projections) leave the adolescent brain more susceptible to potential neurotoxic effects of substances (Brown et al., 2000; Spear, 2000; Spear and Varlinskaya, 2005; Squeglia et al., 2009; Tapert et al., 2002). Although overall brain volume remains largely unchanged after puberty, ongoing synaptic refinement and myelination result in reduced gray matter and increased white matter volume by late adolescence (Casey et al., 2008; Giedd, 2004; Sowell et al., 2003; Yakovlev and Lecours, 1967). Cortical gray matter follows an inverted U-shaped developmental course, with cortical volume peaking around ages 12-14 (Giedd, 2004; Giedd et al., 2009; Gogtay et al., 2004; Sowell et al., 2003). The mechanisms underlying the decline in cortical volume and thickness are suggested to involve pruning and elimination of weaker synaptic connections, decreases in neuropil, increases in intracortical myelination, or changes in the cellular organization of the cerebral cortex (Huttenlocher and Dabholkar, 1997; Paus et al., 2008; Tamnes et al., 2009). In contrast, white matter development is generally characterized by linear volume increases driven by progressive axonal myelination (Giedd et al., 2009; Gogtay et al., 2004; Simmonds et al., 2014). These processes refine motor functioning, higher-order cognition, and cognitive control (Bava et al., 2010).

Marijuana use during adolescence is associated with altered brain structure. Studies show alterations in white matter integrity in adolescent marijuana users compared to non-users, particularly in fronto-parietal circuitry and in pathways connecting the frontal and temporal lobes (Bava et al., 2009). Altered cortical morphometry has also been observed in adolescent marijuana users, with marijuana-using adolescents having larger cerebellar volumes than non-users (Medina et al., 2010), thinner cortices in prefrontal and insular regions, and thicker cortices in posterior regions when compared to controls (Lopez-Larson et al., 2011).
Structural neuroimaging studies have also examined whether structural brain alterations were present before the onset of marijuana use (Cheetham et al., 2012). Notably, orbitofrontal cortex (OFC) volumes at age 12 predicted initiation of marijuana use at age 16 when controlling for other substance use. Regional volume vulnerabilities may increase risk for the initiation and maintenance of marijuana misuse.

This study builds on previous work by our laboratory examining the acute and longer-term impact of adolescent marijuana use on cortical thickness pre- and post-28 days of monitored abstinence from marijuana. We found increased temporal lobe thickness estimates in adolescent heavy marijuana users (age 17), and negative associations between cortical thickness and lifetime marijuana use both acutely and following prolonged abstinence from marijuana. It is unclear whether such structural alterations of the cerebral cortex persist into young adulthood. The aim of this prospective study was to identify differences in cortical thickness between adolescent heavy marijuana users and control adolescents with minimal substance use histories, assessed at three independent time points (∼ages 18, 19, and 21, respectively). We hypothesized that individuals who initiated heavy marijuana use during adolescence would show thicker cortices in frontal and temporal brain regions over time, compared to control teens, by young adulthood.

Participants
Adolescents were recruited from local San Diego schools and followed for three years, including a baseline assessment (ages 16-19 at enrollment) and subsequent 1.5- and 3-year in-person follow-up visits (see Table 1). Participants underwent neuroimaging and substance use assessment at all three time points. The study design invited individuals back every 18 months in order to capture relationships between substance use and neuroimaging estimates spanning adolescence to young adulthood (i.e., repeated assessment over 3 years beginning at ages 16-19). Inclusion in the present study required valid neuroimaging data at all three time points (N = 68) to avoid asymmetrical processing in the longitudinal cortical thickness processing approach. All participants provided written informed consent (or assent if under age 18, with consent from their guardians) in accordance with the University of California, San Diego Human Research Protections Program.

Marijuana and control groups were selected based on lifetime marijuana use episodes at baseline (>100 lifetime marijuana use episodes for users and <10 for controls), and alcohol use was limited to <150 lifetime drinking episodes for both groups at enrollment. Adolescents were then classified at baseline as marijuana users who also use alcohol regularly (MJ + ALC, n = 30; ≥120 lifetime marijuana use episodes and ≥22 lifetime alcohol use episodes at study entry) or control teens with limited marijuana use histories (CON, n = 38; ≤9 lifetime marijuana use episodes and ≤20 lifetime alcohol use episodes, on average). Average days of marijuana use per month ranged from 13 to 15 over the course of three years for the substance users (see Fig. 1). The vast majority of the substance users (MJ + ALC) met criteria for marijuana abuse/dependence over the course of the three-year study (97%), and approximately 87% met criteria for alcohol abuse/dependence.
Approximately 55% of controls met criteria for alcohol abuse/dependence over the course of the study; six participants (15%) in the control group met abuse criteria for marijuana use at the 3-year follow-up. See Fig. 1 for the frequency and cumulative alcohol and marijuana use reported over the course of three years for the sample.

Exclusionary criteria at study entry included: history of a lifetime DSM-IV Axis I disorder (other than cannabis or alcohol abuse/dependence); history of learning disability; history of neurological disorder or traumatic brain injury with loss of consciousness >2 min; history of a serious physical health problem; complicated or premature birth, including prenatal substance exposure; uncorrectable sensory impairments; left-handedness; and use of psychoactive medications.

Participants underwent weekly toxicology screening for four weeks prior to their neuroimaging session to confirm abstinence from marijuana at each time point (monitored abstinence at baseline, 1.5-, and 3-year follow-up). Decreasing 11-nor-9-carboxy-tetrahydrocannabinol (THCCOOH) metabolite ratios confirmed completion of the marijuana abstinence protocol at each visit and helped ensure that the longer-term alterations in cortical thickness, rather than acute effects of recent use, were being captured. Compliance at each visit was determined for each positive test result by dividing each normalized THCCOOH collection by the previously collected specimen (urine 2/urine 1), per the Huestis and Cone recommendations for determining new cannabis use as a function of time (Huestis and Cone, 1998; Smith et al., 2009); a minimal sketch of this check is given at the end of this section. Notably, positive THCCOOH/creatinine ratios ranged from 0.0 to 10.6 ng/mg on the day of the scan session across all three time points (baseline, Year 1.5, and Year 3), which falls below the commonly used confirmation cutoff of 15 ng/mL.

Substance use and mental health assessment
The Customary Drinking and Drug Use Record was used to assess lifetime alcohol, marijuana, cigarette, and other drug use, defined as cumulative use episodes (i.e., number of days) reported at study entry (baseline), over the interval from baseline to Year 1.5, and over the interval from Year 1.5 to Year 3. The Timeline Followback was used to assess self-reported substance use (e.g., alcohol, marijuana) in the 28 days prior to each scan session (Sobell and Sobell, 1992).

Emotional functioning and demographics
The Diagnostic Interview Schedule for Children Predictive Scales (Lucas et al., 2001; Shaffer et al., 1996) was administered to youth and parent at the screening interview to identify and exclude individuals with Axis I disorders other than alcohol or cannabis use disorder. The Beck Depression Inventory (BDI; Beck, 1978) and the Spielberger State-Trait Anxiety Inventory (STAI; Spielberger et al., 1970) assessed depression and state anxiety. The Family History Assessment Module (Rice et al., 1995) assessed family history of psychiatric and substance use disorders. Parental income and grade point average were collected during a clinical interview prior to the baseline imaging session. The Wechsler Abbreviated Scale of Intelligence (WASI) Vocabulary subtest was included as an estimate of premorbid intellectual functioning (Wechsler, 1999).
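Returning to the abstinence check described above, the following minimal Python sketch (not from the original paper) computes the specimen-over-specimen ratio of creatinine-normalized THCCOOH. The 0.5 decision threshold for flagging possible new cannabis use is an assumption made here for illustration, loosely based on the Huestis and Cone (1998) guidelines cited in the text, not a value taken from the study protocol.

```python
def normalized_ratio(thccooh_2, creatinine_2, thccooh_1, creatinine_1):
    """Ratio of creatinine-normalized THCCOOH levels (ng/mg) of two serial
    urine specimens: (urine 2) / (urine 1)."""
    return (thccooh_2 / creatinine_2) / (thccooh_1 / creatinine_1)

def new_use_suspected(ratio, threshold=0.5):
    """Flag possible new cannabis use when the normalized THCCOOH level
    fails to keep declining between specimens; the 0.5 threshold is an
    assumption for illustration only."""
    return ratio >= threshold

# Hypothetical serial specimens collected one week apart.
r = normalized_ratio(thccooh_2=40.0, creatinine_2=110.0,
                     thccooh_1=150.0, creatinine_1=100.0)
print(round(r, 3), new_use_suspected(r))  # 0.242 False (consistent with abstinence)
```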
Cortical thickness acquisition and processing
All scans were acquired on the same 3.0 T CXK4 short-bore Excite-2 magnetic resonance system (General Electric, Milwaukee, WI) with an eight-channel phase array head coil at the University of California San Diego Center for Functional MRI. Subjects were asked to remain still in the scanner while a high-resolution T1-weighted anatomical spoiled gradient recall (SPGR) scan was acquired (TE/TR = min full, field of view = 24 cm, resolution = 1 mm³, 170 contiguous slices).

Cortical thickness estimates were extracted using methods previously published by our laboratory. The neuroimaging software FreeSurfer, which is well documented and freely available (version 5.1, surfer.nmr.mgh.harvard.edu), was used for cortical surface reconstruction and thickness estimation (Fischl et al., 1999). The initial cross-sectional process involves motion correction and averaging of T1-weighted images, removal of non-brain tissue and transformation to standardized space, segmentation of subcortical white and deep gray matter structures, intensity normalization, and tessellation of the gray/white matter boundary. Local MRI intensity gradients then guide a surface deformation algorithm to place smooth borders where the greatest shift in intensity defines the transition to other tissue classes (Fischl and Dale, 2000; Fischl et al., 1999, 2004); this procedure allows for quantification of submillimeter group differences (Fischl and Dale, 2000). Cortical thickness was calculated as the closest distance from the gray/white matter boundary to the gray matter/cerebrospinal fluid boundary at each vertex on the cortical surface (Fischl and Dale, 2000). The validity of the cortical thickness measurement procedures has been verified using manual measurements and histological analysis (Kuperberg et al., 2003; Rosas et al., 2002; Salat et al., 2004). Test-retest reliability across scanners and field strengths has been shown using these standardized procedures (Han et al., 2006; Reuter et al., 2012).

Following cross-sectional processing of all three time points, the data were next fed through the longitudinal processing stream in FreeSurfer (Reuter et al., 2012). This approach extracts reliable volume and thickness estimates by creating an unbiased within-subject template space and image from the three cross-sectionally processed time points (baseline and follow-ups) using a consistent, robust inverse registration method (Reuter et al., 2010). Processing steps such as Talairach transforms, atlas registration, and spherical surface maps and parcellations are initialized with common information from the within-subject template, increasing reliability and statistical power (Reuter et al., 2012). To identify errors made during cortical reconstruction processing, one rater (JJ), blind to participant characteristics, followed the reconstruction and longitudinal edit procedures to correct any errors made during the cortical reconstruction process. This involved verification of the automated skull stripping and a coronal-plane, slice-by-slice inspection of the gray/white and gray/cerebrospinal fluid surfaces. Modifications to the surfaces were made as necessary to correct for tissue misclassifications (e.g., residual dura mater classified as cortex). All longitudinal runs were checked for quality, and no editing was necessary following the longitudinal processing.
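As an illustration of the longitudinal stream just described, the following Python sketch (not from the paper) shows how the three FreeSurfer 5.x processing stages are typically invoked through recon-all: cross-sectional reconstruction, the unbiased within-subject template, and the template-initialized longitudinal runs. Subject IDs and file paths are hypothetical, and recon-all is assumed to be on the PATH with SUBJECTS_DIR configured.

```python
import subprocess

def run(cmd):
    """Run a FreeSurfer command, raising on failure."""
    subprocess.run(cmd, check=True)

subject = "sub001"
timepoints = [f"{subject}_tp{i}" for i in (1, 2, 3)]  # baseline, 1.5 y, 3 y

# 1) Independent cross-sectional reconstruction of each time point.
for tp in timepoints:
    run(["recon-all", "-s", tp, "-i", f"{tp}_T1.nii.gz", "-all"])

# 2) Unbiased within-subject template built from all three time points.
base = f"{subject}_base"
run(["recon-all", "-base", base,
     "-tp", timepoints[0], "-tp", timepoints[1], "-tp", timepoints[2],
     "-all"])

# 3) Longitudinal runs initialized from the within-subject template.
for tp in timepoints:
    run(["recon-all", "-long", tp, base, "-all"])
```

Per-region thickness values from the longitudinal runs can then be tabulated (e.g., with FreeSurfer's aparcstats2table utility) for statistical analysis.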
Following inspection, an automated parcellation procedure divided each hemisphere into 34 independent cortical regions based on gyral and sulcal features (Desikan et al., 2006; Fischl et al., 2004). Cortical thickness estimates averaged over each parcellation region were extracted for statistical analyses in SPSS.

Demographic comparisons
Analysis of variance (ANOVA) and chi-square tests were run between groups to evaluate differences on demographic and substance use variables and to identify appropriate covariates for subsequent analyses (see Table 1 and Fig. 1 for substance use characteristics of this sample).

Cortical thickness measurement
Repeated measures analysis of covariance (ANCOVA) examined main effects of group and time, and Group by Time interactions, on cortical thickness values for 34 independent standard neuroanatomical cortical regions (Desikan et al., 2006) in each hemisphere. Significant between-group and interaction effects were followed up post hoc to determine which time point was driving the statistically significant between-group differences (α = .05). Intracranial volume (ICV) and lifetime alcohol use were included as covariates, given the high rate of alcohol use reported by the marijuana users in this sample.

Secondary correlational analysis
Partial correlations (controlling for lifetime alcohol and marijuana use, given associations between both substances and the outcome variable, cortical thickness) were run in our MJ + ALC group (n = 30) and CON group (n = 38) to explore unique substance use associations (e.g., cumulative alcohol and marijuana use, recency of alcohol and marijuana use, and age of marijuana use onset) with cortical thickness at the 3-year follow-up.

Demographics
No demographic differences were observed (see Table 1). Group differences in substance use were observed, consistent with the inclusion criteria and recruitment efforts for heavy marijuana users and controls (see Table 1, Fig. 1).

Cortical thickness measurement
Examination of 34 independent cortical regions in each hemisphere revealed significant group and Group by Time effects, consistent with findings of thicker cortices in the MJ + ALC group across the brain (ps < .05). No main effects of time were identified. Findings within each lobe of the brain are presented below (see Table 2). Lifetime alcohol use and ICV were identified a priori as covariates for each ANCOVA; all significant findings were re-run controlling for lifetime other drug use, and results remained unchanged (ps < .05).

Frontal lobe cortical thickness
In the right hemisphere, the group main effect significantly predicted cortical thickness in the right precentral gyrus (F(1,64) = 6.54, p = .01) and right paracentral lobule (F(1,64) = 6.4, p = .01). Follow-up analyses for the right hemisphere revealed statistically significant between-group differences at baseline and 3-year follow-up for the right precentral gyrus (ps < .02), and at 1.5- and 3-year follow-up for the right paracentral lobule (ps < .02) (see Fig. 2), with MJ + ALC showing thicker estimates compared to CON. In the left hemisphere, the group main effect significantly predicted cortical thickness estimates in the left precentral gyrus (F(1,64) = 12.21, p < .01), left pars opercularis (F(1,64) = 4.90), and left frontal pole (see Table 2).

[Table 2 caption: Cortical thickness estimates across time points for the between-group differences identified. Means are adjusted for lifetime alcohol use and ICV. Cohen's d reflects between-group differences (CON < MJ + ALC) at each time point.]
Follow-up analyses for the main effect of group in the left hemisphere revealed significant differences at all three time points for the left precentral gyrus (ps ≤ .01) and left frontal pole (ps ≤ .04), and at 1.5- and 3-year follow-up for the left pars opercularis (ps ≤ .03), with MJ + ALC showing thicker estimates compared to CON. Significant interaction effects revealed thicker estimates for MJ + ALC at 3-year follow-up only in the left paracentral lobule (p = .04); and while CON decreased in thickness from baseline to 3-year follow-up in this region (ps < .01), no statistically significant decline was observed for MJ + ALC. The between-group effect was significant only at 3-year follow-up in the left superior frontal gyrus (p = .02, MJ + ALC > CON), and both groups decreased over time (ps < .02). While no between-group differences were identified in the left pars orbitalis, decreasing thickness estimates (ps < .01) were identified across time points, with the exception of baseline to 1.5-year follow-up for MJ + ALC (see Table 2).

Parietal lobe cortical thickness
In the left hemisphere of the parietal lobe, the group main effect significantly predicted thickness estimates in the left superior parietal cortex (F(1,64); see Table 2, Fig. 2).

Temporal lobe cortical thickness
The group main effect significantly predicted cortical thickness estimates in the right transverse temporal cortex (F(1,64) = 9.44, p < .01), and follow-up analysis revealed significant between-group differences at all three time points (ps < .01) (see Table 2).

Occipital lobe cortical thickness
No main or interaction effects were found to predict cortical thickness estimates in the right hemisphere. However, the main effect of group predicted cortical thickness in the left pericalcarine cortex (F(1,64) = 7.39, p < .01). Follow-up analysis revealed between-group differences at all three time points (ps < .03) (see Table 2).

Secondary correlational analysis
Age of onset of regular marijuana use was negatively associated with cortical thickness in the right entorhinal cortex, controlling for lifetime alcohol use (pr = −.46, p = .01). There were no associations between age of onset of regular alcohol use (ps > .10), or recency of marijuana and alcohol use (ps > .05), and cortical thickness at follow-up. No significant partial correlations were identified within the CON group (ps > .05).

Discussion
This study examined cortical thickness estimates at three independent time points (ages 18, 19, and 21, respectively) in adolescent marijuana and alcohol users compared to controls with limited substance use histories. We found significant between-group differences in cortical thickness estimates after controlling for lifetime alcohol use. MJ + ALC demonstrated increased cortical thickness estimates in all four lobes of the brain, bilaterally. Notably, 18 of the 23 regions in which differences were observed were in the frontal and parietal cortex. Positive dose-dependent associations were identified in temporal brain regions, as cumulative marijuana use from ages 16 to 22 was associated with thicker cortices in the inferior temporal and entorhinal cortex. Several negative associations were observed with lifetime alcohol use, as more reported alcohol use was associated with thinner cortical estimates in all four lobes.

It is important to detail how these findings compare to our previous work with a similar sample, as we found both similarities and differences from our cortical thickness study in which adolescent marijuana users were observed pre- and post-28 days of monitored abstinence.
In Jacobus et al. (2014), increased thickness estimates in our marijuana users (controlling for alcohol use) were found in the entorhinal cortex compared to matched controls. Similarly, the present study found increased thickness estimates in our user group compared to our controls, with findings that were more widespread, noted in all four lobes of the brain. The present study also found that more lifetime marijuana use was associated with increased thickness in the entorhinal cortex, a region rich in cannabinoid 1 (CB1) receptors and important for learning and memory (Battistella et al., 2014; Iversen, 2003; Tsou et al., 1998). However, the dose-dependent bivariate correlations were different: previously, we saw increased marijuana use associated with thinner cortices and increased alcohol use associated with thicker cortical estimates at age 17, pre- and post-monitored abstinence. Our dose-dependent associations in the present study suggest otherwise. We found that more reported lifetime marijuana use was associated with thicker cortical estimates, and more reported lifetime alcohol use with thinner cortices (at age ∼21). This may reflect several points recently discussed by Filbey and colleagues (2014) in the literature, including: (1) methodological issues, as the present study assessed substance use independently over the course of three years, compared to 28 days at age 17; (2) age and maturational bias, as correlations in the present study reflect associations following many years of substance use and potential interference with complex neurodevelopmental processes; (3) changes in marijuana and alcohol use patterns, as individuals in the present study remained relatively chronic in their marijuana use over time but subtly increased their alcohol use; and (4) possible interactions with pre-existing vulnerabilities that are present at age 17 (near initiation) but likely change as the individual continues to use substances chronically and increases in age (Cheetham et al., 2012; Filbey et al., 2014; Jacobus et al., 2013a; Squeglia et al., 2014).

Lopez-Larson and colleagues (2011) cross-sectionally investigated cortical thickness in adolescents ages 16-19 years with heavy marijuana use histories. They found decreased thickness in frontal regions and the insula, along with increased thickness in lingual, temporal, and parietal regions. The present study found increases in thickness in parietal, temporal, and occipital cortices, consistent with the work by this team.

The mechanism by which marijuana may alter the neural architecture and plasticity of the brain is undetermined. The endocannabinoid system plays a role in neuromaturational processes (e.g., pruning) and modulates neurotransmission for several neurotransmitter systems (Berghuis et al., 2007; Iversen, 2003; Rubino and Parolaro, 2008; Stella, 2013). Interference with this system due to marijuana, or tetrahydrocannabinol (THC) administration, likely causes a cascade of neuronal events (Kim and Thayer, 2001; Kim et al., 2008) that changes brain structure and function (Batalla et al., 2013), and thereby neurocognitive processing (Meier et al., 2012), emotional regulation and reward processing (Cousijn et al., 2011), and the propensity for psychiatric comorbidities and addiction (Hall, 2014). It is unclear how associations between marijuana use and cortical thickness remodeling may be unique compared to alterations in macrostructural volume (i.e., volume, which comprises cortical area and thickness).
Some studies suggest that volume changes are driven by changes in surface area (Im et al., 2008; Pakkenberg and Gundersen, 1997), whereas others suggest they are driven by thickness as one ages (Storsve et al., 2014); however, relationships between these metrics are likely dynamic across the lifespan and represent different neuromaturational mechanisms at different stages of life and disease (e.g., myelination, quantity of cortical columns, cellular organization of cortical columns, dendrites, synapses) (Mountcastle, 1997; Ostby et al., 2009; Rakic, 1988; Storsve et al., 2014). Changes in regional brain volume associated with marijuana use have varied, as some have observed decreased volume (Ashtari et al., 2011; Demirakca et al., 2011; Schacht et al., 2012) and others have identified macrostructural volume increases in CB1-dense brain regions such as the neocortex, amygdala, striatum, hippocampus, and cerebellum (Cousijn et al., 2012; Gilman et al., 2014; McQueeny et al., 2011; Medina et al., 2010). In reward-network regions specifically, a recent examination by Filbey and colleagues (2014) found decreased orbitofrontal cortex (OFC) volume in heavy marijuana users compared to controls, along with increased structural and functional connectivity within the OFC network. Lorenzetti and colleagues (2014) did not find OFC differences in their sample of heavy marijuana users, but did see smaller hippocampus and amygdala volumes. Cheetham et al. (2012) found that smaller OFC volume pre-initiation of marijuana use (age 12) predicted progression into use four years later (age 16). Taken together, findings underscore that alterations in cortical metrics are likely dynamic and influenced by age, pre-existing vulnerabilities, and exogenous factors such as marijuana use. Continuing to study associations between cortical metrics and substance use is important given that these estimates have been linked to cognitive functioning in several studies in our laboratory and others (Ashtari et al., 2011; Jacobus et al., 2014; Squeglia et al., 2012). Alcohol likely has similarly deleterious consequences on the brain. The present dose-dependent associations are consistent with our previous findings, as Squeglia et al. found decreases in cortical thickness estimates associated with heavy episodic alcohol use in males (Squeglia et al., 2012), and accelerated declining brain volume trajectories in a large prospective investigation examining individuals (ages 12-24) who transitioned to heavy drinking (Squeglia et al., in press). Alcohol likely interferes with neural development of the cerebral cortex, and the thinner cortices observed with more cumulative use may represent non-beneficial pruning and/or inhibition of cell generation or cell death (Crews and Nixon, 2009). Limitations of the present study include self-report of substance use, which can introduce measurement error. Further, while this study was prospective, participants were not assessed prior to initiation of substance use. However, previous work in our laboratory found marijuana-related associations with white matter integrity in a sample of individuals assessed pre- and post-initiation of substance use (Jacobus et al., 2013b). Nevertheless, future work should determine the influence of pre-existing differences on cortical metrics.
The current investigation included users of both marijuana and alcohol, and despite controlling for alcohol use, it remains unclear which effects are attributable to marijuana alone as compared to co-occurring marijuana and alcohol use. Our sample was predominantly male (70%); gender should therefore be evaluated, and future studies will focus on differential gender effects on brain morphometry in adolescent marijuana users. Groups did not statistically differ on days since last use of marijuana and alcohol, likely influenced by the monitored abstinence period; therefore, acute effects may not have been captured in our reported findings. A statistically significant within-subjects effect was not widely observed (e.g., decreasing cortical thickness estimates), which may be attributable to the smaller sample size combined with a more restricted age range. We tried to reduce the number of correlational analyses that were conducted; however, given that effects were modest, future work should replicate these findings. Studies should continue to follow existing adolescent cohorts to understand neural and behavioral changes that occur into young adulthood. Understanding how co-occurring marijuana and alcohol use influences both macrostructural and microstructural brain development, along with structural and functional connectivity, will help clinical interventions target neural vulnerabilities to develop novel and effective interventions to reduce marijuana misuse as prevalence rates of marijuana use continue to increase (Johnston et al., 2015).
Supervisory Control for Turnover Prevention of a Teleoperated Mobile Agent with a Terrain-Prediction Sensor Module

Introduction

Teleoperated mobile agents (or vehicles) play an important role especially in hazardous environments such as inspecting underwater structures (Lin, 1997), demining (Smith, 1992), and cleaning nuclear plants (Kim, 2002). A teleoperated agent is, in principle, maneuvered by an operator at a remote site, but should be able to react autonomously to avoid dangerous situations such as collisions with obstacles and turnovers. Many studies have been conducted on collision avoidance of mobile agents (Borenstein, 1989; Borenstein, 1991a; Borenstein, 1991b; Howard, 2001; Niwa, 2004; Singh et al., 2000). In this research, however, we will focus on turnover prevention of mobile agents moving on uneven terrain because a turnover can cause more fatal damage to the agents. Here, we adopt the term 'turnover' as a concept which includes not only a rollover but also a pitchover.
Extensive studies have been conducted on motion planning problems of mobile agents traveling over sloped terrain in the robotics research community (Shiller, 1991). Shiller presented optimal motion planning for an autonomous car-like vehicle without a slip and a rollover. The terrain was represented by a B-spline patch and the vehicle path was represented by a B-spline curve, where the terrain and vehicle path were given in advance. With the models of the terrain and the path, the translational velocity limit of the vehicle was determined to avoid a slip and a rollover. Also, many studies have been conducted on rollover prevention of heavy vehicles like trucks and sports utility vehicles in the vehicular research community. Takano analyzed various dynamic outputs of large vehicles, such as the lateral acceleration, yaw rate, roll angle, and roll rate, in the frequency domain for predicting rollovers (Takano, 2001). Chen developed the time-to-rollover (TTR)-based rollover threat index in order to predict rollovers of sports utility vehicles (Chen, 1999). This intuitive measure TTR was computed from a simple model and then corrected by using an artificial neural network. Nalecz et al. suggested an energy-based function called the rollover prevention energy reserve (RPER) (Nalecz, 1987; Nalecz, 1991; Nalecz, 1993). RPER is the difference between the energy needed to bring the vehicle to its rollover position and the rotational kinetic energy, which can be transferred into the gravitational potential energy to lift the vehicle. RPER is positive for non-rollover cases and negative for rollover cases. Acarman analyzed the rollover of commercial vehicles with tanks that are partially filled with liquid cargo (Acarman, 2003). In this case, the frequency-shaped backstepping sliding mode control algorithm was designed to stabilize and attenuate the sloshing effects of the moving cargo by properly choosing the crossover frequencies of the dynamic compensators in accordance with the fundamental frequencies of the slosh dynamics. Many studies have also been conducted on turnover prevention of mobile manipulators like a forklift. Rey described a scheme for automatic turnover prediction and prevention for a forklift (Rey, 1997). By monitoring the static and dynamic turnover stability margins of a mobile manipulator, it is possible to predict turnovers and take appropriate actions to prevent them. Here, the dynamic force-angle measure of turnover stability margin proposed by Papadopoulos (Papadopoulos, 1996) is employed. Also, Sugano suggested concepts of stability, such as the stability degree and the valid stable region, based on the zero-moment point (ZMP) criterion to evaluate the stability of a mobile manipulator (Sugano, 1993). In addition, a method of ZMP path planning with a stability potential field was suggested for recovering and maintaining stability. Based on the path planning method, the motion of the manipulator is planned in advance to ensure stability while the vehicle is in motion along a given trajectory. Furthermore, for stability recovery, the compensation motion of the manipulator is derived by using the redundancy of the manipulator, taking into consideration the manipulator configuration and the static system stability (Huang, 1997). In the abovementioned research on autonomous mobile agents, the path and trajectory of the vehicle and the manipulator were given in advance and modified for rollover prevention.
However, the path and trajectory of a teleoperated mobile agent cannot be given in advance since both are determined by a teleoperator at each time instant. Thus, it is impossible to analyze and prevent rollovers in advance. For the forklift mentioned above, its path and trajectory were not known in advance since it was maneuvered by an operator. Thus, the previous researchers estimated the path and trajectory using proprioceptive sensor data (internal sensor data) for turnover prevention. However, in the case where there is a potential risk of turnovers due to an abrupt change in the configuration of the ground, proprioceptive sensor data are not enough to prevent turnovers. Therefore, in this research, a low-cost terrain-prediction sensor with a camera vision and a structured laser light is proposed for predicting turnovers at the front terrain before the agent arrives there. With these predicted data, a turnover prevention algorithm is suggested based on the quasi-static rollover analysis of a rigid vehicle (Gillespie, 1992). The proposed turnover prevention algorithm (Park, 2006a) consists of a pitchover prevention algorithm and a rollover prevention algorithm (Park, 2006b). According to the turnover prevention algorithm, the translational and rotational velocities of the agent are restricted to avoid turnovers. However, the turnover prevention control brings about some inconsistencies between the intended motion and the reactive motion of the agent. For compensating these inconsistencies, we propose a force reflection technique based on virtual reality. Force reflection techniques have already been used in various research areas such as medical surgery (Chen, 1998; Basdogan, 2004; Nudehi, 2005), micromanipulation (Ando, 2001; Boukhnifer, 2004), and obstacle avoidance of teleoperated mobile agents (Park, 2003a; Park, 2003b; Park, 2004; Park, 2006b). In this research, a reflective force helps an operator control the agent without a turnover, where a 2-DOF force-feedback joystick is used as a haptic device which can not only receive the operator's command but also send a reflective force back to the operator.

Supervisory Control

In a teleoperation system, an operator, in principle, controls a mobile agent at a remote site using a force feedback joystick, but the agent needs to control itself autonomously to escape dangerous situations like overturning. As a result of autonomous control, the reactive motion of the agent may differ from the intended motion of the operator, which violates the principal rule of a teleoperation system mentioned above. We therefore analyze the boundaries of safe motion of the agent without turnovers and allow the operator to freely control the agent within the analyzed safe boundaries. That is, the agent motion determined by the operator is restricted for turnover prevention only when it is beyond the safe boundaries. Thus, the resultant motion of the agent is determined by the motion closest to the operator's intended motion among the turnover-free motions. In addition, we propose a force feedback technique that lets the operator recognize the inconsistency between the reactive and intended motions of the agent. If the agent controlled by the operator is faced with danger, the operator feels a reflective force generated by the force feedback joystick, which keeps the operator from driving the agent beyond the safe boundaries. Thus, the reflective force makes it possible for the operator to drive the agent without turnovers.
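The core of this supervisory rule is a projection of the operator's command onto the turnover-free intervals. The following Python sketch illustrates the idea, assuming per-axis interval bounds for the translational and rotational velocities; the function names and the boolean flag are illustrative and not taken from the original.

def clamp(value, lower, upper):
    """Project a scalar command onto the turnover-free interval [lower, upper]."""
    return max(lower, min(upper, value))

def supervise(v_cmd, w_cmd, v_lb, v_ub, w_lb, w_ub):
    """Return the motion closest to the operator's command among
    turnover-free motions, plus a flag used to trigger force reflection."""
    v_d = clamp(v_cmd, v_lb, v_ub)   # translational velocity
    w_d = clamp(w_cmd, w_lb, w_ub)   # rotational velocity
    restricted = (v_d != v_cmd) or (w_d != w_cmd)
    return v_d, w_d, restricted

Clamping each axis independently is possible here because the agent is a differential-drive machine whose translational and rotational velocities can be commanded individually.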
An example of the supervisory control is shown in Fig. 1. An agent moves to A according to the operator's command. From A to C, the agent is autonomously controlled to avoid turnovers since it detects a potential turnover area. From C, the agent is controlled by the operator since it has escaped the danger of turnovers. Again, the agent is autonomously controlled for turnover prevention from D to E. As described above, the operator's intended direction of the agent is modified by the autonomously controlled direction for turnover prevention whenever the agent detects potential turnovers. Also, whenever the agent is autonomously controlled, the operator feels a reflective force and is able to recognize the modified agent motion. However, when there is no danger of turnovers, the agent is controlled by the operator.

System Configuration

The teleoperation system consists of a remote control system (RCS) and a mobile agent system (MAS) as shown in Fig. 2. The RCS and the MAS communicate with each other via wireless Ethernet communication. Control signals and sensor data are denoted in Table 1. The RCS receives input force FO(t) from an operator via a force feedback joystick, and the joystick position PJ(t) is determined by FO(t). Then, the velocity command Vcmd(t) of the agent is determined from PJ(t) by a position-to-velocity matcher, where Vcmd(t) consists of the translational velocity v(t) and rotational velocity ω(t). Here, each velocity can be controlled independently since the agent used in this research is a differential-drive machine which has two individually motorized tracks. The operator's command Vcmd(t) is restricted by a turnover prevention controller to avoid potential turnovers using predicted terrain data Tr(t) transmitted from the MAS. Finally, the resultant velocity command Vd(t) for turnover prevention is transmitted to the MAS for actually controlling the agent without turnovers. Also, a reflective force FR(t) is generated from Pub(t) and Plb(t), where Pub(t) and Plb(t) are determined by the upper and lower bounds, Vub(t) and Vlb(t), of the turnover-free ranges of v(t) and ω(t), respectively. As a result of force reflection, the operator can intuitively recognize whether the agent motion is restricted for turnover prevention or not.

Table 1. Control signals and sensor data:
FO(t): input force by the operator's command
FR(t): reflective force generated by the force feedback joystick
PJ(t): joystick position determined by FO(t)
Vub(t), Vlb(t): upper and lower bounds of the translational and rotational velocities for avoiding turnovers
Pub(t), Plb(t): joystick positions determined by Vub(t) and Vlb(t)
Vcmd(t): control command of the agent determined by PJ(t)
Vd(t): desired velocities for turnover prevention determined by Vcmd(t), Vub(t) and Vlb(t)
Itr(t): terrain image data obtained by the camera vision with the structured laser light
Tr(t): terrain data obtained after image processing of Itr(t)
q̇d(t): desired spinning speeds of the actual motors
q̇(t): actual spinning speeds of the motors
q(t): encoder data of the motors

The MAS computes the desired spinning speeds q̇d(t) of the motors of the two drive wheels for the desired velocity command Vd(t) received from the RCS, and sends them to the embedded controllers that control the actual motors to achieve the desired spinning speeds through internal feedback control loops with encoder data q(t).
Next, the front terrain data Tr(t) for turnover prevention are obtained by the terrain-prediction sensor module, which projects a structured laser light on the front terrain and detects the projected line using a web camera. In this case, the laser-line segment is extracted from the terrain image data Itr(t) on the camera image plane for obtaining Tr(t). Finally, the obtained terrain data Tr(t) are transmitted to the RCS for turnover prevention. Let time Ts be the communication period between the RCS and the MAS. Then time Ts should satisfy

Ts ≥ Ta + 2Td,    (1)

where Ta is the maximum delay for sensor acquisition and Td is the maximum delay for wireless communication. If the accessible range of the IEEE 802.11b based Wireless Local Area Networks (WLANs) used in our system covers the locations of both the RCS and the MAS, the round-trip time delay 2Td (less than 0.29 ms) can be neglected as compared with time Ta (less than 100 ms). As the sum of the communication packet sizes of both control signals and sensor data is less than 200 bytes (or 1600 bits) and the IEEE 802.11b standard promises data rates up to 11 Mbps, the one-way delay is at most 1600 bits / 11 Mbps ≈ 0.145 ms, so the round-trip delay 2Td is bounded by 0.29 ms. Although the coverage of the WLANs is reduced in crowded areas, the coverage can be easily expanded by establishing additional wireless Access Points (APs) in those areas. Also, the motion control of the agent can be completed within time Ts since the motion control time is much less than Ta and the motion control is conducted simultaneously with sensor acquisition. Hereafter, the translational velocity v(t) and the rotational velocity ω(t) will be discretely described as v(k) and ω(k), based on the time index k which denotes time t = kTs. Of course, control signals and sensor data will also be described with k instead of t.

Basic Assumptions

Basic assumptions are introduced for terrain prediction and turnover prevention control as follows:
1. The communication period Ts between the RCS and the MAS ensures enough time to complete the terrain data acquisition and the motion control of the agent, taking into consideration the maximum time delay for wireless communication.
2. No turnover occurs between the starting position of the agent and the first terrain position detected by the terrain-prediction sensor, since the agent cannot avoid turnovers without terrain sensor data.
3. The process for terrain data acquisition is fast enough to obtain sufficient terrain data for turnover prevention control at each time instant; well more than two terrain data points are available within the longitudinal length of the agent while the agent moves at its normal speed.
4. The agent is represented as one lumped mass located at its center of gravity (CG) with appropriate mass and inertia properties, since all components of the agent move together. The point mass at the CG, with appropriate rotational moments of inertia, is dynamically equivalent to the agent itself for all motions in which it is reasonable to assume the agent to be rigid (Gillespie, 1992).
5. The agent has a trapezoidal velocity profile. That is, the translational acceleration of the agent takes constant values: ac for accelerated motion, 0 for uniform motion and −ac for decelerated motion.
6. The motion controllers of the agent control the translational acceleration with tolerable errors according to the reference inputs ac, 0 and −ac. Therefore, we do not consider the variation of the acceleration depending on various terrain types such as rocky and sandy terrain.
7. The agent is able to reduce its translational velocity from vmax to 0 within a distance of Dtr, where Dtr is the reference distance to the front terrain detected for turnover prevention control at each time instant. In other words, Dtr is defined to satisfy the condition Dtr > v²max/(2ac); a quick numerical check of this condition is sketched below. Thus, taking the condition for Dtr into consideration, the configuration of the terrain-prediction sensor module, such as the orientations of the camera and the laser-line generator, should be determined. According to this assumption, even if the agent detects inevitable turnover terrain at a distance of Dtr, it can reduce its translational velocity and stop before arriving at the detected terrain.
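The braking-distance condition in assumption 7 is easy to verify numerically. The following sketch uses placeholder parameter values, not the values from the paper's Table 5.

# Check of assumption 7: the look-ahead distance Dtr must exceed the
# braking distance v_max**2 / (2 * a_c). All numbers below are assumed.
v_max = 0.5    # maximum translational velocity [m/s]
a_c = 0.25     # magnitude of the constant (de)acceleration [m/s^2]
D_tr = 0.8     # chosen look-ahead distance [m]

braking_distance = v_max**2 / (2 * a_c)   # = 0.5 m for these numbers
assert D_tr > braking_distance, "look-ahead too short to stop in time"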
Terrain-prediction Sensor Module

We developed a low-cost terrain-prediction sensor module for obtaining front terrain data in advance. As shown in Fig. 4, the developed terrain-prediction sensor module consists of a web camera, a laser-line generator and an inclinometer, and is attached to the ROBHAZ-DT. The laser-line generator LM-6535ML6D developed by Lanics Co., Ltd. is used to project a line segment on the front terrain. The fan angle and line width of the laser-line generator are 60° and 1 mm, respectively. The wavelength of the laser beam ranges from 645 nm to 665 nm and the optical output power is 25 mW. The complementary metal-oxide-semiconductor (CMOS) web camera ZECA-MV402 developed by Mtekvision Co., Ltd. is used to detect the line segment projected onto the front terrain. The inclinometer 3DM developed by MicroStrain Inc. is used to measure the absolute angles from 0° to 360° on both the yaw and pitch axes, and from −70° to 70° on the roll axis with respect to the universal frame. The data of the inclinometer are obtained via an RS232 serial interface.

Acquisition of Vision Data

For terrain data acquisition, we first propose an image processing method for extracting a projected laser line from an original camera image, where the image size is 320×240 pixels. Partitioning an image into regions, such as an object and the background, is called segmentation (Jain, 1995). A binary image for an object and the background is obtained using an appropriate segmentation of a gray scale image. If the intensity values of an object are in an interval and the intensity values of the background pixels are outside this interval, a binary image can be obtained using a thresholding operation that sets the points in that interval to 1 and points outside that interval to 0. Thus, for binary vision, segmentation and thresholding are synonymous. Thresholding is a method to convert a gray scale image into a binary image so that objects of interest are separated from the background. For thresholding to be effective in object-background separation, it is necessary that the objects and background have sufficient contrast and that we know the intensity levels of either the objects or the background. In a fixed thresholding scheme, these intensity characteristics determine the value of the threshold. In this research, the laser line is the object to be separated from the background. Since the laser line is brighter than the background, an original image F(u1,u2) for u1=1,…,320 and u2=1,…,240 can be partitioned into the laser line and the background using a thresholding operation as follows:

FT(u1,u2) = 1 if F(u1,u2) ≥ T, and FT(u1,u2) = 0 otherwise,    (2)

where FT(u1,u2) is the resulting binary image and T is a threshold. By (2), FT(u1,u2) has 1 for the laser line and 0 for the background. The results of producing an image using different thresholds are shown in Fig. 5.
Fig. 5 (b) shows the resulting image with T=150. The left and right sides of the projected laser line are not separated from the background since the intensity values of both sides of the line are outside the interval. For detecting both sides of the line, an image is obtained using T=120 as shown in Fig. 5 (c). As compared with T=150, more parts of the line are detected. Finally, the binary image with T=100 is shown in Fig. 5 (d). Although the resulting image includes more parts of the line as compared with T=150 and T=120, some background pixels are wrongly detected as the line since their intensities are in the interval. As shown in these examples, the threshold of the fixed threshold method should be appropriately determined according to the application domain. In other words, we have to change the threshold whenever the domain is changed. Also, the threshold needs to be changed whenever the illumination changes. In this research, we propose an adaptive vertical threshold scheme in order to separate the laser line from the background regardless of illumination changes. The concept of the proposed threshold scheme is shown in Fig. 6. Although the intensity of both sides of the line is weaker than the intensity of the center of the line, the artificial laser light is brighter than the other pixels on its vertical line. Using this fact, we define a threshold for the uth vertical line as follows:

Tv(u) = max over u2=1,…,240 of F(u,u2),    (3)

where u=1,…,320. As shown in (3), the vertical threshold Tv(u) is adaptively determined as the maximum intensity of the pixels on the uth vertical line even if the intensity of illumination is changed. Using Tv(u), each vertical line is thresholded as follows:

FVT(u,u2) = 1 if F(u,u2) ≥ Tv(u), and FVT(u,u2) = 0 otherwise.    (4)

Finally, the resulting binary image is obtained by taking the union of FVT(u,u2) over all vertical lines u=1,…,320 (5). That is, the detected laser line is the region where FVT(u1,u2)=1. The results of producing an image using the adaptive vertical threshold scheme are shown in Fig. 7. Under low-intensity illumination, the projected laser line is shown in Fig. 7 (a); in this case, the entire laser line is obtained as shown in Fig. 7 (b). Under high-intensity illumination, it is hard to distinguish the laser line from the background as shown in Fig. 7 (c). However, the entire line is still obtained by the proposed vertical threshold scheme as shown in Fig. 7 (d). That is, the adaptive vertical threshold scheme is not sensitive to illumination changes. Thus, the vertical threshold scheme can be directly applied to various application domains.
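The fixed and adaptive thresholding operations of Eqs. (2)-(5) translate directly into array operations. The following Python/NumPy sketch is illustrative; the function names and the toy image are ours, not the paper's.

import numpy as np

def fixed_threshold(img, T):
    """Eq. (2): 1 where the intensity is at least T, 0 elsewhere."""
    return (img >= T).astype(np.uint8)

def adaptive_vertical_threshold(img):
    """Eqs. (3)-(5): each column u gets its own threshold Tv(u), the maximum
    intensity in that column, so only the brightest pixel(s) of each vertical
    line (the laser) survive, regardless of global illumination."""
    Tv = img.max(axis=0, keepdims=True)   # Eq. (3): one threshold per column
    return (img >= Tv).astype(np.uint8)   # Eqs. (4)-(5): union over columns

# Toy 240x320 gray-scale frame with a bright laser stripe on row 120.
img = np.full((240, 320), 60, dtype=np.uint8)
img[120, :] = 200
assert adaptive_vertical_threshold(img)[120].all()
assert not adaptive_vertical_threshold(img)[0].any()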
Acquisition of 3D Information

In this section, we obtain 3D information from the detected laser line on the 2D camera image using the geometry of the terrain-prediction sensor module. The mobile base frame {B} of the agent and the camera frame {C} of the terrain-prediction sensor with respect to the universal frame {U} are depicted in Fig. 8, where the Yc-axis is set parallel with the Yb-axis. The Xb-axis of {B} is parallel with the heading direction of the agent and the Zb-axis is normal to the surface of the ground. The Yb-axis is defined perpendicular to the Xb-Zb plane and its direction is determined by the right-hand rule (RHR). The origin of {B} is the agent center position (ACP), which is the projected point of the CG on the Xb-Yb plane. In this research, all other coordinate systems are also defined in accordance with the RHR. According to the relation between {B} and {C}, a point Pc(xc,yc,zc)∈R³ relative to {C} can be transformed into a point Pb(xb,yb,zb)∈R³ relative to {B} via the projection geometry in (6) and (7), where f is the focal length of the camera, θlp is the projection angle of the laser line on the image plane, and b' is the distance between the center of the camera lens C and the intersection L' of the Zc-axis and the laser beam. According to the Law of Sines applied to triangle ΔLCL', the distance b' can be obtained as in (8) from the baseline distance b between the center of the laser-line generator L and the camera center C. By (7) and (8), the 3D position of each detected laser point relative to {B} is obtained.

Acquisition of Terrain Parameters

The terrain data at a distance of Dtr in front of the agent consist of the roll and pitch angles of the agent set on that terrain. As shown in Fig. 10, the roll angle of the front terrain relative to the current roll angle of the agent is predicted as

ΔθRoll(k) = tan⁻¹((z'bL(k) − z'bR(k)) / Dtrack),    (9)

where Dtrack is the distance between the right and left tracks of the agent. P'bR(k) and P'bL(k) are the contact points of the right and left tracks with the front terrain at a distance of Dtr, denoted as (x'bR(k),y'bR(k),z'bR(k)) and (x'bL(k),y'bL(k),z'bL(k)), respectively. P'bR(k) and P'bL(k) are obtained by the following steps:
1. Store the detected points PbR(k) and PbL(k) for the right and left tracks on the laser line and the translational velocity v(k) in the memory at time k.
2. For each time instant, find the minimum times Δk1 and Δk2 satisfying conditions (10) and (11), where θ3DM−Pitch(k) is the pitch angle of the agent obtained by the inclinometer at time k relative to {U}.
3. Using Δk1 and Δk2 satisfying (10) and (11), obtain P'bR(k) and P'bL(k) by the linear interpolation of PbR(k−Δk1+1) and PbR(k−Δk1) and the linear interpolation of PbL(k−Δk2+1) and PbL(k−Δk2), respectively.

Finally, the roll angle relative to {U} is obtained from the predicted roll angle ΔθRoll(k) relative to {B} as

θRoll(k) = θ3DM−Roll(k) + ΔθRoll(k),    (12)

where θ3DM−Roll(k) is the roll angle of the agent obtained by the inclinometer at time k relative to {U}.

Fig. 10. Predicted roll angle ΔθRoll(k) at a distance of Dtr relative to the roll angle at time k, using interpolated points P'bL(k) and P'bR(k) at time k.

As shown in Fig. 11, the pitch angle ΔθPitch(k) of the front terrain relative to the current pitch angle of the agent is predicted from the terrain data obtained at times k and k−Δk3 as in (13), where Δk3 is the minimum time satisfying the condition Lfr ≤ |P'bF(k)P'bF(k−Δk3)|. Here, Lfr is the length of the agent tracks, and |P'bF(k)P'bF(k−Δk3)| is the distance between the points P'bF(k) and P'bF(k−Δk3). Point P'bF(k) is defined as the midpoint of P'bR(k) and P'bL(k):

P'bF(k) = (P'bR(k) + P'bL(k)) / 2.    (14)

To obtain the distance |P'bF(k)P'bF(k−Δk3)|, the point PbF(k−Δk3) relative to the base frame {B(k−Δk3)} defined at time k−Δk3 needs to be transformed into the point P'bF(k−Δk3) relative to {B(k)} (or {B}) defined at time k, as in (15); the second term on the right-hand side of (15) indicates the displacement vector between {B(k−Δk3)} and {B(k)}. Finally, the pitch angle relative to {U} is obtained from the predicted pitch angle ΔθPitch(k) as

θPitch(k) = θ3DM−Pitch(k) + ΔθPitch(k).    (16)

Fig. 11. Predicted pitch angle ΔθPitch(k) at a distance of Dtr relative to the pitch angle at time k, using interpolated points P'bF(k) and P'bF(k−Δk3) at times k and k−Δk3, respectively.
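To make the roll prediction concrete, the sketch below implements Eqs. (9) and (12) in Python. The left-minus-right sign convention and the example numbers are assumptions on our part; only the structure (height difference over track separation, then adding the inclinometer reading) comes from the text.

import numpy as np

def predicted_roll(p_bR, p_bL, theta_3dm_roll, d_track):
    """Eqs. (9) and (12): predicted roll angle at the look-ahead distance Dtr.

    p_bR, p_bL     : interpolated right/left track contact points (x, y, z) in {B}
    theta_3dm_roll : current roll angle from the inclinometer, relative to {U}
    d_track        : separation between the right and left tracks
    The sign convention (left minus right) is assumed.
    """
    d_roll = np.arctan2(p_bL[2] - p_bR[2], d_track)   # Eq. (9), relative to {B}
    return theta_3dm_roll + d_roll                    # Eq. (12), relative to {U}

# Left contact point 5 cm higher than the right across a 40 cm track separation.
roll = predicted_roll(np.array([0.8, -0.2, 0.00]),
                      np.array([0.8,  0.2, 0.05]),
                      theta_3dm_roll=0.0, d_track=0.4)
print(np.degrees(roll))   # about 7.1 degrees of predicted roll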
Turnover Prevention through Prediction

In this section, a turnover prevention algorithm for preventing the agent from pitching over or rolling over is discussed. The pitchover-free range of the translational acceleration and the rollover-free range of the rotational velocity are determined by using the predicted terrain sensor data. According to both ranges, the translational and rotational velocities of the agent are controlled for pitchover and rollover prevention.

Dynamics of the Agent

In order to determine turnover constraints for the agent moving through unknown terrain, we adopt the quasi-static rollover analysis of a rigid vehicle (Gillespie, 1992). By assuming the ROBHAZ-DT to be a rigid vehicle, the deflections of the suspensions and tracks need not be considered in the analysis. The external forces acting on the agent consist of the friction forces between the vehicle and the ground, the normal force, and the gravity force. The total friction force F, tangent to the Xb-Yb plane, is defined in (17) in terms of its components fXb and fYb, tangent and normal to the heading direction of the agent, respectively. By modifying the dynamic-motion equations for the car-like agent described by Shiller (Shiller, 1991), the motion equation for a differential-drive agent moving through unknown terrain can be described in terms of the translational velocity v and the translational acceleration a, as in (18), where N is the magnitude of the normal force in the direction of Zb, m is the lumped mass of the agent, and r is the turning radius of the agent. Radius r can be represented as v/ω since the agent is a differential-drive vehicle. Parameters fXb, fYb and N can be obtained by the dot products of the unit vectors Xb, Yb and Zb with (18), giving (19)-(21), where θRoll and θPitch are determined according to the conventional method of the X-Y-Z fixed angles.

Pitchover Prevention Control

The force distribution of the agent is depicted in Figs. 12 (a) and 12 (b) when the agent pitches over CCW and CW about the Yb-axis, respectively. At the point where the agent is about to pitch over CCW, the total normal force N and the friction force fXb of the agent are applied at the front endpoint of the track only. Thus, the moment on the agent created by those forces should satisfy the condition fXb·h + N·Lfr/2 ≥ 0 for preventing a pitchover in the CCW direction, where h is the height of the center of gravity (CG) of the agent. In the same way, the moment on the agent should satisfy the condition fXb·h − N·Lfr/2 ≤ 0 for preventing a pitchover in the CW direction, where the forces N and fXb are applied at the rear endpoint of the track only. The resultant condition for preventing a pitchover can be determined by combining the above conditions:

−N·Lfr/(2h) ≤ fXb ≤ N·Lfr/(2h).    (25)

Substituting (19) and (21) into (25) transforms the resultant condition into an inequality (26) in a. Hereafter, the upper and lower bounds of a in (26) are denoted as aub and alb, respectively. Bounds aub and alb are represented as surfaces in θRoll-θPitch-a space as shown in Fig. 13, where θRoll and θPitch replace kXb and kZb in (26). That is, the inner region between the upper and lower surfaces indicates the safe region of the translational acceleration for preventing a pitchover. In this case, the permitted accelerations of the agent for accelerated, uniform and decelerated motions are represented as the three planes a=ac, a=0 and a=−ac in Fig. 13. According to the relation of the two surfaces and the three planes, five possible cases of a pitchover are defined as shown in Fig. 14. Each case is determined by the intersection curves of the two surfaces with the three planes.
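A rough numerical version of this test can be sketched as follows. Because Eqs. (19)-(21) and (26) are not reproduced in this extraction, the sketch substitutes a simplified slope model for fXb and N (an assumption on our part); it then classifies the permitted accelerations against the three planes of Fig. 13.

import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def pitchover_accel_bounds(theta_roll, theta_pitch, L_fr, h):
    """Bounds a_lb, a_ub from condition (25): |f_Xb| * h <= N * L_fr / 2.
    Assumed simplified model (not the paper's exact Eqs. (19)-(21)):
        f_Xb = m * (a + G * sin(theta_pitch))
        N    = m * G * cos(theta_pitch) * cos(theta_roll)
    """
    n_over_m = G * np.cos(theta_pitch) * np.cos(theta_roll)
    half_width = n_over_m * L_fr / (2.0 * h)
    a_ub = half_width - G * np.sin(theta_pitch)
    a_lb = -half_width - G * np.sin(theta_pitch)
    return a_lb, a_ub

def classify_pitchover_case(a_lb, a_ub, a_c):
    """Compare the safe interval with the planes a = -a_c, 0, a_c (Fig. 14)."""
    allowed = [a for a in (-a_c, 0.0, a_c) if a_lb <= a <= a_ub]
    if not allowed:
        return "absolute pitchover: decelerate and stop before the terrain"
    if len(allowed) == 3:
        return "no pitchover: all motions permitted"
    return f"potential pitchover: permitted accelerations {allowed}"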
According to the five cases, the control strategies of the translational velocity for pitchover prevention are described in Table 2; for example, in the no-pitchover case (aub > ac and alb < −ac), the permissible acceleration range is −ac ≤ a ≤ ac and accelerated, uniform and decelerated motions are all possible. For pitchover prevention control, the pitchover possibility is determined by the front terrain data predicted by the terrain-prediction sensor. When the agent detects terrain corresponding to the absolute pitchover CW or CCW case, the agent must decelerate to zero because all permitted accelerations of the agent are beyond the boundary of the safe region of the translational acceleration, and thus the agent would unconditionally pitch over at the detected terrain. As a result of deceleration, the agent can stop before arriving at the dangerous terrain. For the potential pitchover CW case, the agent must maintain its velocity or decelerate since it is allowed to move only in uniform and decelerated motions to avoid the CW pitchover. In particular, if the agent detects terrain where it must decelerate in order to keep from pitching over CW, it will decelerate and stop before it reaches that terrain; that is, the agent does not enter that pitchover region since it has already stopped in the vicinity of the region. On the other hand, in the potential pitchover CCW case, the agent must maintain its velocity or accelerate to avoid the CCW pitchover. In this case, the agent cannot accelerate further after its translational velocity reaches the maximum velocity; at that point, the agent must decelerate and stop before it arrives at that terrain. Finally, in the no pitchover case, the agent is allowed to move in accelerated, uniform and decelerated motions. In other words, the agent need not be controlled for pitchover prevention.

Rollover Prevention Control

The force distribution of the agent is depicted in Figs. 15 (a) and 15 (b), where the agent rolls over CCW and CW, respectively. In the case where the agent is about to roll over CCW, the total normal force N and the friction force fYb of the agent are applied at the left track only. Thus, the moment on the agent created by those forces should satisfy the condition fYb·h + N·Wb/2 ≥ 0 for preventing a rollover in the CCW direction. In the same way, the moment on the agent should satisfy the condition fYb·h − N·Wb/2 ≤ 0 for preventing a rollover in the CW direction, where the forces N and fYb are applied at the right track only as shown in Fig. 15 (b). Therefore, the resultant condition to prevent a rollover can be determined by combining the above conditions:

−N·Wb/(2h) ≤ fYb ≤ N·Wb/(2h).    (27)

Substituting (20) and (21) into (27) transforms the resultant condition into an inequality (28) in v and ω. Hereafter, the upper and lower bounds of vω in (28) are denoted as (vω)ub and (vω)lb, respectively. In this case, the translational velocity v is determined by the operator's command and the condition of pitchover prevention. Thus, for the given v, the inequality (28) can be represented in terms of ω as in (29), where Δv is the maximum increase of the translational velocity while the agent moves the distance Dtr: Δv = −v + (v² + 2acDtr)^1/2. Due to the motor torque constraints, the translational velocity v+Δv is bounded by vmax. Here, the upper and lower bounds in (29) are denoted as ωub and ωlb, respectively.
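The following Python sketch mirrors this construction. As with the pitchover sketch, the expressions for fYb and N are a simplified stand-in (our assumption), since the paper's Eqs. (20)-(21) and (28)-(29) are not reproduced in this extraction.

import numpy as np

G = 9.81

def rollover_omega_bounds(v, theta_roll, theta_pitch,
                          W_b, h, a_c, D_tr, v_max, w_max):
    """Rollover-free range of the rotational velocity, in the spirit of (27)-(29).
    Assumed simplified model:
        f_Yb = m * (v * omega + G * sin(theta_roll))   # v * omega = v**2 / r
        N    = m * G * cos(theta_pitch) * cos(theta_roll)
    Condition (27): |f_Yb| * h <= N * W_b / 2.
    """
    n_over_m = G * np.cos(theta_pitch) * np.cos(theta_roll)
    half_width = n_over_m * W_b / (2.0 * h)
    vw_ub = half_width - G * np.sin(theta_roll)    # upper bound on v * omega
    vw_lb = -half_width - G * np.sin(theta_roll)   # lower bound on v * omega
    # Worst-case translational velocity over the look-ahead distance (cf. Eq. 29),
    # capped at v_max by the motor torque constraints.
    v_worst = min(np.sqrt(v**2 + 2 * a_c * D_tr), v_max)
    w_ub = min(vw_ub / v_worst, w_max)
    w_lb = max(vw_lb / v_worst, -w_max)
    return w_lb, w_ub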
The rollover-free region of the rotational velocity is defined as the inner region between the surfaces ωub and ωlb in θRoll-θPitch-ω space, as shown in Fig. 16. In this figure, the three planes ω=ωmax, ω=0 and ω=−ωmax for the rotational velocity are also depicted with the surfaces, where ωmax is the maximum rotational velocity of the agent. According to the relation of the two surfaces and the three planes, the control regions for rollover prevention are defined as shown in Fig. 17. The boundaries bui and blj for i,j=1,2,3 are determined by the intersection curves of the surfaces ωub and ωlb with the three planes, respectively. According to the five control regions, the control strategies of the translational and rotational velocities for rollover prevention, together with the rollover-free ranges of the rotational velocity for each control region, are described in Table 3. For the free moving region A, the rotational velocity of the agent can be chosen over the entire permissible range from −ωmax to ωmax. That is, the operator can control the agent with no restriction on the rotational velocity. On the contrary, for the restricted regions B1 and B2, the rotational velocity must be restricted to prevent a rollover. If the detected terrain is in B1, the rotational velocity is truncated to the range from −ωmax to ωub since ωub < ωmax. In particular, for the region between bu2 and bu3, the agent is allowed only to turn right since ωub < 0; in other words, the agent cannot turn left or go straight. For the region B2, the rotational velocity is truncated to the range from ωlb to ωmax since −ωmax < ωlb. Similarly to the case of B1, for the region between bl2 and bl3, the agent is allowed only to turn left since ωlb > 0. Finally, if the detected terrain is in the uncontrollable regions C1 and C2, the agent must stop before arriving at that terrain because the whole range from −ωmax to ωmax lies outside the safe range from ωlb to ωub, and the agent would unconditionally roll over at that terrain.

Force Reflection System

It is possible that turnover prevention control causes inconsistencies between the driving command of the operator and the reactive motion of the agent. Thus, a reflective force is generated to compensate for these inconsistencies. The experimental setup for force reflection is depicted in Fig. 18. The WingMan Force Pro joystick of Logitech is employed as a 2-DOF force feedback joystick which not only receives the command of the operator but also generates a reflective force. The joystick interface is developed by using the Microsoft DirectX 8.0 Software Development Kit (SDK). The positions about the X-axis and the Y-axis of the joystick coordinates determine the rotational and translational velocities of the agent, respectively.

Position-based Reflective Force for Turnover Prevention

The position-based force FR is depicted in Fig. 19. The force FR is determined by the position q about the axis of the joystick coordinates as in (30), where the parameters of the position-based force are described in Table 4. If q moves away from qoffset, a reflective force is generated to push the joystick back toward qoffset. In other words, the position-based force makes it difficult for the operator to push the joystick far from qoffset. The force parameters FPS and kPC for q > qoffset and the parameters FNS and kNC for q < qoffset can be determined independently.
In addition, as a dead-band for the reflective force can be defined by WDB around qoffset, no reflective force is generated if q is located between (qoffset − WDB) and (qoffset + WDB). Thus, the sensitivity to slight displacements of q around qoffset can be reduced. For pitchover prevention, the reflective force about the Y-axis of the joystick coordinates is generated as shown in Fig. 20 (a). As described in Section 4.2, if the agent detects pitchovers at the front terrain, it must keep its translational velocity or decelerate to zero to avoid a pitchover. That is, the desired translational velocity vd for pitchover prevention is set as the current translational velocity of the agent or decreased continuously. Through the reflective force, the operator recognizes that the translational velocity is restricted by vd. If the operator pushes the joystick in the positive direction above the joystick position for vd, he/she will feel a repulsive force in the negative direction. On the other hand, if the operator pulls the joystick in the negative direction below the joystick position for vd, he/she will feel no reflective force. Therefore, the operator can recognize the upper limit vd for pitchover prevention by the repulsive force. The parameters of the reflective force are determined as qoffset = f1(vd), WDB = 10², FNS = 0, FPS = 10⁴, kNS = 0 and kPS = 10⁴, where f1(·) is a mapping function of the desired translational velocity onto the joystick position. In this case, only qoffset is changed according to the desired translational velocity vd for pitchover prevention. For rollover prevention, a reflective force about the X-axis is generated as shown in Fig. 20 (b). If the agent detects a possible rollover at the front terrain, the safety range of its rotational velocity is determined to avoid rollovers as discussed in Section 4.3. The operator can perceive the safety range through the reflective force while driving the agent. If the operator maneuvers the agent within this safety range of the rotational velocity, no reflective force is generated; thus, the operator can drive the agent without any restriction. However, if the operator pushes the joystick beyond the safety region, he/she will feel a reflective force which pushes the joystick back toward the safety region. That is, if the operator pushes the joystick above the joystick position for the upper bound of the safety region, a reflective force in the negative direction is generated to prevent the joystick from being pushed further in the positive direction. Also, in the case where the operator pushes the joystick below the joystick position for the lower bound of the safety region, a reflective force in the positive direction is generated to prevent the joystick from being pushed further in the negative direction. The parameters qoffset and WDB of the reflective force about the X-axis are determined according to the safety region of the rotational velocity as qoffset = f2((ωlb + ωub)/2) and WDB = f3(ωub − ωlb), where f2(·) is a mapping function of the center of the safety region onto the joystick position and f3(·) is a mapping function of the width of the safety region onto the dead-band of the reflective force. The other parameters are determined as FNS = 10⁴, FPS = 10⁴, kNS = 10⁴ and kPS = 10⁴. In this case, the parameters qoffset and WDB are changed according to the safety region for rollover prevention. As a result of reflective force generation, the operator can intuitively determine how to drive the agent for turnover prevention.
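A compact sketch of the position-based force law follows. Since Eq. (30) is not reproduced in this extraction, the affine spring outside a dead-band is our assumption, chosen to match the description of Fig. 19 and the parameter roles in Table 4.

def reflective_force(q, q_offset, W_DB, F_PS, k_PS, F_NS, k_NS):
    """Position-based reflective force toward q_offset (cf. Fig. 19, Table 4).
    Assumed form: zero inside the dead-band of half-width W_DB, and an affine
    restoring force (constant F_* plus stiffness k_*) outside it."""
    if q > q_offset + W_DB:                         # beyond the upper bound
        return -(F_PS + k_PS * (q - q_offset - W_DB))
    if q < q_offset - W_DB:                         # beyond the lower bound
        return F_NS + k_NS * (q_offset - W_DB - q)
    return 0.0                                      # inside the dead-band

# Pitchover-style parameterization from the text: resist forward pushes only.
F = reflective_force(q=9000, q_offset=5000, W_DB=100,
                     F_PS=1e4, k_PS=1e4, F_NS=0, k_NS=0)
print(F)  # large negative force pulling the stick back toward q_offset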
Experimental Results

Two experiments were carried out with the ROBHAZ-DT in order to verify the feasibility of the proposed turnover prevention algorithm. The sampling time Ts was set to 100 ms. The two resultant paths of the agent moving on the sloped terrain are depicted in Fig. 21, and the system parameters for the experiments are described in Table 5. The first experiment was carried out with only the mobile base of the ROBHAZ-DT, where h1 = 25 cm. The agent moved for 6.7 s as shown in Path1 of Fig. 21 (a). The terrain data at a distance of Dtr in front of the agent are depicted in Fig. 22 (a). These terrain data were predicted by the terrain-prediction sensor and used for turnover prevention. In this experiment, no turnover was detected in the front terrain and thus the translational and rotational velocities of the agent did not need to be controlled for turnover prevention. That is, the desired translational velocity vd for pitchover prevention was set to the maximum translational velocity vmax, and the safety region of the rotational velocity covered the whole range of the rotational velocity of the agent as shown in Fig. 22 (b). Also, no reflective force for turnover prevention was generated, and thus the operator could freely control the agent as shown in Figs. 22 (c) and 22 (d). The second experiment was carried out using the mobile base with a manipulator. In this experiment, we assumed that the configuration of the manipulator was fixed while the agent was in motion, since motion of the manipulator might bring about a change of the center of gravity (CG) of the agent. In this case, although the location of the CG over the base was not changed, the height of the CG rose to h2 = 70 cm due to the mass of the manipulator attached to the mobile base. In the second experiment, the agent moved for 6.3 s as shown in Path2 of Fig. 21 (b). The solid line segments of Path2 indicate where the agent was autonomously controlled for turnover prevention. In particular, at A and B of Path2, the intended direction of the operator was modified for turnover prevention; had the agent remained under the operator's control at A and B, it would soon have overturned. The terrain data at a distance of Dtr in front of the agent are depicted in Fig. 23 (a). In this case, the agent detected turnovers in the front terrain and thus the translational and rotational velocities of the agent were controlled as shown in Fig. 23 (b). For the given terrain data at each time instant, the desired translational velocity vd was autonomously controlled for pitchover prevention and the translational velocity vcmd of the operator's command was restricted by vd. As shown in Fig. 23 (b), the resultant translational velocity v was decelerated by −ac and accelerated by ac to follow the desired velocity vd. Also, the upper bound ωub of the safety region of the rotational velocity was determined for rollover prevention and the rotational velocity ωcmd of the operator's command was restricted by ωub. At A and B of Fig. 23 (b), the resultant rotational velocity ω was restricted by ωub, since ωcmd exceeded ωub. Here, A and B of Fig. 23 (b) correspond to A and B of Fig. 21 (b), respectively. As shown in Fig. 23 (c), when the joystick position for vcmd exceeded vd, the reflective force about the Y-axis was generated in the negative direction.
As a result, the operator felt a repulsive force preventing him/her from pushing above the position for vd, and hence recognized that the translational velocity of the agent was restricted by vd for pitchover prevention. Also, when the joystick position for ωcmd exceeded the upper bound of the safety region, the reflective force about the X-axis was generated in the negative direction, and vice versa. Thus, through the reflective force, the operator could intuitively recognize the safety region of the rotational velocity for rollover prevention and was guided to control the rotational velocity within the safety range.

Conclusions

The turnover prevention control algorithm of a teleoperated mobile agent was presented. For online prediction of the front terrain, a low-cost terrain-prediction sensor composed of a camera vision, a laser-line generator, and an inclinometer was developed. The terrain parameters were obtained by detecting the structured laser line projected onto the front terrain and were used for turnover prevention control through the quasi-static rollover analysis. As a result of turnover prevention control, the translational and rotational velocities of the agent were restricted. However, the velocity restriction for turnover prevention may bring about inconsistencies between the intended motion and the reactive motion of the agent. Thus, the force reflection technique was proposed in order to compensate for these inconsistencies. Through the position-based reflective force, the operator could intuitively recognize how the agent should be controlled to avoid turnovers. Finally, based on the experimental results, we found that the agent can avoid turnovers even on unknown sloped terrain.

Future Works

In future works, the proposed algorithm will be extended to a mobile manipulator whose manipulator moves while the base is in motion. As the manipulator motion brings about a change of the center of gravity, this change of the center of gravity of the agent needs to be considered simultaneously.

Acknowledgement

This work was supported in part by the Korea Institute of Science and Technology, in part by the Science Research Center/Engineering Research Center program of the Ministry of Science and Technology/Korea Science and Engineering Foundation under Grant R11-1999-008, and in part by the Automation and Systems Research Institute.
Whole Body Vibration Therapy after Ischemia Reduces Brain Damage in Reproductively Senescent Female Rats

The risk of ischemic stroke increases exponentially after menopause, and even a mild ischemic stroke can result in increased frailty. Frailty is a state of increased vulnerability to adverse outcomes, which subsequently increases the risk of cerebrovascular events and severe cognitive decline, particularly after menopause. Several interventions to reduce frailty and the subsequent risk of stroke and cognitive decline have been proposed in laboratory animals and patients. One of them is whole body vibration (WBV). WBV improves cerebral function and the cognitive ability that deteriorates with increased frailty. The goal of the current study is to test the efficacy of WBV in reducing post-ischemic stroke frailty and brain damage in reproductively senescent female rats. Reproductively senescent Sprague-Dawley female rats were exposed to transient middle cerebral artery occlusion (tMCAO) and were randomly assigned to either the WBV or the no-WBV group. Animals placed in the WBV group underwent 30 days of WBV (40 Hz) treatment performed twice daily for 15 min each session, 5 days each week. The motor functions of animals belonging to both groups were tested intermittently and at the end of the treatment period. Brains were then harvested for inflammatory markers and histopathological analysis. The results demonstrate a significant reduction in inflammatory markers and infarct volume, with significant increases in brain-derived neurotrophic factor and improvement in functional activity, after tMCAO in middle-aged female rats that were treated with WBV as compared to the no-WBV group. Our results may facilitate a faster translation of the WBV intervention for improved outcome after stroke, particularly among frail women.

Introduction

A woman's risk of a stroke increases exponentially following the onset of menopause, and even a mild ischemic episode can result in a woman becoming increasingly frail with age. Frailty is characterized by an increased vulnerability to acute stressors and the reduced capacity of various bodily systems due to age-associated physiological deterioration [1]. Therefore, older women are more likely to experience decreased energy and strength, weight loss, increased susceptibility to disease and physical injury, increased hospitalization, and reduced daily living activities. Our understanding of the link between frailty and cerebrovascular diseases is limited [1]. Thus, understanding the factors that contribute to frailty in women could potentially allow for preventative measures that could decrease or slow down its onset, reduce the risk of stroke, and provide the basis for new treatment options. Exercise is a powerful behavioral intervention that has the potential to improve health outcomes in elderly stroke survivors. Multiple studies using human and animal models have shown that pre-ischemic physical activity reduces the impact of stroke on functional motor outcomes, edema, and infarct volume. The same studies also attributed these benefits to mechanisms of decreasing inflammation and increasing brain-derived neurotrophic factor (BDNF) expression [2][3][4][5][6]. In many cases, however, stroke patients are unable to adhere to a physical activity regimen following their ischemic episodes due to a wide range of individual factors such as stroke severity, preexisting and comorbid conditions, motivation, fatigue, and depression.
As a result, whole body vibration, a procedure mimicking exercise, has been proposed as an alternative to physical therapy [7]. Whole body vibration (WBV) is a novel rehabilitative exercise that uses low-amplitude, low-frequency vibration administered through a platform or Power Plate. WBV shows potential as an effective therapeutic approach and has been studied in a variety of clinical settings that include rehabilitation of patients with chronic stroke [8], spinal cord injury [9], lumbar disk disease and lower back pain syndromes [10], Parkinson's disease [11], elderly with sarcopenia [12,13], chronic obstructive pulmonary disease (COPD) [14], multiple sclerosis [15], obesity, osteoporosis, osteoarthritis and fibromyalgia [16], and children with cerebral palsy [17]. A growing body of evidence in laboratory animals and patients with chronic stroke has shown that WBV reduces or reverses pathological remodeling of bone, and such a treatment could also help reduce frailty-related physiological deterioration [18][19][20]. Although WBV has been shown to be an effective therapy under many different conditions, its specific application in stroke remains unclear. Several studies of WBV in stroke patients [21,22], none of which specifically screened for frailty or pre-frailty, have produced inconclusive results [23]. Also, WBV has not yet been systematically studied specifically in women, who are often more critically affected by stroke than men. Therefore, the goal of our current study is to investigate the effect of WBV on ischemic outcome in the reproductively senescent (RS) female rat model. Our selection of the RS female rat model in this study also adheres to the Stroke Therapy Academic and Industry Roundtable (STAIR) guidelines, which recommend more relevant animal models to better correlate with the aged population. Based on the currently available literature, we hypothesize that the benefit observed from WBV will be similar in mechanism to that of physical therapy (reducing inflammation and increasing BDNF), resulting in reduced post-ischemic injury and improved activity and neurobehavior in reproductively senescent female rats. These results would serve as preliminary translational data for adoption in a clinical trial of pre-frail and frail women after stroke.

Post-Ischemic WBV Reduced Infarct Volume in Middle-Aged Female Rats

Our first hypothesis was that post-ischemic WBV reduces infarct volume. Rats exposed to transient middle cerebral artery occlusion (tMCAO) were treated with WBV or no WBV and, a month later, brain tissue was collected for histopathological assessment (Figure 1A). The results demonstrate a significant reduction in infarct volume in a mild stroke model following WBV treatment as compared to no-WBV rats (Figure 1B,C); we observed a 41% reduction in infarct volume of WBV-treated rats as compared to no-WBV rats. Histological analysis of WBV- or no-WBV-treated rat brains that underwent sham surgery did not show any infarct. In parallel, we also monitored the neurological deficit of rats that were exposed to WBV/no-WBV treatment after tMCAO (Figure 1D). Results demonstrated a significant improvement in the neurological score following WBV as compared to no-WBV rats.
Figure 1 (caption): Post-ischemic WBV treatment shows reduced infarct volume as compared to the no-WBV group (* p < 0.05 as compared to no-WBV using Student's t-test). (D) Neurological deficit (ND) assessment scores were significantly improved in the WBV-treated group as compared to no-WBV (* p < 0.05 as compared to no-WBV using Student-Newman-Keuls).

Post-Ischemic WBV Improved Neuro-Deficit Score and Motor Function in Middle-Aged Female Rats
Secondly, we tested the hypothesis that post-tMCAO WBV treatment improves neurodeficit and motor coordination along with the observed reduction in ischemic damage. The neurodeficit score in each group was more than 9 at baseline when tested at 1 h after tMCAO. Over the period of 7 days, the neurodeficit score was reduced significantly in rats that were treated with WBV (p < 0.05) after tMCAO as compared with the corresponding no-WBV-treated groups. The rotarod test scores from rats receiving WBV treatment as compared to the no-WBV group were significantly higher on day 30 (p < 0.05) at 10, 30, and 40 rotations per minute (rpm). These results demonstrate a significant improvement in functional activity after tMCAO in animals that were treated with WBV as compared to the no-WBV group (Figure 2).

Figure 2 (caption): Post-ischemic WBV improves motor coordination (* p < 0.05 as compared to no-WBV using Student's t-test).

Post-Ischemic WBV Decreased Inflammasome Activation in the Brain of Middle-Aged Female Rats
Western blot results demonstrated a two-fold decrease in the inflammasome proteins caspase-1, apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC), and interleukin-1β in the peri-infarct area of WBV-treated rats. Since the peri-infarct area is salvageable tissue after stroke, for this study we focused on investigating alterations in inflammasome proteins in the peri-infarct area of WBV-treated versus no-WBV rats (Figure 3). Post-ischemic WBV decreased protein levels of caspase-1, ASC, and IL-1β by 88% (p < 0.05), 57% (p < 0.05), and 148% (p < 0.05) in the peri-infarct area as compared to the no-WBV-treated group.

Figure 3 (caption): Protein levels of caspase-1 (A-Top), ASC (B-Top), and IL-1β (C-Top) in the contra-lateral and ipsilateral peri-infarct region of the brain, respectively. Post-ischemic WBV decreases inflammasome proteins caspase-1 (A-Bottom), ASC (B-Bottom), and IL-1β (C-Bottom) in the contra-lateral and ipsilateral peri-infarct region of the brain, respectively (* p < 0.05 as compared to no-WBV using Student's t-test).

Post-Ischemic WBV Increased Brain-Derived Neurotrophic Factor (BDNF) and Trk-B Protein Levels in the Peri-Infarct Area
Studies from various laboratories demonstrate that growth factors play an important role in preserving brain function after ischemia. Therefore, we tested whether WBV treatment after tMCAO increases BDNF release and tyrosine kinase receptor subtype B (Trk-B) signaling in the female brain.
We observed significant increases in the levels of BDNF and pTrk-B in the peri-infarct region of the WBV-treated group as compared to the no-WBV group (Figure 4).
Post-ischemic WBV increased protein levels of BDNF and pTrk-B by 58% (p < 0.05) and 59% (p < 0.05) in the peri-infarct area as compared to the no-WBV-treated group.

Discussion
The current study demonstrates that the post-stroke WBV intervention reduces brain injury in reproductively senescent female rats. Our study also demonstrated that the post-stroke WBV intervention significantly improved neurological and motor capabilities in female rats. The mechanism by which the WBV intervention improved outcomes after stroke is likely multifactorial, similar to that of exercise. The benefits of post-stroke exercise go beyond reduced infarct volume and have been shown to improve motor and cognitive functions. Studies in recent years demonstrate that physical exercise has a profound effect on the normal functioning of the immune system [24][25][26]. Moderate-intensity exercise was shown to be beneficial for immunity, which could be the result of reduced inflammation, thymic mass maintenance, changes in immune cell composition, increased immunosurveillance, and/or amelioration of psychological stress [24][25][26]. It is well known that exercise is an important intervention that can improve immunity and health outcomes in elderly stroke survivors. However, after stroke, patients are unable to exercise or are less likely to adhere to a physical activity regimen following their ischemic episodes. A wide range of individual factors may affect stroke patient participation in physical therapy, including stroke severity, preexisting and comorbid conditions, motivation, fatigue, and depression. Therefore, the current approach to reduce post-stroke inflammation and frailty using WBV has important translational value. The current study demonstrated that post-stroke WBV reduces the pro-inflammatory cytokine IL-1β and inflammasome proteins in the brain of middle-aged female rats. The importance of the inflammasome as a key component of the innate immune response in brain injury has recently been emphasized and targeted for therapeutic interventions [27][28][29][30]. Specifically, the inflammasome was shown to activate caspase-1 and initiate the processing of the inflammatory cytokines IL-1β and IL-18 [31]. In models of brain ischemia, evidence for inflammasome activation has been reported, with elevations in inflammatory proteins such as ASC and caspase-1. Our previously published studies demonstrated elevations in inflammasome proteins in the hippocampus of aged rats [32,33].
Consistent with our findings, others have demonstrated increased pro-inflammatory cytokine levels in middle-aged female rats [34]. It is now well documented that the depletion of estrogens at menopause/reproductive senescence elevates pro-inflammatory cytokines, which may increase the chances of inflammatory diseases in the body, including the brain. This decline in estrogen is also associated with a loss of muscle mass, bone, and strength that represents the core of the frailty syndrome [35,36]. Our use of reproductively senescent female rats closely mimics the age group of peri-menopausal women and the population that is likely to suffer frailty following stroke. Therefore, showing benefits of post-stroke WBV in reducing inflammation in the brain is of translational value. Since post-ischemic inflammation eventually subsides while injured tissue undergoes structural and functional reconstruction, this process may further require the release or presence of a variety of growth factors such as BDNF [37]. In our current study, we observed significant increases in levels of BDNF and pTrk-B in the peri-infarct region after WBV. BDNF, a member of the neurotrophic factor family, is one of the most powerful neuroprotective agents [38][39][40]. BDNF expression is regulated in an activity-dependent manner by physiological stimuli, and its biological effects are mediated through the high-affinity receptor, tyrosine kinase receptor subtype B (Trk-B) [41]. BDNF expression is augmented in neurons by various stressors (e.g., ischemia, epilepsy, hypoglycemia, and trauma [42]), and chronic exposure to BDNF confers neuroprotection. In addition to pro-survival mechanism(s), BDNF also modulates synaptic plasticity and neurogenesis [43][44][45][46]. Direct application of BDNF is neuroprotective in focal and global cerebral ischemia models [47,48].
Importantly, continuous intraventricular administration of BDNF was required for mitigating ischemic brain damage in the aforementioned in vivo studies. Despite BDNF's neuroprotective ability against ischemic damage, treating patients with BDNF remains challenging because BDNF is unable to cross the blood-brain barrier [49,50]. Due to the difficulty of administering BDNF directly to the brain, a model in which BDNF is increased intrinsically has been proposed. Several studies have shown a strong correlation between increased levels of circulating BDNF and exercise, yet no studies have shown an increase in BDNF levels with WBV. One study has shown that exercise in mice is effective at preventing a decrease in BDNF levels in the CA1 and dentate gyrus that would otherwise be caused by exposure to arsenic [51]. It has been proposed that training to volitional fatigue is the optimal way to increase circulating BDNF levels in elderly participants [52]. Intravenous BDNF delivery enhances post-stroke sensorimotor recovery and stimulates neurogenesis [53]. It has also been demonstrated that BDNF up-regulation following exercise is associated with a robust activation of survival pathways that enhance adult neurogenesis in experimental animals [54,55]. Currently, it is unknown whether WBV leads to increases in hippocampal BDNF and whether this response promotes neurogenesis associated with improved cognitive outcome after stroke, but we suspect that this may be the missing link between WBV and exercise. The caveats of the current study are that (1) it lacks a mechanistic approach to prove the role of either inflammation or BDNF in WBV-mediated ischemic protection, and (2) the effects of post-stroke WBV were only tested in RS female rats. Therefore, the observed improvement in motor function and reduced infarct volume cannot be generalized to both sexes. In conclusion, the results of our study demonstrate that the post-ischemic WBV intervention reduces brain injury and frailty in reproductively senescent female rats, suggesting WBV may be a potential therapy to reduce post-ischemic frailty and improve functional and cognitive outcomes in women after stroke. Our use of reproductively senescent female rats closely mimics the age group of peri-menopausal women and is clinically relevant, as it is estimated that 7 million American adults are living with a stroke and the majority of them are post-menopausal women. This is particularly important because we now know that stroke disproportionately kills more women than men. Although women are naturally protected against stroke in their pre-menopausal life, a woman's risk of stroke increases exponentially after menopause. The decline in ovarian hormones, especially estrogen, at menopause is associated with a loss of muscle mass, bone, and strength that represents the core of the frailty syndrome [35,36]. Whole body vibration, as a simple and inexpensive intervention that can be administered at home, has great potential to aid in the prevention and treatment of post-stroke frailty. Future pre-clinical studies investigating the specific mechanism of post-stroke frailty and the efficacy of WBV in improving post-stroke frailty and other stroke outcomes can lead to its clinical translation.

Materials and Methods
All animal procedures were carried out in accordance with the Guide for the Care and Use of Laboratory Animals published by the U.S.
National Institutes of Health and were approved (protocol # 17-034; 03-08-2017) by the Animal Care and Use Committee of the University of Miami, Florida, USA. Retired breeder (9-12 months) Sprague-Dawley female rats (280-350 g) were purchased, and their estrous cycles were checked for 14-20 days before experimentation by daily vaginal smears [56]. Rats that persisted in a single stage for 7 days were considered acyclic. The acyclic rats and rats that remained in constant diestrus were considered reproductively senescent (RS) and were used in the study [57]. Reproductively senescent rats were randomly exposed to 60 min of transient middle cerebral artery occlusion (tMCAO) or sham surgery. Transient MCAO was adapted from previous publications [58,59]. tMCAO was achieved by intraluminal suture. A 30-mm-long 3-0 nylon monofilament suture coated with silicone (Doccol) was placed 19-20 mm into the internal carotid artery to occlude the ostium of the MCA. The suture was left in the MCA for 60 min, and the drop in cerebral blood flow was confirmed using laser Doppler flowmetry (LDF, Perimed Inc., Ardmore, PA, USA). For the sham surgical procedure, rats were exposed to anesthesia for a period similar to that of the tMCAO group. Physiological parameters, including pCO2, pO2, and pH, were maintained within normal limits throughout the surgery or sham surgery. Mean arterial blood pressure (MABP) was continuously monitored, and head and body temperatures were maintained at 37 °C. One day after the tMCAO, animals were randomly assigned to (1) a WBV intervention group or (2) a no-WBV group. Animals randomized to the WBV group underwent 30 days of treatment performed twice daily for 15 min each session, 5 days each week. The vibration device was programmed to achieve a vibration frequency of about 40 Hz (0.3 g), similar to those used in clinical studies [9,60,61]. The duration and frequency of sessions were selected based on our recent publication [18], where we demonstrated the ability of WBV to improve selected biomarkers of bone turnover and gene expression and to reduce osteoclastogenesis after spinal cord injury. The no-WBV animals post-tMCAO were also placed on the platform without activation. To provide the WBV intervention, animals were placed in a plexiglass box that contained four chambers. One rat was placed into each chamber in a random order from one session to the next to avoid any bias due to chamber placement. The vibration parameters were measured in each chamber, and differences in these parameters between the chambers were negligible. Rats exposed to WBV or no-WBV treatment after tMCAO were allowed to survive for a month for histopathological assessment. At one month, rats were anesthetized and perfused via the ascending aorta with FAM (a mixture of 40% formaldehyde, glacial acetic acid, and methanol, 1:1:8 by volume) for 20 min after first being perfused for 2 min with saline. The rat heads were immersed in FAM for 1 day before the brains were removed. The brains were kept in FAM at 4 °C for at least 1 additional day, and then coronal brain blocks were embedded in paraffin. All brains were cut into 10-µm-thick sections at 9 standard levels from 5.5 mm to −7.5 mm from bregma to span the entire infarcted area. Sections at the 9 levels were stained with hematoxylin and eosin to visualize the infarcted areas and to calculate infarct volumes.
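The volume calculation itself was performed with an MCID image analysis system (described next). Purely as an illustration of the underlying arithmetic, the following minimal Python sketch integrates per-level infarct areas over the anterior-posterior axis with the trapezoid rule; all level positions and area values here are hypothetical, not the study's data:

```python
# Hypothetical sketch: infarct volume from infarct areas (mm^2) traced on
# H&E sections at 9 anterior-posterior levels (mm from bregma). The study
# quantified volumes with an MCID image analysis system; this only
# illustrates the area-over-distance integration that such a pipeline does.
import numpy as np

levels_mm = np.array([5.5, 4.0, 2.5, 1.0, -0.5, -2.0, -3.5, -5.5, -7.5])
areas_mm2 = np.array([0.0, 1.1, 4.6, 9.3, 11.0, 8.5, 3.9, 0.8, 0.0])  # made up

def infarct_volume_mm3(levels, areas):
    """Trapezoid-rule integral of area along the AP axis -> volume (mm^3)."""
    order = np.argsort(levels)  # integrate from most posterior to most anterior
    return float(np.trapz(areas[order], levels[order]))

print(f"infarct volume ~ {infarct_volume_mm3(levels_mm, areas_mm2):.1f} mm^3")
```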
Electronic images of the tissue sections were obtained using a CCD camera, and infarct volume was quantified using an MCID image analysis system [62].

Neurodeficit Scoring and Motor Deficit Test
A standardized neurobehavioral test battery was conducted as described previously [62]. This test consists of quantifications of postural reflex, sensorimotor integration, and proprioception. The total neurodeficit score ranged from 0, indicating normal results, to a maximal possible score of 12, indicating a severe deficit. To further test motor function, we performed the rotarod test as described in our previous publication [63]. In this test, the rats were placed on the rotarod cylinder, and the time that animals remained on the rotarod was measured. The speed was slowly increased from 10 to 40 rpm over 5 min. The trial ended if a rat fell off the device or spun around for 2 consecutive revolutions without attempting to walk. The rats were trained for 3 consecutive days before undergoing the MCAO procedure. The average duration (in seconds) on the machine was recorded from 3 different rotarod measurements 1 day prior to surgery. Motor function data are presented as the percentage of mean duration (3 trials) on the rotarod compared to the internal baseline control (before surgery). The rats were tested at 1, 15, and 30 days after MCAO.

Statistical Analysis
The data are shown as the mean value ± SEM or median ± SEM, and the results from the densitometric analysis were analyzed by a two-tailed Student's t-test. The neurodeficit score was analyzed with a two-way repeated-measures ANOVA followed by the Student-Newman-Keuls test. A p < 0.05 was considered statistically significant.
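As a minimal sketch of the simpler analyses described above, the following Python fragment runs a two-tailed Student's t-test and computes the rotarod percent-of-baseline normalization; the group values are made up for illustration, and the two-way repeated-measures ANOVA with the Student-Newman-Keuls post-hoc test is not reproduced here:

```python
# Hedged sketch with hypothetical numbers: two-tailed Student's t-test on
# densitometric/volume data, plus the rotarod percent-of-baseline metric.
import numpy as np
from scipy import stats

wbv = np.array([9.8, 11.2, 8.9, 10.4, 9.1])        # hypothetical values
no_wbv = np.array([16.5, 18.1, 17.2, 15.9, 19.0])  # hypothetical values
t, p = stats.ttest_ind(wbv, no_wbv)                # two-tailed by default
print(f"t = {t:.2f}, p = {p:.4f} (significant if p < 0.05)")

baseline_s = np.array([120.0, 118.0, 125.0])  # 3 pre-surgery trials, seconds
day30_s = np.array([98.0, 105.0, 101.0])      # 3 trials on day 30
print(f"day 30 = {100 * day30_s.mean() / baseline_s.mean():.0f}% of baseline")
```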
Pulmonary Vascular Complications in Hereditary Hemorrhagic Telangiectasia and the Underlying Pathophysiology

In this review, we discuss the role of transforming growth factor-beta (TGF-β) in the development of pulmonary vascular disease (PVD), both pulmonary arteriovenous malformations (AVMs) and pulmonary hypertension (PH), in hereditary hemorrhagic telangiectasia (HHT). HHT, or Rendu-Osler-Weber disease, is an autosomal dominant genetic disorder with an estimated prevalence of 1 in 5000 persons, characterized by epistaxis, telangiectasia, and AVMs. In more than 80% of cases, HHT is caused by a mutation in the ENG gene on chromosome 9, encoding the protein endoglin, or the activin receptor-like kinase 1 (ACVRL1) gene on chromosome 12, encoding the protein ALK-1, resulting in HHT type 1 or HHT type 2, respectively. A third disease-causing mutation has been found in the SMAD-4 gene, causing a combination of HHT and juvenile polyposis coli. All three genes play a role in the TGF-β signaling pathway that is essential in angiogenesis, where it plays a pivotal role in neoangiogenesis, vessel maturation, and stabilization. PH is characterized by elevated mean pulmonary arterial pressure caused by a variety of different underlying pathologies. HHT carries an additional increased risk of PH because of high cardiac output as a result of anemia and shunting through hepatic AVMs, or the development of pulmonary arterial hypertension due to interference with the TGF-β pathway. HHT in combination with PH is associated with a worse prognosis due to right-sided cardiac failure. The treatment of PVD in HHT includes medical or interventional therapy.

Hereditary Hemorrhagic Telangiectasia
Hereditary hemorrhagic telangiectasia (HHT), also known as Rendu-Osler-Weber disease, is an autosomal-dominant inherited disease with an estimated prevalence of 1 in 5000 individuals, and higher in certain regions [1]. HHT can initially present with spontaneous recurrent epistaxis and mucocutaneous telangiectases. However, HHT is also frequently complicated by arteriovenous malformations (AVMs) in the lung, brain, liver, and digestive system [2]. Unfortunately, HHT is still underdiagnosed, and entire families remain unaware of available screening and treatment opportunities [2][3][4]. Diagnosing HHT can be done through genetic testing or by use of the clinical Curaçao criteria framework. The Curaçao diagnostic criteria for HHT consist of the following [5]:
• Frequent and recurrent epistaxis, which may be mild to severe
• Multiple telangiectases at characteristic sites: lips, oral cavity, fingers, and nose
• AVMs or telangiectases in one or more of the internal organs (lung, brain, liver, intestines, stomach, and spinal cord)
• A 1st-degree relative with HHT
A diagnosis of HHT is considered confirmed if at least three criteria are present, and possible with two of the criteria listed above [6].
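Read as a decision rule, the Curaçao framework is simply a count of the four criteria. The following minimal Python sketch encodes the thresholds quoted above (at least three criteria: definite; two: possible); the "unlikely" label for fewer than two criteria is an assumption of this sketch, and the function is illustrative only, not a validated clinical tool:

```python
# Minimal counting-rule sketch of the Curaçao criteria described above.
def curacao_diagnosis(epistaxis: bool, telangiectases: bool,
                      visceral_lesions: bool, first_degree_relative: bool) -> str:
    n = sum([epistaxis, telangiectases, visceral_lesions, first_degree_relative])
    if n >= 3:
        return "definite HHT"
    if n == 2:
        return "possible HHT"
    return "HHT unlikely"  # assumed label for < 2 criteria

print(curacao_diagnosis(True, True, True, False))  # -> "definite HHT"
```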
Currently, there are five different mutations known to cause HHT; this has led to the subdivision of HHT into five subtypes [7]. These mutations do not lead to abnormal proteins but cause haploinsufficiency, which leads to a reduced concentration of functional proteins as well as an imbalance in the TGF-β signaling pathway [8]. The various types of HHT can be subdivided based on the genetic mutation in the TGF-β signaling pathway [2]:
• HHT1 is caused by mutations in the ENG gene (cytogenetic location 9q34.1; OMIM 187300, encoding the protein endoglin). HHT type 1 is characterized by a higher prevalence of pulmonary and cerebral AVMs, mucocutaneous telangiectasia, and epistaxis compared to HHT type 2 [2,9,10].
• HHT type 2 is caused by a mutation in the ACVRL1 gene (cytogenetic location 12q13.13; OMIM 600376, encoding the ALK-1 protein) and has a higher prevalence of hepatic AVMs compared to HHT type 1 [2,9,11].
• HHT type 3 and HHT type 4 are linked with mutations on chromosomes 5 and 7, respectively; however, the exact genes remain unknown [12][13][14].
• HHT type 5 is caused by a mutation in the Growth Differentiation Factor 2 gene (GDF-2) that codes for Bone Morphogenetic Protein 9 (BMP9) (OMIM 615506), which expresses an HHT-like phenotype and is therefore classified as HHT type 5 [14,15].
• Mutations in the SMAD4 gene (cytogenetic location 18q21.2; OMIM 175050) can cause a rare syndrome that is a combination of juvenile polyposis and HHT. This mutation is only found in 1-2% of HHT patients [2,14,16].
Excessive TGF-β activation contributes to the development of a variety of diseases, including cancer, autoimmune disease, vascular disease, and progressive multi-organ fibrosis. The TGF-β signaling pathway is involved in many cellular processes, including cell growth, cell differentiation, apoptosis, and cellular homeostasis, among others [17,18] (Figure 1). In endothelial cells (ECs), TGF-β can signal through two type-1 receptors: the ALK-5 pathway, in which SMAD 2 and 3 are activated, and the ALK-1 pathway, in which SMAD 1, 5, and 8 are activated [2,7] (Figure 1). All the known mutations in genes that cause HHT are found in the TGF-β signaling pathway. Several studies show a biphasic effect of TGF-β in ECs [19,20]. At low concentrations, TGF-β is pro-angiogenic, while at high concentrations TGF-β inhibits angiogenesis [20,21]. The ALK-1 and ALK-5 receptors play a pivotal role in the angiogenesis and quiescence of ECs [22]. Activation of the ALK-5 pathway provides a quiescent phenotype in which proliferation and migration are inhibited while stabilization of vessels by pericytes is stimulated [22]. In contrast to ALK-5, activation of the ALK-1 receptor induces proliferation and migration and increases VEGF expression [23][24][25]. ALK-1 and ALK-5 not only have opposite responses but are also co-dependent, as the presence of ALK-5 is necessary for maximum ALK-1 activation [22]. Studies have shown that in ALK-5 KO mice both the ALK-5 and ALK-1 pathways are defective [24,26]. Furthermore, ALK-1 can directly antagonize the ALK-5/SMAD-2/3 pathway at the level of the SMADs [24,26]. BMP-9 is a ligand involved in the TGF-β/ALK-1 complex; high concentrations of BMP-9 in vitro and ex vivo inhibit proliferation and migration of ECs [24,33]. In vivo, however, several studies have observed that BMP-9 in combination with TGF-β induces angiogenesis [34,35]. As with TGF-β, the functional outcome of BMP-9 in ECs appears to be dependent on multiple factors, including ligand concentration and cellular context [24].
Mutations in the ENG and ACVRL1 genes alter the ligand-receptor interaction, creating an imbalance between the ALK-1 and ALK-5 pathways that dysregulates Vascular Endothelial Growth Factor (VEGF). VEGF is stimulated by the ALK-5 pathway and inhibited by the ALK-1 pathway [39,40]. Although ALK-1 and ENG are expressed during angiogenesis, their expression is suppressed in adults [40]. Increased ALK-1 signaling occurs in adults in cases of tumor growth, wound healing, or inflammation [41,42]. This allows for the creation of a thin-walled arteriovenous complex that is exposed to increased arterial blood flow and increased pressure [43]. VEGF appears to play an important role in HHT patients, with a 10-fold increased VEGF plasma concentration compared to non-HHT controls [14,44]. No difference was seen in VEGF plasma concentrations between HHT1 and HHT2 patients [45]. Several studies with ACVRL1-deficient mice showed that no AVMs occur if VEGF concentrations are normal [25,46]. Although ENG and ACVRL1 mutations cause HHT-1 and HHT-2, it is noteworthy that HHT vascular lesions only occur in certain organs and are not expressed throughout the body [47,48]. One theory that can explain this phenomenon is the second-hit hypothesis. As with other genetic diseases, it is believed that an external trigger, or second hit, such as vascular damage, inflammation, infection, or angiogenic stimuli, must occur to cause a second genetic mutation of a healthy HHT gene, which enhances endoglin haploinsufficiency [8,49]. In resting ECs, endoglin is present at low concentrations, but when cells are actively proliferating, or during angiogenesis and embryogenesis, the endoglin concentration is increased [8,48,[50][51][52]. Several studies with knock-out (KO) animal models (KO ENG and ACVRL1) have shown that a local external trigger, such as damage or stimulation of VEGF, causes the formation of AVMs [8,48,53]. Thus, in the haploinsufficient HHT setting where a second hit occurs, endoglin and ALK1 do not reach the minimum concentration necessary to perform their roles in response to vascular damage [8,48,[54][55][56]. Further genome analysis of HHT families with phenotype variability, as well as families with HHT whose genetic causes are unknown, may be useful to identify new genes that may explain the heterogenic spectrum [9].

Pulmonary Arteriovenous Malformations in HHT
Pulmonary AVMs (PAVMs) are a direct connection between a pulmonary artery and a pulmonary vein without the interposition of the pulmonary capillary bed. This results in an intrapulmonary right-to-left shunt with no gas exchange and a reduced filtering capacity of the pulmonary capillary bed. PAVMs are frequently underdiagnosed and asymptomatic [57]. PAVMs can be present from birth and are mainly fully developed in adulthood. However, PAVMs can continue to grow later in life, for example, during pregnancy or with changes in pulmonary hemodynamics [58,59]. The size of the right-to-left shunt determines the degree of hypoxemia, increased ventilation, and cardiac output (CO) [59][60][61]. PAVMs can further result in rare but severe complications like massive hemoptysis, hemothorax, cerebrovascular events, and abscesses [62]. The estimated prevalence of PAVMs is 38 in 100,000 individuals [63]. Approximately 15-50% of HHT patients have PAVMs [57,64,65]. However, 80-90% of patients with PAVMs have HHT as the underlying cause. The prevalence of PAVMs in HHT depends on the type of mutation: mutations in ENG have a higher prevalence (62%) compared to mutations in ACVRL1 (10%) [66].
PAVMs are more common in women than in men, and because pregnancy is a risk factor for PAVM-related complications, it is important to screen high-risk patients for their presence. The second international clinical guideline for the diagnosis and management of HHT recommends screening all patients with possible or confirmed HHT for pulmonary AVMs [4]. Ninety percent of PAVMs have a single feeding artery and are called simple PAVMs. In 5% of cases, there is a complex PAVM involving two or more feeding arteries from different segments. In another 5% of cases, there are diffuse PAVMs involving many feeding arteries [10,67,68]. The degree of pulmonary shunt can be graded by the number of microbubbles found in the left heart on transthoracic contrast echocardiography (TTCE). The presence of a moderate or large shunt is an independent predictor of cerebrovascular events and brain abscesses [71]. In case of a positive TTCE, chest CT pulmonary angiography should be performed to identify treatable PAVMs [4]. However, chest CT pulmonary angiography can be withheld in case of a small right-to-left shunt, because in this group either no PAVM is found or the PAVMs are too small for embolization therapy [72]. Screening for PAVMs in (asymptomatic) HHT patients is justified given the good treatment options and the non-invasive examination, reducing the risk of serious complications [4,73]. It remains unknown what the optimal screening interval is in HHT patients without a pulmonary shunt at the initial presentation [74]. The international clinical guidelines for the diagnosis and management of HHT recommend treating PAVMs with transcatheter embolotherapy through the use of detachable coils or plugs, frequently preventing surgery [4]. Treatment of PAVMs is discussed further in Section 4.

Pulmonary Hypertension Caused by HHT
PH is a condition of increased blood pressure within the pulmonary arteries. PH has been defined as a mean pulmonary arterial pressure (PAP) ≥25 mmHg at rest as assessed by right-heart catheterization [6]. In the presence of a low pulmonary artery wedge pressure (≤15 mmHg), the PH is called pre-capillary. Within the clinical classification of PH, multiple clinical conditions have been categorized into five groups [6] (per the standard classification: group 1, pulmonary arterial hypertension (PAH); group 2, PH due to left heart disease; group 3, PH due to lung diseases and/or hypoxia; group 4, chronic thromboembolic PH; and group 5, PH with unclear and/or multifactorial mechanisms). There are no data describing the prevalence of PH per group. In an echocardiographic study, the prevalence of PH (estimated pulmonary artery systolic pressure >40 mmHg) was 11%, with 79% of patients suffering from left heart disease and 10% from lung diseases [6]. A study by Peacock et al. (2007) showed that the prevalence of PAH in Europe is 15-50 subjects per million individuals [6]. Diagnosing PH in HHT can be challenging. Symptoms such as fatigue, dyspnea, and exercise intolerance occur in PH but also in HHT itself, due to anemia, hypoxemia associated with PAVMs, inadequate sleep due to epistaxis, and the psychological burden of a chronic illness [2,6,75]. A transthoracic echocardiogram (TTE) should always be performed when PH is suspected. TTE provides different echocardiographic variables, such as an estimation of the PAP and secondary signs, to assess the probability of PH [6]. A right heart catheterization should be performed to confirm the diagnosis if treatment of PH is being considered [6]. PH in HHT can be divided into two groups, pre-capillary PH and post-capillary PH, based on the underlying etiology. One of the subgroups of PAH, a disease with a pre-capillary hemodynamic profile and an increased pulmonary vascular resistance (PVR > 3 Wood units), is heritable PAH (HPAH) [76].
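The hemodynamic definitions above can be expressed as a simple rule over right-heart catheterization values. The following Python sketch encodes only the thresholds quoted in this section (mPAP ≥25 mmHg at rest, wedge pressure ≤15 mmHg for pre-capillary PH, PVR > 3 Wood units for the PAH-type profile); it is an illustration of the definitions, not clinical decision software:

```python
# Illustrative classifier built from the thresholds quoted above.
def classify_ph(mpap_mmhg: float, pawp_mmhg: float, pvr_wu: float) -> str:
    if mpap_mmhg < 25:
        return "no PH by this definition"
    if pawp_mmhg <= 15:
        if pvr_wu > 3:
            return "pre-capillary PH with PVR > 3 WU (PAH-type profile)"
        return "pre-capillary PH"
    return "post-capillary PH (e.g., left heart disease or high-output state)"

print(classify_ph(mpap_mmhg=32, pawp_mmhg=10, pvr_wu=4.2))
```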
Several studies have shown that the bone morphogenetic protein receptor type II (BMPR2) gene is mutated in 70% of cases of HPAH [77][78][79][80]. BMPR2 is a receptor of the TGF-β superfamily; it binds ligands that influence processes involved in proliferation, migration, differentiation, and apoptosis [76]. ECs additionally regulate vascular function by controlling the production of vasoconstrictors and vasodilators and the activation and inhibition of smooth muscle cells (SMCs) [2]. BMPR2 is a serine/threonine receptor kinase involved in EC apoptosis and prevents arterial damage and unfavorable inflammation [15,[81][82][83]. Dysfunction of the BMPR2 gene results in hypertrophy of the SMCs, deposition of extracellular matrix, proliferation of endothelial cells, and an increase in adventitial fibroblasts. This leads to endothelial dysfunction, a decreased production of vasodilatory and anti-proliferative agents (NO and prostaglandin), and an increased production of agents that promote vasoconstriction and proliferation (thromboxane A2 and endothelin-1). This results in an increased PVR and leads to right ventricular overload, hypertrophy, and dilatation, eventually leading to death [2,6]. Less than 1% of patients with HHT suffer from HPAH caused by a mutation in the ACVRL1 gene [9]. Up to 20% of the known ACVRL1 mutations are associated with the development of PAH. Research by Vorselaars et al. showed that very few cases are known in the literature with the combination PAH-HHT [2]. In total, 113 cases have been described in the literature, of which 18 patients have an ENG mutation (PAH-HHT1) and 79 patients an ACVRL1 mutation (PAH-HHT2). No data on the prognosis of patients with PAH-HHT1 are described in the literature, but it is expected that the prognosis is worse compared to patients with only HHT. Patients with PAH-HHT2 often present younger and have a worse prognosis; for example, 28% of patients with PAH-HHT2 are <18 years old [2]. However, a majority of family members of patients with HPAH in combination with HHT do not develop HPAH, indicating that other genetic or environmental factors are required to develop an HPAH phenotype [84]. HPAH patients with ACVRL1 mutations, frequently without overt HHT, tend to present with symptoms at a younger age than patients with a BMPR2 mutation or idiopathic PAH [85]. Although patients with HPAH respond better to treatment with vasodilators, HPAH develops more progressively compared to idiopathic PAH [79,86]. A study by Lee et al. showed that patients with PAH-HHT have significantly lower three-year survival rates compared to patients with a BMPR2 mutation or idiopathic PAH (53% vs. 74%) [87]. Post-capillary PH arises from a hyperdynamic state caused by an increased cardiac output (CO), which can cause heart failure in the long term [6]. Within HHT, the increased CO (sometimes up to three times normal) is frequently caused by a hepatic AVM (HAVM)-related shunt [75,88]. HAVMs occur in 32-78% of patients with HHT and are mainly seen in HHT2, caused by ACVRL1 mutations [2,71]. In addition to HAVMs, the CO can also be increased due to anemia caused by epistaxis and gastrointestinal bleeding, which is frequent in patients with HHT. International guidelines recommend that screening for HAVMs should be offered to adults with definite or suspected HHT. Doppler ultrasonography, multiphase contrast CT, and MRI can be used to screen for HAVMs [4].
Treatment and Follow-Up of Pulmonary Vascular Disease in HHT
Treatment of PAVMs in HHT is recommended to prevent severe complications, in particular the development of brain abscesses and cerebral ischemic events, and is therefore justified even in asymptomatic patients. In addition, symptoms of hypoxia and dyspnea can be reduced by PAVM treatment [4]. Embolization therapy is performed by transcatheter vaso-occlusion of PAVMs through the use of detachable coils or plugs. Complications that can arise with vaso-occlusion are pleural pain and pleural effusion, which improve when treated symptomatically [89]. Because a right-to-left shunt increases the risk of infections and ischemic events, treatment of PAVMs is indicated when the supply vessels are ≥3 mm, or as small as technically feasible; cannulation and embolization of smaller vessels can be more challenging. It is important to embolize as distally as possible to ensure that other well-functioning branches of the pulmonary vasculature are not occluded as well [90,91]. In the long term (2-21 years), the vaso-occlusion success rate is 75%. This is probably due to the development of other supply vessels that were not visible during the initial procedure or the growth of other PAVMs [92]. International guidelines recommend providing long-term follow-up in patients with PAVMs in order to detect growth of untreated PAVMs. They also recommend advising patients to use antibiotics before procedures with a risk of bacteremia, to avoid SCUBA diving, and to take extra care when intravenous access is in place to avoid air emboli [4]. Sometimes, PAVMs are complex and diffuse, involving pulmonary arteries from different segments. This group is difficult to treat; surgery might be an alternative to percutaneous treatment, and in some cases lung transplantation might be the only option left [90,91]. Despite the fact that local treatment of telangiectases and PAVMs continues to improve, no ideal systemic therapy is available to date. Various studies and trials have attempted to find new drugs and have investigated the possibilities for repurposing existing drugs [2,93]. Currently, anti-angiogenic drugs used in cancer treatment (anti-VEGF antibodies and tyrosine kinase inhibitors) are under investigation with the aim of inhibiting the pro-angiogenic processes in HHT (Figure 2). VEGF plays a role in the development of AVMs, and anti-VEGF therapy has been shown to be effective in the treatment of other AVMs. Several case reports described treatment of diffuse PAVMs with bevacizumab in which respiratory symptoms improved and epistaxis decreased, without the formation of new AVMs on chest CT during follow-up [94]. Bevacizumab is a monoclonal antibody used in the treatment of cancer. This antibody acts on vascular endothelial growth factor (VEGF) to inhibit neoangiogenesis. VEGF promotes angiogenesis and interacts with ECs. In tumors, wound healing, and HHT, high VEGF concentrations are observed [14,44,[95][96][97]. Although these results are hopeful, further scientific research is needed before bevacizumab can be used in the treatment of PAVMs. Treatment differs for the different types of PH in HHT. There have not yet been randomized controlled trials to inform a guideline for PAH-specific therapy in HHT [2].
Standard therapy for PAH is currently recommended in patients with heritable PAH based on HHT-specific mutations. The aim of current treatment is to unload the right ventricle and reduce symptoms, thus improving quality of life. Treatment of PAH consists of lifestyle advice and drug therapy. Lifestyle advice includes avoiding pregnancy and infections, having elective surgery performed in specialized centers with experience in PH, genetic testing of family members, oxygen, psychological assistance, and water and salt reduction [4,6]. Drug therapy for PAH consists of calcium channel blockers for patients responding well to an invasive vasodilation test, endothelin receptor antagonists, phosphodiesterase type 5 inhibitors, soluble guanylate cyclase stimulators, and prostacyclin [2,6]. In young therapy-resistant patients, lung transplantation can be performed as well. Different drugs are currently under investigation for the treatment of PAH. Tacrolimus is a drug used to prevent rejection after allogenic organ transplantation [98]. The precise effect of tacrolimus is unknown. It is thought that tacrolimus engages the BMP9-ALK1-ENG-SMAD pathway and stimulates the transcription of ENG and ALK-1 in ECs, reducing haploinsufficiency [8] (Figure 2). In addition, it appears that tacrolimus activates the BMPR-2 signaling pathway, which is suppressed in PAH [98,99]. Research by Albinana et al. showed that, after treatment with tacrolimus, ECs had increased mRNA and protein expression of endoglin and ALK-1. This stimulates the TGF-β/ALK-1 pathway and EC functions such as tubulogenesis and cell migration [97,100,101]. Furthermore, reduced VEGF activity was also seen in animal models [97,102]. A few case reports describe the use of tacrolimus in PAH with promising results [98]. A phase 2b randomized controlled study by Spiekerkoetter et al. (2017) showed an increase in BMPR2 expression and improvements in the 6-min walk distance and in serological and ultrasound parameters of heart failure. However, these improvements were not significant, probably due to the small group size, and require further research [103]. A recent case report has shown that low-dose tacrolimus treatment improved HHT-related epistaxis but had no effect on PH progression in HHT patients [104]. However, tacrolimus is poorly soluble in plasma and has a low bioavailability, so local availability in the lung may not be optimal when it is given at a low dose [105]. Systemic therapy with tacrolimus can also cause side effects of neuro- and nephrotoxicity. A study by Wang et al. showed that it is possible to administer tacrolimus in aerosol form by using biodegradable polymeric acetalated dextran nanoparticles (Ac-Dex NP), which can deliver tacrolimus deep into the lungs [106]. Due to local delivery, and therefore bypassing the first-pass effect, local concentrations of tacrolimus might be achieved with less risk of systemic side effects. PH due to left heart disease can be treated by means of salt reduction and diuretics. Embolization of liver AVMs can cause serious complications, such as biliary ischemia. The treatment of choice might be intravenous bevacizumab, recently recommended in the international guideline for HHT [4]. Several small non-randomized studies show that the use of bevacizumab in PH due to hepatic AVMs improved the cardiac output, pulmonary arterial pressure, and left ventricular filling pressure, and reduced the progression of HAVMs [4,[107][108][109].
Tacrolimus has been demonstrated to be a potent ALK-1 signaling mimetic that downregulates the ALK-1 loss-of-function transcriptional response; tacrolimus is therefore an interesting option for the treatment of HAVMs in HHT2 with high-cardiac-output PH [8]. Secondly, if anemia is present, the underlying etiology should be treated to reduce the high CO.

Conclusions
In this review, we discussed the pathophysiology, screening, and treatment of PVD, both PAVM and PH, in HHT. Research into the pathophysiology of these mutations has led to potential targets for therapy such as tacrolimus and bevacizumab. Although case reports show promising results, scientific evidence is still insufficient to use these therapies in daily practice. Further research is required, and it is reasonable to assume that clinical trials will follow.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest: The authors declare no conflict of interest.
Ras, TrkB, and ShcA Protein Expression Patterns in Pediatric Brain Tumors

Numerous papers have reported altered expression patterns of Ras and/or ShcA proteins in different types of cancers. Their level can potentially be associated with oncogenic processes. We analyzed samples of pediatric brain tumors representing different groups, such as choroid plexus tumors, diffuse astrocytic and oligodendroglial tumors, embryonal tumors, ependymal tumors, and other astrocytic tumors, as well as tumor malignancy grade, in order to characterize the expression profile of Ras, TrkB, and the three isoforms of ShcA, namely p66Shc, p52Shc, and p46Shc proteins. The main aim of our study was to evaluate the potential correlation between the type of pediatric brain tumor, tumor malignancy grade, and the expression patterns of the investigated proteins.

Introduction
It has been previously demonstrated that Ras activity depends on ShcA recruitment upon tyrosine kinase-associated receptor (Trk) activation [1]. Abnormal action of proteins involved in this pathway may be strictly associated with oncogenic processes [2,3], especially because Ras activation has an impact on downstream oncogenes such as Raf and Rac [4][5][6], which regulate cell proliferation. The role of ShcA proteins in oncogenic processes has been intensively studied in many models, and their functions in tumors of the nervous system have gained much attention. ShcA has three protein isoforms, p46Shc, p52Shc, and p66Shc, which have similar structures and adaptor functions [1]. Classically, all ShcA proteins located in the cytoplasm and ER are involved in growth factor signaling, as they become tyrosine phosphorylated in response to growth factors binding to tyrosine receptor-associated kinases at the plasma membrane [7,8]. The signal is transmitted by the ShcA complex with Grb2, which recruits the Son of Sevenless (Sos) protein; Sos exchanges GDP for GTP on the Ras protein, leading to its activation. The shorter ShcA isoforms (p46Shc and p52Shc) facilitate Ras activation after Grb2-Sos complex formation and binding [9]. p66Shc, the longest isoform of the Shc adaptor protein 1 subfamily, possesses an amino-terminal extension and collagen homology domain 2 (CH2), which, when phosphorylated, prevents Grb2-Sos complex formation and therefore Ras activation [1,7,9,10]. The CH2 domain of p66Shc also contains a serine phosphorylation site that undergoes phosphorylation in response to oxidative stimuli [10]. The pro-oxidant p66Shc-associated pathway has been described as an important factor contributing to multiple pathologies by increasing oxidative stress and damage in diabetes [11], cardiovascular disorders [12], neurodegeneration [13], and mitochondrial disorders [14]. p66Shc is also an important determinant of mammalian aging [10,15] and is involved in cancer development and progression [3,[16][17][18][19][20]. ShcA participation in the signal transduction pathway regulating proliferation can potentially be associated with the oncogenic process, which relies on the abnormal level or activation of plasma membrane receptors. Therefore, mutations leading to an increased level or activity of certain tyrosine kinase-associated receptors may result in constitutive Ras activation and uncontrolled proliferation [21]. Ras proteins (H-, K-, N-) are small GTPases that regulate multiple signaling pathways across membranes involved in cell adhesion, growth, migration, and survival [22], among other processes.
Ras proteins are frequently mutated in different types of cancer, and their oncogenic behavior (in both tumor onset and the progression of transformed cells) is driven mainly by missense mutations that cause constitutive activation of downstream RAS effectors [23,24]. In detail, H-RAS is the least frequently mutated isoform, accounting for 4% of RAS mutations, followed by N-RAS with 13% and, finally, K-RAS with 83% [25]. These three isoforms tend to have a distinct distribution among tumors; indeed, while K-RAS mutations are frequently detected in colon and pancreatic malignancies, those affecting N-RAS are found in myeloid leukemia and H-RAS in bladder cancer [26]. Interestingly, the studies of Davol et al. (2003), Frackelton et al. (2006), and Grossman (2007) showed that decreased levels of p66Shc as well as increased phosphorylation of ShcA proteins at tyrosine residues can be good markers for the diagnosis and prognosis of breast cancer and stage IIA colon cancer, respectively [27][28][29]. Moreover, another study showed that an increased level of p66Shc has been found in highly metastatic variants of the human breast cancer cell line MDA-MB-231 [30,31]. In neuronal cells, there are different tyrosine kinase-associated receptors, among which types A, B, and C can directly activate kinases such as PLCγ1 or indirectly transmit the mitogenic signal to downstream effectors such as Ras, Raf, or extracellular signal-regulated/mitogen-activated kinases (MAPK/ERK) via Shc adaptor coupling [32,33]. In 2017, a comprehensive meta-analysis of 11 studies enrolling a total of 1516 patients by Zhang C. and coworkers collected important information about tyrosine kinase-associated receptor type B (TrkB) expression in solid tumors, including gastric [34][35][36], colorectal [37,38], non-small cell lung [39] and ovarian [40] cancers, nasopharyngeal [41], sinonasal and oral squamous cell [42,43], and hepatocellular [40] carcinoma. Immunohistochemistry (IHC) analysis detected TrkB as overexpressed in all tumors, which was significantly associated with poor overall survival and disease-free survival of patients. Interestingly, the level of TrkB was strongly and positively associated with the clinical stage (I-II versus III-IV), classifying the protein as a potential biomarker for poor prognosis in the cohort of cancers mentioned above [44]. In our studies, we evaluated the expression patterns of TrkB, of H-, K-, and N-Ras (namely, pan-Ras), and of all three isoforms of ShcA (p66Shc, p52Shc, and p46Shc) in samples of brain tumors belonging to several subgroups (astrocytic, oligodendroglial, ependymal, choroid plexus, and embryonal tumors) and defined malignancy grades (I, II, III, and IV) based on the 2016 WHO classification [45]. New prognostic markers that can facilitate diagnosis and predict metastasis risk in patients are still being sought. Hence, we investigated whether the levels of Ras, TrkB, and the three isoforms of ShcA, namely p66Shc, p52Shc, and p46Shc, may be informative in terms of pediatric brain tumor type or malignancy grade.

Ethics
Human studies adhered to the Declaration of Helsinki of the World Medical Association. Tumor examinations were performed within the scope of diagnostic procedures. Analyzed tissues were retrieved from the archives of the Pathology Department of the Children's Memorial Health Institute, Warsaw, Poland, under the protocol approved by the Bioethics Committee at the Children's Memorial Health Institute (Approval No. 155/KBE/2014, 10 September 2014).
All patients were treated according to the protocol of the Polish Pediatric Neurooncology Group. Patients gave informed consent for the use of resected samples for scientific purposes. Patient samples were anonymized.

Tumor Classification
Brain tumors were classified according to the World Health Organization classification of tumors of the central nervous system (CNS). The revised fourth edition of the WHO classification of tumors of the central nervous system (2016) [45] is based on a combination of histologic and molecular features for the definition of several neoplastic entities, particularly among gliomas and embryonal tumors.

Brain Tumor Histology and Immunohistochemistry
Forty-nine pediatric patients with brain tumors diagnosed at the Children's Memorial Health Institute in Warsaw, Poland were included in the analysis. Analysis was performed on formalin-fixed paraffin-embedded (FFPE) and frozen tissue samples collected at diagnosis. All tumors were retrospectively reviewed according to the recent WHO 2016 criteria [45]. Hematoxylin & eosin (HE): paraffin blocks were cut into 3-µm-thick slices and mounted on SuperFrost microscope slides. After stepwise deparaffinization and rehydration, slides were stained with hematoxylin and eosin according to standard protocols. Immunohistochemistry (IHC) was performed on the Ventana BenchMark ULTRA IHC/ISH autostaining system using mouse monoclonal and rabbit antibodies (see Table 1). After antigen retrieval in CC1 buffer, detection of the signal was performed with the Ultra View HRP system (Roche/Ventana). Whole preparations were scanned in a Hamamatsu NanoZoomer 2.0 RS scanner (Hamamatsu Photonics, Hamamatsu, Japan) at an original magnification of 40×. The histological diagnosis of each group of brain tumors required the use of specific immunohistochemical markers, different for the individual groups of tumors. In our analysis, we used only the most common ones:
• Ki67, which is considered a malignancy marker;
• GFAP expression allows identification of the glial origin of neoplastic cells;
• Pancytokeratin expression is considered a marker of the epithelial origin of choroid plexus tumors;
• Olig2 expression is considered a marker of gliomas: pilocytic astrocytoma, diffuse-type astrocytic tumors, and the pediatric type of oligodendroglioma;
• INI-1 is used in the diagnosis of CNS atypical teratoid/rhabdoid tumors. The INI-1 staining pattern allows an atypical teratoid/rhabdoid tumor (loss of INI-1) to be distinguished from choroid plexus carcinoma (INI-1 positive);
• Synaptophysin expression allows identification of the neuronal origin of neoplastic cells;
• EMA staining can serve as a sensitive and specific marker of ependymal differentiation in glial tumors; and
• S100 is a characteristic marker for glial tumors as well as choroid plexus tumors.
The proliferative activity of neoplastic cells was described as the Ki-67 labeling index (Ki-67 LI). The percentage of nuclear Ki-67 expression was counted in 300 neoplastic cells in hot-spot high-power microscopic fields (400×).
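As a minimal sketch of the labeling-index computation just described, with a hypothetical count of positive nuclei:

```python
# Ki-67 labeling index as described above: the percentage of Ki-67-positive
# nuclei among 300 neoplastic cells counted in hot-spot high-power fields.
# The count below is hypothetical.
def ki67_labeling_index(positive_nuclei: int, cells_counted: int = 300) -> float:
    return 100.0 * positive_nuclei / cells_counted

print(f"Ki-67 LI = {ki67_labeling_index(150):.0f}%")  # ~50%, as reported for GB
```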
Small fragments of pediatric brain tumors were cut with a scalpel and added to 50 µL of homogenization buffer (75 mM saccharose, 225 mM mannitol, 5 mM Tris-HCl) containing protease inhibitor cocktail (1.04 mM AEBSF, 0.8 µM aprotinin, 0.04 mM bestatin, 0.14 mM E-64, 0.02 mM leupeptin, 0.015 mM pepstatin A) and phosphatase inhibitor cocktail 3 (1 mM Na3VO4 and 10 mM NaF) used at a 1:100 dilution. The samples were then gently homogenized in a precooled Potter-Elvehjem tissue homogenizer. Afterward, a double portion of cold lysis buffer (50 mM Tris, 150 mM NaCl, 1% Triton X-100, 0.1% SDS, 1% sodium deoxycholate, protease inhibitor cocktail: 1.04 mM AEBSF, 0.8 µM aprotinin, 0.04 mM bestatin, 0.14 mM E-64, 0.02 mM leupeptin, 0.015 mM pepstatin A, and phosphatase inhibitor cocktail 3 used at a 1:100 dilution) was added. After 30 min of incubation on ice, samples were centrifuged at 14,000× g for 20 min at 4 °C to remove insoluble tissue and cellular debris. After centrifugation, the supernatant was collected, and the protein concentration was determined using the Bradford method. Samples dedicated for SDS-PAGE were denatured in reducing Laemmli loading buffer at 95 °C for 5 min. Because it was not technically feasible to perform electrophoretic separation of all 49 samples simultaneously on one acrylamide gel, the samples were divided into smaller groups and several western blots were run in parallel. Each individual gel (group of samples) physically contained the same internal control sample (reference sample, Ref), which allowed us to later compare and merge the results from the individual blots. Proteins were separated on 4-15% gradient or 8% gels prior to ShcA and TrkB detection, and on 12% gels prior to Ras detection. Proteins were then transferred onto PVDF membranes for 90 min at a 300 mA constant current, after which the membranes were blocked using a dedicated TBS-based blocking buffer (Li-Cor, Odyssey). The ShcA proteins p46Shc, p52Shc, and p66Shc were detected using mouse monoclonal antibodies (BD Biosciences) diluted 1:1000 in Li-Cor blocking buffer in TBS-Tween (TBS-T). For H-, K-, and N-Ras (pan-Ras) and TrkB protein detection, mouse monoclonal antibodies from Merck-Millipore and rabbit polyclonal antibodies from Cell Signaling, respectively, were diluted 1:1000 in Li-Cor TBS-T buffer. The membranes were incubated overnight at 4 °C with primary antibodies. Appropriate anti-mouse or anti-rabbit fluorescent secondary antibodies from Life Technologies were diluted 1:5000 in Li-Cor TBS-T blocking buffer supplemented with 0.01% SDS. Fluorescence was detected with an Odyssey Infrared Imaging System (Li-Cor Biosciences, Lincoln, NE, USA) and quantified with Image Studio Lite software (Li-Cor). Equal protein loading was verified by whole-membrane staining with the Revert™ total protein stain kit from Li-Cor (before incubation with primary antibodies); the signal was scanned with the Li-Cor Odyssey scanner and quantified with Image Studio Lite. In the case of the ShcA and Ras proteins, β-actin was used as a loading marker. In the case of the TrkB protein, total protein staining (Revert™ 700 Total Protein Stain, Li-Cor) was used for western blot normalization.

Statistical Analysis

The data were analyzed to check whether there were significant differences among tumor grades or tumor types. To compare the differences in mean values between grade- or tumor type-based groups according to the measured markers, we first standardized the data.
For each technical replicate of each marker, the data were first divided by the average value of that replicate. Then, each data point was averaged over all replicates for a marker. Based on this standardization, for each marker separately, we performed one-way ANOVA tests, separately for tumor type and for WHO tumor grade. For post-hoc multiple comparisons, we used t-tests with Bonferroni correction. We considered changes significant when the one-way ANOVA p < 0.05 and the post-hoc p < 0.05. Moreover, for directed comparison of each pair of grade-based groups, the nonparametric Wilcoxon rank test was performed. We also performed the Kruskal-Wallis rank sum test, the nonparametric analog of one-way ANOVA, followed by Tukey and Kramer (Nemenyi) rank tests as nonparametric post-hoc tests.
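For readers who wish to reproduce this kind of analysis, the following is a minimal Python sketch of the pipeline just described. The data layout, function names, and the use of SciPy are our illustrative assumptions, not the authors' code; the Nemenyi post-hoc test is omitted since it is not part of SciPy (it is available in, e.g., the scikit-posthocs package).

```python
import numpy as np
from itertools import combinations
from scipy import stats

def standardize(raw):
    """raw: (n_samples, n_replicates) densitometry values for one marker.
    Each replicate (column) is divided by its own mean, then replicates
    are averaged per sample, as described in the text."""
    return (raw / raw.mean(axis=0, keepdims=True)).mean(axis=1)

def compare_groups(values, labels):
    """One-way ANOVA across groups (tumor types or WHO grades), with
    Bonferroni-corrected pairwise t-tests, Wilcoxon rank-sum tests,
    and the Kruskal-Wallis test as the nonparametric analog.
    values: 1-D array of standardized levels; labels: 1-D array of group labels."""
    groups = [values[labels == g] for g in np.unique(labels)]
    out = {"anova_p": stats.f_oneway(*groups).pvalue,
           "kruskal_p": stats.kruskal(*groups).pvalue,
           "pairwise": {}}
    pairs = list(combinations(range(len(groups)), 2))
    for i, j in pairs:
        t_p = stats.ttest_ind(groups[i], groups[j]).pvalue
        w_p = stats.ranksums(groups[i], groups[j]).pvalue
        # Bonferroni correction: multiply by the number of comparisons, cap at 1.
        out["pairwise"][(i, j)] = {"t_bonferroni": min(t_p * len(pairs), 1.0),
                                   "wilcoxon": w_p}
    return out
```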
Histopathological and Immunohistochemical Studies

To evaluate the potential correlation between the type of pediatric brain tumor (and tumor malignancy grade) and the expression patterns of the Ras and ShcA proteins, we analyzed 49 samples of pediatric brain tumors. In our studies, five main groups of investigated tumors were distinguished: choroid plexus tumors, diffuse astrocytic and oligodendroglial tumors, embryonal tumors, ependymal tumors, and other astrocytic tumors. Correct assignment to each group was confirmed by histopathological and immunohistochemical investigations. Supplementary Table S1 contains information about tumor type, WHO grade, localization, age, year of diagnosis, and gender of the pediatric brain tumor donors used in the studies. Group 1 represents choroid plexus tumors. In this group, based on histopathological investigation, we distinguished two types of choroid plexus tumors: (a) choroid plexus papilloma (CPP) and (b) choroid plexus carcinoma (CPC). (a) Choroid plexus papilloma (WHO grade I): a papillary neoplasm arising from the choroid plexus epithelium that corresponds to WHO grade I. Histopathology of CPP showed numerous fibrovascular papillary structures covered by a single layer of cuboidal to columnar epithelium. Degenerative changes such as calcifications and hyalinization were seen (Figure 1A). (b) Choroid plexus carcinoma (WHO grade III): histopathological analysis revealed solid hypercellular sheets of pleomorphic epithelioid cells with brisk mitoses. In one case, foci of papillary architecture were retained. In all cases, foci of necrosis were present. Immunohistochemistry showed that neoplastic cells of both CPP and CPC were positive for pancytokeratin and negative for GFAP (Figure 1A,B). Group 2 represents diffuse astrocytic and oligodendroglial tumors. In this group, based on immunohistochemistry and histopathological investigation, we distinguished three types of diffuse astrocytic and oligodendroglial tumors: (a) glioblastoma (GB), (b) pediatric-type oligodendroglioma, and (c) pediatric-type anaplastic oligodendroglioma (Figure 2). (a) Glioblastoma (WHO grade IV): a diffusely infiltrating high-grade glioma with predominantly astrocytic differentiation, characterized by foci of high cellularity, marked cellular pleomorphism, brisk mitotic activity, and microvascular proliferation. In all cases of GB, foci of ischemic necrosis were present; in one case, palisading necrosis was seen. Immunohistochemistry of the neoplastic cells showed expression of GFAP and Olig2. The Ki67 LI was about 50% (Figure 2A). (b) Pediatric-type oligodendroglioma (WHO grade II) and (c) pediatric-type anaplastic oligodendroglioma (WHO grade III): oligodendrogliomas were composed of monomorphic cells with uniform round nuclei surrounded by perinuclear clearing ("haloes"). A network of delicate branching capillaries resembling "chicken wire" was present in all cases. Immunohistochemistry showed that neoplastic cells were positive for Olig2 and, in some cases, revealed moderate positivity for GFAP (Figure 2B,C). In anaplastic cases of oligodendroglioma, high mitotic activity and microvascular proliferation were seen. In two cases, foci of necrosis were present. Group 3 represents embryonal tumors. In this group, we distinguished three types: (a) medulloblastoma (MB), (b) embryonal tumors not otherwise specified, and (c) atypical teratoid/rhabdoid tumor (AT/RT) (Figure 3). (a) Medulloblastoma: this embryonal tumor arises in the cerebellum or dorsal brain stem, mainly in children, and corresponds to WHO grade IV. All MBs were classified as the classic type. They consisted of densely packed, small, round, undifferentiated cells with high mitotic activity (Figure 3A). In three cases, Homer Wright rosettes were seen. MBs demonstrated immunoreactivity for synaptophysin. The Ki67 LI was about 50% (Figure 3A). (b) Embryonal tumors not otherwise specified: all tumors in this group are WHO grade IV by definition. Microscopically, CNS embryonal tumors were composed of small cells with little perinuclear cytoplasm and hyperchromatic nuclei. Mitoses were abundant, and necrosis was prominent. Immunohistochemistry showed that these tumor cells were positive for synaptophysin. The Ki67 LI was 70% (Figure 3B). (c) Atypical teratoid/rhabdoid tumor: AT/RTs were composed of undifferentiated, embryonal-like cells and foci of rhabdoid cells with round eccentric nuclei and prominent nucleoli. High mitotic activity was seen. In all tumors, necrosis was prominent. All AT/RT tumors showed focal immunoreactivity for epithelial membrane antigen (EMA) and synaptophysin. The diagnostic hallmark was the loss of INI-1 protein expression in neoplastic cells (Figure 3C). Group 4 represents ependymal tumors. Within this group, we distinguished two types of ependymal tumors: (a) conventional ependymoma and (b) anaplastic ependymoma (Figure 4). (a) Conventional ependymoma: on histologic examination, these were well-circumscribed, moderately cellular tumors with uniform cells. In two cases, ependymal rosettes were seen. All cases presented pseudorosettes, with perivascular tumor cells extending radial, fibrillary processes toward the vessel wall. Mitoses were rare, while necrosis was observed in two cases (Figure 4A). All samples of conventional ependymoma corresponded to WHO grade II malignancy. (b) Anaplastic ependymoma: these were characterized by high cellular density, high mitotic activity, and a high nuclear-to-cytoplasmic ratio. Pseudorosettes were less prominent. Microvascular proliferations were present. Immunohistochemistry showed that ependymal tumors were positive for GFAP (Figure 4A,B). Epithelial membrane antigen (EMA) expression showed a characteristic punctate, dot-like pattern of cytoplasmic positivity in neoplastic cells (Figure 4A,B). Group 5 represents tumors characterized by the WHO as other astrocytic tumors. Within this group, we distinguished four types of tumors: (a) pilocytic astrocytoma, (b) pilomyxoid astrocytoma, (c) pleomorphic xanthoastrocytoma (PXA), and (d) subependymal giant cell astrocytoma (SEGA) (Figure 5).
(a) Pilocytic astrocytoma: on histologic examination, these were characterized by low to moderate cellularity and a biphasic growth pattern, consisting of compact areas with bipolar (piloid) tumor cells and microcystic areas with multipolar tumor cells. Rosenthal fibers and eosinophilic granular bodies were seen. Microvascular proliferation and degenerative cellular pleomorphism were observed. This type of tumor demonstrated strong immunoreactivity for GFAP, S100, and Olig2 (Figure 5A). (b) Pilomyxoid astrocytoma: histologic examination showed tumor cells forming pseudorosette-like angiocentric architectures. Interestingly, intermediate forms between pilocytic and pilomyxoid astrocytoma have also been reported. Tumor cells were strongly immunoreactive for glial fibrillary acidic protein (GFAP), Olig2, and S100. The Ki67 LI was low, less than 3% (Figure 5B). (c) Pleomorphic xanthoastrocytoma (PXA): histologic features of PXA included the presence of pleomorphic, sometimes bizarre, multinucleated giant cells, lipidized astrocytic tumor cells, eosinophilic granular bodies, often perivascular lymphocytic infiltrates, and a variably dense pericellular/perilobular network of reticulin fibers. A fascicular growth pattern was often observed. GFAP immunoreactivity in neoplastic cells was strong. The Ki67 LI was less than 3% (Figure 5C). (d) Subependymal giant cell astrocytoma (SEGA): histopathological examination showed that SEGAs are moderately cellular tumors composed of pleomorphic large astrocytic or ganglioid cells with abundant glassy eosinophilic cytoplasm and round, vesicular nuclei with distinct nucleoli. In some cases, smaller spindle cells arranged in streams were commonly encountered. Multinucleated cells were present in two cases. The formation of perivascular pseudorosettes mimicking ependymal pseudorosettes was also seen. Necrotic areas were observed in single cases. Most SEGA tumor cells presented immunoreactivity for GFAP and S100. The Ki67 LI was about 2% (Figure 5D). The semiquantitative assessment of cytoplasmic GFAP expression as negative (−), low (+), moderate (++), or strong (+++), together with the Ki-67 labeling index (Ki-67 LI) describing the proliferation activity of neoplastic cells, is presented in Supplementary Table S2. The Ki-67 LI profile strongly correlated with the grade of the brain tumors (Supplementary Table S2). Next, once the assignment of individual samples to each group was confirmed, we evaluated the expression patterns of the ShcA isoforms (p66, p52, and p46) as well as the Ras and TrkB proteins in all characterized tumor samples. Figures 6 and 7 show representative images of p66Shc, p52Shc, p46Shc, Ras, and TrkB levels in the investigated tumor samples.

Expression Pattern of ShcA, Ras, and TrkB Proteins in Pediatric Brain Tumors and Their Levels as a Function of Tumor Malignancy Grade

Densitometric analysis of the western blot bands allowed us to establish an expression pattern of the investigated proteins in the studied pediatric brain tumor samples (Figure 8) and to correlate their levels with tumor malignancy grade (Figure 9). For each protein (p66Shc, p52Shc, p46Shc, Ras, and TrkB), we performed one-way ANOVA tests and post-hoc t-tests with Bonferroni correction, separately for each group of investigated tumors and for their malignancy grades.
Interestingly, the parametric tests revealed statistically significant differences between CNS embryonal (Group 3) and ependymal (Group 4) tumors for the TrkB protein (Figure 8A). Moreover, we found statistically significant differences (p < 0.05) in the level of the p46Shc protein between grade I and the other grades of malignancy (Figure 9B). The respective nonparametric tests for the grades confirmed this result.

Discussion

To evaluate the potential correlation between tumor type, tumor malignancy grade, and the expression patterns of the Ras, TrkB, and ShcA proteins, we analyzed 49 samples of pediatric brain tumors. In our studies, five main groups were distinguished (Figure 10). Correct assignment of individual samples to each group was confirmed by histopathological and immunohistochemical investigations.

Expression Pattern of ShcA, Ras, and TrkB Proteins in the Studied Pediatric Brain Tumors

Next, we evaluated the expression patterns of the ShcA isoforms as well as Ras and TrkB in all characterized tumors represented by the following groups: choroid plexus tumors, diffuse astrocytic and oligodendroglial tumors, embryonal tumors, ependymal tumors, and other astrocytic tumors (see Figure 10 and Supplementary Table S1). It has previously been demonstrated that Ras activity depends on ShcA recruitment upon Trk activation by, for example, neurotrophins [8,46]. Impaired functioning of proteins from this pathway may be closely associated with oncogenic processes, especially because Ras has been described as an oncogene, and its activation may influence the activity of downstream kinases implicated in cell proliferation. Moreover, Shc upstream molecules such as the Trk receptors have previously been correlated with tumor malignancy and suggested as prognostic factors in brain tumors [47,48]. TrkB, in particular, has been reported as a negative prognostic marker in neuroblastoma [48,49]. Western blot analysis of the levels of the proteins of interest revealed that the level of Ras protein was comparable in all studied types of brain tumors (Figure 8E). This result may be in line with current knowledge about Ras, in which the genotype (mutation status) is the main culprit for the different transforming potentials of Ras proteins [50] and tumor aggressiveness. Interestingly, as shown in Figure 8B,C, the levels of the two ShcA isoforms that activate the Ras protein, p46Shc and p52Shc (evaluated with specific antibodies), also seemed to be equal in all groups. Despite the lack of statistically significant differences, the lowest levels of p46Shc and p52Shc were detected in diffuse astrocytic and oligodendroglial tumors (Group 2) and, in the case of p52Shc, in tumors characterized by the WHO as other astrocytic tumors (Group 5). Similarly, the level of the longest ShcA isoform, p66Shc, did not differ much among the five groups of investigated pediatric brain tumors (Figure 8D). Interestingly, differences in the expression pattern were observed only for the TrkB protein. One-way ANOVA and post-hoc t-tests with Bonferroni correction showed a significant (at the 5% level) difference in the level of TrkB between CNS embryonal tumors (Group 3) and ependymal tumors (Group 4).

Pediatric Brain Tumor Malignancy Grade and the Pattern of ShcA, Ras, and TrkB Proteins

Additionally, we analyzed whether the protein patterns of Ras, TrkB, and all three isoforms of the ShcA protein (p66Shc, p52Shc, and p46Shc) correlate with the malignancy grade of the investigated tumors.
Importantly, classifying and grading tumors enables the determination of treatment recommendations and prognosis. The simplest classification discriminates two groups: low-grade and high-grade neoplasms. Low-grade tumors are typically slow-growing and rarely spread via the cerebrospinal fluid (CSF). They often have well-defined borders, so surgical removal in these cases can be an effective treatment. In contrast, malignant tumors tend to grow faster and often relapse. The WHO classification of CNS tumors (2016) traditionally comprises histologic grading on a four-tiered scheme of malignancy ranging from WHO grade I (benign) to WHO grade IV (malignant) lesions. Childhood brain tumors are the most common pediatric solid tumors and include several histological subtypes [51]. Pilocytic astrocytoma, a slowly growing, well-circumscribed, and frequently cystic astrocytoma of children and young adults corresponding to WHO grade I, is the most common pediatric tumor of the CNS [52]. The most common malignant brain tumor in children is medulloblastoma (WHO grade IV) [53]. Classic medulloblastoma is the most common variant, accounting for up to 70% of cases [53,54]. Another variant, desmoplastic/nodular medulloblastoma, comprises 10-20% of cases [45,53,54]. Large cell/anaplastic (LCA) medulloblastoma comprises 5% of cases and is characterized by a very aggressive course [45]. Medulloblastoma with extensive nodularity (MBEN) occurs in infants and presents an extreme degree of the desmoplastic/nodular pattern. It likely has a better prognosis due to the degree of neuronal differentiation [45]. Brain tumors corresponding to grade I lesions are neoplasms with low proliferative potential. Grade II lesions are usually infiltrative and often recur, with a tendency to progress to higher grades of malignancy. Grade III brain tumors disclose histological features of malignancy including nuclear atypia and mitotic activity. The grade IV designation is applied to mitotically active, necrosis-prone neoplasms with rapid evolution and fatal outcomes. Glioblastoma and embryonal tumors are examples of grade IV neoplasms [45]. Analysis of the TrkB, Ras, and ShcA isoform levels as a function of malignancy grade (Figure 9) revealed that only one ShcA isoform, p46Shc, was significantly elevated in tumors with grade I malignancy in comparison to tumors with grades II, III, and IV (Figure 9B). A similar trend was observed for the p52Shc protein, but here the differences were not statistically significant. In contrast to the studies describing an increased level of p66Shc in highly metastatic variants of the human breast cancer cell line MDA-MB-231, or the studies suggesting that decreased levels of p66Shc can be good markers for the diagnosis and prognosis of breast cancer, in the case of pediatric brain tumors we did not see such correlations. The level of p66Shc seemed to be equal in all groups of pediatric brain tumors regardless of malignancy grade (Figure 9D). The Ras level (Figure 9E) was also broadly comparable across tumors of grades I, II, III, and IV, although it tended to be lowest in grade I tumors (this observation was likewise not statistically significant). These results may be perceived as contradictory to the studies investigating Ras levels in tumors of the adult human central nervous system published by Gutierrez-Erlandsson and colleagues [55].
They demonstrated that R-RAS2 is more strongly expressed in low-grade (grades I-II) than in high-grade (grades III-IV) tumors of the adult human central nervous system, suggesting that R-RAS2 is overexpressed in the early stages of malignancy. The contradictory results and opposite conclusion regarding Ras levels in tumors of the central nervous system can be attributed to two main reasons: (a) in our studies, the expression pattern of the proteins of interest (including Ras) was investigated in pediatric brain tumors, which have different molecular characteristics from the tumors of the adult human central nervous system described by Gutierrez-Erlandsson et al.; and (b) our data reflect the total pool of Ras isoforms, whereas in the case of Gutierrez-Erlandsson et al., the dependency on tumor malignancy was described for R-RAS2 only [55]. Additionally, in the case of TrkB, it was difficult to find any linear correlation between its level and malignancy grade. An opposite conclusion arose from the comprehensive meta-analysis by Zhang and coworkers [44], where the expression level of TrkB was strongly and positively associated with the clinical stage (I-II versus III-IV). Based on their analysis, Zhang and coworkers concluded that TrkB can be a potential biomarker for poor prognosis. It is necessary to highlight that this conclusion was based on the analysis of a cohort of cancers including gastric [34,36], colorectal [37,38], non-small cell lung [39], and ovarian [40] cancers as well as nasopharyngeal [41], sinonasal and oral squamous cell [42,43], and hepatocellular carcinoma [44]. In contrast to the studies described above, our analysis was performed on a cohort of pediatric brain tumors, which were not taken into consideration by Zhang and coworkers [44]. Our study demonstrates that the relationship between the expression level of TrkB and tumor malignancy described for other solid tumors differs from that observed for pediatric brain tumors.

Conclusions

Based on the results of our comprehensive and comparative study, we demonstrated that the expression patterns of the Ras and ShcA (p66Shc, p52Shc, p46Shc) proteins in astrocytic, oligodendroglial, ependymal, choroid plexus, and embryonal tumors in children appear to be closely similar. In the case of TrkB, significant differences in its level were found between CNS embryonal and ependymal tumors. Interestingly, a significantly higher level of the p46Shc protein was observed in pediatric brain tumors with malignancy grade I in comparison to tumors with grades II, III, and IV. These observations indicate that p46Shc and TrkB might be considered useful biomarkers in the diagnosis and prognosis of pediatric brain tumors.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10102219/s1, Table S1: Tumor type, WHO grade, localization, age, year of diagnosis, and gender of pediatric brain tumor donors; Table S2: Cytoplasmic expression of GFAP and Ki-67 labeling index in the investigated pediatric brain tumors.
Prescribed Grass Fire Mapping and Rate of Spread Measurement Using NIR Images From a Small Fixed-Wing UAS

This article focuses on the mapping and rate of spread (ROS) measurement of grass fires using near infrared (NIR) images acquired by a small fixed-wing unmanned aircraft system (UAS) operating at low altitudes. A new method is proposed for spatiotemporal representation of grass fire evolution using time-labeled UAS NIR orthomosaics stitched from aerial images collected at varying time stamps over different regions of the fire. Furthermore, a novel NIR intensity variance thresholding method is proposed for accurate identification and delineation of grass fire fronts based on the obtained NIR mosaics in digital numbers. The proposed methods are demonstrated and validated using UAS NIR imagery acquired over a prescribed tallgrass fire in Kansas (around 13 ha). Three NIR short time-series orthomosaics are generated at a time interval of about 2 min with a spatial registration accuracy of 1.45 m (RMSE). The mean ROS for head, flank, and back tallgrass fires are measured to be 0.28, 0.1, and 0.025 m/s.

I. INTRODUCTION

Accurate measurements of wildland fire spread can support digital twins of a fire event, data-based fire spread prediction, and the understanding of the impact of atmosphere, terrain, and fuel on fire behavior [1]. During prescribed and wildfire operations, fire behaviors can be estimated by empirically designed or physics-based fire models, such as the Rothermel [2], the CSIRO [3], and the wildland urban interface fire dynamics simulator (WFDS) [4] models. Although these models have been widely used to predict the fire ROS in many fuel types [5], [6], [7], [8], one of the biggest challenges in their operational use is the lack of ground truth data for evaluation and validation. In addition, the accuracy and reliability of these models are highly dependent on the quality of weather, fuel, and terrain information during a fire event, which can be difficult to obtain. These concerns can be minimized with the help of direct and accurate fire spread measurements during an active fire event. For example, accurate measurements of the fire front location and fire ROS of a benchmark wildland fire can greatly improve the evaluation, validation, and fine-tuning of the existing fire spread models. However, such direct fire measurements can be challenging to acquire, given the complex and highly dynamic nature of fire spread in varying atmospheric and field conditions, such as wind, relative humidity, temperature, fuel characteristics, and terrain features [9]. Many fire ROS measurements in the literature come from indoor observations through table-top and wind tunnel experiments [10], [11] or ground observations through towers or booms, which are limited to small scales and may not accurately depict the fire spread behavior across landscape scales. Remote sensing data can enable the accurate mapping of fire behaviors at larger spatial scales, making them better suited for wildland fire measurements. Although satellite remote sensing plays a vital role in fire monitoring, the coarser spatiotemporal resolutions of most satellite data make them more suited for large fires (lasting more than a day) and applications such as fire hot spot detection [12], [13], [14] and fire damage assessment [15]. Measurements of fire ROS and fire front location of prescribed fires or wildfires that last only a few or several hours can be better facilitated by airborne remote sensing.
In fact, it has been suggested that spatial and temporal resolutions of 10 m and 10 min are desired for accurate data-enabled operational wildfire spread modeling and forecasting [1]. These finer resolutions are generally achievable by airborne remote sensing. In fact, most existing remote sensing-based fire ROS measurements use imagery from manned aircraft [16], [17], [18], [19], [20]. The collected airborne imagery can be postprocessed for the detection and extraction of fire fronts and for ROS measurement. However, deploying manned aircraft over fires can be challenging due to adverse flying conditions (smoke and heat), limited flight path flexibility (to avoid turbulence), and high operating costs. In recent years, small unmanned aircraft systems (UAS) equipped with multispectral cameras have increasingly been used in fire missions for applications including post-burn vegetation mapping [21], fire ignition [22], and fire detection [23]. Their application to fire ROS measurement is still limited [24]. Small UAS are lightweight, easy to handle, and cost-effective, making them very handy for fire ROS measurements at low altitudes (below 122 m in Class G airspace in the US). Thermal cameras can be installed on these UAS for fire measurement due to their ability to see through smoke and measure temperature [24]. However, thermal cameras are generally quite expensive and have lower image resolution compared to RGB and near infrared (NIR) cameras [25], both of which have been widely used by the UAS multispectral remote sensing community [26], [27]. NIR images can be used for certain fire sensing missions since they are not affected much by smoke occlusion compared with RGB images [28], and they can resolve many more features than typical thermal images, which generally have lower pixel resolutions (e.g., 640 × 512 pix. or lower). The main challenge for NIR-based fire mapping is that it cannot detect temperature changes directly, which may create difficulties in fire front detection. Researchers have worked on NIR-based fire detection using ground images [28], [29] and airborne images [20]. For instance, NIR aerial images are first converted to the normalized difference vegetation index (NDVI) and then used for fire line detection and extraction in [20]. However, such methods often require radiometric calibration of aerial NIR imagery, which may not be feasible for many UAS researchers, especially during low-altitude fire sensing missions. The objective of this article is to develop a low-cost grass fire mapping and ROS measurement system using NIR aerial images from a fixed-wing UAS. The methods in this article are demonstrated and validated using a low-cost NIR UAS dataset over a prescribed grass fire that was conducted at the University of Kansas Anderson County Prairie Preserve (ACPP) near Welda, KS. The main contributions of this article are as follows.
1) A new method for spatiotemporal representation of grass fire evolution is proposed by introducing time-labeled UAS NIR orthomosaics generated from aerial images with limited footprints.
2) A UAS prescribed fire dataset over a tallgrass field in Kansas, including short time-series NIR orthomosaics and local weather and terrain measurements (https://cusl.ku.edu/Flight_Log).
3) A novel NIR intensity variance thresholding (IVT) method for grass fire front classification and extraction using aerial imagery in digital numbers (DN).
4) Comprehensive results, discussions, and lessons learned using low-cost NIR nadir-view imagery for grass fire mapping and fire ROS measurements.
The developed NIR-based grass fire sensing system, methods, and data can greatly benefit many other researchers, including the following.
1) UAS remote sensing researchers and operators who want to collect grass fire spread data but cannot afford expensive thermal cameras.
2) Researchers who are interested in using UAS for monitoring and mapping the evolution of other dynamically evolving environmental processes, such as chemical leaks, flooding, and extreme weather.
3) Wildland fire managers or firefighters who would like to have accurate predictions of grass fire behavior.
4) Grass fire behavior researchers and fire spread modeling researchers who need representative grass fire datasets.

II. PRESCRIBED FIRE AND UAS DATA

This section describes the prescribed grass fire and the UAS data that are used for the demonstration and analysis of the proposed methods.

A. Prescribed Fire

The prescribed grass fire was conducted at the Anderson County Prairie Preserve (ACPP) [30] near Welda, KS. The burn site is a relatively flat rectangular field (530 × 250 m) with uniform fuel vegetation cover dominated by C4 tallgrass and a mixture of herbaceous forbs and legumes (shown in Fig. 1). A ring fire pattern was conducted by two fire setting teams using drip torches. The fire ignition was initiated near the center of the north boundary and terminated near the center of the south boundary, with one team traveling clockwise and the other traveling counterclockwise. The ignition process was completed at around 12:17:32 P.M., after which the fire evolved naturally in the field. The boundary of the fire field is shown in Fig. 1. There were some inconsistencies in the fire ignition pattern, with the teams having to spend more time igniting the northeast and northwest corners. The weather conditions during the burn were measured in the field as 73 °F temperature with 41% relative humidity and a 6.26 m/s prevailing wind from the south. The wind measurement is from a Campbell Scientific CSAT3B wind anemometer installed 1.9 m above ground level close to the east boundary of the fire field.

B. KHawk UAS Data

A KHawk 55 fixed-wing UAS was deployed over the prescribed fire for multispectral image acquisition. The KHawk 55 UAS is a low-cost multispectral remote sensing platform developed by the Cooperative Unmanned Systems Lab (CUSL) at the University of Kansas. It is equipped with a Ublox M8P Here GPS and a Pixhawk Cube autopilot [31], which can support both manual and autonomous flight. Key specifications are provided in Table I. The KHawk UAS was equipped with a low-cost PeauPro82-modified GoPro Hero 4 Black camera for NIR video acquisition. This camera was modified with an 850 nm IR pass filter, making it sensitive to light in the NIR spectrum, and was operated in video mode at a frame rate of 29.97 Hz with a pixel resolution of 1080 × 1920 pix. (see Table II). Manual synchronization with the GPS logs is performed after the flight for image geotagging. Example images of the fire field are shown in Fig. 2. The KHawk UAS was programmed to fly multiple predetermined loops over the burning field at 120 m above ground level to collect repeat-pass imagery of the burning field. It is worth mentioning that the UAS ground control station operator performed real-time adjustments to the predetermined flight path to follow the fire evolution based on ground fire observations.
Repeat-pass imagery is defined as images collected at the same location over the field at different time steps. The objective of such a flight plan is to collect images for the generation of short time-series orthomosaics, where one orthomosaic corresponds to one flight loop. In this mission, the UAS completed one loop in about 2 min and achieved four loops in total, from about 12:06 P.M. to 12:18 P.M. Three loops were used for orthomosaic generation to ensure map accuracy. The majority of the UAS flight path is overlaid on a National Agriculture Imagery Program (NAIP) image (spatial resolution of 1 m), shown in Fig. 3. The NAIP image was taken on June 30, 2019 and was used to geometrically register the UAS orthomosaics.

III. METHODS

This article introduces a new method for grass fire evolution mapping and ROS measurement using low-cost NIR images from a small UAS. The first part of this method focuses on the spatiotemporal representation of the grass fire evolution using short time-series orthomosaics generated from repeat-pass images with limited footprints. In addition, time labeling is introduced for each grid within an orthomosaic to represent the different time stamps of UAS fire data acquisition. The second part is dedicated to fire front extraction from these orthomosaics using a novel NIR IVT method. Finally, these fire fronts are combined to form a fire evolution map that facilitates the calculation of the fire ROS.

A. Spatiotemporal Representation of the Fire Field Using Time-Labeled Orthomosaics

One of the main contributions of this article is a new method for the spatiotemporal representation of the burning field using UAS short time-series orthomosaics with time labels. A small UAS flying at low altitudes generally observes only small patches of the burning field at a time, which is not ideal for the mapping and measurement of fire spread. For the spatial representation of the fire spread within a specific duration of time, images from each loop are grouped and orthorectified to form one orthomosaic. With the UAS collecting data in multiple loops over the fire field, short time-series orthomosaics can be generated [24], as shown in Fig. 4. Since each orthomosaic is formed using multiple images collected at different times, a time interval can be assigned to each orthomosaic, where the starting and ending times correspond to the time stamps of the first and last image in the loop, respectively. This is illustrated in Fig. 4. However, such a time representation may not be enough for fire situational awareness and ROS calculation at finer scales. A new data representation is proposed in this article to address this problem, with the basic idea shown in Fig. 5. Instead of using only one time step to represent the data acquisition time information for an orthomosaic, the orthomosaic is divided into small zones with their own time labels. The size of each time zone and the time difference between zones can be customized based on the desired temporal resolution, camera footprint, overlapping percentage, and ground speed of the UAS. Given the UAS altitude h above the ground and the camera fields of view θ_x and θ_y, the size of one time zone in the orthomosaic can be computed as

s_x = k · 2h tan(θ_x / 2),  s_y = k · 2h tan(θ_y / 2),

where k is a scaling ratio between 0 and 1. The generated orthomosaics can then be analyzed for fire front detection, extraction, and later fire evolution map generation.
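As a worked example of the zone-size relation above, the short sketch below evaluates it for an assumed camera geometry; the FOV numbers are illustrative placeholders, not the GoPro's specification.

```python
import math

def time_zone_size(h, fov_x_deg, fov_y_deg, k):
    """Ground size (m) of one time zone for a nadir camera at altitude h (m),
    taken as the footprint 2*h*tan(FOV/2) scaled by k in (0, 1]."""
    sx = k * 2.0 * h * math.tan(math.radians(fov_x_deg) / 2.0)
    sy = k * 2.0 * h * math.tan(math.radians(fov_y_deg) / 2.0)
    return sx, sy

# Illustrative only: 120 m AGL with an assumed 90 x 60 degree FOV and k = 0.5.
print(time_zone_size(120.0, 90.0, 60.0, 0.5))  # -> (120.0, ~69.3) m per zone
```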
B. NIR Intensity Variance Thresholding Method for Grass Fire Front Extraction

A new method, called the NIR IVT method, is proposed for the fire front extraction problem based on airborne NIR imagery. Given an NIR DN orthomosaic O_m with a size of X × Y pix., the IVT method can be used to identify and extract the pixels that represent the fire front, O_m^f. The method consists of three steps: 1) image grid generation and fire grid classification, 2) fire front extraction, and 3) manual fire front delineation, as illustrated in Fig. 6. The main advantage of this method comes from its use of NIR images in DN, which does not require the vicarious radiometric calibration efforts needed for reflectance images.
1) Fire Grid Classification: The main objective of this step is to generate grids (see the second row of Fig. 6) in an orthomosaic and classify them as fire and nonfire grids. O_m can be divided into n equally spaced grids of dimensions x × y pix. The size of the grid can be determined based on the following two criteria.
a) Maximum flame depth for grass fire: Flame depth is defined as the distance from the leading edge to the trailing edge of the flaming front [32], and it can be estimated from UAS NIR orthomosaics. One of the fundamental requirements of our IVT algorithm for successful fire grid classification is that the grids encapsulate pixels corresponding to unburned, burned, and fire regions. This is because the IVT algorithm relies on the intensity distribution pattern within a grid to differentiate between nonfire and fire grids. Therefore, the size of the grid has to be larger than the maximum flame depth as observed in the NIR orthomosaics. We recommend that the grid size be at least two times the maximum flame depth. For example, if the observed maximum flame depth is 4 m, a minimum grid size of 8 m is suggested.
b) Minimizing outliers for fire front detection: One of the main challenges of using NIR DN orthomosaics for fire extraction is the presence of outliers or noisy pixels. Uncalibrated NIR orthomosaics tend to have random pixels with very high intensity values. This can lead the IVT algorithm to falsely classify these outlier pixels as fire, since NIR fire pixels also saturate or hold very high intensity values. There will be more outlier pixels as the grid size increases. Multiple trials can be performed at different grid sizes, starting from the minimum value given by criterion a) above. A final value can be selected to balance accuracy, computational cost, and spatial resolution.
After selection of the grid size, the next step is to generate grids and perform pixel classification. The main difference between the nonfire grids (Γ_U and Γ_B) and fire grids (Γ_F) is that Γ_F contains pixels of unburned grass, burned grass, and fire, while Γ_U and Γ_B contain only unburned or only burned grass pixels, respectively. Given that fire appears as very high or even saturated values in NIR images, the fire grids are expected to show higher variability in the pixel distribution than the nonfire grids. In addition, the fire grids also show a higher pixel range (the difference between the maximum and minimum pixel intensity within a grid). This can be observed in Fig. 7, where Γ_F shows a wider range and higher variability than Γ_U and Γ_B. Note that this figure shows the grids in normalized (0-1) DN values.
An orthomosaic O_m can be classified into nonfire grids and fire grids Γ_F based on the distribution of all pixels enclosed within each grid. Two thresholds, α and β, can be defined on the coefficient of variation CV and the range R of each grid as the classification criteria. Here, CV_Γ is defined as the ratio of the standard deviation σ_Γ to the mean μ_Γ, i.e., CV_Γ = σ_Γ / μ_Γ, and R_Γ is defined as the difference between the maximum and minimum pixel values within a grid Γ. The grids that satisfy both criteria (CV_Γ > α and R_Γ > β) are classified as fire grids Γ_F with a value of 1, while all other grids are classified as nonfire grids with a value of 0.
α-Selection: The α threshold classifies grids based on the extent of pixel intensity variability within the grid using the coefficient of variation CV. α can be selected as the mean CV of all the grids, as shown in the top part of Fig. 8.
β-Selection: The β threshold classifies grids based on the range of pixel intensity values within the grid. β is determined empirically using the distribution of pixels in the orthomosaic O_m. The maximum pixel intensity value in O_m,B, corresponding to the burned areas, is used to calculate β, as shown in the bottom part of Fig. 8, where O_m,B denotes the (low-intensity) pixel values in O_m that represent the burned areas.
The reason for using both the pixel intensity variation and range criteria is to ensure that the algorithm observes the distribution of all the pixels within a grid and not just the minimum and maximum values. For example, if only the range criterion were used, grids with smoke occlusion or saturated pixels might wrongly be classified as fire grids. It is also worth mentioning that this algorithm observes the differences in distribution characteristics between fire and nonfire grids and does not depend on the absolute histogram distribution of the pixel intensity values. The algorithm is therefore expected to successfully differentiate between fire and nonfire grids even if the pixel intensity values of the grass are lower or higher, which can happen during different growing stages of the grass.
2) Fire Front Extraction: Given the identified fire grids in x × y pix. regions, the next step is to locate the fire pixels within these regions for fire front extraction. This is also achieved using the pixel distribution within the fire grids. Since these grids exhibit a Gaussian distribution (shown in Fig. 7) and the maximum pixel values enclosed within them can be identified as fire pixels, a threshold γ can be defined based on the empirical rule of a Gaussian distribution. The pixels within each Γ_F that satisfy the γ rule,

Γ_F(x, y) > μ_Γ + γ σ_Γ,  (4)

are classified as fire pixels, where Γ_F(x, y) is the pixel value at a geolocation (x, y) within a grid Γ_F and γ is an empirically selected value. The value of γ is selected between 2 and 3; these values correspond to values above the 95% level of a Gaussian distribution (the 68-95-99.7 rule).
3) Fire Front Manual Delineation: As illustrated in Fig. 6, the fire front extraction algorithm can isolate fire pixels that are often discrete and undesirable for later fire evolution mapping and ROS measurement. The extracted fire front pixels can be manually joined to form a continuous fire front curve for better representation.
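To make the steps concrete, the following is a minimal NumPy sketch of the grid classification and γ-rule pixel extraction (steps 1 and 2). The function names are ours, the "mean plus γ standard deviations" cutoff reflects our reading of the γ rule in (4), and manual delineation (step 3) is not automated here.

```python
import numpy as np

def ivt_fire_pixels(ortho, grid=100, beta=0.6, gamma=2.0):
    """Sketch of the IVT method on a normalized (0-1) NIR DN orthomosaic.
    ortho: 2-D array; grid: grid size in pixels (100 px ~ 10 m at 0.1 m GSD).
    Returns a boolean mask of detected fire pixels."""
    H, W = ortho.shape
    cells, cvs = [], []
    # Pass 1: per-grid statistics (mean, std, coefficient of variation).
    for r in range(0, H - grid + 1, grid):
        for c in range(0, W - grid + 1, grid):
            cell = ortho[r:r + grid, c:c + grid]
            mu, sigma = cell.mean(), cell.std()
            cells.append((r, c, cell, mu, sigma))
            cvs.append(sigma / mu if mu > 0 else 0.0)
    alpha = float(np.mean(cvs))  # alpha = mean CV of all grids, as in the text
    fire = np.zeros_like(ortho, dtype=bool)
    # Pass 2: classify fire grids (CV > alpha and range > beta),
    # then extract fire pixels inside them with the gamma rule.
    for (r, c, cell, mu, sigma), cv in zip(cells, cvs):
        if cv > alpha and (cell.max() - cell.min()) > beta:
            fire[r:r + grid, c:c + grid] = cell > mu + gamma * sigma
    return fire
```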
C. Fire Evolution Mapping

The delineated fire fronts from each orthomosaic are then combined to form a fire evolution map. The main components of this map include the fire front locations, their associated time labels, and their spread direction vectors. For the spread direction vectors, a normal-to-the-curve approach is used, which is generally defined as the direction of spread of a fire front [20], [33], [34]. An example of such a map is shown in Fig. 9. The fire evolution map contains the information required to calculate the ROS for any given point along a fire front, including the spread distance d_i and the time difference (t^n_{i+1} − t^n_i), as shown in Fig. 9.

IV. RESULTS

The methods described in Section III are demonstrated using a GoPro NIR video collected by the KHawk 55 fixed-wing UAS over a Kansas prescribed grass fire. Detailed results and analyses are presented in this section.

A. Spatiotemporal Representation of the Fire Field Using Time-Labeled Orthomosaics

Repeat-pass individual frames are extracted from the NIR video and grouped accordingly for the generation of short time-series orthomosaics, as shown in Fig. 10. Each orthomosaic is generated from about 120-150 images using the same processing parameters in the Agisoft Photoscan Pro software. Table III shows the number of images used and the corresponding time intervals for each orthomosaic. The orthomosaics shown in Fig. 10 are registered to an NAIP image with a spatial resolution of 1 m using ArcGIS [35]. Control point pairs between each orthomosaic and the NAIP image were manually selected such that they covered the whole field. All the orthomosaics were registered using an affine transformation and achieved a root mean square error (RMSE) of about 1.3 to 1.45 m, as shown in Table IV.

B. NIR-Based IVT Method for Fire Front Extraction

The proposed IVT method (Section III-B) is then applied to the registered NIR orthomosaics for fire front extraction. First, initial analyses were performed to determine a reasonable grid size. We observed that our algorithm generated similar results for grid sizes between 10 and 20 m. A grid size of 10 × 10 m (or 100 × 100 pix.) was selected considering multiple factors, such as accuracy and computational efficiency. The registered orthomosaics are divided into equally spaced grids Γ. Then, the pixel distribution within each grid Γ is analyzed for fire grid classification. The NIR orthomosaics are normalized to the 0-1 range, shown in Fig. 10. The thresholds α and β are selected as 0.02 and 0.6, respectively. All the grids with CV greater than 0.02 and range greater than 0.6 are classified as fire grids, while all the other grids are classified as nonfire grids, as shown in Fig. 11. The fire grids are then searched for fire pixels using (4), where all pixels within a grid that satisfy the γ condition are classified as fire pixels while all other pixels are classified as nonfire pixels. It was found that the fire pixels within the classified fire grids represented the 95th percentile and above. Therefore, γ was selected to be 2. Fig. 11 shows the extracted fire fronts from each orthomosaic. Finally, these fire front pixels are manually delineated using a line feature class in ArcGIS Pro to form continuous fire front curves. Fig. 12 shows the time-labeled orthomosaics for our UAS data set, which are generated based on Fig. 10. The fire fronts are identified using the proposed IVT method and shown as red grids. The time labeling information for each fire grid is manually generated and saved as a separate file in the shared UAS data set.
The time information is also shown as the red color intensity in Fig. 12, where a higher red color intensity means a later time stamp and the numbers in the color bar are in seconds. For example, in the left orthomosaic of Fig. 12, 0 corresponds to 12:07:03 P.M. for several top-right grids and 120 seconds corresponds to 12:09:03 P.M. for several bottom-right grids.

C. Validation of Fire Front Extraction

Qualitative and quantitative validation analyses were conducted to show the effectiveness of the proposed IVT method. For qualitative validation, popular edge detection methods, including the Canny and LoG methods [36], are applied to the NIR orthomosaics and the results are visually compared to those generated by the proposed method. The objective of this analysis is to illustrate the effectiveness of the proposed method in rejecting noisy pixels, such as saturated and smoke pixels, which are often not rejected by existing edge detection methods. The Canny and LoG edge detection methods and the proposed IVT method are applied to O_2 and shown in Fig. 13. From this figure, it is evident that the proposed IVT method performs better than the existing edge detection methods for fire front extraction from high-resolution (0.1 m) NIR DN images. The main reason is that the IVT first identifies fire regions at a coarser resolution and then applies the fire front extraction algorithm only to those areas, which rejects the outliers that are often a problem when searching for the fire front directly in high-resolution images. For quantitative validation, the IVT-extracted fire fronts are compared to manually extracted fire fronts from the orthomosaics. The minimum distances between the manual and IVT fire fronts are compared for error quantification. This analysis is conducted on all the NIR orthomosaics and the resulting errors are tabulated in Table V. It can be observed that the mean errors for each orthomosaic are less than or around 1 m. This error is reasonable and indicates that the IVT method is effective in accurately extracting the fire fronts.

D. Fire Evolution Mapping

The extracted and delineated fire front curves f_1, f_2, and f_3 are then combined to form a fire evolution map, which provides information about the fire front location, spread direction, and the ROS. Fig. 14 shows the fire evolution map with labels defining the head fire, flank fire, and back fire. Certain regions with stitching inconsistencies are excluded from the fire ROS analysis, such as the west and east fire fronts of f_1, which are the overlapping areas of the two flight lines. The fire fronts shown in Fig. 14 are categorized into head, flank, and back fires based on the spread directions. Since the prevailing wind during the fire was from the south at about 6.26 m/s (measured at around 2 m above ground level), the fire fronts spreading north are categorized as the head fire, the fire fronts spreading east or west are categorized as the flank fire, and the fire fronts spreading south are categorized as the back fire. The fire fronts with defined spread vectors are used to calculate the ROS. For analysis, the head and flank fire fronts are divided into two categories based on spread direction: NE and NW for the head fire front, and E and W for the flank fire front. Note that these categories indicate the direction toward which the fire front is spreading. For example, the portion of the head fire front spreading toward the NE is categorized as an NE fire front. The back fire ROS is calculated between f_1 and f_3.
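A minimal sketch of the per-point ROS computation used in this subsection is given below, assuming each delineated fire front is available as a time-labeled polyline in map coordinates; the nearest-vertex distance here stands in for the normal-to-the-curve construction of Fig. 9.

```python
import numpy as np

def ros_along_front(front_a, t_a, front_b, t_b):
    """ROS (m/s) at each vertex of an earlier front_a against a later front_b.
    front_a: (N, 2) and front_b: (M, 2) map coordinates in metres;
    t_a, t_b: acquisition times in seconds (e.g., from the grid time labels)."""
    # Pairwise distances from every vertex of front_a to every vertex of front_b.
    d = np.linalg.norm(front_a[:, None, :] - front_b[None, :, :], axis=2)
    return d.min(axis=1) / (t_b - t_a)   # spread distance d_i over elapsed time

# Illustrative only: two parallel 10-m fronts 30 m apart, observed 120 s apart.
f1 = np.array([[0.0, 0.0], [10.0, 0.0]])
f3 = np.array([[0.0, 30.0], [10.0, 30.0]])
print(ros_along_front(f1, 0.0, f3, 120.0))  # -> [0.25 0.25] m/s
```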
The ROS between these fire fronts are calculated as described in Section III-C and tabulated in Tables VI-VIII. From these tables, it can be observed that the mean head fire, flank fire, and back fire ROS are measured to be 0.28, 0.1, and 0.025 m/s, respectively. The measured ROS are further visualized in a polar plot, as shown in Fig. 15.

V. DISCUSSIONS AND LESSONS LEARNED

Critical insights, in-depth discussions, and lessons learned from the proposed method and its implementation are provided in this section.

A. Accuracy in Time Labeling and Spatiotemporal Representation

There are several challenges in evaluating and analyzing the accuracy of the proposed spatiotemporal representation, including the handling of multiple overlapping aerial images looking at the same grid and the accurate labeling of the fire front location. For our UAS fire dataset, the fire front in one grid may show up in about seven overlapping images on average (~0.7 s time difference between two consecutive images), which raises challenges in the time and spatial accuracy analysis within each fire front grid. The 0.7-s difference is mostly determined by the longitudinal overlapping percentage of the orthomap, the camera fps, and the UAS ground speed. For our dataset, the time difference across seven overlapped images may result in about ±2.5 s of uncertainty in time. Since our UAS flies much faster (~25 m/s) than the fire spreads (0.01-0.4 m/s), we can assume that the movement of the fire is trivial within the overlapped pictures. Second, manual corrections of the fire labeling are sometimes needed, since fire lines identified by the IVT method may contain minor errors, especially when the fire front is at the boundary of a grid. However, such a correction may only shift the fire front one grid away from the identified fire front. Finally, the time stamp information acquired using the proposed spatiotemporal representation is compared with that derived using the manual approach based on the thermal image dataset from the same prescribed fire [24]. The average time difference between the spatiotemporal representation and the manual approach is 0.88 s, which falls within the ±2.5 s uncertainty bound.

B. Fire ROS Accuracy

The accuracy of fire ROS measurements is critical to wildfire management, prescribed fire planning and policy making, and fire behavior model validation. Our fire ROS accuracy is analyzed from three perspectives: uncertainty analysis, literature data, and cross validation with thermal data. The uncertainty of our fire ROS measurements comes from both the fire front location and the elapsed time between two fire front lines. The spatial position accuracy of the NIR fire front location is about 1.45 m (RMSE, 1σ). Assuming that the elapsed time between two fire lines is around 120 s, the fire ROS uncertainty will be around 0.024 m/s (roughly 2 × 1.45 m over 120 s). This means that our head fire and flank fire ROS estimates are fairly accurate, while the back fire ROS needs further confirmation. In addition, our measured grass fire ROS matches the expected grass fire behavior in Kansas based on NWS researchers' former work, where a fire ROS of 0.18/0.36 m/s corresponds to a grass fire danger index of 5/10 with moderate/high difficulty of suppression [37]. Considering the strong prevailing wind that day (6 m/s), it is not surprising that the fire danger was relatively high.
Finally, the NIR-derived fire ROS estimates agree with the thermal-derived fire ROS from the same prescribed fire [24]. In fact, thermal cameras are more widely used for wildland fire ROS measurement in the literature [16], [34], [38]. The mean head fire and flank fire ROS for the thermal data are measured to be 0.27 and 0.11 m/s [24], each differing by 0.01 m/s from the NIR results and falling within the 0.024-m/s uncertainty bound.

C. Using UAS Orthomosaics to Monitor a Slowly Evolving Process

UAS orthomosaics have been widely used for mapping static scenes. Our dataset and method show that they can be extended to the monitoring and mapping of slowly evolving processes, such as a grass fire (slow compared with the UAS flying speed). The following observations and suggestions are based on our experience.
1) Feature Matching and Stitching: Feature matching among the longitudinally and laterally overlapped images is key to the generation of accurate orthomosaics. Since our UAS generally flies at a much faster ground speed (20-30 m/s) than the grass fire evolves (0.025-0.28 m/s mean ROS), the scene changes only slightly among the aerial images used for stitching. The feature matching among these aerial images is sufficient for the stitching process, which is shown by the small stitching error (< 1.5 m RMSE). For our grass fire dataset, the biggest error comes from the lateral overlapping areas when there is a fireline. This can be observed at the middle left of the fire front in the left subfigure of Fig. 10, which shows a blurred fire front. Nevertheless, stitching can still be achieved since the affected regions cover only a small portion of the overlapped areas. These blurred fire fronts are not used in the fire metrics analysis.
2) Time Labeling: One of the novelties of this article is the introduction of time labeling compared with conventional orthomosaics. The time labels for different fire grids within an orthomosaic can potentially be used for data-based fireline prediction and correction so that an improved UAS orthomap can be generated with the same timestamp across all the fire lines. This will be one future direction of our work.

D. Other Factors Affecting the Accuracy of UAS-Based Fire Metrics Measurements

The accuracy of UAS-derived fire maps (for example, Fig. 14) is greatly affected by the quality of the aerial images and the corresponding GPS location data collected by the UAS when flying over the evolving fire. The quality of these data may be affected by many UAS flight performance metrics, such as orientation tracking errors, flight speed, UAS flight trajectory, and the specification and settings of the sensing payloads (cameras and GPS). Two key factors, the UAS flight trajectory and the sensor accuracy, are discussed in detail as follows.
1) UAS Flight Trajectory: Fire missions designed for accurate fire mapping and fire ROS measurement require high-quality observations of the fire front at regular time intervals, which can be used to generate consistent time-series orthomosaics. For prescribed fire experiments (similar to the Anderson County grass fire shown in this article), an ideal UAS flight trajectory is to fly wings-level and steady in consistent loops over the fire field at regular time intervals while capturing images of the burning field. For example, a UAS needs to fly over the same front at time t_0, t_1 = t_0 + δt, and so on, where δt is the time taken by the UAS to complete one loop.
This way, the UAS can capture the evolution of fire fronts in the region at regular time intervals, which can be used for fire metrics measurements, such as fire ROS. However, such a flight trajectory can be difficult to achieve for multiple reasons, such as irregular fire evolution patterns and fire-induced turbulence (e.g., thermals). Fire-generated weather can also affect the orientation of the UAS, which can consequently result in the capture of oblique and blurry images that may not be usable in the orthomosaic stitching. An example of such a scenario can be seen in the rightmost image in Fig. 10. The gap near the top-left portion of this image was caused by rejecting blurry images (due to oscillating UAS roll angles during capture) from the stitching process.

2) Sensor Properties: The properties of the operating sensing payloads, such as cameras and GPS, play a vital role in the accuracy of UAS-data-derived fire metrics. Camera properties include frames per second (fps), image resolution, and FOV. Higher fps allows more frequent observations of the fire, while higher image resolution and FOV yield better spatial representations of the burning field. It is worth emphasizing that the spectral properties of the images also play a crucial role in dictating the accuracy of the delineated fire front locations. For example, fire fronts within thermal images are easier to delineate than those in NIR images, while NIR images are less susceptible to smoke occlusion than RGB images and are sensitive to the charring of vegetation in the burning field. The IVT method proposed in this article extracts the fire front from NIR images using this property. It is worth mentioning that these camera properties only control the accuracy of fire front locations in the image coordinate frame. The locations and the ROS of the extracted fire fronts in the world coordinate frame (latitude and longitude) are directly affected by the accuracy of the GPS data onboard the UAS. This can be overcome by using cm-level RTK GPS or by performing image-to-image registration using reference images. In this article, the resulting fire maps are accurate to within 1.5 m, since the time-series orthomosaics were registered using a 1-m NAIP reference image.

VI. CONCLUSION

This article describes a novel NIR-based grass fire mapping and ROS measurement method that uses UAS short time-series orthomosaics with time labels. This method uses low-cost NIR cameras instead of expensive thermal cameras, which makes it feasible for many UAS operators. Moreover, the proposed method is developed for DN images and does not require vicarious radiometric calibration, which can be challenging for UAS images. This method was demonstrated using a GoPro NIR video collected by the KHawk fixed-wing UAS while flying multiple loops over a prescribed grass fire (530 × 250 m) in Welda, KS, and yielded an accurate fire evolution map (about 1.5-m registration error compared to the NAIP image) using three NIR short time-series orthomosaics at regular time intervals (about 2 min). Finally, we determined that this prescribed grass fire had mean head fire, flank fire, and back fire ROS of 0.28, 0.1, and 0.025 m/s, respectively.
Future goals for fixed-wing UAS-based fire evolution mapping and ROS measurement include: 1) fully automatic fire front detection using supervised learning; 2) real-time fire mapping and ROS measurement for better fire situation awareness; 3) autonomous UAS path adjustments based on onboard fire spread measurements; 4) integration of cm-level RTK GPS onboard the UAS and use of GCPs for improved orthorectification; and 5) generation of guidelines for using UAS orthomosaics to monitor a slowly evolving process through comprehensive analysis and studies.

Saket Gowravaram received the B.S. degree in aerospace engineering from SRM University, Chennai, India, in 2015, and the M.S. and Ph.D. degrees in aerospace engineering from the University of Kansas, Lawrence, KS, USA, in 2017 and 2022, respectively. He is currently a Data Scientist with Agrograph, Inc., Madison, WI, USA. His research interests include the development of novel algorithms using machine learning and remote sensing data from UAS, aircraft, and satellites to observe, analyze, and solve important environmental, agricultural, and Earth Science problems.
Formulation and Evaluation of Antibacterial Herbal Gel Containing Terminalia catappa Extract

Shubhada Ukey, Ashwini Ingole*, Manish Kamble, Disha Dhabarde, Jagdish Baheti
Kamla Nehru College of Pharmacy, Butibori, Nagpur (M.S.), India.
*Corresponding author's E-mail: ashwiniingole@rediffmail.com

INTRODUCTION

From the ancient era to the modern system of medicine, plants have been a major and important part of medicine. Herbal medicines play an important role in health services. Around one-fourth of the world's population relies on traditional herbal medicine, especially plant drugs, for primary health care 1. Folklore information suggests the antimicrobial potential of certain Indian medicinal plants, yet very few reports are available on their inhibitory potential against specific pathogens. Hence, proper scientific evidence is required to establish the potential of medicinal plants. As herbal practitioners dispense their own recipes, there is a need to design and develop new formulations with diverse chemical natures, novel mechanisms of action and, most importantly, resistance-free medicines 2. In the present study, Indian almond, also called tropical almond and botanically identified as Terminalia catappa Linn., was used to explore scientific evidence for microbial inhibition. The plant is a tall, erect, deciduous tree reaching 25-40 m, with an upright symmetrical crown and horizontal branches, naturally distributed throughout India. Near the ends of the twigs, the leaves are crowded and alternate. Leaf blades are large and thick, with smooth margins. Mature leaves are dark green in colour, shiny above and pubescent below, while new or younger leaves have a soft covering of hairs. After maturation, and before falling in the winter, they turn to yellow, red and purple shades. Fruits are 2 inches or more long and 1 inch across, with a fleshy, fibrous pulp surrounding the large seed, which is edible and tastes somewhat sweet, much like almond. Full-sized fruits are green and turn red, brown, or yellow at maturity 3. Traditionally, the leaves, bark and fruits were used as medicine for various diseases and ailments. The plant also has nutritional value, being used as a source of vitamins C and E and dietary minerals 4. The leaf has been documented to possess antioxidant 5, hepatoprotective 6, antidiabetic 7, and anti-inflammatory 8 activities, while the bark and fruits possess anti-diarrheic, antipyretic, hemostatic 9, analgesic and anti-inflammatory potential 10. A gel is a semisolid system of at least two transparent phases interpenetrating one another. Gels that contain water are called hydrogels, while those that contain an organic liquid are called organogels. Hydrogels are mixtures of water and cellulosic derivatives 11.

Collection of plant materials

The fruits and leaves of Terminalia catappa Linn. were collected from the campus of Kamla Nehru College of Pharmacy, Butibori, Nagpur, Maharashtra and authenticated by the Department of Botany, RTMNU, Nagpur.

Preparation of extract from leaves

Fresh leaves of Terminalia catappa were air dried and then crushed using a mechanical blender to obtain a coarse powder.
150 g of powdered plant material was macerated in 600 ml of ethanol for 72 h at room temperature and then filtered into a beaker using a funnel and Whatman filter paper No. 1 (125 mm). The filtrate was concentrated by evaporation in a water bath at a temperature of 50 °C to obtain the crude extract.

Preparation of extract from fruit

Analytical-grade reagents and chemicals were used. The fresh fruits were collected from Kamla Nehru College of Pharmacy, Butibori, Nagpur. The fruits were cleaned with water, and the pericarp and mesocarp were separated from the whole fruit. Only the pericarp and mesocarp were used for further study; these were cut into small pieces of about 1 cm². The pieces were divided into three parts of 50 g each, and each part was macerated separately for 24 h with 100 ml of a solution of 70% ethanol and 30% water. The extract was evaporated at room temperature to nearly one-third of the original volume to obtain a concentrated extract. These extracts were then preserved in tightly closed glass containers and stored away from direct sunlight for 48 h until use.

Preparation of gel

Carbopol 940 and HPMC (in three different concentrations) were dissolved slowly, with stirring, in 60 ml of demineralized water for 1 h to avoid agglomeration. Then propylene glycol solution, methylparaben, ascorbic acid and amaranth colour were added and mixed well. Triethanolamine was then added dropwise to adjust the pH, stirring the solution until a clear, consistent gel was formed. Three different gels were prepared using the formulae given in Table 1, and the viscosity was determined.

Agar well diffusion method

A sufficient quantity of nutrient agar was taken, and water was added to make up the volume. The dispersion was heated to boiling and sterilized in an autoclave at 121 °C. The solution was then transferred to Petri dishes and allowed to cool and solidify. Bacterial dispersion was spread on the agar surface. Wells were bored, and solutions of the standard antibiotic and of the gels containing the extracts were added to the wells. The solutions were allowed to diffuse through the agar, the plates were incubated at 37 °C for 24 h, and the zones of inhibition were observed.

Visual examination

The prepared gel formulae were inspected visually for their colour, appearance and texture.

Determination of pH

Weighed quantities (50 g) of each gel formulation were transferred into a 10-ml beaker, and the pH was measured using a digital pH meter.

Viscosity estimation

The viscosity of the gel was determined using a Brookfield viscometer (model DV-II) with a T-bar spindle in combination with a helipath stand.

Spreadability

This method involves the slip and drag characteristics of the gel. Formulated gel (2 g) was placed on a ground slide and sandwiched between this slide and another glass slide for 5 min to expel air and to provide a uniform film of gel between the slides. Excess gel was scraped off from the edges. The top plate was then subjected to a pull of 80 g by means of a string attached to a hook, and the time (in seconds) required by the top slide to cover a distance of 7.5 cm was noted. A shorter interval indicates better spreadability.
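The spreadability measurement above reduces to simple arithmetic. The sketch below is illustrative only: the paper reports the apparatus constants (80-g pull, 7.5-cm travel) but not an explicit formula, so the conventional S = m·l/t expression and the example travel time are our assumptions.

```python
def spreadability(weight_g: float, distance_cm: float, time_s: float) -> float:
    """Spreadability S = m * l / t (g*cm/s): a shorter travel time for the
    top slide gives a larger S, i.e., a more spreadable gel."""
    return weight_g * distance_cm / time_s

# Apparatus constants from the method; the 12-s travel time is hypothetical.
print(spreadability(weight_g=80.0, distance_cm=7.5, time_s=12.0))  # 50.0 g*cm/s
```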
Extrudability

The gel formulation was filled into standard capped collapsible aluminium tubes, which were sealed by crimping the ends. The weight of the tubes was recorded, and the tubes were placed between two glass slides and clamped. A 500-g weight was placed over the slides, and then the cap was removed. The amount of extruded gel was collected and weighed, and the percentage of extruded gel was calculated and graded as follows: 1) greater than 90% extruded, extrudability is excellent; 2) greater than 80%, extrudability is good; 3) 70%, extrudability is fair.

Irritancy test

The gel was applied to a 1-cm² area on the dorsal surface of the left hand and observed at regular intervals for up to 24 h for irritancy, redness and edema.

Homogeneity

The developed gels were tested for homogeneity by visual inspection; they were examined for a uniform appearance with no lumps.

Grittiness

All the formulations were evaluated under a light microscope for the presence of any appreciable particulate matter. The gel preparations fulfilled the requirement of freedom from particulate matter and grittiness, as desired for any topical preparation.

RESULTS AND DISCUSSION

The herbal gel was prepared and subjected to the evaluation of various parameters. The gel was amaranth red in colour (Fig. 1) with a translucent appearance. The pH remained constant at about 6.98 throughout the study, and the gel did not produce any irritation upon application to the skin. Extrudability was excellent, and the gel also showed good spreadability. The initial viscosities were recorded at 25 °C. The gel was found to be stable under normal storage conditions. All evaluation data are shown in Table 2.

Antibacterial activity of gel

Antibacterial activity was assessed on nutrient agar plates using the agar well diffusion method, with a ciprofloxacin disc (10 μg/ml) at the centre serving as a positive control. Zones of inhibition were measured for all the contents, and the activities were compared, as shown in Figure 2 and Table 3.

CONCLUSION

From the above observations, herbal gels containing Terminalia catappa leaf and fruit extracts were formulated using Carbopol 940 and HPMC as polymers, together with other constituents, and the evaluation of the physical parameters showed satisfactory results. Regarding bacterial inhibitory potential, the gel formulation revealed greater inhibitory potency than the individual leaf and fruit extracts when compared with the standard drug ciprofloxacin against the tested bacteria. Hence, from the overall observations, it was concluded that the herbal gel containing leaf and fruit extracts of Terminalia catappa has significant antibacterial potential and hence may be safe and effective against microbial infection.
Identification of a Single Nucleotide Polymorphism of Vitamin D Receptor (VDR) and Vitamin D Binding Protein (VDBP) Gene and Its Dysregulated Pathway Through VDR-VDBP Interaction Network Analysis in Vitamin D-Deficient Infertile Females

Introduction: The prevalence of female infertility in Pakistan is currently estimated at 22%, and emerging research suggests that vitamin D (VD) deficiency (VDD) may play a significant role in influencing female fertility. The focus of this study was to investigate the single nucleotide polymorphism (SNP) patterns within the VD binding protein (VDBP). The study aimed to explore dysregulated pathways and gene enrichment through an interaction network analysis, specifically focusing on the interplay between the VD receptor (VDR) and VDBP in females experiencing unexplained infertility (UI) coupled with VDD. Methods: A cross-sectional study was conducted on VD-deficient, fertile, and UI female subjects. VDBP and VDR were assessed by enzyme-linked immunoassay, and genotyping was performed. FunRich (version 3.1.3; http://funrich.org/index.html) was employed for the analysis of the identified proteins, VDR and VDBP, together with their mapped gene datasets, gene enrichment, and protein-protein interaction (PPI) network. Results: The mean VD and VDR values of infertile females were significantly lower than those of fertile females. VDBP in infertile females (median (IQR): 296.05 (232.58-420.23)) was lower than that of fertile females (469.9 (269.57-875.55); p=0.01). On sequence analysis, the rs4588 SNP mutation (Thr436Lys) was found in exon 11 of the VDBP gene of UI females, but no mutation was found in exons 8 and 9 of the VDR gene, apart from some insignificant intronic variants. Proteins such as VDR, SMAD3, NCOR1, CREBBP, NCOA1, STAT1, GRB2, PPP2CA, TP53, and NCOA2 were enriched in the plasma membrane estrogen receptor signaling pathway (p < 0.001) after biological pathway grouping when VDR was made the focus gene directly interacting with VDBP. Conclusion: The females with UI exhibited significantly low VD, VDBP, and VDR. The plasma membrane estrogen receptor signaling pathway was enriched in VDD infertile females.

Introduction

Vitamin D (VD) is involved in calcium-phosphate homeostasis and the maintenance of bone mineral density. The active form of VD, namely 1,25-dihydroxyvitamin D3 (1,25-(OH)2D3), metabolized by 1-alpha-hydroxylase from cholecalciferol (25-OHD), exerts these effects through the VD receptor (VDR). This receptor is present in the intestines, bones, parathyroid glands, ovaries, and testes [1]. Furthermore, it may be found along the walls of the central organs of reproduction, such as the pituitary and hypothalamus, and of peripheral organs such as the oviduct, uterus, and placenta [2]. VD binds to VDR, a transcription factor situated in the nuclei of target cells that facilitates the genomic action of the active form of VD (1,25(OH)2D3). This transcription factor is distributed to different tissues, functions as an ovarian reserve marker, and stimulates the production of hormones from the ovaries [3].

The probable role of VD in the impairment of reproductive physiology, and the relationship of VD deficiency (VDD) with VDR polymorphism and infertility in female subjects, has been explored [4,5].
The VDR gene is positioned on chromosome 12, extends around 75 kb of genomic DNA, and comprises 11 exons [6]. The ongoing debate on global VDD, together with the presence of VDR in reproductive tissues and an increased prevalence of infertility, encouraged us to conduct the study.

Approximately one in eight women of reproductive age seek advice for infertility issues, and 85% of these patients have an underlying cause. Among females, the common causes of infertility are anovulation, tubal disorders, and endometriosis. However, in 15% of infertile women, no definite cause is identified, which leads to the diagnosis of unexplained infertility (UI) [7]. UI is defined as the inability to conceive despite 12 months of unprotected intercourse, in the absence of known causes of infertility, including anovulation, tubal pathology, endometriosis, or semen abnormalities [8]. Many women diagnosed with UI may conceive spontaneously over a period of time at a rate of 2%-4% per menstrual cycle, whereas others need treatment with ovarian stimulation and intrauterine insemination; if these approaches are not successful, in vitro fertilization (IVF) is considered [7]. As UI is a diagnosis of exclusion, it is difficult to offer any definite explanation of the affected fertility potential in these patients.

As far as the association between VD and infertility is concerned, the association of polycystic ovarian syndrome and endometriosis with VDD and infertility has been established [9]. VD has also been linked with successful IVF outcomes [10]. Similarly, low levels of VD have been correlated with low pregnancy rates, due to harmful effects on conception and endometrial receptivity, in women undergoing IVF with single embryo transfer. Therefore, VDD may explain some cases of UI or may be a contributing element to other factors that alter fertility potential. However, no conclusive data are available examining the relationship between VD and UI or low ovarian reserve.

VDR exhibits widespread distribution across nearly all human tissues. Moreover, VD plays a regulatory role in the human genome, underscoring its potential influence on various systems, including reproductive processes [11]. Due to the presence of VDR in the ovaries and endometrium, the impact on UI could occur at multiple levels. In the ovaries, VD-mediated calcitriol affects ovarian steroidogenesis, stimulating hormone synthesis, including progesterone, estradiol, and estrone [12]. Furthermore, in vitro studies have shown an association of VD with ovarian reserve markers such as anti-Mullerian hormone (AMH) [13] and demonstrated the presence of a functional VD response element (VDRE) on the human AMH promoter [14]. One study indicated a significant reduction in AMHR-II and FSH receptor mRNA in human granulosa cells treated with VD3 [15]. As AMH has an inhibitory role in folliculogenesis, it can be anticipated that VD treatment may have a beneficial role in folliculogenesis by alleviating the inhibitory influence of AMH on the process.

We aimed to identify the single nucleotide polymorphism (SNP) of the VD binding protein (VDBP), the dysregulated pathways, and the gene enrichment centered on an interaction network analysis of VDR-VDBP in unexplained infertile females with VDD.
Materials And Methods

This cross-sectional study was conducted from June 2019 to July 2020 after approval from the Institutional Ethical Review Committee (ERC) of Aga Khan University (AKU-ERC 2019-0314-5627), in association with the Australian Concept Infertility Medical Centre (ACIMC).

Inclusion criteria: We included fertile females aged 18-45 years, from all ethnic backgrounds, having a child younger than two years. Recruitment of infertile females was based on the criteria of UI, in the age range of 18-45 years and from all ethnic groups. Exclusion criteria: Females whose infertility was due to male factor causes or tubal blockade were excluded. Furthermore, female subjects with a previous history of artificial reproductive techniques (ART) in preceding pregnancies, recurring miscarriages, thyroid abnormalities, uterine tumors, hypertension, and diabetes were excluded. Infertile females with serious general health problems, those using oral contraceptive pills or any hormonal treatments, and those using any contraceptive procedures were also excluded. We also excluded women (fertile and infertile) who were on VD therapy or calcium supplementation (in the last six months), or who were exposed to tobacco, gonadotropins, or prior chemotherapy. Patient histories were acquired, and BMI was calculated using South Asian criteria.

Blood collection: Approximately 5 mL of venous blood was collected from each subject by a pain-free procedure. Serum was extracted by centrifugation of the blood samples and quickly stored at -80 °C until the estimation of biochemical parameters.

Biochemical analysis: Serum VD levels were measured by a human 1,25-dihydroxy VD enzyme-linked immunosorbent assay (ELISA) kit (Cat. No. 95503), with intra- and inter-assay coefficients of variation (CVs) of 2.7% and 4.3%, respectively. The lower limit of detection of VD was 2.8 ng/mL. After the detection of VD levels, 80 VDD females (40 each in the fertile and infertile groups) were recruited for the study. VDR and VDBP levels were analyzed in the comparative groups. Serum VDR levels were measured by a commercially available ELISA kit (Cat. No. SEA475Hu; Cloud-Clone Corp, Houston, TX) with a detection range of 0.625-40 ng/mL. The analytical sensitivity was less than 0.225 ng/mL, and the intra- and inter-assay CVs were found to be <10% and 12%, respectively. Serum VDBP levels were measured using a commercially available human VDBP ELISA kit (Cat. No. 96577; Glory Science Co. Ltd., Taichung, Taiwan) with a detection range of 8-480 µg/mL.

Genotyping

Genotyping of VDR was performed using the SNP genotyping assay, the primers listed in Table 1, and direct DNA sequencing methods. These primers were designed with the Primer3 online tool (https://primer3.ut.ee/). The VDBP and VDR gene interaction pathways were studied in the in silico analysis.

Bioinformatic analysis

The VDBP and VDR protein interaction network, the mapping of the gene datasets, and the interaction pathways were obtained with FunRich (version 3.1.3; http://www.funrich.org), a functional enrichment analysis tool.

Enrichment analysis

Molecular functions, biological pathways, gene ontology (GO) terms, and sites of expression were retrieved by performing enrichment analysis. The depleted and enriched proteins were identified by the fold change for biological pathways, protein domains, and sites of expression.
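To make the enrichment logic concrete, the following Python sketch illustrates the generic calculation behind tools such as FunRich: fold enrichment plus a hypergeometric over-representation p-value, with the Benjamini-Hochberg correction referred to in the next subsection. All gene counts in the example are hypothetical, and FunRich's own background set and annotations will differ; this mirrors only the statistics, not the tool's exact output.

```python
"""Illustrative over-representation test; gene counts are hypothetical."""
from scipy.stats import hypergeom

def enrichment(k: int, n: int, K: int, N: int):
    """k of n query genes fall in a pathway annotating K of N background genes."""
    fold = (k / n) / (K / N)          # fold enrichment over the background rate
    p = hypergeom.sf(k - 1, N, K, n)  # P(X >= k) under random draws
    return fold, p

def benjamini_hochberg(pvals):
    """BH-adjusted p-values, returned in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q, running_min = [0.0] * m, 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

# Example: 10 of 66 interactors hit a pathway annotating 200 of 20000 genes.
fold, p = enrichment(k=10, n=66, K=200, N=20000)
print(f"fold enrichment = {fold:.1f}, p = {p:.2e}")
print("BH-adjusted:", benjamini_hochberg([p, 0.03, 0.20]))
```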
Interaction network analysis

Biological pathway enrichment of defined nodes was used for visualizing and analyzing the protein-protein interaction (PPI) network. Only human-exclusive datasets were presented, in which gene/protein annotations were collected from publicly available protein-centric and gene databases. The human-specific FunRich database was selected as the background database for the complete analysis. The list of genes enriched in specific pathways was highlighted within the interaction network, and distinctive sub-networks were created for the complete analysis. Each specific sub-network was analyzed by tallying the direct neighbors (interacting partners) of the mentioned nodes in the sub-network and visualizing them. Specific nodes were focused on, and the interacting partners of the focused nodes were mapped.

GO functional categories, normal and overrepresented, identified pathway associations, and significant interactions with the datasets were analyzed using the Benjamini-Hochberg (BH) and Bonferroni tests. The p-value correction was done with the BH and Bonferroni tests together with the hypergeometric test, and a p-value <0.05 was taken as the statistical cutoff and maintained as the default after Bonferroni correction.

Results

This cross-sectional study describes the SNP of VDBP, the dysregulated pathways, and the gene enrichment centred on an interaction network analysis of VDR-VDBP in UI females with VDD. The demographic and biochemical characteristics of the fertile and infertile female subjects (80 in total: fertile = 40 and infertile = 40) are presented in Table 2.

Interacting proteins for the VDBP and VDR genes

FunRich (version 3.1.3) was employed for the analysis of the identified proteins, VDR and VDBP, along with their mapped gene datasets, enrichment, and protein-protein interaction (PPI) network. Sixty-six proteins were identified as interacting directly with VDR and VDBP (GC) (Figure 2).

PPI analysis of the VDBP and VDR genes

The FunRich database was used to evaluate the PPI network and visualize the VDR-VDBP (GC) interaction. The interaction network was integrated into the pathway enrichment of the identified proteins. The differentially regulated interacting proteins of potential interest, retrieved from the interaction of VDR and VDBP (GC), were recognized in this network (Figure 2). Sixty-six genes were established as interacting with VDR and GC, all interacting conjointly (Figure 2). The proteins described in the main group, mapped along with VDR and VDBP, were CREB-binding protein; nuclear receptor corepressor 1; mothers against decapentaplegic homolog 3; receptor-regulated SMAD (R-SMAD); nuclear receptor coactivator 3; and nuclear receptor coactivator 1.

The important associated pathways, with their interacting proteins, were epidermal growth factor receptor (EGFR)-dependent endothelin signalling events, the platelet-derived growth factor receptor (PDGFR)-beta signaling pathway, the tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) signaling pathway, plasma membrane estrogen receptor signaling, the validated nuclear estrogen receptor beta network, retinoic acid receptor-mediated signaling, androgen-mediated signaling, regulation of androgen receptor activity, glucocorticoid receptor signaling, the mTOR signaling pathway, TGF-beta receptor signaling, and regulation of cytoplasmic and nuclear SMAD2/3 signaling.
The proteins enriched in plasma membrane estrogen receptor signalling were 29 in number, with p < 0.001. These include VDR, SMAD3, NCOR1, CREBBP, NCOA1, STAT1, GRB2, PPP2CA, TP53, and NCOA2, which were enriched in the plasma membrane estrogen receptor signalling pathway, with more than twofold enrichment, when VDR was made the focus gene directly interacting with VDBP, as shown in Figure 3. It is worth mentioning that VDR, NCOR1, and SMAD3 were enriched in the ovarian infertility gene pathway.

Discussion

This cross-sectional study revealed significantly low VDBP and VDR, in addition to low VD, in UI females. We found the rs4588 SNP mutation in exon 11 of the VDBP gene of infertile females, which may alter the protein's function (Thr436Lys). The plasma membrane estrogen receptor signalling pathway was enriched in VD-deficient infertile females. VDR, SMAD3, NCOR1, CREBBP, NCOA1, STAT1, GRB2, PPP2CA, TP53, and NCOA2 were enriched in the plasma membrane estrogen receptor signalling pathway. It is worth mentioning that VDR, NCOR1, and SMAD3 were found to be enriched in the ovarian infertility genes pathway.

Our results are supported by a study [16], which indicated VDR gene polymorphisms as a contributing factor for infertility in patients with UI. Impaired VDR gene expression also affects endometrial receptivity and the implantation process through unknown underlying mechanisms. That study exhibited VDR gene BsmI and TaqI polymorphisms as a substantial risk for UI, whereas the VDR gene Aa genotype in the ApaI polymorphism is a protective factor. Another study mentioned the important effects of VD on endometrial receptivity and implantation; however, detrimental effects on oocyte and embryo quality were observed due to its antioestrogenic effect [17]. Meanwhile, Jeremic et al. suggested measurements of VD in serum and follicular fluid as a complementary tool for the routine assessment of embryos in UI patients undergoing IVF treatment [18].

VDR polymorphisms are associated with infertility and a decrease in folliculogenesis, oocyte yield, fertilization, and pregnancy rates after controlled ovarian stimulation (COS) responses in assisted reproductive techniques (ART) [19]. FokI is one of the most evaluated polymorphisms of the VDR gene. A polymorphic variant (FF) is generated from a change of T to C in the start codon sequence; the resulting protein is shortened by three amino acids and displays an amplified transcriptional deficit of the VDR protein in contrast to the long ff allele form. A study from India established an association between the VDR gene (FokI, rs2228570; C > T) polymorphism and male factor infertility [20].

The identification of VDR polymorphisms specifically related to infertility and to the response to ovarian stimulation may help in a better understanding of the processes underlying UI and affected ovarian reserves. Djurovic et al. [21] explored the association of VDR gene polymorphisms and haplotypes with UI. They examined the DNA of 117 patients with UI and compared it with that of 130 fertile control women. The results highlighted that changes in the expression and activity of the VDR gene affected the expression of VD-responsive genes, leading to altered immune effects and a possible impact on reproduction. Among the identified haplotypes, BAT was associated with an increased risk regarding the ability to conceive again, whereas another haplotype indicated a protective role for the ability to conceive for the first time (p < 0.05).
The presence of low VDBP in infertile females was indicated in a pilot study [22]. The association of the rs4588 SNP mutation in exon 11 of the VDBP gene of infertile subjects observed in our study corroborates literature reports linking mortality due to COVID-19 with VDD and the VDBP polymorphisms rs7041 and rs4588 [23].

Given the growing concerns over the widespread and uncontrolled use of ART and intracytoplasmic sperm injection (ICSI), VD3 supplementation may turn out to be a simple and cheap clinical treatment for infertile couples.

Large-scale interaction networks encompass the results of experiments that help describe the different biochemical interactions between genes and their encoded proteins [24]. Pathway analyses play an important role in appreciating the biological steps involved in various disease processes; hence, more compelling biomarkers can be identified using dysregulated pathways [25]. We used a network-based method to determine the dysregulated pathways in VD-deficient infertile females, which may provide new insights into the processes leading to infertility in females.

Estrogen facilitates its biological response through various potential cellular mechanisms, mainly in two receptor-mediated ways: genomic activity and rapid nongenomic effects [26]. It has been described that the prompt response occurs within minutes in the process of therapy. Additionally, inhibition of the MAPK/ERK or AKT signalling pathway can block the nongenomic effects. The initiation of these signalling pathway activities is closely related to GPR30-mediated, plasma-membrane-associated processes [27].

In the current in silico interaction analysis, the plasma membrane estrogen receptor signalling pathway was found to be a dysregulated pathway based on the close interaction of the VDR and VDBP genes. Our research identified several proteins enriched in estrogen receptor signalling pathways, involving VDR, SMAD3, NCOR1, CREBBP, NCOA1, STAT1, GRB2, PPP2CA, TP53, and NCOA2. The estrogen receptor protein is considered a main factor in estrogen action, as it binds estrogens to initiate tissue responses. These receptor proteins are of two types, ERα and ERβ, and both have distinctive expression patterns [28].

It is worth mentioning that, in this study, VDR, NCOR1, and SMAD3 were also found to be enriched in ovarian infertility gene pathways. Fertility attributes in human populations are genetically controlled [29]. Genome-wide association studies (GWAS) identified 34 genome-wide significant signals for fertility in women, with replication in deCODE data for the Icelandic population and in the Women's Genome Health Study. The signals comprise associations with intronic SNPs in the oestrogen receptor 1 (ESR1) gene, which is also linked with the number of offspring [30].

Limitations

Studies suggest that BMI can modify the response to VD supplementation, notably in individuals with higher BMI. However, we did not study the association of BMI with metabolic disorders and alterations in VDBP, VDR, and VD. The study also includes a relatively small sample size (40 VD-deficient fertile females and 40 UI female subjects), which might limit the generalizability of our findings to a broader population. Our study identified low levels of VDR and VDBP in UI females.
In this context, the identification of VDR, NCOR1, and SMAD3 in this study must be evaluated further to explore their role in therapies for VD-deficient females with infertility. VD screening and correction may add value to the outcomes of patients receiving infertility treatments.

Conclusions

The study revealed the potential implications of VDD and genetic variations in VDBP for reproductive mechanisms, highlighting the pathways and genes associated with unexplained infertility in females. This is supported by the identification of significantly reduced levels of VD, VDBP, and VDR in UI females and of a mutation in the VDBP gene at the rs4588 SNP in exon 11 that affected the protein function (Thr436Lys). Furthermore, the enrichment of the plasma membrane estrogen receptor signaling pathway in VD-deficient infertile females, especially of VDR, SMAD3, NCOR1, CREBBP, NCOA1, STAT1, GRB2, PPP2CA, TP53, and NCOA2, points towards a link between infertility and VDD.

FIGURE 3: Genes enriched in the plasma membrane estrogen receptor signalling pathway. VDR: vitamin D receptor.

TABLE 1: Primer sequences of VDR and VDBP. VDR: vitamin D receptor; VDBP: vitamin D binding protein.

Polymerase chain reaction (PCR) for exons 8 and 9 of the VDR gene was performed using a 2× PCR Master Mix (Cat. No. G013; Applied Biological Materials Inc., Canada) as per the manufacturer's instructions. PCR conditions were an initial denaturation at 95 °C for 5 min for one cycle, followed by 35 cycles at 95 °C for 30 s, 58 °C for 45 s, and 72 °C for 45 s, and a final extension at 72 °C for 10 min. PCR for VDBP was performed using the GoTaq hot start master mix (Cat. No. M5122; Promega Corporation, Madison, WI) as per the instructions provided. PCR conditions were an initial denaturation at 95 °C for 5 min for one cycle, followed by 40 cycles at 95 °C for 30 s, 60 °C for 1 min, and 72 °C for 45 s, and a final extension at 72 °C for 10 min. The amplified products were run on gel electrophoresis using a 2% agarose gel. Purification of the PCR products was performed using PCR clean-up for DNA sequencing (Cat. No. BT5100; Bio Basic Inc., Canada) according to the protocol instructions. Sanger sequencing, a classical sequencing method, was utilized to sequence the VDR and VDBP genes in the samples, and the PCR products were sent to the sequencing company Operon Technologies Inc. (Alameda, CA, USA). Previously published VDR and VDBP gene sequence data were used directly to compare the resultant sequences using the National Centre for Biotechnology Information (NCBI) MEGABLAST search engine. Sequence files were viewed by importing them into Chromas Lite and then analyzed by assembling them in Molecular Evolutionary Genetic Analysis (MEGA) software, version 6.0. Statistical analysis was accomplished using Statistical Product and Service Solutions (SPSS, version 20; IBM SPSS Statistics for Windows, Armonk, NY) software, performing descriptive statistics and the Mann-Whitney U test. Statistical significance was considered at a p-value of < 0.05.

Table 2. The mean age of the female subjects was comparable between the fertile and infertile groups. The BMI (mean ± SD) was greater in infertile females (27.4 ± 3.6) as compared to fertile females (23.5 ± 1.7; p<0.001). The mean vitamin D values of infertile females (7.45 ± 2.
TABLE 2: Comparison of study variables in fertile and infertile females. *VDR (vitamin D receptor): values are expressed as mean ± SD. **VDBP (vitamin D binding protein): values are expressed as median (IQR). The Mann-Whitney U test was applied to find p-values, and p-values <0.05 were considered statistically significant.

On sequence analysis, the rs4588 SNP mutation (Thr436Lys) was found in exon 11 of the VDBP gene of infertile females, and a C/T mutation was found at position 47,846,396 in exon 9 of the VDR gene of infertile females. Figure 1C represents the sequencing chromatogram of exon 9 of the VDR gene in the infertile samples. Figure 1D shows Sanger sequencing chromatograms of exons 8 and 9 of the VDR gene in infertile females, with the intronic variant (T/C) in exon 9 highlighted. Figures 1A-1B represent gel electrophoresis images of the amplified PCR products for the VDR gene (band size ≈ 355 bp) and the VDBP gene (band size ≈ 462 bp) in infertile and fertile females.
Adoption return trips: Family tourism and the social meanings of money

Through a focus on the planning and making of family adoption return trips, this paper explores how the social meanings of money are entangled with family-making practices and family holidays. Adoption return trips are a global phenomenon, and travel agencies offer tailored adoption return-trip packages marketed as a type of family tourism. The new trend towards conducting adoption return trips as a family when children are still young is growing and has implications for families' finances, because return trips are expensive endeavours. Still, families prioritise these trips, raising them above purely economic values so they stand out as 'priceless'. The empirical material consists of interviews with 10 Swedish transnational adoptive families. The analyses show that family adoption return trips, despite their original features, are yet one more way of doing family holidaying. Money becomes an important contribution for understanding how family life is being done in and through parental, child and family-holiday ideals, as well as family intimacy.

Introduction

The fact that family-making practices are intertwined with family holidaying has been acknowledged. Research shows, among other things, the connections between holiday consumption and money and the social construction of the family (cf. Cardell, 2015; Gram et al., 2018; Hall and Holdsworth, 2016). To further this discussion, and by drawing upon a theoretical framework acknowledging the idea that money is both produced by and productive of social relations (Zelizer, 2005), this qualitative article focuses on the social meanings that money is given when planning and conducting an adoption return trip.

Family adoption return trips are a growing travel industry. Some travel agencies in Sweden, and elsewhere, have specialised in return trips and offer tailored packages marketed as a type of family tourism; a 'special experience for the whole family' (https://lotustravel.se/restyp/adoptionsaterresor/ ACS:201005). According to the CEO of a Swedish travel agency, the advice to spend somewhat more money on a return trip than on other trips is important. In this way, children can be kept in a good mood, which will make the trip successful (Travel Agency 1). A successful trip is important, not least due to the commercialised format within which such travel is framed. Families have high expectations. This was recently manifested in the Swedish television consumer show Plus, which featured a family with transnationally adopted children who demanded compensation from the travel agency. They had experienced their adoption return trip to China as unsuccessful because they were dissatisfied with the standard of the hotel and the cancelled visit to the child's orphanage (SVT, 2015).

A family adoption return trip is an interesting example of family tourism because families use economic activities to create, maintain and negotiate intimacy and social ties, both within the family and in relation to the future of the child, the family, the trip and the children's birth countries. The planning and making of an adoption return trip today is a time-consuming and expensive endeavour, and money is a complex, and deciding, factor for adoptive families who want to make an adoption return trip. The ways in which families use their money indicate that the meaning of money is broader than its purely economic value.
The investment in the family return trip also has a greater emotional purpose that cannot be measured in economic terms. By investigating how return trips are made by families through the use and spending of money, it becomes possible to understand the ways in which the values of money and emotions turn the trip into a family adoption return trip with touristy elements, while also doing family togetherness. Understanding the social meanings of money (Zelizer, 2005) when planning and making an adoption return trip may also provide insights into how families are 'done' in and through the practices of family holidays.

Literature review

Tourism consumption is often conceptualised as a rationalised activity outside the realm of everyday life (Su, 2010). However, as shown by Hall and Holdsworth (2016), tourism consumption is entangled with everyday practices, family life and relations. From this perspective, holiday money is not 'simply' money but also a social matter that influences the everyday life of families (Cardell, 2015). Money is embedded in economic and social relations, as in holidaying (Desforges, 2001; Su, 2010). The ways in which holiday money is used and spent, for example, through shopping, can also become a way of negotiating family identity, relations and belonging (Powers, 2017; Wagner, 2015).

There is a lack of knowledge about how children and their parents approach and use money in relation to their tourism experiences, and family holiday research has mainly focused on family members' different roles in holiday decision-making processes (Therkelsen, 2010). Today, family holiday planning has become a shared project between all family members, and children are perceived as having more influence than previously on family decision-making (Gram, 2007). Up until recently, research on family holiday consumption mainly focused on adult perspectives (Stilling Blichfeldt et al., 2011; Wu et al., 2010). This is why there is a need for further recognition of children's positions (Schänzel and Yeoman, 2014), because they have both a direct and an indirect impact on family holiday consumption (Cardell, 2015; Gram, 2007; Wu et al., 2010). As children initiate, gather information, evaluate and make actual decisions, just like their parents, they are active holiday consumers. Thus, the decision-making role is not only held by the person who possesses the economic power, the parent, but also by the person who possesses knowledge about the activity/product, that is, the child. In this way, the child becomes involved in the family economy (Wu et al., 2010). This means that, although children are not necessarily always in positions where they can easily access money, they are involved with how holiday money is used and spent (Cardell, 2015).

The rationality of family holiday decision-making processes is not clear-cut and does need problematisation. This is because, as argued by Therkelsen (2010), decision-making processes have indistinct beginnings and ends, and both children and their parents play undetermined, shifting roles in these processes. The outcome is that family holiday consumption often depends on compromises between children and parents, whereby each has to accommodate the other (Carr, 2006). Focusing on holiday consumption decision-making processes as shared therefore enables more complex and heterogeneous insights into family holidays (Obrador, 2012).
This is especially important if family holidaying is considered to be an integral part of family life, and of how family life is performed and done in practice (Gram et al., 2018; Haldrup and Larsen, 2003; Hall and Holdsworth, 2016). Although viewed as a way of easing the tensions of everyday life, family holidays intertwine with everyday sacrifices and are sources of stress, conflict, disappointment and frustration (Hall and Holdsworth, 2016). 'Ups and downs' are thus part of family holiday consumption, just like any other family practice (Gram et al., 2018: 194). 'The downs', however, often require legitimisation. 'Downs' caused by outside factors are usually dealt with more easily than problems, such as irritation, caused by, for example, the intensive time spent together (Gram et al., 2018). Irritation can also be explained by the fact that the children are tired or miss things from home. 'Downs' are sometimes dealt with, after the trips, through humour by turning poor experiences into good stories. This in turn creates a sense that, in hindsight, the downs of a holiday can not only be turned into a good story but also become signs of family togetherness (Gram et al., 2018; see also Cardell, 2015).

In the limited research on family adoption return trips, the concepts of family and identity have often held a central position (Chin Ponte et al., 2010; Gustafsson et al., 2019; Yngvesson, 2003), whereas aspects such as money have been more or less overlooked. An American study, in which the researcher joined three adoptive families with young children on their return trip to China, reveals culture clashes, especially since divergent economic realities were experienced during the trip (Powers, 2011). To avoid the difficulties, the families involved themselves in activities in the birth countries that made them feel comfortable - often the easily accessible and consumable parts, such as shopping for souvenirs. In this way, they distanced themselves from the things they found uncomfortable about the visit, which most often turned out to be poverty (Powers, 2011, 2017). Even though the study shows that the parents' desire was to integrate their children's birth culture, in this case Chinese, into their family lives, family consumption during the trip reinforced the families' and the children's American, rather than Chinese, identities and family bonds (Powers, 2011, 2017).

The findings from Powers' study (2011, 2017) could thus be compared with the study by Gram et al. (2018), as they both indicate that family holiday consumption, and the 'downs' experienced during such activities, has the potential to strengthen family ties. As such, family adoption return trips are both similar to and different from ordinary family tourism. Comparing the two research fields, it is interesting to note that, despite the obvious relevance of money, both seem to have neglected to examine the mingling of money with intimacy when making a family adoption return trip or going on an ordinary family holiday. Based on ideas which acknowledge that the planning and making of a family holiday falls under the realms of family practices (Morgan, 2011), and that the family is an important site for reproducing the social meanings of money (Zelizer, 2005), this study will reveal how money becomes integral to family holidays and family life.
Taking into account the complexities and tensions described within family holiday and family adoption return trip research, and the consumer values connected with them, this article sets out to explore how children and parents talk about money, intimacy and social relations when conducting a family adoption return trip. The study begins by drawing on the theory of Child Studies, which acknowledges children as competent, active actors and contributors to family consumer practices (Sparrman and Sandin, 2012).

Theory: A social approach to money

This article is situated at the intersection between travel, money and doing family life through investigating a social approach to money in practice (Zelizer, 1994, 2005). This means acknowledging the mingling of things that in everyday life are described as incompatible - the organisation of economic activities and the maintenance of intimate social relations (Zelizer, 2005). Following Zelizer, the idea is that economic activities do not necessarily corrupt or degrade intimacy; this mingling happens all the time and can create strong ties, as both earlier research has indicated and this article will show. To describe how this mingling takes place between family members in practice, and how it instructs decisions about how, when, where and why to spend money, I will use the term 'negotiation'.

From Zelizer's (2005) point of view, 'economic activities', which include the use of both monetary and non-monetary values, are entangled with how people construct interpersonal relations and ways of life. This means that money, in a broader sense, is both constructive of and constructed by social relations (see also Zelizer, 1994) and is given social meanings beyond economic values. An important ingredient in this process is the social and cultural 'earmarking' of money (Cardell and Sparrman, 2012). It is therefore important to explore the values that money is given by people themselves, as well as how it shapes them and their intimate relations - in this case, children, parents and families who are planning and conducting adoption return trips - as it can give us important, and unexpected, insights into the lives of both adoptive families and family tourism. In this text, money involves the spending of household incomes and savings, tourism consumption, caring for children, gifts that send the right message and the provision of an adoption return trip for the sake of love. Parents, children, foster parents and orphanages all earmark money. This means that they distinguish different sums of money from each other (Zelizer, 1994) by deliberating about what money should, and could, be used for during different stages of the trips. This describes how money is subjected to moral critique, guided by values that influence the process of financial decision-making. The morality of money then emerges as individuals make decisions about which sums of money to spend on what, while also considering how and for what purpose the money was earned to begin with (Wherry, 2017). Through such earmarking processes, categories of money multiply and create boundaries between, and within, intimate social relations (Zelizer, 2005). As will be shown in the analysis, people thus manage their relations, and their expectations of these relations, through how they use money (cf. Wherry, 2017).
This becomes obvious when the families undertake what Zelizer (2005) calls 'relational work' by differentiating and negotiating between the obligations, rights and intimate transactions specific to each relationship. Hence, the secret is to match the right sort of monetary payment with the social transaction at hand. Because intimacy is entangled with such transactions, a high level of trust and risk is implied (Zelizer, 2005). Consequently, people worry about doing the right thing and try to avoid moral failures by assessing how their economic activities are likely to impact upon their social relations (Wherry, 2017). This means that they also continuously avoid the confusion that would arise from mixing the 'wrong' kind of money with the 'wrong' relationship. Hence, different interpersonal relations involve different forms of intimacy, and the level of intimacy in a relationship is displayed, or in Zelizer's (2005) word 'purchased', through how economic activities are adjusted (see also Zelizer, 1994). Accordingly, this study will demonstrate the influence of social life on how money is organised by families making adoption return trips.

Zelizer's work has had a major impact on various research fields; nevertheless, she asks for more empirically situated research on money in practice (Zelizer, 2012). It is important to acknowledge intimate transactions because they are not trivial; they have macroeconomic consequences - not least on family tourism. Drawing on Zelizer, and the notion that intimacy and economics both mingle and draw apart, combined with child studies and family tourism, this article approaches money as a heterogeneous and social concept. This means acknowledging not only the social meanings of money, but also the dynamics of its use and its entanglement with values such as adoption, family life and tourism.

Methodology: Interviewing children and parents

The empirical material stems from a wider research project on family adoption return trips and consists of interviews with ten Swedish transnational adoptive families that were carried out during the period 2015 to 2018. Since more than a third of the transnational adoptions during the 21st century in Sweden have involved children from China (https://www.scb.se/hitta-statistik/artiklar/2018/allt-fler-adopterar-styvbarn/ ACS:201005), families were recruited through travel agencies specialising in return trips to China, and through the Swedish adoption agency Adoptionscentrum, which mediates adoptions from a large number of countries. Families who contacted the travel agencies received information about the research project through an email sent by the agency, while Adoptionscentrum posted information on its Facebook sites as well as distributing leaflets during meetings arranged by the organisation. Information distributed by Adoptionscentrum invited all transnational adoptive families, regardless of their children's birth countries. The families participating in this study were all planning to make a trip, and consist of ethnically white, middle-class or upper-middle-class single parents and heterosexual couples with children adopted from Asia, South America and Africa (Families 1-10). Twenty-six interviews were conducted with a total of 10 focus children aged 6 to 13 years, five siblings aged 7 to 15 years and 17 parents, both mothers and fathers (Gustafsson et al., 2019).
As adoption return trips are usually planned over a long period of time, the aim was to interview each family twice: first while planning the return trip (the before-interviews), and then on their arrival back from the trip (what I have called the after-interviews). However, some families had to reschedule their trips due to private issues, and some families did not respond to the request for the after-interview. As children are acknowledged in this study as competent cultural and social actors, consumers and tourists (Sparrman and Sandin, 2012), children and parents are approached as equal actors during the interviews (cf. Danby et al., 2011). The interviews were set up in two ways: (1) family interviews and (2) individual interviews. The idea was to capture the dynamics and interactions between family members as well as to give voice to children and parents individually (Reczek, 2014). The multiple forms of interviewing were not applied to gain a 'truer' or more holistic image of the world; rather, the strategy gives a nuanced understanding of both the return trip and money (Reczek, 2014). The interview setups required the interviewer to move smoothly in and out of loyalties and between children and parents (cf. Reczek, 2014). The interviews required careful methodological and ethical considerations, especially the child interviews, in order to avoid potential asymmetries between the interviewer and the child. Children had the opportunity to influence the interview situation by deciding whether they wanted to be interviewed by themselves or together with a sibling or parent. They could also stop the interview whenever they wanted to. The interviewer spent time with the children before the interviews, and the children were interviewed first, all to make enough space for them to express themselves and to be sensitive to the fullness of their voices, including silences (Spyrou, 2016). This became especially important in family interviews after the trips, when children participated in and shaped the conversations but did not necessarily talk to the same extent as their parents. Video and audio recordings were made, and the recorded material amounts to approximately 20 hours of audio recordings and 9 hours of video recordings. The interviewing took place in the family homes and the time spent there ranged from 3 hours to an entire day, thus giving the interviews an ethnographic character (Gustafsson et al., 2019). The researcher took extensive fieldnotes and wrote a summary after each family visit (Emerson et al., 2011). In addition to this, the material also includes family travel diaries and two interviews with travel agencies. The material has been transcribed, manually coded, sorted and converted into visual, thematic maps (Braun and Clarke, 2006). Rather than being a linear process with fixed analytical stages, the analytical work has been flexible and abductive in the sense that it has alternated between inductive and deductive approaches (Kennedy and Thornberg, 2018). The thematic maps made it possible to work closely with the material, inductively, and to sift out recurring themes. Three broad themes were developed: memory, motives and money, of which money is the focus of this article. The initial analysis suggested a more specific focus on the social aspects of money; therefore, a more deductive approach was applied to search the material theoretically from the top down.
Hence, repeated readings of the material were made, with a focus on both implicit and explicit accounts of the mingling of money and social and material relations. The empirical examples quoted in the analysis are unique in terms of direct content but have been chosen to illustrate recurring sub-themes about money, such as relational work, family doing and how shopping is a way of doing family life.

Purchasing adoption return trips

Planning and saving money for family holidays is often part of everyday life routines. It is an economic practice that involves both budgeting and decision-making (Hall and Holdsworth, 2016). As the cost of an adoption return trip is high, most adoptive parents in this study had put a lot of effort into saving up for the trip, taking out loans and planning all the details. As mentioned earlier, the CEO of one of the Swedish travel agencies that organises family adoption return trips said that it is important to plan for spending more money on a family adoption return trip than on usual family trips, for the sake of keeping the children in a good mood and making the trip successful. He also said that they recommend that parents spoil their children because it is important to avoid friction, arguments and tensions during a return trip (Travel Agency 1). Hence, adoptive parents are presented with the notion that a big investment goes hand in hand with an ideal, happy family experience and that the more money they spend, the fewer 'downs' (Gram et al., 2018) they risk facing. Even though it is the parents who manage the financing of the trips, the children in this study are aware of the high costs of making them. During a child interview with 12-year-old Minna, who was going to travel to Colombia with her family, she said: 'It costs a lot and it's a long way to travel, so it really costs a lot' (Family 8). After their family return trip to Ethiopia, 7-year-old Jacob was asked whether he would like to go back. His quick answer was 'of course', then he added, 'when you've earned some more money', while looking at his mother (Family 3). In their before-interview, Louise and Karl, parents of 13-year-old Filippa, said they had bought a 2-week, tailor-made trip to China. The trip was planned by a travel agency and the family later said that they had spent approximately 9,500€ on it. When asked if they had had to save money, Louise and Karl said that they had limited options when it came to financing the trip. Karl described family adoption return trips as projects that require long-term planning, that is, savings. In this case, however, Filippa had previously shown no interest in making the trip, so the parents had not saved up enough money. Instead, the family used Karl's pension insurance. Karl explains that, even though the money might be needed in the future, this trip is the most important thing to the family here and now. Later, Karl says that his biggest hope for the trip is that it will improve Filippa's self-esteem and provide her with information about her past. He also emphasises that the trip will generate experiences that will be 'cool' to share as a family in the future (Family 9). As well as viewing the trip as an opportunity for the children to reconnect with their birth countries (cf. Chin Ponte et al., 2010), many parents in this study expressed the same hopes as Louise and Karl, which suggests that the trips have several purposes and aims, one being the future benefits of this joint family experience.
This makes the decision-making process rather intricate as it involves the wishes of both children and parents, the family's future and the recommendations given to adoptive parents in the Swedish adoption context (Gustafsson et al., 2019). This complexity is revealed in Louise and Karl's discussion, which indicates that not going on an adoption return trip was never really an option. This impression is strengthened by the fact that the parents also compared the financing of the trip with the financing of the adoption itself by saying: 'that's also something you take out loans for'. This suggests that money earmarked for a return trip differs from the money allocated to a regular family holiday because the parents distinguish between different payments in order to make a statement about the importance of the return trip. Even though money enables the trip, it is emotional values and positive family outcomes that serve to justify the expense. The trip is, in Zelizer's (2012) words, priceless to the parents, especially because what is emphasised is the effort required to even realise the trip, rather than the fact that it becomes an economic loss for the family. Despite future economic uncertainty and the notable fact that the consumption of the return trip is achieved at the expense of other extremely important things, the sacrifice made by Karl is gently toned down in favour of the family value of the trip; an upcoming family experience that is not really measurable in terms of money. We can thus imagine that the sacrifices made by the parents become a symbol of their investment in the family. The purchase of the trip is an economic choice that confirms the high level of intimacy between the parents and their child, and is a way to ascribe further meaning to the family relationship that is made possible by special money. However, it is also the intimacy of the relationship that allows for this money to be invested in the return trip to begin with (see Zelizer, 2005).

Imagining family intimacy through holiday consumption

In family discussions about consumption-related practices following their return from the trip, talk about 'value for money' becomes important (cf. Desforges, 2001). As opposed to the parents' talk before the trips, when money was almost invisible, now the talk is about high and low prices, knowledge of good bargains and where to go shopping for meals (Families 3, 4, 7, 9, 10). One mother, Yvonne, states that her son was not even interested in the trip until he was told that one could travel there just for the sake of the shopping (Family 1). From this perspective, the spending of money is not only an important part of the trips; in some sense it can be said to saturate and even motivate them. This makes adoption return trips similar to regular family holidays because they, too, consist of various consumption-related sub-decisions that are made both before and during the trip (see Stilling Blichfeldt et al., 2011). During the before-interviews, many children said that they were looking forward to buying clothes (see Families 8, 9), toys and souvenirs (see Family 6) and/or trying different restaurants and food (see Families 1, 3) during the trip. Twelve-year-old Minna, for example, said that she wanted to shop for clothes during the trip because it was cheaper to buy fashion items there (Family 8). Thirteen-year-old Filippa said that she wanted to buy cheap designer items in China, but also that she was a bit unsure whether these items were fake or not (Family 9).
The children's ideas of treating themselves with items purchased at a reduced price are about making their money go further (cf. Hall and Holdsworth, 2016). In this way, children are competent consuming tourists because, not only do they understand the value of money, but their reasoning also indicates that they have knowledge about the value of consumable goods. This is further reinforced by the fact that some children say they were saving money from their weekly allowances and birthdays in order to shop for such things during the trips (see Families 6, 8, 9). Hence, children earmarked their money by making it more valuable during the upcoming trip (cf. Cardell, 2015). During the trips, children and parents mainly did things that were already known to them. For example, they chose a safe track when shopping and went to branded places like Starbucks, Häagen-Dazs, McDonald's, Subway and Nike. During a family interview after the return trip, one family explained that they got bored and homesick one day, especially the daughter, because there was nothing to do in the town where they were staying and the weather was very hot, so as a solution, they decided to go to Starbucks. According to the mother, her daughter Astrid then said: 'oh, how lucky I am you got me away from here, I don't want to live here' (Family 7). In this example involving Astrid, several aspects, both tangible and intangible, such as mood, the weather and the lack of attractions, are used to explain a poor family experience during the trip, which led to homesickness and eventually a visit to Starbucks where the family could reunite and regroup. The family talk about Starbucks and Astrid's expressed feelings also turn the poor experience in the birth country into a good family story (cf. Gram et al., 2018). In another family interview, the 7-year-old boy Jacob proudly displayed his purchases from his trip to Ethiopia. He brought out a painting he had bought at an Italian restaurant that was popular among tourists. When asked if many tourists visited this restaurant, Jacob said: 'Yeah, but so were we, weren't we?' (Family 3). While it seems to have been quite obvious to Jacob that he and his family were tourists among other tourists at the Italian restaurant, the story involving Astrid, on the other hand, illustrates how the consumption experience during the return trip became a tool for negotiating family identity (see also Powers, 2011, 2017) and bonds. These stories are important because they reveal that children's consumption during the return trips becomes a symbol, not only of their temporary stay in their birth countries, and their tourist identities 'over there', but also of their Swedishness and family belonging. Therefore, they reveal that children do engage in relational work, setting boundaries by talking about the return trip as a tourist experience and creating connectedness within the family (cf. Zelizer, 2005). Negotiations involving family identity were also conducted through the parents' talk about the spending of money during the trip. In the travel diary of Karl's family, who went to China, the father uses the explicit value of money to describe the low cost of eating, shopping and getting around in China. At one point, Karl describes a visit he and his family made to a great mall, famous for its low prices. He describes how his family haggled a lot, to the extent that they found it embarrassing because they were not used to it.
He then describes how the family's tactic became to haggle the price down to 25% of what the sellers were demanding, and how they always succeeded. He then sums up by saying that the family had a great time and that his wife and daughter looked extremely happy with their purchases at the end of the day (Travel Diary 3). Here, this father positions his family, the 'we', both as outsiders to the Chinese community and as Swedes. They are outsiders who finally managed to understand how consumption works 'there' by defining the sellers, getting bargains and ending up with a happy experience. This indicates that consumption during the trip becomes important because it is an opportunity to spend quality time together as a family. The shared learning about how to consume becomes a way of doing family and intimacy and is also a situated practice within which family members construct and negotiate the meanings of a successful family vacation (cf. Hall and Holdsworth, 2016; see also Families 3, 4, 7). What seems equally important, however, is the fact that the experience involved spending as little money as possible and getting value for the money that was spent (cf. Desforges, 2001). In the stories told by these families, money becomes central for several reasons. Talk about low prices illustrates the positive sides of the return trip. It is the ultimate way of reassuring others that one truly got 'value for money'. From this perspective, consumption also seems valuable as a way of balancing the more serious and sensitive parts of the trips. Spending time together, shopping and enjoying the touristy parts of the trip allows relaxation and vacation time together as a family. It becomes a way of defending themselves and their family boundaries through shopping (cf. Zelizer, 2005). These activities also enable family members to participate on equal terms without emphasising the child's separate relation to the country. Family holiday consumption, and the spending of money, are thus important for the construction of family intimacy (Zelizer, 2005; see also Hall and Holdsworth, 2016), and for reconfiguring family identity.

Treats and money as relational work

Although an adoption return trip has similarities with regular family holidaying, the purpose of the travelling differs in some ways because it is a trip to the adopted child's birth country (Gustafsson et al., 2019). While some families considered it important to meet with people connected to the child's past and had made plans to do so (Families 1, 3, 4, 5, 6, 8, 9, 10), other families had no such intention; they just wanted to experience the country (Families 2, 7). Where such meetings occurred, parents were responsible for arranging them, even though the children also participated in the discussions about them. On returning home from the trip, the families showed the researcher what they had received from early-care individuals and talked about how they had felt during the meetings. It turns out that economic activities (Zelizer, 2005) are central to many of these stories about meetings with early-care individuals. Orphanage visits and meetings with early-care individuals are described by travel agencies as very important parts of the trips and, as Powers (2011: 140) states, they often become 'a stand in for biographical knowledge', a way to fill in the gaps in the child's past (see also Chin Ponte et al., 2010). However, they are also a way for adoptive parents to show appreciation to those who once cared for their children.
One mother in this study, for example, explained that the main goal of the trip was to meet with her son's foster mother so that she could thank her for taking such good care of him (Family 1). This means that orphanage visits and meetings with foster families are associated with expectations, but also with the potential risk that the meetings will not live up to such expectations. After their return trip, mother Jill described the meeting with her daughter's foster family as the best part of the trip because the foster parents told them everything about their daughter as a baby and shared their theories about her unknown biological parents. They had lunch at the orphanage together with the foster mother, and described how the foster mother had been very intimate. For example, she fed their 8-year-old daughter, Simona, during the meal, which Simona herself said she found a bit strange. On top of paying a visiting fee, Simona's family also gave gifts and money to the foster parents and, together with the other travelling families, gave money to the orphanage (Family 4). Elisabeth, mother of 7-year-old Stina, described their rather close connection to the orphanage, and how they had hung around with the staff and children. Like many other parents in this study, Elisabeth said that it was important to travel back while Stina was still young, so that the staff would remember her. Elisabeth also said that if she had not seen the positive sides of the orphanage, she and Stina would only have gone there for a short visit and guided tour. In this case, however, the mother and daughter took the entire orphanage, consisting of approximately 20 children plus staff members, on a day trip to an amusement park. This visit was paid for by Elisabeth and included a chartered bus, entrance fees and lunch (Family 10). These stories illustrate that the boundaries around the relations between the families and early-care individuals are partly negotiated through money. While the overall aim of the economic activities is to demonstrate care and appreciation to those who once cared for their child, the involvement is ambiguous because, at the same time as they are a token of gratitude, they are also interwoven with social expectations. The early-care individuals are expected to show engagement, add to the history of the child and contribute to an overall positive experience. Hence, the families' imaginings and expectations of genuine engagement also instruct their consumption (cf. Wherry, 2017; Zelizer, 2005; see also Desforges, 2001; Su, 2010). The experiences at an orphanage can also become a way of doing family because the meetings with early-care individuals add to the child's baby story. Visiting the orphanage, then, not only connects parents and children to the child's past and people from it, but also provides a deeper context for building/creating a family history (cf. Powers, 2011). Similar expectations, although with a different outcome, are explicitly described in one of the travel diaries. The mother writing the diary explains that the orphanage, and its staff, were in a poor condition, which made the family sad and disappointed. The mother describes how a woman from the staff asked the parents for a donation of money, a request they turned down because: 'If we had spotted the slightest glimpse of hope at the orphanage, that the manager wanted to make things better for the children, we would immediately have reconsidered our decision not to give anything other than fruit and sweets.
Unfortunately, we couldn't find that reason' (Travel Diary 1). Because the orphanage did not live up to expectations, and because, rather than being blissful, the experience involved anger, despair and disappointment, the financial donation was withheld. Due to her concerns about doing the right thing, the mother assesses the outcome of a potential economic transaction (cf. Wherry, 2017; Zelizer, 2005) and decides to give fruit and sweets instead. These are gifts that we can imagine are intended for the children at the orphanage rather than for the orphanage or its staff. What is interesting about this story is not just the money and how the withholding of money becomes a symbol of a disappointing family experience at an orphanage. It is also about how money is intertwined with the moral adjustments made by the family that dissolve, rather than strengthen, their ties to the orphanage, its staff and the child's history as a baby. This kind of deliberation about matching the right sort of monetary payment with the social transaction at hand (Zelizer, 2005) was also undertaken in the talk of another family, where the father, Gunnar, said that he was a bit sceptical before the meeting with the foster family because: 'I wondered if there was something shady, I mean, they got paid for being foster parents at the time, 19€ a month, and 1 kilo of sugar that they put in the formula, so we thought they had been paid too little and wanted more from us' (Family 4). It turned out, however, that the foster parents gave money to their daughter instead of asking for money. 'And then we were ashamed', Gunnar said, referring to his initial thoughts (Family 4). Apart from illustrating how the complex mingling of intimacy and economic activities serves as a valuation of an orphanage visit, this story further reveals how relational work becomes highly important and that money can serve as a moral compass and a way to negotiate what it is appropriate to give and receive within a caring relationship (cf. Zelizer, 2005). The experience eventually became a successful one for Gunnar and his family, partly because money was never requested. This, in combination with the gift given to their daughter, then becomes a symbol of a genuine relationship between the foster parents and the child. Zelizer's (2005) notion of 'the social meanings of money' has made it possible to explore how, and which, money shapes the adoption return trip and how children's and parents' talk about, and use of, money creates family. The value that money is given varies with what it is used for in relation to the trip. Children mainly talk about money in relation to enjoyment, focusing on shopping and consumption. This has a tendency to turn the trip into more or less any holiday. However, the children's talk about money before, during and after the trips shows that they are a part of, and involved with, the family economy and how holiday money is being spent (cf. Cardell, 2015; Wu et al., 2010). Parents' talk about money is strongly linked to the moral dimensions that motivate which money can be used for what. The different ways in which the trips themselves are paid for show how the cost and the financial value of the trip are both immeasurable and indisputable to parents. The price of the adoption trip, and the fact that it must be paid for, is more or less unquestionable, even if the trip empties the retirement account. The moral values connected to money also become visible in relation to orphanage visits.
Money is used in these situations to create, maintain and/or disclose social and emotional relations with foster parents, orphanages and staff.

Concluding discussion

Focusing the analysis on the mingling of intimacy and economic activities during this specific type of family holidaying - family adoption return trips - contributes insights into how the 'ideal adoptive family' on holiday is being constituted. For example, money becomes a unifying entity when children and parents discuss it. This mutual process even creates a bond of togetherness when the family invests both money and emotions (cf. Sparrman et al., 2016). In the sense of 'ups' and 'downs' described by Gram et al. (2018), the analysis has shown that adoption return trips both resemble and differ from regular family holidays. However, the 'downs' during a return trip seem to have less to do with internal family problems caused by the intense time spent together than with external frictions and conflicts connected to the birth country. For example, the families highlight the lack of available activities, unwanted weather conditions and poor experiences at orphanages as explanations for homesickness, feelings of discomfort and dissatisfaction. And, as shown, such 'downs' in mood are often solved by spending money. In retelling the shared 'downs' of the return trips, the families - both children and parents - seem to bring the family members closer together as a family, rather than bringing the child closer to her/his birth country. Drawing on family tourism and child studies, in combination with the theoretical framework of the social meanings of money, this study suggests not only that family adoption return trips can be seen and understood as an example of family holidaying, but also that family holiday money and family adoption return trip money negotiate how one becomes and makes a family. That is, families' mingling of intimacy and economic activities while holidaying contributes to our understanding of yet another way of shaping holiday, parental, child and family ideals. It is important to note that children are always part of how families are constructed and how the values of the family, money and even parenthood are constituted (cf. Sparrman et al., 2016). This is equally true when studying family holidaying or, as in this case, adoption return trips. The argument is that it is precisely this complex combination of travel, money and intimacy that demonstrates how adoption return trips are an integral part of adoptive families' everyday lives, as the trip is negotiated before it is made, while it is occurring and after it has been completed. To understand family holidays, the analysis shows the importance of including both children and parents in research because this broadens our understandings, in this case of adoption return trips. However, the study also contributes to our understanding of contemporary family holidaying in general, by displaying the heterogeneity of what a family holiday, and a family, might be. Finally, from a broader perspective, these stories also reveal that intimate transactions are not trivial in any sense because they have macroeconomic consequences in generating cash flows between rich and poor countries: adoption payments, global travelling, return trip payments, gifts, donations and local hospitality consumption. Hence, they reinforce inequalities between countries and people (cf. Zelizer, 2005; see also Powers, 2011).
However, the economic challenges presented by some of the parents in this study also reveal the global reality as somewhat messy, and perhaps even more complex.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

Data availability statement

The data can be found at the Department of Thematic Studies - Child Studies, Linköping University, Sweden, only on special grounds through the author.

Ethical review

The study was approved by the Regional Ethical Review Board of Linköping University (Reg. no. 2015/34131).
Metallic-Line Stars Identified from Low Resolution Spectra of LAMOST DR5

LAMOST DR5 released more than 200,000 low resolution spectra of early-type stars with S/N > 50. Searching for metallic-line (Am) stars in such a large database and a study of their statistical properties are presented in this paper. Six machine learning algorithms were experimented with using known Am spectra, and both the empirical criteria method (Hou et al. 2015) and the MKCLASS package (Gray et al. 2016) were also investigated. Comparing their performance, the random forest (RF) algorithm won, not only because RF has a high success rate but also because it can derive and rank features. The RF was then applied to the early-type stars of DR5, and 15,269 Am candidates were picked out. Manual identification was conducted based on the spectral features derived from the RF algorithm and verified by experts. After manual identification, 9,372 Am stars and 1,131 Ap candidates were compiled into a catalog. Statistical studies were conducted, including the temperature distribution, space distribution, and infrared photometry. The spectral types of Am stars are mainly between F0 and A4 with a peak around A7, which is similar to previous works. With the Gaia distances, we calculated the vertical height Z from the Galactic plane for each Am star. The distribution of Z suggests that the incidence rate of Am stars shows a descending gradient with increasing |Z|. On the other hand, Am stars do not show a noteworthy pattern in the infrared band. As wavelength gets longer, the infrared excess of Am stars decreases, until there is little or no excess in the W1 and W2 bands.

INTRODUCTION

As a class of chemically peculiar (CP) stars, metallic-line (Am) stars show weaker CaII K lines and enhanced metallic lines in their spectra compared with normal A-type stars. They were first described by Titus & Morgan (1940), and were formalized into the MK system by Roman et al. (1948). Conti (1970) gave a more detailed definition of Am stars describing their nature: stars whose atmospheres present an underabundance of calcium (or scandium) and/or an overabundance of iron-group elements are defined as Am stars. According to the above definition, Conti (1970) divided Am stars into three subgroups: stars with both weak CaII K lines and strong metallic lines, stars with only weak CaII K lines, and stars with only strong metallic lines. Later studies investigated, for example, light variations of 29 Am stars. However, the numbers of Am stars used in the above studies are still too few for their incidence, and the lack of Am stars has become a bottleneck in understanding the Am phenomenon. After the first catalog of chemically peculiar (CP) stars (Renson et al. 1991), Renson & Manfroid (2009) collected about 4,000 Am stars (or probable ones) from a large number of literature sources and presented another catalogue of CP stars, in which 116 stars have been well studied. According to an empirical separation curve (ESC) derived from the line index of the Ca II K line and 9 groups of Fe lines, Hou et al. (2015) found 3,537 Am candidates in LAMOST DR1. This was the first search for Am stars in a large database of low resolution spectra. However, Am stars and normal stars cannot be distinguished in their marginal region simply by using a separation curve alone. In the following year, Gray et al.
(2016) employed the MKCLASS program to classify spectra in the LAMOST-Kepler field and obtained a total of 1,067 Am stars with hydrogen line types ranging between A4 and F1. MKCLASS is a spectral classification software package that performs classification by mimicking human-like reasoning, but it was designed for spectra of common types and high quality, and sometimes fails on low quality or rare objects. A number of sky survey projects, such as RAVE, LAMOST, SEGUE, and GAIA-ESO, are collecting massive numbers of stellar spectra, which provide us with opportunities to search for Am stars. Traditional astronomical research methods such as manual operation and human identification are no longer sufficient. Many machine learning algorithms have been used in the analysis of astronomical data because of their ability to efficiently search for and recognize certain types of stars. In this paper, we intend to search for Am stars using machine learning methods. Compared with various sophisticated classification algorithms, the RF algorithm wins in both success rate and efficiency. In addition, we still need a manual inspection step to guarantee the correctness of the Am stars obtained by machine learning, since the metallic lines in low resolution spectra are very weak and are easily affected by noise. Therefore, we adopt the RF algorithm to design a classifier, and manual examination is used to further check the results. A key issue is to rule out contamination by Ap stars, which also belong to a class of CP stars; in their stellar atmospheres, only one or a few elements (including silicon, chromium, strontium, and europium) are extremely enhanced. Since some Ap stars also exhibit the abundance characteristics of Am stars to a certain degree (Smith 1996; Romanyuk 2007), the obtained Am data set may contain a small number of Ap stars. The largest difference between Am and Ap stars is that Ap stars have intense magnetic fields. However, it is difficult to distinguish between Am and Ap stars in low resolution spectra without the spectral features caused by the magnetic effect. In this work, we can only label some spectra with extreme abundances of elements such as silicon, chromium, strontium or europium. Since those elements are also generally enhanced in Am stars (Gebran et al. 2008; Fossati et al. 2008a), we need follow-up analysis with high resolution spectra to identify whether they are Ap stars or Am stars. A large sample of Am stars from a single survey, without instrumental or processing differences, is useful for statistical study. In this paper, we searched for Am stars in LAMOST DR5 using machine learning methods in conjunction with manual inspection. The paper is organized as follows. In Section 2, we describe the data sets used in this study and the data preprocessing steps. In Section 3, we compare various classification methods, show the advantage of the RF algorithm in searching for Am stars, and give the resulting Am stars from the manual check as well as possible Ap stars. In Section 4, we conduct some physical statistical analyses of the Am stars. A discussion is then presented in Section 5. Finally, we summarize this work in Section 6.

LAMOST Data

The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is a reflecting Schmidt telescope with a 4-m effective aperture and a 5-degree field of view. 4000 fibers mounted on the focal plane enable it to observe 4000 objects simultaneously.
The telescope is dedicated to a spectral survey over the entire available northern sky and is located at the Xinglong Observatory of Beijing, China (Cui et al. 2012; Zhao et al. 2012; Luo et al. 2012, 2015). Compared with SDSS spectroscopic observations, the LAMOST survey is more concentrated on the Galactic disk, where more young stars exist, which is very advantageous for searching for Am stars. By the end of the first five-year regular survey, LAMOST had obtained 9,017,844 spectra of stars, galaxies, and QSOs with a spectral resolution of R ≈ 1800, wavelength coverage ranging from 370 to 900 nm, and a magnitude limit of about r ≈ 17.8 mag for stars. The total number of stellar spectra reaches 8,171,443, making it a gold mine waiting to be exploited. All the above numbers can be found in the LAMOST spectral archive. Before searching for Am stars, we limited the search scope through the following conditions:

1. Because the spectral types of Am stars are generally A and early F, we limited the search to only A-, F0-, F1-, and F2-type stars in LAMOST DR5, whose spectral types come from the LAMOST 1D pipeline.

2. The spectral features of Am stars mainly appear in the blue wavelength band, and their metallic lines are relatively weak and susceptible to noise; thus we only retained spectra with a g-band signal-to-noise ratio greater than 50 (S/N > 50).

3. To ensure the accuracy and stability of classification, we eliminated some spectra containing zero flux in the blue part of the spectrum.

More than 10% of the objects have been repeatedly observed multiple times by LAMOST. For such targets, we only retained the spectrum with the highest signal-to-noise ratio. Finally, we obtained 193,345 stellar spectra as our searching data set. We need labeled data to form the training and testing data sets, and all the labeled data are also handled using the three operations above, except for Evaluation Set II.

Labeled Data

In order to train and test the classifier, we collected known samples of Am stars and non-Am samples. We first selected Am samples with confidence greater than 0.5 and all non-Am stars from the work of Hou et al. (2015), and then removed those close to the empirical separation curve to ensure the purity of the Am positive and non-Am negative sample sets. This yielded 1,805 Am stars as positive samples. For the non-Am stars, further screening was conducted using the MKCLASS software. We randomly chose the same number of non-Am stars as negative samples, and these positive and negative samples were distributed evenly between the training and test sets. Finally, we obtained 1,806 training samples and 1,804 test samples.

[Table 1 note: the first column lists the name of each dataset; the second, the number of samples screened out by various criteria; the third, the functional description of the dataset in this work; the last, the sources of the datasets.]

In addition, we also chose some Am stars from Gray et al. (2016) and Renson & Manfroid (2009) as evaluation sets in order to evaluate the performance of a variety of methods. For the 1,067 Am stars presented by Gray et al. (2016), we applied the pretreatment process of Section 2.1. Then, according to the K-line and metallic-line spectral subtypes, those samples were classified into classical Am stars and marginal Am stars. In the former, the K-line type is earlier than the metallic-line type by at least five spectral subtypes.
In the latter, the difference between the two types is less than five spectral subtypes (Gray & Corbally 2009; Morgan et al. 1978). We obtained 357 classical Am stars and 76 marginal Am stars as Evaluation Set I. The 116 known Am stars from the catalog of Renson & Manfroid (2009) were cross-matched with LAMOST DR5, and only four counterparts were found, which comprise Evaluation Set II. All labeled data sets are summarized in Table 1.

Input Feature Selection

According to the characteristics of the underabundance of Ca and the overabundance of Fe-group elements in the atmosphere of an Am star, Hou et al. (2015) classified Am stars using the empirical separation curve (ESC hereafter), which is derived from the line indices of the K line and 9 groups of Fe lines. We used Evaluation Set I to evaluate the classification ability of the ESC, as shown in Figure 1, in which there are 357 stars labeled as classical Am stars (green dots in the figure) and 76 stars labeled as marginal Am stars (blue dots). We found that only 345 classical and 52 marginal Am stars were judged as Am stars by the ESC (red curve), i.e. the recall of the ESC is 0.966 for classical Am stars and 0.684 for marginal Am stars, respectively. Obviously, the ESC based on the line index method is slightly inadequate for distinguishing marginal Am stars from normal early-type stars. This is mainly because the chemical peculiarity of the marginal Am stars is much weaker than that of the classical Am stars, and the difference between the marginal Am stars and normal early-type stars is smaller than that of the classical Am stars. In addition, some spectral lines, such as the Fe lines, are weaker, and the batch calculation of those line indices will lead to large errors, which will reduce the recall rate of marginal Am stars. For the above reasons, we decided to use the fluxes of the spectra as the input features of the classifier. Considering that the spectral line features of Am stars are more densely concentrated in the blue wavelength range than in the red, we chose the normalized fluxes in the wavelength range between 3800 Å and 5600 Å as the input values of the classifier model.

Input Feature Normalization

Before selecting a classification model, we must remove the influence of the pseudo-continuum on the classifier. This is the key to successfully distinguishing Am stars. We improved the pseudo-continuum fitting technique (Lee et al. 2008) by adding an automatic identification of strong lines. The details of this procedure are as follows.

Step 1: The wavelength range of each spectrum was truncated to 3800 Å to 5600 Å.

Step 2: A ninth-order polynomial was used to fit each spectrum, and points more than 3σ away from the fitted function were masked, including strong spectral lines, cosmic rays and sky emission residuals from data reduction.

Step 3: A ninth-order polynomial was used to iteratively fit each spectrum, where points more than 3σ below the fitted function were rejected. The purpose is to find the approximate upper envelope of each spectrum as its pseudo-continuum.

Step 4: The pseudo-continuum was removed from each spectrum by dividing the observed spectrum by the pseudo-continuum.

The intensity of each spectrum was rectified using this method. Figure 2 shows the results obtained with the improved method. One can see from Figure 2 that the pseudo-continuum is well removed from the spectrum.
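For concreteness, the four steps can be expressed as a minimal Python sketch, assuming numpy arrays wave and flux for a single spectrum; the iteration cap n_iter is an assumption, since the text does not state a stopping criterion for the upper-envelope fit.

import numpy as np

def rectify_spectrum(wave, flux, order=9, n_iter=5):
    """Sketch of the pseudo-continuum normalization (Steps 1-4)."""
    # Step 1: truncate to the 3800-5600 A range.
    sel = (wave >= 3800.0) & (wave <= 5600.0)
    w, f = wave[sel], flux[sel]

    # Step 2: first fit; mask points deviating by more than 3 sigma
    # (strong lines, cosmic rays, sky-subtraction residuals).
    coeff = np.polyfit(w, f, order)
    resid = f - np.polyval(coeff, w)
    keep = np.abs(resid) < 3.0 * np.std(resid)

    # Step 3: iterative fit of the upper envelope; only points more than
    # 3 sigma *below* the fit are rejected (n_iter is an assumed cap).
    for _ in range(n_iter):
        coeff = np.polyfit(w[keep], f[keep], order)
        resid = f - np.polyval(coeff, w)
        keep &= resid > -3.0 * np.std(resid[keep])

    # Step 4: divide by the pseudo-continuum to rectify the spectrum.
    continuum = np.polyval(coeff, w)
    return w, f / continuum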
Classifier Selection

We selected six sophisticated classification algorithms from the scikit-learn website: K-nearest neighbors (KNN), support vector classification (SVC), Gaussian process (GP), decision tree (DT), RF, and Gaussian naive Bayes (GNB). Using the parameter values recommended by the website, we trained these classifiers on the training set separately and tested their performance on the test set. We then performed two external evaluations of the first three winning classifiers and compared them with the previous methods of searching for Am stars, such as the ESC and the MKCLASS software, and finally chose the RF algorithm as the classifier in our work.

Classifier Evaluation Criteria

We used precision, accuracy, recall, and F1 score as the criteria to evaluate the classifiers. The four evaluation criteria are defined as follows:

precision = TP / (TP + FP),
accuracy = (TP + TN) / (TP + FP + TN + FN),
recall = TP / (TP + FN),
F1 = 2 × precision × recall / (precision + recall).

Since the test set is composed of labeled samples, it is easy to judge whether the classifier's results on the test set are correct. TP is the number of true positive samples that are correctly classified as Am stars by a classifier. FP is the number of false positive samples that are misclassified as Am stars by the classifier. Similarly, TN is the number of true negative samples that are correctly classified as non-Am stars by the classifier, and FN is the number of false negative samples that are misclassified as non-Am stars by the classifier. Precision is the fraction of true positive samples among the set classified as Am by the classifier. Accuracy measures the fraction of samples that are correctly classified in the entire set. Recall measures the fraction of Am stars that are correctly classified over the total number of Am stars. The F1 score is the harmonic mean of precision and recall.

Internal Testing

The samples in the training set and the test set come from Hou et al. (2015), among which the positive samples (Am stars) were labeled by the ESC while the negative samples (non-Am) were labeled by both the ESC and the MKCLASS software. According to the catalogue of Hou et al. (2015), the positive samples in the training set consisted of 490 (54.3%) classical Am stars and 413 (45.7%) marginal Am stars, without considering the uncertainty of the spectral subtypes of the K line and metallic lines. The classification performance on the test set of the aforementioned classifiers trained on the training set is listed in Table 2. This table is ordered in terms of the F1 scores. Clearly, the first three classifiers (GP, KNN, and RF) show better performance in the internal test.

[Table 2 note: only the accuracy and F1 are shown for the classical Am and marginal Am sets, because the recall equals the accuracy and the precision equals 1 for positive samples; only the accuracy is listed for the non-Am set, because the precision, recall, and F1 all equal 0 for negative samples.]

We also divided the test set into three subsets: classical Am, marginal Am, and non-Am, and tested the performance of the classifiers on them separately. The detailed information is listed in Table 2. As can be seen from the table, the first three classifiers also have good classification performance for the marginal Am stars.

External Evaluation I

Evaluation Set I comes from Gray et al. (2016) and consists of 357 classical and 76 marginal Am spectra, labeled by the MKCLASS software. We tested the classification performance of GP, KNN, and RF, and compared them with the ESC using this data set. These results are shown in Table 3.
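As a concrete illustration of the comparison, the snippet below trains the six scikit-learn classifiers with default parameters and reports the four metrics on the test set. It is a minimal sketch: X_train, y_train, X_test, and y_test are assumed to hold the normalized fluxes and Am/non-Am labels prepared as described above, and the paper's exact settings beyond the website defaults are not specified.

from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "KNN": KNeighborsClassifier(),
    "SVC": SVC(),
    "GP": GaussianProcessClassifier(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "GNB": GaussianNB(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)   # X_*: normalized fluxes; y_*: 1 = Am, 0 = non-Am
    pred = clf.predict(X_test)
    print(f"{name}: P={precision_score(y_test, pred):.3f} "
          f"A={accuracy_score(y_test, pred):.3f} "
          f"R={recall_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f}")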
For comparison purposes, the table is ordered in terms of the F1 scores. It can be seen from Table 3 that the classification performance of RF is more stable than that of the other machine learning algorithms. In addition, the classification ability of RF is also more prominent than that of the ESC method for both classical and marginal Am stars.

External Evaluation II

Evaluation Set II was used to compare the RF algorithm with the MKCLASS software. The samples in Evaluation Set II come from the catalog of Renson & Manfroid (2009), in which only four counterparts can be found in LAMOST DR5. The RF algorithm and the MKCLASS package were used to classify the four well-studied Am stars, and the results are listed in Table 4. These stars have also been recognized as Am in the literature based on analyses of chemical element abundances, and these references are also listed in Table 4. The RF algorithm classified all four stars as Am stars, which is consistent with the results from the literature. However, the MKCLASS software could only classify the star HD73818 out of the four as an Am star. Obviously, the RF classifier is a more suitable tool for searching for Am stars; after all, the MKCLASS software was not specially developed for Am stars. For visual inspection of the four Am stars, we plot their spectra and corresponding normal stellar templates with the same spectral types given by the H lines in Figure 3. The best matching Kurucz template (Castelli & Kurucz 2003) for each spectrum was obtained through cross correlation. The black curve in each panel is the normalized Am spectrum, while the red one is the best matching template. One can see that the Balmer lines of the four Am spectra fit their best templates well, but the K lines are weaker than those of the templates, while the metallic lines show just the opposite. This is in line with the characteristics of the first subgroup of Am stars, with weak K lines and strong metallic lines. Compared with the ESC, MKCLASS, GP, KNN, SVC, DT, and GNB methods, the RF algorithm is the best choice for searching for Am stars. After obtaining Am candidates using the RF algorithm, an eyeball check was conducted against the best matching templates.

RF-Based Classifier

The RF algorithm is a kind of bagging algorithm in ensemble learning. N training samples are randomly selected from the original sample set using the bootstrapping method with replacement, and K training sets are obtained by K rounds of extraction. The K training sets are independent of each other, and elements can be duplicated. K decision tree models are trained on the K training sets and vote to produce the classification results. The number of decision trees, K, is a key parameter in the RF algorithm: the larger the number of decision trees, the better the classification results, but the longer the time consumption. After multiple attempts, we used 1800 as the number of decision trees, as well as the number of input features. The remaining parameters were set to their default values. One advantage of the RF algorithm is that it can be used to evaluate the importance of each feature. Figure 4 shows the importance and cumulative importance of all features. The importance decreases sharply with feature number and is almost negligible after number 300. The first 300 features play important roles in classification, and their accumulated importance reaches 91.2%. Figure 5 shows the distribution of the first 300 features in a spectrum.
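A minimal sketch of this setup with scikit-learn is shown below; n_estimators=1800 follows the number of trees quoted above, wavelength is an assumed array of vacuum wavelengths aligned with the flux columns of X_train, and n_jobs and random_state are conveniences added here rather than settings from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=1800, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

# Rank features by importance and find how many are needed to accumulate
# ~91.2% of the total importance (about 300 features in the text).
order = np.argsort(rf.feature_importances_)[::-1]
cumulative = np.cumsum(rf.feature_importances_[order])
n_top = int(np.searchsorted(cumulative, 0.912)) + 1
print("features reaching 91.2% cumulative importance:", n_top)

# Vacuum wavelengths of the 50 most important features, to be matched
# against line lists during manual inspection.
top50_wavelengths = wavelength[order[:50]]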
Those features basically fall on the absorption lines of CaII K and H and of transition metal elements, which are considered to be very important in distinguishing Am stars. We identified the spectral lines on which the first 50 feature points are located by consulting the line table of the theoretical model of Moore and other literature on early-type stars (Przybilla et al. 2017; Adelman 1994; Coupry et al. 1986; Smith 1973). The details are listed in Table 5. We only list the main elements contained in the spectral lines, since metal lines in low-resolution spectra are mostly blended. It should be noted that the feature numbers and importances are not absolute. Different RF classifiers will produce different results because the data used by an RF in constructing each decision tree are randomly selected from the training set. Fortunately, the ranking and importance do not change much for most of the important features. It should also be noted that the wavelengths of the features are all in vacuum, because LAMOST spectra are all converted to vacuum wavelengths; the relevant keyword "VACUUM" can be found in the FITS header of the spectra.

[Table 4 note: the first column indicates the identifiers of the stars in the HD catalog; the second lists the names of the FITS files of the LAMOST counterparts; the next two columns show the results of the RF algorithm and the MKCLASS software, respectively; the last column gives the literature identifying the four stars as Am based on element abundances, with article 1 corresponding to Gebran et al. (2008).]

Manual Inspection

Three reasons require manual inspection of the Am candidates obtained with the RF method. First, the intensities of metal lines in Am spectra are very weak and are easily masked by noise, which could lead to errors in the results. Second, although the spectra were rectified, the residual continua could still affect the classification. Third, the precision of the RF algorithm is 0.991, which means that a small fraction of stars might still be wrongly recognized. The specific process of manual inspection is to compare the spectral lines of the candidates with those of their best matching synthetic templates; a set of quantitative criteria is as follows:

EW(K_spe) < EW(K_mod),
EW(M_spe) > EW(M_mod),

where K_spe and K_mod are the CaII K lines of a spectrum and its matching template, respectively; M_spe and M_mod are the metallic lines of a spectrum and its matching template, respectively; and EW(·) is the equivalent width of a spectral line. In this work, we adopted the same EW definition for the CaII K line as Liu et al. (2015): the line is in the window [3927.7 Å-3939.7 Å], and the blue and red sidebands are [3903 Å-3923 Å] and [4000 Å-4020 Å], respectively. For the metallic lines, a conventional EW calculation is not suitable because the Fe absorption in A-type stars is generally weak and too narrow to give the wavelength bands that an EW needs. We therefore used the method proposed by Hou et al. (2015) to calculate their equivalent widths. We selected part of the FeI lines listed in Table 5 by eliminating several FeI lines blended with ionized elements. We merged adjacent Fe lines into 15 Fe-group lines and list the left and right ends of these groups in Table 6. To calculate the EW, the blue and red sidebands are defined as [Left End-5 Å, Left End] and [Right End, Right End+5 Å], respectively. We limited the sidebands to 5 Å to obtain the best local pseudo-continuum for each of the 15 Fe-group lines, avoiding contamination by other lines.
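The sideband EW calculation can be sketched as follows, assuming numpy arrays wave and flux for a rectified spectrum; the CaII K windows follow Liu et al. (2015) as quoted above, while the linear local continuum through the sideband means is an assumption about the implementation detail.

import numpy as np

def equivalent_width(wave, flux, line, blue_band, red_band):
    """EW of a line using a local pseudo-continuum from two sidebands."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()

    wb, fb = band_mean(*blue_band)
    wr, fr = band_mean(*red_band)
    in_line = (wave >= line[0]) & (wave <= line[1])
    w = wave[in_line]
    # Linear continuum through the mean points of the two sidebands.
    cont = fb + (fr - fb) * (w - wb) / (wr - wb)
    return np.trapz(1.0 - flux[in_line] / cont, w)   # EW in Angstroms

# CaII K, following the windows of Liu et al. (2015):
ew_k = equivalent_width(wave, flux, (3927.7, 3939.7), (3903.0, 3923.0), (4000.0, 4020.0))

# For one of the 15 Fe-group lines with ends (left, right):
# ew_fe = equivalent_width(wave, flux, (left, right), (left - 5, left), (right, right + 5))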
Figure 6 shows an example of the local pseudo-continua of the 15 Fe-group lines. Although there are some uncertainties in calculating the equivalent widths in batches, such as noise interference in the Fe lines or the blending of other metal lines into the CaII K line wing, the line ratios of CaII K to Hε or Hδ can be verified as important criteria during manual inspection.

Labelling Ap stars

There is some contamination of the obtained Am candidates by Ap stars because of their similar spectral features. Most Ap stars are actually B-type stars, while a small portion of Ap stars are also found among A- and F-type stars. Among those cooler Ap stars, some also exhibit the characteristic overabundance of Fe elements and underabundance of Ca. Therefore, a small number of Ap stars will be mixed in with the Am stars. According to the definition of an Ap star, we regarded objects in which the Sr, Cr, Eu, or Si elements are extremely abundant as Ap candidates. In general, the abundance of Sr, Cr, Eu, and Si in Am stars rarely exceeds 2.0 relative to the abundance of the Sun (Catanzaro & Ripepi 2014; Smith 1996; Lane & Lester 1987). Therefore, stars with Sr, Cr, Eu, or Si abundances exceeding 2.0 are likely to be Ap stars.

[Figure 3. Comparison between the Am spectra and their best matching templates in Evaluation Set II. The black curves show the Am spectra and the red curves are the best matching templates. The names of the Am stars and the atmospheric parameters are listed next to their corresponding curves in black and red. The blue, red, and green vertical dashed lines indicate the positions of the CaII K line, the Balmer lines, and some Fe lines from Hou et al. (2015), respectively. All spectra and templates are normalized, and the templates have been offset vertically by 0.6 continuum units for clarity. The abnormally strong absorption line at around 4480 Å in Panel (d) was caused by bad CCD pixels present in the raw data, which were not removed by the data reduction pipeline.]

The detailed method of finding Ap candidates is as follows. Firstly, according to the prominent spectral lines in the blue-violet spectra of Ap stars (Gray & Corbally 2009), and excluding some lines blended with nearby FeI lines or other elements, we used the method of Hou et al. (2015) to calculate the EW(tem) and EW(obs) of the 4077 Å blend line for both the templates and the observed Am spectra, and marked those objects as Ap candidates if EW(obs) > EW(tem).

Result

For the 193,345 spectra of the searching data set described in Section 2.1, we simply fitted Kurucz templates in the wavelength range of [3900 Å, 5600 Å] to obtain Teff and log g. With the stellar parameters, we could further constrain the searching data set to 6500 K < Teff ≤ 11000 K and log g ≥ 4.0 dex, since the Am phenomenon often occurs in A- and early F-type main-sequence stars. With this constraint, 98,202 spectra were retained, and the RF classifier was then applied, identifying 15,269 Am candidates, on which we carried out manual inspections. After these inspections, we discarded 4,766 spectra, among which 1,338 candidates (28%) did not meet the reference criteria of Section 3.5, 2,585 spectra (54%) could not be recognized by human eyes because of their small peculiarity, and 843 spectra (18%) were of bad spectral quality, insufficient to identify the Fe lines. In addition, using the method described in Section 3.6, we found 1,131 objects with extreme overabundances of Sr, Cr, or Si elements and labeled them as Ap candidates in the catalog.
Whether or not they have the nature of Ap stars needs to be identified by subsequent analysis of their magnetic field strengths. In total, the catalog has 10,503 entries, including 9,372 Am stars and 1,131 Ap candidates. In the statistical analysis sections below, we excluded the Ap candidates from the Am stars. For each Am star in the catalog, we also determined three different spectral subtypes, for its K line, H lines, and metallic lines, using the template matching method. The band used to match the spectral subtype of the metallic lines is the combined band of [4140 Å, 4300 Å], [4410 Å, 4600 Å], and [4900 Å, 5400 Å]. The matching band for the H lines is a combination of the Hβ, Hγ, and Hδ bands. It is worth noting that the spectral subtypes obtained by matching with templates in specific wavelength ranges alone are not completely accurate, and the spectral subtypes of some Am stars do not conform to the common characteristic of Am stars, i.e. that the K-line spectral subtype is earlier than the metallic-line spectral subtype. This is why we cannot use this criterion directly in the Am search. The complete catalog of identified Am stars can be downloaded from http://paperdata.china-vo.org/Qinli/2018/dr5_Am.csv, and an example of the catalogue is presented in Appendix A.

Effective Temperature Distribution

We analyzed the effective temperature distribution of these Am star samples. Figure 7 shows the distribution of Am stars and the incidence of Am stars in different effective temperature bins. As can be seen from Figure 7, the results are consistent with the conclusion presented by Smalley et al. (2017), namely that the temperatures of Am stars are mostly distributed between 7250 K (F0) and 8250 K (A4), peaking near 7750 K. Due to our strict screening, the fraction of Am stars among the total A- and early F-type stars is smaller than the values reported in previous studies (Smith 1971; Abt 1981; Gray et al. 2016). The incidence of Am stars we give can be used as a lower limit.

Space Distribution

The spatial distribution of Am stars in the Galactic coordinate plane is plotted in Figure 8. The blue points indicate all A- and early F-type stars, and the red points are the Am stars. As shown in Figure 8, the number of Am stars on the Galactic disk is significantly higher than that in other regions, because most of the observations are concentrated on the Galactic disk. In order to further understand the spatial distribution of Am stars, we analyze the frequency of occurrence of Am stars as a function of the vertical distance from the Galactic plane (Z). We obtained the parallax (ω) of most spectra by cross-matching with Gaia DR2. For spectra with parallax > 0, we then cross-matched with the catalog of Bailer-Jones et al. (2018) and obtained their estimated distances. Eventually, we obtained the distances of 92,870 early-type stars and 8,951 Am stars. The vertical distance Z for each star can be calculated with the following formula:

Z = r sin(b),

where b is the Galactic latitude and r is the estimated distance. In Figure 9, the blue and green histograms show the distributions of early-type stars and Am stars along the vertical distance |Z|, respectively. In each bin, we calculated the incidence of Am stars and represented it with red points. Figure 9 suggests that the incidence of Am stars increases as |Z| decreases.

[Figure 9. Distribution and incidence of Am stars as a function of the vertical distance from the Galactic plane Z. The red points to the left of each bin are the incidence of Am stars; the incidence was not computed for bins containing fewer than 10 early-type stars, as it would have no statistical meaning.]
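A short sketch of this calculation is given below; r_est (the Bailer-Jones et al. 2018 estimated distance in parsecs), b (Galactic latitude in degrees), and the arrays z_am and z_early are assumed inputs, and the bin range is illustrative rather than the paper's.

import numpy as np

def vertical_distance(r_pc, b_deg):
    """Z = r * sin(b): height above the Galactic plane in parsecs."""
    return r_pc * np.sin(np.radians(b_deg))

# z_am and z_early are hypothetical arrays of Z for the Am and early-type samples.
bins = np.linspace(0.0, 1000.0, 21)
n_am, _ = np.histogram(np.abs(z_am), bins=bins)
n_early, _ = np.histogram(np.abs(z_early), bins=bins)
# Skip bins with fewer than 10 early-type stars, as in the text.
incidence = np.where(n_early >= 10, n_am / np.maximum(n_early, 1), np.nan)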
Infrared Photometry

We also performed an infrared photometric analysis of these Am stars. First, we cross-matched them with the 2MASS and WISE catalogs with a matching radius of 3.0 arcsec, and obtained the corresponding magnitudes in the J, H, K, W1, W2, W3, and W4 bands. Because the WISE satellite has low angular resolutions of 6.1, 6.4, 6.5, and 12.0 arcsec in the W1, W2, W3, and W4 bands, we often found that one 2MASS counterpart was a different object from the WISE counterpart. To avoid this, multiple WISE sources within a search radius of 10.0 arcsec were eliminated. To improve the accuracy of the result, the photometric errors were limited to less than 0.1 mag in all three 2MASS bands (Skrutskie et al. 2006) and 0.05 mag in the W1 and W2 bands (Wright et al. 2010). Then, the color excess in a color (a−b) was estimated for all data using the formula E(a−b) = R(a−b) × E(B−V), where the E(B−V) values are given in Schlafly & Finkbeiner (2011) and the R(a−b) values are the reddening coefficients of the color (a−b) from Yuan et al. (2013). We calculated the color excess and dust reddening of 7,799 Am stars for the (J−H), (H−K), and (W1−W2) colors. The results are shown in Table 7. One can see a very clear downward trend in the incidence of infrared excess from the near-infrared to the mid-infrared, and the incidence drops to 0.15% in the W1−W2 region. This contradicts the conclusion about Am stars from Chen et al. (2017). They found that over half of Am stars have a clear infrared excess ((W1 − W2) > 0.1) in the W1−W2 region and have no or little infrared excess in the remaining regions, including the J, H, K, and IRAS bands. We checked the data set of Chen et al. (2017) and found that they did not restrict the photometric precision to W1 error < 0.05 mag and W2 error < 0.05 mag. When we add this constraint, there are only 3 sources in Chen's Am dataset with infrared excess. Thus, we statistically conclude that Am stars have no infrared excess in the W1−W2 region.

Metallicity Distribution

Generally, [Fe/H] is used to represent the metallicity of a star. However, compared to normal stars, the atmosphere of an Am star is Fe enriched and Ca deficient, so the metallicity of an Am star obtained with conventional methods may be larger than the true value. Taking into account that Am stars comprise a significant fraction of early-type stars, researchers should be careful about the metallicity of Am stars given by spectral-survey pipelines, especially in statistical studies. In order to quantify the degree of metallicity overestimation in Am stars, we analyzed the metallicity distribution of Am stars with LAMOST atmospheric parameters. The metallicity given by the LAMOST pipeline biases Am stars as a whole toward metal enrichment relative to normal early-type stars. The detailed metallicity distribution ([Fe/H]) is shown in Figure 10. The blue histogram shows the distribution of [Fe/H] for all A- and early F-type stars with LAMOST atmospheric parameters; the yellow histogram corresponds to the Am stars. One can note that the right region of Figure 10 is dominated by Am stars. The implied conclusion that most metal-rich stars are Am stars is obviously unreasonable.
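As an illustration of the dereddening relation used in the infrared analysis above, a short sketch follows; the reddening coefficients are placeholders rather than the actual Yuan et al. (2013) values, and the 0.1 mag threshold is the one quoted from Chen et al. (2017).

```python
# Intrinsic color via E(a-b) = R(a-b) * E(B-V); the coefficients below are
# ILLUSTRATIVE placeholders, not the Yuan et al. (2013) values used in the paper.
R_COEFF = {"J-H": 0.3, "H-K": 0.2, "W1-W2": 0.05}  # assumed values

def deredden(color_obs, color_name, ebv):
    """Return the extinction-corrected color (a-b)_0."""
    return color_obs - R_COEFF[color_name] * ebv

def has_w1w2_excess(w1w2_obs, ebv, threshold=0.1):
    """Infrared-excess test in W1-W2, using the 0.1 mag threshold of
    Chen et al. (2017)."""
    return deredden(w1w2_obs, "W1-W2", ebv) > threshold
```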
SUMMARY

Eight classification methods (GP, KNN, RF, SVC, DT, GNB, ESC, and MKCLASS) were compared in this study. The RF algorithm was chosen to search for Am stars among early-type stars in LAMOST DR5, and 15,269 Am star candidates were obtained. We analyzed the top 50 classification features given by the RF classifier, whose total importance reaches 57.57%, and identified the spectral lines that the RF classification depends on. These lines are mostly iron lines and were used to identify Am stars in the manual inspection step. In addition, we compared the differences between Am and Ap stars and labeled Ap candidates in the final catalog. Finally, we found 9,372 Am stars and 1,131 Ap candidates, and provide an Am star catalog.

We performed statistical analyses of the temperature distribution, spatial distribution, and infrared photometry of these Am stars. The distribution of effective temperature shows that Am stars are mainly concentrated between F0 and A4, with a peak near A7, which is consistent with previous works. The spatial distribution suggests that the frequency of occurrence of Am stars is inversely related to the vertical distance from the Galactic plane (|Z|). We also conducted an infrared photometric study of Am stars and noticed that the incidence of infrared excess in Am stars gradually decreases from the near-infrared to the mid-infrared range.

We would like to thank the referee for his/her valuable comments. This work is supported by the Na-

Appendix A

The first column shows the name of the FITS file of each spectrum. The next two columns are the right ascension and declination (J2000) in degrees. Teff is the effective temperature obtained by matching with the Kurucz grid in the [3900Å, 5600Å] wavelength range. Fe_EW and K_EW are the equivalent widths of the Fe-group lines and the CaII K line in the observed spectrum, respectively. Fe_EW_m and K_EW_m are the equivalent widths of the Fe-group lines and the CaII K line in the corresponding Kurucz template, respectively. The smaller the value of K_EW − K_EW_m and the larger the value of Fe_EW − Fe_EW_m, the more pronounced the Am phenomenon. K_type, H_type, and m_type are the spectral subtypes of the CaII K line, the Balmer lines, and the metallic lines, respectively. The column Z gives the vertical distance from the Galactic plane. Ap_flag is a flag column; Ap_flag = 1 indicates that the star is an Ap candidate. The r_est, r_lo, and r_hi values come from the catalog of Bailer-Jones et al. (2018) and are the estimated distance and its lower and upper limits, respectively. The parallax and parallax error are taken from the Gaia DR2 catalog. The next six columns list the magnitudes and errors of the J, H, and K bands from the 2MASS catalog. The next four columns are the magnitudes and errors of the W1 and W2 bands from the WISE catalog. The E(B−V) values are taken from Schlafly & Finkbeiner (2011). The next three columns give the (J−H), (H−K), and (W1−W2) colors corrected for dust extinction. The column FeH_lamost is the metal abundance provided by the LAMOST pipeline. The last two columns are the equivalent widths of the 4077Å line in the observed spectra and the templates. The greater the value of 4077_EW − 4077_EW_m, the greater the probability that the star is an Ap star.
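For readers who want to work with the published catalog, a hypothetical pandas sketch follows; the column names are guesses based on the appendix description and must be checked against the actual file header.

```python
import pandas as pd

# Column names are assumptions inferred from the appendix description above.
cat = pd.read_csv("dr5_Am.csv")  # the file linked in the text

ap_candidates = cat[cat["Ap_flag"] == 1]   # 1,131 entries, per the text
am_stars = cat[cat["Ap_flag"] != 1]        # 9,372 entries, per the text

# A crude indicator of the Am phenomenon, following the description:
# small K_EW - K_EW_m together with large Fe_EW - Fe_EW_m.
strength = ((am_stars["Fe_EW"] - am_stars["Fe_EW_m"])
            - (am_stars["K_EW"] - am_stars["K_EW_m"]))
print(am_stars.assign(am_strength=strength)
              .sort_values("am_strength", ascending=False).head())
```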
2019-04-05T19:06:52.000Z
2019-04-05T00:00:00.000
{ "year": 2019, "sha1": "54d56fe5503617af921a76569014ff62026a4f5b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1904.03242", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e950f43a258324b1deb4d0cd319ef7cc61fab27c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244097874
pes2o/s2orc
v3-fos-license
Educational Significance of Nanoscience–Nanotechnology: Primary School Teachers' and Students' Voices after a Training Program: Most of the modern technological applications we use in our daily life originate from the progress of Nanoscience–Nanotechnology (NST). The projection showing the great impact that these advances are going to have on society exhorts science education researchers to incorporate the modern field into educational contexts. Among the several issues that have to be dealt with, NST's educational significance comes to the fore. This pilot study aims to examine whether both nano-trained primary school teachers and nano-trained students acknowledge the need for the inclusion of NST content in school contexts. Fourteen primary school teachers and ten students, after their participation in an NST training course, were interviewed in order to provide their justifications. The results show that the vast majority of the participants acknowledge the educational significance of NST. The main arguments were associated with career possibilities, the relevance of the content to everyday life, and the need for nanoliteracy attainment. The results of this study can be used by NST education researchers in order to formulate an NST content structure for primary school nanoeducation.

Introduction

Modern society is characterized by astonishing technological advancements that have improved the quality of life. Smarter electronics, more efficient drug delivery systems, and more advanced materials than were used in the past have all opened up new possibilities for citizens in terms of communication, medicine, textiles, sports, etc. Many of these advances have been made as a result of the progress achieved in research on the small scale of "nano". Nanoscience and Nanotechnology (hereinafter NST) are contemporary state-of-the-art fields that focus on the discovery of new materials at the nanoscale that exhibit novel properties. For example, carbon nanotubes, an extraordinary material whose dimensions fall into the nano regime, are incorporated into tennis rackets' frames to make them lighter and, simultaneously, to boost their strength [1,2]. The impact of such advances and upcoming revolutionary developments on everyday life, both present and future, challenges science education researchers to incorporate this cutting-edge field into school contexts [3][4][5][6]. However, several challenges and issues emerge [7]. One of the main issues that has to be examined is whether teachers, and even students, acknowledge the educational significance of the inclusion of NST in school curricula. On the one hand, research findings indicate that teachers' perspectives should be taken into account in any attempt at curriculum innovation. Undoubtedly, teachers' ideas about the content, the teaching process, and the learning of science can hinder or promote the implementation of any reform [8,9]. For example, in the case of the innovation concerning the inclusion of NST in school curricula, teachers may believe that the underlying content is too complex for their students to conceptualize, and as a result, they may be reluctant to include NST in their classrooms. On the other hand, we consider that students' viewpoint about the educational significance of NST should also be examined. If students regard NST as a significant subject that is relevant to their daily lives, then notable gains in their learning are more likely.
This study focuses on the above-mentioned issue. In particular, we aim to convey both students' and teachers' voices regarding the educational significance of the incorporation of NST into school curricula. Specifically, the objective of the study is to bring to the surface primary teachers' and students' views on whether they think the inclusion of NST in classrooms is necessary or not. Until now, we have not found any studies that examine whether teachers or students actually promote the inclusion of NST in school contexts. This study aims to shine a light on this gray area of research. We stress that we are interested in examining the views of both nano-informed teachers and students, since analyzing the educational possibilities of a new field "requires informants with an understanding of the field in question" [10] (p. 127). Consequently, the research question that drives this study is: what justifications do nano-trained primary school teachers and students provide concerning the educational significance of NST?

The Model of Educational Reconstruction

The Model of Educational Reconstruction (hereinafter MER) was conceived by German researchers as a theoretical framework for studies that explore the possibility of teaching salient concepts and principles of science. It is structured around three interdependent components. One of them, namely the clarification and analysis of the scientific content, includes the analysis of the educational significance of the content, i.e., whether it is worth teaching a particular area of science. MER has been implemented in analyzing the educational perspectives of modern scientific topics, such as non-linear physics and NST [10][11][12]. Through the clarification and analysis of the scientific content, on the one hand, the content is critically analyzed through leading scientific books and key publications of the field, and, on the other hand, its educational significance is highlighted. In the case of NST, the critical analysis of the content has already been documented [13]. In brief, through the close examination of a large number of published papers and books regarding the content of NST, the salient concepts of the field were identified. The content of these concepts was analyzed through the lens of introducing it to primary education, i.e., whether the content is appropriate for the developmental stage of the students. For example, the surface-area-to-volume ratio, which explains the surface-dominated properties of nanoscale materials, is a concept that primary school students may find difficult to conceptualize because the required developmental level has not yet been reached. The educational significance is the subject area of this research paper. Among other methods, the analysis of educational significance may involve empirical studies, in which questionnaires or interviews are used to examine the views of experts [14]. For example, Komorek et al. [15] performed an empirical study in order to examine the educational significance of non-linear physics. Likewise, this study aims to analyze the educational significance of NST by examining the corresponding viewpoint of a "group of experts", i.e., nano-trained teachers and students.

Educational Significance of NST: A Review of the Literature

Through the relevant literature, the first issue arising is whether it is necessary to devote time and effort to educating students or teachers in NST concepts and phenomena.
In the following paragraphs, we provide justifications presented by NST scientists and engineers, educational policy makers, and government agencies. Our aim is to scrutinize these in an effort to determine their validity for primary school education. One of the most common arguments is related to concerns about the lack of specialized staff in "nano-related" fields. Since the onset of NST, education has been considered the main way to bridge the gap between workforce needs and the cutting-edge field [16,17]. In fact, connecting NST with workforce needs became a task for compulsory education as well. Notably, NST educators argued that since primary school students constitute the future workforce in NST, their early contact with the emerging field is essential [18]. However, we consider that the teaching of NST in primary school education is not strictly oriented towards directing students to pursue a relevant professional career. Instead, it is closely linked to one of the goals of science education, which is to familiarize students with new research and technology opportunities and help them enhance their ambitions. This is a perspective that students seem to find really fascinating about science [18,19]. Furthermore, there are arguments that focus on the importance of students achieving scientific and/or technological literacy, two concepts that have a substantial impact on curriculum design worldwide [20]. The basic idea is that all citizens will need some kind of "nanoliteracy" to formulate an opinion in an informed and responsible way on issues that stem from NST's advancements and affect their everyday life [20,21]. For instance, during the COVID-19 pandemic citizens had to make decisions about both their personal and public health [22]. A typical example was the modern mRNA vaccines, which were based on nanoparticles and were developed due to the advancements of NST [23]. People had to decide whether they would like to be vaccinated or not. It was evident in the public dialog that misconceptions emerged and spread rapidly on social media. For example, some people stated that these vaccines could lead to the alteration of the recipient's genome via the injected RNA [24]. Despite the fact that scientific/technological literacy is a frequently stated goal of science education (e.g., acquiring literacy in energy or environmental issues), in the case of NST it requires special attention, due to previous examples of modern research that were greeted with skepticism by the population because of low literacy (e.g., genetically modified foods) [25,26]. In the case of NST, the aim is to prevent such public attitudes by creating a population literate in nanoscale issues, an effort that starts at an early age [9,27]. With the introduction of NST, primary school students have the opportunity to develop reasoning skills for balancing the risks and benefits of the products they utilize in their everyday lives [28]. Additional arguments place emphasis on the pedagogical benefits, which are related to students' interest in science. In particular, there is evidence that as students progress through the educational levels, their interest in science decreases. This finding has been attributed either to the fact that school science seems rather disconnected from students' everyday lives, or to the fact that the introduction of modern achievements in science and technology into the curriculum is underestimated [19,27].
On the other hand, NST includes a plethora of achievements and studies a wide range of phenomena that students witness in their everyday lives. Typical examples are sports equipment (e.g., rackets), sunscreens containing nanoparticles, and superhydrophobic and self-cleaning fabrics. The introduction of these products into the classroom, in combination with the underlying science principles, can guide students to an interpretation of the modern technological world and also increase their interest in science [29]. Educators support the view that when teaching draws on applications of NST from everyday life, all students are engaged, even those who do not consider themselves to be high-achieving in science [30]. In addition, the inherent interdisciplinary nature of NST is often seen as an opportunity for students to experience the collaboration of different fields of science and engineering in conceiving the abstract nanoscale world [31,32]. Nanoscale research requires a high degree of collaboration between scientists and engineers in order to tackle complex phenomena [21]. In the context of science education, bridging and integrating scientific fields is considered to be one of the goals of curriculum reforms [32]. This effort is related to facilitating students' ability to understand the relationships between concepts belonging to different fields when they are introduced to real-world problems [21,33]. The integration of NST into education promotes an interdisciplinary approach that can "help students build meaningful insights into the great ideas of science" [31] (p. 12).

Nano-trained science teachers have also expressed arguments concerning the educational significance of NST. For example, in Laherto's [10] study, the majority of the science teachers who participated in an NST training course emphasized that the modern content should be taught in schools. The main justification was associated with the NST products and applications that have entered modern markets, their societal impact, and their future prospects. In addition, some of the participants justified their opinion by correlating NST with the possibilities opening up to students for further studies and working life. Furthermore, teachers seemed to acknowledge that NST could be a means of promoting students' interest in science and technology in general. Finally, the fact that teachers preferred the incorporation of NST into the existing traditional subject areas of science indicates that the participants identified the relevance of NST to the science curriculum. Nevertheless, two teachers greeted the inclusion of NST with skepticism, since, as they argued, the curriculum is overloaded and there is no room for another scientific domain.

In the following figure (Figure 1), the above-mentioned types of justifications provided by NST educators, policy makers, industry leaders, and science teachers are depicted in three circles. The "career opportunity" circle refers mainly to the need for a skilled workforce in nano-related fields. "Nanoliteracy" is associated with the need for students to be able to develop awareness of the modern NST advances they utilize and of the everyday phenomena that originate from nanoscale structures [34].
The "relevance with the curriculum" circle correlates with the possibility of formulating an interdisciplinary science curriculum, which will bridge the gap between the discrete facts coming from different science fields and contextualize learning through the numerous NST advancements. belonging to different fields when they are introduced to real-world problems [21,33]. The integration of NST into education promotes an interdisciplinary approach that can "help students build meaningful insights into the great ideas of science" [31] (p. 12). Nano-trained science teachers also have expressed some arguments concerning the educational significance of NST. For example, in Laherto's [10] study, the majority of the science teachers that participated in a NST training course, emphasized that the modern content should be taught in schools. The main justification was associated with the NST products and applications that have invaded the modern markets, their societal impact, and the future prospects. In addition, some of the participants justified their opinion by correlating NST with the possibilities opening up to students for further studies and working life. Furthermore, teachers seemed to acknowledge that NST could be the means for promoting students' interest in science and technology in general. Finally, the fact that teachers preferred the incorporation of NST into existing traditional subject areas of science indicates that the participants identified the relevance of NST with the science curriculum. Nevertheless, there were two teachers that greeted the inclusion of NST with skepticism, since, as they argued, the curriculum is overloaded and there is no room for other scientific domain. In the following figure (Figure 1), the above-mentioned types of justifications provided by NST educators, policy makers, industry leaders, and science teachers are depicted in three circles. The "career opportunity" circle refers mainly to the need for a skilled workforce in nano-related fields. "Nanoliteracy" is associated with the need for students to be able to develop awareness about the modern NST advances they utilize and the everyday phenomena that originate from the nanoscale structures [34]. The "relevance with the curriculum" circle correlates with the possibility of formulating an interdisciplinary science curriculum, which will bridge the gap between the discrete facts coming from different science fields and contextualize learning through the numerous NST advancements. At the time of writing this study, from the literature review, it can be concluded that we cannot find many studies that examine teachers' or students' views about the educational significance of NST. Much of the efforts refer to measuring teachers' or students' knowledge gains before and after the training courses [35,36], the difficulties that teachers face when they intend to teach NST to their students [27], finding the NST topics that students find interesting [37], and the identification of the insertion points of the essential NST concepts into the science curriculum [38] etc. At the time of writing this study, from the literature review, it can be concluded that we cannot find many studies that examine teachers' or students' views about the educational significance of NST. 
The Context of the Study

Fourteen primary school teachers (PTs) (1 kindergarten teacher) participated in an NST training program consisting of three phases that lasted approximately 9 months in total (Figure 2). During the first 2 months, PTs were trained on salient NST concepts, phenomena, and applications, such as size and scale, the lotus and gecko effects, water filtration systems, and models of nanoscale structures (e.g., models of nanoporous membranes) (approximately 27 h) (Table 1) [13,39]. Within this phase of the program, PTs were familiarized with the teaching methods implemented to enable them to develop awareness of the modern field, such as inquiry-based teaching and learning, the Jigsaw method [40], and out-of-school activities. Subsequently, the participants, with guidance and in collaboration with the researchers, planned and taught NST content to their fifth- and sixth-grade students (Table 1). In the final phase of the program, the participating PTs shared their reflections about the educational perspectives of NST in primary schools (e.g., the difficulties they faced during the implementation).
Table 1. NST content and intended learning outcomes of the training program.

Size and scale/observation tools. Teachers should be able to: (a) define the nanoscale by its size range, the landmark objects it includes, and the tools that render those objects visible; (b) acknowledge that electron microscopes can be used for viewing nanoscale objects. Students should be able to: (a) classify objects of various sizes into the macro-, micro-, and nanoworld based on a qualitative criterion, i.e., the observation tool that renders each world accessible (naked eye, optical microscope, electron microscope); (b) order macro-, micro-, and nanoworld objects based on a qualitative criterion: "which object is part of the other or fits into the other?"

Lotus effect. Teachers should be able to understand the superhydrophobic and self-cleaning property of the lotus leaf and the importance of the surface contact area. Students should be able to: (a) explain the lotus effect using the concept of the leaf's nanostructure and the air trapped in the interstitial spaces between the nanostructures; (b) recognize commercial products that mimic the lotus effect.

Gecko effect. Teachers should be able to understand the strong adhesion property of the gecko lizard and the importance of the surface contact area. (No corresponding student outcome.)

Water nano-filters. Teachers should be able to realize the size-exclusion effect used in water purification systems. Students should be able to explain the filtration mechanism, relating the size of the nanostructure to the size of the objects it excludes.

Models. Teachers should be able to: (a) understand that models represent properties of macroscale, microscale, and nanoscale objects; (b) realize that models can be used to obtain information about inaccessible targets. Students should be able to: (a) create models in order to explain phenomena, e.g., the lotus effect; (b) recognize epistemological aspects of models, i.e., the nature and role of models (e.g., models are representations; models focus on specific aspects of the objects).

Participants

Among the fourteen primary school teachers that engaged in the NST training program, ten taught the NST content to fifth-grade students and the rest to sixth-grade students. The participants of this study represent the total number of teachers (n = 14, 10 female) and of sixth-grade students that attended all of the lessons of the NST content (n = 10 students, 7 girls). The primary school teachers volunteered to participate in a lifelong learning program entitled "Educational Innovations in Science, Environment and Technology". Among the requirements, the participants had to be in-service teachers and, at that time of the year, to be teaching in primary school grades. Their teaching experience ranged from 5 to 20 years, with a mean of 15 years. We note that we examine not only teachers' views but also their students' views. Following this orientation, the arguments of the teachers are considered valid if they are similar to those of the students. To illustrate: if, on the one hand, teachers support the view that NST topics may be interesting for students while, on the other hand, students claim that NST topics do not belong to their field of interest, it is obvious that this particular argument provided by the teachers is not valid.

Data Collection

The research tool was a semi-structured interview protocol applied to the teachers and to case studies of students one week after their participation in the training course and the Teaching Learning Sequence, respectively. The duration of the interview was approximately 30 min.
The purpose of the interview was twofold: on the one hand, to probe the level of both teachers' and students' understanding of NST concepts, and, on the other hand, to bring to the surface the participants' views about the educational significance of NST content in primary schools. This paper focuses on the latter. Typical examples of questions the teachers were asked include: "do you think that nanotechnology content should be included in primary school?" and "do you think that a nanotechnology course is valuable?", while the corresponding questions the students were asked include: "how would you find the idea of other students participating in the nanotechnology course?" and "how did you find the nanotechnology course?" Similar questions have been addressed in previous studies exploring the views of nano-trained teachers on educational significance (e.g., [10], secondary science teachers).

Data Coding

The data coding was qualitative and relied on the inductive category process [41]. Firstly, we transcribed the interviews in full. Then, we identified units of meaning (UM), namely, words or phrases that were meaningful for the educational significance of NST. After the first-round coding of the UM (9 initial codes), we increased the level of abstraction by identifying common themes among them, and we developed three categories. In Table 2, we present the categories and the criteria for classifying a UM into the corresponding category. The criteria were based on the initial codes that we created. For the reliability of the coding process, two independent researchers with extensive experience in science education coded all of the UM. We used Cohen's kappa value to estimate the agreement between the two researchers. Taking into account that a Cohen's kappa value above 0.80 indicates almost perfect inter-rater agreement, we concluded that the agreement between the two researchers was high (Cohen's kappa = 0.89) [41,42]. The minor differences that occurred were handled through discussion until the researchers reached a consensus.

Table 2 (excerpt). Criteria for the category "relevance of NST content to the school science curriculum": mentioning concepts of the curriculum that could be related to NST; in primary school the students could be familiarized with NST concepts, while in secondary school they could expand their knowledge; curriculum constraints.
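For completeness, a minimal sketch of the Cohen's kappa computation for two raters is given below; the labels are invented toy data, not the study's actual codings.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one category per item."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented toy labels using the study's three categories:
a = ["everyday", "career", "career", "curriculum", "everyday", "everyday"]
b = ["everyday", "career", "everyday", "curriculum", "everyday", "everyday"]
print(round(cohens_kappa(a, b), 2))  # agreement statistic for the toy data
```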
" Similarly, a student pointed out the relevance of nanotechnology to the resolution of real-life problems: "I liked that we learned about the nanofilter. If there were no nonscientists, the children from Africa would have not clean water. I don't say that now they all have clean water but efforts are being made." Another student gave prominence to the explanations of processes that are based on nanoscale agents, such as viral infections: "I find the knowledge of what is happening in the nanoworld very useful. (For example) how we catch a virus in order to protect ourselves". Future Career The category "future career" comprised three UMs for both teachers and students. The participants referred to opportunities for new jobs related to the NST field, or to further Educ. Sci. 2021, 11, 724 8 of 12 university studies in the same field. For example, a teacher highlighted opportunities for NST-related jobs: "For example, I saw that there are 300,000 nanotechnology companies. Since I can realize what these companies can do, I think that a lot of other companies will shift their interest towards NST. In other words, it is not bad for Greeks to start involving in this kind of work." Another teacher referred to university studies in the NST field: "This is the future of our students. In recent years, computer science has attracted students' interest for studying at the university because we thought that it was a good prospect. We started teaching computer science both in and out of schools from early stages. I think that nanotechnology is something similar and we have to start teaching it in schools because it is an excellent prospect." Similarly, a student explained that it is important that nanotechnology content is included in schools, in order for students to be informed about the new field for future university studies: "Maybe I would choose to study nanotechnology [at the university] if I had some basic knowledge about nanotechnology". Another student referred to a future career as a nanoscientist: "Because it can be something that may gain my interest at this age and will make me continue when I grow up. Maybe I want to become a scientist in the field of nanotechnology." Relevance of NST Content to the School Science Curriculum Not surprisingly, the category "relevance of NST content to the primary school science curriculum" was found only in the teachers' UM. Teachers think that in primary schools, students should be familiarized with NST concepts and construct some basic NST knowledge in order to expand it by understanding more sophisticated concepts in higher grades: "If we introduced a unit about nanotechnology in 6th grade, students could be aware that nanotechnology already exists and be familiarized with some concepts. Then in secondary schools it would be easier to expand their knowledge." Moreover, another teacher found a point of insertion for NST content to the primary school science curriculum. Specifically, it was mentioned that an optical microscope, that is included in the curriculum, could be a pathway to other NST concepts: "First of all, the primary school science book includes some pictures that depict an optical microscope. When my colleagues see this picture, they turn the page considering that it is just a picture, underestimating it. 
After the training program in nanotechnology, I realized that that picture was very important in this part of the book; the optical microscope should be taught in the classroom, and other NST concepts, such as the electron microscope, could be incorporated at this specific point." Only one teacher expressed her disagreement with introducing NST concepts into the primary science curriculum, justifying her opinion on the basis of curriculum constraints: "I admit that I don't teach physics in school. [...] All we learned was interesting. However, I would prefer the training program to focus on the concepts that we already teach in primary schools, because I could apply that knowledge in the classroom immediately . . . I consider that in the way that our curriculum is structured today, as well as due to the existing deficiencies, nanotechnology is not so necessary in primary schools." To summarize, the findings indicate that both groups acknowledged the educational significance of NST, taking into account, on the one hand, the innovative content, which, as the participants argued, has close connections with everyday life, and, on the other hand, the career possibilities that open up due to the rapid development of the field. In addition, primary school teachers justified their view by mentioning the relevance of NST content to the current primary school science curriculum.

Discussion

The purpose of the study was to highlight nano-informed primary teachers' and students' views about the educational significance of the modern field of NST. This aspect is considered a first step towards any educational innovation and is often examined by educators and policy makers. This study differs from previous ones in that it examines the educational significance from the perspective of nanoliterate primary teachers and students, as "the teachers who are implementing the curriculum innovation must be heard in the first place" [10] (p. 136). To maintain objectivity in the research, we stress that the educational significance of NST was not discussed during the training programs with the teachers or the students. The opinions we present belong solely to the participants, since the training courses focused entirely on developing their NST content knowledge and not on introducing other aspects such as the educational significance of NST. Answering the research question, we note that both nano-trained groups (i.e., students and teachers) acknowledged the significance of introducing NST content in school contexts, providing several justifications. The main one was the close relevance of the NST content to everyday life. The participants agreed that there are phenomena, such as the strong adhesion of a gecko lizard thanks to which it defies gravity, and applications, such as water filters, which are closely connected to real life. We note that, during the reflection phase of the NST training course (Figure 2), some teachers reported their students' curiosity, enthusiasm, and engagement when conducting experiments in order to observe the superhydrophobicity that some plants and artificial fabrics exhibit. The fact that teachers and students acknowledged the close relevance of NST to their modern everyday life (innovative content relevant to everyday life) reflects the justification of scholars for the educational significance of NST that is associated with nanoliteracy (Figure 1).
Indeed, all of the members involved seemed to reach a consensus that one significant aspect of the educational significance of NST lies in the need of citizens to develop their nanoliteracy in order to handle issues that relate to NST and occur in their everyday lives (e.g., nano-based vaccines and nano-based fabrics). In addition, both PTs and students seemed to recognize the opening of career opportunities with the advent of NST. The science curriculum of our country introduces scientific concepts to students in order to describe phenomena (e.g., the concepts of force and energy); however, these are not correlated with future careers or studies. Our findings indicate that PTs see the inclusion of NST in school contexts as an opportunity to discuss career issues with their students, which provides several pedagogical benefits, such as "inform students of the possibilities and door openers and thus developing the range of their aspirations" [19] (p. 72). In this direction, some students, in their interviews, did not close the door on becoming nanoscientists. Concerning the relevance of NST to the existing curriculum, this particular argument is expressed by educators in order to persuade teachers to include NST in their lessons. Some teachers tend to believe that there is no room for any new subject in the science curriculum, and as a result, they reject the educational significance of NST [29]. However, the nano-trained PTs of our study consider NST applications a context in which abstract scientific concepts can be introduced to their students. For example, the strong, reversible adhesion of the gecko lizard constitutes a good example that teachers can use to teach the concept of electrical forces. Furthermore, the results show that neither teachers nor students thought of NST as a means for designing an interdisciplinary curriculum. This finding is likely based on the fact that the training program did not include any activities regarding the interdisciplinary feature of NST. On the other hand, scholars see NST as an appropriate subject for promoting interdisciplinarity. It seems that NST educational designers should provide learners with opportunities to experience the interdisciplinary feature of NST, in order to enrich their views about its educational significance. In particular, concerning the teacher who stated that NST content should not be introduced in primary schools, we think that the disagreement can be explained as follows: during the training program this teacher expressed several times her lack of experience in teaching science in primary school. We consider this teacher a case study of the category of teachers who do not teach science and, as a consequence, do not feel confident approaching innovative science content, such as nanotechnology. In the literature, it is argued that, for primary school teachers, the acceptance of an innovation and the willingness to bring it into the classroom are sometimes hindered by low self-confidence, related to a lack of knowledge of the new content [43]. Since teachers play a crucial role in any educational innovation, and teachers' and students' viewpoints may contribute to the development of a curriculum and instructional materials, we consider that the findings should be taken into account by training program developers and educational policy makers.
The findings concerning educational significance indicate that an appropriate NST content for primary schools should comprise NST applications that are meaningful to students' lives and that could be correlated with concepts included in the curriculum. In addition, the proposed content could include activities that familiarize students with nano-related professions. To conclude, this pilot study raises the question of whether nano-trained teachers and nano-trained students view NST as an important content area that should or should not be included in the school curriculum. It constitutes the first attempt to map primary nano-trained teachers' and students' voices regarding the educational significance of NST. All in all, the participants agree on the need to incorporate NST content into the school science curriculum. Their justifications seem to be in line with those expressed by science education researchers, industry leaders, and education policy makers. The findings of the current study are limited by the small number of participants (students and teachers); further research with a larger number of participants is needed to verify them. In addition, we should note that the findings originate solely from participants' interviews that took place only after the implementation of the NST content with the students. We did not examine teachers' justifications expressed during all of the phases of the training course (Figure 2). The above limitations point to future studies concerning the educational significance of NST.
2021-11-14T16:24:06.267Z
2021-11-11T00:00:00.000
{ "year": 2021, "sha1": "3a0026ca341df45bab0137257bbaec286040965d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7102/11/11/724/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a650fbc5f7e22cd2117ed002eed58718c15f60aa", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
55921395
pes2o/s2orc
v3-fos-license
First Electrochemical Method of Nitrothal-Isopropyl Determination in Water Samples

The aim of the research was the use of square wave adsorptive stripping voltammetry (SWAdSV) in conjunction with a hanging mercury drop electrode (HMDE) for the determination of nitrothal-isopropyl. It was found that the optimal SW technique parameters were: frequency, 200 Hz; amplitude, 50 mV; and step potential, 5 mV. The accumulation time and potential were studied to select the optimal conditions in adsorptive stripping voltammetry: 45 s and 0.0 V, respectively. The calibration curve (SWSV) was linear in the nitrothal-isopropyl concentration range from 2.0 × 10⁻⁷ to 2.0 × 10⁻⁶ mol L⁻¹, with a detection limit of 3.46 × 10⁻⁸ mol L⁻¹. The repeatability of the method was determined at a nitrothal-isopropyl concentration of 6.0 × 10⁻⁷ mol L⁻¹ and expressed as RSD = 5.5% (n = 6). The proposed method was successfully validated by studying the recovery of nitrothal-isopropyl in spiked environmental samples.

Introduction

Agricultural practice makes intensive use of pesticides, herbicides, fungicides, and other classes of chemical products to achieve maximal productivity. This has resulted in serious impacts on the natural environment, causing increased levels of pollutant residues in water, soil, river sediments, and foodstuffs [1,2]. The elaboration of modern, easy-to-operate, rapid, sensitive, and inexpensive methods for the detection of hazardous residues in the environment is nowadays a main task in analytical chemistry. Many published articles report the determination of pollutants in natural samples, mainly using separation and spectrometric methods [3].

Fungicides are biocides that are usually applied to protect fruits and vegetables against fungi. The accumulation of fungicide residues can cause serious health problems, also through human exposure to remnants present in food. Among the fungicides used, nitrothal-isopropyl (diisopropyl 5-nitroisophthalate, NT, Figure 1) is a selective fungicide applied to control many diseases caused by microorganisms in the breeding of agricultural crops. It has been applied for the control of Podosphaera leucotricha in apple trees, Phytophthora infestans in tomato and potato, and Septoria apii in celery, as well as Bremia lactucae in lettuce cultivation. Nitrothal-isopropyl is also applied as a seed treatment, in order to combat diseases of vegetable and ornamental plants. It is also used in forestry against diseases causing needle drop. In humans, NT can cause eye, skin, and respiratory system irritation. Nitrothal-isopropyl is classified as a third-class toxicity pesticide for mammals.
The most common methods of NT determination employ gas, liquid, or thin-layer chromatography with a selective detector [4][5][6][7][8]. Voltammetric techniques offer several advantages, such as a wide linear concentration range, good sensitivity, low apparatus cost, capability of miniaturization, capability for on-site detection, relatively short time of analysis, and insensitivity to matrix effects [9,10]. The hanging mercury drop electrode, commonly used for the characterization of electrode processes and for analytical purposes, can be characterized by a wide potential range in the negative region, an easily renewable and smooth surface, the capability to preconcentrate analytes, and so forth. Moreover, square wave voltammetry (SWV) is recognized as the most frequently used voltammetric technique for electroanalysis [11]. Pulse voltammetric techniques are acknowledged for relatively low detection limits, ascribed to the low background current. Electroanalytical techniques have a long history of application to the analysis of biocides in different environmental matrices [12][13][14][15]. Electroanalytical techniques are also advantageous due to the characterization possibilities they offer in kinetic and equilibrium studies [16][17][18][19]. The investigation of nitro-group-containing compounds with voltammetric methods has been pursued since the very beginning of polarography [20] and voltammetry [21]. Despite the increased fears of liquid mercury toxicity in recent years, steady attention is given to the development of electroanalytical methods that utilize hanging mercury drop [22] or amalgam electrodes [23][24][25]. The most common electrode reaction involves a two-step mechanism: the first signal corresponds to a four-electron reduction of the NO2 group, and the second to the reduction of the previously formed hydroxylamine group to the NH2 group, involving two electrons [26]. Other reaction mechanisms are also possible, depending on the electrode material, the supporting electrolyte pH, and so forth. Due to their high hydrogen overpotential, mercury-based working electrodes were found suitable for the voltammetric determination of nitroaromatic molecules.

To date, there has been no voltammetric work dealing with the elaboration of an electroanalytical method for nitrothal-isopropyl determination. Thus, this work was aimed at the development of a simple and sensitive method for the determination of the aforementioned fungicide.

Experimental

2.1. Apparatus. All electrochemical measurements were performed with a microAutolab potentiostat (EcoChemie, Netherlands) controlled by GPES 4.9 electrochemical software. A three-electrode cell was employed, incorporating a hanging mercury drop electrode (AGH University, Cracow), an Ag/AgCl (3.0 M KCl) reference electrode, and a Pt wire as a counter electrode. No special pretreatment of the electrochemical station was needed prior to the measurements, except for degassing the working solution in the voltammetric cell with pure argon (5 N). Mass transport was achieved with a Teflon-coated magnetic stirrer operated by an M164 stand (mtm-anko). Measurements of pH were made using a pH-meter (Elmetron, Poland) with a combined glass electrode. All experiments were performed at room temperature, 20 ± 1 °C.

2.2. Solutions. All chemicals used were of analytical grade. Double-distilled demineralized water was used throughout the experiments. Nitrothal-isopropyl was purchased from Dr.
Ehrenstorfer GmbH (Augsburg, Germany) and used as received. A 25 mL volume of a 1.00 mmol L⁻¹ stock standard solution was prepared by dissolving 7.41 mg of NT in a mixture of ethanol and water (1:1, v:v). More dilute solutions were freshly prepared from the stock standard solution before measurements. Britton-Robinson (BR) buffer solutions of different pH values were prepared by the addition of sodium hydroxide solution to a mixture of phosphoric, boric, and acetic acids, while citrate buffers were composed of sodium citrate in combination with the required amount of hydrochloric acid. The final pH was controlled and adjusted using a pH-meter.

2.3. Voltammetric Procedure. A 10 mL portion of buffer solution containing a specific amount of the analyzed NT standard solution was placed in the electrochemical cell. In order to remove dissolved oxygen, the solution was degassed with a stream of argon before each measurement. Electrochemical measurements of nitrothal-isopropyl were carried out with SWSV and recorded in the potential range from 0.0 to −2.0 V. The SW voltammetric parameters were as follows: frequency 200 Hz, step potential 5 mV, and amplitude 50 mV, with accumulation at 0.0 V for 45 s.

Results and Discussion

3.1. Electrochemical Behavior. Nitrothal-isopropyl is an electroactive compound, and square wave adsorptive stripping voltammograms recorded in its presence show two well-defined reduction signals, the first close to −0.1 V and the second at approximately −0.6 V (Figure 2). For analytical purposes, the signal at the less negative potential was chosen because of the increased sensitivity afforded by its higher response.

The influence of the supporting electrolyte pH on the electrochemical behavior of nitrothal-isopropyl was evaluated by analysis of the peak potential and current. The electrochemical reduction of NT was investigated in the pH range 2.0-12.0 in 0.04 M BR buffer solution (inset in Figure 2). NT signals were observed over the whole studied pH range. The observed analytical peak current was highly dependent on the supporting electrolyte pH; the maximum peak current was observed at pH 2.5. The peak potentials shifted significantly towards more negative values with increasing pH. The shift of the cathodic peak potential can be described by the following equation: Ep (V) = 0.017 − 0.046·pH (R = 0.998). The slope of this dependence is close to the theoretical value of 59.0 mV pH⁻¹ and suggests the involvement of protons in the electroreduction of nitrothal-isopropyl, most probably in a number equal to that of electrons. The same analysis was performed in citrate buffer in the pH range 1.5-3.5, where the highest signal was again observed at pH 2.5. As can be seen (Figure 2, inset), the signal was higher in BR buffer, and therefore this supporting electrolyte at pH 2.5 was selected and applied in further experiments.
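The slope analysis described above can be reproduced with a simple linear fit; the data points below are synthetic values placed on the reported line, since the individual measurements are not tabulated here.

```python
import numpy as np

# Synthetic (pH, Ep) pairs lying on the reported line Ep = 0.017 - 0.046*pH;
# the real data points are not tabulated in the text.
ph = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
ep = 0.017 - 0.046 * ph  # peak potentials in volts

slope, intercept = np.polyfit(ph, ep, 1)
print(f"dEp/dpH = {slope * 1000:.1f} mV per pH unit")
# A slope near the Nernstian -59*m/n mV/pH (m protons, n electrons, 25 C)
# with m = n is the reading given in the text.
```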
The optimization of the square wave adsorptive stripping voltammetric parameters for nitrothal-isopropyl determination was a crucial step in the preparation of the electroanalytical method. The results show a significant influence of the square wave voltammetric parameters on the NT reduction signals (data not shown). The step potential (ΔEs), the amplitude (ΔE), and the frequency (f) were studied within the ranges of 1-7 mV, 10-140 mV, and 8-500 Hz, respectively. The peak shape and current response for NT were greatly affected by varying the step potential. Taking the signal shape into account for analytical purposes, a step potential of 5 mV was chosen for further studies. The current response of NT increased linearly with amplitude up to 40 mV; above this value, a minor regression of the signal was noticed. The ratio between the peak current and the half-peak width also indicates this value as the optimal amplitude. In the range of studied square wave frequencies, a nonlinear dependence between f and the peak current was observed. Significant deterioration of the signal shape occurred when frequencies higher than 250 Hz were applied. Keeping also in mind that the background current increases at higher frequencies, an optimal value of 200 Hz was selected. Thus, for the determination of NT, the optimal square wave voltammetric parameters were found to be: frequency, 200 Hz; amplitude, 40 mV; and step potential, 5 mV. Next, the stripping parameters were optimized in the ranges 0-180 s and 0.25 to −0.05 V for the accumulation time (tacc) and potential (Eacc), respectively. An Eacc of 0.0 V was found to be suitable, since the peak current gradually increased as the accumulation potential shifted towards less positive values, while a significant drop of the signal was observed at Eacc = −0.05 V. The dependence of the peak current on the accumulation time shows a steady increase up to 45 s, followed by a continuous decline. In both cases, the peak position shifted slightly towards more negative potentials with increasing values of the analyzed parameter. An accumulation potential of 0.0 V and a time of 45 s were selected as optimal. Cyclic voltammetry was used to study the electrochemical behavior of NT. The potential scan was started at pH 2.5 from 0.0 V in the negative direction and reversed at −2.0 V back to the starting potential. As can be seen in Figure 3, NT manifests two separated reduction peaks, related to an irreversible two-step process, as there is no evidence of corresponding oxidation signals. The influence of the scan rate on the NT peak current was investigated from 20 to 500 mV s⁻¹ as well. When the scan rate increased, the cathodic peaks shifted in the more negative direction, as expected for an irreversible reaction. A linear dependence between the peak current and the square root of the scan rate was observed (ip (A) = −3.74 × 10⁻⁷ v^(1/2) − 1.8 × 10⁻⁸, R = 0.999), indicating the diffusional nature of the electrode processes. This was confirmed by constructing the plot of the logarithm of the peak intensity (A) versus the logarithm of the scan rate (V s⁻¹). The equation was log ip (A) = 0.58 log v (V s⁻¹) − 6.43 (R = 0.989). The calculated slope of this dependence was close to 0.5, which is attributed to processes controlled by diffusion [27,28].
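A sketch of the diffusion-versus-adsorption diagnostic used above: regress log ip on log v and read off the slope (about 0.5 for diffusion control, about 1.0 for adsorption control). The currents below are synthetic values following the square-root law, not measured data.

```python
import numpy as np

v = np.array([0.02, 0.05, 0.1, 0.2, 0.5])   # scan rates, V/s (synthetic)
ip = 3.74e-7 * np.sqrt(v)                   # currents following ip ~ v^(1/2)

slope, _ = np.polyfit(np.log10(v), np.log10(ip), 1)
print(f"log-log slope = {slope:.2f}")  # ~0.5: diffusion; ~1.0 would mean adsorption
```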
The above-described cyclic voltammograms, the pH effect, and a literature survey on the reduction of aromatic nitro compounds [22,25,29,30] suggest the following electrode reaction mechanism. The reduction peak appearing at the less negative potential can be attributed to the reduction of the -NO2 group, through a single four-electron, four-proton irreversible step, into an -NHOH group. The next peak corresponds to the further reduction of the hydroxylamine group to the corresponding amine group, involving two protons and two electrons.

Electroanalytical Application. The dependence between the cathodic peak current and the NT concentration was examined using SWAdSV (Figure 4). A linear relationship was observed over the range from 2 × 10−7 to 2 × 10−6 mol L−1 in 0.04 M BR buffer solution (pH 2.5) under the optimum conditions. The calibration curve was calculated using the least-squares method. Table 1 provides the characteristics of the calibration plot.

The LOD and LOQ values of the method were obtained as k·SD/b (k = 3 for LOD and k = 10 for LOQ, respectively, where SD is the standard deviation of the intercept and b is the slope of the calibration curve) [31]. The LOD value lies well below lethal concentrations for most living organisms in natural water (e.g., LC50 = 0.33 mg L−1 in the case of trout) [32] and reflects the sufficient sensitivity of the method. The precision and recovery of the method were measured from six repeated measurements of the NT electrochemical signal at different concentrations (Table 2). The obtained results indicate that the method is selective and can be applied to simple environmental samples without significant deterioration of the signal.

Analysis in Spiked Water Samples. Water samples were spiked with nitrothal-isopropyl at a concentration of 4.0 × 10−7 mol L−1. Six replicate experiments were performed along with the standard addition method to determine NT in the spiked environmental samples. Exemplary voltammograms obtained during these studies are presented in Figure 5. The consecutive standard additions of nitrothal-isopropyl caused corresponding increments of the related peak at −100 mV (Figure 5). No matrix effects, signal shifts, or deteriorations were observed. The reliability of the proposed square wave voltammetric method was investigated by assaying nitrothal-isopropyl in water samples. A series of water samples was used to further investigate the accuracy of the proposed method. The analysis results are summarized in Table 3. The obtained results imply that the evaluated method is sufficiently accurate, selective, and precise to be introduced into routine analysis.
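The detection limits and the standard-addition recovery quoted above follow directly from the calibration statistics. A minimal Python sketch, with invented calibration and addition data standing in for the measurements, and with SD and b defined as in the k·SD/b formula used in the text:

```python
import numpy as np

# --- LOD/LOQ from a calibration line (synthetic data) ---
conc = np.linspace(2e-7, 2e-6, 8)                        # mol/L
current = 0.85 * conc + np.random.default_rng(7).normal(0, 2e-9, conc.size)
(b, a), cov = np.polyfit(conc, current, 1, cov=True)     # slope b, intercept a
sd_intercept = np.sqrt(cov[1, 1])
lod, loq = 3 * sd_intercept / b, 10 * sd_intercept / b
print(f"LOD = {lod:.2e} mol/L, LOQ = {loq:.2e} mol/L")

# --- Standard addition for a spiked sample (synthetic additions) ---
added = np.array([0.0, 2e-7, 4e-7, 6e-7])                # mol/L added
signal = 0.85 * (4e-7 + added)                           # sample at 4e-7 mol/L
slope, intercept = np.polyfit(added, signal, 1)
c_found = intercept / slope                              # |x-intercept|
print(f"found = {c_found:.2e} mol/L, recovery = {100 * c_found / 4e-7:.1f} %")
```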
Conclusion

The above-described data clearly demonstrate the possible use of the hanging mercury drop electrode for the square wave adsorptive stripping voltammetric determination of nitrothal-isopropyl. The proposed methodology is fast, precise, and accurate, and can therefore be used for NT quantification in water samples with no matrix effects on the measured response. All the data obtained under the optimized experimental conditions and voltammetric parameters confirmed the practical applicability and viability of the proposed methodology, providing a new instrument for the quantification of NT in water samples. The use of SWAdSV is usually more efficient than other conventional techniques. The newly developed procedure allows accurate detection of nitrothal-isopropyl and constitutes a simple, fast, selective, and highly sensitive methodology. The capability to determine the fungicide content directly in the matrix medium or in natural samples, without laborious pretreatment steps that are usually time-consuming and environmentally unfriendly, is one of the main advantages of the method.

Figure 5: SWAdS voltammograms of nitrothal-isopropyl determination in spiked tap water samples using the standard addition method ((a) sample; (b), (c), and (d) standard additions). Experimental conditions are the same as in Figure 4. Inset: corresponding calibration curve.

Table 1: Quantitative determination of nitrothal-isopropyl in BR buffer (pH = 2.5) with SWSV. Basic statistical data of the regression line.

Table 2: Recovery and precision of the NT peak currents at various nitrothal-isopropyl concentrations.

Table 3: Results of NT determination in spiked samples with SWAdSV.
2018-12-09T02:58:28.134Z
2016-11-09T00:00:00.000
{ "year": 2016, "sha1": "2c4e8f53860a7353d14ca63d08e85dfa7e416aa9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jchem/2016/6045347.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2c4e8f53860a7353d14ca63d08e85dfa7e416aa9", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
54810929
pes2o/s2orc
v3-fos-license
SCHOOLS OF EXCELLENCE AND EQUITY? USING EQUITY AUDITS AS A TOOL TO EXPOSE A FLAWED SYSTEM OF RECOGNITION

Introduction

Leading discussions in public education today are focused on improving teacher quality and closing academic achievement gaps. This discourse is politically bathed in the language of excellence and equity. The standards-based movement, along with the federal No Child Left Behind (NCLB) legislation, proposes criteria for increasing the number of "highly qualified" teachers while simultaneously eliminating the achievement gap between minority students and their peers (Vaughan, 2002). According to Darling-Hammond, Kohn, Meier, Sizer, and Wood (2004), "The broad goal of NCLB is to raise the achievement levels of all students, especially underperforming groups, and to close the achievement gap that parallels race and class distinctions" (p. 3). In doing this, school systems across the United States are now required to publish "report cards" that convey disaggregated data regarding student results on standardized tests. This information is then used either to recognize the academic performance of students and the quality of teaching within a school or, in some cases, to initiate the involvement of a team of people who take over the school to ensure excellence and equity. Unfortunately such language often fails to address blatant, disturbing systemic inequalities regarding the provision of education (offered and defined broadly) to the public at large.
The purpose of this empirical inquiry of state-recognized "Honor Schools of Excellence" was to explore how these schools of distinction are (or are not) promoting and supporting both academic excellence and systemic equity for all students. By definition, Honor Schools of Excellence in North Carolina have at least 90 percent of their students performing at or above grade level, and the school meets expected growth and federal NCLB requirements for adequate yearly progress (AYP). In many ways, this system of recognition, marked solely by students' attainment of a target score on a standardized test as defined and measured by NCLB, actually conflates excellence and equity, therefore offering a narrow definition of student achievement and perpetuating the current achievement gap that separates many minorities from their white counterparts.

The research questions for this study were modified from goal four of Scott's (2001) equity audit, which deals with more equitable opportunities to learn. Its objective is to create "challenging learning opportunities such that every child, regardless of characteristics and educational needs, is given the requisite pedagogical, social, emotional, psychological and material supports to achieve the high academic standards of excellence that are established" (p. 3). Consequently, quantitative data were collected through the use of equity audits to scan for and then document what Skrla, Scheurich, Garcia, and Nolly (2004) referred to as systemic patterns of equity and inequity internal to the school (i.e., patterns embedded within the many assumptions, beliefs, practices, procedures, and policies of schools themselves that promote, prevent, or form barriers to schools' equal success with all student groups). All of the data collected for these audits were public knowledge provided by the state department of instruction and posted on the district's Web site.
Systemic Equity

The evidence is clear, and alarming, that various segments of our public school population experience negative and inequitable treatment on a daily basis (Ladson-Billings, 1994; Valenzuela, 1999). When compared to their white, middle-class counterparts, students of color, of low socioeconomic status, who speak languages other than English, and with disabilities consistently experience significantly lower achievement test scores, teacher expectations, and allocation of resources (Alexander, Entwisle, & Olsen, 2001; Banks, 1997; Delpit, 1995; Ortiz, 1997). According to Oakes, Quartz, Ryan, and Lipton (2000), one reason that the gaps are so persistent, pervasive, and significantly disparate is that "American schools have been pressured to preserve the status quo" (p. 573). The historic marginalization of underprivileged students and the perpetuation of the status quo have served to benefit the same students and families for hundreds of years while simultaneously ignoring the needs of low-income, black, brown, native, and multiracial students and their families (Apple, 1993; Larson & Ovando, 2001). As a result, these students, without realizing it, often fall into a predetermined mold designed for school failure and social inequity. They are "left behind" without hope, without vision, and without equal access to the excellent education to which all children are entitled. Freire (1990) proposed that the purpose of our educational system is to make bold possibilities happen for these students. He stated that it is the work, in fact the duty, of public education to end the oppression of these students. Moses and Cobb (2002) agreed, suggesting that educators today are actually frontline civil rights workers in a long-term struggle for greater educational equity across racial and socioeconomic levels. Although many schools are failing to fulfill this duty, others are meeting the challenge of serving each and every student really well (Oakes et al., 2000; Riester, Pursch, & Skrla, 2002). In striving for excellence and equity, students from varied racial, socioeconomic, linguistic, and cultural backgrounds in these schools are learning at high academic levels. There are "no persistent patterns of differences in academic success or treatment among students grouped by race, ethnicity, culture, neighborhood, income of parents, or home language" (Scheurich & Skrla, 2003, p. 2).

Designed to deepen the contextualization of schools that are truly excellent and equitable, this study was theoretically driven by the conceptual framework of "systemic equity." According to Scott (2001), "Systemic equity is defined as the transformed ways in which systems and individuals habitually operate to ensure that every learner-in whatever learning environment that learner is found-has the greatest opportunity to learn enhanced by the resources and supports necessary to achieve competence, excellence, independence, responsibility, and self-sufficiency for school and for life" (p. 6). Scott's framework focuses on the ways that systems work to ensure all students are successful, including: (a) comparably high achievement and other student outcomes; (b) equitable opportunity to learn; (c) resource distribution equity; and (d) treatment equity. If even one aspect of the system is inequitable, a school cannot have systemic equity. For example, offering a high-quality and challenging curriculum is not effective if the staff does not have high expectations that all students will be successful with that curriculum.
Student Achievement Variables

Given the goal of excellence and equity for all, questions of causality persist. What variables (external demographic-related and internal education-related) actually influence student achievement, and how can schools capitalize on these to narrow the gaps? The quest for more effective forms of schooling has traditionally been synonymous with the quest for greater educational equity across racial and socioeconomic levels. Beginning with the Coleman report in the mid-1960s (Coleman et al., 1966), the past 40 years have witnessed a growing number of research studies aimed at reducing the gap in quality between the school experiences of disadvantaged and more affluent youth. Concluding that the strongest predictors of achievement across all racial groups were social characteristics of the student's home environment (e.g., ethnicity, parents' education, income), Coleman proposed that children from poor families and homes, lacking the prime conditions or values to support education, could not learn regardless of what the school did-in essence, absolving schools of any accountability for inequities among student subgroups. As a result of Coleman's statement that "schools bring little influence to bear upon a child's achievement that is independent of his background and general social context" (p. 325), many people (including educators) still believe that demographic factors are the most reliable predictors of school achievement.

Through the "effective schools research," Edmonds, Brookover, Lezotte, and others (Rosenholtz, 1985) set out to find schools where children from low-income families were highly successful, thereby demonstrating that schools can and do make a difference and that children from high-poverty backgrounds can learn at high levels. Many of these process-product studies identified samples of high-performing schools, documenting certain school, classroom, and leadership practices that are critical to enhanced student achievement and school productivity, regardless of family background. Although the effective schools movement has been influential, questions remain regarding its various recommendations, particularly the direction of causal effect (Rowan, Bossert, & Dwyer, 1983). In other words, although certain characteristics might produce higher-achieving students, the reverse might also be the case; that is, schools may maintain these characteristics because they are fortunate enough to have greater numbers of high-achieving students. That some schools identified as effective at one point were found not to be so a few years later might, for example, suggest the latter possibility. Thus, although "effective schools" clearly share important practices, it has never been consistently established that ineffective schools could become more effective by adopting these practices.
In continuing the search for a reliable set of techniques for transforming ineffective schools into effective ones, various researchers suggested that other internal factors such as school size (Haller, Monk, & Tien, 1993), class size (Mosteller, 1995), pupil-teacher ratios, special education assignment (Artiles, 1998), placement in gifted and talented programs (Ford & Harmon, 2001), the number of discipline referrals, and other school-related variables may also play an important role in what students learn. Incidentally, this same body of research also repeatedly indicated that students of color and students of poverty received a highly disproportionate share of negative consequences and an inexplicably low share of positive resources. According to McKenzie and Scheurich (2004), research on the achievement gap today reveals similar findings: There is an abundance of data and research that show that students of color not only are performing at lower achievement levels than their White counterparts but, also, are overrepresented in special education and lower level classes, dropping out of school at higher numbers, frequently educated by teachers who do not believe they can learn or who are actively negative in their attitude toward these students, underrepresented in gifted and talented and higher level classes, often times educated in schools with less resources and with the least experienced teachers, and more likely to be suspended or expelled. (p. 602)

Current research indicates that access to effective teaching is correlated with high and equitable levels of learning for all students (Ferguson, 1998; Goldhaber, 2002; Hanushek, Kain, & Rivkin, 2002). More so than any other home or school-level factor, statistics show that teacher quality actually has the greatest impact on student achievement (National Commission on Teaching and America's Future, 1996). Simply put, skilled teachers produce better student results. As such, the NCLB teacher-quality provisions are driven by this research documenting the importance of teacher quality on student achievement and in closing achievement gaps between disadvantaged and non-disadvantaged students (Carey, 2004).
Although most researchers agree that teachers matter, these same researchers tend to disagree on how "teacher quality" should be defined and then measured. Although some argue that some measures are better predictors of teacher effectiveness than others (Rowan, Correnti, & Miller, 2002), attributes such as teacher subject specialty, degree level, certification type, years of teaching experience, general academic proficiency as measured by standardized test scores (e.g., SAT, ACT, Praxis), and the selectivity of a teacher's alma mater are often used as proxies for teacher quality. Regardless of how it is technically defined and measured, teacher quality is extremely important. Unfortunately, like so many of the other resources, the pool of high-quality teachers is not distributed equitably across schools and districts. The fact that the less socially advantaged the students, the less likely teachers are to hold full certification and a degree in their field and the more likely they are to be inexperienced and have entered teaching without certification is itself a major contributor to the achievement gap (Darling-Hammond, 1999). In an effort to combine these factors and begin resolving this issue, Skrla, Scheurich, Garcia, and Nolly (2004) proposed this simple formula: teacher quality equity plus programmatic equity equals achievement equity. In part, the researchers involved in the current study begin to test that assumption.

Participants

According to Patton (1990), "The logic and power of purposeful sampling lies in selecting information-rich cases for study in depth" (p. 169). Through purposeful sampling, 24 elementary schools were eventually selected from a list of 61 "honor" schools in one large school district in a southeastern U.S. state using the following predetermined criteria:

• K-5 Honor School of Excellence during the 2004-05 school year (no middle schools or high schools included),
• Regular, traditional-calendar school (no magnet, charter, or year-round schools included),
• Principal has been in place for at least three years (no school with a new principal included), and
• A critical mass of student diversity (at least 18 percent of the total school population is minority students). For this study, "minority" is defined as students who fall under the NCLB subgroups of African American students, Hispanic American students, Native American students, and multiracial students.

All 24 traditional K-5 Honor Schools of Excellence identified during the 2004-05 academic year recorded proficiency rates of achievement (i.e., scoring at or above a Level 3 on the state's end-of-grade test) of 95 percent or above for all of their white and Asian American students. The proficiency rates for minority students in these same schools ranged from 64.6 to 87.1 percent. Based solely on minority achievement, the 24 schools were rank ordered and then separated into two types of schools. The 12 more equitable schools that recorded achievement gaps of less than 15 percent between their white students and their minority students were labeled SGS for "small gap schools." The 12 less equitable schools that recorded achievement gaps of 15 percent or more between their white students and their minority students were labeled LGS for "larger gap schools." Any gap, especially a gap of 15 percent, indicates inequity and illustrates the need for this research and the importance of learning from and building on the success of the more equitable schools in the district.
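The SGS/LGS split described above reduces to a single rule on the white-minority proficiency gap. A minimal Python sketch of that rule follows; the school names and proficiency rates are invented for illustration.

```python
# Classify Honor Schools of Excellence by their achievement gap:
# gap < 15 points -> "SGS" (smaller gap), gap >= 15 -> "LGS" (larger gap).
schools = {
    "School A": {"white": 96.5, "minority": 86.0},   # hypothetical rates
    "School B": {"white": 97.0, "minority": 70.2},
    "School C": {"white": 95.8, "minority": 84.9},
}

for name, rates in sorted(schools.items(),
                          key=lambda kv: kv[1]["white"] - kv[1]["minority"]):
    gap = rates["white"] - rates["minority"]
    label = "SGS" if gap < 15.0 else "LGS"
    print(f"{name}: gap = {gap:4.1f} points -> {label}")
```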
The district involved in this study is unique in its focus on keeping most schools balanced by subgroups of students identified under NCLB. Around 20 years ago, the school board modified its racial desegregation plan by replacing racial considerations with a new student assignment plan based on a combination of socioeconomic status and academic performance. Accordingly, no school may have more than 40 percent of its children eligible for subsidized lunches or more than 25 percent of its students scoring below grade level on standardized tests. This approach actively resists the demographic trends toward high-poverty and low-performing schools by making decisions based on students' need rather than their race. As a result, the schools in this study have a population of minority students that ranges from 18 to 60 percent of the total school population. Although this demographic trend is not representative of many districts or schools in districts that essentially remain segregated, it does provide a unique opportunity to study and compare what is actually happening (or not happening) in schools that are similar demographically (i.e., to compare apples to apples and oranges to oranges, not apples to oranges).

Instrumentation: Equity Audits

Equity audits are a leadership tool that can be used to guide schools in working toward equity and excellence; they involve the use of district, school, and classroom data to identify, address, and remove systemic patterns of inequality that come from inside the school. Equity auditing is a concept with a respected history in civil rights, in curriculum auditing, and in some state accountability systems (English & Steffy, 2001). In this study, I took Skrla et al.'s (2004) advice and began "with a manageable set of demographic, teacher quality, programmatic, and student achievement indicators that together form a straightforward, delimited audit of equity" (p. 141).

Procedures

Demographic equity for each of the SGS and LGS was explored by means of the following descriptive statistics: (a) number of students; (b) number of 3rd, 4th, and 5th graders who took the reading and math tests; (c) percentage of minority students (defined for this study as African American, Hispanic, Native American, and multiracial students); (d) percentage of economically disadvantaged students (defined for this study as students eligible for free or reduced lunch); (e) percentage of limited-English-proficient (LEP) students; (f) percentage of students with disabilities (tested and labeled); (g) number of AYP goals (subgroups identified under the federal NCLB Act); and (h) actual geographic location.

Because high-quality teachers are key determinants of students' opportunities to be academically successful, evidence of teacher quality equity in each of the SGS and LGS involved four variables: (a) teacher education (percentage of teachers holding an advanced degree at the master's or doctoral level); (b) teacher credentials (percentage of fully licensed teachers, percentage of classes taught by highly qualified teachers, and percentage of teachers with National Board certification); (c) teacher experience (number of years as a teacher); and (d) teacher mobility (percentage of teachers leaving or not leaving a campus annually). According to Skrla et al.
(2004), "Equally as important as teacher quality is the quality of the programs in which students are placed (or from which they are excluded) and in which teachers work" (p. 145). Because quality varies largely among different placements and working conditions within schools and school districts, indicators of programmatic equity for this study involved data gathered on the following resources: (a) student space (percentage of school crowding and number of mobile units); (b) student discipline (number of acts of violence and number of student suspensions per 100 students per school year); (c) student access to books and technology (number of library books per student, number of students per computer, and number of students per Internet connection); (d) teachers' time; (e) facilities and resources; (f) teachers' empowerment; (g) school leadership; and (h) opportunities for professional development.

Indicators of achievement equity in each of the SGS and LGS expanded the traditional attention on nationally normed achievement test results and included such evidence of student attainment as growth rates, academic levels, parent education, and AYP goals met. Adequate yearly progress standards are used to determine success under the federal NCLB legislation involving incremental growth from certain starting points in reading and mathematics. With a goal of closing achievement gaps, there are nine categories of students that are potentially identified as subgroups. They are: (1) white, (2) black, (3) Hispanic, (4) Native American, (5) Asian/Pacific Islander, (6) multiracial, (7) economically disadvantaged, (8) limited English proficient, and (9) students with disabilities. A school must achieve 100 percent of its targets (subgroups) in order to be deemed to have made adequate yearly progress. In each of the 24 schools, 95 percent or more of the white and Asian/Pacific Islander students were proficient on the end-of-grade reading and mathematics tests. The achievement audit for this study disaggregated the following available data based on the NCLB subgroups: (a) state achievement test results (from a state accountability program, focused primarily on average growth, designed to improve student achievement, reward excellence, and provide assistance to schools that need extra help); (b) growth rates; (c) academic levels; (d) parent education (proficiency rate of students whose parents do not have a college education); and (e) number of AYP goals met.

Note: National experts report that about 10-12 percent of a school's student population probably requires special education designations. Both types of schools in this study report higher than average classifications, resulting in over-assignment (Artiles, 1998).
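Because AYP is all-or-nothing, the check behind the "number of AYP goals met" indicator is a conjunction over the identified subgroups. A small illustrative Python sketch, with hypothetical subgroup rates and a uniform target standing in for the actual state benchmarks:

```python
# AYP is all-or-nothing: every identified subgroup must meet its target.
NCLB_SUBGROUPS = ["white", "black", "hispanic", "native_american",
                  "asian_pacific_islander", "multiracial",
                  "econ_disadvantaged", "limited_english", "disabilities"]

def makes_ayp(proficiency: dict, targets: dict) -> bool:
    """True only if every subgroup identified at the school meets its target."""
    return all(proficiency[g] >= targets[g] for g in proficiency)

# Hypothetical school with three identified subgroups and a uniform target.
rates = {"white": 96.0, "black": 78.0, "econ_disadvantaged": 81.0}
targets = {g: 76.7 for g in NCLB_SUBGROUPS}   # illustrative benchmark only
print("AYP met" if makes_ayp(rates, targets) else "AYP missed")
```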
Audit of Demographics in Smaller Gap Schools (SGS) and Larger Gap Schools (LGS)

Demographically speaking, the schools involved in this research study are very similar. All 24 are regular K-5, traditional-calendar Honor Schools of Excellence in the same large school district of over 135,000 students. All 24 schools are located within a 12-mile radius of one another, house an average of 722 students, and boast an average daily attendance figure of 95 to 97 percent. Approximately one-third of the student population in both the SGS and LGS is comprised of minority students. The SGS and LGS also both serve approximately the same number of economically disadvantaged students (~29.5 percent who qualify for free or reduced lunch [F&R] for SGS and LGS), same number of limited-English-proficient students (~7 percent for SGS and LGS), and same percentage of students with disabilities (~16.5 percent for SGS and LGS). As a result, both sets of schools also have the same number of AYP goals to meet (20). See Table 1 for a snapshot of the demographic data for SGS and LGS.

Audit of Teacher Quality in SGS and LGS

Although defining and measuring teacher quality is a complicated task (Rowan, Correnti, & Miller, 2002), it is vitally important in raising student achievement. Researchers indicated that having a critical mass of licensed, experienced teachers with advanced degrees is directly correlated with students' academic success (Darling-Hammond, 1999). An audit of teacher quality revealed that teachers' credentials, education, experience, and mobility are very similar in both the SGS and the LGS. For this study, percentage of fully licensed teachers refers to the percentage of classroom teachers with clear initial or clear continuing licenses in all license areas (≈ 90 percent for SGS and LGS). Percentage of classes taught by highly qualified teachers includes classes taught by highly qualified teachers as defined by federal law (≈ 89.5 percent for SGS and LGS). Percentage of teachers with advanced degrees includes teachers who have completed an advanced college degree, including a master's or doctoral degree (≈ 25 percent for SGS and LGS). National Board-certified teachers refers to the percentage of school staff, including teachers, administrators, and guidance counselors, who have received National Board certification (≈ 8.5 percent for SGS and LGS). The years of teaching experience measure was broken into three categories: 0-3 years, 4-10 years, and 10+ years.
Although small, an interesting difference was noted in that half (51 percent) of the teachers in the SGS had 10+ years of experience compared to 43 percent of the teachers in the LGS.The LGS schools seem to employ more teachers in the 4-9 year range of experience (34 percent) compared to the SGS (29 percent).Overall, both types of schools seem to employ an appropriate balance of new teachers, midcareer teachers, and very experienced veteran teachers.Lastly, teacher turnover rate is defined as the percentage of classroom teachers who left their school staff from the start of the prior year to the start of the current year (≈ 19 percent for SGS and LGS).See Table 2 for a snapshot of the teacher quality data for SGS and LGS.Programmatic issues involve a number of concerns including resources, physical space, student discipline, and access to books and technology.Once again, an audit of the SGS and LGS revealed some striking similarities.For example, even though the SGS are 5 percent over capacity and the LGS are 10 percent over capacity with regard to school crowding and both sets of schools have approximately seven mobile units on their properties, the average class size for all 24 schools involved is still 21 students. School safety issues involve the number of acts of crime or violence per 100 students, which includes all acts occurring in school, at a bus stop, on a school bus, on school grounds, or during off-campus school-sponsored activities.Although the LGS reported one more act per 100 students than the SGS, the SGS reported one more short-term (10 days or less) or long-term (more than 10 days) out-of-school suspension or expulsion per 100 students than the LGS.Students in both the SGS and LGS have access to approximately the same number of library and media center books (≈ 17 books) and the same number of Internet-connected computers (≈ 4 to 1 student/computer ratio). Another way to assess programmatic equity is to examine the results of a statewide survey about teacher working conditions in the state in which my research was conducted (Center for Teaching Quality, 2006).The goals of the survey were to (a) hear from teachers and administrators about what they identify as areas in need of improvement, (b) understand what school characteristics appear to affect those perceptions, and (c) provide data on working conditions to local school leaders and state policymakers.Research and focus groups with teachers were conducted to develop 30 statistically sound working conditions standards for schools in five broad categories-time, empowerment, professional development, leadership, and facilities and resources.The online survey sent to every licensed public educator in the state solicits responses on 72 statements regarding working conditions in these five domains.Educators are asked to respond to each of the statements with a value of 1 through 6, with 1 representing "Strongly Disagree" and 6 representing "Strongly Agree."All statements are written to indicate a positive description of the school environment (e.g., "The principal is a strong, supportive leader" and "Adequate and appropriate time is provided for professional development").Therefore, higher scores always indicate a more positive opinion of the school environment.In 2004-05, surveys were completed and returned voluntarily by 42,209 educators from 1,471 schools in 115 of the state's 117 school districts.Seventy-six percent of the schools had a response rate of 50 percent or higher. 
The domain of time addressed in the survey ensures that teachers can work collaboratively and focus on teaching all students. Empowerment is meant to ensure that those who are closest to students are involved in making decisions that affect them. Facilities and resources ensure teachers have the resources to help all children learn. Leadership ensures schools have strong leaders who support teaching and learning. And opportunities for professional development ensure teachers can continually enhance their knowledge and skills. The Southeast Center for Teacher Quality (Jacobson, 2005) found all five variables to be statistically significant and meaningful predictors of student achievement.

Interesting findings emerged regarding the return rate, range of returns, and actual ratings on the surveys. First, 20 percent more teachers in the SGS actually completed the survey (total of 88 percent) than in the LGS (total of 68 percent). Second, the range of returns for the SGS was considerably smaller (29, or between 71 and 100 percent) than for the LGS (65, or between 35 and 100 percent). And third, the teachers in the LGS actually rated each of their working conditions slightly higher than the teachers in the SGS. (The SGS responses were more aligned with the district average.) See Tables 3 and 4 for a snapshot of the programmatic data for SGS and LGS. These differences certainly speak to different cultures within each of the schools and may be explained in a variety of ways (positive and negative). Unfortunately, without more data (qualitative and/or quantitative), it is difficult to identify precise reasons for these results (e.g., culture of nonparticipation in some schools, pressure from the leadership to close gaps in other schools, only contented teachers completed the survey). Similarly, information needed to disaggregate the exceptional children's classifications, including cognitive and behavioral disabilities and gifted and talented, by race and income was not readily available. I intend to continue to mine for this data and the possibility of unequal representation in certain programs.

Audit of Achievement in Smaller Gap Schools (SGS) and Larger Gap Schools (LGS)

According to Scott (2001), achievement equity means having comparably high performance for all groups of learners when academic achievement data are disaggregated and analyzed. Although demographic, teacher quality, and programmatic audits all indicated a fair amount of equity between SGS and LGS, the achievement audit between both types of schools indicated great disparities. Across the board, at-risk students in the SGS outperformed their LGS counterparts (and the district, for that matter). The 11.2 percent difference in minority student proficiency between the two types of schools was used to separate the schools initially. Interestingly, SGS continued to outpace LGS in achievement among economically disadvantaged students (9.4 percent difference), limited-English-proficiency students (7.2 percent difference), students with disabilities (4.9 percent), and students of parents with no college education (13.3 percent). Even though 95 percent of all students were tested in all 24 schools and each school noted some growth, a six-year analysis of growth indicated a greater difference of 6.3 percentage points for students in the SGS versus the LGS. Nine percent of the students in the LGS scored below proficiency at a Level 1 or 2, whereas only six percent of the students in the SGS scored at a Level 1 or 2.
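The subgroup comparisons above amount to differencing disaggregated proficiency rates between the two sets of schools. A short Python sketch of the computation, using hypothetical rates rather than the published table values:

```python
# Hypothetical disaggregated proficiency rates (percent); the differences echo
# the kinds of SGS-vs-LGS comparisons reported above, not the actual data.
sgs = {"minority": 82.1, "econ_disadv": 84.0, "lep": 80.5, "swd": 70.1}
lgs = {"minority": 70.9, "econ_disadv": 74.6, "lep": 73.3, "swd": 65.2}

for group in sgs:
    diff = sgs[group] - lgs[group]
    print(f"{group:12s}: SGS {sgs[group]:5.1f}  LGS {lgs[group]:5.1f}  "
          f"SGS - LGS = {diff:+5.1f} points")
```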
See Table 5 for a snapshot of the achievement data for SGS and LGS.

Although improving teacher quality continues to be a leading national priority, "the fact that, broadly speaking, our children experience differential levels of success in school that is distributed along race and social class lines continues to be the overridingly central problem of education" (Skrla, Scheurich, Johnson, & Koschoreck, 2001, p. 239). Changing demographics of the student population in the nation's schools, the stable demographics of the teaching force (i.e., white, middle-class females), and the growing contrast between these two sets of demographics support the need for all educators to increase their knowledge of and social responsibility toward diversity- and equity-related issues. In serving increasingly diverse student populations from a variety of cultural and linguistic backgrounds, many of whom experience poverty, neglect, or other negative situations that can seriously affect their physical, cognitive, and emotional development, Villegas (1992) argued that educators in a multicultural society need the following: (1) an attitude of respect for cultural differences; (2) knowledge of the cultural resources their students possess, and skills in tapping these resources in the teaching-learning process; (3) a belief that all students are capable of learning, evidenced in an enriched curriculum for all pupils; and (4) a strong sense of professional efficacy when evaluating students. Unfortunately, beliefs, attitudes, and mind-sets do not lend themselves easily to empirical investigation (Pajares, 1992).

As the results from this research indicate, equity audits are a practical, easy-to-apply tool that educators can use to identify educational inequalities objectively. Studying schools that teach similar populations of students from the same geographical region shows that it is impossible to ignore the role that schools play in the achievement of all students. Data are powerful; they separate personal agendas from organizational necessities. When data are collected, analyzed, and exhibited in a transparent way, it is difficult for teachers, parents, and even school board members to deny certain disparities in practices, deficiencies in systems, and gaps in outcomes.
Actually addressing and then removing such systemic patterns of inequity requires more than awareness, though; it requires action. Igniting reform for true excellence requires the will to reform, as well as a close examination of personal beliefs coupled with a critical analysis of professional behavior. Even though convincing research suggests that beliefs are the best predictors of individual behavior and that educators' beliefs influence their perceptions, judgments, and practices, research also states that beliefs are hardy and highly resistant to change (Bandura, 1986; Dewey, 1933; Pajares, 1992; Rokeach, 1968). Understanding the nature of beliefs, attitudes, and values is essential to understanding educators' choices, decisions, and effectiveness regarding issues of diversity, social justice, and equity. Assessing beliefs in an effort to make them known and subject to critical analysis is an important initial step in the process. (See Brown, 2004, for a review of measures, instruments, inventories, and studies that assess educators' personal and professional beliefs, attitudes, perceptions, and preconceptions.) We can assume that the more critically conscious educators become, the more prone they are to behave appropriately and constructively in actual educational situations involving students of diverse cultures, ethnic groups, backgrounds, abilities, economic levels, and so forth, and the more attentive they will become to redressing social injustices and developing enduring educational practices embodying equity.

According to Scheurich and Skrla (2003), "The success of our society will soon be directly dependent on our ability as educators to be successful with children of color, with whom we have not been very successful in the past" (p. 5). These alarming gaps challenge us to dig deeper inside the schools for more subtle causes. Scott (2001) called these internal causes of inequity "systemic inequities" because they are built systematically into the processes and procedures of the system that is the school. A school culture that perpetuates the status quo and turns a blind eye to the social injustices that permeate our schools is not really "excellent" (i.e., the state's formula used to identify exemplary schools is in fact institutionally flawed). As such, excellence and equity must be pursued concurrently to assure that all students are served well and that all are encouraged to perform at their highest level. Excellence without equity is not excellence-it is hypocrisy. Further research is needed to document the specific strategies that principals of "excellent, equitable schools" use to confront and change past practices anchored in open and residual racism and class discrimination.

Table 1: Demographic Data for Smaller Gap Schools (SGS) and Larger Gap Schools (LGS): Average Data Set for 2004-05

Table 3: Programmatic Data for Smaller Gap Schools (SGS) and Larger Gap Schools (LGS): Average Data Set for 2004-05
2018-12-14T21:57:10.582Z
2010-07-19T00:00:00.000
{ "year": 2010, "sha1": "e0a889841b5b4a159f17d247f98e3506dd5a243b", "oa_license": "CCBYSA", "oa_url": "https://journals.sfu.ca/ijepl/index.php/ijepl/article/download/206/92", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e0a889841b5b4a159f17d247f98e3506dd5a243b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
43512531
pes2o/s2orc
v3-fos-license
Rotation-induced Mode Coupling in Open Wavelength-scale Microcavities

We study the interplay between rotation and openness for mode coupling in wavelength-scale microcavities. In cavities deformed from a circular disk, the decay rates of a quasi-degenerate pair of resonances may cross or anti-cross with increasing rotation speed. The standing-wave resonances evolve to traveling-wave resonances at high rotation speed; however, either the clockwise (CW) or the counter-clockwise (CCW) traveling-wave resonance can be the one with the lower cavity decay rate, contrary to the intuitive expectation from the rotation-dependent effective index. With increasing rotation speed, a phase locking between the CW and CCW wave components in a resonance takes place. These phenomena result from the rotation-induced mode coupling, which is strongly influenced by the openness of the microcavity. The possibility of a non-monotonic Sagnac effect is also discussed.

I. INTRODUCTION

Eigenmodes are fundamental in understanding both quantum and wave phenomena. When the system is perturbed, the eigenmodes of the original, unperturbed system become coupled. In optics, for example, the coupling can be introduced by matter-mediated interaction in cavity quantum electrodynamics [1,2], by nonlinearity in multimode lasers [3], and by linear scattering from a local defect or a gradual boundary deformation in optical waveguides [4] and microcavities [5][6][7][8]. In addition, rotation causes a minute change of the refractive index [9], which leads to the mixing of standing-wave resonances in optical microcavities [10,11]. The well-known Sagnac effect [12][13][14][15][16], i.e. the rotation-induced frequency splitting, has also been reported in microcavities [9][10][11][17]. Although most microcavities have open boundaries, the openness or coupling to the environment has not been considered as a key factor that can dictate the behaviors of rotating cavities; it has only been studied as a quantity that can be influenced by the rotation [9,17]. In microcavities much larger than the wavelength of the resonances, the effect of the openness is weak and the cavity can be treated as a closed system. This treatment is not sufficient for wavelength-scale microcavities [18][19][20], which are valuable for integrated photonics circuits, among others, because of their small footprints and mode volumes.

In this report we show that the openness of wavelength-scale microcavities can have strong influence on rotation-induced phenomena, including the Sagnac effect. We first show analytically that it slightly enhances the Sagnac effect in circular microcavities. Its effect is much stronger in asymmetric resonant cavities (ARCs) [21][22][23], which can lead to different scenarios of mode coupling, including crossing of the cavity decay rates and a non-monotonic frequency splitting with rotation. These behaviors are analyzed using a coupled-mode theory, and two key quantities are identified, i.e. the phase of the coupling constant between the quasi-degenerate resonances at rest, and the phase of the difference in their complex resonant frequencies. With closed boundaries both phases vanish, thus their non-zero values result from the openness of the cavity. Our analysis also reveals a passive phase locking between the clockwise (CW) and counterclockwise (CCW) traveling wave components in a resonance at high rotation speed.
Below we focus on transverse magnetic (TM) resonances in two-dimensional (2D) microcavities without loss of generality. Their magnetic field is in the cavity plane, and their electric field, represented by ψ(r), is perpendicular to the cavity plane. To the leading order of the rotation speed Ω, the resonances ψ and their frequencies k of an open cavity are determined by the modified Helmholtz equation [9]

[∇² + n²(r)k² + (2ikΩ/c)∂_θ] ψ(r) = 0,  (1)

where ρ(θ) is the boundary of the microcavity in the polar coordinates and the origin is at the rotation center. c is the speed of light in vacuum, and n is the refractive index inside the cavity. We have assumed that the rotation axis is perpendicular to the cavity plane and Ω > 0 indicates CCW rotation. Eq. (1) is a generalization of the equation for closed cavity modes discussed in Ref. [10], and the openness of the cavity makes the resonant frequencies k complex, with a negative imaginary part that reflects the cavity decay rate, i.e. κ = −2Im[k] > 0.

II. CIRCULAR MICRODISK CAVITIES

We start with a circular dielectric disk of radius R. The angular momentum m is a conserved quantity. A pair of CW (ψ ∝ e^{−i|m|θ}) and CCW resonances (ψ ∝ e^{i|m|θ}) are degenerate when the cavity is stationary, with the same complex resonant frequency k_0. The angular momentum is still conserved at a nonzero rotation speed Ω, i.e. Eq. (1) can be solved by imposing the following ansatz:

ψ(r, θ) = J_m(k̄_m r) e^{imθ} for r ≤ R,  ψ(r, θ) = A_m H^+_m(k̃_m r) e^{imθ} for r > R,  (2)

where k̄_m ≡ √(n²k² − 2kmΩ/c) = n(k − mΩ/n²c) + O(|Ω/c|²), k̃_m ≡ √(k² − 2kmΩ/c) = k − mΩ/c + O(|Ω/c|²), and J_m, H^+_m are the Bessel function and Hankel function of the first kind. k is determined by

k̄_m J′_m(k̄_m R)/J_m(k̄_m R) = k̃_m H^{+′}_m(k̃_m R)/H^+_m(k̃_m R),  (3)

which is required by the continuity of ψ(r) and its radial derivative at r = R.

Eq. (1) was studied numerically in Ref. [9] using a finite-difference-time-domain (FDTD) method adapted to the rotating frame. The results show that the aforementioned double-degenerate resonances of frequency k_0 at rest split at infinitesimal Ω, and the differences in both the real and imaginary parts of their complex frequencies increase linearly with Ω, with an enhanced Sagnac effect (i.e. for the real part) compared with closed microcavities [10]. Below we confirm these results analytically by expanding Eq. (3) to the leading order of the dimensionless rotation speed Ω̃ ≡ RΩ/c, which reveals that the slightly enhanced Sagnac effect in an open cavity depends on the angular momentum. We derive from Eq. (3) that

k ≈ k_0 + η_m mΩ/(n²c),  (4)

where η_m is a complex, m-dependent enhancement factor. Note that both k and η_m are complex due to the openness of the cavity. It can be shown that for whispering-gallery modes Re[η_m] is larger than 1 and it approaches this lower bound as |m| → ∞ [see Fig. 1(c)]; the splitting of the pair (m, −m) is then

k_{|m|} − k_{−|m|} ≈ 2|m| η_m Ω/(n²c),  (6)

and the enhancement factor Re[η_m] is stronger in wavelength-scale microcavities where |m| is small. We note that this does not imply that the Sagnac effect itself is stronger in wavelength-scale cavities, since the dominant dependence still comes from the linear size of the cavity, reflected by the factor of |m| in Eq. (6). For a small refractive index inside the cavity (e.g., n = 2), the |m|-dependence of Re[η_m] is non-monotonic, and Re[η_m] reaches a local maximum at a certain m [Fig. 1(c)]. The approximation (4) agrees well with the numerical solutions of Eq. (3). One example is given in Figs. 1(a) and (b), in which n = 2, k_0 R ≈ 5.3923 − 0.0114i and we found η_8 ≈ 1.2478 + 0.0665i, indicating that the splitting of the imaginary parts of the complex resonances is about 20 times smaller than that of the real parts.
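The matching condition of Eq. (3) can be solved numerically for the complex resonance frequencies. The Python sketch below uses SciPy's Bessel and Hankel routines and a plain complex secant iteration; the refractive index, angular momenta, and starting value k_0 R ≈ 5.3923 − 0.0114i follow the example quoted above, while the convergence of the iteration from that starting value and the extraction of η_8 via the splitting formula of Eq. (6) are assumptions of this sketch.

```python
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

n = 2.0  # refractive index inside the cavity

def matching(kR, m, Om):
    """Eq. (3) made dimensionless by the radius R (kR = k*R, Om = R*Omega/c)."""
    kbar = np.sqrt(n**2 * kR**2 - 2 * kR * m * Om)   # kbar_m * R
    ktil = np.sqrt(kR**2 - 2 * kR * m * Om)          # ktil_m * R
    return (kbar * jvp(m, kbar) / jv(m, kbar)
            - ktil * h1vp(m, ktil) / hankel1(m, ktil))

def secant(f, z0, z1, tol=1e-12, nmax=60):
    """Plain secant iteration; works with complex arguments."""
    for _ in range(nmax):
        f0, f1 = f(z0), f(z1)
        z0, z1 = z1, z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z1 - z0) < tol:
            break
    return z1

k0 = 5.3923 - 0.0114j     # k0*R quoted above for the |m| = 8 pair
Om = 1e-4                 # dimensionless rotation speed R*Omega/c
kp = secant(lambda k: matching(k, +8, Om), k0, k0 * (1 + 1e-6))
km = secant(lambda k: matching(k, -8, Om), k0, k0 * (1 + 1e-6))
eta8 = (kp - km) * n**2 / (2 * 8 * Om)   # splitting -> eta_m, cf. Eq. (6)
print(f"k_ccw*R = {kp:.6f}   k_cw*R = {km:.6f}   eta_8 ~ {eta8:.4f}")
```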
But since Re[k_0]/|Im[k_0]| ∼ 50, the relative change of the splitting of the imaginary parts is larger compared with the real parts, as found numerically in Ref. [9]. We also note that the mixing of the CW and CCW waves of the same |m| found in Ref. [9] is not caused by rotation but rather by the way of excitation in the FDTD method, as we have shown that each resonance contains only a CW or CCW wave of a single m.

In the discussion above we have assumed that the rotation axis is at the center of the microdisk cavity. Eq. (1) still holds when the disk center is away from the rotation axis (which is the origin of the polar coordinates by definition), and the circular microdisk cavity becomes an ARC since now ρ(θ) ≠ const. We will study ARCs in general in the next section.

III. ASYMMETRIC RESONANT CAVITIES

There are several ways to find the resonances in a rotating ARC. In addition to the modified Finite-Difference-Time-Domain (FDTD) simulation [9], a perturbative approach can be employed in any numerical method that incorporates an outgoing boundary condition, such as the Finite-Difference-Frequency-Domain method used in Ref. [3], where the cavity is put inside a circular computational domain. For any realistic value of the rotation speed, Ω̃ ≪ 1 and the gradient term on the left hand side of Eq. (1) only leads to a small shift of the resonant frequencies. Thus a perturbative root search can be implemented by first calculating the resonant frequencies of the stationary disk (k_0), approximating k by k_0 in the gradient term, calculating the resulting k, inserting it back into the gradient term, and repeating the process until k converges.

Here we employ a nonperturbative approach, the modified scattering matrix method proposed in Ref. [24]. Besides the consideration of numerical efficiency, one motivation is to capture the cavity shape exactly. It was recently found that even a minute perturbation on the scale of one thousandth of the wavelength can cause a drastic variation of the emission pattern in wavelength-scale microcavities [7]. Finite difference or finite element methods unavoidably introduce a small deviation when approximating the smooth cavity boundary by discrete grids, while the scattering matrix method utilizes the analytical form of the cavity boundary and is free of spatial grids. The scattering matrix method applies to a concave cavity with a uniform refractive index and a smooth boundary deviation δρ(θ) from a circle satisfying the Rayleigh criterion |δρ(θ)| ≪ R. In this approach the wave function of a resonance inside the cavity is decomposed in the angular momentum basis, i.e.

ψ(r, θ) = Σ_m [α_m H^−_m(k̄_m r) + β_m H^+_m(k̄_m r)] e^{imθ},  (7)

where H^−_m are the Hankel functions of the second kind. Outside the cavity the outgoing condition

ψ(r, θ) = Σ_m γ_m H^+_m(k̃_m r) e^{imθ}  (8)

is used. Compared with the formulation for non-rotating cavities [25,26], the difference lies in the m-dependent frequencies k̄_m and k̃_m defined previously. Defining the vectors |α⟩, |β⟩, |γ⟩ from the coefficients in Eqs. (7) and (8), the regularity of ψ(r) at the origin is satisfied by requiring |α⟩ = |β⟩. The continuity conditions of ψ(r) and its radial derivative at ρ(θ) can be put into the matrix form

H^−(k)|α⟩ + H^+(k)|β⟩ = H̃^+(k)|γ⟩,  (9)

D^−(k)|α⟩ + D^+(k)|β⟩ = D̃^+(k)|γ⟩,  (10)

in which H̃^+, D̃^+ are defined similarly to H^±, D^± but with k̃_m in place of k̄_m. By eliminating |γ⟩ from Eqs. (9) and (10), a matrix equation can be found in the form S(k)|α⟩ = |β⟩. By taking into account the constraint |α⟩ = |β⟩ mentioned above, we solve S(k)|α⟩ = |α⟩ [25,26] to find the resonances k.
Below we exemplify the effect of the openness on the Sagnac effect and chiral symmetry breaking in a wavelength-scale limaçon cavity using this method.

A. Chiral symmetry breaking and emission pattern asymmetry

It was found that spontaneous breaking of chiral symmetry occurs in wavelength-scale microcavities [18][19][20]: CW and CCW waves in a resonance follow symmetric but distinct orbits. This is due to the wave effect of light, which cannot be treated as rays traveling in straight lines and undergoing specular reflections at the cavity boundary. We found that such orbits evolve into resonances dominated by CW or CCW waves at large Ω, with only small variations of their intensity patterns inside and outside the cavity. One example is shown in Fig. 2 using a limaçon cavity, the boundary of which is given by ρ(θ) = R(1 + ε cos θ). The deformation from a circle (due to a finite ε) breaks the degeneracy of the resonances at rest, and each standing-wave resonance now has multiple angular momenta, with a dominant pair (m, −m) if the deformation is small. Because the limaçon is symmetric about the horizontal axis (θ = 0, 180°), the wave functions of these standing-wave resonances are either even or odd about this axis, which we denote by ψ_+ and ψ_−. They appear as quasi-degenerate pairs, with each pair having the same dominant angular momenta (m, −m). We will refer to the pair with |m| = 8 in Fig. 2 as Pair 1.

Figs. 2(c) and (d) show the similarity of the external field intensity I(θ; Ω) for CW and CCW waves at Ω̃ = 0, 10⁻³ and r = 3R. However, they are not exactly the same. This can be seen from the chiral symmetry at Ω = 0, i.e. I_cw(θ; Ω = 0) = I_ccw(−θ; Ω = 0), and the lack of it between I_cw and I_ccw at Ω̃ = 10⁻³. The latter is true for the intensity patterns inside the cavity as well, and in general ψ_ccw(r, θ; Ω) ≠ ψ_cw(r, −θ; Ω), even though ψ_ccw(r, θ; Ω) = ψ_cw(r, −θ; −Ω) as can be seen from Eq. (1). We note that I(θ; Ω) has a weak r-dependence even in the asymptotic region r ≫ R²/λ. This is because the argument of the Hankel functions in the expansion (8) outside the cavity is m-dependent, thus the factor exp(ik̃_m r)/√(k̃_m r) in the asymptotic form of the Hankel functions is not a common factor for all angular momenta, in contrast to the stationary case. We have considered a rotation speed much slower than c/r such that Eq. (1) is valid. For a faster rotation the higher-order terms O(Ω²) neglected in Eq. (1) can be significant in the far field, which may cause an additional r-dependence of the far-field emission pattern.

As shown in Fig. 3(a) and (c), the splitting of the resonant frequencies in an open ARC displays a threshold, similar to the Sagnac effect in closed microcavities [10]. However, the asymmetry χ(Ω) of the emission pattern does not have a threshold at low Ω; it displays an almost linear dependence on Ω until the wave function becomes dominated by either CW or CCW waves [Fig. 3(d)], similar to the finding in larger cavities with |m| ∼ 100 [24]. This was explained using a coupled-mode theory [24], which we employ in the next section to study the Ω-dependence of the complex resonant frequencies, especially the non-monotonic behaviors of their imaginary parts shown in Fig. 3(b) and (c).

B. Rotation-induced mode coupling

CCW rotation (Ω > 0) increases the effective index inside and outside the cavity for CW waves (m < 0) [9]:

n_in(Ω) = n − mΩ/(nck),  n_out(Ω) = 1 − mΩ/(ck).  (14)

Therefore, we expect the resonant frequencies of CW-dominated resonances to decrease as a function of Ω.
Meanwhile, we expect their cavity decay rates (given by −2Im[k]) to increase, since the index contrast at the cavity boundary is reduced. The situation is reversed for CCW waves. Thus for a circular microdisk cavity both Re[k_ccw − k_cw] and Im[k_ccw − k_cw] are positive when the cavity undergoes a CCW rotation, and they increase with Ω. These intuitive expectations are verified numerically in Ref. [9] and analytically in Figs. 1(a), (b).

For the quasi-degenerate resonances of Pair 1 of the limaçon cavity shown in Fig. 2, these expectations also hold at large Ω [see Figs. 3(a) and (b)]. It is surprising, however, that Im[k_ccw] and Im[k_cw] undergo an avoided crossing at an intermediate Ω. The same behavior is observed for the next pair with a dominant angular momentum |m| = 9 (not shown). More surprisingly, we found that for the resonances with a dominant |m| = 10 (Pair 2), the CW-dominated mode has a lower cavity decay rate at large Ω [Fig. 4(a)], and the intuitive prediction based on the index contrast fails. The same holds for the resonances with a dominant |m| = 11 (Pair 3), but now with a crossing of the cavity decay rates [Fig. 4(b)].

To understand these behaviors, we resort to the coupled-mode theory described in Ref. [24]. It is similar to that developed in Refs. [10,11,27], but it is adapted to open cavities, taking into account the non-vanishing phases of the coupling constant g between the quasi-degenerate resonances ψ_+, ψ_− at rest and of the difference of their complex resonant frequencies k_0^+, k_0^−, which we will show to be the key quantities that determine the different behaviors of the cavity decay rates mentioned above. As the cavity rotates, the resonances become CW- or CCW-dominated, which can be viewed as the result of the coupling of the corresponding standing-wave resonances ψ_+ and ψ_− at rest, i.e. ψ(Ω) ≈ a_+(Ω)ψ_+ + a_−(Ω)ψ_−. Eq. (1) can then be rewritten as a coupled-mode equation,

k a_+ = k_0^+ a_+ − (iΩ/n²c) G_{+−} a_−,  k a_− = k_0^− a_− − (iΩ/n²c) G_{−+} a_+,  (15)

where G_{+−} ≡ ∫_cavity ψ_+ ∂_θ ψ_− d r and G_{−+} is defined similarly. We note that G_{++} and G_{−−}, which would have appeared on the diagonal of the coupling matrix in Eq. (15), vanish because their integrands are odd functions with respect to the horizontal axis. Likewise, ∫_cavity ψ_+ ψ_− d r vanishes even though resonances of an open cavity are not orthogonal or biorthogonal in general. We have used the normalization ∫_cav (ψ_±)² d r = 1. The difference of the two resonances k_cw, k_ccw at rotation speed Ω is given by

Δk(Ω) ≡ k_ccw − k_cw = √[(Δk_0)² + g²Ω²/c²],  (16)

where Δk_0 = k_0^− − k_0^+ and g is the dimensionless coupling constant defined by g ≡ 2√(−G_{−+}G_{+−})/n². Equation (16) shows that the frequency splitting, in both its real and imaginary parts, is very small for Ω smaller than the critical value Ω_c ≡ c|Δk_0/g|, below which the leading Ω-dependence is quadratic; Δk(Ω) is reduced by a factor of Ω/2Ω_c when compared with a circular microdisk, where Δk_0 = 0 and the leading Ω-dependence is linear. Far beyond Ω_c, Δk(Ω) approaches its asymptote gΩ/c, and its real part gives the Sagnac frequency splitting, which is similar to the value for the corresponding resonances in a circular microdisk of the same radius [see the dashed lines in Figs. 3(c) and 4(a),(c)]. We also note that the sum of k_cw, k_ccw is given by the same expression (16) but with Δk_0 replaced by k_0^+ + k_0^−. Since |g|Ω/c ≪ |k_cw|, |k_ccw| for any realistic rotation speed, the sum (and the average) of k_cw, k_ccw only has a leading O(Ω²) dependence even beyond Ω_c, which is weaker than the rotation dependence of their splitting for Ω > Ω_c.
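The content of Eq. (16) is easy to explore numerically. The sketch below sweeps the dimensionless rotation speed through the critical value using Pair-1-like parameters (the fitted values of Δk_0 R and g quoted further below); it displays the quadratic onset of the splitting below Ω_c and the linear asymptote |g|Ω above it.

```python
import numpy as np

dk0 = (1.99 + 4.59j) * 1e-5     # Delta k_0 * R for Pair 1 (fitted value)
g = 4.99 + 0.27j                # dimensionless coupling constant
Om_c = abs(dk0 / g)             # critical speed Om_c = |dk0/g| (as R*Omega/c)

for x in (0.1, 0.5, 1.0, 2.0, 10.0):
    Om = x * Om_c
    dk = np.sqrt(dk0**2 + (g * Om)**2)      # Eq. (16), in units of 1/R
    print(f"Om/Om_c = {x:4.1f}: dk*R = {dk:.3e}, |g|*Om = {abs(g) * Om:.3e}")
```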
The weak rotation dependence of the sum explains why the real and imaginary parts of the complex frequencies shown in Fig. 3(a) and (b) look symmetric about their average.

The coupling constant g is approximately real and positive in a cavity slightly deformed from a circular disk. This can be seen from its definition and, more specifically, the relation G_{-+} ≈ −G_{+-}. The minute phase of g is due to the openness of the cavity, and it determines whether the CW- or CCW-dominated resonance has a lower cavity decay rate for Ω ≫ Ω_c. This can be seen by substituting (a_+, a_-) in Eq. (15) by (1, −i) for a CW-dominated resonance and (1, i) for a CCW-dominated resonance, leading to Eq. (17). Therefore, the CW-dominated resonances have a lower frequency and a higher cavity decay rate only if g is in the first quadrant of the complex plane. This is the case for Pair 1 shown in Fig. 3, and a good fit is given by g = 4.99 + 0.30i. g is in the fourth quadrant for both Pair 2 and 3 (fitted with g = 5.92 − 0.15i, 5.90 − 0.07i in Fig. 4), and as a result the CW-dominated resonances have a lower frequency and a lower cavity decay rate for Ω ≫ Ω_c. In view of these findings, the failure of the predictions based on the effective index (14) is understandable, since it does not consider the interference between ψ_+ and ψ_-, which changes as a function of the rotation speed.

We note that the coupled-mode theory (15) does not apply to circular cavities, because there the angular momentum is still a good quantum number as mentioned previously, which can only be achieved by a fixed combination of ψ_+ ∝ cos(mθ) and ψ_- ∝ sin(mθ). We also note that the value of g calculated by integrating the wave functions obtained from the scattering matrix method agrees well with the value extracted from fitting the complex resonance splitting in large cavities [24]. In the wavelength-scale microcavities studied here, additional wave effects (such as multimode coupling [8]) are present and the two values of g only agree qualitatively; the calculated value of g is 4.09 + 0.03i, 5.00 − 0.18i, 5.34 − 0.15i for Pair 1, 2, and 3, respectively. Nevertheless, it is important to note that the calculated value and the fitted value of g for the same pair of resonances are in the same quadrant of the complex plane and close to the real axis.

The minute phase of g, together with the phase of Δk_0, also determines whether the cavity decay rates of a pair of resonances cross each other. This can be understood by inspecting Eq. (16): crossing of the decay rates takes place when the sum in the square root, denoted by Σ(Ω), becomes a real positive number at some value of Ω. A necessary condition is that g and Δk_0 are in neighboring quadrants in the complex plane, which guarantees that Σ(Ω) can become real. Note that this criterion does not depend on whether the CW-dominated resonance originates from the parity-odd resonance ψ_- or the parity-even resonance ψ_+, or in other words, whether Δk(Ω = 0) is given by Δk_0 or −Δk_0. For Pair 1, Δk_0 R ≈ (1.99 + 4.59i) × 10⁻⁵ and g = 4.99 + 0.27i are both in the first quadrant; for Pair 2, Δk_0 R ≈ (1.07 − 0.58i) × 10⁻⁵ and g = 5.92 − 0.15i are both in the fourth quadrant. Therefore, for these two pairs their respective decay rates do not cross each other. To find the sufficient condition for the crossing, we note again that g is almost real in a cavity slightly deformed from a circular cavity.
The sufficient condition for the cavity decay rates to cross is completed by the requirement that the acute angle formed between Δk_0 and the imaginary axis, denoted by ∠(Δk_0, ±i), is larger than |Arg[g]|, where Arg denotes the principal value of the phase in (−π, π]. For Pair 3, Δk_0 R ≈ (4.25 + 5.23i) × 10⁻⁷ is in the first quadrant while g = 5.90 − 0.071i is in the fourth quadrant, satisfying the necessary condition. In addition, ∠(Δk_0, ±i) = 0.68 > |Arg[g]| = 0.012, which completes the sufficient condition and leads to the crossing of the cavity decay rates.

We note that crossing of the real part of Δk(Ω) is also possible in principle [Fig. 5(a)], which means that the Sagnac frequency splitting is no longer a monotonic function of the rotation speed. It occurs when Σ(Ω) becomes negative at some value of Ω. It still requires the same necessary condition that g and Δk_0 are in neighboring quadrants in the complex plane, which guarantees that Σ(Ω) can become real. In addition, it requires that ∠(Δk_0, ±i) < |Arg[g]|. An even more dramatic scenario can take place in principle, if Σ(Ω) becomes zero at some value of Ω. It requires that Δk_0 and g are ±π/2 out of phase with each other, and when this holds, the two resonances reach an exceptional point [28] at Ω = Ω_c, with identical complex resonant frequencies and wave functions. If the phase of g is very small, then an approximate bifurcation happens for Re[k] and an approximate inverse bifurcation happens for Im[k] [Fig. 5(c),(d)], due to a phase singularity (a jump by π) of Σ(Ω). These two scenarios, discussed here and shown in Fig. 5, require Δk_0 to be essentially imaginary, which may be realized by fine-tuning the cavity shape.
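The necessary and sufficient conditions above can be checked numerically from the fitted values quoted in the text. A minimal sketch, assuming c = R = 1 and the square-root structure Σ(Ω) = Δk_0² + (gΩ/c)² discussed above:

```python
import numpy as np

# Fitted values quoted in the text (Delta-k0 in units of 1/R; g dimensionless).
pairs = {
    "Pair 1": ((1.99 + 4.59j) * 1e-5, 4.99 + 0.27j),
    "Pair 2": ((1.07 - 0.58j) * 1e-5, 5.92 - 0.15j),
    "Pair 3": ((4.25 + 5.23j) * 1e-7, 5.90 - 0.071j),
}

for name, (dk0, g) in pairs.items():
    # Sigma(Omega) = dk0^2 + (g*Omega)^2 becomes real when
    # Im(dk0^2) + Omega^2 * Im(g^2) = 0, which requires the two imaginary
    # parts to have opposite signs -- the "neighboring quadrants" condition.
    omega2 = -np.imag(dk0**2) / np.imag(g**2)
    if omega2 <= 0:
        print(f"{name}: Sigma(Omega) never real -> no crossing")
        continue
    sigma_real = np.real(dk0**2) + omega2 * np.real(g**2)
    kind = "decay rates cross (Sigma > 0)" if sigma_real > 0 \
        else "frequencies cross (Sigma < 0)"
    print(f"{name}: Sigma real at Omega = {np.sqrt(omega2):.2e} -> {kind}")
    # Angle test quoted in the text, for comparison:
    print(f"  angle(dk0, +/-i) = {abs(abs(np.angle(dk0)) - np.pi / 2):.3f}"
          f" vs |Arg[g]| = {abs(np.angle(g)):.3f}")
```

Only Pair 3 passes both tests (angle 0.682 vs 0.012), reproducing the crossing of the decay rates in Fig. 4(b); if the real part of Σ were negative at that point, one would instead obtain the frequency crossing of Fig. 5(a).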
C. Phase locking between CW and CCW waves

Finally, we report a passive phase locking between CW and CCW waves in a resonance as the rotation speed increases. As Fig. 6(a) and (c) shows, the relative phase between α_{|m|} (CCW) and α_{−|m|} (CW) at rest is either 0 or ±π, which gives the parity-even and parity-odd resonances. As the cavity rotates, this relative phase gradually approaches a locked value Δϕ for Ω > Ω_c. Δϕ is in [0, π/2] for the CW-dominated resonance in Pair 1, and it is in [−π/2, 0] for the CW-dominated resonance in Pair 2. This difference seems to be related to whether the CW- or CCW-dominated resonances have higher cavity decay rates, or equivalently, whether the coupling constant g between ψ_+ and ψ_- is in the first or fourth quadrant.

To confirm this relationship, we again resort to the coupled-mode equation (15), which gives the mixing ratio ξ(Ω) ≡ a_-/a_+ for a pair of quasi-degenerate resonances [24], with radicand D² + (2g²/c²)ΞΩ². It is straightforward to show that the second term in the radicand dominates when Ω > Ω_c, and in this limit we can further approximate ξ(Ω)² [Eq. (20)] by also taking into account that D/√(2Ξ) ≈ Δk_0. For a CW-dominated resonance with a dominant angular momentum −|m| and a locked phase Δϕ, its wave function can be approximated by ψ(Ω) ≈ ζ exp(iΔϕ) exp(i|m|θ) + exp(−i|m|θ), with a real ζ ≡ |α_{|m|}/α_{−|m|}| ≪ 1, or in other words,

ξ(Ω)² → [2ζ exp(iΔϕ) − 1]/[2ζ exp(iΔϕ) + 1]   (21)

as Ω becomes much larger than Ω_c. By comparing with Eq. (20), we immediately find ζ ≈ Ω_c/Ω, and the locked phase is given by

Δϕ ≈ Arg[±Δk_0/g].   (22)

It is clear from Eq. (22) that Δϕ is not determined by the phase of g alone but also by that of Δk_0. The phase of Δk_0 is the more influential one in cavities slightly deformed from a circular disk, where g is almost real and positive as mentioned previously. The "±" signs in Eq. (22) come from the two possibilities that either ψ_+ or ψ_- evolves into a CW-dominated resonance. This uncertainty can change Δϕ by π, but it does not mix the two different scenarios found in Fig. 6(a) and (c), i.e., whether Δϕ ∈ [0, π/2] or [−π/2, 0]. We find that the positive sign in Eq. (22) corresponds to the locked phase for the CW-dominated resonance in Pairs 1 and 2, which gives Δϕ = 1.11, −0.471, respectively. These values agree well with the numerical results shown in Fig. 6(a) and (c). The locked phase in the CCW-dominated resonance can be found similarly, which gives Δϕ ≈ Arg[∓g/Δk_0], and the sum of the two locked phases in these resonances is approximately ±π. The latter feature can be easily identified in Fig. 6(a) and (c).

IV. CONCLUSION

In summary, we have shown both analytically with the coupled-mode theory and numerically with a scattering matrix method that the openness of wavelength-scale microcavities has a strong effect on rotation-induced mode coupling. Openness results in non-vanishing phases of the coupling constant g and of the complex frequency splitting Δk_0 of the quasi-degenerate resonances at rest. These two quantities together dictate the rotation dependence of the decay rates and the resonant frequencies. The decay rates of the quasi-degenerate resonances may cross or anti-cross with increasing rotation speed, and, unlike in circular microcavities, either the CW- or the CCW-dominated resonance of an asymmetric resonant cavity can have the lower cavity decay rate, depending on the phase of g. The well-known Sagnac effect, i.e., the linear increase of the resonant frequency splitting with the rotation speed, may be altered by mode coupling and exhibit a non-monotonic behavior. Finally, the relative phase of the CW and CCW wave components in a resonance is locked at high rotation speed as a result of mode coupling. These unusual behaviors of mode coupling result from the interplay between openness and rotation in wavelength-scale microcavities.
Analysis of risk factors for cervical lymph node metastasis of papillary thyroid microcarcinoma: a study of 268 patients

Background: To investigate the risk factors of cervical lymph node (LN) metastasis in papillary thyroid microcarcinoma (PTMC) patients.

Methods: We retrospectively analyzed the clinicopathologic data of all patients who received standard lobectomy for PTMC at our institution between October 2017 and January 2019. Central LNs were dissected in all patients. Lateral LNs were dissected if metastasis to the lateral LNs was suggested by pre-operative fine-needle aspiration biopsy. The relationship between variables available prior to surgery and cervical LN metastasis was examined using multivariate regression.

Results: Post-operative pathologic examination revealed cervical LN metastasis in 79 (29.5%) patients. Seventy subjects had metastasis only to central LNs, and 4 (1.5%) patients had metastasis only to lateral LNs. Five patients had metastasis to both central and lateral LNs. In comparison to patients without cervical LN metastasis, those with LN metastasis were significantly younger (40.63 ± 13.07 vs. 44.52 ± 12.23 years; P = 0.021) and had significantly larger tumor diameter on pathology (6.7 ± 2.2 vs. 5.9 ± 2.4 mm; P = 0.010). Multivariate regression analysis identified the following independent risk factors for cervical LN metastasis: male sex (OR 2.362, 95% CI 1.261–4.425; P = 0.007), age (OR 0.977, 95% CI 0.956–0.999; P = 0.042) and ultrasound tumor diameter > 5 mm (OR 3.172, 95% CI 1.389–7.240; P = 0.006).

Conclusion: Cervical LN metastasis occurs in a non-negligible proportion of PTMC patients. Independent risk factors included male sex, younger age and larger tumor diameter on ultrasound.

Background

According to the World Health Organization (WHO) report, the number of new cases of thyroid cancer in China accounts for 15.6% of cases worldwide, and the number of deaths for 13.8% [1]. Papillary thyroid carcinoma (PTC) is the most common pathological type of thyroid malignancy and accounts for about 85 to 90% of all thyroid malignancies. PTC with a maximum tumor diameter of 10 mm or less is defined as papillary thyroid microcarcinoma (PTMC) [2]. Several studies have indicated that PTMC has a low rate of recurrence and metastasis [3] as well as an extremely high 10-year survival rate [4]. In recent years, improvements in diagnostic methods such as imaging and ultrasound-guided fine-needle aspiration biopsy have led to a significant increase in the diagnosis rate of PTMC [5].

The most common site of PTMC metastasis is the cervical lymph nodes, especially the central lymph nodes (level 6) [6,7]. Surgery remains the main therapeutic modality for PTMC: there is consensus that patients with cervical LN metastasis should be managed by LN dissection; however, the necessity of LN dissection in patients with clinically negative lymph nodes (cN0) has been debated [8]. Active surveillance data from Japan suggest that even patients at risk for disease progression (e.g., young age or pregnancy) can be actively monitored instead of undergoing immediate surgery [2]. On the other hand, cervical LN metastasis increases the risk of loco-regional recurrence of PTC [9]. As a result, it is important to identify predictors of cervical LN metastasis. In the current retrospective analysis, we examined the potential correlation between pre-operative variables and cervical LN metastasis in 268 PTMC patients.
Methods

The current study (including access to raw data) was approved by the Ethics Committee of Shanghai Ruijin Rehabilitation Hospital (Committee Chairman: Ms. Yuezhen Dai) and performed in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines. Patient consent was not required because of the retrospective nature of the study. Anonymized patient data (in the Chinese language) are available upon request.

Patients

This retrospective study analyzed the clinical data of pathologically proven PTMC patients who were initially treated at Shanghai Ruijin Rehabilitation Hospital, Shanghai, China, between October 2017 and January 2019. Patients were included if 1) they received initial diagnosis and treatment for thyroid nodules; 2) they were diagnosed with PTMC by preoperative fine-needle biopsy and underwent conventional lobectomy for thyroid carcinoma; 3) they had pathologically proven solitary PTMC; and 4) the maximum tumor diameter on pathology was ≤10 mm.

Patient evaluation

The following data were retrieved from the hospital electronic record systems: name, sex and age, and clinicopathologic and surgical data including 1) time to surgery from initial presentation; 2) surgical procedure received; 3) pathological type, maximum diameter and location (classified as inferior, middle, superior, or isthmus within the thyroid gland) of the tumor; 4) histological results of involved central and/or lateral lymph nodes; 5) concurrent Hashimoto's thyroiditis; and 6) routine laboratory results such as blood chemistries, thyroid function, parathyroid hormone and blood calcium before and after the operation. All patients underwent preoperative physical examination, high-quality thyroid ultrasonography (US), and US-guided fine-needle aspiration biopsy of suspected thyroid nodules or lymph nodes. Solitary PTMC was considered if the tumor showed no pathological evidence of multifocality within the thyroid gland. The maximum diameter and location of the primary tumor within the thyroid were determined by pathological examination.

Surgery

All patients underwent standard lobectomy. Central LN dissection was conducted in all subjects. In patients suspected of metastasis to lateral LNs based on ultrasound examination prior to surgery, lateral LNs were also dissected. The excised tissue was then sent for pathological examination. Patients received routine rehydration as well as symptomatic treatment postoperatively and were closely observed for incision hemorrhage, hoarseness, or numbness.

Statistical analysis

The data were analyzed with SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). Student's t-test and the χ² test were used to examine potential differences between subjects with vs. without cervical LN metastasis. Multivariate logistic regression was used to evaluate the correlation between pre-operative variables and cervical LN metastasis. P < 0.05 was considered statistically significant.
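As a concrete illustration of the multivariate step described above, a minimal sketch in Python/statsmodels (the study used SPSS 20.0; the variable names and the records below are illustrative assumptions, not the study data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative records only -- not the study data.
df = pd.DataFrame({
    "metastasis":  [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # cervical LN metastasis
    "male":        [1, 0, 1, 0, 0, 1, 0, 1, 1, 1],
    "age":         [35, 52, 61, 29, 47, 38, 55, 44, 50, 33],
    "us_diam_gt5": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],  # US diameter > 5 mm
})

X = sm.add_constant(df[["male", "age", "us_diam_gt5"]])
fit = sm.Logit(df["metastasis"], X).fit(disp=0)

# Exponentiated coefficients give the odds ratios with 95% CIs.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```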
Patient demographic and baseline characteristics

The study flowchart is shown in Fig. 1. A total of 429 patients with a diagnosis of primary PTC and complete data were screened. Eighty-three patients with non-solitary lesions were excluded, and 78 patients were excluded because the maximum tumor diameter on pathology was > 10 mm. The final analysis included 268 patients (208 women; mean age 43.3 ± 12.6 years, range 15–72). Demographic and baseline variables are shown in Table 1.

Lymph node metastasis

Seventy-nine (29.5%) patients had cervical LN metastasis: 70 subjects had metastasis only to central LNs, and 4 (1.5%) patients had metastasis only to lateral LNs. Five patients had metastasis to both central and lateral LNs. Compared to patients without LN metastasis, those with cervical LN metastasis were significantly younger (40.63 ± 13.07 vs. 44.52 ± 12.23 years; P = 0.021) and had significantly larger tumor diameter on pathology (6.7 ± 2.2 vs. 5.9 ± 2.4 mm; P = 0.010) (Table 2). The two groups were comparable in other demographic and baseline variables.

Discussion

In the current study, approximately 30% of PTMC patients had cervical LN metastasis. Independent risk factors included male sex, younger age and tumor size on ultrasound > 5 mm. Given that cervical LN metastasis increases the risk of loco-regional recurrence of PTC [9], our findings suggest that this non-negligible proportion of at-risk PTMC patients should be actively monitored. Based on these findings, we believe that LN dissection is warranted in young men with PTMC with a tumor diameter > 5 mm. Skip metastasis was identified in only 4 of the 268 cases, too few for a meaningful analysis of its risk factors.

In recent years, there has been a noticeable increase in the incidence of thyroid malignancies in China and globally, with PTMC accounting for a large proportion of all thyroid cancer cases [10]. The rate of cervical LN metastasis in PTMC patients has been reported to range from 12 to 64% [7]; the rate in our study (29.5%) falls within this range. Cervical LN metastasis is associated with tumor stage, invasiveness and recurrence rate, and serves as one of the essential prognostic predictors of PTMC [11,12]. Although thyroid malignancies are now diagnosed at an early stage owing to recent advances in imaging technologies, image reading can be influenced by multiple factors such as tumor characteristics and physicians' skills. According to current consensus, LN dissection should be performed in PTMC patients with pathologically confirmed cervical LN metastasis, but whether cervical LNs should be routinely dissected in cN0 PTMC patients remains controversial. A previous study suggested that LN dissection had no significant effect on the recurrence and metastatic rate of PTMC [13].

Based on previous literature, independent risk factors for cervical LN metastasis in PTMC patients include age over 45 years [14], male sex [14,15], larger tumor diameter [16] and capsule invasion [6,7,16]. Other clinicopathologic variables associated with cervical LN metastasis may include lesion location [17], multifocality (two or more lesions) [18-21], and concurrent Hashimoto's thyroiditis [15,22]. Our study focused on solitary PTMC and found that tumor diameter on ultrasound > 5 mm is an independent risk factor for cervical LN metastasis in PTMC (OR = 3.172, P = 0.006). The current study also revealed an association of cervical LN metastasis with younger age (OR = 0.977, P = 0.042) and male sex (OR = 2.362, P = 0.007). We failed to show an association between cervical LN metastasis and tumor location, capsule invasion or Hashimoto's thyroiditis, possibly due to insufficient sample size.
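As a worked reading of the reported age effect (assuming the usual per-unit interpretation of a logistic regression odds ratio):

$$\mathrm{OR}_{10\ \mathrm{years}} = 0.977^{10} \approx 0.79,$$

that is, each additional decade of age is associated with roughly 21% lower odds of cervical LN metastasis, holding sex and ultrasound tumor diameter fixed.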
The availability of gene sequencing data may allow molecular stratification of cancer patients. Lai et al. showed in a meta-analysis that the BRAF V600E mutation was associated with extra-thyroid infiltration, lymph node and distant metastasis, and advanced TNM staging. Determination of BRAF mutation status may therefore allow preoperative evaluation of capsule invasion and cervical LN metastasis. However, more studies are required to validate molecular markers such as BRAF.

With the development of medical technology, PTMC patients now have more options for treatment. In addition to cosmetically favourable surgical approaches such as endoscopic or robot-assisted thyroidectomy, radiofrequency ablation therapy has also garnered the interest of clinicians. Based on the findings from the current study, we believe that ablation may be suitable for selected patients (older age, women, tumor diameter ≤5 mm) with a low risk of cervical LN metastasis.

Conclusion

Cervical LN metastasis is not a rare occurrence in PTMC patients. Independent risk factors for metastasis include male sex, younger age and tumor diameter on ultrasound > 5 mm.
Development of Escherichia coli Asparaginase II for Immunosensing: A Trade-Off between Receptor Density and Sensing Efficiency

The clinical success of Escherichia coli L-asparaginase II (EcAII) as a front-line chemotherapeutic agent for acute lymphoblastic leukemia (ALL) is often compromised because of its silent inactivation by neutralizing antibodies. Timely detection of a silent immune response can rely on immobilizing EcAII to capture and detect anti-EcAII antibodies. Having recently reported the use of a portable surface plasmon resonance (SPR) sensing device to detect anti-EcAII antibodies in undiluted serum from children undergoing therapy for ALL (Aubé et al., ACS Sensors 2016, 1 (11), 1358–1365), here we investigate the impact of the quaternary structure and the mode of immobilization of EcAII onto low-fouling SPR sensor chips on the sensitivity and reproducibility of immunosensing. We show that the native tetrameric structure of EcAII, while being essential for activity, is not required for antibody recognition because monomeric EcAII is equally antigenic. By modulating the mode of immobilization, we observed that the low-density surface coverage obtained upon covalent immobilization allowed each tetrameric EcAII to bind up to two antibody molecules, whereas the high-density surface coverage arising from metal chelation by an N- or C-terminal histidine-tag reduced the sensing efficiency to less than one antibody molecule per tetramer. Nonetheless, immobilization of EcAII by metal chelation procured up to 10-fold greater surface coverage, thus resulting in increased SPR sensitivity and allowing reliable detection of lower analyte concentrations. Importantly, only metal chelation achieved highly reproducible immobilization of EcAII, providing the sensing reproducibility that is required for plasmonic sensing in clinical samples. This report sheds light on the impact of multiple factors that need to be considered to optimize the practical applications of plasmonic sensors.

■ INTRODUCTION

The E. coli L-asparaginase II (EcAII) isozyme hydrolyses L-Asn into L-Asp with high catalytic efficiency. It is a critical component of chemotherapy for childhood acute lymphoblastic leukemia (ALL) and has been on the World Health Organization's list of essential medicines since 1995. 1−3 However, its use may be compromised by allergic reactions, overt or silent. 4−6 The main concern relative to the silent hypersensitivity that occurs in 5−46% of patients is the development of neutralizing antibodies that result in silent inactivation of EcAII, thus reducing treatment efficacy. 6−11 As a counterpart to its therapeutic use, EcAII is also used to capture and thus detect anti-EcAII antibodies in patients. 12,13

The crystal structure of native EcAII has been resolved in different space groups, free or complexed, and for several mutants, revealing a highly packed homotetrameric structure exhibiting four identical active sites formed by complementation of the so-called intimate homodimers. 1,14 Several antigenic determinants of EcAII have been identified, including a dominant B-cell conformational epitope. However, little is known about the antigenicity of EcAII in an immobilized form, which is an essential aspect for immunosensing purposes. 15 We recently reported the application of a portable immunosensing device based on surface plasmon resonance (SPR) to detect anti-EcAII antibodies in undiluted serum from children undergoing therapy for ALL (Aubé et al., 2016).
We strive to work directly with complex biological media to reduce the impact that sample pretreatment may have on the analyte and to reduce the time of analysis. Several challenges were encountered during that study, the principal of which was poor and/or irreproducible surface immobilization of the native EcAII antigen. Here, we report a detailed examination of the mode of presentation of EcAII on SPR sensor chips to identify the immobilization chemistry eliciting optimal immunosensing properties. 12,16−20 Indeed, the mode of surface immobilization may preclude efficient antibody recognition if the binding site (epitope) or the surrounding regions are partly masked in the ensemble of EcAII molecules. 17,21 Furthermore, alterations in the quaternary structure upon immobilization could affect antigenicity where conformation or subunit assembly is essential, reducing the SPR response. To this effect, we compared heterogeneous surface immobilization of native EcAII by covalent cross-linking via its surface-exposed lysine residues and homogeneous surface immobilization by coordination of N- or C-terminal histidine (His)-tags. The quaternary structure, the activity, and the antigenicity of native EcAII and EcAII bearing N- or C-terminal 6-His-tags were compared before and after surface immobilization.

We then validated those results in the context of SPR immunosensing in serum. This analytical method has many advantages over the enzyme-linked immunosorbent assay (ELISA) that is commonly used for the clinical monitoring of anti-EcAII antibodies. Indeed, SPR immunosensing can offer real-time, label-free, and on-site detection and quantification of antibodies. Using low-fouling self-assembled monolayer (SAM) surface technology to reduce nonspecific interactions, we assessed the sensitivity of antibody sensing by SPR in undiluted serum. 22 Changes to the quaternary structure had little influence on the receptor antigenicity. Although the His-tagged EcAII displayed lower sensing efficiency than the native EcAII, it provided increased surface coverage and reproducibility of immobilization, ultimately procuring significantly increased immunodetection sensitivity. These results shed light on the challenges encountered in our recently reported detection of serum anti-asparaginase antibodies in the sera of children undergoing chemotherapy 23 and the challenges expected to be encountered by others developing sensors to monitor the immunogenic response of patients undergoing therapy with biologic-type drugs.

■ RESULTS AND DISCUSSION

EcAII functions as a tetramer, where the intimate homodimers further dimerize to form the functional homotetramer. 1,24−28 Among the many characterized linear T- and B-cell epitopes, a dominant conformational B-cell epitope has been identified on EcAII. This conformational epitope, present four times on EcAII, is formed by four different immunogenic segments clustered around the entrance of each of the four identical active sites and may be the principal target for neutralizing antibodies and silent inactivation 15,29−31 (Figure 1A,B). Toward the goal of capturing anti-EcAII antibodies, the EcAII protein was surface-immobilized to act as a receptor. Its numerous surface-exposed lysine residues (approximately 76) make it possible to undertake covalent, randomly oriented immobilization onto gold-coated SPR sensor chips.
In parallel, we developed several His-tagged EcAII variants to allow oriented immobilization by coordination with surface-immobilized nitrilotriacetic acid (NTA)-cobalt (Co)-functionalized antifouling peptides (Figure 1C).

In addition to its functional tetrameric form, EcAII forms alternative oligomeric states in solution as a function of protein preparation and storage conditions. 32−35 Commercial preparations may contain up to 20% monomer and higher multimerization states (octamer and dodecamer, among others) that are less active than the tetramer. 36 Higher-state oligomers may present different antigenic determinants. Because modifying the protein sequence may alter the oligomerization state and thus alter immunogenicity, we investigated the quaternary structure of the His-tagged forms of EcAII.

Recombinant N- and C-terminally His-tagged forms of EcAII (N21-, N26- and C8-EcAII) were named after the length of the fused tags that added 21 and 26 residues at the N-terminus or 8 residues at the C-terminus (Figures 1C and S1). Replacement of the N-terminal signal peptide by the N-terminal His-tags led to lower yields of soluble protein (9−15 and 1−16 mg L⁻¹ of culture for N21- and N26-EcAII, respectively) relative to the periplasmic overexpression of C8-EcAII (60−80 mg L⁻¹ of culture), despite the overall expression levels being similar as observed using sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) (results not shown).

[Figure 1. (A) The N- and C-termini used for metal coordination and the surface-exposed lysine residues for covalent cross-linking are colored according to the legend. (B) Intimate dimer shown as a ribbon structure (left) and the conformational epitope (orange surface) clustered around the entrance of the active site constituted of catalytic residues T12, Y25, T89, D90, and K162 (in sticks) from one subunit complemented by E283 from a second subunit (right). 27 The complexed L-Asp is in cyan. (C) Native and C8-EcAII precursor proteins harbor a signal sequence (22 residues) for periplasmic expression (black). Scissors indicate the cleavage site. Variants N21-EcAII and N26-EcAII have N-terminal His-tags (20 residues, in green; 25 residues, in blue), whereas C8-EcAII has a C-terminal His-tag (8 residues, in red).]

This is consistent with the expected toxicity of cytosolic EcAII, in accordance with the 3 orders of magnitude difference in affinity for L-Asn between the constitutive cytosolic EcAI (K_M = 3.5 mM) and the periplasmic EcAII (K_M = 10 μM) isoforms. 37 Each His-tagged EcAII was purified to ≥90% homogeneity, mostly in the tetrameric form (Figure 2). Accurate mass determination using liquid chromatography−mass spectrometry (LC−MS) confirmed the expected molecular weight (MW) for the commercial Kidrolase (native EcAII) and for the processed C8-EcAII (native signal peptide cleaved) and revealed processing of the N-terminal methionine for both N21- and N26-EcAII, which contained 20 and 25 additional N-terminal residues, respectively (Table S2).

The activity of the EcAII variants, both in solution and immobilized on the sensor chips, was monitored using the coupled reaction with glutamate dehydrogenase (GDH) and by measuring the concomitant oxidation of reduced nicotinamide adenine dinucleotide phosphate (NADPH). 38,39 The activity of EcAII variants in solution was confirmed by measuring the ammonia produced using direct Nesslerization. 40
The apparent specific activity of all purified His-tagged EcAII variants was 3- to 4-fold lower than that of native EcAII (Kidrolase) freshly reconstituted from lyophilized powder (Figure 2B and Table 1). Flash-freezing of reconstituted Kidrolase and storage at −80 °C resulted in 30% loss of activity, whereas both N-terminally tagged EcAII retained 94−100% of their activity upon storage (Figure S2). By contrast, the C-terminally tagged C8-EcAII was essentially inactivated upon flash-freezing/thawing. For this reason, it was stored at 4 °C, where it showed high stability (>90% activity after 1 year). These results demonstrate that all EcAII variants adopt a fold compatible with catalytic activity, suggesting that they assemble as tetramers. The maintenance of the native tetrameric quaternary structure might be an advantage when using EcAII as a receptor for antibody capture, according to the report of a dominant conformational B-cell epitope 15 (Figure 1).

The commercial EcAII preparation (Kidrolase) was mainly tetrameric in solution according to analytical size exclusion chromatography (SEC; expected MW 138.4 kDa; observed MW ≈ 120 kDa) (Figure 2A). It also included traces of octamer (expected MW 276.8 kDa; observed MW ≈ 315 kDa) and dodecamer (expected MW 415 kDa; observed MW ≈ 456 kDa) but no detectable monomer. The proportion of tetramer in the native EcAII decreased from 99 to 90%, with a concomitant increase in the octameric form, upon increasing the protein concentration (0.1−5 mg mL⁻¹), consistent with previous reports 33,34,41 (Figure 2A, Tables S3 and S4). Electrophoresis under native conditions [clear native (CN)- and blue native (BN)-PAGE] further confirmed that freshly reconstituted native EcAII occurs principally as a tetramer, with some octamer and dodecamer and traces of higher-state oligomers.

As for native EcAII, freshly isolated N21-, N26-, and C8-EcAII were all predominantly tetrameric in solution (SEC, data not shown). Although N26-EcAII remained predominantly tetrameric after storage at −80 °C, consistent with its activity, N21-EcAII showed a marked increase in octamer and dodecamer and tended to aggregate over long-term storage. Storage in 15% glycerol (GOH) stabilized the tetrameric form of N21-EcAII (Figure 2C,E). The tendency of N21-EcAII to form higher-state oligomers was also observed on CN-PAGE, whereas N26-EcAII remained mainly tetrameric. By contrast, the C-terminally tagged C8-EcAII mostly dissociated into monomers after flash-freezing/thawing and did not reassociate in solution over time, consistent with its loss of activity (Figure 2B,C,E). However, C8-EcAII stored at 4 °C was mainly tetrameric according to SEC, though it appeared roughly 50% dissociated into monomer on BN-PAGE. Interestingly, monomeric C8-EcAII appeared to undergo reassociation into tetramer and octamer during the course of CN-PAGE and appeared similar to freshly isolated C8-EcAII. Reassociation was not observed under any other condition, including BN-PAGE (Figure 2E). Thus, a variety of conditions maintain the monomeric form of the dissociated C8-EcAII, including the lengthy SEC at 4 °C, activity assays at 37 °C over 10 min, and BN-PAGE in the presence of Coomassie blue G-250, whereas CN-PAGE promoted full reassociation of monomeric C8-EcAII into native-like oligomeric forms.
Although we did not further investigate the specific factors that promote this reassociation, we have identified conditions where each N- and C-terminally tagged EcAII is tetrameric and active upon storage, and conditions where C8-EcAII is maintained in an inactive, monomeric form.

The far-ultraviolet circular dichroism (UV CD) spectra of all EcAII variants were consistent with well-folded α/β proteins (Figure S3). The minima for α-helices differed somewhat for N-terminally tagged EcAII ([Θ]222 nm < [Θ]208 nm) relative to native and C8-EcAII ([Θ]222 nm > [Θ]208 nm). This likely reflects the contribution of their N-terminal extensions but may also result from their cytosolic expression. Interestingly, the CD spectrum of monomeric C8-EcAII is nearly identical to those of native tetrameric EcAII and tetrameric C8-EcAII, indicating that the secondary structure is maintained upon dissociation into a monomer. Intrinsic fluorescence revealed a similar packing of aromatic residues for all tetrameric EcAII variants (λmax,em = 320 nm). 42 By contrast, monomeric C8-EcAII displayed increased solvent exposure of aromatic residues (λmax,em = 330 nm), consistent with altered packing that may result either from subunit dissociation or from changes in the tertiary structure. The latter is supported by the lack of reassociation into a tetramer (Figures 2B,C and S4).

Thermal denaturation revealed a cooperative, apparently two-state denaturation profile for native and N-terminally tagged EcAII, with T_m consistent with CD-derived T_m values reported for native EcAII 43,44 (Table 1 and Figure S4). A more complex unfolding profile was observed for tetrameric C8-EcAII (more than one transition), precluding the determination of T_m. No transition was seen for monomeric C8-EcAII (Figure S4). Upon refolding, native and N-terminally tagged EcAII recovered 75−80% of the initial fluorescence intensity. Interestingly, this was accompanied by a 7 nm red shift (λmax = 327 nm), similar to monomeric C8-EcAII before thermal denaturation (λmax = 330 nm) (Figure S5), suggesting refolding into monomers. Overall, the N-terminally His-tagged EcAII variants show association and folding properties more similar to native EcAII than the C-terminally tagged EcAII.

To verify whether the quaternary structure, the His-tag at the N- or C-terminus, or the immobilization mode of EcAII (random or oriented) modulate the sensing properties for detecting anti-EcAII antibodies, antigenicity was analyzed using ELISA. 12 Titration of polyclonal rabbit IgG was performed with the EcAII variants randomly adsorbed on a surface that binds both the hydrophilic and hydrophobic regions of proteins (MaxiSorp) or with His-tagged EcAII variants immobilized in an oriented manner via Ni-NTA coordination (Figure S6). Development was performed under conditions of high, intermediate, and low sensitivity by modulating the concentration of H₂O₂. In each case, the dynamic range of the binding assay spanned 2 orders of magnitude and was similar for all forms of EcAII (Figure S7 and Table S5). The apparent dissociation constant (K_D) ranged from 10−26 ng mL⁻¹ (∼100−160 pM) under high-sensitivity conditions, 120−260 ng mL⁻¹ (∼0.7−1.7 nM) at intermediate sensitivity, and 0.75−1.5 μg mL⁻¹ (∼5−10 nM) at low sensitivity (Tables S5 and S6).
Monomeric C8-EcAII showed a slightly lower antigenicity (higher K_D) than the tetrameric EcAII variants; among the tetrameric variants, both N-terminally tagged EcAII showed slightly higher antigenicity (lower K_D) than the native form (P = 0.0001). Antibody titration with His-tagged EcAII oriented on Ni-NTA-coated plates gave a 1.5- to 2.5-fold higher K_D than randomly oriented immobilization, indicating less efficient antibody recognition (Figure 3 and Table 1). Overall, the quaternary structure of EcAII has little influence on its antigenicity under the conditions tested (Figures 3 and S8), yet the mode of immobilization has a clearly discernable effect on antibody−antigen affinity. Furthermore, we demonstrated that the presence of a His-tag is compatible with maintaining the antigenicity of EcAII.

The impact of the quaternary structure and mode of immobilization of EcAII on the extent of surface coverage was monitored on gold chips, and the immunosensing properties of the immobilized EcAII variants were assessed using SPR. Native EcAII (Kidrolase) was immobilized onto the gold sensing surface in a randomly oriented fashion by covalent cross-linking of lysine residues using 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide/N-hydroxysuccinimide (EDC/NHS) chemistry, whereas His-tagged EcAII variants were immobilized in an oriented fashion by metal affinity coordination (Co-NTA) (Figures 4 and S9). At equal protein concentration, the protein coverage (Γ) determined for oriented His-tagged EcAII was 5-fold to more than 10-fold greater than for randomly oriented native EcAII (Kidrolase) (Table 2 and Figure 5). As for covalently bound Kidrolase, the metal-chelated His-tagged EcAII variants remained surface-bound upon extensive washing of the sensor chip, with the exception of the monomeric C8-EcAII, which showed some dissociation (Figure 4A). Importantly, the surface coverage for His-tagged EcAII was highly reproducible, with relative standard deviation (RSD) ranging between 3 and 18%, contrary to cross-linked Kidrolase, which afforded poor reproducibility (RSD 50−70%). We confirmed that the glycine present in the lyophilized Kidrolase preparation was not the cause of the poor surface coverage observed, because no significant difference was observed upon dialysis of the resuspended Kidrolase (Γ = 60.25 ± 43.3 ng cm⁻² without dialysis or 45.8 ± 44.7 ng cm⁻² with dialysis). This suggests that the cross-linking method itself leads to lower yields and lower reproducibility of immobilization than the metal coordination of terminal His-tags, which is an important consideration for the development of an immunosensor. The lower surface coverage of C8-EcAII (whether monomeric or tetrameric) relative to N26-EcAII or N21-EcAII correlates with our observation that C8-EcAII readily dissociates from the Ni-NTA purification column (eluting at 70 mM imidazole) relative to the N-terminally tagged variants (eluting at 250 mM imidazole). The short C-terminal tag (8 residues) may be less accessible for chelation than the 20- and 25-residue N-terminal tags.

The activity of the EcAII variants immobilized on the gold sensing surface was monitored to assess their integrity (Figure 4B). The same pattern of specific activity was observed as for free EcAII, where native EcAII was 2- to 4-fold more active than the His-tagged forms. In addition, the immobilized EcAII was 2.6- to 3.6-fold more active than the free enzyme (Table 2), consistent with other reports of enzymes that display higher specific activity upon immobilization. 45,46
Our results suggest that the tetrameric proteins remained essentially intact when immobilized on the sensing surface. Surface-immobilized monomeric C8-EcAII was inactive, indicating that it did not reassemble into an active tetramer upon immobilization on the sensor chip.

The greater surface density observed for both N-terminally His-tagged EcAII relative to Kidrolase and C8-EcAII suggests that the N-terminally tagged EcAII may constitute more effective receptors for the detection of the anti-EcAII antibody (Ab). This was verified by assessing the SPR immunosensing signal using the P4SPR instrument, determined as the wavelength shift upon binding of polyclonal anti-EcAII antibodies (Ab) to immobilized EcAII antigenic receptors (Ag) directly in undiluted serum. The immunodetection was performed at two antibody concentrations within an analytically relevant range (15 and 150 μg mL⁻¹). 23 At 15 μg mL⁻¹ antibody concentration (∼100 nM), N26-EcAII and C8-EcAII provided approximately 2-fold greater SPR detection signal than did native EcAII (Kidrolase) and monomeric C8-EcAII. This difference was accentuated at 150 μg mL⁻¹ (∼1 μM) antibody concentration, where oriented tetrameric N- and C-terminally tagged EcAII provided approximately 5-fold and 3-fold greater detection signal than non-oriented cross-linked Kidrolase and oriented monomeric C8-EcAII, respectively (Figure 5 and Table 2).

[Figure 4. Immobilization and on-chip activity of EcAII variants. Surface immobilization of EcAII variants was followed upon the injection of 40 μg of protein (0.1 mg mL⁻¹), and the activity of EcAII was monitored on 9 × 9 mm gold-coated glass slides for Kidrolase (black) or His-tagged EcAII: N21-EcAII (green), N26-EcAII (blue), and C8-EcAII (red). The monomeric variant is identified. (A) SPR sensograms for randomly oriented, cross-linked Kidrolase and oriented Co-NTA-coordinated His-tagged EcAII. The arrow indicates a wash step. (B) On-chip activity measurements for surface-immobilized EcAII variants monitored using the GDH-coupled assay. Functionalized chips lacking EcAII served as a blank (gold curve, partly masked). Each curve represents the average of three experiments except for monomeric C8-EcAII, which is a duplicate.]
At 150 μg mL −1 antibody concentration, an average of 2.3 antibody molecules were detected per immobilized Kidrolase molecule (Ab/Ag binding ratio ≈ 2:1; Table 2). This ratio is 2 to 3-fold more efficient than for the tetrameric His-tagged EcAII variants, which bound only 0.9 antibody per molecule (Ab/Ag < 1:1). The lower Ab/Ag binding ratio of tetrameric His-tagged variants may be related to their higher density on the gold sensing surface than Kidrolase. Native EcAII has an average surface density of ∼1.15 × 10 12 molecules cm −2 in the plane of the crystal lattice (PDB 3ECA), with an average distance between protein tetramers of ∼11.4 nm, from center to center (c.t.c.). On the sensing surface, the density of the immobilized Kidrolase was ∼7-fold lower (1.6 × 10 11 molecules cm −2 ) with the average c.t.c. distance between the immobilized tetramers ∼2.7-fold greater (∼30.5 nm). On the contrary, the surface density of tetrameric His-tagged EcAII variants immobilized on the sensing surface was ∼1.3 to 1.8-fold greater than in the crystal lattice, with c.t.c. distances between tetramers ∼1.2 to 1.3-fold shorter. Their higher packing may be promoted by their 8-to 25-residue terminal linkers, allowing for some overlapping of the immobilized tetramers and may favor bivalent antibody binding between the tightly packed EcAII molecules (allowing each Fab domain to bind two distinct neighbor Ag molecules), consistent with the lower Ab/Ag binding ratio (<1:1) ( Figure 5B and Table 2). This contrasts with the looser packing of Kidrolase molecules, which appears to provide additional space between Kidrolase tetramers for a Specific activity of immobilized EcAII receptors (Ag) was monitored on 9 × 9 mm gold-coated glass slides. All other immobilization and detection measurements were performed in the P4SPR instrument using 20 × 12 mm gold-coated prisms. b Detection using 150 μg mL −1 anti-EcAII antibody (Ab). c Density or binding ratio of the monomer; values comparable to 4 monomers (1 equiv tetramer) are in parentheses. d Unit cell dimensions for ECAII in the tetrameric form (PDB 3ECA). e Unit cell dimensions for ECAII in the monomeric form (PDB 1NNS). antibodies to bind, consistent with the observation of an Ab/Ag binding ratio as high as 2:1. Despite a low surface density (c.t.c. distance 33% larger than in the crystal), monomeric C8-EcAII lost in sensing efficiency (Ab/Ag ratio = 0.2) if one considers molar ratios because the tetrameric EcAII benefits from four Ab binding sites per Ag molecule. Nonetheless, the sensing efficiencies of monomeric and tetrameric C8-EcAII were similar if considering equal amounts of subunit molecules (Table 2). Overall, the lower sensing efficiency of the tetrameric Histagged EcAII receptors was more than compensated for by their greater surface density (coverage) and by their high reproducibility relative to Kidrolase. These factors ultimately afforded significantly greater sensitivity when detecting anti-EcAII antibodies in the serum ( Figure 5A). ■ CONCLUSIONS We have examined the sensing properties of EcAII immobilized by various modes onto low-fouling SPR sensor chips for the detection of anti-EcAII antibodies in serum. We have determined that the native tetrameric structure, while being essential for activity, is not required for antibody recognition. Moreover, we showed that the extent of immobilization of EcAII was the main determinant of its immunosensing efficiency. 
■ CONCLUSIONS

We have examined the sensing properties of EcAII immobilized by various modes onto low-fouling SPR sensor chips for the detection of anti-EcAII antibodies in serum. We have determined that the native tetrameric structure, while being essential for activity, is not required for antibody recognition. Moreover, we showed that the extent of immobilization of EcAII was the main determinant of its immunosensing efficiency. Metal coordination of His-tagged EcAII variants provided a significantly greater sensor coverage than covalent immobilization of native EcAII and therefore provided greater sensitivity despite their reduced sensing efficiency per molecule. Moreover, metal chelation significantly improved the reproducibility of EcAII immobilization. This study illustrates the benefits of testing alternative immobilization strategies and highlights the positive impact of high receptor coverage and immobilization reproducibility toward obtaining a well-behaved sensing system.

■ EXPERIMENTAL SECTION

Materials and Reagents. The pharmaceutical drug Kidrolase (EUSA Pharma) was obtained as a lyophilized powder that contains 48.6% by mass of glycine−NaOH, pH 6.8−7.0, and 51.4% by mass of E. coli L-asparaginase II (EcAII) with an activity of 194.6 IU mg⁻¹. It was dissolved in phosphate-buffered saline (PBS) buffer, pH 7.4, and prepared at a concentration of 0.1 mg mL⁻¹ (0.72 μM EcAII/0.65 mM glycine) for SPR analyses, or was dialyzed against PBS to remove glycine. L-Glutamate dehydrogenase (NADP) from Proteus sp. was purchased from Sigma-Aldrich. L-Asparagine and α-ketoglutaric acid were purchased from BioShop. NADPH tetrasodium salt was purchased from Calbiotech. The plasmid pET15b was purchased from Novagen. Human serum was purchased from Sigma. The E. coli L-asparaginase II (ansB) gene from E. coli K12 was obtained from the ASKA collection as a pCA24N-ansB construct.

His-Tagged EcAII Constructs. The full-length EcAII precursor ORF, including the signal peptide sequence (1044 bp), was amplified using polymerase chain reaction (PCR) from the pCA24N/ansB construct using the following primers (restriction sites are underlined): 5′-AAACATATGGAGTTTTTCAAAAAGACGGC-3′ (forward primer containing the NdeI restriction site) and 5′-AAAACTCGAGGTACTGATTGAAGATCTGCT-3′ (reverse primer containing the XhoI restriction site), and ligated into the similarly digested pET20b expression vector (Invitrogen). The resultant protein, named C8-EcAII, includes a C-terminal octapeptide His-tag (LEHHHHHH). To fuse an N-terminal His-tag to EcAII, the DNA sequence encoding the mature form of EcAII (978 bp) was amplified using PCR from the pET20b/ansB construct using the following primers: 5′-GGAATTCCATATGTTACCCAATATCACCATTTTAGC-3′ (forward primer containing the NdeI restriction site) and 5′-CGGCTCGAGTTAGTACTGATTGAAGATCTG-3′ (reverse primer containing the XhoI site). The ochre stop codon, TAA, was included (in bold in the reverse sequence). The amplicon was digested with the corresponding restriction enzymes and ligated into the similarly digested pET15b vector for N-terminal fusion with a sequence encoding 21 residues containing a His₆-tag, yielding the construct N21-EcAII. A second N-terminal fusion was constructed by introducing an enterokinase cleavage site between the mature EcAII sequence and the previous N-terminal fusion. The mature sequence was amplified from the pET15b/ansB construct with the forward primer 5′-GGAATTCCATATGGACGACGACGACAAGTTACCCAATATCACCATTTTAGC-3′ and the same reverse primer as for N21-EcAII. It was similarly ligated into pET15b, yielding the construct N26-EcAII. The DNA sequence encoding the enterokinase cleavage site is shown in bold in the forward sequence. Each ligation was transformed into competent E. coli BL21(DE3).
The transformed cells were plated onto Luria broth (LB) agar containing ampicillin (Amp; 100 μg mL⁻¹), and the clones were selected, cultured in LB containing Amp (100 μg mL⁻¹), and stored at −80 °C in 25% GOH. The plasmids were isolated and the ORFs were sequenced at the IRIC Genomic Platform at Université de Montréal, using the T7 promoter and terminator primers. The DNA sequences were analyzed using the Clone Manager 9 (version 9.2) software.

Protein Expression and Purification. For N-terminally tagged EcAII (N21- and N26-EcAII), cytosolic expression was performed as follows. Terrific broth (TB) (Amp 100 μg mL⁻¹) was inoculated with the appropriate GOH stocks and grown overnight at 37 °C with 230 rpm agitation. For the expression, 4 L flasks filled to 25% capacity with TB + Amp were inoculated with the appropriate precultures (1:1000 ratio), and the cells were grown at 37 °C and 230 rpm until the optical density at 600 nm reached 0.6. Protein expression was then induced overnight at 18 °C and 230 rpm by the addition of 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). The induced cells were harvested by centrifugation for 30 min at 3500 rpm using an SLA-3000 rotor. The cell pellets were stored at −80 °C for 24 h, then thawed on ice and resuspended in the lysis buffer (50 mM sodium phosphate, 10−20 mM imidazole, and 150 mM NaCl, pH 8) at a ratio of 10% (w/v). Lysozyme (final concentration of 1−2 mg mL⁻¹) was added, and the cells were placed on ice for 30 min before sonication on ice for three 15 s cycles at 20 pulses/s. The cells were lysed using a Constant Systems cell disruptor (27 kPSI) cooled to 4 °C. The lysates were cleared of cell debris by centrifugation for 30 min at 17.5k rpm using an SS-34 rotor cooled to 4 °C. The supernatants were filtered through 0.2 μm filters, and the expressed His-tagged recombinant proteins were purified from the soluble fraction under native conditions by immobilized metal affinity chromatography (IMAC; Ni-NTA) at 4 °C using an AKTA FPLC system equipped with a UPC-900 monitor and a P-20 pump system (GE Healthcare) and a 5 mL His-trap column (GE Healthcare). The nickel resin was equilibrated with 5 column volumes (CV) of the lysis buffer. The lysate was applied at a flow rate of 0.5 mL min⁻¹. After recovery of the absorbance baseline at 280 nm, an imidazole gradient of 10−100 mM was applied over 3 CV and maintained at 100 mM for 3 CV before eluting with a jump to 500 mM imidazole. Fractions of 1 mL were collected. Following analysis on 15% SDS-PAGE, the fractions containing EcAII were pooled, concentrated to 1 mL using an Amicon concentrator [molecular weight cutoff (MWCO) = 10 kDa], and applied at a flow rate of 0.5 mL min⁻¹ to a 90 mL Superdex 75 gel filtration column (1.6 × 55 cm) equilibrated with PBS, pH 7.4 at 4 °C. The collected 0.5 mL fractions corresponding to the major peak were analyzed using 15% SDS-PAGE, and the fractions of highest purity were pooled. The protein concentration was determined using the bicinchoninic acid (BCA) method 47 with bovine serum albumin (BSA) and Kidrolase as standards. The purified EcAII samples were diluted to 0.1−1 mg mL⁻¹ in PBS, pH 7.4, aliquoted, and flash-frozen over dry ice/ethanol for storage at −80 °C. Similar procedures were performed for purification of the C-terminally His-tagged C8-EcAII, with the following modifications. Periplasmic expression of C8-EcAII was performed in ZYP-5052 autoinducing medium inoculated (1:100) with a preculture.
Following growth at 37 °C for 2 h, the cultures were incubated at 20 °C for overnight expression. Following application of the cell lysate onto the His-trap column, the elution was performed with a stepwise gradient from 20 to 250 mM imidazole. The purified protein was quantified, filter-sterilized (0.2 μm), and flash-frozen for storage at −80 °C or kept at 4 °C.

Exact Mass Determination (LC−MS). The exact mass of the purified proteins (0.1 mg mL⁻¹ in PBS) was determined using electrospray ionization (ESI) mass spectrometry on an LC−MS time-of-flight (TOF) spectrometer (Agilent) at the Regional Mass Spectrometry Centre at Université de Montréal.

L-Asparaginase Activity Measurements. The hydrolysis of L-Asn catalyzed by EcAII was assessed spectrophotometrically at 37 °C by monitoring ammonia production, either as an end-point assay using direct Nesslerization or using a continuous coupled assay with GDH. 38 Each reagent was freshly prepared in the reaction buffer (modified PBS: sodium phosphate concentration increased to 50 mM and pH adjusted to 8 before the experiment).

Direct Nesslerization. A standard curve for ammonia concentration was generated with (NH₄)₂SO₄ concentrations ranging from 0 to 5 mM in PBS, pH 8, or Tris-HCl, pH 8.6. The EcAII reactions (1 mL) were performed in the same buffer with 5 μg of L-asparaginase and 9 mM L-Asn. Each reaction was performed for 30 min at 37 °C and then quenched with 0.05 mL of 1 M trichloroacetic acid. A total of 0.1 mL of the reaction solution was used for revelation with 0.25 mL of Nessler reagent and 2.15 mL of water (final volume of 2.5 mL), and the absorbance was measured at 450 nm. The enzyme specific activity was determined based on freshly generated standard curves.

Coupled assay. The ammonia produced upon L-Asn hydrolysis by EcAII served as the substrate for the second enzyme, GDH, in a coupled reaction that converts α-ketoglutarate into glutamate with the oxidation of NADPH into NADP⁺. The reaction rate was observed in a continuous manner by monitoring the decrease in NADPH absorbance at 340 nm, using ε₃₄₀ = 6.22 mM⁻¹ cm⁻¹. The parameters of the coupled assay were optimized with respect to the concentration of GDH and of each substrate. To this effect, the affinity and the catalytic efficiency of GDH for ammonia were determined under saturating concentrations of NADPH (250 μM = 10 × K_M) and α-ketoglutarate (175 μM = 17 × K_M) and variable concentrations of NH₄Cl (0−50 mM) (Table S1). The ammonia produced by EcAII at a saturating concentration of freshly prepared L-Asn (5 mM ≈ 500 × K_M) was monitored using the coupled GDH assay under the saturating conditions described above (freshly prepared substrates) with various amounts of EcAII (0.1−10 μg). The maximal rate of NADPH oxidation (maximal GDH velocity) was then plotted as a function of EcAII loading. The slope of the linear portion of the curve (dynamic range), observed from 0.1 to 1 μg EcAII, was taken as the apparent specific EcAII activity (μmol NADPH oxidized/min per mg of EcAII) under the assay conditions. We refer to this value in terms of units (U mg⁻¹ EcAII) in the coupled assay.

SEC. The oligomerization states of native EcAII (Kidrolase) and each His-tagged EcAII were analyzed by analytical SEC on an AKTA FPLC system. Different protein concentrations (0.4 mL injections) were applied onto a calibrated 24 mL size exclusion column (GE Superdex 200, 10/300 mm) equilibrated with PBS, pH 7.4, at a flow rate of 0.5 mL/min at 4 °C.
The EcAII oligomeric forms were determined by correlating the elution volume (V e ) of each peak with both the expected MW according to the elution volume of protein standards and the Stokes radius. The calculated accessible surface area (ASA) reported is 14 000 Å 2 per monomer (i.e., 56 000 Å 2 for four free monomers) but only 38 500 Å 2 for the assembled tetramer, consistent with the burial of the subunit interfaces. 14,48 When considering the Stokes radii of the calibration standards, the calculated Stokes radius of the EcAII tetramer (39 Å) is consistent with the measured Stokes radius both in crystal form (32 Å) and in solution (30.3 Å). 49 Native-PAGE. Protein preparations were concentrated to 2.5 mg mL −1 in PBS, pH 7.4 using Amicon concentrators (MWCO = 10 kDa) and analyzed using CN-PAGE and BN-PAGE on Novex NativePAGE 4−16% Bis-Tris gels (1.0 mm), pH 7. For BN-PAGE, the protein samples were prepared in a loading dye consisting of 50 mM Bis-Tris, 500 mM ACA, 10% GOH, and 5% Coomassie blue G-250 and applied to 4−16% Bis-Tris gels or 10% Bis-Tris gels. Native electrophoresis was performed with deep or light blue cathode buffer [50 mM tricine, 15 mM Bis-Tris with 0.02% (deep) or 0.001% (light) Coomassie blue G-250] and anode buffer (50 mM Bis-Tris), both adjusted to pH 7. Electrophoresis under light blue conditions was referred to as LBN-PAGE. For CN-PAGE, the Coomassie blue G-250 loading dye was replaced by bromophenol blue with or without ACA. In addition, a clear cathode buffer was used (without G-250). Electrophoresis was performed for 2 h at 90−100 V. CN-PAGE gels were stained with Coomassie brilliant blue R-250. CN-PAGE and BN-PAGE gels were destained with 10% acetic acid and 45% methanol. CD. The CD spectra of Kidrolase and each His-tagged EcAII were recorded using a Chirascan spectropolarimeter (Applied Photophysics). Far-UV CD spectra (190−250 nm) were recorded under a nitrogen atmosphere at 25°C in PBS, pH 7.4 with a protein concentration of 0.1 mg mL −1 in a 1 mm quartz cuvette. The scans were performed with a step of 0.4 nm (3.6 s/point) and a bandwidth of 1 nm. The spectra were corrected for the background (buffer). The data were converted into molar ellipticity (Θ). Fluorescence Spectroscopy. Intrinsic fluorescence measurements were performed using a Varian Cary Eclipse spectrofluorimeter. Fluorescence spectra were recorded from 287 to 450 nm after excitation at 278.5 nm (λ max of absorption) at a protein concentration of 0.05 mg mL −1 in PBS, pH 7.4 in a 1 cm quartz cuvette. Thermal denaturation was performed from 20 to 90°C (0.5°C/min) using a Peltier temperature controller. Melting curves were generated by plotting the fluorescence intensity at λ max for emission as a function of temperature. Although the chemical denaturation of EcAII may proceed via the intimate dimer intermediate (N 4 → 2I 2 → 4U), 50 the thermal denaturation was treated as an apparent two-state process, consistent with the overall shape of the unfolding curves. We thus determined the apparent melting temperature by fitting to a two-state model (N → D), where denaturation operates between the fraction of native (f N ) and denatured (f D ) molecules and where f N + f D = 1. In this model, the fluorescence signal value (y) at any point of the unfolding curve is given by the following equation: 51 y = f N × y N + f D × y D . The values y N and y D correspond to the fluorescence intensity for the native and denatured states, respectively.
By combining these equations, the fraction of denatured molecules at any value of y (fluorescence intensity) is obtained by the following equation: f D = (y N − y)/(y N − y D ). The denaturation equilibrium constant can be calculated as follows: K = f D /f N = f D /(1 − f D ). The unfolding free energy change can be calculated as follows: ΔG = −RT ln K. The melting temperature (T m ) can be obtained by plotting the unfolding free energy (ΔG) as a function of temperature, where T m corresponds to the temperature where f N = f D and the unfolding free energy is null (ΔG = 0). We noted that, beyond the inflection point, some aggregation was observed. ELISA. MaxiSorp microplates (Nunc-Immuno plate, Thomas Scientific, cat no. 62409-50) were coated with 0.1 mL of native or His-tagged EcAII (10 μg mL −1 ) diluted in 0.05 M carbonate/bicarbonate, pH 9.5 and incubated overnight at 4°C. ELISA assays were performed, as reported by Wang and coauthors, 12 with the following modifications: the primary rabbit polyclonal anti-EcAII antibody (IgG) ANSZ (Antibodies Online, cat no. ABIN95396) was resuspended in human serum (Sigma, cat no. H4522) and diluted in PBS, pH 7.4 to concentrations ranging from 0.1 pM to 1 μM for calibration, assuming a MW of 150 kDa. Polyclonal HRP-conjugated goat antirabbit IgG (Abcam, cat no. ab97200) at a dilution of 1:1000 in PBS was used for secondary detection. A volume of 0.1 mL of freshly made 0.4 mg mL −1 o-phenylenediamine dihydrochloride (OPD) prepared in 0.1 M citrate buffer, pH 6 containing 0.02, 0.16, or 3% hydrogen peroxide (high, intermediate, and low sensitivity conditions, respectively) was added to the wells and incubated for 30 min in the dark, after which the reaction was stopped by the addition of 0.1 mL of 1 M phosphoric acid. The anti-EcAII antibody concentration was measured by monitoring the absorbance at 490 nm (specific product absorption). The absorbance at 650 nm was then subtracted from the absorbance at 490 nm to control for nonspecific adsorption. Oriented His-tagged EcAII were further analyzed by ELISA using Pierce nickel-coated plates. Calibration was performed with the ANSZ antibody from Antibodies Online and was confirmed with a rabbit polyclonal anti-L-asparaginase II antibody (IgG) from Novus Biologicals (NB100-66516; not shown). The titration curves were fitted to a logarithmic function, and the dissociation constant was determined using a multiple-binding-sites analysis in the GraphPad Prism 6.0 software. Sensor Chip Fabrication. Sensor chips were constructed by depositing a thin gold film either on 9 × 9 × 0.5 mm glass slides, for monitoring the extent of immobilization and on-chip asparaginase activity, or on 20 × 12 × 3 mm glass prisms, for immunosensing in a portable P4SPR instrument (Affinité Instruments) that has been described in a previous report. 19 Sensing surfaces were prepared by depositing chromium (∼0.7 nm thick) and then gold (∼50 nm thick) on the glass surface using a Cressington 308R sputter coater (Ted Pella Inc.). SPR sensing experiments (wavelength interrogation) were performed in the Kretschmann configuration. The gold surfaces were immersed in a 1 mg mL −1 solution of 3-MPA-LHDLHD-OH peptide in dimethylformamide (DMF) to form a SAM that prevented surface fouling. 22,52 The terminal carboxylates on the SAM remain free, to covalently immobilize native EcAII by cross-linking with its surface-exposed lysine residues. For immobilization of EcAII by coordination of terminal His-tags, the SAM was functionalized with NTA-Co to yield Au-MPA-LHDLHD-NTA-Co, as previously described. 17
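Returning briefly to the thermal-denaturation analysis above, the apparent two-state fit can be sketched in Python as follows, using synthetic data. The van 't Hoff form of ΔG(T) and the flat pre-/post-transition baselines are simplifying assumptions here, not the exact protocol of ref 51.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # kJ mol^-1 K^-1

def two_state(T, yN, yD, Tm, dH):
    """Fluorescence of an apparent two-state unfolder:
    y = fN*yN + fD*yD, with K = exp(-dG/RT) and dG = dH*(1 - T/Tm)."""
    K = np.exp(-(dH * (1.0 - T / Tm)) / (R * T))
    fD = K / (1.0 + K)
    return (1.0 - fD) * yN + fD * yD

# Synthetic melting curve in Kelvin; in practice, use the measured
# fluorescence intensity at lambda_max versus temperature.
T = np.linspace(293, 363, 100)
y_obs = two_state(T, 1.0, 0.2, 330.0, 300.0) + np.random.normal(0, 0.01, T.size)

popt, _ = curve_fit(two_state, T, y_obs, p0=(1.0, 0.2, 325.0, 250.0))
print(f"apparent Tm = {popt[2] - 273.15:.1f} C")  # fD = 0.5 at T = Tm
```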
On-Chip Measurement of Activity and Extent of Immobilization. The specific activity of immobilized EcAII was monitored on the 9 × 9 mm gold-covered chips. The SAM-functionalized chips were placed on a dove prism (above a thin layer of immersion oil) and sealed with a rubber ring fitted in the injection module. The chip surface was rinsed with water and then with the buffer (PBS, pH 7.4). Following the adjustment of the plasmonic band (typical minimum at ≈620 nm), the baseline was set in S polarization, and the reference was collected (average of 10 scans per spectrum for a total of 100 spectra). The sample wavelength shifts (Δλ SPR ) were recorded in P polarization. The native EcAII was surface-immobilized by cross-linking its surface-exposed lysines to the free carboxylates of the SAM, using a previously reported EDC/NHS cross-linking procedure. 53 The chip was rinsed with at least 6 mL of the buffer for 2 min, and 1 mL of EDC/NHS (1:1 mixture with a final concentration of 200 and 100 mM, respectively) was injected on the peptide surface. PBS, pH 4.5 was injected to activate the surface (for 2 min), and 0.8 mL of 0.1 mg mL −1 native EcAII (Kidrolase) was injected. Surface immobilization was monitored for 20 min before the surface was washed with 6 mL of PBS. Immobilization of the His-tagged EcAII onto the NTA-Co-functionalized SAM was performed as above, with the exception of the EDC/NHS and PBS, pH 4.5 injection steps. The surface coverage (Γ, ng cm −2 ) of the immobilized native or His-tagged EcAII was calculated from the change in the wavelength (Δλ SPR ) upon immobilization of EcAII using the following equation 54

Γ = −ρ(l d /2) ln(1 − Δλ SPR /(m(n SAM − n medium ))) (5)

where ρ corresponds to the density of the adsorbed protein monolayer (1.3 g cm −3 ), l d is the plasmon penetration distance (∼230 nm), Δλ SPR is the shift in the wavelength associated with protein immobilization, m is the refractive index sensitivity of the SPR sensor (1765 nm/RIU), n SAM is the refractive index of the peptide SAM (1.57 RIU), and n medium is the refractive index of the buffer (1.33476 RIU). The total amount of immobilized EcAII on the sensing surface (Q) was determined using the formula Q = ΓS, where S = 0.166 cm 2 is the sensing area in contact with the protein. To monitor the activity of the surface-immobilized EcAII, the 9 × 9 mm EcAII-coated chips were placed upright along the side wall of a UV/vis quartz cuvette. 17 The chips were immersed in a solution containing 5 mM L-Asn, 17 × K M α-ketoglutarate, 10 × K M NADPH, and 1.0 IU GDH in a modified PBS buffer, pH 8 (as described for asparaginase activity measurements) with slow agitation. The activity of the immobilized EcAII was measured by monitoring the change in the absorbance at 340 nm over 240 min because of the oxidation of NADPH accompanying the consumption of ammonia by GDH. The maximal GDH velocity was corrected for the blank (no EcAII) and was used to determine the activity of the immobilized EcAII. The specific activity (U mg −1 ) of the immobilized EcAII was determined according to the mass of the immobilized protein (Q) on the chip. SPR Immunosensing. SPR immunosensing experiments were performed using the P4SPR portable instrument (Affinité Instruments). 19 The 20 × 12 mm dove prisms coated with gold and functionalized with the appropriate antifouling SAM (as described above, either with or without NTA-Co) were placed in the P4SPR instrument for EcAII immobilization.
The baseline from 1 mL of PBS was recorded for 2 min, and native or His-tagged EcAII (0.4 mL of 0.1 mg mL −1 protein) was injected as above, followed by rinsing with 1 mL of PBS for 2 min. Following immobilization, 0.4 mL of blank human serum was injected over 10 min to passivate the surface. Human serum spiked with different concentrations of polyclonal rabbit anti-asparaginase antibodies (0.4 mL) was then injected, and antibody binding was monitored for 20 min. Calibration of the sensor was performed with serial injections of increasing concentrations of anti-asparaginase antibodies on a single sensor chip, as previously described. 23 The SPR shifts were calculated with MATLAB software and served to calculate the surface density of the analyte bound onto the immobilized antigenic receptors, as described above. Taking into account the exact MW of each EcAII variant, we calculated the number of EcAII molecules immobilized on the sensing surface (molecules cm −2 ). Calculation of the number of antibody molecules bound to antigen was based on a MW of 150 kDa for the antibody. Statistical analysis of the variance was performed using Tukey's multiple-comparison test following one-way ANOVA. The dimensions of the unit cell of tetrameric EcAII in crystal form (PDB 3ECA: a = 7.6 × b = 9.6 × c = 11.1 nm; α = 90°, β = 97.1°, γ = 90°) allowed us to estimate an average footprint of 87 nm 2 (8.7 × 10 −13 cm 2 ) per tetrameric EcAII molecule, with an average center-to-center (c.t.c.) distance between EcAII molecules of 11.4 nm in the crystal. The surface density of 1.15 × 10 12 molecules cm −2 in the plane of the crystal lattice allows scaling of the distance between the immobilized tetrameric EcAII receptors on the sensing surface based on their determined surface density. The dimensions of the unit cell for monomeric EcAII (PDB 1NNS; monomeric asymmetric unit: a = 7.6 × b = 13.5 × c = 6.5 nm; α = 90°, β = 97.1°, γ = 90°) allowed us to estimate an average footprint of 80 nm 2 (8 × 10 −13 cm 2 ) per monomer, with an average c.t.c. distance of 10.8 nm in the plane of the crystal lattice. Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.7b00110: amino acid sequences of EcAII variants; secondary, tertiary, and quaternary structure analyses; enzyme stability, antigenicity, and immobilization data (PDF).
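As a worked illustration of how Eq 5 and the molecule-count conversion above fit together, a short Python sketch follows. The SPR shift of 5 nm and the ≈140 kDa tetramer MW are assumed example values, not measured results from this work.

```python
import math

# Instrument and layer constants quoted in the text
rho = 1.3           # g cm^-3, density of the protein adlayer
l_d = 230.0         # nm, plasmon penetration distance
m = 1765.0          # nm/RIU, refractive index sensitivity
n_sam = 1.57        # RIU, peptide SAM
n_medium = 1.33476  # RIU, buffer
S = 0.166           # cm^2, sensing area
N_A = 6.022e23

d_lambda = 5.0      # nm, hypothetical SPR shift on EcAII immobilization

# Eq 5: Gamma in g cm^-2 (l_d converted from nm to cm)
gamma = -rho * (l_d * 1e-7 / 2) * math.log(1 - d_lambda / (m * (n_sam - n_medium)))
print(f"Gamma = {gamma * 1e9:.0f} ng cm^-2")

Q = gamma * S          # total grams of protein on the chip (Q = Gamma*S)
mw_tetramer = 140e3    # g/mol, approximate EcAII tetramer MW (assumption)
print(f"{Q / mw_tetramer * N_A / S:.2e} tetramers cm^-2")
```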
2018-08-06T12:46:11.635Z
2017-05-17T00:00:00.000
{ "year": 2017, "sha1": "990df347ff93d82172e9cef42a362006a15ff282", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.7b00110", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96360914cf2f1dc6e300cd5039bc7d97da05a321", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
260067994
pes2o/s2orc
v3-fos-license
Greater stress and trauma mediate race-related differences in epigenetic age between Black and White young adults in a community sample Black Americans suffer lower life expectancy and show signs of accelerated aging compared to other Americans. While previous studies have observed these differences in children and in populations with chronic illness, whether and how these pathologic processes progress prior to the onset of significant chronic illness has yet to be explored within a young adult population. Therefore, we investigated race-related differences in epigenetic age in a cross-sectional sample of young, putatively healthy adults and assessed whether lifetime stress and/or trauma mediate those differences. Biological and psychological data were collected from self-reported healthy adult volunteers within the local New Haven area (399 volunteers, 19.8% Black, mean age: 29.28). Stress and trauma data were collected using the Cumulative Adversity Inventory (CAI) interview, which assessed specific types of stressors, including major life events, traumatic events, work, financial, relationship and chronic stressors cumulatively over time. GrimAge Acceleration (GAA), determined from whole blood collected from participants, measured epigenetic age. Exploratory mediation analyses were then used to understand the impact of stress and trauma on GAA. We found that cumulative stressors across all types of events (mean difference of 6.9, p = 2.14e-4) and GAA (β = 2.29 years [1.57-3.01, p = 9.70e-10] for race, partial η2 = 0.091, model adjusted R2 = 0.242) were significantly greater in Black compared to White participants. Critically, CAI total score (proportion mediated: 0.185 [0.073-0.34, p = 6e-4]) significantly mediated the relationship between race and GAA. Further analysis attributed this difference to more traumatic events, particularly assaultive traumas and the death of loved ones. Our results suggest that, prior to the development of significant chronic disease, Black individuals have increased epigenetic age compared to White participants and that increased cumulative stress and traumatic events may contribute significantly to this epigenetic aging difference. Introduction Life expectancy amongst Black Americans in the United States has consistently been less than the national average. Despite advances in modern medicine and governmental policies that have improved access to care for all Americans, in 2020 the National Vital Statistics Service reported life expectancy amongst Black Americans as 2-3 years less than that of the general population (National Vital Statistics Services, 2021), a gap that widened during the COVID-19 pandemic (Lundberg et al., 2023). Additionally, even controlling for income, gender, and socioeconomic status, Black Americans have greater prevalence, increased burden, and earlier onset age of chronic illness when compared to the general population (Williams et al., 2010). While these race-related health disparities and some of the potential causes that may underlie them are widely reported (Colen et al., 2018; Pascoe and Richman, 2009), our understanding of race-related differences in the experience of psychosocial stress or trauma, of the pathological means by which such experiences may become biologically embedded, and of their impact on the observed race-related mortality gap remains limited.
The GrimAge epigenetic clock has been identified as a reliable predictor of age-related morbidity and mortality (Horvath and Raj, 2018a; Lu et al., 2019; McCrory et al., 2020). Comparing epigenetic age to chronological age provides a biomarker for biological age, indicating whether aging is advanced or delayed (Horvath and Raj, 2018b). Chronic diseases such as type II diabetes mellitus, hypertension, cardiovascular disease, and obesity have been associated with advanced aging and shorter life expectancies (Ayotte et al., 2012; Kho et al., 2021). Similarly, traumatic or adverse events correlate with various metabolic and inflammatory disorders (Mathur et al., 2016; Pantesco et al., 2018; Sullivan et al., 2018), and long-term stress increases aging markers in chronically ill individuals (Chae et al., 2016; Mathur et al., 2016; Simons et al., 2018; Xu et al., 2018). Both our research and other studies have found that cumulative stress and trauma significantly accelerate epigenetic aging (Harvanek et al., 2021; Sullivan et al., 2018; Wolf et al., 2016) in young, non-ill community samples. Though social factors such as differential healthcare access and systemic racial bias significantly drive life expectancy disparities following disease onset (Cerdeña et al., 2021; Fitzgerald and Hurst, 2017), differences in trauma and stressor frequency and/or intensity may influence epigenetic aging prior to evident disease, subsequently impacting health outcomes (Kho et al., 2021). How these stressors are biologically embedded is currently unclear, though it is an area of active study (Geronimus et al., 2016). Longitudinal studies have indicated that race-related differences in childhood stress and trauma exposure can negatively impact epigenetic aging (Wolf et al., 2018) and neurological development (Dumornay et al., 2023). In contrast, protective factors like supportive family environments do not appear to affect epigenetic aging despite trauma exposure (Brody et al., 2016). Prior research has noted signs of accelerated aging in self-identified Black Americans (Kho et al., 2021), though these studies typically focus on individuals with chronic medical or psychiatric diseases (Chae et al., 2016; Geronimus et al., 2010; Simons et al., 2021). The presence of race-related disparities in epigenetic age before evident disease and the role of adverse stress experiences in these differences among a young, ostensibly healthy population remain uncertain. Drawing upon previous research (de Mendoza et al., 2018; Everage et al., 2012; Geronimus et al., 2010; Heard-Garris et al., 2018; Simons et al., 2018), we hypothesize that there will be significant race-related differences in the number of lifetime stressful and traumatic events between Black and White participants. We propose that a higher number of stressful and traumatic events will be associated with greater epigenetic aging in Black compared to White participants. Using a cross-sectional study involving young to middle-aged volunteers in self-reported good health, we assessed the relationship between stress and trauma, race, and epigenetic aging via GrimAge. Our investigation initially sought to determine whether race is associated with cumulative stressful events and with GrimAge acceleration (GAA) in a healthy community sample. Through an exploratory mediation analysis, we then examined whether cumulative stress and trauma mediate the relationship between race and GAA and whether specific types of stressors, particularly traumatic events, primarily contribute to this effect.
Finally, we incorporated socioeconomic and biobehavioral covariates differing between the populations to ascertain whether stress and trauma continue to mediate the relationship between race and GAA, even after considering these factors. Cohort recruitment Participants for this research were 399 community adults between the ages of 18 and 50 who self-identified as Black (79 individuals) or White (320 individuals) from the greater New Haven, CT area and who provided written and verbal informed consent to participate in this research at the Yale Stress Center (Table 1) (Xu et al., 2018). Those who identified with a racial group other than Black or White were excluded. Participants were recruited to participate in a study on the effects of stress on their health via advertisements online, in local newspapers, and at a community center. Participants were excluded if they had an active mental health disorder or substance use disorder (not including nicotine), as assessed via the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (American Psychiatric Association, 1994); were pregnant; had a chronic medical condition (e.g., hypertension, diabetes, hypothyroidism); were unable to read English at or above the 6th grade level; had a head injury; or were using any prescribed medications for any psychiatric or medical disorders. Urine toxicology and breathalyzer screens were conducted at each appointment to ensure drug abstinence. The research protocol was reviewed and approved by the Yale Institutional Review Board (IRB). Psychological measures Cumulative stress was assessed using the Cumulative Adversity Inventory (CAI; Turner et al., 1995), a well-validated, 140-item retrospective structured interview that assesses the occurrence of specific types of stressful life events, including work, financial, relationship, traumatic, major life, family, neighborhood, and health-related stressors across the lifetime, as well as the participants' perceived sense of being overwhelmed by specific events. Occurrence of the specific stressful life events listed above and the frequency of occurrence of each were quantified and summed to make up the CAI life events total score. In addition, the events were categorized by three subscales: major life events, traumatic life events, and recent life events. For purposes of scoring, a "yes" to the specific stressful event occurring led to a "1", and a sum of all the "yes" endorsements comprised the subscale score. A fourth, chronic stress subscale assessed the participant's sense of feeling overwhelmed by the specific life events (see our prior paper, Harvanek et al., 2021). The chronic stress subscale was rated on a "not true", "somewhat true", or "very true" scale, with assigned scores of 0, 1, and 2, respectively. The final score is a sum of these values for the chronic stress subscale. The CAI total score was a sum of each of the subscale scores, with a higher score indicating a higher overall level of lifetime cumulative stress. To further understand traumatic stress, traumatic events were sub-categorized into four areas: assaultive violence, other injury or shocking event, learning of traumas of a close friend or relative, and the death of a loved one, based on previous work utilizing the Detroit Area Survey (Breslau et al., 1998). This method has been used to classify different types of trauma and their effects (Breslau et al., 2004).
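To make the scoring scheme concrete, the following is a minimal Python sketch of how subscale and total CAI scores are assembled. Item names and responses are hypothetical; this is an illustration of the arithmetic, not the 140-item instrument itself.

```python
# Event subscales sum binary endorsements; chronic-stress items score 0/1/2.
event_items = {"assaulted": True, "job_loss": False, "divorce": True}
chronic_items = {"overwhelmed_by_finances": "somewhat true",
                 "overwhelmed_by_work": "very true"}

CHRONIC_SCORES = {"not true": 0, "somewhat true": 1, "very true": 2}

events_score = sum(1 for endorsed in event_items.values() if endorsed)
chronic_score = sum(CHRONIC_SCORES[v] for v in chronic_items.values())

cai_total = events_score + chronic_score  # higher = more lifetime adversity
print(events_score, chronic_score, cai_total)
```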
The alpha reliabilities of the CAI and the trauma subscale are 0.87 and 0.77, respectively. Self-reported current health was assessed by the Cornell Medical Index (CMI) (Brodman et al., 1949). Physical and psychological health symptoms are captured by a 195-question interview, a validated and reliable measure of current general health used by various studies (Abramson, 1966; Brodman et al., 1949; Perlmutter and Nyquist, 1990). The CMI alpha reliability is 0.95. Self-report was also used to identify smoking status (current smoker or non-smoker) and alcohol use (here quantified by standard drinks per 28 days). DNA methylation and epigenetic clock analysis As previously described, DNA was extracted from whole blood (Xu et al., 2018). Methylation was profiled using Illumina Infinium HumanMethylation450 Beadchips, which cover 96% of CpG islands and 99% of RefSeq genes. Quality controls are as previously published (Xu et al., 2018); further information regarding DNA methylation is available in the supplementary methods. The New Methylation Age Calculator at https://dnamage.genetics.ucla.edu/new (Lu et al., 2019) was used to estimate epigenetic age as outlined by Lu et al. As per their protocol, normalized data and the advanced analysis option were used. We utilized GrimAge acceleration (GAA), which is defined as the residuals of a linear correlation of GrimAge to chronologic age. In the analyses of GAA, we accounted for proportions of B cells, CD4 + T cells, CD8 + T cells, monocytes, and NK cells by incorporating them as covariates in a linear model. The Houseman method (Houseman et al., 2012) was used to calculate the proportions. Our conclusions were not significantly altered by exclusion of these cell proportions from our models. Statistical analysis Data organization and analysis were conducted using R 3.6.3 (Bunn and Korpela, 2016) and RStudio. We utilized Wilcoxon rank-sum tests to address the non-normality of variables, except for the variable of assaultive trauma; due to the high frequency of scores of 0, assaultive trauma was compared using a Poisson regression. For GAA analysis, all multivariable linear regressions adjusted for sex and cell proportions (dropping granulocytes to avoid overfitting). All tests were two-tailed with an alpha of 0.05, with Bonferroni corrections used when assessing multiple subscales at once, as indicated in the text. Exploratory mediation analysis was performed to determine whether race (independent variable) impacts GAA (dependent variable) via CAI or its subscales, including trauma (mediating variables). All mediation effects were calculated via the mediation package in R using 10,000 simulations with bootstrapping, including covariates of sex and cell proportions. Preliminary analyses showed similar effects using quasi-Bayesian Monte Carlo simulations, though only the bootstrapping models are presented for simplicity. Mediation was considered significant if the proportion mediated was greater than 0 with an alpha of 0.05, with Bonferroni corrections applied when assessing the subscales (2) and types of trauma (4). We next explored whether there were differences in CAI subscales between Black and White participants. After accounting for multiple comparisons (adjusting for 4 comparisons), Black participants reported significantly higher traumatic life events (TE subscale) (mean difference: 2.9, median difference: 2, adjusted CI/p [1-4, p = 2.55e-5], Fig.
1B) and major life events (mean difference: 0.9, median difference: 1, adjusted CI/p: [0-1, p = 6.52e-5], Supplementary Fig. 1A). However, we found no significant difference between Black and White participants on chronic stress (mean difference: 2.3, median difference: 1; adjusted CI/p reported in Table 1). Further assessment of specific traumatic event types revealed that Black participants experienced 87% more assaultive violence, 104% more personal injuries or shocking events, 38% more traumas of a close friend/relative, and 34% more deaths of a close friend or relative, as compared to White participants (Table 2). Assessing self-reported health symptoms between Black and White participants utilizing the Cornell Medical Index (CMI), we found overall low self-reported health symptoms in the sample, indicative of their good health status. Despite the difference in GAA, there was no difference between Black and White individuals in reported health symptoms (Table 1 and Supplementary Fig. 2, mean difference = 2.6, median difference: −1 [−3 to 2, p = 0.504]). As Black participants reported significantly higher traumatic life events and major life events, we next asked whether these subscale scores are also associated with GAA in Black and White participants. After accounting for sex and cell proportions as well as multiple comparisons (adjusting for 2 comparisons), traumatic life events are associated with higher GAA in both Black (β = 0.20; adjusted CI/p values in Fig. 3) and White participants. Fig. 1. Racial differences in reported stress and GrimAge Acceleration (GAA). (A) Black participants report a significantly higher level of cumulative stress as measured by the total Cumulative Adversity Inventory (CAI) when compared to White Americans (p = 2.14e-4 by Wilcoxon rank sum test). (B) Black versus White participants report a significantly higher level of traumatic events as measured by the CAI trauma subscale (adjusted p for 4 comparisons = 2.55e-5). (C) Both Black and White study participants demonstrate a strong correlation between chronologic age and GrimAge (the R 2 presented on the graph represents the univariate correlation between GrimAge and chronologic age). When considering a linear model with GrimAge dependent on both chronologic age and race, race accounts for a 1.73-year difference between Black and White participants. (D) Black participants, on average, have a higher GAA when compared to White study participants. Cumulative stress and traumatic life events mediate race-related effects on GrimAge Acceleration While these data are cross-sectional, we next pursued exploratory mediation analyses to determine whether higher levels of stress and trauma would be a possible mechanism of these race-related differences in epigenetic aging. Accounting for sex and cell proportions, we found that CAI total score significantly mediated the relationship between race and GAA (proportion mediated: 0.185, [0.073-0.34, p = 6e-4], Fig. 2C). Next, we assessed whether the subscales of Traumatic Events and Major Life Events might be possible mediators. After accounting for sex, cell proportions, and multiple comparisons (adjusting for 2 comparisons), the traumatic life events subscale also mediated the relationship between race and GAA (proportion mediated: 0.190; adjusted CI/p: [0.06-0.37, p = 0.0008], Fig. 2D), as did the major life events subscale (proportion mediated: 0.112, adjusted CI/p: [0.026-0.25, p = 0.0052]).
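For readers unfamiliar with the procedure, the following is a conceptual Python sketch of bootstrap mediation of the kind described in the Methods, run on simulated data. The authors used the R mediation package with covariates for sex and cell proportions; this simplified sketch omits covariates, and all effect sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
race = rng.integers(0, 2, n).astype(float)     # binary exposure (illustrative)
stress = 10 + 7 * race + rng.normal(0, 5, n)   # mediator (e.g., CAI total)
gaa = 0.1 * stress + 1.5 * race + rng.normal(0, 2, n)  # outcome

def paths(idx):
    """OLS a-path (race->stress), b-path and direct effect (stress, race->GAA)."""
    X_a = np.column_stack([np.ones(n), race])[idx]
    a = np.linalg.lstsq(X_a, stress[idx], rcond=None)[0][1]
    X_b = np.column_stack([np.ones(n), stress, race])[idx]
    coefs = np.linalg.lstsq(X_b, gaa[idx], rcond=None)[0]
    return a * coefs[1], coefs[2]              # indirect (a*b), direct

boot = np.array([paths(rng.integers(0, n, n)) for _ in range(2000)])
prop = boot[:, 0] / (boot[:, 0] + boot[:, 1])  # proportion mediated
lo, hi = np.percentile(prop, [2.5, 97.5])
print(f"proportion mediated ~ {np.mean(prop):.2f} [{lo:.2f}, {hi:.2f}]")
```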
In all mediation models, race maintained a significant direct effect on GAA after accounting for the mediating effects of total CAI or the specified traumatic life events or major life events subscale (see Fig. 2C and D). Notably, even after considering all behavioral and demographic covariates that differ between Black and White participants in Table 1 (years of education, BMI, and alcohol use) as well as sex and cell proportions, total CAI scores (proportion mediated: 0.136 [0.031-0.30, p = 0.0042]) showed significant mediating effects. When performing a similar analysis on the traumatic and major life event subscales and accounting for multiple comparisons (adjusting for 2 comparisons), the trauma subscale (proportion mediated: 0.147, adjusted CI/p: [0.020-0.37, p = 0.0152]) showed significant mediating effects, though the major life events subscale no longer had a significant mediating effect (proportion mediated: 0.062, adjusted CI/p: [−0.0061 to 0.19, p = 0.102]). The sub-categories of assaultive trauma and death of loved ones demonstrate significant mediating effects We next assessed whether specific sub-categories of trauma were related to GAA. After correcting for multiple comparisons, both assaultive trauma and the death of a loved one demonstrated significant mediating effects on the relationship between race and GAA. Discussion These findings show significant race-related differences in the epigenetic age marker GrimAge in a young, putatively healthy community population. As expected, the Black sample, compared to the White sample, showed both higher epigenetic aging and a greater number of stressful and traumatic life events. Notably, exploratory mediation analyses suggested that the significantly higher number of traumatic stress events (particularly assaultive trauma and death of loved ones) in Black relative to White participants significantly mediated these differences. Even after accounting for significant behavioral and demographic differences (BMI, alcohol use, and education), this difference between the Black and White samples remained. These results suggest that young Black Americans exhibit significantly increased epigenetic age as a result of more adverse stressful life events, consistent with a possible "dose effect" of cumulative stressful and traumatic life events. Remarkably, despite the differences in epigenetic age, stressful life events, and traumatic life events, there was no difference in reported health symptoms via the CMI. This suggests the biological embedding of stress and trauma in the epigenome may occur while individuals are healthy as per self-report, and before the development of differences in negative health symptoms. While these findings come from a cross-sectional analysis of race, stress and epigenetic age and need replication in longitudinal samples, they suggest that interventions to mitigate assaultive and other traumatic stressors are paramount to improving provisional life expectancy of Black Americans. In contrast to prior studies, which often included older or less healthy individuals, our study utilizes GrimAge Acceleration as a biomarker of disparities in aging between putatively healthy Black and White young adults (Kho et al., 2021; Philibert et al., 2020; Simons et al., 2021) with no significant difference in current health symptoms. Identifying accelerated aging prior to the onset of illness suggests possible intervention points to detect changes in epigenetic aging before the emergence of chronic medical illness.
The impact of traumatic events on epigenetic age is consistent with an emerging literature implicating the number of psychosocial stressors and traumas as one factor contributing to health differences in Black and White adolescents and children (Harnett et al., 2019; Lavner et al., 2022). While our exploratory mediation analysis is limited by the cross-sectional nature of our data, within that correlative framework we explored the relationship between race and epigenetic aging, with stress and trauma significantly mediating that relationship. We identified specific stressful and traumatic events that may mediate the relationship between race and increased epigenetic age. While the CAI life events subscales measure the occurrence of specific significant events (including traumatic events, Table 2), the chronic stress subscale measures the perceived subjective response to those stressors. In contrast to the CAI and trauma life event scores, which were associated with significant differences in epigenetic age, the relationship between race and chronic stress was not significant. This stood out as particularly salient, as it suggests that the stress influence on epigenetic age may not be a result of the subjective perceived responses to stressful life events (Mathur et al., 2016), but rather may operate via the biological embedding of the experience of specific stress and trauma events themselves. It is also notable that education, alcohol use, and BMI differed between Black and White participants in our study. While stress and trauma continued to show mediating effects after accounting for these covariates, prior studies have demonstrated relationships between epigenetic aging and education, alcohol, and BMI, which could also contribute to differences in race-related aging (Crimmins et al., 2021; Lundgren et al., 2022; Quach et al., 2017). Future longitudinal studies could assess not only stress and trauma, but also behavioral and socioeconomic contributors to race-related differences in epigenetic aging. When subcategorized by trauma type, Black participants had a higher prevalence in each trauma subcategory. Assaultive trauma both demonstrated a significant relationship with increased epigenetic age and significantly mediated the relationship between race and GAA. This is particularly relevant and consistent with previous research showing that Black Americans are 22% more likely to experience a violent crime (Morgan and Oudekerk, 2019) and are more than twice as likely to have a violent or lethal encounter with law enforcement (Fagan and Campbell, 2020). Consistent with this trend, Black participants in the current sample reported such events (being assaulted, shot/threatened with a gun, or chased while fearing being hurt) more frequently (Table 2). These findings underscore the urgent need for early social and policy interventions, as our understanding of the biological effects of structural racism has become more prominent in the national consciousness (Lund, 2020). These changes in epigenetic age associated with trauma also suggest that interventions that decrease the higher rates of occurrence of trauma and adversity in Black Americans may mitigate their impact on epigenetic aging. The significant mediating effect of the death of loved ones is also of interest, as work in both humans and model systems has suggested that exposure to death may increase morbidity or even mortality (Gendron et al., 2023; Keyes et al., n.d.).
This could represent a mechanism through which early mortality spreads within communities, suggesting that the health effects of trauma may spread beyond the individual. This study has several important limitations, and its findings should be understood in the context of limited sample size, a cross-sectional dataset, and geographic distribution. First, this study's overall sample size is small and limited to individuals from the greater New Haven area, which has an approximate population of 600,000. This study is also limited by only comparing Black and White participants; we were unable to assess other racial and ethnic groups or to account for the diversity within the Black and White groups (i.e., ethnicity). Second, while the CAI is a broad and powerful tool covering numerous stressful life events and identifying many different types of trauma and adversity, it does not specifically measure perceived discrimination, and thus we cannot draw conclusions regarding how epigenetic aging is affected by a unique and asymmetric stressor such as perceived discrimination. Third, due to the cross-sectional nature of this study, we were also limited in our ability to draw causal inferences or to comment on the various theoretical life course models, such as weathering/cumulative stress, predictive adaptive response, or stress generation models, as outlined by Simons et al. (2018). The cross-sectional nature of our study also makes it possible that other, unmeasured variables, such as inherited or intergenerational trauma, could be correlated with participants' trauma responses and their effect on the rate of epigenetic aging. Future studies utilizing longitudinal data could provide more insight on the timeline of stress and trauma effects on epigenetic alterations and their consequent impact on health. Finally, some have suggested that epigenetic clocks such as GrimAge may be biased due to their method of construction (Levine, 2020), although more recent work has supported their use to compare Black and White populations (Graf et al., 2022). While our observed direct effect of race on GAA could reflect such racial bias in epigenetic clocks (Levine, 2020), this is an area for future study. Over the past 5 years, a growing body of literature has shown that unique stressors such as racial trauma, perceived/experienced discrimination, institutional barriers to care/access, housing instability, and citizenship status alter biomarkers of accelerated aging (Bastos et al., 2010; Chapman et al., 2018; Hicken et al., 2018; Williams et al., 2018). Understanding the intersectionality of these unique and nuanced stressors and increased epigenetic age is necessary to understand their impact on health in these under-represented populations. Future studies could use longitudinal assessments that include measures of discrimination, incorporate a broader swath of the population, and utilize new epigenetic clocks trained on more diverse populations to elaborate on these findings. While GrimAge has the advantage of correlating with morbidity and mortality, as the field of epigenetic clocks advances, future studies may also be able to provide more mechanistic details on specific aspects of aging and how they differ by race, stress, or discrimination.
Conclusions Despite the above limitations, to our knowledge this is one of the first studies to investigate whether specific types of trauma may mediate differences in GrimAge between racial groups in a putatively healthy, young-to-middle-aged population. Increased epigenetic aging in Black participants is significantly mediated by cumulative stress, and particularly trauma, which may inform the biological underpinnings of the life expectancy gap in the United States. Health disparities observed later in life may begin during early adulthood, even in the absence of negative health symptoms or diagnosed medical illnesses or conditions, and may be detectable via epigenetic markers, particularly amongst Black Americans. Our findings underscore the need for a better understanding of the impact of these differences in social stress experiences and their effect on biological aging. Overall, these findings highlight an urgent public health need for societal reforms and policy interventions aimed at reducing the occurrence of such stressors and traumatic events. Such interventions may contribute to decreasing the morbidity and mortality gap between Black and White Americans. Declaration of competing interest Dr. Rajita Sinha has research collaborations with Aelis Farma, Aptinyx Inc, CT Pharma, and she is on the Scientific Advisory Board of Embera Neurotherapeutics. The current submission is unrelated to these collaborations. Drs. Holloway, Harvanek, Gordon and Xu have no competing interests to declare. Data availability Data will be made available on request.
2023-07-22T13:15:54.750Z
2023-07-21T00:00:00.000
{ "year": 2023, "sha1": "5945dfafc8c4b5f38ea782fb063501b371af9894", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ynstr.2023.100557", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c69fe5c19003fa9567468d8f4b9f01870b57d6df", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
227034814
pes2o/s2orc
v3-fos-license
Intelligent Ship Detection in Remote Sensing Images Based on Multi-Layer Convolutional Feature Fusion: Intelligent detection and recognition of ships from high-resolution remote sensing images is an extraordinarily useful task in civil and military reconnaissance. It is difficult to detect ships with high precision because various disturbances are present in the sea such as clouds, mist, islands, coastlines, ripples, and so on. To solve this problem, we propose a novel ship detection network based on multi-layer convolutional feature fusion (CFF-SDN). Our ship detection network consists of three parts. Firstly, the convolutional feature extraction network is used to extract ship features of different levels. Residual connection is introduced so that the model can be designed very deeply, and it is easy to train and converge. Secondly, the proposed network fuses fine-grained features from shallow layers with semantic features from deep layers, which is beneficial for detecting ship targets with different sizes. At the same time, it is helpful to improve the localization accuracy and detection accuracy of small objects. Finally, multiple fused feature maps are used for classification and regression, which can adapt to ships of multiple scales. Since the CFF-SDN model uses a pruning strategy, the detection speed is greatly improved. In the experiment, we create a dataset for ship detection in remote sensing images (DSDR), including actual satellite images from Google Earth and aerial images from an electro-optical pod. The DSDR dataset contains not only visible light images, but also infrared images. To improve the robustness to various sea scenes, images under different scales, perspectives and illumination are obtained through data augmentation or affine transformation methods. To reduce the influence of atmospheric absorption and scattering, a dark channel prior is adopted to solve atmospheric correction on the sea scenes. Moreover, soft non-maximum suppression (NMS) is introduced to increase the recall rate for densely arranged ships. In addition, better detection performance is observed in comparison with the existing models in terms of precision rate and recall rate. The experimental results show that the proposed detection model can achieve superior performance for ship detection in optical remote sensing images. Introduction The intelligent detection and recognition of ships is quite important for maritime security and civil management. Ship detection has a wide range of applications, including dynamic harbor surveillance, traffic monitoring, fishery management, sea pollution monitoring, the defense of territory and naval battles, etc. [1]. In recent years, satellite and aerial remote sensing technology has developed rapidly, and optical remote sensing images can provide detailed information with extremely high resolution [2]. Therefore, ship detection has become a hot topic in the field of optical remote sensing. Due to the large …
Our method is different from other methods proposed in the literature. The main contributions of our work can be summarized as follows: • A dataset for ship detection in remote-sensing images (DSDR) is created. Deep learning methods need a lot of training data during the complicated training process. Thus, a ship dataset is badly needed. DSDR contains rich satellite remote sensing images and aerial remote sensing images, which is an important resource for supervised learning algorithms. • We introduce data augmentation to supplement the lack of ship samples in military applications. Thus, preventing the model from overfitting can increase the detection accuracy of ship targets. We adopt an affine transformation method to change the perspectives of ships, thereby increasing the accuracy of ship detection in aerial images. • A dark channel prior is adopted to solve the atmospheric correction on the sea scenes. We remove the influence of the absorption and scattering of water vapor and particles in the atmosphere by using the dark channel prior. The image quality is greatly improved by atmospheric correction. Atmospheric correction is beneficial to improving the accuracy of target detection in remote sensing images. • A feature fusion network is used to combine different levels of convolutional features, which can better use the fine-grained features and semantic features of the target, achieving multi-scale detection of ships. Meanwhile, feature fusion and anchor design are helpful for improving the performance of small target detection. • Soft non-maximum suppression (NMS) is used to assign a lower score for redundant prediction boxes, thereby reducing the missed detection rate and improving the recall rate of densely arranged ships. The detection accuracy is improved compared to the traditional NMS. Our proposed approach can achieve better performance in terms of detection accuracy and inference speed for ship detection in optical remote sensing images compared with previous works. The CFF-SDN model is very robust under different disturbances such as fogs, islands, clouds, sea waves, etc. The rest of this paper is organized as follows: we state the framework of our ship detection model based on convolutional feature fusion in Section 2, and the experimental results based on the DSDR dataset are presented in Section 3. In Section 4, we discuss the advantage of the model and the measures to suppress false alarms. Finally, the conclusions are provided in Section 5. Dataset The dataset for ship detection in remote-sensing images (DSDR) was collected from Google Earth and aerial remote sensing images, including images of multiple spectra such as visible light images and infrared images. The DSDR dataset contains ships in different sea environments. In the dataset, there are 1884 optical remote sensing images, including 4819 ships with different sizes. The average number of ships per image is 2.56. Some optical remote sensing images in the DSDR dataset are shown in Figure 2. Figure 2i is an infrared image. We can see that the background of the ships is particularly complex, including islands, clouds, and sea clutter, etc. The ships in Figure 2a,b are surrounded or blocked by clouds, the ship in Figure 2c has an island nearby, and the ripples in Figure 2d-f will affect the detection of ships. Due to the occlusion of surrounding obstacles in Figure 2e, the shadow around the ship will also increase the difficulty of ship detection. Figure 2f shows a ship docked at the port. Figure 2g-i illustrate ship images from different perspectives. We divide the DSDR dataset into three parts-the training set, the validation set and the test set-in the proportion 6:2:2. The division of the DSDR dataset is shown in Table 1.
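One simple way to realize such a 6:2:2 split is sketched below; scikit-learn is assumed, and the file list is a placeholder rather than the actual DSDR contents.

```python
from sklearn.model_selection import train_test_split

images = [f"img_{i:04d}.jpg" for i in range(1884)]  # 1884 images in DSDR

# First carve off the 60% training set, then split the remainder in half.
train, rest = train_test_split(images, train_size=0.6, random_state=42)
val, test = train_test_split(rest, train_size=0.5, random_state=42)
print(len(train), len(val), len(test))  # ~1130 / 377 / 377
```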
In this paper, we use the image annotation tool LabelImg (https://github.com/tzutalin/labelImg) to annotate the ship's ground-truth boxes in each image manually. LabelImg is the most widely used image annotation tool for making your own dataset. After an image is annotated, a .txt file is generated, which contains the category of the target, the position of the center point of the target, as well as the width and height of the target. The labeling example of the ship's ground-truth boxes is shown in Figure 3. The image data in the training set and validation set, together with the .txt files generated after annotation, are the input data for model training. Data Augmentation To prevent the model from overfitting and to increase the detection accuracy of ship targets, we performed data augmentation for the images in the training set. In the case of limited detection data, data augmentation strategies can increase the diversity of training samples and improve the robustness of the model. In this paper, we use horizontal flipping, vertical flipping, random rotation, random scaling, and random cropping or expansion to enrich the training samples. Color jittering is also applied to ship images, including the adjustment of contrast, brightness, saturation and hue. The image augmentation of the training set is shown in Figure 4. Because aerial images are difficult to acquire, the number of aerial images is much smaller than that of satellite images. The detection of ships in aerial images is more difficult than in satellite images, because satellite images are mostly taken from a vertical angle of view, whereas aerial images have a wide range of azimuth and pitch angles for ship reconnaissance, and the characteristics of the ship will vary greatly with the angle of view. We propose an affine transformation method, which enables satellite images to be expanded to images with different viewing angles. The images from different perspectives produced by the affine transformation of satellite remote sensing images are shown in Figure 5; it can be seen that the perspective of the ship has changed, similar to that in aerial remote sensing images.
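A minimal OpenCV sketch of the kinds of augmentations and affine view changes just described is given below. All parameter ranges and the input filename are illustrative, not the settings used in this paper.

```python
import cv2
import numpy as np

img = cv2.imread("ship.jpg")  # hypothetical training image
h, w = img.shape[:2]

flipped = cv2.flip(img, 1)  # horizontal flip (use 0 for vertical)

# Random rotation about the image center
M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), float(np.random.uniform(-30, 30)), 1.0)
rotated = cv2.warpAffine(img, M_rot, (w, h))

# Affine transform to mimic an oblique (aerial-like) viewing angle
src = np.float32([[0, 0], [w, 0], [0, h]])
dst = np.float32([[0.1 * w, 0.1 * h], [0.9 * w, 0], [0.2 * w, h]])
skewed = cv2.warpAffine(img, cv2.getAffineTransform(src, dst), (w, h))

# Simple color jitter: scale contrast (alpha) and shift brightness (beta)
jittered = cv2.convertScaleAbs(img, alpha=float(np.random.uniform(0.8, 1.2)),
                               beta=float(np.random.uniform(-20, 20)))
```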
Atmospheric Correction Atmospheric correction is a serious problem for ship detection in the sea environment and it cannot be ignored. Atmospheric correction can reduce the influence of atmospheric scattering and improve the accuracy of ship detection. Since we do not have atmospheric parameters such as atmospheric water vapor concentration and spectral data from when the images were taken, we cannot use moderate resolution atmospheric transmission (MODTRAN) or fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) models to perform image correction on remote sensing images based on real-time atmospheric parameters. It is difficult for us to perform atmospheric corrections for different atmospheric conditions. We adopt a method based on the dark channel prior to solve the atmospheric correction on the sea scenes. The images of sea scenes are usually degraded by the medium in the atmosphere, such as particles and water droplets.
Since the amount of scattering depends on the distance from the scene point to the satellite or aircraft platform, the degradation varies with space. He [32] used the dark channel prior theory to remove haze from images. Inspired by this theory, we use the dark channel prior to remove the influence of the absorption and scattering caused by water vapor and particles in the atmosphere. The image quality is greatly improved by atmospheric correction.

The atmospheric scattering model is based on the assumption that suspended particles are uniformly distributed in the atmosphere. The formula is:

I(x) = J(x)\,t(x) + A\,(1 - t(x)) \quad (1)

where I represents the light intensity of the image, J represents the scene radiance, A is the global atmospheric light, and t represents the portion of the light that is not scattered and reaches the image sensor. The goal of atmospheric correction is to recover J, A, and t from I. When the atmosphere is homogeneous, the transmission t can be expressed as:

t(x) = e^{-\beta d(x)} \quad (2)

where β is the scattering coefficient of the atmosphere and d is the scene depth. The dark channel prior is based on a basic assumption: in most non-sky patches, at least one channel has very low intensity at some pixels. Based on this assumption, for an input image J, the dark channel is defined as:

J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c} J^{c}(y) \right) \quad (3)

where J^c is a color channel of J and Ω(x) is a local patch centered at x. J^{dark} is the dark channel of J, and its intensity tends to be zero if J is an image without atmospheric absorption and scattering. The above observation is called the dark channel prior. The estimate of the transmittance is described as:

\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^{c}(y)}{A^{c}} \right) \quad (4)

The layering of the image needs to be considered, so the parameter λ is introduced to correct the transmittance:

\tilde{t}(x) = 1 - \lambda \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^{c}(y)}{A^{c}} \right) \quad (5)

Substituting Formula (5) into Formula (1) gives the final image:

J(x) = \frac{I(x) - A}{\max(t(x), t_{0})} + A \quad (6)

The 0.1% of pixels with the largest brightness in the dark channel image are used to estimate the atmospheric light intensity A: the maximum value of these pixels in the original image is taken as the estimate of A. Because J becomes too large when t(x) is close to 0, biasing the overall image toward white, we set a threshold for t(x) and fix its minimum value at t_0 = 0.1.

The atmospheric correction effect on satellite and aerial remote sensing images is shown in Figure 6. It can be seen that the atmospheric correction method based on the dark channel prior effectively reduces the influence of atmospheric absorption and scattering on remote sensing images. After atmospheric correction, the ships in the remote sensing images are clearer and their color fidelity is higher. The correction is effective for both satellite and aerial remote sensing images, and correcting atmospheric absorption and scattering helps improve the accuracy of ship detection.
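To make the procedure concrete, here is a minimal NumPy/OpenCV sketch of the dark-channel-prior correction corresponding to Eqs. (3)-(6). The patch size, the value of λ, and the exact way the atmospheric light A is read from the brightest dark-channel pixels follow common dehazing practice and should be read as assumptions rather than as the parameters used in this paper.

```python
import cv2
import numpy as np


def dark_channel(img, patch=15):
    """Eq. (3): local minimum over a patch of the per-pixel channel minimum."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)  # erosion = local min filter


def atmospheric_correct(image_bgr, lam=0.95, t_min=0.1, patch=15):
    """Dark-channel-prior atmospheric correction following Eqs. (3)-(6)."""
    I = image_bgr.astype(np.float64) / 255.0
    dark = dark_channel(I, patch)

    # Atmospheric light A: brightest 0.1% of dark-channel pixels, read from I.
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = I.reshape(-1, 3)[idx].max(axis=0)

    # Eq. (5): transmission estimate with correction factor lambda.
    t = 1.0 - lam * dark_channel(I / A, patch)
    t = np.maximum(t, t_min)  # clamp t away from 0, as for Eq. (6)

    # Eq. (6): recover the scene radiance J.
    J = (I - A) / t[..., None] + A
    return np.clip(J * 255.0, 0.0, 255.0).astype(np.uint8)
```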
Detailed Description of the Network Architecture CFF-SDN

The architecture of our proposed ship detection system for optical remote sensing images is shown in Figure 7. The input images to be detected are resized to 416 × 416, and the number of image channels is 3. CFF-SDN is mainly composed of a backbone network and a convolutional feature fusion network. The backbone includes residual blocks and convolutional blocks, which are used to extract the shallow features and semantic features of ship targets. The convolutional feature fusion network outputs three feature maps of different sizes. The 52 × 52 feature map corresponds to shallow features, into which the deep semantic information of the 26 × 26 and 13 × 13 feature maps is merged; scale 1 has a small receptive field and is suitable for detecting small ships. Scale 2 is used for detecting medium ships; its 26 × 26 feature map incorporates the semantic information obtained by upsampling the 13 × 13 feature map. The 13 × 13 feature map has a large receptive field, extracts deep features, and has rich semantic information; scale 3 is suitable for detecting large-scale ship targets. In summary, the residual blocks and convolutional blocks extract the features of ship targets, and in the convolutional feature fusion network, scale 1 detects small ships, scale 2 detects medium ships, and scale 3 detects large ships.

Feature Extraction Network

The basic unit of the feature extraction network is DBL, which is composed of three layers: darknet convolution, batch normalization (BN) and Leaky ReLU. DBL stands for darknet convolution + BN + Leaky ReLU.
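Sketched below in PyTorch are the DBL unit and the residual block discussed next; the 0.1 Leaky ReLU slope and the halved bottleneck width follow the darknet convention and are assumptions on our part.

```python
import torch.nn as nn


def dbl(in_ch, out_ch, kernel=3, stride=1):
    """DBL unit: darknet convolution + batch normalization + Leaky ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )


class ResidualBlock(nn.Module):
    """Residual block: a 1x1 then 3x3 DBL body plus a shortcut branch, so the
    network fits the residual mapping rather than the mapping itself."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            dbl(channels, channels // 2, kernel=1),
            dbl(channels // 2, channels, kernel=3),
        )

    def forward(self, x):
        return x + self.body(x)  # shortcut branch
```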
The feature extraction network uses residual connections in the backbone, inspired by the residual network. The residual structure alleviates the problem of gradient disappearance in model training [33], so the convolutional neural network can be stacked very deep, and thanks to the residual connections our model converges more easily. By introducing a shortcut branch to the residual block, the network fits the residual mapping instead of fitting the mapping directly; compared with optimizing the mapping directly, the residual mapping is easier to optimize. The batch normalization layer reshapes the data distribution to prevent the parameters from falling into the saturation zone and makes the network easier to converge during training. The leaky rectified linear unit (Leaky ReLU) is the activation function of the feature extraction network.

Convolutional Feature Fusion

Because the shooting distance of aerial remote sensing images varies, the size of ship targets differs, and ships of different scales can appear in the same reconnaissance field. Therefore, our ship detection method is required to be scale invariant. Inspired by feature pyramid networks (FPN) [34] and SSD, the convolutional feature fusion structure fuses shallow convolutional features and deep convolutional features, generating three kinds of fused ship target features: fusion feature 1, fusion feature 2 and fusion feature 3. Figure 8 shows the structure of the convolutional feature fusion. As shown in Figure 8, if the size of the input image is W × W, the sizes of the fused convolutional features are W/8, W/16 and W/32. The deep convolutional features need to be upsampled before fusion with the shallow features. The concatenation operation uses channel fusion instead of element-level fusion as in the FPN algorithm. Fusing different levels of convolutional features makes better use of the fine-grained features and semantic features of ships, achieving multi-scale detection.

CFF-SDN uses multi-scale convolutional feature fusion, which is very effective for the detection of small objects such as ships in remote sensing images, and performs detection at three different scales. The feature maps in our proposed model combine fine-grained information from shallow layers and semantic information from deep layers. Fine-grained information contains more detailed features of ships, which is very conducive to the detection of small targets. This network structure allows the network to use fused features for detection, which greatly improves the accuracy of small target detection. A sketch of the fusion step is given below.
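The fusion step can be sketched as follows in PyTorch: the deeper map is reduced with a 1 × 1 convolution, upsampled by a factor of two, and concatenated with the shallower map along the channel axis (channel fusion rather than the element-wise addition of FPN). The channel counts in the usage comment are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FuseUp(nn.Module):
    """Upsample a deep feature map 2x and concatenate it with a shallow one."""

    def __init__(self, deep_ch, reduced_ch):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, reduced_ch, kernel_size=1)

    def forward(self, deep, shallow):
        up = F.interpolate(self.reduce(deep), scale_factor=2, mode="nearest")
        return torch.cat([shallow, up], dim=1)  # channel fusion


# e.g. merging a 13x13 map into a 26x26 map for a 416x416 input
# (the channel counts here are only placeholders):
# fuse = FuseUp(deep_ch=1024, reduced_ch=256)
# fused_26 = fuse(feat_13, feat_26)  # -> 26x26, C_shallow + 256 channels
```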
The anchor design in this paper is inspired by YOLOv3 but differs from it considerably: in YOLOv3, each grid cell of the detection layers has three anchors of different sizes. There is only one type of detection target in this paper, namely ships on the sea, and the ships in remote sensing images are mostly small and medium in size. The CFF-SDN model has three kinds of fusion features and performs prediction three times. The first prediction has a large receptive field, and two anchor boxes are allocated to it; the second prediction has a medium receptive field, and three anchor boxes are allocated to it; the third prediction has a small receptive field, and four anchor boxes are allocated to it. The anchor design of the CFF-SDN model is shown in Table 2. The dense anchor boxes can effectively improve the recall rate of the network, which is conducive to the detection of small ships. We use the k-means clustering algorithm to cluster the ship sizes of the DSDR dataset, and nine anchor boxes of preset sizes are generated for classification and bounding box regression. Because the ship targets in the images are mostly small and medium, increasing the number of prior boxes allocated at the network depths responsible for small targets improves the detection accuracy and performance for small targets.
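Anchor clustering of this kind is usually done with k-means under a d = 1 − IoU distance, so that box shape rather than absolute scale drives the clusters; the sketch below follows that convention, which is an assumption since the paper does not state its distance metric.

```python
import numpy as np


def iou_wh(wh, centers):
    """IoU between (w, h) pairs and cluster centers, aligned at a shared corner."""
    inter = (np.minimum(wh[:, None, 0], centers[None, :, 0])
             * np.minimum(wh[:, None, 1], centers[None, :, 1]))
    union = (wh[:, 0] * wh[:, 1])[:, None] + centers[:, 0] * centers[:, 1] - inter
    return inter / union


def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchors using d = 1 - IoU."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)].astype(np.float64)
    for _ in range(iters):
        assign = np.argmax(iou_wh(wh, centers), axis=1)  # highest IoU = nearest
        for j in range(k):
            members = wh[assign == j]
            if len(members):
                centers[j] = np.median(members, axis=0)  # robust center update
    return centers[np.argsort(centers.prod(axis=1))]  # sort anchors by area
```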
Soft NMS

Non-maximum suppression (NMS) plays a very important role in the fields of target tracking and object detection. NMS is an algorithm designed to remove duplicate prediction boxes, which can effectively improve the detection performance for ship targets. With the assistance of NMS, we select the prediction box with the highest score in a neighborhood and suppress the prediction boxes with lower scores. The processing of NMS depends on the adjustment of the intersection over union (IOU) threshold. IOU is the ratio of the intersection to the union of two boxes and ranges from 0 to 1. Figure 9 shows the IOU between a prediction box (drawn in green) and a ground-truth box (drawn in red).

However, the major problem of NMS is that it crudely eliminates the prediction boxes that do not have the highest score. In optical remote sensing images taken during ocean surveillance, especially in areas near ports or when a fleet performs joint missions, a ship may be surrounded, or even obscured, by nearby ships. Therefore, as shown in Figure 10, the prediction boxes of nearby ships may exceed the preset overlap threshold. As a result, a ship's prediction box will be suppressed, causing the loss of ship targets. This situation makes the missed detection rate very high, affecting the mean average precision.

To solve this problem, soft NMS is used to remove redundant prediction boxes. Unlike traditional NMS, soft NMS does not directly zero the scores of highly overlapping detections; instead, it assigns them a lower score, so the ship targets in those prediction boxes can still be detected. Soft NMS is denoted as follows:

s_{i} = \begin{cases} s_{i}, & IOU(b_{h}, b_{i}) < N_{t} \\ s_{i}\,\left(1 - IOU(b_{h}, b_{i})\right), & IOU(b_{h}, b_{i}) \ge N_{t} \end{cases}

where s_i is the detection score; b_h represents the prediction box with the highest score; b_i represents the other prediction boxes; IOU(b_h, b_i) is the intersection over union between the prediction boxes b_h and b_i; and N_t represents the IOU threshold. The implementation of soft NMS is shown in Figure 11.
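A small Python sketch of the linear soft NMS rule above follows: the score decays by a factor of 1 − IOU once the overlap with the current best box reaches N_t. The stopping score threshold is an assumption added for practicality.

```python
import numpy as np


def box_iou(a, b):
    """IOU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def soft_nms(boxes, scores, nt=0.5, score_min=0.001):
    """Linear soft NMS: decay, rather than zero, the scores of boxes that
    overlap the current highest-scoring box b_h by IOU >= N_t."""
    scores = scores.astype(np.float64).copy()
    remaining = list(range(len(scores)))
    keep = []
    while remaining:
        h = max(remaining, key=lambda i: scores[i])  # b_h: best remaining box
        if scores[h] < score_min:  # everything left is negligible
            break
        keep.append(h)
        remaining.remove(h)
        for i in remaining:
            iou = box_iou(boxes[h], boxes[i])
            if iou >= nt:
                scores[i] *= 1.0 - iou  # s_i <- s_i * (1 - IOU(b_h, b_i))
    return keep
```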
Loss Function

CFF-SDN is an end-to-end model whose output provides the localization, category, and confidence of each prediction box. The total loss is divided into three parts, the localization loss, the classification loss and the confidence loss, and is expressed as:

Loss = \lambda_{loc} L_{loc} + \lambda_{clc} L_{cls} + \lambda_{conf} L_{conf} \quad (7)

where \lambda_{loc}, \lambda_{clc} and \lambda_{conf} are the weights of the different losses. In CFF-SDN, only one anchor box is responsible for predicting the object within a ground-truth box. The localization loss of the prediction box contains the loss for the location of the center point and the loss for the width and height of the anchor box, and is defined as:

L_{loc} = \sum_{i} \sum_{j} l^{ship}_{ij} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right] \quad (8)

where l^{ship}_{ij} denotes whether anchor box j of grid cell i contains a ship: it is set to 1 if the anchor box contains a ship, and to 0 otherwise. When an anchor box is responsible for a ground-truth object, it also incurs a classification loss, which is defined as:

L_{cls} = \sum_{i} \sum_{j} l^{ship}_{ij} \sum_{c} \left( p_i(c) - \hat{p}_i(c) \right)^2 \quad (9)

The confidence loss consists of two parts: the confidence loss when the anchor includes a ship and the confidence loss when it does not. The weight of the confidence loss when the anchor does not include a ship needs to be appropriately reduced, so \lambda_{noship} < 1. The confidence loss is expressed as:

L_{conf} = \sum_{i} \sum_{j} l^{ship}_{ij} (C_i - \hat{C}_i)^2 + \lambda_{noship} \sum_{i} \sum_{j} (1 - l^{ship}_{ij}) (C_i - \hat{C}_i)^2 \quad (10)

Model Pruning

Although large network structures have strong representation power, they consume many resources and reduce the detection speed. In this paper, a method is proposed to prune the model: the channels with small scaling factors in the trained network are removed. Channel-wise sparsity is applied to the optimization objective, so the channel pruning process is very smooth, and the removal of redundant channels does not affect the accuracy; after pruning, we obtain a compact model with considerable accuracy. Figure 12 illustrates how the CFF-SDN model is compressed by pruning: during training, the model automatically recognizes unimportant channels; the channels with small scaling factors are pruned; after pruning, the model is more compact, occupies less memory and runs faster, without loss of accuracy.

A scaling factor is introduced for each channel of the network and is multiplied with the output of that channel. We then train the network weights and these scaling factors together and apply sparse regularization to the factors. Finally, we prune the channels with small scaling factors. The training objective of our method is defined by:

L = \sum_{(x, y)} l\left( f(x, W), y \right) + \lambda \sum_{\gamma \in \Gamma} g(\gamma) \quad (11)

where (x, y) represents a training input and its target output, W represents the model weights, the first term is the normal training loss of the model, g(\cdot) is a sparsity-inducing penalty on the scaling factors \gamma, and \lambda balances the two terms. When a channel is pruned, we remove all of its input and output connections, so that we obtain a slim network. The pruned network can significantly reduce the inference time at runtime.
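The training objective in Eq. (11) matches the network slimming recipe, in which the per-channel scaling factors are typically the batch normalization weights and g is an L1 penalty; the sketch below makes those two standard choices explicit as assumptions rather than facts about this paper's implementation.

```python
import torch
import torch.nn as nn


def sparsity_penalty(model, lam=1e-4):
    """Second term of Eq. (11): lambda * sum of g(gamma) with g = |.| (L1),
    taken over the per-channel scaling factors (here assumed to be BN gammas)."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules()
                     if isinstance(m, nn.BatchNorm2d))


def prune_threshold(model, fraction=0.3):
    """Global threshold below which a channel's scaling factor is considered
    unimportant; channels under it are candidates for removal."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    return torch.quantile(gammas, fraction)


# One training step then looks like:
#   loss = task_loss(model(x), y) + sparsity_penalty(model)  # Eq. (11)
#   loss.backward(); optimizer.step()
```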
Model Training

We trained the CFF-SDN model on the DSDR dataset, which contains optical remote sensing images in which ships have different sizes and orientations. Due to the diversity of the dataset, the model generalizes well on the test set and is very robust to other scenarios. We trained our deep learning model CFF-SDN on the training set and validation set; the training parameters for the CFF-SDN model are listed in Table 3.

To be fair, all experiments were conducted on the same platform. The models were trained and tested on a PC with an Intel Xeon E5-2678 v3 @ 2.5 GHz × 12 and 32 GB of RAM; the GPU was an NVIDIA RTX 2080Ti with 11 GB of memory, using CUDA 10.0. The operating system was 64-bit Ubuntu 18.04. Our experiments were performed on the PyCharm [35] software development platform, with the Python 3.6 language. Some ship detection results on the DSDR dataset are displayed in Figure 13, which shows the performance of our model on the test set.

Most of the ships on the sea are small targets, and the CFF-SDN ship detection model is especially designed for the detection of small targets. CFF-SDN uses multi-scale convolutional feature fusion, which is very effective for the detection of small objects such as ships in remote sensing images. Our model uses the k-means clustering algorithm to cluster the ship sizes of the DSDR dataset, and the number of prior boxes allocated at the network depths responsible for small targets is increased to improve the detection accuracy and performance for small targets. Our model achieved good results for the detection of small targets in remote sensing images. The small target detection results of the CFF-SDN model on the DSDR dataset are displayed in Figure 14. It can be seen from Figure 14 that even for small targets smaller than 7 × 7 pixels, our model can detect and recognize ships very well.
It can also be found from Figure 14 that our model reliably detects small ships in different directions and attitudes.

Model Evaluation

To evaluate the overall performance of our model, we use the precision, recall, F1 score and mean average precision (mAP) to quantitatively analyze the performance of our proposed ship detection model. Precision is the ratio of true positives among all prediction boxes, i.e., the ratio of detected true ships to all targets detected by the model. Recall is the ratio of correctly detected ships to the number of all ground-truth samples. For ship detection, high precision and high recall are both very important; however, the two indicators sometimes contradict each other (when recall is high, precision tends to be low, and vice versa), so we need to consider them jointly. The F1 score is a comprehensive reflection of precision and recall, being their weighted harmonic average. Precision, recall and F1 score are defined as follows:

Precision = \frac{TP}{TP + FP} \quad (12)

Recall = \frac{TP}{TP + FN} \quad (13)

F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \quad (14)

where TP is the number of true positives (a detected ship is actually a ship target), FP is the number of false positives (a ship is detected, but the ground truth is not a ship), and FN is the number of false negatives (a ship is not detected, but the ground truth is a ship) [36]. mAP comprehensively considers precision at different recalls and has no preference for either; it represents the area under the precision-recall curve and reflects the global performance of a model, defined as:

mAP = \int_{0}^{1} P(R)\, dR \quad (15)
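For reference, a short Python sketch of Eqs. (12)-(15) follows; the AP routine uses the usual monotone precision envelope before integrating over recall, which is the standard convention rather than something specified in the paper.

```python
import numpy as np


def precision_recall_f1(tp, fp, fn):
    """Eqs. (12)-(14) from counts of true/false positives and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


def average_precision(precisions, recalls):
    """Eq. (15): area under the precision-recall curve."""
    order = np.argsort(recalls)
    r = np.concatenate(([0.0], np.asarray(recalls, float)[order], [1.0]))
    p = np.concatenate(([0.0], np.asarray(precisions, float)[order], [0.0]))
    for i in range(len(p) - 2, -1, -1):  # enforce a decreasing envelope
        p[i] = max(p[i], p[i + 1])
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))
```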
Comparison with Other Methods

CFF-SDN adopts a convolutional feature fusion network that combines multi-layer ship features: the feature maps in our proposed model combine fine-grained information from shallow layers and semantic information from deep layers. Therefore, the CFF-SDN model is well suited to the detection of multi-scale ships. At the same time, CFF-SDN can solve the problem of adjacent ship detection through soft NMS. To verify the superiority of the proposed method, we compare the performance of our model with other state-of-the-art natural image object detection frameworks, such as the Faster Regions Convolutional Neural Network (Faster R-CNN), the Single Shot MultiBox Detector (SSD) and You Only Look Once v3 (YOLOv3). The size of the input images is uniformly scaled to 416 × 416.

Figure 15 shows the detection results of different models on satellite remote sensing images, including Faster R-CNN, SSD, YOLOv3, and our proposed method, CFF-SDN. In Figure 15, the first row is affected by flare near the ship, leading Faster R-CNN to generate a false alarm. The scale of the ship target in the second row is relatively large, almost filling the entire image, making the ship difficult to detect and locate: SSD mistakes a wave for a ship, resulting in a false alarm, while YOLOv3 fails to detect the ship, causing a missed detection; our proposed model CFF-SDN and Faster R-CNN detect the ship, although the localization is not very accurate. In the third row, the scale of the ships varies greatly and some ships resemble the background; YOLOv3 does not detect the ship that resembles the background. As can be seen from the fourth row, SSD is seriously disturbed by cloud. In the fifth row, the docking facility interferes with detection, causing YOLOv3 to mistake the port facility for a ship and generate a false alarm. In the sixth row, the detection and localization of every model are very good under a simple background. Compared with the other models, our proposed CFF-SDN achieves better performance because it adopts a convolutional feature fusion network, uses multiple feature maps of different scales for detection and regression, and simultaneously uses multiple data augmentation strategies. These measures allow the model to detect ships at multiple scales and to suppress the interference caused by clouds, landing facilities, ripples and flares. The experimental results show that our model is very robust to various environments and interferences.

Figure 16 shows the detection results of different models on aerial remote sensing images. The first and second rows are the detection results for visible-light remote sensing images; the SNR of these images is quite low, which is very common due to the influence of water vapor near the sea. In the first row, disturbed by the wake of the sailing ship, SSD generates redundant detections, and YOLOv3 does not give the precise location of the ship. The last two rows are detection results for aerial infrared remote sensing images; because the background is relatively simple, every model obtains good classification and localization. The experiments show that the CFF-SDN model can detect ships well in remote sensing images of different spectra. By using the affine transformation to enrich the dataset with ships seen from different perspectives, the model detects ships from different perspectives in aerial remote sensing images well.
For each detection framework in the comparison, we trained on the DSDR dataset and calculated the mAP of the detection results. The mean average precisions of the different models are shown in Table 4. As shown in Table 4, Faster R-CNN is a two-stage detection framework with a Region Proposal Network (RPN) before class prediction and object localization, so it has a higher mAP than SSD and YOLOv3. Rotation Dense Feature Pyramid Networks (R-DFPN) [37] is also a two-stage object detector, similar to Faster R-CNN. R-DFPN adopts rotated anchors to avoid the side effects of NMS and to overcome the difficulty of detecting densely arranged targets. However, complex scenes (such as a port or naval base) often contain objects with aspect ratios similar to ships, such as roofs, container piles and docks, and such disturbances cause false alarms for R-DFPN. Therefore, the F1 score of R-DFPN is only 89.6%, which is lower than that of our one-stage object detector CFF-SDN. The squeeze and excitation rank Faster R-CNN (SER Faster R-CNN) [38] is designed to improve ship detection performance in SAR images, building on Faster R-CNN with a squeeze-and-excitation strategy, and extracts multiscale information based on the VGG network. The F1 score of SER Faster R-CNN is 83.6%, and the F1 score of the CFF-SDN model is 7.7% higher. Because SER Faster R-CNN is a two-stage object detector, its speed is relatively slow, with an inference time of 250 ms. Although SSD outputs several feature maps from different layers for multi-scale detection, the information in a single-layer feature map is limited, so its accuracy is not very high.
Improvements of SSD, such as FA-SSD, introduce feature fusion and attention modules to improve small target detection performance [20], but because there is only one detection layer, the accuracy is still not very high for ship detection in remote sensing images. ScratchDet [39] is another improved SSD method: it integrates batch normalization to help the detector converge well and can train SSD from scratch without pre-trained weights. ScratchDet proposed the Root-ResNet backbone network, which achieves higher accuracy than SSD, but its training time is 2.8 times that of SSD, and its inference time of 37 ms is much higher than that of our CFF-SDN model. CFF-SDN uses data augmentation strategies to enrich the scale, perspective and color information of ships and uses fused convolutional features from different layers for detection, giving the CFF-SDN model the highest mAP among these algorithms.

By changing the confidence threshold from 0 to 1, we obtain different evaluation results. Figure 17 shows the precision-recall curves of the different models for ship detection in optical remote sensing images. A precision-recall curve that lies further up and to the right indicates better ship detection performance. The precision-recall curve of the CFF-SDN model is clearly above the other curves; therefore, the ship detection model CFF-SDN proposed in this paper performs better than Faster R-CNN, SSD and YOLOv3.

Table 5 shows the time cost of ship detection for the different models. Because Faster R-CNN is a two-stage method, it spends a lot of time generating regions of interest (ROIs), so its detection speed is the slowest of these methods.
The SR network with Faster R-CNN yielded very good results for small objects in satellite imagery, but its detection speed is slow [23], so it is difficult to deploy in engineering applications. SSD is a one-stage multibox detector and takes 61 ms; YOLOv3 is also a one-stage model and takes 22 ms. Since the CFF-SDN model uses a pruning strategy, it takes only 9.4 ms, the least of these methods. The removal of redundant channels does not affect the accuracy, and the slimming of the network reduces the inference time, so the proposed model pruning method speeds up detection without reducing accuracy. Before model pruning, the mAP of the ship detection model is 91.508% and the average inference time is 20 ms. After model pruning, the mAP of the CFF-SDN model is slightly higher, at 91.51%; a fluctuation of mAP up or down by 0.002% is normal in our experiments, so the mAP after pruning can be considered essentially the same as with normal training. As pruning slims the network, the average inference time improves by 10.6 ms.

Effect of Data Preprocessing

The data preprocessing in our model includes data augmentation and atmospheric correction. Data augmentation methods for remote sensing images were introduced to prevent the model from overfitting and to increase detection accuracy. Horizontal flipping, vertical flipping, random rotation, random scaling, and random cropping or expansion are used to enrich the training samples, and color jittering adjusts the contrast, brightness, saturation and hue of the ship images. An affine transformation method is also proposed, which expands satellite images into images with different viewing angles. The atmospheric correction method based on the dark channel prior effectively reduces the influence of atmospheric absorption and scattering on remote sensing images; after atmospheric correction, the ships in the remote sensing images are clearer and their color fidelity is higher. The correction of atmospheric absorption and scattering helps improve the accuracy of ship detection.

We evaluated the impact of data preprocessing on the performance of the CFF-SDN model. The size of the input images was uniformly scaled to 416 × 416. Table 6 shows the effect of data augmentation and atmospheric correction on the CFF-SDN model.
The mAP of CFF-SDN with data augmentation was 90.42%, while the mAP of the CFF-SDN model without data augmentation was 88.84%; data augmentation thus improves the mAP by 1.58%. The mAP of CFF-SDN with both atmospheric correction and data augmentation was 91.51%, so atmospheric correction improves the mAP by a further 1.09%. Figure 18 shows the precision-recall curves of the CFF-SDN model with and without data preprocessing. The precision-recall curve of the model with augmentation is clearly higher than that of the model without augmentation, and the curve with both image augmentation and atmospheric correction is the highest, lying closest to the upper right. This means that data augmentation and atmospheric correction help improve the accuracy of ship detection.

Performance Comparison of Different Image Sizes

We evaluated the impact of different image sizes on the performance of the CFF-SDN model. To obtain images of different sizes, we resized the remote sensing images in the DSDR dataset to 320 × 320, 512 × 512 and 640 × 640; the mAP of the CFF-SDN ship detection model was 88.61%, 92.44% and 93.25%, respectively. Table 7 shows the performance of CFF-SDN for the different image sizes. In general, as the image size increases, the detection performance of the CFF-SDN model improves to a certain extent. However, the computational complexity of the model also increases: the billions of floating point operations (BFLOPs) increased from 5.7 to 22.7 as the image width and height increased from 320 to 640. When larger images need to be detected, the CFF-SDN model therefore requires a greater inference time than for small images. Figure 19 shows the precision-recall curves of the CFF-SDN model for the different image sizes; the curve for 640 is clearly higher than the others. This means that the larger the image size, the higher the accuracy of ship detection. In engineering applications, the appropriate input image size can be selected according to the required detection accuracy and the allowable detection speed.
Discussion

Through comprehensive analysis and comparison with other models, our proposed CFF-SDN model was shown to be effective for ship detection in optical remote sensing images. The multi-layer convolutional feature fusion method proposed here enhances both fine-grained information and semantic information, and the experiments show that our model has excellent performance in terms of detection accuracy and speed.

The CFF-SDN model fuses fine-grained information from shallow layers with semantic information from deep layers. This network architecture is very beneficial for the detection of small objects, such as ships, in remote sensing images. Due to the use of fused feature maps for regression and classification, the CFF-SDN model adapts well to the multi-scale changes of ships. Table 4 shows that the CFF-SDN model achieves better performance than the other object detectors.

Various data augmentation strategies are important measures for improving detection accuracy.
Innovatively, the affine transformation was used to change the perspective of satellite remote sensing images. As shown in Figure 5, a satellite image after affine transformation is very similar to aerial remote sensing images taken from different perspectives. Using the abundant satellite remote sensing images to improve the detection accuracy on aerial remote sensing images plays an important role in improving the overall detection accuracy.

As ships are often densely arranged on the sea, as shown in Figure 10, we use soft NMS instead of traditional non-maximum suppression to suppress redundant prediction boxes. This increases the probability that closely arranged ships are detected, effectively improves the recall rate of the model, and reduces missed detections. Since our model adopts a model pruning strategy, the CFF-SDN model has lower computational complexity. As shown in Table 5, our proposed model has a faster detection speed than the other compared models and is thus better suited to migration to an embedded platform for real-time ship detection in engineering applications.

Many groups of experiments verify that the CFF-SDN ship detection model achieves high detection accuracy, as shown by the precision-recall curves in Figure 17. However, ships sometimes sail in complex scenes, and the shapes and textures of interfering objects (such as islands and clouds) can change considerably; sometimes the shape, color and texture of clouds or islands are very similar to those of ships. These disturbances can cause false alarms in the detector, as shown in Figure 20. Although CFF-SDN fully reuses feature information by fusing features from different layers, this is still not enough to eliminate all false alarms.

Both the training set and the test set contain harbor images, in which ship detection is interfered with by the land.
The ship detection results for harbor images containing land are shown in Figure 21. The CFF-SDN model can detect ships in the harbor, and although the model does not appear to be overfitting, the detection effect in harbor images is not as good as in open-sea images. The ships near the shore in Figure 21a-c are well detected; three ships were detected in Figure 21d, but one ship docked on the shore was not detected. There are many interferences when detecting ships in a harbor, and the detection effect is lower than for ships on the open sea: the mAP decreases significantly when the trained model is applied to harbor images. Enhancing the robustness of ship detection algorithms in harbors is an important research topic for the future, and we need to collect more harbor images to support the quantitative analysis of ship detection in harbors.

The interferences affecting ship detection differ considerably across datasets. We collected several other datasets, including the vehicle detection in aerial imagery (VEDAI) dataset [40], the dataset for object detection in aerial images (DOTA) [41], and the high-resolution remote sensing detection (HRRSD) dataset [42]. These datasets contain various types of targets, such as airplanes, tractors, ships and trucks. The ship images extracted from these datasets were processed by the CFF-SDN model to detect the ships they contain.
In addition, the number of ships in these datasets is not as high as in our DSDR dataset. The ship detection results of the CFF-SDN model on the other datasets are shown in Figure 22. It can be seen from Figure 22 that the various types of ships in these datasets were detected, and no interfering objects, such as harbor facilities, were mistakenly detected as ships. The ship detection results on the different datasets prove that our model is very robust. However, the detection result on the DOTA dataset in the first row of Figure 22 shows that the localization of the ship in the upper left corner is not accurate enough; in the future, the localization accuracy of the CFF-SDN model on other datasets needs to be improved. Increasing the number of learned categories is a promising solution to this problem: common disturbances such as clouds and islands would be treated as separate categories, so that, in addition to learning the target characteristics of ships, the model also learns the characteristics of common interferers that cause false alarms and can distinguish ships from interference. The fusion of visible and infrared image information may be another way to enhance the recognition capability of the detector, by comprehensively exploiting the interference-suppression properties of different spectral bands to better distinguish ships from false alarms; however, this depends on the linkage of visible and infrared sensors, so that both visible and infrared images of the same scene can be obtained.

Conclusions

In this paper, we proposed an end-to-end ship detection model that can effectively cope with various disturbances in optical remote sensing images, including satellite remote sensing images, visible aerial remote sensing images and infrared aerial remote sensing images. Because our method uses a convolutional feature fusion network and multi-scale feature maps for regression and classification, it can detect ships of different sizes in remote sensing images. Our model uses the affine transformation method, so the CFF-SDN model can detect ships from different perspectives. A dark channel prior is adopted to perform atmospheric correction for sea scenes, removing the influence of the absorption and scattering of water vapor and particles in the atmosphere. Above all, in the feature extraction stage, the convolutional feature extraction network obtains ship features from shallow to deep; in the feature fusion stage, we integrate different levels of ship features through the feature fusion network; finally, soft NMS is applied to suppress redundant predictions. The model outputs the localization, classification and confidence of the ships in the remote sensing images. Since the CFF-SDN model uses a pruning strategy, its detection speed is faster than that of the other compared models. Overall, the mAP of our proposed detection framework was 91.51% at a resolution of 416 × 416, and the average inference time was 9.4 ms. Our model performs well on small target detection and can detect ships as small as 7 × 7 pixels in remote sensing images. The experimental results show that our model is robust, effective and fast, and can be used for the real-time detection of ships.
In our future work, we plan to enrich the aerial remote sensing images in the DSDR dataset to improve training. We also plan to transplant the model to an embedded platform to realize engineering applications of ship detection.

Author Contributions: Y.Z. and L.G. designed the proposed detection model. Y.Z. and F.X. collected the experimental data. Z.W. provided experimental equipment. Y.Z. drafted the manuscript. Y.Y. assisted in the experiment of atmospheric correction. F.X. and X.L. edited the manuscript. L.G. provided guidance to the project, reviewed the manuscript, and obtained funding to support this research. All authors have read and agreed to the published version of the manuscript.
2020-10-28T19:21:12.260Z
2020-10-12T00:00:00.000
{ "year": 2020, "sha1": "bf12178e718842331aef4c64d258027b4ab73eae", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/12/20/3316/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1155bd411cf971816d58e969d8509582ee95c62e", "s2fieldsofstudy": [ "Computer Science", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
3563981
pes2o/s2orc
v3-fos-license
Anti-TNF-α effects on anemia in rheumatoid and psoriatic arthritis

A key role in the pathogenesis of rheumatoid arthritis (RA) and psoriatic arthritis (PsA) is played by inflammatory cytokines, including tumor necrosis factor-α (TNF-α), which are also involved in inducing inflammatory anemia. We followed 67 RA patients and 64 PsA patients for 1 year to evaluate the effects of TNF-α inhibitors on disease activity and on inflammatory anemia. Patients were divided into three treatment groups, according to a randomized assignment to receive therapy with etanercept, adalimumab, or infliximab. Treatment with anti-TNF-α agents resulted in a significant reduction in disease activity score-28 (DAS28) values in both RA and PsA patients, already from the third month of treatment (P = 0.01). In both populations, there was an increase in hemoglobin (HB) levels already after 3 months of treatment (P = 0.001), and HB levels were inversely proportional to disease activity, regardless of the type of medication used. The increased HB values and the reduction of DAS28 values during the observation period suggest the existence of a negative correlation between them in both RA and PsA, regardless of the type of anti-TNF-α agent used. Our data suggest a pleiotropic action of anti-TNF-α agents: the well-known effect on disease activity, and an improvement in inflammatory anemia.

Introduction

Inflammatory cytokines play a crucial role in the pathogenesis of chronic inflammatory diseases such as rheumatoid arthritis (RA) and psoriatic arthritis (PsA). Blocking these agents, such as tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), represents an important therapeutic opportunity against chronic inflammatory diseases, with an inhibitory effect on inflammatory cell recruitment, on migration of circulating leucocytes into inflamed joints, on angiogenesis, and on tissue damage, by decreasing the expression of adhesion molecules and chemokines. 1 Moreover, proinflammatory cytokines, primarily TNF-α and IL-6, play an important role in inducing inflammatory anemia. In particular, IL-6 induces the synthesis of hepcidin, the key regulator of iron metabolism in anemia of chronic diseases (ACD), which is involved in reducing intestinal iron absorption and blocking iron release from deposits; 2 TNF-α, by contrast, is probably involved in altering erythropoiesis. 3 Thus, anemia is an important extra-articular manifestation of RA and PsA, correlated with physical disability and increased mortality. 4,5 Since the cytokines TNF-α and IL-6 are involved in the pathogenesis of ACD, the use of biotechnological drugs, such as tocilizumab (a humanized monoclonal antibody directed against the IL-6 receptor) and inhibitors of TNF-α, could potentially lead to increased levels of hemoglobin (HB). In fact, in several studies in patients with RA and other chronic inflammatory diseases, increased HB levels were observed after treatment with inhibitors of IL-6 and TNF-α. 6,7 This study aims to evaluate the effects of three different TNF-α inhibitors, etanercept (ETN), adalimumab (ADA), and infliximab (IFX), in terms of clinical efficacy and effects on HB, iron, and ferritin levels, in patients affected by PsA or RA who do not suffer from other co-morbidities and are not treated with erythropoiesis-stimulating agents or iron. We aim to compare these three anti-TNF-α agents with one another and to assess whether there is a correlation between their effects on HB and disease activity.
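Since DAS28 is the primary disease activity measure throughout this study, the following is a minimal sketch of the composite score, assuming the standard DAS28-ESR formula (the paper itself does not restate it); the function names and band labels are our own illustration.

import math

def das28_esr(tender28, swollen28, esr, global_health):
    # Standard DAS28-ESR composite score.
    # tender28 / swollen28: tender and swollen joint counts out of 28 joints;
    # esr: erythrocyte sedimentation rate in mm/h (must be > 0);
    # global_health: patient global health assessment on a 0-100 mm VAS.
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr)
            + 0.014 * global_health)

def activity_band(das28):
    # Activity bands matching the inclusion thresholds quoted in the
    # Methods below (>5.1 high; >3.2 and <=5.1 moderate).
    if das28 > 5.1:
        return "high"
    if das28 > 3.2:
        return "moderate"
    return "low/remission"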
Patients and methods

RA patients fulfilling the American College of Rheumatology (ACR) 2010 criteria, and patients affected by PsA according to the CASPAR (Classification of Psoriatic Arthritis Study) criteria, attending our Rheumatology Clinic, were recruited and followed prospectively for 1 year. 8,9 The study protocol was approved by the local ethics committee, and written informed consent was obtained from all participants. The two populations were homogeneous for age and sex. Exclusion criteria were age <18 years and pregnancy. All patients were receiving methotrexate and were naive to biological drugs. Patients not responsive to methotrexate and with high (disease activity score-28 (DAS28) >5.1) or moderate disease activity (DAS28 >3.2 and ⩽5.1) were candidates for treatment with biologic drugs. Patients with RA and patients with PsA were divided into three different treatment groups, according to a randomized assignment to receive therapy with ETN, ADA, or IFX. Anemia was defined according to the World Health Organization (WHO), which establishes HB thresholds for the anemic state of <12 g/dL for women and <13 g/dL for men. 10 The study excluded patients who had extra-articular involvement (lung, heart, intestine); HB levels below 8.5 g/dL; creatinine levels >1.5 mg/dL; a history of cancer, liver, kidney, or endocrine disease; or neurological and psychiatric disorders. We also excluded patients treated with hemodialysis or continuous treatment with erythropoietin or iron, and patients with active gastrointestinal bleeding in the 2 months prior to enrollment or during the study. We further excluded patients with deficiency anemia, differentiating iron-deficiency anemia from typical inflammatory anemia through the evaluation of serum ferritin values; therefore, we considered only patients with HB values below the WHO thresholds and ferritin levels >200 ng/mL, taking these levels as suggestive of inflammatory anemia (reduced serum iron, normal or reduced transferrin, reduced transferrin saturation index). The clinical response to TNF-α inhibitors was assessed by the reduction in DAS28 from baseline up to 12 months of therapy. Continuous variables were expressed as mean ± standard deviation (SD). Differences between groups were assessed using the non-parametric Mann-Whitney U test for continuous variables; the Wilcoxon rank test was used for within-group comparisons of continuous variables. The Pearson correlation coefficient was used to evaluate the correlation between selected continuous variables. Values of P < 0.05 were considered statistically significant. In all, 67 patients with RA and 64 patients with PsA were included in the study. In the PsA group, 23 patients started therapy with ADA, 20 with IFX, and 21 with ETN. In the RA group, 22 patients started therapy with ADA, 21 with IFX, and 24 with ETN. All patients were treated with methotrexate (10-15 mg/week) and steroid (5 mg/day of prednisone). None of them stopped or modified methotrexate treatment during the observation period. TNF-α inhibitor dosages remained stable during the study (ADA 40 mg every 2 weeks; IFX 5 mg/kg at weeks 0, 2, and 6, and subsequently every 8 weeks; ETN 50 mg/week).

Results

The main demographic and clinical characteristics are shown in Table 1.
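A minimal sketch of the statistical toolkit just described (Mann-Whitney U between groups, Wilcoxon within groups, Pearson correlation), using SciPy. The arrays are invented placeholder values standing in for the study's measurements, not its data.

import numpy as np
from scipy import stats

# Invented placeholder values; the study's real measurements are not reproduced here.
hb_ra_t0  = np.array([11.2, 10.8, 11.9, 10.5, 11.4])   # HB (g/dL), RA, baseline
hb_ra_t3  = np.array([11.9, 11.4, 12.3, 11.2, 12.0])   # same patients at month 3
hb_psa_t0 = np.array([12.4, 13.0, 12.1, 12.8, 12.6])   # HB (g/dL), PsA, baseline
das28     = np.array([5.6, 5.1, 4.2, 5.9, 4.8])        # DAS28 in the same RA patients

# Between-group difference at baseline (RA vs PsA): Mann-Whitney U test
_, p_between = stats.mannwhitneyu(hb_ra_t0, hb_psa_t0, alternative="two-sided")

# Within-group change (baseline vs month 3, paired): Wilcoxon signed-rank test
_, p_within = stats.wilcoxon(hb_ra_t0, hb_ra_t3)

# Inverse HB-DAS28 relationship: Pearson correlation coefficient
r, p_corr = stats.pearsonr(das28, hb_ra_t0)

for label, p in [("between-group", p_between), ("within-group", p_within),
                 ("correlation", p_corr)]:
    print(f"{label}: p = {p:.3f}, significant = {p < 0.05}")
print(f"Pearson r = {r:.2f}")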
At baseline (before starting therapy with anti-TNF-α agents), HB values were significantly lower in RA patients than in PsA patients; conversely, CRP values were significantly higher in RA patients than in PsA patients. There were no significant differences between groups for the other evaluated parameters (age, DAS28, iron, ferritin), even if DAS28 values were higher in RA patients than in PsA patients, without reaching statistical significance.

Disease activity in the different treatment groups in RA and PsA

Treatment with anti-TNF-α agents resulted in a significant reduction in disease activity, in terms of DAS28, in both RA and PsA patients already after 3 months (P = 0.01) (Figure 1(a) and (b)). The decreasing trend of DAS28 continued over the entire study period in almost all anti-TNF-α treatment subgroups, except for the ADA subgroup, which showed a loss of efficacy after 12 months in both RA and PsA patients, with higher DAS28 values at 12 months than at the ninth month, but still significantly lower than at baseline (P = 0.01). At baseline, RA patients randomized to ADA and IFX showed higher DAS28 values than those randomized to ETN, but without statistical significance (P > 0.5). However, after 9 months, the ETN subgroup showed a significantly lower DAS28 compared with the IFX and ADA subgroups (P < 0.001 and P = 0.04, respectively; Figure 1(a)). Among patients with PsA (Figure 1(b)), at baseline, the DAS28 value was significantly higher in the ADA subgroup than in the ETN subgroup (P = 0.03) and the IFX subgroup (P = 0.01). After 3 months of treatment, DAS28 values were significantly reduced in all treatment subgroups and continued to fall over the subsequent months of observation. The response, in terms of clinical efficacy, to treatment with biotechnological drugs was observed for all three TNF-α inhibitors from the third month of therapy. Subsequently, it was maintained throughout the study, with the exception of the ADA group, which at the twelfth month showed a decline in clinical efficacy. Patients treated with ETN showed an inconstant course of disease activity: up to 6 months of therapy there was a significant reduction in DAS28 values, followed by an increase at 9 months of treatment and a reduction at 12 months. At 12 months, ETN showed a greater reduction in disease activity than ADA (P = 0.9) and IFX (P = 0.3), although these differences were not statistically significant.

Effects of different treatments with anti-TNF-α agents in RA and PsA on HB levels

At baseline, RA patients had higher ferritin values than PsA patients (P = 0.3). During treatment with anti-TNF-α agents, there was a gradual reduction in ferritin values in both populations (Figure 2). In patients with RA (Figure 3(a)), there were no differences between the three treatments (ADA, IFX, and ETN) in increasing HB values, which were significant for all three drugs from the third month. After 12 months, HB levels in patients treated with ETN were significantly higher than those in patients treated with IFX (P = 0.01).
If we consider the data expressed as percentage increase, we found that the HB percentage increase in the ADA group was higher than in the ETN (P = 0.018) and IFX (P = 0.2) groups until the ninth month of observation; the percentage increase of HB reached a plateau at the ninth month for ADA and IFX, while in the ETN group it increased progressively. Thus, after 12 months, the percentage variations in the ETN group were higher than in the other treatment groups, reaching significance compared with the IFX group (similar to what was observed when the data were analyzed as absolute values). Analysis of HB values in the three subgroups of patients with PsA (Figure 3(b)) showed that HB values increased from T1 in all three subgroups. The HB increase from baseline to the twelfth month was 1.3 g/dL for the IFX group, 1.17 g/dL for the ADA group, and 1.78 g/dL for the ETN group. The percentage increase of HB in the ETN group was higher than in the ADA and IFX groups at the third month (P = 0.04) (Figure 3(b)); however, at subsequent observations, the percentage HB increase in the ETN group was lower than in the other groups. Furthermore, the percentage increase of HB from baseline to the twelfth month was almost identical in all three treatment subgroups.

Correlation between changes in HB and DAS28

HB levels were inversely proportional to disease activity both in RA patients (r = −0.5, P < 0.0001) and in PsA patients (r = −0.5, P < 0.001).

Discussion

Many studies have shown the role of IL-6 inhibition in improving HB levels in RA patients by interfering with hepcidin production and increasing iron bioavailability. Nevertheless, the role of TNF-α inhibition in improving HB levels is less clear. 3,6,[11][12][13][14] In our study, DAS28 and CRP values were significantly higher in RA patients than in PsA patients. Moreover, HB levels were significantly lower in RA patients than in PsA patients. These data can be explained by considering that the putatively more intense inflammatory status in RA may influence HB levels. We observed increased HB levels in patients with RA for all three anti-TNF-α agents. At the twelfth month, patients treated with ETN had higher HB levels than patients treated with IFX and ADA. We hypothesize that anti-drug antibodies probably play a role in reducing the efficacy of monoclonal antibodies, such as IFX and ADA, on anemia, compared with a soluble TNF receptor fusion protein such as ETN. 15 The increased HB values and the reduction in DAS28 values suggest the existence of a negative correlation between HB levels and DAS28 values in both RA and PsA, regardless of the type of anti-TNF-α agent used. Our data confirm the well-known action of anti-TNF-α agents on disease activity, as well as an improvement in inflammatory anemia, demonstrated by the reduction in serum ferritin values. Ferritin in this study was considered discriminating between iron-deficiency anemia and inflammatory anemia and, given its function as an acute-phase reactant, the reduction in its value during treatment with TNF-α inhibitors does not contradict the expected results. The anti-inflammatory effects of anti-TNF-α therapy may explain both the tendency of ferritin levels to decrease and the tendency of HB levels to increase. Ferritin is a cellular iron storage protein, and increased ferritin levels are usually related to excessive iron storage, as commonly found in inflammatory anemia.
As demonstrated by Song et al., 12 serum ferritin levels in chronic arthritis show a direct correlation with serum hepcidin levels, suggesting that increased hepcidin is responsible for increasing iron storage and for reducing the amount of serum iron. Thus, it is conceivable that anti-TNF-α treatment may be involved in reducing both hepcidin and ferritin levels, and that the improvement in anemia may be related to an improvement in the serum iron available for HB synthesis and erythrocyte production. Given that the literature on anemia in RA is quite abundant, the improvement in HB values obtained in PsA patients was the most surprising finding. Nevertheless, further studies are needed in order to better define the role of anemia in the course of PsA.
2018-04-03T03:19:47.497Z
2017-06-12T00:00:00.000
{ "year": 2017, "sha1": "08e543090ff8dd7d4b4288f983b789fe2098e0c4", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0394632017714695", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "08e543090ff8dd7d4b4288f983b789fe2098e0c4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
15213781
pes2o/s2orc
v3-fos-license
Evidence for weak magnetic fields in early-type emission stars

We report the results of our study of magnetic fields in a sample of 15 Be stars using spectropolarimetric data obtained at the European Southern Observatory with the multi-mode instrument FORS 1 installed at the 8 m Kueyen telescope. We detect weak photospheric magnetic fields in four stars: HD 56014, HD 148184, HD 155806, and HD 181615. We note that for HD 181615 the evolutionary status is not obvious, as it is a binary system currently observed in the initial rapid phase of mass exchange between the two components. Further, we note the possible presence of distinct circular polarisation features in the circumstellar components of Ca II H&K in three stars, HD 58011, HD 117357, and HD 181615, hinting at a probable presence of magnetic fields in the circumstellar mass-loss disks of these stars. We emphasize the need for future spectropolarimetric observations of Be stars with detected magnetic fields to study the temporal evolution of their magnetic fields and the correlation of magnetic field properties with dynamical phenomena taking place in the gaseous circumstellar disks of these stars. (Based on observations collected at ESO, Paranal, Chile, ESO programmes Nos. 075.D-0507 and 077.D-0406. Corresponding author: shubrig@eso.org.)

Introduction

Be stars are defined as rapidly rotating main sequence stars showing normal B-type spectra with superposed Balmer emission. Further, these stars are characterized by episodic dissipation and formation of a new circumstellar (CS) disk-like environment, non-radial pulsations, and photometric and spectroscopic variability. A number of physical processes in classical Be stars (e.g., angular momentum transfer to a CS disk, channeling of stellar wind matter, accumulation of material in an equatorial disk) are more easily explainable if magnetic fields are invoked. In recent years this conclusion has been strengthened by numerous theoretical works (e.g., Cassinelli et al. 2002, Maheswaran 2003, Brown et al. 2004). A detailed discussion concerning the mechanisms involved in the formation and evolution of disks around Be stars can be found in a recent publication by Owocki (2006). To date we know of only three Be stars with magnetic field detections: ω Ori (80±40 G; Neiner et al. 2003), β Cep (95±8 G; Donati et al. 2001), and 16 Peg (−156±31 G; Hubrig et al. 2006). These results are cited in the literature as observational evidence for the existence of magnetic fields in Be stars (e.g., Brown et al. 2004). However, β Cep has been shown to be a binary star in which the Hα emission line and the magnetic field originate in two different components (e.g., Schnerr et al. 2006), and the magnetic field measured in ω Ori is below a 3σ threshold. A longitudinal magnetic field at a level larger than 3σ has been diagnosed for the Be star 16 Peg (Hubrig et al. 2006). This star has v sin i = 104 km/s and was classified as a Be star by Merrill & Burwell (1943) due to the detection of double emission in Hα. However, the emission was not confirmed by subsequent observations, so the question of the presence of magnetic fields in classical Be stars remains open. It is quite possible that the difficulty of detecting magnetic fields in Be stars is similar to that discussed for young Herbig Ae/Be stars with accretion disks by Hubrig et al. (2007).
However, in contrast to Herbig stars showing evidence of magnetically mediated disk accretion, Be disks originate from ejection or decretion of mass from the rapidly rotating B stars. The observed spectral lines may form over a relatively large volume, and the magnetic field topology is likely rather complex. The presence of a mixture of photospheric and CS magnetic fields could drive the net line-of-sight magnetic flux to near-zero values. Also, even if there are very strong, small-scale magnetic fields distributed over the surface of the star, these could go undetected in our measurements, as the measured mean longitudinal magnetic field $\langle B_z \rangle$ is the average, over the stellar hemisphere visible at the time of observation, of the component of the magnetic field parallel to the line of sight, weighted by the local emergent spectral line intensity. Below, we report the results of our study of magnetic fields in a sample of 15 Be stars carried out with the multi-mode instrument FORS 1 installed at the 8 m Kueyen telescope at the VLT.

Analysis

The observations were carried out in April-September 2005 in service mode. Using the narrowest available slit width of 0.4″, the achieved spectral resolving power of the FORS 1 spectra obtained with GRISM 1200g was about 4000 in the spectral region λλ 3850-5000 Å. Each observation consisted of 8-10 subexposures, each with a typical duration of the order of tens of seconds. One additional observation of the Be star HD 148194 (χ Oph) was obtained in May 2006 with the same instrument and GRISM 600B, using the narrowest available slit width of 0.4″ to obtain a resolving power of about 2000. A detailed description of the assessment of the longitudinal magnetic field measurements using FORS 1 is presented in our previous papers (e.g., Hubrig et al. 2004a, 2004b, and 2004c, and references therein). The errors of the polarization measurements have been determined from photon-counting statistics and converted to errors of the field measurements.

Results

The results of our analysis are summarised in Table 1. In the first four columns we give the HD number of the target, another identifier, the V magnitude, and the modified Julian date of the observations. In the next two columns follow the spectral type and the v sin i value, both taken from Yudin (2001). In columns seven and eight we present the $T_{\mathrm{eff}}$ and $\log g$ values used to calculate the synthetic spectra, and in column nine we list the measured mean longitudinal magnetic fields $\langle B_z \rangle$. The mean longitudinal magnetic field is diagnosed from the slope of a linear regression based on the relation

$$\frac{V}{I} = -g_{\mathrm{eff}}\,\frac{e\,\lambda^2}{4\pi m_e c^2}\,\frac{1}{I}\,\frac{dI}{d\lambda}\,\langle B_z \rangle + \frac{V_0}{I_0},$$

where $\langle B_z \rangle$ is the slope and $V_0/I_0$ the offset. We show an example of the regression detection of the magnetic field in HD 56014 in Fig. 1.

Fig. 1. Regression detection of the magnetic field in HD 56014.

Our experience from studying a large sample of magnetic and non-magnetic Ap and Bp stars revealed that this regression technique is very robust and that detections with $\langle B_z \rangle > 3\sigma$ result only for stars possessing magnetic fields. A longitudinal magnetic field at a level larger than 3σ has been detected in four stars: HD 56014, HD 148184, HD 155806, and HD 181615 (also indicated in bold face in Table 1).
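The slope fit and the 3σ detection criterion above can be illustrated with an ordinary least-squares regression; below is a minimal sketch assuming reduced Stokes I and V spectra on a common wavelength grid. The numerical constant $C_Z = e/(4\pi m_e c^2) \approx 4.67\times10^{-13}\,$Å$^{-1}\,$G$^{-1}$ is standard for this relation, but the function names, the assumed mean Landé factor, and the error estimate are our own illustration, not the authors' pipeline.

import numpy as np

C_Z = 4.67e-13       # e / (4 pi m_e c^2) in Angstrom^-1 Gauss^-1
G_EFF = 1.25         # mean effective Lande factor assumed for the line sample

def mean_longitudinal_field(wavelength, stokes_i, stokes_v):
    # Least-squares estimate of <Bz> (Gauss) from
    # V/I = -g_eff * C_Z * lambda^2 * (1/I) * dI/dlambda * <Bz> + V0/I0.
    # wavelength in Angstrom; stokes_i, stokes_v: 1-D spectra on that grid.
    didlam = np.gradient(stokes_i, wavelength)
    x = -G_EFF * C_Z * wavelength**2 * didlam / stokes_i   # regressor
    y = stokes_v / stokes_i
    A = np.vstack([x, np.ones_like(x)]).T                  # slope <Bz>, offset V0/I0
    coeff, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
    bz = coeff[0]
    # 1-sigma error on the slope from the residual scatter
    dof = max(len(y) - 2, 1)
    sigma2 = (residuals[0] / dof) if residuals.size else float(np.var(y - A @ coeff))
    cov = sigma2 * np.linalg.inv(A.T @ A)
    sigma_bz = float(np.sqrt(cov[0, 0]))
    detected = abs(bz) > 3 * sigma_bz   # the 3-sigma criterion used in the text
    return bz, sigma_bz, detected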
An inspection of the Stokes V spectra of these four stars also reveals noticeable Zeeman features at the positions of numerous spectral lines. As an example, we present in Fig. 2 the Stokes I and V spectra for HD 148184 and HD 155806 in the spectral region around the line He I λ 4471.5 Å.

Fig. 2. Stokes I and V spectra of HD 148184 and HD 155806 in the spectral region around the line He I λ 4471.5 Å.

According to Porter & Rivinius (2003), the photospheric spectra of Be stars are frequently superposed by strong absorption spectra from Be CS disks. For three stars in our sample, HD 58011, HD 117357, and HD 181615, we noticed the presence of distinctive circular polarization signatures in the Stokes V spectra of the Ca II H&K lines, which appear unresolved at the low spectral resolution achievable with FORS 1 (R ∼ 4000), as noted in column 10 of Table 1. The profiles of these Ca lines in the FORS 1 spectra taken in integral light are deeper than predicted by synthetic spectra computed with the code SYNTH + ROTATE developed by Piskunov (1992). In Fig. 3 we present Stokes I spectra (upper panel) and Stokes V spectra (lower panel) around the Ca II resonance doublet for the stars HD 58011, HD 117357, and HD 181615. HD 58011, a Be star with numerous strong emission lines in the visible spectrum, was reported to be variable with an amplitude of 0.25 mag after the Hipparcos mission by Adelman, Mayer & Rosidivito (2000). However, the magnetic field is determined only at a 2.8σ level ($\langle B_z \rangle$ = +135±48 G). Not much is known about the rather faint Be star HD 117357 ($m_V$ = 9.1). Wiegert & Garrison (1998) report the presence of variable emission in the Balmer hydrogen lines. In our Stokes I spectra the emission lines are quite weak, and the magnetic field is not detectable. A detailed discussion of the peculiar spectrum of the system HD 181615 was presented by Koubský et al. (2006), who suggested that the visible spectrum may actually be a combined spectrum of the disk rim and disk face. As our FORS 1 spectra are taken at a rather low resolving power, it is presently not possible to correctly ascertain the origin of these Ca II H&K lines. Additional high-resolution, high signal-to-noise spectroscopic observations of the Ca line profiles are needed to decide whether they are formed in the CS disks around these stars. Below we present brief notes on the individual objects with detected photospheric magnetic fields.

HD 56014

Rivinius, Štefl & Baade (2006) describe this star as a conventional Be shell star with narrow absorption lines and with central quasi-emission bumps detected in several photospheric Fe II lines and in Mg II λ 4481 Å. The magnetic field for this star is detected at a 4.5σ level ($\langle B_z \rangle$ = −146±32 G). According to the Washington Double Star Catalog (WDS; Worley & Douglass 1997) and Mason et al. (1997), this object is a visual binary (WDS 07143-2621) with an angular separation of 0.150″ and an orbital period of about 180 yr.

HD 148184

This Be star, with numerous strong emission lines in the Stokes I spectrum, belongs to Upper Scorpius, the youngest of the three subgroups that form the Scorpius-Centaurus association. Variability of the Hα line profiles has been reported by Austin et al. (2004). Hubert & Floquet (1998) discovered cyclic behaviour in the Hipparcos photometry, but could only constrain the period to be >0.45 d. The magnetic field for this star is detected at a 4σ level ($\langle B_z \rangle$ = +83±21 G) and at an 8σ level ($\langle B_z \rangle$ = +136±16 G).

HD 155806

This is the hottest star in our sample, with spectral type O7.5IIIe, and is therefore the earliest known star showing emission lines typically seen in Be stars (e.g.,
Negueruela, Steele & Bernabeu 2004). The magnetic field for this star is detected at a level of 3.1σ ($\langle B_z \rangle$ = −115±37 G).

HD 181615

This system is a very rare emission-line binary of strange spectral type and complexity. The existence of strong Hα emission in the visible spectrum was reported by Campbell (1895) and other investigators. Very recently, Koubský et al. (2006) concluded that this system, with an orbital period of 138 d, is one of the very few known binary systems observed in the initial rapid phase of mass exchange between the two components. From photometric and spectroscopic observations, Koubský et al. infer the presence of bipolar jets perpendicular to the orbital plane, similar to those found for β Lyr, and argue that the peculiar character of the line spectrum of the brighter component could also be understood as originating from a pseudo-photosphere of an optically thick disk. The magnetic field is detected at a level of 3.8σ ($\langle B_z \rangle$ = +38±10 G).

Discussion

The detected magnetic fields in the studied Be stars are rather weak, with the largest longitudinal magnetic field, −146 G, measured in the Be star HD 56014. These results suggest that strong, large-scale organized magnetic fields are not common among Be-type stars. Further, we noticed the possible presence of distinct circular polarisation features in the CS components of Ca II H&K in two Be stars, reminiscent of those found in Herbig Ae/Be stars (Hubrig et al. 2007). The CS Ca II H&K line profiles in Stokes I spectra of Herbig Ae/Be stars are frequently quite complex and consist of several components which are usually assumed to be formed at the base of the stellar wind and in the accretion gaseous flow. A future careful high-resolution, high signal-to-noise spectropolarimetric study of the temporal behaviour of the Zeeman features in the Stokes V spectra will allow us to gain the highly desirable insight into the nature of the gaseous disks of these two Be stars. During our observations the exposure times for each target were rather short, of the order of 15-20 min. Thus, not much can be inferred with respect to the variability and evolution of their magnetic fields and the correlation of magnetic field properties with dynamical phenomena taking place in these stars. To constrain the structure of magnetic fields in Be stars and to probe the presence of localized transient magnetic fields suggested by several previous studies (e.g., Mathys & Smith 2000 and references therein), magnetic field measurements should be carried out over short (minutes) and long (rotational period) time scales. The existence of small-scale, i.e. highly non-dipolar, magnetic fields has been suggested by X-ray observations of flaring events (e.g. Smith, Robinson & Corbet 1998).
2007-11-13T22:17:26.000Z
2007-11-13T00:00:00.000
{ "year": 2007, "sha1": "3444bedaacdb2b60eec73f833be9f9c10cc550fe", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0711.2085", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3444bedaacdb2b60eec73f833be9f9c10cc550fe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1078135
pes2o/s2orc
v3-fos-license
Parents' information and support needs when their child is diagnosed with type 1 diabetes: a qualitative study

Abstract

Aim and objective: The aim of this study was to describe and explore parents' information and support needs when their child is diagnosed with type 1 diabetes, including their views about the timing and chronology of current support provision. Our objective was to identify ways in which parents could be better supported in the future.

Design and participants: Semi-structured interviews were conducted with 54 parents of children with type 1 diabetes in four paediatric diabetes clinics in Scotland. Data were analysed using an inductive, thematic approach.

Findings: Parents described needing more reassurance after their child was diagnosed before being given complex information about diabetes management, so they would be better placed psychologically and emotionally to absorb this information. Parents also highlighted a need for more emotional and practical support from health professionals when they first began to implement diabetes regimens at home, tailored to their personal and domestic circumstances. However, some felt unable to ask for help or believed that health professionals were unable to offer empathetic support. Whilst some parents highlighted a need for support delivered by peer parents, others who had received peer support conveyed ambivalent views about the input and advice they had received.

Conclusions: Our findings suggest that professionals should consider the timing and chronology of support provision to ensure that parents' emotional and informational needs are addressed when their child is diagnosed, and that practical advice and further emotional support are provided thereafter, taking account of their day-to-day experiences of caring for their child.

Background

Type 1 diabetes (T1D) is one of the most common chronic conditions in childhood. The incidence has been rising by 3-4% per year in many countries, with the largest age-specific rise occurring amongst children aged under 5 years. 1,2 In the UK, there are approximately 26,500 children living with T1D 3,4 and the incidence (24.5-26/100 000 children per year) is amongst the highest in the world. 2,5,6 Most newly diagnosed children are now treated with one or two injections of long-acting basal insulin and injections of short-acting insulin at meal times.
This regimen requires parents, who assume most responsibility for the day-to-day management of pre-adolescent children, to undertake frequent (at least four times per day) checking of blood glucose levels, determine and administer insulin doses, count carbohydrates, be aware of physical activity levels, and prevent hypo- and hyperglycaemia. 7,8 These responsibilities place considerable emotional demands on parents, particularly mothers, 9 who normally assume most responsibility for their child's diabetes care. As several qualitative studies have highlighted, parents whose children have been recently diagnosed with T1D describe feeling shocked, exhausted and out of control. [10][11][12][13] Such parents also report feeling anxious about administering injections, being frightened about episodes of hypoglycaemia, and experiencing difficulties developing new, and making adjustments to existing, family routines such as shopping, cooking and outings. [10][11][12][13][14][15][16][17] Parents of newly diagnosed children have also been shown to experience higher levels of clinically significant anxiety and stress than parents of healthy children, and a higher prevalence of clinically significant depressive symptoms. [18][19][20][21][22] Furthermore, parents have reported feelings of frustration, guilt and anger due to the difficulties of adhering to complex regimens, and feelings of personal failure when regimens are not strictly followed. 23 Whilst the emotional, psychological and practical impact of a child's diagnosis on parents has been comprehensively reported, 10,14,20 there has been relatively little exploration of parents' views about the information and support they receive in the aftermath of diagnosis in order to undertake their new caregiver responsibilities. There has also been no research which has looked at parents' views about the timing and chronology of current support provision, despite this being shown to be a salient issue for adults newly diagnosed with diabetes. 24,25 Furthermore, when parents' post-diagnostic accounts of support have been reported, the findings have often focused on those with very young children (typically, 4 years and under), 12,26,27 have tended to be brief, and have been presented as part of their broader experiences of making adaptations to family life, acquiring skills and learning how to manage their child's T1D over time. 10,12,[14][15][16] In this paper, we report findings which emerged from a qualitative study originally established to explore the experiences of parents who care for children aged under 12 years diagnosed with T1D. Early in our interviews it became apparent that all parents wanted and expected to discuss their experiences of diagnosis in depth, together with their views about how they could have been better supported. In response to this emerging finding, we adapted our topic guide to allow for a detailed exploration of parents' experiences of, views about, and need for, information and support after their child's diagnosis. In doing so, our objective was to inform recommendations for how parents could be better supported, in order to help alleviate their anxiety and distress and to foster effective diabetes management when they first begin to care for a child with T1D.

Research design

In-depth semi-structured interviews were undertaken with parents of children aged 12 years and under who had been diagnosed with T1D.
In-depth interviews were used as these afforded the flexibility needed for parents to share their own understandings and experiences and to raise and discuss issues which were salient to them, including those not anticipated at the study's outset. 28 The study used an emergent design, in which data collection and analysis took place simultaneously, informed by the principles of grounded theory research. 29 This enabled the issues identified in the early interviews to inform the areas explored in later ones.

Sample and recruitment

Fifty-four parents (mothers and fathers) of children aged 2-12 years were recruited, using an opt-in method, by health professionals at the time of their child's consultation. Parents who attended consultations were recruited from paediatric diabetes clinics located in four health boards which serve diverse (urban, rural and remote rural) catchment areas across Scotland. Purposive sampling was used to ensure diversity of location, occupational status (full-time/part-time), relationship status, and the children's gender and demographic/disease characteristics (Table 1). During recruitment, parents who attended a consultation alone were asked to enquire whether their partner would also like to participate in an interview. If two parents were present during a consultation, both were offered the opportunity to take part. A recruitment log was maintained and reviewed on a weekly basis by members of the research team during the study's recruitment period. These weekly reviews were used to inform discussions with recruiting staff to ensure that parents in all of the groupings in Table 1 were included in the final sample. To assist this process, recruiting staff had access to clinical records to inform participant selection. All participants who opted in completed a written consent form prior to participation. Permission was also sought to re-approach parents, if necessary, by telephone to clarify information or explore emergent issues during data collection/analysis. Recruitment continued until no new findings or themes were identified in newly collected data.

Data collection

A total of 40 interviews were conducted, comprising 26 solo interviews (24 with mothers and two with fathers) and 14 joint interviews (involving both mothers and fathers). Interviews took place between November 2012 and August 2013 and were undertaken face-to-face in parents' homes. All data were collected by DR, who has extensive experience of conducting interviews involving sensitive content and knowledge of T1D. Interviews were informed by topic guides developed in light of literature reviews, the original research questions, and input from members of the study's advisory group, which included health professionals, policy makers and parents of children with T1D. Topics explored in the interviews are shown in Table 2. Interviews averaged 120 minutes, were digitally recorded (with consent) and transcribed in full.

Data analysis

A thematic analysis was undertaken by two experienced qualitative researchers (DR and JL) who performed independent analyses, reading each participant's interview in full. Participants' accounts were also cross-compared using the constant comparative method 29 to identify issues and experiences which cut across different parents' accounts. Joint meetings were held after the independent analyses had been undertaken to compare interpretations, explore parents' underlying reasoning, resolve any differences in interpretation, and reach agreement on recurrent themes and findings.
29 These meetings were also used to develop a series of codes which reflected the topics explored with participants and emergent themes. NVivo, a qualitative software package (QSR International, Doncaster, Australia), was used to code and retrieve data. Coded datasets were printed out to be read and subjected to further analyses to identify further themes and subthemes and illustrative quotations. Below, data are tagged using unique identifiers, with the letter 'M' or 'F' signifying a child's mother or father respectively.

Table 2. Topics explored in the interviews:
• Experiences and views about the information, support and advice that they had received from health professionals when their child was diagnosed; views about when this support was provided; what they had found helpful and unhelpful.
• Experiences and views about education or training that they had received from health professionals around diagnosis to help them to manage their child's diabetes at home; what they had found helpful and unhelpful.
• What were their own needs for support at the time of diagnosis and whether/how these were addressed; what were their unmet needs for support.
• Examples of additional support or education they would have found beneficial around the time of diagnosis.
• Experiences and views about diabetes-related support they had received when their child was discharged from hospital; when support was sought/provided; what they had found helpful and unhelpful; what were their unmet needs.
• Whether and for what reasons they had sought any other forms of support aside from the help provided by health professionals.
• Examples, and experiences, of seeking and receiving alternative forms of support.

Findings

Parents reported similar experiences of diagnosis, emotional impacts on themselves, and accounts of how they could be better supported, irrespective of the length of time which had elapsed since their child had been diagnosed. Most began by describing how their child had been diagnosed with T1D by the family doctor and then admitted to and treated in hospital for between 2 and 8 days. During this time, parents described how their child's blood glucose levels were stabilized and they were given instruction and education on how to manage their child's T1D at home. Parents were also provided with contact details of the diabetes team and out-of-hours telephone numbers if they required advice about managing their child's diabetes at home. Below, we explore parents' accounts of the information and support received after their child was diagnosed, the challenges they encountered when they first began to manage diabetes at home, their unmet needs for support, and their suggestions for how other parents could be better supported in future.

Experiences of information provided during hospital admission

Information overload

As we have reported elsewhere, 30 many parents described being very distressed when they were informed about their child's diagnosis, particularly when this was accompanied by life-threatening diabetic ketoacidosis. In the initial days after admission, all parents praised their child's clinical care and the detailed instructions given by staff on how to perform injections, monitor blood glucose levels, detect signs of hypo- and hyperglycaemia and, in some instances, count carbohydrate in food.
However, with the exception of those who had T1D or were health professionals themselves, or who were well acquainted with other people who had the condition, parents highlighted the challenges and difficulties they had encountered in understanding and assimilating information delivered using unfamiliar terminology: "he started talking about ketones and it was like, 'what, huh?, pardon?' Total foreign language" (013M); "I got very upset because we used to get taken each day into a room and be given all this training... I felt like I was an Arts student who had been thrown into a medical lecture theatre" (017M). As well as struggling to understand clinical terminology, parents frequently described feeling overwhelmed by health professionals' instructions and advice: "it was such a blur because it was all so much information at once" (001M). Many parents also discussed how they had felt stressed and extremely upset after finding out their child had T1D and how, as a consequence, they had been unable to assimilate and retain any of the information and advice imparted to them in the hospital: "they gave me leaflets to read while we were waiting but, it was, I was in shock and that, I wasn't absorbing anything I was reading or listening to" (022M); "she was only three, she was screaming the place down... and, to be honest, it was in one ear and out the other" (034M).

Parents' need for more reassurance and emotional support after diagnosis

Despite feeling overwhelmed and, hence, struggling to absorb information, virtually all parents recognized their need for regimen-specific information before their child could be discharged: "you feel devastated but you've got to get over that and, you're like 'right, what have I got to do?'" (004M); "it's a massive learning bit and you have to learn instantly, there's no gradualness, really" (002M). However, because of their state of shock, many parents described how it would have been better if health professionals had given them reassurance and emotional support in the first instance: "it was so much practical information, you know, this is how you keep him alive, this is about..." Other parents, likewise, reported needing emotional support to address their initial concerns and anxieties following their child's diagnosis and admission to hospital, and described how reassurance given upfront might have made them more receptive and better able to assimilate practical advice thereafter: "just in layman's terms, this is what's wrong, this is how we're going to treat her, she will get better as in she's not going to die and then... when everything's calmed down and you, your relief's sort of like swept over you... then start explaining, right, you have to start to give her injections, the BG [blood glucose], the meters, then start going through all the equipment." (002F).

Returning home with a child newly diagnosed with T1D

Many parents described how health professionals' advice had left them unprepared for how having a child diagnosed with T1D was likely to affect their lives, with several suggesting that they would have benefited from being given information in this regard: "when it was first diagnosed... you don't want to just hear about the medical side, you want to hear more, you want to see how this is really going to impact your life" (023M).
Indeed, the challenges involved in managing a child with T1D often only became fully apparent when parents first returned home with their child, a situation which 003M, like others, likened to one where: "somebody gives you a massive book for a computer and says, you know, 'you've now got to start work with it straightaway'".

Concerns about administering injections

Parents reported several key challenges soon after diagnosis, including having to convey information about diabetes to young children and explain the need for daily injections: "how do you tell a five year old he's going to have to be injected four times a day?" (010M). Whilst all parents recognized that their child's life depended on them administering insulin, many described traumatic and distressing experiences where they had had to chase after and physically restrain a child who resisted injections: "I would have to literally pin him down; I would have his legs between my knees to, to be able to do it" (015M); "he would smash up the house 'cause he didn't want his injection... and I do remember quite often just sobbing, just sobbing in the corner... thinking, 'oh God, nobody knows what this is like.'" (006M). These parents often described initially dreading injections because of their child's resistance to being injected, whilst those with larger families expressed concerns about how siblings might be affected: "I don't think it was good for them to see me having to pin down their big brother and see him screaming" (016M). They also spoke about needing, yet struggling, to obtain information and support on how to handle these potentially distressing situations and to pre-empt and prevent their child's fear and upset. As well as having to deal with their child's distress, some parents reported suffering from needle phobia themselves or feeling so concerned about inflicting pain on their child that they struggled to perform injections. In some instances, this led to parents going to extraordinary lengths to psychologically prepare themselves before they were able to administer an injection, and to their feeling isolated and unsupported: "if I don't do this right, I could end up doing something to her... and it used to take me an hour to set up needles in the kitchen just to get myself psyched up to come in and jag her... I was just kind of stalling it... I could have done with someone coming round to help" (037M).

Concerns about nocturnal hypoglycaemia

The majority of parents also reported feeling very concerned that their child would not detect symptoms of hypoglycaemia when asleep, fail to wake up, and die in bed. Whilst most felt able to monitor their newly diagnosed child for signs of hypoglycaemia during the day, many described only sleeping lightly and, as Sullivan-Bolyai et al. 26 have also reported, remaining in a constant state of alert at night: "I slept in beside him for the first few weeks, just sort of monitoring him and I wasn't really sleeping" (005M). Such parents described feeling exhausted as a consequence: "when we were up very much every night one, you feel sorry for [daughter]... and two, you feel sorry for ourselves because we're not getting any sleep" (017F). Others also described how their sleep had been disrupted at regular intervals because they had set alarms to check on their child, or because they had been too frightened to go to sleep at all: "I just couldn't sleep because I just, well, I wanted to see, was checking her all the time" (034M).
In addition, several parents described how these concerns even persisted into the following day when they went to wake up their child: "I would test her, I would go through the motions at night, really, to be there... and sometimes I would go through in the morning, try and wake her and if she was in quite a deep sleep, cause she can be in a deep sleep, I was, my, your heart pounded, you know, and your stomach..." (037M).

Unmet needs for support when caring for a child at home

Emotional support

Although most parents praised the extent of the telephone support provided by health professionals to help them manage their child's diabetes at home (e.g. to adjust insulin doses, or advice on treating hypoglycaemia), several reported that: "there was absolutely nowhere to go, nobody to turn to, there was nobody to speak to you about how you were actually feeling about the situation" (032F). Whilst a few parents described seeking and receiving emotional support from members of their child's diabetes team during clinic appointments, others described being reluctant to ask for help because "you don't really have a relationship with the people you're dealing with" (008M) or, more typically, because they were worried that they would be perceived by staff in negative ways. This included 006M, who did not share her feelings of anguish and depression with health professionals when her son was diagnosed because of her concern that she might be seen as a failed parent: "you need someone just so that you can sit there and be open and it not be, feel like you're being judged, cause if you say it to a nurse or a diabetes doctor, the last thing I wanted was for them to think well, [son's name] mum is not coping and so, therefore, that raises alarms... I wanted him to be at home. I wanted life to get on." Whilst most parents praised the accessibility of their child's diabetes team by telephone, many also described how they would have preferred it if clinicians had initiated contact with them, particularly in the early days after returning home: "I think that at that point you need it thrust upon you, they need to be in touch with you, every day, they need to actually speak to you rather than you having to phone them, they need to phone you and say, 'look, how is it going?' Just so you know there is somebody there" (002M). This included parents who suggested that they would have benefited from home visits from health professionals in order to observe how they were implementing their child's insulin regimen: "I wonder sometimes if there was... somebody who came out just to see how things were going that weren't exactly about his [blood glucose] numbers... I can massively see how you would just think, 'ah, to hell with it'... or 'och, if it's high, we'll just give him insulin later' you know, and just not bother" (025M). Others described how health professionals could use home visits to ascertain how they were coping and whether they needed to be offered emotional support. This included 021M, who reported feeling depressed after her child was diagnosed and who suggested that she would have benefited from home visits modelled on the service provided by midwives to new mothers: "to make sure the baby's alright and that you're alright as a parent", but with a broader remit than the child's glycaemic control: "just someone to come round to say, 'what has been happening?' so that you know there's someone there not just about blood levels...
but it was also having more an impact on [my husband] and I because we were seeing it, I suppose, like death for a child."

Practical support for parents

In addition to emotional support, parents also talked about needing practical help delivered in the home setting, which took into account their family circumstances and the challenges of implementing their child's regimen in 'real life' circumstances.

Peer support from other parents with a child with T1D

Whilst all parents spoke about needing and/or valuing clinical input when they first returned home with their child, several suggested that health professionals were not ideally placed to provide the empathetic and non-judgemental forms of support they saw themselves as requiring, because they lacked real-life experience of caring for a child with T1D: "although they're a medical professional or whatever, yes they have, you know, empathy, etc.... but they still can't appreciate what it's like" (014M). Hence, such parents suggested that they would have benefited, soon after returning home, from speaking to: "somebody that really understands how diabetes eats into your world... I don't think I even needed support to manage it, I think I needed someone to come and give me a big cuddle, really, more than anything else because so much of it is about coping and making life good for [son]" (013M). Some parents also described how they would have benefited from opportunities, soon after diagnosis, to meet and talk with other parents who had experience of caring for a child with T1D. Others, likewise, suggested that experienced parents could provide those whose child had been newly diagnosed with practical suggestions and advice about the day-to-day issues involved in managing diabetes and how to deal with novel situations such as going on holiday. This included 015M, who described how peer support could be useful in situations where advice from a health professional was not required: "just someone else to speak to if you've got a problem, not necessarily a medical problem but, say something, you know, you're struggling to get your child to eat... what do they do when they come up against these things" (015M). Furthermore, parents suggested that those with recently diagnosed children might benefit from emotional support provided by peers who had experience of caring for a child with T1D and who could offer reassurance: "that the emotions that, that you'll go through, and the tests to your relationship, are all pretty normal and you will get through it" (011M). In addition, parents described how they might benefit from opportunities to talk openly to experienced parents without worrying about being seen as a failure by their child's clinicians: "you can sort of, offload a lot more stuff to somebody else that's in your position" (010M). Other parents, however, described having been very upset and traumatized by their child's diagnosis and needing time alone to make adjustments, which led some to question whether they would have been receptive to peer support because: "at diagnosis, everything's that sort of crazy... the thought of going to talk about it might have even been a bit much" (016M). In some instances, parents described having received and declined further opportunities to access peer support after meeting other parents at organized events for children with diabetes, or after being introduced to peer parents (e.g.
by health professionals), and having encountered people whom they described as obsessed and overbearing, or whose advice could be contrary to that provided by clinicians. This included 007M, who described an unhelpful meeting with another parent where she had disagreed with the peer parent's restrictive approach to her child's diet: we'd ordered a healthy food platter but she was going, 'no, you can't have that grape'. . . and I was sitting there thinking 'Oh my God, that kid's going to grow up with food issues' . . . and I thought, that's just making me anxious, it's not giving me the support that, that I was looking for so I gave up on it (007M).

Discussion

This study explored in depth parents' views about the information, advice and support received after their child was diagnosed with T1D, and is one of the first to explore their accounts of the timing and chronology of current support provision and of how this support could be improved in order to better care for their newly diagnosed child. Many of the parents who took part in our study reported feeling overwhelmed when their child was diagnosed and needing more emotional support prior to receiving practical instruction and regimen-specific advice to manage their child's diabetes. Parents also described struggling to implement clinical and dietetic aspects of their child's regimen when they first returned home, and how they would have benefited from more practical and emotional support from health professionals or from other parents, who, as some of those interviewed suggested, might be better placed to offer empathetic support. Whilst parents' emotional reactions to their child's diagnosis, and the challenges they confront attempting to assimilate information, have been extensively documented, 12,13,16,19,22,31 our findings suggest that parents would benefit from having specific worries and concerns addressed in the first instance, such as their fear that their child might die, before receiving education about diabetes management. Specifically, our findings highlight the importance of health professionals spending time with parents soon after diagnosis, asking about their existing knowledge and understanding of T1D using lay and easy-to-understand terminology, and, when appropriate, providing them with reassurance about their child's condition, in order to better prepare them to assimilate complex regimen-specific information to enable them to manage their child's diabetes effectively at home. Our findings, in line with those from other studies, also highlight parents' need for more emotional support and practical advice in order to help them to adjust to and integrate their child's new regimen within everyday family life. 14,[16][17][18][19]31 Unlike these other studies, however, we have shown that parents may not ask for, or access, the help they need. Specifically, we have shown that parents may feel reluctant to approach health professionals and admit that they are feeling unable to cope, or that they question whether health professionals, given their lack of personal experience of parenting a child with T1D, can provide the empathetic and non-judgemental support which some of those interviewed described themselves as needing. To address the former, health professionals could consider offering proactive support in the first weeks following diagnosis; for example, through scheduling home visits and/or initiating phone calls with parents.
To address the latter, the provision of peer support interventions, delivered by experienced or "veteran" parents, 32 may be a potentially fruitful avenue to pursue. Indeed, the need for this kind of support was highlighted by some of the parents we interviewed. However, evaluations of peer support interventions for parents with children who have T1D, 33,34 and a range of other chronic conditions, [35][36][37][38] have thus far shown mixed results. Whilst parents have generally articulated positive views about receiving peer support, 39,40 researchers have been unable to demonstrate conclusive positive effects using psychosocial and other quantitative measures, such as parental stress and diabetes-related concerns. 34,41 Whilst it has been argued that better measures may need to be developed to capture and quantify the true benefits of peer support for parents, 34,40,42 our findings suggest that a possible reason for the lack of positive effects may be that peer support does not suit all parents. Not only may parents be put off by negative experiences, they may also be exposed to information which contradicts clinical recommendations. Hence, an alternative option to explore might be to provide health professionals with experiential training in caring for a child with T1D that would enable them to offer more empathetic and individualized support to parents. Whilst patients' adherence to medical regimens is known to be affected by doctors' empathic communication skills, 43,44 no research to date has explored the impact of experiential skills-training in caring for a child with T1D for health professionals who provide consultations to parents of these children; hence this may be an important avenue to pursue.

Strengths and limitations

The study was strengthened by the use of a multi-site recruitment strategy, involving parents from four clinics serving diverse geographic areas, which improves the generalizability of our findings. A potential limitation is the inclusion of parents of children diagnosed up to 11 years previously, which could mean that some were not describing contemporary practices and/or that their accounts were subject to recall bias. This potential problem could be addressed in future studies by including parents of children who have been recently diagnosed. However, it should be noted that the parents who took part in our study presented similar accounts and described similar support needs irrespective of the length of time since their child's diagnosis. A further limitation is that in our study all children diagnosed with T1D were admitted to hospital, whereas, in some areas, children who are not diagnosed in DKA may be managed without admission to hospital. 45,46 Hence, future work could explore and compare parents' experiences when initial care and education is provided in outpatient settings or in the home.

Conclusion

Our findings have important implications for service development and indicate that parents of children diagnosed with T1D might benefit from a package of support which extends from when their child is first diagnosed and admitted to hospital through to the weeks and months after they return home and begin to integrate their child's regimen into everyday family life.
For instance, our findings highlight a need for more attention to be given to the timing and chronology of support offered to parents; in particular, parents should be offered emotional support soon after diagnosis to better enable them to assimilate diabetes management information at a time of great distress. Health professionals should also consider ways to provide more practical support to parents soon after they return home with their child, to help them integrate diabetes management into their family's normal lifestyle.

Acknowledgements

We thank the participating centres and, especially, the parents who took part. We would also like to thank Lesley Gardner for proof-reading the manuscript. This article presents independent research funded by the Chief Scientist Office (CSO) of the Scottish Government Health and Social Care Directorates (CZH/4/722). The views expressed here are those of the authors and not necessarily those of the CSO.
The Transitive Fallacy for Randomized Trials: If A Bests B and B Bests C in Separate Trials, Is A Better Than C?

Background: If intervention A bests B in one randomized trial, and B bests C in another randomized trial, can one conclude that A is better than C? The problem was motivated by the planning of a randomized trial, where A is spiral-CT screening, B is x-ray screening, and C is no screening. On its surface, this would appear to be a straightforward application of the transitive principle of logic.

Background

Consider three statements: A, B, and C. In formal logic, if A implies B, and B implies C, then A implies C. This can be illustrated by a Venn diagram in which set A lies entirely within set B and set B lies entirely within set C, which implies that set A lies entirely within set C. This is an example of transitivity. Extending this logical construct to the design and interpretation of clinical trials, one might conclude that (even with large sample sizes), if intervention A is shown superior to intervention B in one randomized trial, and intervention B is shown superior to C in another randomized trial, then A will be shown superior to C in a third randomized trial. However, statistical association is not generally transitive. Consider three random variables: A, B, C. If A is positively correlated with B, and B is positively correlated with C, then A may or may not be positively correlated with C [2]. To our knowledge, no one has investigated the transitivity of results from separate randomized trials. In fact, for the sake of perceived efficiency and limited resources, the principle of transitivity is sometimes assumed in clinical trial design. The possibility of a transitive fallacy surfaced in discussions about a planned randomized clinical trial to assess the efficacy of low-dose spiral computed tomography (CT) for lung cancer screening. The aim of this investigation is to use graphical methods to explore the conditions under which transitive inference for randomized trials does not hold. As the mathematician John Allen Paulos writes, "It's odd that logical acuity, rather than helping one to clarify statements, often reveals hidden ambiguities within them. Instead of leading one to form more conclusions, it makes clear that fewer conclusions are justified." [3]

Methods

We extended the graphic in [1,4] for illustrating Simpson's paradox. In the Simpson's paradox graphic, the horizontal axis is the fraction of subjects with an unobserved variable and the vertical axis is the probability of a binary outcome. One diagonal line in the plot represents the effect of the unobserved variable on the probability of outcome, given treatment A. A second diagonal line, which is lower and parallel to the first, represents the effect of the unobserved variable on the probability of outcome, given treatment B. The graphic shows that if the fraction with the unobserved variable differs between groups receiving A and B (as in an observational study), then subjects receiving treatment B could have a higher probability of outcome than subjects receiving A, even though the line for A is higher than the line for B. For this investigation, we added a third treatment to the graphic and investigated three hypothetical, but plausible, scenarios: one involving lung cancer screening, one involving treatment for gastric cancer, and one involving antibiotics.
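The Background's claim that positive correlation is not transitive can be checked numerically. The following minimal sketch (Python with NumPy) uses a construction of our own, not taken from the paper: with X, Y, Z independent, set A = X + Y, B = Y + Z, and C = Z − X, so that corr(A,B) and corr(B,C) are positive while corr(A,C) is negative.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x, y, z = rng.standard_normal((3, n))  # three independent variables

a = x + y   # shares y with b
b = y + z   # shares z with c
c = z - x   # anti-correlated with a through x

corr = np.corrcoef([a, b, c])
print(f"corr(A,B) = {corr[0, 1]:+.2f}")  # about +0.50
print(f"corr(B,C) = {corr[1, 2]:+.2f}")  # about +0.50
print(f"corr(A,C) = {corr[0, 2]:+.2f}")  # about -0.50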
Unlike the Simpson's paradox example, in all the scenarios there is an interaction between an unobserved variable and the treatments. For each scenario we contrast two designs: (1) two separate two-arm randomized trials conducted sequentially, and (2) a randomized three-arm trial of A, B, and C. In the discussion below, we emphasize that the trial sizes would be large enough to eliminate considerations of simple statistical variation or imprecision in measurement of outcome variables. We also emphasize that the results would hold if there were no bias from contamination or noncompliance.

Figure 1. Screening for lung cancer. Hypothetical results are shown for A = spiral CT, B = x-ray, and C = no screening. In the sequential study, results for B and C are similar in the first trial (when 30% of the subjects receive the new therapy), and results for A and B are similar in the second trial (when 70% of the subjects receive the new therapy). However, as shown with the three-arm trial, it is incorrect to make the transitive inference that when 70% of the subjects receive the new therapy, the results for A and C would be similar.

Results

We present graphical illustrations of the transitive fallacy in three hypothetical, but plausible, scenarios.

Lung cancer screening with improved therapy (Figure 1)

Let A denote spiral-CT screening, B denote chest x-ray screening, and C denote the control group of no screening or "usual care." The endpoint is the lung cancer mortality rate. A previous randomized trial of B versus C found similar lung cancer mortality rates for the two interventions [5,6]. Currently there is discussion of a new randomized trial to compare A and B. Suppose the new randomized trial shows similar cancer mortality rates for A and B. Would that constitute proof of similar cancer mortality rates for A and C? The unobserved binary variable is an indicator of whether or not subjects received a (relatively) new (and effective) therapy after early detection. Although type of therapy is observable, it is not generally analyzed or reported in papers summarizing the results of screening trials. We suppose that the new therapy decreases the lung cancer mortality rates for A, B, and C, but at different rates (i.e. a quantitative interaction). In addition, unless there is substantial overdiagnosis, it is reasonable that the greatest decrease would occur with A, due to earliest detection. Also, unless the therapy is effective at all stages of cancer, it is reasonable that the smallest decrease would occur with C, due to late detection and greater total tumor burden. We realistically assume that the percent of subjects who receive the new therapy has increased over time as randomized treatment trials establish its worth. For purposes of illustration, we specify that in the first trial of B versus C, 30 percent of the subjects receive the new therapy, and in the planned second trial (either A versus B, or A versus B versus C) 70 percent of the subjects will receive the new therapy.

Figure 2. Treatment for gastric cancer. Hypothetical results are shown for A = radical gastrectomy / splenectomy, B = "simple" gastrectomy, and C = radiation. In the sequential study, results for B and C differ in the first trial (when 30% of the subjects receive effective supportive care), and results for A and B are similar in the second trial (when 70% of the subjects receive effective supportive care). However, as shown with the three-arm trial, it is incorrect to make the transitive inference that, when 70% of the subjects receive effective supportive care, the results for A and C would differ.

Figure 1 shows a realistic set of outcomes.
In the first trial the cancer mortality rate is similar for B and C. A second trial of A versus B also indicates similar cancer mortality rates (Figure 1, left). Under the transitive fallacy, one would incorrectly conclude that the cancer mortality rates for A and C are similar at the time of the second trial. However, when 70% of the subjects receive the new therapy, the three-arm trial correctly shows that A is substantially better than C (Figure 1, right).

Cancer treatment with improved supportive care (Figure 2)

A second hypothetical example involves treatment for gastric cancer. Let A denote radical gastrectomy/splenectomy, B denote "simple" gastrectomy, and C denote radiation. Suppose the endpoint is percent mortality over some time period and the unobserved covariate is effective supportive care. It is plausible that supportive care improves over time with better intensive care and better antibiotics for any infections that arise. In an earlier period, when a small percentage of subjects receive effective supportive care, the more aggressive treatments carry substantially more treatment-related mortality. As shown in Figure 2 (left side), a randomized trial of B versus C during this earlier period demonstrates considerably higher mortality for B than C. In a later period, when a larger percentage of subjects receive effective supportive care, the mortality rates converge. As shown in Figure 2 (left side), a randomized trial of A versus B during the later period indicates similar mortality rates. Under the transitive fallacy, one would incorrectly conclude that the cancer mortality rates for A and C differ substantially at the time of the second trial. However, when a large percentage of subjects receive effective supportive care, the three-arm trial correctly shows that mortality rates for A and C are similar (Figure 2, right).

Antibiotic treatment with change in percent gram positive (Figure 3)

A third hypothetical example involves the use of empiric antibiotics to treat clinical pneumonia. Suppose A is an antibiotic that treats gram-positive organisms but not gram-negative; B is an antibiotic that treats gram-negative but not gram-positive pneumonias; and C is an antibiotic that treats gram-positive pneumonias better than A. The endpoint is the fraction successfully treated. Suppose the percent of organisms that are gram positive is an unmeasured covariate. This is a realistic scenario given that the spectrum of bacterial infections can shift over time, or can differ from hospital to hospital at the same time.

Figure 3. Antibiotic treatment for clinical pneumonia. Hypothetical results are shown for A = antibiotic for gram-positive, B = antibiotic for gram-negative, C = antibiotic for gram-positive that is more effective than A. In the sequential study, A bests B in the first trial (when 80% of the subjects are gram positive), and B bests C in the second trial (when 10% of the subjects are gram positive). However, as shown with the three-arm trial, it would be incorrect to make the transitive inference that when 10% of the subjects are gram-positive, A is better than C.

Suppose a randomized trial of A versus B has been completed and investigators are considering a randomized trial of B versus C. (In this scenario, A versus B occurs prior to B versus C, even though it is depicted farther to the right on the graph.) We realistically assume the percent of organisms that are gram positive has decreased by the time of the second trial.
For purposes of illustration, suppose 80 percent of the subjects are gram positive in the first trial and only 10% are gram positive in the second trial. In this situation A bests B in a randomized trial of patients who mainly have gram-positive infections, and B bests C in a randomized trial of patients with mainly gram-negative infections (Figure 3, left). Under the transitive fallacy, one would incorrectly conclude that when only 10% of the subjects have gram-positive infections, A would best C. However, the three-arm study correctly shows that when only 10% of the subjects have gram-positive infections, C would best A (Figure 3, right).

Conclusion

Given only a previous randomized trial of B versus C and a new randomized trial of A versus B, inference about A versus C can be misleading. In contrast, a three-arm randomized trial of A, B, and C will yield appropriate inference about both A versus B and A versus C. The validity of the sequential-studies strategy (B versus C, then A versus B) rests on the assumption that there is no intervening important covariate that could confound the implied principle of transitivity. Given the amount of resources that are often invested in large "definitive" clinical trials, the possibility of such covariates should be an explicit part of the discussion in designing the trials and interpreting the results.
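To make the arithmetic of the antibiotic scenario concrete, the sketch below (Python) computes each arm's overall success rate as a weighted average over gram-positive and gram-negative cases. The cure rates are illustrative assumptions of our own, not values read off the paper's figures; with them, the reversal described above falls out directly.

def success_rate(p_gram_pos, eff_pos, eff_neg):
    """Overall success = weighted average over gram-positive / gram-negative cases."""
    return p_gram_pos * eff_pos + (1 - p_gram_pos) * eff_neg

# Assumed efficacies: A cures 70% of gram-positive cases and no gram-negative;
# B cures 70% of gram-negative cases and no gram-positive;
# C cures 90% of gram-positive cases and no gram-negative.
A = lambda p: success_rate(p, 0.70, 0.0)
B = lambda p: success_rate(p, 0.0, 0.70)
C = lambda p: success_rate(p, 0.90, 0.0)

# Trial 1 (80% gram-positive): A bests B.
print(f"Trial 1 (p=0.8): A={A(0.8):.2f}  B={B(0.8):.2f}")    # A=0.56 > B=0.14
# Trial 2 (10% gram-positive): B bests C.
print(f"Trial 2 (p=0.1): B={B(0.1):.2f}  C={C(0.1):.2f}")    # B=0.63 > C=0.09
# Three-arm trial at p=0.1: C bests A, contradicting the transitive inference.
print(f"Three-arm (p=0.1): A={A(0.1):.2f}  C={C(0.1):.2f}")  # C=0.09 > A=0.07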
Case report and summary of literature: giant perineal keloids treated with post-excisional radiotherapy

Background: Keloids are common benign tumors of the dermis, typically arising after insult to the skin. While typically only impinging on cosmesis, large or recurrent keloids may require therapeutic intervention. While no single standardized treatment course has been established, several series report excellent outcomes for keloids treated with post-surgery radiation therapy.

Case presentation: We present a patient with a history of recurrent keloids arising in the absence of an ascribed trauma and a maternal familial history of keloid formation, whose physical examination revealed several large perineal keloids of 6-20 cm in the largest dimension. The patient was treated with surgical extirpation and adjuvant radiation therapy. Radiotherapy was delivered to the scar bed to a total dose of 22 Gy over 11 daily fractions. Acute radiotherapy toxicity necessitated a treatment break due to RTOG Grade III acute toxicity (moderate ulceration and skin breakdown), which resolved rapidly during a 3-day treatment break. The patient demonstrated local control and has remained free of local recurrence for more than 2 years.

Conclusion: Radiotherapy for keloids represents a safe and effective option for post-surgical keloid therapy, especially for patients with bulky or recurrent disease.

Background

Keloids are benign, common, fibroproliferative dermal tumors which usually occur as a result of trauma to the skin [1], though their pathogenesis is distinct from that of hypertrophic scars [2]. Keloids are often refractory to treatment and recur frequently. While the precise etiology and genetic mechanism of this pathology is unknown, keloids are believed to be more prevalent in patients of African descent. While most keloids are sporadic, familial keloids appear to represent an autosomal dominant disease with incomplete penetrance and varying degrees of clinical severity within a pedigree [3]. An association with diabetes, which was noted in the medical history of our patient and her maternal line, has also been observed in at least one series [4]. Additionally, this case is remarkable for the fact that, for unknown reasons, few keloids form in the female genital region in comparison to other high-skin-tension regions [5]. While not frequently considered as candidates for oncologic therapies, severe lesions may require radiotherapy or intralesional injection of interferon and fluorouracil in addition to surgical extirpation [2][3][4]. This case presents the formation of keloids, notable for their location and size, without recollection of any traumatic event, in a female patient subsequently treated with post-surgical radiotherapy.

Case presentation

A 34-year-old African-American woman presented with a giant perineal tumor associated with a 29-year history of keloid formation without recalled dermal injury or abrasion. The patient's history revealed two family members (a mother and sister) with similar symptomology, resulting in a diagnosis of familial keloid syndrome. However, neither the mother nor the sister was affected by perineal keloid development. Past medical history was also notable for arthritic symptoms and diabetes mellitus, which were present in both mother and sister. The index lesion was a firm, pliable growth, 20 cm in its greatest diameter, adjacent to 10 cm and 6 cm perivulvar lesions, which caused the patient considerable discomfort and affected ambulation (Figure 1).
Physical examination revealed multiple other hypertrophic nodular growths on the posterior neck, behind the right ear, in the bilateral scapular regions, on the right flank and breast, and on the abdomen and extremities, in addition to the primary lesion. Past medical history evinced numerous heterogeneous treatments for various keloids at multiple loci; she had previously received surgical extirpation, steroid injections, and two episodes of radiotherapy to the back. Despite these interventions, her keloids had either recurred or persisted. Surgical extirpation of the largest perineal lesion was undertaken, and histopathologic examination was performed, denoting the classic keloid-associated features of haphazard collagen deposition, with nodular formations and thickened hyalinized bands (Figure 2). On the day following surgical excision, the patient was treated with radiotherapy using 6 MV photons. The total dose delivered was 22 Gy in 11 days, with a daily fraction of 2 Gy. The dose fraction was split between two fields with an anterior-posterior/posterior-anterior (AP/PA) port arrangement. The maximum acute Radiation Therapy Oncology Group skin toxicity score was Grade 3 (moderate ulceration and skin breakdown), which resolved after a 3-day treatment break. At 6 months post-therapy, the lesion in question had not recurred (Figure 3), and the patient reported no difficulty attributable to the lesion. Ten months after completion of radiation treatment for the perineal keloids, the patient returned for additional treatment to her back and chest wall. Radiotherapy was delivered at 3 Gy/fraction with 9 MeV electrons to her back, lateral back, and anteromedial back over 4 days. No complications have been noted, and the patient is currently being followed, with more than 24 months since therapy.

Conclusion

Keloids are benign fibroproliferative growths distinguished by excessive collagen deposition in the dermis. The exact etiology of these lesions remains unknown. They are considered a derailment of the normal wound-healing process, with a higher prevalence in darker-pigmented populations. Keloids are often described as benign fibroproliferative growths resulting from a connective tissue response to a variety of insults, such as surgery, burns, trauma, inflammation, foreign-body reactions, and endocrine dysfunction. However, they occasionally occur without apparent external cause. They are characterized by excessive collagen and glycosaminoglycan deposition within the dermis, an increase in collagen turnover, and micro-vasculature regeneration [6,9,10].

Figure 1. Pre-therapy photograph of giant perineal keloid, showing lobulated appearance.

Clinically, keloids may not appear for several months, and their appearance can be delayed for several years after the initial injury. Minor injuries can produce a fairly large, deep, and reddish-purple indurated lesion that rarely subsides. They can range in size from small papules limited to only a few millimeters in diameter to football size and larger. Their texture can vary between soft and dough-like to hard and rubbery. These lesions most commonly affect areas of increased skin tension. Very rarely, keloids may develop on the palms of the hands, the soles of the feet, and the genitalia [5]. Keloid formation can be found in all ethnicities, but has a higher predilection for darker-pigmented populations. Why occurrence rates are higher among these groups as opposed to others remains inconclusive.
Inheritance patterns may offer clues as to who could be at greater risk of being predisposed to forming these types of lesions. Several reports have suggested that keloids follow an autosomal dominant or autosomal recessive inheritance pattern, although the exact mode of inheritance remains unknown. Marneros et al. report observing 14 pedigrees with familial keloids that spanned 3 generations [5]. While most families in the study were African-American, the report concludes that this may be associated more with ethnicity than with skin pigment, since some lighter-skinned members of the families had the more severe lesions. Through the use of a genome-wide linkage screen, plausible gene loci for these keloid pedigrees were identified [6]. Their results found a pattern consistent with an autosomal dominant mode of inheritance. Subsequent linkage analysis has revealed two distinct gene loci which may serve as specific susceptibility genes [11]. For keloid lesions, the therapy chosen is predicated upon several factors, including the size, location, and depth of the lesion, the age of the patient, and past response to treatment. Surgical excision, radiation, pressure therapy, cryotherapy, intralesional injections of corticosteroids, interferon, and fluorouracil, topical silicone and other dressings, and pulse-dye laser treatment have all been found to induce some degree of regression [12]. Despite the broad range of treatment modalities, there is no universally accepted treatment protocol. In most instances these therapies are used as an adjuvant to surgical excision. Radiation therapy has been a rather controversial issue in keloid treatment. Surgery-alone and adjuvant intralesional corticosteroid approaches exhibit literature-reported recurrence rates of 45-100% and under 50%, respectively [7]. Comparatively, extant radiotherapy series have demonstrated recurrence rates which are markedly better than surgery alone or adjuvant corticosteroid injection. A brief summary of selected English-language series of teletherapy treatment for keloids is collated in Table 1.

Three-month follow-up appearance after surgical excision and radiotherapy (figure caption).

However, the only prospective randomized trial of any kind for keloids demonstrated greater control rates for surgical excision and radiotherapy compared to surgery and corticosteroid injection, with recurrence rates of 12.5% after surgery and radiation therapy versus 33% after surgery and steroid injections, though with a statistically non-significant mean differential [8]. The favorable outcomes with this approach are attributable to the destruction of keloid fibroblasts by ionizing radiation, which has been shown to enhance apoptosis when given in small to moderate doses. In a study by Luo et al., gamma radiation was found to cause a 2-fold increase in the density of apoptotic cells in both normal and keloid tissue [13]. According to a study by Ragoowansi et al., using 60 kV photon irradiation of 10 Gy in a single fraction to treat 80 keloids in 80 patients, the majority of keloids can be controlled by a single operation with immediate adjuvant single-fraction radiotherapy [14]. Unresectable keloids can also be treated satisfactorily with radiotherapy, as Malaker et al. demonstrated by treating 64 patients with 86 unresectable keloids with 37.5 Gy given in 5 fractions over a 5-week period [15]. By 18 months, 97% of patients had complete regression and 3% had partial regression. When surveyed, 63% of patients were happy with the outcome.
Additional series have concurred that recurrent keloids may be successfully treated with radiotherapy post-excision [16], and have also explored brachytherapy as an option for patients failing primary therapy [17]. Electron radiotherapy has also been used with good results. Maarouf et al. report a series of 134 keloids treated following surgical excision, with an 84% control rate and minimal side effects, over a mean follow-up period of 7.2 years [9]. Ogawa et al. followed 129 keloid cases for 18 or more months after post-operative irradiation with 4-MeV electron-beam radiation of 15 Gy [19]. With a median follow-up period of 24 months, there was an overall 32.7% recurrence rate. The most common side effects of radiotherapy consist of hyperpigmentation, pruritus, and erythema. Additionally, there is a small, but notable, stochastic risk of future secondary malignancy inherent in any radiation exposure. However, at present, few series have exhibited notable secondary carcinogenesis [10]; Dinh et al. [11] note that in a cumulative review by Ragoowansi [12], five (5) possible secondary malignancies were noted in 6,741 treated keloids, for a crude risk of 1/1,348 patients, according to the literature. Consequently, patients must be informed, radiotherapy must be used judiciously, and patients must be followed carefully over the course of their lives. While no optimum treatment modality has been demonstrated for recurrent keloids in adults, surgical resection and adjuvant radiation therapy may provide an effective option, with notable clinical success rates. In summary, radiotherapy is, while not without risk, an exceedingly effective primary adjuvant or salvage therapy for some keloids (particularly large and recurrent tumors). Radiotherapy has been shown in extant series to exhibit better results than the other notable adjuvant therapy of choice, corticosteroid injection [8], with a secondary malignancy risk that is minimal enough that it should not preclude utilization [10,11].
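As a quick check of the crude-risk figure quoted above, the one-line calculation below (Python) confirms that 5 events among 6,741 treated keloids corresponds to roughly 1 in 1,348 patients:

events, treated = 5, 6741
print(f"Crude secondary-malignancy risk: about 1 in {treated / events:,.0f} patients")  # 1 in 1,348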
Optimization of the Use of Wireless LAN Devices to Minimize Operational Costs

WLAN technology has been widely developed to meet the need for internet access in people's lives. Several generations of WLAN technology include IEEE 802.11b, IEEE 802.11a, IEEE 802.11g, and IEEE 802.11n. At the STAHN Gde Pudja Mataram Rectorate Building, the application of WLAN technology requires financial consideration, because excessive use of Internet Service Provider services results in wasted operational costs. The application of WLAN is still not optimal: although there are not many users, the operational costs of implementing the local wireless network are very large, owing to a sub-optimal network infrastructure. The recommended WLAN technology is IEEE 802.11n, the latest of these generations, which offers better quality than its predecessors. The research methodology uses the Network Development Life Cycle (NDLC). Of the 6 stages available, only 3 stages are used, namely Analysis, Design, and Simulation Prototyping. The result of this study is a WLAN design model that suits the building's needs and covers the entire building area. The optimization succeeded in reducing the required ISP services while clients can still enjoy services as needed, and operational costs were reduced by around 28%.

Interconnection. This model is also called the OSI seven-layer model. The OSI model was created to overcome various internetworking constraints due to differences in architectures and network protocols. WLAN is applied in the STAHN Gde Pudja Mataram rectorate building to provide internet access for all staff, but the application of that WLAN is still not optimal: although there are not many users, the operational costs of implementing the local wireless network are very high. This is due to sub-optimal application of the infrastructure; one of the problems is the use of devices that do not match needs. Based on this condition, the authors intend to optimize the WLAN devices in the STAHN Gde Pudja Mataram rectorate building. Commonly used WLAN devices include routers, switches, and access points.

II. METHODOLOGY

The research methodology used in this optimization design is the NDLC (Network Development Life Cycle). NDLC is a method used in developing or designing network infrastructure that enables network monitoring to determine statistics and network performance. The results of the performance analysis are taken into consideration in producing the network design, both the physical and the logical network [5]. Of the 6 NDLC phases, this design used 3 stages, namely analysis, design, and simulation.

Analysis

There are several analyses or identifications concerning user devices, service applications, and the previous network [6].

User devices identification

The devices used as clients by rectorate building staff in the operational process are mostly notebooks, categorized by Aerohive's handbook as low-end laptops with wireless capabilities as follows [7]: The measurements were made using Wifi Network Analyzer tools. The power of the Wifi signal is indicated in dBm (decibel-milliwatt), an absolute power unit calculated as 10 log10(P / 1 mW). The more negative the value, the weaker the signal [8]. The quality standards for the Signal to Noise Ratio (SNR) quality variable in the Signal Level indicator are as in Table 6 below.
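As a quick illustration of the dBm scale used in the measurements that follow, the sketch below (Python) converts a received signal strength back to milliwatts and classifies it. The quality bands are illustrative placeholders of our own, since the paper's Table 6 is not reproduced here.

def dbm_to_mw(dbm):
    """Invert dBm = 10 * log10(P / 1 mW) to recover power in mW."""
    return 10 ** (dbm / 10)

def classify(dbm):
    # Illustrative quality bands only; Table 6 in the paper defines its own.
    if dbm >= -67:
        return "good"
    if dbm >= -80:
        return "fair"
    if dbm >= -90:
        return "poor"
    return "unusable (dead zone)"

for rssi in (-70, -83, -90):
    print(f"{rssi} dBm = {dbm_to_mw(rssi):.2e} mW -> {classify(rssi)}")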
The wireless router in the public area is measured from 3 points, as shown in Figure 3 below. The measurement at the red dot, at ±23 m, shows signal quality between -76 and -80 dBm; the measurement at the yellow dot, at ±21 m, shows signal quality between -74 and -80 dBm; and the measurement at the green point, at ±24 m, shows signal quality between -79 and -83 dBm. The differences in signal quality are influenced not only by distance but also by indoor interference such as walls and doors. The coverage area of the wireless router signal is assumed to be represented by a red line. The second measurement is made on the wireless router in the financial room, with the following results: the wireless router in the financial room is measured from 3 points, as shown in Figure 12. The measurement at the red point, at ±19 m, shows signal quality between -80 and -83 dBm; the measurement at the yellow dot, at ±20 m, shows signal quality between -76 and -79 dBm; and the measurement at the green point, at ±24 m, shows signal quality between -78 and -81 dBm. The third measurement is made on the wireless router in the academic room, with the following results: the wireless router in the academic room is measured from 3 points, as shown in Figure 13. The measurement at the red point, at ±25 m, shows signal quality between -77 and -81 dBm; the second measurement, at the yellow dot at ±23 m, shows signal quality between -76 and -83 dBm; and the third measurement, at the green point at ±22 m, shows signal quality between -83 and -85 dBm. The fourth measurement is made on the wireless router in the database unit room, with the following results: the wireless router in the UPD room is measured from 3 points, as shown in Figure 14. The measurement at the red point, at ±23 m, shows signal quality between -73 and -77 dBm; the measurement at the yellow dot, at ±23 m, shows signal quality between -79 and -85 dBm; and the measurement at the green point, at ±22 m, shows signal quality between -83 and -88 dBm. The last measurement was carried out on the wireless router in the middle-room lobby, with the following results:

Figure 6. Coverage area of the lobby (middle room) wireless router.

The wireless router in the lobby room is measured from 2 points, as shown in Figure 15. The measurement at the red point, at ±25 m, shows signal quality between -79 and -85 dBm, and the measurement at the yellow dot, at ±18 m, shows signal quality between -69 and -72 dBm. On the 3rd floor there is no WLAN device that can transmit signals; the wireless router signal can still be reached from the UPD room and the central lobby room on the 2nd floor, but with very poor quality, below -90 dBm. From the measurement results for each ISP wireless router on the 1st and 2nd floors, the overall picture is as follows: on the 2nd floor, the wi-fi signal can cover most of the rooms, including the corner of the WK3 room, but there is a small corner where the wi-fi signal falls into the bad category. The whole of the 3rd floor is assumed to be a dead zone, because the wi-fi signal reaches it only at around -90 dBm, at which level the network cannot be accessed.

Design

In order to produce the design, several things are determined based on the results of the analysis, such as the technology to be applied and the identification of signal coverage and environmental characteristics.
A. Technology Identification Based on Need

There are several things that need to be identified in this part, such as the wireless network model to be used and the WLAN technology to be utilized. At this stage, the scope and characteristics of the environment are also identified based on the location where the WLAN to be optimized is applied, in order to obtain supporting analysis for selecting the WLAN technology to be used. The environmental characteristics consist of the building area of the rectorate building and the sources of interference affecting the wireless signal. Coverage identification is the identification of the wireless signal range based on the capabilities of the devices to be used in the optimization.

III. RESULTS AND DISCUSSION

This section discusses the results of the recommended WLAN optimization design model, starting from the optimized network design, the simulation process, and the budget after optimization.

Network Design

Some indicators need to be considered in optimizing the WLAN devices, such as the technology used and the environmental characteristics of the WLAN application location.

A. Technology Identification Based on Need

The WLAN in the rectorate building still uses the standard PT. Telkom setup, namely a wireless router configured as a PPPoE client in order to receive internet connections from PT. Telkom. An external router (e.g. proxy, TP-Link, Huawei, Cisco, and so on) is needed when network management is desired. In this design, so that the external router can perform network management optimally, the dial-up process to the PPPoE server is done through the external router. The external router here functions as the PPPoE client, while the wireless router functions as a bridge.

The selection of the wireless network model. WLAN has 2 types of networks that can be used, namely the Ad-Hoc and Infrastructure network types. The Ad-Hoc network type is a wireless network that does not use an access point, while the infrastructure network type is a wireless network that requires an access point [9]. The model used in this optimization design is the infrastructure model: when clients want to access the network, especially the internet, they must be connected to the ISP wireless router / access point.

The selection of WLAN technology. In accordance with the specifications of the user equipment, and for better access speeds, the WLAN technology used in the STAHN Gde Pudja Mataram Rectorate Building is IEEE 802.11n at a frequency of 2.4 GHz. One example of a WLAN device with the IEEE 802.11n standard and MIMO features that works at 2.4 GHz is the Totolink N9 access point.

Calculation of the number of access points needed. The number of APs required is calculated based on airtime per device and airtime utilization. Airtime per device is the length of communication time between the device and the AP needed so that the application throughput requirement can be met [7]. In this case, the reference throughput requirement is the largest throughput among the various application needs. Rectorate building staff require the largest throughput for the file-sharing application, at 850 Kbps.
Airtime per device, APD, is calculated according to the formula [7]:

APD = AppT / CDR (1)

where AppT is the recommended throughput required by applications that use the network in order to function normally, and CDR, the client data rate, is the maximum bandwidth of the user device's communication capabilities. Airtime utilization is the total communication time (airtime) requirement of all user devices, and is rounded up to give the number of APs required. Airtime utilization, AU, is calculated according to the formula [7]:

AU = NoD x APD (2)

where NoD is the number of user devices. Thus the airtime need per device follows from Equation (1). Each AP unit will be connected to a router that is connected to the ISP wireless router. If each AP serves about 12 to 20 users with an application throughput of 0.8 Mbps, the aggregate throughput is 9.6 Mbps to 16 Mbps. So, a WLAN connection with a bandwidth of 20 Mbps is sufficient for one access point with 10-20 users. With a required throughput of 850 Kbps (about 0.85 Mbps) per user, 50 connections will require a bandwidth of 42.5 Mbps. For a 42.5 Mbps requirement, the ISP service that can be utilized is 50 Mbps. However, to reduce risks, such as when there are many visitors in the rectorate building (for example during events in the hall), the ISP service selected is 100 Mbps.

Frequency channel selection. With the limited wireless communication capabilities of the user devices, which can only work in the 2.4 GHz band with the 802.11n standard, the frequency band used is 2.4 GHz. The channel width is set to match the user devices, which only support a 20 MHz channel width. In the 2.4 GHz band there are only 11 channels, and only 3 of them can be used close together. The division of channels in the 2.4 GHz band is shown in Figure 8 below.

Figure 8. 2.4 GHz frequency band channels [10].

Using improper frequency channels will cause interference because the frequencies used overlap. Therefore, to avoid interference, non-overlapping channels must be used following the "+5 & -5" rule, i.e. channels 1, 6, and 11 on different wireless access point networks.

B. Identification of Scope and Characteristics of the Environment

Some important factors that also need to be known in designing this WLAN optimization model are the characteristics of the environment at the study location, as well as the coverage area that can be reached by the WLAN devices used.

Environmental characteristics. The rectorate building has three floors covering ±45 m x ±18 m, with a floor height of ±4 m. On the first floor there are nine rooms; the three largest rooms are ±15-17 m long and ±7 m wide, while the others are smaller. The second floor has twelve rooms, and the third floor has only one hall room. Each room is separated by a partition in the form of a ±12 cm brick wall and a ±3 cm thick wooden door. On the second floor, 2 access points, each assumed to have 35 m of signal coverage, are placed 22 m apart so as to obtain a midpoint; the wireless routers from the ISP and the external router are also placed in the UPD room for easy management, with an overview of the coverage area as in Figure 12 above. On the 3rd floor, which is one large room (the hall), the access point, assumed to have 70 m of signal coverage, is placed in the middle of the hall, with the coverage area as shown in Figure 13 above.
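The sizing arithmetic above can be sketched in a few lines of Python. The per-user application throughput (AppT = 850 Kbps) and the 42.5 Mbps aggregate come from the text; the client data rate (CDR) below is a hypothetical value, since the device table is not reproduced here, and the final design uses 5 APs for coverage reasons even where airtime alone would require fewer.

import math

app_t_mbps = 0.85        # AppT: required application throughput per user
cdr_mbps = 65.0          # CDR: assumed low-end 802.11n client data rate
num_devices = 50         # NoD: number of user devices

apd = app_t_mbps / cdr_mbps          # Equation (1): APD = AppT / CDR
au = num_devices * apd               # Equation (2): AU = NoD x APD
aps_by_airtime = math.ceil(au)       # round airtime utilization up to whole APs

aggregate_mbps = num_devices * app_t_mbps   # 50 x 0.85 = 42.5 Mbps
print(f"APD = {apd:.4f}, AU = {au:.2f} -> at least {aps_by_airtime} AP(s) by airtime")
print(f"Aggregate throughput: {aggregate_mbps} Mbps -> 100 Mbps ISP service chosen")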
With this optimization design model, the distributed wi-fi signal will reach every corner of the rectorate building and there will be no dead zone.

Simulation

This stage is carried out to simulate the use of the required WLAN devices and to provide an overview of how the recommended network works based on the results of the study. The initial stage of the simulation is to input the devices according to the design requirements into Cisco Packet Tracer and create a topology. Based on the identification of technological requirements, the devices needed include routers, access points, and laptops/notebooks.

Figure 14. Inputting devices and topology.

In the topology above, a switch is added if the external router used to manage the network has too few ports. The placement of the 5 access points is represented by 5 differently colored areas. The external router is configured to provide DHCP IP addresses to clients, routed through the access points. NAT overload is also configured so that the clients can connect to the internet using 1 public IP address, while the router from the ISP is configured in bridge mode. After configuration, the connections of all clients connected to the access points are tested against the ISP router using a simple PDU. The testing parameter is whether the recommended network design model runs as expected. The trial results are as follows:

Figure 15. The results of pings from the clients to the ISP router.

All clients connected to the access points successfully reached the ISP router, which means the clients can connect to the ISP network. Then, to prove that the clients can access the internet, access to google.com on an internet server is tested, with the following results:

Figure 16. Testing access to the google.com site.

As seen in Figure 16 above, with the recommended WLAN optimization design, clients in the STAHN Gde Pudja Mataram Rectorate Building can still access the internet.

Operational Cost Analysis After Optimization

The operational cost analysis after optimization is calculated from the needs of the first month, where the first month's budget is calculated based on service costs and the cost of procuring new equipment. The first point concerns Internet Service Provider services: based on the calculation of service application needs, the recommended bandwidth requirement is 100 Mbps. So, of the 5 ISP services from the vendor PT. Telkom Indonesia previously used, the 4 services with bandwidths of 50 Mbps, 20 Mbps, 20 Mbps, and 20 Mbps will be discontinued. The 100 Mbps service used on the previous network has a monthly fee of Rp. 1,391,000. The second point concerns the need for an external router: the Mikrotik Routerboard RB2011iLS-IN router already in the Database Unit Room is unutilized, so there is no budget needed for an external router. The Mikrotik Routerboard RB2011iLS-IN has 10 ports and can meet the need for ports to connect the 5 access points to the router. The third point concerns the procurement of new devices: this optimization model requires access points to spread the wi-fi signal to clients, using 5 Totolink N9 access points at a price of Rp. 610,000 each. The total budget requirement for access points is therefore Rp. 3,050,000. The budget for the access points applies only to procurement in the first month; no budget is needed for the following months.
So, from the budgeting needs of these several points for optimizing the devices, the budget for the first half-semester is obtained as in Table 6 below.

Table 6. Budget after optimization.

In the first month, the total budget is calculated from the ISP service cost plus the procurement of the 5 access points, which is Rp. 4,441,000. For the following months, the budget is based only on ISP service costs, because after the first month there is no further device procurement cost. The estimated operational cost of implementing the WLAN in the rectorate building for half a semester of ISP service payments is Rp. 4,173,000; the fee for one semester is Rp. 8,345,000; and the annual service fee is Rp. 16
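The budget arithmetic above can be reproduced directly from the figures given in the text; the sketch below (Python, figures in Indonesian Rupiah) covers only the quantities stated, leaving out the truncated annual figure:

isp_monthly = 1_391_000              # single 100 Mbps ISP service
ap_procurement = 5 * 610_000         # 5 x Totolink N9, first month only

first_month = isp_monthly + ap_procurement   # 1,391,000 + 3,050,000
half_semester_isp = 3 * isp_monthly          # 3 months of ISP fees

print(f"First month total: Rp {first_month:,}")             # Rp 4,441,000
print(f"Half semester ISP fees: Rp {half_semester_isp:,}")  # Rp 4,173,000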
Potent Application of Scrap from the Modified Natural Rubber Production as Oil Absorbent

The production of raw natural rubber always ends up with leftover latex. This latex is later collected to produce low grades of rubber, and its collection depends on the latex's quality. However, reprocessing the latex may not be applicable if it contains many specks of dirt, in which case it will eventually be discarded. In this work, an alternative solution was to utilize such rubber in a processable form. This scrap rubber (SR) from the production of natural rubber grafted with polymethyl methacrylate (NR-g-PMMA) was recovered to prepare an oil-swellable rubber. The rubber blends were turned into cellular structures to increase the oil swellability. To find a suitable formulation and cellular structure, foams were prepared by blending SR with virgin natural rubber (NR) at various ratios, namely 0/100, 20/80, 30/70, 50/50, 70/30, 80/20, and 100/0 (phr/phr). The foam formation strongly depended on the SR, as it prevented gas penetration throughout the matrix. Consequently, small cells and thick cell walls were observed. This structure reduced the oil swellability from 7.09 g/g to 5.02 g/g. However, it is interesting to highlight that the thermal stability of the foam increased with the addition of SR, which is likely due to the higher thermal stability of the NR-g-PMMA waste, or SR. In summary, blending NR with 30 phr of SR provided good oil swellability, processability, and morphology, which benefits oil recovery applications. The results obtained from this study will be used in further experiments on the enhancement of oil absorbency by applying other key factors. This work is considered a good initiative for preparing oil-absorbent material based on scrap from modified natural rubber production.

Introduction

Oil spills at sea have become a crucial problem for marine environments. The issue is becoming more significant due to the expansion of the offshore oil sector and the requirements of marine oil transportation [1][2][3]. The problem is also not limited to the sea: oil spills occur in industries and workshops, especially from machinery and piping systems. In the past, different techniques have been used to clean up oil spills, including in situ burning, mechanical collection, chemical dispersants, bioremediation, and the use of absorbent materials [4,5]. Considering the economy and efficiency of oil collection and remediation, searching for a compromise method is the most desirable choice. Therefore, it is essential to look for a practical means of producing an absorbent substance for oil cleanup. Various polymers have been widely used for oil spill remediation. Among them, alkyl acrylates and their derivatives, which have high hydrophobicity, have attracted much interest in oil spill remediation. Jadhav et al. [6] grafted poly(methyl methacrylate) (PMMA) onto the backbone of Meizotropis pellita fibers (MPF). MPF grafted with 120% PMMA absorbed 23.60 g, 17.41 g, and 14.21 g of crude oil over the first to third absorption cycles, whereas diesel oil absorption fell from 13.14 g to 5.25 g over the three cycles. PMMA has also been grafted onto natural rubber (NR) foam for use as an oil sorbent.

Table 2 lists the materials and mixing sequence for compounding the oil-absorbent foam. SR and NR ratios were 0/100, 20/80, 30/70, 50/50, 70/30, 80/20, and 100/0 (phr/phr), respectively.
The full amounts of the SR or NR, ADC, TDAE oil, and other additives were prepared using a two-roll mill. Proper control of the Mooney viscosity was encouraged for preparing the rubber foam. The compounds were free-blown into specific shapes (12.5 mm in thickness) inside the compression mold at a temperature of 150 °C. The time consumed was based on the curing times measured by a moving-die rheometer (MDR), as described in the following section.

Measurement of Curing Characteristics

Using an MDR (Rheoline, Mini MDR Lite) at 150 °C, the curing characteristics of the rubber were measured according to ASTM D5289. This was utilized to determine the torque, scorch time (ts1), and cure time (tc90).

Measurement of Relative Foam Density and Expansion Ratio

The physical properties investigated included the relative foam density and expansion ratio. The relative foam density was measured according to ASTM D3575, using Equation (1) as given below:

Relative foam density = Df / Dc (1)

where Df is the foam density (g/cm3) and Dc is the compound density (g/cm3). Equation (2) illustrates how the expansion ratio was calculated, by comparing the density of the natural rubber compound specimen to the density of the natural rubber foam:

Expansion ratio = Dc / Df (2)

Measurement of Hardness

The hardness of the specimens was measured in accordance with ASTM D2240 using a Shore OO indentation durometer, and readings were taken after a 10-second indentation.

Oil Absorbency

The sample was prepared by cutting it into a dimension of 1.5 x 1.5 x 1.0 cm3. The sample was then weighed prior to immersion in the crude oil solution. The swelling lasted for a certain period of time, during which the development of the swelling uptake, or oil absorbency, was measured to find the equilibrium swelling. To measure the swollen sample, the sample was tapped with filter paper to remove excess oil and then weighed on a balance. The oil absorbency was calculated by the following formula:

Oil absorbency = Wa / Wb (3)

where Wa is the weight of absorbed oil and Wb is the weight of the sample. The unit of oil absorbency is g/g, which is the mass of oil absorbed per 1 g of sample.

Diffusion Studies

Diffusion studies were carried out simultaneously with the oil absorbency test but calculated differently. The test piece was weighed at a series of time intervals, beginning with 10 min intervals for the 1st hour, followed by 20 min for the 2nd hour, 30 min for the 3rd hour, and then hourly until the test piece's weight reached equilibrium. The solvent uptake was employed to create a plot of the sorption curve with the mole percent uptake (mol%) against the square root of time (t^1/2). The equation is displayed as follows:

Qt = 100 x [(mt - mo) / Mw] / mo (4)

where Qt is the mole percent uptake, mt is the weight of the swollen sample at a given time, mo is the initial weight of the sample, and Mw is the molecular weight of the solvent. The swelling coefficient, which represents the sample's swelling behavior, can then be calculated by inserting the weights into Equation (5):

β = (m∞ - mo) / (mo ρs) (5)

where β denotes the swelling coefficient, m∞ is the weight of the sample at equilibrium, mo is the weight of the test piece before swelling, and ρs is the solvent's density. Diffusivity (D) is a kinetic parameter that depends on segmental polymer mobility. D can be calculated using Equation (6) (the second Fickian law):

D = π (hθ / 4Q∞)^2 (6)

where h is the thickness of the sample before swelling, θ is the slope of the linear portion of the sorption curve, and Q∞ is the equilibrium solvent uptake. The action of permeate molecules initially penetrating and dispersing within the polymer matrix is described by sorption. Equation (7) can be used to calculate the sorption coefficient from the swelling:

S = m∞ / mo (7)

where m∞ is the weight of the solvent taken up at equilibrium and mo is the initial mass of the polymer sample. Equation (8) can be used to estimate the permeability coefficient (P), which provides data on how much solvent permeates through a uniform region of the sample each minute:

P = D x S (8)

Yao et al. [16] proposed a practical technique for determining how much solution is released from a slab over time (t), expressed in terms of the total amount of solvent uptake, as shown in Equation (9). The sorption curve can be fitted to the empirical data to determine the samples' mode of transport, as indicated in Equation (10):

Qt / Q∞ = k t^n (10)

where Qt is the mole percent uptake, Q∞ is the equilibrium solvent uptake, and k is a constant, all of which depend on the polymer's structural properties and on how the sample interacts with the solvent. The magnitude of n indicates the type of transport. Linear regression was used to calculate the values of n and k.

Optical Image and Scanning Electron Microscopy

The physical appearance of the natural rubber foams was captured using a mobile phone camera with default settings. The morphology was examined using a FEI Quanta™ 400 FEG scanning electron microscope (SEM; Thermo Fisher Scientific, Waltham, MA, USA). Each specimen was coated with a layer of gold/palladium to remove the charges that build up during imaging.

Thermogravimetric Analysis (TGA)

A PerkinElmer Pyris 6 TGA analyzer was used to perform thermogravimetric analysis on the samples. The sample was heated at a rate of 10 °C/min in a nitrogen flow while being scanned from 30 °C to 600 °C.

Fourier Transform Infrared Spectroscopic Analysis (FT-IR)

Fourier transform infrared spectroscopy (FTIR) was used to examine the functionalities present in NR and SR via an FTIR spectroscope model TENSOR27 (Bruker Corporation, Billerica, MA, USA). The spectra were captured in transmission mode over the range of 4000-550 cm-1 at a resolution of 4 cm-1.

Figure 1 shows the FTIR spectra of NR and SR. These two infrared spectra were different: SR shows intense absorption peaks at 1729 cm-1 and 1148 cm-1, which are associated with the -C=O and -C-O groups in the PMMA chains grafted onto the NR molecules. The absorbance ratio of the peaks at 1729 cm-1 to 837 cm-1 may be used to roughly evaluate the amount of grafted PMMA on the NR molecules. The 837 cm-1 peak results from the =C-H out-of-plane bending of cis-1,4-polyisoprene, whereas the 1729 cm-1 peak is related to the C=O stretching of grafted PMMA. The result clearly shows a higher intensity at 1729 cm-1 than at 837 cm-1, suggesting that a large amount of PMMA was grafted onto the NR molecules. The peaks observed in SR agreed well with previous works on the preparation of PMMA grafted onto NR molecules [17][18][19].

The change in a sample's mass as a function of temperature in a controlled atmosphere is measured by thermogravimetric analysis (TGA). The measurement is primarily used to ascertain the compositional characteristics and the thermal and oxidative stabilities of materials. The thermal decomposition behavior of raw NR and SR is shown in Figure 2.
The initial penetration and dispersion of permeant molecules within the polymer matrix is described by sorption. The sorption coefficient can be calculated from the swelling using Equation (7):
S = m_∞ / m_o (7)
where m_∞ is the weight of the solvent taken up at equilibrium and m_o is the initial mass of the polymer sample. Equation (8) can be used to estimate the permeability coefficient (P), which indicates how much solvent permeates through a uniform area of the sample per minute:
P = D × S (8)
Yao et al. [16] proposed a practical technique for determining how much solvent is released from a slab at time t in terms of the total amount of solvent uptake, as shown in Equation (9). The sorption curve can be fitted to the empirical relation in Equation (10) to determine the samples' mode of transport:
Q_t / Q_∞ = k t^n (10)
where Q_t is the mole percent uptake, Q_∞ is the equilibrium solvent uptake, and k and n are constants that depend on the polymer's structural properties and on how the sample interacts with the solvent. The magnitude of n indicates the type of transport. Linear regression was used to calculate the values of n and k.
Optical Imaging and Scanning Electron Microscopy
The physical appearance of the natural rubber foams was captured using a mobile phone camera with default settings. The morphology was examined using a FEI Quanta™ 400 FEG scanning electron microscope (SEM; Thermo Fisher Scientific, Waltham, MA, USA). Each specimen was coated with a layer of gold/palladium to prevent charge build-up during imaging.
Thermogravimetric Analysis (TGA)
A PerkinElmer Pyris 6 TGA analyzer was used to perform thermogravimetric analysis on the samples. Each sample was heated at a rate of 10 °C/min under a nitrogen flow while being scanned from 30 °C to 600 °C.
Fourier Transform Infrared Spectroscopic Analysis (FT-IR)
Fourier transform infrared spectroscopy (FTIR) was used to examine the functional groups present in NR and SR via the FTIR spectrometer model TENSOR27 (Bruker Corporation, Billerica, MA, USA). The spectra were captured in transmission mode over the range of 4000-550 cm⁻¹ at a resolution of 4 cm⁻¹.
Figure 1 shows the FTIR spectra of NR and SR. The two infrared spectra were different: SR shows intense absorption peaks at 1729 cm⁻¹ and 1148 cm⁻¹, which are associated with the -C=O and -C-O groups of the PMMA chains grafted onto the NR molecules. The absorbance ratio of the peak at 1729 cm⁻¹ to that at 837 cm⁻¹ can be used to roughly evaluate the amount of PMMA grafted onto the NR molecules. The 837 cm⁻¹ peak results from the =C-H out-of-plane bending of cis-1,4-polyisoprene, whereas the 1729 cm⁻¹ peak is related to the C=O stretching of grafted PMMA. The result clearly shows a higher intensity of the 1729 cm⁻¹ peak over the 837 cm⁻¹ peak, suggesting that a high amount of PMMA was grafted onto the NR molecules. The peaks observed in SR agree well with previous works on the preparation of PMMA grafted onto NR molecules [17][18][19].
The change in a sample's mass as a function of temperature in a controlled atmosphere is measured by thermogravimetric analysis (TGA). The measurement is primarily used to ascertain the compositional characteristics and the thermal and oxidative stabilities of materials. The thermal decomposition behavior of raw NR and SR is shown in Figure 2.
The decomposition temperature at 50% weight loss and the char residue are also embedded in this figure. Notably, the decomposition temperature at 50% weight loss of SR was higher than that of NR. This is simply due to the higher thermal stability of SR, as the grafting of MMA onto NR reduces the diene content in the NR and thereby improves its thermal stability. Moreover, both NR and SR exhibited low residue of less than 1%.
Cure Characteristics
The rheometric curves of the foams are shown in Figure 3. All the curves showed a marching trend except for the blend at 30/70 phr/phr. A marching curve is present when there is a development of crosslinking [20]. The reversion trend observed for the blend at 30/70 phr/phr may be due to the SR itself. The received SR was from the waste of NR-g-PMMA production, whose composition is not controllable. This phenomenon also occurred in the experiment by Nakason et al. [18], who varied the accelerator types in NR-g-PMMA. They found reversion of the rubber vulcanizates regardless of accelerator type. The reversion may be associated with the degradation of NR molecules. Separately, an important point to highlight is the increase of the minimum torque (M_L) and maximum torque (M_H) after replacing NR with SR, indicating that the material became stiffer with the addition of SR. Grafting MMA onto NR makes the rubber harder due to the presence of the PMMA component as a glassy thermoplastic phase. The scorch time (t_s1) and curing time (t_c90) were reduced with the addition of SR. t_s1 is the induction time experienced by a rubber compound before vulcanization is initiated, while t_c90 is the time the rubber takes to become 90% vulcanized. The decrease in these two values indicated that SR could quicken the vulcanization of the rubber. According to Harpell et al. [21] and Bhatti et al. [22], the decomposition of ADC produces hydrazodicarbonamide, urazol, and a gaseous mixture of nitrogen (N2), carbon monoxide (CO), cyanic acid (HNCO), and ammonia (NH3) through competitive, exothermic chemical pathways. N2 is the primary blowing gas that expands the foam. Depending on the processing conditions, some decomposition pathways may be favored over others. The focal point here is the production of ammonia, which tends to react with PMMA through ammonolysis, resulting in the formation of a primary amide. Such an amide derivative may accelerate the vulcanization of the rubber, hence decreasing t_s1 and t_c90 in the blends containing a higher content of SR. On the contrary, this phenomenon did not occur for un-foamed specimens. As reported by Nakason et al. [18], the addition of NR-g-PMMA prolongs t_s1 and t_c90. This was attributed to the polar functional groups of the graft copolymer absorbing a certain amount of accelerator as a result of their polarity. Consequently, the absorbed accelerator was unable to speed up the vulcanization process, and a longer cure period was needed to finish the crosslinking process and achieve the optimum curing characteristics.
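As an aside on how t_s1 and t_c90 are read off an MDR torque trace, the sketch below applies their conventional ASTM D5289-style definitions (torque rising a set amount above M_L for t_s1, and reaching 90% of the torque span for t_c90). The function name and the assumption of a densely sampled, monotonically rising branch are ours, not details taken from the paper.

```python
import numpy as np

def cure_times(time_min, torque, rise=1.0):
    """Estimate M_L, M_H, t_s1 and t_c90 from an MDR torque trace.

    t_s1 is taken as the time for the torque to rise `rise` units
    (typically 1 dN.m) above the minimum M_L; t_c90 is the time to reach
    M_L + 0.90 * (M_H - M_L). Linear interpolation is used on the
    rising branch between the torque minimum and maximum."""
    t = np.asarray(time_min, dtype=float)
    s = np.asarray(torque, dtype=float)
    m_l, m_h = float(s.min()), float(s.max())
    i0, i1 = int(s.argmin()), int(s.argmax())
    seg_t, seg_s = t[i0:i1 + 1], s[i0:i1 + 1]  # rising branch only
    t_s1 = float(np.interp(m_l + rise, seg_s, seg_t))
    t_c90 = float(np.interp(m_l + 0.90 * (m_h - m_l), seg_s, seg_t))
    return m_l, m_h, t_s1, t_c90
```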
Table 3 lists the relative foam densities and expansion ratios of the foams with different blend ratios. As a higher content of SR was used, less gas was generated because of the lower flexibility of SR. This increased the relative foam density, which rose significantly from 0.57 to 0.84. A higher SR content hardens the rubber matrix, thus restricting the escape of gas through the foam surface. This allowed the foam less expansion and, consequently, produced foam with a higher relative density [23]. The relative foam density was directly related to the porosity values and expansion ratio. For this reason, the increased relative foam density at a high SR content revealed less expansion of these specimens. The porosity values and expansion ratios are also shown in Table 3. The increase in relative foam density also went along with a decrease in the number of cells per unit volume. The glassy phase of SR may prevent the penetration of gas during compression. Table 3 also lists the hardness of the foams prepared using various SR contents. The hardness increased with an increase in the SR content. This was due to the higher density of the matrix formed during the generation of the gas phase. Again, the inherent chain stiffness of SR contributes to the increase in the hardness of the samples.
Physical Properties, Appearance, and Morphologies
Further evidence can be identified from the SEM images shown in Figure 4. The SEM images showed a systematic correlation between the number of cells per unit volume and the average cell size. An increase in SR resulted in smaller cells and thicker cell walls, indicating the difficulty of the gas diffusing and generating the foam. In this experiment, the ADC content was fixed at 5 phr, and the volume of gas generated after decomposition was assumed to be the same. However, the gas produced could not diffuse or penetrate through the rubber matrix, especially at a higher content of SR. Consequently, the number of cells per unit volume decreased, resulting in a smaller average cell size and a thicker cell wall in the foam [9]. Increasing the level of SR also affected the cell distribution. The foam cells were distributed unevenly as the SR content increased, and random cell sizes were seen for the samples using a high content of SR. The SEM images are in good agreement with the porosity values and expansion ratios reported in the previous section.
Oil Absorbency
Figures 5 and 6 depict the oil absorbency over the contact time and the equilibrium absorbency of oil-absorbent materials prepared from various blend ratios. It was observed that the oil absorbency was reduced with the content of SR. Adding SR reduced the oil swellability from 7.09 g/g to 5.02 g/g. The results obtained in this work differ from those of previous literature [7,8]. Previous works prepared the foam differently, where the NR and NR-g-PMMA were mixed in the latex stage. The foam was generated by the Dunlop process [24], in which the foam forms more easily than with the dry rubber method, giving a different cellular structure. In this experiment, foaming became more difficult due to the harder phase of SR, where the decomposed gas ineffectively penetrated throughout the matrix.
This can be seen from the morphology of the foams (see Figure 4). As mentioned previously, the size of the cells decreased and the thickness of the cell walls increased when a higher content of SR was used. As the cell size decreased, the porosity decreased, and the oil could not penetrate easily from one cell to another, eventually leading to a decrease in the swelling uptake. This is further explained by a study by Lee et al. [25], which stated that the swelling of NR foam is influenced by the cell structure and the density, with a lower foam density causing a higher swelling uptake. This explanation can be clearly understood when correlated with the swelling schematic shown in Figure 7.
Diffusion Study
The most common mechanism of transporting small compounds into polymers is solution diffusion. The penetrant molecules, in this case crude oil, are first absorbed by the rubber before diffusing through it. The sorption data of the crude oil into the rubber at room temperature was determined and expressed as the mole percent uptake (Q_t) against t^(1/2) (min^(1/2)), as shown in Figure 8. The curves show gradual steps of absorption. A significant concentration gradient caused the initial steep zone with a high sorption rate, whereas, as equilibrium approached, the sorption rate decreased. It is well acknowledged that the equilibrium mole percent uptake correlates with the cross-link density of the network chains [26] and with the efficacy of the oil or solvent in penetrating the rubber molecules. Since this study used a controlled formulation, the cross-linking was expected to be more or less the same, and the only factor to which a lower mole percent uptake could be attributed was therefore the efficacy of the oil in penetrating the rubber. Thus, the swelling resistance of the blends increased with a higher content of SR. Similar results were obtained for the rubber's diffusion and swelling coefficients, as shown in Figure 9. As the SR component increased from 0 to 100 phr, the swelling coefficient of the rubber steadily dropped from 8.19 to 5.81 cm³/g. Therefore, it was more difficult for the oil to penetrate the rubber. The diffusion coefficient (D) is a kinetic parameter that relies on segmental mobility. Based on the outcome depicted in Figure 9, D was determined using the equation derived from the second Fickian law. It demonstrates a non-steady decreasing trend, indicating that the oil had difficulty penetrating the rubber matrix at a high SR content. The sorption coefficient and the permeability coefficient are two more characteristics that can be derived from the rubber's diffusion studies (see Figure 10). The sorption coefficient describes the initial absorption and dispersion of permeant molecules into the rubber matrix, whereas the permeability coefficient shows the amount of penetrant that passes through a consistent area of the sample per minute. As the SR content increased, the rubber's sorption and permeability coefficients gradually declined, indicating that SR segments impede oil diffusion through the rubber foam.
Transport Mechanism
Figure 11 displays the results of fitting the oil uptake data to Equation (10) to determine the mode of the transport mechanism. According to Table 4, linear regression analysis of the initial linear slope was used to obtain the values of n and k. Based on the relative mobility of the penetrant and polymer segments, the transport mechanism can be classified into a few categories: (i) Case I or Fickian diffusion, (ii) Case II diffusion, and (iii) non-Fickian or anomalous diffusion [27]. When n has a value of less than or equal to 0.5, the concentration gradient acts as the main driving factor for diffusion in a Fickian transport mode; the diffusion rate is then lower than the polymer chain relaxation rate. However, for Case II transport, where n is equal to 1, the diffusion rate is greater than the relaxation process. On the other hand, if the n value is between 0.5 and 1, the transport is anomalous, and the diffusion rate matches the rate at which the polymer chains relax [28]. It is clear from Table 4 that the transport mechanism was anomalous for rubber containing SR between 0 and 100 phr.
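To make the fitting procedure explicit, the sketch below computes the sorption and permeability coefficients from Equations (7) and (8) and recovers n and k by linear regression on the log-transformed form of Equation (10), then classifies the transport mode by the value of n as described above. The names and the 60% uptake cutoff for the "initial linear region" are our assumptions.

```python
import numpy as np

def sorption_coefficient(m_solvent_inf, m_polymer_0):
    """Equation (7): S = solvent mass taken up at equilibrium / initial
    polymer mass."""
    return m_solvent_inf / m_polymer_0

def permeability_coefficient(d, s):
    """Equation (8): P = D * S."""
    return d * s

def fit_transport_mode(t_min, q_t, q_inf):
    """Fit log10(Q_t/Q_inf) = log10(k) + n*log10(t) (Equation (10)) over
    the initial region of the sorption curve and classify transport by n."""
    t = np.asarray(t_min, dtype=float)
    frac = np.asarray(q_t, dtype=float) / q_inf
    mask = (frac > 0.0) & (frac < 0.6)  # initial sorption region (assumed)
    n, log_k = np.polyfit(np.log10(t[mask]), np.log10(frac[mask]), 1)
    k = 10.0 ** log_k
    if n <= 0.5:
        mode = "Case I (Fickian)"
    elif n < 1.0:
        mode = "anomalous (non-Fickian)"
    else:
        mode = "Case II"
    return n, k, mode
```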
This result agrees well with the sorption curve in Figure 8, which, over the same time period, showed a modest increase in the mole percent solvent uptake before reaching equilibrium. Polymer chains adapt to a penetrant's presence quickly, but it takes a while for the equilibrium solvent absorption to occur. Additionally, the value of k reflects how the oil penetrates the rubber, that is, how the penetrant interacts with the rubber matrix. A lower k value represents a lower speed of the solvent or oil penetrating the rubber. Here, the calculated results show that penetration became more difficult with more SR. This is simply because the foam contains thicker cell walls, which reduce the efficacy of the oil in penetrating the rubber molecules.
Figure 12 depicts the TG curves of the samples. The decomposition temperature at 50% weight loss (T−50%) and the content of char residue are embedded in this figure. Two regions of degradation of the specimens were seen. The initial minor mass loss at around 180-200 °C was due to the presence of volatile matter such as stearic acid and TDAE oil, and this process was complete at about 300 °C [29]. Then, the major step of degradation of the blends (330-450 °C) was caused by the degradation of both the SR and NR segments. It is noteworthy that the specimens' thermal stability marginally improved as the content of SR increased. This can be seen from T−50%, which clearly shows that introducing SR shifted the decomposition to a higher temperature. The enhancement in thermal stability can be evidently related to the original stability of SR seen previously in Figure 2.
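As a simple illustration of how T−50% and the char residue can be extracted from a digitized TGA trace, consider the following sketch; the interpolation scheme and names are ours.

```python
import numpy as np

def t_minus_50(temperature_c, weight_pct):
    """Interpolate T-50%, the temperature at 50% weight loss, from a TGA
    trace whose weight signal (% of initial mass) decreases with temperature."""
    temp = np.asarray(temperature_c, dtype=float)
    w = np.asarray(weight_pct, dtype=float)
    # np.interp needs an increasing x-axis, so interpolate on reversed arrays.
    return float(np.interp(50.0, w[::-1], temp[::-1]))

def char_residue(weight_pct):
    """Residual mass (%) at the end of the scan (600 degrees C here)."""
    return float(np.asarray(weight_pct, dtype=float)[-1])
```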
The grafting of MMA onto the backbone of NR has made the NR-g-PMMA more stable against degradation. A similar observation was also found in the literature [17], where increasing the content of MMA grafted onto NR enhanced the thermal resistance of the rubber. With increased PMMA content in the grafting reaction, stronger chemical interactions between the molecules were speculated to be the cause. Additionally, the increased oxygen-containing groups exhibited a higher resistance to thermal degradation than the original NR. Moreover, the decomposition temperature of the foam specimens was slightly lower than that of the raw rubbers (see Figure 2). This is because the foam was heated during processing and vulcanization, so slight degradation of the foam specimens may have occurred during these steps.
Conclusions
The aim of this work was to utilize the waste from the production of NR-g-PMMA as an oil-absorbent rubber. The rubber blends were turned into cellular structures to increase the oil swellability. The results indicated that adding SR quickened the vulcanization of the rubber due to the ammonolysis taking place during the vulcanization of the foam. It was observed that the cellular structure of the foam was difficult to generate when using a higher content of SR. This reduced the oil swellability from 7.09 g/g to 5.02 g/g, in good agreement with the study of swelling kinetics, which demonstrated a non-steady decreasing trend; the transport mechanism was anomalous for rubber containing SR between 0 and 100 phr. There was a minor degradation of the samples at temperatures of 180-300 °C due to the presence of volatile matter, mainly TDAE oil. This would not affect the application of the oil-absorbent rubber, since the oil was added to facilitate the foaming process.
However, it is interesting to note that the prepared oil-absorbent rubber still provided higher thermal stability, particularly at temperatures over 400 °C. This is a very good compromise between oil absorbency and thermal stability. Based on the overall properties, an SR content of 30 phr is suggested. This is considered primarily from the oil absorption capacity, while other factors include blend processability, foam structure, and physical and thermal properties. The results obtained from this study will be used for further experiments on the enhancement of oil absorbency by applying other key factors. This work is a good initiative for preparing oil-absorbent materials based on scrap from modified natural rubber production. At the same time, our work provides an alternative route to fabricate oil-absorbent foam for oil recovery applications.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2022-11-24T16:13:59.687Z
2022-11-22T00:00:00.000
{ "year": 2022, "sha1": "595fc81103448722419d5dbb72c97cbd5ef6aa2d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/14/23/5066/pdf?version=1669277687", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff7c81c3af287c16dcfa805f3111d7007b05ceca", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
16894787
pes2o/s2orc
v3-fos-license
Triclosan Resistome from Metagenome Reveals Diverse Enoyl Acyl Carrier Protein Reductases and Selective Enrichment of Triclosan Resistance Genes
Triclosan (TCS) is a widely used antimicrobial agent, and TCS resistance is considered to have evolved in diverse organisms with the extensive use of TCS, but the distribution of TCS resistance has not been well characterized. Functional screening of the soil metagenome in this study has revealed that a variety of target enoyl-acyl carrier protein reductase (ENR) homologues are responsible for the majority of TCS resistance. Diverse ENRs similar to 7-α-hydroxysteroid dehydrogenase (7-α-HSDH), FabG, or the unusual YX7K-type ENR conferred extreme tolerance to TCS. The TCS-refractory 7-α-HSDH-like ENR and the TCS-resistant YX7K-type ENR seem to be prevalent in human pathogenic bacteria, suggesting that a selective enrichment has occurred in pathogenic bacteria in soil. Additionally, resistance to multiple antibiotics was found to be mediated by antibiotic resistance genes that co-localize with TCS resistance determinants. Further comparative analysis of ENRs from 13 different environments has revealed a huge diversity of both prototypic and metagenomic TCS-resistant ENRs, in addition to a selective enrichment of specific TCS-resistant ENRs in presumably TCS-contaminated environments with reduced ENR diversity. Our results suggest that long-term extensive use of TCS can lead to the selective emergence of TCS-resistant bacterial pathogens, possibly with additional resistance to multiple antibiotics, in natural environments.
The emergence of bacterial resistance to antimicrobials and the growing number of multidrug-resistant bacteria have become a global public health concern 1 and have precipitated the need for the development of more effective antibiotics 2. It is widely accepted that the development and spread of antibiotic resistance in microbes can be largely attributed to the abuse and misuse of antibiotics and biocides 3. A direct correlation between biocide and antimicrobial use and the extent of antimicrobial resistance has been demonstrated 3,4.
Various environments, including wastewater treatment plants (WWTPs), sediment, surface water, sewage, sludge and soil, have been known to serve as potential reservoirs of antibiotic resistance genes (ARGs) 5. Exchange of ARGs by horizontal gene transfer between bacteria from natural environments and pathogenic bacteria has been reported 6. This phenomenon reveals the importance of the environmental resistome in terms of the possible transmission and spread of selected ARGs to human pathogenic bacteria. The widely used biocide triclosan [5-chloro-2-(2,4-dichlorophenoxy)-phenol, TCS] has a broad-spectrum capacity to kill many microorganisms and is a constituent of a variety of personal care products 7. TCS blocks bacterial type II fatty acid synthesis by targeting enoyl-acyl carrier protein reductases (ENRs), which catalyze the last enoyl reduction step in the fatty acid elongation cycle 8. So far, four ENR isozymes have been reported in bacteria, namely FabI, FabL, FabV, and FabK 9. The FabI and FabL ENRs are known to share a similar YX6K-type catalytic domain, whereas FabV carries a YX8K-type catalytic domain 10. In this study, two metagenomic libraries, constructed from alluvial soil (AS) and industrially contaminated soil (ICS) (Supplementary Table S1), were screened for TCS resistance. AS was collected from the sediments of Nakdong river water in the Eulsukdo island area, where TCS was detected at approximately 0.66 μg/L (data not shown). The AS site receives combined sewer overflow from different sites, including the Sasang-gu industrial complex, where ICS was collected. TCS was detected at approximately 1.07 μg/kg and 1.29 μg/kg in the AS and ICS samples, respectively (data not shown). A total of 123 fosmid clones with TCS resistance were classified into 32 groups (Table 1) on the basis of their restriction digestion profiles (data not shown). Shotgun library construction and transposon-based mutagenesis of those 32 representative clones were conducted, followed by sequence analysis to identify the candidate genes responsible for TCS resistance.
Metagenomic clones carried diverse TCS-resistance determinants. The metagenome-derived TCS-resistance determinants identified in this study fell into several different groups. The majority of the clones either carried different versions of ENRs, including prototypic FabI, FabV, FabK or FabL-like ENR homologues, or novel candidate 7-α-HSDH, FabG and YX7K-like ENR homologues (Fig. 1a). Other clones carried efflux pumps, novel hypothetical proteins, or unknown determinants that conferred resistance to TCS. The ENR or FabG activity of selected metagenomic clones was confirmed by genetic complementation of temperature-sensitive (ts) fabI or fabG mutants of E. coli, respectively (Table 1). A higher diversity of TCS resistance determinants was observed in AS compared to ICS (Fig. 1b). RAIphy (relative abundance index phylogenetic analysis)-based taxonomic assignment of the metagenomic clones revealed that the majority belonged to the phylum Proteobacteria (Supplementary Table S2). The metagenomic 7-α-HSDH-type ENRs and their closest homologues (obtained from the UniRef50 database) clustered together in the FabL clade, but the well-characterized 7-α-HSDHs from E. coli and C. testosteroni clustered together on a separate clade (Fig. 2). Both the YX7K- and FabG-type metagenomic ENR homologues clustered on separate clades (Supplementary Fig. S1a,b).
Metagenome-derived TCS resistance-associated FabI ENR substitutions were predominant in human-associated pathogenic bacteria and soil-borne plant pathogens.
Among the various metagenome-derived TCS-resistant ENR homologues, five were FabI homologues (Table 1) carrying typical YX6K-type catalytic domains (Fig. 3a-e). These FabI ENR homologues conferred various levels of resistance to TCS (Fig. 4, Supplementary Table S3). Mutations well known to confer TCS resistance on the E. coli ENR FabI and on the Mycobacterium ENR InhA include amino acid substitutions such as G93-V, G93-S, M159-T, F203-L and F203-C. However, the metagenomic FabI ENRs carried unique patterns of substitutions (Fig. 3a-e), such as G93-A, F203-A and F203-V, which differed from the previously known amino acid substitutions 15,24. Additionally, these residues in the metagenomic FabI ENRs carry different side chains, which in turn may affect protein-TCS interactions (Fig. 3f). We performed genome-wide investigations of FabI ENR variation to examine the comparative abundance of the metagenome-identified substitutions and the previously known FabI ENR mutations in human-associated pathogenic bacteria, non-pathogenic bacteria and soil-borne plant pathogenic bacteria (Supplementary Data 1). Most of the FabI ENR homologues from these organisms carried substitutions similar to those found in the metagenomic ENRs rather than the mutations reported in the previously known FabI ENRs of E. coli and other organisms (Fig. 3g, Supplementary Data 1). Metagenomic FabI ENR substitutions were mostly present as single point mutations (G93-A or F203-A) in the FabI ENRs of these organisms. Among these substitutions, the frequency of the G93-A replacement (50%) was higher than that of F203-A (7%) in the ENRs of these organisms. Few of the FabI ENR homologues carried substitutions that differed from both the previously known and the metagenome-derived substitutions. Additionally, the metagenome-derived FabI ENRs, with their different patterns of substitutions, not only conferred parental levels of resistance to TCS but also complemented ENR activity (Table 1).
Metagenomic YX7K-type ENRs shared by both eukaryotes and prokaryotes are abundant among intracellular pathogenic bacteria and apicomplexans. Two of the metagenomic subclones, pAB1-3 and pAF1-6 (Table 1), not only carried catalytic domains of the YX7K type and additional loops in their structure (Supplementary Fig. S2a,b) but also shared the least identity with other prototypic ENRs. These YX7K-type ENRs were abundant in many obligate intracellular pathogenic bacteria and apicomplexan microorganisms (Supplementary Tables S4-S5), where the YX7K-type catalytic domain was highly conserved, along with other shared features.
The 7-α-HSDH homologues of Helicobacter pylori and Campylobacter jejuni confer similar levels of resistance to TCS as the metagenomic 7-α-HSDHs. Metagenome-derived 7-α-HSDH gene homologues were predominant in three major families of Epsilonproteobacteria - Campylobacteraceae, Helicobacteraceae and Nautiliaceae - the first two of which include many human and other animal pathogens (Supplementary Table S6). In fact, the 7-α-HSDH gene homologues of C. jejuni NCTC11168 and H. pylori HPKTCC B0100 were able to confer an equivalent MIC to TCS in naturally TCS-sensitive E. coli (Fig. 4). Comparison of the 7-α-HSDH gene homologues from C. jejuni and H. pylori with closely related, well-characterized prototypic 7-α-HSDHs and ENRs revealed that these enzymes share similar key enzyme features (Supplementary Fig. S3c,d).
Figure 1. Metagenomic analysis of TCS-resistant clones revealed diverse patterns of TCS resistance determinants, with a higher diversity observed in AS compared to ICS. (a) A summarized sketch of the TCS resistance determinants from the metagenomic library screening. The majority of the clones carried different versions of ENR homologues, including previously known prototypic ENRs, novel candidate ENRs, and ENR homologues with compromised ENR activity. The ENRs that complemented ENR activity in vivo are indicated by light green forward arrows or as otherwise indicated. Other clones carried TCS resistance efflux pumps, hypothetical proteins or unknown determinants for resistance to this biocide. Some of the clones even carried genes for resistance to other antibiotics colocalized with the TCS resistance determinants. (b) Distribution of TCS resistance determinants between AS and ICS. TCS resistance determinants in AS and ICS are indicated in cyan and red, respectively. The comparative diversity of TCS resistance determinants was higher in AS than in ICS.
3-oxoacyl-ACP reductase (FabG)-like ENRs with compromised FabG activity confer significant resistance to TCS. Two TCS-refractory clones, pD-1-7 and pAM1-4 (Fig. 4), conferred significant resistance to TCS despite their compromised FabG activity. A comparison with a prototypic FabK ENR from S. pneumoniae was also made (Supplementary Data 1). The G93-A substitution was more commonly present, found in 50% of the bacterial ENRs, followed by the F203-A substitution, found in 7% of the bacterial ENRs from the 401 investigated bacterial strains; the G93-A and F203-A substitutions therefore constitute 88% and 12% of the substitutions found in these ENRs, respectively. Inactivation of the candidate determinant or of the hypothetical-protein-like ENR homologue by Tn insertion led to the complete loss of TCS resistance. Intriguingly, a variety of microorganisms, including the potential human pathogen Massilia timonae, contained a homologue of the metagenome-derived pAH4-3 hypothetical protein (Supplementary Table S18). The remaining seven clones did not carry any genes previously known to confer resistance to TCS (Supplementary Table S8), although some of them carried different efflux pump protein homologues. In addition, none of these seven E. coli clones exhibited any significant decrease in the TCS concentration in the medium (data not shown), suggesting that TCS is neither removed nor inactivated by these clones, nor is ENR activity complemented in them.
Metagenomic TCS-resistant clones conferred co/cross-resistance to other antibiotics. Eleven of the TCS-resistant clones showed resistance to at least one of the antibiotics tested (Supplementary Tables S19-S20, Supplementary Fig. S6a,c,e,i, Supplementary Fig. S7a-g, Supplementary Tables S9, S11, S13, S17, S21-S27). The pBC1 clone, which carried two different homologues of the acrB gene (Supplementary Fig. S7g, Supplementary Table S27), showed mild cross-resistance to tetracycline when treated with sub-lethal concentrations of TCS. The pAV2 clone, which showed co-resistance against other antibiotics, contained an ARG cluster colocalized with the TCS-resistant ENR (Supplementary Fig. S7c, Supplementary Table S23). This cluster contained genes encoding two multidrug efflux pump family protein homologues and an aminoglycoside-modifying enzyme, along with a TCS-resistant ENR homologue. Most of the metagenomic clones that showed co/cross-resistance to TCS contained colocalized genes that conferred resistance to other antibiotics, either in the form of efflux pumps or antibiotic-modifying enzymes (Supplementary Table S28).
However, two clones did not carry any genes known to confer antibiotic resistance, even though both showed co-resistance to other antibiotics. Furthermore, seven of the metagenomic clones contained mobile genetic element signatures (Supplementary Table S29).
To examine ENR abundance and diversity more broadly, 49 publicly available metagenomic datasets were analyzed (Supplementary Data 2). These metagenomic datasets cover 13 different environments, and the WWTPs and ocean sediment were considered TCS-rich environments (TCSREs) relative to the presumed TCS-free environments (TCSFEs) or low-TCS environments, such as the glacier, cave, forest, fresh water and human oral cavity environments. These metagenomic datasets were screened for the presence of both metagenomic and prototypic ENR homologues. Total ENR abundance analysis revealed that the ENR content in TCSREs was greatly reduced compared to TCSFEs (Fig. 5a). A comparative analysis of both TCS-sensitive (prototypic) and TCS-resistant (prototypic and metagenomic) ENRs showed that TCSREs tended to shape the ENR diversity in a different way than TCSFEs (Fig. 5b). The TCSFEs were more likely to share similar ENR diversity patterns. Interestingly, the metagenome-derived 7-α-HSDH-like ENR homologues were the most abundant ENRs (34-42%) in TCSFEs (Fig. 5b). A comparative abundance analysis of TCS-resistant ENRs revealed that TCSREs tended to enrich for TCS-tolerant prototypic (approximately 17.6-fold) and metagenomic (approximately 17.1-fold) FabV-type ENRs and other TCS-tolerant ENRs, such as the metagenome-derived YX7K-like (approximately 8.1-fold) and FabG-like (approximately 2.4-fold) ENR homologues (Fig. 5c,e,f). Interestingly, the metagenomic novel-hypothetical-protein-like ENR homologues were exceptionally enriched (approximately 263-fold) in TCSREs (Fig. 5c,e,f). Moreover, β-diversity analysis revealed that the ENR diversity in TCSREs was dissimilar from that found in TCSFEs (Fig. 5d).
Discussion
Our results provide valuable information about the TCS resistance gene (TRG) reservoir in the natural environment at the metagenomic level. The levels of TCS detected in our samples suggested potential TCS contamination of these sites, in concordance with various other studies 23, where TCS concentrations ranging from 0.02-35 μg/kg have been reported. Here, we demonstrate that novel metagenome-derived ENR variants associated with TCS resistance are abundant in natural soils and in a number of human-associated pathogenic microorganisms. We submit that the excessive use of TCS and the frequent exposure of microorganisms to TCS may have led to modifications in ENR, the target enzyme, and to the generation of various functionally redundant versions of ENR activity. The prevalence of TCS-resistant FabI ENR homologues, which were abundant in most of the human pathogens investigated, raises concerns about the efficacy of using TCS and TCS-based analogues against these microorganisms. Intriguingly, our results revealed for the first time that a similar YX7K-type ENR is shared by both eukaryotic and prokaryotic intracellular pathogenic microorganisms, suggesting either a potential evolutionary link in the ENRs among these intracellular pathogens or a potential horizontal gene transfer event among them. Potential ENR dissemination via horizontal gene transfer has previously been assessed: TCS resistance in a Staphylococcus aureus isolate was mediated by an unusual additional sh-fabI allele that originated from, and was 100% identical to, that of S. haemolyticus 26.
Additionally, in other staphylococci, including S. aureus and S. epidermidis, homologues identical to sh-fabI were located on plasmids, which suggested a high mobility potential for these ENR homologues 26. The metagenome-derived TCS-resistant 7-α-HSDH homologues were predominant in Epsilonproteobacteria; the lack of other prototypic ENR homologues in some members of this class, such as Campylobacter lari and Helicobacter bilis, indicates that 7-α-HSDH is the only ENR in these organisms. FabG-like and completely TCS-refractory ENRs were predominant in Streptomyces spp., which are known for having various polyketide synthesis pathways and might therefore have different types of ENRs for fatty acid biosynthesis. The slight structural similarities and the differences in TCS resistance levels between the metagenomic FabG and prototypic FabL ENRs suggest that enzymes of intermediate structure may have, over evolutionary history, left different versions of these enzymes throughout a diverse range of microorganisms. For instance, FabL and FabI are structurally very similar, with similar catalytic domains, but they have different cofactor requirements and different affinities for TCS. Additionally, phylogenetic analysis suggests that the metagenomic 7-α-HSDH, FabG and YX7K-like ENRs may have diverged from FabL- or FabI-like ENRs, or from either 7-α-HSDH or FabG. Consequently, these different types of ENRs may be difficult to distinguish by sequence comparison or sequence annotation alone. The unusual characteristics of two of the metagenomic prototypic FabV- and FabK-like ENR homologues might be attributed to the absence of the conserved FAD-binding domain 27 and to a mutation in the catalytically active residue 28, respectively. We assume that these nonfunctional ENRs may act as biocide load reducers by binding and sequestering a certain amount of TCS, thereby reducing the overall biocide burden on the functional ENRs in bacterial cells. Our results suggest that even minute variations among these ENR homologues might result in conformational changes that, in turn, lead to the loss of ENR activity. To the best of our knowledge, this is the first report describing nonfunctional FabV and FabK ENR homologues with moderate TCS resistance. It has been proposed that additional cellular targets of TCS might exist beyond the previously known target 20. The TCS resistance conferred by the metagenomic partial acrB gene homologue suggests the existence of different versions of this protein or of different codon usage 29. Similarly, the novel hypothetical protein-encoding gene homologue from clone pAH4 and the unknown TCS resistance determinants in seven other metagenomic clones further strengthen the hypothesis that novel ENRs, other genes or efflux pumps can confer TCS resistance. The co/cross-resistance to other antibiotics observed in the metagenomic clones may be associated with the colocalized genes encoding efflux pumps, antibiotic-modifying enzymes, ARG clusters and other genes. The ARGs that colocalized with TCS resistance determinants indicate that TCS may selectively enrich for complex patterns of multiple resistance to other antibiotics in the environment.
TCS-mediated overexpression of the AcrB efflux system 22,30 and the resulting co/cross-resistance to other classes of antibiotics have been extensively addressed 31,32; however, our results provide clear evidence of co- or cross-resistance mediated not only by different versions of previously reported efflux pumps but also by other ARGs colocalized with TCS resistance determinants. Metagenomic comparisons across different environments suggest that TCS may act as a driving force shaping the total ENR diversity in TCSREs, which are the most likely sites for TCS resistance development 32. Moreover, both prototypic and metagenome-derived ENRs were abundant in nature, while one of the metagenomic 7-α-HSDH-type ENRs represented the majority of the ENRs present in TCSFEs. The reduction in total ENR diversity, the similar diversity patterns and the selective enrichment of prototypic and metagenome-derived TCS-tolerant ENRs in TCSREs indicate that this biocide may exert a selective pressure for the enrichment of specific groups of bacteria carrying TCS-tolerant ENRs. We conclude that different versions of ENRs and novel TCS resistance genes (TRGs) are abundant in nature. The vast diversity of ARG pools has made it possible for only some of the ARGs to be selectively transferred to many bacteria 8,33; therefore, selective enrichment of TCS resistance in TCSREs 32 and the spread of specific TRGs are possible in a diverse range of environmental and pathogenic microorganisms. The metagenome-derived novel patterns of TCS resistance raise concerns about the efficacy of this biocide and about the development of TCS- and non-TCS-based ENR inhibitors. The spread of these ENRs and any colocalized ARGs may lead to resistance to TCS, other ENR inhibitors, and multiple other antibiotics. Additionally, TCS may exert a selective pressure that enriches human- and soil-borne plant pathogenic bacteria carrying TCS resistance determinants in the environment. Moreover, the great diversity of the metagenome-derived ENR isozymes may provide a clue to the evolution of different fatty acid synthesis systems in microorganisms and raises the question of what evolutionary pressures natural or synthetic inhibitors may have exerted in selecting for ENR diversity.
Methods
Bacterial strains, plasmids, genomic DNA and culture conditions. The bacterial strains and plasmids used in this study are described in Supplementary Table S31. Genomic DNA from Helicobacter pylori HPKTCC B0100 was obtained from the Korean Type Culture Collection (KTCC), Korea National Research Resource Bank (KNRRC). The genomic DNA from Campylobacter jejuni NCTC11168 was kindly provided by Dr. Jong-Hyun Kim at the Centers for Disease Control and Prevention. E. coli strains/mutants DH5α, EPI-300, JP1111 (obtained from the E. coli genetic resources at Yale; CGSC) and CL37 (kindly provided by Dr. John Cronan at the University of Illinois at Urbana-Champaign) were routinely grown at 37 °C in Luria-Bertani (LB) broth or on LB agar supplemented with appropriate antibiotics. The antibiotic concentrations used were as follows: ampicillin, 100 μg/ml; kanamycin, 50 μg/ml; chloramphenicol, 30 μg/ml; and TCS, 0.1-650 μg/ml. Fosmid pCC1FOS (Epicentre Biotechnologies, Madison, WI, USA) was used to construct genomic libraries, whereas pUC119 and pGEM®-T Easy were used for further subcloning experiments.
Determination of minimum inhibitory concentration (MIC).
To determine the MIC of TCS, E. coli EPI-300 with pCC1FOS and DH5α with pUC119 were first grown to an OD600 of 1.0, and these bacterial suspensions were serially diluted to 1 × 10⁵ colony-forming units (CFU)/ml. The cell suspension (1 × 10⁵ CFU/ml) was spread onto LB agar medium containing antibiotics and TCS in the range of 0.1-5 μg/ml. The LB plates were incubated at 37 °C for three days, and bacterial colony formation was examined at regular 24 h intervals. The lowest concentration of TCS (0.9 μg/ml) that prevented bacterial growth of E. coli EPI-300 was considered the MIC for TCS. To determine the MICs of the clones carrying various ENR homologues, the same procedure was followed, but the negative control E. coli EPI-300 carried an additional chromosomal fabI gene from wild-type E. coli K12 in the pCC1FOS fosmid vector to control for ENR overexpression effects. This experiment was performed in triplicate for each antibiotic concentration.
General DNA manipulations. Standard recombinant DNA techniques were followed as previously described 34. DNA sequencing and primer synthesis were performed commercially at the DNA sequencing facility of Macrogen (Seoul, Korea). Sequence comparisons (nucleotide/amino acid) were performed using the BLAST and ORF finder online services provided by the National Center for Biotechnology Information (NCBI, http://blast.ncbi.nlm.nih.gov). Multiple alignment analysis was performed using BioEdit software in combination with GeneDoc.
Metagenomic library construction and screening for TCS-resistant clones. The metagenomic library from alluvial soil (AS) was previously constructed 35,36; however, to construct the metagenomic library from the industrially contaminated area, soil samples were collected from the Gam-geon stream (Sasang-gu, Busan, Republic of Korea), which receives combined sewer effluent from various industries, as this area has been highly urbanized by a number of industries since 1968. The Gam-geon stream finally meets the Nakdong River, which in turn converges with the East Sea, a marginal sea of the Pacific Ocean, and forms a unique ecosystem. Soil DNA was isolated as previously described 37. Both of the soil sample collection sites for metagenomic library construction were presumed to be TCS-contaminated due to anthropogenic processes. The metagenomic library was constructed and stored following the protocol described previously 35. To select TCS-resistant clones from the fosmid library, the library pool stocks were diluted in a buffer (per liter: NaCl, 8.5 g; KH2PO4, 0.3 g; Na2HPO4, 0.6 g; MgSO4, 0.2 g; gelatin, 0.1 g), and fosmid clones from the metagenomic library were spread on LB agar containing 30 μg/ml chloramphenicol and 5 μg/ml TCS. E. coli colonies that grew on LB with TCS were picked and further tested at higher concentrations of TCS until they were unable to grow or were found to be refractory to TCS. Pure cultures of TCS-resistant clones were processed for fosmid isolation followed by BamHI restriction digestion, and unique clones were finally selected based on their BamHI restriction profiles.
Subcloning of TCS-resistant clones. A secondary library was constructed in pUC119 following the previously described procedure 38, and TCS-resistant subclones (primary subclones) were selected. Once the nucleotide sequence analysis of a TCS-resistant clone was completed, the candidate gene(s) for TCS resistance were further subcloned into pUC119 or the pGEM®-T Easy vector (secondary subclones) and tested at a similar concentration of TCS as the parental clones.
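Read as data, the MIC determination described earlier in this section amounts to taking the lowest tested concentration at which no colonies form. Below is a minimal sketch (the names are ours), with example values chosen to mirror the 0.9 μg/ml cutoff reported for the control strain.

```python
def mic_from_growth(concentrations, grew):
    """Return the MIC: the lowest tested concentration at which no colonies
    formed; None if growth occurred at every tested concentration."""
    no_growth = sorted(c for c, g in zip(concentrations, grew) if not g)
    return no_growth[0] if no_growth else None

# Example mirroring the vector-only control described above (ug/ml):
tcs = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1]
growth = [True, True, True, True, False, False]
print(mic_from_growth(tcs, growth))  # -> 0.9, the reported MIC for EPI-300
```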
Metagenomic TCS-resistant clones that failed to produce subclones by restriction digestion were processed using Tn5 transposon mutagenesis with the EZ-Tn5™ <KAN-2> Insertion Kit (Epicentre) to identify the genes responsible for TCS resistance. Transposon mutagenesis was performed according to the manufacturer's protocol. Mutants that were unable to grow on TCS-containing medium were selected, and the transposon insertion sites were sequenced. Complementation analysis. To confirm the ENR activity of the various metagenome-derived ENR homologues and other candidate clones, complementation studies were performed. Recombinant plasmids carrying candidate ENR homologue genes from metagenomic clones were transferred to a temperature-sensitive fabI mutant, E. coli JP1111 39. This mutant has a mutation in the FabI ENR that renders it unable to grow at the non-permissive temperature (42 °C). E. coli JP1111 containing the fosmid or metagenomic clones was grown in triplicate on LB agar medium with IPTG, and bacterial growth was observed at 30 °C and 42 °C. Growth of E. coli JP1111 at 42 °C for 48 h indicated complementation of FabI ENR activity. To confirm the 3-oxoacyl-acyl-carrier-protein reductase (FabG) activity of two of the metagenomic clones, complementation studies were performed in a similar manner as described for ENR complementation, but using a fabG temperature-sensitive mutant, E. coli CL37 40. This mutant has two mutations (A154T and E233K) in its FabG reductase, rendering it unable to grow at the non-permissive temperature (42 °C). Co-/cross- and multiple-resistance tests. To investigate whether E. coli DH5α carrying the various TCS-resistance metagenomic clones showed any differential responses to a variety of antibiotics, we first grew these cells to an OD600 of 1.0 in the presence or absence of 0.1 μg/ml TCS, a sub-lethal concentration (to observe cross-resistance). These bacterial cultures were then serially diluted to 1 × 10^5 CFU/ml, and 3 μl of each cell suspension (1 × 10^5 CFU/ml) was spotted onto LB agar medium containing the antibiotics to be tested for co-/cross-resistance. LB plates were incubated at 37 °C for three days, and bacterial colony formation was examined at regular 24-h intervals. The lowest concentration of an antibiotic that prevented bacterial growth was considered the MIC for that antibiotic. This experiment was performed in triplicate for each concentration of each antibiotic. Prior to testing for co-/cross-resistance, two independent experiments were performed in which a set of TCS-resistant clones was cultured in LB broth supplemented with 0.1 μg/ml TCS (for cross-resistance studies) alongside an identical set not exposed to TCS (to observe co-resistance). The MIC of E. coli carrying TCS resistance genes was determined against 19 different antibiotics from 8 different classes (Supplementary Table S19). Phylogenetic and taxonomic analysis. Phylogenetic analysis was performed as described previously 41 for the unique metagenomic ENR groups using the amino acid sequences of metagenomic ENRs, previously known, well-characterized ENRs, and other protein homologues that shared maximum identity with specific metagenomic ENR types. Sequence similarity searches were performed against the UniRef50 database for the well-characterized ENRs (FabL 42, FabI 43) and for protein homologues other than ENRs, including FabG 40 and 7-α-hydroxysteroid dehydrogenase (7-α-HSDH) 44 from E.
coli and Comamonas testosteroni 45, as well as for the unique metagenomic ENR candidate groups. This search returned sequences that were more than 50% identical. For each homology search, the top 10 scoring entries were selected. All identified sequences were compiled together with the closely related corresponding metagenomic ENRs, and redundant sequences were removed using the online Decrease Redundancy program 46. The sequences were then aligned with MEGA 6 47 using the MUSCLE algorithm 48. The alignment output was analyzed using the maximum likelihood method in MEGA, utilizing the nearest-neighbor-interchange strategy, allowing deletion of gaps present in less than 50% of the sequences, and 500 bootstrap replicates to evaluate confidence. The taxonomic origin of the functionally selected DNA fragments was determined using RAIphy, a composition-based classifier that, compared with similarity-based methods, can accurately predict taxonomy without strict reliance on phylogenetically close sequences in public databases 49. To predict the source phylum of the resistance-conferring soil DNA fragments, we used all of the assembled metagenomic fragments, seeded predictions with the RAIphy 2012 RefSeq database, and binned DNA fragments with the 'iterative refinement' option in RAIphy. Selection of environmental metagenomic datasets. To investigate the abundance and diversity of genes encoding ENR homologues in presumably TCS-contaminated and TCS-free environments, 49 metagenome datasets were selected from 13 different environments. The presumably TCS-rich environments (TCSREs) comprised 3 samples from WWTPs (an activated sludge WWTP in China, a tannery WWTP in China, and a WWTP in Hong Kong) and one ocean sediment sample from China. The presumably TCS-free or low-TCS environments comprised 9 samples: 2 human oral samples (an ancient human oral sample from an archeological site and a human oral sample from Spain), 4 fresh water samples (from Antarctica, Minnesota, Australia and South Australia), 2 forest soil samples (a Mediterranean forest soil from Spain and a temperate forest soil from Finland) and one glacier sample (from Austria). The metagenomic datasets for these samples were downloaded from the public repository MG-RAST (http://metagenomics.anl.gov/). Detailed information about how the metagenomic datasets were selected and about the TRG reference database can be found in Supplementary Data 2. Comparative search for ENR diversity and abundance in environmental metagenomic datasets using the TRG reference database. A TRG reference database was constructed, containing the deduced amino acid sequences of well-known prototypic ENRs and the metagenome-derived ENRs identified in this study (Supplementary Data 2). TRG sequence reads were identified by performing a homology search between the environmental metagenomic datasets (queries, as nucleotide sequences) and the TRG reference database (subjects, as protein sequences) using BLASTx. Annotated sequence reads were discarded if their e-value was higher than 10^-10. The annotated read numbers (hits) from individual metagenomic datasets were normalized as previously described 5. Briefly, to normalize the annotated read number (hits) per metagenome, we considered the average read length, the TRG reference sequence length, the 16S rRNA gene sequence length, and the 16S rRNA sequence hits identified in the metagenomic datasets against the Greengenes database (retrieved from MG-RAST) 5.
The TRG normalized abundance data are provided in Supplementary material Fig. 5.xlsx. Normalization of the data for each dataset was performed as described 5, using the following equation:

\[
\text{Abundance} \;=\; \sum_{j=1}^{n}\frac{N_{\mathrm{TRG},\,j}\times\left(L_{\mathrm{read}}/L_{\mathrm{TRG\ ref},\,j}\right)}{N_{\mathrm{16S}}\times\left(L_{\mathrm{read}}/L_{\mathrm{16S}}\right)}
\]

where N_TRG,j (the number of TRG homologous sequences) is the number of annotated TRG-like reads assigned to one specific TRG reference sequence; L_TRG ref,j is the sequence length of the corresponding specific ENR reference sequence in the TRG database; N_16S is the number of 16S rRNA gene sequences identified in the metagenomic data; L_16S is the average length of the 16S rRNA gene sequences found in the Greengenes database (1,432 bp); n is the number of mapped TRG reference sequences; and L_read is the average metagenomic read length for the corresponding sequencing technology used, such as Illumina HiSeq, Sanger sequencing, or 454 pyrosequencing (refer to Supplementary Data 2). Analysis of normalized abundance of ENR homologues from environmental metagenomes. All comparisons of the relative abundance of TRG homologues between metagenomes were performed using the normalized abundance reads for each ENR divided by the total number of normalized ENR reads in the dataset (Supplementary material Fig. 5.xlsx). To compare the dispersion of TCS-resistant ENRs across the various environments (metagenomes), principal coordinates analysis (PCoA) was performed using Bray-Curtis dissimilarity measures for the TRG subtypes. All statistical analyses were performed with R software (version 3.2.2) (http://www.r-project.org/) using the Vegan 50 and ggplot2 51 packages (Supplementary material Fig. 5.xlsx, Supplementary material Fig. 5d.xlsx).
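As an illustration, the following Python sketch implements this normalization for one metagenome, assuming BLASTx hits per TRG reference have already been tallied; names are illustrative, and the nucleotide-versus-protein length bookkeeping for the BLASTx references is left to the caller.

```python
# Sketch of the per-metagenome normalization described above. All names are
# illustrative; hit counts are assumed to come from a prior BLASTx search.
L_16S = 1432.0  # average 16S rRNA gene length in the Greengenes database (bp)

def normalized_abundance(trg_hits, trg_ref_len, n_16s, read_len):
    """trg_hits[j]: reads annotated to TRG reference j;
    trg_ref_len[j]: length of reference j (protein length x 3 if given in
    amino acids, since reads are nucleotides); n_16s: 16S rRNA reads found
    in the metagenome; read_len: average read length (bp)."""
    copies_16s = n_16s * read_len / L_16S           # estimated 16S gene copies
    return {
        j: (hits * read_len / trg_ref_len[j]) / copies_16s
        for j, hits in trg_hits.items()             # TRG copies per 16S copy
    }

# e.g. 120 FabI-like reads of mean length 100 bp against a 786-bp reference,
# with 5,000 16S reads in the same dataset:
print(normalized_abundance({"FabI": 120}, {"FabI": 786.0}, 5000, 100.0))
```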
Clockwork Inflation

We investigate the recently proposed clockwork mechanism delivering light degrees of freedom with suppressed interactions and show, with various examples, that it can be efficiently implemented in inflationary scenarios to generate flat inflaton potentials and small density perturbations without fine-tunings. We also study the clockwork graviton in de Sitter and, interestingly, we find that the corresponding clockwork charge is site-dependent. As a consequence, the amount of tensor modes is generically suppressed with respect to the standard cases where the clockwork set-up is not adopted. This point can be made a virtue in resurrecting models of inflation which were supposed to be ruled out because of an excessive amount of tensor modes from inflation. Introduction. The clockwork mechanism [1,2] allows one to explain the presence of light degrees of freedom with highly suppressed interactions in theories where there are no small parameters to start with. A general theory of the clockwork mechanism valid for scalars, fermions, gauge bosons, and gravitons has recently been proposed in Ref. [3]. Let us briefly show how it operates for scalars and consider a theory endowed with a global U(1)^{N+1} spontaneously broken at the scale f. The degrees of freedom at energies smaller than f are the N + 1 Goldstone bosons π_i,

\[ U_i(x) = e^{i\pi_i(x)/f}, \quad i = 0, \cdots, N. \quad (1.1) \]

The π_i fields transform by a phase under the corresponding Abelian factor U(1)_i. Suppose now that the low-energy description of the theory is given by a Lagrangian whose explicit nearest-neighbour mass terms, controlled by a parameter q, softly break the symmetry U(1)^{N+1} down to a single U(1). Diagonalizing the resulting square mass matrix with a real orthogonal matrix O, in the mass eigenstate basis φ_i (i = 0, ..., N) the eigenvalues are given by

\[ m^2_{\phi_0} = 0, \quad m^2_{\phi_k} = \lambda_k m^2, \quad \lambda_k \equiv q^2 + 1 - 2q\cos\frac{k\pi}{N+1}, \quad k = 1, \cdots, N. \quad (1.5) \]

The elements of the rotation matrix O are given by

\[ O_{ik} = \sqrt{\frac{2}{(N+1)\lambda_k}}\left[ q\sin\frac{ik\pi}{N+1} - \sin\frac{(i+1)k\pi}{N+1} \right], \quad i = 0, \ldots, N; \ k = 1, \cdots, N, \quad (1.6) \]

and O_{i0} ∝ q^{-i}. (1.7) The key point of the clockwork mechanism is now that the massless eigenstate φ_0 is coupled to the rest of the fields in the theory with a coupling which is suppressed by O_{i0} ∼ q^{-i}. In particular, if the rest of the degrees of freedom in the matter sector couple only to the N-th pion π_N, the state φ_0 couples to them with a suppressed coupling scaling like q^{-N}. If N is large and q > 1, the coupling is efficiently suppressed. In the case in which the number of copies is very large, it has been pointed out that there exists also a five-dimensional continuum limit of the clockwork mechanism [3]. It is achieved by introducing a dilaton field S in a five-dimensional braneworld with the fifth dimension compactified on S^1/Z_2. The corresponding action contains the bulk gravity and dilaton terms together with brane terms, where k^2 characterizes the negative vacuum energy in the bulk, R is the radius of the fifth dimension, M_5 is the fundamental scale in the bulk, and V_0 and V_π are tensions on the branes satisfying the relation V_0 = -V_π = -4kM_5^3. The corresponding metric is found to be

\[ ds^2 = e^{\frac{4}{3}k|y|}\left(\eta_{\mu\nu}dx^\mu dx^\nu + dy^2\right), \]

with η_{µν} the flat Minkowski metric. In this picture hierarchies are produced on the y = πR brane, and the discrete suppression factor q^{-N} is replaced in the continuum with e^{-kπR}.
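As a quick numerical check of these statements, the sketch below diagonalizes the clockwork mass matrix and exhibits the q^{-i} profile of the zero mode. The tridiagonal matrix is the standard scalar clockwork form, with each link contributing m²(π_i − qπ_{i+1})²; it is written here as an assumption consistent with the eigenvalues quoted above, since the explicit matrix is not reproduced in this text.

```python
import numpy as np

# Build the clockwork mass matrix (in units of m^2): each of the N links
# contributes (pi_i - q pi_{i+1})^2, giving a tridiagonal, positive
# semi-definite matrix with exactly one zero eigenvalue.
def clockwork_mass_matrix(N, q):
    M2 = np.zeros((N + 1, N + 1))
    for i in range(N):
        M2[i, i] += 1.0
        M2[i + 1, i + 1] += q**2
        M2[i, i + 1] -= q
        M2[i + 1, i] -= q
    return M2

N, q = 10, 2.0
evals, evecs = np.linalg.eigh(clockwork_mass_matrix(N, q))
print(evals[0])                    # ~0: the massless eigenstate phi_0
zero_mode = evecs[:, 0]
print(zero_mode / zero_mode[0])    # components fall off like q^{-i}
```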
The goal of this note is to show that the clockwork mechanism can be adopted in inflationary theories to efficiently generate flat inflaton potentials sustaining a de Sitter phase, as well as small masses and couplings matching the small amount of observed scalar perturbations. In section 2 we present various examples of this ability from the four-dimensional discrete perspective. In section 3 we study the phenomenon of inflation from the five-dimensional continuum perspective and show that the amount of clockworking producing small masses/couplings depends on the Hubble rate during inflation. Maybe more interestingly, in section 4 we show that, within the clockwork set-up, the clockwork charges of the gravitons are site-dependent and the amount of tensor modes generated during inflation is suppressed with respect to the standard scenario, due to the fact that tensor modes are intrinsically bulk degrees of freedom. Section 5 contains our conclusions. Clockwork inflation: the four-dimensional discrete perspective. In this section we show how to exploit the clockwork theory in inflation. The clockwork set-up is suitable for obtaining either small masses (compared to the fundamental mass scale of the problem) or small couplings (compared to couplings of order unity). This is exactly what is needed during inflation in order to get the right amount of density fluctuations. The comoving curvature perturbation ζ in the flat gauge is [4]

\[ \zeta = \left(\frac{H^2}{2\pi\dot\phi}\right)_* = \left(\frac{H}{2\pi\sqrt{2\epsilon}\,M_{\rm pl}}\right)_*, \]

where the subscript * indicates that quantities should be computed at the epoch of Hubble-radius exit for the comoving scale k = aH, φ is the inflaton field, H is the Hubble rate during inflation, M_pl is the reduced Planck mass, and one has to remember that observable scales in our current universe correspond to the last 60 e-folds or so before the end of inflation. Dots indicate differentiation with respect to time, and ε = -Ḣ/H^2 is one of the slow-roll parameters. The observed perturbations are matched if ζ ∼ 10^{-5}. (2.3) Large field models of inflation. To illustrate the advantages of the clockwork set-up in producing flat potentials during inflation, let us consider the class of large field models of inflation. The simplest model of inflation is given by a linear potential. The slow-roll conditions are attained when φ ≫ M_pl, and the density perturbations are given by [4] ζ ∼ (m/M_pl)^{3/2}, matched for m ∼ 10^{15} GeV ≪ M_pl. In the clockwork scenario, we can assume that there are N + 1 copies of the inflaton fields, with clockwork mass terms at the scale M_1 plus a linear term at the scale M_2, where M_2 ≪ M_1 (say, smaller by a factor of 10), but both close to the fundamental scale. The first piece of the potential is invariant under a shift symmetry, which is broken by the last term in the potential. Upon diagonalization of the mass matrix (which is not altered by the presence of the linear term) and going to energies (to be identified with the Hubble rate H) much smaller than M_2, the lightest mass eigenstate φ_0 is left with a linear potential whose slope is clockwork-suppressed. Taking, for instance, M_2 ∼ 10^{-1} M_pl and q = 2, we need N ∼ 20 copies to match the observed level of perturbations. We should also note that possible one-loop contributions from the matter sector to the inflaton potential are suppressed, at least by a factor of q^{-N}/16π^2, and therefore such contributions are fully under control. Another issue for which the clockwork can be useful in large field models is that of super-planckian field excursions. This was already noticed in Ref. [2].
If the scalar perturbations are ascribable to only one scalar degree of freedom, then Δφ/M_pl ≳ (r/2 × 10^{-2})^{1/2}, where r is the so-called tensor-to-scalar ratio. A future detection of gravitational waves therefore requires, in general, a variation of the inflaton field of the order of the Planck scale [5]. This would pose a problem, as slow-roll models of inflation disregard the possible presence of higher-order operators with powers of (φ/M_pl). However, super-Planckian field excursions can be mimicked, while preserving the regime of renormalizable four-dimensional field theory, by the clockwork mechanism. Indeed, the slow-roll of the inflaton field over super-planckian field values corresponds to a clockwork of phase rotations of the N + 1 copies of the fields π_i with a U(1)^{N+1} global symmetry, whose effective decay constant is amplified with respect to the original one by a factor q^N. Hybrid models of inflation. To illustrate the advantage of the clockwork mechanism in terms of efficiently producing small couplings during inflation, let us consider the hybrid model of inflation [6,7] with N + 1 copies of the fields π_i and an extra field Φ (2.10). The dots represent one or more additional terms, which give the potential a minimum at which it vanishes but play no role during inflation. By performing the standard diagonalization of the clockwork mechanism for the mass-squared term in Eq. (2.10) and working at energies much smaller than the Hubble rate H, the potential (2.10) for the lightest eigenstate φ_0 is reduced to a form whose coupling to Φ is clockwork-suppressed. For suitable choices of the parameters, inflation takes place with the field Φ held at its instantaneous minimum, leading to a potential (2.12). Imposing the condition (2.3) [4], one can explain a small coupling λ_0 with the clockwork mechanism even though all the other mass scales in the problem are of the order of the Planck scale. Small field inflation. In small field models of inflation the problem is to have a flat enough potential close to the origin, V(φ) = V_0 - m^2 φ^2/2 + ···. (2.14) As the spectral index of the scalar perturbations is given by [4] 1 - n = 2M_pl^2 m^2/V_0 ∼ 0.04, one needs m^2 ∼ 10^{-1} H^2 to be in agreement with the observations. To produce a potential suitable for small field inflation, in the clockwork scenario it is enough to couple the pion π_N to fermions charged under some strong group. Below the confinement scale Λ, the lightest mode acquires a potential of the form used in natural inflation [8], with effective decay constant q^N f, in such a way that, expanding around the maximum of the potential, we obtain an effective mass m^2 ∼ Λ^4/(q^N f)^2, and the condition m^2 ≲ H^2 requires Λ^4/(q^N f)^2 ≲ H^2. This condition can easily be satisfied, even if f ≪ M_pl, for moderate values of N. Furthermore, the normalization of the density perturbations (2.3) imposes a relation among Λ, q^N f and N_e (2.18), where N_e is the number of e-folds till the end of inflation and φ_e is the value of the inflaton field φ_0 when inflation ends. Since the typical scale for φ_e is q^N f ≫ M_pl, one sees that the clockwork can allow a sizeable Λ. Starobinsky inflation. Another illustrative example of the efficiency of the clockwork mechanism is provided by the so-called Starobinsky model of inflation [12]. Let us consider N + 1 copies of GR, where a quadratic curvature term, whose strength is parametrized by a dimensionless parameter α, has been added for the N-th site. It is known that in (R + R^2)-theory there is a massive scalar mode on top of the gravitons, which can be uncovered with the usual methods, leading to the action (2.20). For this theory, as shown in Ref.
[3], there is a massless graviton with a corresponding Planck mass, which can be computed in the large-N limit. The action for the massless graviton and the scalar then takes the standard Starobinsky form. During inflation φ_N takes large values and the dynamics is dominated by the vacuum energy. In order to get the correct normalization (2.3), one needs q^{-2N} ≈ 10^{-5} for α = O(1). For instance, for q = 2, the moderate value N = 8 is required. In addition, the tensor-to-scalar ratio is r = 12q^{-2N}/N_e^2. Hence, the tensor modes are suppressed by an extra factor of q^{-2N} = 10^{-5}, leaving no room for tensor modes in the clockwork Starobinsky inflation model. The clockwork and the generation of perturbations from light fields other than the inflaton. Even though the inflationary paradigm is by itself quite elegant and simple, the mechanism giving rise to the adiabatic cosmological perturbations is far from being established. It is fair to say that we do not know at present what the source of the scalar perturbations during inflation is: the inflaton field itself or some other field. The total curvature perturbation ζ might not be constant (in time) on super-Hubble scales, changing on arbitrarily large scales due to a non-adiabatic pressure perturbation, which may be due to extra scalar degrees of freedom. For instance, in the curvaton mechanism [13,14] the curvature perturbation is generated from an initial isocurvature perturbation associated with the quantum fluctuations of a light scalar field σ, the curvaton, whose energy density is not dominant during inflation. The curvaton isocurvature perturbation becomes the adiabatic one once the curvaton decays into radiation after the end of inflation. During inflation a flat spectrum of fluctuations δσ ∼ H/2π is produced in the curvaton field. After inflation, the curvaton field starts oscillating during some radiation-dominated era, causing its relative energy density to increase and converting the initial isocurvature into the curvature perturbation ζ. The curvaton mechanism works as long as the curvaton can be quantum mechanically excited during inflation, that is, as long as its mass is smaller than the Hubble rate fixed by the potential of the inflaton field φ. The clockwork provides a possible solution to this problem. Suppose that there are N + 1 copies of curvatons, let us call them again π_i, with a potential V(π_0, ···, π_N) given by clockwork mass terms at the scale M^2. This potential has the usual shift symmetry π_i → π_i + c/q^i. As in the construction of Ref. [2], this shift symmetry is a manifestation of the fact that the π_i's are indeed pseudo Nambu-Goldstone bosons of a U(1)^{N+1} global symmetry. Since gravity is expected to break such a global symmetry, one expects corrections to the mass matrix weighted by O(1) coefficients c_ij, where we have supposed that the vacuum energy is located at the N-th site. Upon diagonalizing the mass matrix (2.26), one finds that the lightest eigenstate σ_0 receives corrections to its mass squared suppressed at least by 1/q^N ≪ 1, which is enough to obtain a light curvaton during inflation. Clockwork inflation: the five-dimensional continuum perspective. In this section we investigate the clockwork inflationary scenario from the continuum-limit point of view. We imagine that the vacuum energy driving inflation is located on one of the two branes and, as a result, each fifth-dimensional section is inflating with constant Hubble rate. Let us start with the five-dimensional action of Refs. [3,15], in which L_bulk is a bulk matter action, L_brane is the brane action, and [K] = K^+ - K^- is the jump of the trace of the extrinsic curvature across each brane; the four-dimensional sections are described by a de Sitter metric.
The equations of motion follow, and in addition the Israel matching conditions along the branes Σ_a, located at y = y_a with tensions L^{(a)}_brane = V_a, give junction conditions for the metric (3.7), where σ_a = σ(y_a) and S_a = S(y_a). The scalar field equation (3.11) is not independent, as it is connected to the Bianchi identity. So only Eqs. (3.9) and (3.10) are independent; eliminating S', we get the system of equations (3.13) and (3.14). When H = 0, the solution to Eqs. (3.13) and (3.14) is the linear dilaton solution

\[ \sigma = \sigma_0 = \frac{2k}{3}y, \qquad S = S_0 = 2ky, \quad (3.15) \]

where the boundary condition σ_0(0) = S_0(0) = 0 has been assumed. However, the solution of Eqs. (3.13) and (3.14) for non-zero H is not easy to find. Nevertheless, by using Eq. (3.13) we can express the scalar S in terms of the warp factor (3.16). By taking the derivative of Eq. (3.14) and comparing with Eq. (3.13), we can completely decouple the scalar S, and we find that σ satisfies a third-order equation (3.17). Although we were not able to find an exact solution to Eq. (3.17), we can try to find a solution perturbatively in H^2. For this, we may write

\[ \sigma \approx \sigma_0 + H^2 \sigma_1 \quad (3.18) \]

and treat σ_1 as a first-order perturbation. We then find that σ_1 satisfies Eq. (3.19), whose solution involves the integration constants C_{1,2,3}; the corresponding S is given in Eq. (3.21). It is straightforward to verify that σ = σ_0 + H^2 σ_1 and the S of Eq. (3.21) indeed satisfy the equations (3.13) and (3.14) to leading order in H^2. With a compact fifth dimension and two branes at y = 0 and y = πR with the brane action above, we find from Eq. (3.12) that the discontinuities of σ and S should satisfy the junction conditions (3.23). The solutions that satisfy (3.23) have C_1 = 0 and, if we take C_2 = 0 and C_3 = 1/2k so that σ(0) = S(0) = 0, we obtain the explicit perturbed solution. Note that, writing S_π = S(πR), Eq. (3.25) fixes the radion field R in terms of the boundary value of the dilaton S at y = πR. The latter can be determined by a brane potential for S, for example of the form (3.27). At leading order in H^2/k^2, the extra vacuum energy V driving inflation on the y = πR brane 2 obeys 3H^2 M_pl^2 ≃ V (3.30), which is just the standard Friedmann law on the brane, as expected with a stabilized dilaton to lowest order in H^2. In (3.30) we have defined the four-dimensional Planck mass in flat space-time, which coincides with the one found by dimensional reduction of the five-dimensional action reported in [3]. If the vacuum energy driving inflation comes from the y = πR brane, the scalar perturbations have the same behavior, at leading order in the slow-roll parameters, as in the four-dimensional case [16], and we expect that no detectable signature remains from the non-trivial geometry in the bulk 3. The reason is the following. Differently from the tensor modes, which are genuinely five-dimensional free fields quantized in the bulk, scalar metric perturbations are generated by the brane scalar field, which is quantized on the brane and is four-dimensional in all regimes. The coupling between the inflaton field and the metric concerns the long-wavelength limit, for which the metric evolves as in the four-dimensional theory. Moreover, such a coupling is localized on the brane, and no signature remains from the warped geometry in the bulk. Corrections to this result are of order -ḢR^2 = ε(HR)^2, as metric perturbations probe the extra dimension on time scales of order -H/Ḣ. The same logic is not valid for tensor modes, as we now proceed to discuss.
Tensor modes in clockwork inflation. In this section we study the behaviour of the tensor modes during a de Sitter stage in the clockwork set-up. The goal is to show that tensor modes are suppressed in this scenario with respect to the standard case, the reason being that the tensor modes feel the bulk at full strength and that during the de Sitter stage the bulk is more warped. We start from the five-dimensional continuum perspective. The advantage of using tensor modes to probe the clockwork mechanism is that they depend only on the geometry, not on the microscopic models of inflation and of stabilization of the fifth dimension. Tensor modes: the five-dimensional continuum perspective. As shown in Refs. [16,18], a massless tensor mode is produced from inflation in braneworld scenarios with amplitude H/M_pl. The point is that the effective Planck mass during inflation for a curved de Sitter braneworld differs from that of the flat brane at low energies, due to a dependence on the Hubble rate H. Thus, the amplitude of tensor modes from inflation in a clockwork scenario is in general different from that of a standard period of inflation without clockwork. Let us indeed consider the equation of motion of the graviton field in the continuum clockwork scenario, starting from the factorized metric (3.7). By decomposing the five-dimensional tensor mode into Kaluza-Klein profiles h_m(y) multiplying transverse-traceless tensors h^{(m)}_{µν}(x^ρ), where m is the eigenvalue corresponding to the Kaluza-Klein mode, one can show that the eigenvalue problem can be written as [16,18]

\[ -D_- D_+ h_m(y) = m^2 h_m(y), \]

where the first-order operators D_∓ involve the warp factor and are modified from the flat case. It is easy to see that the warp factor e^{σ(y)} is convex and that the effective Planck mass M_pl^{dS} is an increasing function of H^2, see Fig. 1. As the amount of tensor modes is proportional to (H/M_pl^{dS})^2, this means that in the clockwork scenario the amount of tensor modes is reduced with respect to the traditional case. Is this a negative point? Not necessarily so. There are many models of inflation which have been ruled out by the recent Planck data [19] because they produce too large an amount of tensor modes. Examples are the chaotic large-field model of inflation λφ^4 [20] and power-law inflation [21]. By embedding these models into the clockwork theory, they become allowed again by current data. Tensor modes: the four-dimensional discrete perspective. Let us now analyze the clockwork graviton in de Sitter space from the four-dimensional perspective. In particular, we assume N + 1 copies of metrics g^i_{µν}, describing N + 1 copies of general relativity with their associated N + 1 massless gravitons and Planck masses M_i. The gravitons h^i_{µν} are fluctuations around the de Sitter metric g^{dS}_{µν}, such that g^i_{µν} = g^{dS}_{µν} + h^i_{µν}/M_i^2. Clockworking will break the N + 1 diffeomorphisms down to a single diffeomorphism invariance, corresponding to a single massless graviton. The clockwork dynamics is then described by the action (4.6), consisting of N + 1 linearized graviton kinetic terms plus nearest-neighbour mass terms, where h^{µν}_i is transverse-traceless: ∇_µ h^{µν}_i = 0, γ_{µν} h^{µν}_i = 0. For q_{i+1} = 0, the action above describes N + 1 massive gravitons on de Sitter in transverse-traceless gauge [22,23]. Note that we have allowed for different q's in the mass term (q_1 ≠ q_2 ≠ ··· ≠ q_N), since the de Sitter metric is not flat. We will motivate this choice by deconstructing the five-dimensional space, where we will see that it is in general necessary when the background is not flat. The clockwork theory described by the action (4.6) is now invariant under a residual transformation (4.7) generated by a vector ξ_µ in de Sitter.
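A small numerical check makes the role of the site-dependent charges concrete: writing the mass term as links (h_i − q_{i+1}h_{i+1})² gives M² = CᵀC with one more site than links, so a zero eigenvalue, and hence a massless graviton, survives for any choice of the q_i. The charge values below are illustrative, not those of Eq. (4.14).

```python
import numpy as np

# Graviton mass matrix from N link terms (h_i - q_{i+1} h_{i+1})^2 over
# N+1 sites: M^2 = C^T C has rank at most N, hence one exact zero mode.
def graviton_mass_matrix(qs):
    N = len(qs)                         # N links, N + 1 sites
    C = np.zeros((N, N + 1))
    for i, q in enumerate(qs):
        C[i, i], C[i, i + 1] = 1.0, -q
    return C.T @ C

for qs in ([2.0] * 8,                              # flat case: equal charges
           [1.5, 1.8, 2.2, 2.9, 3.1, 4.0, 4.4, 5.0]):  # "de Sitter"-like case
    evals = np.linalg.eigvalsh(graviton_mass_matrix(qs))
    print(min(abs(evals)))              # ~0 in both cases: a massless graviton
```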
Therefore, we expect a massless graviton in the spectrum, whose existence can be verified by diagonalizing the mass matrix M^2. It can easily be verified that M^2 has a zero eigenvalue, corresponding exactly to the symmetry of Eq. (4.7). The action (4.6) can then be written in diagonal form, where m^2_i are the non-zero eigenvalues of M^2. We see that the theory describes a massless graviton h^0_{µν} and N massive spin-2 states h^i_{µν} (i = 1, ..., N). Let us now deconstruct the clockwork direction. The graviton fluctuations around a four-dimensional de Sitter background are described by the five-dimensional quadratic action, which after deconstruction (and after redefining h_{µν} → e^{-(3/2)σ} h_{µν}) turns into (4.12); simple manipulations lead to the expression (4.13). For the warp factor σ(y) given in Eq. (4.4), the deconstruction is carried out in the so-called 0-frame of the discrete clockwork, where the SM resides at the first site [3]. To go to the N-frame used here, as well as in Ref. [3], where the Standard Model (SM) is localized at the N-th site of the discrete clockwork, we should change σ → -σ, so that q_i → 1/q_i. In this case we find the site-dependent charges q_{i+1} of Eq. (4.14). Hence, the charge increases in going from the first site to the N-th site, and (4.13) can be written in the form (4.15). This is identical to Eq. (4.6), which is thereby motivated as the deconstructed action along the clockwork direction. When the y = const. sections are flat, σ is a linear function of y, as in Eq. (3.15), and hence we get q_1 = q_2 = ··· = q_N = e^{ka}. However, when the y = const. sections are not flat, and in particular for de Sitter sections as in the present set-up, we have q_1 ≠ q_2 ≠ ··· ≠ q_N, and the charge at the N-th site is larger in de Sitter than it would be in Minkowski spacetime, explaining why the Planck scale is larger during inflation. Conclusions. The clockwork is an ingenious mechanism to generate large mass/coupling hierarchies in theories where no small parameters are present to start with. In this paper we have offered a handful of examples of how the clockwork set-up may help to construct inflationary models with no fine-tuning. Interestingly, clockwork inflation predicts an amount of tensor modes which is smaller than in standard scenarios with no clockwork. While this result is bad news for current and future efforts to detect tensor modes in the B-mode polarization of the CMB, it is certainly good news for inflation model builders, as many models of inflation prematurely ruled out by Planck observations for their excessive tensor-mode power spectrum are now back in business.
Incidence and risk factors of micronutrient deficiency in patients with IBD and intestinal Behçet's disease: folate, vitamin B12, 25-OH-vitamin D, and ferritin. Background: Patients with inflammatory bowel disease (IBD) and intestinal Behçet's disease (BD) are vulnerable to micronutrient deficiencies due to diarrhea-related gastrointestinal loss and poor dietary intake caused by disease-related anorexia. However, few studies have investigated the incidence of and risk factors for micronutrient deficiency. Methods: We retrospectively analyzed 205 patients with IBD who underwent micronutrient examination, including folate, vitamin B12, 25-OH-vitamin D, and/or ferritin level quantification, with follow-up blood tests conducted 6 months later. Results: Eighty patients (39.0%) who were deficient in any of the four micronutrients were classified as the deficiency group, and the remaining 125 (61.0%) were classified as the non-deficiency group. Compared with the non-deficiency group, the deficiency group was much younger, included more Crohn's disease (CD) patients and more patients with a history of bowel operation, and showed significantly less 5-aminosalicylic acid usage. Multivariate analysis revealed that CD and bowel operation were significant independent factors associated with micronutrient deficiency. Conclusions: The incidence of micronutrient deficiency was high (39.0%). Factors including CD, bowel operation, and younger age were associated with higher risks of deficiency. Therefore, patients with IBD, especially young patients with CD who have undergone bowel resection surgery, need more attention paid to micronutrition. Background: Inflammatory bowel disease (IBD), including Crohn's disease (CD) and ulcerative colitis (UC), is a chronic disease of the gastrointestinal (GI) tract with an unclear etiology, leading to rectal bleeding, abdominal pain, and weight loss, with repeated cycles of relapse and remission [1,2]. CD is an IBD that can affect the entire GI tract from mouth to anus, though it mainly involves the terminal ileum and colon, and it often includes both intestinal and extra-intestinal symptoms [2,3]. On the other hand, UC is mostly restricted to the colon and usually involves continuous lesions of the intestinal mucosa [2]. Most patients with IBD, especially those with CD, suffer from weight loss and malnutrition during the course of the disease [4,5], which may be related to a lack of oral intake, increased nutrient requirements, increased GI losses, and intestinal resection or bypass surgery [4,6]. In addition, intestinal Behçet's disease (BD), the intestinal involvement of BD, a chronic relapsing multisystem vasculitis disorder [7], is similar to CD with respect to clinical course, symptoms, and treatment modalities [8]. Therefore, there is increasing interest in patient management and nutritional status in intestinal BD as well as in IBD. Nutrients can be classified as either macronutrients or micronutrients. Macronutrients are energy-providing nutrients, including carbohydrates, lipids, and proteins. Malnutrition can occur in cases of active, severe IBD, when macronutrients are not consumed or absorbed in sufficient quantities.
Micronutrients, including minerals, vitamins, and trace elements, are often deficient even in patients with mild disease activity or in remission [9,10]. According to the European Society for Clinical Nutrition and Metabolism guidelines, patients with IBD should be regularly checked for micronutrient deficiencies, and specific deficits should be adequately corrected [11]. Several studies have reported vitamin and mineral deficiencies in patients with IBD; these studies assessed their symptoms and effects on quality of life and observed widely variable clinical significance [12-14]. Vitamins are organic compounds and are classified as either water-soluble, including thiamine (B1), riboflavin (B2), nicotinic acid/niacin (B3), pyridoxine (B6), cobalamin (B12), biotin, pantothenic acid, folic acid, and vitamin C (ascorbic acid), or fat-soluble, including vitamins A, D, E, and K [9]. Dietary minerals, such as calcium, phosphate, potassium, magnesium, and iron, are important inorganic components that work as cofactors and catalysts in maintaining cell structure and enzymatic processes. Trace elements, including zinc, copper, and selenium, are necessary for the function of enzymes in the body [9,10]. Clinically relevant micronutrient deficiencies that occur over the course of IBD progression include anemia (caused by iron, folate, and vitamin B12 deficiencies), bone mineral density loss (due to insufficient calcium, vitamin D, magnesium, and vitamin K levels), increased risk of thrombosis (caused by folate, vitamin B6, and B12 deficiencies), wound healing deficits (due to deficiencies of vitamins A and C, and zinc), and carcinogenesis (related to folate, vitamin D, and calcium deficiency) [9]. Among these, anemia is the most common complication, affecting up to 70% of patients with IBD, including UC and CD, and intestinal BD [15,16]. Iron deficiency is the most common cause of anemia, occurring in 30-90% of patients with IBD, but folate and vitamin B12 deficiencies are also highly prevalent in these patients, especially in those with CD, compared to the general population [17-19]. In addition, bone density is an important factor, affecting not only the quality of life of IBD patients but also the disease course of IBD, as it is highly related to treatment modalities such as corticosteroids as well as to micronutrients [20-22]. However, studies assessing micronutrient concentrations in patients with IBD are scarce, and to our knowledge there are currently no studies on micronutrients in patients with intestinal BD. Therefore, we aimed to investigate the prevalence of and risk factors for micronutrient deficiency in patients with IBD and intestinal BD. Patients: We conducted a retrospective study of patients with IBD and intestinal BD who underwent laboratory tests to quantify micronutrients such as iron, folate, vitamin B12, and 25-OH-vitamin D from March 2016 to March 2017 at Severance Hospital, Yonsei University College of Medicine, Seoul, Korea. Out of 3,695 patients, a total of 205 with IBD and intestinal BD who underwent micronutrient testing twice were enrolled retrospectively. A total of 3,490 patients were excluded from our study for the following reasons: (1) patients did not undergo micronutrient testing during the study period (n = 3,047); (2) patients underwent micronutrient blood testing only once (n = 426); (3) data were not available or patients were lost to follow-up (n = 10); and (4) patients were diagnosed with other diseases such as cancer or non-specific inflammation after evaluation (n = 7) (Fig.
1). We included patients who underwent micronutrient testing at least twice, at baseline and at follow-up after 6 months, for one or more of the following four micronutrients: folate (n = 127), vitamin B12 (n = 128), 25-OH-vitamin D (n = 184), and ferritin (n = 97). In addition, we divided patients into two groups at baseline: a group with a micronutrient deficiency (n = 80) and a group without micronutrient deficiency (n = 125). This study was performed in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the institutional review board of Severance Hospital. Baseline characteristics: Variables of baseline characteristics included demographic information, route of laboratory testing (outpatient or inpatient), medications, supplements, past bowel surgery, and underlying diseases. In our study, we defined bowel surgery as surgery performed for IBD or intestinal BD only, and excluded all other causes, including diverticulum and foreign body. Statistical analysis: Variables are expressed as either median (interquartile range, IQR) or n (%). Baseline characteristics were compared using independent Student's t-tests (or Mann-Whitney tests) for continuous variables, and χ² tests (or Fisher's exact tests) for categorical variables, as appropriate. Independent predictors of micronutrient deficiency were analyzed using logistic regression analysis. Odds ratios (ORs) and the corresponding 95% confidence intervals (CIs) were calculated. Data analysis was performed using Statistical Package for the Social Sciences (SPSS) software (version 20.0; SPSS Inc., Armonk, NY, USA). A P-value < 0.05 was considered statistically significant. The median age was 34 years [IQR], and 57.6% of the enrolled patients were male. In addition, most patients underwent laboratory testing in an outpatient clinic setting rather than an inpatient setting (93.2% vs. 6.8%; P = 0.150). There were no significant differences for most of the types of medications used by patients, but there were significantly more users of 5-aminosalicylic acid (5-ASA) in the non-deficiency group (41.3% vs. 64.8%; P = 0.001). Micronutrient deficiency was more frequent in patients who had undergone previous bowel surgery for IBD or intestinal BD (Table 1). Relative risk of micronutrient deficiency: Based on the univariate analysis, variables associated with micronutrient deficiency were entered into a multivariate logistic regression analysis (Table 2). Changes in micronutrient levels: Patients with UC and CD had higher rates of ferritin deficiency than of the other micronutrients assessed, including vitamin B12, folate, and 25-OH-vitamin D, while in those with intestinal BD the deficiency rates did not vary among the micronutrients assessed. However, rates of vitamin B12 deficiency significantly differed among those with UC, CD, and intestinal BD (0% vs. 13.7% vs. 18.2%, respectively; P = 0.038) (Fig. 2). We assessed changes from baseline at the first 6-month follow-up in terms of folate, vitamin B12, 25-OH-vitamin D, and ferritin levels in this study population.
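The ORs and CIs reported in this study come from SPSS; for readers reproducing this kind of analysis in code, the sketch below shows the equivalent computation with Python's statsmodels on a toy dataset (all covariates and values are illustrative, not the study data).

```python
import numpy as np
import statsmodels.api as sm

# Toy multivariable logistic model of the kind used for Table 2.
rng = np.random.default_rng(0)
n = 205
X = np.column_stack([
    rng.integers(0, 2, n),     # CD diagnosis (1) vs. other (0) -- illustrative
    rng.integers(0, 2, n),     # prior bowel operation -- illustrative
    rng.normal(34, 12, n),     # age in years -- illustrative
])
y = rng.integers(0, 2, n)      # micronutrient deficiency (toy outcome)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)     # OR = exp(coefficient)
ci = np.exp(model.conf_int())          # 95% CI for each OR
print(np.column_stack([odds_ratios, ci]))
```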
Initial laboratory tests in patients with UC showed ferritin and 25-OH-vitamin D deficiency, which had improved at follow-up testing after supplementation. In patients with CD, ferritin, vitamin B12, 25-OH-vitamin D, and folic acid deficiencies occurred, with ferritin deficiency being the most frequent. Patients with intestinal BD had more deficiencies in folic acid and vitamin B12 than in ferritin and 25-OH-vitamin D. In all cases, deficiencies were treated at the time of discovery, and after 6 months all micronutrient deficiencies were reduced (Fig. 3). Discussion: This study found that the rate of micronutrient deficiency was high (39.0%) in patients with IBD and intestinal BD, and 83% of the deficiency group were CD patients. In IBD patients with CD or UC, ferritin deficiency was prominent, as was 25-OH-vitamin D deficiency. However, patients with intestinal BD had more folic acid and vitamin B12 deficiencies than those with IBD, who had more ferritin and 25-OH-vitamin D deficiencies. In addition, young age, CD, and intestinal surgery were significantly associated with micronutrient deficiency. Hwang et al. reviewed studies on micronutrient deficiency in patients with IBD and summarized the prevalence of these deficiencies: among IBD patients, deficiencies of water-soluble vitamins have been reported in up to 11-78% [19,23-25], of fat-soluble vitamins in 22-90% [22,23,25-28], and of macrominerals in 36-90% [15,23,24,29-31]. In most studies, patients with CD showed a stronger association with vitamin or mineral deficiencies than those with UC. Our study also showed a high prevalence (46.2%, 66 of 143 patients) of micronutrient deficiency in patients with CD. Most vitamins and minerals are absorbed in the proximal small intestine, while vitamin B12 is absorbed in the terminal ileum [9]. The distal ileum is also where bile acid absorption occurs, which is important for the absorption of fat and fat-soluble vitamins [9]. Therefore, micronutrient deficiency should be watched for carefully in patients with CD, which preferentially involves the small intestine and frequently affects the terminal ileum. In addition, caution should also be exercised in cases of intestinal BD, because it often manifests as oval, deep ulcers in the ileocecal area [7]. Accordingly, patients with intestinal BD also showed micronutrient deficiencies (38.46%, 5 of 13), especially vitamin B12 deficiency; there was a significant difference among patients with UC, CD, and intestinal BD, with the highest prevalence in patients with intestinal BD (UC, 0% vs. CD, 13.7% vs. intestinal BD, 18.2%; P = 0.038) (Fig. 2b). The most likely explanation lies in the intestinal location affected by the disease, which may account for the high cobalamin (vitamin B12) deficiency rate in intestinal BD. Therefore, 156 patients with CD and intestinal BD, which have similar sites of involvement, were analyzed separately, but there was no significant difference in micronutrient deficiency between CD and BD patients. Nevertheless, our results showed that micronutrient deficiency was as high in patients with intestinal BD as in patients with CD. Further research is needed in a larger study group. In patients with IBD, surgery is a significant factor affecting the likelihood of micronutrient deficiency. There have been reports of significantly lower vitamin D levels when CD affects the small intestine or when the small intestine is excised [32].
Active Crohn's ileitis and small bowel resection are reported to be risk factors for folic acid deficiency, as they can lead to malabsorption [19,25]. In addition, Battat et al. reported that ileal resection of more than 30 cm was a risk factor associated with cobalamin deficiency in patients with CD [33]. When we further analyzed the risk factors for cobalamin deficiency in our study population, disease-related surgical operation was found to be a significant associated risk factor (OR, 5.513; CI 1.829-16.613; P = 0.002; data not shown). Our study is the first report on micronutrient deficiency in patients with intestinal BD and IBD. However, the study has several limitations. First, it is a retrospective cross-sectional study with potential selection bias. Because we examined patients tested over a single 1-year period, we could not observe cumulative effects over time, i.e., what the deficiencies had been before the start of the study or whether deficiencies recurred after it ended, even though they had been treated. This also limited our assessment of patients with vitamin D deficiency. Even though our study used a lower vitamin D deficiency criterion (< 10 ng/mL) than other studies, many patients were already taking supplements (62.0%), which made a clear comparison difficult. In addition, a single high-risk patient could be deficient in several micronutrients at once. Furthermore, deficiency of certain micronutrients such as vitamin B12 can depend on disease location: for example, CD and intestinal BD patients are at high risk of vitamin B12 deficiency, while UC patients are relatively less likely to be affected. Therefore, the possibility of selection bias is a limitation. However, our approach is still meaningful in that it analyzes the overall risk of micronutrient deficiency. Second, we could not examine the relationship with disease activity. In particular, data on ESR (erythrocyte sedimentation rate) and CRP (C-reactive protein) levels, medical records of disease activity (CDAI [Crohn's disease activity index], Mayo score, DAIBD [Disease Activity Index for Intestinal Behcet's Disease]), and disease extent (Montreal classification) were insufficient for analysis. Most patients were seen in the outpatient clinic (93.2%), so we could not obtain accurate disease information based solely on the medical records. For example, 55.6% of the included patients were being treated with Pentasa (5-ASA) and 5.9% with steroid medications, suggesting that most patients were not in the acute phase of disease. Thus, it may have been difficult to clearly compare differences in micronutrient deficiencies among those treated with various medications. Nevertheless, micronutrient deficiency was found to be high even in this outpatient population, in whom disease activity was not severe. Conclusions: In conclusion, in patients with IBD and intestinal BD, the incidence of micronutrient deficiency was high (39.0%). In addition, CD, a history of intestinal surgery, and young age were factors significantly associated with deficiency. Therefore, those with IBD, especially young patients with CD who have undergone bowel resection, should be observed more carefully to assess the need for supplementation to treat micronutrient deficiencies.
Effects of Excision of a Mass Lesion of the Precentral Region of the Left Hemisphere on Disturbances of Graphomotor Output. In the present study, the effect of neurosurgery on the graphomotor output of a right-handed female patient with a mass lesion of the precentral region of the left frontal lobe is reported. A digitizing tablet was used for the examination of handwriting movements. Preoperatively, the patient showed longer movement times than healthy subjects and than patients with lesions of the left frontal lobe without involvement of the precentral region. Furthermore, the analysis of kinematic data revealed a severe dysfluency of her handwriting. Postoperatively, a significant improvement in writing time and handwriting fluency was observed. Since the integrity of handwriting plays an important role in everyday functioning, disturbances of handwriting movements should be objectified and reassessed in follow-up assessments using new techniques such as digitizing tablets. Introduction. The concept of quality of life has become established as an important consideration in the treatment of patients with intracranial mass lesions. Since alterations of mental functioning are common in patients with such lesions [1-3], the impact of neurosurgery, in particular on cognitive functions, has been examined in recent research [4-6]. As well as cognitive and sensory disturbances, patients often complain of motor disturbances. These disturbances affect not only gross motor functioning but also the coordination of fine motor movements [7]. Case Report. A 56-year-old right-handed woman with no history of neurological or psychiatric disease was admitted to the Department of Neurosurgery for excision of a meningioma in the left frontal lobe. The first symptoms she had complained of were fatigue and severe headache. Before admission, the patient had worked as a secretary for a small company. During the four months prior to admission, she had experienced increasing problems with handwriting. She complained of an altered style of writing with reduced legibility, and of fatigue during writing, which impaired her performance at work. No further disturbances of motor functioning, cognition, or emotion were reported. No deficits were found on preoperative neurological examination. Neuroradiological examination using a 1.5 Tesla MRI system (MAGNETOM Vision; Siemens, Erlangen, Germany) in triplanar imaging with common sequences (T1, T2, contrast-enhanced T1) revealed a mass lesion situated in the precentral region of the left hemisphere with a local compressive effect and almost no perifocal edema. Homogeneous enhancement of the tumour matrix and surrounding dural enhancement in the coronal view implicated a small meningioma of the left convexity as the tumour lesion (Figure 1). The tumour histology was neuropathologically confirmed. On preoperative neuropsychological examination, the patient was alert, cooperative, and well orientated. Her intellectual functions were average [8,9]. While, according to published normative data, no disturbances of memory functions, attention, working memory, verbal fluency functions, or visuo-spatial and visuo-constructive abilities were observed [10-16], her reaction time in a simple computerized reaction-time task was increased [17]. In a detailed examination of language functions, no deficits were found with regard to spontaneous speech, the Token Test, reading, naming, or comprehension of speech [18]. Spelling was also unimpaired. However, her writing was laborious, effortful, and slow.
Apparatus and Measures. For the examination of handwriting, a digitizing tablet (WACOM IV) with a specific pen containing a normal ink refill was used. Digitizing tablets make it possible to examine specific kinematic aspects of handwriting movements, such as the velocity and acceleration of single strokes. The analysis of the velocity and acceleration of handwriting provides evidence of the existence of simple motor programs. It has been suggested that handwriting in healthy subjects is formed by the sequential activation of these motor programs, which are probably stored in the form of a spatial code [19]. Single letter-strokes, the smallest relevant units of the writing process, are formed by open-loop movements, which are characterized by velocity profiles with only one peak (inversion of direction) and a bell-shaped course. Automated and non-automated handwriting movements can be distinguished from one another by their velocity and acceleration profiles [20]. Only one inversion in velocity is expected when the writing movement is an open loop (fully automated or fluent). More than one inversion of velocity per stroke points to a disturbance of handwriting fluency or automation [21]. This means that the more inversions subjects produce, the more poorly they have mastered the movement. The tablet used in the present examination had a maximum sampling rate of 200 Hz. The position of the pen on the tablet, its velocity, and its acceleration were measured continuously during writing. Data were stored on a personal computer connected to the tablet. Kinematic data were calculated and smoothed using nonparametric regression methods (kernel estimators) [22]. It was possible to localize the tip of the pen with an accuracy of 0.2 mm in both directions (x/y). Movements of the pen tip above the paper, up to a maximum height of 1.3 cm, could also be recorded. Data processing was performed with a computer program for the analysis of handwriting movements [23]. For the examination, the patient was asked to write the sentence "Ein helles grelles Licht" ("A bright and glaring light") repeatedly. Before the start of these writing tasks, several practice trials were undertaken in order to familiarize the subjects with the writing tablet. The tablet was constructed to resemble a common desk pad so that subjects could produce their usual handwriting. No restrictions on posture, speed, or size of writing were imposed. The sentences were written on unlined white paper (size 297 × 210 mm). For data analysis, the total writing time (movement time) and the distance of the writing trace of the test sentence were recorded per trial. Movement time (in ms) was defined as the time between the first and the final movement in the writing of the test sentence. Distance (in mm) was defined as the distance covered by the pen during the writing of the test sentence. For further analysis, a mean movement time and a mean distance were calculated. Furthermore, the letter combination "ll" of the German words "helles" (bright) and "grelles" (glaring) was taken for the assessment of the kinematic aspects of handwriting. The letter combination "ll" was chosen since these letters represent a simple letter combination which is usually written with the letters joined. Furthermore, while writing the letter combination "ll", the pen remains in contact with the tablet. In the evaluation of the kinematic data, the mean number of inversions of the direction of the velocity (NIV) and acceleration profiles (NIA) of the letter combination "ll" were calculated. Kinematic analysis of the letter
combination "ll" was performed, since the examination of the dynamic and static writing trace may often require its segmentation into meaningful units.From a motor viewpoint, single letters and in particular single strokes represent the smallest relevant units of the handwriting movement [19].Data analysis focused on the vertical component of the strokes. Participants In order to exclude age-related impairments of handwriting movements, five right-handed female subjects aged 51 to 57 years without neurological or psychiatric diseases performed the same handwriting task.Furthermore, five right-handed female patients aged 54 to 58 years with histologically confirmed menigiomas of the left frontal lobe without involvement of the precentral region underwent the same procedure. Results With regard to movement distance, our patient (mean distance: 427 mm) displayed no differences in comparison to healthy subjects (mean distance: 294 to 532 mm) and patients with mass lesions of the left frontal lobe without involvement of the precentral region (mean distance: 385 to 452 mm).However, she showed longer movement times (mean time: 14,983 ms) than both healthy subjects (mean time: 6170 to 8573 ms) and the patient group (mean time: 6374 to 8727 ms).In addition, the number of inversions of velocity (NIV) and acceleration profiles (NIA) were markedly increased in our patient (Figure 2).While healthy subjects (mean NIV: 4.0 to 4.4; mean NIA: 6.4 to 8.1) and patients with left frontal lesions without involvement of the precentral region (mean NIV: 4.0 to 4.8; mean NIA: 6.2 to 10.2) performed single letter-strokes by open loop movements, a severe dysfluency of handwriting, as reflected in a higher number of inversions in velocity and acceleration profiles, could be observed in our patient (mean NIV: 24.4; mean NIA: 39.9). Four months after total surgical removal of the mass lesion the patient underwent a second examination using the same test procedures.Since drugs have been shown to affect fine motor movements such as handwriting movements [24][25][26][27], postoperative assessment was performed after the patient had completed courses of steroid and anticonvulsive medication.Neurological examination revealed no deficits.Postoperative neuroradiological examination showed a complete removal of the tumour mass.The patient complained of sporadic headache and disturbances of attention but mentioned that she had noticed an improvement in her handwriting.In comparison to the results of the preoperative assessment of cognitive functioning no significant alterations were found.While verbal fluency functions, language, memory (including working memory) and visuo-constructive abilities were undisturbed, the reaction time of the patient remained increased.However, kinematic analysis of handwriting revealed a significant improvement of writing time and fluency of handwriting.While, postoperatively, the distance of the writing trace of the test sentence was unchanged (mean distance: 442.8 mm), the patient needed between 7466 and 8520 ms to complete the sentence.Furthermore, she displayed automated handwriting movements (mean NIV: 4.2) as indicated by a single inversion of the velocity profile per stroke (Figure 2).In addition, the number of inversions of acceleration profiles was markedly decreased (mean NIA: 10.4). 
Discussion The present results indicate that intracranial mass lesions of the precentral region of the left hemisphere may affect the automation of handwriting movements. This finding is not surprising in view of the anatomy of the precentral motor areas and their functions. The primary motor cortex, the supplementary motor cortex and the premotor cortex are involved in the processing of handwriting movements. In right-handed people, handwriting is controlled by the primary motor cortex of the left hemisphere. The supplementary motor cortex plays an important role in the programming and coordination of movement and posture. Although the functions of the premotor cortex are less well understood, there is some evidence that this cortical region controls the proximal movements that move the arm to targets. Therefore, more complex movement sequences, such as handwriting movements, can be executed under the control of the premotor cortex [28]. Furthermore, our findings are also consistent with the results of neuroimaging studies. It has been shown that automated handwriting movements of healthy right-handed subjects were related to an increased regional cerebral blood flow (rCBF) of the dorsal and ventral premotor cortex and the inferior and superior parietal lobule [29]. Yousry and colleagues [30] observed, during non-automated handwriting movements in their right-handed subjects, an additional activation in fMRI of the pre- and postcentral gyri of the right hemisphere. They also found additional activation in the precentral gyrus, middle frontal gyrus and middle occipital gyrus of the left hemisphere. Peinemann and colleagues [31] also reported a higher cortical activation (rCBF) of the left prefrontal cortex and the right anterior parietal lobule, including the postcentral gyrus, during non-automated handwriting. However, when their right-handed subjects were requested to perform automated handwriting movements, a higher activation of the left supplementary motor cortex and the hand area of the left primary sensorimotor cortex was observed. We assume that the impairments of handwriting movements in our patient were the consequence of disturbed functioning of the motor system, including the primary motor cortex, the supplementary motor cortex and the premotor cortex. These areas were probably affected by increased intracranial pressure and compression of adjacent brain tissue caused by the mass lesion of the precentral region of the left hemisphere. As a result, the patient was unable to produce automated handwriting movements. She therefore attempted to compensate for her deficits by producing highly controlled handwriting movements, which are associated with a higher activation of the right pre- and postcentral gyri [30,31] and an impaired handwriting fluency [32,33]. Following surgical intervention, the detrimental effects of the space-occupying lesion were ameliorated and fully automated handwriting movements were restored.
Handwriting in adults is a complex psychomotor ability which constitutes a dynamic interplay of relatively slow horizontal movements of the lower arm, wrist movements and finger movements [34,35]. As well as semantic and syntactic demands, the process of handwriting necessitates the storage and retrieval of motor information, movement preparation, motor execution and the consideration of spatial requirements [19,36]. Therefore, both cognitive abilities and motor skills contribute to handwriting [37]. With regard to our patient, impairments of cognitive functioning can be ruled out since neuropsychological assessment using standardized test procedures revealed no disturbances in various aspects of cognition including memory, attention and both visuo-spatial and visuo-constructive functions. Kinematic assessment of handwriting movements has also been shown to allow an objective analysis of psychomotor symptoms in patients with other clinical conditions including neurological or psychiatric diseases, such as Parkinson's disease, Huntington's disease, Alzheimer's disease, Attention Deficit Hyperactivity Disorder and Major Depression [25,33,[38][39][40][41]. The clinical relevance of these kinematic assessments is supported by the finding that disturbances of handwriting may cause considerable handicap in everyday life and may even lead to loss of employment [42]. Since the integrity of handwriting plays an important role in everyday functioning, patients' complaints about disturbances of handwriting should be taken seriously. This is of particular importance in patients with intracranial mass lesions, since the mass lesions and their surgical treatment can cause all kinds of writing disturbances and abnormal writing behaviors such as agraphias or hypergraphia [43,44]. Furthermore, careful surgery involving intra-operative direct cortical stimulation (brain mapping) using language and writing tasks demonstrated that language areas can be spared during tumor removal [45]. Since handwriting represents a motor act, a computerized registration of handwriting movements, which has been shown to provide an objective, valid and reliable measure of psychomotor functioning [19,[46][47][48], should also be part of the assessment of patients with handwriting disturbances. The assessment is easy to perform, is well tolerated by patients and takes only a few minutes. In summary, disturbances of drawing and handwriting movements of patients with space-occupying lesions can be objectified using digitizing tablets. This technique could make an important contribution to the pre- and post-operative assessment of psychomotor functioning and an early referral of patients to motor rehabilitation programs.
Figure 2. Left: Number of inversions in velocity (NIV) and acceleration (NIA) in a healthy participant (a), a patient with a left frontal lesion without involvement of the precentral region (b), and the patient with a left frontal lesion with involvement of the precentral region during preoperative (c) and postoperative assessment (d).
Stability of the A-like Phase of Superfluid 3He in Aerogel with Globally Anisotropic Scattering It has been suggested that anisotropic quasiparticle scattering will stabilize anisotropic phases of superfluid 3He contained within highly porous silica aerogel. For example, global anisotropy introduced via uniaxial compression of aerogel might stabilize the axial state, which is called the A-phase in bulk superfluid 3He. Here we present measurements of the phase diagram of superfluid 3He in a 98% porous silica aerogel using transverse acoustic impedance methods. We show that uniaxial compression of the aerogel by 17% does not stabilize an axial phase. When disorder is introduced into superfluid 3He by way of high porosity silica aerogel, a metastable A-like phase appears on cooling [1,2,3,4]. This phase is thought to be like the A-phase in bulk superfluid 3He, known to be the axial p-wave state. At sufficiently low temperatures this metastable phase undergoes a transition to an isotropic superfluid phase similar to the isotropic state observed in bulk 3He, the B-phase. However, a distinct transition from the B-like phase to the A-like phase in aerogel is not seen upon warming. Tracking experiments [3,4,5,6] have shown that coexistence of A-like and B-like phases occurs in a narrow window of temperature, ≈ 20-50 µK, near the normal-to-superfluid transition temperature in aerogel, Tca. This is contrary to the expectation that the B-phase should be stable at all pressures and temperatures if the disorder introduced is homogeneous and the scattering is isotropic [7]. On the other hand, it has been predicted that scattering anisotropy from the strands of aerogel might destabilize the B-like phase in favor of the A-like phase [7]. Pursuing this idea, Vicente et al. [6] suggested that the introduction of global anisotropy into aerogel, for example by uniaxial strain, might increase the stability of the A-like phase. Recent calculations [8] have shown that uniaxial anisotropy (achieved for example by compression along one axis) should stabilize the axial state, whereas radial anisotropy (radially compressed or radially reduced by preferential shrinkage during growth) might stabilize the polar state. Our previous results [9] for 3He in aerogel with preferential radial shrinkage suggest a phase with increased stability, but the aerogel was not rigidly adhered to the transducer surface, so there is some question as to whether or not this was an effect intrinsic to superfluid 3He in aerogel. In this paper we present our measurements of the phase diagram for superfluid 3He in a sample of 98% porosity silica aerogel grown directly on the surface of a transducer and then subjected to a uniaxial strain of 17%. We used transverse acoustic impedance measurements [2,3,9] at the third harmonic (17.6 MHz) of an AC-cut quartz piezoelectric transducer, 0.84 cm in diameter. The impedance was measured using a frequency-modulated RF bridge, described elsewhere [14]. It has been shown [2,3] that for aerogel grown directly onto the transducer surface, the measured impedance is sensitive to all phase transitions through coupling of the shear transducer to the superfluid and is coincident with transitions in the interior of the aerogel. We grew our aerogel sample in the open space between two parallel transducers separated by two spacer wires, 0.0305 cm in diameter, held under tension from a stainless steel spring, Fig. 1.
Two additional spacer wires of smaller diameter, 0.0254 cm, were placed alongside and between the larger ones before aerogel was grown to fill the entire assembly. The aerogel was synthesized at Northwestern University via a one-step sol-gel process followed by supercritical drying [15]. The density was controlled by the ratio of the reactants during the synthesis and was measured after drying to be 97.8% porous. After drying, the excess aerogel was removed, leaving only the aerogel between the two parallel transducers such that their outer surfaces could be exposed to bulk 3He. Next, the 0.0305 cm diameter spacers were removed, maintaining tension with the spring, such that the aerogel was compressed to 0.0254 cm, giving 17% uniaxial strain. This amount of compression was shown by Pollanen et al. [15] to result in global anisotropy on the length scale of the correlation length of aerogel, ≈ 20 nm, causing minimal plastic deformation, as measured by small angle x-ray scattering (SAXS). Additionally, Pollanen et al. have used optical birefringence measurements to demonstrate that this method of imposing strain transmits anisotropy uniformly from macroscopic length scales to the microscopic scale probed by SAXS. Samples of the aerogel removed from the assembly region, adjacent to the acoustic sample, were also characterized using optical birefringence techniques [15] to ensure that, before compression, our aerogel sample was isotropic and homogeneous. The aerogel sample and experimental assembly were cooled in liquid 3He using a dilution refrigerator, followed by adiabatic nuclear demagnetization [14]. A SQUID-based paramagnetic salt (LCMN) thermometer was used [14], calibrated from the Greywall temperature scale [16] using bulk superfluid 3He transitions that were easily identified in the acoustic response, Vz, Fig. 2a. We determined the temperatures of the aerogel phase transitions by taking the derivative of the acoustic response with respect to temperature, Fig. 2b. The transition temperature from normal to superfluid in aerogel is best indicated by the point of separation of the warming and cooling traces as shown in Fig. 2b. Transitions from the A-like phase to the B-like phase are seen upon cooling, appearing as a dip in the derivative trace. No such transition is seen on warming. Similar signatures of these phase transitions have been reported previously for isotropic aerogel [2,3]. Gervais et al. [3] and Vicente et al. [6] performed tracking experiments by warming up close to, but not through, the aerogel superfluid transition temperature, Tca. After stopping at a 'turn-around' temperature the samples were then cooled again to look for an A-like to B-like transition. In this way it is possible to find the warming transition and how close the turn-around temperature must be to the critical temperature, Tca, to observe it. The magnitude of the impedance change is a measure of the amount of superfluid undergoing the A-like to B-like transition. We performed these tracking experiments at 25 bar in order to determine the window of coexistence of A-like and B-like phases in uniaxially compressed aerogel. We integrated the area of the dip in the derivative of the acoustic response with temperature and plot this as a function of the 'turn-around' temperature in Fig. 3 at 25 bar. The coexistence region is ≈ 40 µK which, to within our precision, is within 50 µK of Tca, similar to that reported earlier [3,6] for nominally isotropic aerogel.
4, we show the superfluid transitions, T ca , as well as the amount of supercooling in our uniaxially compressed aerogel compared to that of Gervais et al. 2,3 and Nazaretski et al. 4 . The similarity is striking given the significant amount of global anisotropy in our sample. The only apparent difference between our results on axially compressed aerogel and previous work is the increase in the supercooling of the A-like phase at pressures below 20 bar. This does not bear directly on the stability of the A-like phase, but suggests that the mechanism for nucleation of the B-phase is suppressed at lower pressures for uniaxially anisotropic aerogel. We have also found that the signature of the A-like to B-like transition becomes smaller as the pressure is decreased until it becomes difficult to measure below 12 bar. Although we find that uniaxial compression of the aerogel does not enhance phase stability, nonetheless we note that there are recent reports that the orientation of the superfluid order parameter can be influenced by anisotropy 10,11,12,13 . In summary, we find that the introduction of global anisotropy from uniaxial compression of 17% does not stabilize the A-like phase of superfluid 3 He in aerogel, in contrast to various suggestions 6,8 . The region of coexistence of the A-like and B-like phases is approximately 40 µK and indistinguishably close to the normal-tosuperfluid transition, nearly the same as that measured previously in uncompressed aerogel 3,6 . Consequently, it appears that uniaxial strain does not stabilize an A-like phase, or for that matter any phase, in aerogel. The pressure versus temperature phase diagram is remarkably similar to uncompressed aerogel, except for increased supercooling at low pressures in the range, 12 -20 bar. We acknowledge support from the National Science Foundation, DMR-0703656 and thank W.J. Gannon for useful discussions.
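Two quantitative steps in this analysis are simple enough to sketch: the strain implied by the spacer change, and the integration of the dip in dVz/dT that quantifies the A-like to B-like conversion. The snippet below is illustrative only, assuming uniformly sampled traces; the median baselining and the array names are assumptions, since the paper does not specify how the dip was baselined before integration.

```python
import numpy as np

# Uniaxial strain from the spacer-wire change described above.
d0, d1 = 0.0305, 0.0254             # spacer diameters in cm
strain = (d0 - d1) / d0             # ~0.167, i.e. the quoted 17%

def ab_transition_area(T, Vz):
    """Area of the cooling-trace dip in dVz/dT marking the A->B transition.

    T:  monotonically increasing temperatures (mK).
    Vz: acoustic response sampled at those temperatures.
    Returns the integrated negative excursion of the derivative, a
    proxy for the amount of superfluid converting at the transition.
    """
    dVdT = np.gradient(Vz, T)
    baseline = np.median(dVdT)               # crude background estimate
    dip = np.minimum(dVdT - baseline, 0.0)   # keep only the dip
    # Trapezoidal rule, written out to avoid version-specific helpers.
    return -float(np.sum(0.5 * (dip[:-1] + dip[1:]) * np.diff(T)))
```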
#ENT: Otolaryngology Residency Programs Create Social Media Platforms to Connect With Applicants During COVID-19 Pandemic Objective: To determine which otolaryngology residency programs have social media platforms and to review which programs are utilizing platforms to advertise virtual open houses and virtual subinternships for residency applicants. Study Design: Cross-sectional study. Setting: The study was conducted online by reviewing all accredited otolaryngology residency programs in the United States participating in the Electronic Residency Application Service. Methods: Otolaryngology residency programs were reviewed for social media presence on Instagram, Twitter, and Facebook. Social media posts were evaluated for virtual open houses and virtual subinternships. Residency websites and the Visiting Student Application Service were evaluated for the presence of virtual subinternships. All data were collected between September 5, 2020, and September 9, 2020. This study did not require approval from the University of Alabama at Birmingham Institutional Review Board for Human Use. Results: Among 118 otolaryngology residency programs, 74 (62.7%) participate on Instagram, 52 (44.1%) participate on Twitter, and 44 (37.3%) participate on Facebook. Fifty-one Instagram accounts, 20 Twitter accounts, and 4 Facebook accounts have been created during 2020. Forty-two (36%), 30 (25.4%), and 15 (13%) programs are promoting virtual open houses on Instagram, Twitter, and Facebook, respectively. Two programs on the Visiting Student Application Service offered virtual subinternships. Seven residency program websites offered virtual subinternships. Nine, 6, and 1 program offered virtual subinternships on Instagram, Twitter, and Facebook, respectively. Conclusion: This study demonstrates that social media presence on Instagram and Twitter among otolaryngology residency programs has substantially grown in 2020 at a higher rate compared to previous years. These data suggest that otolaryngology residency programs are finding new ways to reach out to applicants amid an unprecedented type of application cycle due to the challenges presented by COVID-19. Many programs are advertising virtual open houses via social media platforms to connect with applicants, and a few programs are offering virtual subinternships to replace traditional subinternships. Introduction The US residency application process for medical students has drastically changed since the outbreak of COVID-19. In response, medical schools and residency training programs have implemented strict policies and guidelines to minimize viral spread, thereby limiting interaction between medical students and outside institutions. [1][2][3][4][5][6][7] Many open house meet-and-greets, external subinternships, and in-person interviews have been cancelled to comply with safety protocols. For competitive surgical specialties like otolaryngology, external rotations are considered critical opportunities for student doctors to showcase their abilities, gather evaluation letters, and foster new relationships with faculty at institutions of interest for potential future residency training. Without these interactions, residency programs must find new ways to evaluate the skills, knowledge, and personalities of applicants. 
Considering these new limitations after the onset of COVID-19, multiple otolaryngology organizations including the Society of University Otolaryngologists (SUO), Otolaryngology Program Directors Organization (OPDO), and Association of Academic Departments of OHNS (AADO) have released statements recommending that students avoid away rotations and that residency programs lower expectations regarding the number of typical outside experiences and evaluation letters on students' applications. 8,9 Furthermore, they suggest that programs expand their online presence and explore opportunities to showcase programs via virtual tours. In this study, we evaluated the use of Twitter, Facebook, and Instagram during the COVID-19 pandemic as an alternative medium for programs to showcase attributes and allow for evaluation of and communication with potential residency applicants. Methods A list of accredited otolaryngology-head and neck surgery residency programs was gathered from the Electronic Residency Application Service, which consisted of a total of 118 civilian programs. This study excluded programs maintained at military bases. We determined which programs had Instagram accounts through links on residency program websites, Google searches, and suggested Instagram accounts through the "similar accounts" option under each otolaryngology Instagram account. Instagram feeds were reviewed for previous and/or scheduled virtual open house invitations and virtual subinternship opportunities. Twitter and Facebook were evaluated in a similar manner. The Visiting Student Application Service (VSAS) through the Association of American Medical Colleges website was also utilized to determine subinternship opportunities. Twitter data were collected and deemed current as of September 5, 2020. Instagram and Facebook data were collected and deemed current on September 8 and 9, 2020. Graphs and calculations were generated in Microsoft Excel. This study did not require approval from the University of Alabama at Birmingham (UAB) Institutional Review Board for Human Use. Results Instagram Of the 118 otolaryngology residency programs, 74 (62.7%) programs participate on Instagram. Twenty-nine (24.6%) only have a department-based account, 38 (32.2%) only have a residency-based account, and 7 (5.9%) have both types of accounts. Forty-two (35.6%) programs are promoting virtual open house meet-and-greets on Instagram (Table 1). Among all 81 Instagram accounts (total number of department and residency accounts), 51 (63.0%) accounts have been created during the 2020 calendar year (Figure 1). Forty-one of the accounts were created in June, July, and August 2020. Twitter Of the 118 otolaryngology residency programs, 52 (44.1%) programs participate on Twitter, and 30 (25.4%) programs are promoting virtual open houses on Twitter (Table 1). Among all 55 Twitter accounts (total number of department and residency accounts), 20 (36.4%) accounts have been created during the 2020 calendar year (Figure 1). Facebook Of the 118 otolaryngology residency programs, 44 (37.3%) programs participate on Facebook, and 15 (13%) programs are promoting virtual open houses on Facebook (Table 1). Among all 44 Facebook accounts (total number of department and residency accounts), only 4 (9.1%) have been created during the 2020 calendar year (Figure 1). Four Facebook accounts were created in June, July, and August 2020. Subinternships Data regarding the number of external subinternships offered in prior years are unavailable on VSAS. Currently, VSAS offers 42 in-person external subinternships, which are presumed to be cancelled. Two programs on VSAS are offering virtual subinternships. Nine programs offered virtual away rotations on Instagram. Six programs offered virtual away rotations on Twitter, and 1 program offered virtual away rotations on Facebook. Seven residency program websites offered virtual subinternships.
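The year-over-year growth rates cited in the Discussion below follow directly from these counts, under the assumption that the accounts existing at the end of 2019 equal the 2020 totals minus the accounts created during 2020 (i.e., no accounts were deleted). A quick arithmetic check:

```python
# Account totals and 2020 creations, as reported in the Results above.
totals_2020 = {"Instagram": 81, "Twitter": 55, "Facebook": 44}
created_2020 = {"Instagram": 51, "Twitter": 20, "Facebook": 4}

for platform, total in totals_2020.items():
    # Assumption: no deletions, so the end-of-2019 baseline is the
    # 2020 total minus the accounts created during 2020.
    baseline = total - created_2020[platform]
    growth = created_2020[platform] / baseline * 100
    print(f"{platform}: {baseline} -> {total} ({growth:.0f}% relative increase)")

# Output:
# Instagram: 30 -> 81 (170% relative increase)
# Twitter: 35 -> 55 (57% relative increase)
# Facebook: 40 -> 44 (10% relative increase)
```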
Discussion The data show a growing number of social media accounts among academic otolaryngology departments and residency training programs, the rate of which has increased compared to previous years (Figure 1). In our study, there was a relative 170%, 57%, and 10% increase in Instagram, Twitter, and Facebook otolaryngology social media accounts, respectively, between 2019 and 2020. Our data demonstrate a plateauing of Facebook accounts in comparison to the exponential growth witnessed on Instagram and Twitter. It is pertinent to note that social media usage has increased on a global scale, as evidenced by an increase in the number of accounts on Instagram, Twitter, and Facebook in the United States by 5.5%, 0.1%, and 1.4%, respectively, between 2019 and 2020. [10][11][12][13][14][15][16][17] This global growth may partially explain our findings. However, these rough estimates may be inaccurate. There is uncertainty surrounding the number of existing social media accounts due to the privacy of company data as well as an abundance of fake accounts. 18,19 A selection of surveys analyzing social media demographics conducted by the Pew Research Center shows that the percentage of adults who claim to use social media has increased by 3.9% this year compared to 2019. [20][21][22] Younger adults (ages 18-29) appear to utilize Facebook the most (79%), then Instagram (67%), and finally Twitter (38%), which contrasts with the decreased usage seen in adults over 30 on Instagram and Twitter. Our data show a shift toward these 2 platforms (Instagram and Twitter) in recent years among otolaryngology residency programs. Although global trends contribute to the growth of otolaryngology residency social media accounts, we suspect that most accounts were created this year in response to the cancellation of in-person engagements. These interpretations are primarily based on the timing and increased rate of growth demonstrated by our data in the summer months of 2020 after the onset of the pandemic. The majority (63%) of all otolaryngology Instagram accounts were created this year. Interestingly, 41 Instagram accounts were created in the months of June, July, and August. Twitter accounts followed a similar pattern. The timing of this occurrence aligns with the summer season of fourth-year medical students finalizing applications and engaging potential residency programs. In alignment with the suggestions of the SUO, OPDO, and AADO stated in April 2020, it seems that otolaryngology residency programs have adapted to the new restrictions placed by COVID-19 by expanding their online presence. Furthermore, otolaryngology residency programs have explored new routes to communicate with applicants via social media platforms. This is supported by the fact that 42 programs and 30 programs are currently using Instagram and Twitter, respectively, to invite students to virtual open houses. Before COVID-19, these opportunities did not exist. In general, posts about virtual open houses will list dates and contact information to sign up for the event. These opportunities offer applicants the option to meet faculty and residents. Social media accounts will also post virtual tours, which consist of brief videos featuring facilities, residency workspaces, and other program attributes. The quality and effectiveness of these virtual experiences compared to traditional meetings and tours may be an area of future investigation. Residency programs appear to be utilizing social media to "brand" themselves.
[23][24][25] Facebook reaches a broad and large audience, Instagram engages a younger population with visual content, and Twitter disseminates information with the "@" and "#" functions. A 2020 review on the use of social media for otolaryngology residency programs suggested that programs establish a more prominent online presence and use otolaryngology-related Twitter hashtags such as #VirtualOTOMatch and #OTOMatch2021 to reach out to applicants. 26 Department-based accounts may celebrate new faculty and achievements and post health announcements for the general public and patients, whereas new residency-based accounts likely target applicants. Residency-based accounts may demonstrate "branding" in posts by showcasing resident lifestyles, resident operative time, research involvement, and community outreach. Programs can distinguish themselves and attract applicants who share similar personalities to their online personas. In-person subinternships are available only to those who lack an at-home institutional otolaryngology program, as suggested by the SUO, OPDO, and AADO. New educational modalities are necessary to replace the traditional in-person rotation. 27 For instance, the University of Pennsylvania has established a virtual otolaryngology surgical rotation that comprises livestreamed operations and telehealth patient communication. 28 However, few virtual subinternships are currently available to applicants. The impact of the COVID-19 pandemic on graduate medical education will be revealed in the years to come. Efforts to preserve medical education and alternative measures to evaluate and connect with otolaryngology resident applicants are essential and will continue to evolve. Here, we demonstrate a growth of otolaryngology social media accounts, which we largely attribute to the cancellation of in-person engagements after the onset of COVID-19. Notably, both Instagram and Twitter have surpassed Facebook this year in terms of the number of otolaryngology social media accounts. Instagram is the most prevalent platform. It is possible that this platform's visual-based content is more appealing, attractive, and preferable among users surveying residency programs. The changes reported in this study will likely continue to impact the dynamic of the otolaryngology residency application process. Considering the exponential growth of social media platforms in recent months among otolaryngology residency programs, the numbers presented in this study most likely will not represent the actual numbers at the date of publication. For this reason, data were collected within the same time frame (September 5-9, 2020). It is possible that not all programs were represented in the study due to browsing limitations on the internet. Google searches generate the most relevant links and can sometimes hide social media accounts, particularly accounts that are new without as much traction. To combat these possible browsing limitations, social media platforms were searched on Google and within the search engines of the respective social media platforms, including a review of suggested accounts. Conclusion COVID-19 has changed the residency application process. In response, there has been a notable growth of social media accounts among otolaryngology residency programs within the last few months.
Many programs are advertising virtual open houses via social media platforms to connect with residency applicants, and a few programs are offering virtual subinternships to replace traditional external experiences. Authors' Note Andrew B. DeAtkine contributed as primary author by collecting data, generating graphs, writing manuscript, and submitting to journal. Jessica W. Grayson contributed by writing and revising paper. Nikhi P. Singh contributed by organizing project methodology and revising paper. Alexander P. Nocera contributed by formulating project idea and methodology and revising paper. Soroush Rais-Bahrami contributed by formulating project idea and methodology and revising paper. Benjamin J. Greene contributed as senior author by writing and revising paper. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article.
Genetic Contributions to Age-Related Decline in Executive Function: A 10-Year Longitudinal Study of COMT and BDNF Polymorphisms Genetic variability in the dopaminergic and neurotrophic systems could contribute to age-related impairments in executive control and memory function. In this study we examined whether genetic polymorphisms for catechol-O-methyltransferase (COMT) and brain-derived neurotrophic factor (BDNF) were related to the trajectory of cognitive decline occurring over a 10-year period in older adults. A single nucleotide polymorphism in the COMT (Val158/108Met) gene affects the concentration of dopamine in the prefrontal cortex. In addition, a Val/Met substitution in the pro-domain for BDNF (Val66Met) affects the regulated secretion and trafficking of BDNF, with Met carriers showing reduced secretion and poorer cognitive function. We found that impairments over the 10-year span on a task-switching paradigm did not vary as a function of the COMT polymorphism. However, for the BDNF polymorphism the Met carriers performed worse than Val homozygotes at the first testing session, but only the Val homozygotes demonstrated a significant reduction in performance over the 10-year span. Our results argue that the COMT polymorphism does not affect the trajectory of age-related executive control decline, whereas the Val/Val polymorphism for BDNF may promote faster rates of cognitive decay in old age. These results are discussed in relation to the role of BDNF in senescence and the transforming impact of the Met allele on cognitive function in old age. INTRODUCTION Old age is often accompanied by cognitive impairment, with the largest deficits on executive control tasks that are reliant on prefrontal cortex function (Hedden and Gabrieli, 2004). Evidence from both humans and non-human animals suggests that some cognitive deficits observed in old age could be related to disruptions in the dopaminergic and neurotrophic systems (Bäckman et al., 2006; Pang and Lu, 2004). For this reason, genetic polymorphisms that affect the concentration or secretion of neurotrophic factors and neurotransmitters could contribute to some of the individual differences in cognitive function in older adults (de Frias et al., 2004; Harris et al., 2006). A functional single nucleotide polymorphism (SNP) in the catechol-O-methyltransferase (COMT) gene mapped to chromosome 22q11 is thought to influence dopamine concentration in the prefrontal cortex (Akil et al., 2003; Lachman et al., 1996). COMT is a post-synaptic enzyme that catabolizes dopamine released in the prefrontal cortex, and a valine (Val) to methionine (Met) amino acid substitution at the 158/108 locus of the peptide sequence affects the thermostability of the enzyme. The Met/Met form of this polymorphism produces a less active enzyme, resulting in higher dopamine levels than the Val/Val or the Val/Met polymorphism. In schizophrenic populations, as well as normally functioning young adults, the Met/Met form of the COMT polymorphism has been related to superior performance on a number of tests of executive function including the Wisconsin Card Sort Task (Egan et al., 2001; Joober et al., 2002; Malhotra et al., 2002) and the n-back task. In addition, people with the Met/Met form of the COMT polymorphism can elicit higher or lower levels of activity in the prefrontal cortex depending on the task characteristics and cognitive demands (Bertolino et al., 2006; Caldu et al., 2007; Egan et al., 2001; Ettinger et al., 2008; Mattay et al., 2003; Winterer et al., 2006). However, some studies, including a recent meta-analysis, have reported that the Met/Met form of the COMT polymorphism is not always associated with enhanced or more efficient cognitive function or prefrontal activity compared to Val carriers (Barnett et al., 2008; Bruder et al., 2005; Ho et al., 2005; MacDonald et al., 2007; Tsai et al., 2003). A few studies have examined whether individual differences in cognitive function in non-demented older adults could be attributed to the COMT polymorphism. One study found no association between the COMT polymorphism and cognitive function (O'Hara et al., 2006), while others have reported better cognitive performance in Met homozygotes compared to carriers of the Val allele (de Frias et al., 2004; Liu et al., 2008). On the other hand, some studies have reported better cognitive function in the Val/Met heterozygotes than in either of the homozygotes (Harris et al., 2005), and yet others have reported that both Val/Met heterozygotes and Met homozygotes perform better than their Val/Val counterparts (Starr et al., 2007). Some have speculated that this variation in the literature could be related to (a) an age-related shift in the U-shaped curve that reflects dopamine signaling in the prefrontal cortex (Harris et al., 2005; Starr et al., 2007), (b) COMT interactions with gender and age (O'Hara et al., 2006) that could be masking or driving certain effects, or (c) tasks employed in some studies that do not adequately and specifically reflect dopamine or prefrontal cortex engagement and therefore do not validly reflect the impact that the polymorphism has on prefrontal function (O'Hara et al., 2006). There are also a number of other explanations for the discrepancies found between studies, including interactions or covariation between the COMT polymorphism and lifestyle factors, demographic variables, or other genes or polymorphisms. Most of the studies described above have utilized cross-sectional designs, that is, they assess the relationship between the COMT polymorphism and cognitive function at one point in time. This method is capable of assessing whether any association exists between the COMT polymorphism and cognition but cannot determine if the polymorphism accounts for within-subject change in cognitive function across the lifespan. In order to assess whether the polymorphism relates to the trajectory of cognitive decline in old age, longitudinal investigations are needed. Two longitudinal studies have examined the effects of the COMT polymorphism on the trajectory of cognitive decline in older adults. The first study reported that Met homozygotes between 50 and 60 years old experienced a more rapid decline in episodic memory performance over a 5-year period than Val carriers of the same age (de Frias et al., 2004). However, the COMT polymorphism did not moderate changes in episodic memory in either middle-aged adults or adults between 65 and 80 years of age. The main conclusion was that change in episodic memory over a 5-year period is largely independent of the COMT polymorphism except in younger-old adults, that is, people between 50 and 60 years of age. A second longitudinal study, with a follow-up period of 4 years, reported that there was no interaction between the COMT polymorphism and time on cognitive function in people between 60 and 64 years of age.
However, there was a significant effect of the COMT polymorphism after controlling for general cognitive function at age 11, suggesting that there was a change in the effect that the polymorphism had on cognitive function over the lifespan (Starr et al., 2007). These results suggest that if the effect of the COMT polymorphism on cognitive performance changes as a function of age, the changes occur before the age of 60. Although the dopaminergic system has been proposed to underlie some of the age-related cognitive deficits in prefrontal function (Bäckman et al., 2006), it is likely that changes in the concentration or efficacy of an array of molecules and receptors influence cognitive function in old age. For example, brain-derived neurotrophic factor (BDNF) is another molecule involved in cognition that may be related to cognitive impairment and dementia. For example, the mature form of the BDNF (mBDNF) molecule enhances learning and memory and long-term potentiation, induces synaptic plasticity (Lu, 2003), and promotes neurogenesis (Pencea et al., 2001), and there is evidence that age-related cognitive impairments might be related to a decrease in the production or secretion of BDNF (Hayashi et al., 2001; Pang and Lu, 2004). A functional polymorphism in the gene for BDNF produces a single amino acid substitution of Val to Met at codon 66 in the pro-domain, with the Met allele selectively impairing the regulated secretion and intracellular trafficking of BDNF in primary cortical neurons and neurosecretory cells (Egan et al., 2003). Previous studies have found that people with the Met allele have impaired episodic memory, working memory, and hippocampal function (Hariri et al., 2003; Ho et al., 2006) and lower hippocampal levels of N-acetylaspartate, a putative measure of neuronal integrity. In addition, Met carriers have less gray matter volume throughout the prefrontal and middle temporal lobes compared to Val carriers (Ho et al., 2006, 2007; Pezawas et al., 2004). Only a few studies have examined the relationship between the BDNF polymorphism and age-related cognitive impairment. Inconsistent with the majority of studies in young adults and patients with depression, Met homozygotes at 64 and 79 years of age outperformed the Val homozygotes and heterozygotes (Harris et al., 2006) after controlling for sex and cognitive performance at age 11. This finding suggests that the influence that BDNF has on cognitive function may change across the lifespan and that the Met allele may be neuroprotective during later stages of life. Consistent with this finding, there is some evidence that the Val allele may increase the risk for Alzheimer's disease (Matsushita et al., 2005; Ventriglia et al., 2002), but some recent studies have failed to find such a relationship (Akatsu et al., 2006; He et al., 2007). Others, however, have reported the opposite finding, that is, older adults carrying the Met allele perform worse across a variety of cognitive domains compared with the Val homozygotes (Miyajima et al., 2008) and have a higher risk for developing late-life depression (Taylor et al., 2007) and white-matter hyperintensities (Taylor et al., 2008). However, similar to the research on COMT, the studies mentioned above on BDNF have been cross-sectional in nature and are therefore limited in their capability for drawing conclusions concerning the relationship between within-subject changes in cognitive function and the BDNF polymorphism. A longitudinal study would help resolve these issues.
The current study assessed whether cognitive decline over a 10-year span in a group of older adults was moderated by the COMT and/or BDNF polymorphism. Our within-subject design provided us with more statistical power than previously conducted cross-sectional studies. To assess whether individual differences in cognitive decline varied as a function of the BDNF or COMT polymorphism, we utilized a well-studied task-switching paradigm that requires participants to rapidly switch from one simple cognitive task to a different cognitive task (Rogers and Monsell, 1995). This paradigm was chosen because of its established capability to tap prefrontal and executive resources (Braver et al., 2003; Kimberg et al., 2000) and to demonstrate age-related deficits (Kramer et al., 1999a; Kray and Lindenberger, 2000). Therefore, this paradigm allowed us to test the predictions that (a) the COMT and BDNF polymorphisms affect performance on tasks that depend on prefrontal function, and (b) age-related executive deficits vary as a function of the BDNF or COMT polymorphism. Furthermore, our 10-year longitudinal design doubles the length of previous longitudinal studies to date. A long span between testing periods increases the likelihood of finding changes in cognitive function across time, which then provides us with the variation necessary to assess whether the COMT and/or BDNF polymorphisms moderate the decline in performance. Based on the extant literature, we predicted that the COMT polymorphism would explain little of the changes in cognitive performance over the 10-year span, given that the age of our sample at the first time point was on average above the age at which interactions across the lifespan have been observed in prior investigations (de Frias et al., 2004). On the other hand, there have not been any longitudinal studies examining whether the BDNF polymorphism influences cognitive function across the adult lifespan. Some have argued that at younger ages the Val/Val allelic combination provides some neuronal and cognitive benefits, but with advancing age the Met allele, instead of being detrimental to cognitive and brain function and morphology, actually carries some protection against the development of dementia and cognitive impairment (Harris et al., 2006). We explored these hypotheses in the current study. PARTICIPANTS Fifty-three healthy older adults (14 male, mean age of 75.5 ± 5.3, range: 67-86) who had participated in a previous study ∼10 years ago (Kramer et al., 1999b) were recruited to participate in this study. Forty-three percent of the original sample of 124 participants agreed to return. We assessed whether the participants who agreed to return for the follow-up session were different from those who declined the invitation in terms of male-to-female ratio or age. The male-to-female ratio was similar between those that agreed to return (m:f = 0.35) and those that declined the invitation to return (m:f = 0.38), with slightly more men declining the invitation to return for the follow-up. Furthermore, the average age of the participants was nearly equivalent for those that returned [average age = 65.33 (10 years ago)] compared with those that declined to return for the follow-up session [average age = 65.50 (10 years ago)]. Independent samples t-tests demonstrated that these differences were not significant (all p > 0.05).
The University of Illinois Institutional Review Board approved the study, and all volunteers signed an informed consent. TASK-SWITCHING The task-switching paradigm examines subjects' ability to rapidly disengage from the performance of one task and switch to another. Subjects performed two different tasks that alternated after every two trials: two trials of one task followed by two trials of the other task, and so on (see Figure 1). In one task subjects performed an odd/even numerical judgment (i.e., is a single-digit number odd or even). In the other task subjects performed a vowel/consonant judgment. When one type of trial (e.g., digit judgment) was followed by a trial of the same type (e.g., digit judgment), it was labeled a Repeat trial. However, when participants needed to respond to a trial that was of a different task type than the previous trial, it was labeled a Switch trial. Response times (RTs) to switch trials are higher than RTs to repeat trials, and the accuracy rates are lower.
Figure 1 | Description of the task-switching paradigm. Letter and digit stimuli were presented simultaneously in a 2 × 2 grid and participants had to switch between responding to the letters versus responding to the numbers on every 3rd trial (adapted from Rogers and Monsell, 1995).
The task stimuli, a letter and a single-digit number, were presented together in a 2 × 2 matrix centered in the middle of the computer screen. When the letter and digit were located in one half of the matrix subjects performed the odd/even judgment task; when the letter and digit were in the other half of the matrix subjects performed the consonant/vowel judgment task (e.g., perform the odd/even judgment for the upper two quadrants of the matrix and the consonant/vowel judgment for the lower two quadrants). The letter and digit were presented in the matrix in a continuous clockwise direction. Thus, the occurrence of a task switch was predictable. The location (i.e., left, right, upper, lower) of each task was counterbalanced across subjects (task adapted from Rogers and Monsell, 1995). Each stimulus pair was presented until the subject responded, and then the next stimulus pair was presented 400 ms following the response. Subjects responded with one of two keys on the computer keyboard for both of the tasks (e.g., one key was used to respond to an odd number or a consonant). The key representations were counterbalanced across subjects. Subjects first performed two 30-trial single-task blocks followed by one 30-trial task-switching block as practice. The practice blocks were then followed by four 60-trial task-switching blocks. The main dependent variables included mean RT and accuracy for both the Repeat and Switch conditions. MMSE AND IQ To test for general cognitive function and possible dementia we employed a modified and revised version of the Mini-Mental Status Examination (MMSE) that has a maximum score of 57 and a cut-off for possible dementia at 51. To assess the relationship between the BDNF and COMT polymorphisms and general intelligence (IQ) we employed the Kaufman Brief Intelligence Test. Both of these tests were conducted at the Time 2 session and were therefore not subjected to a repeated-measures analysis to assess for change in either score. GENOTYPING Buccal cells were collected from all participants using MasterAmp™ Buccal Swab Brushes (Epicentre Biotechnologies). Genomic deoxyribonucleic acid (DNA) was extracted from the buccal swabs using MasterAmp™ DNA Extraction Solution (Epicentre Biotechnologies).
COMT Primers COMT-F 5′-TCA CCA TCG AGA TCA ACC CC-3′ and COMT-R 5′-GAA CGT GGT GTG AAC ACC TG-3′ were used to amplify the 176 bp polymorphic COMT fragment (Barr et al., 1999). The amplification was done in 50 μl reactions containing ∼125 ng genomic DNA, 200 μM deoxynucleoside triphosphates (dNTPs), 10 pmol of each primer, 10× HotStarTaq® buffer (QIAGEN), and 1 U HotStarTaq® DNA polymerase (QIAGEN). Polymerase chain reaction (PCR) conditions consisted of an initial denaturation step at 95°C for 15 min followed by 30 cycles on a thermocycler (denaturation at 94°C for 30 s, annealing at 52°C for 30 s, and extension at 72°C for 30 s) and finished with a final extension at 72°C for 10 min. Eight microliters of the PCR product were digested with 10 U NlaIII (New England Biolabs) (Barr et al., 1999) at 37°C for 1 h and analyzed by gel electrophoresis on a 3.5% MetaPhor® agarose gel (Cambrex Bioscience, Inc./Lonza). The gel was immersed in an ethidium bromide solution for 15 min and visualized under ultraviolet light. Digestion resulted in bands of 82, 54, and 41 bp for the Val158 allele. The 82 bp fragment was cut into 64 and 18 bp bands for the Met158 allele. BDNF Primers BDNF-F 5′-GAG GCT TGA CAT CAT TGG CT-3′ and BDNF-R 5′-CGT GTA CAA GTC TGC GTC CT-3′ were used to amplify the 113 bp polymorphic BDNF fragment (Neves-Pereira et al., 2002). The amplification was done in 25 μl reactions containing ∼125 ng genomic DNA, 200 μM dNTPs, 10 pmol of each primer, 1.5 mM MgCl2, and 1 U Taq DNA polymerase (Invitrogen) (adapted from Neves-Pereira et al., 2002). PCR conditions consisted of an initial denaturation step at 95°C for 5 min followed by 30 cycles on a thermocycler (denaturation at 94°C for 30 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s) and finished with a final extension at 72°C for 5 min (Neves-Pereira et al., 2002). 6.5 μl of the PCR product were digested with 3 U Eco721 (Fermentas) at 37°C overnight and analyzed by gel electrophoresis on a 4% 3:1 NuSieve® agarose gel (Cambrex Biosciences, Inc./Lonza) (adapted from Neves-Pereira et al., 2002). The gel was immersed in an ethidium bromide solution for 10 min and visualized under ultraviolet light. Digestion resulted in an uncut band of 113 bp for the Met66 allele. The 113 bp fragment is cut into 78 and 35 bp bands for the Val66 allele. STATISTICAL ANALYSIS We analyzed the task-switching data (reaction time and accuracy) with repeated-measures analyses of variance with Time (1997, 2007) and Condition (repeat, switch) as within-subject factors and group (COMT or BDNF genotype) as a between-subjects factor. Effect sizes were calculated and are reported here as partial eta-squared (ηp²). IQ scores were used as a covariate for COMT (see 'Results' section). One-way ANOVAs and independent samples t-tests were also employed to assess differences in demographic characteristics, IQ, MMSE scores, or task-switching performance as a function of the BDNF and COMT polymorphisms at separate time points. All data were analyzed using SPSS 16.02 for Mac. COMT Demographics One-way ANOVAs were used to test whether the COMT SNP was related to age or IQ (see Table 1). We found no effect of age [F(2,52) = 1.87; n.s.; ηp² = 0.07], but we did find a trend for an effect of IQ obtained from the Kaufman Brief Intelligence Scale [F(2,52) = 2.61; p < 0.08; ηp² = 0.09].
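The restriction digests described in the Genotyping section map band patterns onto genotypes deterministically, so the calls can be expressed as a small lookup. The helper below is only an illustration of that mapping, using the fragment sizes stated above; the function names and the set-based interface are hypothetical.

```python
def call_comt(bands_bp: set) -> str:
    """COMT Val158Met call from NlaIII digestion of the 176 bp amplicon.

    Val allele -> 82, 54 and 41 bp bands; on the Met allele the 82 bp
    fragment is further cut into 64 and 18 bp.
    """
    has_val = 82 in bands_bp
    has_met = {64, 18} <= bands_bp
    if has_val and has_met:
        return "Val/Met"
    return "Val/Val" if has_val else "Met/Met"


def call_bdnf(bands_bp: set) -> str:
    """BDNF Val66Met call from Eco721 digestion of the 113 bp amplicon.

    Met allele -> uncut 113 bp band; Val allele -> 78 and 35 bp bands.
    """
    has_met = 113 in bands_bp
    has_val = {78, 35} <= bands_bp
    if has_val and has_met:
        return "Val/Met"
    return "Met/Met" if has_met else "Val/Val"


# A BDNF heterozygote shows all three bands on the gel.
assert call_bdnf({113, 78, 35}) == "Val/Met"
```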
Post hoc tests on the IQ trend reported above revealed that those with the Val/Val form of the COMT gene had lower IQ scores than the heterozygotes (p < 0.04) and marginally lower scores than the Met homozygotes (p < 0.06). Met homozygotes and Val/Met heterozygotes did not differ in IQ scores (p < 0.92). MMSE We employed a one-way ANOVA to examine whether the COMT SNP was related to performance on the MMSE task, a general and widely used measure to test for possible dementia and impaired cognitive function (see Table 1). There was no relationship between the COMT SNP and performance on the MMSE [F(2,51) = 1.84; n.s.; ηp² = 0.07]. Task-switching Repeated-measures ANOVAs were run with COMT genotype (Val/Val; Val/Met; Met/Met) as a between-subjects factor and Time (1997 - Time 1; 2007 - Time 2) and Condition (Repeat; Switch) as within-subjects factors on RTs and accuracy rates separately. We included IQ score as a covariate given its marginal relationship to the COMT polymorphism (see above). Therefore, all results described here can be considered statistically independent of IQ. We found that the main effects of Time [F(1,42) = 1.75; n.s.] and Genotype [F(2,42) = 1.23; n.s.] were not significant (Table 2). Furthermore, consistent with our hypotheses and results from previous studies (de Frias et al., 2004; Harris et al., 2005), we failed to find a Time × Genotype interaction [F(2,42) = 0.34; n.s.; ηp² = 0.02] or a Time × Genotype × Condition interaction on the RTs from the task-switching paradigm [F(2,42) = 0.30; n.s.; ηp² = 0.01], indicating that RTs (for both the repeat and switch conditions) did not change over the 10-year period as a function of the COMT polymorphism. Converting the RTs into a switch cost (switch RT − repeat RT) at each time point confirmed this effect. We also conducted the same repeated-measures analysis on the accuracy rates and found neither a significant Time × Genotype interaction [F(2,42) = 0.86; n.s.; ηp² = 0.04] nor a Time × Genotype × Condition interaction [F(2,42) = 0.70; n.s.; ηp² = 0.03]. To assess performance on the task-switching paradigm at individual time points as a function of the COMT polymorphism we conducted a series of univariate ANOVAs with Genotype as a fixed factor and RTs and accuracy rates for each condition as the dependent variable. There was no effect of Genotype on the RTs for either the switch condition or the repeat condition at either time point (all p > 0.05; all ηp² < 0.07). However, for the accuracy measures, we found that at Time 1 there was a marginally significant effect of Genotype for the switch condition [F(2,51) = 2.85; p < 0.06; ηp² = 0.11]. Post hoc comparisons revealed that the Met homozygotes had significantly higher accuracy rates compared with the heterozygotes (p < 0.02), but were not reliably different from the Val homozygotes (p < 0.63). No other comparisons reached significance (all p > 0.05). There were six participants who could not complete the task-switching paradigm at Time 2 because the task was too challenging; however, these participants had been able to successfully complete the task 10 years prior. Interestingly, five out of the six participants had the Val/Val form of the COMT polymorphism and the other participant was heterozygous. None of the participants who failed to complete the task at Time 2 were homozygous for the Met/Met form.
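The switch-cost contrast and the Time × Genotype model used throughout these analyses can be sketched compactly. The snippet below is a plausible reconstruction rather than the original SPSS 16.02 workflow; the long-format input file, the column names, and the use of pingouin's mixed_anova are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per subject x time x condition,
# with columns: subject, genotype, time ('T1'/'T2'),
# condition ('repeat'/'switch'), rt (mean RT in ms).
df = pd.read_csv("task_switching_long.csv")

# Switch cost = switch RT - repeat RT, per subject and time point.
wide = (df.pivot_table(index=["subject", "genotype", "time"],
                       columns="condition", values="rt")
          .reset_index())
wide["switch_cost"] = wide["switch"] - wide["repeat"]

# Time x Genotype mixed ANOVA on the switch cost; 'np2' is the partial
# eta-squared effect size, matching the paper's reporting convention.
aov = pg.mixed_anova(data=wide, dv="switch_cost", within="time",
                     subject="subject", between="genotype")
print(aov[["Source", "F", "p-unc", "np2"]])
```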
We tested whether this distribution differed significantly from chance using a χ² goodness-of-fit test, with expected frequencies proportional to the respective sample sizes for each genotype under the null hypothesis (Val/Val = 16, Val/Met = 19, Met/Met = 18). We found a non-significant χ² (1.05; p < 0.59) for the given frequencies, indicating that although more Val homozygotes failed to complete the task, the number of participants who fell into this category was not significantly greater than chance. However, studies with larger samples could test this trend more validly.

BDNF

MMSE

In an independent-samples t-test we found no relationship between the BDNF polymorphism and performance on the MMSE [t(1,51) = 1.42; n.s.; see Table 1].

Task-switching

As with the COMT polymorphism described above, we assessed the influence of the BDNF polymorphism on cognitive decline in the task-switching paradigm by employing a repeated-measures ANOVA with Genotype (Val/Val; Val/Met) as a between-subjects factor and Time and Condition as within-subjects factors; main effects are reported in Table 2. However, consistent with the view that the Met allele might provide some protection in old age, or that Val homozygotes might experience greater decline with advancing age, we found a significant Time × Genotype interaction [F(1,44) = 7.54; p < 0.009; ηp² = 0.15] on RTs in the task-switching paradigm, such that the Val homozygotes experienced a significantly greater decline in performance over the 10-year period compared to the Met carriers (see Figure 2). There was also a trend for a Time × Genotype × Condition interaction [F(1,44) = 3.15; p < 0.08; ηp² = 0.07], such that Val homozygotes experienced a greater decline in performance for the switch condition compared with the repeat condition over the 10-year span relative to the Met carriers. This trend was confirmed by examining switch cost (switch RT − repeat RT). For the accuracy measures, neither the Time × Genotype interaction nor the Time × Genotype × Condition interaction reached significance, indicating that, unlike the RTs, the accuracy rates were not influenced by the BDNF polymorphism. However, we found a significant Genotype × Condition interaction on the accuracy rates [F(1,44) = 4.94; p < 0.03; ηp² = 0.10], with the Val homozygotes performing better than the heterozygotes on the repeat condition compared to the switch condition. We conducted a series of independent t-tests at each time point and for each condition separately to test whether the two groups differed at either time point. There were no significant differences between the two BDNF genotype groups at either time point for RTs or accuracy measures (all p > 0.05). As with the COMT polymorphism results described above, we examined whether the participants who could not complete the task-switching paradigm at Time 2 had a particular form of the BDNF polymorphism. The result of the χ² test was not significant (0.04; p < 0.83), indicating that the BDNF polymorphism did not explain the failure to complete the task-switching paradigm at Time 2.

DISCUSSION

In this study we examined whether the BDNF or COMT polymorphisms could explain variation in the trajectory of cognitive decline over a 10-year span in older adults. Our participants were older than the participants in a previous 5-year longitudinal study examining the influence of the COMT polymorphism on cognitive performance in older adults (de Frias et al., 2004). Furthermore, this is the first known longitudinal study of the effects of the BDNF polymorphism on age-related cognitive decline.
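Both dropout analyses above rest on a χ² goodness-of-fit test against expected frequencies proportional to the genotype group sizes. The following is a minimal sketch of that logic in Python; the observed COMT dropout counts come from the text, but scaling the expected counts from the group sizes is our assumption about the procedure, and since the exact test variant the authors used is not fully specified, this sketch need not reproduce their reported χ² value.

```python
import numpy as np
from scipy.stats import chisquare

# Observed dropouts by COMT genotype at Time 2 (from the text).
observed = np.array([5, 1, 0])            # Val/Val, Val/Met, Met/Met

# Null hypothesis: dropouts distributed in proportion to group sizes.
group_sizes = np.array([16, 19, 18])
expected = observed.sum() * group_sizes / group_sizes.sum()

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```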
Consistent with prior studies, we found no evidence that the COMT polymorphism contributes to age-related declines in executive function as assessed by the task-switching paradigm (O'Hara et al., 2006; Starr et al., 2007). In a longitudinal study, an interaction with age was only reported for people between 50 and 60 years of age (de Frias et al., 2004). The participants in our sample were 65 years of age on average in 1997 and ∼75 years of age as of 2007, and therefore may have been outside the age range needed to detect an interaction if such an interaction is specific to the 5th decade of life (de Frias et al., 2004). Our results are more consistent with a 4-year longitudinal study of older adults that did not find an interaction between the COMT polymorphism and age on cognitive function (Starr et al., 2007). In short, our results suggest that, after the 6th decade of life, the COMT polymorphism does not explain cognitive decline over a 10-year period. On the other hand, the BDNF polymorphism reliably explained variation in age-related decline in performance for both the repeat and switch conditions of the task-switching paradigm (Figure 2). In a cross-sectional study, Harris et al. (2006) reported that older adult Met homozygotes had better reasoning skills than Val homozygotes or heterozygotes. Partially consistent with this finding, we demonstrate that the Met carriers have spared cognitive function over a 10-year period, while the Val homozygotes showed a significant decline in performance. These results, however, are generally inconsistent with the majority of the BDNF-gene literature, which typically reports poorer performance and functioning for Met carriers in both young and old adults (Taylor et al., 2007). This discrepancy between our finding and cross-sectional studies might be explained by the age of the sample studied. Based on our longitudinal results, it is apparent that at an average age of 65 the Val homozygotes tend to perform better than Met carriers; however, there is a crossover such that by the average age of 75 Val homozygotes tend to perform worse than Met carriers. Cross-sectional studies that assess older adults around 65 years of age might find cognitive enhancement associated with the Val/Val genotype, whereas studies examining a sample with an average age of 75 might produce the opposite pattern. There are a number of possible reasons for the crossover effect that we observed. First, we have no information regarding the trajectory of cognitive decline earlier in life. Therefore, it is possible that Met carriers could have shown a decline in performance at an earlier age and then plateaued, while the Val homozygotes had spared cognitive function until the 6th to 7th decade of life. Second, a number of factors that influence BDNF signaling could be altered during old age such that greater secretion of BDNF would be detrimental for cognitive and neural activity. For example, the precursor form of BDNF (pro-BDNF) and mature BDNF (mBDNF) have distinct receptors and signaling cascades, resulting in opposing effects on the nervous system (Lu et al., 2005). Pro-BDNF enhances the capacity for eliciting long-term depression, synaptic retraction, and cell death, whereas mBDNF increases the capacity for eliciting long-term potentiation, synaptic formation, and cell survival (see Lu et al., 2005). In neurons, pro-BDNF is usually converted to the mature form in the extracellular space by proteases including tissue plasminogen activator (tPA).
Cerebral levels of tPA decline with age, and this decline is exacerbated in rodent models of Alzheimer disease (Cacquevel et al., 2007). Therefore, greater secretion of BDNF may not enhance cognitive and neuronal function unless the cleavage molecules are present to convert it from its precursor form to its mature form. In fact, greater secretion of pro-BDNF into the synaptic space without an adequate concentration of cleavage molecules (e.g., tPA) to convert it to its mature form might result in cognitive impairment and decline instead of cognitive enhancement (Lu et al., 2005). The Val/Val form of the BDNF polymorphism increases the regulated secretion and trafficking of the pro-BDNF molecule, and we find that this genotype is associated with a more rapid decline in cognitive function than its genetic counterparts that have reduced secretion of pro-BDNF. Our result is clearly in line with the hypothesis that the enhancing role of BDNF on cognition depends on a number of molecular factors, including those that influence the presence or concentration of the cleavage molecules, and that these supporting molecules might also be affected by aging. In short, a complex array of molecules is involved in BDNF signaling, and increased cellular secretion due to a genetic polymorphism may not always be associated with better function. Third, a number of environmental factors influence BDNF translation and concentration in rodents, including environmental enrichment (van Praag et al., 2000), physical exercise (Cotman et al., 2007), caloric restriction (Mattson et al., 2003), and estrogen administration (Scharfman and Maclusky, 2005). Interactions between the BDNF polymorphism and any of these environmental factors could be moderating the age interaction observed in this study. In short, there are a multitude of reasons why older adult Met carriers would demonstrate spared cognitive function while Val homozygotes undergo a greater decline in cognitive function with advanced age. It is also important to note that we did not find that the BDNF polymorphism disproportionately affected one of the task-switching conditions more than the other. That is, the switch cost (switch RT − repeat RT) was only marginally related to the BDNF polymorphism (p = 0.08). This result suggests that the BDNF polymorphism, and its role in influencing the trajectory of cognitive decline in old age, may primarily affect decline in speed of processing rather than a domain-specific decline in executive function. More research employing a wider variety of tasks is warranted to examine this hypothesis. It is also interesting to consider our results within a cognitive reserve framework, in which individuals with more education often demonstrate spared cognitive function despite having disease-related pathology (Fratiglioni and Wang, 2007). We found that the BDNF heterozygous individuals, who performed more poorly 10 years earlier, showed more stability and reserve over the 10-year span. In our sample, IQ scores, which are often used as a measure of cognitive reserve, were unrelated to the BDNF genotype, suggesting that both homozygotes and heterozygotes had equivalent levels of 'reserve' as assessed by this measure. It is possible that the BDNF genotype acts as a moderator between cognitive reserve measures such as IQ or education and cognitive function.
This hypothesis would predict that Val/Val individuals with higher levels of education or IQ would not show the same rate of decline in performance as Val/Val individuals with lower education or IQ scores. A study with a larger sample size would be more capable of investigating this potential moderating relationship. Finally, there are a number of limitations of the current study. First, although we gained statistical power compared to cross-sectional studies by conducting within-subjects comparisons, we lost statistical power by only being able to recruit 53 of the 124 original participants. Therefore, our small sample size could have precluded our ability to find a significant interaction with the COMT polymorphism. However, despite this small sample size, we were able to detect a significant effect of the BDNF polymorphism on task-switching performance, and our effect sizes were similar to those reported by prior studies (de Frias et al., 2004; Harris et al., 2005, 2006). Second, although we report that the individuals who returned for the follow-up session did not differ in age or sex from those who declined to return, it is possible that the 53 people who agreed to participate in this study were healthier and higher functioning, and may not be a representative sample of the BDNF or COMT polymorphisms in this age range. This potential bias could have affected the pattern of results that we describe here. A longer longitudinal study with a larger sample size would be able to reduce this possible confound. In sum, we report that the BDNF polymorphism, and not the COMT polymorphism, influences the rate of cognitive decline over a 10-year span in older adults. Both conditions of the task-switching paradigm were affected by the BDNF polymorphism, while general cognitive function as assessed by the MMSE and IQ tests was not related to the BDNF polymorphism. The Met carriers of the BDNF gene demonstrated spared function over the 10-year span, while the Val homozygotes experienced a significant decline in performance. This result is inconsistent with a growing literature on the impact of the BDNF polymorphism on depression, cognitive function, and neural activity in young adults, but is partially consistent with at least one study in older adults (Harris et al., 2006). More longitudinal studies with larger sample sizes that employ a wider range of cognitive tests and a more comprehensive array of factors that could explain individual differences (e.g., physical fitness measurements) and possibly covary or interact with the BDNF and COMT polymorphisms would greatly enhance the interpretation of the results described in this study.
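The sample-size limitation discussed above can be made concrete with a rough power calculation. The sketch below is an illustrative approximation only: it converts the reported partial eta-squared for the BDNF Time × Genotype interaction (~0.15) to Cohen's f with the standard conversion and then treats the design as a simple two-group F test, which is a deliberate simplification of the actual repeated-measures design.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Standard conversion from partial eta-squared to Cohen's f.
eta_p2 = 0.15
cohens_f = np.sqrt(eta_p2 / (1 - eta_p2))

# Rough power for a one-way F test with 2 groups and N = 53 total
# (an approximation of the repeated-measures design, for intuition only).
power = FTestAnovaPower().power(effect_size=cohens_f, nobs=53,
                                alpha=0.05, k_groups=2)
print(f"Cohen's f = {cohens_f:.2f}, approximate power = {power:.2f}")
```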
Assistance in using the marketplace platform for scavenger groups

A problem in small-scale industries is the lack of marketing knowledge of digital technology. Based on this problem, this community service activity was carried out with the aim of increasing the knowledge of the scavenger group who are members of the Tunas Mulia Foundation at the Bantar Gebang Integrated Waste Management Site, Bekasi. Activities were carried out through training and assistance in using sales applications on marketplace platforms. The purpose of this activity was to help increase the income of scavenger families by expanding their market through digital technology. The method used was presentation of the material at the Foundation's location; the material was delivered by lecturers, and direct practice in building the platforms was assisted by students acting as companions to the participants. A total of 40 scavengers who participated in the mentoring were divided into two classes. The activity was carried out for half a month, on December 14-30, 2021. The results were very satisfying for the participants, as seen from the questionnaire results, and increased the participants' ability to market their products through the marketplace, which proved to significantly increase family income.

INTRODUCTION

The Bantargebang Integrated Waste Management Site (BIWMS) is located in Ciketing Udik Village, Cikiwul Village, and Sumur Batu Village, Bantargebang District, Bekasi City. The BIWMS occupies an area of 110.3 hectares, consisting of an effective area of 81.91%, with the remaining 18.09% used for infrastructure such as entrance roads, office roads, and leachate processing installations. The largest such site in Asia, it has been operating since 1989. This location has become the estuary for the waste of the capital's residents, with thousands of tons of Jakarta's garbage arriving every day.

The problems faced by small and medium industries in general are marketing problems, constrained by limited knowledge of online marketing (Arifudin, 2020). The trust of e-commerce consumers in Indonesia is strongly influenced by the quality of the website and the security arrangements presented through it, while the reputation of the vendor through the website does not significantly influence consumers to be more confident in an online fashion store vendor (Pujastuti et al., 2015). In an earlier online selling assistance effort, participants showed enthusiasm to migrate their sales system from a conventional approach to a digital one, namely by selling online on a digital platform (Huda & Sukadiono, 2021). Several service activities carried out previously have proven that online sales training increases sales. In the long pandemic period at the end of this second year, online sales are also a useful solution to prevent the spread of COVID-19 (Budiyanto et al., 2020). That the use of social media really helps MSMEs to survive during the pandemic is also stated by Syifa et al. (2021), Supriyani & Untari (2021), Candra et al. (2021), Zai (2021), Muyassarah et al. (2021), and Wisataone (2021). Regarding digital marketing training, the result was an increased order rate of 60 percent, as well as the expansion of marketing networks through social media (Qurrata et al., 2021). Likewise, service to craftsmen in the field of digital marketing was carried out by Hidayah et al. (2021); one of the results achieved was an increase in sales due to the online shop.
Similar activities were carried out by Sofiyana (2021). The result of that service was that partners gained an understanding of using the internet as a marketing tool, and acquired the ability to market products using social media, namely Facebook and Instagram. Likewise, the service results of Wardani (2019), Irianti et al. (2021), Sule & Siswanto (2021), and Septiningrum et al. (2020) all showed similarly good outcomes.

Based on the experience, benefits, and results of previous community service training activities, as well as a study of partner problems during an assessment at the first meeting, a Community Development Program (CDP) aiming to provide solutions to partner problems was designed, with the outline of the service activity design shown in Figure 1. Figure 1 shows the outline of the program, which is divided into 3 (three) stages. The first stage starts with listening to the problems faced by scavengers in marketing their products. The main partner problems are a low level of education, a lack of technological knowledge in the field of digital marketing, and difficulty reaching a wider market. After identifying the partners' problems, the second stage is to find solutions and design service activities aimed at solving the sales problems by taking advantage of technology through marketplace platforms. The third stage is implementation, comprising training, practice, and mentoring.

Given the problems faced by partners and their conventional sales conditions, the purpose of this program is training in the use of technology to market products online using a marketplace: from how to take good product photos and how to upload products, to strategies for marketing products at the Tunas Mulia Foundation. Online sales, or e-commerce, can be used to maximize profits because they reach a wider market without restrictions of time and place. Many types of e-commerce are developing in Indonesia, and the marketplace type is especially well developed. Marketplaces in Indonesia include Shopee, Tokopedia, Bukalapak, Lazada, Blibli, Instagram, and so on. This training was carried out using the marketplaces Shopee, Tokopedia, and Instagram. With online sales, the products are expected to be more easily accessible to many people.

METHODS

This Community Development Program was carried out using a Participatory Action Research (PAR) approach. The PAR approach is a form of activity that actively involves all interested parties (Huda & Sukadiono, 2021). The involvement of all these parties is required, using their own experiences as case examples; the goal is to change the method for the better. The activities were carried out over approximately 15 working days, on 14-30 December 2021. The implementation method consisted of training, practice, and mentoring. The location was the Tunas Mulia Foundation (TMF) at the Bantargebang Integrated Waste Management Site (BIWMS), Ciketing Udik Village, Sumur Batu Village, Bantar Gebang District, Bekasi City. The implementation time was every Monday-Saturday, 13.00-17.00 WIB. The location of the BIWMS is shown in Figure 2. Figure 2 shows the location of the Bantar Gebang integrated waste disposal site in the Bekasi area, not far from the city of Jakarta.
This location is the final destination for all the garbage of the capital's citizens. With 7,000-8,000 tons of garbage arriving every day, it is fertile ground for scavengers searching for discarded goods. Every day approximately 1,300 garbage trucks from Jakarta come to dispose of Jakarta residents' garbage at the BIWMS (Unit Pengelola Sampah Terpadu, n.d.).

Training

During the 15 days of activities, time was allocated as follows: the first day was for introductions and assessment. On the second to fourth day, the presenters gave lectures in front of the class. Participants, consisting of scavengers and their families, listened to how to photograph products to make them look attractive, how to write product-related information, and how to choose and register in the marketplace. Participants seemed enthusiastic, with many questions and discussions.

Practice

On the fifth to eighth day, the practical method was carried out so that the trainees could directly apply the knowledge from the training, using presentation and question-and-answer methods. The presenters helped the trainees by providing examples that had already been applied. The practical method was carried out with the help of students, who demonstrated the process of the practice being carried out (Muhsinin et al., 2019). Practice consisted of collecting products, packing products as attractively as possible, choosing products to be photographed, choosing the best photos, trying the Shopee, Tokopedia, and Instagram applications, uploading product photos, inputting data, and marketing online. This activity was led by a lecturer and accompanied by several students who helped carry out the practice.

Mentoring

At this stage the products had been marketed, and participants were accompanied in receiving orders, preparing products for delivery, and receiving payments. In online marketing, buyers' trust mediates the relationship between reputation and buying interest (Kusumawati & Diyani, 2021). Therefore, it is very important to maintain the reputation of the products marketed online. Assistance was carried out from the ninth day until the fourteenth day, so that participants could truly maintain their reputation by quickly responding to orders coming in online. This training and mentoring was carried out for a group of scavengers who live around the Bantar Gebang site, as shown in Figure 3: garbage trucks come from all over Jakarta, while the other picture shows the atmosphere of a scavenger village located not far from the garbage dump. Training and mentoring activities were scheduled around the scavengers' free time, in the afternoon after they finished working, so that they could still work to earn money. The training was carried out near the waste disposal site, but the learning atmosphere was quite conducive, although occasionally there was a slightly unpleasant odor when the wind blew hard, and there were many flies at the location. The training participants were always enthusiastic, which kept the resource persons and the accompanying team excited and able to ignore the smells and flies. Pictures of the activities during training, practice, and mentoring for scavengers are shown in Figure 4. Figure 4 shows the start of the mentoring activities.
Resource persons, assistants, the management of the Tunas Mulia Foundation, and some of the training participants, namely scavengers, took a group photo in front of the training location. The picture on the right shows the atmosphere in the classroom as the resource person delivers the material.

Evaluation design

This community development program, carried out through training, practice, and mentoring, used 10 criteria to measure the success of the program and the satisfaction of participants. Assessment was done through: (1) how easy the training materials were to understand; (2) achievement of the training program targets; (3) efficient use of training time; (4) suitability of the training method used; (5) the presenter's ability in delivering the material; (6) the presenter's ability to master the class; (7) the presenter's ability to liven up the training atmosphere; (8) the mentors' ability to help trainees; (9) the companion's ability to master the participants' questions; and (10) the companion's ability to liven up the training atmosphere. The success of the activities and the satisfaction of the training participants can be seen from the results of the distributed questionnaires. In every activity, the presenters were accompanied by students acting as assistants who helped accompany the scavengers so that the activities ran smoothly.

Result

The service was carried out in 15 meetings, every Monday to Saturday at 13.00-17.00 WIB. Sessions were held for half a day because in the morning the scavengers are busy with their work; there were also participating scavenger children who were still in school, so choosing the afternoon allowed more participants to attend. The result of this training and mentoring was the successful marketing of products from the businesses of scavengers within the Tunas Mulia Foundation through online applications based on marketplace platforms, namely Shopee, Tokopedia, and Instagram. The results of the assistance carried out and implemented by the scavenger group within the Tunas Mulia Foundation are shown in Figure 5. Figure 5 shows screenshots of the Shopee and Tokopedia accounts used as marketplaces by the scavengers in marketing their products; it is also proof that marketing has been running through the marketplace platforms. The stages carried out in the implementation of the community service were as follows.

Stages of initial meeting

In the initial stage, the team visited the BIWMS location to introduce themselves to the foundation's management and the group of scavengers who would attend the training. In this initial meeting, the team obtained an overview of the results of the passion fruit, cassava, and banana plantations, the quail farming products, and derived products such as passion fruit drink, banana chips, cassava chips, fried quail, and handicrafts from recycled materials made by scavenger families. In addition to learning about the scavengers' business results, the service team collected and recorded all the problems related to production and marketing raised by the scavenger group. This assessment made it possible to identify partner problems that the team could assist in solving. Figure 6 shows the atmosphere during the assessment at the Tunas Mulia Foundation location, Bantar Gebang.
Figure 6 shows the atmosphere during the initial meeting, namely an assessment by Bina Insani University, represented by 2 (two) resource persons, with the management of the Tunas Mulia Foundation. Some of the scavengers were present to listen to the explanation of the planned activities in the meeting room owned by the Foundation, located not far from the BIWMS, Bekasi.

Selection of products to be marketed online

The Tunas Mulia Foundation has a variety of self-produced products that are marketed through the Foundation's cooperative. The selling concept was conventional: buyers come directly to the cooperative in the neighborhood where the scavengers live at the BIWMS. The available products include passion fruit drink, banana chips, cassava chips, fried quail, and handicrafts made from recycled materials. In the first stage, several products were selected to be marketed online. Product selection was guided by guaranteed availability of raw materials, practicality of processing, ease of packaging, and whether the product is needed by the community so that it can be expected to have sale value. If online marketing is considered successful, the next step will be to market other products and add more marketplace platforms.

Packaging design and photo taking

This stage aims to increase the participants' ability, especially in preparing products to make them look attractive to be photographed. The shape and color of the packaging need to be prepared, as do the background, lighting, and camera angle.

Marketplace platform selection and learning how to upload

The participants discussed and reached an agreement on the marketplace platforms to be used to market their products. At this stage they simultaneously learned everything related to online marketing, from downloading the applications and uploading, to how to handle an order. Good cooperation is needed from everyone on duty, namely the persons in charge who monitor orders through the 3 (three) marketplaces, convey correct information to those who prepare orders, pack, and ensure correct delivery to the buyer.

Selection of administrators responsible for handling online sales

In this stage, several people were appointed to be in charge of various functions. These appointments were provisional, to be evaluated later. Some were appointed as operators in charge of each chosen platform, and others as treasurer, marketing department, order recipient, packaging division, and general administration and finance. The administration section is in charge of recording all activities, ranging from purchasing raw materials and packaging materials to recording all orders and other administrative work. The finance department, apart from serving as cashier, is in charge of recording income and all expenses and making simple financial reports, to be accounted for every week to the management and all members of the Tunas Mulia Foundation.

Review

At the review stage, all participants gathered and conveyed problems or obstacles that arose in using the marketplace platforms. With the team's guidance, solutions were sought to overcome them.
The review was divided into 3 (three) parts: (1) a review of the success of online marketing; (2) a review of the appointment of persons in charge of online marketing; and (3) a review to determine the satisfaction of the training participants. The review of the success of online marketing shows that sales increased compared to the previous, conventional sales. However, the very short observation period cannot be used to determine the percentage increase in sales, because sales still fluctuate daily. The review of the appointment of persons in charge of marketing was carried out at the closing of the service activities. Meanwhile, the review of participant satisfaction, collected earlier through a questionnaire, was discussed at the end. The benefit of this questionnaire is as input for the team to improve further community development programs.

Closing ceremony

The closing ceremony was held at the end of the activity at the Tunas Mulia Foundation in the Bantar Gebang Integrated Waste Management Site, attended by the foundation's management, all training participants, and the entire community development team of lecturers and students. At this closing event, an official board responsible for handling online sales was formed and ratified. It is hoped that the training and mentoring carried out by the team can truly develop, so that the scavengers can extend their skills to other marketplace platforms.

Training materials

The materials presented in this community development program at the Tunas Mulia Foundation are a continuation of previous service activities, which focused on making logos or images and writing on packaging. As explained above, the service activities are divided into 3 (three) parts, namely training, practice, and mentoring, carried out over 15 working days. The implementation details are in Table 1.
Table 1. Implementation details of the training, practice, and mentoring

Meeting 1: Introduction and exploration
- Activities: scavengers raise their marketing problems; list the products currently marketed conventionally; determine the implementation dates and duration of the service.
- Objectives: introduce the activity team and the prospective trainees to each other; let the service team know the partners' problems and the various products to be marketed online; prepare each side's needs so that both are ready on the implementation date.

Meetings 2-4: Training
- Activities: lectures by the presenters on online marketing; how to take product photos that look attractive; how to write product-related information; how to choose and register on a marketplace.
- Objectives: participants know how to market online, how to photograph products attractively, how to write product-related information, and how to choose and register in the marketplace.

Meetings 5-8: Practice
- Activities: the presenters provide examples that have already been applied and demonstrate the process being carried out; participants collect product data, pack products as attractively as possible, choose the products to be photographed, choose the best photos, try the Shopee, Tokopedia, and Instagram applications, upload product photos, and input data and market online.
- Objectives: participants get an overview of applied examples and of the practice process; have neatly written product data; and are able to package products attractively, choose the products to photograph, choose the best photos, use the Shopee, Tokopedia, and Instagram applications, upload product photos, and input data and market online.

Meetings 9-14: Mentoring
- Activities: monitor the products that have been marketed; accompany participants in receiving orders; accompany them when preparing products for shipping and receiving payments.
- Objectives: know the market's reaction to the marketed products; respond quickly when orders come in; maintain reputation by quickly preparing products for shipping.

Meeting 15: Closing
- Activities: participants gather and convey problems or obstacles; review of the personnel appointed as persons in charge; review of the satisfaction questionnaires for the service activities.
- Objectives: find solutions to problems and overcome obstacles; ratify the responsible personnel; gather input for the team to improve further service activities.

This community development program was carried out for 15 consecutive days. After the first day of introductions, the program was divided into 3 (three) activities, namely training, practice, and mentoring, as shown in Table 1. Figure 7 shows photos taken during the training and mentoring activities.
Figure 7 shows the atmosphere during in-class mentoring. The participants practiced directly on their own cellphones, accompanied by the students tasked with assisting the resource persons, so that the training and mentoring process ran smoothly. The training and practice activities were full of enthusiasm: participants were always present on time, even though they had to give up time in the middle of their working hours, driven by the desire to advance so that they could market their products online. As with MSMEs generally, the businesses managed by these scavengers had been heavily affected by the COVID-19 pandemic. Taking pictures of the products and choosing the text to accompany them turned out to be entirely new to the participants; nevertheless, they were entertained by the funny atmosphere, full of laughter at the photos. Figure 8 shows the results of the photo design and the information written on products made by the scavenger groups: photos of the participants showing their products, later modified for use in advertisements. Displaying their own photos made the scavengers feel very proud and satisfied. The companions gave direction so that the photos had the right brightness and highlighted the product well. In the last stage, mentoring, several products were successfully marketed through the marketplace platforms Tokopedia, Shopee, and Instagram. At this stage, the scavenger group was still accompanied by the community service team so that the division of tasks could be organized when orders came in. The responsibilities prepared previously, namely the operators in charge of each platform, the treasurer, the order-receiving section, the packaging section, and the general administration section, each worked according to their function while still helping each other when something was not handled properly. These conditions were observed, studied, and recorded in case other parts needed assistance or a larger time allocation than other functions, so that improvements could be made and a more suitable format found. Figure 9 shows the appearance of Instagram and Tokopedia: the participants successfully uploaded their product advertisements, with a list of products for sale.

Discussion

This community development program provides benefits for all parties: the Tunas Mulia Foundation, the scavengers, and the community development team of lecturers and students from Bina Insani University. Apart from the direct results of the service, it also has economic and social impacts for the scavengers, increasing their insight and ability to sell online. There are also contributions to other sectors directly related to the change in the way products are marketed, which has an impact on increasing income. The following are the benefits obtained as outcomes of the implementation of the service activities.
Benefits of the Community Development Program results

The benefits of the community development program carried out at the Tunas Mulia Foundation include: (1) providing scavengers with solutions to the problems they face related to marketing difficulties; (2) introducing online sales activities through marketplace applications, namely Tokopedia, Shopee, and Instagram; (3) improving skills in using digital marketing and e-commerce applications; (4) making the business products known to the wider community; and (5) helping increase sales.

Economic and social impact

The assistance in using sales applications through the marketplace platform for the business activities of scavenger groups within the Tunas Mulia Foundation has economic and social impacts. (1) The economic impact is that existing businesses can be scaled up through marketplace-based e-commerce applications, increasing the income of scavenger groups within the Tunas Mulia Foundation. (2) The social impact is increased social status: the scavenger group has long been underestimated by the community, but with new skills and an entrepreneurial spirit, the scavengers have become more confident.

Contribution to other sectors

The contribution of this service activity to other sectors is reducing poverty by increasing the income of scavenger groups within the Tunas Mulia Foundation in Bantar Gebang. It also helps grow the creative industry sector through the businesses produced by the scavenger groups, such as recycling skills and turning coffee waste into business opportunities. Improving the scavengers' skills opens opportunities to earn income beyond their work as scavengers.

Obstacles

The obstacles faced in the implementation of this community development program, funded through the assistance program of the independent campus learning policy, are as follows: (1) the mentoring time was very limited and had to fit the schedules of both the implementing team and the partners; (2) the partners' limited knowledge of technology meant it took longer for them to become skilled. At the beginning, not all scavengers had cellphones that met the qualifications for selling online. Some scavengers also had limited knowledge of technology, so the resource persons and assistants needed patience in assisting the scavengers directly during the training.

Follow up

In line with the obstacles described above, the follow-up to the mentoring activities for the scavenger group is as follows: (1) scheduling assistance for the next stage; (2) carrying out assistance on an ongoing basis until there is a significant change in selling skills, both manual and online through the sales applications. Outside training hours, resource persons and assistants are ready to be consulted at any time if the scavengers encounter obstacles in implementing online sales. The assistance provided stays close at hand, so that the scavengers become independent.

Discussion of participants' success and satisfaction level

The community development program was carried out through training, practice, and mentoring, with 10 criteria used to measure the success of the program and the satisfaction of participants, measured on a 1-5 Likert scale with the following anchors: 1 = not satisfied, 2 = less satisfied, 3 = quite satisfied, 4 = satisfied, 5 = very satisfied.
Questionnaires were distributed to the training participants on the last day of mentoring. After processing the data, the results obtained for the 10 assessment criteria are shown in Figure 10. The breakdown of the participant satisfaction levels in Figure 10 is as follows: (1) presentation of training materials is easy to understand: very satisfied 52%, satisfied 42%, quite satisfied 6%; (2) achievement of training program targets: very satisfied 55%, satisfied 45%; (3) efficient use of training time: very satisfied 50%, satisfied 44%, quite satisfied 6%; (4) suitability of the training method used: very satisfied 56%, satisfied 40%, quite satisfied 4%; (5) the presenter's ability in delivering the material: very satisfied 51%, satisfied 46%, quite satisfied 3%; (6) the presenter's ability to master the class: very satisfied 52%, satisfied 43%, quite satisfied 5%; (7) the presenter's ability to liven up the training atmosphere: very satisfied 58%, satisfied 40%, quite satisfied 2%; (8) the mentors' ability to help trainees: very satisfied 51%, satisfied 40%, quite satisfied 9%; (9) the companion's ability to master the participants' questions: very satisfied 50%, satisfied 40%, quite satisfied 10%; (10) the companion's ability to liven up the training atmosphere: very satisfied 48%, satisfied 40%, quite satisfied 12%. Judging from the questionnaire results, no participant chose "less satisfied" or "not satisfied"; all responses fell under "very satisfied" or "satisfied". This means the community development program can be said to be successful, achieving its goals and satisfying all parties. An 11th, additional question asked whether this community development program should be sustained and carried out regularly: 87% wanted regular continuity, and the remaining 13% wanted further service activities but not on a routine basis, as shown in Figure 11. Figure 11, on program sustainability recommendations, shows that participants still want regular training on other needed topics, as evidenced by the blue segment representing 87% of participants. The intended routine is scheduled training, for example 1-2 times a week rather than every day, with something every month. The remaining 13%, in red, represent participants who want the program to continue but without a regular schedule. This community development program was very well received and is very much needed, so all participants want the activity to continue. However, the participants, all of whom work as scavengers, face time constraints: when participating in the training, they feel they lose half a day of work and hence income, because the positive impact of online sales had not yet taken full effect.
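Tabulating Likert questionnaire results into the percentage breakdowns reported above is a routine data-processing step. The sketch below is a minimal, hypothetical Python example of that step; the raw responses, column names, and probabilities are illustrative assumptions, since the real data would come from the distributed questionnaires.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical raw responses: 40 participants x 10 criteria, Likert 1-5.
responses = pd.DataFrame(
    rng.choice([3, 4, 5], size=(40, 10), p=[0.06, 0.42, 0.52]),
    columns=[f"criterion_{i}" for i in range(1, 11)],
)

labels = {1: "not satisfied", 2: "less satisfied", 3: "quite satisfied",
          4: "satisfied", 5: "very satisfied"}

# Percentage breakdown per criterion, as reported in the text.
pct = (responses.apply(lambda col: col.value_counts(normalize=True))
                .fillna(0) * 100).round(0)
pct.index = pct.index.map(labels)
print(pct)
```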
CONCLUSION AND RECOMMENDATIONS

This community development program for the scavenger group within the Tunas Mulia Foundation at the Bantar Gebang Integrated Waste Management Site was carried out in the form of training, practice, and assistance in using sales applications through marketplace platforms. The service activity was carried out to help overcome the problems and obstacles faced by scavengers related to their lack of knowledge about online sales. Another goal was to help increase the income of scavenger families by expanding the market through digital technology. The assistance proved very satisfying for both parties. Participants who initially did not know how to market online are now proficient in using the Shopee, Tokopedia, and Instagram sales platforms. Their goods are increasingly in demand because they reach a wider market, which increases sales and automatically increases family income. From the questionnaire results on participant satisfaction, this service activity was concluded to be very satisfying to the participants. To continue to compete in marketing, the trainees are expected to keep improving their online marketing skills, add other marketplace platforms, and seek other creative ideas. The skills that have been taught are expected to be passed on to other scavengers who have not yet had the opportunity to participate in the training. Over time, the products produced by the scavengers are also expected to become more varied. In the long term, it is hoped that the scavengers under the tutelage of the Tunas Mulia Foundation will continue to be partners in community development programs of Bina Insani University lecturers, with other training agendas, so that significant changes in the sales of the managed businesses can be measured.
NEUROTRANSMITTER CONCENTRATIONS IN THE PRESENCE OF NEURAL SWITCHING IN ONE DIMENSION

In volume transmission, neurons in one brain nucleus send their axons to a second nucleus, where neurotransmitter is released into the extracellular space. One would like methods to calculate the average amount of neurotransmitter at different parts of the extracellular space, depending on neural properties and the geometry of the projections and the extracellular space. This question is interesting mathematically because the neuron terminals are both the sources (when they are firing) and the sinks (when they are quiescent) of neurotransmitter. We show how to formulate the questions as boundary value problems for the heat equation with stochastically switching boundary conditions. In one space dimension, we derive explicit formulas for the average concentration in terms of the parameters of the problems in two simple prototype examples and then explain how the same methods can be used to solve the general problem. Applications of the mathematical results to the neuroscience context are discussed.

Introduction. A fundamental mechanism by which neurons convey information is one-to-one neural transmission, in which a neuron fires an action potential that travels down its axon to a synapse adjacent to the cell body or a dendrite of a second neuron. The arrival of the action potential at the synapse causes biochemical changes that result in neurotransmitter being released into the synaptic cleft, where it diffuses to the post-synaptic membrane (i.e., the second neuron), binds to receptors, and tends to make the second neuron fire or not fire depending on whether the neurotransmitter is excitatory or inhibitory. In this type of neural transmission, commonly called electrophysiological, the "purpose" is to convey the electrical signal from one neuron to the next. The role of biochemistry (in the synapse and in the synaptic cleft) is simply to facilitate the electrophysiology.

However, neurons convey information by another mechanism as well. Certain collections of neurons that share the same neurotransmitter can project to a distant volume (a nucleus or part of a nucleus) in the brain, and when they fire they increase the concentration of the neurotransmitter in the extracellular space in that distant volume. The increased concentration modulates the electrophysiological neural transmission in the distant region by binding to receptors on the cells in the target region. This kind of neural activity is called volume transmission [11,19]. It is also called neuromodulation, because the effect of the neurotransmitter is the modulation of one-to-one transmission by other neurons or synapses in the projection region. There are many important examples of volume transmission, such as the dopaminergic projection from the substantia nigra to the striatum [6], the serotonergic projection from the dorsal raphe nucleus to the striatum [2,3], and the projections of norepinephrine neurons from the locus coeruleus to the cortex [11]. The serotonin and dopamine projections are crucial to motor control and Parkinson's disease, and the norepinephrine projection to the initiation and maintenance of wakefulness.
The purpose of this paper is to use recently developed mathematical machinery on the stochastic switching of boundary conditions in PDEs [14,16] to understand certain aspects of volume transmission. Suppose that a large number of neurons with the same neurotransmitter project randomly to a distant volume where they release neurotransmitter into the extracellular space. Each neural terminal in the projection region is a source of neurotransmitter when the neuron fires and is a sink for neurotransmitter otherwise, because transporters carry the neurotransmitter back into the terminals. Given the statistics of the stochastic firing of each neuron, how can we calculate the average neurotransmitter level over the whole extracellular space? How can we calculate the spatial dependence of the expected neurotransmitter level? How do the answers to these questions depend on firing rates, amounts released, average distances between terminals, diffusion constants, and other important parameters? In this paper, we answer these questions in one space dimension. Of course, one space dimension is unphysiological, but the techniques and the answers (some of them surprising) give insight into what can be expected in higher dimensions.

In Section 2 we consider the following two simple prototype problems. Let u(x, t) be the concentration of neurotransmitter in the interval [0, L]. For the first problem, we consider the stochastic process that solves the diffusion equation (1) and switches randomly between the boundary conditions (2). Thus, at x = 0 there is a hard wall through which neurotransmitter cannot diffuse. At x = L the boundary condition switches between firing (f), where neurotransmitter is released into the interval at a constant rate, c, and quiescence (q), where it is reabsorbed by the terminal. We are interested in computing the asymptotic behavior of Eu(x, t) as t → ∞. We will see that Eu(x, t) is asymptotically constant in x under general assumptions on the distributions of the switching times, and we compute an explicit, simple formula for this constant in terms of D, L, c, and the switching constants if the switching is at exponentially distributed times. Finally, we numerically compute the spatial dependence of the standard deviation.

For the second case, we derive explicit formulas for Eu(x, t) as t → ∞ for the problem where u(x, t) satisfies (1) and at exponentially distributed times switches between the boundary conditions (3). This models the situation where there is a neuron at x = L that switches stochastically between firing (f) and quiescence (q), while there is a glial cell at x = 0 that absorbs neurotransmitter. Again, in addition to finding a formula for the mean, we compute the standard deviation numerically.

In Section 3 we consider the much more general stochastic process where u(x, t) satisfies (1) but now there are neurons at both x = 0 and x = L. Both neurons fire stochastically and independently, with exponential rates that are different and release rates, c_0 and c_L, that are different. Methods used in Section 2 and from [14] are used to show that lim_{t→∞} Eu(x, t) can be computed in terms of all the given parameters of the problem by solving a set of eight linear equations.

Section 4 is devoted to extracting information about volume transmission from the explicit formulas for lim_{t→∞} Eu(x, t) derived in Sections 2 and 3. In the Discussion we explain why the questions in two and three dimensions are much more difficult, and we indicate some of our preliminary results.
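The displayed equations (1)-(3) referenced in this outline appear to have been lost in extraction. A plausible reconstruction from the surrounding description (a no-flux wall, release at constant rate c while firing, reabsorption while quiescent, and an absorbing glial boundary in the second problem) is sketched below; whether the flux condition at x = L carries the diffusion coefficient D is our assumption, chosen here to match the particular solution h given in Section 2.1.

```latex
% (1) Diffusion of the neurotransmitter concentration u(x,t) on [0,L]:
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2},
\qquad 0 < x < L,\; t > 0.

% (2) Prototype 1: hard wall at x = 0, switching terminal at x = L:
\partial_x u(0,t) = 0, \qquad
\begin{cases}
\partial_x u(L,t) = c, & \text{(f) firing: release at rate } c,\\
u(L,t) = 0,            & \text{(q) quiescent: reabsorption.}
\end{cases}

% (3) Prototype 2: absorbing glial cell at x = 0, same switching at x = L:
u(0,t) = 0, \qquad
\begin{cases}
\partial_x u(L,t) = c, & \text{(f)},\\
u(L,t) = 0,            & \text{(q)}.
\end{cases}
```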
2. Simple prototype problems. We use two approaches to the technical calculations. The "iterated random function" method, Section 2.1, is very general and allows us to calculate the limits with very weak restrictions on the distribution of switching times. The "moment" method, Section 2.2, assumes that switching times are exponentially distributed. This allows us to use Markov methods and to compute standard deviations.

2.1. Iterated random function approach. We wish to consider the L²[0, L]-valued stochastic process that solves (1) and (2). To define the process {u(x, t)}_{t≥0} and cast it in the setting of [16], we define two self-adjoint operators, A_q and A_f: the Laplacian D d²/dx² equipped with, respectively, the (q) boundary conditions and the homogeneous version of the (f) boundary conditions in (2) (equation (4)). The operators A_q and A_f generate C₀-semigroups on L²[0, L], which we denote respectively by e^{A_q t} and e^{A_f t}. The solution operator Φ^t_q : L²[0, L] → L²[0, L] for the heat equation (1) with the (q) boundary conditions in (2) is given by Φ^t_q(g) := e^{A_q t} g. The eigenvalues and orthonormal eigenfunctions for A_q are λ_k = −D((2k − 1)π/(2L))² and a_k(x) = √(2/L) cos((2k − 1)πx/(2L)), k ≥ 1 (equation (5)). If we let h(x) := (2L/π) c (1 − cos(πx/(2L))), then the solution operator Φ^t_f for the heat equation with the (f) boundary conditions in (2) can be expressed in terms of h and the semigroup e^{A_f t} (equation (6)). The eigenvalues and eigenfunctions for A_f are β_k = −D(kπ/L)² and b_k(x) = √(2/L) cos(kπx/L), k ≥ 1, with β₀ = 0 and b₀(x) = 1/√L (equation (7)).

In order to define the random switching times, we define the set Ω of all possible switching environments and equip it with a probability measure. Let µ_f and µ_q be two continuous probability distributions on the positive real line with finite first and second moments. We further assume that if τ_q is drawn from µ_q, then the integrability condition (8) holds; we note that this is satisfied if the probability density of µ_q is bounded in a neighborhood of 0. Define each switching environment, ω ∈ Ω, as the bi-infinite sequence ω = {ω_k}_{k∈Z}, where each ω_k is a pair of non-negative real numbers, (τ_k^f, τ_k^q), drawn from µ_f × µ_q. That is, (τ_k^f, τ_k^q) is an R²-valued random variable drawn from the product measure µ_f × µ_q. We take P to be the infinite product measure generated by µ_f × µ_q and let E denote the corresponding expectation. Summarizing notation, we have that

ω = (…, ω_{−1}, ω_0, ω_1, …) = (…, (τ_{−1}^f, τ_{−1}^q), (τ_0^f, τ_0^q), (τ_1^f, τ_1^q), …) ∈ Ω.

To define the stochastic process {u(x, t)}_{t≥0} we need some notation from renewal theory. For each ω ∈ Ω and natural number n, define the elapsed time after n pairs of switches, S_n := Σ_{k=1}^{n} (τ_k^f + τ_k^q), with S_0 := 0 (equation (10)). Define the number of pairs of switches before time t, N(t) := max{n ≥ 0 : S_n ≤ t}. We also define the state process J(t) ∈ {q, f}, which indicates the current boundary condition, and, for t ≥ 0, the elapsed time since the last switch (often called the age), A(t). For each ω ∈ Ω, g ∈ L²[0, L], and integers k, n, we define the composite maps ϕ_{k,n} obtained by applying the solution operators over the successive switching intervals. For u_0 ∈ L²[0, L], ω ∈ Ω, and t ≥ 0, define our continuous-time L²[0, L]-valued process {u(x, t)}_{t≥0} by applying the solution operator for the current boundary condition over the current age to the state reached after the completed switching pairs: u(·, t) := Φ^{A(t)}_{J(t)}(ϕ_{1,N(t)}(u_0)) (equation (12)). Since the switching time distributions are continuous, they are non-arithmetic (also known as non-lattice). Thus, by Theorem 2.4 of [16], we have that u(x, t) converges in distribution as t → ∞ to an L²[0, L]-valued random variable, ū(x).

In order to describe this limit ū, we define the random variables Y_f and Y_q as the almost-sure limits of the corresponding pullback iterations, where g ∈ L²[0, L] and τ_q is an independent draw from µ_q. By Proposition 2.1 in [16], the random variables Y_f and Y_q exist almost surely and are independent of g.
(We remark that random variables such as Y_f and Y_q are often called random pullback attractors because they take an initial condition and pull it back to the infinite past [7,17,20].) By Theorem 2.4 in [16], we have that ū is given by

ū =ᵈ ξ Φ_q^{a_q}(Y_f) + (1 − ξ) Φ_f^{a_f}(Y_q),   (13)

where ξ is an independent Bernoulli random variable with parameter ρ = Eτ_q/(Eτ_f + Eτ_q), and a_q and a_f are two independent random variables taking values on the positive real line with cumulative distribution functions given by the equilibrium (age) distributions of µ_q and µ_f. Equation (13) has a simple interpretation. It means that in order to find the distribution of the solution one must do the following. First, flip a coin with parameter ρ to decide the current boundary condition (either (q) or (f)). If it is (q), then apply the map Φ_q to the pullback Y_f for time a_q, where a_q is the amount of time since the last switch given that Φ_q is currently being applied. If it is (f), then apply Φ_f to Y_q for time a_f.

In order to extend some further results in [16], we must overcome two barriers. First, our process {u(x,t)}_{t≥0} defined in equation (12) does not have a deterministic bound like the processes considered in [16]. Second, we need to make statements about the spatial derivative of our process. The following few lemmas collect the necessary estimates to overcome these difficulties. Here and throughout, ‖·‖ denotes the L²[0,L] norm, and 1_{‖u(t)‖≥K} denotes the indicator function on the event ‖u(t)‖ ≥ K.

Lemma 2.1. The family {‖u(t)‖}_{t≥0} is uniformly integrable.

Proof. It is straightforward to see from the definitions in equations (4) and (6) that there exist constants α > 0, K_1, and K_2 such that

‖Φ_q^t(g)‖ ≤ e^{−αt}‖g‖  and  ‖Φ_f^t(g)‖ ≤ ‖g‖ + K_1 + K_2 t.   (14)

Thus, if we define a process z(t) ∈ R by equation (12) with Φ_q^t and Φ_f^t replaced by the maps z ↦ e^{−αt}z and z ↦ z + K_1 + K_2 t, with initial condition z_0 := ‖u_0‖, then ‖u(t)‖ ≤ z(t) for all t almost surely. Hence, if {z(t)}_{t≥0} is uniformly integrable, then {‖u(t)‖}_{t≥0} is uniformly integrable. We will prove the following condition that implies uniform integrability:

sup_{t≥0} E[z²(t)] < ∞.   (15)

Recalling the definition of S_n from equation (10), observe that

z(S_n) = e^{−α Σ_{k=1}^{n} τ_q^k} z_0 + K_1 Σ_{j=1}^{n} e^{−α Σ_{k=j}^{n} τ_q^k} + K_2 Σ_{j=1}^{n} τ_f^j e^{−α Σ_{k=j}^{n} τ_q^k}.   (16)

It is easy to see that the first term in (16) has second moment bounded independently of n, since E[e^{−2ατ_q}] < 1 (17). Further, since the random times are independent, the expectation of the square of the second term in (16) is bounded by a convergent geometric series in E[e^{−2ατ_q}], where τ_f and τ_q are drawn from µ_f and µ_q, respectively (18). To bound the expectation of the square of the third term in (16), one uses the independence of the switching times again; the resulting constant C is finite since τ_f has finite first and second moments by assumption (19). Putting the bounds in equations (17), (18), and (19) together with (16), we see that E[z²(S_n)] is bounded above independently of n. A similar calculation shows that E[z(S_n)] is also bounded above independently of n. Further, it is immediate that at any time t ≥ 0 we have z(t) ≤ z(S_{N(t)}) + K_1 + K_2 τ_f^{N(t)+1}. Equation (15) follows.

A similar argument gives the following lemma.

Lemma 2.2. The random variables ‖Y_q‖, ‖Y_f‖, and ‖ū‖ have finite expectation.

In order to make certain statements about spatial derivatives, we need the following lemma.

Lemma 2.3. The random variables ‖Y_q‖_∞ and ‖Y_f‖_∞ have finite expectation.

Proof. By Proposition 2.2 in [16], if τ_f and τ_q are independent draws from µ_f and µ_q, then we have the following equalities in distribution:

Y_q =ᵈ Φ_q^{τ_q}(Y_f)  and  Y_f =ᵈ Φ_f^{τ_f}(Y_q).   (20)

Hence, recalling the eigenvalues and eigenvectors in (5), we have that

E‖Y_q‖_∞ ≤ ‖a_1‖_∞ E[ (Σ_{k≥1} e^{2α_k τ_q})^{1/2} ‖Y_f‖ ] < ∞   (21)

by independence, the Cauchy-Schwarz inequality, the assumption in equation (8), Lemma 2.2, and the fact that ‖a_k‖ = 1 and ‖a_k‖_∞ = ‖a_1‖_∞ for all k. Further, by equation (20) we have that E‖Y_f‖_∞ is also finite, by equation (21) and the assumption that τ_f has finite expectation.
In the following theorem, we prove that the mean of u(x,t) is constant in x at large time. To see intuitively why this is true, first take the expectation of (1) and interchange expectation with differentiation to show that the mean satisfies the heat equation. Next, since u(x,t) converges in distribution at large time, the time derivative of the mean vanishes at large time; thus the mean satisfies the steady-state heat equation and is therefore linear. Finally, since the boundary condition is always no-flux at x = 0, the mean satisfies a no-flux condition at x = 0. Combining these last two points forces the mean to be constant. The proof of the theorem makes this argument rigorous. In the following theorem and in the remainder of this subsection, we use E to denote the Bochner integral of L²[0,L]-valued random variables and not the pointwise expectation of random functions.

Theorem 2.4. Assume that the switching time distributions, µ_q and µ_f, are continuous, have finite first and second moments, and satisfy equation (8). Then the process u(x,t) converges in distribution as t → ∞ to an L²[0,L]-valued random variable, ū(x), whose expectation is equal to a constant for almost every x ∈ [0,L].

Proof. Let g ∈ L²[0,L] and let ε > 0. By Lemma 2.1, there exists a K > 0 so that

sup_{t≥0} E[‖u(t)‖ 1_{‖u(t)‖≥K}] < ε  and  E[‖ū‖ 1_{‖ū‖≥K}] < ε.   (22)

It follows from Theorem 2.4 in [16] and the definition of convergence in distribution that there exists a T > 0 so that

|E[⟨u(t), g⟩ 1_{‖u(t)‖<K}] − E[⟨ū, g⟩ 1_{‖ū‖<K}]| < ε  for all t ≥ T,   (23)

where ⟨·,·⟩ denotes the L²[0,L] inner product. Combining (22) and (23) with the Cauchy-Schwarz inequality, we have that |E⟨u(t), g⟩ − E⟨ū, g⟩| is small for all t ≥ T. Thus we conclude that

lim_{t→∞} E⟨u(t), g⟩ = E⟨ū, g⟩.   (24)

Since taking the inner product against g is a bounded linear operator on L²[0,L], we can exchange expectation with the inner product in equation (24) and obtain that Eu(t) → Eū weakly in L²[0,L]. Since Eu(t) satisfies the heat equation, it follows that Eū is a weak solution to Laplace's equation on the interval [0,L]. But by the regularity of ∆ on [0,L], we have that Eū is actually a classical solution and thus it is the affine function (Eū)(x) = sx + M, for constants s, M ∈ R. It remains to show that s = 0. Let {φ_n}_{n=1}^{∞} be such that φ_n ∈ C_0^∞(0,L), φ_n ≥ 0, and ‖φ_n‖_1 = 1 for each n and

lim_{n→∞} ⟨φ_n, g⟩ = g(0)   (26)

for each g ∈ C[0,L]. Since ū is almost surely smooth and (d/dx)ū(0) = 0 almost surely, integration by parts gives that

lim_{n→∞} ⟨(d/dx)φ_n, ū⟩ = 0  almost surely.   (27)

Since taking the inner product with (d/dx)φ_n is a bounded linear functional in L²[0,L] and since Eū = sx + M, integration by parts gives

lim_{n→∞} ⟨(d/dx)φ_n, Eū⟩ = −s.   (28)

Thus, to show that s = 0, we only need to show that we can exchange the expectation with the limit in equation (28) and then apply equation (27). To do this, we need to find an integrable random variable W such that

|⟨(d/dx)φ_n, ū⟩| ≤ W  for all n.   (29)

Recalling the eigenvalues and eigenvectors in equations (4) and (5), observe that

‖(d/dx) Φ_q^{a_q}(Y_f)‖_∞ ≤ ‖Y_f‖ Σ_{k≥1} ((2k−1)π/(2L)) √(2/L) e^{α_k a_q} =: W_1,   (30)

since ‖a_k‖_∞ = ‖a_1‖_∞ for all k. We now check that W_1 has finite expectation. Using (20), (6), (5), and Lemma 2.3, it is straightforward to obtain that the series in (30) has finite expectation. Since a_q and Y_f in (30) are independent, W_1 has finite expectation. A similar argument shows that there exists a random variable W_2 that almost surely bounds ‖(d/dx) Φ_f^{a_f}(Y_q)‖_∞ and has finite expectation. Recalling the definition of ū from equation (13), we have that W := W_1 + W_2 satisfies equation (29) and EW < ∞. Thus, by the dominated convergence theorem and equations (27) and (28), we have that s = 0, and the proof is complete.

Now that we have shown that the expectation of the process at large time is constant in space for very general switching time distributions, we compute this constant in the case of exponential switching times in the following theorem.

Theorem 2.5. If the switching time distributions, µ_f and µ_q, are exponential with respective rate parameters r_f and r_q, then the constant value of Eū is given by

M = (cμ/η) coth(ηL),   (31)

where μ := r_q/r_f and η := √((r_f + r_q)/D).

Proof. By Corollary 2.5 in [16], ū is given by the representation

ū =ᵈ ξ Φ_q^{a_q}(Y_f) + (1 − ξ) Φ_f^{a_f}(Y_q),   (32)

where ξ is an independent Bernoulli random variable with parameter ρ = r_f/(r_f + r_q) and, by memorylessness, a_q and a_f are independent draws from µ_q and µ_f. Hence, by Theorem 2.4 above, there exists an M ∈ R such that (Eū)(x) = M for almost every x. We will use equation (32) to find M.
Let {φ_n}_{n=1}^{∞} be such that φ_n ∈ C_0^∞(0,L), φ_n ≥ 0, ‖φ_n‖_1 = 1 for each n, and lim_{n→∞} ⟨φ_n, g⟩ = g(L) for each g ∈ C[0,L]. Then, by equation (32), ⟨φ_n, Eū⟩ can be written for each n in terms of ⟨φ_n, EY_q⟩ and the coefficients ⟨b_k, EY_f⟩ (34), where {b_k}_{k≥0} are the eigenvectors of A_f given in equation (7). We want to take the limit as n → ∞ in equation (34). Since Y_q is almost surely smooth and Y_q(L) = 0 almost surely, using Lemma 2.3 along with Hölder's inequality and the dominated convergence theorem gives

lim_{n→∞} ⟨φ_n, EY_q⟩ = 0.   (35)

Next, we want to show that

lim_{n→∞} ⟨φ_n, EY_f⟩ = (EY_f)(L).   (36)

In order to show this, we need to prove that the eigenfunction expansion Σ_{k≥0} ⟨b_k, EY_f⟩ b_k(x) (37) converges uniformly in x. To do this, we now find an expression for ⟨b_k, EY_f⟩; by equation (32), the coefficients ⟨b_k, Eū⟩ can be related to ⟨b_k, EY_f⟩ and ⟨b_k, EY_q⟩ for all integers k ≥ 0. By Proposition 2.2 in [16], if τ_f is an independent draw from µ_f, then we have the following equality in distribution:

Y_f =ᵈ Φ_f^{τ_f}(Y_q).

Hence, recalling the definition of Φ_f in equation (6) and letting {β_k}_{k≥0} be the eigenvalues of A_f given in equation (7), we obtain for k ≥ 1 a pair of linear relations between ⟨b_k, EY_f⟩ and ⟨b_k, EY_q⟩ (equations (38) and (39)). After solving the system of equations in (38) and (39), the sum in (37) is an explicit series (40) that converges uniformly in x, and thus (36) is verified. Combining (34), (35), and (36) identifies the constant M and yields the formula (31).

2.2. Moment approach. Suppose the switching time distributions µ_f and µ_q defined above are exponential with respective rates r_f and r_q. It follows that the state process {J(t)}_{t≥0} defined in equation (11) is a continuous-time Markov jump process on {0,1} that leaves the firing state (state 0) at rate r_f and leaves the quiescent state (state 1) at rate r_q. Though the stochastic process {u(x,t)}_{t≥0} defined above in equation (12) is constructed as an L²[0,L]-valued process, u(x,t) is actually smooth in x ∈ [0,L] for each t > 0 by virtue of being a solution to the heat equation. Thus, the process is well-defined pointwise in [0,L]. That is, for fixed x ∈ [0,L], {u(x,t)}_{t≥0} is a stochastic process taking values in R (41). Hence, for t > 0, x ∈ [0,L], and j ∈ {0,1}, we define

v_j(x,t) := E[u(x,t) 1_{J(t)=j}],   (42)

where by E we mean the expectation of the R-valued process in (41). Throughout the remainder of the paper, we use E to denote this pointwise expectation. Assume that J(t) is initially distributed according to its invariant distribution. The following proposition follows from Theorem 1 in [14]. The PDE and boundary conditions satisfied by v_j follow from interchanging differentiation with expectation. Thus the proof in [14] amounts to checking the hypotheses of the dominated convergence theorem, which follow from standard estimates for the heat equation.

Proposition 1. The functions v_0 and v_1 defined in equation (42) satisfy the boundary value problem

∂_t v_0 = D ∂_xx v_0 − r_f v_0 + r_q v_1,
∂_t v_1 = D ∂_xx v_1 + r_f v_0 − r_q v_1,

with boundary conditions

∂_x v_0(0,t) = ∂_x v_1(0,t) = 0,  ∂_x v_0(L,t) = (1 − ρ)c,  v_1(L,t) = 0,

where ρ = r_f/(r_f + r_q) is the proportion of time in the (q) state.

We can solve the boundary value problem in Proposition 1 at steady state to yield

lim_{t→∞} Eu(x,t) = v_0(x) + v_1(x) = (cμ/η) coth(ηL),   (43)

where μ := r_q/r_f and η := √((r_f + r_q)/D), which recovers the result found above in Theorem 2.5.

In addition to finding the mean, we can also find the standard deviation. For t > 0, (x,y) ∈ [0,L]², and j ∈ {0,1}, we define the two-point correlations

C_j(x,y,t) := E[u(x,t) u(y,t) 1_{J(t)=j}].   (44)

The following proposition follows from Theorem 1 in [14].

Proposition 2. The functions C_0 and C_1 defined in equation (44) satisfy the following boundary value problem on the square [0,L]²:

∂_t C_0 = D(∂_xx + ∂_yy) C_0 − r_f C_0 + r_q C_1,
∂_t C_1 = D(∂_xx + ∂_yy) C_1 + r_f C_0 − r_q C_1,

with boundary conditions that couple to the moments defined in equation (42): for j ∈ {0,1}, ∂_x C_j(0,y,t) = 0, ∂_x C_0(L,y,t) = c v_0(y,t), C_1(L,y,t) = 0, together with the symmetric conditions in y.

It is straightforward to numerically solve this boundary value problem at steady state to obtain the second moment C_0(x,y) + C_1(x,y). After obtaining this function, we subtract from it the square of the steady-state mean found above to obtain the variance at large time, Var ū(x) = C_0(x,x) + C_1(x,x) − M². A plot of the square root of this function (i.e. the standard deviation) is given in Figure 1 (left).
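The steady-state system in Proposition 1 is also easy to solve numerically, which gives an independent check on the constant mean. The sketch below discretizes the two coupled steady-state equations with second-order finite differences and first-order boundary stencils; the specific form of the equations and boundary conditions is the reconstruction stated above, so the script should be read as illustrative rather than as a transcription of the original.

```python
import numpy as np

# Steady state of the two-component moment system (Proposition 1).
D, L, c, r_q, r_f = 1.0, 1.0, 100.0, 1.0, 100.0
rho = r_f / (r_f + r_q)            # fraction of time quiescent
N = 200
dx = L / N
n = N + 1                          # grid points per component

A = np.zeros((2 * n, 2 * n))
b = np.zeros(2 * n)
lap = D / dx**2
for i in range(1, N):
    # 0 = D v0'' - r_f v0 + r_q v1
    A[i, i - 1] += lap; A[i, i] += -2 * lap - r_f; A[i, i + 1] += lap
    A[i, n + i] += r_q
    # 0 = D v1'' + r_f v0 - r_q v1
    j = n + i
    A[j, j - 1] += lap; A[j, j] += -2 * lap - r_q; A[j, j + 1] += lap
    A[j, i] += r_f
# boundary conditions (first-order stencils)
A[0, 0], A[0, 1] = -1.0, 1.0                   # v0'(0) = 0
A[n - 1, n - 1], A[n - 1, n - 2] = 1.0, -1.0   # v0'(L) = (1 - rho) c
b[n - 1] = c * (1.0 - rho) * dx
A[n, n], A[n, n + 1] = -1.0, 1.0               # v1'(0) = 0
A[2 * n - 1, 2 * n - 1] = 1.0                  # v1(L) = 0

v = np.linalg.solve(A, b)
mean = v[:n] + v[n:]
mu, eta = r_q / r_f, np.sqrt((r_f + r_q) / D)
print("numerical mean range :", mean.min(), mean.max())   # nearly constant
print("closed form (43)     :", c * mu / (eta * np.tanh(eta * L)))
```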
While the mean is constant in space, the standard deviation is much higher near the switching boundary.

[Figure 1. Large-time pointwise mean and standard deviation for the process that solves (1) and at exponential times switches between either the equation (2) conditions (left plot) or the equation (3) conditions (right plot). The means are given by (31) and (46) for the left and right plots, respectively. The standard deviation for the left plot is found by numerically solving the boundary value problem in Proposition 2 at steady state, subtracting the square of the mean, and then taking the square root. The standard deviation for the right plot is found analogously. For both plots, parameters are L = D = r_q = 1 and c = r_f = 100.]

2.3. Another simple problem. Consider the same switching PDE as above, but now suppose the boundary condition at x = 0 is always an absorbing Dirichlet condition, u(0,t) = 0. That is, suppose u(x,t) satisfies the heat equation in the interval [0,L] and at exponentially distributed random times switches between the boundary conditions (3), i.e. between (f): u(0,t) = 0, ∂_x u(L,t) = c and (q): u(0,t) = 0, u(L,t) = 0. This models the situation where there is a neuron at x = L that switches stochastically between firing (f) and quiescence (q), while there is a glial cell at x = 0 that absorbs neurotransmitter. Using the methods from Subsection 2.2 above, we can calculate the expectation of the solution at large time. In particular, after solving the steady-state version of a boundary value problem similar to that in Proposition 1, we find that the expected solution at large time is the linear function

lim_{t→∞} Eu(x,t) = cμ x / (μ + ηL coth(ηL)),   (46)

where μ := r_q/r_f and η := √((r_f + r_q)/D). We can also derive the analog of Proposition 2 for this problem in order to find the standard deviation. The mean and standard deviation are plotted in Figure 1 (right).

3. The general problem. Suppose we now let both ends of the interval switch independently. That is, suppose u(x,t) satisfies the heat equation in the interval [0,L], and suppose the boundary condition at x = 0 switches between u(0,t) = 0 and ∂_x u(0,t) = −c_0 < 0, while the boundary condition at x = L switches between u(L,t) = 0 and ∂_x u(L,t) = c_L > 0. This models the situation where there are neurons at both x = 0 and x = L that fire independently.

Suppose two independent Markov jump processes control the states of the x = 0 and x = L boundaries. It is straightforward to combine these two independent Markov processes into a single 4-state Markov process J(t) ∈ {0,1,2,3} with generator Q. For j ∈ {0,1,2,3} define the functions

v_j(x,t) := E[u(x,t) 1_{J(t)=j}].   (47)

Assume that J(t) is initially distributed according to its invariant distribution. The following proposition follows from Theorem 1 in [14].

Proposition 3. The functions defined above in equation (47) satisfy the coupled PDEs ∂_t v_j = D ∂_xx v_j + Σ_k Q_{kj} v_k, with switching boundary conditions at x = 0 and x = L analogous to those in Proposition 1.

Solving the steady-state version of this system reduces to solving a set of eight linear equations, from which lim_{t→∞} Eu(x,t) can be computed in terms of all the given parameters of the problem.

4. Extracting information about volume transmission. 4.1. The simple formula. For the first prototype problem (a switching terminal at x = L and a no-flux wall at x = 0), the large-time mean is the constant

M = (cμ/η) coth(ηL),   (51)

where μ := r_q/r_f, η := √((r_f + r_q)/D), r_q is the rate of switching from the quiescent state to the firing state, and r_f is the rate of switching from the firing state to the quiescent state. The intuitive reason why the expectation is constant is that if x is far away from L then it is hard for neurotransmitter to diffuse there without being reabsorbed first, but once there it is hard for it to be reabsorbed when the boundary conditions switch, because of the distance to L. It is clear that M should be proportional to c, since c is the rate at which neurotransmitter is put into the interval when the neuron is firing.
If r_q gets larger and/or r_f gets smaller, then μ will increase and cause M to increase, since the neuron is spending a larger fraction of time in the firing state. Similarly, if r_q gets smaller and/or r_f gets larger, both μ and M will decrease because the neuron is spending a smaller fraction of time firing. But what if we keep μ constant and scale r_q and r_f to be large? Then η becomes large, so M approaches 0, since coth is monotone decreasing to 1. This makes sense because it is very hard for neurotransmitter to escape a small region near L: as soon as it is released, the boundary conditions switch and it is reabsorbed. More generally, fast switching between Dirichlet and Neumann always becomes pure Dirichlet if the proportion of time in each state is fixed [15]. This phenomenon can be understood in terms of the mean absorption time of a Brownian motion to a switching boundary. Indeed, for a particle starting on a boundary that switches between reflecting and absorbing with the proportion of time in each state fixed, the mean absorption time goes to zero as the switching rate increases [4,5].

On the other hand, if μ is constant then η gets small as both r_q and r_f get small, so M approaches ∞. Intuitively, this is because the input is constant in time when the neuron is firing but the absorption is (approximately) proportional to the amount in the interval, so the input dominates as the switching times become long. To see why absorption is approximately proportional to the amount in the interval, first observe from the form of the solution operator in (6) that after being in the firing state for a long time s, the solution is approximately the product sφ(x) for some function φ(x). Then, when the boundary condition switches to absorbing, the form of the solution operator in (4) implies that the amount absorbed before the next switch will be proportional to s.

The diffusion constant D has the reciprocal effect on η to the sum r_q + r_f, so when D gets small M also gets small, and when D gets large M goes to ∞, for the reasons given above. Finally, notice that M gets smaller as L increases, which makes sense because the neurotransmitter is diffusing into a larger region. What is interesting, however, is that coth(z) asymptotes to 1 as z → ∞, which means that once ηL is large the value of M is almost independent of L:

M ≈ cμ/η.
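The parameter dependence just described is easy to tabulate numerically. The following few lines evaluate the formula (31)/(51) as reconstructed above and illustrate the two scaling regimes: with μ held fixed, M tends to 0 under fast switching and grows without bound under slow switching.

```python
import numpy as np

def mean_level(c, r_q, r_f, D, L):
    """Large-time spatial mean M = (c * mu / eta) * coth(eta * L)."""
    mu = r_q / r_f
    eta = np.sqrt((r_f + r_q) / D)
    return c * mu / (eta * np.tanh(eta * L))

# scale both rates by s, keeping mu = r_q / r_f fixed
for s in (0.01, 0.1, 1.0, 10.0, 100.0):
    M = mean_level(c=100.0, r_q=1.0 * s, r_f=100.0 * s, D=1.0, L=1.0)
    print(f"s = {s:7.2f}   M = {M:.4g}")
```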
4.2. General formulas. In Section 3 we considered the general problem where there is a neuron terminal at both ends of the interval, the parameters of the neurons are different, and they switch independently. We showed that lim_{t→∞} Eu(x,t) = M, which does not depend on x, and we indicated how to compute an explicit formula for M in terms of all the parameters of the problem. We also displayed an explicit formula for M in the case where the neural parameters were identical. Although the formula in this special case remains complicated, we can take various limits that have biological significance. It is not hard to check that M → 0 as η → ∞ with μ fixed. This makes sense because as η gets large, the switching gets faster and faster, as we discussed above, and therefore the neurotransmitter cannot easily escape from a small region about either endpoint. Also, M → ∞ as η → 0 with μ fixed. This makes sense because, as we saw above, as the switching gets slower and slower, input dominates over removal. Finally, lim_{L→∞} M exists. In the simple case where there is a terminal at one end and a no-flux condition at the other end, this is the limit that we saw in Section 4.1 above; the limit exists because as L gets very large the coth term goes to one and M becomes independent of L. The same thing happens here if the terminals at the ends are sufficiently far apart.

4.3. Real neural parameters. Many dopaminergic and serotonergic neurons fire at a basal rate of about 1 spike/sec [10,12]. The length of a typical action potential is about 1-10 milliseconds [13], and so it is reasonable to assume that the release of neurotransmitter lasts a total of about 5 milliseconds. This means that reasonable values are r_q = 1/sec, r_f = 200/sec, and μ = r_q/r_f = 1/200. The diffusion constant for dopamine is approximately 10⁻⁶ cm²/sec [22], so η = √((r_f + r_q)/D) ≈ 1.42 × 10⁴ cm⁻¹. The spacing between neural terminals varies widely, but for serotonin it has been estimated that there are about 2.6 × 10⁶ terminals per cubic millimeter [21], or a distance of about 7 µm between terminals. In [18], Figure 1, some terminals are considerably further apart than 20 µm and some are less. If we assume that 7 µm ≤ L ≤ 20 µm, then 9.9 ≤ ηL ≤ 28.4. Thus coth(ηL) ≈ 1 and we are well within the range of L where M is approximately independent of L. Typical extracellular concentrations of dopamine are approximately 0.090 µM [1], and from this one can use the formula (51) to compute c, the one parameter for which there are no experimental measurements.
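To make the arithmetic of this subsection reproducible, the following sketch recomputes ηL for the quoted parameter ranges and then inverts the formula (51), with coth(ηL) ≈ 1, to estimate the release parameter c from a measured mean concentration. The formula is the reconstruction given above, and the point of the exercise is the procedure, not the specific value of c.

```python
import numpy as np

r_q, r_f = 1.0, 200.0            # 1/s (Section 4.3)
D = 1.0e-6                       # cm^2/s, dopamine
mu = r_q / r_f                   # 1/200
eta = np.sqrt((r_f + r_q) / D)   # 1/cm

for L_um in (7.0, 20.0):
    L = L_um * 1.0e-4            # convert micrometers to cm
    print(f"L = {L_um:4.1f} um   eta*L = {eta * L:5.1f}")

# coth(eta*L) ~ 1 here, so M ~ c * mu / eta; invert for c
M = 0.090                        # uM, typical extracellular dopamine
c = M * eta / mu                 # uM per cm
print("estimated release parameter c ~", c, "uM/cm")
```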
4.4. Complete solution of the one-dimensional problem. In Section 3 we outlined a method to give an explicit formula for the constant mean, M, in the case where there are neurons at both 0 and L and they fire independently with different parameters. This allows us to compute the overall spatial mean in the very general situation where there is a piece of neural tissue, represented by the interval [a,b], that contains many neurons, switching between firing and quiescence, and glial cells that absorb neurotransmitter; see Figure 2.

We assume that there are finitely many neural terminals and finitely many glial cells in the interval [a,b]. Since we are in one dimension, each neuron or glial cell separates the tissue on its left from the tissue on its right. Put differently, the tissue is divided into subintervals with terminals or glial cells only at the ends. If there are glial cells at each end, then the neurotransmitter mean in that interval will be 0 as t → ∞. If there is a glial cell at one end and a neuron at the other end, then the mean is linear over the interval (formula (46) above) and the spatial mean over the interval is elementary to compute. If there are terminals at both ends, then the spatial distribution of the asymptotic mean in the interval is given by the complex general formula (that we did not state explicitly) in Section 3. Finally, at the ends of the tissue, if a glial cell is nearest the end, then the mean neurotransmitter level is 0 in that interval, and if a terminal is nearest the end, then the mean is constant over the interval and given by the formula (51) above. Thus, once we know the parameters for each neural terminal, we can compute the neurotransmitter mean in each subinterval and therefore, by elementary methods, compute explicitly the overall mean over the whole tissue as t → ∞; a sketch of this bookkeeping is given below.
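The bookkeeping described above can be organized in a few lines of code. The sketch below is schematic: the constant mean for a terminal-terminal subinterval with identical parameters is formula (51), while the slope of the linear profile for a glia-terminal subinterval (formula (46)) and the general two-parameter case are passed in as inputs rather than recomputed, since those formulas are not restated here; the numeric inputs are hypothetical.

```python
import numpy as np

def subinterval_mean(left, right, L, M_const, slope46):
    """Spatial average of lim E u over one subinterval.

    left/right are 'g' (glial cell) or 't' (neural terminal).
    M_const is the constant mean (51); slope46 is the slope of the
    linear profile (46).  Both are supplied by the caller.
    """
    kinds = {left, right}
    if kinds == {"g"}:
        return 0.0                  # glia at both ends: mean -> 0
    if kinds == {"g", "t"}:
        return 0.5 * slope46 * L    # linear ramp from 0: average is half
    return M_const                  # identical terminals: constant mean

# a toy tissue: terminal, glia, terminal, terminal, equally spaced
cells = ["t", "g", "t", "t"]
L_sub = 10.0e-4                     # cm between neighboring cells (hypothetical)
means = [subinterval_mean(a, b, L_sub, M_const=0.090, slope46=120.0)
         for a, b in zip(cells, cells[1:])]   # inputs hypothetical
print("overall tissue mean ~", np.mean(means))
```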
5. Discussion. The goal of this paper was to begin the development of mathematical methods for understanding volume transmission, in which cells in one nucleus project their axons to a distant nucleus and release neurotransmitter extrasynaptically. Thus the terminals on the projections maintain (or change) the concentration of the neurotransmitter in the extracellular space in the projection nucleus. Given the properties of the neurons, one would like to compute various quantities like the average neurotransmitter concentration. The problem is interesting from a mathematical point of view because the neuron terminals are both the source of neurotransmitter and the most important sink. We formulated the question as a problem of switching boundary conditions for the heat equation, where each terminal is a point at which neurotransmitter is released at a constant rate when the neuron is firing and which becomes an absorbing boundary condition when it is not. This formulation ignores some details of the biology. Surely the release rate is not constant throughout the short action potential, and some neurotransmitter may continue to be released for some milliseconds after the action potential is finished. The reuptake of neurotransmitter into the terminal is by transporters obeying Michaelis-Menten kinetics, and therefore treating reabsorption as an absorbing boundary condition is an approximation. And we have only considered the question in one space dimension.

Nevertheless, we have found some interesting and unexpected phenomena that could be important for biological understanding if they also hold in two and three dimensions. Chief among these is the fact that in the simplest problem, a switching neuron at x = L and a no-flux boundary condition at x = 0, lim_{t→∞} Eu(x,t) is a constant, M, independent of x, and the variance decays rapidly as one moves away from the terminal. Thus the whole tissue, [0,L], sees the same average concentration of neurotransmitter even though some parts are closer to the terminal and some parts are further away. And we were able to calculate a simple explicit formula for M in terms of the parameters of the problem. In Section 3 we developed a method to solve the problem when there are independently switching neuron terminals at both ends of the interval. There we prove that M is constant in space if the terminals have the same parameters and is linear in x if the parameters are different.

The other simple problem that we considered was to have a switching terminal at x = L and an absorbing glial cell at x = 0. There are 10 times as many glial cells in the brain as there are neurons [10]. Some of them are known to take up neurotransmitter [8,9], though probably not as quickly or efficiently as neural terminals. In this case M(x) = lim_{t→∞} Eu(x,t) is a linear function of x, and we compute its slope explicitly in terms of the neural parameters. Recent work with a biological collaborator [24] suggests that this uptake mechanism plays an important role for serotonin.

We have begun some calculations in higher dimensions, where the analytical and geometrical issues are much more difficult. The problem can be formulated similarly to what we have done here, but closed formulas for M seem much harder to obtain, though M can be computed numerically. Numerical calculations show some properties similar to the one-dimensional results in this paper. However, the terminals can no longer be treated as points, and thus their shape matters, and it is not yet clear how the excluded volume, the tortuosity of the extracellular space, and the placement of geometric obstacles affect the spatial variation of lim_{t→∞} Eu(x,t).

[Figure 2. A line of neural tissue with glial cells and neural terminals.]
Kagome quantum anomalous Hall effect with high Chern number and large band gap

Due to the potential applications in low-power-consumption spintronic devices, the quantum anomalous Hall effect (QAHE) has attracted tremendous attention in past decades. However, up to now, QAHE has only been observed experimentally in topological insulators with Chern numbers C = 1 and 2 at very low temperatures. Here, we propose three novel two-dimensional stable kagome ferromagnets, Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2, that can realize QAHE with the high Chern number |C| = 3. Monolayers Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 possess large band gaps of 70, 77 and 63 meV with Curie temperatures T_C of 51, 42 and 46 K, respectively. By constructing a heterostructure Co3Sn3Se2/MoS2, its T_C is enhanced to 60 K and the band gap remains about 60 meV due to the tensile strain of 2% at the interface. The bilayer compound Co6Sn5Se4 becomes a half-metal, with a relatively flat plateau in its anomalous Hall conductivity corresponding to |C| = 3 near the Fermi level. Our results provide new topologically nontrivial systems of kagome ferromagnetic monolayers and heterostructures possessing QAHE with high Chern number |C| = 3 and large band gaps.

Inspired by the Haldane model, most QAHE proposals in the past decades have been based on a honeycomb lattice. Nevertheless, the kagome lattice with out-of-plane magnetization is also an important platform for investigating the QAHE [28-31]. In particular, the layered magnetic kagome lattice Co3Sn2S2 was recently reported to be a Weyl semimetal with a large intrinsic AHC [32]. Because of the successful synthesis of bulk Co3Sn2S2, monolayer Co3Sn3S2 was studied theoretically and was found to be a Chern insulator with C = 3 [33]. In this work, inspired by recent studies on Co3Sn3S2, we systematically investigate monolayers Co3X3Y2 (X = C, Si, Ge, Sn, Pb; Y = O, S, Se, Te, Po) based on first-principles calculations. Our results show that monolayers Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 are stable. According to the results for the anomalous Hall conductivity σ_xy and the chiral edge states, a high Chern number |C| = 3 is obtained in these three compounds. Furthermore, we find that the band gap and T_C are both sensitive to applied strain. For the Co3Sn3Se2 monolayer, the band gap can be decreased to zero with a compressive strain of -3%, and its T_C can be increased to 65 K with a tensile strain of 4%. T_C = 60 K and a tensile strain of 2% can be obtained by constructing a heterostructure Co3Sn3Se2/MoS2. We have also explored the topological properties of bilayer Co3Sn3Se2 and found a Weyl node on the Γ → M path near the Fermi level. Although there is no global gap, we find a relatively flat plateau in the AHC corresponding to |C| = 3 near the Fermi level.

II. COMPUTATIONAL METHODS

In our studies, the first-principles calculations were performed using the projector augmented wave (PAW) method [34] based on density functional theory (DFT) as implemented in the Vienna ab initio simulation package (VASP) [35,36]. The electron exchange-correlation functional is described by the generalized gradient approximation (GGA) in the form proposed by Perdew, Burke, and Ernzerhof (PBE) [37]. A 20 Å vacuum space is built to avoid interlayer interactions.
Lattice constants and atomic positions are fully optimized with the conjugate gradient (CG) scheme until the maximum force acting on all atoms is less than 1 × 10⁻³ eV/Å and the total energy is converged to 10⁻⁷ eV. The 9 × 9 × 1 and 15 × 15 × 1 k-meshes generated by the Γ-centered Monkhorst-Pack grid [38] are used for structure optimization and self-consistent calculations, respectively. The plane-wave cutoff energy is set to 500 eV. The phonon frequency calculations have been carried out using the finite displacement approach as implemented in the PHONOPY code [39] with a 4 × 4 × 1 supercell. The thermal stability is examined by performing molecular dynamics (MD) simulations in the canonical (NVT) ensemble in a 4 × 4 × 1 supercell at different temperatures with a Nose thermostat. In the calculation of the Co3Sn3Se2/MoS2 heterostructure, the zero-damping DFT-D3 method is adopted to take interlayer van der Waals forces into account. Surface states are investigated by an effective tight-binding Hamiltonian constructed from maximally localized Wannier functions [40,41], and the iterative Green function method [42] is used with the package WannierTools [43].

The crystal structure of Co3X3Y2 (X = Sn, Pb; Y = S, Se) with the space group P-3m1 (No. 164) is depicted in Figs. 1(a) and (b). Co atoms form a 2D kagome lattice with one X atom sandwiched by X and Y atoms in the center. Each primitive cell contains one formula unit. After checking the dynamical stabilities of all compounds Co3X3Y2 (X = C, Si, Ge, Sn, Pb; Y = O, S, Se, Te, Po) by calculating their phonon spectra, we found that only the Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 monolayers are dynamically stable, because there are no imaginary phonon modes in the whole Brillouin zone, as shown in Figs. 1(d)-(f). The calculated lattice constants for Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 are 5.38, 5.44 and 5.32 Å, respectively. Furthermore, the stabilities of these compounds are also checked by the formation energy, defined as the total energy of the monolayer relative to the energies of its constituent elements.

B. Magnetic and electronic properties

In DFT calculations, the magnetic ground state of a Co3X3Y2 monolayer can be obtained by calculating the energy difference between the ferromagnetic (FM) and antiferromagnetic (AFM) spin configurations (∆E = E_FM − E_AFM), as shown in Figs. 2(a) and (b). The AFM configuration consists of in-plane spin polarization with angles of 120° [44]. The results are listed in Table I, where the negative values of the energy difference between the FM and AFM configurations indicate that Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 all favor the FM ground state. For the FM ground state, the magnetic anisotropy energy (MAE), defined as the energy difference between the total energies corresponding to in-plane and out-of-plane FM configurations, was calculated for these three compounds, as listed in Table I. One may see that all these compounds prefer an out-of-plane magnetization. The magnetism of these compounds can be described by a Heisenberg-type Hamiltonian containing a nearest-neighbor exchange term and a single-ion anisotropy term, where J and D are the nearest-neighbor exchange integral and the single-ion anisotropy (SIA), respectively. In order to obtain J|S|² and D|S|², the energies corresponding to three different magnetic configurations, FM(m∥x), FM(m∥z) and AFM, are expressed in terms of E_0, J|S|² and D|S|², where E_0 is the energy which is independent of the spin configuration. The corresponding J|S|² and D|S|² are listed in Table I.
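The explicit Hamiltonian and energy expressions were lost in extraction, so the following sketch makes one common convention concrete: H = −J Σ_{⟨i,j⟩} S_i · S_j − D Σ_i (S_i^z)², with four nearest neighbors per Co site on the kagome lattice and a 120° in-plane AFM state. Under those assumptions the per-site energies are E_FM(x) = E_0 − 2J|S|², E_FM(z) = E_0 − 2J|S|² − D|S|², and E_AFM = E_0 + J|S|², which can be inverted as follows; the conventions, not the numbers, are the assumption here.

```python
def exchange_and_sia(E_fm_x, E_fm_z, E_afm):
    """Invert the three per-site DFT energies for J|S|^2 and D|S|^2.

    Assumes H = -J sum_<ij> S_i.S_j - D sum_i (S_i^z)^2, four nearest
    neighbors per kagome site, and 120-degree in-plane AFM order, so
      E_FM(x) = E0 - 2 J|S|^2
      E_FM(z) = E0 - 2 J|S|^2 - D|S|^2
      E_AFM   = E0 +   J|S|^2
    """
    JS2 = (E_afm - E_fm_x) / 3.0   # exchange energy scale
    DS2 = E_fm_x - E_fm_z          # single-ion anisotropy (equals the MAE)
    return JS2, DS2

# hypothetical energies (eV per Co site), for illustration only
print(exchange_and_sia(E_fm_x=-0.0100, E_fm_z=-0.0101, E_afm=0.0050))
```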
The Monte Carlo (MC) simulations on an 80 × 80 × 1 kagome lattice with periodic boundary conditions are carried out, with the calculation at each temperature containing 10⁶ MC steps [45]. The Curie temperatures are estimated to be 51, 42 and 46 K for the Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 monolayers, respectively, as shown in Fig. 2(c).

The magneto-optical Kerr effect is usually expected in FM materials, due to their potential applications in magneto-optical storage devices. The Kerr rotation angle can be written as

θ_Kerr = −Re[ ε_xy / ((ε_xx − 1) √ε_xx) ],

where ω is the photon energy and ε_xx and ε_xy are the diagonal and off-diagonal terms of the dielectric tensor ε. The dielectric tensor is related to the optical conductivity tensor σ through σ(ω) = (ω/4πi)[ε(ω) − I], where I is the unit tensor. The optical conductivity tensor σ and the Kerr rotation angle can be obtained using VASP along with WANNIER90. The photon-energy-dependent Kerr angles are illustrated in Fig. 2(d). The sign of the Kerr angles of Co3Pb3S2 and Co3Pb3Se2 differs from that of Co3Sn3Se2 in the low-photon-energy range, due to the opposite sign of ε_xy. For these three compounds, at a photon energy of about 0.2 eV the Kerr angles reach maximum values of about 2.9°, as listed in Table I, which is much larger than the reported 0.8° for bulk Fe and is comparable to the Tc-based ferromagnetic semiconductors [46].

The spin-polarized band structures were calculated as shown in Fig. 3. In the absence of spin-orbit coupling (SOC), all three compounds behave as Weyl half-semimetals [16], with a fully spin-polarized Weyl point on the K → Γ path near the Fermi level. Because of the symmetries, there should be three pairs of Weyl nodes in the whole BZ. The corresponding partial density of states (PDOS) shows that the density of states near the Fermi level is mainly attributed to the t_2g and e_g orbitals of the Co atoms. After including SOC, band gaps of about 70, 77 and 63 meV are opened for Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2, respectively.

C. Topological properties

In order to investigate their topological properties, maximally localized Wannier functions (MLWFs) implemented in the WANNIER90 package are employed to fit the DFT band structures. A nonzero Chern number C is viewed as a signature of a topologically nontrivial band structure, and for each band the Chern number can be obtained by integrating the Berry curvature over the first Brillouin zone. The calculated Chern number C for the valence band near the Fermi level is marked in Fig. 3. Figs. 4(a)-(c) show the calculated anomalous Hall conductance as a function of the chemical potential; the quantized charge Hall plateau σ_xy = Ce²/h is obtained when the chemical potential lies within the band gap, characterizing the QAHE. Furthermore, the QAHE can also be confirmed by calculating the chiral edge states appearing within the band gap. On the basis of a recursive strategy, we construct the MLWFs using all d orbitals of the Co atoms and calculate the local density of the edge states, as shown in Figs. 4(d)-(f). One can see that the bulk states are connected by three topologically nontrivial edge states. As the number of edge states cutting the Fermi level indicates the value of the Chern number, |C| = 3 is further verified. The integral of the Berry curvature in a region around one original Weyl point gives a Chern number of 1/2. There are six Weyl points, related by C_3 and inversion symmetry, that should have the same Chern number of 1/2; thus we obtain the total Chern number of 3.
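For completeness, the magneto-optical expressions discussed above translate directly into code. The Kerr-angle function below uses the standard polar-geometry expression assumed in the reconstruction of the equation above, and the conversion from the optical conductivity uses ε(ω) = I + (4πi/ω)σ(ω); both should be checked against the conventions of the original work, and the tensor components used in the example are hypothetical.

```python
import numpy as np

def dielectric_from_sigma(sigma, omega):
    """eps(omega) = 1 + 4*pi*i*sigma/omega (Gaussian units, per component)."""
    return 1.0 + 4.0 * np.pi * 1j * sigma / omega

def kerr_rotation(eps_xx, eps_xy):
    """Polar Kerr rotation (radians):
    theta_K = -Re[eps_xy / ((eps_xx - 1) * sqrt(eps_xx))]."""
    return -np.real(eps_xy / ((eps_xx - 1.0) * np.sqrt(eps_xx + 0j)))

# hypothetical dielectric tensor components at one photon energy
print(np.degrees(kerr_rotation(eps_xx=8.0 + 2.0j, eps_xy=0.4 - 0.1j)))
```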
IV. STRAIN EFFECT AND Co3Sn3Se2/MoS2 HETEROSTRUCTURE

To investigate the effect of strain on the Co3X3Y2 monolayers, as shown in Fig. 5, we plot the band gap and Curie temperature T_C of the Co3Sn3Se2 monolayer as functions of the biaxial strain, defined as ε = (a − a_0)/a_0, where a and a_0 are the strained and equilibrium lattice parameters, respectively. With increasing compressive strain, the valence band maximum (VBM) at the Γ point is lifted, while the conduction band minimum (CBM) at the K point drops, leading to the gradual closure of the band gap. Under tensile strain, the VBM at the Γ point and the CBM at the K point move in opposite directions relative to the Fermi level, and thus the band gap stays at about 60 meV. Meanwhile, the applied tensile strain enhances the exchange-coupling parameter J and the corresponding Curie temperature from 46 K to 65 K. The strain effect on the band gap and T_C for Co3Pb3S2 and Co3Pb3Se2 has also been studied, as shown in Fig. S2, where similar behavior is obtained.

To investigate the effect of the substrate on the electronic properties of the Co3X3Y2 monolayers, we construct a heterostructure Co3Sn3Se2/MoS2, as shown in Fig. 6(a). The heterostructure consists of a 1 × 1 unit cell of Co3Sn3Se2 and a √3 × √3 unit cell of MoS2. The lattice mismatch at the interface is only 2%. By first-principles calculations including the vdW interaction, the optimized lattice constant and equilibrium interlayer distance d for the heterostructure are about 5.42 Å and 2.76 Å, respectively. The T_C is enhanced up to 60 K, while the band gap stays at about 60 meV. The calculated AHC and edge states shown in Figs. 6(b) and (c) indicate that the topological properties of Co3Sn3Se2 are preserved.

To study the layer-dependent topological properties, the stabilities of the bilayer compounds Co6Pb5S4, Co6Pb5Se4 and Co6Sn5Se4 are first confirmed by calculating the phonon spectra, where no imaginary frequency was observed, as shown in Fig. S3. For Co6Sn5Se4, we have noticed that a Weyl node appears on the Γ → M path near the Fermi level, as illustrated in Fig. 7(b). Taking the SOC into account, a local band gap of about 51 meV is opened at the (original) Weyl point. Although there is no global gap, it is interesting to find a relatively flat plateau of the AHC corresponding to |C| = 3 near the Fermi level.

VI. CONCLUSION

By using first-principles calculations, we have systematically investigated the two-dimensional kagome ferromagnet monolayers Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2, which can realize high-Chern-number (|C| = 3) QAHE with a large band gap. For the Co3Pb3S2, Co3Pb3Se2 and Co3Sn3Se2 monolayers, band gaps of 70, 77 and 63 meV and T_C of 51, 42 and 46 K are obtained, respectively. By constructing a heterostructure Co3Sn3Se2/MoS2, the T_C can be enhanced to 60 K and the band gap stays at about 60 meV due to the tensile strain of 2% at the interface. The bilayer compound Co6Sn5Se4 becomes a half-metal, with a relatively flat plateau in its anomalous Hall conductivity corresponding to |C| = 3 near the Fermi level. Our results provide new topologically nontrivial systems of kagome ferromagnetic monolayers and heterostructures with high |C| = 3 and large-band-gap QAHE, which help deepen our understanding of topological states in ferromagnets with kagome lattices.
Synthesis, Characterization, Catalytic Activity, and DFT Calculations of Zn(II) Hydrazone Complexes

Two new Zn(II) complexes with tridentate hydrazone-based ligands (condensation products of 2-acetylthiazole) were synthesized and characterized by infrared (IR) and nuclear magnetic resonance (NMR) spectroscopy and single-crystal X-ray diffraction methods. The complexes 1 and 2 and the recently synthesized [ZnL3(NCS)2] (L3 = (E)-N,N,N-trimethyl-2-oxo-2-(2-(1-(pyridin-2-yl)ethylidene)hydrazinyl)ethan-1-aminium) complex 3 were tested as potential catalysts for the ketone-amine-alkyne (KA2) coupling reaction. The gas-phase geometry optimization of the newly synthesized and characterized Zn(II) complexes has been computed at the density functional theory (DFT)/B3LYP/6-31G level of theory, while the highest occupied molecular orbital and lowest unoccupied molecular orbital (HOMO and LUMO) energies were calculated within the time-dependent density functional theory (TD-DFT) at the B3LYP/6-31G and B3LYP/6-311G(d,p) levels of theory. From the energies of the frontier molecular orbitals (HOMO-LUMO), the reactivity descriptors, such as chemical potential (μ), hardness (η), softness (S), electronegativity (χ) and electrophilicity index (ω), have been calculated. The energetic behavior of the investigated compounds (1 and 2) has been examined in gas phase and solvent media using the polarizable continuum model. For comparison, the same calculations have been performed for the recently synthesized [ZnL3(NCS)2] complex 3. The DFT results show that compound 1 has the smallest frontier orbital gap, so it is more polarizable, is associated with a higher chemical reactivity and low kinetic stability, and is termed a soft molecule.

Introduction

Hydrazone ligands are one of the most important classes of flexible and versatile polydentate ligands, showing very high efficiency in chelating various metal ions [1-13]. The coordination behavior of hydrazones is known to depend on the pH of the medium, the nature of the substituents, and the position of the hydrazone group relative to other moieties [2-4]. Moreover, deprotonation of the -NH group, which is readily achieved in the complexed ligand in particular, results in the formation of tautomeric anionic species (=N-N(-)-C=O or =N-N=C-O(-)) having different coordination properties. On the other hand, the propargylamines are a unique family of organic compounds which has received ample attention from the wider scientific community [14,15]. The profound interest surrounding these compounds is partly due to the bioactive nature of certain members of their family [14-17]. Furthermore, propargylic amines are frequently encountered as intermediates in organic synthesis, providing facile access to a variety of structurally complex organic compounds [14,15]. Among these compounds, the subgroup of tetrasubstituted propargylamines is particularly interesting, as it comprises the least studied family of propargylamines. The most straightforward approach towards such molecules is the ketone-amine-alkyne (KA2) multicomponent coupling reaction, for which a significant number of catalytic systems has been reported during the past decade [18-28].
As part of the work of some of the authors focusing on sustainable organic transformations, multicomponent reactions and sustainable metal catalysis [19-33], the first zinc-based homogeneous catalytic system for the KA2 coupling was disclosed very recently [34]. Since the use of ligands in these catalytic systems is rare, we were interested in testing well-defined zinc complexes as potential catalysts for the reaction operating under air.

Crystal Structures of the [ZnL1(NCS)2]·2H2O (1) and [Zn(L2)2] (2) Complexes
In addition, the solvent water molecules O1W and O2W assists in joining the neighboring layers related by the center of symmetry by means of weak intermolecular hydrogen bonds C-HOW (Table S2, Figure 95.20(9) C12-S4-Zn1 95.96(8) All reactions were performed on a 0.5 mmol scale and the reaction time was 16 h unless otherwise noted. 1 The progress of the reaction was monitored by gas chromatography/mass spectroscopy (GC/MS) analysis, using n-octane as the internal standard and the isolated yields reported correspond to the pure product after chromatographic purification. 2 The reaction was stopped after 3 h. The molecular structure of 2 is shown in Figure 2. Selected bond distances and angles are given in Table 1. The neutral complex molecule [Zn(L 2 ) 2 ] crystallizes in the monoclinic crystal system with space group P2 1 /c. In complex 2, two deprotonated ligand molecules L 2 coordinate the Zn(II) ion in a meridional fashion, forming a distorted octahedral complex by chelation through two NNS donor atom sets. Each ligand coordinates to metallic center through thiazole nitrogen, imine nitrogen and thiolate sulfur atoms. The tridentate coordination of each ligand implies the formation of two fused five-membered chelate rings Zn-N-C-C-N and Zn-N-N-C-S. The chelate rings (Zn1-N5-C9-C10-N6 and Zn1-N6-N7-C12-S4) are nearly coplanar, while the other pair (Zn1-N1-C3-C4-N2 and Zn1-N2-N3-C6-S2) deviates significantly from coplanarity, as indicated by the dihedral angles of 2.2 • and 7.1 • , respectively. In addition, the two chelation planes comprising the atoms N-N-S-Zn are practically perpendicular (dihedral angle = 89.7 • ). The octahedral complex molecule of 2 is comparable with the Zn(II) complex containing a similar ligand (2-acetylthiazole (N4)-phenylthiosemicarbazone) (CSD refcode KUMPEP) [13], although the latter is much more distorted due to the presence of the phenyl group at the terminal nitrogen atom of the thiosemicarbazone ligand, as evidenced by the smaller dihedral angle between chelation planes (N-N-S-Zn) compared to that observed in 2 (83.9 • vs. 89.7 • ). One of the measures of the octahedral strain is average ∆O h value, defined as the mean deviation of 12 octahedral angles from ideal 90 • . The complex 2 shows less octahedral strain in comparison to that observed in analogous Zn(II) complex with 2-acetylthiazole (N4)-phenylthiosemicarbazone. The calculated ∆O h values are 10 • for the former and 12 • for the latter complex. The mean Zn-L bond lengths (Zn-N 1,3-thiazole 2.2525 Å, Zn-S thiolate 2.4313 Å and Zn-N imine 2.148 Å) observed in complex 2 are similar to those found in its structural analogue (Zn-N 1,3-thiazole 2.2310 Å, Zn-S thiolate 2.4331 and Zn-N imine 2.1877Å). thiolate sulfur atoms. The tridentate coordination of each ligand implies the formation of two fused five-membered chelate rings Zn-N-C-C-N and Zn-N-N-C-S. The chelate rings (Zn1-N5-C9-C10-N6 and Zn1-N6-N7-C12-S4) are nearly coplanar, while the other pair (Zn1-N1-C3-C4-N2 and Zn1-N2-N3-C6-S2) deviates significantly from coplanarity, as indicated by the dihedral angles of 2.2 and 7.1, respectively. In addition, the two chelation planes comprising the atoms N-N-S-Zn are practically perpendicular (dihedral angle = 89.7). 
In the crystals of complex 2, molecules self-assemble within the layer parallel to the (1 0 0) lattice plane by means of intermolecular hydrogen bonds between the terminal NH2 groups (N4 and N8), serving as hydrogen-bond donors, and the thiolate sulfur atoms S4 at (1 − x, −1/2 + y, 1/2 − z) and S2 at (1 − x, 2 − y, −z), serving as acceptors (Table S3, Figure S2a in the Supplementary Materials). The complex molecules belonging to neighboring layers are linked through weak π···π interactions involving the heteroaromatic 1,3-thiazole rings to form a 3D supramolecular structure (Table S4 and Figure S2b in the Supplementary Materials). In addition, the molecules of 2 are linked along a crystallographic axis by weak C_aromatic-H···N_hydrazone contacts.

Evaluation of the Zinc Complexes' Catalytic Activity in the KA2 Coupling Reaction

We chose cyclohexanone, pyrrolidine and phenylacetylene as a model substrate triad. A promising result was obtained when complex 1 was used at 10 mol% loading in toluene, affording the product in 85% isolated yield after 16 h (Entry 1, Table 2). As expected, when ligand HL1Cl was used as a possible catalyst in a control experiment, the desired propargylamine was not formed. Complex 3 led to a 67% yield under the same conditions, while complex 2 also displayed moderate catalytic activity (Entries 3 and 5, Table 2), suggesting that the zinc center is no longer fully coordinated under the reaction conditions.
Removing the solvent while reducing the temperature and catalyst loading also led to moderate yields in the cases of both 1 and 3 (Entries 6-8, Table 2), while using MgSO4 as a water-scavenging additive, in combination with an increase in temperature, led to the highest yield when complex 1 was used at 5 mol% loading (Entry 9, Table 2). Under the same conditions, complex 3 led to a moderate yield, while reducing the reaction time to 3 h led to incomplete conversion and low yield (Entries 10 and 11, respectively, Table 2), suggesting that the reaction conditions outlined in Entry 9 of Table 2 were optimal. Of note, when taking into account the reactivity of simple zinc salts, complex 1 performs comparably well in this reaction. However, lower catalyst loading is required under the conditions described herein, while in the case of zinc acetate 10 mol% was essential in order to reach yields above 90%, in combination with dry/inert conditions.

Several substrate combinations were coupled under the aforementioned conditions, as shown in Scheme 2. Piperidine led to compound 4b in high yield, as was the case in the parent, Zn-based, ligand-free system and the more recently reported Mn-based system [34,42]. Propargylamine 4c was obtained in moderate yield, while using a linear ketone in combination with pyrrolidine afforded compound 4d in 72% yield. Propargylamine 4e, bearing an ester moiety that can be used for further functionalization, was synthesized in good yield, while the primary amine-derived compound 4f was also successfully synthesized, albeit in moderate yield because of the stability of the intermediate imine. When the steric bulk of the linear ketone was increased, the yield dropped significantly, highlighting the crucial effect of steric hindrance on the outcome of this reaction (compound 4g). When an aliphatic alkyne was used in combination with N-phenylpiperazine, propargylamine 4h was obtained in 37% isolated yield. In order to assess the effect of a less functionalized aliphatic alkyne, 1-octyne was used, and compound 4i was isolated in 70% yield. Finally, cyclopentanone was chosen as a coupling partner and, as anticipated based on known reactivity trends, compound 4j was obtained in moderate yield [34,42]. Overall, complex 1 allows for lower catalyst loading when compared to simple zinc salts and is more robust under harsh, ambient conditions [34]; however, the limitations of this coupling reaction and the generally observed trends regarding substrate scope persist in this case as well.

[Scheme 2. Substrate scope of the reaction system under the optimal conditions. All reactions were performed on a 0.5 mmol scale, and isolated yields after column chromatography are shown in parentheses.]
Density Functional Theory (DFT) Optimized Structures and Highest Occupied Molecular Orbital-Lowest Unoccupied Molecular Orbital (HOMO-LUMO) Analysis

In order to calculate the ground-state geometries of the complexes, DFT calculations of [ZnL1(NCS)2] (1) and [Zn(L2)2] (2), as well as of the [ZnL3(NCS)2] (3) complex, have been performed, as described below. The DFT calculations predict five-fold coordination for both the [ZnL1(NCS)2] and [ZnL3(NCS)2] complexes, with the tridentate ligands HL1Cl and HL3Cl and two nitrogen atoms from thiocyanate ligands (Figure 3), thereby supporting the experimental X-ray diffraction (XRD) results. For complex 2, the DFT results show that two tridentate ligand molecules L2 coordinate the Zn(II) ion through the thiazole nitrogen, imine nitrogen and thiolate sulfur atoms, forming an octahedral complex with four fused five-membered chelate rings, in agreement with the experimental data. Selected bond lengths and values of valence angles are summarized in Table S5. The calculated geometric parameters of the mixed-ligand complexes are compared with the X-ray diffraction structures and show good agreement.

The HOMO-LUMO energies of the complexes provide information about the energetic behavior and stability of the complexes. The energy gap between the HOMO and LUMO determines the reactivity and kinetic stability of molecules [43-45]. The chemical hardness (η) is a good indicator of chemical stability. Molecules having a large energy gap are known as hard, and those having a small energy gap are known as soft molecules. The soft molecules are more polarizable than the hard ones because they need little energy for excitation [46,47]. The chemical potential (μ), hardness (η), softness (S), electronegativity (χ) and electrophilicity index (ω) of molecules are formulated by the equations [47]:

μ = (E_HOMO + E_LUMO)/2,  η = (E_LUMO − E_HOMO)/2,  S = 1/(2η),  χ = −(E_HOMO + E_LUMO)/2,  ω = μ²/(2η),

where E_HOMO and E_LUMO are the energies of the HOMO and LUMO orbitals. A negative chemical potential indicates that a complex is stable, in the sense that it does not decompose spontaneously into its elements. Hardness measures the resistance to change in the electron distribution in a molecule.
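The explicit equations were garbled in extraction, so the forms given above are the standard finite-difference (Koopmans-type) expressions; the softness convention S = 1/(2η) is one common choice and is an assumption here. They reproduce the values reported in Table 3, as the short check below shows for complex 1 in vacuum (E_LUMO = −2.803 eV, with E_HOMO chosen so that the gap is 2.167 eV).

```python
def reactivity_descriptors(E_homo, E_lumo):
    """Global reactivity descriptors from frontier-orbital energies (eV)."""
    mu = (E_homo + E_lumo) / 2.0        # chemical potential
    eta = (E_lumo - E_homo) / 2.0       # chemical hardness (half the gap)
    S = 1.0 / (2.0 * eta)               # softness (one common convention)
    chi = -mu                           # electronegativity
    omega = mu**2 / (2.0 * eta)         # electrophilicity index
    return {"mu": mu, "eta": eta, "S": S, "chi": chi, "omega": omega}

# complex 1 in vacuum: expect eta ~ 1.083, chi ~ 3.886, omega ~ 6.97 eV
print(reactivity_descriptors(E_homo=-4.970, E_lumo=-2.803))
```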
It has already been found that HOMO energies, negative values of LUMO energies and TD-DFT HOMO-LUMO gaps are generally less sensitive to the basis set [49]. The HOMO and LUMO and their energies were calculated to locate the high- and low-density regions in all complexes and are shown in Table 3. The negative values of the chemical potential (−3.886, −3.597 and −3.670 eV) show their stability, suggesting that these complexes do not undergo decomposition into their components. As shown in Table 3, the compound that has the lowest energy gap in comparison to the two other complexes is compound 1 (∆Egap is 2.167 eV in vacuum and 2.977 eV in toluene). This lower energy gap makes it the softest molecule. The magnitudes of the chemical hardness, derived from the HOMO-LUMO energy gap, for complexes 1, 2 and 3 have been found to be 1.083, 1.456, and 1.232 eV, respectively (Table 3). The chemical hardness (softness) value of complex 1 is the lowest (greatest) among all the investigated complexes, both in the gas phase and in toluene. Hence, complex 1 is found to be more reactive than all the other compounds, which is in agreement with the experimental catalytic data. The compound that has the lowest LUMO energy is compound 1 (E = −2.803 eV), which signifies that it can be the best electron acceptor [50]. Besides, the electrophilicity index values ω given in Table 3 for the complexes (6.971, 4.443 and 5.466 eV, respectively), related to the chemical potential and hardness, indicate that compound 1 is the strongest electrophile among all compounds. Compound 1 possesses a higher electronegativity value (χ = 3.886 eV) than all other compounds, a characteristic that could explain its superior activity in catalysis when compared to the other complexes evaluated herein [34]. The results were confirmed by using another DFT model, denoted BVP86/6-311G(d,p), which also yields the lowest HOMO-LUMO energy gap for complex 1. The differences between TD-DFT gaps calculated with the selected functionals are small. For instance, B3LYP and BVP86 predict relatively good HOMO and LUMO energies for the investigated complexes, with errors ranging from 0.56 to 0.73 eV. X-Ray Crystallography The molecular structures of complexes 1 and 2 were determined by single-crystal X-ray diffraction. Crystallographic data and refinement details are given in Table S7. The X-ray intensity data for 1 were collected at room temperature on a Nonius Kappa CCD diffractometer equipped with a graphite monochromator, utilizing MoKα radiation (λ = 0.71073 Å).
Data reduction and cell refinement were carried out using DENZO and SCALEPACK [51]. Diffraction data for 2 were collected at room temperature with an Agilent SuperNova dual source diffractometer using an Atlas detector and equipped with mirror-monochromated MoKα radiation (λ = 0.71073 Å). The data were processed using CrysAlis PRO [52]. All the structures were solved using SIR-92 [53] and refined against F2 on all data by full-matrix least-squares with SHELXL-2014 [54]. All non-hydrogen atoms were refined anisotropically. The water-bonded hydrogen atoms in 1 were located in a difference map and refined with distance restraints (DFIX) with O-H = 0.96 Å and with Uiso(H) = 1.5Ueq(O). All other hydrogen atoms were included in the model at geometrically calculated positions and refined using a riding model. Crystallographic data for the structures reported in this paper have been deposited with the CCDC: 2021000 (for 1) and 2021001 (for 2). CCDC 2021000 and 2021001 contain the supplementary crystallographic data for this paper. These data can be obtained free of charge via http://www.ccdc.cam.ac.uk/conts/retrieving.html (or from the CCDC, 12 Union Road, Cambridge CB2 1EZ, UK; Fax: +44 1223 336033; E-mail: deposit@ccdc.cam.ac.uk). Catalysis General Procedure A Teflon-sealed screw-cap pressure tube equipped with a stirring bar and a rubber septum, or a screw-cap vial, was charged with x mol% of the catalyst and 0.5 eq. of the additive (MgSO4), unless otherwise noted. Under air, 0.5 mmol of the amine was added and the mixture was stirred until the solid was partially dissolved. 0.5 mmol of the alkyne was added and the mixture was stirred at room temperature. Finally, 0.5 mmol of the ketone was added and the reaction was allowed to stir in a preheated oil bath for the appropriate time. After cooling to room temperature, ethyl acetate was added (2 × 5 mL) and the mixture was stirred for 5 min. The mixture was filtered through a short silica gel plug, in order to remove inorganic impurities, concentrated under vacuum and loaded atop a silica gel column. Gradient column chromatography with ethyl acetate/petroleum ether furnished the desired products. All products were characterized by 1H-NMR and 13C{1H}-NMR, which were all in agreement with the assigned structures and the data reported in the literature ([34,42] and references cited therein). The DFT calculations of the newly synthesized complexes 1 and 2, as well as of the [ZnL3(NCS)2] (3) complex, have been carried out for their structural determination, for the HOMO-LUMO study and to calculate reactivity descriptors. The lower kinetic stability and higher reactivity of complex 1 compared to the other two complexes follow from its lower HOMO-LUMO energy gap value, in agreement with experimental data. The electrophilicity index value ω (6.971 eV in the gas phase and 5.908 eV in toluene) indicates that compound 1 is the strongest electrophile among all investigated compounds. In addition, compound 1 possesses a higher electronegativity value (χ = 3.886 eV) than all other compounds. Therefore, it is the best electron acceptor, and that feature can plausibly explain its better performance as a Lewis acid catalyst in the ketone-amine-alkyne coupling. Table S1. Structural parameters correlating the geometry of five-coordinate [ZnLX2] complexes (L = tridentate hydrazone-based ligand; X = pseudohalide, halide or DMSO), Table S2. Hydrogen-bond parameters for [ZnL1(NCS)2]·2H2O (1), Table S3. Hydrogen-bond parameters for [Zn(L2)2] (2), Table S4.
Intermolecular π···π interaction parameters for complex 2, Table S5. Selected bond lengths and valence angles for the complexes, Table S6. E_HOMO, E_LUMO and their energy gaps calculated by using TD-DFT in vacuum at different levels of theory, Table S7. Crystal data and structure refinement details for 1 and 2.
2020-09-10T10:21:43.525Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "f7ad994b657d42831e078bb0ae0f49e954e3a6fa", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/25/18/4043/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da7ad57e43b2f43a0487d70e70bc7e8dde17538f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
16264497
pes2o/s2orc
v3-fos-license
A Role for Bone Morphogenetic Protein-4 in Lymph Node Vascular Remodeling and Primary Tumor Growth Running title: BMP-4 in vascular remodeling and tumor growth Abstract Lymph node metastasis, an early and prognostically important event in the progression of many human cancers, is associated with expression of vascular endothelial growth factor-D (VEGF-D). Changes to lymph node vasculature that occur during malignant progression may create a metastatic niche capable of attracting and supporting tumor cells. In this study, we sought to characterize molecules expressed in lymph node endothelium that could represent therapeutic or prognostic targets. Differential mRNA expression profiling of endothelial cells from lymph nodes that drained metastatic or non-metastatic primary tumors revealed genes associated with tumor progression, in particular bone morphogenetic protein-4 (BMP-4). Metastasis driven by VEGF-D was associated with reduced BMP-4 expression in high endothelial venules, where BMP-4 loss could remodel the typical high-walled phenotype to thin-walled vessels. VEGF-D expression was sufficient to suppress proliferation of the more typical BMP-4-expressing high endothelial venules in favor of remodeled vessels, and mechanistic studies indicated that VEGFR-2 contributed to high endothelial venule proliferation and remodeling. BMP-4 could regulate high endothelial venule phenotype and cellular function, thereby determining morphology and proliferation responses. Notably, therapeutic administration of BMP-4 suppressed primary tumor growth, acting both at the level of tumor cells and tumor stromal cells. Together, our results show that VEGF-D-driven metastasis induces vascular remodeling in lymph nodes. Further, they implicate BMP-4 as a negative regulator of this process, suggesting its potential utility as a prognostic marker or anti-tumor agent. Introduction Lymphatic dissemination is considered to be an early and crucial route of metastasis for many cancers (1,2). Blind-ended lymphatic capillaries drain fluid, cells, and macromolecules from the tissue interstitium into a hierarchy of vessels punctuated by lymph nodes (LN), which provide immunologic surveillance for a particular lymphatic drainage basin (3). The presence of metastatic tumor cells in the "sentinel" LN draining a tumor site is a key factor in disease management: substantial clinical data indicate the adverse prognostic significance of tumor-positive LNs for many tumor types (4,5). However, a clear understanding of the mechanistic role of LNs in tumor progression is still lacking.
VEGF-D and VEGF-C are important inducers of the growth and differentiation of blood vessels and lymphatics. When overexpressed in experimental tumors, these growth factors elicit angiogenesis and lymphangiogenesis, and are furthermore associated with increased metastasis to LNs and distant organs (1). VEGF-D and VEGF-C expression is also associated with metastasis to LNs in many human cancers, and is independently associated with poor prognosis (6). Recently, it has emerged that modulation of lymphatics and blood vessels, including high endothelial venules (HEV), vessels specialized for leukocyte trafficking (7,8), also occurs in draining LNs of some tumors (9,10). Such alterations can precede the arrival of metastatic cells (7,11-13), and members of the VEGF family have been implicated in these changes (12-15). The importance of alterations to LN endothelium is highlighted by studies of human breast cancer: lymphangiogenesis or angiogenesis within metastatic tumor deposits in sentinel LNs was found to be associated with, and sometimes independently predictive of, distant metastasis or survival (9,16,17). Here, we sought to characterize changes to the vasculature within tumor-draining LNs, to identify molecules with prognostic or therapeutic potential. We compared the molecular profiles of enriched endothelial cell (EC) populations from LNs draining nonmetastatic tumors with those from LNs draining metastatic (VEGF-D-overexpressing) tumors. BMP-4 was downregulated in the HEVs of LNs draining metastatic tumors. This observation was linked with the remodeling of HEVs induced by VEGF-D-driven metastasis, thus implicating BMP-4 as a regulator of HEV morphology and cell function. Furthermore, therapeutically applied BMP-4 protein inhibited primary tumor growth. This study indicates that VEGF-D's prometastatic activity includes remodeling of specialized LN endothelium, and identifies new roles for BMP-4 in cancer and vascular biology. Materials and Methods Lists of antibodies, primers and detailed protocols are contained in the Supplementary Methods section. Metastatic and nonmetastatic xenograft tumor models 293 EBNA-1 tumor cell lines stably expressing full-length human VEGF-D (293-VEGF-D), human VEGF-C (293-VEGF-C), or vector alone (293-Apex) were established in SCID/NOD mice as described (18). 293 EBNA-1 cells were a gift from Kari Alitalo, University of Helsinki, Finland. Regular growth and morphology of the transfected cell lines were monitored routinely and growth factor expression was verified by Western blot prior to each experiment. LNs were analyzed within the timeframe in which metastasis typically occurs in this model; that is, 2 to 4 weeks post-implantation. All animal experiments were carried out with the approval of the Institutional Animal Ethics Committee. Enrichment of LN EC populations Draining LNs of metastatic or nonmetastatic tumors, pooled from 1 to 5 mice, were enzymatically digested, then tumor cells and leukocytes were depleted using immunomagnetic selection (Miltenyi Biotec) for class I HLA and CD16/CD32. The remaining cells were cultured in EGM-2 MV media (Lonza) before enrichment for ECs by selection for podoplanin (19). See Supplementary Fig. S1 for the detailed procedure.
Microarray analysis Duplicate samples of LN EC total RNA (RNeasy Plus kit, Qiagen) were applied to Affymetrix expression arrays (430 2.0; Australian Genome Research Facility). Raw intensity data were analyzed using GeneChip Operating Software (Affymetrix), and profiles were compared via Robust Multiarray Analysis and linear modeling using AffylmGUI software (20). Microarray data are deposited in NCBI's Gene Expression Omnibus; series accession number GSE31123 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE31123). Human LNs Breast cancer-associated LNs with or without histologically identifiable metastases (n = 7 patients, 22 LNs), or control non-tumor-associated LNs collected during cardiac surgery (n = 3 patients), were obtained as a pilot study. Access to deidentified tissue (formalin fixed, paraffin embedded) was provided by the Pathology Department, Royal Melbourne Hospital, with permission from the Melbourne Health Human Research Ethics Committee. Immunostaining and image quantitation For BMP-4/MECA-79 quantitation, 2 to 3 sections of each tumor-draining LN (~6 per group) were immunostained (18). For HEV morphometry, the luminal and basal edges of HEVs were traced using Metamorph Premier (Molecular Devices), to determine lumen area, average vessel wall width and endothelial area using Integrated Morphometry Analysis parameters (journal available on request). HEVs with 50% or more of their circumference staining for BMP-4 were designated BMP-4-high; otherwise BMP-4-low. Data were analyzed according to a linear mixed model (Supplementary Methods). Treatment of ear-draining LNs with recombinant VEGF-D One microgram of purified VEGF-D dimers (0.05 mg/mL; Vegenics Ltd.) in PBS, or PBS alone as control, was injected intradermally into the ears of SCID/NOD mice for 3 consecutive days. On day 4, BrdU (Invitrogen) was injected intraperitoneally, and ear-draining (superficial parotid) LNs were harvested 2 days later. Treatment of tumors with neutralizing antibodies Mice bearing metastatic (VEGF-D-overexpressing) tumors received 3-times-weekly intraperitoneal injections of neutralizing antibodies (800 µg) to VEGF receptor-2 (VEGFR-2; DC101; Imclone) or VEGF-D (VD1; ref. 21), or PBS. For analysis of HEVs, sections of LNs draining nonmetastatic and antibody-treated metastatic tumors were used from one experiment. LNs of PBS-treated metastatic tumors in which HEVs were not obscured by tumor infiltration were included from an identically conducted experiment as a control. BMP-4 therapeutic model Tumor-bearing mice were injected intraperitoneally from day 1, 3 times weekly, with 1.4 µg of human BMP-4 (R&D Systems) in 200 µL PBS with 0.652 mg/mL BSA, or a vehicle control of PBS with 0.32 mmol/L HCl and 1 mg/mL BSA, until day 12 or experiment termination. Serum was sampled 60 minutes post-treatment and BMP-4 was quantified by ELISA (R&D). Statistical analysis Data were compared using a 2-tailed Student t test, or Fisher's exact test for comparison of proportions. Graphed data represent mean ± SE unless specified otherwise. Enrichment of endothelial cells from tumor-draining LNs A model of VEGF-D-driven tumor metastasis to regional LNs was used to examine molecular changes in LN endothelium during metastasis (Fig.
1A). Overexpression of VEGF-D in 293-EBNA-1 tumor cells drives metastasis to local LNs within 2 to 4 weeks of implantation in approximately 80% of cases. Vector-transfected tumor cells (no VEGF-D) served as a nonmetastatic control (18). Podoplanin (19) was used as a highly expressed, protease-resistant selection marker to derive cell populations enriched for lymphatic ECs and related EC types, which may respond to VEGF-D (Fig. 1B). Microarray analysis revealed expression of EC-characteristic genes, including VEGFR-2, neuropilin-1 and neuropilin-2, endothelial nitric oxide synthase, CD34 and TIE-2; while desmin and calponin-1, found in fibroblastic lineages, and chondroitin sulfate proteoglycan 4 (NG-2 antigen), characteristic of pericytes, were absent. These findings confirmed that the podoplanin+ve cells were enriched for ECs. The LN ECs heterogeneously expressed ICAM-1 and endoglin, markers of endothelial activation in inflammation and angiogenesis, respectively (Fig. 1C; Supplementary Methods). Identification of endothelial-expressed genes modulated during metastasis to LNs LN ECs from metastatic and nonmetastatic tumor models were compared by microarray (Fig. 2A). Of the top 10 differentially expressed genes (ranked by adjusted P value), 9 were downregulated in LNs draining metastatic tumors compared with their nonmetastatic counterparts, and all 10 showed more than a 2-fold difference in expression (Table 1; Fig. 2B). Candidates were subsequently selected for further analysis based on relevance to endothelial and cancer biology. qRT-PCR validated the downregulation of Bmp4, Unc5c, Cfh, Emcn, and Gpr39 in ECs from LNs draining metastatic tumors, and the upregulation of Nova1 (Fig. 2C). Bmp4 showed the greatest abundance and more than a 5-fold difference in expression, and was thus selected for further investigation. Localization of BMP-4 protein in HEVs and differential expression in metastasis Immunohistochemistry showed that BMP-4 protein was localized to HEVs (Fig. 3A), confirmed by costaining for the specific MECA-79 epitope (22). BMP-4 protein was present in a subset of HEVs in LNs draining both nonmetastatic and metastatic tumors (Fig. 3A), and in LNs from non-tumor-bearing SCID/NOD and immunocompetent mice (Fig. 3A, data not shown). HEVs did not endogenously express podoplanin (Supplementary Fig. S2), suggesting that podoplanin probably became upregulated in HEV ECs during the brief culturing between extraction from the LN and purification for microarray analysis (23,24). Although MECA-79 stained the surface of HEV ECs, BMP-4 seemed primarily cytoplasmic (Fig. 3A inset), implying that HEV ECs express BMP-4 protein. No other sites of BMP-4 localization were observed in the LN or primary tumor. This supported the conclusion that HEV ECs are the main source of BMP-4 mRNA and protein in LNs. Quantitation of staining revealed that HEV-expressed BMP-4 was significantly reduced (by ~50%) in LNs draining metastatic versus nonmetastatic tumors (P < 0.001; Fig. 3B and C). This illustrated a shift from predominately BMP-4-high to predominately BMP-4-low HEVs in LNs draining nonmetastatic versus metastatic tumors, respectively; however, both LN types contained some BMP-4-high and some BMP-4-low HEVs (Fig. 3B and C). Therefore, the downregulation of BMP-4 mRNA was reflected at the protein level in vivo. BMP-4 loss marks HEV remodeling in cancer We examined LNs for evidence of tumor-induced HEV remodeling (7), and explored whether VEGF-D or BMP-4 was associated with this process (Fig.
4A). In LNs draining nonmetastatic tumors, BMP-4-high HEVs had significantly smaller lumen areas than BMP-4-low HEVs (P = 0.0017; Fig. 4B). In LNs draining metastatic tumors, however, the BMP-4-high HEVs were more dilated than in the nonmetastatic context (P = 0.028; Fig. 4B). Significantly, BMP-4-high HEVs had thicker vessel walls than BMP-4-low HEVs in all LNs (P < 0.001; Fig. 4B), suggesting that BMP-4 expression was closely linked with HEV morphology. Although the remaining BMP-4-high HEVs in LNs draining metastatic tumors largely retained their greater vessel wall width, there was a strong trend suggesting reduced width compared with those in LNs draining nonmetastatic tumors, indicating that VEGF-D-driven metastasis could affect the endothelial width of BMP-4-high HEVs (P = 0.064; Fig. 4B). We also observed remodeled HEVs in a pilot study of human breast cancer-associated LNs with or without histologically identifiable metastasis (Fig. 4E), confirming its occurrence in human disease (7). We next investigated whether HEV remodeling involved EC proliferation. Interestingly, BMP-4-high HEV ECs in LNs draining metastatic tumors had a significantly lower proliferation rate than those from the nonmetastatic model (P = 0.026; Fig. 4C). Furthermore, BMP-4-low HEV ECs in LNs draining metastatic tumors had a significantly higher proliferation rate than the BMP-4-high HEV ECs (P = 0.015; Fig. 4C). These results indicated that the proliferation response of HEV ECs to tumor-secreted VEGF-D may be modulated by BMP-4; another way in which VEGF-D-driven metastasis may induce remodeling of HEV characteristics via reduction of BMP-4 expression. The role of VEGF-D and VEGFR-2 in HEV remodeling To determine whether HEVs could respond directly to tumor-secreted human VEGF-D, we examined VEGFR-2 and VEGFR-3 expression in LNs. VEGFR-2 was expressed on most HEVs, blood vessel capillaries and lymphatics (Fig. 4D). VEGFR-3 was strongly expressed on lymphatics, but was essentially absent from HEVs. Thus, HEVs are capable of responding to VEGFR-2 ligands. In vivo approaches were utilized to investigate the specific pathways controlling HEV remodeling. Injection of VEGF-D into the mouse ear mimics tumor-secreted growth factor draining to regional LNs. After 3 days of VEGF-D treatment, proliferation of BMP-4-high HEV ECs was decreased (P = 0.034; Fig. 5A), suggesting that VEGF-D was responsible for the effect observed in tumor-draining LNs (Fig. 4C), and that suppression of proliferation in BMP-4-high HEVs by VEGF-D may occur early in the metastatic process. Alteration of lumen area, vessel wall width and BMP-4 expression may require a longer stimulation period, as none of these was affected in this experiment (Fig. 5A and B); however, BMP-4-high HEVs again exhibited significantly thicker vessel walls (Fig. 5A). Effects of exogenous BMP-4 on tumor progression As this study was designed to identify and analyze molecular targets with prognostic and/or therapeutic potential, we established a therapeutic model to determine the effects of exogenously administered BMP-4. Activity and stability of recombinant human BMP-4 were verified by bioassay (Supplementary Fig. S3A; Supplementary Methods). As shown in Fig.
6A, BMP-4 inhibited the exponential growth of VEGF-D-overexpressing primary tumors by approximately 50% (day 20, P = 0.056; day 22, P = 0.036; day 24, P = 0.080). In addition, similar tumors overexpressing VEGF-C were reduced in size by approximately 56% by BMP-4 treatment (day 15, P = 0.067; day 18, P = 0.021; day 23, P = 0.026). BMP-4 could thus inhibit tumor growth driven by 2 different lymphangiogenic/angiogenic growth factors. ELISA results confirmed that injected BMP-4 reached the systemic circulation, at approximately 1,200 pg/mL in serum after 60 minutes (Fig. 6B). Interestingly, under the conditions and timecourse of these experiments the BMP-4 treatment did not seem to affect metastasis to LNs or HEV morphology (Fig. 6C and data not shown). Analysis of HEVs did reveal a trend suggesting that in metastasis-positive LNs draining VEGF-D-overexpressing tumors, more BMP-4-high HEVs were observed under BMP-4 treatment than for the control (mean ± SE: BMP-4, 40.9 ± 10.1; vehicle, 29.5 ± 10.0; n = 5, P = 0.16). Furthermore, BMP-4-high HEVs again exhibited thicker vessel walls than BMP-4-low HEVs in both treatment conditions (P < 0.001; Supplementary Fig. S3B), confirming the importance of endogenous BMP-4 expression. Mechanisms of BMP-4-induced tumor growth suppression To clarify the mechanism by which BMP-4 suppressed primary tumor growth, we first examined the distribution of its receptors. BMPs bind a heterodimeric complex of type I and type II receptors (25). Immunohistochemistry for BMPR-II revealed broad expression on multiple cell types including tumor cells, stroma, and the endothelium of large blood vessels (Fig. 6D). Microarray analysis indicated that the VEGF-D-overexpressing tumor cells expressed BMPR2, as well as BMPR1A and ACTR1A, but not BMPR1B (Supplementary Table S2), whereas immunocytochemistry confirmed expression of BMPR-IA and BMPR-II protein on tumor cells and tumor-derived fibroblasts (Supplementary Fig. S4A; Supplementary Methods). Interestingly, Western blotting revealed that BMPR-II protein was more abundant in BMP-4-treated than control-treated VEGF-D-overexpressing metastatic tumors (P = 0.048; Fig. 6E and Supplementary Methods), potentially representing a feedback loop that could contribute to tumor suppression. Discussion Changes to the blood or lymphatic vasculature in tumor-draining LNs have prognostic significance in cancer (9,16,17,26), and may facilitate metastasis (11-13). Understanding the mechanisms and functional consequences of these alterations will be critical in determining the overall role of LN metastasis in tumor progression, and could advance prognostication and treatment for cancer patients. Here, we have identified molecules involved in the remodeling of HEVs in tumor-draining LNs, and an additional role for BMP-4 in suppressing primary tumor growth.
Microarray analysis of enriched LN EC populations revealed differential expression of several genes with significance to endothelial and tumor biology. Analysis of isolated EC subtypes has enabled identification of important functional molecules (ref. 27 and manuscript submitted). Although our isolation strategy utilized podoplanin, commonly used to distinguish lymphatic endothelium, immunohistochemical validation revealed BMP-4 to be differentially expressed in HEVs, a specialized venous endothelial type that did not express podoplanin in vivo. Subsequent to observations that blood vascular ECs cocultured with lymphatic ECs could spontaneously acquire expression of lymphatic-characteristic molecules including podoplanin (23), it has been shown that substantial plasticity exists between arterial, venous, and lymphatic EC lineages, controlled by specific transcription factors and reflecting their common embryonic origin (24). Our observations provide further confirmation of this plasticity and relatedness. Another similar study used microarray analysis of isolated lymphatic ECs from primary tumors, which were briefly cultured, to identify novel markers with prognostic significance (28). Our study advances upon this by examining the endothelium of tumor-draining LNs. The morphologic changes we observed to be associated with VEGF-D-driven metastasis and BMP-4 reduction, that is, remodeling of the normally "high"-walled HEVs into flat-walled, more dilated vessels with altered proliferation responses, were consistent with those observed in mouse models and human breast cancer (7). Other investigators observed suppression of the HEV-expressed lymphotactic chemokine CCL21 and reduced lymphocyte recruitment in tumor-draining LNs (29). Such physical and molecular features of HEV endothelium are integral to their role in trafficking leukocytes into the LN to facilitate immune responses (8). Although these investigators analyzed total HEVs, we identified HEV subtypes (BMP-4-high and BMP-4-low) which can respond differentially to tumor-associated stimuli. Although the functional significance of HEV height is poorly understood, flattening of HEV ECs seems to reduce leukocyte transmigration rates (30). Lower branching-order HEVs were observed to support lower rates of lymphocyte adhesion than higher-order HEVs (29); interestingly, in our studies lower-order HEVs tended to have flatter endothelium and lower BMP-4 expression than higher-order HEVs. It is possible that HEV remodeling may echo homeostatic differences in the morphologic, molecular, and functional characteristics of different branching-order HEVs. Ultimately, tumor-induced HEV remodeling could assist in generating a metastatic niche (31): proliferating, dilated blood vessels derived from remodeled HEVs could enrich the nutrient and oxygen supply to a LN, whereas impaired immune function would promote tumor cell survival. The proximity of remodeled HEVs and lymphatic vessels could provide a shortcut for metastatic cells into the blood vasculature and thus the systemic circulation (31,32). Our study provides an important contribution to understanding the molecular mechanisms driving tumor-induced HEV remodeling (Fig.
5E). The effects of BMP-4 and VEGF-D-driven metastasis on HEV vessel wall width were strongly evident, whereas differences in lumen area and proliferation were more dynamic and may be sensitive to other factors. The differing impacts of VEGFR-2 and VEGF-D blockade suggest involvement of another VEGFR-2 ligand. Several studies have implicated VEGF-A in stimulating HEV growth and remodeling in immune responses (33,34); thus endogenous VEGF-A could contribute to VEGFR-2-mediated HEV dilation in tumor-draining LNs. In addition, VEGF-A might be involved in the differential proliferative response of BMP-4-high and BMP-4-low HEVs to VEGF-D. BMP-4 can increase expression and phosphorylation of VEGFR-2 in ECs, thus enhancing responsiveness to autocrine or paracrine VEGF-A (35). BMP-4 itself could signal to ECs in an autocrine manner (36), and might upregulate VEGF-A expression by LN stromal cells (34,37), thus potentiating a VEGF-A/VEGFR-2 signaling loop. VEGF-D may then inhibit proliferation of BMP-4-high HEV ECs in the tumor context by competing with VEGF-A for binding to VEGFR-2. As a member of the TGF-β superfamily of multipotent cytokines, the role of BMP-4 in tumor progression can be complex and highly context specific (25,39). We showed that while endogenously expressed BMP-4 regulates HEVs, exogenous BMP-4 can restrict primary tumor growth. BMP-4 is also known to induce apoptosis of other tumor cell lines (40,41) and microvascular ECs (42), although in other studies proangiogenic responses were observed, possibly due to potentiation of VEGF-A/VEGFR-2 signaling (35). Our data suggest that lymphatic ECs may respond to BMP-4 in a similar way. An increase in proliferation of tumor-derived fibroblasts stimulated with BMP-4 in vitro is intriguing considering that cancer-associated fibroblasts are commonly implicated in promoting tumorigenesis (43). The upregulation of BMPR-II expression in BMP-4-treated tumors recapitulates a similar observation in Xenopus embryos indicating that Bmpr2 is a target gene of BMP-4 signaling (44). Expression of several other regulators of BMP-4 signaling is also induced by BMP-4, raising the possibility that blockade of relevant signaling inhibitors might enhance the efficacy of BMP-4 treatment. Previous in vivo studies have described antitumorigenic effects of BMP-4 for several tumor types (40,45-47), as well as protumorigenic effects for some, but thus far only one other study, using a model of glioblastoma multiforme, has demonstrated an antitumor effect of therapeutically administered recombinant BMP-4 (48). Although the authors identified a prodifferentiation effect on tumor stem cells, we noted that VEGF-D is highly expressed in glioblastoma multiforme (49). Our study adds weight to the potential of BMP-4 as an antitumor agent by showing that it can inhibit tumor growth driven by 2 different lymphangiogenic/angiogenic factors through action on both tumor cells and stroma.
The context-specific nature of BMP-4 signaling does compel careful tuning of BMP-4 targeting and dosage to ensure a robust antitumor effect. A more constant dosage of BMP-4, or a delivery system more targeted to the LN, may help clarify whether therapeutically administered BMP-4 can reverse HEV remodeling or inhibit metastasis. Nevertheless, reduction of BMP-4 expression in HEVs is an important early molecular indicator of remodeling, as it precedes loss of MECA-79 upon incorporation into the vasculature of the tumor deposit (7). Clinical studies will establish whether BMP-4 may represent a convenient surrogate marker of HEV remodeling in cancer. Furthermore, BMP-4 or HEV remodeling may serve as indicators of systemic or distant effects of prometastatic tumor-derived factors such as VEGF-D, and provide prognostic information relevant to metastasis, treatment response, or patient outcome. Our data further highlight the need to better understand the functional and prognostic significance of the LN, and in particular its vasculature, to cancer metastasis, as well as the potential of BMP-4 as a multipotent antitumor agent. Figure 1. Isolation of ECs from tumor-draining LNs. A, schematic of the approach to investigate differentially expressed genes in enriched ECs from LNs draining metastatic or nonmetastatic tumors. B, immunomagnetic selection for podoplanin-enriched populations of LN ECs, as confirmed by flow cytometry. Gray line, isotype control; percentages represent proportions within the podoplanin+ve gate (isotype control proportion subtracted). C, enriched EC populations from LNs draining metastatic tumors were analyzed for ICAM-1 (green) and endoglin expression by immunofluorescence or flow cytometry. Figure 2. Identification of differentially expressed genes in LN ECs. A, ECs from LNs draining metastatic or nonmetastatic tumors (labeled nonmetastatic or metastatic LN EC) were compared by microarray. B, a volcano plot of log odds of differential expression against fold change illustrates significantly differentially expressed genes. C, for selected genes, differential expression was validated by qRT-PCR. Shown are 2 representative examples (1 and 2) of pairwise comparisons. Data are mean ± SD of triplicate reactions. *, P < 0.05; **, P < 0.01; ***, P < 0.001. Figure 6. Therapeutic administration of BMP-4. A, BMP-4 or vehicle control was administered to mice from day 1 until day 12 or experiment termination, and tumor volume was measured (n = 9-11). B, detection of BMP-4 in serum by ELISA (n = 3). C, LNs were scored histologically positive or negative for metastatic cells. D, immunohistochemistry detecting BMPR-II expression on multiple tumor cell types including blood vessels, inset. E, Western blot detecting BMPR-II in cultured tumor cells and metastatic (VEGF-D) tumor lysates, and densitometric quantitation of expression (n = 3; full-length blot, Supplementary Fig. S5). Conflicts of interest: M.G. Achen and S.A. Stacker: commercial research grant, Imclone; ownership interest, Circadian Technologies; consultant/advisory board, Vegenics. R. Shayan: ownership interest, Circadian Technologies. The other authors disclosed no potential conflicts of interest.
2017-04-15T02:39:44.821Z
2011-10-15T00:00:00.000
{ "year": 2011, "sha1": "039caf7938be98db8af956a385e42fd8b35b263d", "oa_license": "CCBY", "oa_url": "https://aacr.figshare.com/articles/journal_contribution/Supplementary_Figure_3_from_A_Role_for_Bone_Morphogenetic_Protein-4_in_Lymph_Node_Vascular_Remodeling_and_Primary_Tumor_Growth/22391046/1/files/39836574.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "9c777aaf7abcb0fe05b7f619a5bc59ca578326cd", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
66069531
pes2o/s2orc
v3-fos-license
LHC Optics Determination with Proton Tracks Measured in the Roman Pot Detectors of the TOTEM Experiment The TOTEM experiment at the LHC is equipped with near-beam movable devices -- called Roman Pots (RP) -- which detect protons scattered at the interaction point (IP5) arriving at the detectors through the magnet lattice of the LHC. Proton kinematics at IP5 is reconstructed from the positions and angles measured by the RP detectors, on the basis of the transport matrix between IP5 and the RP locations. The precision of the optics determination is therefore of key importance for the experiment. TOTEM developed a novel method of machine optics determination making use of angle-position distributions of elastically scattered protons observed in the RP detectors, together with data retrieved from several machine databases. The method has been successfully applied to the data samples registered in 2010 and 2011. The studies show that the transport matrix can be estimated with a precision better than 1%. THE ROMAN POTS OF THE TOTEM EXPERIMENT Proton-proton elastic scattering was measured by the TOTEM experiment at the CERN Large Hadron Collider at √s = 7 TeV in dedicated runs [1,2]. To detect leading protons scattered at angles as small as 1 µrad, silicon sensors are placed in movable beam-pipe insertions, so-called Roman Pots (RP), located symmetrically on either side of the LHC intersection point IP5 at distances up to 220 m from it. Each RP station is composed of two units separated by a distance of about 5 m. A unit consists of 3 RPs, two approaching the outgoing beam vertically and one horizontally, allowing for a partial overlap between horizontal and vertical detectors and an alignment precision of 10 µm. PROTON TRANSPORT FROM IP5 TO THE ROMAN POTS Scattered protons are detected in the Roman Pots after having moved through a segment of the LHC lattice containing 29 magnets per beam. The trajectory of protons with transverse positions (x*, y*) and angles (Θ*_x, Θ*_y) at IP5 is described by the linear formula

(x, Θ_x, y, Θ_y)_RP = T(s; M) · (x*, Θ*_x, y*, Θ*_y),   (1)

where the transport matrix T is defined by the optical functions. The magnification v_{x,y} = √(β_{x,y}/β*) cos ∆φ_{x,y} and the effective length L_{x,y} = √(β_{x,y} β*) sin ∆φ_{x,y} are functions of the betatron amplitude β_{x,y} and the relative phase advance ∆φ_{x,y} = ∫ from IP to RP of β_{x,y}(s)^(-1) ds, and are particularly important for the proton kinematics reconstruction. The coupling coefficients m_{i,j} are close to 0 and the vertex contributions cancel due to the anti-symmetry of the scattering angles. Therefore, the kinematics of elastically scattered protons at IP5 can be reconstructed from Equation (1) as

Θ*_y ≈ y_RP / L_y,   Θ*_x ≈ Θ_{x,RP} / (dL_x/ds),

where "RP" defines the measurement location. As the values of the reconstructed angles are directly inversely proportional to the optical functions, the accuracy of the optics defines the systematic errors of the final physics results. The proton transport matrix T(s; M) over a distance s is defined by the machine settings M. It is calculated with the MAD-X [3] code for each group of runs with identical optics, based on several data sources. The magnet currents are retrieved from TIMBER [4] and are converted to strengths with LSA [5], which implements the conversion curves measured by FIDEL [6]. The WISE database [7] contains the measured imperfections (field harmonics, magnet displacements and rotations).
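To make the relations above concrete, the short Python sketch below evaluates the magnification and effective length from the betatron amplitude and phase advance, and applies the simple vertical-angle reconstruction. The numerical values (β at the RP, phase advance, track position) are placeholders chosen for illustration, not actual LHC settings.

```python
# Illustrative optics relations for the IP5 -> RP transport.
# All numbers are placeholders, not real machine settings.
import math

def magnification(beta_rp: float, beta_star: float, dphi: float) -> float:
    """v = sqrt(beta_RP / beta*) * cos(delta_phi)."""
    return math.sqrt(beta_rp / beta_star) * math.cos(dphi)

def effective_length(beta_rp: float, beta_star: float, dphi: float) -> float:
    """L = sqrt(beta_RP * beta*) * sin(delta_phi)."""
    return math.sqrt(beta_rp * beta_star) * math.sin(dphi)

# Placeholder optics: beta* = 3.5 m, beta at the RP = 100 m,
# vertical phase advance close to pi/2 (where L_y is large).
L_y = effective_length(beta_rp=100.0, beta_star=3.5, dphi=math.radians(85))
v_y = magnification(beta_rp=100.0, beta_star=3.5, dphi=math.radians(85))

# Vertical scattering angle reconstructed from a measured track position:
y_rp = 2.0e-3  # 2 mm at the RP
theta_y_star = y_rp / L_y
print(f"L_y = {L_y:.1f} m, v_y = {v_y:.2f}, "
      f"Theta*_y = {theta_y_star * 1e6:.1f} urad")
```

Because the reconstructed angle scales as 1/L_y, a relative error on the effective length translates directly into a relative error on the measured scattering angle, which is why the optics precision matters so much here.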
However, the lattice is subject to additional imperfections ∆M, not measured well enough so far, which alter the transport matrix by ∆T:

T(s; M + ∆M) = T(s; M) + ∆T.

The 5-10% precision of the ∆β/β beating measurement does not allow ∆T to be estimated with the accuracy required by the TOTEM physics program. However, the magnitude of |∆T| can be evaluated from the tolerances of the LHC imperfections, of which the most important are: • Strength conversion error I → B, σ(B)/B ≈ 10^-3 • Beam momentum offset σ(p)/p ≈ 10^-3. Their impact on the optical functions is presented in Table 1. It is clearly visible that the imperfections of the inner triplet have the largest impact. Generally, as can be seen in Table 1, for large-β* optics the magnitude of ∆T is sufficiently small from the viewpoint of data analysis and therefore ∆T does not need to be precisely estimated. However, the low-β* optics sensitivity to the machine imperfections is significant and cannot be neglected. Fortunately, in this case ∆T can be determined precisely enough from the proton tracks in the Roman Pots. Table 1: Sensitivity of the vertical effective length L_y to magnet strengths and beam momentum perturbed by 10^-3 for low- and large-β* optics. CONSTRAINTS FROM PROTON TRACKS IN THE ROMAN POTS The elements of the transport matrix are functions of the betatron amplitudes β_{x,y} and the phase advances φ_{x,y}. Therefore they are mutually related. Moreover, elastic scattering ensures that the scattering angles in both arms are identical:

Θ*_{x,y,b1} = −Θ*_{x,y,b2},

which allows ratios between the effective lengths of the two beams to be computed. From Equation (1) we get

R_1 ≡ y_{b1}/y_{b2} ≈ L_{y,b1}/L_{y,b2},   R_2 ≡ Θ_{y,b1}/Θ_{y,b2} ≈ (dL_{y,b1}/ds)/(dL_{y,b2}/ds),

where b1 and b2 indicate beam 1 and beam 2. The ratios R_1 and R_2 can be estimated with a 0.5% precision. Furthermore, the distributions of proton angles and positions detected in the Roman Pots define ratios of certain elements of the transport matrix T. First of all, dL_y/ds and L_y are related by

R_3 ≡ Θ_{y,b1}/y_{b1} ≈ (dL_{y,b1}/ds)/L_{y,b1}

with a 0.5% precision, and R_4 is the same for beam 2. Similarly, we exploit the horizontal distributions to quantify the relation between dL_x/ds and L_x. Contrary to the previous case, L_x is close to 0, and instead of defining a ratio we rather estimate the position s (with a precision of about 1 m) along the beam where L_x equals 0, by solving

L_x(s_1) + (s − s_1) · dL_x(s_1)/ds = 0,

where s_1 is the beginning of the Roman Pot station. The ratio (dL_x(s_1)/ds)/L_x(s_1) is determined from the measured proton distributions (constraints R_5 and R_6 for the two beams). Finally, the tracks determine as well the coupling components of T. Due to L_x ≈ 0 at the Roman Pot locations, four further constraints can be defined:

R_7 ≡ x_{b1,near}/y_{b1,near} ≈ m_{14,b1,near}/L_{y,b1,near},

R_8 is defined with the far pots, and R_9, R_10 respectively for beam 2. These four constraints can be estimated with a 3% accuracy. OPTICS MATCHING On the basis of the constraints R_1 ... R_10, ∆T can be determined with a χ² minimization procedure. The relevant lattice imperfections were selected, forming a 26-dimensional optimization phase space, which includes the magnet strengths, rotations and beam momenta. Due to the high dimensionality of the phase space and the approximately linear structure of the problem, there is no unique solution. Therefore, the optimization is subject to additional constraints defined by the machine tolerances.
Finally, the χ² is composed of a part defined by the values measured with the Roman Pots (discussed in the previous section) and a part reflecting the LHC tolerances:

χ² = χ²_design + χ²_measured,

where the design part defines the nominal machine as an attractor in the phase space, and the measured part contains the track-based constraints R_1 ... R_10 together with their errors. The subscript "MADX" defines a parameter optimized with the MAD-X software. Table 2 presents the results of the optimization procedure for β* = 3.5 m. The obtained value of the effective length L_y of beam 1 is close to the nominal one, while beam 2 shows a significant change. The same pattern applies to the values of dL_x/ds. MONTE-CARLO VALIDATION The procedure has been extensively verified with Monte Carlo studies. The nominal machine settings were perturbed in order to simulate the LHC imperfections, and the simulated proton tracks were used afterwards to calculate the optimization constraints R_1 ... R_10. The study included the impact of the simulated imperfections on the constraints and on the matched optics. The results obtained for the β* = 3.5 m study are summarized in Figures 1 and 2 and their statistical description is given in Table 3. The distributions of the optical functions' errors indicate that the optical functions can be reconstructed with a precision of 0.2%, which confirms the validity of the proposed approach. Figure 2: Relative error distribution of dL_x/ds for beam 1 before and after matching. CONCLUSIONS AND OUTLOOK TOTEM proposed a novel approach to optics estimation. First of all, the method allows the optical functions' errors to be assessed from machine tolerances. Secondly, it allows the real optics to be determined solely from the Roman Pot proton tracks. The method has been validated with Monte Carlo studies both for large- and low-β* optics. With its application TOTEM has published elastic scattering distributions obtained with different running conditions. It is foreseen to extend the proposed approach to model the transport of protons with large momentum loss.
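As a closing illustration of the matching procedure described above, the toy Python sketch below shows the structure of the fit: the residuals combine track-based constraints (weighted by their measurement precision) with a design term that pulls the perturbations toward the nominal machine within its tolerances. The linear response matrix and all numbers are invented stand-ins; the real procedure evaluates the constraints with MAD-X over a 26-dimensional parameter space.

```python
# Schematic chi^2 matching: measured constraints + design (tolerance) term.
# All values are placeholders; this is not the actual TOTEM fit.
import numpy as np
from scipy.optimize import least_squares

R_meas = np.array([1.02, 0.98])        # placeholder track-based ratios
sigma_meas = np.array([0.005, 0.005])  # ~0.5 % constraint precision
nominal = np.zeros(3)                  # nominal relative perturbations
tolerance = np.full(3, 1e-3)           # machine tolerances (e.g. dB/B)

def model_ratios(dm: np.ndarray) -> np.ndarray:
    # Stand-in for MAD-X: a linearized response of the ratios to dm.
    response = np.array([[0.8, -0.3, 0.1],
                         [0.2,  0.9, -0.4]])
    return 1.0 + response @ dm

def residuals(dm: np.ndarray) -> np.ndarray:
    meas_part = (model_ratios(dm) - R_meas) / sigma_meas
    design_part = (dm - nominal) / tolerance
    return np.concatenate([meas_part, design_part])

fit = least_squares(residuals, x0=np.zeros(3))
print("matched perturbations:", fit.x)
```

The design term plays the same role as the "attractor" described in the text: with fewer constraints than free parameters, it selects, among the degenerate solutions, the one closest to the nominal machine.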
2012-06-14T09:52:10.000Z
2012-06-14T00:00:00.000
{ "year": 2012, "sha1": "66949571007a3ee34a697db4571765b557f95aa5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "66949571007a3ee34a697db4571765b557f95aa5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235918838
pes2o/s2orc
v3-fos-license
Effectiveness of subsidized fertilizer distribution to rice farmers in Lemoe, Bacukiki District, Parepare, South Sulawesi Subsidized fertilizer is one of the government's efforts to ensure the availability of fertilizer. However, farmers have difficulty in accessing subsidized fertilizers. This study aims to determine the effectiveness of subsidized fertilizer distribution in Lemoe Village, Bacukiki District, Parepare City. The analysis method used is descriptive quantitative, using effectiveness analysis. The results showed that in the distribution process there are 4 tiers of fertilizer storage before the fertilizer reaches farmers. Subsidized fertilizer distribution is carried out in a closed system based on the Definitive Plan of Group Needs (RDKK), with the Highest Retail Price (HET) as stipulated in the Regulation of the Minister of Agriculture on Allocation and HET of Subsidized Fertilizer in the Agricultural Sector. Overall, the distribution of fertilizer as experienced by farmers runs effectively; accordingly, the subsidized fertilizer distribution program in Lemoe Village, Bacukiki District, Parepare City has been implemented in accordance with the guidelines for the implementation of fertilizer subsidies. Introduction In supporting the agricultural sector in Indonesia, the government conducts several programs to increase farmers' production and productivity in order to achieve national food security. One form of government intervention is to provide subsidized fertilizer assistance. This aims to ease the burden on farmers in the provision and use of fertilizer in their farming activities. Indonesia has used agricultural input subsidies, especially on fertilizer, to stimulate agricultural production, largely in pursuit of the goal of rice self-sufficiency [1]. Based on BPS 2018 data from the Agriculture Office of Parepare City, Bacukiki sub-district is the district with the largest rice production in Parepare City, with Lemoe Village being the area where farmers receive the most subsidized fertilizer assistance from the government. This is seen in Table 1 [2]. This fertilizer subsidy is provided as an effort by the government to guarantee the availability of fertilizer for farmers at a price set by the government, namely the highest retail price (HET). However, in reality farmers, as the beneficiaries of this program, still find it difficult to access. Farmers often encounter fertilizer scarcity, fertilizer prices above the Highest Retail Price (HET), and misuse of the fertilizer distribution mechanism. In addition, even though the HET has been set, various problems are still found, particularly in sales by retailers at prices perceived as less affordable by farmers; many farmers complain that the price of fertilizer at the retailer level is not in accordance with the applicable HET [3]. In assessing whether the objectives of the programs implemented by the government have been achieved or not, an approach called effectiveness is used. Effectiveness is the achievement of agreed goals through joint efforts; the level of achievement of those goals demonstrates effectiveness. Prasetyo Andri (2013) explained the concept of effectiveness as a condition that shows the extent to which a plan can be implemented or achieved. Effectiveness is one of the measures in determining the success of a program or plan [4].
The policy of subsidizing organic fertilizer can be said to be successful if the community receives benefits from the subsidy that ease the burden of providing and using fertilizer for farming activities. Therefore, the implementation of the program must follow the principles of the right price, right amount, right type and right time [5]. Based on the description stated in the background and the previous explanations, the researchers were interested in conducting research on the distribution of subsidized fertilizers to farmers in Lemoe Village, Bacukiki District, Parepare City: both the process of subsidized fertilizer distribution and the effectiveness of subsidized fertilizer distribution with reference to the accuracy of price, amount, type, and time. Research Method The research was conducted in Lemoe Village, Bacukiki Subdistrict, Parepare City, South Sulawesi. The research area was selected deliberately, with the consideration that Lemoe Village is the recipient of the most subsidized fertilizer assistance in Bacukiki District of Parepare City. The research was conducted in May-June 2019. The respondents were 32 people drawn from 20 combined farmer groups located in Lemoe Village, Bacukiki District, Parepare City, selected by a systematic random sampling method. The data used are primary data and secondary data. To find out the effectiveness of subsidized fertilizer distribution to rice farmers, effectiveness analysis was used. The percentage of achievement is calculated as the total score obtained divided by the maximum possible score, expressed as a percentage:

Effectiveness (%) = (total score obtained / maximum score) × 100%

The categories are: 70% - 100% = Effective; 34% - 66.99% = Less Effective; 0% - 33.99% = Ineffective. This study used descriptive quantitative analysis; the respondents' answers on the questionnaire were converted into figures, with the available answers scored in stages ranging from the highest to the lowest: 1. A value of 3 for alternative answer (a), which has an effective category; 2. A value of 2 for alternative answer (b), which has a less effective category; 3. A value of 1 for alternative answer (c), which has an ineffective category. Results and Discussion The effectiveness of subsidized fertilizer distribution is an analysis of the benefits obtained by farmers from the government's fertilizer subsidy program. The indicators analyzed in relation to the effectiveness of subsidized fertilizer distribution are price accuracy, amount accuracy, type accuracy, and time accuracy. Right Price The right price is a condition in which the purchase price of fertilizer paid by farmers in cash at the level of retailers or official kiosks, per sack, equals the highest retail price (Syafa'at, et al 2007). The selling price applied refers to the highest retail price that has been set; therefore, the government sets the Highest Retail Price (HET) for subsidized fertilizers distributed by producers. The effectiveness of price accuracy can be seen in Table 2 [6]. The answers of all 32 respondents yield an achievement percentage of 92.71%, placing the price of subsidized fertilizer, relative to the Highest Retail Price (HET), in the Effective category. That is, the government, as the party that sets the price of fertilizer, is considered to have been effective in setting the Highest Retail Price (HET).
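A minimal Python sketch of the scoring scheme described in the Research Method above is given below, assuming the achievement percentage is simply the total score over the maximum possible score. The thresholds follow the categories listed in the text (which leaves the 67-69.99% band unassigned, so the sketch keeps the stated 70% boundary); the sample answers are invented for illustration.

```python
# Effectiveness scoring sketch: answers scored 3/2/1 per respondent,
# percentage of achievement = total score / maximum score * 100.
# Category thresholds follow the paper's stated bands.

def effectiveness(scores):
    max_score = 3 * len(scores)
    pct = 100.0 * sum(scores) / max_score
    if pct >= 70.0:            # 70% - 100%: Effective
        category = "Effective"
    elif pct >= 34.0:          # 34% - 66.99%: Less Effective
        category = "Less Effective"
    else:                      # 0% - 33.99%: Ineffective
        category = "Ineffective"
    return pct, category

# Example: 32 respondents, most choosing the top answer (score 3).
answers = [3] * 28 + [2] * 4
pct, cat = effectiveness(answers)
print(f"{pct:.2f}% -> {cat}")  # ~95.83% -> Effective
```

Applied to the reported indicator percentages (92.71%, 93.06%, 48.96%), this mapping reproduces the Effective / Effective / Less Effective classifications discussed in the following paragraphs.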
Right Amount In the government's efforts to create food security, the farmers prioritized in the RDKK submission process are those who cultivate rice, and rice farmers who have joined a farmer group can apply through the RDKK in accordance with their needs in developing their farming businesses. The calculation of the effectiveness of the amount of subsidized fertilizer distributed can be seen in Table 3. Table 3 shows that the accuracy of the amount of subsidized fertilizer distributed is in the Effective category, with an effectiveness value of 93.06%. This is because the amount of fertilizer distributed by the government has been in accordance with the existing provisions, namely the 5 types of fertilizer: Urea, SP36, NPK, ZA, and Organic. Right Type The effectiveness of type accuracy is very important for the sustainability of farmers' agricultural activities. The calculation of the effectiveness of the type of subsidized fertilizer distributed can be seen in Table 4. Table 4 shows that the effectiveness of type accuracy is in the Less Effective category, with an effectiveness achievement value of only 48.96%. This is because not all farmers use the five types of fertilizer provided by the government; most farmers choose to use only 2-3 types of fertilizer. Right Time Distribution with time accuracy is a principle that ensures that farmers can buy subsidized fertilizer before the planting period begins. The timeliness of subsidized fertilizer distribution can be seen in Table 5. Fertilizer Distribution Effectiveness The analysis covers the effectiveness of 4 aspects (price accuracy, timeliness, type accuracy and amount accuracy) of the distribution of subsidized fertilizer to rice farmers in Lemoe Village, Bacukiki District, Parepare City. The overall results of the analysis of the effectiveness of subsidized fertilizer distribution are given in Table 6. Conclusion The subsidized fertilizer distribution program, assessed on the four accuracy criteria, shows that the right price is classified as effective because prices are in accordance with the HET determined by the government; the right amount is classified as effective because the amount of fertilizer provided is in accordance with the needs of the agricultural land as proposed by farmers in the RDKK; the right type is classified as less effective because not all farmers use the five types of fertilizer provided by the government; while the right time is classified as effective, with subsidized fertilizer from the government available at least one month before the planting period begins. Overall, the distribution of fertilizer as experienced by farmers runs effectively; accordingly, the subsidized fertilizer distribution program in Lemoe Village, Bacukiki District, Parepare City has been implemented in accordance with the guidelines for the implementation of fertilizer subsidies.
2021-07-16T20:07:01.814Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "c132e260a18814acd659f39ad4e2042ca8cf811b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/807/3/032085", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c132e260a18814acd659f39ad4e2042ca8cf811b", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Economics" ], "extfieldsofstudy": [ "Physics" ] }
236318070
pes2o/s2orc
v3-fos-license
Towards optomechanical parametric instabilities prediction in ground-based gravitational wave detectors Increasing the laser power is essential to improve the sensitivity of interferometric gravitational wave detectors. However, optomechanical parametric instabilities can set a limit to that power. It is of major importance to understand and characterize the many parameters and effects that influence these instabilities. Here, we model with a high degree of precision the optical and mechanical modes that are involved in these parametric instabilities, such that our model can become predictive. As an example, we perform simulations for the Advanced Virgo interferometer (O3 configuration). In particular, we compute mechanical mode losses by combining both on-site measurements and finite element analysis with an unprecedented level of detail and accuracy. We also study the influence on optical modes and parametric gains of mirror finite size effects, and of mirror deformations due to thermal absorption. We show that these effects play an important role if transverse optical modes of order higher than four are involved in the instability process. I. INTRODUCTION In 2015, the LIGO-Virgo collaboration [1-4] detected for the first time gravitational waves emitted during a binary black hole coalescence [5], thus pioneering gravitational-wave astronomy. Today many other gravitational waves have been detected [6,7]. These detections have provided confirmation of the expected rate of binary black hole (BBH) mergers [8], a better understanding of BBH populations [8,9], a better limit on the mass of the graviton [10], a first direct evidence of a link between binary neutron star (BNS) mergers and short gamma-ray bursts [11], a higher precision in constraining the Hubble constant [12], and a better understanding of BNS mergers [11]. Since the first detections, improvements performed on ground-based detectors have yielded better detector sensitivities. Gravitational wave sources that are weaker or located further away can now be detected. Among the many improvements, increasing the light intensity in the interferometer arm cavities reduces the impact of the laser quantum phase noise, which limits the sensitivity in the high-frequency range. However, a laser power increase can trigger a nonlinear optomechanical effect [13,14], known as optomechanical parametric instability (OPI). This effect can jeopardize the interferometer's stable operation. During Observing Run 1 (O1), LIGO experienced an OPI for the first time [15]: after a few seconds, the interferometer went out of lock, thus preventing further data acquisition. In this letter, we present the models that we used to compute the OPI gains for a power-recycled gravitational wave detector. Compared to previous work [17], we provide a precise description of the optical and mechanical modes, together with a study of the impact of losses and of the thermal deformation of the mirrors. This will allow parametric instability predictions to be performed, which is of major importance for future designs of ground-based detectors. These simulations are done for the power-recycled Advanced Virgo interferometer (O3 configuration), and can be extended to any other configuration. In sec. II, a short introduction to the model used to compute the parametric gains is given. In sec.
In sec. III, we report on a detailed finite element analysis (FEA) of the mirrors, to compute precise mechanical mode frequencies and amplitudes, and estimate quality factors. An original method is then used to combine these FEA simulations with ring-down measurements performed on a subset of modes, in order to obtain accurate quality factors for all the modes. In section IV, different models for optical modes are compared: the analytical solution of the paraxial equation for purely spherical infinite-size mirrors, as implemented in [17], and a brute-force numerical simulation which includes finite size effects and arbitrary mirror surface shapes. In section V, we study the influence on optical modes of a thermal effect related to a local temperature increase of the mirror surface due to light absorption. Finally, in section VI, we provide an example of parametric gains obtained in the Advanced Virgo O3 configuration, including optical losses and mechanical losses calculated with an unprecedented level of precision, and we investigate for the first time the effect of the mirror thermal deformation due to laser absorption.
II. OPTOMECHANICAL PARAMETRIC INSTABILITY
In an optomechanical cavity like one arm of a gravitational wave detector, photons from the optical zeroth order mode can be coherently scattered to a higher order transverse optical mode if a mechanical mode that sets a mirror surface into motion has its frequency ω_m/2π equal to the frequency difference between the two optical modes (modulo the cavity free spectral range). This phenomenon can remove energy from the mechanical mode by annihilating phonons and scattering photons from the zero order mode to the higher order transverse mode, thus damping the mirror motion [16]. Conversely, this phenomenon can add energy to the mechanical mode with the reverse process, thus exciting the mechanical motion. In that case, an instability can prevent the stable operation of the interferometer [5,13]. This instability has a threshold: it starts to grow as soon as the resonant excitation of the mechanical mode by the radiation pressure force overcomes mechanical losses. In the following, we use the approach developed by Evans et al. [17] to simulate this effect. In this framework, the whole interaction between the three implied modes (two optical modes and a mechanical mode) is seen as a classical feedback system. This modular approach is well suited, since it can be adapted to many different interferometer configurations with the same analytical formulas. The parametric gain of the mechanical mode m is given by
R_m = (8π Q_m P)/(M ω_m^2 c λ) Σ_n Re[G_n] B_{m,n}^2 , (1)
where Q_m is the quality factor of the mechanical mode m and ω_m its frequency, P the arm-cavity optical power, λ the optical wavelength, M the mirror mass, c the velocity of light, and G_n is related to the scattered field optical gain of the n-th optical mode and encapsulates the interferometer configuration. Finally, B_{m,n} is the spatial overlap integral between the three involved modes. A mechanical mode is amplified if R_m > 0 and damped if R_m < 0. It becomes unstable if R_m > 1.
FIG. 2. Geometry used for the FEA, including the ears, the anchors and the magnets attached on the mirror rear face. The suspension wires are just for sketching but are not included in the simulation, as they do not influence the modal frequencies.
III. MECHANICAL SIMULATION
A. The spatial overlap parameter
The spatial overlap integral B_{m,n}^2 is defined [18] as
B_{m,n}^2 = (M/m_eff) ( ∫_S E_00(r) E_n(r) µ_m⊥(r) d^2r )^2 ,
where M is the mirror mass and m_eff the effective mass of the mechanical mode.
The integral is performed over the test mass surface (coating side). E_00(r) stands for the optical carrier amplitude and E_n(r) for a transverse optical mode amplitude labeled by the index n [20]. As the interferometer is sensitive to the test mass displacement along the optical axis, only the vertical displacement µ_m⊥(r) is considered, where m is the mechanical mode index. The effective mass is related to the strain energy ρ_e through the equation (1/2) m_eff ω_m^2 = ρ_e, and is effectively obtained with the formula
m_eff = 2 ρ_e / (ω_m^2 max_r |µ(r)|^2) ,
where µ(r) stands for the test mass displacement. The mechanical modes were computed by means of a finite element analysis (FEA) developed for the actual input test mass (IM) of the Advanced Virgo arm cavities [21]. We have used the program Ansys® Workbench™. The IM model includes the high-reflectivity (HR) coating of the front face, the flats and the bevels. Moreover, the ears and the anchors attached by the silicate bonding technique are included (see figure 2). In the FEA, the multilayer optical coating is modelled as a solid 3D element having a total thickness corresponding to the sum of the thicknesses of the high reflective and low reflective materials, and mechanical parameters averaged over the thicknesses of the layers. Instead of 3D shell elements, we have used 3D solid elements also for very thin materials, though more CPU-time consuming, because they provide the shear deformations and energies, which are useful for getting the mechanical losses associated with the modes.
B. FEA simulation results
The flats, the ears and also the anchors play an important role. In particular, since they break the cylindrical symmetry, they lift degeneracies and increase the number of distinct mode frequencies. In this paper we will discuss the results up to 70 kHz. To estimate the accuracy of the model, we have used a set of frequencies (ν_Meas) measured up to 40 kHz on the North arm IM. Fig. 3 shows the relative differences (ν_Meas − ν_FEA)/ν_FEA versus the frequency of the FEA ν_FEA. The standard deviation is 0.15%. We have estimated the quality factors of the mechanical modes of the IM taking into account several kinds of losses: losses of the fused silica substrate, anchors and supports of the magnets (loss angle φ_FS); coating losses (loss angle φ_IMcoating); losses of the bonding layers used to attach the ears, the anchors and the magnets (loss angle φ_Bonding). The bonding layers have a thickness of 60 nm, and are modeled as 3D solid elements. Coating losses of the IM and EM were recently measured [20]. Note that all the parameters used are given in table I. Each loss contributor is related to the energy fraction stored in the corresponding part of the model. The overall loss angle for the IM is obtained by summing up all contributors, each weighted by its energy fraction:
φ_IM = φ_FS E_FS/E_tot + φ_IMcoating E_coating/E_tot + φ_Bonding E_Bonding/E_tot .
The mechanical quality factor of the IM modes then writes Q_m = 1/φ_IM. Fig. 4 shows the frequency dependence of the FS substrate loss and the effect of adding the optical coating and the bonding layers. The influence of the bonding term φ_Bonding is strongly mode-shape dependent, through the deformation of the ear and anchor bulks, and it is not negligible. In fact, its contribution to Q_m is dominant. For this reason, from a set of Q measurements it is possible to infer the value of φ_HCB (the loss angle of the hydroxide-catalysis, i.e. silicate, bonding layers) by using the energy fractions calculated with the FEA. Fig. 5 shows the Q_m of the IM computed by fitting the loss angle φ_HCB using the first set of 5 modes of the North arm IM and supposing that it does not vary with the frequency.
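As an illustration of this fitting step, the following sketch shows how a single bonding loss angle could be inferred by least squares from a handful of measured quality factors and FEA energy fractions. All numbers (energy fractions, loss angles, measured Q values) are invented for illustration and do not correspond to the actual Advanced Virgo values.

```python
import numpy as np

# Hypothetical FEA energy fractions for 5 measured modes
# (rows: modes; columns: substrate, coating, bonding) -- illustrative only.
E = np.array([
    [0.97, 0.02, 0.01],
    [0.95, 0.03, 0.02],
    [0.96, 0.02, 0.02],
    [0.93, 0.04, 0.03],
    [0.94, 0.03, 0.03],
])
phi_fs, phi_coat = 5e-9, 2.4e-4                          # assumed loss angles
Q_meas = np.array([8.2e6, 4.1e6, 4.5e6, 2.9e6, 3.1e6])   # made-up ring-downs

# Total loss angle model: 1/Q_m = phi_fs*E_fs + phi_coat*E_coat + phi_bond*E_bond.
# Subtract the known contributions, then solve for phi_bond in least squares.
rhs = 1.0 / Q_meas - (phi_fs * E[:, 0] + phi_coat * E[:, 1])
phi_bond = np.dot(E[:, 2], rhs) / np.dot(E[:, 2], E[:, 2])  # 1-parameter LSQ

def predict_Q(e_fs, e_coat, e_bond):
    """Predict the quality factor of any other mode from its energy fractions."""
    return 1.0 / (phi_fs * e_fs + phi_coat * e_coat + phi_bond * e_bond)

print(f"fitted phi_bond = {phi_bond:.2e}")
print(f"predicted Q     = {predict_Q(0.95, 0.03, 0.02):.3e}")
```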
At frequencies higher than 10 kHz, the bondings have a strong damping effect, though they have a negligible effect on the thermal noise of the IM. This is a very important result for the parametric gain computation and consequently for the identification of the unstable modes.
IV. TRANSVERSE OPTICAL MODES IN ARM CAVITIES
Hermite-Gauss modes (HGM) are solutions of the paraxial wave equation for infinite-sized spherical mirrors. This mode basis was used in [17] to compute the parametric gain for the LIGO interferometer. It is fast to implement, as the mode shapes are provided by analytical formulas. However, it restricts the mirror model to a purely spherical shape of infinite size. In particular, it does not include the effects of deviations from the spherical shape due to fabrication imperfections or thermal effects. Finally, it does not account for finite size effects such as diffraction losses, which must be estimated separately. We have computed another set of optical modes that are obtained from a numerical resolution of the paraxial equation with finite-sized mirrors [22]. This mode basis will be referred to as 'finite-sized mirror modes (FSMM)'. Contrary to HGM, FSMM are obtained directly with diffraction losses. Moreover, mirror shapes can be chosen arbitrarily, which enables one to introduce any deformation of the mirrors due to thermal effects or fabrication imperfections. Note that in this work we did not include fabrication imperfections, whose effects will be the subject of future work. In the following, we analyze the differences in Gouy phase (or frequency), diffraction loss, and mode shape between the HGM and FSMM basis sets.
A. Gouy phases
The Gouy phases of arm cavity modes set the optical resonant frequencies and, thus, the OPI resonance condition ω_m = δω, where δω is the difference in frequency between the zero order mode and the higher order transverse optical mode. In the case of HGM, the Gouy phase of the mode of linear index n is given by
ψ_n = (O_n + 1) φ_G ,
where O_n is the order of HGM(n), and φ_G is the Gouy phase of the lowest order mode HGM(1) (usually referred to as TEM_00 in the literature), given by
φ_G = arccos(−√(g_1 g_2)) .
Here, g_1 < 0 and g_2 < 0 are the g parameters of the interferometer arm cavities. For the Advanced Virgo arm cavities, φ_G ≈ 2.74 rad, and it can be tuned by small variations of the mirror radii of curvature. Fig. 6 shows the difference between the HGM and FSMM Gouy phases, expressed in units of free spectral range on the left vertical axis and in units of cavity linewidth on the right vertical axis. Note that the Gouy phases have been wrapped within an interval of length π, which allows all the modes to be folded within a single free spectral range. The green line splits the graph into two regions: in the upper region, the deviation is more than half a cavity linewidth and we expect the model choice to have an impact on the OPI gain, whereas in the bottom part the impact should be negligible. Thus, the critical order is 7.
B. Diffraction losses
Diffraction losses stem from the finite size of the cavity mirrors. They are a key parameter to compute the parametric gain, since they contribute to the optical linewidth (together with material absorption losses, scattering losses, and mirror transmittance). Since low-order modes have most of their energy concentrated at the center of the mirror, their diffraction losses are small, whereas high-order modes spread over a larger surface and show higher diffraction losses. Thus, in general, high-order modes are less likely to contribute to an OPI.
However, note that, counter-intuitively, a loss increase can sometimes lead to a higher parametric gain, as explained in more detail in section VI B. Diffraction losses for HGM are estimated, as in [17], by evaluating the ratio between the total light flux within the coating radius of a mirror and the total flux incident on the mirror. Figure 7 shows diffraction losses for both sets of modes. It shows that, with this rough estimation method, all HGM besides HGM(1) have their diffraction losses underestimated. However, note that the total losses (input mirror transmittance plus diffraction losses) of low-order modes are dominated by the input mirror transmittance (green line in Fig. 7). Thus, the total losses obtained with the two methods start to differ by more than 10% around mode order 5.
C. Mode amplitudes
Optical mode amplitudes are used to compute the three-mode spatial overlap coefficient B_{m,n} of Eq. (1). Thus, they also directly affect the OPI gain. In order to compare the mode amplitudes of FSMM and HGM, we decompose the vectors of one basis set onto the other by using the decomposition coefficient c_ij of any FSMM (index i) with any HGM (index j):
c_ij = ∫_S u_i(r) v_j(r) d^2r ,
where i and j are mode integer indices, u_i (resp. v_j) are the FSMM (resp. HGM) mode amplitudes, and S is the mirror coating surface. Note that the transverse profile of a FSMM is constrained on a disk (the mirror coating), whereas for HGM the transverse profile is distributed over the whole plane, such that a linear superposition of FSMM will never exactly match a HGM, and a true transformation matrix between the two basis sets cannot rigorously be obtained [23]. In Fig. 8(a), we represent |c_ij| for i = 2 and j ∈ {1, 2, ..., 36}. We find that FSMM(2) is a linear combination of the two order-one HGM, which are HGM(2) and HGM(3). We find that this is true for orders below 7. Conversely, as shown in Fig. 8(b), the higher order mode FSMM(36) (shown in the inset of Fig. 8(b)) cannot be decomposed on a single order of HGM. In the presented case, it is a mixture of orders 7, 9, 11, and many other higher odd orders that are not shown in the figure.
D. Conclusion
This study shows that, in the absence of mirror deformation, the HGM basis does not deviate significantly from the FSMM basis for modes of order lower than 6. For order 6 and higher, the more resource-consuming FSMM basis should lead to significantly different results for OPI gains. In section VI, we compare the OPI gains obtained for the Advanced Virgo O3 configuration with the HGM and FSMM basis sets.
V. THERMAL EFFECTS
The laser energy is partially absorbed both by the coatings and in the bulk of the mirrors. This causes a temperature gradient, which gives rise to two effects. First, a gradient of refractive index in the bulk of the input mirrors modifies the mode matching condition, but affects neither the cavity linewidth nor the mode frequencies. Second, a deformation of the mirror surface modifies the mode shapes and frequencies. In this part, we evaluate the impact of this second effect on the properties of cavity modes by comparing FSMM obtained for purely spherical mirrors with FSMM obtained for thermally deformed mirrors. The deformation profile is obtained by solving the linear thermoelastic equations [24]. Figure 9(a) shows the purely spherical and thermally deformed profiles of an Advanced Virgo input mirror, for an intracavity power of 300 kW. We fitted the central part of the deformed mirror to extract a radius of curvature.
However, note that the mirror is not spherical anymore, so that the result of the fit is only valid in the center. In order to evaluate the incidence of this effect on the optical cavity parameters, we computed the FSMM with and without the thermal effect on the two cavity mirrors. Fig. 9(b) shows the frequency differences between the two situations. We see that optical modes acquire a significantly different Gouy phase even for very low mode orders. We checked that losses and mode amplitudes are affected only for orders higher than 7, such that the frequency shift is the main effect. In section VI, we study the impact of this phenomenon on OPI gains.
VI. OPI GAIN COMPUTATION
A. Validation: comparison with the Finesse software
The OPI gains of all mechanical modes within the [5.7, 70.7] kHz range were computed with Eq. (1), using both the HGM and FSMM basis sets. In order to validate this method, we compared our results with those obtained with the Finesse software [25,26]. The OPI gain obtained with the Finesse software for one mechanical mode and two arm cavity mirrors is shown in Fig. 10(a). In Fig. 10(b), we plot the relative difference between the OPI gain obtained with the Finesse software and with Eq. (1) using FSMM and HGM. We observe a difference of a few percent at maximum. Note that the slight asymmetry between the blue and red curves stems from the small parameter difference between the two arm cavities. This comparison has been performed with many other mechanical modes and showed similar results. Note that using Eq. (1) is much faster than using the Finesse software, and that computing the results of the following figures with Finesse would not have been possible in a reasonable amount of time. Therefore, in the following we use only Eq. (1).
B. Effect of optical losses on the OPI gain
In this section we demonstrate a counter-intuitive effect of optical losses on the OPI gains. Intuitively, if optical losses increase, the OPI gains get lower, since the optical linewidth also increases. Here we show that if the OPI resonance condition is not exactly fulfilled, broadening the optical mode response can increase the gain, such that, counter-intuitively, the gain does not vary monotonically with the diffraction loss. This is best seen in Fig. 11(a), where the OPI gain of a mechanical mode is plotted against the optical diffraction losses of the main optical contributor. In this example, the gain first increases from around 0.04 below 10^2 ppm to 0.1 at 2 × 10^4 ppm, before decreasing at higher loss values, as expected. This appears also in Fig. 11(b), where the optical gain G_n of the main optical contributor to the OPI gain of the mechanical mode of Fig. 11(a) is represented as a function of the mechanical mode frequency, for two different values of diffraction losses. At low losses, the two resonance peaks are well separated, such that there is a minimum in between (black arrow). A loss increase from 0 to 21540 ppm (red bullet at the maximum of Fig. 11(a)) broadens the peaks and leads to the red curve, which has no minimum anymore and which shows higher values in a whole frequency region (gray shaded area in Fig. 11(b)).
FIG. 11. Optical gain G_n of a FSMM high-order mode versus the mechanical mode frequency, which is artificially varied around the resonance condition δω = ω_m (with δω = 12.5551 kHz). The blue line is for null diffraction losses, and the red one is for 21540 ppm (red bullet at the maximum of the curve in (a)). The arrow points to the minimum between the two resonances on the blue curve, where the gain increase is maximum. The gray shaded area highlights a frequency range in which G_n is higher when the diffraction loss is higher.
Finally, if the losses were increased further, the red curve would start lowering and the gray shaded area would vanish.
C. Impact of the optical mode basis
In this paragraph, we study the impact of the model used to compute the optical modes. We compare the OPI gains obtained with the HGM and FSMM bases. As expected, we find that there is only a marginal difference between the two models if optical modes of order below 5 are involved. In Fig. 12, we plot the gain of a mechanical mode versus its frequency using the two optical mode basis sets. This mode has been chosen because the main optical contributor to its OPI gain is an order-6 optical mode. There is a factor of 3 between the two gain maxima, and the two peaks are shifted by around 100 Hz, which corresponds to the optical linewidth. This is in agreement with the conclusions of Sec. IV.
D. OPI gain computation in the O3 configuration
The simulations have been performed for the Advanced Virgo configuration corresponding to that of O3. The parameters for such a configuration are shown in Table II; measured parameters have been included rather than nominal values when they were available. Only the optical input power has been set to the nominal value of 50 W, which is the maximum value that could possibly have been used during O3 (the value effectively reached in O3 being too small to trigger any instability in the range of mechanical mode frequencies simulated in this work). The corresponding arm-cavity power is around 300 kW. To account for optical mode frequency uncertainties, we present OPI data in two-dimensional plots (see for instance Fig. 13(a) and (b)), where the radii of curvature of the end mirrors NE and WE are scanned. The color code indicates the gain value at each interferometer working point. This choice is also related to the fact that the envisioned OPI mitigation technique relies on ring heaters able to tune the radii of curvature of the end mirrors [27]. In the following, we show two sets of results. Each figure is the result of the same OPI calculation but using a different set of optical modes. Fig. 13(a, b, c) shows the results for FSMM, and Fig. 13(d, e, f) for FSMM including the thermal effect due to coating absorption (see sec. V). NE and WE are scanned over a five-meter range, which is within reach of the mirror ring heater system. In each OPI plot, the color code is chosen such that the gray scale is for gains lower than 1 (no instability), and the color scale is for R_m > 1. The involved mechanical modes are indicated in the inset, and the main optical mode contributors are shown below each OPI plot. Note that the results obtained with HGM are indistinguishable from those of Fig. 13(a), such that we did not include the corresponding figure. Indeed, only low order optical modes are involved here, such that HGM and FSMM give the same result. Fig. 13(b, resp. e) shows the unstable mechanical modes of Fig. 13(a, resp. d) on the first line. The second line shows the involved optical modes. In Fig. 13(c, resp. f), we plot the number of unstable modes in the range of Fig. 13(a, resp. d) versus the optical power for FSMM without (resp. with) the thermal effect. Modes that ring on different mirrors are counted only once. The blue curves represent the ratio of the unstable area S_{Rm>1} to the total area S_tot in Fig. 13(a, resp. d), versus the optical power.
This ratio quantifies how difficult it is to escape an unstable area within the accessible radii of curvature range. The green vertical lines in Fig. 13(c and f) indicate the nominal power of O3 (50 W) and the power that was effectively reached (28 W). These results show that an OPI involving mechanical modes with frequencies below 70 kHz could have been observed at the nominal power of O3, although it could have been easily escaped with the end-mirror ring heaters, since S_{Rm>1}/S_tot ≈ 0.2 at 300 kW cavity power. They also show that, at the power actually reached in O3, it was very unlikely to observe an instability. Finally, they show that the thermal effect caused by coating absorption has an important impact on the results at such high optical powers.
VII. CONCLUSION
In this letter, we have presented OPI gain simulations in the Advanced Virgo configuration of O3. Compared to previous work [17], we have used deeper physical modeling, including a very detailed description of the mechanical and optical modes, the aim being that our model becomes predictive. The mechanical mode simulation includes all the mirror details, and we implemented an original method to obtain precise quality factor values for all modes, by combining FEA with ring-down measurements performed on a subset of modes. We have also provided a precise description of optical modes by considering finite size effects. Our method directly provides accurate diffraction losses. Furthermore, we have shown that, counter-intuitively, an optical loss increase can lead to a parametric gain increase. Our conclusion regarding optical modes is that up to order 4 (included), analytical formulas for Hermite-Gauss modes are sufficient to predict accurate OPI gains. However, if higher-order optical modes are involved, mirror finite-size effects must be accounted for. Finally, we have shown that the mirror deformation stemming from the laser absorption in the mirror coatings plays an important role and must be included in the OPI simulation. These simulations pave the way towards precise optomechanical instability predictions for the current and next generations of gravitational-wave detectors.
2021-02-23T02:15:35.808Z
2021-02-19T00:00:00.000
{ "year": 2021, "sha1": "f5e3ac8e77796289ca0037a21f5e5c145f795a44", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.11070", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f5e3ac8e77796289ca0037a21f5e5c145f795a44", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
56017969
pes2o/s2orc
v3-fos-license
Gauging the Galactic thick disk with RR Lyrae stars In this contribution we present results from the QUEST RR Lyrae Survey of the thick disk. The survey spans ∼480 sq. deg. at low latitude |b| < 30°, with multi-epoch VRI observations obtained with the QUEST-I camera at the 1 m Jürgen Stock Schmidt telescope located at the National Astronomical Observatory of Venezuela. This constitutes the first deep RR Lyrae survey of the Galactic thick disk conducted at low galactic latitudes, covering simultaneously a large range in radial (8 < R(kpc) < 60) and perpendicular (0 < |z|(kpc) < 20) distance from the Galactic Plane. The spatial coverage of the survey, together with the multi-band multi-epoch photometry, allowed for the derivation of the thick disk structural parameters from in situ RR Lyrae stars having accurate distances (errors < 7%) and individual reddenings derived from each star's color curve at minimum light. Moreover, the use of RR Lyrae stars as tracers ensures negligible contamination from the Galactic thin disk. We find a thick disk mean scale height h_Z = 0.94 ± 0.11 kpc and scale length h_R = 3.2 ± 0.4 kpc, derived from the vertical and radial mean density profiles of RR Lyrae stars. We also find evidence of thick disk flaring, and results that may suggest the thick disk radial density profile shows signs of antitruncation. We discuss our findings in the context of recent thick disk formation models.
1. THE QUEST-I RR LYRAE SURVEY OF THE THICK DISK
The goal of our work was to characterize the structure of the Milky Way thick disk using RR Lyrae stars as tracers, by means of a large-scale survey at low galactic latitudes. The present survey spans an area of ∼480 sq. deg. in the galactic latitude range −30° < b < +25°, approximately towards the Galactic anticenter (190° < l < 230°), with multi-epoch VRI observations for 6.5 × 10^6 objects, obtained between 1998 and 2008 with the QUEST-I camera at the 1 m Jürgen Stock Schmidt telescope located at the National Astronomical Observatory of Venezuela. We identified 160 RRab and 51 RRc stars in the magnitude range 14 < V < 18.5, with a completeness estimated at 95% and 80%, respectively. Distances to RRab stars were derived using individual reddenings computed for each star from its mean V − R and V − I color curves during the minimum light phase [1, 2], resulting in typical distance errors of ∼7%. The present survey therefore constitutes the first deep RR Lyrae survey of the Galactic thick disk conducted at low galactic latitudes, covering simultaneously a large range in radial (8 < R(kpc) < 60) and perpendicular (0 < |z|(kpc) < 20) distance from the Galactic Plane. The survey coverage is illustrated in Figure 1, in comparison with previous tracer surveys of the Galactic thick disk ([3-6]).
Figure 1. Survey coverage in cylindrical galactocentric coordinates |z| versus R, compared to previous thick disk tracer surveys. RRLS from [3] were included to supplement the survey at high latitudes.
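As a concrete illustration of the distance derivation described above, the following minimal sketch converts an apparent mean magnitude and an individual reddening into a distance. The absolute magnitude M_V = 0.6 and the extinction law R_V = 3.1 are illustrative assumptions, not the calibration actually used in the survey.

```python
def rrab_distance_kpc(V_mean, E_BV, M_V=0.6, R_V=3.1):
    """Distance to an RRab star from its apparent mean V magnitude.

    V_mean : apparent mean V magnitude
    E_BV   : individual reddening derived from the colour curve at minimum light
    M_V    : assumed RR Lyrae absolute magnitude (illustrative value)
    R_V    : assumed ratio of total to selective extinction
    """
    A_V = R_V * E_BV                              # extinction in V
    mu = V_mean - A_V - M_V                       # dereddened distance modulus
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0      # pc -> kpc

# A star near the faint survey limit with moderate reddening:
print(f"{rrab_distance_kpc(18.0, 0.35):.1f} kpc")
```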
2. THE GALACTIC THICK DISK TRACED WITH RR LYRAE STARS
We computed cumulative mean densities ρ_RR(≤ R) and ρ_RR(≤ |z|) of RRLS (only for type ab RRLS) in the survey volume, up to a projected galactocentric radius R and a distance perpendicular to the Galactic Plane |z|, respectively. We derived Halo density profile parameters by fitting the observed profiles for RRLS outside the Galactic Plane (|z| ≥ 6 kpc). We used these to account for the contribution of Halo RRLS near the Galactic Plane (|z| < 6 kpc), where we fitted the observed radial and vertical profiles to derive the thick disk parameters. We obtained a mean scale height of h_Z = (0.94 ± 0.11) kpc and a mean scale length h_R = (3.2 ± 0.4) kpc for the thick disk. We also find that, in order to appropriately fit the full radial density profile, the scale length h_R must be shorter for R ≲ 11.5 kpc than for larger distances, suggesting the thick disk might have an antitruncated or Type III profile, similar to those observed in external galaxy disks [7]. Finally, we also fitted the vertical density profile of RRLS in five different radial distance intervals, in order to explore the behaviour of the scale height as a function of R. The scale height is found to increase at a rate Δh_Z/h_Z = 0.9, i.e. by 90% if R increases by one scale length. Our results for the mean scale height and scale length are consistent with current predictions of thick disk formation models by thin disk heating [8] and gas-rich mergers [9]; however, our results for the observed flare amplitude favour current gas-rich merger models [10].
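The vertical-profile fit described above can be sketched as follows. The binned densities are synthetic stand-ins generated around the reported h_Z ≈ 0.94 kpc; the real analysis fits cumulative mean densities with a Halo contribution subtracted, which this simplified sketch omits.

```python
import numpy as np
from scipy.optimize import curve_fit

def vertical_profile(z, rho0, hZ):
    """Exponential thick-disk density law rho(z) = rho0 * exp(-|z|/hZ)."""
    return rho0 * np.exp(-np.abs(z) / hZ)

# Illustrative binned RRab densities near the plane (kpc, stars/kpc^3):
z = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
rng = np.random.default_rng(1)
rho = 12.0 * np.exp(-z / 0.94) * (1 + 0.1 * rng.standard_normal(z.size))

(rho0, hZ), cov = curve_fit(vertical_profile, z, rho, p0=(10.0, 1.0))
print(f"hZ = {hZ:.2f} +/- {np.sqrt(cov[1, 1]):.2f} kpc")
```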
2018-12-06T04:53:10.952Z
2012-02-01T00:00:00.000
{ "year": 2012, "sha1": "02dbfb2d0ee5a44c3d2cfa0530abdfecb05f740a", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2012/01/epjconf_apmw2012_04006.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "02dbfb2d0ee5a44c3d2cfa0530abdfecb05f740a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
53586774
pes2o/s2orc
v3-fos-license
Modeling the unified measurement uncertainty of deflectometric and plenoptic 3-D sensors In this paper we propose new models of two complementary optical sensors to obtain 2.5-D measurements of opaque surfaces: a deflectometric and a plenoptic sensor. The deflectometric sensor uses active triangulation and works best on specular surfaces, while the plenoptic sensor uses passive triangulation and works best on textured, diffusely reflecting surfaces. We propose models to describe the measurement uncertainties of the sensors for specularly to diffusely reflecting surfaces under consideration of typical disturbances like ambient light or vibration. The predicted measurement uncertainties of both sensors can be used to obtain optimized measurement uncertainties for varying surface properties on the basis of a combined sensor system. The models are validated exemplarily based on real measurements.
Introduction
Automated quality inspection of product surfaces requires a fast and robust sensor, capable of detecting all relevant defects without damaging the surface. Optical measurement techniques fulfill these requirements but are highly dependent on the surface properties. For example, pattern projection and passive stereoscopic methods require diffuse reflectance, while deflectometric methods depend on specular reflectance of the inspected surface. Many surfaces are partially specular, or a mixture of diffusely and specularly reflecting parts, and cannot be robustly measured with only one method. By combining several measurement methods into a single sensor system that adapts its algorithms to exploit the advantages of the single methods, we are capable of measuring surfaces with a large variety of surface properties. To demonstrate the principle, we propose uncertainty models for plenoptic and deflectometric sensors, and based on the models we simulate both sensors under similar circumstances on varying partially specular surfaces.
1.1 Related work
Tutsch et al. (2011) give a good overview of optical 3-D measurement techniques with structured illumination that includes passive triangulation and deflectometry. The measurement uncertainty for the deflectometric sensor is based on the phase noise model from Fischer et al. (2012), which is itself based on parameters defined in the EMVA 1288 measurement standard by the European Machine Vision Association. Fiete and Paul (2014) describe a systematic approach to model the imaging chain using the optical transfer function, which will be used in Sect. 2. Plenoptic cameras have been used in computational imaging for several decades now. However, Perwaß and Wietzke (2012) proposed the first plenoptic camera for industrial applications. They commercialized their camera in the company Raytrix. In recent years, several papers on the metric calibration of Raytrix cameras were published; see Johannsen et al. (2013), Heinze et al. (2016), Zeller et al. (2016), Strobl and Lingenauber (2016) and Zeller et al. (2017). Furthermore, the potential of plenoptic cameras with respect to 3-D measurement was derived analytically by Perwaß and Wietzke (2012) as well as Zeller et al. (2016), and demonstrated experimentally by Heinze et al. (2016), Zeller et al. (2016), and Sardemann and Maas (2016). Nevertheless, all existing analytical evaluations considered the optical system only from a purely geometrical perspective, ignoring the effects of real optical systems, which are taken into account in this paper.
Outline
First, in Sect. 2 we introduce the photometric properties of the measurement system by means of the spatial distribution of light and the modulation transfer function. In Sects. 3 and 4 we describe the deflectometric sensor and the plenoptic sensor and derive their measurement uncertainties. Then, in Sect. 5, we compare the uncertainties for both methods. All plots shown in this paper use parameters from an exemplary setup given in Table 1. Finally, in Sect. 6 we present, by way of example, real measurements to validate the proposed models.
Photometry
Deflectometry, as well as the plenoptic method, relies on the recognition of spatial light patterns to identify unique positions that can be triangulated. The reliability of this recognition depends on the pattern contrast. When the pattern contrast is very low, noise introduced by the camera dominates the pattern. In the following section we introduce a systematic approach to describe the pattern contrast and the reduction of this contrast depending on its spatial frequencies. Despite the 2-D nature of image-based measurements we decided to describe our approach in 1-D, since the direction has no impact on the results.
Deflectometry
In deflectometry the camera integrates light emitted at some point x by the screen, L_scr(x) (radiance), and reflected by the surface (ignoring spectral dependency). The light L_cam(m) reaches the camera image plane at some point m "smeared" by the screen itself, the reflection at the surface, the refraction and diffraction by the camera lens, and the sensor itself. The smearing can be mathematically described as a convolution of the incoming light with the point spread function (PSF). We assume that the PSF is translation-invariant with respect to the object and the sensor point:
PSF(x, m) = PSF(m − x).
Now, the image radiance is the convolution of the screen radiance and the point spread functions of the screen (PSF_scr(x)), the surface (PSF_srf(x)) and the camera (PSF_cam(x)):
L_cam(m) = (L_scr ∗ PSF_scr ∗ PSF_srf ∗ PSF_cam)(m). (2)
Instead of using PSFs in the spatial domain, the imaging properties of optical systems can be described in the Fourier domain, depending on the spatial frequency k. Then the optical transfer function (OTF), which is the Fourier transform of the PSF, is used instead. As we are only interested in modulation changes, we separate the OTF into the modulation transfer function (MTF) and the phase transfer function (PTF):
OTF(k) = MTF(k) · e^{i PTF(k)}. (3)
Additionally, we normalize the MTF to 1 for the DC component:
M(k) = |OTF(k)| / |OTF(0)|.
We will introduce the patterns shown on the screen, I_scr(x) (radiant intensity), in Sect. 3.1. The radiant intensity from the screen is linked with the radiance from Eq. (2) by multiplication with the pixel size A_scr and the cosine of the incident angle θ_scr of the optical axis to the screen normal direction (which is approximately the same for all points on the screen for a small screen size compared to the screen distance):
I_scr(x_scr) = L_scr(x_scr) A_scr cos(θ_scr). (5)
The radiance distribution γ_scr(k_scr) with the spatial frequency k_scr on the screen is the spectrum of the radiance (F{·} denotes the Fourier transform):
γ_scr(k_scr) = F{L_scr(x_scr)}.
The irradiance on the camera sensor is linked with the radiance from Eq. (2)
by multiplication with the solid angle of the sensor pixel Ω_cam and the cosine of the incident angle θ_cam of the optical axis to the sensor normal direction (which again is approximately the same for all points on the sensor for a small sensor size compared to the camera distance):
E_cam(x_cam) = L_cam(x_cam) Ω_cam cos(θ_cam).
Now we obtain the irradiance distribution on the camera sensor as a product of the screen radiance distribution γ_scr(k_scr) and the MTFs:
γ_cam(k_cam) = γ_scr(k_scr) M_scr(k_scr) M_srf(k_cam) M_cam(k_cam).
Note the usage of different spatial frequencies k_cam and k_scr. We will introduce the correspondence between camera and screen spatial frequencies for simple surface shapes in Sect. 3.4. In Sect. 5 we will discuss the optimum screen pattern, minimizing the measurement uncertainty. The screen and camera MTF can be measured in advance using a plane first-surface mirror with M_srf(k_cam) ≈ 1. If necessary, the camera MTF can be measured separately in advance using calibrated MTF targets, and subsequently the screen MTF can be measured with the calibrated camera at close range to the screen; see Triantaphillidou and Jacobson (2004). The surface MTF M_srf depends on the surface roughness and on the position between screen and camera, so M_srf has to be measured for a fixed setup.
Plenoptic
Things change for the passive plenoptic setup, where the surface itself is considered as an emitter of structured light. We assume that the surface reflects unstructured light from the environment depending on the surface texture γ_srf(k_srf). However, for surfaces which do not show perfect Lambertian reflectance, that surface texture is superimposed by the specular reflections on the surface, imaged and sensed by the plenoptic camera (described by M_cam(k_cam)), and leads to a radiance distribution γ_cam(k_cam) on the camera sensor. The superposition of the surface texture showing Lambertian reflectance with the specular reflections of unstructured light from the environment can be considered as a reduction of the signal-to-noise ratio (SNR), as it increases the noise level on the surface. Instead of increasing the noise level, we model this superposition as a spatial-frequency-dependent attenuation of the surface texture γ_srf(k_srf). In any case, no absolute signal levels can be measured on the image sensor, since they are equalized by the automatic exposure control of the camera. The relative attenuation of the surface texture γ_srf(k_srf) due to specular reflectance is modeled by a function that we call the Lambertian surface MTF M̄_srf. Since this Lambertian surface MTF M̄_srf(k_srf) models the deviation from a perfect Lambertian reflectance, it behaves reciprocally to the surface MTF M_srf(k_srf) defined for the deflectometric setup. Hence, similar to Eq. (7), we obtain the following:
γ_cam(k_cam) = γ_srf(k_srf) M̄_srf(k_srf) M_cam(k_cam).
In the best case for plenoptic measurements, the surface pattern consists of intensity steps with a modulation of 1 over a wide spatial frequency spectrum. This pattern is superimposed by the specular reflection of patterns in the environment. Hence, in contrast to deflectometry, the specular reflection decreases the measurable pattern contrast for the plenoptic setup. A detailed definition of the surface MTF will be discussed in Sect. 2.5.
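A minimal sketch of this contrast cascade is given below. The Gaussian roll-offs are placeholders standing in for the measured screen, surface and camera MTFs, which would be calibrated as described above.

```python
import numpy as np

def pattern_contrast_on_sensor(k_cam, k_scr, M_scr, M_srf, M_cam, gamma_scr=1.0):
    """Contrast cascade: each normalized MTF attenuates the fringe contrast.

    M_scr, M_srf, M_cam : callables returning the (normalized) MTF value
    k_scr, k_cam        : corresponding spatial frequencies on screen and sensor
    """
    return gamma_scr * M_scr(k_scr) * M_srf(k_cam) * M_cam(k_cam)

# Placeholder MTFs for illustration (Gaussian roll-offs, not calibrated models):
M_scr = lambda k: np.exp(-(k / 40.0) ** 2)     # screen
M_srf = lambda k: np.exp(-(k / 30.0) ** 2)     # surface gloss roll-off
M_cam = lambda k: np.exp(-(k / 60.0) ** 2)     # camera (lens + sensor)

print(pattern_contrast_on_sensor(k_cam=20.0, k_scr=10.0,
                                 M_scr=M_scr, M_srf=M_srf, M_cam=M_cam))
```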
Camera MTF
Following Fiete and Paul (2014), the camera OTF consists of the lens OTF and the sensor OTF. In the best case of a diffraction-limited optical system, the lens OTF is given by the following:
OTF_lens(k) = (2/π) [arccos(k/k_c) − (k/k_c) √(1 − (k/k_c)^2)] for k ≤ k_c = 1/(λF), and 0 otherwise, (10)
with k = k_cam, λ the wavelength of light, and F the ratio of the focal length f and the diameter D of the entrance pupil, commonly known in photography as the "f number". Ignoring signal sampling, noise, quantization and anisotropy, the OTF of the sensor is given (with k = k_cam) by the following:
OTF_sensor(k) = sinc(c k) = sin(π c k) / (π c k). (11)
The bandwidth of the sinc function depends on the aperture of a pixel on the sensor, which is assumed to be equal to the pixel pitch c. The camera MTF is the composition of the lens OTF in Eq. (10) and the sensor OTF in Eq. (11) as follows:
M_cam(k) = |OTF_lens(k) · OTF_sensor(k)|. (12)
In Fig. 1 we show the camera MTFs for the deflectometric and the plenoptic camera, with lens MTFs as given in the manufacturer data sheet and the sensor MTF derived from parameters in Table 1, in comparison with the camera MTF for the same sensor and a diffraction-limited lens.
Defocus MTF
The OTF introduced in the previous section describes a camera in focus; in this section we will discuss a camera out of focus. Let the camera be in focus at some object distance g. Using the thin lens equation 1/f = 1/g + 1/b, the image distance b can be eliminated, and we get the size of the blur spot on the sensor for a point at object distance g' out of focus:
c' = D f |g − g'| / ((g − f) g').
The size of a point on the image plane cannot be smaller than the camera pixel size c, and thus we define it as follows:
c̃ = max(c, c').
The OTF for an image out of focus depends on the size of this point (see Beyerer et al., 2012) and replaces the sensor OTF in Eq. (11); with J_1 the Bessel function of first kind and order, it reads:
OTF_defocus(k) = 2 J_1(π c̃ k) / (π c̃ k). (16)
Figure 2 shows the camera MTF of screen points at 1.0 m distance for several focus distances g, from the surface (g = 0.5 m) to the screen (g = 1.0 m).
Motion blur
In many situations the camera is shaking during exposure due to vibrations caused by heavy machines, etc. Fiete and Paul (2014) give an OTF for motion blur caused by a random movement of the camera during the exposure with standard deviation σ_mot:
OTF_mot(k) = exp(−2π^2 σ_mot^2 k^2). (17)
Of course, the measured surface may also be subject to vibrations, but due to the complexity of the implications of changing surface normals during exposure, this is not covered here. The influence of translational camera motion blur on the MTF according to Eq. (17) is shown in Fig. 3. Note that the motion blur is related to the pixel pitch (here 6.45 µm), so motions during exposure smaller than half the pixel pitch in image space have almost no impact on the modulation.
Roughness
With more surface roughness the reflectance changes from specular to diffuse; see Fig. 4. Harvey et al. (2012) and Harvey (2013) describe the relation between surface roughness, measured as root mean squared (RMS) surface height, and the optical properties of the surface using the total integrated scatter and the MTF. They observe that the RMS value has to be calculated within the relevant scale. However, this model seems to be valid only for very smooth surfaces, so in the following we propose a simple parametric model for the surface MTF, governed by a single gloss factor c. This model matches the MTF measurements of the surfaces shown in Fig. 4, but has no dependency on physical parameters like the surface roughness. For each surface type, the parameter c has to be estimated. The five surfaces shown in Fig. 4 have parameters in the range c = 2.5...3.8. As M_srf(k_cam) depends on the camera distance s and the screen distance r, it has to be estimated again if the setup changes. Figure 5 shows the surface MTF for different gloss factors c.
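The lens, sensor and motion-blur contributions can be combined in a few lines. The sketch below uses the diffraction-limited lens MTF, the sinc-shaped sensor aperture MTF and the Gaussian jitter MTF reconstructed above; the wavelength of 550 nm is an assumption not specified in the text.

```python
import numpy as np

def mtf_lens_diffraction(k, wavelength, F):
    """Diffraction-limited incoherent lens MTF; cutoff k_c = 1/(lambda*F)."""
    kc = 1.0 / (wavelength * F)
    x = np.clip(k / kc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def mtf_sensor(k, pitch):
    """Sensor aperture MTF |sinc(c*k)|; np.sinc(x) = sin(pi*x)/(pi*x)."""
    return np.abs(np.sinc(pitch * k))

def mtf_motion(k, sigma_mot):
    """Gaussian random-jitter MTF for camera motion with std sigma_mot."""
    return np.exp(-2.0 * np.pi ** 2 * sigma_mot ** 2 * k ** 2)

# Example: 6.45 um pixels, f/8 lens at 550 nm, frequencies up to Nyquist.
c = 6.45e-3                                    # pixel pitch in mm
k = np.linspace(0.0, 1.0 / (2 * c), 200)       # cycles/mm
m = mtf_lens_diffraction(k, 550e-6, 8.0) * mtf_sensor(k, c) * mtf_motion(k, 0.5 * c)
print(f"MTF at Nyquist: {m[-1]:.3f}")
```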
While deflectometry utilizes the specularity of the surface, passive triangulation approaches like plenoptic-camera-based methods rely on Lambertian reflectance. Hence, for the plenoptic camera we consider the specular component as an additional noise component, and we therefore model the Lambertian surface MTF M̄_srf such that it behaves reciprocally to the surface MTF of the deflectometric setup: the texture contrast is low where the gloss is high. Figure 6 shows the Lambertian surface MTF for different gloss factors c.
Ambient light
In the previous section we described how surface roughness increases the amount of light scattered by the surface. If more ambient light is present in the scene, the light scattered in the direction of the camera also increases. This can be measured as the Michelson contrast, i.e., the ratio of the difference and the sum of the maximum and minimum radiance (L_min, L_max). To visualize the influence of (constant, unstructured) ambient light, we assume that the radiance of the pattern received by the camera is increased by a constant offset L_amb, which leads to a modified surface MTF:
M̃_srf(k) = M_srf(k) (L_max + L_min) / (L_max + L_min + 2 L_amb).
If the ambient radiance reflected into the camera equals half the radiance of the maximum pattern intensity, the pattern contrast decreases to one-half of the original contrast. Calculating the influence of ambient radiance reflected by the surface requires knowing the location of the ambient light sources and the surface BRDF (bidirectional reflectance distribution function). On specularly reflecting surfaces, ambient light does not influence the contrast of a reflected pattern, but more diffuse reflection increases the amount of ambient radiance reflected into the camera. The contrast of a surface texture is influenced by the amount of specularly reflected light.
Deflectometry
Deflectometry (see Werling et al., 2009) is a relatively inexpensive but powerful method for the inspection and measurement of specular surfaces. A generic setup is shown in Fig. 7. The basic idea is to measure the surface normals by identifying the origin x_scr of each camera ray (going through x_cam and the optical center) that is reflected at the surface. A phase-shifting algorithm is used to identify this origin x_scr on the screen. We use M > 3 shifts of a cosine fringe pattern (as seen in Fig. 4) with frequency k_scr on the screen to identify the location of x_scr with subpixel uncertainty in one direction. The result of this algorithm is the phase φ at the current screen point x_scr:
φ(x_scr) = 2π k_scr x_scr mod 2π.
Hence, the measurement uncertainty of the camera ray origin on the screen σ_x_scr is given by the following:
σ_x_scr = σ_φ / (2π k_scr). (23)
In the following section we will look at the phase-shifting algorithm and the phase noise model.
Phase shifting
Let the origin x_scr of each camera ray be a point on a flat screen showing a sequence of i = 1...M cosine patterns, captured by the camera with intensity I_i at the position x_cam on the image plane, with relative mean exposure β and relative pattern contrast γ_cam(k_cam):
I_i(x_cam) = I_sat β [1 + γ_cam(k_cam) cos(φ + 2π(i − 1)/M)]. (24)
β and γ_cam are both constrained to 0...1, and β(1 + γ_cam) ≤ 1, so that the maximum exposure does not exceed the saturation capacity of the sensor I_sat. Each pattern is shifted in phase by 2π/M. In the case of a four-step phase shifting (M = 4), the phase is obtained from the captured intensities as follows:
φ = arctan2(I_4 − I_2, I_1 − I_3). (25)
To get the absolute position on the screen x_scr, the phase φ ∈ (0...2π) has to be unwrapped by determining the integer fringe order m in Eq. (26):
x_scr = (φ + 2π m) / (2π k_scr). (26)
One popular phase-unwrapping method is the heterodyne method, which uses two different pattern frequencies. See Zuo et al. (2016) for a comparison of phase-unwrapping methods.
Phase noise
A model describing the phase noise of Eq. (25) for a symmetric M-step algorithm was derived by Fischer et al.
(2012):
σ_φ = √(2/M) √(σ_d^2 + β µ_e.sat) / (β γ_cam µ_e.sat). (27)
The model is based on the EMVA 1288 camera noise model by the European Machine Vision Association, with the parameters saturation capacity µ_e.sat, dark noise variance σ_d^2 and overall system gain K. Hence, the authors show a way to estimate the EMVA 1288 camera parameters using a deflectometric setup. Since β and γ depend on local surface reflection properties, the authors propose a way to estimate the measurement uncertainty of Eq. (27) for each pixel separately, based on the observed intensities (Eq. 28). Here y_i = I_i − µ_d denotes the physically correct intensities, µ_d is the mean dark noise, and R_4 is the corresponding noise term of the four-step algorithm (Eq. 29). The two parameters influenced by the environment and the surface are β and γ. We can ensure β = 0.5 if the camera exposure is chosen appropriately. The pattern contrast on the sensor γ_cam is given as the product of the MTF functions for the screen, surface, ambient light, camera and motion blur in Sect. 2. The rest of the parameters can be either chosen (M) or are known from the camera data sheet (EMVA 1288 parameters). Alternatively, σ_φ can be estimated directly from a measurement using Eq. (28).
Geometric properties
In the following section the uncertainty of the surface height σ_z will be derived. Let the surface be acquired in many small mirror segments. The width of each segment is the lateral uncertainty σ_x, and the slant uncertainty is σ_α; see Fig. 7.
Figure 7. Measurement uncertainty σ_z of a deflectometric system, with lateral uncertainty σ_x caused by the discretization σ_c on the camera sensor, and uncertainty of the origin of incident rays from the screen σ_x_scr caused by the phase-shifting algorithm.
The uncertainty in surface height is then obtained by the following:
σ_z = σ_x tan(σ_α/2). (30)
The lateral uncertainty σ_x is determined by the angular uncertainty σ_θ and the distance s of the camera to the surface (assuming only small angles between the viewing and surface normal directions):
σ_x = s σ_θ. (31)
The angular uncertainty of the camera σ_θ is determined by the discretization on the sensor σ_c = c and the focal length f:
σ_θ = σ_c / f. (32)
Either the size of one projected pixel from Eqs. (31) and (32), or the defocus on the surface caused by the focal plane distances g ∈ [s, r] and the aperture size D (σ_x = D (g − s)/g), sets the lower limit of the lateral uncertainty:
σ_x = max( s c / f, D (g − s)/g ). (33)
The slant uncertainty of each surface segment σ_α is determined by the screen uncertainty σ_x_scr from Eq. (23) and the screen distance r (assuming only small angles between the direction of the reflected ray and the screen normal):
σ_α = σ_x_scr / (2 r). (34)
Combining the above equations, the uncertainty of the surface height σ_z (with tan(σ_α/2) ≈ σ_α/2) is as follows:
σ_z ≈ σ_x σ_x_scr / (4 r). (35)
Surface shape
Assume that the specular surface has a convex spherical shape. Then the screen appears smaller, which results in a smaller fringe period. In the simple case of a plane mirror, the effective fringe pattern frequency depends only on the distance from the camera to the screen:
k_cam = k_scr (r + s) / f. (36)
If the surface is curved (at least piecewise) like a sphere with radius R, the imaging properties can be described by the imaging properties of a convex (positive radius) or concave (negative radius) mirror with focal length f_s = −R/2, which rescales the effective fringe frequency accordingly. This relation has a pole at f_s = (f r − r s)/(f − r − s) for concave surfaces, where the screen is located in a caustic. Near this pole, even infinitely small changes of the screen pattern would lead to large changes in the image plane, but this configuration is impractical for a real inspection task.
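The four-step retrieval of Eq. (25) and the noise level of Eq. (27) can be cross-checked with a small Monte Carlo experiment. The EMVA-style noise model below (Poisson shot noise plus Gaussian dark noise, with invented parameter values) is a simplification of the full Fischer et al. (2012) model.

```python
import numpy as np

rng = np.random.default_rng(0)

def four_step_phase(I1, I2, I3, I4):
    """Four-step phase retrieval; shifts of 0, pi/2, pi, 3*pi/2 assumed."""
    return np.arctan2(I4 - I2, I1 - I3)

# One pixel: relative exposure beta, pattern contrast gamma, saturation
# capacity in electrons, plus shot and dark noise (illustrative numbers).
beta, gamma, mu_sat, sigma_d, M = 0.5, 0.6, 20000.0, 10.0, 4
true_phi = 1.234
shifts = 2 * np.pi * np.arange(M) / M
signal = beta * mu_sat * (1 + gamma * np.cos(true_phi + shifts))

N = 10000                                            # Monte Carlo trials
noisy = rng.poisson(signal, size=(N, M)) + rng.normal(0, sigma_d, (N, M))
phi = four_step_phase(*noisy.T)

print(f"phase std (Monte Carlo): {np.std(phi):.2e} rad")
# Shot-noise estimate in the form of Eq. (27):
analytic = np.sqrt(2 / M) * np.sqrt(sigma_d**2 + beta * mu_sat) / (beta * gamma * mu_sat)
print(f"phase std (analytic):    {analytic:.2e} rad")
```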
Plenoptic camera
A plenoptic camera is a single-sensor system which records a 4-D light-field representation of a scene in a single image. That means a point in the object space does not correspond to only a single image point, as it would for a regular camera, but to multiple image points. In other words, a plenoptic camera does not capture only a single ray emitted from a certain point in the object space, but multiple light rays with different incident angles. Hence, the four dimensions describe two spatial dimensions and two angular dimensions. Even though plenoptic sensors for industrial applications are still expensive (see Raytrix GmbH, 2016), they rely on a quite simple idea. In general, any industrial camera can be transformed into a plenoptic camera by placing a micro lens array (MLA) in front of the image sensor. The 4-D light field recorded by a plenoptic camera enables tasks like 3-D measurement or software-based refocusing after an image is captured. Industrial tasks for plenoptic cameras may include 3-D microscopy or the inspection of production parts. Here we describe the principle of plenoptic depth measurement based on the concept of a focused plenoptic camera developed by Lumsdaine and Georgiev (2009). In addition, we formulate an uncertainty prediction model for the plenoptic depth measurement that relies on this concept. Figure 8 shows the imaging process of a focused plenoptic camera in the Galilean mode (see Lumsdaine and Georgiev, 2009). The main lens produces a virtual image of the real object at distance b_L behind the main lens. Due to the MLA, such a virtual image point is projected into multiple micro images on the sensor. The image distance b_L results from the object distance s and the main lens focal length f as defined by the thin lens equation:
1/f = 1/s + 1/b_L.
Based on disparities µ, which can be measured in the micro images, one is able to calculate the image distance b_L and, from that, the respective object distance s of a certain point, under the assumption that all intrinsic camera parameters are known; see Zeller et al. (2016).
Uncertainty prediction model
In contrast to deflectometry, the plenoptic camera is a passive measurement system that relies on high-contrast patterns on the surface to be measured. Besides, the surface has to have Lambertian reflectance to obtain correct measurements. In the following we define a model to predict the measurement accuracy of the plenoptic camera for a certain measurement setup. This measurement setup is shown in Fig. 10. In analogy to deflectometry, and to obtain a general definition of the surface contrast, we define the surface structure as a fringe pattern similar to Eq. (24). Of course, the pattern on the surface which is captured by the camera will never be a perfect fringe pattern, but it can always be modeled as a mixture of frequencies. However, this formulation gives us the possibility to model the camera response depending on the frequency k_srf of the surface pattern. In general, the structures to be measured on the surface are intensity steps and thus contain the complete frequency spectrum. In a local region one can consider the imaging process just as a scaling of the fringe pattern on the surface, in combination with a frequency-dependent attenuation of the intensity modeled by the MTF of the imaging system. Therefore, by applying the assumption of being in a local region around a certain point, perspective distortion does not have to be considered.
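As a minimal illustration of the thin-lens relation used above, the following helper evaluates the image distance for a given object distance; the numbers are arbitrary.

```python
def image_distance(s, f):
    """Thin-lens image distance b_L for object distance s and focal length f:
    1/f = 1/s + 1/b_L  =>  b_L = f*s/(s - f)."""
    return f * s / (s - f)

# Object at 0.5 m imaged by a 100 mm main lens (illustrative numbers):
print(f"b_L = {image_distance(0.5, 0.1) * 1e3:.1f} mm")
```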
Imaging scale
In contrast to a regular camera, which can be defined mathematically by a pinhole camera model, the imaging scale of a plenoptic camera is not proportional to the distance s between the main lens and the object. Instead, a plenoptic camera basically performs two perspective projections: one by the main lens from object space to the virtual image, and one from the virtual image onto the sensor. The scaling s_L between object space and virtual image follows from the thin-lens geometry of the main lens, while the scaling s_ML between the virtual image and the micro images on the sensor is set by the distance b_L0 between MLA and main lens and the distance B between image sensor and MLA (see Fig. 8). Both scaling factors (s_L and s_ML) behave reciprocally to each other: while an increasing object distance s results in an increasing scaling factor s_L, the scaling factor s_ML decreases at the same time, since the virtual image distance b_L is also decreasing. Besides the scaling of the pattern due to perspective projection, the fringe pattern is compressed if the viewing angle α of the plenoptic camera is not 90° to the surface (see Fig. 10). This results in an additional scaling of the pattern,
s_α = sin(α), with 0 < α < π.
Based on the defined scaling factors, one can relate the surface coordinates x_srf, the virtual image coordinates x_vim and the sensor coordinates x_cam by the successive scalings, and hence obtain the frequencies of the fringe pattern in the virtual image, k_vim, and on the sensor, k_cam, through the corresponding reciprocal scalings. Following the Shannon-Nyquist sampling theorem, we can calculate from Eq. (45) an upper boundary for the fringe pattern frequency on the surface. Thus, to avoid aliasing, the following condition has to hold:
k_cam < 1/(2c). (47)
Here, c is the pixel pitch of the sensor. After inserting Eq. (45) into Eq. (47) and rearranging, one receives a corresponding condition for the frequency of the fringe pattern on the surface that assures aliasing-free sampling (Eq. 48).
Attenuation
Similarly to deflectometry, we can describe the imaging properties of the plenoptic camera by its MTF. In a plenoptic camera we have a sequence of two optical systems: the main lens and the MLA. This results in an MTF of the complete plenoptic camera that depends on the distance to the surface. This can be formulated as the sequence of two MTFs, with a nonlinear connection between k_vim and k_cam:
M_cam(k_cam) = M_L(k_vim) M_ML(k_cam).
For simplification, we approximate the complete MTF M_cam(k_cam) as follows:
M_cam(k_cam) ≈ M_L(k_cam) M_ML(k_cam).
By this approximation we neglect the depth scaling from the virtual image onto the sensor, and therefore consider M_L and M_ML to be defined on a common frequency axis. However, the error made by the approximation is negligible for object distances that are large with respect to the focal length f_L and for the short depth ranges which have to be measured. An alternative would be to define and measure a depth-dependent MTF. For a Raytrix camera the MLA consists of three different types of micro lenses, to increase the depth of field of the camera. Therefore, strictly speaking, one has to define three different MTFs for the respective micro lens types. For the simulations in Sect. 5 we assume a plenoptic camera MTF M_cam(k_cam) as defined in Eq. (12). However, it might be worth investigating differences in the camera MTF of a plenoptic camera in comparison to a regular monocular camera.
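A sketch of the resulting sampling bound is shown below. Note that the composition of the three scale factors into a single surface-to-sensor magnification m = s_L · s_ML · s_α is our illustrative assumption, standing in for Eqs. (45) and (48), which are not reproduced here.

```python
def max_surface_frequency(pixel_pitch, s_L, s_ML, s_alpha):
    """Nyquist bound on the surface fringe frequency: the pattern frequency
    mapped to the sensor must stay below 1/(2c). Assumes a total
    surface-to-sensor magnification m = s_L * s_ML * s_alpha, so that
    k_cam = k_srf / m (illustrative composition of the scale factors)."""
    m = s_L * s_ML * s_alpha
    return m / (2.0 * pixel_pitch)

c = 6.45e-3   # pixel pitch [mm]
print(f"k_srf,max = {max_surface_frequency(c, s_L=0.2, s_ML=1.2, s_alpha=0.9):.1f} cycles/mm")
```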
Measurement uncertainty

Depth measurements are obtained based on disparities µ, which are estimated from the recorded micro images. The accuracy of the estimated disparity mainly relies on the intensity gradient along the epipolar line as well as on the additive signal noise of the image sensor. The signal noise already combines noise sources like dark noise and quantization noise.

For a given fringe pattern I_srf(x_srf), as defined in Eq. (24), on the surface to be measured, one obtains a maximum absolute intensity gradient g_cam on the image sensor.

The MLA in a plenoptic camera is in most cases arranged on a hexagonal grid. Therefore, for each micro lens multiple epipolar lines in all possible directions are obtained. Figure 11 exemplarily shows the epipolar lines in a hexagonally arranged MLA. Thus, in the following we equate the maximum absolute gradient g_cam with the maximum absolute gradient along one specific epipolar line. Zeller et al. (2016) show that one can approximate the variance of the estimated disparity σ_µ² based on the variance of the signal noise σ_I² and the maximum gradient along the epipolar line g_cam as follows:

\sigma_\mu^2 \approx \frac{2\, \sigma_I^2}{g_{cam}^2}. \quad (52)

As defined in the EMVA1288 standard, the signal noise σ_I results from the system gain K, the dark noise σ_d, the quantization noise σ_q, the gray value µ_y and the dark signal µ_y.dark as follows:

\sigma_I^2 = K^2 \sigma_d^2 + \sigma_q^2 + K\, (\mu_y - \mu_{y.dark}).

However, for later simulations we use an average value derived from the camera specification as given in Table 1. Eq. (52) states that for a high intensity gradient along the epipolar line the disparity can be determined more accurately than for a low intensity gradient.

For multifocus plenoptic cameras (see Perwaß and Wietzke, 2012), like those from the manufacturer Raytrix, the MLA consists of different types of micro lenses with different focal lengths. Therefore, for a certain object distance one type of micro lens will produce focused micro images while the micro images of another type will be out of focus. This effect of differently focused micro images also has to be considered when estimating the disparity µ. How this focus disparity error can be modeled is shown by Zeller et al. (2017). For simplification, we do not consider the focus disparity error here. Besides, by choosing an appropriate camera setup one can ensure that a pair of focused micro images is always present for a given object point.

Based on the theory of propagation of uncertainties, one is able to calculate the standard deviation of the measured object distance σ_s from the disparity standard deviation σ_µ. The relationship between σ_µ and σ_s is given by Eq. (54), which was derived by Zeller et al. (2017). Here D_M defines the diameter of a micro lens, while κ is a normalized distance factor between the two micro lenses used for disparity estimation, i.e., κ is one for two directly adjacent micro lenses. In a hexagonal arrangement of the MLA the smallest 10 values of κ are 1.00, 1.73, 2.00, 2.65, 3.00, 3.46, 3.61, 4.00, 4.36 and 4.58. Perwaß and Wietzke (2012) showed that, depending on the distance v between virtual image and MLA plane, it can be guaranteed that micro lenses further apart still see the same object point, and therefore a larger value of κ can be chosen.
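The full propagation chain can be sketched in a few lines of Python. The EMVA1288 composition and the gradient-based disparity variance follow the relations above; since Eq. (54) itself is not reproduced in this extraction, the depth sensitivity ds/dµ below is derived from our assumed triangulation relation v = κ·D_M/µ together with the thin lens equation, and all numerical values are illustrative.

```python
import numpy as np

# Sketch of the uncertainty propagation chain sigma_I -> sigma_mu -> sigma_s.
# The EMVA1288 composition and the gradient-based disparity variance follow
# the relations above. Since Eq. (54) is not reproduced here, the depth
# sensitivity ds/dmu is derived from an assumed triangulation relation
# v = kappa * D_M / mu together with the thin lens equation.

def sigma_signal(K, sigma_d, sigma_q, mu_y, mu_y_dark):
    # EMVA1288: sigma_I^2 = K^2 sigma_d^2 + sigma_q^2 + K (mu_y - mu_y_dark)
    return np.sqrt(K**2 * sigma_d**2 + sigma_q**2 + K * (mu_y - mu_y_dark))

def sigma_disparity(sigma_I, g_cam):
    # sigma_mu^2 ~ 2 sigma_I^2 / g_cam^2 (gradient along the epipolar line)
    return np.sqrt(2.0) * sigma_I / g_cam

def sigma_depth(sigma_mu, mu, kappa, D_M, B, b_L0, f):
    b_L = b_L0 + kappa * D_M * B / mu   # assumed triangulation model
    ds_db = f**2 / (b_L - f)**2         # |ds/db_L| from the thin lens equation
    db_dmu = kappa * D_M * B / mu**2    # |db_L/dmu|
    return ds_db * db_dmu * sigma_mu    # first-order error propagation

pixel_pitch = 4.5e-6                    # [m], converts pixels to meters
sigma_I = sigma_signal(K=0.5, sigma_d=4.0, sigma_q=1 / np.sqrt(12),
                       mu_y=120.0, mu_y_dark=10.0)          # [DN]
sigma_mu_px = sigma_disparity(sigma_I, g_cam=25.0)          # gradient in DN/px
sigma_s = sigma_depth(sigma_mu_px * pixel_pitch, mu=20e-6,
                      kappa=1.0, D_M=127e-6, B=0.4e-3, b_L0=15.5e-3, f=16e-3)
print(f"sigma_I = {sigma_I:.2f} DN, sigma_mu = {sigma_mu_px:.3f} px, "
      f"sigma_s = {sigma_s * 1e3:.1f} mm")
```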
Simulation

In the following section we present simulation results for the measurement uncertainty of both the deflectometric and the plenoptic sensor. They are derived for an exemplary setup shown in Table 1. We chose similar settings for both systems to make realistic comparisons. All simulation results that depend on the spatial frequency of some pattern are shown as a function of the spatial frequency on the image plane, k_cam. The frequency limits have been chosen according to the Shannon-Nyquist sampling theorem: the highest detectable frequency is limited by the pixel pitch, i.e., k_cam < 1/(2c). The surface MTF parameters for the low-gloss, high-gloss and mirror surfaces were estimated from real measurements; the value for the ideally specular surface should be c → ∞, but for numerical reasons we chose c = 8. In plots containing results of both sensors, results for the deflectometric sensor are plotted in blue and results for the plenoptic sensor in red.

Simulation results -deflectometry

We assume that the overall shape of the reflecting surface is flat. Thus we can apply Eq. (36) to estimate the spatial frequency on the sensor k_cam from the frequency on the screen k_scr.

The first result, depicted in Fig. 12, shows the uncertainty of the phase noise model using Eq. (27) for different surface gloss parameters. It can be seen that the uncertainty increases rapidly with the pattern frequency when the surface is not perfectly specularly reflecting. In contrast, higher spatial frequencies (and thereby shorter period lengths) in Eq. (23) decrease the uncertainty in estimating the screen position x_scr. These two opposing effects lead to a unique pattern frequency at which the resulting measurement uncertainty on the screen reaches a minimum, for some frequency k_scr,opt and a given roughness. As the uncertainty of the surface slant σ_α given in Eq. (34) directly depends on the uncertainty of the screen position, we can also see these minima in Fig. 13. Here the measurement uncertainty of the surface normal is shown as a function of the pattern frequency k_cam for different focus distances g as well as different surface gloss factors c. For focus distances g < 1 m the image of the screen is defocused and the Bessel function from Eq. (16) starts to appear. Please note that the curves in Fig. 13 with g ≤ 0.8 m are subject to aliasing artifacts.

It can be seen that k_scr,opt changes with the surface properties. With higher surface gloss c and the focus on the screen (g → 1 m), higher frequencies can still be resolved on the sensor; hence k_scr,opt increases and leads to lower measurement uncertainties. Of course, the camera sensor still limits the highest resolvable frequency on the sensor to k_cam < 1/(2c).

In Fig. 14 we show the measurement uncertainty from Eq. (35) in the z direction next to the results of the plenoptic sensor. Note the triangle model shown in Fig. 7 that connects the uncertainties of surface height σ_z, position on the surface σ_x and angle of the reflected ray σ_α. Since both uncertainties σ_x and σ_α are very small, the final measurement uncertainty σ_z is very small as well (for plane surface mirrors, in the nanometer range). Also note that the surface height is not measured directly, so σ_z is only valid for local changes. The whole surface can be reconstructed by integration over the surface gradients, but along with the integration the uncertainties sum up to larger errors for the global surface shape.
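The existence of the optimum k_scr,opt can be illustrated with a toy numerical scan: phase-based position uncertainty falls as 1/k, while MTF attenuation inflates the phase noise at high k. The Gaussian MTF stand-in and all constants below are ours for illustration only and do not reproduce the paper's Eqs. (16)-(27).

```python
import numpy as np

# Toy illustration of the frequency trade-off described above: phase-based
# position uncertainty falls as 1/k, while MTF attenuation inflates the
# phase noise at high k. The Gaussian MTF stand-in and all constants are
# illustrative; they do not reproduce the paper's Eqs. (16)-(27).

k = np.logspace(-1, 1.5, 500)                 # pattern frequency on the sensor [1/mm]
mtf = np.exp(-(k / 8.0) ** 2)                 # stand-in surface/camera MTF
sigma_phase = 0.02 / mtf                      # phase noise grows as modulation drops
sigma_screen = sigma_phase / (2 * np.pi * k)  # position uncertainty on the screen

k_opt = k[np.argmin(sigma_screen)]
print(f"optimal pattern frequency: ~{k_opt:.2f} 1/mm")
```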
Simulation results -plenoptic camera

Similar to Sect. 5.1, we performed different simulations based on the measurement uncertainty model defined in Sect. 4.1. We chose a plenoptic sensor similar to the Raytrix R5 camera (see Table 1).

In Table 1 the measurement range Δs defines the range around the surface distance s for which measurements can be obtained. To obtain depth measurements from a plenoptic camera it has to be assured that a point is visible in at least two micro images. For a hexagonally arranged MLA this is the case for a virtual depth v ≥ v_min = 2.0 (v := (b_L − b_L0)/B; see Fig. 9) and thus for b_L ≥ b_Lmin = b_L0 + 2.0B (see Perwaß and Wietzke, 2012). Hence, b_L0 is chosen such that s + Δs results in an image distance b_L ≥ b_Lmin. Therefore, for all distances larger than s + Δs no depth measurements can be obtained. By minimizing the measurement range Δs one is able to improve the measurement accuracy.

Focal length

Figure 15 shows simulation results for a perfect Lambertian surface. Here we varied the focal length f while all other parameters are as given in Table 1. As can be seen from the figure, the depth uncertainty of the camera can be significantly improved by increasing the focal length. However, there exists an optimum focal length around f = 50 mm for which the best measurement uncertainty is obtained. The reason for this is the reciprocal behavior between image distance b_L and object distance s, which depends on the focal length f. Hence, as long as the distance s is much larger than the focal length f, the depth uncertainty can be improved by increasing the focal length. However, at approximately s ≈ 10·f the best accuracy is obtained, as can be seen from the figure. For the following simulations, which model the effect of motion blur as well as different surface roughnesses, the focal length was set to f = 16 mm.

Simulation results -motion blur

Figure 16 shows the effect of different amounts of camera vibration for both systems. The resulting motion blur behaves as a low-pass filter, which is modeled as given in Sect. 2.4. Due to the implicit low-pass filtering the optimal pattern is shifted to a lower frequency. Thus, the introduced blur significantly degrades the measurement results.

Simulation results -surface roughness

While increasing surface roughness has a negative effect on the deflectometric measurement results, due to less specular reflectance, it has the opposite effect on the plenoptic measurements. For the plenoptic setup the best results are expected for completely Lambertian reflectance (surface gloss c → 0). Noise is introduced by the specular reflection component. Figure 14 shows the expected measurement results for surfaces with different specularity.

Finally, in Fig. 17 we show the measurement uncertainty as a function of the surface gloss c and the focus distance g, assuming (for deflectometry) that the optimal pattern k_scr = k_scr,opt is chosen for each case.

For high-gloss surfaces the deflectometric sensor is most accurate with the camera focused on the screen. For low-gloss surfaces it is preferable to focus on the surface and use patterns with low spatial frequencies.

Experiments

In this paper we present quite complex and complete mathematical models for the measurement process of a deflectometric and a plenoptic sensor system, although for such complex models it is almost impossible to validate them entirely. Hence, we validate our models on the basis of only two distinct setups: one for deflectometry and one for the plenoptic camera.
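The measurement-range condition just described can be checked directly: the farthest point s + Δs gives the smallest image distance b_L, so it is the one that must satisfy b_L ≥ b_L0 + 2B. A minimal sketch with illustrative values:

```python
# Sketch of the measurement-range condition described above: depth can only
# be estimated where the virtual depth satisfies v >= 2, i.e.
# b_L >= b_L0 + 2B. The farthest point s + Delta_s gives the smallest b_L,
# so it is the one to check. All values are illustrative.

def thin_lens_bL(s, f):
    # Thin lens equation 1/f = 1/b_L + 1/s solved for b_L.
    return 1.0 / (1.0 / f - 1.0 / s)

f, B, b_L0 = 16e-3, 0.4e-3, 15.5e-3   # focal length, MLA-sensor, lens-MLA distances [m]
s, delta_s = 0.45, 0.05               # surface distance and measurement range [m]

b_L_far = thin_lens_bL(s + delta_s, f)
b_L_min = b_L0 + 2.0 * B
print(f"b_L(s + Delta_s) = {b_L_far * 1e3:.2f} mm, required >= {b_L_min * 1e3:.2f} mm")
print("measurable" if b_L_far >= b_L_min else "out of range")
```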
For both measurement systems we use the same configurations as given in Table 1. However, one cannot guarantee that all parameters match exactly, due to manufacturing tolerances as well as manual adjustments.

For both sensor systems we obtain measurements based on the case of a thermometer, which consists of surfaces with different reflectance properties. This case is shown in Fig. 18.

Since for the plenoptic camera we cannot influence the contrast of the surface pattern on the case, we performed a second experiment, in which we generated a fringe pattern on a screen and recorded this pattern with the plenoptic camera. Here, we measured the measurement uncertainty for different fringe frequencies. This setup is shown in Fig. 24.

Experimental results -deflectometry

We used the deflectometric sensor shown in Fig. 19 to validate the measurement uncertainty of the surface slant σ_α.

The measurements of the thermometer surface were taken using 24 different pattern frequencies in the range k_cam = 0.127-32.7 mm⁻¹ with a logarithmic step size and under dark and bright illumination conditions. For dark illumination conditions there is no ambient light, apart from stray light emitted by the screen. For bright conditions there is additional light from fluorescent ceiling lamps. The fringe modulation γ_cam was obtained using the captured images under both illumination conditions. Figure 20 shows the fringe modulation for one low and one high pattern frequency. The red boxes mark areas on the surface with different reflection properties: case and display. Measurements are taken per pixel and then averaged over these two areas. On the one hand, we calculated a spline interpolation γ_cam(k_cam) from the fringe modulation measurements, as shown in Fig. 21. Using this continuous function γ_cam(k_cam) and the sensor model, we are able to predict a continuous function for σ_α(k_cam). On the other hand, we estimated σ_φ for each pattern frequency from the standard deviation over nine phase measurements. Two phase measurements are shown in Fig. 22. Hence, using Eq. (34), we calculated the standard deviation of the surface slant σ_α and compared it to the predicted measurement uncertainty, as shown in Fig. 23.

The predicted uncertainties for σ_α slightly underestimate the measured standard deviation, especially for the case surface. This may be caused by inaccurate measurements or by the interpolation of γ_cam. At k_cam ≈ 10⁻³ mm⁻¹ the measurement noise drops, because the standard deviation of the phase σ_φ is limited to 2π while the frequency in Eq. (23) is increasing. Apart from this, the model approximates the uncertainty of the display surface slant very well.

Experimental results -plenoptic camera

Using the setup shown in Fig. 24 we recorded sets of images at different fringe frequencies. We recorded images at 10 different frequencies, starting at a screen frequency of k_scr = 10⁻³ pixel⁻¹ up to k_scr = 10⁻¹ pixel⁻¹ with a logarithmic step size. For each frequency we recorded 20 images, which we used to calculate the measurement statistics empirically. To obtain the frequencies k_cam of the fringe pattern in the camera image, we calculated the discrete Fourier transform (DFT) of a single image and detected the peak of the first harmonic. This way a scaling factor from k_scr to k_cam is obtained.
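The DFT-based scaling estimation just described can be sketched in a few lines. The image row below is synthetic, so the displayed frequency and the "unknown" scaling are ours for illustration; with real data one would use a row of the recorded image instead.

```python
import numpy as np

# Sketch of the scaling-factor estimation described above: detect the first
# harmonic of the recorded fringe pattern via the DFT and relate it to the
# displayed screen frequency. The image row here is synthetic.

k_scr = 0.02                                   # displayed frequency [1/screen-pixel]
scale_true = 0.35                              # unknown screen-to-sensor scaling
x = np.arange(1024)
image_row = 128 + 100 * np.cos(2 * np.pi * k_scr * scale_true * x)

spectrum = np.abs(np.fft.rfft(image_row - image_row.mean()))
k_axis = np.fft.rfftfreq(x.size)               # frequencies [1/sensor-pixel]
k_cam = k_axis[np.argmax(spectrum)]            # peak of the first harmonic

print(f"k_cam = {k_cam:.4f} 1/px, scaling k_cam/k_scr = {k_cam / k_scr:.3f}")
```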
Figure 25 shows the mean empirical standard deviation σ_s of the measured distance s for a pixel in the depth map. Contrary to expectations, we obtain a very low standard deviation for the lowest frequency. Furthermore, for the highest frequency the standard deviation seems to increase slightly. However, both effects are plausible. For low frequencies, the gradient in the image is simply not high enough to obtain reliable depth estimates, and hence no measurements at all or only a few measurements are obtained. Too few measurements do not allow us to calculate reliable statistics. Due to the periodic pattern, for high frequencies one obtains ambiguous stereo matches in neighboring micro images. Hence, the depth estimation has a higher number of outliers. Nevertheless, the standard deviation σ_s behaves similarly to that predicted by the presented model, as shown in Fig. 15. Since we have no accurate MTF for the screen used in this experiment, and the ambient light can only be influenced to some degree, we cannot validate the model on a more quantitative basis.

Figure 26 shows the reconstructed intensity image of the case recorded by the plenoptic camera. Here, one can still slightly see the borders of the micro images, which are a little bit darker than the center.

In contrast to the deflectometric setup, we are not able to validate our model based on the recordings of this case, since we are not able to influence the pattern on the surface of the case. However, we still measured the empirical standard deviation for two different positions on the case. Here, the mean standard deviation is calculated based on a set of 40 images for all valid points seen in Fig. 26b. We used all valid points on the display and only those valid points on the case around the "Min/Max" inscription.

For the display we measured a mean standard deviation of 8.8 mm and for the inscription a mean standard deviation of 12.8 mm. Intuitively, one would expect to obtain a higher uncertainty for the display than for the inscription on the case. However, the depth estimation already filters out uncertain estimates, which leads to a sparser depth map on the display. This sparsity must also be taken into account when rating the results. Moreover, as can be seen from deflectometry, both the case and its display are not perfectly Lambertian surfaces. Hence, the obtained accuracy conforms quite well to the simulations shown in Fig. 17.

Conclusions

In this paper we proposed two models to predict the measurement uncertainty of a deflectometric and a plenoptic sensor. Based on our introduced models, we have shown that, for a given measurement setup, there exists an optimum fringe pattern that results in the lowest measurement uncertainty. In the case of the deflectometric sensor, the achieved height measurement uncertainty ranges between 1 and 100 nm if the surface is at least partially specular. In contrast, the measurement uncertainty of the plenoptic sensor on a perfectly diffusely reflecting and textured surface can be as low as several micrometers, depending on the measurement setup.

While the deflectometric sensor has much lower uncertainty for surface changes (3 orders of magnitude for partially specular surfaces), it measures surface normals instead of distances, which have to be integrated to obtain the surface height. The plenoptic measurement could help to regularize this integration by providing a relatively rough but robust distance measure.
While the simulations show plausible results for the proposed models, we were furthermore able to validate the models by exemplary real measurements.

Data availability. The raw image data can be made available upon request.

Author contributions. MZ and his supervisor MH developed the theory and evaluation for deflectometry, and NZ and his supervisor FQ did the same for the plenoptic sensor.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue "Evaluating measurement data and uncertainty". It is not associated with a conference.

Figure 2. Deflectometric camera MTF for defocused image of screen points with 1.0 m distance and focal plane distances g.
Figure 3. Motion blur MTF caused by camera motion in image space.
Figure 4. Image of a reflected pattern for five different surfaces ranging from high to low gloss for a pattern on the screen with spatial frequency k_cam = 1 mm⁻¹. Note the modulation change from left (high gloss, c = 3.8) to right (low gloss, c = 2.5), where the pattern is almost invisible.
Figure 6. Lambertian surface MTF for different surface gloss parameters.
Figure 8. Image projection of a focused plenoptic camera in the Galilean mode. One object point is projected to multiple micro images on the sensor.
Figure 9. Raw image recorded by a focused plenoptic camera. In contrast to a regular camera, a plenoptic camera has a micro lens array (MLA) placed in front of the sensor (see Fig. 8). Therefore, the raw image recorded by the camera is not one consistent central perspective image but consists of thousands of circular micro images, where each micro image shows only a small portion of the complete scene. From the magnified section, one can see that the same object point is projected to multiple neighboring micro images.
Figure 11. Epipolar lines in a hexagonally arranged MLA. Because the micro images are rectified to each other by nature, the epipolar line for a pair of micro images is defined by the vector between the respective principal points. The figure exemplarily shows one epipolar line (blue) for the five shortest stereo baseline distances (red).
Figure 12. Deflectometric measurement uncertainty of the phase on the screen, showing a pattern with fringe frequency translating to k_cam when reflected onto the sensor.
Figure 13. Deflectometric measurement uncertainty of the surface normal for several focus distances and surface MTF parameters.
Figure 14. Measurement uncertainty of plenoptic and deflectometric sensor for several surface MTF parameters. The pattern is shown on the surface and also on the screen. Both pattern frequencies are given relative to the camera sensor image frequencies k_cam.
Figure 15. Measurement uncertainty of the plenoptic sensor for different main lens focal lengths f and a perfect diffusely reflecting surface.
Figure 16. Measurement uncertainty of plenoptic and deflectometric sensor with motion blur σ_mot. The pattern is shown on the surface and also on the screen. Both pattern frequencies are given relative to the camera sensor k_cam.
Figure 18. Case of a thermometer used to validate the proposed sensor models. The case consists of different surfaces with different reflectance properties.
Figure 19. Setup used to measure the measurement uncertainty of the deflectometric sensor for different frequencies of the fringe pattern on the surface.
Figure 20. Measurements of γ_cam for different pattern frequencies under dark conditions.
Figure 21. Measurements of the surface MTF (points) and extrapolated data (lines) for two areas on the surface (display and case) and two illumination conditions (dark and bright surrounding).
Figure 22. Measurements of φ for different pattern frequencies under dark conditions. Measurement noise increases with the pattern frequency, depending on the surface reflectance.
Figure 23. Comparison of predicted measurement uncertainty (lines) and standard deviation (points) of σ_α using the deflectometric measurement model.
Figure 24. Setup used to measure the measurement uncertainty of the plenoptic camera for different frequencies of the fringe pattern on the surface.
Figure 25. Measurement uncertainty of the plenoptic sensor for different main lens focal lengths f and a perfect diffusely reflecting surface.
Figure 26. Case recorded by the plenoptic camera. Intensity image (a) and depth map (b) calculated from the recordings of the plenoptic camera.
Evaluating the Efficiency of a Collaborative Learning Network in Supporting Third Sector Organisations in the UK

Third-sector organisations are a collective term for voluntary and community services, charities, and social enterprises. Within the UK and internationally, a subset plays a crucial and ever-expanding role in mental health care provision, delivering valuable client- and community-led services. However, in the UK these organisations are under increased pressure to demonstrate their value, and many are constrained by scarce resources and a lack of expertise. The Service Improvement Learning Collaborative was conceived as an innovative model for shared learning to enhance the value of this sector in mental health care support, generating a valuable resource of practice-based learning and promoting the implementation of effective practices. The initiative combines a collaborative learning model with mentorship support and in-depth, data-analytical profiling. This collaborative involved a network of six organisations focused on exploring the maximisation of data quality, the minimisation of client attrition, and the optimisation of clinical outcomes. Evaluating the collated data helped identify the many unique challenges facing the sector and evidenced the model as a pragmatic solution for service quality improvement. This chapter provides an overview of the project's methodology, including its underlying rationale, first year of operation, and the value of experiential learning for the field.

Introduction

In the UK, Third-Sector Organisations (TSOs) are a collective term for voluntary and community agencies, charities, and social enterprises, of which a sub-section provides health and social care via independent and value-driven services [1]. Recent audits of the whole sector reveal a notable presence, with over 160,000 organisations and nearly 1 million employees and volunteers operating in the UK [1]. Across many high-income countries, it is an area which is growing rapidly as governments seek to harness their innovation and local capabilities [1,2]. Given their nature, TSOs tend to be highly regarded for their proximity to the community, welcoming facilities, and the ability to engage those with complex and chronic needs [1-4].

Despite the potential benefits of TSOs, little research has been undertaken to evidence their impact and effectiveness [2,3]. Research applicable to many mental health care TSOs in the UK, including systematic reviews [2], national audits [1] and interviews with mental health charities [3], highlights the clinical and economic barriers affecting the production and utilisation of practice-based evidence (PBE). Many are constrained by tight budgets and scarce resources and often exist as 'micro-entities', making bidding processes and research prohibitively expensive [1,4]. The evidence that has been produced has been characterised as low in quality, lacking methodological rigour and theoretical modelling, and relying on non-representative stakeholder feedback [2,3]. Access to learning is equally challenging, with constraints on resources to review the latest research literature [3,4]. For TSOs to overcome these challenges, there must be greater alignment of needs and priorities between providers, commissioners, policymakers and academic institutions. One approach to optimising the production and sharing of knowledge has been to form collaborative learning networks (CLNs) of services using a similar treatment model or methodology for generating evidence [5].
By partnering with similar providers, these networks enable organisations to explore, share and integrate learning across a network, maximising the potential for practice-based learning. CLNs have demonstrable potential within the UK mental health care sector, having reported success in the Improving Access to Psychological Therapies (IAPT) programme [6] and in Children and Young People's services [5]. The IAPT programme, a national government-funded initiative for English primary mental health services, has been an influential driver in generating public-domain service performance data. Having mandated sessional measurement across all services over a decade ago, it has recently achieved pre- and post-treatment outcome completion rates of 98% for clients completing therapy [7]. These high levels of data completeness are essential for supporting CLNs [6].

The quality implementation framework (QIF) [8] has previously been used as a schematic structure for introducing practice changes, including routine outcome monitoring (ROM), within mental health care services [9]. This model synthesises 25 implementation methods from almost 2000 evaluation reports, comprising 4 action phases and 14 critical steps [8]. Combined with research on the value of CLNs, an initiative was undertaken to bring together multiple TSOs delivering mental health care to enhance service quality. This chapter describes the rationale, process, and outcome of this initiative across its initial start-up and first year of operation using a traditional storytelling structure, with reference to the QIF [8] and other implementation frameworks [10-13].

Telling stories

Implementation science is the scientific study of techniques to enhance the quality and effectiveness of health services by advancing the systematic uptake of evidence-based practice (EBP) in routine clinical settings [14]. The learning from the field demonstrates the gap between what is shown to be effective and what is implemented in practice [14]. According to the QIF, in preparation for implementing practice change, agents must assess the host setting and build capacity: meeting with the service, analysing its infrastructure, surveying and training practitioners, and securing buy-in [8,9]. Regardless of how well-founded and robust the evidence may be, there is no guarantee it will be accepted and readily adopted by stakeholders [9,15]. Persuasive communication is therefore critical for framing research findings for specific contexts to enhance their uptake and impact [16]. The power of storytelling is increasingly recognised as an effective technique for transforming attitudes, perceptions and behaviours, as stories summarise concepts simply, quickly and effectively, appealing directly to a stakeholder's values and interests [16]. For instance, within UK mental health care services, storytelling as a technique has been associated with rapid improvements in data quality [9]. It is for this reason that our chapter shares the experiential learning and evaluation of this CLN for mental health care TSOs using a traditional storytelling outline, describing its setting, characters, plot, and themes.

Setting

To overcome the challenges of effective service development, a CLN was devised to support TSOs in the collection and use of data to inform the future development of operational practice.
Inspired by the Institute for Healthcare Improvement's (IHI) [12] 'Breakthrough Series' Collaborative Model and implementation science research [11-14], this initiative intended to break new ground by working in close partnership with TSOs to generate evidence and inform quality improvement. The framework integrated implementation techniques using plan, do, study, act (PDSA) cycles [10], focusing on specific areas of service delivery and, as modelled by the QIF, creating a structure for implementation [8,9]. This would become known as the Service Improvement Learning Collaborative (SILC).

Working in partnership, TSOs were invited to upgrade their measurement system to a more sophisticated software platform providing additional reporting features relevant to service operation and development [17]. Services were required to verify their commitment and autonomy at a managerial, board and trustee level to commence on a year-long journey to profile and engage with subject-relevant resources and attend monthly mentorship sessions and quarterly overnight residentials. A memorandum of understanding was devised to emphasise that membership was contingent on full-service participation, and this was incorporated into the development of an implementation plan [8,9].

This project took place over the course of a year, focusing on a different challenge each quarter, including data collection, session attendance, endings, and clinical outcomes. The project commenced with a planning meeting involving introductions, training and attitudinal surveys. With reference to the QIF, these steps were undertaken to assess the fit between the organisation's aspirations and readiness for change, allowing for open discussion and early feedback [8,9]. Across the project, there were monthly supportive calls with an assigned mentor from the research team and quarterly in-person residential meetings with fellow TSOs, each supported by in-depth data profiling throughout. The purpose of the mentorship and residential sessions was to support participants in monitoring aspects of service quality and to provide supportive feedback mechanisms which, according to the QIF, are critical post-implementation support strategies [8]. To improve future applications, the end of the year culminated in a summative conference with fellow mental health services to share the findings from the project's first year in operation [8-10]. A diagram of the SILC CLN model, including the induction, mentorship, residentials and summative conference, is outlined in Figure 1.

Characters

The QIF emphasises the criticality of creating an implementation team to oversee the rollout, set targets and agree off-track remedial action [8,9]. The SILC project team was assembled in 2016, consisting of academics and clinicians with extensive experience in the field of talking therapies and service design [9]. This team was responsible for developing learning resources, providing mentorship support and tracking data through the relevant quarterly themes of service development. The team also worked directly with individual service leads to cascade learning and implement practice change, compiling routine reflective case notes and disseminating learning throughout the network. A series of prospective pilot services were approached and recruited in early 2017, subject to expressions of interest and eligibility criteria.
The SILC initiative was specifically aimed at mental health care TSOs using CORE IMS computerised quality evaluation systems [17] to obtain evidence on their delivery and strengthen their position for funding and benchmarking. Those eligible had been using CORE outcome measurement systems for over 5 years, primarily as an administrative tool to log clinical activity. Within all but one of the TSOs expressing interest, there was little analysis of the data being undertaken and no indication of it being used clinically or to enhance service quality. Prospective services were using traditional pre- and post-therapy measurement approaches, acquiring outcomes data for around 40-50% of clients; a rate which is representative of the field and this methodology generally [18]. Many were also experiencing high rates of non-attendance and attrition, plus modest clinical outcomes for those with outcomes data.

The exploration phase of Aarons, Hurlburt and Horwitz's [11] conceptual model for implementation identifies the importance of inner and outer contexts. In this project, it seems early withdrawal during the recruitment stages was due to a combination of socio-political factors and a lack of absorptive capacity which impeded progress [11]. What had started as 12 prospective members soon halved to only six. Various reasons were given, but discontinuation was mostly attributed to managerial turnover, lack of capacity for change, and workforce restructuring or resistance. By contrast, the remaining TSOs demonstrated their levels of commitment via an initial attitudinal survey which, when disseminated to all practitioners (n = 49), achieved a high response rate of around 80%.

The six services joining the project ranged in size, geographical location and clinical specialism. Annual throughput ranged from around 80 to 300 clients per organisation. Clinical support specialisms included psychological support for female victims of domestic abuse; women on low incomes; parenting; unpaid carers; and general counselling support. Informed by QIF support strategies, each service was assigned a mentor from the SILC project team using a consultation and matching process [8,9]. Members received regular updates via a monthly blog post on the project's website (www.silcuk.org) and a quarterly newsletter via email. Resources were shared via the website and there were opportunities to contribute in online discussion forums. The combination of online meeting platforms and email correspondence enhanced the sharing of stories, communicated learning and progress, and helped to sustain the network.

Plot

Expanding on the story structure framework, this section incorporates a generic narrative mountain structure, breaking down the plot by its background, rising action, climax, falling action, and resolution.

Background

During each quarter, the project team worked with each TSO to produce an implementation plan including a set of targets, infographics, quality checklists, report templates and mentorship support, with PDSA cycles to structure the process [8-10]. Many of these tools required regular, in-depth auditing of data recorded during assessment, treatment, and discharge. Analyses were complemented by attitudinal surveys of front-line practitioners focusing on their perceptions and experiences across each quarter. Services were encouraged to reflect on and communicate their learning at the quarterly residential meetings, while critically appraising fellow members' contributions.
Rising action: Events leading up to the main challenge(s)

Throughout the project, it became clear that an organisation's success in addressing the challenges depended on its relationship with the process of using measurement questionnaires and how deeply practitioners and clients were engaged in responding to feedback. The team later conceptualised this as a development cycle with four distinct evolutionary stages that described the operational depth of practitioners' relationship with measurement: pre- and post-therapy measurement using paper forms; measurement at every session using paper forms; digital measurement at every session using tablets or computers; and digital measurement at every session, tracking and sharing outcome progress directly with clients throughout the entire therapeutic encounter. It was recognised that services further along in this cycle had an inverse relationship with measurement in terms of its input and value towards stakeholders. Those in the later stages were able to maximise the value for clients, which in turn benefitted other groups including practitioners, service management, and boards/funders. Conversely, those operating in the earlier stages were limited in their value to certain groups, typically the boards/funders. Figure 2 shows a conceptual model of this, including the resulting value for stakeholders.

Conceptual implementation models highlight how the structures and processes that exist within organisations influence the adoption of practice changes during the active implementation phases [8,10,11]. Within the SILC project, it was observed that completing paper forms, particularly at every session, generated huge and inefficient administrative burdens for members. This created barriers for practitioners looking to use data as feedback to enhance client outcomes and develop their clinical skills. During the year, most organisations evolved their administrative processes by replacing paper with digital methods, recording via electronic tablets. The services most successful in achieving the optimal rates for each quarterly challenge described understanding measurement as a construct and extension of the client. By focusing on creating the maximum value of measurement for clients, a myriad of other benefits at different stakeholder levels was also reported [19]. Naturally, some services were better equipped than others in accessing the appropriate technologies.

Climax: The main challenge(s) reach a high point

During the project, one of the participating TSOs withdrew due to a turnover in management and evolving financial pressures. Two other services experienced management turnover during the project which, although not impacting their participation, did require additional input and training from the SILC project team. Practitioner turnover was understood to be common in TSOs [2-4]; however, the rate of turnover concentrated at a managerial level had not been anticipated. For services with a complex management structure, this further complicated the sharing of learning and the addressing of each quarterly challenge. It was discovered that when managers with an on-hand leadership style were absent, this would impact key aspects of their service operation, including the collection of high-quality data.

Another key challenge concerned the issue of session attendance and unplanned endings. A list of categorical reasons for why a session was not attended was compiled, to be recorded each time this occurred.
Although the recording of reasons for cancellations was high, this was not the case for sessions that were not attended (DNA; no advance warning given), despite subsequent sessions being attended in approximately half of all instances. The most common reason recorded for cancellations during the second quarter (n = 482) was 'Health Problems' (40%), while for DNAs (n = 160) it was 'Unknown' or 'Not Recorded' (76%). The absence of recorded reasons, despite subsequent sessions being attended, suggests practitioners either forgot or did not feel comfortable exploring why a session had been missed. This is concerning, as DNAs were found to be indicative of an unplanned ending.

Definitions are important and have been shown to vary the reported unplanned ending rate [20]. During the project, the unplanned ending rate reduced from 32% at baseline to 27% at the end of the third quarter; however, defining and interpreting these rates revealed notable issues. Among the participating members, there were multiple interpretations of what constituted a planned versus unplanned ending. Given its inherently subjective nature and potentially negative connotations, this limited the analysis somewhat. However, the links between session non-attendance and unplanned endings were consistent across all services and tended to occur early in treatment, as described in the next section.

One of the aims of the SILC project was to provide services with regular analyses to inform delivery and operation. This section reports on some of the headline findings along with extract quotes from two of the SILC TSOs. Systems-level modelling demonstrates the importance of considering the interrelationships between individual practice elements as opposed to solely focusing on each in isolation [11,21]. Although the challenges during each quarter were distinct, the areas of overlap were noteworthy. Not only was session non-attendance linked with unplanned endings, but those TSOs with the longest-standing commitment to high-quality data also reported the highest rates of clinical improvement.

Data quality

One major shift during the first quarter was to adopt sessional ROM, moving from traditional pre- and post-therapy measurement approaches. This process was supported by a dedicated project member auditing and feeding back information to services. By the end of the first quarter, pre- and post-treatment outcome completion rates increased from an average of 65% at baseline to 98%, while by the end of the year this was 97%, with all TSOs achieving above 90% and half achieving 100% completion rates (Figure 3). These values were almost identical to the IAPT programme's recent achievement of 98%, a decade after its first site implementation [7].

Session non-attendance

At the start of the second quarter, members began to record session non-attendance, including when an appointment was cancelled (by the client) or the client did not attend (DNA; no advance warning given). One of the primary areas of interest was understanding when missed sessions were most likely to occur. Aggregating each service's datasets, the total number of appointments per sequential session number was tallied to assess what proportion was recorded as either cancelled or DNA. Including only session numbers with over 10 appointments each, it was possible to chart this data (Figure 4; see also the analysis sketch below). It was identified that cancellations as a proportion tended to increase the longer therapy progressed, although this might be due to a lower number of appointments at these stages.
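To make the tally concrete, here is a minimal pandas sketch of the Figure 4 analysis described above. The column names and example records are hypothetical and do not come from the SILC dataset; in the real analysis, only session numbers with more than 10 appointments each were retained.

```python
import pandas as pd

# Minimal sketch of the Figure 4 tally: the proportion of appointments per
# sequential session number recorded as cancelled or DNA. Column names and
# records are hypothetical, not the SILC data; the real analysis kept only
# session numbers with more than 10 appointments each.

appointments = pd.DataFrame({
    "session_number": [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
    "status": ["attended", "attended", "cancelled", "attended", "dna",
               "attended", "cancelled", "attended", "dna", "attended"],
})

counts = (appointments.groupby("session_number")["status"]
          .value_counts().unstack(fill_value=0))
proportions = counts.div(counts.sum(axis=1), axis=0)
print(proportions[["cancelled", "dna"]].round(2))
```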
DNAs as a proportion did not exceed 10% for any session number, although they did tend to occur earlier in therapy, with sessions 2-5 reporting the highest rates of 7-8%. The occurrence of DNAs declined somewhat as therapy progressed, possibly due to contracting which discharged clients after missed appointments without prior notice. Focusing on session non-attendance helped determine the scale of the challenge and how the pattern of cancellations and DNAs differed, prompting two participating services to revise their policy in the interests of equitable access and service efficiency.

Unplanned endings

For the third quarter, the focus shifted to exploring the nature of unplanned endings. An analysis was undertaken to explore the potential associations between unplanned endings and the rate of non-attendance during therapy. This analysis found that, across all services, there was a link between session absence and ultimate attrition, especially regarding DNAs. For all TSOs, the DNA rate for clients with an unplanned (13%) versus planned (2%) ending differed by a factor of around 6.5, ranging from 2 to 18 times across providers (Figure 5). By the end of the third quarter, those with planned endings attended almost 3 times more sessions (11) than those with unplanned endings (4) and were more likely to report reliable improvement for planned (62%, n = 226) versus unplanned (36%, n = 70) endings.

To assess how the pattern of non-attendance varied during therapy per ending type, session numbers and total appointments recorded were banded across all services (Figure 6). This analysis found that, again, non-attendance was indicative of an unplanned ending, with higher rates of cancellations and DNAs. For those with an unplanned ending, it also revealed that while DNAs as a proportion were reduced in the lower session-number bandings (2-4; 5%), they remained consistent at around 17-21%, excluding the 14-16 banding, which reported a rate of 30%. Similar to the overall patterns of attendance, cancellations as a proportion of all appointments tended to increase the longer therapy progressed, but again this could be explained by a decrease in appointments recorded during these later subgroup stages.

Clinical outcomes

In the final quarter, the project focused on clinical outcomes and understanding therapist variation and trajectories of change. To identify a possible dose-effect, an analysis was undertaken to assess the rates of change across individual domains of the CORE-OM (wellbeing, problems, functioning, and risk) within the one service using the full 34-item measure, as opposed to the shorter CORE-10, which does not record all domains [17]. A pattern of average scores was mapped relative to individual session numbers up to the 10th session (for clients having 10+ appointments each) for those who reported reliable improvement (n = 130 clients; 891 sessions) versus those who reported no reliable change (n = 39 clients; 243 sessions) or reliable deterioration (n = 7 clients; 53 sessions) (Figure 7). Based on this analysis, most of the score changes tended to occur early in treatment for those reporting reliable improvement, with an average change in scores of −6.1 across the first four sessions, scores remaining steady between sessions four and seven (−0.5), and then decreasing steadily from sessions seven to 10 (−2.3). For those reporting no reliable change or reliable deterioration, scores generally remained steady, with average changes ranging from 0.2 to 1.7.
This suggests the first four sessions were important for identifying clients who were likely to improve or not. This triggered the integration of a flag feature to remind practitioners to review progress early in therapy, to identify those at risk of showing no change and provide additional support.

Figure 7. A pattern-of-change comparison across the CORE-OM per session number illustrating early improvements for clients reporting reliable improvement compared with no reliable change or reliable deterioration.

The lived experiences of two TSOs engaging in the SILC project

Informed by the QIF, improvement for future applications requires learning from experience [8]. To gauge the experiences of those participating in the project, a brief semi-structured interview was conducted at the end of the year to explore what service managers thought of the initiative and how they might improve it for future services embarking on a similar journey of collaborative learning. The boxes below contain extracts from these interviews with two self-selected TSOs.

Service A: Interview Extracts
"Our first question was how is it going to work for our clients? Building that value for them, and the practitioners, giving them a value to the work. This is not a measurement, it's not an outcome, it's an aide to the process, something that helps the work with clients. And ..."

Resolution: How things have ended up in this story

In keeping with the IHI's [10] collaborative learning model framework, the first year of the SILC project culminated in a summative conference. Nearly 100 delegates were in attendance, representing a range of different sectors within the field of talking therapies. Both the project team and self-selected SILC TSOs held a discussion regarding their experiential learning during the first year of the project. There was a consensus at the event about the operational challenges facing modern-day talking therapy services. While systems were becoming increasingly sophisticated, the training and support necessary to build in-house expertise were reportedly difficult to access due to time and resource constraints, a saturated and uncertain field, and isolated working practices. Providers, particularly in the third sector, desired the opportunity to work in partnership with others to share learning and further enhance their own and the sector's organisational and therapeutic models.

With the first stage complete, the SILC project has amassed a wealth of learning which will be converted into a modular learning programme, providing a resource for future applications of the network [8,9]. This will replicate the CLN model and invite existing SILC members to act as guest speakers and offer unique support and valuable insights to newly recruited collaborative members. There are three existing SILC TSOs who have declared their interest in and commitment to continuing with the project. Due to a turnover in management and decreased contribution, two members have since withdrawn. The next phase of the initiative will focus on expanding the network, building on the existing knowledge and aggregate data to support ongoing analyses and resource development.

Themes

Themes are the essence of a story, the central constructs which reflect the actions, perceptions and experiences of the characters in their situational contexts.
They represent the underlying 'big ideas' which transcend the distinctions between settings and circumstances and help conceptualise elements and the links between them. This is important given the lack of guiding conceptual models for the sustainment phase of implementation [11]. Listed below is a discussion of some of the key themes both the participating services and the project team uncovered during this stage of the project.

Service E: Interview Extracts
"Having the support from the team that was specific to our service, having experts on hand when you needed them."

The possibilities of CLNs in mental health TSOs

The unintegrated nature of TSOs in the UK means there can be obstructions to developing and integrating EBP [2-4]. Within the field of talking therapies, determining what constitutes EBP has been criticised for its reliance on controlled study methodologies which, due to their somewhat artificial nature, are considered detached from the clinical realities of routine practice settings [22,23]. Certain advocates support a PBE approach to complement and address these limitations [24]. However, PBE relies on the collection of robust, aggregate datasets across multiple organisations sharing a common system or model. Fragmentation, isolated working practices, and resource constraints can limit TSOs in generating the PBE necessary to support their delivery [2-4]. Indeed, the primary interest from prospective members in this project was overcoming these barriers and demonstrating that they were treating clients effectively.

By pooling experience, resources and expertise around a central, unifying theme, TSOs were able to systematically explore, assess, understand and reflect upon key aspects of service quality development. Through iterative cycles, strategic improvement models and coordinated, collaborative dialogue [10], services were able to generate timely and actionable insights that were relevant to their unique circumstances. Testing practice changes on small scales, using focused inquiry and PDSA cycles, helped achieve small wins which, according to evaluation theories, can be an effective strategy for boosting perceived capabilities [6,10]. Replicating previous research findings [2,3], access to a supportive academic project team was deemed invaluable for producing, mentoring and synthesising analyses and learning across the network. However, liaising with several TSOs proved to be a lengthier and more complicated process than first envisaged; an experience which is echoed elsewhere [6]. This identifies an important obstacle for sustaining CLNs, particularly those undertaking continuous analyses. Offsetting resources to a project team may create a more efficient process within individual services, as it shares the expertise around a common need. If this were true, then it could prove more efficient and cost-effective for TSOs overall.

Given the central communicative nature of CLNs, it is important these channels are equitable. Within the third sector, organisations tend to differ in size and can be equally varied in their operational modelling [2,3]. This inequity in size and visibility could feasibly allow larger organisations to leverage greater influence over smaller providers to work towards their agenda. To overcome the challenges of distinct delivery models within CLNs, a central governing platform using cooperative representation could therefore be valuable for identifying topics of interest and establishing a dictionary of terms.
Similarly, these communication channels ought to use terminology that is consistent and agreed upon, particularly around subjective concepts such as ending types, as doing so would ensure greater validity and reliability in data analytics [20].

Management and leadership

Many implementation frameworks emphasise the planning stages as critical to successfully embedding innovation [8,11-13]. Because implementation can be a complex process involving the integration of existing practices with new ones, it typically requires a well-planned, structured and iterative process, addressing the various philosophical and practical barriers that can occur regularly [9,15,21]. It is within these contexts that supportive leadership can be a facilitating factor [2,11,15,21,25]. Without effective leadership to track, monitor and effectively champion the merging of practices, any expended effort can unravel [9,15,25]. Those in leadership positions need to be present and well respected, retaining a detailed awareness and understanding of delivery and operation [15]. Service quality development through CLNs therefore appears to be reliant on management structures and local leadership.

In considering the scale of change and level of turnover in TSOs, particularly at a managerial level, this reliance on leadership highlights a notable barrier. Given that the project team tended to work exclusively through managers brokering knowledge and training, their absence ultimately affected their organisation's participation and operational processes. It could be argued this was a side-effect of the chosen methodology, which may have benefitted from broader involvement and contribution among the workforce. Advocates across the field recommend ensuring a local champion is permanently in place, advising that those departing a service provide sufficient training to those replacing them [9,15,26]. While this recommendation is practical, how it applies to TSOs is perhaps more complicated.

Continually nurturing the operational climate through sustained involvement and presence can help maintain the functional mechanisms of feedback systems [15,25-28]. A perceived lack of presence in the project among some practitioners served to undermine the initial enthusiasm and positive ethos established at the project's outset. Services which thrived tended to dedicate additional time and resources to sharing information in an open and accessible manner. This actively engaged the workforce in the minutiae of feedback-informed treatment (FIT) [28] and encouraged more open dialogue. The literature on FIT teaches the value of routinely soliciting responses from clients about treatment progress, aiding practitioners at a therapeutic level [28-30]. However, there is an additional service level at which feedback could also help inform practitioners and other stakeholders about enhancing client engagement and outcomes. By combining a FIT model with a feedback-informed service, practitioners could have timely access to relevant learning. With reference to the QIF [8], supportive feedback mechanisms will be relevant to all stakeholder levels, and through aggregate data the client voice can be made accessible to all, helping sustain innovation.
The resource challenge of TSOs

Based on the learning from this initiative and relevant national and international research [2-4], there appears to be a significant resource challenge facing TSOs. Although many report an interest in quality improvement [3], the constraints on providers, including turnover, financial pressures and limited budgets, appear to greatly impact their ability to generate data and engage in practice development [2-4]. For a sector that relies heavily on volunteers, some of whom are in trainee positions [1], preserving a level of local expertise represents a continual challenge, particularly as systems become more expansive, specialised and costly. Although the CLN was a means to pool and share resources, supporting the implementation phase [11], external pressures had a notable influence on its integration, process and overall output. The level of attrition at the beginning, and the eventual withdrawal of other services, highlights the scale of this challenge. Consequently, this further demonstrates the criticality of the QIF phases in thoroughly assessing the fit between the host setting's aspirations and its readiness for change [8,9]. Given the sheer scale of change and the advancing pace of new technologies, feedback systems and innovations are becoming increasingly sophisticated while, at the same time, access to training and support might not be keeping pace [3,31]. For many, including attendees at the summative conference and across the wider literature [3], allocating resources to this endeavour might be considered unfeasible, as few can afford or justify it economically. This issue is further compounded by the fluctuating and isolated nature of services, as well as barriers to accessing the literature due to subscription paywalls [2,3]. Accordingly, this highlights the need to consider the additional training and support required when adopting new innovations. Despite its limitations, a CLN could address some of the resource challenges identified, increasing the opportunities for learning. Disseminating feedback throughout a network might help overcome some of the barriers to accessing research and forming partnerships [5,6,10]. Shared learning across all levels of the network could foster a broader culture of openness and training, supporting collaboration across multiple platforms, while also generating an asset for feeding back insights across the sector. Undoubtedly, this would rely on the aggregation of robust datasets and a communication platform to support the process [5].

Designing the infrastructure

The experiences from this project revealed the influence of organisational factors and infrastructure on the uptake of practice changes. Although research on the integration of feedback systems and ROM has identified numerous practical barriers, much of the emphasis has focused on practitioners [9,15,31-37]. Indeed, positive attitudes towards feedback have been shown to facilitate improvement in clinical outcomes, while resistance can have the opposite effect [33,38-40]. Resistance reportedly stems from underlying performance anxiety or negativity about the relevance and utility of the practice [9,15]. However, the learning from this project highlights how positivity and motivation might not be sufficient in isolation.
Despite the generally positive attitudes from the survey and among the management mentees (itself likely a result of the selection process), many TSOs still encountered challenges, many of which appeared to be due to limitations in the infrastructure and frustrations with the technology. This, in turn, affected their capacity to use the system, something which has been shown to be a facilitator in implementing EBP [25,27,31]. Restrictive and frustrating working practices can lead to negative perceptions forming [25,27,36,41], suggesting attitudes might be mediated by how user-friendly and engaging a system is. For TSOs facing time and resource constraints, the simplicity of a feedback system is perhaps more pivotal. In these circumstances, systems may benefit from a uniform, standardised approach so that training and support can be refined and made accessible via fully integrated, self-led instructional packages [32]. In terms of the QIF [8], the critical steps of assessing needs and resources, capacity, and pre-implementation training would benefit from accessible resources which are intuitive and easy to understand.

Refocusing measurement to respond to and maximise the value for clients

Traditionally, measurement in TSOs has been undertaken to satisfy the needs of boards and funders and, to a lesser extent, service managers [3,4]. The pressures on services have meant that pre- and post-measurement approaches have dominated, with their purpose serving mainly administrative rather than clinical needs [3,9]. ROM established a method for improving data quality and representativeness, although the emphasis on its clinical utility or its use in service development has only recently been advanced [7]. This illustrates how the focus and value of measurement have been positioned to satisfy a broader sector-level drive. However, by framing measurement in a way that maximises the value for clients, as observed in this project, there appear to be many cumulative gains for all stakeholders, including practitioners, service managers and boards/funders. Across each of the common challenges, there seemed to be a critical period, usually within the first four to six sessions, which correlated with eventual outcome. For instance, a large proportion of DNAs tended to occur early in treatment, which was a useful indicator of an unplanned ending and, by extension, a reduced chance of reliable improvement [20]. For clients reporting reliable improvement in one TSO, most change seemed to occur during the first four sessions, while those reporting no reliable change or reliable deterioration showed little change across a 10-session period. This echoes the wider literature, which identifies the initial stages as a useful indicator of a client's subsequent engagement and outcome [42-45]. Accordingly, this trend highlights the criticality of early engagement and warrants further discussion about the implications of keeping clients in therapy who report no change or attend infrequently. Evidence has shown that decisions to prolong or conclude therapy despite a lack of positive therapeutic change can be influenced by subjective beliefs, norms and attitudes, sometimes superseding what feedback monitoring and practice guidelines recommend [45]. According to the literature, the clinical benefit of measurement can be mediated by a practitioner's engagement with, and attitude towards, outcomes monitoring [33,38,39].
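Because "reliable improvement" and "reliable deterioration" are central to the trends described above, a brief sketch of how such classifications are commonly derived may be useful. This assumes the widely used Jacobson-Truax reliable change index; the chapter does not state which criterion the network actually applied, and the numbers below are illustrative:

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson-Truax RCI: change divided by the standard error of the
    difference score. |RCI| > 1.96 is conventionally read as change
    unlikely to be due to measurement error alone (p < .05)."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0 * sem ** 2)                 # SE of a pre-post difference
    return (post - pre) / s_diff

# Illustrative values only: baseline SD of 7.5 and measure reliability of 0.90.
rci = reliable_change_index(pre=10.0, post=18.0, sd_baseline=7.5, reliability=0.90)
print(f"RCI = {rci:.2f}")  # about 2.39 here, i.e. 'reliable improvement'
```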
Relatedly, timely access to feedback has been shown to be a critical factor in the use of data among practitioners [27,34,36,46]. TSOs which encourage open dialogue and pay greater attention to this information could produce cumulative benefits across each of the quarterly themes identified [10,30,47]. An organisational culture of openness and commitment to learning was important, replicating findings reported elsewhere [15,46]. Additionally, giving practitioners access to service-level data might assist them in overcoming residual ambivalence, because its application to service quality development is readily observable.

Recommendations

For those interested in implementing a CLN to support TSOs, there are several recommendations based on this project's findings. Firstly, recording high-quality data is crucial to this model: it supports the network and its aggregate learning by effectively threading the client voice throughout all stakeholder levels. Promoting client engagement in the process of measurement is an effective strategy for enhancing data quality and building opportunities for clinical application [9,31,47]. Because of this, it is important that implementation teams do not underestimate the infrastructure necessary to support practitioners working to deliver these innovations [15,32,35,46]. While pooling resources can help overcome challenges relating to cost and access to expertise, without a shared framework and understanding of the key concepts, a CLN and its associated analyses are likely to be impaired. In keeping with the wider literature, access to expertise and a committed project team can be beneficial for supporting the network [2,3,5,6,9]. Focusing on distinct areas of service delivery through iterative improvement cycles, and acknowledging their interdependency, can help achieve cumulative benefits through the combination of smaller gains [6,21,25]. For TSOs, the role of leadership and the effects of turnover cannot be overstated. While it might not be feasible in TSOs to ensure a local champion is always in place, it is valuable to build a system that enables receptiveness towards continual practice innovation. Broader involvement and contribution among the workforce, through wider supportive feedback mechanisms, represents one effective strategy to achieve this.

Conclusion

TSOs represent a valuable and growing player in the provision of mental health care, yet many are constrained by limited budgets, isolated working practices, and a constantly shifting workforce. Together, these make producing and accessing evidence difficult, further limiting the sector from credentialing its impact and engaging in service development. To overcome these challenges, a CLN was implemented involving six TSOs and a dedicated project team to share learning and resources, with the aim of improving delivery and operation in the areas of data quality, session attendance, unplanned endings and clinical outcomes. The CLN was inspired by the IHI collaborative model [10], a framework for integrating and testing improvements using PDSA cycles, and the implementation process was guided by the QIF [8]. It was found that introducing ROM substantially improved data quality, which acted as the bedrock for all subsequent analyses and discussion. There appeared to be strong links between each of the common challenges, including increased non-attendance being associated with the occurrence of an unplanned ending, itself linked with a lower chance of reliable improvement.
Overall, this approach to generating timely and relevant practice-based insight through partnership working and mentorship support proved effective for stimulating service quality enhancement. Although TSOs face many unique challenges, including high staff turnover and strained budgets, those with on-hand, inspirational leadership and a commitment to maximising the value of measurement for clients reported the most success.
A Diagnosis of Camptodactyly With Benign Joint Hypermobility Syndrome in a Patient Presenting With Fixed Flexion Deformity of the Fingers and Striae

Camptodactyly is a genetic disorder that causes fixed flexion deformity of one or more fingers of one or both hands. It is very rare, particularly among children, and is linked to a handful of congenital connective tissue syndromes. It is passed on through generations with reduced expressivity. However, its association with benign joint hypermobility syndrome is rarely reported. Joint hypermobility syndrome is a condition of extreme joint flexibility that is related to a set of articular and extra-articular sequelae. We herein report a case of camptodactyly with benign joint hypermobility syndrome in a patient presenting with fixed flexion deformity of the fingers, joint hyperextensibility, and striae.

Introduction

Camptodactyly is a fixed flexion deformity primarily involving the proximal interphalangeal joint [1]. Although it occurs sporadically, it may be inherited in an autosomal dominant manner and is very rarely seen in children (1%) [2]. It occurs mostly unilaterally (66%) and sometimes bilaterally (33%), in a symmetrical or asymmetrical fashion. The proposed pathophysiological basis is mainly abnormal lumbrical muscle insertion; abnormal insertion of the flexor digitorum superficialis, an abnormal volar plate, and an abnormal extensor hood are also known to cause the deformity [2]. The diagnosis is made clinically, and radiology is routinely normal in the initial stages. Passive stretching and static splinting are the remedies to improve joint mobility. However, deformities that cause severe functional deprivation may require surgical techniques in the form of flexor digitorum superficialis tenotomy or corrective osteotomy [2]. Benson's classification types the deformity according to severity so that management can be planned appropriately [2]: type 1 includes isolated involvement of the little finger; type 2 has abnormal lumbrical insertion or an abnormal flexor digitorum superficialis origin; and type 3 has severe contractures with multiple digit involvement [2]. The clinical severity determines the treatment options. Camptodactyly also has a notable connection with several developmental dysmorphology syndromes, including Marfan syndrome [2]. However, the association with benign joint hypermobility syndrome is seldom documented. Benign joint hypermobility syndrome is a frequently missed rheumatological condition that causes highly flexible joints with associated pain [3]. The joints are easily movable beyond their normal range. Its prevalence around the globe is approximately 3% [3]. Many patients experience distressing symptoms such as pain and fatigue [4], and they are susceptible to developing significant musculoskeletal injuries [3]. Though it is speculated to be part of the hereditary diseases of connective tissue, such as Marfan syndrome, Ehlers-Danlos syndrome, and osteogenesis imperfecta, it is a milder, distinct, and commonly missed variant [5]. How benign joint hypermobility syndrome relates to, and overlaps with, the milder versions of the hereditary diseases of connective tissue remains a grey area in the literature [5]. Benign joint hypermobility syndrome has a strong genetic association in the form of autosomal dominant inheritance [6].
The Brighton criteria for the diagnosis of joint hypermobility syndrome consider hyperextensibility of the elbows, knees, and fifth fingers, along with arthralgia lasting three months or longer in four or more joints, among several other aspects [6]. A multidisciplinary team approach is used for management. Education, activity modification, and muscle strengthening exercises are the cornerstones of treatment [6]. Open kinetic exercises (in which the distal extremity moves freely) and closed kinetic exercises (in which the distal extremity moves against resistance) are used in combination for strength training [6].

Case Presentation

A 15-year-old Sri Lankan boy was brought by his parents to the rheumatology clinic with fixed flexion deformities of the little fingers bilaterally, with associated diminished functional capacity of both hands and pain during finger extension. He had pain in all five metacarpophalangeal joints and the interphalangeal joints of both hands, which had been persistent for more than nine months. No other joints were involved. He did not report any persistent headaches, myalgia, or fatigue. There was no history of trauma or accidental fractures in the past. He denied any early morning joint stiffness or recent febrile episodes. He complained of an itchy rash on the dorsal aspect of the right index finger, which had been present for almost six months. He denied any altered bowel habits, urinary symptoms, or blood and mucus diarrhea. There were no features to suggest an autoimmune etiology. Neither did he have any family history of such deformity or of rheumatoid arthritis. On examination, he had a body mass index of 17.1 kg/m². He was afebrile and not pale. He had no lymphadenopathy, clubbing, or ankle edema. No visible ecchymotic patches were present on the skin. There were no tender points on palpation. Fixed flexion deformity was observed at the proximal interphalangeal joints of both little fingers (Figure 1). There were no tender or swollen joints. He had a non-scaling eczematous rash on the dorsal aspect of the right index finger.

FIGURE 1: Fixed flexion deformity of both little fingers.

Prominent striae atrophicae were noticed on the back (Figure 2), but hernias and varicose veins were absent. The arm span to body height ratio was 0.8. Thumb and wrist signs were absent. A high-arched palate was also not observed. On fundoscopic examination, there was no ectopia lentis or any other abnormality related to connective tissue diseases. Cardiovascular examination revealed a pulse rate of 88 beats per minute, which was of good volume and regular. The cardiac apex was located at the fifth intercostal space in the midclavicular line, and there was no detectable murmur. The respiratory, abdominal and neurological examinations were virtually unremarkable. The investigations listed in Table 1 were performed. The anteroposterior and lateral views of the hand x-ray (Figures 4a, 4b) were taken and reported as normal.

FIGURE 4: X-ray of the hand. (a) Lateral view. (b) Anteroposterior view.

The anteroposterior and lateral views of the thoracolumbar spine x-rays (Figures 5a, 5b) were taken and were normal. An ultrasound scan of the soft tissues of the hand excluded synovitis. The patient and his parents were counselled about the condition and its complications before proceeding to the next stage of management. Analgesics were initiated for symptom relief. Occupational therapists began pacing, a systematic, graded approach to participation in activities.
The team also provided activities to support sleep hygiene, relaxation, and daily occupations. The podiatry team provided advice regarding appropriate footwear and orthotics to help the patient achieve greater functional independence. We also arranged counselling for the patient and his parents to ensure his full participation in the activities of daily living and to build motivation. Additionally, we made a dermatology referral regarding the non-scaly eczematous rash on the dorsal aspect of the index finger, which was diagnosed as psoriasiform eczema. He was offered topical corticosteroids, to which he showed a good response. This holistic approach helped to promote improved functional capacity and mental wellbeing in the patient.

Discussion

The diagnosis of benign joint hypermobility syndrome was made on the basis of hyperflexibility at the elbows, finger metacarpophalangeal joints, knees, and thumbs. More sinister connective tissue diseases, such as Marfan syndrome and Ehlers-Danlos syndrome, were excluded clinically and with investigations, though some of the genetic tests were not available at the center. A five-point questionnaire was used to assess severity [7]; the questionnaire, along with the patient's responses, is shown in Table 2. He scored 3 points out of 5, which is more than 90% specific for benign joint hypermobility syndrome [7]. Several maneuvers were used to calculate the Beighton score, for which the patient scored 6 (passive hyperextension of both elbows beyond 10 degrees, passive hyperextension of both knees greater than 10 degrees, and passive dorsiflexion of the metacarpophalangeal joints to 90 degrees on both sides) [7]. He fulfilled the revised Brighton criteria for benign joint hypermobility syndrome with one major criterion (polyarthralgia involving at least four joints and lasting more than three months) and two minor criteria (a Beighton score of 6, and skin changes such as striae) [8]. In addition, the patient's fixed flexion deformity of both little fingers, in the absence of any radiological abnormality, was diagnosed as camptodactyly. The association of camptodactyly with joint hypermobility syndrome is not well documented, although it is related to congenital syndromes such as Marfan syndrome and Beals syndrome [9]. Benign joint hypermobility syndrome is an overlooked entity that causes chronic pain and fatigue. It is sometimes missed owing to the variable nature of its presentations. However, affected patients are liable to develop fatigue, headache, orthostatic hypotension, anxiety, hernias, functional gastrointestinal disturbances, vasovagal syncope, dysautonomia, and genitourinary symptoms [10]. Therefore, it is necessary to obtain a comprehensive history and perform a detailed clinical examination. After excluding the more alarming heritable connective tissue diseases, a thorough clinical examination of the musculoskeletal system helps to establish a unifying diagnosis and to assess the extent of the burden. Collaborative care from a multidisciplinary team of rheumatologists, physiotherapists, occupational therapists, podiatrists, psychologists, and orthopedic surgeons is important for a successful treatment outcome. A probable reassurance is that joint laxity may decline with advancing age, so the symptoms should gradually improve. The prime goal of physiotherapy is to tackle muscle inhibition and build up muscle strength.
Muscle strengthening exercises, active mobilization exercises, and proprioception exercises are included. Once the neutral resting position of the joint is achieved, retraining is done to gain dynamic control whilst moving the adjacent joints [11]. Occupational therapists assess the patient with a view to improving sleep, relaxation, and activities to ameliorate functionality. For example, patients are provided with specialized pen/pencil grips, finger splints, and angled desktops in order to improve their quality of life and performance [11]. Podiatrists help with appropriate supportive footwear and orthotics to stabilize the feet and associated muscle groups [12]. Psychotherapists can offer counseling to boost confidence and motivation, which ensures good treatment compliance. Owing to their uniplanar and supported weight-bearing nature, activities that avoid side-to-side movements, such as swimming, rowing, and cycling, are highly encouraged [13]. To our knowledge, this is the first case report to underline the association of camptodactyly with benign joint hypermobility syndrome. However, one case report from Southern Turkey describes three instances of camptodactyly-arthropathy-coxa vara-pericarditis syndrome (caused by mutation of the proteoglycan 4 gene) which mimicked juvenile idiopathic arthritis and were inappropriately treated [14]. Similarly, a case of Blau syndrome (caused by mutation of the nucleotide-binding oligomerization domain-containing 2 gene) was reported from Palestine with camptodactyly and bilateral intermediate uveitis, which responded well to subcutaneous adalimumab, a biologic disease-modifying agent [15]. The criteria for selecting candidates for therapy remain unclear. There is a reported case of radiographic remodeling of the proximal phalangeal head using stretching exercises in patients with camptodactyly [16]; it concluded that graded stretching exercises helped to restore mobility of the proximal interphalangeal joints and gradually restored the roundness and concentricity of the proximal phalangeal head in those with infantile-type camptodactyly. Some case reports have linked camptodactyly to an anomalous origin of the flexor digitorum superficialis tendon [17]. The general consensus regarding intervention is that surgical correction is reserved for those with a preoperative proximal interphalangeal joint contracture of more than 60 degrees [18]. Non-opioid analgesic care is provided to relieve symptoms. Prolotherapy is sometimes used for the management of joint hypermobility syndrome: irritants such as dextrose solution are injected into the joints to mount a short-lasting inflammatory reaction, which initiates a reparative cascade producing additional extracellular matrix and collagen, giving extra strength and load-bearing capacity [19]. It also slows the degenerative changes due to joint hypermobility, thereby postponing the development of precocious osteoarthritis [19]. A current shift from passive to active care for patients with hypermobility syndrome has proven highly beneficial to affected patients.

Conclusions

This is, to our knowledge, the first reported case of benign joint hypermobility syndrome with associated camptodactyly, a rare fixed flexion deformity of the fingers. Hypermobility syndrome, if left unaddressed, can result in numerous complications, including chronic pain, musculoskeletal injuries, and fatigue.
Therefore, it requires a holistic approach with the inclusion of rheumatologists, physiotherapists, occupational therapists, podiatrists, and psychotherapists. Camptodactyly, on the other hand, is approached with graded stretching exercises and, if markedly severe, necessitates surgical correction for a better functional outcome.
Theoretical and experimental study on the degradation mechanism of atrazine in Fenton oxidation treatment

Introduction

Atrazine (ATZ) is one of the most widely used triazine herbicides [3,4]. It has been detected ubiquitously in surface and ground water in many countries [6-8]. The chromosomes of Chinese hamster egg cells are damaged when exposed to 1.08-17.26 mg L⁻¹ of ATZ within two days. The widespread use of ATZ has caused concentrations exceeding regulatory limits in surface and ground water throughout Europe and the United States [9,10]. Thus, the commercial use of ATZ has been banned in several countries [11]. However, its presence in surface and ground water persists for several years, so the search for effective methods to remove ATZ from water is of importance.

In general, absorption and extraction are cost-effective and easy to perform. However, they only transfer the pollutant to another phase, without promoting its degradation to a less harmful species [21]. ATZ is toxic to microorganisms, and the triazine ring itself is quite resistant to microbial attack [22,23]. As a result, conventional biological remediation is neither efficient nor suitable for rapidly removing higher concentrations of ATZ from contaminated water. Among chemical processes, advanced oxidation processes (AOPs) are potentially useful for treating pesticide wastes because they generate powerful oxidizing agents. Several AOPs have been applied to ATZ degradation in the aqueous medium, such as sonolysis [24,25], electron-beam irradiation [26], TiO2-supported UV photolysis [27,28], O3/UV [29,30], UV/H2O2 [31,32], and O3/H2O2 and related processes [33-40]. Fenton oxidation technology, which generates the hydroxyl radical (·OH), is a promising method to treat wastewater containing ATZ. The Fenton system consists of a mixture of a ferrous salt (Fe2+) and H2O2, namely Fenton's reagent; ·OH is produced in the reaction H2O2 + Fe2+ → Fe3+ + OH⁻ + ·OH. De Laat et al. compared the efficiencies of ATZ degradation by several AOPs and found that the photo-Fenton process was more efficient than H2O2/UV [35]. Khan et al. compared the degradation of ATZ by photo-Fenton and photo-Fenton-like oxidation technologies, suggesting that both are capable of removing ATZ from water efficiently, although that study did not cover the degradation mechanism [40]. Balci et al. studied the degradation mechanism of ATZ in the aqueous medium by electro-Fenton oxidation [10]; considering all oxidation reaction intermediates and products, a general reaction mechanism for ATZ degradation by ·OH was proposed, but the mechanism was not detailed enough. Mackuľak et al. identified the degradation products of atrazine by HPLC after application of the Fenton reaction and a modified Fenton reaction, including small organic molecules such as oxalic acid, urea, formic acid, acetic acid, and acetone; however, their attention was focused on the small fragments identified by HPLC, and the intermediates were not found [41]. Theoretical calculation can provide information on the reaction intermediates and pathways, and many theoretical studies on degradation reactions initiated by the ·OH radical have been reported [42,43].
In this work, density functional theory (DFT) calculations and the polarized continuum model (PCM) [44-46] were used to investigate the degradation of ATZ by the ·OH radical, and the roles of the other components of Fenton's reagent were also considered. This study helps to refine the experimentally proposed mechanism and compensates for the difficulty of measuring short-lived species experimentally, providing theoretical support for the removal of ATZ by Fenton oxidation treatment. To verify the theoretical results, liquid chromatography/mass spectrometry (LC/MS) analysis was used to identify the major intermediates and products.

Computational methods

Using the GAUSSIAN 09 programs [47], high-level ab initio molecular orbital calculations were carried out for the reaction of ATZ with the ·OH radical. The geometrical parameters of the stationary points were optimized at the M05-2X/6-31+G(d,p) level. The M05-2X functional is of high nonlocality with a doubled amount of nonlocal exchange (2X), making it an excellent method for predicting noncovalent interactions [48]. Vibrational frequencies were calculated at the same level in order to determine the nature of the stationary points. Each transition state was verified to connect the designated reactants and products by performing an intrinsic reaction coordinate (IRC) analysis [49]. The PCM was chosen to calculate the solution-phase energies.

Experiment methods

A stock solution of ATZ was prepared at 10 mg L⁻¹ in distilled-deionized water, and the pH values of the solutions were adjusted with HCl or NaOH. Two hundred milliliters of ATZ aqueous solution was used, with the pH adjusted to 2-3 beforehand. The desired dosage of FeSO4·7H2O/H2O2 was then added to initiate the reaction. The mixture was magnetically stirred at 200 rpm at 25 °C. At different time intervals, 100 μL of solution was withdrawn with an injector and then filtered through 0.22 μm membranes before LC/MS analysis.

Experimental analysis

The oxidative degradation products of ATZ were analyzed by LC/MS with an LCQ Fleet (Thermo Fisher Scientific, USA), using a Waters SunFire™ C18 column (4.6 mm × 250 mm, 5 μm). Electrospray ionization (ESI) was used with the spray voltage set at 5000 V; the sheath gas flow rate, auxiliary gas flow rate and capillary temperature were set at 30 arb, 10 arb, and 300 °C, respectively. The mass spectra were obtained in positive ion mode by scanning from m/z 50 to 350.

Initial reactions with the ·OH radical

·OH radical addition to a C atom of the triazine ring, Cl atom substitution, and H atom abstraction from ATZ are the three possible kinds of channels for the reaction of ATZ with the ·OH radical. The reaction pathways of ·OH radical addition, Cl atom substitution and H atom abstraction are depicted in Fig. 1 and 2, in which the potential barriers (Ea) and the reaction heats (Er) are also marked. The optimized structures of the transition states involved in the reactions of ATZ with the ·OH radical are shown in Fig. 3.
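As a concrete illustration of the protocol described under Computational methods above, the sketch below assembles a Gaussian 09 input for one stationary-point search. The title, file layout, placeholder geometry and the neutral-doublet charge/multiplicity shown are illustrative assumptions, not taken from the paper:

```python
# A hedged sketch of building a Gaussian 09 input matching the stated protocol:
# M05-2X/6-31+G(d,p) optimisation + frequencies with aqueous PCM.
ROUTE = "#P M052X/6-31+G(d,p) Opt Freq SCRF=(PCM,Solvent=Water)"
# For a transition-state search one would instead use Opt=(TS,CalcFC,NoEigenTest),
# then verify connectivity with a separate IRC job (e.g. route keyword IRC=CalcFC).

def make_input(title: str, charge: int, multiplicity: int, xyz: str) -> str:
    """Assemble a minimal Gaussian 09 input deck as a string."""
    return "\n".join([ROUTE, "", title, "", f"{charge} {multiplicity}", xyz, ""])

# ATZ + OH species are neutral open-shell doublets: charge 0, multiplicity 2.
placeholder_xyz = "O    0.000000    0.000000    0.000000\n..."  # not a real geometry
print(make_input("ATZ + OH adduct, optimisation + frequencies", 0, 2, placeholder_xyz))
```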
A. ·OH radical addition pathways. With potential barriers of 16.32 and 21.46 kcal mol⁻¹, ·OH radicals add to the C1 and C3 atoms of the triazine ring, respectively. The lengths of the newly formed C1-O and C3-O bonds in the two transition states are 1.768 and 1.774 Å, which are 0.380 and 0.382 Å longer than those in the corresponding ATZ-OH adducts. These processes are slightly endothermic, absorbing 2.74 and 6.72 kcal mol⁻¹ of energy, respectively. Thus, the ATZ-OH adducts are unstable and can further react with the dissolved oxygen in water.

B. Cl atom substitution pathway. Since ·OH is a strongly nucleophilic radical, the Cl atom attached to the C5 atom can be substituted by the ·OH radical, producing hydroxyatrazine (HA, denoted IM3) and a Cl atom. This process crosses a potential barrier of 12.25 kcal mol⁻¹, which is lower than the barriers for addition to the C1 and C3 atoms of the C=N bonds. The reaction is strongly exothermic, releasing 28.10 kcal mol⁻¹ of energy, implying that the Cl atom substitution pathway is energetically favorable. HA is stable and has been detected experimentally [10].

C. H atom abstraction pathways. As shown in Fig. 2, six H-abstraction sites exist in the ATZ structure: two in the ethyl group, two in the iso-propyl group, one in the -NH- of the ethylamino group and one in the -NH- of the iso-propylamino group.

In the ethyl group, i.e., -CH2CH3, the ·OH radical can abstract an H atom from either the -CH2- group or the -CH3 group. A transition state (TS4) was found for abstraction of an H atom from the -CH3 group; this process has a potential barrier of 7.38 kcal mol⁻¹ and is exothermic, releasing 14.29 kcal mol⁻¹ of energy. In H atom abstraction from the -CH2- group, the ·OH radical abstracts an H atom to produce IM5 via a small potential barrier of 2.02 kcal mol⁻¹. This reaction is strongly exothermic, releasing 25.60 kcal mol⁻¹ of energy, which shows that H abstraction from the -CH2- group is easier than from the -CH3 group.

H atom abstraction from the iso-propyl group, i.e., the -CH(CH3)2 group, proceeds via either the -CH- group or the -CH3 groups. These reactions must overcome barriers of 1.72 and 6.57 kcal mol⁻¹ and are strongly exothermic, releasing 22.26 and 14.17 kcal mol⁻¹ of energy, respectively. Therefore, abstraction from the -CH- group takes place more easily than H abstraction from the -CH3 groups.

As for H atom abstraction from the -NH- of the ethylamino and iso-propylamino groups, i.e., -NHCH2CH3 and -NHCH(CH3)2, the two reactions must cross barriers of 10.22 and 10.99 kcal mol⁻¹ and are exothermic, releasing 7.03 and 6.47 kcal mol⁻¹ of energy, respectively. Comparison of these initial reactions with the ·OH radical shows that H atom abstraction from the -CH- of the -CH(CH3)2 group and the -CH2- of the -CH2CH3 group occurs most easily and is expected to play an important role in further reactions. Therefore, the ethyl group is more reactive than the iso-propyl group during ·OH radical attack, which is consistent with the research of Acero et al. [33].

Water catalysis in subsequent reactions

From the point of view of thermodynamics, the H atom abstraction pathways take place more easily than the ·OH radical addition pathways and the Cl atom substitution pathway, and H atom abstraction from the -CH- of the -CH(CH3)2 group and the -CH2- group of -CH2CH3 is expected to occur most readily. Thus, in this section, intermediates IM5 and IM6 are selected as the reactants in the following degradation process.
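To give a rough kinetic feel for the barrier differences quoted above, the sketch below converts barrier heights into relative rate constants using simple transition-state theory. It ignores tunnelling and differences in prefactors, so the numbers are order-of-magnitude estimates only, not values from the paper:

```python
import math

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15     # temperature, K

def relative_rate(barrier_kcal: float, reference_kcal: float) -> float:
    """k(barrier)/k(reference), assuming identical prefactors."""
    return math.exp(-(barrier_kcal - reference_kcal) / (R * T))

# H abstraction from the -CH- of the iso-propyl group (1.72 kcal/mol)
# versus OH addition to C1 (16.32 kcal/mol):
print(f"{relative_rate(1.72, 16.32):.1e}")  # ~5e+10: abstraction vastly faster
```

Even allowing for large errors in the prefactors, a roughly ten-orders-of-magnitude gap makes clear why the low-barrier abstraction channels dominate the initial attack.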
The carbon-centered radicals IM5 and IM6 can combine with ·OH radicals through barrierless reactions, generating IM10 and IM11. These processes are strongly exothermic. There are two routes in the subsequent decomposition: dealkylation, and alkyl oxidation with formation of formamide or acetamide. The reaction processes are shown in Fig. 4 and S1.†

In the dealkylation reaction of IM10, the C10-N8 bond opens up, accompanied by H atom migration from the O atom to the N8 atom. The C10-N8 bond in the transition state TS10-A1 is elongated to 1.747 Å, as shown in Fig. S2.† Deethylatrazine (DEA, denoted IM12-A) and acetaldehyde (CH3CHO) are produced via an apparent barrier of 54.75 kcal mol⁻¹, and this reaction is predicted to be endothermic, absorbing 13.91 kcal mol⁻¹ of energy. Given that a barrier of 54.75 kcal mol⁻¹ is too high for this reaction to play an important role, we took into account the possible role of an H2O molecule. TS10-A2 shows that H2O acts as a catalyst, with one H atom moving to the N8 atom while simultaneously an H atom is extracted from the OH group. The C10-N8 bond is also broken in this process, via a barrier of 25.22 kcal mol⁻¹, in a concerted reaction. The water serves as a catalyst that reduces the reaction barrier dramatically. Although this activation barrier is still high, it is smaller than the energy released from the combination of IM5 and the ·OH radical.

Besides dealkylation, IM10 can also undergo an alkyl oxidation reaction with formation of formamide. With a high barrier of 83.83 kcal mol⁻¹, the C10-C13 bond is broken along with H migration from O to C13 via the transition state TS10-B1 (Fig. 4). 2-Chloro-4-formamido-6-isopropylamino-s-triazine (CFIT, denoted IM12-B) and methane (CH4) are produced in this reaction. When H2O is involved in the reaction, it acts as a catalyst and reduces the barrier to 59.99 kcal mol⁻¹, as shown in Fig. 4 and Table 1.

The decomposition of IM11 follows similar pathways; the schematic diagram of the reaction pathways and the optimized geometries of the transition states are drawn in Fig. S1 and S2,† respectively. It is worth noting that deisopropylatrazine (DIA, denoted IM13-A) and 6-acetamido-2-chloro-4-ethylamino-s-triazine (CDET, denoted IM13-B) are formed.

The toxicity of ATZ and its degradation products has been evaluated in several studies. Ralston-Hooper et al. evaluated the acute and chronic toxicity in the amphipods Hyalella azteca and Diporeia spp., and in the unicellular alga Pseudokirchneriella subcapitata, and concluded that acute and chronic toxicities were ranked ATZ > DEA > DIA [50]. Tchounwou et al. compared the toxicities of ATZ, DEA, DIA and DEDIA by the Microtox Assay. They found that DEA and DIA are the least toxic, with EC50 values of 81.86 and 82.68 mg L⁻¹, followed by ATZ (EC50 = 39.87 mg L⁻¹), which is consistent with the former study [51]. The EC50 of DEDIA is 12.74 mg L⁻¹, suggesting that this final product is more toxic than ATZ.

Conclusions

A new degradation mechanism of ATZ in aqueous solution has been investigated. From the point of view of thermodynamics, the H atom abstraction pathways take place more easily than the ·OH radical addition and Cl atom substitution pathways. Moreover, H atom abstraction from the -CH- of the -CH(CH3)2 group and the -CH2- group of -CH2CH3 constitutes the main ·OH-initiated reactions of ATZ.
The subsequent decomposition of IM10 and IM11 involves two routes: dealkylation, and alkyl oxidation with formation of formamide or acetamide. It should be pointed out that H2O can act as a catalyst that dramatically reduces the reaction barriers in these processes, which helps to explain the high efficiency of Fenton reagents. This mechanism may also provide a new perspective on the ·OH-initiated chemical transformation of volatile organic compounds in the atmosphere.

The stable intermediates and products CH3COCH3, DEDIA, DIA, DEA, CAFT, CDAT, CDET, CDFT and CFIT have been observed experimentally. This study offers a cost-effective way to probe the degradation mechanism of ATZ in the aqueous medium by Fenton oxidation technology.

Fig. 1: ·OH radical addition pathways and the Cl atom substitution pathway in aqueous solution, with the potential barriers Ea (kcal mol⁻¹) and the reaction heats Er (kcal mol⁻¹).
Fig. 2: H atom abstraction pathways in aqueous solution, with the potential barriers Ea (kcal mol⁻¹) and the reaction heats Er (kcal mol⁻¹).
Fig. 3: Optimized geometries of the transition states involved in the initial reactions with the ·OH radical. Distances are in angstroms.
Fig. 4: The dealkylation and alkyl oxidation processes of IM10 in aqueous medium by Fenton oxidation technology.
Buyang Huanwu Decoction protects against STZ-induced diabetic nephropathy by inhibiting TGF-β/Smad3 signaling-mediated renal fibrosis and inflammation

Background: Buyang Huanwu Decoction (BHD) is a classical Chinese medicine formula empirically used for diabetic nephropathy (DN). However, its therapeutic efficacy and the underlying mechanisms remain obscure. In this study, we aimed to evaluate the renoprotective effect of BHD in a streptozotocin (STZ)-induced diabetic nephropathy mouse model, to explore the potential underlying mechanism in mouse mesangial cells (MCs) treated with high glucose in vitro, and to screen the active compounds in BHD.

Methods: Mice received 50 mg/kg STZ or citrate buffer intraperitoneally for 5 consecutive days. BHD was administered intragastrically for 12 weeks, starting from week 4 after diabetes induction. The quality control and quantitative analysis of BHD were performed by high-performance liquid chromatography (HPLC). Renal function was evaluated by urinary albumin excretion (UAE) using ELISA. Mesangial matrix expansion and renal fibrosis were measured using periodic acid-Schiff (PAS) staining and Masson's trichrome staining. Mouse mesangial cells (MCs) were employed to study the molecular mechanisms.

Results: We found that the impaired renal function in diabetic nephropathy was significantly restored by BHD, as indicated by decreased UAE without any effect on the blood glucose level. Consistently, BHD markedly alleviated STZ-induced diabetic glomerulosclerosis and tubulointerstitial injury as shown by PAS staining, accompanied by a reduction in renal inflammation and fibrosis. Mechanistically, BHD inhibited the activation of TGF-β1/Smad3 and NF-κB signaling in diabetic nephropathy while suppressing Arkadia expression and restoring renal Smad7. We further found that calycosin-7-glucoside (CG) was one of the active compounds in BHD, significantly suppressing high glucose-induced inflammation and fibrosis by inhibiting the TGF-β1/Smad3 and NF-κB signaling pathways in mesangial cells.

Conclusion: BHD attenuates renal fibrosis and inflammation in STZ-induced diabetic kidneys by inhibiting TGF-β1/Smad3 and NF-κB signaling while suppressing Arkadia and restoring renal Smad7. CG could be one of the active compounds in BHD that suppress renal inflammation and fibrosis in diabetic nephropathy.

Introduction

Diabetic nephropathy (DN) is the main cause of end-stage renal disease (ESRD), with high morbidity and mortality in diabetes mellitus (DM) patients [1]. In the United States, about 30-40% of diabetes cases develop into diabetic nephropathy, and diabetes mellitus accounts for 30-50% of total ESRD incidents, representing a significant public health concern [2]. Pathologically, diabetic nephropathy is characterized by chronic renal inflammation and fibrosis, leading to deposition of extracellular matrix (ECM) in the renal interstitium and thickening of the basement membranes [3]. As no curative drug for diabetic nephropathy is available, most diabetic nephropathy patients eventually progress to ESRD. In the progression of diabetic nephropathy, renal inflammation and fibrosis consistently appear, eventually leading to renal injury [3]. Multiple signaling pathways participate in the pathological processes of renal inflammation and fibrosis during diabetic nephropathy.
Among them, transforming growth factor β1 (TGF-β1)-mediated Smad signaling is a representative pathway contributing to diabetic nephropathy. TGF-β1 is abundantly expressed by all kinds of kidney resident cells and infiltrating inflammatory cells and regulates many signaling pathways, including both Smad-dependent and Smad-independent signaling [4]. Notably, TGF-β1/Smad3 signaling is involved in diabetic nephropathy [5,6]. Smad3 knockout or Smad3 inhibitors attenuate renal fibrosis and inflammation in diabetic mice, suggesting that the TGF-β1/Smad3 signaling pathway contributes to renal fibrosis and inflammation in diabetic nephropathy [7-9]. In contrast, the inhibitory Smad, Smad7, represses the TGF-β/Smad3 and NF-κB signaling pathways by interacting with TGF-β receptors and functions as an antagonist of these molecules in diabetic nephropathy [10]. Overactivation of TGF-β1/Smad3 signaling is accompanied by degradation of Smad7, contributing to the activation of NF-κB signaling in diabetic nephropathy [11]. Thus, suppression of TGF-β1/Smad3 signaling represents a critical therapeutic approach to reduce renal inflammation and fibrosis in diabetic nephropathy [9,12,13]. Traditional Chinese medicine (TCM) has a long history of treating various diseases with low toxicity and few side effects [14,15]. According to TCM concepts, the syndrome patterns of diabetic nephropathy can be classified as qi deficiency, blood stasis, yin deficiency, turbid dampness, phlegm dampness, yang deficiency, blood deficiency, and qi stagnation [16]. Based on clinical observation, qi deficiency-induced blood stasis is one of the most common TCM syndrome patterns in diabetic nephropathy patients. Thus, herbal formulas that tonify qi to improve blood circulation have been commonly used for diabetic nephropathy treatment. Buyang Huanwu Decoction (BHD) is a classic TCM formula with the functions of replenishing qi and invigorating blood circulation; it is composed of Astragali Radix, Angelicae Sinensis Radix Tail, Paeoniae Radix Rubra, Chuanxiong Rhizoma, Persicae Semen, Carthami Flos and Pheretima in the ratio of 120:6:4.5:3:3:3:3 (dry weight) [17]. BHD is commonly used in TCM practice to treat diseases with a pathological status of qi deficiency and blood stasis, such as vascular diseases, nerve injury-associated diseases, and kidney diseases [18,19]. Our previous studies demonstrated that BHD has neuroprotective and neurogenesis-promoting effects in a transient ischemic stroke model via activating the PI3K/Akt/Bad and Jak2/Stat3/Cyclin D1 signaling pathways and modulating VEGF and Flk1 expression [17,20]. Recently, a meta-analysis of 12 randomized controlled trials (RCTs) involving 911 patients revealed that BHD could serve as an adjunct therapy to RAAS inhibitors for early diabetic nephropathy patients by reducing the 24-h urinary albumin excretion rate (UAER) [21]. BHD exerts anti-inflammatory and anti-fibrotic bioactivities via distinct signaling pathways in several experimental in vivo models, such as transient focal cerebral ischemia [22], myocardial ischemia [23], and experimental autoimmune encephalomyelitis (EAE) [24]. However, whether BHD has anti-inflammatory and anti-fibrotic effects and restores renal function in diabetic nephropathy remains unknown. In this study, we tested the hypothesis that BHD attenuates renal inflammation and fibrosis in diabetic nephropathy and that its underlying mechanisms relate to inhibition of the TGF-β1/Smad3 pathway.
Quality control analysis

We performed an HPLC fingerprint study on BHD. Chromatographic separations were performed on a Thermo UPLC system with an ACE Excel 2 C18 column (100 × 2.1 mm, 2 μm). The following chromatographic parameters were optimized in the study: mobile phase A (acetonitrile) and mobile phase B (0.1% formic acid-water); flow rate, 0.4 ml/min; column temperature, 25 °C; and detection wavelength, 227 nm. The total run time was 50 min. The percentage of mobile phase A increased from 1 to 10% during the first 10 min; the ratio of the two mobile phases was then held for the next 38 min before the percentage of phase A was raised to 15% in the last 2 min.

Quantitative quality control analysis

Five standard chemical ingredients were used for the quantitative analysis. After obtaining the standard curve of each chemical ingredient, mixed standard working solutions were analyzed by HPLC under the same conditions as in the quality control analysis of BHD. The measurement was conducted in parallel in triplicate. According to the standard curves, the content of each standard chemical ingredient was calculated.

Animals and the STZ-induced diabetic nephropathy mouse model

Male ICR mice (10-12 weeks old) were supplied by the Laboratory Animal Unit at the University of Hong Kong. All animal experimental protocols were approved by the Committee on the Use of Live Animals in Teaching and Research (CULATR). The mice were maintained on 12-h light/dark cycles in a pathogen-free environment at a constant temperature of 22 °C. The type I diabetes model was induced in mice according to the low-dose STZ induction protocol recommended by the Animal Models of Diabetic Complications Consortium (https://www.diacomp.org/). Streptozotocin (STZ, S0130, Sigma-Aldrich Corp, St. Louis, MO, USA) was freshly prepared in 0.1 M sodium citrate buffer (pH 4.5). Mice were fasted for 4 h before STZ administration and then received a daily intraperitoneal injection of 50 mg/kg STZ or sodium citrate buffer for 5 consecutive days. Fasting blood glucose and urine were collected every 2 or 4 weeks. The mice were sacrificed at week 16 after STZ injection. Kidney tissues were collected and stored at −80 °C or paraffin-embedded for subsequent experiments.

Microalbumin and renal function

Renal function was evaluated by urinary albumin excretion (UAE), the ratio of total urinary albumin to creatinine. For urine analysis, 24-h urine samples were collected from metabolic cages every 2-4 weeks. Microalbuminuria was quantified using a competitive ELISA method (Exocell, Philadelphia, PA, USA), and creatinine was detected with the Creatinine Companion kit (Exocell, Philadelphia, PA, USA) according to the manufacturers' instructions.

Cell culture and drug treatments

Mouse mesangial cells (MCs) were used in our study. Cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham's F12 medium (Invitrogen Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS; Invitrogen Life Technologies) and 1% penicillin/streptomycin (Gibco) at 37 °C in a humidified atmosphere with 5% CO2. The MCs were cultured in FBS-free medium for 24 h and then stimulated with BHD or compounds under normal D-glucose (5.5 mM) or high D-glucose (35 mM) conditions for up to 24 h. D-Mannitol (29.5 mM) was used as an osmotic control, matching the total osmolarity of the high-glucose condition (5.5 + 29.5 = 35 mM). For the positive control groups, MCs were pre-treated with 2 μM SIS3 for 1 h before stimulation with high glucose. Cells were harvested for western blot and real-time PCR analysis. All experiments were conducted three or four times.
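Returning to the chromatographic conditions described under Quality control analysis above, the gradient program can be summarised as a simple time-table. The sketch below encodes exactly the steps given in the text; linear interpolation between the listed points is an assumption of convention, not a detail stated in the paper:

```python
# HPLC gradient for BHD fingerprinting (phase A: acetonitrile,
# phase B: 0.1% formic acid-water), as described in the text.
# Entries are (time_min, percent_A); linear ramps between points are assumed.
GRADIENT = [
    (0.0, 1.0),    # start at 1% A
    (10.0, 10.0),  # ramp to 10% A over the first 10 min
    (48.0, 10.0),  # hold 10% A for the next 38 min
    (50.0, 15.0),  # ramp to 15% A in the last 2 min
]

def percent_a(t: float) -> float:
    """Interpolate %A (acetonitrile) at time t (min) from the gradient table."""
    for (t0, a0), (t1, a1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-50 min programme")

print(percent_a(25.0))  # 10.0 (% acetonitrile during the hold)
```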
BHD treatments in animals

Mice orally received BHD at 0.5 g/kg, 1 g/kg or 2 g/kg daily for 12 weeks, starting from week 4 after STZ injection. Irbesartan (IRB, 50 mg/kg per day) was used as a positive control.

Western blot analysis

Protein was extracted from renal tissues or cells with RIPA buffer, and western blot analysis was carried out as described previously [3]. Briefly, membranes were incubated with primary antibodies against fibronectin, collagen I, TNF-α, IL-1β, NF-κB p65, phospho-NF-κB p65 (p-p65), Smad3, phospho-Smad3 (Ser423/425), Smad7, Arkadia and β-actin at 4 °C overnight, and then incubated with HRP-conjugated secondary antibodies. Protein signals were detected with a chemiluminescent ECL™ Detection Kit (GE Healthcare), scanned using a Gel Doc system (Bio-Rad) and analyzed with Image Lab software (Bio-Rad). Protein levels were quantified using ImageJ software (NIH, Bethesda, MA, USA).

Statistical analysis

Data are shown as the mean ± SEM and were analyzed using one-way analysis of variance (ANOVA) followed by Tukey's post-hoc tests, using GraphPad Prism version 7.0 software (GraphPad Software Inc., CA, USA). A value of p < 0.05 was considered statistically significant.

HPLC analysis for quality control study of BHD

We first performed the HPLC study for quality control of BHD granules. We optimized the chromatographic conditions and obtained a well-separated chromatogram as the fingerprint of BHD (Fig. 1). The structures and retention times of the compounds are summarized in Additional file 1: Table S2. Five standard compounds were confirmed in BHD: hydroxysafflor yellow A, amygdalin, paeoniflorin, ferulic acid and calycosin-7-glucoside. We used these compounds, which are identified from Astragali Radix, Angelicae Sinensis Radix Tail, Paeoniae Radix Rubra, Chuanxiong Rhizoma, Persicae Semen and Carthami Flos, for quality control. We quantitatively analyzed the concentrations of hydroxysafflor yellow A, amygdalin, paeoniflorin, ferulic acid and calycosin-7-glucoside in BHD. As shown in Additional file 1: Table S3, the standard curves of the five chemical markers had good linearity, with correlation coefficients (r) > 0.999. The relative standard deviation (RSD) of peak areas for each compound was lower than 1.33% in the stability assay. These results indicate that the HPLC method has good sensitivity, accuracy and stability. We subsequently used this method to determine the concentrations of the five standard compounds in BHD samples (Additional file 1: Table S3). The contents of hydroxysafflor yellow A, amygdalin, paeoniflorin, ferulic acid and calycosin-7-glucoside were 1.601, 0.717, 7.635, 0.263 and 0.238 mg/g in BHD, respectively.

BHD treatment reduces urinary protein excretion and improves renal pathology in STZ-induced diabetic nephropathy mice

We evaluated the renoprotective effects of BHD (0.5, 1 and 2 g/kg/day) on STZ-induced diabetic nephropathy mice, with irbesartan as the positive control drug. Periodic acid-Schiff (PAS) staining revealed stromal hyperplasia and a thickened glomerular basement membrane in kidney tissue sections of the STZ-induced diabetic mice. Notably, BHD treatment dose-dependently reduced the mesangial matrix expansion of the glomeruli and the thickening of the glomerular basement membrane (Fig. 2A). Meanwhile, BHD treatment dose-dependently reduced the urinary protein excretion level (Fig. 2B) without affecting the blood glucose level (Fig. 2C).
These results suggest that BHD has direct renoprotective effects in diabetic nephropathy mice.

BHD treatment protects against renal fibrosis and inflammation in STZ-induced diabetic nephropathy mice

Renal fibrosis and inflammation are two major pathological features of diabetic nephropathy [10]. We therefore examined the effects of BHD on renal fibrosis and inflammation in the STZ-induced diabetic mice. First, Masson's trichrome staining and semiquantification revealed renal fibrosis in the diabetic mice, and BHD treatment reduced the renal extracellular collagen formation in a dose-dependent manner (Fig. 3A). In addition, immunohistochemistry showed that F4/80-positive macrophages were significantly increased in the diabetic renal tissues, and BHD treatment dose-dependently decreased the number of positively stained cells, indicating anti-inflammatory effects (Fig. 3B). Furthermore, western blot analysis showed that BHD treatment dose-dependently downregulated the expression of fibronectin, collagen I, TNF-α and IL-1β, indicating its anti-inflammatory and anti-fibrotic effects (Fig. 4). Immunohistochemistry studies yielded results consistent with the western blot data, showing that the upregulation of fibronectin and IL-1β in diabetic kidney sections was dose-dependently inhibited by BHD treatment (Fig. 5A, B). BHD had effects similar to irbesartan, the positive control drug. Therefore, BHD could attenuate renal fibrosis and inflammation in STZ-induced diabetic nephropathy mice.

BHD inhibits TGF-β1/Smad3, NF-κB and Arkadia and restores Smad7 in renal tissues of STZ-induced diabetic nephropathy mice

TGF-β plays a critical role in the pathological progression of diabetic nephropathy [11]. TGF-β not only induces Smad3 phosphorylation but also enhances Smad3-dependent Smad7 degradation by stimulating the Arkadia-mediated ubiquitin-proteasome degradation pathway [26-28]. Our previous study indicated that depletion of Smad7 promotes the activation of NF-κB signaling in the diabetic kidney [10]. Thus, we tested the hypothesis that the anti-fibrotic and anti-inflammatory effects of BHD might be attributable to blocking Smad3 phosphorylation and Arkadia, subsequently increasing renal Smad7 and inhibiting NF-κB signaling in the STZ-induced diabetic nephropathy mice. The results are shown in Fig. 6. Western blot analysis and immunohistochemistry revealed increased expression of phosphorylated Smad3 (p-Smad3) and reduced Smad7 in the renal tissues of diabetic nephropathy mice, accompanied by upregulated expression of the E3 ligase Arkadia and of phosphorylated NF-κB/p65 (p-p65). After diabetic nephropathy mice were treated with BHD (2 g/kg), the expression of p-Smad3, Arkadia and p-p65 was significantly downregulated, whereas the expression of Smad7 was upregulated. These results suggest that BHD could suppress the TGF-β1/Smad3 and NF-κB signaling pathways while inhibiting Arkadia expression and restoring renal Smad7 in diabetic mice.

BHD suppresses TGF-β1/Smad3-mediated fibrosis and NF-κB-driven inflammation, while suppressing Arkadia and restoring Smad7, in cultured mouse mesangial cells (MCs) in vitro

We further tested BHD's renoprotective effects and underlying mechanisms using in vitro cultured MCs. These cells were stimulated under low glucose (5.5 mM) or high glucose (35 mM) conditions. Figure 7A revealed that high glucose exposure increased the expression of TNF-α, IL-1β, fibronectin and collagen I.
BHD treatment reduces urinary protein excretion and improves renal pathology in STZ-induced diabetic nephropathy mice
We evaluated the renoprotective effects of BHD (0.5, 1 and 2 g/kg/day) in STZ-induced diabetic nephropathy mice, with irbesartan taken as a positive control drug. Periodic acid-Schiff (PAS) staining revealed stromal hyperplasia and a thickened glomerular basement membrane in kidney tissue sections of the STZ-induced diabetic mice. Notably, BHD treatment dose-dependently reduced the mesangial matrix expansion of glomeruli and the thickening of the glomerular basement membrane (Fig. 2A). Meanwhile, BHD treatment dose-dependently reduced the urinary protein excretion level (Fig. 2B) without affecting the blood glucose level (Fig. 2C). These results suggest that BHD has direct renoprotective effects in the diabetic nephropathy mice.

BHD treatment protects against renal fibrosis and inflammation in STZ-induced diabetic nephropathy mice
Renal fibrosis and inflammation are two major pathological features of diabetic nephropathy [10]. We then examined the renoprotective effects of BHD on reducing renal fibrosis and inflammation in the STZ-induced diabetic mice. First, Masson's trichrome staining and semi-quantification revealed renal fibrosis in the diabetic mice. BHD treatment reduced renal extracellular collagen formation in a dose-dependent manner (Fig. 3A). In addition, immunohistochemistry showed that F4/80-positive macrophages were significantly increased in the diabetic renal tissues. BHD treatment dose-dependently decreased the number of positively stained cells, indicating anti-inflammatory effects (Fig. 3B). Furthermore, western blot analysis showed that BHD treatment dose-dependently downregulated the expression of fibronectin, collagen I, TNF-α and IL-1β, indicating its anti-inflammatory and anti-fibrotic effects (Fig. 4). Immunohistochemistry yielded results consistent with the western blot data, showing that the up-regulation of fibronectin and IL-1β in diabetic kidney sections was dose-dependently inhibited by BHD treatment (Fig. 5A, B). BHD had effects similar to those of irbesartan, the positive control drug. Therefore, BHD could attenuate renal fibrosis and inflammation in the STZ-induced diabetic nephropathy mice.

BHD inhibits TGF-β1/Smad3, NF-κB and Arkadia and restores Smad7 in renal tissues of STZ-induced diabetic nephropathy mice
TGF-β plays a critical role in the pathological progression of diabetic nephropathy [11]. TGF-β not only induces Smad3 phosphorylation but also enhances the Smad3-dependent degradation of Smad7 by stimulating the Arkadia-mediated ubiquitin-proteasome degradation pathway [26-28]. Our previous study indicated that the depletion of Smad7 promotes the activation of NF-κB signaling in the diabetic kidney [10]. Thus, we tested the hypothesis that the anti-fibrotic and anti-inflammatory effects of BHD might be attributed to blocking Smad3 phosphorylation and Arkadia, subsequently increasing renal Smad7 and inhibiting NF-κB signaling in the STZ-induced diabetic nephropathy mice. The results are shown in Fig. 6. Western blot analysis and immunohistochemistry revealed increased expression of phosphorylated Smad3 (p-Smad3) and reduced Smad7 in the renal tissues of diabetic nephropathy mice, which was accompanied by upregulated expression of the E3 ligase Arkadia and phosphorylated NF-κB/p65 (p-p65) in the renal tissues. After the diabetic nephropathy mice were treated with BHD (2 g/kg), the expression of p-Smad3, Arkadia and p-p65 was significantly down-regulated, whereas the expression of Smad7 was upregulated. These results suggest that BHD could suppress the TGF-β1/Smad3 and NF-κB signaling pathways while inhibiting Arkadia expression and restoring renal Smad7 in diabetic mice.

BHD suppresses TGF-β1/Smad3-mediated fibrosis and NF-κB-driven inflammation, while suppressing Arkadia and restoring Smad7, in cultured mouse mesangial cells (MCs) in vitro
We further tested BHD's renoprotective effects and underlying mechanisms using in vitro cultured MCs. These cells were stimulated under low glucose (5.5 mM) and high glucose (35 mM) conditions. Figure 7A revealed that high glucose exposure increased the expression of TNF-α, IL-1β, fibronectin and collagen I. BHD treatment dose-dependently down-regulated the expression of fibronectin, collagen I, TNF-α and IL-1β, indicating anti-fibrotic and anti-inflammatory effects, respectively. We then used the dosage of 8 mg/ml of BHD for the following mechanistic studies. Figure 7B showed that high glucose exposure remarkably increased the expression of p-Smad3, p-p65 and Arkadia but decreased the expression of Smad7. Notably, BHD treatment significantly inhibited the expression of p-Smad3, p-p65 and Arkadia but increased the level of Smad7, effects similar to those of SIS3, a Smad3 inhibitor. These results suggest that BHD could suppress Arkadia expression and restore Smad7, contributing to the inhibition of TGF-β1/Smad3-mediated fibrosis and NF-κB-driven inflammation in the MCs.

Calycosin-7-glucoside (CG), an active compound from BHD, attenuates high glucose-induced fibrosis and inflammation via blocking TGF-β1/Smad3 and NF-κB signaling pathways in MCs
We then screened active compounds from BHD with anti-fibrotic and anti-inflammatory properties using the in vitro cultured MCs under high glucose conditions. As shown in Additional file 1: Table S1, we selectively identified five compounds from the BHD sample. The cells were cultured in FBS-free medium for 24 h and then treated with these compounds at concentrations of 10, 20 and 40 μM under low glucose (5.5 mM) or high glucose (35 mM) conditions for up to 24 h. We detected the expression of fibronectin and IL-1β. Among the five compounds, CG was found to be the most effective at inhibiting the expression of fibronectin and IL-1β. RT-PCR results revealed that CG dose-dependently inhibited the expression of fibronectin mRNA and IL-1β mRNA (Fig. 8A). Thus, CG was used as a representative active compound to further investigate the anti-fibrotic and anti-inflammatory basis of BHD in the STZ-induced diabetic nephropathy mice. We used western blot analysis to further investigate the effects of CG on the expression of p-Smad3, fibronectin, collagen I, TNF-α and IL-1β in high glucose-treated MCs. As shown in Fig. 8B, CG treatment downregulated the expression of p-Smad3 and inhibited the expression of fibronectin, collagen I, TNF-α and IL-1β, consistent with the SIS3 treatment. Furthermore, we found that CG treatment also significantly reduced phosphorylated p65, while suppressing Arkadia expression and restoring Smad7, in high glucose-treated MCs (Fig. 9). Taken together, CG could protect against high glucose-induced cell fibrosis and inflammation, probably through regulating the TGF-β1/Smad3 and NF-κB signaling pathways.

Discussion
TGF-β1/Smad3 signaling-mediated renal fibrosis and NF-κB-driven renal inflammation are important pathological processes that impair renal function in diabetic nephropathy [10,13]. In the present study, we demonstrated that BHD has therapeutic effects against diabetic nephropathy by suppressing renal inflammation and fibrosis in vivo and in vitro. Mechanistically, BHD blocks TGF-β1/Smad3 signaling and inhibits renal inflammation by attenuating NF-κB signaling, while suppressing Arkadia expression and restoring renal Smad7. Furthermore, CG was found to be a representative active compound contributing to the anti-inflammatory and anti-fibrotic effects of BHD through inhibiting the TGF-β1/Smad3 and NF-κB signaling pathways.
To the best of our knowledge, this is the first report that BHD and its active compound CG could protect against diabetic nephropathy via inhibiting TGF-β1/Smad3/NF-κB signaling.

BHD is a TCM formula composed of Astragali Radix, Angelicae Sinensis Radix Tail, Paeoniae Radix Rubra, Chuanxiong Rhizoma, Persicae Semen and Carthami Flos, from which the five standard marker compounds are identified, respectively. Thus, we quantitatively detected these compounds in the preparations of BHD. We then conducted morphological studies and found that BHD treatment reduced mesangial matrix expansion of glomeruli and protected the glomerular basement membrane. Biochemical studies showed that BHD decreased the level of urinary protein excretion without changing blood glucose levels. Thus, BHD has direct renoprotective effects in the diabetic nephropathy mice. We then investigated the underlying mechanisms by which BHD improves renal function and attenuates morphological changes.

Diabetic nephropathy is defined as the appearance of chronic kidney disease in diabetes mellitus, accompanied by a continuous elevation of urinary albumin excretion or a persistent reduction in the estimated glomerular filtration rate [29]. The pathophysiological features of STZ-induced diabetic nephropathy include thickened basement membranes, mesangial expansion, hypertrophy, glomerular epithelial cell (podocyte) loss in glomeruli and tubular interstitial fibrosis [30].

(Figure 9 caption: CG treatment reduces high glucose-induced phosphorylated p65, while suppressing Arkadia and restoring Smad7 in vitro. A Western blot and quantitative data for p-p65, Arkadia and Smad7. HG, high glucose. **P < 0.01, ***P < 0.001 versus normal control. #P < 0.05, ##P < 0.01 versus high glucose.)

Mechanistically, activation of TGF-β1/Smad3 signaling is a crucial player in the renal fibrosis and inflammation of diabetic nephropathy [9]. TGF-β1 was found to be significantly increased in the fibrotic kidneys of diabetic nephropathy [31]. TGF-β promotes the phosphorylation of Smad3, which acts on the promoter regions of fibrotic genes, e.g., collagens and CTGF, and stimulates the expression of these genes. Interestingly, our previous study indicated that the activation of TGF-β1/Smad3 could be attributed to the loss of Smad7 signaling in diabetic nephropathy [10]. Smad7 is an inhibitory Smad that binds the TGF-β type I receptor and blocks Smad3 phosphorylation. Smad7 also suppresses the phosphorylation of NF-κB/p65 by interacting with IκBα. The loss of Smad7 subsequently further enhances the activation of NF-κB/p65 [10,26]. Thus, therapeutic strategies can be based on targeting the TGF-β1/Smad3 and NF-κB/p65 pathways to attenuate the progression of diabetic nephropathy. We therefore investigated the effects of BHD on the TGF-β1/Smad3 and NF-κB/p65 signaling pathways in the STZ-induced diabetic nephropathy mice. In previous studies, BHD was reported to inhibit TGF-β/Smads signaling-mediated cardiac fibrosis in pressure overload-induced cardiac remodeling [32]. BHD also suppresses inflammation by blocking the NF-κB signaling pathway in atherosclerotic rats [33]. Astragalus, the monarch herb of BHD, was found to protect against diabetic nephropathy via suppressing TGF-β/Smad3 signaling in diabetic mice [34]. Thus, we tested the effects of BHD on the TGF-β1/Smad3 signaling pathway in the mouse model of diabetic nephropathy. We found that BHD reduced the expression of pro-inflammatory cytokines (TNF-α and IL-1β) and fibrosis-related proteins (fibronectin and collagen I), and inhibited the TGF-β1/Smad3 and NF-κB signaling pathways.
Notably, Smad7 is a key protein that suppresses the activation of the TGF-β/Smad3 and NF-κB signaling pathways in diabetic nephropathy [10]. Smad7 can be degraded through E3 ubiquitin-protein ligase Arkadia-mediated ubiquitination in diabetic kidneys [35]. We therefore investigated the effects of BHD on the expression of Smad7. The results revealed that BHD treatment decreased Arkadia and restored renal Smad7 in diabetic mice. Consistently, our in vitro study yielded similar results. Therefore, we speculate that BHD may protect against diabetic nephropathy by suppressing the Arkadia-dependent ubiquitin degradation of renal Smad7 and the TGF-β/Smad3 signaling pathway. The exact underlying mechanisms remain to be further studied.

Previous studies suggest that Astragali Radix, Angelicae Sinensis Radix Tail, Paeoniae Radix Rubra, Chuanxiong Rhizoma, Persicae Semen and Carthami Flos could have renoprotective effects on diabetic nephropathy [36-38]. Astragalus root is the monarch herb in BHD, accounting for 57.2% of its total weight. CG is a representative active ingredient of Astragalus root recorded in the Chinese Pharmacopoeia (2020 edition). It was reported that CG could attenuate inflammatory injury in vascular endothelial cells [39] and alleviate cerebral ischemia/reperfusion injury in rats [40]. Herein, CG treatment decreased high glucose-induced cell fibrosis and inflammation by reducing pro-inflammatory cytokines (TNF-α and IL-1β) and fibrosis-related proteins (fibronectin and collagen I), which was associated with the downregulation of the TGF-β1/Smad3 and NF-κB/p65 pathways in MCs. These results suggest that CG could be a representative active compound contributing to the bioactivities of BHD in inhibiting the TGF-β1/Smad3 and NF-κB/p65 pathways and attenuating fibrosis and inflammation. However, whether CG directly interacts with molecules within TGF-β1/Smad3 signaling or inhibits TGF-β1/Smad3 signaling indirectly remains unclear. Additionally, the concentration of CG in the granules is 0.238 mg/g. Orally administered BHD may thus provide a sufficient concentration of CG to protect against diabetic nephropathy, which needs to be further studied. To our knowledge, this is the first verification of the anti-inflammatory and anti-fibrotic bioactivities of CG, which might contribute to the renal protective effect of BHD on diabetic nephropathy. Nevertheless, given the multiple ingredients of BHD, we cannot ignore the effects of other active compounds that may individually or synergistically regulate the TGF-β1/Smad3 signaling pathway. For example, amygdalin, a chemical ingredient of Persicae Semen, down-regulated the expression of FN and IL-1β mRNA in the high glucose-treated MCs, indicating anti-fibrotic and anti-inflammatory effects. In addition, astragalus polysaccharides, ligustrazine and safflower yellow also have renoprotective effects against diabetic nephropathy [41-43]. Astragalus polysaccharides could protect STZ-induced diabetic rats through inhibiting the TGF-β/Smad signaling pathway [43]. Ligustrazine and safflower yellow improve renal function and reduce urinary protein excretion in diabetic nephropathy patients [41,42]. Thus, the anti-inflammatory and anti-fibrotic renoprotection could be attributed to the synergistic effects of the multiple compounds of BHD, which should be further studied.

Conclusion
BHD protects against STZ-induced diabetic nephropathy by reducing renal inflammation and fibrosis.
The underlying mechanisms could be related to inhibition of the TGF-β1/Smad3 and NF-κB signaling pathways, together with suppression of Arkadia and restoration of Smad7. In addition, CG may serve as a representative active compound contributing to these renoprotective effects of BHD.
Antipsychotics and Mortality in Adult and Geriatric Patients with Schizophrenia

Patients with schizophrenia have a high mortality risk, and the role of antipsychotic medications remains inconclusive. In an aging society, older patients with schizophrenia warrant increased attention. This study investigated the association of antipsychotic medication dosages with mortality in patients with schizophrenia by using data from Taiwan's National Health Insurance Research Database from 2010 to 2014. This study included 102,964 patients with schizophrenia and a subgroup of 6433 older patients in addition to an age- and sex-matched control group. The findings revealed that among patients with schizophrenia, the no antipsychotic exposure group had the highest mortality risk (3.61- and 3.37-fold higher risk for overall and cardiovascular mortality, respectively) in the age- and sex-adjusted model, followed by the high, low, and moderate exposure groups. A similar pattern was observed in the older patients with schizophrenia. High exposure to antipsychotics was associated with the highest risks of overall and cardiovascular mortality (3.01- and 2.95-fold higher risk, respectively). In conclusion, the use of antipsychotics can be beneficial for patients with schizophrenia, with recommended exposure levels being low to moderate. In older patients, high antipsychotic exposure was associated with the highest mortality risk, indicating that clinicians should be cautious when administering antipsychotic medications to such patients.

Introduction
Compared with the general population, individuals with schizophrenia have a substantially higher risk of mortality [1-3]. In Taiwan, individuals with schizophrenia have a life expectancy at birth that is approximately 15 years shorter than that of the general population [4]. This finding is consistent with the literature [5,6]. Individuals with severe mental illness have a high risk of death from unnatural causes; however, approximately three-fourths of all deaths among such individuals are classified as occurring due to natural causes [1], and research indicates that a sedentary lifestyle, obesity, limited access to health services, and the adverse effects of medications [7-10] may contribute to this population's high mortality rates.
Increased mortality in patients with schizophrenia may be attributable to the adverse effects of antipsychotic medications, which include weight gain, metabolic syndrome, diabetes, and ischemic heart disease [11,12]. A pharmaco-epidemiological study reported a dose-related risk of sudden cardiac death among patients using antipsychotic medications [13]. However, several studies have been conducted using large, prospectively collected datasets of actual filled prescriptions to investigate the risk of death associated with any, current, or cumulative antipsychotic exposure in patients with schizophrenia, and their results have indicated that the use of an antipsychotic is associated with a lower risk of mortality than nonuse [5,14-16]. Furthermore, studies have revealed a U-shaped relationship between antipsychotic exposure and overall mortality, indicating that low and moderate levels of antipsychotic exposure are associated with a substantially lower risk of mortality than no or high exposure [17,18]. However, evidence is lacking regarding the association between mortality and antipsychotic exposure relative to the mortality of a control group without psychiatric diagnoses, with consideration of socioeconomic factors and comorbid physical conditions.

With the advancement of health-care services, the average life expectancy has gradually increased. In many developed countries, individuals aged ≥55 years will soon account for a quarter or more of the population with schizophrenia [19]. Whiteford et al. reported that schizophrenia currently ranks third among psychiatric disorders in terms of causes of disability-adjusted life years for people aged 60 years or older [20]. In addition, older patients with schizophrenia have a high prevalence of comorbid medical conditions [21], which may influence their use of psychotropic medications and their risk of mortality. However, research on older individuals with schizophrenia is limited, accounting for only approximately 1% of the literature on schizophrenia [22], and the association between exposure to psychotropic medications and mortality in the geriatric population remains under-researched.

This study investigated the association between the degree of cumulative antipsychotic exposure, as indicated by the number of filled prescriptions, and mortality in a national cohort of patients with schizophrenia, with the risk of mortality in this population compared with that in the general population (control group). In addition, this study analyzed the association between mortality and cumulative antipsychotic doses in a subgroup of older individuals with schizophrenia.
Results
Table 1 lists the demographic and clinical characteristics of the participants. In total, 102,964 patients with schizophrenia were enrolled. The mean age of the schizophrenia cohort was 44.8 (SD = 13.2) years, and 47.4% of the patients were women. The subgroup of older patients with schizophrenia comprised 6433 individuals with a mean age of 73.6 (SD = 6.7) years, and 59.7% of these patients were women. Compared with the corresponding control sample, the patients with schizophrenia had higher rates of chronic obstructive pulmonary disease (COPD), cardiovascular diseases (CVDs), diabetes mellitus (DM), and renal disease (RD). However, in the older patient subsample, compared with the corresponding control sample, the older patients with schizophrenia had higher rates of only COPD and CVD. In terms of mortality, 7730 patients with schizophrenia (7.5%) and 2593 controls (2.5%) died during the 5-year follow-up period. The mortality rate in the older patients with schizophrenia (31.9%) was approximately twice that observed in the corresponding control sample during the 5-year follow-up period. For the patients with schizophrenia, 8733 (8.5%) had no antipsychotic exposure during the follow-up period, whereas 19,017 (18.5%) had high antipsychotic exposure (Table 2). Among the 6433 older patients with schizophrenia, 944 (14.7%) had no antipsychotic exposure during the follow-up period, whereas 3520 (54.7%) and 333 (5.2%) had low and high antipsychotic exposure, respectively.

Note to Table 2: Survival analysis was conducted using Cox regression with adjustment for sex and age (reference group = control group). Hazard ratios for overall and cardiovascular mortality were calculated for the no exposure (reference), low exposure (<0.5 DDD), moderate exposure (0.5-1.5 DDD), and high exposure (>1.5 DDD) groups. Abbreviations: CI, confidence interval; DDD, defined daily dose.
The results for the demographic-adjusted model (model 1) and the fully adjusted model (model 2) are presented in Tables 2 and 3, respectively. The patients with schizophrenia who had no exposure to antipsychotics had 3.61-fold (model 1) and 2.89-fold (model 2) higher risks of overall mortality and 3.37-fold (model 1) and 2.96-fold (model 2) higher risks of CVD-related mortality than did the control group. In the demographic-adjusted and fully adjusted models, U-shaped curves were noted for the associations of antipsychotic exposure with overall and CVD-related mortality risk (Tables 2 and 3; Figures 1 and 2). Among the patients with schizophrenia, the no exposure group had the highest overall mortality risk (model 1: HR = 3.61, 95% CI: 3.35-3.89; model 2: HR = 2.89, 95% CI: 2.65-3.14), followed by the high, low, and moderate exposure groups, despite some of the confidence intervals overlapping. Furthermore, the no exposure group had the highest CVD-related mortality risk (model 1: HR = 3.37, CI: 2.82-4.04; model 2: HR = 2.96, 95% CI: 2.40-3.64), followed by the high, low, and moderate exposure groups.

Note to Table 3: Survival analysis was conducted using Cox regression with adjustment for sex, age, possession of catastrophic illness certification, socioeconomic status (insurance premium level, household income, and urbanization level of residence), and comorbid physical illnesses (chronic obstructive pulmonary disease, cardiovascular disease, cancer, diabetes mellitus, and renal disease) (reference group = control group). Hazard ratios for overall and cardiovascular mortality were calculated for the no exposure, low exposure (<0.5 DDD), moderate exposure (0.5-1.5 DDD), and high exposure (>1.5 DDD) groups. Abbreviations: CI, confidence interval; DDD, defined daily dose.

For the older patients with schizophrenia, U-shaped associations of level of exposure to antipsychotics with overall mortality and CVD-related mortality were noted (Tables 2 and 3; Figures 1 and 2). Furthermore, the high exposure group had the highest overall mortality risk (model 1: HR = 3.01, 95% CI: 2.45-3.70; model 2: HR = 2.69, 95% CI: 2.17-3.34), followed by the no, low, and moderate exposure groups, despite some of the confidence intervals overlapping. A similar pattern was observed for CVD-related mortality in these patients. The high exposure group had the highest CVD-related mortality risk (model 1: HR = 2.95, CI: 1.87-4.67; model 2: HR = 2.78, 95% CI: 1.72-4.50), followed by the no, low, and moderate exposure groups.
Discussion
This study investigated the association between cumulative exposure to antipsychotics and excess mortality in patients with schizophrenia through a comparison with an age- and sex-matched control group. In addition, we analyzed such an association in a subgroup of older patients with schizophrenia. The results revealed U-shaped associations of exposure to antipsychotics with overall and CVD-related mortality in the patients with schizophrenia; the highest risk of mortality was observed in the patients with no antipsychotic exposure, whereas the lowest risk of mortality was noted in those with low to moderate antipsychotic exposure. In the older individuals with schizophrenia, U-shaped associations of antipsychotic exposure with overall and CVD-related mortality were also noted. However, in this subgroup, high antipsychotic exposure was associated with the highest risks of overall and CVD-related mortality. These findings highlight the importance of using an adequate dosage of antipsychotic medications for patients with schizophrenia. The finding of an association between a high antipsychotic dosage and increased risk of mortality in older patients with schizophrenia highlights the need for clinicians to remain vigilant when adjusting antipsychotic dosages for these patients.

The no exposure group had an age- and sex-adjusted (model 1) HR of 3.61 and a fully adjusted (model 2) HR of 2.89 for overall mortality and an age- and sex-adjusted (model 1) HR of 3.37 and a fully adjusted (model 2) HR of 2.96 for CVD-related mortality compared with the control group. After socioeconomic factors and physical comorbidities were adjusted for, the mortality risk estimate in model 2 was lower. Seeman reported that poor socioeconomic status is a barrier to reducing the mortality gap between individuals with schizophrenia and the general population [23]. In addition, studies have indicated that patients with schizophrenia have a higher prevalence of comorbid physical conditions, such as diabetes, COPD, and CVD, than do those without schizophrenia [16,24-26]. The higher mortality in patients with schizophrenia might be attributable to the presence of comorbidities, particularly CVD [26]; undetected or inadequately treated comorbid physical illnesses can lead to an increased mortality risk [8,16]. Our results revealed that the patients with schizophrenia with no antipsychotic exposure had an approximately three-fold higher risk of overall and CVD-related mortality than the control group did, even after socioeconomic variables and comorbid physical illnesses were controlled for (model 2). A systematic review including 135 studies published from 1957 to 2021 reported a 2.9-fold higher risk of all-cause mortality in patients with schizophrenia than in the general population and a 1.6-fold higher risk in patients with schizophrenia than in physical disease-matched controls from the general population [27]. It seems likely that schizophrenia per se may be independently associated with elevated overall and CVD-related mortality, as presented in the current findings for the patients with no antipsychotic exposure. Several symptoms of schizophrenia could lead to a higher mortality risk in individuals with the disorder. For example, lack of insight is a symptom of schizophrenia [28], as indicated by the finding of the WHO's 10-country study on schizophrenia that 98% of patients with schizophrenia exhibit a lack of insight, making it the most prevalent symptom [29]. Lack of insight is the primary cause of treatment
nonadherence [30]. Moreover, various interconnected factors in schizophrenia might synergistically contribute to physical morbidity and subsequent mortality in patients. Peritogiannis et al. collected and classified these factors, including patient-related factors, symptomatology, treatment-related factors, health service-related factors, other disease-related factors, and socioeconomic factors [31]. Patient-related factors such as an unhealthy lifestyle, including smoking, substance use, alcohol use, a sedentary lifestyle, and poor nutritional habits, contribute to higher mortality due to natural causes in patients with schizophrenia [31]. In particular, regarding smoking, a study reported that approximately 70-80% of patients with schizophrenia were smokers and smoked a higher number of cigarettes than did smokers without schizophrenia [32,33]. The life expectancy for smokers is at least 10 years shorter than that for nonsmokers [34,35], and smoking is causally associated with mortality in various CVDs [36,37]. The high mortality risk in patients with schizophrenia may also be linked to societal factors and the characteristics of the health-care system [31]. Evidence indicates that patients with schizophrenia and physical comorbidities may not receive appropriate medical care. A Danish study reported that patients with schizophrenia and heart failure received lower-quality care that deviated from guidelines compared with that received by patients with heart failure but without schizophrenia. Notably, inadequate psychosocial functioning in schizophrenia patients predicted suboptimal heart failure care, leading to a substantially higher 1-year mortality risk [38]. In addition to receiving inadequate medical care, patients with schizophrenia also face discrimination and stigma that may negatively affect their health. A previous cross-sectional survey conducted across 27 countries revealed that more than 17% of patients with schizophrenia encountered discrimination when receiving treatment for physical health problems. Perceived discrimination may deter patients with schizophrenia from seeking medical services, thereby contributing to less favorable outcomes in addressing their physical health issues [39]. Therefore, it is crucial for clinicians to adequately address the aforementioned habits and behaviors associated with patients with schizophrenia, as well as the relevant social-environmental factors.

In this study, U-shaped associations of antipsychotic exposure with overall and CVD-related mortality in the patients with schizophrenia were observed, with these findings indicating that the risk of mortality may be highest among patients with no antipsychotic exposure (Figures 1 and 2). However, the confidence intervals were overlapping across certain groups, which necessitates a more cautious interpretation. For instance, as presented in Table 2, we found that schizophrenia patients in the low dose and moderate dose groups had lower overall mortality compared to those in the no use and high dose groups, with non-overlapping confidence intervals, whereas the confidence intervals were overlapping between the no use and high dose groups. In a Finnish study involving more than 60,000 patients treated for schizophrenia in inpatient settings and with a follow-up of 20 years, compared with nonuse, any antipsychotic use was associated with lower all-cause mortality (HR = 0.48) [40]. Additionally, Crump et al.
revealed that nonuse of antipsychotics was associated with elevated mortality [16]. However, both of these studies compared use and nonuse of antipsychotic medications without considering the effects of different antipsychotic doses. Torniainen et al. investigated the associations between antipsychotic exposure and mortality while also examining the effects of varying antipsychotic doses [18]. Our results are comparable to their study, revealing a U-shaped association between overall mortality and different levels of cumulative antipsychotic exposure [18]. However, the aforementioned study did not concurrently account for the influence of socioeconomic factors on mortality or consider the potentially high prevalence of comorbid physical illnesses in patients with schizophrenia. Besides overall mortality, we found a similar U-shaped pattern for CVD-related mortality in the present study. A 24-year national register study conducted in Sweden revealed that CVD was the leading cause of mortality among patients with schizophrenia. In addition, it revealed that patients with schizophrenia generally experienced CVD-related mortality 10 years earlier than the general population did [41]. In contrast to the general population, individuals with schizophrenia exhibited elevated and earlier CVD-related mortality. This disparity can be attributed to multiple factors, encompassing patients' comorbidities, harmful health-related behaviors, social factors such as stigma, insufficient preventive interventions, limited health literacy, suboptimal adherence to essential management, barriers to health-care access, and a propensity for accepting suboptimal care [42,43]. Antipsychotic medications, as the primary pharmacotherapy for treating schizophrenia, have been extensively studied for their efficacy in addressing psychiatric symptoms and potential cardiovascular complications. Adequate exposure to antipsychotic treatment is crucial for patients with schizophrenia. Nonadherence to antipsychotic medication is linked to several psychiatric symptoms, poorer mental functioning, poorer life satisfaction, and increased substance use [44]. These problems may worsen the health condition of patients with schizophrenia. However, the literature has revealed that antipsychotics, regardless of whether they are first- or second-generation antipsychotics, are associated with metabolic and cardiovascular side effects [45-47]. Another study reported that antipsychotics were associated with a dose-related increase in the risk of sudden cardiac death [13]. These findings indicate that the use of antipsychotic medications warrants attention because they may increase the risks of CVD and associated mortality. However, based on previous research, especially clinical database studies, the risk of CVD-related mortality may not exhibit a positive correlation with the use of antipsychotic medications. For example, Torniainen et al.
reported a U-shaped association between cumulative antipsychotic exposure and CVD-related mortality [18], a finding that is comparable to our own. Another crucial issue is when, or if, stable patients with schizophrenia can reduce their antipsychotic medication dosage. A review published in 2022 collected evidence on the reduction of antipsychotic doses for stable individuals with schizophrenia. The review revealed no difference between the groups undergoing dose reduction and those continuing the same dose with regard to quality of life, functioning, and the incidence of participants experiencing at least one adverse effect. However, the dose reduction group exhibited an increased susceptibility to relapse, dropping out, and rehospitalization [48]. These findings highlight the importance of achieving a balance in prescribing antipsychotic medications, that is, prescribing adequate doses that effectively treat symptoms, enhance patients' quality of life, and enable management of side effects.

In this study, we observed that compared with the corresponding control group, the older patients with schizophrenia who had no antipsychotic exposure had 2.79-fold (model 1) and 2.51-fold (model 2) higher risks of overall mortality and 2.54-fold (model 1) and 2.39-fold (model 2) higher risks of CVD-related mortality. The literature indicates that patients with schizophrenia have a higher risk of mortality that continues into old age, although the mortality gap is smaller in older individuals than in younger ones [49,50]. A survivor effect might explain these findings; the patients who remained alive throughout the study period might have been healthier than those who died. Although the mortality gap between the older patients with schizophrenia and the control group was smaller than that observed between the younger patients with schizophrenia and the control group, the health status of older patients with schizophrenia warrants greater attention. Research has indicated that compared with healthy individuals, older patients with schizophrenia receive less adequate treatment for physical comorbidities [51,52]. Inadequate treatment may result in compromised health and diminished quality of life in older patients with schizophrenia, which can lead to an increased medical and societal burden. In the context of an aging population, the vulnerability of older patients with schizophrenia must be considered, and clinicians must ensure that such patients receive adequate treatment for physical comorbidities.
In the current study, the older patients with schizophrenia had the highest risk of overall and CVD-related mortality in the high antipsychotic exposure group (Figures 1 and 2), although the confidence intervals were overlapping across certain groups. Table 2 revealed that among the older patients with schizophrenia, those in the low and moderate exposure groups had lower overall mortality than did those in the no and high exposure groups, with nonoverlapping confidence intervals. However, for CVD mortality, the confidence intervals overlapped across the groups with different exposure levels, despite the illustrated U-shaped curves. The side effects of antipsychotics, which can affect all patient populations, may be particularly pronounced in older patients, because age-related changes can amplify these effects [53]. For example, age-related changes in hepatic and renal functions markedly affect the absorption, distribution, metabolism, and excretion of drugs. Liver mass, hepatic blood flow, serum albumin levels, and renal blood flow and function tend to decrease with age [54]. In addition, age-related alterations in body composition can affect the pharmacokinetics of antipsychotics in older patients. These changes may involve a decrease in lean muscle mass and total body water and an increase in total body fat [53]. Moreover, because of age-related increases in monoamine oxidase activity, the central nervous system in older patients exhibits heightened sensitivity to antipsychotic drugs. In addition, a reduction in cerebral blood flow and a selective decline in some nerve pathways have been reported in older patients. Furthermore, the age-related loss of cholinergic neurons and the exacerbation of cholinergic deficits by these drugs increase the sensitivity of older patients to medications exerting anticholinergic effects [54]. Older patients with schizophrenia may have comorbid chronic diseases, such as CVD and diabetes, and antipsychotic use may increase the difficulty of managing these diseases [53]. A study revealed that both first- and second-generation antipsychotics were associated with an increase in the risk of mortality among older patients [55], and dosage may be a key determinant of antipsychotic safety, which affects mortality risk [56]. In summary, low or moderate doses of antipsychotics may be sufficient for treating older patients with schizophrenia, and as patients advance in age and their physical health condition changes, clinicians should adjust the dosage of antipsychotic medications. For older patients with schizophrenia, high exposure to antipsychotic medications is not generally recommended. If high doses of antipsychotic medication are required, clinicians must closely monitor for associated side effects and potential risk factors.
The strengths of this study include nationwide coverage, encompassing patients diagnosed with schizophrenia in all clinical settings, which increases the generalizability of the results, and comparisons with a control sample without psychiatric diagnoses. However, this study has several limitations. First, due to its non-randomized study design, we needed to be cautious when interpreting the results because of potential selection bias. For instance, certain characteristics of the no exposure group, such as a lack of connection to health-care systems, may confound the presented associations between dose exposure and mortality. Second, the observational nature of the study may limit its ability to establish causal relationships. Third, the lack of accurate information regarding disease severity and patient lifestyles in the NHIRD limited our assessment of these factors, which can affect mortality. Fourth, because the data were mainly obtained from the NHIRD, the current results cannot be directly generalized to populations with different characteristics and under different health-care systems without proper adjustment. Finally, we did not directly measure the actual amount of medications received by the patients or their blood levels of antipsychotic medications.

Setting
This study was approved by the Research Ethics Review Committee of Far Eastern Memorial Hospital in Taiwan (109150-E). Taiwan has a population of approximately 23 million, and its National Health Insurance system is a compulsory, single-payer health-care system. In this system, the disbursement of funds is centralized, and all Taiwanese citizens and legally employed foreign workers in Taiwan have equal access to health-care services. The National Health Insurance Research Database (NHIRD) contains comprehensive records of the health service utilization of nearly the entire Taiwanese population; these records include information regarding demographics, procedures, and medication usage and corresponding medical service expenditures. From its inception until 2016, disease data in the NHIRD were coded using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) [57].

Study Population
In the present study, individuals with schizophrenia were identified in Taiwan's NHIRD, which is managed by the Health and Welfare Data Science Center of the Ministry of Health and Welfare in Taiwan. We identified and included individuals aged ≥15 years who were given a diagnosis of schizophrenia (ICD-9-CM code 295) in 2010. These patients were followed up for 5 consecutive years (2010-2014), which served as the observation window. The study cohort comprised both incident and existing cases of schizophrenia. In addition, this study identified a subgroup of older patients with schizophrenia who were aged ≥65 years in the index year. The mortality in the patients with schizophrenia was compared with that in a control sample. The control sample was randomly selected from the Registry for Beneficiaries of the NHIRD and was age- and sex-matched with the patients with schizophrenia. We selected individuals without diagnoses of psychiatric disorders (ICD-9-CM codes 290-319) from the registry and then matched them by sex. We subsequently randomly selected individuals from the control sample for age matching based on 10-year age intervals (15-20, 21-30, 31-40, 41-50, 51-60, ..., 91-100, and >100 years).
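The matching step just described can be sketched as stratified sampling within sex and 10-year age bands. The following Python sketch is illustrative only: the column names, sampling ratio, and data layout are assumptions, not details taken from the NHIRD.

```python
# Minimal sketch of sex- and age-band-matched control selection,
# assuming case and pool DataFrames with "age" and "sex" columns.
import pandas as pd

def match_controls(cases: pd.DataFrame, pool: pd.DataFrame,
                   ratio: int = 1, seed: int = 42) -> pd.DataFrame:
    """Sample `ratio` controls per case within each sex x age-band stratum."""
    # Age bands mirroring the paper's intervals (15-20, 21-30, ..., >100)
    bins = [15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200]
    for df in (cases, pool):
        df["age_band"] = pd.cut(df["age"], bins=bins,
                                right=True, include_lowest=True)
    matched = []
    for (sex, band), grp in cases.groupby(["sex", "age_band"], observed=True):
        candidates = pool[(pool["sex"] == sex) & (pool["age_band"] == band)]
        n = min(len(candidates), ratio * len(grp))
        matched.append(candidates.sample(n=n, random_state=seed))
    return pd.concat(matched, ignore_index=True)
```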
For the subgroup analysis of older patients, a control sample of individuals aged ≥65 years was selected from the overall control sample. The flowchart for patient selection and recruitment is illustrated in Figure 3. As for mortality, we identified the causes of death through linkage to Taiwan's national mortality registry. In addition to overall mortality, we analyzed the number of deaths due to cardiovascular diseases (CVDs, ICD-10 codes I00-I99).

Covariates
Data on age, sex, socioeconomic variables, possession of a catastrophic illness certification, and diagnoses of physical illnesses were extracted for both the patients with schizophrenia and the control sample. The following socioeconomic variables were analyzed: household income, urbanization level of residence, and insurance premium level (determined on the basis of the monthly income of the insured). The investigated covariates related to physical illnesses included diagnosed chronic obstructive pulmonary disease (COPD), CVD, cancer, diabetes mellitus (DM), and renal disease (RD). In addition, for the patients with schizophrenia, we extracted treatment-related data, including psychiatric and nonpsychiatric health-care costs, psychiatric ward admission records, and antipsychotic medication dosage information. The antipsychotic medications considered in our study were aripiprazole, amisulpride, clozapine, olanzapine, quetiapine, risperidone, ziprasidone, zotepine, paliperidone, chlorpromazine, haloperidol, fluphenazine, and thioridazine. The defined daily dose (DDD) is the recommended average daily maintenance dose of a drug used for its main indication in adults; the present study referenced the DDD guidelines of the World Health Organization (WHO) [58]. We calculated the mean DDD of antipsychotics by dividing the cumulative dose by the number of follow-up days. Subsequently, we categorized the patients into four groups on the basis of their exposure to antipsychotic medications: no exposure, low exposure (<0.5 DDD), moderate exposure (0.5-1.5 DDD), and high exposure (>1.5 DDD).
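The exposure metric defined above is a simple ratio with threshold-based grouping. Below is a minimal Python sketch of that calculation; the example input values are hypothetical.

```python
# Minimal sketch of the exposure metric: mean DDD = cumulative dispensed
# dose (in DDD units) / follow-up days, then the paper's four categories.
def exposure_group(cumulative_ddd: float, followup_days: int) -> str:
    """Classify antipsychotic exposure by mean DDD per follow-up day."""
    if cumulative_ddd == 0:
        return "no exposure"
    mean_ddd = cumulative_ddd / followup_days
    if mean_ddd < 0.5:
        return "low exposure (<0.5 DDD)"
    if mean_ddd <= 1.5:
        return "moderate exposure (0.5-1.5 DDD)"
    return "high exposure (>1.5 DDD)"

# Example: 900 DDDs dispensed over a 5-year (1826-day) follow-up
print(exposure_group(900, 1826))  # -> low exposure (<0.5 DDD)
```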
Statistical Analyses
We first compared the demographic and socioeconomic characteristics between the patients with schizophrenia and the control group and between the older patients with schizophrenia and the corresponding control group. Categorical variables were analyzed using the chi-squared test, whereas continuous variables were analyzed using F tests. Cox regression analysis was performed to compare the overall mortality and CVD-related mortality among the different antipsychotic exposure groups relative to those of the control group. Two regression models were used. The first model included only age and sex as covariates (demographic-adjusted model), and the second included sex; age; possession of a catastrophic illness certification; socioeconomic variables, including insurance premium level and household income; and presence of comorbid physical illnesses, such as COPD, CVD, cancer, DM, and RD, as covariates (fully adjusted model). Hazard ratios (HRs) for overall mortality were calculated for the exposure groups (i.e., the no exposure, low exposure, moderate exposure, and high exposure groups). HRs for CVD-related mortality were also calculated for the exposure groups. Statistical significance was set at p < 0.05. All statistical analyses were performed using SPSS, version 21.0 (IBM, Armonk, NY, USA).
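The two-model Cox regression design described above can be reproduced outside of SPSS, for example with the `lifelines` package in Python. The sketch below is illustrative: the file name, column names, and exposure indicator variables are assumptions chosen to mirror the paper's grouping, with the control group as the implicit reference.

```python
# Minimal sketch of the paper's two Cox models using lifelines.
# Assumes a prepared analysis file with per-person follow-up time,
# death indicator, exposure-group dummies, and covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

# Model 1: demographic-adjusted (age and sex only)
covars_m1 = ["age", "sex", "exp_no", "exp_low", "exp_moderate", "exp_high"]
# Model 2: fully adjusted, adding socioeconomic and comorbidity covariates
covars_m2 = covars_m1 + ["catastrophic_cert", "premium_level", "income",
                         "copd", "cvd", "cancer", "dm", "rd"]

for name, covars in [("model 1", covars_m1), ("model 2", covars_m2)]:
    cph = CoxPHFitter()
    cph.fit(df[["time", "death"] + covars],
            duration_col="time", event_col="death")
    print(name)
    # Hazard ratios with 95% confidence intervals
    print(cph.summary[["exp(coef)",
                       "exp(coef) lower 95%",
                       "exp(coef) upper 95%"]])
```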
Conclusions
In this study, we discovered that although the higher mortality rates in the patients with schizophrenia were partly attributable to their comorbidities, schizophrenia was independently associated with an increase in overall and CVD-related mortality. The use of antipsychotics was beneficial to the patients with schizophrenia, and the most recommended dosage was low to moderate. In the older patients with schizophrenia, high antipsychotic exposure was associated with the highest mortality risk. The findings of this study indicate that improving drug adherence among patients with schizophrenia is crucial. In addition, given that research on older patients with schizophrenia is limited, this study provides valuable insights, indicating that an appropriate dose of antipsychotic medications must be prescribed in older patients with schizophrenia to prevent adverse outcomes related to high exposure to antipsychotic medications. Future studies should investigate the mechanisms underlying the associations between antipsychotic dosage and mortality in different age groups of patients with schizophrenia.

Figure 1. Overall mortality hazard ratios and 95% confidence intervals for level of exposure to antipsychotics in the demographic-adjusted model (a) and fully adjusted model (b) for patients with schizophrenia and older patients with schizophrenia relative to controls without psychiatric diagnoses.
Figure 2. CVD-related mortality hazard ratios and 95% confidence intervals for level of exposure to antipsychotics in the demographic-adjusted model (a) and fully adjusted model (b) for patients with schizophrenia and older patients with schizophrenia relative to controls without psychiatric diagnoses.
Figure 3. Flowchart for patient selection and recruitment.
Table 3. Fully adjusted hazard ratios for antipsychotic exposure in patients with schizophrenia and older patients with schizophrenia relative to controls (model 2).
Profile of Oncological Patients Needing Rehabilitation by Dental and Buccomaxillofacial Prostheses in a Brazilian Subpopulation

Purpose: This study aimed to identify the profile of cancer patients in need of rehabilitation with oral and/or buccomaxillofacial prostheses, as well as to evaluate the possible reasons for not concluding the rehabilitation. Materials & Methods: This is a retrospective observational study carried out at the Dentistry Department of the Mato Grosso Cancer Hospital, Cuiabá, MT, Brazil, through the evaluation of the medical records of patients attended from April 2017 to November 2019. Results: The study population comprised 256 patients who met the research inclusion criteria. It was found that 30.90% of the patients were elderly, 65.6% were men, 70.3% brown, 27.3% retired, 49.2% married and 52% coming from municipalities of the interior of the state of Mato Grosso. Of the total of patients, 67.23% reported smoking and 53.9% alcohol consumption. As for the location of the tumor, 57.4% had it in the head and neck region, 55.1% of which were epidermoid carcinoma, and in 28.9% of cases the disease stage was IV. Most of the patients (60.2%) completed prosthetic rehabilitation, with total prostheses predominating. The main reasons for not completing the rehabilitation were the patient's death and weakness. Conclusions: Patients who started treatment in more advanced stages of cancer had a greater chance of not completing the prosthetic rehabilitation, and the non-completion of rehabilitation treatment was directly related to the patients' death and the state of weakness.

Introduction
Cancers in the head and neck region profoundly affect the quality of life of patients, as they can affect the patient's aesthetics and are a constant reminder of the disease. These cancers are emotionally debilitating for patients and their families [1]. The diagnosis of cancer negatively impacts the patient's life, and feelings of fear and suffering are common throughout the process, which begins with the diagnostic phase, followed by therapy and survival [2,3]. Most post-treatment patients, whether surgical or not, remain in subsequent follow-up visits for an average period of 10 years [4]. Cancer treatment causes mild to severe adverse effects [5]. Large facial defects compromise vital functions such as breathing, chewing, speaking, swallowing and aesthetics. A prosthetic reconstruction of facial defects helps to restore functional disabilities and assists in the recovery of the patient and his family [6]. In the treatment of tumors in the head and neck region, patients are also often submitted to surgeries that have a serious impact on quality of life and can impair appearance and functional characteristics [7]. Mutilations of the face can bring very important aesthetic and functional damage to patients, causing morphofunctional and psychosocial changes that lead the individual to social and family isolation. Thus, it is imperative that health professionals commit to their rehabilitation [8]. Prosthetic treatment is indicated to regain lost oral functions, improve physical appearance and enable patients to participate in daily activities with greater confidence [9].
The absence of teeth, when not replaced by prostheses, also negatively affects the quality of life of cancer patients [10]. Given the important demand for dental care of patients with malignant neoplasms and the positive impact of rehabilitation on their quality of life, this study aims to outline the profile of cancer patients rehabilitated with dental and/or maxillofacial prostheses, as well as the reasons for non-completion of rehabilitation treatments.

Materials & Methods
Population selection
Data collection was performed by surveying the medical records of patients treated at the Dentistry Department of the Cancer Hospital of Mato Grosso, Cuiabá, MT, Brazil, from April 2017 to November 2019. This period was selected because the hospital uses a single medical record for each patient and the medical records are managed by an information system that was replaced in April 2017. All prosthetic care performed in the period was surveyed in the Care Management System, yielding the total number of patients who had an indication for prosthetic rehabilitation treatment in the period. Patients of both sexes and of any age were included. Patients who were rehabilitated but had no confirmed diagnosis of cancer and patients whose information could not be collected from the respective medical records were excluded.

Collected data
Information about age; sex; race/skin color (defined following the recommendation of the Brazilian Institute of Geography and Statistics - IBGE) [11]; origin (separated into two distinct groups: state capital and cities from the interior of the state); profession; marital status; smoking and drinking habits; and family history of cancer was collected. As for the characteristics of the disease, the tumor location, histological type, stage of the disease, and the cancer treatments performed (surgery, radiotherapy and chemotherapy) were collected. Regarding the characteristics of prosthetic rehabilitation, data were collected on the types of maxillary prostheses (total, partial, total obturator, and partial obturator), mandibular prostheses (total or partial), and facial prostheses, and the reasons for not completing rehabilitation. Patients who met the inclusion criteria but whose medical records did not present information on completion of the rehabilitation or installation of the prosthesis were actively sought by telephone and questioned as to the reason for not concluding the rehabilitation treatment. The patients or relatives contacted gave different answers regarding the non-completion of the treatment. These responses were grouped into five distinct groups, as follows: patient died before completing the rehabilitation treatment; patient interrupted the treatment because of weakness; patient is still in rehabilitation treatment; patient completed the rehabilitation somewhere else; and patients with no defined response (corresponding to the cases in which telephone contact was unsuccessful).

Data analysis
A single researcher collected the data and organized it in an Excel spreadsheet. Descriptive statistical analysis was performed for the studied variables. The results were presented as absolute and relative frequencies. To analyze possible associations between the independent variables and the dependent variables "not completing the prosthesis" and "reason for not completing the prosthesis", multinomial logistic regression was applied, following previously used methodology [12].
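The multinomial logistic regression named above can be sketched in Python with statsmodels. The sketch below is illustrative only: the file name, variable names, and predictor choices are assumptions, not the study's actual analysis script.

```python
# Minimal sketch of a multinomial logistic regression of "reason for not
# completing the prosthesis" on study variables, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("records.csv")  # hypothetical medical-record extract

# Outcome: reason group (death, weakness, still in treatment, elsewhere,
# no defined response); predictors: e.g., age group and cancer stage.
y = df["reason"].astype("category").cat.codes
X = sm.add_constant(pd.get_dummies(df[["age_group", "stage"]],
                                   drop_first=True, dtype=float))

model = sm.MNLogit(y, X).fit()
print(model.summary())
# Odds ratios relative to the reference outcome category
print(np.exp(model.params))
```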
Results
In the present study, 470 hospital records were analyzed, among which 256 records fit the research inclusion criteria. The other patients were rehabilitated but were excluded because they had no confirmed diagnosis of cancer or their information could not be collected from the respective medical records. The distribution of patients according to demographic characteristics, smoking and drinking habits, and family history of cancer is shown in Table 1. The distribution of patients according to the treatment and characteristics of the tumor is shown in Table 2. Figure 1 shows the flowchart of the characteristics of prosthetic rehabilitation and reasons for not completing the prosthetic treatment. The analysis of the association of age, marital status, city of origin and cancer staging with the completion of the prosthesis using multivariate logistic regression is shown in Table 3. The association between the stage of cancer and the reason for not receiving the prosthesis using multivariate logistic regression is shown in Table 4.

Discussion
Cancer patients rehabilitated with dental and/or maxillofacial prostheses are elderly, and the main reason for not concluding the treatment was the patient's death. The population studied was predominantly married and retired men. As for origin, 52% of the patients are from cities in the interior of the state, and 70.3% of the patients are brown. Caetano et al. [13] evaluated the quality of life, body image and self-esteem in patients with sequelae after treatment of head and neck cancer who were candidates for prosthetic rehabilitation. They had a sample of 10 patients and also found a predominance of male patients (60%); 50% married; 30% aged 51 to 60 years; 40% farmers, 30% retired; and 60% from the interior of the state. Rettig and D'Souza [14] mention that two of the main causes of head and neck cancer are the use of tobacco and alcohol. Gomes et al. [15] analyzed 33 patients, of whom 84.38% and 87.50% had used or were still using tobacco and alcohol, respectively. The anatomical sites most affected by tumors in this population were head/neck (57.4%), breast (12.1%) and prostate (10.3%). Breast and prostate cancer are among the most prevalent in Brazil [16]; the greater number of patients with tumors in the head and neck in this study can be explained by the fact that these patients always have their oral health analyzed in the Dentistry Department before starting antineoplastic treatment and thereby establish a connection, returning later for oral rehabilitation. The most frequent histopathological diagnosis was epidermoid carcinoma, with 141 cases (55.1%), and the most frequent disease stage was IV, with 74 cases (28.9%). Epidermoid carcinoma is the most frequent malignancy among tumors in the head and neck region and the sixth most common cancer worldwide [17]. Of the population studied, 154 patients (60.2%) completed their rehabilitation with dental prostheses, comprising 3 facial prostheses, 148 maxillary prostheses (including 11 obturators), and 126 mandibular prostheses. Quispe et al. [18] evaluated 75 individuals, but of this group only 30 were cancer patients.
Their research assessed the need for maxillary and mandibular prostheses. The data collected were as follows: 21 patients needed a maxillary prosthesis, whether to replace one element (10%), to replace more than one element (33.3%), as a combination of prostheses (13.3%) or as a total prosthesis (13.7%); 29 patients needed a mandibular prosthesis, whether to replace more than one element (70%), as a combination of prostheses (3.3%) or as a total prosthesis (23.7%). Joo et al. 7 note in their research that patients undergoing oncological treatment may have several sequelae impairing masticatory function, swallowing and aesthetics, so the use of a total or partial obturator prosthesis is an alternative to remedy such sequelae and enable a better quality of life for the patient. Parameswari et al. 19 also concluded in their research that prosthetic rehabilitation with an obturator prosthesis restores the missing intraoral structures and acts as an anatomical barrier between the oral and nasal cavities, restoring function and aesthetics. The study was carried out at the Cancer Hospital of Mato Grosso, located in the city of Cuiabá, MT. The Cancer Hospital of Mato Grosso uses a single medical record for each patient, regardless of the treatments performed in different departments of the institution. When working with the medical records, a lack of relevant information was noted. The absence of this information impaired the analysis and represents a limitation of this study. This limitation is often found in studies that work with secondary databases, which were not collected specifically for research. However, it is compensated by the possibility of providing information on a large number of patients quickly over an extended period of time 20 .

Conclusion

The patient rehabilitated with dental and maxillofacial prostheses is mainly male, elderly, brown, married, from cities in the interior of the state, a smoker and a drinker, with a family history of cancer. These patients typically had head and neck cancer of the epidermoid carcinoma type, in stage IV, and underwent surgery, radiotherapy and chemotherapy. The most frequently made prostheses were total prostheses. The main reasons for not completing the rehabilitation were the patient's death and weakness. Patients who started treatment in more advanced stages of cancer had a greater chance of not completing prosthetic rehabilitation, and their non-completion of rehabilitation treatment was related to death and weakness.

Declarations

Funding: None to declare. Availability of data and material: Not applicable. Code availability: Not applicable. Authors' contributions: All authors have made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data. All authors participated in drafting the article or revising it critically for important intellectual content. All authors approved the version to be published and all subsequent versions.

Figure 1. Flowchart of prosthetic rehabilitation characteristics and reasons for not completing the prosthetic treatment.
Trends and determinants of taking tetanus toxoid vaccine among women during last pregnancy in Bangladesh: Country representative survey from 2006 to 2019

Background

Tetanus occurring during pregnancy is still an important cause of maternal and neonatal mortality in developing countries. This study estimated the trend of tetanus toxoid (TT) immunization coverage from 2006 to 2019 in Bangladesh, considering socio-demographic, socio-economic, and geospatial characteristics.

Methods

The dataset used in this study was extracted from Multiple Indicator Cluster Surveys (2006, 2012-13, and 2019) including 28,734 women aged between 15-49 years. Data analysis was performed using cross-tabulation and logistic regression methods. Further, the spatial distribution of TT immunization coverage was also depicted.

Results

The trend of TT immunization (81.8% in 2006 to 49.3% in 2019) and that of taking adequate doses of TT (67.1% in 2006 to 49.9% in 2019) gradually decreased throughout the study period. Among the administrative districts, the North and South-West regions had lower coverage, and the South and West regions had relatively higher coverage, of both TT immunization and adequate doses. Antenatal TT immunization (any dosage, inadequate or adequate) was significantly associated with lower age (AOR = 3.13, 1.55-6.34), higher education (AOR = 1.20, 1.03-1.40), living in urban areas (AOR = 1.17, 1.03-1.34), having an immunization card (AOR = 5.19, 4.50-5.98), using government facilities for birth (AOR = 1.41, 1.06-1.88), and receiving antenatal care (ANC) (AOR = 1.51, 1.35-1.69). In addition, living in urban areas (AOR = 1.31, 1.10-1.55), having immunization cards (AOR = 1.62, 1.36-1.92), and choosing others' homes for birth (AOR = 1.37, 1.07-1.74) were significantly associated with adequate TT immunization. However, higher education (AOR = 0.57, 0.44-0.74), a poor wealth index (AOR = 0.65, 0.50-0.83), and receiving ANC (AOR = 0.76, 0.63-0.92) were associated with a lower likelihood of taking adequate TT immunization.

Conclusions

The gradual decline in the TT immunization rate in the present study suggests the presence of variable rates and unequal access to TT immunization, demanding more effective public health programs focusing on high-risk groups to ensure adequate TT immunization.

Earlier reports indicate that […] of mothers of children under 1 year of age and 52% of women of reproductive age took at least one TT immunization. Furthermore, 85% of mothers with children under one year old, 64% of married women who had previously been pregnant, and 47% of women of reproductive age reported receiving at least two doses of TT vaccination [24]. Another study reported that only 22.3% of pregnant women took one TT dose and 56.3% had more than 2 TT immunizations [25]. In addition, Abir et al. from Bangladesh reported that 26.0% of mothers had taken one TT dose, and 55.6% had received two or more doses [26]. These studies did not investigate the trends of TT coverage over the years, which is the focus of the present study. Notably, after India, China, and Pakistan, Bangladesh was identified as the country with the fourth-highest burden of neonatal tetanus, with an estimated 41,000 cases annually [23]. Vulnerability among pregnant women mostly resides in the rural parts of the country, where TT is a continuing program under the EPI for maternal and neonatal tetanus elimination. The Bangladesh government has run the EPI program since 7 April 1979 [27].
However, to our knowledge, changes over time in TT vaccination coverage have not been investigated. Little is known about the determinants of TT vaccination coverage in Bangladesh, a country that is a hotspot for infectious diseases. Better understanding the determinants of TT vaccination coverage may also assist in tailoring program interventions. Therefore, it is important to understand the general trend of TT vaccination coverage and the issues surrounding vaccination. This study aims to demonstrate the trend and determinants of TT immunization among Bangladeshi pregnant women using data from the United Nations Children's Fund (UNICEF) from 2006 to 2019.

Data overview

This cross-sectional study used survey data sets from 2006, 2012-13, and 2019 of the Bangladesh Multiple Indicator Cluster Surveys (MICS), an international survey initiative carried out by the Bangladesh Bureau of Statistics in collaboration with UNICEF. The MICS were developed and supported by UNICEF "focusing mainly on issues that directly affect the lives of children and women", allowing countries to generate evidence and recommended strategies to monitor progress toward the millennium development goals [28]. The 2019 survey employed a two-stage stratified cluster sampling method in which each of the 64 districts was treated as a sampling stratum. The primary sampling units were enumeration areas (EAs) based on the 2011 Bangladesh population and housing census, with households serving as secondary sampling units. Using the probability proportional to size (PPS) method, a total of 3220 EAs were selected from all 64 strata in the first stage of sampling. In the second stage, random systematic selection was used to choose a sample of 20 households from each sampled EA [29]. For the 2012-13 round, a two-stage stratified cluster sampling approach was used to determine the survey samples. Administrative districts were designated as priority districts and non-UNDAF districts under the United Nations Development Assistance Framework (UNDAF). Fifty sample clusters were chosen from each of the 20 UNDAF districts, and 40 sample clusters from each of the 44 non-UNDAF districts. A systematic random selection technique was used to choose sample households in each cluster from a list of households [30]. In the 2006 round, a multi-stage, stratified cluster sampling approach was used for the selection of the survey sample. In each enumeration region, households were sequentially numbered from 1 to 100 (or more), and 35 households were selected using systematic selection procedures [31]. We extracted data for women aged 15-49 years from the datasets of the three survey rounds. The total number of households surveyed from 2006 to 2019 was 206,568. In the raw data, the total number of women aged 15-49 years was 186,028 (unweighted). After cleaning the missing values, the remaining 11,812, 7,783, and 9,139 (weighted) women from 2006, 2012-13, and 2019, respectively, were included in the final analysis.

Outcome variables

The two outcome variables used for this study were similar to those of previous studies [13,18,19]. Participants were asked whether or not they took TT vaccination during their last pregnancy. In addition, based on WHO recommendations, receiving at least two doses of TT is considered adequate, while fewer than 2 doses are regarded as inadequate [19].

Statistical analyses

IBM SPSS Statistics version 26.0 was used for the analysis. First, simple descriptive tests were done to observe the exact group frequencies, percentages, minimum, maximum, range, etc. Then, Pearson chi-square tests were carried out to examine the association of covariates with the two dependent variables. Multicollinearity between independent variables was measured using correlation coefficients, and the values for all variables were less than 0.5, indicating the absence of multicollinearity. After that, logistic regression models for the binary outcomes were used to analyze the multivariable association between covariates and outcomes. Tableau Desktop version 2021.2 was used to create the line chart and ArcGIS v10.5 to visualize the % changes occurring across the 64 districts of Bangladesh. All tests were two-sided, with statistical significance set below 0.05 with 95% confidence intervals. Forest plots were used for the graphical representation of the significant findings.
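For readers who want to reproduce this pipeline outside SPSS, the following minimal Python sketch mirrors the main steps (chi-square screening, a correlation-based multicollinearity check, and adjusted odds ratios with 95% confidence intervals from a logistic model); the file mics.csv and all column names are assumed placeholders, not the actual MICS variable names.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

df = pd.read_csv("mics.csv")  # hypothetical extract of the MICS rounds

# Bivariate screen: Pearson chi-square of a covariate vs TT uptake.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["residence"], df["tt_any"]))
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Multicollinearity screen: pairwise correlations below 0.5 were
# taken in the paper to indicate no problematic collinearity.
covars = ["age", "education", "wealth", "anc_visits"]
print(df[covars].corr())

# Multivariable logistic regression; exponentiated coefficients give
# adjusted odds ratios (AOR) with 95% confidence intervals.
X = sm.add_constant(df[covars])
fit = sm.Logit(df["tt_adequate"], X).fit()
aor = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
aor.columns = ["AOR", "2.5%", "97.5%"]
print(aor)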
Ethical clearance

This study analyzed survey data from UNICEF, in which all personally identifiable information of participants had been removed. The national statistical office, the Bangladesh Bureau of Statistics, and UNICEF obtained informed consent from survey participants before their participation. Because we used publicly available de-identified data, our work was exempt from the full ethical review process and was approved by the Research and Ethical Committee of the Department of Biochemistry and Food Analysis, Patuakhali Science and Technology University (approval no.: BFA 12/01/2022:03). In addition, upon completing the registration process, the authors were granted permission to download and use the datasets. The data are available online: http://mics.unicef.org/surveys.

Changes in the prevalence of taking tetanus toxoid in Bangladesh

After excluding the missing values, 28,734 pregnant women aged 15-49 years were included in our study (Fig 1), and the prevalence of TT immunization declined across the three survey rounds.

Determinants of taking adequate tetanus toxoid vaccine in Bangladesh

The education of the participant is a significant covariate of adequate TT immunization. The data demonstrate that in 2006 and 2012-13, women who did not complete their secondary education had significantly lower uptake of adequate TT doses than uneducated women (AOR = 0.80, 0.70-0.91 and AOR = 0.77, 0.63-0.95, respectively) (Fig 5, S2 and S4 Tables). Similarly, women who completed secondary or higher education had significantly lower TT immunization in the 2012-13 and 2019 survey years than those who never attended school (AOR = 0.57, 0.44-0.74 and AOR = 0.70, 0.54-0.91, respectively). In addition, the study results disclosed that the odds of adequate TT immunization were significantly higher in urban areas than in rural areas.

Discussion

A global analysis [36] showed that sufficient TT immunization among pregnant women was 75% worldwide, 95% in South East Asia, 63% in Africa, and 53% in the East Mediterranean. A high socio-demographic position is associated with TT immunization. The higher TT immunization rate among younger women in the present study might be attributed to improved formal female education and access to modern media outlets. According to previous studies, age is an insignificant factor regarding adequate doses of TT immunization [13]. In contrast, an association between age and adequate doses of TT immunization has been demonstrated in other countries [18,19]. However, we determined that age was a significant factor regarding any dose of TT immunization only for 2012-13 and 2019. Further studies are warranted to comprehend this connection properly.
With some exceptions, this study suggests that increasing the level of education increases the uptake of TT immunization and of adequate doses of TT immunization, in line with other studies [37,38], possibly because educated women might be more likely to have decision-making power regarding their health, and education may improve knowledge about the deleterious effects of tetanus [18]. In our study, urban women were more likely to take sufficient TT immunization, in agreement with other studies [39,40], which might reflect the improved access to healthcare facilities in urban areas. Therefore, interventions especially targeting uneducated rural women are needed to improve the current scenario of TT immunization in Bangladesh. However, another study in Afghanistan concluded that urban women had lower odds of being sufficiently vaccinated and may be offered less TT immunization than their rural counterparts due to less knowledgeable ANC providers and lower vaccine availability [41]. Bangladesh has achieved remarkable health improvements during the last two decades [42]. More recently, Bangladesh was commended as an example of "good health at low cost" [42]. However, socioeconomic inequality in health, especially in maternal and child mortality, remains a disturbing reality in Bangladesh [42]. Similar to our findings, studies revealed that an increasing household wealth index is protective against tetanus compared to a poor wealth index [18]. Financial reasons might mean that women from wealthy households get easier access to healthcare facilities compared to those from poor households, while women of low economic status are challenged by availability and high maternity and transportation costs when seeking health care. Moreover, women from households with a poor wealth index might be engaged in other activities to fulfill their basic needs, limiting their time to utilize healthcare services compared with the richest, suggesting that strategies such as improvement of health literacy and logistic support could be adopted to improve coverage and equity of TT immunization among women in areas with a poor wealth index. However, this finding opposes a study conducted in the Gambia showing that the wealth quintile did not affect TT immunization [19]. This study showed that women with immunization cards had a higher chance of receiving TT immunization or adequate TT doses and thus of being protected from tetanus. This finding is similar to studies conducted in Ethiopia and Ghana [22,43], which encouraged promoting immunization card retention, as well as other health facility records, as a mechanism to improve the immunization rate. The present study demonstrated that birth at government facilities and home birth were protective factors for TT immunization and adequate TT doses. User fees for maternal health services and immunization might be a major barrier to increasing TT immunization in private clinics, suggesting exemption of user fees for maternal and child immunization [44,45]. However, the current study contradicts the findings of an earlier study [46] that showed that place of delivery did not affect TT immunization. Another important factor significantly associated with TT immunization was ANC follow-up, consistent with previous studies conducted in different countries worldwide [13,18].
This might be because higher ANC follow-up increases awareness of the importance of TT immunization; ANC visits also offer interventions and provide critical healthcare functions that might be crucial to health and well-being. Interestingly, women who attended many ANC visits were less likely to receive adequate TT immunization. This suggests that the burden of user fees and transportation costs for ANC might serve as a barrier to care and further discourage the continuation of TT immunization, contributing to the evidence that user fee exemption policies may reduce inequities in access to care [47]. The spatial analysis revealed district-wise variations in the rate of change of TT immunization from 2006 to 2019, with Rangamati and Bandarban districts experiencing the worst declines. Moreover, regarding adequate TT doses, the situation was worst in Bandarban, Cox's Bazar, Pabna, Natore, Thakurgaon, and Nilphamari districts. Therefore, the policy focus should be on the peripheral districts and the hill tracts of Bangladesh, to improve women's health literacy in remote areas and ensure maternal healthcare access. This study has several strengths. First, this is the first study to demonstrate trends of TT immunization among women during the last pregnancy in Bangladesh. Second, we used data from three nationally representative datasets, so the findings can be generalized to the whole nation. However, this study does have some limitations. Due to the cross-sectional nature of the study, causal inference about the association between TT immunization and women's health cannot be drawn. Since this study asked participants about past exposure, and numerous vaccines were indicated in the period in question, recall bias may occur. Media exposure variables such as watching television, using a computer, and listening to the radio were not common to the three datasets and are missing from the analysis. The use of secondary data in the analysis also meant we lacked control over the variables of interest included in the analysis.
The Fibrinolytic System: Mysteries and Opportunities

The deposition and removal of fibrin have been the primary roles of coagulation and fibrinolysis, respectively. There is also little doubt that these 2 enzyme cascades influence each other, given that they share the same serine protease family ancestry and changes to 1 arm of the hemostatic pathway would influence the other. The fibrinolytic system in particular has also been known for its capacity to clear various non-fibrin proteins and to activate other enzyme systems, including complement and the contact pathway. Furthermore, it can also convert a number of growth factors into their mature, active forms. More recent findings have extended the reach of this system even further. Here we will review some of these developments and also provide an account of the influence of individual players of the fibrinolytic (plasminogen activating) pathway in relation to physiological and pathophysiological events, including aging and metabolism.

Introduction

"Fibrinolysis" literally refers to the proteolytic removal of fibrin. That fibrin was the first recognized target for this process gave credence to its name and drove scientists to understand how this enzymatic process worked. After all, if the fibrin-destroying protease(s) could be identified and harnessed, there would be clinical opportunities to bolster this process for therapeutic advantage. The main components of this system were slowly identified, yet additional modifiers of the system continue to be revealed in more recent times (below). The fibrinolytic system is now appreciated for being a highly orchestrated, purpose-built process that ultimately results in the conversion of the abundant plasma zymogenic protein, plasminogen, into its active proteolytic form, plasmin. 1 The 2 classical enzymes engaged with the task of converting plasminogen into plasmin are tissue-type and urokinase-type plasminogen activator (tPA and uPA, respectively). Other proteases can also perform this function (ie, kallikrein 2 ), but not as well as tPA or uPA. These enzymes, and plasmin itself, are members of the serine protease family. These enzymes are all subject to tight modulation, an important and necessary feature to limit the magnitude and duration of plasmin's activity. Indeed, when all checkpoints are in place (ie, under normal conditions), the plasma half-life of plasmin was reported to be between 25 and 50 milliseconds. 3 The primary inhibitors of tPA and uPA, as well as the main inhibitor of plasmin itself, are also grouped into a family of serine-protease inhibitors (the "SERPINS"). Key among these are plasminogen activator inhibitor (PAI)-1 and PAI-2, which act on both tPA and uPA, while alpha2-antiplasmin has the crucial job of controlling plasmin. 1 This regulatory effect does not stop with these serpins, as the fibrinolytic system has additional layers of regulation that influence the ability of these proteins to bind to their target substrates. Thrombin activatable fibrinolysis inhibitor (TAFI) is a prime example of this and plays an important role in limiting the ability of plasminogen to assemble itself on the fibrin surface. 4,5 Fibrin contains exposed lysine residues that provide important docking sites for plasminogen, which conveniently harbors 4 lysine binding sites within its kringle domains. 6 TAFI, a carboxypeptidase, is able to remove the exposed lysine residues from the fibrin surface, thereby discouraging plasminogen away from fibrin and sparing fibrin from proteolysis.
While the fibrinolytic system can also be primed by other co-factors and inducers (eg, heparin, DNA, histones 7,8 ), plasminogen can also be targeted to cell-surface receptors (of which there are at least 12 9 ). Most of these receptors also contain exposed lysine residues (akin to fibrin). Once plasminogen is bound to the surface of a given cell, plasmin can be formed locally. In this scenario, however, the plasmin formed is not necessarily looking for fibrin to cleave; instead it has additional functions, ranging from the promotion of cell migration 10 to the initiation of intracellular signaling and cytokine release. 11 Here we will review the current use or manipulation of the fibrinolytic system in clinical medicine, then outline other processes that are impacted by the same system, which is beginning to raise speculation as to whether the current indications for the use of pro- or anti-fibrinolytic agents can be reconsidered. This review will also provide an overview of how the individual regulatory molecules of the fibrinolytic system can perform additional functions, some of which are now being subjected to therapeutic targeting for purposes unrelated to fibrin removal.

Clinical medicine and fibrinolysis: current use and recent developments

For decades, the fibrinolytic system has been modulated for therapeutic benefit. This is well-cemented in medicine. Thrombolysis, for example, is still mainstream in patients with thromboembolic conditions (ischemic stroke, pulmonary embolism, myocardial infarction and even ischemic limbs), albeit under strict guidelines. On the other hand, anti-fibrinolytic agents, most commonly the lysine analogue tranexamic acid (TXA), are in widespread use to stop bleeding by protecting fibrin-rich blood clots from premature removal by plasmin.

Thrombolysis

Streptokinase, derived from the broth of hemolytic streptococci, was the first fibrinolytic agent evaluated in humans, in the 1940s. 12 This was prior to the discovery and development of either tPA or uPA, which were both subject to clinical development initially in the 1950s but became prominent in the 1980s when recombinant DNA technology permitted large-scale protein production. Thrombolysis was initially used for patients with myocardial infarction, but in the mid-1990s, tPA was specifically approved for patients with acute ischemic stroke, albeit with limitations. 13 tPA is still the more widely used thrombolytic because it, unlike uPA, binds preferentially to fibrin, and fibrin binding markedly potentiates (~500-fold) the ability of tPA to activate plasminogen. 14 Fibrin therefore initiates its own demise 15 ; an essential requirement given that it is only formed as a temporary matrix. Even today, thrombolysis using tPA (alteplase) remains the front-line treatment for patients with acute ischemic stroke, despite the advent of endovascular clot retrieval (ECR). 16 One of the shortcomings of tPA is its short plasma half-life of ~5 minutes. 17 This posed a drug delivery challenge in emergency departments, as a 100% bolus administration of tPA would not last long enough to perform its lytic function. In order to maintain a lytic condition for long enough to remove any offending blood clot, tPA needs to be delivered as a 10% bolus followed by a 1-hour infusion.
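The arithmetic behind this regimen is easy to illustrate: assuming simple first-order elimination (an idealization of the true pharmacokinetics), the fraction of a bolus remaining after time t is (1/2)^(t/t1/2), as in the short calculation below.

# Illustrative first-order decay only; real tPA kinetics are more complex.
def fraction_remaining(t_min: float, half_life_min: float) -> float:
    return 0.5 ** (t_min / half_life_min)

# With a ~5 min half-life, only ~1.6% of a tPA bolus survives 30 minutes,
# whereas an agent with a 30 min half-life (see tenecteplase below)
# still retains 50% at that point.
print(fraction_remaining(30, 5))   # ~0.016
print(fraction_remaining(30, 30))  # 0.5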
It was realized early on that a thrombolytic with an extended plasma half-life would be far more practical, particularly when dealing with such time-sensitive conditions as acute ischemic stroke, where the earlier the administration of thrombolysis, the better. This practical consideration was the driving force behind the development of third-generation thrombolytic agents. 18 These variants were engineered to create a tPA-like molecule with a longer plasma half-life without compromising any of its other beneficial features. tPA is a complicated multi-domain molecule that contains 2 kringle domains, an epidermal growth factor domain and a finger domain, in addition to its critically important protease domain. 19 These other domains provide the means for tPA to participate in a myriad of "extra-curricular" functions 20 (and see below). However, in the context of conventional fibrinolysis and thrombolysis, the key feature to preserve was its remarkable ability to selectively bind to fibrin, and less so to fibrinogen. The most successful of the variants generated was tenecteplase (TNK), developed in 1994. 21 TNK differs from tPA by only 6 amino acids and shares 97% amino acid identity with tPA. Nonetheless, these substitutions endowed TNK with an increased plasma half-life of 30 minutes, a simple change that has been warmly welcomed in the clinical arena. Fibrin selectivity was not only maintained in TNK but actually enhanced (8-fold), thereby providing a longer-lasting tPA-like molecule with even more preference for fibrin. 21 While ECR has certainly made its impact on clinical management and outcome for patients with ischemic stroke, it is restricted to patients with large vessel occlusions and requires sophisticated infrastructure. Also, ECR usually occurs in conjunction with thrombolysis rather than instead of it. Although it is hard to imagine that there will ever be a more potent fibrinolytic enzyme than plasmin itself, more ingenious approaches might be forthcoming that are more efficient at generating plasmin, producing it in a more targeted manner, ideally only at the clot surface.

Natural variation in fibrinolytic capacity

Notwithstanding the above discussion about the current use of thrombolytics, there is an assumption that administration of a given thrombolytic will result in a comparable ability to generate plasmin in any individual. Challenging this notion is a study from 1992 by Tait et al, 22 who compared the capacity of plasma from over 9000 apparently healthy individuals to generate plasmin ex vivo after the addition of a fibrinolytic agent (in this case streptokinase). The approach was simply to use the plasmin-sensitive substrate S2251 to measure the rate of plasmin generation in plasma after the addition of streptokinase. This study revealed that the rate of plasmin generation varied over an 8-fold range and was further influenced by age, sex, and use of the contraceptive pill. 22 This is a very important finding when considering the fact that the use of thrombolytic agents in patients with acute ischemic stroke fails to improve outcome in >60% of cases. 23 While this has been attributed to clot burden or stroke severity, it is possible that the degree of plasmin generation following thrombolysis is simply not the same in all patients and may contribute to this high failure rate. 24 Some individuals may simply be less capable of responding to tPA to the extent required for thrombolysis.
Anti-fibrinolytics

The anti-fibrinolytic agent TXA was developed in Japan in the early 1960s 25 and is still used widely to reduce bleeding in various conditions, including trauma, post-partum hemorrhage and hemophilia, and also prophylactically in some surgeries. Being a lysine analogue and not a direct plasmin inhibitor (ie, unlike aprotinin), TXA blocks only lysine-dependent plasmin generation, while plasmin formed through lysine-independent means is spared from inhibition. This may be one reason why TXA is a safe drug, since it cannot completely inhibit plasmin formation. Nonetheless, some have argued that TXA might indirectly promote thrombosis, given that it shuts down the fibrinolytic process, which is lysine-dependent. However, most large-scale clinical trials and other meta-analyses have not raised any safety concerns. 26 On the other hand, the recent "Effects of a high-dose 24-hour infusion of tranexamic acid on death and thromboembolic events in patients with acute gastrointestinal bleeding" trial, which evaluated TXA in patients with upper and lower gastrointestinal bleeding, showed that thrombosis occurred more frequently in TXA-treated patients. 27 The reason why TXA increased thrombotic events in these particular patients is not clear.

What lies ahead?

The common use of the term "fibrinolysis" does little for the imagination in considering that this system performs anything other than fibrin removal. The classical view of fibrinolysis is certainly very important, and it is not the purpose of this review to downplay this, but rather to highlight the fact that the end-product of this enzymatic cascade, plasmin, has important additional roles in physiology. This functional diversity is not solely related to the pleiotropic effects of plasmin itself (although this is substantial): all key modulators of the plasminogen activating pathway, including tPA, uPA, PAI-1, antiplasmin and PAI-2, participate in many other areas of physiology. Some of these processes are independent of, and unrelated to, the activation or inhibition of plasmin. When considering these variables, the clinical manipulation of fibrinolysis takes on a new light, where it can no longer be assumed that administration of a thrombolytic agent like tPA, for example, only activates plasminogen, or that the administration of TXA only protects fibrin.

Diversity in function: plasmin

The enzymatic activity initially found in some strains of streptococci that caused fibrin breakdown was first named "fibrinolysin" to reflect fibrin as the substrate. However, after further studies in the 1940s, it was revealed that this moiety did not only cleave fibrin, but also gelatin and casein. 28 So even back in the 1940s, there was direct evidence that the key fibrin-busting enzyme had other targets. It was because of this non-specificity that "fibrinolysin" was renamed "plasmin", reflecting its activation from its zymogenic precursor, plasminogen, rather than tying its substrate specificity solely to fibrin. Plasmin is well known to cleave numerous substrates. But why produce plasminogen activators that are themselves relatively specific for plasminogen, while the resulting end-product (plasmin) has such broad specificity?
Perhaps the key to understanding this (at least in part) stems from our understanding of the process that initiates plasmin generation on the fibrin surface: namely, the critical role of lysine residues and how these residues initiate the entire process, thereby providing a form of "specificity". Once lysine residues become visible to plasminogen, plasminogen binds to fibrin and plasmin is formed exactly at the site needed. However, could the broad specificity of plasmin allow similarly targeted proteolysis to occur on different substrates? In this context, it is important to mention that plasminogen was found to bind to dead cells ~100-fold more than to live cells. 29 Samson et al 30 subsequently showed that protein aggregates/necrotic material formed in necrotic cells provided a non-fibrin cofactor for plasminogen activation that resulted in removal of these aggregates/necrotic material. What was important here was that plasminogen was again recognizing lysine residues formed in these structures, as this process was blocked by the lysine analogue TXA. In other words, the removal of both fibrin and misfolded/aggregated proteins was being initiated by the exact same process: the presence of exposed lysine residues that attract plasminogen to the fibrin or necrotic cell surfaces, where plasmin can be generated to remove the protein in question. Hence, the in vivo function of the fibrinolytic system also includes the removal of misfolded proteins, and the system is therefore important in maintaining protein homeostasis. While it has been well known that plasminogen-deficient mice display increased fibrin deposition, these mice also have increased levels of misfolded proteins after traumatic brain injury. 31 Adding further intrigue to this effect, plasmin formation not only facilitated the proteolytic removal of these proteins, but also enhanced phagocytosis by macrophages and dendritic cells, 32,33 while at the same time suppressing the immune response. 32 This immunosuppressive effect was speculated to minimize self-reactivity that might occur against misfolded proteins. The number of substrates that plasmin acts upon is quite extensive, with many linked to hemostasis, including von Willebrand factor 34 and many of the coagulation factors (FV, FVII, FIX and FX; see Ref. 35,36), protease activated receptor (PAR)-2, 37 as well as the contact pathway components (FXII and pre-kallikrein; see Ref. 38). Plasmin has also been reported to convert FX from a coagulation zymogen into a fibrinolysis cofactor. 39 It is also worth mentioning that plasmin acts to initiate other enzymatic or biological processes. Key among these is plasmin's ability to activate a number of the matrix metalloproteinases (MMPs 40 ), brain-derived neurotrophic factor, 41 transforming growth factor β, and key complement proteins (C3 and C5 42 ) into their respective mature forms. Plasmin can also activate signaling pathways downstream of any one of the 12 plasminogen receptors that exist on various leukocytes. 9 Plasmin formation, mostly in response to tPA, has also been shown to be involved in wound healing, 43 and in many functions in the brain, including memory formation 44 and the addictive response to alcohol, 45 morphine, 46 methamphetamine, 47 cocaine, 48 and nicotine. 49 These effects are most likely related to the capacity of tPA/plasmin to engage in synaptic plasticity in a way that enhances the addictive response.
Consistent with this, overexpression of tPA in the nucleus accumbens (the area of the central nervous system [CNS] important in the addictive response) potentiated sensitivity to many of these addictive agents. 50 Although this was not formally shown to be plasmin-mediated, the fact that plasminogen-deficient mice also displayed resistance to some of these addictive agents makes this highly likely.

Diversity of function of the other key components of the fibrinolytic cascade

It comes as no surprise that the major components of the fibrinolytic system also participate in processes unrelated to plasminogen activation, and also result in plasmin generation for purposes other than fibrin removal. The following sections will present some examples of the non-canonical functions of these proteins.

Tissue-type plasminogen activator

The importance of tPA in activating plasminogen is without doubt. However, tPA does cleave other substrates and has the capacity to bind to cell surface receptors and to promote cell signaling. One example of a non-plasminogen substrate for tPA was revealed by a Swedish group in the mid-2000s, where tPA was shown to directly activate (ie, independently of plasminogen) the platelet-derived growth factor (PDGF)-CC molecule. 51 This in turn allowed PDGF-CC to bind to its cognate receptor (PDGFRα) and to initiate a protein tyrosine kinase-dependent intracellular signaling event. Indeed, activation of this pathway resulted in opening of the blood-brain barrier (BBB) following tPA treatment in a mouse model of ischemic stroke. Moreover, this BBB-opening event was blocked using the tyrosine kinase inhibitor imatinib. 52 It is of immense interest that imatinib is currently being evaluated in a randomized clinical trial of patients with acute ischemic stroke to see if it reduces the deleterious effect of tPA in promoting symptomatic intracranial hemorrhage. 53 tPA can also interact with cell surface receptors, including the low-density lipoprotein receptors, notably low-density lipoprotein receptor-related protein 1 (LRP-1). 54,55 tPA has also been reported to promote various effects in the CNS (see Ref. 56 for review). Many of these studies did not determine whether the said effect was a direct effect of tPA or required plasmin formation. There are, however, some clear examples where some of the CNS effects of tPA are direct. Indeed, tPA was reported to activate microglial cells and to initiate cytokine release in a manner not only independent of plasmin formation, but entirely independent of its proteolytic activity. In this case, tPA was essentially acting as a ligand and was actually referred to as being a cytokine. 57 This was also revealed in a later study where an inactive tPA variant, also referred to as a "cytokine", was able to induce MMP-9 expression in kidney fibroblasts in a manner dependent on binding to LRP-1. 58 It was also during this period that tPA was shown to be neurotoxic, 59 an important yet worrying finding given the use of tPA for thrombolysis in patients with ischemic stroke. This process was dependent on the catalytic activity of tPA and also relied on plasmin generation, since the damaging effect was lost in plasminogen-deficient mice. 60 This example is included because other reports suggested that tPA was promoting neurotoxicity by directly cleaving the NR1 subunit of the glutamate (N-methyl-D-aspartate) receptor. 61 This was a point of much controversy 62 and could not be replicated by others. 63,64
Some reports indicated that NR1 cleavage was secondary to plasmin formation, 62,65 while others reported that tPA was able to directly stimulate neurons without any NR1 cleavage but required other cell surface receptors in this process. 65

Urokinase

uPA is the other major plasminogen activator in mammalian plasma. uPA is more often linked to plasmin-mediated proteolysis in the extravascular compartment, whereas tPA is more associated with conventional fibrinolysis (as well as its other attributes). uPA is often inextricably linked to a unique uPA cell surface receptor (uPAR 66 ), through which it can perform various functions, some linked to cell-surface plasminogen activation and others to intracellular signaling, 67 particularly in the context of malignancy, as recently reviewed. 68 uPA and uPAR have also been associated with cell adhesion, 69,70 neointimal formation, and atherosclerosis. 71

The PAIs

The 2 major PAIs (PAI-1 and PAI-2) have also been implicated in other processes. PAI-2 has the additional anomaly of being predominantly located intracellularly, although some is secreted and does influence the conventional fibrinolytic process in both thrombus resolution 72 and cancer metastasis. 73 PAI-2 is still viewed as being enigmatic, 74 with suggested influence on apoptosis, 75 human immunodeficiency virus replication, 76 monocyte proliferation and differentiation, 77 and more recently, blocking tumor growth. 78 It is also impressively upregulated in response to inflammatory cytokines, including tumor necrosis factor (TNF). 79 Early reports indicated that PAI-2 levels can increase over 1000-fold in some cells in response to TNF in combination with the phosphatase inhibitor okadaic acid. 80 PAI-2 is also highly regulated by aryl hydrocarbon receptor ligands and, by implication, associated with the process of carcinogenesis, as recently reviewed. 81 PAI-1, on the other hand, shares little in common with PAI-2 apart from the fact that they both inhibit tPA and uPA. PAI-1 is a particularly interesting serpin and has also been associated with numerous other processes (see Ref. 82). As it has the ability to interact with matrix proteins, including vitronectin, 83 PAI-1 is linked to tissue remodeling, cell migration, and intracellular signaling, with implications in the development of fibrosis, 84 obesity, 85,86 and, quite curiously, the process of cellular senescence. 87 Concerning the latter, a null mutation in the PAI-1 (SerpinE1) gene was reported to protect against aging in humans. 88 This was revealed in an Amish community in the United States that carried a null mutation in the PAI-1 gene. These individuals had longer telomere length and a more favorable metabolic profile with a lower prevalence of diabetes. Exactly how PAI-1 imposes these effects on these parameters is not clear but is of immense interest. PAI-1 can also modulate Notch signaling, with subsequent effects on endocardial proliferation and maturation. 89 These associations have led investigators to develop specific PAI-1 inhibitors [90][91][92][93] for some indications (ie, fibrosis, obesity and metabolic disorders, among others; see 94 ), and results are eagerly awaited. It is also notable that antibodies against TAFI are being explored, 95 but more in relation to conventional fibrinolysis.

Alpha2-antiplasmin

Antiplasmin has received relatively little attention compared to most other members of the fibrinolytic system, but there has been a resurgence of interest in recent years.
Deficiency of antiplasmin resulted in enhanced resolution of venous thrombi, 96 and this prompted efforts to therapeutically target antiplasmin in order to increase endogenous fibrinolytic activity. 97 This approach was successful in a model of pulmonary embolism 97 and of ischemic stroke. 98 Indeed, antibodies specific to antiplasmin removed pulmonary emboli in a manner similar to exogenous tPA. 97 Antiplasmin expression has also been detected in the hippocampus. Injection of neutralizing antibodies against antiplasmin was reported to increase hippocampal neurogenesis and spatial memory in mice. 99 Similarly, antiplasmin-deficient mice display impaired motor and cognitive function and show anxiety- and depression-like behavior. 100 Whether this is due to uncontrolled plasmin activity is not clear, but it is likely. Other reports have indicated additional functions for this serpin that appear to be unrelated to plasmin inhibition. For example, early studies suggested that antiplasmin promoted myofibroblast differentiation independently of plasmin inhibition. 101 A schematic representation of the various functions of the fibrinolytic system and its individual components is presented in Figure 1.

Unanticipated effects of antifibrinolytic agents

As the broadening role of plasmin has become evident, it is easy to imagine that the use of antifibrinolytic agents might have outcomes unrelated to their intended use, which is simply to stop bleeding. For example, administration of TXA to non-diabetic patients undergoing cardiac surgery resulted in a significant reduction in surgical site infection rates that was independent of its hemostatic effect. 102 That this was not seen in diabetic patients underscores the confounding effect of diabetes on the fibrinolytic process. TXA was also reported to reduce periprosthetic joint infection after primary arthroplasty 103 and revision arthroplasty, 104 although in the 2 examples related to arthroplasty, immune parameters were not evaluated and this effect was attributed to a reduction in blood transfusion requirements. More recent preclinical studies have yielded some surprising effects of TXA in other contexts. For example, TXA was reported to increase the lifespan of immune-compromised mice, coinciding with reduced levels of TNF, interleukin-6 and MMP-9. 105 The same group also reported that long-term (2-year) treatment of normal mice with TXA or aprotinin improved cognition and lowered brain amyloid levels. 106 Whether these longer-term effects of TXA are actually related to plasmin inhibition or to some other effect remains to be seen.

Conclusions and future "fibrinolytic" prospects

There is now a growing sentiment calling for a reappraisal of "fibrinolysis". Indeed, there is a lot more to this process than the removal of fibrin, and this is slowly gaining traction in the broader scientific community. Recent reviews have highlighted this very issue. 107 For decades, fibrinolysis has been largely ignored in comparison to the interest in the parallel coagulation cascade. Minimizing thrombosis risk is effectively tackled using various approaches to slow down the coagulation cascade, either by blocking the vitamin K-dependent coagulation enzymes in a general sense (warfarin), accelerating thrombin inactivation (heparin), or more directly targeting factor X or thrombin (various direct oral anticoagulants). Only when thrombosis really gets out of hand is a direct fibrinolytic approach considered (ie, thrombolysis), and even then, within strict guidelines.
The intrigue of scientists with coagulation also resulted in advanced laboratory approaches to measure almost every step of the cascade for diagnostic purposes. Individuals with deficiencies or modifications of any one of the numerous enzymes and regulatory proteins of the coagulation system were identified, permitting targeted, personalized treatment. Uncertainties can remain, and the underlying thrombophilic tendencies in some individuals remain a mystery. It is not beyond the realms of possibility that a defect in the fibrinolytic pathway may impact other processes even in the absence of a coagulation anomaly. On the other hand, if there were a deficiency in plasminogen, or if its potential to be activated were impaired, then individuals with either low or dysfunctional plasminogen would be expected to present with a thrombotic phenotype. However, conditions associated with plasminogen deficiency (despite their ultra-rarity of 1.6 per million 108 ) do not present with thrombosis, but rather with the accumulation of fibrin deposits elsewhere, most commonly on the inner eyelid, causing ligneous conjunctivitis. It should be remembered that plasminogen levels in these individuals are low, but not zero, so it is possible that the remaining plasminogen is enough to provide sufficient fibrinolytic activity in blood vessels. The fibrinolytic system is continuing to deliver surprises and has come a long way since its initial association with fibrin removal. Although this is arguably still the most relevant role for this system, particularly in the context of thrombosis and bleeding, its looming role in various non-canonical areas cannot be ignored.

Figure 1. Schematic representation of the fibrinolytic system and the broad effects of its component parts. Plasminogen is activated to plasmin by either tPA or uPA and can be inhibited by PAI-1 or PAI-2. Activation can also be endogenously inhibited by TAFI or therapeutically by TXA. Antiplasmin blocks plasmin activity. Additional effects of the individual components of this system are also indicated. Many additional effects of uPA are mediated by its interaction with its receptor, uPAR. Note, this list is not exhaustive. BBB = blood-brain barrier; CNS = central nervous system; HIV = human immunodeficiency virus; LDLR = low-density lipoprotein receptor; MMP = matrix metalloproteinase; PAI = plasminogen activator inhibitor; PDGF-CC = platelet-derived growth factor-CC; plg = plasminogen; TAFI = thrombin activatable fibrinolysis inhibitor; tPA = tissue-type plasminogen activator; TXA = tranexamic acid; uPA = urokinase-type plasminogen activator; uPAR = uPA cell surface receptor; vWF = von Willebrand factor.
Fractionation of Organic Carbon and Stock Measurement in the Sundarbans Mangrove Soils of Bangladesh

Mangrove soils are well known for their high capacity to store organic carbon (SOC) in various pools; however, a relatively small change in SOC pools could cause significant impacts on greenhouse gas concentrations. Thus, for an in-depth understanding of SOC distribution and stock, and to predict the role of the Sundarbans mangrove in mitigating global warming and greenhouse effects, different extraction methods were employed to fractionate the SOC of Sundarbans soils into cold-water (CWSC) and hot-water (HWSC) soluble, moderately labile (MLF), microbial biomass carbon (MBC), and resistant fractions (RF) using a newly developed modified method. Significant variation in total SOC (p < 0.001), SOC stock (p < 0.001) and soil bulk density (p < 0.05) was observed at the Sundarbans mangrove forest. In most soils, bulk density increased from the surface to 100 cm depth. Total SOC concentrations were higher in most surface soils and ranged from 1.21% ± 0.02% to 8.19% ± 0.09%. However, C in lower layers may be more resistant than that of upper soils because of differences in composition, sources and environmental conditions. SOC was predominantly associated with the resistant fraction (81% - 97%), followed by the MLF (2% - 10%). The higher SOC stock in the soil profile and its primary association with resistant fractions suggested that Sundarbans mangrove soil is sequestering carbon and thereby serving as a significant carbon sink in Bangladesh.

Introduction

Soil organic carbon storage, resulting from a range of natural biogeochemical processes, is a major ecosystem service that regulates the chemical, physical and biological properties of soil. The amount of organic C in soil (1500 GT) represents approximately 50% of the total terrestrial carbon (3060 GT), and this pool is approximately two times the atmospheric pool of 760 GT (Oelkers & Cole, 2008; Lal, 2008; Hossain et al., 2007). Only the ocean has a larger C pool (~38,400 GT), although most of it is in inorganic forms (Houghton, 2007). Moreover, it is estimated that ~20% - 30% of terrestrial SOC is stored in wetlands (Bridgham et al., 2006), although only 5% - 8% of land surfaces consist of wetlands (Mitsch et al., 2013). Therefore, wetland soils are major carbon sinks on earth because of the large amounts of organic carbon they store. Mangroves play a crucial role worldwide, providing a multitude of services, including carbon and nutrient cycling and coastal protection (Koshiba et al., 2013; Alongi, 2008). Several studies have identified mangroves as the most carbon-rich among forest ecosystems (Eid et al., 2019; Murdiyarso et al., 2015; Donato et al., 2011; Kauffman et al., 2011), acting as a powerful atmospheric C sink (three times the biomass) because of their high primary production capacity (Twilley et al., 1992). Although mangroves are efficient in sequestering and conserving C in soils, sediments, and plant biomass, there are concerns that global warming may release carbon into the atmosphere as CO2, the major greenhouse gas (Eid et al., 2019; Alongi, 2012). Still, situations may vary from place to place and time to time (Hossain et al., 2020). The Sundarbans plays a crucial but underrated role in carbon storage and greenhouse gas regulation.
The Sundarbans mangrove is currently experiencing among the highest rates of degradation of existing forest ecosystems, from both natural disturbances, including cyclones, storm surges, lightning, and pests and diseases, and anthropogenic disturbances, such as deforestation, pollution, urban and coastal development, agriculture and aquaculture conversion, hydrological disruptions, and over-exploitation of timber, wood, and fish (Grellier et al., 2017; Spaulding et al., 2010; Alongi, 2012; Duke et al., 2007; Alongi, 2002). Besides the loss of above-ground carbon by mangrove disturbance, significant CO2 is released into the atmosphere due to the decomposition of SOC. This wetland provides a potential sink for atmospheric carbon (Hossain et al., 2020; Murdiyarso et al., 2009). Soil organic carbon has been an essential topic of study in the field of soil and environmental sciences. Simply estimating total SOC in surface soils only is insufficient to study the storage, stability, and dynamics of SOC in relation to the ecosystem carbon balance, because the SOC pool is very dynamic from the surface layer to 1 m depth (Hossain et al., 2020; Lal, 2008) and the labile pool has a lower turnover time (<5 years) compared to resistant residues (20 - 40 years), which are physically or chemically protected (Hoyle & Murphy, 2006).

Soil Sampling and Preparation

Approximately 3/4 of the terrestrial carbon is present in the upper 1 m of soil (Lal, 2008; Hossain et al., 2007). The samples were air-dried by spreading on clean polythene sheets. Visible roots and debris were carefully discarded from the soils. Massive aggregates of the resulting dried samples were broken with a wooden hammer. Ground soils were screened using a 2 mm stainless steel sieve and preserved properly for analysis.

Analytical Procedure

Soil physical properties, namely bulk density (Blake, 1965), texture (Bouyoucos, 1936), soil acidity (pH) (soil:water = 1:2.5, w/v), and redox potential (Eh) (soil:water = 1:5, w/v), were determined. Organic carbon was determined by Walkley and Black's wet oxidation method (Walkley & Black, 1934). A new sequential fractionation scheme was developed for the fractionation of SOC in Sundarbans soils, based on Erich et al. (2012) and Ghani et al. (2003) with modification (Table 1). SOC is usually present in soils as labile, moderately labile, microbial biomass, and resistant fractions. Two major parts of the labile C pool are usually studied: cold-water soluble carbon (CWSC) and hot-water soluble carbon (HWSC) (Ghani et al., 2003). Quantitatively, the first is linked to dissolved organic carbon. The second contains more stable elements that form nutrient and energy reservoirs for plants and microorganisms. The labile C pool is a critical CO2 source and directly changes soil CO2 flow because of its high biodegradability (Gregorich et al., 2003). Moderately labile SOC is a reservoir of less decomposable soil organic carbon, and its primary and essential function is cation exchange. Organic-mineral aggregates constitute a significant source of this C pool. Although the microbial biomass carbon (MBC) pool only represents 1% - 5% of the total SOC in soil, it is the driving force of the soil carbon cycle (Erich et al., 2012).

Labile Fraction: Cold-Water (CWSC) and Hot-Water Soluble (HWSC)

For the CWSC fraction, 3 g of soil from each sample was weighed into 50 ml polypropylene centrifuge tubes, shaken on an end-over-end shaker and centrifuged as described in Table 1.
The supernatants were filtered through 0.45 µm cellulose-nitrate membrane filter papers into clean plastic bottles for carbon analysis. For the second fraction, residue soils from the first step were shaken for 10 seconds with 30 ml distilled water on a vortex shaker to suspend the soil in the added water and then left for 16 h in a hot-water bath at 80˚C. After the water bath extraction, the shaking was repeated for 10 s to fully release HWSC from the soil into the hot water. These tubes were centrifuged and the supernatants were filtered as described above. Finally, the organic HWSC was determined by multiplying the total hot-water soluble C by 0.963, because the average inorganic C of the soils was 3.7%.

Moderately Labile Fraction (MLF)

Soil samples were extracted using 125 mM Na4P2O7 (pH 5). Pyrophosphate is usually used to extract Fe-Al bound soil organic C (Erich et al., 2012; Wattel-Koekkoek et al., 2001; Schnitzer & Schuppli, 1989). Thus, this fraction likely represents C chemically sorbed on soil surfaces and protected from decay. The extraction was performed by adding 25 ml of extractant to the 3 g residue soils from the HWSC fraction, followed by 24 hours of end-over-end rotation at 40 rpm. The suspensions were kept in a safe place to settle overnight. The supernatant was cautiously transferred and filtered as previously described.

Microbial Biomass Carbon (MBC) Fraction

A chloroform fumigation-extraction technique was employed to assess microbial biomass C in field-moist soils (Ghani et al., 2003). Field-moist soil was used because it represents a nearly exact amount of microbial biomass C (Vance et al., 1987). Five-gram soil samples (dry weight basis) were fumigated with chloroform for 24 h and extracted with 0.5 M K2SO4 with 2 hours of shaking. Following the same procedure, a similar set of non-fumigated samples was extracted. Finally, the MBC fraction was estimated by subtracting the non-fumigated soil carbon from the fumigated C (Ghani et al., 2003).

Resistant Fraction (RF)

The resistant fraction is physicochemically protected against decomposition and was measured by subtracting the C concentrations of the above four fractions from the total SOC.

Soil Organic Carbon (SOC) Stock

Storages of SOC in Sundarbans soils were determined according to Hossain et al. (2015, 2007). The SOC storage is expressed on an oven-dry weight basis and calculated by using three variables, namely horizon thickness, soil bulk density and % organic carbon (Meersmans et al., 2008). The SOC stock was estimated by the following formula:

SOC storage (kg C m−2) = (%OC / 100%) × BD × thickness of layer (cm) × 10

where OC is organic carbon, BD is soil bulk density (g·cm−3), and 10 is the conversion factor from g·cm−2 to kg·m−2.
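To make the stock calculation concrete, the short sketch below applies the formula above to the three depth layers used in this study (0 - 15, 15 - 50, and 50 - 100 cm) and shows the by-difference bookkeeping for the resistant fraction; the bulk density, %OC, and fraction values are illustrative assumptions, not measured data.

# Minimal sketch of the SOC stock calculation described above.
# Layer boundaries follow the paper (0-15, 15-50, 50-100 cm);
# bulk density (g/cm^3) and %OC values are illustrative assumptions.

layers = [
    # (thickness_cm, bulk_density_g_cm3, organic_carbon_percent)
    (15, 0.60, 6.5),   # surface (0-15 cm)
    (35, 0.75, 4.0),   # subsurface (15-50 cm)
    (50, 0.90, 2.5),   # substratum (50-100 cm)
]

def soc_storage(thickness_cm, bd, oc_percent):
    """SOC storage (kg C m^-2) = (%OC / 100) * BD * thickness (cm) * 10."""
    return (oc_percent / 100.0) * bd * thickness_cm * 10.0

profile_stock = sum(soc_storage(t, bd, oc) for t, bd, oc in layers)
for (t, bd, oc) in layers:
    print(f"{t:>3} cm layer: {soc_storage(t, bd, oc):6.2f} kg C m^-2")
print(f"0-100 cm profile: {profile_stock:6.2f} kg C m^-2")

# The resistant fraction (RF) is obtained by difference, as in the
# fractionation scheme: RF = total SOC - (CWSC + HWSC + MLF + MBC).
total_soc = 100.0                            # arbitrary units, e.g. mg C per g soil
cwsc, hwsc, mlf, mbc = 1.5, 3.0, 6.0, 2.0    # assumed fraction values
rf = total_soc - (cwsc + hwsc + mlf + mbc)
print(f"Resistant fraction: {rf:.1f} ({rf / total_soc:.0%} of total SOC)")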
Statistical Analyses

The data were statistically analyzed using Minitab 2019 and Microsoft Excel 2016. Descriptive statistics (mean, median, standard deviation, minimum and maximum) were used to establish trends and differences in the data between variables and to produce tables and bar charts. Inferential statistical analyses were used to test the relationships between the different variables under study. Relationships were assumed to be significant when p < 0.05.

Soil Characteristics

Soil texture, soil bulk density and redox potential are the most important soil properties affecting the organic carbon concentrations and stock in the Sundarbans mangrove soils. A lower bulk density corresponds to a higher organic matter content. A lower level of bulk density was found in the Sundarbans soils, except S4, due to litter and roots that serve as habitats for aquatic biota, whose activities increase soil pore and macropore development (Eid et al., 2019) in comparison to other lands. A trend of decreasing SOC with increasing depth was determined in this study, although some soils contained higher SOC at 15 - 50 cm depth (Table 2).

Fractionation of SOC

Fractionation of soil organic carbon can give unique insights into its distribution in the Sundarbans ecosystem and its response to greenhouse gas emissions, mainly CO2 and CH4. The cold-water soluble carbon (CWSC) fraction is considerably smaller than the other carbon pools and constituted only about 0% to 3% (Figure 3). There was a decreasing trend of cold-water soluble C concentration from the surface to 100 cm, although some soils contained higher C at 15 - 50 cm depth. The amount of CWSC varies with soil type, depending primarily on soil carbon content and microbial activity. A similar C concentration in the CWSC fraction (1% to 1.25%) was reported in a potato ecosystem in Maine, USA (Erich et al., 2012). However, several researchers described 0.1% to 0.4% CWSC in oven-dry soils (Provenzano et al., 2010) and about 1% in field-moist soils in the Arial Beel wetland of Bangladesh (Eva et al., 2018). It has been suggested that the CWSC, being part of the highly labile C pool, may be susceptible to stress and perturbation in the soil-plant ecosystem (Doran & Parkins, 1994) and thus might be a vital source of greenhouse gases. In contrast, the HWSC fraction was higher than the CWSC fraction in all the studied soils. The partial extraction of non-humified organic material such as lignin, lignocellulose, and other carbohydrates by hot water may account for the higher C content of the HWSC fraction (Hossain et al., 2020). A similar HWSC carbon concentration (361 - 865 µg·g−1) was reported by Eva et al. (2018) in the Arial Beel wetland, Bangladesh. The moderately labile C fraction comprised a wide range (2% to 10%) of the total SOC and constituted a greater amount of C than the CWSC and HWSC fractions. This fraction possibly represents carbon chemically sorbed to clay surfaces and protected from decomposition (Hossain et al., 2020; Eva et al., 2018). The term moderately labile implies that this C fraction is not readily available to microbes for decomposition and may have less impact on climate change. Soil microbial biomass C is a measure of the carbon associated with the living components of the soil. Microorganisms are the main driving force of the C cycle: they decompose soil organic matter, release C as CO2 into the atmosphere, and assimilate some C into their body mass (Sylvia et al., 2005). Microbial biomass C ranges from 1% to 5% of the total SOC but can be as much as 8% (Erich et al., 2012). This study found a range of about 0% to 4% of total SOC (Figure 2), in good agreement with another study that reported 2% - 5% MBC in the Arial Beel wetland in Bangladesh (Eva et al., 2018). The MBC fraction gradually decreased from the surface to the substratum in all the soils. The relatively high amount of C determined in this study might be due to the presence of both aerobic and anaerobic microbes and very high organic matter in the Sundarbans wetland.
Besides, the distribution of dissolved organic carbon regulates the distribution of microbial biomass in soil (Zhang et al., 2006) because it is the principal energy source for microorganisms (Hofman et al., 2003; Haynes & Francis, 1993). A substantial quantity of carbon is predominately associated with the resistant fraction in the Sundarbans soils.

Correlation Analysis

The correlations between the different carbon fractions and between the carbon fractions and soil properties are presented in Table 5. Cold-water soluble carbon showed a positive and significant relationship with HWSC and MBC, whereas the HWSC fraction correlated significantly with all the soil carbon fractions. This relationship implies that the content of labile carbon fractions influences both the mass and activity of microorganisms in the soil (Hoyle & Murphy, 2006). The principal component analysis (PCA) also emphasized a highly positive relationship among the five C pools in the Sundarbans (Figure 3), and the two components (PCA 1 and PCA 2) together explained a total of 73.5% of the data variability in the studied areas. Multivariate analysis depicted 99.39% similarity among S1, S2, S9, S10, and S8 in cluster 1, while S3, S7, S4, S6, and S5 showed 99.62% similarity in cluster 2 (Figure 3). Since the S1 and S2 samples were collected from comparatively closer locations, they showed maximum similarity. In contrast, S3 - S7 showed similar soil properties and carbon fractions because they are nearest to the Bay of Bengal coast.

Storage of Organic Carbon in the Sundarbans Mangrove Soils

The stock of SOC in the surface (0 - 15 cm), subsurface (15 - 50 cm), and substratum (50 - 100 cm) layers, and the total stock in the 1 m soil profile of the Sundarbans mangrove soils, are illustrated in Figure 4. SOC stock in the Sundarbans soils was determined to 1 m depth because substantial SOC may accumulate in lower soil layers (Lal, 2008). The maximum SOC stocks were found in S4 (14.32, 49.88, and 70.93 kg·C·m−2, upper to lower layers respectively), possibly due to higher burial of organic matter by natural calamities. In contrast, the minimum SOC stocks in the three studied depths were found in S3, S7, and S9, respectively. The soils at 50 - 100 cm contained more carbon than the other two layers because the bulk density and thickness of this lower layer (50 cm) are greater than those of the surface (15 cm) and subsurface (35 cm). Also, more than 8 times higher SOC storage was determined in S4 than in S3. The subsurface and substratum C sources are mainly root residues, water transportation of particulate and dissolved organic C from the soil surface, and bioturbation (Rasse et al., 2006). Compared with other studies globally, the SOC stocks in the Sundarbans mangrove were higher than the values described in Saudi Arabia (19.9 to 29.2 kg·C·m−2), Vietnam (5.8 kg·C·m−2), Egypt (8.5 kg·C·m−2), and New Zealand (6.9 kg·C·m−2), suggesting that the role of the Sundarbans as a carbon sink might be more significant than conventionally ascribed.

Conclusion

Soil organic carbon is a vital component of the Sundarbans soils that significantly impacts the proper functioning of its ecosystem. The SOC fractionation study showed that the cold-water soluble carbon (CWSC) fraction was considerably smaller than the other fractions, depending primarily on total SOC content and soil microbial activities. The moderately labile (MLF) and hot-water soluble (HWSC) fractions were the second and third most dominant fractions and included root exudates, clay-bound C, soluble amino acids and carbohydrates. The SOC was mainly associated with the resistant fraction, at about 81% to 97% of total SOC.
The resistant carbon fraction (RF) takes centuries to decompose and is largely unavailable to microbes. Moreover, these five SOC pools were each significantly correlated with two or more other fractions; thus, changes in one pool can cause a carbon imbalance in the Sundarbans ecosystem. In addition, the Sundarbans soils contained a high SOC stock in the studied soil profiles. Therefore, this study revealed that Sundarbans mangrove soils are a good carbon sink and suggested that sustainable management of the Sundarbans mangrove reserve would increase SOC storage and contribute to climate change mitigation.
2022-01-01T16:15:54.261Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "2417c851be0c4041d199eee3b089cb09190d5c3a", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=114407", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a6e46085c598995f70b40abf9f469dd70059af87", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
245684186
pes2o/s2orc
v3-fos-license
SHARIA LIFE INSURANCE BUSINESS AND RISK MANAGEMENT BASED ON SHARIA PRINCIPLES: REGULATORY PERSPECTIVE

Sharia life insurance is an agreement between two parties. One party is obliged to pay a contribution or premium, and the other must provide a full guarantee to the insured, undertaking protection against a contingent or uncertain loss based on the contract. This research used a descriptive qualitative approach. It seeks to explore and understand the central phenomenon to obtain in-depth data and reveal the facts about the research object. Theoretically, this study can contribute insights into the insurance partnership of ASYKI, Inc. in increasing the number of insurance participants. Practically, it is expected that people will no longer have doubts about sharia life insurance in the formal juridical sense related to insurance. The results showed that the Asuransi Shariah Keluarga Indonesia (ASYKI), Inc. Pasuruan Unit's performance is generally good. In line with Law No. 40 of 2014, its risk management is sharia-oriented. However, this does not rule out individuals registering on their own; for them, the ASYKI, Inc. Pasuruan Unit provides a solution: they should register through the ASYKI partners. This opens an opportunity for policy applicants to easily recognize and apply sharia life insurance without going through business entities or companies that act as intermediaries.

Abstrak: Sharia life insurance is a coverage or agreement between two parties, in which one party is obliged to pay a contribution or premium and the other is obliged to provide a full guarantee to the contribution/premium payer if something happens to the first party or his property, in accordance with the agreement that has been made. The approach used in this study is a descriptive qualitative approach. This approach seeks to explore and understand the central phenomenon in order to obtain in-depth data to reveal the facts occurring in the research object. The theoretical contribution of this study is to add knowledge about the insurance partnership of PT. ASYKI in increasing the number of insurance participants.
Practically, it is intended that the public will no longer have doubts about sharia life insurance, in the formal juridical sense related to insurance. The results of this study show that the PT. Asuransi Shariah Keluarga Indonesia (ASYKI) Pasuruan Unit is, in general and as a whole, good and in accordance with Law No. 40 of 2014, and its risk management is already based on sharia principles. Although this does not rule out individuals registering on their own, the solution offered by the PT. ASYKI Pasuruan Unit is that individual insurance policy applicants be directed to one of the partners cooperating with PT. ASYKI; in the future, it will open opportunities for individual policy registrants, so that the public is freer to get to know and apply sharia life insurance easily without having to go through a business entity or company acting as an intermediary.

INTRODUCTION

The development of Islamic insurance in Indonesia has progressed rapidly due to the majority Muslim population, making the demand for Islamic insurance even higher. There are several types of insurance offered by insurance companies in Indonesia, one of which is life insurance. Insurance is an agreement between two parties in which one party is obliged to pay a contribution or premium. The other must provide a full guarantee to the contribution/premium payer if something happens to the first party or his belongings, based on the agreement that has been made. The term insurance usually refers to anything that is protected. 1 In 2020, Sulistio Purwaningrum et al. conducted a study on "Determining the Growth of Assets of Sharia Life Insurance Companies in Indonesia for the 2013-2018 Period." It found that the growth in assets of sharia life insurance companies is simultaneously affected by investment returns, operating expenses, participant contributions, and claims. 2 Ridwan Tabe et al., in their research in 2018 entitled "The Effect of Premiums on Profits in Life Insurance Companies at the Sharia Unit of Panin Dai-Ichi Life Indonesia, Inc.," explained that the premium has a significant effect on profits. 3

1 Wikipedia, "Asuransi," 2020 <https://id.wikipedia.org/wiki/Asuransi>.

Meanwhile, Siti Maskanah researched "Implementation of Sharia Life Insurance Products for Family Economic Stability." The results show that having protection in the form of life insurance provides safety and comfort, and investment is one of the conditions for a family's economy to become stable. Sharia life insurance products are in demand by people with both minimal and high incomes. 4 Alfa Immanuel Wijaya, in 2019, conducted research entitled "Implementation of the Unit Link Product Life Insurance Agreement at Allianz Life, Inc. Lampung Branch." It explained that the life insurance agreement, according to the law, must be set out in a deed called a policy. The policy is evidence of an insurance agreement, as stated in Article 225 of the Indonesian Commercial Code, and the formal terms of the policy are stipulated in Article 265 of the Commercial Code. According to the provisions of the article, there are four conditions for the validity of an agreement: the existence of consent, the existence of authority, the existence of a particular object, and the existence of a lawful cause. Meanwhile, the benefits of unit-linked product life insurance are considered very profitable for the insured.
Some of the benefits include additional protection, premiums that do not expire, an extended coverage period, multiple benefits, ease of investment, and premium leave facilities. According to Dedi Yulianto, in his 2018 research "Insurance Strategies in Fostering Public Interest in Al-Amin Sharia Life Insurance, Lampung Branch," Al-Amin Sharia Life Insurance Lampung Branch provides sharia life insurance for consumers from various institutions such as banks, cooperatives, BMT, universities, and schools. The company offers protection to bank debtors whose loans risk default due to death, by introducing Sharia Al-Amin Financing products. 5 Another study, conducted at an insurance company's Medan Branch, found that the risks in life insurance include risks in the insurance industry such as uncertainty and financial losses. 6 The similarity between previous research and this research lies in examining the management used by a sharia life insurance company, both for unit-linked products and for strategies to develop sharia life insurance management. The gaps between previous research and this research concern the requirement that the life insurance agreement be stated in a deed or policy, the challenge of raising public interest so that people trust sharia life insurance more than conventional insurance, the uncertainty of the insurance industry, and financial loss. Nevertheless, this study discusses how a company's management system manages a life insurance product under sharia, and its legal perspective according to Law No. 40 of 2014. Someone who applies for insurance does so with the aim that, if something happens to him, there is a company willing to bear it. Various types of insurance offer sickness coverage, but some also offer coverage for goods, employment, and life. Along with the development of the times, many institutions have established sharia-based insurance services; of course, the principles used in their management are in accordance with the principles of sharia and Islamic teachings, as explained in QS. al-Maidah: 2: "And cooperate in righteousness and piety, but do not cooperate in sin and transgression. And fear Allah; indeed, Allah is severe in penalty." 7 In insurance, there is also an element of mutual help between fellow humans, based in this case on the commandments of Allah contained in QS. al-Maidah: 2. Insurance, when viewed from a sharia perspective, is essentially a form of mutual risk-bearing between fellow humans; each becomes the guarantor of the other's risks. 8 According to DSN Fatwa No. 21/DSN-MUI/X/2001, sharia insurance (ta'min, takaful, or tadhamun) is an effort to protect and help among a number of people/parties through investment in assets and/or tabarru' funds that provides a pattern of returns to face certain risks through a sharia-compliant contract (akad). Life insurance in the Indonesian Commercial Code (KUHD) is regulated in Book 1, Chapter X, Articles 302 to 308 of the KUHD. Based on the provisions of Article 255 of the KUHD, life insurance must be concluded in writing in the form of a deed called a policy. According to the provisions of Article 304 of the KUHD, the life insurance policy contains several particulars: the day the insurance is concluded, the name of the insured, the name of the person whose life is insured, the time at which the risk for the insurer starts and ends, the amount insured, and the insurance premium.
9 The Law of the Republic of Indonesia Number 40 of 2014 concerning insurance, Article 3 paragraph 6, states that "Life Insurance Business is a business that provides risk management services that provide payments to policyholders, the insured, or other entitled parties in the event the insured dies or remains alive, or other payments to the policyholder, the insured or other entitled parties at a certain time as stipulated in the agreement, the amount of which has been determined and/or is based on the results of fund management." Paragraph 9 of the same article states that "Sharia Life Insurance Business is a risk management business based on Sharia Principles to help and protect one another by providing payments based on the participant's death or life, or other payments to participants or other parties entitled at the specified time as stipulated in the agreement, the amount of which has been determined and/or based on the results of fund management." According to the articles above, the primary purpose of life insurance is mutual help and mutual protection, as between siblings. In Islam, death is a certainty and a person's destiny; no one knows when it will come except Allah Swt. Humans are therefore allowed to prepare themselves as well as possible before that destiny arrives, by arranging protection of the soul while prioritizing sharia compliance. The fundamental reason for taking up the theme of this research study is that, in social life, there are still many who do not know about the existence of life insurance, perhaps because they do not know, or know but are hesitant to register themselves as life insurance customers.

9 Kitab Undang-Undang Hukum Dagang (Bandung: Gramedia Press, 2013).

Because of this, the author discusses the Sharia Life Insurance Management system at the Asuransi Shariah Keluarga Indonesia (ASYKI), Inc. Pasuruan Unit, and the perspective of Law Number 40 of 2014 concerning the Sharia Life Insurance Business in Risk Management Based on Sharia Principles. Whereas in social life, many people still do not know about life insurance; some do not know, or know but are hesitant to register or participate in life insurance. From the phenomena that occur and the research gap that appears in this study, it is clear that not every empirical incident is in accordance with existing theory. This is confirmed by the existence of a research gap in previous studies. The various studies above show that there are differences in asset growth among sharia insurance companies. However, this research concerns the sharia life insurance business based on Law No. 40 of 2014, where research gaps emerge in risk management oriented towards shari'ah compliance, the value of justice, and the benefit of the people. Thus, this study asks how sharia life insurance business management and risk management according to Islamic sharia are carried out at the Asuransi Syariah Keluarga Indonesia (ASYKI), Inc. Pasuruan Unit. The approach used in this study was qualitative. This approach seeks to explore and understand the central phenomenon to obtain in-depth data and reveal the facts about the research object. Objects in qualitative research are natural objects or settings, so this method is often referred to as the naturalistic method. Natural objects are objects as they are, not manipulated by the researcher; the object remains relatively unchanged when the researcher enters it, while in it, and after leaving it.
Qualitative research methods are used to obtain in-depth data, data that contains meaning. Meaning is the actual data, the value behind the visible data. Therefore, qualitative research does not emphasize generalization but instead emphasizes meaning. Generalization in qualitative research is called transferability, which means that the study results can be applied to settings with similar characteristics. 10 Data analysis in this research used the qualitative descriptive method. The descriptive method attempts to describe, analyze, and assess the material on which the research focuses. The data were collected from various data sources, both primary and secondary: interviews, field notes, official documents, related files, and web sources on the problems the researchers discussed. The analysis was related to the management system at the Asuransi Shariah Keluarga Indonesia, Inc. Pasuruan Unit.

BASIC INSURANCE CONCEPTS

According to Law No. 2 of 1992, insurance is an agreement between two or more parties, whereby the insurer binds himself to the insured, by receiving an insurance premium, to provide compensation to the insured due to loss, damage, or loss of expected profits, or legal liability to a third party that the insured may suffer, arising from an uncertain event, or to make a payment based on the death or the life of an insured person. 11 Muhammad Muslehuddin, in his book Insurance and Islamic Law, adopts the meaning of insurance from the Encyclopedia Britannica: it is a provision prepared by a group of people, who may suffer losses, to deal with unforeseen events; if the loss falls on one of them, the burden of the loss is distributed across the whole group. 12 Since the time of the Prophet Muhammad, Muslims have played an essential role in introducing the insurance system to the world. In the year 200 H, many Muslim entrepreneurs pioneered the takaful system, a system of collecting funds to be used for mutual help among businessmen suffering losses, such as when a cargo ship hits a reef and sinks, or when someone is robbed, resulting in the loss of part or all of his property. This term is better known as "sharing of risk." 13 According to Doctor Jafril Khalil, concerning the DSN-MUI Fatwa, the contracts contained in sharia insurance are not limited to the tabarru' and mudharabah contracts; there are other types of tijarah contracts, such as musyarakah (partnership), wakalah (appointment of representatives/agents), wadiah (deposit contract), syirkah (association), musahamah (contribution) and others that are recognized and justified in syar'i terms for use in sharia insurance. 14

11 Sri Nurhayati, Akutansi Shariah di Indonesia (Jakarta: Salemba Empat, 2013), 27.

At the time the premium is first received, sharia life insurance applies two contract forms, namely the investment savings contract and the contribution contract. Investment savings contracts are based on the mudharabah principle, and contribution contracts apply the grant principle; the grant is made collectively, which has a mutually beneficial effect. The grant amount is 5% to 10% of the total premium; the remaining 95% to 90% goes into the investment savings of the participants/customers. In life insurance, the insured event is death. Death results in the loss of income for a person or a particular family. The risks that may arise in life insurance lie mainly in the "time element," because it is difficult to know when someone will die.
Life insurance is held to minimize this risk. Nowadays, the agreement or contract between the insurer and the insured almost always takes a standard form (the policy). Standard agreements are used so that service transactions can be carried out efficiently and practically, without obstacles due to "bargaining" before concluding an agreement. In a standard agreement, the clauses have been determined unilaterally by the insurer. These clauses tend to prioritize the insurer's rights over the insured's rights and the insurer's obligations. As a result, the insurance agreement is now more straightforward and does not take a long time to conclude. This led to the development of life insurance in the form of unit links or Link Assurance. 15 Another discussion states that the life insurance agreement, according to the law, must be stated in a deed called a policy, as evidence of an insurance agreement, as contained in the Indonesian Commercial Code; the Code thus regulates the general provisions that must be fulfilled for a deed to be referred to as a policy. The terms of the life insurance agreement are regulated in Article 1320 of the Indonesian Commercial Code (KUHD). According to the provisions of this article, there are four conditions for the validity of an agreement: the existence of consent, the existence of authority, the existence of a particular object, and the existence of a lawful cause. Meanwhile, the benefits of unit-linked product life insurance are considered very profitable for the insured. Some of the benefits include additional protection, premiums that do not expire, an extended coverage period, multiple benefits, ease of investment, and premium leave facilities. Insurance can help humans overcome all the risk problems they face. Insurance has now expanded its function: it protects the insured against the risks they face and manages public funds through unit-linked product life insurance investments. Unit-linked product life insurance is formed by entering into a risk transfer agreement. The insured party of the unit-linked product life insurance binds himself to pay the premium. The insurer informs the prospective insured of the terms and procedures for participating in unit-linked product life insurance. If the insured candidate fulfills the requirements and procedures, a legal relationship arises, which creates the rights and obligations among the parties that must be fulfilled as set out in the contents of the policy. 16 The types of risk commonly known in the insurance business include the following. First, pure risk is the uncertainty of a loss; in other words, there is only a chance of loss and no chance of profit. Pure risk is a risk that, if it does not occur, will cause neither loss nor benefit. For example, the car you are driving might be hit. If the car is insured and then hit, the owner will suffer a loss; if this does not happen, the owner will neither lose nor benefit. In its operation, the insurance company is constantly faced with this kind of pure risk. Second, investment risk is the risk associated with two possibilities, namely the chance of experiencing financial loss or the chance of gaining profit. The difference between pure risk and investment risk is the possibility of a profit. An example is investing in stocks on the stock exchange: stock price fluctuations can cause losses or gains.
Third, individual risk is a risk that affects a person's capacity or ability to obtain an advantage. Individual risk can be divided into three types. First, personal risk: for example, the risk of a reduction or loss of one's capacity to earn, which may be caused by dying young, aging, physical disability, or losing a job. Second, property risk: a financial loss occurs if an object or property we own is lost, stolen, or damaged; loss of property means financial loss. Third, liability risk: a risk of causing loss to other parties. If a person must bear someone else's loss, he must pay for it, which is a financial loss. 17

REGULATION OF SHARIA PRINCIPLES IN LIFE INSURANCE

1. Basic Insurance Law

This relates to the Law of the Republic of Indonesia No. 40 of 2014 concerning insurance, within the scope of insurance. 18 General insurance companies can only carry out: general insurance business, including health insurance business lines and personal accident insurance business lines, and reinsurance business for the risks of other general insurance companies.

2. Types of Akad in Shari'ah Insurance

In addition to the mudharabah contract, several other forms of contract are applied in sharia insurance, including the wakalah, wadiah, and musyarakah contracts. The contract forms mentioned above are applied based on the situation and condition of the business activities carried out by the parties concerned. Each contract has different characteristics or conditions in its application. 19

3. Law Number 40 of 2014

The explanation of life insurance in Law Number 40 of 2014 is contained in Article 1 numbers 6 and 9, as quoted above. Life insurance has also been explained in the Code of Commercial Law, Chapter X, concerning "Insurance or coverage against fire hazards, against hazards that threaten agricultural products that have not been harvested, and life insurance." In particular, life insurance is addressed in Article 302: "A person's life can be insured for the benefit of an interested person, either for the duration of life or for a time specified in the agreement." This is continued in Article 303: "The interested party can conclude the insurance even without the knowledge or permission of the person whose life is insured." The terms of the policy that must be fulfilled are also explained in the Commercial Code, Article 304: "The policy contains: the day the insurance was concluded, the name of the insured, the name of the person whose life is insured, the time at which the risk for the insurer starts and ends, the amount of money insured, and the insurance premium." Article 306 states: "If the person whose life was insured had already passed away at the time the insurance was concluded, the agreement is void, even though the insured could not have known about the death, unless otherwise stipulated." Moreover, Article 307 explains: "If the person whose life is insured commits suicide or is sentenced to death, the insurance is void." 21

RISK MANAGEMENT IN THE SHARIA LIFE INSURANCE BUSINESS

Overview of the Pasuruan Unit

Asuransi Shariah Keluarga Indonesia, Inc. was founded by activists and practitioners of Islamic economics and microfinance.
They were concerned about building independence and developing the community's economic welfare, especially families from the middle-to-lower economic strata or low-income people, through Sharia Microfinance Institutions (LKMS) and sharia insurance. Asuransi Shariah Keluarga Indonesia, Inc. starts from the premise that humans cannot be spared from calamities in their lives; still, as social beings, when a disaster occurs they are obliged to help one another. Sharia insurance has the primary function of operating the sharing of risks between participants or policyholders when a disaster occurs. The basic concept of sharia insurance is helping one another in kindness and taqwa. This principle makes the insurance participants one large family whose members help one another. Therefore, Asuransi Shariah Keluarga Indonesia, Inc. is part of ta'awun and of sharing blessings with the ummah. Asuransi Shariah Keluarga Indonesia, Inc. also holds to the concept and philosophy of ta'awun, which is explained in the Qur'an. Humans as individual beings and as social beings are a unity that cannot be separated. They must realize that their life only has meaning if they are involved in social relationships, interactions based on an attitude of help among pluralistic communities. In other words, without other people, or without living in society, a person is meaningless and can do nothing. In maintaining life and pursuing a better life, a person cannot work alone without the help and assistance of others. Therefore, Islam recommends that its adherents maintain an attitude of mutual help and assistance in living their lives. This attitude works well when there is communication between people and they understand it, because human interests are always related to other humans. 22 In the al-Qur'an, Allah Swt. has ordered Muslims to continually unite and help each other for the sake of the strength and glory of Muslims. If this happens, Muslims will be respected and liked, and will respect other groups outside of Islam. Allah Swt. has confirmed this in QS. al-Maidah: 2, which reads: "And cooperate in righteousness and piety, but do not cooperate in sin and transgression. And fear Allah; indeed, Allah is severe in penalty." From the above verse, it can be understood that helping one another in goodness is an effort to increase piety to Allah Swt. This attitude applies not only in material matters but also in non-material matters, such as when people experience worries and troubles. In this context, the help we can give is non-material in nature: providing advice and motivation to cheer the person up. As a result, the worries and troubles he experiences will be replaced with joy. In that verse, the help referred to is help that is non-material in nature. In the author's view, help in this form can be termed da'wah, namely help by inviting people to do good or, in the terms of the verse, al-birr and al-taqwa. From that verse, it can be said that the one who helps is not limited to certain people, especially for help that is non-material in nature; only material help is limited to people who have material means. For example, a wealthy person helps his poor brother, and so on. In people's lives (in Indonesia), this attitude has become a national culture known as "mutual cooperation."
This culture has been practiced from generation to generation, from the ancestors of the Indonesian nation to the generations of this century, although the form of assistance varies according to people's abilities and the conditions they face. In the city, for example, the assistance provided is more material in nature, whereas for rural communities it is more non-material, in the form of labor or the like. Therefore, habits like these should be preserved continuously, whenever and wherever we are. This is based on the teachings of Islam, which always recommend that adherents help one another, especially fellow Muslims. In this way there will be solid unity and a close brotherhood among humankind. Allah Swt. will send down His help as long as a servant helps his brother, as the Prophet confirmed in his words: "From Abu Hurairah, the Messenger of Allah said: 'Allah Swt. will help a servant as long as he helps his brother'" (Narrated by Muslim). Based on this hadith, it can be said that a human being must adorn himself with an attitude of helpfulness; if every human being possesses this, then Allah Swt. will help, protect, and be with him.

Background and Principles of Asuransi Shariah Keluarga Indonesia, Inc.

As described above, the company's background and principles rest on the obligation of mutual help when disaster strikes and on the ta'awun concept of the Qur'an. Asuransi Shariah Keluarga Indonesia, Inc. has several Islamic life insurance products, namely: a. The Mu'awanah Sakinah program, which is intended to provide a means for families to help one another (ta'awun) and protect one another (takafuli) among family members through the formation of a fund pool (tabarru' fund) managed according to sharia principles to face risk, in the form of compensation. This sharia micro life insurance for all family members has the following terms and conditions.
First, the Mu'awanah Sakinah program agreement uses the tabarru' aqad and the wakalah bi al-ujrah aqad. The tabarru' contract is a grant agreement in which participants give funds to the tabarru' fund for mutual help. The wakalah bi al-ujrah aqad is a contract between the participants, collectively or individually, and the manager (the insurance company), with a commercial purpose, which grants the manager power according to the authority given, in return for a fee in the form of ujrah. Second, the contribution at Asuransi Shariah Keluarga Indonesia, Inc. is IDR 100,000 per year. Third, the death benefit for each family member is IDR 2,500,000 in case of death due to illness, subject to a 30-day waiting period (a period during which there is no right to apply for Mu'awanah compensation). Fourth, the insurance is valid for one year from the date of successful activation of the Mu'awanah Sakinah card. Fifth, membership requirements: a. physically and mentally healthy; b. minimum age one year and maximum 69 years; c. family members are the spouse and children registered on the family card; d. maximum child age is 25 years and not married; e. Mu'awanah Sakinah is valid for a minimum of 90 days from the issuance of the family card; f. the registration card number is the participant number; g. the Mu'awanah Sakinah card is proof of participation; h. one card is valid for one family. Sixth, participation data: a. name of the head of the family; b. family card number; c. number of family members. Seventh, how to apply for compensation: 1. participants or compensation recipients must report the incident (death) through: a. service offices, b. the office of PT Asuransi Shariah Keluarga Indonesia, or c. SMS; 2. send the complete compensation documents. Eighth, compensation claim documents: a. copy of the Mu'awanah Sakinah card; b. copy of the participant's Indonesian identity card and family card; c. compensation application from the family members; d. death certificate from the hospital, if the death occurred in hospital; e. death certificate from the village office; f. a letter from the police, if the death occurred in an accident. Ninth, compensation is not given if a family member dies from: a. suicide or a death sentence imposed by a court; b. actions against the law or involvement in fights, brawls, or mass riots; c. an epidemic or natural disaster; d. misuse of alcohol, illegal drugs, or other addictive substances; e. sexually transmitted diseases, AIDS, HIV, ACR, and all their consequences. b. Mu'awanah Virtue for Student Groups. The general terms and conditions of the Mu'awanah virtue program for student groups are the same as for Mu'awanah Sakinah; the differences lie in the special provisions, including: 1. Insurance participants: a. students/santri aged three to 25 years; b. teachers/staff aged 18 to 65 years. 2. Special provisions: a. the insurance period is a minimum of 1 year according to the date stated on the participant card; b. for risks caused by disease (not accident), a waiting period of 7 days applies from the insurance start date stated on the participant's card.

Vision and Mission

The procedure for ongoing registration at Asuransi Shariah Keluarga Indonesia, Inc., according to Ahmad Durri, staff of the Pasuruan Unit Branch Office, is: "The contract that we use is wakalah bi al-ujrah; we use mu'awanah. The customer is represented: if the customer suffers a disaster, we will immediately help him; the risk attached to one of the customers' financing can be directly transferred, along with the source of the money."
Members hand over money to ASYKI using a tabarru' contract. "The money is divided in two: the first part is tabarru'; the second is given to the company under the ujrah contract, and the distribution is 50% ujrah, 50% tabarru'." 24 The interview results show that the company only accepts insurance policy registration through the company. The risk management model in sharia life insurance shares financial risk among the participants, while the insurance company acts only as the manager of the pooled funds under a mandate. The transactions used are based on the tabarru' contract and the tijari contract; the tijari contract itself includes mudharabah, musyarakah, and wakalah bi al-ujrah. All of these contracts are free from the elements of riba (usury/interest), maisir (gambling), gharar (fraud), and zhulman (persecution), which are expressly prohibited in Islamic law. Ahmad Durri from Sidogiri also said: "Policy registration only occurs through stock mapping and customer financing, which we usually call insurance brokers, so in 2016 the insurance broker established the ASYKI Pasuruan Unit and partnered with BMT Sidogiri." Thus, the ASYKI, Inc. Pasuruan Unit implements a management system through adjustments to the system at the Bogor head office. Viewed from the perspective of the contracts used under Law No. 40 of 2014 on the sharia life insurance business, the practice of ASYKI, Inc. in the field is in accordance with sharia principles, and the company has a legal identity and legality under state law. Through the management system the corporation has implemented, the ASYKI Pasuruan Unit can be seen to handle sharia life insurance based on sharia principles, with monitoring from the initial identification process to the final risk control stage.

CONCLUSION

Based on the results of this study, it can be concluded that the Asuransi Shariah Keluarga Indonesia (ASYKI), Inc. Pasuruan Unit generally performs well. In line with Law No. 40 of 2014, its risk management is based on sharia principles. However, this does not rule out individuals registering on their own; the solution offered by the ASYKI, Inc. Pasuruan Unit is that individual policy applicants be directed to one of the partners of ASYKI, Inc. This will open up opportunities for individual policy registrants to more freely recognize and apply sharia life insurance, easily and without going through an intermediary business entity or company.
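For illustration only, the fund split described in the interview above can be expressed as a short calculation. The sketch below reflects the reported 50% ujrah / 50% tabarru' distribution under a wakalah bi al-ujrah arrangement; the function name, the fixed split, and the example amount (the Mu'awanah Sakinah contribution of IDR 100,000) are assumptions for illustration, not ASYKI's actual accounting.

# Simplified sketch of the contribution split described in the interview.
# The 50/50 split and the account labels are illustrative assumptions.

def split_contribution(amount_idr, tabarru_share=0.50):
    """Split a member's contribution into the tabarru' (mutual-aid) pool
    and the ujrah (manager's fee) under a wakalah bi al-ujrah contract."""
    tabarru = amount_idr * tabarru_share
    ujrah = amount_idr - tabarru
    return {"tabarru": tabarru, "ujrah": ujrah}

# Example: the Mu'awanah Sakinah contribution of IDR 100,000 per year.
print(split_contribution(100_000))
# {'tabarru': 50000.0, 'ujrah': 50000.0}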
2022-01-05T16:23:10.057Z
2021-11-12T00:00:00.000
{ "year": 2021, "sha1": "3b4f34ec5c4fad1f8a263f6d5f4de8da8b000286", "oa_license": "CCBYNC", "oa_url": "https://jurnal.iainponorogo.ac.id/index.php/tahrir/article/download/3041/1842", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dc97e022f12f732b9d49f5c94f91dcd75ecd913a", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
255218918
pes2o/s2orc
v3-fos-license
Mental Health Risk Factors Related to COVID-19 among Canadian Public Safety Professionals

Public safety personnel (PSP) are known to experience difficult and demanding occupational environments, an environment that has been complicated by the COVID-19 pandemic. Firefighters, paramedics, and public safety communicators were among the front-line workers who continued to serve the public throughout the course of the pandemic. The present study considered the potential impacts of the COVID-19 pandemic on self-reported symptoms of mental health challenges in Canadian firefighters, paramedics, and public safety communicators. Participants were firefighters (n = 123), paramedics (n = 246), and public safety communicators (n = 48), who completed an online survey, including demographics, questions related to COVID-19 exposure and worry, the Patient Health Questionnaire-9, the Generalized Anxiety Disorder-7, the Social Interaction Phobia Scale, and the Posttraumatic Stress Disorder Checklist-5. Results revealed that risk factors for increased mental health symptom reporting were paramedic occupation, self-identified female gender, younger age, COVID-19 personal contact, a requirement to self-isolate, and self-perception of COVID-19 contraction (without confirmation through testing). The COVID-19 pandemic should be considered a risk factor for increased mental health symptom reporting in PSP.

Introduction

As a result of the rapid escalation of the novel coronavirus (Coronavirus Disease 2019; COVID-19) across the globe (caused by the coronavirus SARS-CoV-2), the World Health Organization recognized COVID-19 as a pandemic on 11 March 2020. Governments around the globe were quick to look for ways to reduce the spread of the virus [1], with a large portion of the world's population being impacted by government restrictions [2]. Public Health Orders imposed restrictions that typically included restricted domestic and/or international travel, wearing of masks, physical distancing, limits on the size of social gatherings, closing venues where public gatherings took place, shelter-in-place orders, and complete lockdown measures. In Canada, the quarantine mandates were either "recommended" and voluntary in nature, or "mandatory" and legally enforceable [3], and at times included mandates restricting contact with individuals outside, or even within, the household [4]. The COVID-19 pandemic has had a global impact on coping and mental health [5]. A deterioration in mental health and coping has been evidenced in Canadian populations, particularly for those with health, social, or structural vulnerabilities [6]. The level of uncertainty regarding the virus was especially pronounced during the initial stages of the COVID-19 pandemic, when information about the virus was unclear and evolving, compounding fears of infection and panic [7]. The mental health of frontline healthcare workers (e.g., doctors, nurses, public safety personnel [PSP]) has been a research priority due to their direct involvement with COVID-19 patients and the consequent increased risk of contracting the virus [8,9]; early studies have also indicated that, especially for nonmedical staff less familiar with communicable disease protocols, pandemic training and preparedness were very important. A recent review has suggested that greater psychological stress may be experienced by those in occupations where work occurs outside of a controlled environment, such as is the case with PSP [10].
Among Chinese health workers, psychological concerns such as symptoms of generalized anxiety disorder (GAD), major depressive disorder (MDD), and emotional exhaustion have been exacerbated [11]. Results have evidenced significant increases in anxiety [12], attributed to several factors, including dealing directly with infected patients, insufficient personal protective equipment, poor access to hand sanitizers or liquid soaps, and existing mental health challenges [13]. The prolonged nature of the pandemic has also contributed to increased emotional exhaustion in healthcare workers [7]. The pandemic has increased the risk of mental health symptoms among PSP, who have been required to provide ongoing service throughout the pandemic [8]. Firefighters, paramedics, and public safety communicators (e.g., dispatchers, 911 operators) have experienced increased risks to their mental health related to direct patient contact, insufficient access to necessary protective measures, increased call volume, and intensified public stress, expectation, and demand [14]. Personal responsibility, personal safety risks, emotional activation, and levels of empathy [15] also affected the risk to the mental health of these workers. Research from other countries suggests that the negative impacts of COVID-19 observed in the general population [5] and in healthcare workers [16] would also be reflected in the mental health and well-being of PSP working through the pandemic [14].

Mental Health and Emergency Services

Due to their occupational demands, firefighters (including Emergency Medical Services personnel who respond but do not transport patients), paramedics (ambulance service personnel who respond, treat, and transport patients), and public safety communicators (who have no in-person contact with patients but deal with vicarious trauma) are exposed to potentially psychologically traumatic events (PPTE) at rates much higher than the general population [17]. PSP also report considerable difficulties with other occupational stressors, including organizational (e.g., staff shortages, inconsistent leadership styles) and operational elements (e.g., shift work, public scrutiny) [18]. Prior to the COVID-19 pandemic, there was evidence of associations between complex, specific vocational stressors and an increased risk for symptoms of posttraumatic stress injuries (PTSI; e.g., MDD, GAD, panic disorder [PD], social anxiety disorder [SAD], alcohol use disorder [AUD], posttraumatic stress disorder [PTSD]) [18,19], as well as suicidality (i.e., ideation, planning, attempts) [20,21]. Research examining the mental health impacts of COVID-19 for paramedics, firefighters, and public safety communicators is limited, but growing. PSP in the United Kingdom reported fewer mental health symptoms during the early months of the pandemic than the general population, possibly related to playing a critical role early in a time of societal crisis [22]. Conversely, a sample of United States PSP who had been exposed to COVID-19 reported higher alcohol use severity compared to PSP who had not been exposed [8]. Participants in the United States study who reported increased COVID-19 worry and vulnerability reported more symptoms of GAD and MDD; similarly, those who had both COVID-19 worry and exposure demonstrated increased PTSD symptom severity [8].
A convenience sample of 31 PSP reported feelings of "isolation, lack of support and understanding by family or friends, decreased or forced removal in immediate social interaction (e.g., within family and friend circles), sentiments of being infected or dirty, increased feeling of sadness and anxiety, and reluctance to ask for help or get treatment (e.g., self-approval of being isolated)" in an investigation regarding COVID-19 stigmatization [23] (p. 375). Social distancing required by public health measures and within the workplace was also associated with increased anxiety and stress for PSP and may have contributed to poorer mental health outcomes [24]. A recent scoping review identified key themes for assisting with the mental health of paramedic practitioners during the pandemic: increasing confidence in personal protective equipment, improved understanding of ways to protect self and family, and enhanced managerial communication [25]. During the pandemic, public safety communicators reported an increase in occupational burnout, emotional exhaustion, and loss of professional effectiveness [26]. Additional research considering the impacts of the pandemic on the mental and emotional health of PSP is necessary.

Current Study

The current study was designed to address gaps in the literature with respect to the mental health of PSP over the course of the COVID-19 pandemic; specifically, the relationships between COVID-19 and symptoms related to MDD, GAD, SAD, and PTSD were examined in a sample of Canadian paramedics, firefighters, and public safety communicators. The data come from a cross-sectional sample of PSP who recalled two points in time (the current day, and when COVID-19 restrictions were first presented). Our primary hypotheses were: 1. Given increased patient contact, paramedics were expected to report greater self-reported symptoms of MDD, GAD, SAD, and PTSD compared to firefighters and public safety communicators. 2. Given age as a significant risk factor for COVID-19 [27,28], increasing age was expected to be associated with increased self-reported symptoms of MDD, GAD, SAD, and PTSD. 3. Given that women are more likely to experience posttraumatic stress injury and suffer from it as a chronic condition [29,30], and are reported to be the most common elder care providers [31], female participants, as determined through sex at birth, were expected to report greater self-reported symptoms of MDD, GAD, SAD, and PTSD compared to male participants. These symptoms would be related to increased concerns that workplace exposure may be shared with older family members. 4. Participants with confirmed (or suspected) contacts were expected to report increased self-reported symptoms of MDD, GAD, SAD, and PTSD related to concerns that workplace exposure may be shared with older family members. 5. Participant responses were expected to demonstrate differences across professional groups (i.e., firefighters, paramedics, public safety communicators) based on self-reported need to complete self-isolation. Similarly, we expected a positive relationship between mental health symptoms and the number of days in self-isolation. 6. Participant responses were expected to differ across professional groups (i.e., firefighters, paramedics, public safety communicators) based on self-reported suspicion of contracting COVID-19, even if it was not confirmed through testing.
Methods

Between March 2020 and March 2021, during the first year of the pandemic, the British Columbia Paramedics Association, the British Columbia Professional Fire Fighters' Association (BCPFFA), and Dispatch Centre Managers emailed their members invitations to participate in an online survey hosted on Qualtrics (Qualtrics, Provo, UT); invitations with login information were provided and linked to individual email addresses. The invitation email indicated that participants would receive two reminders to complete the survey. Participation was voluntary, and participants were requested to complete a demographic questionnaire as well as a series of validated measures related to COVID-19 and mental health. Public health restrictions in British Columbia (BC) were changing rapidly during data collection, so participants were requested to report on the public health response in place at the time of questionnaire completion. Specifically, the BC government used a phased approach to Public Health Orders that included varying levels of restrictions, from Phase 1 (declaration of a provincial "public health emergency" on 17 March 2020, including closure of restaurants, bars, and personal care services, travel-related quarantine, reduced in-person school and childcare, and restriction of non-emergency medical procedures and of visitors to long-term care facilities) to Phase 4 (declared 24 June 2020, including reopening of restaurants, bars, personal care services, shopping malls, recreational facilities, parks, places of worship, and small outdoor events with capacity limits). Each participant worked through each of the four phases during the period of this study. Ethical approval for the current study was provided by the Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board (HSREB Certificate #6030697).

Measures

Demographic Information: Participants were asked to provide demographic and occupational information (i.e., sex, age, occupation, role, rank, years of service, full- or part-time status) and to describe the current phase of COVID-19 as defined by their local public health agency (ranging from 1 to 4). Details regarding their confirmed or suspected COVID-19 exposures and associated actions (i.e., self-isolating, missed work, how long), along with two open-ended questions describing the impact of COVID-19 on their life at home and at work, respectively, were also collected (reported elsewhere).

Patient Health Questionnaire: Symptomology for MDD was assessed using the 9-item Patient Health Questionnaire (PHQ-9) [32]. The PHQ-9 asks individuals to consider the past two weeks and to rate nine symptoms of MDD on a scale of 0 (not at all) to 3 (nearly every day).

Generalized Anxiety Disorder: Symptoms of GAD were assessed using the GAD 7-item Scale (GAD-7) [33]. The GAD-7 is a seven-item questionnaire in which individuals are asked to rate how often symptoms of GAD (e.g., "Feeling nervous, anxious, or on edge") have bothered them on a scale of 0 (not at all) to 3 (nearly every day).

Social Interaction Phobia Scale: SAD was assessed using the Social Interaction Phobia Scale (SIPS) [34]. The SIPS is a 14-item measure of SAD symptoms that can be divided into three subscales: Social Interaction Anxiety, Fear of Overt Evaluation, and Fear of Attracting Attention.

PTSD Checklist: Symptoms of PTSD were assessed using the PTSD Checklist 5 (PCL-5) [35].
The PCL-5 is a commonly used 20-item self-report tool that assesses symptoms of PTSD as outlined in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [36]. Participants are asked to rate how much they were bothered by items (e.g., "Repeated, disturbing, and unwanted memories of the stressful experience") on a scale of 0 (not at all) to 4 (extremely).

Statistical Analyses

Hypothesis 1 was examined using a repeated-measures ANOVA with measure (i.e., PHQ-9, GAD-7, SIPS, PCL-5) as the within-participant factor and occupation as the between-participant factor. A partial correlation was completed to examine Hypothesis 2, using age and measure (PHQ-9; GAD-7; SIPS; PCL-5) as factors and years of service as a covariate. Hypotheses 3 through 6 were examined using independent-samples t-tests.
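As a hedged illustration of this analysis plan, the sketch below shows how the mixed (repeated-measures) ANOVA, the partial correlations, and the group t-tests could be run in Python with pingouin and SciPy. The long-format file and all column names are hypothetical, and the paper does not state which software was actually used.

# A minimal sketch of the analysis plan, under assumed column names:
# 'id', 'occupation', 'measure' (PHQ9/GAD7/SIPS/PCL5), 'score',
# 'age', 'years_service', 'isolated' (bool). One row per participant x measure.
import pandas as pd
import pingouin as pg
from scipy import stats

df_long = pd.read_csv("psp_survey_long.csv")

# Hypothesis 1: mixed ANOVA, 'measure' within participants, 'occupation' between.
aov = pg.mixed_anova(data=df_long, dv="score", within="measure",
                     between="occupation", subject="id")
print(aov)  # post hoc pairwise comparisons would follow a significant interaction

# Hypothesis 2: partial correlation of age with each measure,
# controlling for years of service.
for m, sub in df_long.groupby("measure"):
    pc = pg.partial_corr(data=sub, x="age", y="score", covar="years_service")
    print(m, float(pc["r"].iloc[0]), float(pc["p-val"].iloc[0]))

# Hypotheses 3-6: independent-samples t-tests, e.g., isolated vs. not for PHQ-9.
wide = df_long.pivot_table(index=["id", "isolated"], columns="measure",
                           values="score").reset_index()
t, p = stats.ttest_ind(wide.loc[wide.isolated, "PHQ9"],
                       wide.loc[~wide.isolated, "PHQ9"], equal_var=False)
print(f"PHQ-9 isolated vs. not: t={t:.2f}, p={p:.3f}")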
Results

Participants included firefighters (n = 123), paramedics (n = 246), and public safety communicators (n = 48); an additional seven participants reported a different occupation and were therefore not included in the analyses. The firefighter sample predominantly self-identified as male (96.7%), with an average age of 29.80 years (SD = 9.74) and 14.24 years of service (SD = 7.54). One firefighter (0.8%) reported a COVID-19 diagnosis, while 16 (13.0%) others reported suspected cases of COVID-19. Three firefighters (2.4%) reported a COVID-19 diagnosis in their household, 60 (48.8%) reported having been in contact with a COVID-19 case, and 60 reported having to isolate. The public safety communicator sample predominantly self-identified as female (77.1%), with an average age of 25.08 years (SD = 9.39) and 11.69 years of service (SD = 5.80). No public safety communicators reported having been diagnosed with COVID-19; however, 12 (25.0%) reported a suspected case of COVID-19. Two (4.2%) public safety communicators reported a COVID-19 diagnosis in their household, 22 (45.8%) reported having been in contact with a COVID-19 case, and 25 (52.1%) reported a requirement to isolate.

Hypothesis 1

Hypothesis 1 examined differences in self-reported symptoms of MDD, GAD, SAD, and PTSD among firefighters, paramedics, and public safety communicators. A repeated-measures ANOVA was completed with measure (i.e., PHQ-9, GAD-7, SIPS, PCL-5) as the within-participant factor and occupation as the between-participant factor, with both age and gender as covariates. Wilks' Lambda indicated a significant measure by occupation interaction (F(6, 574) = 3.79, p < 0.001). For the PHQ-9, post hoc analyses indicated that paramedics self-reported statistically significantly greater symptoms of MDD than firefighters (mean difference 3.63; p < 0.001), but no statistically significant differences relative to public safety communicators; across all analyses for all hypotheses, conservative alpha criteria were set (p < 0.001) to control for Type 1 error. For the GAD-7, post hoc analyses indicated that paramedics self-reported statistically significantly greater symptoms of GAD than firefighters (mean difference 3.30; p < 0.001), but no statistically significant differences relative to public safety communicators. For the SIPS, post hoc analyses indicated that paramedics self-reported statistically significantly greater symptoms of SAD than firefighters (mean difference 5.70; p < 0.001), but no statistically significant differences relative to public safety communicators. For the PCL-5, post hoc analyses indicated that paramedics self-reported significantly greater symptoms of PTSD than firefighters (mean difference 10.29; p < 0.001), but no statistically significant difference relative to public safety communicators. Across all measures, paramedics self-reported significantly higher symptoms of mental health challenges compared to firefighters but were not significantly different relative to public safety communicators.

Hypothesis 3

Female participants were expected to report greater baseline self-reported symptoms of MDD, GAD, SAD, and PTSD compared to male participants. Independent-samples t-tests indicated that female participants reported greater symptoms than male participants.

Hypothesis 5

Independent-samples t-tests were completed to determine whether, at baseline, there was a statistically significant difference in self-reported symptoms between workers who were required to self-isolate and those who were not. For firefighters, no differences were revealed for symptoms of MDD, GAD, SAD, or PTSD between those who were and were not required to self-isolate. In comparison, public safety communicators who were required to self-isolate showed higher scores on the SIPS [t(1,36)]. Further, bivariate correlations were completed to determine whether length of isolation was positively related to self-reported mental health symptoms. For firefighters and public safety communicators, there were no significant correlations between length of isolation and GAD-7, SIPS, PCL-5, or PHQ-9 scores. For paramedics, a significant correlation was evident between length of self-isolation and SIPS score (r = 0.269; p < 0.001), and patterns of similar relationships were evident for the PCL-5 (r = 0.156; p = 0.071) and PHQ-9 (r = 0.166; p = 0.056).

Hypothesis 6

Independent-samples t-tests were completed on baseline measures to determine whether there was a statistically significant difference between workers who believed that they had contracted the virus (even if not confirmed by testing) and those who did not suspect that they had contracted the virus. For firefighters, no differences were revealed for symptoms of MDD, GAD, SAD, or PTSD between those who did and did not suspect having contracted the virus. In comparison, public safety communicators who suspected they had contracted the virus reported increased GAD symptoms.

Discussion

Firefighters, paramedics, and public safety communicators are required to support individuals in distress, which necessarily meant close physical or psychological contact with members of the public during the pandemic. Usul et al. [37] found that over 83% of paramedics in their sample reported treating patients with COVID-19. Direct contact between workers and patients with COVID-19 has been associated with higher levels of stress and burnout, as well as lower levels of compassion satisfaction [38]. Various factors (e.g., working conditions, personal responsibility, personal safety risks, emotional activation, levels of empathy) may exacerbate the experience of stress for different frontline groups managing COVID-19 [13]. Uncertainty can be inherently stressful [38]. Uncertainties surrounding the COVID-19 virus, increased risk of personal exposure to a highly infectious virus, increased precautions needed to protect against infection, fears of unknowingly infecting families or co-workers, and shifts in work patterns during periods of social distancing and stay-at-home orders have compounded negative experiences for PSP during the pandemic [6,7,37,39].
Further, pandemic-related stressors may have increased stress in personal relationships [40], impairment in physical [41] and mental [42] performance, and risks for mental health challenges among PSP [43,44]. The current study was intended to contribute to the scant literature regarding PSP wellbeing during the COVID-19 pandemic. Specifically, we assessed self-reported mental health symptoms among firefighters, paramedics, and public safety communicators using data collected between March 2020 and March 2021 in British Columbia, Canada. We were particularly interested in considering risk factors for the mental health impacts of COVID-19, including occupation, gender, age, COVID-19 exposure, phase of pandemic, and COVID-related worry, across a group of professions that differed in exposure to COVID-19 at work. Paramedics had high exposures (77% had direct contact with a COVID-19 positive case) as they were required to treat and transport patients, while firefighters responded in person without transporting patients (49% had contact with a COVID-19 positive case), and public safety communicators had no in-person contact with those requiring assistance (although 46% had been in contact with a COVID-19 positive person). In April 2020, BC firefighters were "ordered to stop responding to all but the most dire medical emergency calls during the COVID-19 pandemic", reducing the amount of exposure to COVID-related calls for BC firefighters compared to BC paramedics [45]. Similarly, public safety communicators have a lower risk of occupational exposure to COVID-19 patients compared to paramedics, as public safety communicators do not respond physically to emergency calls. The hypothesis that paramedics would be most at risk for self-reported mental health symptoms was supported by the results; however, while paramedics were more likely to report symptoms of MDD, GAD, SAD, and PTSD than firefighters, neither paramedics nor firefighters differed from public safety communicators on any of the mental health variables.

With respect to self-reported sex at birth, nearly half of the paramedic participants in our data set were male, compared to the predominantly male firefighter participants and predominantly female public safety communicator participants. Previous literature suggests women are more likely to experience posttraumatic stress injury and to suffer from it as a chronic condition, even though men are more likely to live through potentially traumatic events [29,30]. As such, female participants (as identified by sex at birth) in the present study were expected to report more mental health symptoms than male participants, which was supported by the current results. Given that our sex-based analysis indicated that females reported greater mental health symptoms, and paramedic participants appeared to have increased risk related to increased occupationally defined physical contact, our results are consistent with previous literature. Together, the results suggest that risk factors for reporting symptoms of one or more mental health disorders among PSP during the pandemic were likely linked to both occupation type and sex.

Age has been consistently associated with severe health outcomes related to COVID-19 [27,28]. Consequently, it was hypothesized that PSP age would be related to increased self-reported mental health symptoms, as PSP of increased age would be expected to experience more worry regarding the potential for severe outcomes to self and family.
The current results contradicted this hypothesis; age was inversely related to self-reported symptoms of MDD, GAD, SAD, and PTSD (with years of service as a covariate), such that older PSP in this sample were less likely to report mental health symptoms. It is difficult to explain this unexpected finding, but perhaps the relationship between age and psychological symptoms during the COVID-19 pandemic is more complicated than our current analyses suggest. That is, the relationship may be mediated by multiple factors, including pre-pandemic mental health status, years of service, and a shift to managerial and less operational duties, in particular service related to communicable disease; consequently, future research using modelling analyses that can take all of these factors into account simultaneously is suggested.

The current results indicate that PSP who had experienced a personal contact with COVID-19 reported more mental health symptoms. The link between confirmed contact and mental health symptoms may be related to feelings of increased risk, the impact of related isolation, the inability to continue with daily activities, or a variety of other possible contributing factors (as demonstrated by Vujanovic et al. [8]). This finding seemed to be particularly true for firefighters and paramedics [8]. The results further suggested that the requirement for self-isolation and the length of time in isolation were predictive of increased mental health symptoms for public safety communicators and paramedics, with a positive relationship between mental health symptoms and length of self-isolation also evident for paramedics. This is similar to results reported by Wu et al. [46], who found increased mental health problems in persons who were required to quarantine [46,47], potentially compounded by reduced access to mental health services during the pandemic. Finally, paramedics and public safety communicators were also found to be more at risk of self-reporting mental health symptoms if they perceived that they had contracted the COVID-19 virus (even if not confirmed through testing).

The present study contributes to the limited data on Canadian PSP during the COVID-19 pandemic. The limitations of the current study provide important directions for future research. First, the data were collected cross-sectionally; longitudinal data would help to clarify causal relationships and patterns of change over time. Second, the current study used only a single sample for each of only three PSP occupations. Including data from more diverse samples, including family status, may help to identify differences that can inform important opportunities for supporting PSP mental health. Third, Phase 1 self-reported COVID-worry relied on participant memory, which means there is an unknowable amount of recall bias associated with the results; future research should collect data over time to minimize such biases. Fourth, we did not collect information on baseline MDD or GAD symptoms, so we were not able to determine differences between pre-pandemic mental health symptoms and those experienced during the pandemic specifically. Finally, participation in the study was voluntary, raising the possibility that selection bias influenced the responses collected; similarly, because invitations to participate were sent by union executives and management, we were unable to determine a response rate for completion of surveys.
Conclusions

The present study provides a first glance at the impacts of COVID-19 on the mental health profiles of firefighters, paramedics, and public safety communicators and some of the factors associated with self-reported symptoms of one or more mental health disorders. The data demonstrate that both sex and occupation were related to increased symptoms, as were periods of isolation and suspecting having been COVID-19 positive. PSP reported feeling less worried about COVID-19 as the pandemic progressed. Higher COVID-related worry was associated with higher mental health symptoms for all PSP, and self-reported mental health symptoms were also related to personal contact with COVID-19 infection. In conclusion, the results suggest that the COVID-19 pandemic may have had notable impacts on the mental health status of PSP, and tailored supports are prudent to manage COVID-19 pandemic stressors in PSP populations.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Data are available only on request due to privacy/ethical restrictions. The data are not publicly available because they contain information that could compromise the privacy of research participants.
Membraneless ethanol,O2 enzymatic biofuel cell based on laccase and ADH/NAD+ bioelectrodes

This work describes EtOH,O2 membraneless enzymatic biofuel cells (EtOH,O2 MlessEBFCs) that employ laccase-based biocathodes and an ADH/NAD+ bioanode. Laccase biocathodes were prepared by immobilizing a polypyrrole film containing different redox mediators (ruthenium and osmium complexes). The bioanode for EtOH,O2 MlessEBFCs was fabricated by immobilizing multiwalled carbon nanotubes, NAD+-dependent alcohol dehydrogenase enzyme (ADH), poly(methylene green), and poly(amidoamine) (PAMAM) dendrimer onto a carbon cloth platform. Maximum power density and current density were 21.0 ± 0.2 μW cm-2 and 0.15 ± 0.07 mA cm-2, respectively, in PBS (pH 6.5). Lifetime tests conducted for EtOH,O2 MlessEBFCs showed promising perspectives for their future application in miniaturized devices.

Introduction

Biofuel cells (BFCs) employ enzymes or microorganisms as catalysts to convert chemical energy into electric energy. BFCs can operate at milder temperatures (20-40 °C) and physiological pH, so they could be a strategy to replace traditional batteries, which require large amounts of hazardous metallic catalysts, in small devices. Moreover, BFCs could be employed to produce energy from various fuel sources because enzymes can selectively catalyze different fuels 1,2. Enzymes have high specific selectivity, which could dismiss the need for a membrane 3. The first report of a membraneless biofuel cell (MlessBFC) dates from 1997, when a single-compartment cell was used to oxidize organic compounds (sugars or alcohols) while simultaneously reducing molecular oxygen (O2) at the biocathode 4.

To prepare MlessBFCs successfully, enzyme immobilization is a key step to obtain a stable, long-lasting device, improve electron transfer kinetics, and increase power densities (PDs). In this context, researchers have sought to enhance enzymatic system robustness and activity: an enzymatic system must be able to survive pH, temperature, and reaction medium changes. This is not a simple task when one deals with biomolecules, but growing interest in this area has advanced knowledge in the field. A 20-day lifetime has been reported for a membraneless ethanol,oxygen enzymatic biofuel cell (EtOH,O2 MlessEBFC) based on alcohol dehydrogenase (ADH) and bilirubin oxidase (BOD) bioelectrodes 5. Over the years, numerous architecture designs for MlessEBFCs have been developed in order to achieve higher PD values 6-8.
To increase PD, carbon-based materials, such as multiwalled carbon nanotubes (MWCNTs), have been successfully investigated 9. With respect to electrochemical performance, MWCNTs are claimed to be more efficient than single-walled carbon nanotubes (SWCNTs). Indeed, MWCNTs have greater surface area and a wider potential range, provide many active sites for biomolecule immobilization (which promotes faster electron transfer reactions along the tube axis), and display prominent charge transportation features 10,11.

Immobilization aiming at protein microencapsulation has recently gained researchers' attention. In this immobilization mode, intrinsically conductive polymers and dendrimers are employed as imprisonment arrays so that enzymes are physically entrapped in membrane pores or anchored onto the electrode surface. Intrinsically conductive polymers are compounds that can carry electric current without incorporating conductive charges. Also known as conjugated polymers, their electrical, optical, magnetic, and electronic properties resemble those of metals and/or semiconductors. Here, we highlight the use of polypyrrole (polyPYR), which is highly chemically and environmentally stable, biocompatible, and biodegradable. This porous polymer has been widely applied in batteries, sensors, and anti-corrosion protective agents, among others. Several methodologies can be employed to obtain polyPYR layers, and use of this polymer, modified or not, has often been reported 12-14. Our research group has prepared enzymatic biocathodes and bioanodes for biofuel cells 15,16, and an example of MlessEBFC application can be found elsewhere 17. PolyPYR and MWCNT matrixes have been employed to prepare glucose,O2 EBFCs based on glucose oxidase (GOx) and pyrroloquinoline quinone (PQQ) redox mediator absorbed on MWCNTs and polyPYR as a MWCNTs-GOx-PQQ-polyPYR bioanode 17. A PD of 1.1 μW mm-2 was achieved in non-compartmentalized BFCs at a cell voltage of 0.167 V in PBS (pH 7.4) for 10 mM glucose (as fuel), and a PD of 0.69 μW mm-2 was obtained at a cell voltage of 0.151 V in human serum containing 5 mmol L-1 glucose (37 °C) 17.

PAMAM dendrimer is another promising polymer belonging to the class of branched monodisperse polymers 18,19. PAMAM has a widely uniform structure, low molecular weight, highly functionalized surface, and high degree of porosity 19.

We report the construction of a single-chamber EtOH,O2 biofuel cell to harvest energy from ethanol. Strategies to enhance electron transfer (ET) between enzymes and electroactive surfaces include orientation and immobilization of the enzymes and electron mediation. For the laccase-based biocathode, metallic redox complexes (Os and Ru) were entrapped in a polyPYR film as redox mediators, and the ADH/NAD+ bioanode employed a poly(methylene green) layer as mediator. We also investigated the activity of the membraneless biofuel cell over a long period (11 months) in order to demonstrate its stability.

Materials

Multiwalled carbon nanotubes (MWCNTs) were acquired from Cheap Tubes Inc. (diameter of 8.0 nm, length of 10 to 30 μm, and >95% purity). All solutions were prepared with high-purity water from a Millipore Milli-Q system. Solution pH was measured with a pH electrode coupled to a Qualxtron model 8010 pH meter.
The influence of pH on enzymatic kinetics was determined by assaying laccase and ADH activities at pH values ranging between 3.5 and 10. To this end, the following 0.1 mol L-1 buffer solutions were employed: acetate buffer (NaAc/HAc) for pH 3.5-5, phosphate buffer (NaH2PO4/Na2HPO4) for pH 6-7, and tris(hydroxymethyl)aminomethane-HCl (Tris-HCl) buffer for pH 8-9. The reaction was initiated by adding substrate to the immobilized protein, depending on the study being performed.

Biofuel cell tests

Power density measurements were accomplished in an EtOH,O2 MlessEBFC described in the Instrumentation section. First, the EtOH,O2 MlessEBFC open circuit voltage (OCV) was measured at least 1 h before the cell test. After that, polarization curves at a scan rate of 1 mV s-1 were registered in triplicate. PD values for all MlessEBFCs were obtained by multiplying cell voltage (Ecell) by current density (Jcell) (PD = Ecell × Jcell).

pH effect on semi-MlessEBFC

To obtain maximum MlessEBFC performance, it is important to investigate bioelectrode enzymatic behavior as a function of pH because hydrogen ion concentration affects enzymatic activity: enzyme spatial conformation depends on pH and on the presence of protonated/deprotonated groups in the enzyme catalytic site, which can modify the enzyme tertiary structure. pH also strongly influences intrinsic/extrinsic electron transfer reactions. The individual pH behavior of the immobilized enzymes employed here has previously been investigated in detail 15,20. The optimum pH range for the ADH/NAD+ bioanode is between 7.0 and 8.0, achieved by employing PBS as buffer 24. Laccase works best in more acidic medium (pH 4.5), in ABS buffer solution 15. Therefore, besides operating in different pH ranges, the bioanode and biocathode also use distinct buffer solutions.

A direct correlation between enzymatic kinetics and pH is important to obtain maximum bioelectrode performance. Nevertheless, in a MlessEBFC both bioelectrodes seldom operate at their optimum pH. To find the best pH for MlessEBFC operation, the individual pH curves of the two enzymes were plotted together (Fig. 2). The curves intersect at pH 6.5, which was therefore adopted for the MlessEBFCs. Even though this pH value is close to physiological conditions, other complications may arise and diminish EtOH,O2 MlessEBFC PD and OCV values as compared to separated biofuel cells. Other factors may also be associated with this behavior, such as problems with enzyme-mediator-electrode electron transfer, which hinders the redox processes underlying EtOH oxidation by ADH and O2 reduction to H2O by multicopper oxidase enzymes (laccase).
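The sketch below gives a rough illustration of the two calculations just described: the PD curve derived from a polarization sweep (PD = Ecell × Jcell) and the operating pH chosen as the crossing point of the two activity-vs-pH curves. All numbers are invented placeholders; the bell-shaped pH profiles and the linear polarization curve are assumptions, not the measured data.

# A minimal sketch with illustrative numbers only.
import numpy as np

# Polarization curve: cell voltage in V, current density in mA cm^-2.
Ecell = np.linspace(0.45, 0.0, 50)     # voltage swept down during the scan
Jcell = np.linspace(0.0, 0.15, 50)     # corresponding current density
PD = Ecell * Jcell * 1000.0            # V * mA cm^-2 = mW cm^-2; x1000 -> uW cm^-2

i = np.argmax(PD)
print(f"max PD = {PD[i]:.1f} uW cm^-2 at Ecell = {Ecell[i]:.3f} V")

# Operating pH: intersection of normalized ADH and laccase activity curves.
pH = np.linspace(3.5, 10.0, 200)
act_adh = np.exp(-0.5 * ((pH - 7.5) / 1.2) ** 2)   # assumed bell-shaped optimum
act_lac = np.exp(-0.5 * ((pH - 4.5) / 1.2) ** 2)   # assumed bell-shaped optimum
cross = pH[np.argmin(np.abs(act_adh - act_lac))]
print(f"curves cross near pH {cross:.1f}")  # ~6.0 with these toy optima; the
                                            # measured curves crossed at pH 6.5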
EtOH,O2 membraneless biofuel cell: reaction medium influence (ABS or PBS)

Eliminating the proton exchange membrane (PEM) has several advantages. During operation, the PEM is subject to channel obstruction by ions present in the supporting electrolyte, which dries or floods parts of the membrane, and to fuel crossover. To minimize these problems, one strategy is to remove the membrane when biocatalyst specificity can be maintained in the MlessEBFC. However, each bioelectrode must be evaluated for its electron transfer activity and enzymatic selectivity. EtOH,O2 MlessEBFC performance was assessed by analyzing OCV and power density curves obtained from polarization curves (results not shown). To investigate how the PBS and ABS buffers influenced MlessEBFC activity, EtOH,O2 MlessEBFC performance was measured at pH 6.5 in both buffers (buffer concentration = 200 mmol L-1). This "precaution" was necessary because PEM removal resulted in each enzyme facing a medium that differed from its ideal operating condition (ABS (pH 4.5) for laccase and PBS (pH 7.4) for ADH).

ABTS influence on EtOH,O2 MlessEBFCs

ABTS is one of the most common oxygen reduction mediators when laccase is employed in EBFCs. Figure 4 illustrates how ABTS influences EtOH,O2 MlessEBFC performance in PBS medium (pH 6.5) for both biocathode configurations investigated here: polyPYR-Os-laccase and polyPYR-Ru-laccase. With homogeneous ABTS introduction, the PD of the cell with the Ru-mediated cathode was over 80% lower than that of the cell with the Os-mediated cathode. Indeed, PD decreased from 15.3 ± 0.2 μW cm-2 to 2.7 ± 0.3 μW cm-2 just by changing the Os complex to the Ru complex. Also, when the results obtained in the absence (Fig. 3A) and in the presence (Fig. 4) of ABTS are compared for the Os complex in PBS, PD and Jcell(max) decreased by approximately 27.5% and 23.8%, respectively. This decrease could be explained by competition between ABTS and the mediators incorporated into the polyPYR matrix for the enzymatic redox sites. The best results were achieved for EtOH,O2 MlessEBFCs based on the MWCNT-ADH,polyPYR-Os-laccase system. These results agree with literature data 25 claiming that [Os(bpy)2Cl2] can bind to the hydrophobic T1 copper active site of laccase and establish a strong electrostatic interaction, which entraps the Os complex in polyPYR, shifts the polyPYR-Os oxidation potential (Eoxi), and facilitates electron transfer. Our results showed that polyPYR-Ru-laccase did not interact in the same way as the Os mediator.

Table 1 summarizes all experimental parameters obtained for EtOH,O2 MlessEBFCs as a function of the different redox mediators entrapped in polyPYR, in the absence or presence of 1 mmol L-1 ABTS. On the basis of these results (Table 1), the best EtOH,O2 MlessEBFC was MWCNTs-ADH,polyPYR-Os-laccase in 200 mmol L-1 PBS (pH 6.5) in the absence of ABTS.
Figure 5 illustrates the selected operation system. We have previously investigated and reported half-cell data for these electrode configurations 15,21. For the biocathode half-cell 15, a gas diffusion membrane (ELAT) consisting of 40% metal in C (Pt0.66Ru0.34, E-TEK commercial mixture) hot-pressed onto a Nafion NRE-212 membrane was employed as the anode. This configuration furnished power density values at least five times higher than the value reported here for the membraneless fuel cell. For the half-bioanode 21, Pt was used as the cathode, also separated by a Nafion membrane. This configuration furnished power density values as high as 0.25 mW cm-2. Table 1 shows that the results for the EtOH,O2 MlessEBFCs are much lower than those of the separated-compartment cells, indicating that, besides the pH effect, there must be a mutual influence of the fuel and O2 on the performance of the enzymatic systems. Nevertheless, despite differences with respect to the enzyme immobilization method, the values measured herein are of the same order of magnitude (μW cm-2) as some data reported in the literature 5,8,27,28. Considering the high efficiency in ethanol/acetaldehyde conversion and concomitant O2 reduction to H2O demonstrated by these enzymatic systems, future application of biofuel cells in miniaturized systems may be realized by preparing microfluidic devices that operate with streams of liquid electrolytes 29. In this configuration, it is possible to operate with different electrolytes in the anodic and cathodic compartments without any problem. Increasing the power density in this way is our future goal.

Table 2 lists the MWCNTs-ADH,polyPYR-Os-laccase cell storage lifetime under optimum conditions (200 mmol L-1 PBS (pH 6.5), 1.9 mmol L-1 NAD+, and 100 mmol L-1 EtOH). After five months, PD and Jcell(max) had decreased by approximately 38% (to 13 ± 4 μW cm-2 and 0.09 ± 0.02 mA cm-2, respectively) as compared to freshly prepared electrodes. These values continued to drop slowly, eventually decreasing by 62% and 48% (to 8 ± 2 μW cm-2 and 0.09 ± 0.02 mA cm-2, respectively) relative to the initial values. These results attest that the immobilization of the enzymes employed here provided a relatively stable medium for long-term storage. This result may be important for applying these devices in new types of nanofluidic cells to enhance power harvesting from the ethanol molecule 30.

Conclusions

Bioelectrodes containing the enzymes ADH and laccase and different redox mediators (Os or Ru) entrapped in polyPYR films or PAMAM dendrimer were tested. MWCNTs-ADH,polyPYR-Os-laccase employing PBS (pH 6.5) in the absence of ABTS performed best. PD and Jcell(max) were around 21.0 ± 0.2 μW cm-2 and 0.15 ± 0.07 mA cm-2 for freshly prepared electrodes. Electrodes retained approximately 62% of their initial power density after five months of storage in a refrigerator. The prepared EtOH,O2 MlessEBFC generated power density values comparable with literature data, as well as considerable lifetime stability. Therefore, the results presented here for EtOH,O2 MlessEBFCs are promising, and such cells may be employed in microfluidic devices to enhance the activity of the system.

Table 1. Parameters obtained for different EtOH/O2 MlessEBFCs. Average and standard deviation from a combinatorial analysis, in triplicate, for a set of three biocathodes and four bioanodes.
The proteome landscape of the root cap reveals a role for the jacalin-associated lectin JAL10 in the salt-induced endoplasmic reticulum stress pathway Rapid climate change has led to enhanced soil salinity, one of the major determinants of land degradation, resulting in low agricultural productivity. This has a strong negative impact on food security and environmental sustainability. Plants display various physiological, developmental, and cellular responses to deal with salt stress. Recent studies have highlighted the root cap as the primary stress sensor and revealed its crucial role in halotropism. The root cap covers the primary root meristem and is the first cell type to sense and respond to soil salinity, relaying the signal to neighboring cell types. However, it remains unclear how root-cap cells perceive salt stress and contribute to the salt-stress response. Here, we performed a root-cap cell-specific proteomics study to identify changes in the proteome caused by salt stress. The study revealed a very specific salt-stress response pattern in root-cap cells compared with non-root-cap cells and identified several novel proteins unique to the root cap. Root-cap-specific protein–protein interaction (PPI) networks derived by superimposing proteomics data onto known global PPI networks revealed that the endoplasmic reticulum (ER) stress pathway is specifically activated in root-cap cells upon salt stress. Importantly, we identified root-cap-specific jacalin-associated lectins (JALs) expressed in response to salt stress. A JAL10-GFP fusion protein was shown to be localized to the ER. Analysis of jal10 mutants indicated a role for JAL10 in regulating the ER stress pathway in response to salt. Taken together, our findings highlight the participation of specific root-cap proteins in salt-stress response pathways. Furthermore, root-cap-specific JAL proteins and their role in the salt-mediated ER stress pathway open a new avenue for exploring tolerance mechanisms and devising better strategies to increase plant salinity tolerance and enhance agricultural productivity. 
INTRODUCTION

Agricultural production is severely affected by high soil salinity. Nearly 1.5 million ha of farmland is taken out of crop production each year because of soil salinization, causing a decrease in production potential of 46 million ha annually (Food and Agriculture Organization of the United Nations, 2020). Several anthropogenic and natural processes, such as over-irrigation, climate change, rainfall, aeolian deposits, mineral weathering, and stored salts, contribute to soil salinity (Rengasamy, 2006). Soluble salt accumulation in the soil severely affects plant growth by inducing osmotic stress and ion toxicity (Munns and Tester 2008; van Zelm et al., 2020). High salt levels in the soil and the plant affect several physiological processes.

Previous studies have revealed salt-signaling pathways that operate in roots and shoots, from initial salt perception to events that lead to cell death. However, gaps remain to be addressed at every stage of the plant response to salt stress. For example, it is not yet clear how Na+ enters the cell. Although single channels or transmembrane proteins have not yet been identified, entry of Na+ ions into plants may occur through non-selective cation channels and the high-affinity K+ transporter (HKT1) (Essah et al., 2003). The entry of Na+ ions into cells disturbs water potential and affects Na+/K+ ionic homeostasis, thus creating ionic stress. The imbalance in cytosolic ion homeostasis and disturbed water potential lead to a rapid increase in cytosolic Ca2+, primarily in the roots (Knight et al., 1997). This increased Ca2+ level helps to maintain ionic homeostasis and reduce osmotic stress (Tracy et al., 2008). For example, the well-studied SOS (salt overly sensitive) pathway is crucial for decoding salt-induced Ca2+ signals and restoring the ionic balance. The Na+/H+ antiporter SOS1/NHX7 is a crucial player in the evolutionarily conserved SOS pathway. It transports sodium out of the cell with the support of the kinase SOS2 and the Ca2+ sensor SOS3 (Guo et al., 2001; Lin et al., 2009). In addition, salt stress leads to activation of proteins such as AHA2 (H+-ATPase), AVP1 (vacuolar H+-pyrophosphatase), and VAB2 (vacuolar H+-ATPase subunit) for sequestration of Na+ ions from the cytoplasm (Gaxiola et al., 2002; Batelli et al., 2007; Duan et al., 2007; Fuglsang et al., 2007). In addition to maintenance of ionic homeostasis, elevated levels of reactive oxygen species (ROS) are brought back to steady-state levels by activating enzymatic and non-enzymatic ROS scavengers. For example, the enzymatic antioxidant catalase 2 (CAT2) shows increased ROS scavenging during salt stress (Song et al., 2021).

Despite progress in our understanding of the plant salt-stress response, the critical driving factor(s) that determine plant responses to salt stress remain elusive (Ismail et al., 2014). One reason for this paucity of knowledge is the complexity of signaling during salt stress: ionic toxicity and osmotic stress may occur in a temporal manner during salt stress (Munns and Tester, 2008), and salt-stress responses differ among root cell types (Dinneny et al., 2008; Geng et al., 2013). Therefore, understanding the spatiotemporal dynamics of salt responses at the cellular level will unravel the complexity of salt sensing and signaling pathways. To address this topic, we aimed to map the functional players in root-cap cells of Arabidopsis under salt stress in a temporal manner.
The root cap, located at the tip of the primary root (PR) in dicots, is at the forefront of sensing and relaying different environmental stimuli for plant growth and adaptation. The role of the root cap in several tropic responses, including gravitropism, halotropism, hydrotropism, and, recently, nutritropism, has been established and envisaged (Kumpf and Nowack, 2015; Kanno et al., 2016; Ganesh et al., 2022). Furthermore, root-cap-derived auxin and cytokinin contribute to PR meristem size and lateral root (LR) development (Xuan et al., 2016; Di Mambro et al., 2019). Root-cap cells are also essential for penetration into the soil and communication with rhizosphere microbiota (Miyasaka and Hawes, 2001; Massa and Gilroy, 2003; Swarup et al., 2005; Kumpf and Nowack, 2015; Kanno et al., 2016; Ganesh et al., 2022). However, it remains unclear how root-cap cells achieve this multitasking, and the proteins and gene regulatory networks that aid in this process are unknown. Because the root cap is the first point of contact, high salt in the soil has been shown to affect root-cap structure in several crop plants (Qiao 2011; Bogoutdinova et al., 2020; Ninmanont et al., 2021). Thus, investigating the specific functional players in the root cap under salt stress will help us to untangle the complex cellular responses of the root cap.

Here, we characterized the root-cap cell-specific proteome under normal and salt-stress conditions using a promoter:reporter line specific to the columella and LR cap cells. We identified several novel proteins unique to root-cap cells upon salt treatment, and we generated root-cap-specific protein-protein interaction (PPI) networks by superimposing proteomics data onto known PPI networks. These PPI networks revealed that the endoplasmic reticulum (ER) stress pathway is specifically activated in root-cap cells upon salt stress. Furthermore, we identified salt-responsive root-cap-specific jacalin-associated lectins (JALs) in the ER. Functional characterization of one of the JALs, JAL10, revealed that it may alleviate salt stress by regulating the salt-induced ER stress pathway.
Root-cap cell-specific proteomics under salt stress

Previous studies have highlighted the regulatory roles of individual Arabidopsis root cell types in developmental and stress responses, significantly advancing our understanding of root plasticity (Brady et al., 2007; Dinneny et al., 2008; Geng et al., 2013; Rich-Griffin et al., 2020). Among root cell types, the root cap (central columella and LR cap cells) is unique for its multifaceted role in plant adaptation (Ganesh et al., 2022). However, a reporter line whose reporter gene is expressed specifically in the root cap is required to explore the role of the root cap. To identify root-cap-specific genes, we performed an in silico analysis using the condition search tool of the GENEVESTIGATOR database (Hruz et al., 2014). Candidate genes were shortlisted by cross-comparing their expression profiles using the eFP browser (Winter et al., 2007). Our analysis identified At5g54370, which encodes a late embryogenesis abundant protein-like protein, as a potential marker gene for the root cap. Strikingly, the expression of this gene was restricted to the outer central columella and LR cap cells (Kamiya et al., 2016). Nevertheless, the specificity of its expression within root tissues and during different developmental phases remained to be explored. To test its expression specificity, we fused the promoter of At5g54370 to an eGFP-GUS fusion reporter and transformed the resulting construct into Arabidopsis to validate its cell-type-specific expression (Figure 1). GFP expression was restricted to the root-cap cells of homozygous 5-day-old At5g54370 promoter:eGFP-GUS plants (Figure 1). We observed no reporter gene expression in other developmental zones (i.e., the transition zone and differentiated zone) of PRs and LRs. Similarly, expression was not detected in vegetative tissues such as leaves and stems (Figure 1C). However, slight reporter activity was detected in the anthers and stigma of closed flower buds (Figure 1D). We used this At5g54370 promoter:eGFP-GUS line to sort root-cap cells for further experiments.
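A minimal sketch of the kind of in silico marker screen described above, assuming a gene-by-cell-type expression matrix exported from a public atlas; the file name, column names, and thresholds are hypothetical and do not reproduce the actual GENEVESTIGATOR/eFP workflow.

# A minimal sketch: rank genes by root-cap specificity of expression.
import pandas as pd

expr = pd.read_csv("root_celltype_expression.csv", index_col="gene")
target = "root_cap"
others = [c for c in expr.columns if c != target]

# Specificity: expression in the root cap relative to the strongest expression
# in any other cell type; also require reasonably high absolute expression.
spec = expr[target] / (expr[others].max(axis=1) + 1.0)
candidates = expr[(spec > 5) & (expr[target] > 100)].index
print(candidates.tolist())  # such a screen would be expected to recover At5g54370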
To characterize spatiotemporal changes in the root-cap-specific proteome in response to salt treatment, we performed a root-cap-specific proteomic study using the pAt5g54370:eGFP-GUS line (supplemental Figure 1A). We performed liquid chromatography and a hybrid quadrupole orbitrap mass spectrometry run with protein extracts from root-cap and non-root-cap cells. We detected 304 and 440 proteins in the root-cap and non-root-cap cells, respectively, under control and salt-stress treatment together (supplemental Figure 1C and supplemental Table 1). After excluding proteins common to control and salt-stress conditions, root-cap cells alone had 131 proteins, and non-root-cap cells had 217 proteins with unique peptides (supplemental Figure 1B and 1C and supplemental Table 1). We categorized these proteins as differentially translated based on differences in their presence and abundance between salt-stressed and control conditions. A protein with a fold change (FC) ≥ 1.5 in abundance was considered to be upregulated, and a protein with a FC ≤ 0.5 was considered to be downregulated. Some proteins were translated only under salt stress and were categorized as condition-specific proteins. Proteins in these three categories (upregulated, downregulated, and condition-specific) were termed stress-responsive proteins (supplemental Figure 1B and 1C and supplemental Table 1B). More stress-responsive proteins were identified in root-cap cells (109) than in non-root-cap cells (79) (supplemental Figure 1C). This result suggests that root-cap cells may play a key role during the early events of salt perception and relay the signal to neighboring cell types.
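The classification rule above reduces to a few thresholds. A minimal sketch, assuming a simple per-protein table of abundances; the column names and the zero-abundance convention for undetected proteins are assumptions, not the authors' pipeline.

# A minimal sketch of the FC-based classification (>= 1.5 up, <= 0.5 down,
# detected only under salt = condition-specific).
import pandas as pd

df = pd.read_csv("rootcap_abundance.csv")   # columns: protein, control, salt

def classify(row):
    if row.control == 0 and row.salt > 0:
        return "condition-specific"          # translated only under salt
    if row.control == 0:
        return "not detected"
    fc = row.salt / row.control              # salt == 0 gives fc = 0 -> down
    if fc >= 1.5:
        return "upregulated"
    if fc <= 0.5:
        return "downregulated"
    return "unchanged"

df["status"] = df.apply(classify, axis=1)
print(df["status"].value_counts())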
The root cap is an active center during salt-stress signaling

We identified several salt-stress-responsive proteins in root-cap cells compared with non-root-cap cells (supplemental Figure 1C). To better understand the biological processes regulated by these salt-responsive proteins, we constructed a bipartite network consisting of the biological processes associated with each protein and its translation status in root-cap and non-root-cap cells upon salt stress (Figure 2 and supplemental Table 1D). Several salt-responsive proteins were translated or upregulated in root-cap cells, but most responsive proteins in non-root-cap cells were downregulated, indicating that the root cap is an active center during salt stress. Interestingly, within the root-cap cells, most salt-responsive proteins were upregulated or specifically translated at the 12-h time point but downregulated or absent at the 24-h time point (Figure 2). In general, when plants are under stress, they try to balance growth and development with stress defense and adaptation. During the initial stage of stress, plants attempt to briefly curtail growth by limiting energy-consuming processes like protein metabolism (translation, processing, synthesis, etc.) and instead spend the energy on construction of stress-response molecules used in the adaptation process (Ndimba et al., 2005). Here, many of the salt-responsive proteins identified in root-cap cells were associated with biological processes such as transcription, post-transcription, translation, and post-translational regulation. Numerous 40S and 60S ribosomal subunits, such as RPL4A, RPL4D, RPL9D, RPP0B, EMB2171, and At2g44210 of the 60S subunit and RPS4B, RPS9B, and RPS10B of the 40S subunit, were active (either upregulated or specifically present) in root-cap cells under salt stress at the 12-h time point compared with non-root-cap cells (Figure 2). We also observed many candidate proteins responsible for folding of de novo synthesized proteins in root-cap cells but not non-root-cap cells upon salt stress; these included the chaperone proteins HSC70-1, CPN10, and P23-1 (HSP90), along with CRT1 (Calreticulin-1) and At5g07340 (Calreticulin family) (Figure 2). This suggests that upon perception of salt stress by root-cap cells, protein translation and turnover become more active to manage the downstream activities of salt signaling. We next examined whether any known salt-stress response proteins were translated specifically in root-cap cells upon salt stress.

Components of the salt signaling pathway operating in the root cap

Perception of salt stress evokes different signaling cascades, such as osmotic, ROS, ionic, and phytohormone signaling, to adapt or cope with salt stress. When salt stress is perceived at the root tip, Ca2+ waves travel from the perception site to distal shoot tissues via cortical and endodermal cells (Choi et al., 2014). We identified the presence of one such calcium sensor, the Arabidopsis calmodulin 1 (CaM1) protein, in both root-cap and non-root-cap cell types. Although the CaM1 translation level did not change in root-cap cells, it was upregulated in non-root-cap cells after 12 h of salt stress and downregulated at 24 h (Figure 2).

Figure 2. The root cap is an active center under salt stress compared with non-root-cap cell types. (A) and (B) Bipartite networks of the translation status and associated biological processes of salt-stress-responsive proteins in (A) root-cap cells and (B) non-root-cap cell types. The biological processes (center of the figure) associated with a protein are connected via an edge. The number of proteins representing each biological process is given on the associated node. The protein's translation status and regulation at 12 and 24 h of salt treatment is given in the two halves of the node, with different colors based on the regulation, as mentioned at the bottom of the figure.
This spatiotemporal pattern of CaM1 translation in root-cap and non-root-cap cells could indicate spatial passing of the salt-stress signal in the Arabidopsis root. Increased levels of Ca2+ activate NADPH oxidase, resulting in extracellular ROS production. Extracellular ROS lead to AtANNEXIN1 (AtANN1)-mediated Ca2+ influx, which in turn promotes transcription of the Na+/H+ antiporter SOS1 in root epidermal cells (Laohavisit et al., 2013). Here, we identified AtANN2 (ANNEXIN2), a close homolog of AtANN1, in root-cap cells at 12 h and found that it was downregulated after 24 h of salt treatment. We observed the active presence of two peroxisomal proteins, CAT2 and MDAR1, in root-cap cells under salt stress, but these proteins were downregulated or absent in non-root-cap cells (Figure 2). Both CAT2 and MDAR1 are known for their roles in ROS scavenging during salt stress (Eltayeb et al., 2007; Song et al., 2021).

Generation of proton-motive force across membranes by proton pumps such as H+-ATPase and vacuolar H+-pyrophosphatase is necessary to lower cytosolic Na+ concentrations using Na+/H+ antiporters. We identified several proton pumps and voltage-dependent anion channels (VDACs) translated in root-cap cells during salt stress compared with non-root-cap cell types. For example, the plasma membrane (PM) H+-ATPase AHA2, the vacuolar H+-ATPase subunit VAB2, the vacuolar H+-pyrophosphatase AVP1, and voltage-dependent anion channel 2 (VDAC2) were all condition-specifically translated in root-cap cells. By contrast, 12 h of salt treatment did not regulate their protein translation in non-root-cap cells (Figure 2). The upregulation and presence of AVP1 and VAB2 in root-cap cells suggest that root-cap cells maintain ion homeostasis by sequestering Na+ in the vacuolar lumen (Gaxiola et al., 2002; Duan et al., 2007). At the same time, the proton gradient generated by AHA2 across the plasma membrane is required for exclusion of Na+ from the cell by the SOS1 transporter. During this process, interaction with 14-3-3 proteins such as GENERAL REGULATORY FACTOR2 (GRF2) and GRF6 activates AHA2 and enhances the proton gradient across the plasma membrane (Fuglsang et al., 2007; Zhou et al., 2014; Yang et al., 2021). We found two other 14-3-3 proteins, GRF4 and GRF10, in root-cap cells upon salt stress that might participate in regulation of AHA2. Similarly, VDAC2, a mitochondrial outer membrane transport protein, was found specifically in root-cap cells and is known to positively regulate SOS2 and SOS1 during salt stress (Liu et al., 2015) (Figure 2). Together, these results suggest that root-cap cells are actively involved in Na+ exclusion and sequestration during salt stress.
In addition to proteins with known roles in the salt-stress response, we also observed some novel candidate proteins in root-cap cells compared with non-root-cap cells. The content of the metabolite γ-aminobutyric acid (GABA) increases in response to salt stress, and GABA positively regulates salt tolerance by activating H+-ATPase, SOS1, and NHX (Su et al., 2019). The key enzyme in the GABA shunt is glutamate decarboxylase (GAD), which synthesizes GABA from glutamate (Su et al., 2019). Another cellular enzyme, glutamine synthetase (GS or GLN), catalyzes the transformation of glutamate (Glu) into glutamine (Gln) and plays a crucial role in regulating ROS homeostasis under abiotic stresses like salt and cold (Ji et al., 2019). We found that glutamate decarboxylase 4 (GAD4), glutamine synthetase 1;2 (GLN1;2 or GSR2), and GLN1;4 were translated in root-cap cells upon salt stress but not in non-root-cap cells (Figure 2). We also noted the presence of three mannose-binding JALs, JAL10 (At1g52070), JAL20 (At2g25980), and JAL32 (At3g16440), specifically in root-cap cells upon salt stress at the 12-h time point. These JALs were either downregulated or absent in non-root-cap cells upon salt stress. None of these JALs have previously been associated with the salt-stress response in plants. However, a rice mannose-binding jacalin-related lectin (OsJRL) enhanced salinity tolerance in Escherichia coli and transgenic rice plants (He et al., 2017), and a barley jacalin-related lectin conferred salinity tolerance in Saccharomyces cerevisiae and Arabidopsis (Witzel et al., 2021). Thus, our root-cap-specific proteomics analysis revealed known and novel proteins involved in the early salt-response pathway of root-cap cells.

Root-cap protein-protein interactome networks in response to salt stress

PPI networks were generated to visualize the salt-specific interactions among salt-responsive proteins in both root-cap and non-root-cap cell types compared with their respective control conditions. The PPI networks reveal condition-specific interactions in response to salt treatment over space and time and form major clusters (supplemental Figures 2 and 3). Under control conditions, very few interactions were observed in root-cap cells (supplemental Figure 2). However, 12 h after salt treatment, several proteins were condition-specifically translated and differentially accumulated compared with control conditions (supplemental Figures 2 and 3). In the case of non-root-cap cells under control conditions, several interactions were visualized among the identified proteins. However, these interactions were absent at 12 h of salt treatment, and several proteins were downregulated (supplemental Figures 2 and 3).
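A minimal sketch of how such condition-specific networks can be reconstructed by superimposing the detected-protein lists onto a global interaction set (here BioGRID, reduced beforehand to a two-column table of Arabidopsis gene IDs). All file and column names are assumptions, not the authors' pipeline; the status table comes from the classification sketch above.

# A minimal sketch: keep only edges whose endpoints were detected in the
# condition of interest, then label nodes with their translation status.
import pandas as pd
import networkx as nx

ppi = pd.read_csv("biogrid_arabidopsis_pairs.tsv", sep="\t")   # colA, colB
status = pd.read_csv("rootcap_status.csv", index_col="protein")["status"]

detected = set(status[status != "not detected"].index)
G = nx.Graph()
for a, b in zip(ppi["colA"], ppi["colB"]):
    if a in detected and b in detected:
        G.add_edge(a, b)            # interaction possible in this condition

nx.set_node_attributes(G, status.to_dict(), "status")
clusters = sorted(nx.connected_components(G), key=len, reverse=True)
print(f"{G.number_of_nodes()} proteins, {G.number_of_edges()} interactions; "
      f"largest cluster: {len(clusters[0]) if clusters else 0}")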
A clear interaction among proteins involved in protein synthesis and turnover produced a significant cluster. Another cluster comprised proteins mainly involved in ER stress, probably owing to accumulation of misfolded/unfolded proteins (Figure 3). In response to abiotic and biotic stimuli, misfolded/unfolded proteins accumulate in the ER and cause ER stress. This ER stress is alleviated by initiation of the unfolded protein response (UPR) and ER-associated degradation (ERAD) of misfolded proteins. The activated UPR increases the expression of ER chaperones and ERAD components that aid in proper protein folding and degradation of unfolded proteins, respectively (Liu et al., 2011; Reyes-Impellizzeri and Moreno, 2021). Salt treatment induces the expression of ER chaperones such as luminal binding protein (BiP1/2), calreticulin (CRT), calnexin (CNX), and protein disulfide isomerase 5 (PDI5) (Liu et al., 2011; Zhang et al., 2021), which help to mitigate ER stress. In this study, 12 h after salt treatment, CRT1 (upregulated), BiP2, PDI5, and the ER body protein NAI2 were present in root-cap cells. All of these proteins were involved in condition-specific interactions with other molecular chaperones such as heat shock proteins (HSPs) and with GRF10 and GRF4 (Figure 3). This condition-specific PPI was missing in non-root-cap cells (Figure 3). Three HSPs from the HSP90 family (HSP90.2, HSP90.7, and HSP81-2) and two members of the HSP70 family (HSP70-1 and HSP70-15) were also part of this cluster. These PPIs were not observed in non-root-cap cells under salt stress (Figure 3). All of these PPIs in root-cap cells upon salt stress indicate that root-cap cells mitigate the negative effect of salt stress on growth by activating the ERAD and UPR signaling pathways.

We found another small PPI cluster comprising protein components involved in ROS scavenging and primary glycolytic enzymes (Figure 3). Monodehydroascorbate reductase 1 (MDAR1) and MDAR6 were translated in a condition-specific manner in root-cap cells. MDARs are known to participate in the ascorbate-glutathione cycle and in removal of toxic H2O2 (Eltayeb et al., 2007). A recent study of post-translational protein modifications in Arabidopsis roots under persistent osmotic and salt stress revealed the accumulation of lysine acetylation events in primary glycolytic enzymes such as UDP-glucose pyrophosphorylase 1, phosphoglycerate kinase 2, and ENOLASE 2 (Rodriguez et al., 2021). Consistent with this report, we also observed translation of pyrophosphorylase 1, phosphoglycerate kinase 2, and ENOLASE 2 in root-cap cells in response to salt treatment. The PPI network analysis thus revealed salt-responsive protein interactions that occurred specifically in root-cap cells compared with non-root-cap cells. The same PPIs were absent, or the proteins involved in them were downregulated, after 24 h of salt treatment in root-cap cells.

Figure 3. The protein-protein interactome of proteins translated in both root-cap and non-root-cap cells was reconstructed using interactions from the BioGRID 4.4 database. The proteins translated under salt-stress conditions are highlighted in different colors, as mentioned in the figure. The edge is blue if the interaction is possible in that particular condition and cell type as per its translation status from our proteomic study.

JAL10 - a salt-responsive root-cap-specific protein

After identifying ER stress components that formed a major PPI cluster in root-cap cells upon salt stress, we sought to identify the candidate proteins associated with this process. We selected three mannose-binding JALs (JAL10, JAL20, and JAL32) identified in root-cap cells for further study. The motivation for this choice was that one of the two homologous lectins, CRT and CNX, was specifically translated and upregulated in root-cap cells upon salt stress.
CRT and CNX are known to work as molecular chaperones during protein folding and quality control in the ER (Caramelo and Parodi, 2008). The three selected JAL proteins were condition-specifically translated in root-cap cells 12 h after salt stress (Figure 2). However, JAL20 and JAL32 were downregulated in non-root-cap cells in the 12- and 24-h salt treatments, respectively. To date, there is no information on these proteins in the literature. Analysis of their expression profiles revealed that these three JAL proteins were exclusively coexpressed in the root-cap cells to various degrees (supplemental Figure 3). Quantification of their transcript levels revealed that these JALs were differentially expressed in response to salt treatment in a time-dependent manner (Figure 4A). The transcript level of RD29A, a known salt-responsive gene, was upregulated linearly, up to 50-fold, in a time-dependent manner over 24 h of salt treatment compared with control conditions (Figure 4A). The transcript profiles of the three JAL proteins (JAL10, JAL20, and JAL32) identified in the root-cap proteome showed the highest expression at 1 h of salt treatment, and their expression declined with increasing duration of salt exposure (Figure 4A). Following a 1-h exposure to salt treatment, the transcript levels of JAL10, JAL20, and JAL32 increased 23-, 140-, and 4.5-fold compared with control conditions (Figure 4A and supplemental Figure 4). The increase in JAL32 level was not significant at the 1-h time point but was significant (more than three-fold) at the 3-h time point (Figure 4A and supplemental Figure 4). These observations indicate that JAL10, JAL20, and JAL32 are involved in the early response to salt treatment. Their transcript profiles are consistent with our proteomics data, in which translation of the corresponding proteins was observed in root-cap cells at 12 h of salt treatment but was not observed or did not change after 24 h of treatment. Close homologs of JAL10 (JAL8, At1g52050; and JAL9, At1g52060) and JAL32 (JAL31, At3g16430; JAL33, At3g16450; and JAL34, At3g16460) also displayed increased expression in response to salt compared with the control. However, at early time points (1 and 3 h) post salt stress, there were no significant changes in the transcript levels of JAL16 (At1g60095), JAL18 (At1g60130), and JAL41 (At5g35940), a close homolog of JAL20. However, the expression of JAL18 was significantly reduced at later time points (12 and 24 h after salt treatment) in comparison with the control (Figure 4A and supplemental Figure 4). Our transcript analysis revealed the salt-stress specificity of the JALs reported to be coexpressed in the root cap. We next examined whether the salt-responsive JAL proteins JAL10 and JAL20 were specifically expressed in the root cap. Homozygous transgenic plants containing pJAL10:eGFP-GUS and pJAL20:eGFP-GUS were generated to study the tissue-specific expression of the corresponding genes. JAL10 reporter gene expression was confined to the root-cap cells from imbibition to germination and after germination (Figure 4B). By contrast, expression of the JAL20 reporter gene was also apparent from imbibition onward, but its expression was visible throughout the entire root apical meristem, not only the root cap. After germination, expression of the JAL10 and JAL20 reporter genes was confined to the root-cap cells and apical meristem, respectively, and was not observed in other root cell types or leaves of 5-day-old seedlings (Figure 4C and 4D
and supplemental Figure 5). Expression of JAL10 and JAL20 was consistent with the proteomics data, in which JAL10 was specifically translated in the root cap and JAL20 in root-cap and non-root-cap cells. These results show that salt stress leads to increased expression of some of the JALs in the root and that JAL10 is a salt-responsive, root-cap-specific protein.

JAL10 mediates the salt-stress-induced ER stress response
One of the prominent clusters in the PPI network of root-cap cells upon salt stress contained proteins involved in alleviation of ER stress. We therefore investigated whether the identified JAL proteins had a role in salt-mediated ER stress signaling. To participate in ER stress, these proteins must be located in or transported to the ER. To assess their localization, we created gene constructs in which the JAL10 and JAL20 protein-coding sequences were fused to green fluorescent protein (GFP) and transiently expressed them in Nicotiana benthamiana leaves. Subcellular localization analysis revealed that both JAL10 and JAL20 were co-localized in the ER compartment together with the mCherry:HDEL marker (Figure 5A and 5B), but not with a plasma-membrane PIP2A:mCherry marker (supplemental Figure 6). Next, we examined whether JAL proteins had a role in regulating the ER stress pathway. To this end, we characterized the loss-of-function jal10 mutant, as JAL10 expression is specific to the root cap, whereas that of JAL20 is not (Figure 4). Semi-quantitative RT-PCR of the SALK_125442 T-DNA insertion line revealed that jal10 is a null mutant (supplemental Figure 7). We then characterized the transcriptional regulation of ER-stress marker genes identified in the proteomics experiment (Figures 2 and 3) in the jal10 mutant and the wild type under progressive salt-stress treatment (Figure 5C). Our proteomics analyses revealed that salt treatment resulted in translation of BiP2, CNX, PDI5, and GRF10 in root-cap cells. Consistent with the proteomics data, transcript levels of BiP2, CNX, PDI5, and GRF10 were upregulated in a time-dependent manner in the wild type (Figure 5C). Expression of these genes was reduced under control conditions in the jal10 mutant compared with the wild type. Surprisingly, and in contrast to its effects on the wild type, salt treatment did not evoke upregulation of these transcripts in jal10 (Figure 5C). This result suggested that JAL10 function might be necessary for upregulation of these molecular chaperones.
We also examined the transcript levels of genes such as bZIP60, SENSITIVE TO SALT1 (SES1), and Hmg-CoA reductase degradation 3A (HRD3A), which are known to participate in the ER stress pathway. The bZIP60 transcription factor has been shown to be essential for UPR gene activation mediated by the ER stress sensor inositol-requiring enzyme 1 (IRE1) (Deng et al., 2011); SES1 is activated by another ER stress sensor, bZIP17, and acts as a molecular chaperone to mitigate salt-induced ER stress (Guan et al., 2018); and HRD3A is an active player in the ERAD pathway (Su et al., 2011). DNAJ3 encodes a molecular cochaperone from the HSP40 family that is regulated by different abiotic stresses, including salt, and is important for seed development (Salas-Muñoz et al., 2016). Transcript levels of bZIP60, SES1, HRD3A, and DNAJ3 were significantly upregulated several fold in wild-type plants under salt stress compared with control conditions (Figure 5C). By contrast, their transcript levels were lower in the jal10 mutant than in the wild type under control conditions, and salt treatment did not cause a significant increase in their transcript levels in jal10 (Figure 5C). One of the main strategies by which cells alleviate ER stress is the accelerated degradation of misfolded proteins through ERAD. HRD3A is actively involved in regulating misfolded proteins during the ERAD process under salt stress (Liu and Howell, 2010; Liu et al., 2011; Su et al., 2011), and the hrd3a mutant displayed hypersensitivity to salt stress due to an increased misfolded protein response (Liu et al., 2011). Here, expression of HRD3A was attenuated in the jal10 mutant in response to salt stress (Figure 5C). We therefore examined whether this attenuation of HRD3A led to increased aggregation of misfolded proteins in jal10 compared with the wild type (Figure 5D and 5E). Using a commercially available aggresome detection kit (Proteostat aggresome detection kit, ENZO), we visualized the distribution of misfolded protein aggregates under salt stress. Salt stress and a known inducer of ER stress, MG132, caused aggregation of misfolded proteins in both Col-0 and the jal10 mutant (Figure 5D-5F and supplemental Figure 8). When the proteasome inhibitor MG132 was used, levels of misfolded proteins in wild-type and jal10 mutant seedlings increased by 61% and 75%, respectively, compared with those under control conditions (supplemental Figure 8). In response to salt treatment, the wild type exhibited increases of 12% and 48% in accumulation of protein aggresomes at 12 and 24 h, respectively. By contrast, jal10 displayed increases of 59% and 42% at 12 and 24 h. In addition, the jal10 mutant displayed significant increases of 33% and 36% in the aggregation of misfolded proteins in the 12- and 24-h salt-stress treatments compared with the wild type (Figure 5F and 5G). Interestingly, misfolded aggregates were detected more strongly in the root-cap region of the jal10 mutant (Figure 5D and 5E).

Together, these results suggest that JAL10 might be crucial for the salt-stress-mediated ER-stress pathway. In addition, it is plausible that JAL10 may participate in activation of many regulators implicated in salt-stress mitigation, perhaps through a mechanism that remains to be identified.
The jal10 mutant displayed a hypersensitive response to salt stress
To test whether the role of JAL10 was specific to salt-mediated ER stress or a generic response to ER stress, wild-type plants were treated with dithiothreitol (DTT), which rapidly induces ER stress by blocking disulfide-bond formation (Je et al., 2022). As expected, expression of bZIP60, a known regulator of ER stress, was upregulated by ~2.7-fold in DTT-treated Col-0 seedlings compared with untreated controls (Figure 6A). By contrast, transcript levels of JAL10, JAL20, and JAL32 were significantly downregulated, by 75%-90%, compared with control conditions. These results suggest that DTT-mediated ER stress has an inhibitory effect on JAL expression (Figure 6A). Because JAL10 is expressed from seed imbibition through seed germination, we examined whether the seed germination process was affected in the jal10 mutant under control, salt-, and DTT-treated conditions (Figure 6B). Under control conditions, seed germination was significantly reduced in jal10 mutant seeds compared with wild-type seeds. The wild-type seeds showed ~90% germination after 4-5 days, but the jal10 mutant seeds showed approximately 70% germination, even after 8 days. Salt treatment caused a reduction of ~35% in the wild type 5 days after treatment, and germination of the wild type slowly increased with time. In the case of the jal10 mutant, salt caused a similar reduction (32%) in germination at 5 days; however, it was not alleviated further as in the wild type, and jal10 displayed a significant reduction in germination relative to the wild type at later time points under salt stress (Figure 6B). Furthermore, after DTT treatment, a >90% reduction in germination percentage was observed in jal10 mutant seeds compared with Col-0 seeds. These results suggest that JAL10 is a positive regulator of the salt-stress response during seed germination, and loss of JAL10 function causes hypersensitivity of germination to salt stress. In response to DTT treatment, expression of JALs was reduced, and jal10 mutant seeds had severe germination deficits.

We next investigated the effect of salt stress on jal10 mutant growth and development after germination. The jal10 mutant seedlings had significantly shorter PRs and fewer LRs under salt-stress conditions than under control conditions (Figure 6C-6E). Similar percentage reductions were observed in the wild type (Figure 6D and 6E): for example, salt-stressed PR length was reduced to 43% of control PR length in the wild type and to 41% of control PR length in jal10. However, the PR length of jal10 mutant seedlings was significantly reduced, by 12%, compared with that of wild-type seedlings under salt stress. Under control conditions, jal10 mutant seedlings had significantly lower shoot biomass (30% reduction) and root biomass (35% reduction) than wild-type seedlings. Salt stress caused similar reductions in the biomass of wild-type (60% in shoots and 76% in
roots) and jal10 mutant seedlings (64% in shoots and 78% in roots). However, the shoot biomass of jal10 differed significantly from that of the wild type (Figure 6F and 6G). Together, these results suggest that the jal10 mutant has reduced biomass and that salt stress significantly hampers its growth and development.

DISCUSSION
A chain of events occurs in a spatiotemporal manner when a plant cell is under salt stress, beginning with stress sensing and ending in either re-establishment of cellular ionic status or cell death, depending on the severity of the stress (Huh et al., 2002). However, it is not clear how Na+ is sensed by plants, as there is no evidence of a putative channel or protein for the selective entry of Na+ ions into the plant cell. Nevertheless, the downstream signaling cascade of salt sensing has been studied extensively (Knight et al., 1997; Liu et al., 2000; Qiu et al., 2004; Kim et al., 2007; Jiang et al., 2012). Cell-type-specific responses are crucial for fine-tuning responses to environmental cues. Previous studies have highlighted the gene regulatory networks that operate in root cell types (e.g., pericycle, cortex) in response to high salt and iron deprivation (Dinneny et al., 2008; Geng et al., 2013). The initial perception of stress is crucial, as it determines the rest of the events and the fate of the plant under stress. Thus, it is interesting to investigate how the root cap, the first cell type to explore the rhizosphere region, perceives the salt-stress signal from the environment and relays it to the internal regulatory network and other root cell types. However, our understanding of cell-type-specific responses in the root cap is limited. This is because there has been no specific marker for both central columella and LR cap cells. PET111 (an enhancer-trap line), which specifies only the central columella cells (Nawy et al., 2005; Brady et al., 2007; Dinneny et al., 2008; Petricka et al., 2012; Bargmann et al., 2013; Moussaieff et al., 2013), the M0028 GAL4-driver line, which specifies columella, LR cap, and epidermal tissues (Swarup et al., 2005; Petersson et al., 2009), and pGLABRA2 and the enhancer-trap line E4722 (Birnbaum et al., 2003; Gifford et al., 2008), which specify LR cap cells, have been used as marker lines. However, to our knowledge, no reporter line with specific expression in both central columella cells and LR cap cells has been used for root-cap-specific omics studies. The At5g54370 promoter:eGFP-GUS line generated in this study can be used to specifically sort root-cap cells for further downstream experiments (Figure 1). Here, we used this line to characterize the root-cap-specific proteome under control and high-salt conditions. The proteome landscape of the root cap consisted of 131 proteins, 11 of which were exclusively present in root-cap cells (supplemental Table 1C). JAL10 was among these 11 proteins, and its root-cap-specific expression was validated in the present study (supplemental Table 1 and Figures 2 and 4). Most of the remaining ten proteins have not yet been functionally characterized. Together with JAL10, these are important candidates whose roles in root-cap growth and development remain to be studied. A critical requirement for single-cell RNA-sequencing analysis is the availability of established marker genes for specific cell types. Thus, the proteins identified in this study can be used to identify root-cap cell types during single-cell RNA-sequencing analysis.
Our root-cap-specific proteomics study revealed the proteome landscape of root-cap cells under control and salt-stress conditions. Investigating the candidate proteins identified exclusively in root-cap cells will shed light on the growth and development of the root cap and its multifaceted role in the perception of environmental cues. Several proteins were either condition-specifically translated or upregulated in root-cap cells compared with non-root-cap cells, indicating that root-cap cells are an active center under salt stress (Figures 2 and 3). When we examined the protein landscape of the root cap under salt stress, it contained proteins from the canonical salt-signaling pathway that participate in the homeostasis of ions and ROS and the synthesis and turnover of proteins (Figure 7). For example, Na+/H+ antiporters are an integral part of the cell machinery that maintains Na+ homeostasis, and their activities are regulated by the PMF generated by membrane-bound proton pumps. In the root cap, salt stress induced the translation of PM proton pumps such as AHA2 and 14-3-3 proteins such as GRF4 and GRF10. Homologs of the identified GRFs, GRF2 and GRF6, have been shown to inhibit SOS2 activity under normal conditions. They dissociate from SOS2 under salt stress, releasing it from inhibition and activating AHA2 to increase PMF across the PM (Fuglsang et al., 2007; Zhou et al., 2014; Yang et al., 2021). VDAC2, a mitochondrial outer membrane transport protein, is known to positively regulate SOS2 and SOS1 during salt stress (Liu et al., 2015). These proteins might be crucial for activating Na+ exclusion mediated by the SOS pathway. We also observed condition-specific upregulation and presence of AVP1 and VAB2 in the root cap, both of which are known to participate in Na+ sequestration in the vacuole. Overexpression of AVP1 has been shown to accelerate Na+ sequestration in the vacuolar lumen, thereby increasing salt tolerance (Gaxiola et al., 2002; Duan et al., 2007) (Figures 2 and 7). The expression of ROS-scavenging enzymes such as CAT2 and the presence of MDAR1, MDAR6, mMDH2, and primary glycolytic enzymes also contribute to balancing ROS status and sustaining growth in root-cap cells under salt stress (Figures 3 and 7).
The PPI networks of stress-responsive proteins in root-cap and non-root-cap cells revealed that ER stress pathway components were translated and formed a major cluster in root-cap cells, but not in non-root-cap cells, under salt stress (Figure 3). Recent studies have highlighted the role of ER homeostasis during salt stress. In response to salt stress, several ER-resident proteins, including chaperones such as BiP2, CRT, CNX, and PDI5, have been shown to be regulated at the transcriptional level (Liu et al., 2011; Zhang et al., 2021). CRT and CNX are critical for ER protein folding and Ca2+ homeostasis. Overexpression of wheat CRT proteins has been shown to enhance salt tolerance in tobacco (Xiang et al., 2015). In this study, CRT1 (upregulated), BiP2, PDI5, and the ER body protein NAI2 were translated in root-cap cells 12 h after salt treatment. These proteins are involved in condition-specific interactions with five other molecular chaperones, including HSPs. One of the HSPs, HSP90.7, is known to modulate the UPR by interacting with the ER-membrane-localized ribonuclease IRE1, which is a crucial player in UPR signal transduction (Marcu et al., 2002). All these ER-resident chaperones and HSPs might play crucial roles in protein processing in root-cap cells under salt stress (Figures 3 and 5). This study identified three JAL proteins that were translated in the root cap in response to salt. We characterized and validated one of these JALs, JAL10, which is localized to the ER and may be involved in regulation of the UPR response upon salt induction. Characterization of the jal10 mutant revealed that UPR gene activation was attenuated and that increased accumulation of misfolded protein aggregates occurred in response to salt in root-cap cells of jal10 compared with Col-0 (Figure 5D-5F and supplemental Figure 8). The jal10 mutant displayed a hypersensitive response to salt in a seed germination assay, and reduced root growth and biomass were also observed (Figure 6). However, questions pertaining to the regulation of JAL10 and its role in UPR gene activation in response to salt require further investigation (Figure 7). Given that JAL10 is an ER-resident protein that encodes a mannose-binding lectin, it is tempting to speculate that it might also be involved in protein folding, similar to other lectin chaperones such as CNX/CRT. CNX/CRT participate in sequestration of nascent glycoproteins by recognizing N-linked oligosaccharides on glycoproteins (Caramelo and Parodi, 2008).

On the other hand, homologs of the JAL proteins identified in our study, JAL30/PYK10-binding protein, JAL33, JAL34, and JAL35, are known to form a complex with PYK10/BGLU23, a beta-glucosidase. PYK10 and NAI2 are the two major ER body proteins, and JALs regulate ER body size by interacting with PYK10 and NAI2 (Nagano et al., 2008). ER bodies have been shown to play a role in plant responses to wounding, biotic stress, and abiotic stresses such as drought and metal-ion toxicity (Li et al., 2022), and this might be another way in which JAL10 facilitates an effective salt-stress response. The findings presented here pave the way for a better understanding of root-cap cell growth, development, and perception of environmental cues. Furthermore, they can facilitate the design of strategies to improve crop-plant salt tolerance and thus increase crop production under unfavorable conditions.
METHODS

Generation of promoter:reporter lines
The promoter region (942 bp upstream of the protein-coding sequence) of the late embryogenesis abundant protein-like gene (At5g54370) was amplified by PCR using Arabidopsis genomic DNA. The primers are listed in supplemental Table 2. The amplified promoter sequence was cloned into the pDONRP4-P1r plasmid (Invitrogen) using the GATEWAY BP reaction according to the manufacturer's protocol. The resulting pDONRP4-P1r-pAt5g54370 plasmid and an entry clone containing the reporter gene eGFP-GUS were shuttled into the binary vector pK7m24GW7 using a multisite GATEWAY LR reaction. Similarly, the promoter regions of JAL10 (1226 bp upstream of ATG) and JAL20 (938 bp upstream of ATG) were also cloned into the same vector and reporter combination to validate their root-cap-specific expression. All these constructs were transformed into the Agrobacterium strain GV3101 and then into Arabidopsis by the floral-dip method (Clough and Bent, 1998). Homozygous lines were identified in the T2 generation, and seeds of homozygous plants were used for further studies.

Plant material and growth conditions
We used T3 generation homozygous transgenic lines in all our experiments. For the root-cap-specific proteomics study, pAt5g54370:eGFP-GUS seeds were surface sterilized using a 5% sodium hypochlorite solution, then washed three times with double-distilled water. In the final step, 0.1% agarose was added to the seeds, and they were sown in a line on sterilized Nitex 03-250/47 mesh (Sefar America) placed onto 1/2 MS plates (Murashige and Skoog [MS] medium, 1% sucrose, and 1% agar [pH 5.7]) for germination. The seedlings were grown in growth chambers under long-day conditions (16 h light/8 h dark, 150 µmol m⁻² s⁻¹ light intensity, 22°C day/20°C night, 65% relative humidity) for 5 days. For salt-stress experiments, 5-day-old seedlings were transferred to 1/2 MS plates containing 150 mM NaCl and grown for 12 and 24 h. Three replicates were used for each condition (control, and 12 and 24 h of salt). Approximately 24,000 seedlings (8,000 per replicate) of the pAt5g54370:eGFP-GUS line were used per condition to perform root-cap-specific proteomics. For the root growth assay and biomass estimation in response to salt stress, Col-0 and jal10 were grown on 1/2 MS plates for 3 days in a growth chamber. They were then transferred to medium containing 150 mM NaCl and grown for an additional 7 days before responses were observed. At the end of the 7th day, photos were taken, and PR length and LR number were determined using ImageJ software. Shoots and roots were separated and weighed on a fine balance to estimate their fresh weights. For seed germination assays, 50 seeds each of Col-0 and jal10 were placed onto 1/2 MS plates containing 150 mM NaCl or 3 mM DTT. After 2 days of stratification, they were transferred to a growth chamber under long-day conditions, and germination was scored; a simple sketch of this bookkeeping is given below. The experiment was performed in three biological replicates.
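To make the germination read-out concrete, the following minimal Python sketch performs the per-plate bookkeeping, assuming 50 seeds per plate and three replicate plates per genotype and treatment. The counts and helper names are illustrative only and are not the authors' actual analysis script.

```python
# Minimal sketch of the germination-assay bookkeeping described above.
# Assumptions: 50 seeds per plate, three biological replicates; the counts
# below are invented for illustration.
import statistics

SEEDS_PER_PLATE = 50

def germination_percent(germinated_counts):
    """Convert per-plate germinated-seed counts to percentages."""
    return [100.0 * c / SEEDS_PER_PLATE for c in germinated_counts]

def mean_se(values):
    """Mean and standard error across replicate plates (Figure 6B reports mean +/- SE)."""
    se = statistics.stdev(values) / len(values) ** 0.5
    return statistics.mean(values), se

# Example: three hypothetical Col-0 plates scored on day 5 under control conditions.
day5_counts = [44, 46, 45]
print(mean_se(germination_percent(day5_counts)))  # -> (90.0, ~1.2), cf. the ~90% Col-0 value
```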
Protoplast preparation
After the salt-stress treatment, root tips of the pAt5g54370:eGFP-GUS line were excised, and protoplasts were isolated as per Birnbaum et al. (2010). In brief, root tips (approximately 0.5 cm from the tip) were cut and immediately placed into protoplast solution (1.25% …; supplemental Figure 1), and the root tips and 30 ml of protoplast solution were transferred to a 100-ml conical flask and placed in an incubator shaker (26°C) for 1 h and 45 min at 75 rpm. After incubation, the solution was passed through a 40-µm cell strainer to remove debris from the protoplasts. Protoplasts were then pelleted by centrifugation at 500 rpm for 10 min at room temperature. The supernatant was removed without disturbing the pellet. The pellet was resuspended in 500 µl of protoplast buffer, and the quality of the isolated protoplasts was assessed using a binocular fluorescence microscope.

FACS sorting of GFP marker lines
Protoplasts from the control and salt-treated samples were sorted using a BD FACS Aria II instrument (BD Biosciences) at the flow cytometry facility of the Max Planck Institute for Molecular Genetics, Berlin, Germany. We used a 100-µm nozzle size and a sheath pressure of 20 psi for sorting. The voltage settings for measuring the scattering and emission of GFP signals were as described in Bargmann and Birnbaum (2010). The sorted GFP-positive and GFP-negative cells were collected into Eppendorf tubes containing RapiGest SF buffer (Waters) and immediately placed on dry ice. For the 12-h time point, approximately 37,000 GFP-positive cells per replicate were collected from the control samples, and 33,000 GFP-positive cells were collected from the salt-treated samples. The numbers of GFP-negative cells were approximately 100,000 from the controls and 41,000 from the salt-treated samples. The numbers of GFP-positive (control, 76,000 cells; salt, 22,500 cells) and GFP-negative cells (control, 357,000 cells; salt, 193,000 cells) for the 24-h time point differed from those for the 12-h time point.
Protein extraction, digestion, and identification
Proteins were extracted from the sorted GFP-positive and GFP-negative cells using RapiGest SF (Waters, Eschborn, Germany, product code 186001860). The protoplasts were collected directly into 0.1% RapiGest SF (dissolved in 50 mM ammonium bicarbonate). In-solution digestion of whole protoplasts was performed following the manufacturer's instructions with some modifications. In brief, after sonicating the samples, we measured the protein concentrations using the Bradford assay (Kielkopf et al., 2020) and used equal amounts of protein for all samples. The samples were then reduced with 2.5 mM DTT, and alkylation was performed with 7.5 mM 3-indole acetic acid. Trypsin digestion was performed at a 1:50 (weight-to-weight) ratio overnight at 37°C. The pH of the solution was adjusted to an acidic range to inactivate further activity of RapiGest SF, and peptides were concentrated using a vacuum centrifuge to a 10-µl volume. Desalting was performed using ZipTips with 0.2 µl of C18 resin (Merck Millipore, product code ZTC18M008) as per the manufacturer's protocol. The peptides were resuspended to a final concentration of 100 ng/µl using 2% acetonitrile and 0.1% trifluoroacetic acid. We used 6 µl of protein digest for nanoflow liquid chromatography on a Dionex UltiMate 3000 system (Thermo Scientific) coupled to a Q Exactive Plus mass spectrometer (Thermo Scientific), as described by Witzel and Matros (2020). Peptides were loaded onto a C18 trap column (0.3 × 5 mm, PepMap100 C18, 5 µm, Thermo Scientific) and then eluted onto an Acclaim PepMap 100 C18 column (0.075 × 250 mm, 2-µm particle size, 100-Å pore size, Thermo Scientific) at a flow rate of 300 nl min⁻¹. The mobile phases consisted of 0.1% formic acid (solvent A) and 0.1% formic acid in 80% ACN (solvent B). Peptides were separated chromatographically using a 100-min gradient from 2% to 44% solvent B, with the column temperature set to 40°C. A Nanospray Flex ion source was used for electrospray ionization of peptides, with a spray voltage of 1.80 kV, a capillary temperature of 275°C, and an S-lens RF level of 60. Mass spectra were acquired in positive-ion and data-dependent mode. Full-scan spectra (375-1500 m/z) were acquired at 140,000 resolution, and MS/MS scans (200-2000 m/z) were conducted at 17,500 resolution. The maximum ion injection time was 50 ms for both scan types. The 20 most intense MS ions were selected for collision-induced dissociation fragmentation. Singly charged ions and unassigned charge states were rejected, and the dynamic exclusion duration was set to 45 s. All samples were measured in triplicate.
The raw files were processed using Proteome Discoverer v2.4 and the Sequest HT search engine (Thermo Scientific) with the A. thaliana dataset from the SwissProt database (as of January 2021). The false discovery rate was set to 0.01, corrected using the Benjamini-Hochberg method, for highly confident identifications. Further parameters for the database search were: peptide tolerance, 10 ppm; fragment ion tolerance, 0.02 Da; tryptic cleavage with a maximum of two missed cleavages; carbamidomethylation of cysteine as a fixed modification; and oxidation of methionine as a variable modification. The result lists were filtered for high-confidence peptides, and their signals were mapped across all LC-MS experiments and normalized to the total peptide amount in the same LC-MS experiment. The summed abundance method was used to calculate protein abundance. A differential protein expression ratio between the control and salt-treated samples was generated after an ANOVA test. The resulting protein list was further filtered on the basis of the following criteria: we included proteins identified by at least two peptides or by a minimum of one unique peptide representing a protein coverage of more than 8% (this was to include small proteins that give only a small number of tryptic peptides). Furthermore, only those peptides identified in a minimum of two out of three biological replicates were considered. Raw proteome data have been deposited at MassIVE (https://massive.ucsd.edu/ProteoSAFe/static/massive.jsp) under the dataset ID MSV000091171.

Categorizing identified proteins as stress-responsive proteins
After identifying proteins, we performed further categorization based on the presence or absence of proteins in each sample. The presence of a protein was considered only when it was present in two out of three technical replicates and two out of three biological replicates. We generated a Venn diagram (Venny 2.1; Oliveros, 2007) of the proteins identified in control and salt-stress conditions in a cell-type-specific manner (i.e., root-cap cells and non-root-cap cells) (supplemental Figure 1B). The intersection (control ∩ salt stress) of the Venn diagram represents the proteins common to the control and salt-stress conditions. Among these, proteins with an abundance ratio (salt stress/control) of FC ≥ 1.5 were considered to be upregulated, and those with an abundance ratio of FC ≤ 0.5 were considered to be downregulated. From these analyses, we categorized stress-responsive proteins as those (in root-cap or non-root-cap cells) that were detected exclusively in samples from salt treatments (condition-specifically present) or that were significantly differentially accumulated in response to salinity; a minimal sketch of this rule is given below.
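The presence and fold-change rules just stated can be expressed as a short filter. The following Python sketch is illustrative only: the data structures, names, and example call are ours rather than the authors' pipeline, and it encodes just the stated criteria (detection in at least two of three replicates; FC ≥ 1.5 up, FC ≤ 0.5 down).

```python
# Sketch of the stress-responsive categorization rule stated above.
# Replicate hits are 1/0 detection flags; `fc` is the salt/control abundance ratio.

def is_present(replicate_hits, needed=2):
    """A protein counts as 'present' if detected in >= `needed` of the replicates."""
    return sum(replicate_hits) >= needed

def categorize(control_hits, salt_hits, fc=None):
    """Assign one protein to a category for one cell type (root-cap or non-root-cap)."""
    in_control, in_salt = is_present(control_hits), is_present(salt_hits)
    if in_salt and not in_control:
        return "condition-specifically present under salt"
    if in_control and in_salt:                 # the Venn intersection
        if fc is not None and fc >= 1.5:
            return "upregulated"
        if fc is not None and fc <= 0.5:
            return "downregulated"
        return "unchanged"
    return "not salt-responsive"

# Example: detected in 0/3 control but 3/3 salt replicates -> condition-specific.
print(categorize([0, 0, 0], [1, 1, 1]))
```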
A bipartite network of stress-responsive proteins in a cell-type-specific manner
We analyzed the biological processes represented by the stress-responsive proteins in both root-cap cells and non-root-cap cells. We first retrieved biological process annotations for all A. thaliana proteins from the AmiGO database (Gene Ontology as of December 2021) and then assigned GO terms to each stress-responsive protein using this information. We used Cytoscape to create a network based on the GO terms associated with each protein.

Generation of a cell-type-specific interactome network of stress-responsive proteins
To reconstruct interactome networks of the stress-responsive proteins in root-cap cells and non-root-cap cells, we superimposed the root-cap proteomics data onto the global PPIs among A. thaliana proteins cataloged in the BioGRID 4.4 database (as of December 2021). In brief, we retrieved the PPIs among all Arabidopsis proteins from the BioGRID 4.4 database (https://thebiogrid.org). Here, we considered only experimentally proven interactions. A PPI can occur only when the two interacting proteins are present concurrently within a specific cell type under specific conditions. Using this criterion, we checked whether any of these proven interactions were possible among the identified proteins in root-cap and non-root-cap cells under a particular condition; a sketch of this filtering step follows below. A total of four PPI networks were created using Cytoscape v.3.10.0 (http://www.cytoscape.org/) with respect to condition (control and salt stress) and cell type (root-cap and non-root-cap cells).
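The co-presence filter described above reduces to keeping a BioGRID edge only when both partners were detected in the same cell type and condition. The Python sketch below is schematic, with invented toy inputs: the real inputs are the full BioGRID 4.4 edge list and the per-condition protein sets, and network layout was done in Cytoscape rather than in code like this.

```python
# Sketch of the edge filter behind the four condition-specific PPI networks:
# keep an experimentally proven interaction only if BOTH partners are in the
# set of proteins detected for one cell type x condition. Toy data below.

def condition_specific_edges(biogrid_edges, detected_proteins):
    """Edges whose two partners are both present in one cell type and condition."""
    return [(a, b) for (a, b) in biogrid_edges
            if a in detected_proteins and b in detected_proteins]

# Hypothetical edge list and a detected set for root-cap cells at 12 h of salt.
biogrid_edges = [("BIP2", "PDI5"), ("BIP2", "HSP90.7"), ("SOS1", "SOS2")]
root_cap_salt_12h = {"BIP2", "PDI5", "HSP90.7", "CRT1", "NAI2"}
print(condition_specific_edges(biogrid_edges, root_cap_salt_12h))
# -> [('BIP2', 'PDI5'), ('BIP2', 'HSP90.7')]
# Repeating this for 2 conditions x 2 cell types yields the four networks.
```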
Subcellular localization of JAL proteins
The full-length coding sequences (CDSs) of JAL10 (948 bp) and JAL20 (1350 bp), without stop codons, were amplified from cDNA and cloned into the pDONR221 plasmid (Invitrogen). This entry clone (pDONR221_CDS) and the destination vector pK7FWG2-GFP were used in an LR reaction to construct a CDS sequence with a C-terminal GFP tag. The resulting plasmid (pK7FWG2_CDS-GFP) was transformed into Agrobacterium strain GV3101. For subcellular localization studies, these vectors were co-transformed into N. benthamiana leaves together with an mCherry marker targeted to the PM or ER, as described in Xu et al. (2015). Confocal images of infiltrated N. benthamiana leaves were taken after 72 h using a Leica SP8 microscope. GFP was excited at 488 nm, and emission was observed between 500 and 530 nm. mCherry was excited at 561 nm, and emission was observed between 600 and 680 nm.

Detection of aggregated misfolded proteins
Col-0 and jal10 were grown on 1/2 MS plates for 5 days in a growth chamber, then transferred to medium containing 150 mM NaCl and grown for 12 and 24 h. Protein aggresomes were measured in roots using a Proteostat Aggresome Detection Kit (Enzo: ENZ-51035) according to the manufacturer's protocol with slight modifications. In brief, after salt treatment, at least 10 plants each of Col-0 and jal10 were immediately fixed in 4% formaldehyde, then washed with 1× PBS. The seedlings were treated with permeabilization solution (0.5% Triton X-100, 3 mM EDTA [pH 8.0], and 1× assay buffer) for 30 min at 4°C and washed three times with 1× PBS. They were then incubated with PROTEOSTAT Aggresome Detection Reagent and Hoechst 33342 nuclear stain (1:5000 dilution) in 1× PBS for 1 h at room temperature, washed with 1× PBS, and immediately imaged using a Leica SP8 confocal microscope. The Proteostat Aggresome detection red dye was excited with a 488-nm laser, and its emission was recorded between 500 and 620 nm. Hoechst 33342 nuclear stain was used to visualize nuclei; a 405-nm laser was used for excitation, and emission was recorded between 420 and 480 nm and represented in a grayscale pseudocolor. After imaging, signal intensity was measured by selecting only the root-cap region using Leica LASX software.

RNA isolation and qRT-PCR analysis
To measure transcript levels of candidate genes under salt stress, 5-day-old seedlings of Col-0 and the jal10 mutant were treated with 150 mM NaCl for 1, 3, 12, and 24 h. For ER stress, 5-day-old Col-0 and jal10 seedlings were treated with 10 mM DTT for 6 h. After treatment, total RNA was extracted from whole roots using the TRIzol method. RNA was purified using a NucleoSpin RNA Clean-up kit (MACHEREY-NAGEL, product code 740948.50). Genomic DNA was removed by treatment with DNase I (Thermo Fisher Scientific). cDNA was synthesized from 1 µg of RNA using the iScript Select cDNA synthesis kit (Bio-Rad, product code 1708897) with oligo(dT) primers. The cDNA samples were used to determine transcript levels by qRT-PCR as described in Ramireddy et al. (2018); the fold-change convention used to report these data is sketched below. The experiment was performed in two to three biological replicates (n = 150 seedlings per replicate). The primers used are listed in supplemental Table 2.

Statistical analysis
All qRT-PCR data in this publication were statistically analyzed using Student's t-test. The statistical significance of the Proteostat aggresome, seed germination, and root physiology assays was computed by two-way ANOVA followed by Bonferroni's post-hoc test with non-normalized values.
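The reporting convention above (UBQ4 as internal reference, mock-treated Col-0 set to 1) matches the usual 2^-ΔΔCt calculation; that the authors used exactly this formula is our assumption, and the Ct values in the sketch are invented.

```python
# Hedged sketch of relative expression by the standard 2^-ddCt method,
# consistent with "UBQ4 reference, mock-treated Col-0 set to 1" above.
# Whether this exact formula was used is an assumption; Ct values are invented.

def relative_expression(ct_gene, ct_ubq4, ct_gene_mock, ct_ubq4_mock):
    """Fold change of a gene vs. the mock-treated control, normalized to UBQ4."""
    d_ct_treated = ct_gene - ct_ubq4           # normalize treated sample to UBQ4
    d_ct_mock = ct_gene_mock - ct_ubq4_mock    # normalize mock control to UBQ4
    return 2.0 ** -(d_ct_treated - d_ct_mock)

# Example: a ~4.5-cycle drop in normalized Ct gives ~23-fold induction,
# on the order of the JAL10 response at 1 h of salt.
print(relative_expression(22.0, 18.0, 26.5, 18.0))  # ~22.6
```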
Figure 1. Identification and validation of an Arabidopsis root-cap cell-specific promoter
(A and B) Confocal microscopy of 5-day-old homozygous pAt5g54370:eGFP-GUS seedlings revealed a GFP signal exclusively in root-cap cells of the primary root. (C-E) GUS histochemical analysis of leaves, flowers, and siliques revealed that, with the exception of anthers and stigmas, reporter gene expression was not visible at any other growth stage. Scale bars correspond to 100 µm (A and B) and 1 mm (C-E).

Figure 3. Interactome map of root-cap and non-root-cap cells under salt stress
The protein-protein interactome of proteins translated in both root-cap and non-root-cap cells was reconstructed using interactions from the BioGRID 4.4 database. The proteins translated under salt-stress conditions are highlighted in different colors, as mentioned in the figure. An edge is blue if the interaction is possible in that particular condition and cell type as per its translation status from our proteomic study.

Figure 4. Jacalin-associated lectins as novel root-cap-specific salt-stress-responsive proteins
(A) Heatmap showing the relative transcript levels of novel salt-stress-responsive proteins identified from our root-cap proteomics analysis (JAL10, JAL20, and JAL32), along with their close root-specific homologous JALs, under 1, 3, 12, and 24 h of salt stress. The transcript level of RD29A, a well-known marker gene for abiotic stress response, was used as a positive control for salt stress. UBQ4 was used as the internal reference gene for qRT-PCR analysis. Mock-treated Col-0 was used as the control, and its expression was set to 1. Two to three biological replicates were used in the experiment, each consisting of at least 150 seedlings. S denotes the statistical significance of the transcript level compared with the control using Student's t-test. Refer to the bar graph in supplemental Figure 4 for details.
(B-D) Root-cap-specific expression of JAL10 and JAL20. GUS histochemical staining of pJAL10:eGFP-GUS and pJAL20:eGFP-GUS shows the radicle cell-specific expression of the reporter gene during (B) seed germination from imbibition onward. After germination (C and D), reporter gene expression is confined to root-cap cells in pJAL10, whereas in pJAL20, expression is also seen in some non-root-cap cell types. This pattern of GUS staining is also consistent with GFP reporter gene expression, as shown in supplemental Figure 5. Reporter gene expression was not observed in leaves or other root parts in either promoter:reporter line. The scale bars represent 25 µm (A) and 100 µm (C and D).

Figure 5. The ER-localized JAL10 protein regulates ER stress-associated UPR gene expression and accumulation of misfolded proteins in response to salt stress
(A and B) Transient protein expression in Nicotiana benthamiana revealed that JAL10 and JAL20 were localized to the ER. Tobacco leaves were infiltrated with C-terminal GFP-fusion proteins of (A) JAL10 and (B) JAL20 together with the HDEL:mCherry marker. Scale bars correspond to 10 µm (A and B).
(C) The relative transcript levels of ER-associated UPR genes were attenuated in the jal10 mutant background. qRT-PCR analysis was carried out in the Col-0 and jal10 mutant backgrounds under control and high-salt treatments. Expression of the identified ER-stress genes was quantified under high salt stress compared with the control. UBQ4 was used as the internal reference gene. Mock-treated Col-0 was used as the control, and its expression was set to 1. Two to three biological replicates were used, each consisting of at least 150 seedlings. The statistical significance of the transcript level compared with the control was calculated using Student's t-test. The values shown are mean ± SD. *p < 0.05, **p < 0.01, ***p < 0.001.
(D and E) Confocal microscopy images of Col-0 and jal10 mutant plants under salt stress for (D) 12 h and (E) 24 h show increased accumulation of misfolded protein aggregates in root-cap cells of jal10 mutant plants. The protein aggresomes (red) were stained with Proteostat aggresome detection dye, and nuclei were stained with Hoechst 33342 (pseudocolor gray). Scale bars correspond to 50 µm.
(F and G) The boxplots represent the average fluorescence intensity of Proteostat aggresome red dye in root-cap regions of Col-0 and jal10 mutant plants under salt stress for (F) 12 h and (G) 24 h. The values shown are mean ± SD. The statistical significance of differences in fluorescence level was calculated using a two-way ANOVA (Bonferroni's post-test). "a" denotes significant differences (p < 0.001) between the control and salt treatments within a genotype. "b" denotes a significant difference between Col-0 and jal10 under control conditions. "c" denotes a significant difference between Col-0 and jal10 under salt treatment.

Figure 6. The jal10 mutant is susceptible to salt and ER stress during germination and exhibits reduced growth under salt stress
(A) Relative transcript levels of JAL10, JAL20, and JAL32 were lower under DTT-mediated ER stress. Mock-treated Col-0 was used as the control, and its expression was set to 1. UBQ4 was used as the internal reference gene. Three biological replicates were used, each consisting of at least 150 seedlings. The statistical significance of the transcript level compared with the control was determined using Student's t-test. *p < 0.05, **p < 0.01.
(B) The jal10 mutant plants showed reduced seed germination under control, salt-stress, and DTT-stress treatments. Each plate contained approximately 50 seedlings (n = 3). The values shown are mean ± SE.
(C) Growth of Col-0 and jal10 mutant plants on control medium and salt-containing medium (150 mM NaCl) for 7 days. Scale bars correspond to 1 cm.
(D-G) Comparisons of (D) primary root length, (E) lateral root number, (F) root biomass, and (G) shoot biomass between Col-0 and jal10 mutant plants under salt stress. The values shown are mean ± SD. Statistical significance in (B and D-G) was compared among genotypes and conditions using a two-way ANOVA followed by a Bonferroni's post-test. "a" denotes significant differences (p < 0.05) between the control and treatment within a genotype. "b" denotes a significant difference between Col-0 and jal10 under control conditions. "c" denotes a significant difference between Col-0 and jal10 under salt or DTT treatment. FW, fresh weight.

Figure 7. The root-cap-specific proteome in response to salt stress
The proteome and PPI analyses of root-cap cells under salt stress reveal several known and novel candidates participating in the salt adaptation of root-cap cell types. Several proton pumps associated with salt exclusion or sequestration are specifically translated to restore Na+ homeostasis. Expression of several antioxidant proteins and other glycolytic enzymes deals with ROS production. Salt-stress-mediated ER stress triggers expression of JAL10 specifically in the ER, where it may participate in regulation of UPR gene expression, directly or indirectly, to restore protein homeostasis in the root-cap cell. It will be interesting to investigate how JAL10 regulates UPR gene expression in response to salt stress and how this regulation affects the size of ER bodies and the restoration of protein metabolism.
2023-10-05T06:18:02.004Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "164d5974f97a33b636ee150b0d46ef6dfbcc2009", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.xplc.2023.100726", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c79826e0bbf5505e5c81e0a94996d2656901ca0c", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
53735898
pes2o/s2orc
v3-fos-license
A Case of Multiple Myeloma Presenting as Streptococcus pneumoniae Meningitis with Candida auris Fungemia

Multiple myeloma (MM), a plasma cell neoplasm, has a typical presenting pattern consisting of bone pain, renal failure, anemia, and/or hypercalcemia. Even though MM is a cancer that impairs the immune system, rarely is a systemic infection the first sign of disease. In this case report, our patient presented with altered mental status due to meningitis and was later diagnosed with MM. Furthermore, we describe a case of a rare but emerging and serious fungus, Candida auris, which the patient developed during his inpatient stay. This is the first such record of C. auris in an MM patient.

Introduction
Plasma cells in multiple myeloma (MM) proliferate uncontrollably, producing a monoclonal variant of an immunoglobulin. Among other harmful effects, these plasma cells damage their progenitor environment, the bone marrow, creating dysfunctional immune system cells that predispose the patient to bacterial, viral, and fungal infections. Most infections occur three or more months after diagnosis and after initial chemotherapy, although some occur earlier [1]. Here we present a case of Streptococcus pneumoniae meningitis that prompted the diagnosis of MM. This patient's subsequent infection with Candida auris was most likely a result of a depressed immune system due to underlying MM. C. auris has emerged relatively recently in several locations globally. This fungus generally arises in those with compromised immune systems; it is often acquired in hospital settings, resistant to multiple drugs, and misidentified as other Candida species, making diagnosis and treatment difficult [15].

Case Report
A 72-year-old male with a history of hypertension and alcohol abuse was brought to the Emergency Department (ED) by his coworkers after developing an acute change in mental status and an unsteady gait. He had a history of meningitis twelve years prior, which was treated accordingly, and had been in relatively good health until about a week earlier when, according to his wife, he began acting oddly. On examination in the ED, the patient was febrile (T 38.3°C) and tachycardic (HR 121 bpm). Imaging of the head and chest was unremarkable for acute abnormalities or processes. Laboratory results revealed a left band shift without leukocytosis (WBC 6.2 × 10³/µL). On physical exam, the patient was displaying the classic triad of meningitis (fever, nuchal rigidity, altered mental status), and a lumbar puncture was performed. Cerebrospinal fluid (CSF) analysis revealed increased total protein (>300 mg/dL), WBC (275/mm³), and neutrophils (95%), and decreased glucose (3.93 mg/dL). The patient was admitted to the intensive care unit (ICU) with septic shock secondary to meningitis and immediately started on intravenous ceftriaxone, vancomycin, and dexamethasone. Blood and CSF cultures speciated to Streptococcus pneumoniae, and the patient completed two weeks of intravenous antibiotic therapy with ceftriaxone 2 grams every twelve hours for meningitis. Further investigation ensued to understand why this patient had developed recurrent meningitis. The only immediately identifiable risk factor for meningitis was his alcoholism; there was no evidence of a predisposing infection in the previous months, and both an HIV 4th Generation test and ANA screen were negative.
Early in his admission, the patient was noted to have an elevated total protein (9.9 g/dL), low albumin (1.9 g/dL), mild anemia (Hgb 12.2 g/dL), and elevated serum creatinine (1.3 mg/dL). With an unprovoked pneumococcal infection and lab tests suggestive of an underlying multiple myeloma, further workup was performed. Immunofixation and quantitative immunoglobulin tests revealed a monoclonal gammopathy. The patient's IgG and β2-microglobulin levels were elevated, at 5,154 mg/dL and 3.58 mg/L, respectively. No suspicious osseous lytic lesions were found on skeletal survey, and serum calcium remained within normal limits. A bone marrow biopsy was performed, showing hypercellular bone marrow and monoclonal IgG lambda-restricted plasma cells, confirming the diagnosis of MM. The patient was transferred to acute rehab to recover from neurological deficits secondary to meningitis. About three weeks into his rehab admission, the patient developed fevers and leukocytosis. He was transferred back to the inpatient medical floor and started on broad-spectrum antibiotics for sepsis of unknown source. Blood cultures grew Candida auris (identification confirmed by the New York State Department of Health), and he received a two-week course of antifungal therapy with intravenous micafungin 150 mg daily. Subsequent cultures were negative. The start of chemotherapy treatment for MM was significantly delayed until his active infections resolved. After adequate treatment of the fungal infection, the patient was started on induction chemotherapy for multiple myeloma with the CyBorD regimen (cyclophosphamide, bortezomib, and dexamethasone), which was later changed to lenalidomide, bortezomib, and dexamethasone. He attained complete remission and has been without any recurrent infections since his diagnosis with C. auris candidemia.

Discussion
Multiple myeloma is a plasma cell malignancy characterized by the proliferation of plasma cells producing an abundance of a monoclonal immunoglobulin. This excess of a singular immunoglobulin causes the antibody-dependent humoral arm of the immune system to function poorly. In addition to increasing the number of defective immunoglobulins, MM suppresses functional immunoglobulins as well as various innate and adaptive immune system cells and their subsequent responses. Without functional antibodies, opsonization of pathogens cannot occur; these microorganisms, specifically polysaccharide-encapsulated bacteria, go unrecognized in the body. Patients with MM are more susceptible to bacteremia secondary to Streptococcus pneumoniae, Klebsiella pneumoniae, and other such encapsulated organisms. A study by Twomey et al. affirmed that patients with MM demonstrated a higher incidence of severe infection when compared to healthy patients in the same age group [2]. A study by Chapel and Lee showed a higher incidence of first infection in the first three months after the diagnosis of MM was made. If reinfection was taken into account, 75% of all serious infections occurred after three months, following initial chemotherapy. Most of these infections were bacterial and respiratory or urinary in nature [1]. There are few instances, such as the case presented here, where an infection with a polysaccharide-encapsulated organism was the presenting sign of multiple myeloma. This case report, along with other similar case reports, should serve as a cautionary tale for physicians to be more vigilant of patients developing severe bacterial infection with no known risk factors.
Suspicion should be high for a patient presenting with severe infection and any other symptom such as leukopenia, acute renal disease, bone pain, or a history of several bacterial infections. Not much research concerning C. auris and MM is available. Several virulence factors that C. auris uses to invade and cause blood infections are shared with C. albicans, its distant relative. We will attempt to bridge the normal defense mechanisms against C. albicans to those against C. auris and explore why these mechanisms are defective in MM. TGF-β is produced in excess by myeloma cells and has a myriad of effects on suppressing the immune system as well as ensuring myeloma survival. One of these effects is the inhibition of the T cell's entrance into an IL-2 autocrine proliferation pathway, which hinders proper maturation and further cytokine secretion [3]. Dendritic cells (DCs) allow proper recognition, phagocytosis, and presentation of various fungal species to T cells [4]. It has been shown that in stable and progressive MM disease, TGF-β or IL-10 or both decrease CD80, a costimulatory molecule for T cells expressed by DCs [5]. Muc10 is a glycoprotein on the surface of plasma cells that can diminish the response of dendritic cells in producing proper stimulatory effects on T cells. The DCs in turn produce a high amount of IL-10 and low IL-12, which in turn diminish their ability to trigger protective Th1 cells [6]. Myeloma cells can also produce IL-6, a cytokine that inhibits Th1 differentiation from CD4 cells [7]. Lymphocytes in general are affected in this disease. Regarding T cells, there is an abnormal Th1/Th2 ratio in MM [8]. Signaling molecules such as CD28, CD152, CD3zeta, p56lck, ZAP-70, and PI3-K in both CD4 and CD8 cells of MM patients were demonstrated to be decreased [9]. A study showed that Th17 cells, an important T-cell population responsible for preventing candidal mucosal invasion, are reduced and functionally impaired in the peripheral blood of MM patients [10]. A suppression of CD19 B cells causes a polyclonal hypogammaglobulinemia, especially in the early and late stages of MM [11]. B cells can also be suppressed by the inhibitory effects of TGF-β [12]. The mechanism by which a well-functioning immune system protects itself against C. albicans is through recognition and binding of pathogen-associated molecular patterns (PAMPs) by the innate arm of the immune system. Innate immune system cells have pattern recognition receptors that can bind PAMPs. After recognizing PAMPs, they elicit an effective inflammatory response by recruiting anti-fungal effector cells, neutrophils, and monocytes to the site of infection. Binding of the beta-glucan of C. albicans by the dectin-1 receptor on DCs induces Th17 lymphocytes that secrete IL-17 [13]. Depending on the type of DC, subsets of Th1 cells, cytotoxic lymphocytes, or Th17 cells are generated as a response [14]. Phagocytes prevent Candida species from causing bloodstream infections, and impairment of their function causes systemic candidiasis [11]. Collectively, quantitative and qualitative shortcomings of Th1 cells, Th17 cells, DCs, neutrophils, and, to a lesser extent, B cells all come together to provide an environment for an array of Candida infections. C. auris is an emerging fungal threat to hospitalized patients, the highest concentrations of which have been in the regions of New York and New Jersey, according to the CDC [15]. Comparing annotated sequencing of the C.
auris genome to other Candida species, 1,988 orthologous proteins with functional annotations were found. This demonstrates that C. auris has many of the same virulence factors as other Candida species. It shares with C. albicans a group of virulence factors such as oligopeptide transporters, mannosyl transferases, secreted proteases, and genes involved in biofilm formation. Mannosyl transferases coordinate the synthesis of glycans, important cell-wall units in Candida species, and play a role in immune recognition and host-cell adherence. These enzymes were conserved in an isolate of C. auris, with many orthologs among Candida species. It also contains ABC (ATP-binding cassette) transporters, which are drug efflux pumps orthologous to those of C. albicans. However, the genetic annotation showed that C. auris employs slightly different proteins for other types of host-cell adhesion [16]. Since C. auris and C. albicans share similar cell-wall components and enzymes, it is probable that they evoke similar immune responses mediated by similar immune cells. Taking this information one step further, it is likely that the same defects seen in immune cells in MM that allow systemic candidiasis from C. albicans also allow invasion by C. auris in a patient with MM. This case report, along with other similar case reports, suggests that patients who present with sepsis caused by encapsulated bacteria may require further evaluation for underlying immunodeficiency. Meningitis in particular has been the presenting symptom of MM in a handful of cases. A summary table of bacterial syndromes as the first feature of MM can be found in the paper published by Naderi et al., which reported only five instances of meningitis as the manifestation of MM [17]. Myeloma patients may also be prone to infection by fungi,
Gene therapy for hemoglobin disorders - a mini-review.

Gene therapy by either gene insertion or editing is an exciting curative therapeutic option for monogenic hemoglobin disorders like sickle cell disease and β-thalassemia. The safety and efficacy of gene transfer techniques have markedly improved with the use of lentivirus vectors. The clinical translation of this technology has met with good success, although key limitations include the number of engraftable transduced hematopoietic stem cells and adequate transgene expression that results in complete correction of β0 thalassemia major. This highlights the need to identify and address factors that might be contributing to the in-vivo survival of the transduced hematopoietic stem cells, or to find means to improve expression from current vectors. In this review, we briefly discuss the gene therapy strategies specific to hemoglobinopathies, the success of the preclinical models and the current status of gene therapy clinical trials.

Introduction

Sickle cell disease (SCD) and β-thalassemia are autosomal recessive disorders that result in qualitative and quantitative defects in β-globin protein production; they are highly prevalent worldwide, with approximately 7% of the global population estimated to be carriers of hemoglobin gene variants, and over 330,000 affected infants born annually with SCD alone 1. Despite improved medical supportive therapies, significant long-term mortality and morbidity associated with hemoglobinopathies remain 2,3. Hematopoietic stem cell transplant (HSCT) provides the only definitive cure, with disease-free survival exceeding 80% with HLA-matched sibling donor transplants 4,5. Improvements in the management of graft versus host disease (GVHD) and better means of inducing graft tolerance have encouraged the use of an extended donor pool comprising unrelated donors and umbilical cord blood as the hematopoietic stem cell (HSC) source for patients lacking a matched sibling donor 6. However, matched-unrelated HSCT for thalassemia has an overall survival of 65% in high-risk patients 7. In addition, a 5-10% mortality from transplant conditioning, GVHD and graft failure continues to limit the acceptability of this treatment modality 8.

Gene therapy, using genetically modified autologous HSCs, is an attractive alternative to allogeneic HSCT, and specifically unrelated-donor HSCT, since it eliminates the need for a matched donor and the risk of GVHD/graft rejection. Successful gene therapy for monogenic immune disorders like chronic granulomatous disease and severe combined immunodeficiency 9-13 has encouraged development of this technology for hemoglobinopathies. Whereas in immunodeficiency disorders the genetically modified hematopoietic progenitors and T cells have a selective survival advantage and a tremendous expansion potential, respectively, requiring a minimal (0.1-1%) gene-corrected HSC engraftment for sustained correction of lymphoid dysfunction 14, in hemoglobinopathies no such survival advantage of genetically modified HSCs/progenitors is present, and selective advantage is limited to the terminal erythroid cells 15. Post-transplant follow-up studies 16 and murine models 17,18 show that a 20% donor chimerism is essential for improving the clinical manifestations in SCD and thalassemia, a level requiring substantial pre-transplant chemotherapy conditioning. Furthermore, in order for globin gene transfer to effect a cure, high-level erythroid lineage-specific expression is necessary.
Despite these hurdles, improved vector potency and safety have significantly advanced the field, resulting in cures in patients with hemoglobin E-β-thalassemia and considerable disease amelioration in some patients with β0 thalassemia and SCD. The lessons learnt from these early gene therapy trials suggest that engraftment of sufficient transduced HSCs, or their in-vivo selection, could play a crucial role in extending the curative capacity of gene therapy.

Vector development for hemoglobin disorders

Correction of hemoglobin disorders by vector-mediated gene transfer requires a safe delivery vehicle/vector to efficiently transfer the complex β-transgene cassette to HSCs and to result in sustained high expression of the transferred globin gene. The vectors commonly used have been bioengineered from different retroviruses, mainly Moloney murine leukemia virus (retrovirus vectors; RV), HIV-1 (lentivirus vectors; LV) and foamy virus, after removing the genetic elements responsible for their pathogenicity and virulence, and adding the β-globin gene and its locus control region (LCR) elements. Of these, LVs have been most successful at correcting hemoglobinopathy animal models, which has resulted in their clinical translation.

The β-globin LCR is a cis-regulatory element composed of five DNase I hypersensitivity sites, four of which are formed in erythroid cells 19. When linked to the globin genes, the LCR leads to position-independent, erythroid lineage-specific enhancement of globin gene expression. The enhancer activity of the LCR resides in three of its hypersensitivity sites, HS2, HS3 and HS4, which contain an array of binding sites for ubiquitous and erythroid-specific transcription factors 20. An intact LCR (5'HS 1-5) is involved in maintaining the open chromatin conformation that is needed for position-independent expression of the globin genes. The LCR also results in developmental regulation of globin expression and interacts with the ε, γ and β globin gene promoters in the embryonic, fetal and adult stage, respectively 21.

Gamma-retrovirus (RV) vectors

Initial studies of RV-mediated human β-globin gene transfer without inclusion of the LCR elements showed variable and low levels of gene expression (<1% of endogenous β-globin expression) 22. Following this study, nearly one decade of efforts to develop RVs expressing sufficient globin were futile 23,24. RVs utilizing the enhancer/promoter sequences of the LTR (long terminal repeat) to drive transgene expression of genes other than globin genes were the first to be used in clinical trials. Despite their initial clinical success in gene therapy of immunodeficiencies, concerns about their safety emerged following reports of vector-mediated insertional mutagenesis 9-12. Integration site analysis revealed that RVs have a tendency to integrate near cellular promoters, retroviral common integration sites (CIS) and cancer genes, independent of the vector design, and to enhance their expression via the LTR promoter/enhancer. RV vector insertions increase immortalization of primary hematopoietic progenitor cells 25. While the RV LTR is a strong enhancer and upregulates transgene expression to very high levels compared to relatively weaker enhancers from the HIV LTR and cytomegalovirus 26, it also simultaneously activates cellular proto-oncogenes flanking the insertion sites 27.
Additionally, methylation of the LTR can lead to inactivation of the integrated transgene promoter and prevent long-term transgene expression 28. The self-inactivating (SIN) vector design deletes the LTR promoter/enhancer and allows transgene expression to be driven by internal cellular promoters, reducing LTR enhancer-mediated genotoxicity 27 and methylation-induced inactivation 29. Inclusion of the chicken β-globin hypersensitive site-4 (cHS4) insulator element in the SIN vector further improved its safety by reducing position-dependent variability in gene expression 30. However, the inability of RVs to transduce non-dividing cells, along with the vector instability seen upon incorporation of large LCR sequences, greatly limited their use in gene therapy for hemoglobin disorders 31.

Lentivirus vectors

Interest in LVs was generated with increasing knowledge of the basic structure and properties of the HIV-1 virus. HIV-1 can efficiently translocate across the intact nuclear membrane and thus has the ability to transduce non-dividing/quiescent cells, and it can carry larger expression cassettes. These features enabled HIV-1-based LVs to be developed for hemoglobinopathies and to efficiently transfer the β-transgene/LCR to HSCs for sustained correction of the hemoglobin defect. The major safety concerns with the use of LVs initially were the risk of generating a replication-competent lentivirus (RCL) and insertional mutagenesis. The former risk has been eliminated by removal of HIV regulatory and accessory genes from vector plasmids and by constructing the vector with 3-4 separate packaging plasmids with minimal overlapping sequences between and within them 31. The preference of LV vectors for intragenic integration, coupled with the SIN design, considerably reduces their genotoxicity potential; indeed, recent LV clinical trials with a follow-up of nearly 10 years have shown no genotoxicity resulting from LV vectors, even though preclinical studies have reported vector integration in known oncogenes (MLL, NUP214) 32 and suggested that transcriptionally active enhancers like the LCR can lead to gene dysregulation, independent of the vector type (RV or LV) or design (LTR-based or SIN) 26,33. Our group explored the genotoxicity potential of LCR enhancer elements and showed that LCR-containing LVs have approximately 200-fold lower immortalization potential than RVs. Though gene dysregulation was seen in the vicinity of the integrated vector, no proto-oncogene upregulation was noted 34. Use of cHS4 insulators decreased the transforming potential further, along with decreasing position-dependent variable gene expression and methylation-associated silencing 35.

Preclinical studies

Gene Therapy for β-Thalassemia

LV-mediated human β-globin gene transfer was shown to rescue mouse models of β-thalassemia intermedia and β-thalassemia major. May and colleagues demonstrated the use of an LV vector carrying the human β-globin gene fragment and a β-globin LCR spanning the HS2, HS3, and HS4 regions to correct thalassemia intermedia in mice, with an increase in hemoglobin levels of 3-4 g/dl 36. The same group developed an adult β0-thalassemia major mouse model using mice engrafted with β-globin-null Hbb(th3/th3) fetal liver cells and rescued their severe phenotype using the same vector with an average vector copy number (VCN) of 1.0-2.4 18.
Imren and coworkers thereafter showed correction of β-thalassemia in mice using a vector carrying the βT87Q gene, in which a point mutation in the β-globin gene also confers anti-sickling properties. However, multiple copies were required for adequate correction of the mouse thalassemia phenotype 32. Our group showed complete correction of the human β0 thalassemia phenotype in vitro and in a xenograft model with approximately 2 vector copies/cell 37. Miccio and colleagues used an LV vector carrying the β-globin gene linked to a minimized LCR (HS2/HS3). They showed that a frequency of 30-50% of transduced hematopoietic cells harboring an average VCN of 1 was sufficient to fully correct the thalassemia phenotype in th3/+ mice 15. In addition, they demonstrated that the genetically corrected erythroblasts had an in-vivo survival advantage, encouraging exploration of reduced-intensity transplant regimens for clinical gene therapy trials.

Gene Therapy for SCD

The efficacy of LV-mediated transfer of the γ-globin gene or mutated β-globin genes (βT87Q and βAS3) for correcting SCD was explored using transgenic and humanized xenograft sickle cell murine models. Pawliuk and colleagues showed improvement in hematological parameters, splenomegaly and hyposthenuria in BERK and SAD mice using the βT87Q LV 38. Levasseur, on the other hand, used βAS3 (a human β-globin gene with 3 anti-sickling mutations) in an LV to successfully transduce murine HSCs without cytokine stimulation 39. Romero et al. used the same βAS3 LV to successfully transduce bone marrow CD34 progenitor cells from patients with SCD and produce sufficient levels of anti-sickling hemoglobin.

Clinical trials

The success in preclinical models, supported by safety studies on LV vectors, led to the design of clinical gene therapy trials. Cavazzana-Calvo et al. enrolled a hemoglobin E/β (βE/β0)-thalassemia major patient in 2007 who received genetically modified autologous HSCs expressing βT87Q-globin following myeloablative busulfan conditioning. This subject became transfusion independent 1-2 years later 44, with hemoglobin maintained at 9-10 g/dl. This therapeutic benefit was initially due to a clonal expansion observed following vector insertion in the HMGA2 gene; however, this clone eventually subsided. The trial was subsequently extended to include 18 patients with thalassemia (transfusion-dependent βEβ0, n=10; β0β0, n=5; β+ thalassemia, n=3) and 4 patients with SCA 45-47. All patients with βEβ0 and β+ thalassemia became transfusion independent within a year of transplant, with a median increase in hemoglobin of 4.9 g/dl, while patients with β0β0 thalassemia with a similar hemoglobin increase experienced a significant reduction in their transfusion requirement but were not transfusion independent, since baseline hemoglobin levels in β0β0 thalassemia are much lower than in individuals with β+/βE thalassemia. One of four patients with SCD, who received a high dose of transduced CD34 cells, had remarkable improvement in their SCD phenotype. Two other trials are using β-globin LV vectors for β-thalassemia: in the trial led by Boulad et al. (NCT01639690), the preconditioning regimen had to be switched from reduced-intensity busulfan to myeloablative doses following modest engraftment and β-globin expression with the lower dose 48; the trial led by Ferrari et al. (NCT02453477) is using a myeloablative regimen consisting of treosulfan and thiotepa, with initial success 48.
The NCT02247843 trial led by Kohn et al. for SCD is investigating the efficacy of the βAS3 LV, and the trial led by our group (Malik et al.; NCT02186418) is using a γ-globin LV following reduced-intensity conditioning. The results of these studies are eagerly awaited.

Recent Advances in Genetic Manipulation Technology

The emergence of gene editing technology, which enables precise genome manipulation, offers a new approach for treating β-hemoglobinopathies 49. Site-specific double-strand breaks (DSBs) can be induced with zinc finger nucleases, transcription activator-like effector nucleases (TALENs), meganucleases and, more recently, with the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system. CRISPR/Cas9 has revolutionized gene targeting. Unlike other nucleases, which use a protein dimer for target sequence recognition and require a novel protein to be engineered for each new target site, CRISPR/Cas9 technology uses a short guide RNA (gRNA) with a 20 bp sequence complementary to the DNA sequence to be targeted 50. In addition, targeting/knockdown of multiple genes can be achieved by using multiple gRNAs with a common Cas9 protein 51. The DSB is then followed by DNA repair through one of two major pathways: 1) non-homologous end joining (NHEJ), with direct fusion of the nuclease-cleaved ends; this repair mechanism is error-prone, leading to indels, and is cell-cycle stage independent 52; 2) homology-directed repair (HDR), which uses an exogenous donor template 53, delivered via single-stranded oligonucleotides, plasmids, or viral vectors like integrase-deficient lentivirus or adeno-associated virus 54, for gene correction with targeted insertion.

For hemoglobinopathies, gene editing strategies shown to be successful include induction of endogenous fetal hemoglobin 55,56, modification of the causal β-globin gene mutation by targeted nucleases 57, therapeutic transgene integration 58, or a combined approach 59,60. Inactivation of an erythroid-specific enhancer of BCL11A by gene editing leads to suppression of BCL11A and up-regulation of γ-globin in erythroid lineage cells 61,62. These gene editing strategies are being performed in CD34+ stem and progenitor cells 63 or in induced pluripotent stem cells (iPSCs) capable of differentiating into any somatic cell type 64-66. Patient-specific iPSCs are generated by genetic reprogramming of somatic cells and provide an unlimited source of stem cells which can be genetically manipulated, differentiated along a specific tissue type and returned to the patient. Currently, active research into differentiating iPSCs towards definitive hematopoietic stem cells with long-term engraftment potential is underway. In addition to their therapeutic potential, iPSCs can also be used as in-vitro disease models 67. Off-target nuclease binding activity 68, efficient means of delivering the genome editing tools to target stem cell populations without loss of 'stemness', genomic variation occurring with somatic reprogramming, efficient gene targeting by homology-directed repair 69, and developing functional HSCs from these genetically modified iPSCs 70,71 are some of the challenges in the field.
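Since the targeting rule just described (a 20 nt spacer read against the genomic DNA next to a protospacer-adjacent motif, PAM) is the heart of the CRISPR/Cas9 approach, a minimal sketch of how candidate SpCas9 sites can be enumerated may be useful. The function name and example sequence below are illustrative only; the sketch assumes only the canonical SpCas9 requirement of an NGG PAM 3' of the 20 nt protospacer and is not part of any protocol described in this review.

import re

def find_spcas9_sites(seq):
    """List candidate SpCas9 target sites in seq (both strands).

    A candidate site is a 20 nt protospacer immediately followed by
    an NGG PAM; names and the example sequence are illustrative.
    """
    seq = seq.upper()
    revcomp = seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    sites = []
    for strand, s in (("+", seq), ("-", revcomp)):
        # Lookahead keeps overlapping matches: 20 nt spacer + NGG PAM.
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", s):
            sites.append((strand, m.start(), m.group(1), m.group(2)))
    return sites

example = "ATGCGTACGTTAGCCGATCGAGGCTTACGGATCCGTTAACGGTCAGGTACGT"
for strand, pos, spacer, pam in find_spcas9_sites(example):
    print(strand, pos, spacer, pam)

In an actual editing workflow, each candidate spacer would additionally be screened for off-target matches elsewhere in the genome, the challenge noted at the end of this section.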
Conclusions and future directions

Gene therapy for hemoglobinopathies is now a reality, with several patients cured of their β0/βE thalassemia, significant amelioration in β0/β0 thalassemia and in one patient with SCD, while others show modest transgene expression. The current curative capacity of gene transfer technology is limited by the severity of the underlying disease. The increase in hemoglobin to 8-9 g/dl seen in β0β0 thalassemia is still not sufficient to prevent ineffective erythropoiesis, and hence these subjects are still intermittently transfused. However, the overall transfusion burden in this patient population has decreased dramatically. Cohen et al. showed that the success of chelation therapy in achieving a neutral or negative iron balance (assessed by liver iron concentration) correlated significantly with the transfusional iron intake 72. Decreasing the transfusion burden is thus advantageous, as it might affect not only the dose of chelation therapy used but also its outcome.

The challenges to efficacious clinical translation in hemoglobinopathies include the dose of engraftable transduced HSCs, the intensity of the preconditioning transplant regimen, and expression of the transgene. In-vivo selection strategies can ensure expansion of the few genetically modified engrafted HSCs. Improving vector potency will augment gene expression. Efforts to promote differentiation of iPSCs into engraftable HSCs can expand the HSC source, and gene editing can circumvent the need for high transgene-expressing LVs and the potential, albeit low, insertional genotoxicity of LVs. New technologies that can reshape the future of gene therapy are gene editing using CRISPR/Cas9 and the development of hematopoietic stem cells from iPSCs with long-term repopulating potential, although much work is needed to make these a reality. With scientific advancements in stem cell biology and genetic manipulation, we envision a future where a child prenatally diagnosed with a hemoglobinopathy can have his/her genetically modified cord blood stem cells transfused even before the fetal-to-adult hemoglobin switch, thus preventing the occurrence of any disease manifestations.
Gravitational waves originated before compactification in Kaluza-Klein theory

We investigate the propagation of multidimensional gravitational waves generated before compactification and observed today as 4-dimensional gravitational waves and gauge fields, generalizing the investigation of Alesci and Montani (AM) to the case where the internal space is an n-sphere. We also derive the 4-dimensional forms of the multidimensional harmonic gauge condition. The primary difference from the case of AM comes from the effect of the curvature of the internal space, which prevents the space from being static. The other effects are explicitly shown for propagation of the waves in our 4-dimensional spacetime. A static internal space exists if a multidimensional cosmological constant is present; then the situation is similar to that of a one-dimensional internal space.

Introduction

In unified theories such as Kaluza-Klein (KK) and string theories, extra-dimensional space is necessary. This space may also serve, through its size, to generate a hierarchy in energy scales in the brane universe picture [1,2]. However, it is difficult to detect the existence of the extra-dimensional space directly. Recently, Alesci and Montani (AM) examined differences between gravitational waves generated before and after compactification of the internal space [3] in the context of the original KK theory [4]. AM investigated the propagation in our 4-dimensional spacetime of gravitational waves which were produced in the 5-dimensional spacetime, i.e. before compactification. They also examined the gauge conditions on gravitational waves in the 4-dimensional spacetime imposed originally in the 5-dimensional spacetime. In their investigation, it was assumed that the Einstein equations in the 5-dimensional spacetime hold also after compactification if the original metric is replaced by the compactified one. They found that such gravitational waves propagate with the speed of light, as do those originating in the 4-dimensional spacetime. On the other hand, the gauge condition, the 5-dimensional harmonic condition, was found to forbid the 4-dimensional transverse and traceless gauge condition.

As to the observability of these gravitational waves, there may be problems concerning the effects of inflation. Since inflation dilutes the energy density of the gravitational waves, their detection might be difficult with present detectors unless superposition effects are very large. However, there would be no problem with respect to wavelength: it would have been very small when the waves were generated, so it would not be unobservably long today. Nevertheless, the results of AM are interesting in that observational signals of extra-dimensional space are not well known.

In this work, we extend their model by taking into account the unified theoretical point of view, from which it is more "realistic" that the dimension of the extra space is greater than one. Thus we examine the propagation of the gravitational waves generated in (4+n)-dimensional spacetime before compactification. The n-dimensional extra space is taken to be a sphere, as in multidimensional inflation models. This would be anticipated if we consider that the sphere is a maximally symmetric subspace and that the breaking of the symmetry would be caused by some kind of excitation. In this case, the extra space cannot be static due to its nonvanishing curvature, contrary to the 5-dimensional case, since a 1-dimensional space cannot have nonvanishing Riemannian curvature.
The sphere would be expected to collapse, so some mechanism for stabilization would be necessary [5,6]. We also examine how the harmonic gauge condition imposed in the (4+n)-dimensional spacetime is expressed in our 4-dimensional spacetime. The 4-dimensional wave equations and gauge conditions are complicated due to the time variation of the extra space. There is a static solution if we introduce a multidimensional cosmological constant; in this case, the wave equations take the same form as those for a 1-dimensional extra space, and the gauge conditions take the linearized form of those in non-Abelian gauge theory. In section 2, we derive the wave equations and gauge conditions in (4+n)-dimensional form, i.e., at the time when the waves were generated. In section 3, the background spacetime is investigated for the cases with and without a multidimensional cosmological constant. Section 4 is devoted to deriving the explicit forms of the wave equations and gauge conditions. Summary and discussions are given in section 5. Details of geometric quantities are summarized in the appendix.

Wave equations and gauge conditions in (4+n)-dimensional form

In this section, we write down the equations for the gravitational waves generated in the (4+n)(≡ D)-dimensional spacetime before compactification. The Einstein equations in the D-dimensional spacetime take the form (2.1) when the cosmological constant is absent, where Ĝ_AB and T̂_AB are the D-dimensional Einstein tensor and energy-momentum tensor, respectively. A hat is used to denote quantities defined in the D-dimensional spacetime, and κ_D is the D-dimensional gravitational constant. Here we investigate the propagation of the gravitational waves in vacuum, so we put T̂_AB = 0 in the following. Then it is well known that Eqs. (2.1) reduce to the vanishing of the Ricci tensor, R̂_AB = 0 (2.2). We denote the background metric by ĝ_(0)AB and its perturbation by ĥ_AB, as in (2.3) below. The background ĝ_(0)AB is determined by the zeroth-order equations of (2.2), R̂_(0)AB = 0. For the perturbations, the first-order equations of (2.2) give Eqs. (2.4), the equations for D-dimensional gravitational waves, where ĥ ≡ ĝ^AB_(0) ĥ_AB. A stroke ( | ) denotes the covariant derivative with respect to the D-dimensional background metric ĝ_(0)AB. In terms of the variables ψ_AB defined in (2.5) below, where ψ ≡ ĝ^AB_(0) ψ_AB, the harmonic gauge condition takes the form (2.6), and in this gauge the wave equations (2.4) are written as (2.7), where R̂_(0)ABCD is the Riemann tensor of the background spacetime.
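The displayed equations of this section do not survive in this copy; in standard linearized-gravity notation, and assuming the usual sign conventions for a Ricci-flat background (a reconstruction, not necessarily the authors' exact typography), Eqs. (2.3) and (2.5)-(2.7) read:

\hat{g}_{AB} = \hat{g}_{(0)AB} + \hat{h}_{AB}, \tag{2.3}

\psi_{AB} = \hat{h}_{AB} - \tfrac{1}{2}\,\hat{g}_{(0)AB}\,\hat{h}, \tag{2.5}

\psi_{AB}{}^{|B} = 0, \tag{2.6}

\psi_{AB|C}{}^{|C} + 2\,\hat{R}_{(0)ACBD}\,\psi^{CD} = 0. \tag{2.7}

Equation (2.7) is the familiar statement that, in the harmonic (de Donder) gauge, the trace-reversed perturbation obeys a curved-space wave equation whose only source in vacuum is the background Riemann tensor.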
Background metric

In this section, we examine the background spacetime through which the gravitational waves propagate. We assume that, after compactification, the extra n-dimensional space is maximally symmetric, i.e. an n-sphere, as noted in the introduction. This symmetry is often assumed in multidimensional inflationary models; the breaking of the symmetry can be thought to arise from some excitations. Then, by a suitable choice of the coordinates of the internal space, the background metric is written in the warped-product form sketched at the end of this section. Here the scale factor of the extra-dimensional space, a_I(x), behaves as a scalar field in the 4-dimensional spacetime, and g_(0)ab is the metric of the unit n-sphere, given as g_(0)ab ≡ a_I^{-2} ĝ_(0)ab. Instead of a_I, we will often use a scalar field φ defined as

φ ≡ ln(a_I/a_e), (3.2)

where a_e is a constant with the dimension of length. Details of the calculations of geometric quantities are given in the appendix.

The case without the cosmological constant

The Einstein equations for the background metric, R̂_(0)AB = 0, are now written as Eqs. (3.3) and (3.4), where quantities without a hat are defined in the 4-dimensional spacetime. ∇_µ denotes the covariant derivative with respect to the 4-dimensional background metric g_(0)µν, and □ ≡ g^{λρ}_(0) ∇_λ ∇_ρ. The equations R̂_(0)µa = 0 are automatically satisfied. Since the extra space is taken to be an n-sphere, which is maximally symmetric, we have the relation (3.5); using (3.5), we obtain an equation for a_I from (3.4). The right-hand sides of these equations come from the curvature of the n-sphere, R_I = n(n−1). These equations show that a_I, or φ, cannot be constant. This is essentially different from the case of a 1-dimensional extra space, in which the extra-dimensional component of the Riemann tensor vanishes, so that ĝ_(0)44 can be constant, as in the original KK model.

When the 4-dimensional background spacetime is the Robertson-Walker (RW) one, the time component of (3.3) (precisely, the time component of Ĝ_(0)µν = 0) becomes Eq. (3.8), where (3)R is the scalar curvature of the 3-dimensional space, a is the scale factor of the universe, and H is the Hubble parameter ȧ/a. All three space components of (3.3) lead to the same equation, (3.9); Eq. (3.8) can be thought of as a constraint, as usual. As suggested by observations, and for the sake of simplicity, we put (3)R = 0 in the following. Then the scale factor a appears only in the combination H. We can solve for H from (3.9) when ȧ_I ≠ 0. Putting the solution into (3.8), we obtain the third-order differential equation (3.10) for a_I, whose recoverable terms read

⋯ − 2ä_I² + ȧ_I² ä_I/a_I − (n−1) 3ä_I/a_I + (n−1)(ȧ_I/a_I)² + (n−1)/a_I² = 0. (3.10)

Equation (3.10) has no power-law or exponential-type solutions. Considering the multidimensional inflation model, we might expect that a_I would collapse, since it could not expand to a large size.

The case with the cosmological constant Λ̂

It would be desirable for there to be static solutions for a_I. This is possible if a cosmological constant term Λ̂ is introduced into the (4+n)(≡ D)-dimensional gravity, which can be thought of as the minimal generalization of Einstein gravity. Then Eqs. (2.1) and (2.2) are replaced by their Λ̂-modified counterparts, where we used a relation for R̂ and put T̂_AB = 0. Concerning the effects of Λ̂, there are two possibilities. One possibility is that the size of Λ̂ is of the order of the fluctuations, so that the background and the first-order field equations retain the form R̂_(0)AB = 0; in this case, the background is unchanged, so that the internal space is not static, which is not interesting here. Another possibility is that the size of Λ̂ is of the order of the background curvature, which modifies the background equations. For the perturbations ĥ_AB we assume the ansatz (4.1a,b), in which δa_I represents the perturbation of a_I; that is, we assume that the extra-dimensional space remains a sphere and only its radius is perturbed. The off-diagonal elements are expressed in terms of the Killing vectors ξ^a_(α) (α = 1, 2, ···, n(n+1)/2) on the sphere, as in (4.2), where g_(0)ab and ξ^a_(α) are functions of the angles θ_1, ···, θ_n, and the A^(α)_µ are functions of the 4-dimensional spacetime coordinates x^µ. The A^(α)_ν(x) are known to behave as 4-dimensional vector fields and play the role of the gauge fields. As we choose the harmonic gauge condition, the ψ_AB are convenient variables; using (4.1a,b) and (4.2), they are expressed as in (4.3) and (4.4), where h_I ≡ 2a_I^{-1} δa_I represents the perturbation of the extra-dimensional space. Note that h_I does not depend on the coordinates of the extra-dimensional space.
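The background ansatz and the gauge-field decomposition just described follow the standard Kaluza-Klein pattern; schematically, and with the a_I normalization of the off-diagonal term left open, since it is not fixed by the surviving text:

d\hat{s}^2 = g_{(0)\mu\nu}(x)\,dx^{\mu}dx^{\nu} + a_I^2(x)\,g_{(0)ab}(\theta)\,d\theta^{a}d\theta^{b},

\hat{h}_{\mu a} \propto A^{(\alpha)}_{\mu}(x)\,\xi_{(\alpha)a}(\theta), \qquad \alpha = 1,\dots,\tfrac{n(n+1)}{2}, \tag{cf. 4.2}

with the ξ_(α) the Killing vectors of the n-sphere and the A^(α)_µ the resulting 4-dimensional gauge fields; the n(n+1)/2 Killing vectors of the sphere correspond to the generators of its SO(n+1) isometry group.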
Gauge conditions

Using (3.2), (4.3), (4.4) and (A.2) in (2.6), we obtain the following gauge conditions. For A = µ, we have the gauge conditions (4.5) on the 4-dimensional gravitational waves. For A = a, we have the gauge conditions (4.6a,b) on the 4-dimensional vector (gauge) fields, which take an explicit form in terms of the A^(α)_µ. These do not appear to be the gauge conditions of non-Abelian gauge theories, since we are dealing with linear perturbations and non-linearity is discarded. The gauge conditions are affected by the existence of Λ̂ only through the behavior of φ, and Eqs. (4.5) and (4.6a,b) are unchanged. Therefore, if we assume a static φ, these conditions reduce to the transverse condition, as in the case of one extra dimension. In the above calculations and in the next section, we use the following relations for the covariant derivative ∇_a with respect to the internal metric g_(0)ab:

∇_c ψ_ab = 0 and ∇_(a ψ_µb) = 0, (4.7)

where ( ) denotes symmetrization.

The case with the cosmological constant Λ̂

In this case, the wave equations are given by the second equation of (3.14b), as noted above. Then ψ^{|A}_{|A} is nonvanishing. In terms of ψ_AB, the wave equation (3.14b) takes the same form as (2.7) under the harmonic gauge condition (2.6). Thus, for the case of a static background, the wave equations (4.8)-(4.10) take the same form as those for a 1-dimensional internal space, except for the curvature terms.

Summary and discussions

In the framework of Einstein gravity in D(≡ 4+n)-dimensional spacetime, we investigated the behavior of the 4-dimensional gravitational waves after compactification of the n-dimensional extra-dimensional space, when the gravitational waves were generated before compactification. After compactification, the extra-dimensional space, which is assumed to be maximally symmetric, has nonvanishing Riemannian curvature if its dimension is greater than one. Then it cannot be static, contrary to the case of a 1-dimensional extra space. The effects of curvature are very complex but appear, through the time variation of the extra space, only in the terms other than the first ones in the gauge conditions (4.5), (4.6a,b) and the wave equations (4.8)-(4.10). These equations are different from those for ordinary gravitational waves, and the differences are very complicated. It is noted that a static extra space is possible if the cosmological constant is present in D-dimensional gravity. In the static case, the equations of the gravitational waves reduce to those of the 1-dimensional extra space.

In our analysis, we used the 4-dimensional components of the original metric ĝ_µν as the metric g_µν after compactification, without making a conformal transformation. In this case, 4-dimensional gravity is not the Einstein one but of scalar-tensor type. The evolutions of the scale factors of the background 3-dimensional and extra spaces are then given by (3.8) and (3.10); the latter has no solution proportional to t^α or e^{βt}. With respect to observation, we point out the possibility that, if the waves were generated frequently enough, interference effects might appear, similar to quantum fluctuations, and might have affected structure formation, as they would be expanded during inflation.

After compactification, 4-dimensional gravity is, at least approximately, the Einstein one. It can be obtained from the dimensionally reduced higher-dimensional Einstein gravity by making the following conformal transformation of the metric [7,8], where n is the dimension of the internal space.
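For a reduction from 4+n dimensions of this type, the Weyl rescaling to the 4-dimensional Einstein frame conventionally reads as follows (a standard form; the normalization used in [7,8] may differ):

\tilde{g}_{\mu\nu} = e^{n\phi}\,g_{\mu\nu} = \left(a_I/a_e\right)^{n} g_{\mu\nu},

which absorbs the overall factor a_I^n arising from the volume of the internal n-sphere in the dimensionally reduced Einstein-Hilbert action.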
Since compactification can be thought of as a kind of phase transition, the conformal transformation could be thought of as expressing the relation between quantities before and after compactification. The propagation of the vector fields and the evolutions of the 4-dimensional spacetime and the internal space are then affected. In this case, it is possible to have approximate solutions representing a growing a and a decreasing a_I, corresponding to inflation, i.e. a_I playing the role of the inflaton, although the probability of inflation approximately vanishes in the model used in our discussion [8]. To obtain a finite probability, at least two internal spaces are required, which may be avoided in higher-curvature gravity theories. For a static internal background, an inflaton field must be introduced by hand.

From the string-theoretical point of view, the original D-dimensional spacetime should be a 10-dimensional one and the 4-dimensional spacetime should be replaced by the bulk. However, in the brane universe picture, the gauge conditions on fields and their propagation investigated in this work would apply to fields in the bulk. The compactification would be earlier than the appearance of branes, in the sense that we do not assume that the brane also exists in the internal space.

Appendix

We collect here formulas relating geometric quantities for the background spacetime defined in the D-dimensional spacetime to those defined in the 4-dimensional spacetime.

(iii) Riemann tensor. The nonvanishing components of the Riemann tensor satisfy the following relations:

R̂^a_(0)bcd = R^a_(0)bcd − a_I² ∂_λφ ∂^λφ (δ^a_c g_(0)bd − δ^a_d g_(0)bc),

where the components R^λ_(0)µνρ and R^a_(0)bcd are those of the 4-dimensional and extra-dimensional spaces, respectively.

(iv) Ricci tensor. The nonvanishing components of the Ricci tensor satisfy corresponding relations.

(v) Scalar curvature. The relation between the scalar curvatures of the two spaces is expressed in terms of R_(0), the scalar curvature of the 4-dimensional background, and R_I, the scalar curvature of the extra-dimensional space formed from g_(0)ab.
Van Allen Probes observation of plasmaspheric hiss modulated by injected energetic electrons

Plasmaspheric hiss was observed by Van Allen Probe B in association with energetic electron injections in the outer plasmasphere. The energy of injected electrons coincides with the minimum resonant energy calculated for the observed hiss wave frequency. Interestingly, the variations of hiss wave intensity, electron flux, and ULF wave intensity exhibit remarkable correlations, while plasma density is not correlated with any of these parameters. Our study provides direct evidence for the first time that the injected anisotropic electron population, which is modulated by ULF waves, modulates the hiss intensity in the outer plasmasphere. This also implies that plasmaspheric hiss observed by Van Allen Probe B in the outer plasmasphere (L > ~5.5) is locally amplified. Meanwhile, Van Allen Probe A observed hiss emission at lower L shells (< 5), which was not associated with electron injections but primarily modulated by the plasma density. The features observed by Van Allen Probe A suggest that the observed hiss deep inside the plasmasphere may have propagated from higher L shells.

Introduction

Plasmaspheric hiss plays an important role in the loss of energetic electrons within the plasmasphere and in high-density plumes (Lyons et al., 1972; Lyons and Thorne, 1973; Albert, 2005; Meredith et al., 2007, 2009; Summers et al., 2008; Ni et al., 2013; Breneman et al., 2015; Li et al., 2015a; Ma et al., 2016). However, the generation mechanisms of plasmaspheric hiss remain under active research. Three mechanisms have received the most intense attention to explain the generation of plasmaspheric hiss: in situ growth of waves (Thorne et al., 1979; Church and Thorne, 1983), lightning-generated whistlers (Green et al., 2005), and whistler mode chorus waves as an "embryonic source" (Bortnik et al., 2008, 2009; Chen et al., 2012a, b). Although wave power above 2-3 kHz from lightning-generated whistlers shows some correlation with hiss waves (Green et al., 2005), the waves below 1 kHz, which contain the majority of hiss wave power, are independent of the lightning flash rate (Meredith et al., 2006). The in situ growth of waves inside the plasmasphere was shown to be inadequate to account for the observed level (∼20 dB) (Huang et al., 1983); in response, Church and Thorne (1983) suggested that an "embryonic source" is required to lead to the observed wave intensity. Recent studies based on ray tracing simulation (Bortnik et al., 2008) have demonstrated that chorus waves from the distant magnetosphere can propagate into the plasmasphere and act as an embryonic source for hiss wave generation. Furthermore, ray tracing simulations (Chen et al., 2012a) suggested that the majority of hiss formation is caused by chorus emission originating within ∼3 R_E of the plasmapause. This model has successfully explained the observed frequency spectrum and spatial distribution of hiss over the typical hiss frequency range from 100 Hz to several kHz. A number of observational studies (Bortnik et al., 2009; Wang et al., 2011; Meredith et al., 2013; Li et al., 2015b) have shown good correlations between chorus and plasmaspheric hiss and suggested that chorus plays an important role in hiss wave intensification.
The Van Allen Probes mission recently detected unusual low-frequency hiss emissions with wave power extending well below 100 Hz (Li et al., 2013). The low-frequency hiss was demonstrated to cause more efficient loss of high-energy electrons (from ∼50 keV to a few MeV) due to its stronger pitch angle scattering rates compared to normal hiss (Ni et al., 2014; Li et al., 2015a). Such low-frequency hiss is unlikely to be a result of propagation of chorus waves from a more distant region, because embryonic chorus waves at the same frequency (Bortnik et al., 2008) would need to originate from unrealistically high L shells (Li et al., 2015b). Therefore, these low-frequency hiss waves were suggested to be generated in the outer plasmasphere on the dayside through local amplification (Li et al., 2013; Chen et al., 2014; Shi et al., 2017).

Hiss intensity modulation is often driven by the variation in background plasma density, either through local amplification or wave propagation (Chen et al., 2012c), and the modulation of hiss by other factors may easily be masked by the effect of the plasma density. Therefore, observations showing a direct correlation between hiss emission and electron flux are still very limited. In fact, fluxes of energetic electrons (tens to hundreds of keV) can be modulated by ultra low frequency (ULF) waves. A typical modulation is caused by drift resonance (Southwood and Kivelson, 1981). Zong et al. (2009) showed an interesting event of energetic electron modulation by shock-induced ULF waves. More recently, Claudepierre et al. (2013) presented observations of electron drift resonance with the fundamental poloidal mode of ULF waves based on Van Allen Probes measurements. The energy dependence of the amplitude and phase of the electron flux modulations provided strong evidence for such an interaction. The peak electron flux modulations occurred over 5-6 wave cycles at energies of ∼60 keV. Drift resonance between electrons and ULF waves has been extensively studied both theoretically and observationally based on Van Allen Probes data (Dai et al., 2013; Hao et al., 2014; Chen et al., 2016; Zhou et al., 2015, 2016; Li et al., 2017). Such a modulation of energetic electrons may modulate hiss emissions by varying the electron flux and pitch angle anisotropy, which could potentially affect the local growth rates of hiss waves, but observational evidence has not been reported yet. In this study, we report on a modulation of hiss wave intensity and injected electron flux due to ULF waves observed by Van Allen Probe B near the dayside, providing clear evidence that the hiss emission was generated through local amplification in the outer plasmasphere.
Data and methodology

The Van Allen Probes mission comprises two identical spacecraft (probes A and B) in near-equatorial orbits with an altitude of ∼600 km at perigee and a geocentric distance of ∼5.8 R_E at apogee (Mauk et al., 2012). The Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) suite on board Van Allen Probes A and B includes a magnetometer and a "waves" instrument (Kletzing et al., 2013). The DC magnetic field is measured by the magnetometer, and the survey mode of the waveform receiver (WFR) provides the power spectral density from 10 Hz to 12 kHz at 6 s time resolution. Plasma density can either be calculated from the upper hybrid resonance frequency extracted from the high frequency receiver (HFR) data (Kurth et al., 2015) or be inferred from the spacecraft potential measured by the electric field and waves (EFW) instrument (Wygant et al., 2013). We inferred plasma density profiles from the measurements of both instruments in the present study to obtain accurate plasma density values with high time resolution. High-resolution electron flux measurements over the energy range of ∼30 keV to 4 MeV are provided by the magnetic electron ion spectrometer (MagEIS) instrument (Blake et al., 2013; Spence et al., 2013). We used the level 3 MagEIS dataset, which includes the particle pitch angle distribution, to evaluate the electron distribution responsible for the hiss wave generation.

Observational results

A hiss intensification event modulated by electron injection was observed by Van Allen Probe B during ∼20:00-22:00 UT on 12 January 2014, as shown in Fig. 1. The satellite was located on the dayside and remained inside the plasmasphere, as indicated by the high plasma density (Fig. 1f). The main power of the hiss emission (Fig. 1b and c) resided below the lower hybrid resonance frequency (white dash-dotted line in Fig. 1b) and below 100 Hz (white dashed line in Fig. 1c), and intensified following the increase in the AE index (geomagnetic auroral electrojet index; Fig. 1a). Figure 1e presents the magnitude of the background magnetic field. The spin-averaged electron flux (Fig. 1g) exhibited modulations with a period of about 6 min. There is also a variation in the electron pitch angle anisotropy (Fig. 1h), although it is not as clear as the modulations of electron flux. The electron anisotropy is calculated based on Chen et al. (1999). The black lines in Fig. 1g and h show the calculated minimum electron resonant energy for first-order cyclotron resonance with parallel-propagating right-hand polarized waves at a frequency of 40 Hz (magenta line in Fig. 1b). As shown in Fig. 1g, the minimum resonant energy captures the main energy of injected electrons. Figure 1i shows the electron pitch angle distribution at 54 keV, which exhibits a pronounced modulation. The vertical dashed lines mark the minima of the electron fluxes at 54 keV. Figure 1d illustrates the convective linear growth rates for parallel-propagating whistler mode waves, calculated using the electron distribution measured by MagEIS based on the equations of Summers et al. (2009).
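The minimum-resonant-energy curves of Fig. 1g-h follow from the first-order cyclotron resonance condition combined with the cold-plasma dispersion relation for field-aligned whistlers. The sketch below reproduces that estimate for assumed, order-of-magnitude values of the magnetic field and density; the published curves use the measured B and n_e and may treat relativistic effects more carefully, so this is an illustration, not the authors' code.

import numpy as np

c = 2.998e8          # speed of light [m/s]
m_e = 9.109e-31      # electron mass [kg]
q_e = 1.602e-19      # elementary charge [C]
eps0 = 8.854e-12     # vacuum permittivity [F/m]

def whistler_min_resonant_energy_keV(f_wave, B_nT, n_cm3):
    """Minimum energy for first-order cyclotron resonance with a
    field-aligned whistler of frequency f_wave [Hz], for background
    field B_nT [nT] and electron density n_cm3 [cm^-3].
    Cold-plasma dispersion, nonrelativistic resonance condition."""
    w = 2.0 * np.pi * f_wave
    Oce = q_e * B_nT * 1e-9 / m_e                        # electron gyrofrequency [rad/s]
    wpe = np.sqrt(n_cm3 * 1e6 * q_e**2 / (eps0 * m_e))   # plasma frequency [rad/s]
    n2 = 1.0 + wpe**2 / (w * (Oce - w))                  # refractive index squared
    k = np.sqrt(n2) * w / c                              # parallel wavenumber [1/m]
    v_res = abs(w - Oce) / k                             # resonant parallel speed [m/s]
    gamma = 1.0 / np.sqrt(1.0 - (v_res / c) ** 2)
    return (gamma - 1.0) * m_e * c**2 / q_e / 1e3        # kinetic energy [keV]

# Assumed, order-of-magnitude inputs (not the measured event values):
print(whistler_min_resonant_energy_keV(f_wave=40.0, B_nT=150.0, n_cm3=100.0))

With inputs of this order, the estimate falls in the tens of keV — the energy range of the injected electrons discussed in the text.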
The modulation of the linear growth rate appears to correlate well with the observed hiss wave spectral intensity, with a period of several minutes. Changes in the background magnetic field, plasma density and the injected electron distribution (flux and pitch angle anisotropy of resonant electrons) could potentially be responsible for the hiss wave growth. Since the variation of the background magnetic field is small (∼4 nT) compared to the median value (∼150 nT), the effect of the background magnetic field on the wave growth rate is likely insignificant compared to the effects of plasma density and electron injection. To distinguish the roles of these two effects in local wave amplification, we compared the hiss wave amplitude with the spin-averaged electron flux and the plasma density.

The hiss wave amplitude integrated from 20 to 1000 Hz is shown in Fig. 2a. Figure 2b presents the spin-averaged electron flux integrated over the energy range from 30 to 200 keV. The vertical dashed lines in Fig. 2 depict the same times as in Fig. 1. Figure 2c shows the comparison between the electron flux band-pass filtered over 1.5-4 mHz (black) and the hiss wave intensity filtered over the same band (blue). It suggests that the hiss intensity is well correlated with the variation of the electron flux: the correlation coefficient between the filtered electron flux and the filtered hiss wave intensity over 20:00-22:00 UT is 0.841. The satellite was located at a magnetic latitude of −1.3° to −2.0°, near the source region where local wave amplification typically occurs, which is probably why hiss intensity and electron flux exhibit such a remarkable correlation.

In the present hiss modulation event, the filtered background plasma density (green line in Fig. 2d) is not well correlated with the filtered wave intensity (correlation coefficient 0.105), especially during the period from 20:45 to 21:40 UT. This suggests that the variation in plasma density plays an insignificant role in the modulation of hiss wave intensity during this event. To investigate the sole effect of density on hiss intensity, we also calculated the correlation coefficient between the non-filtered hiss wave intensity and the non-filtered plasma density, which even shows a slight anti-correlation, with a coefficient of ∼−0.483.

The comparison between the filtered electron pitch angle anisotropy at 54 keV and the filtered wave intensity is shown in Fig. 2e. Although a correlation coefficient of 0.378 indicates a certain correlation between these two parameters, it is much lower than the correlation between the hiss wave intensity and electron flux (0.841). Therefore, we suggest that the variation of electron pitch angle anisotropy plays a less important role in hiss intensity modulation than the variation in electron flux.
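The band-pass filtering and correlation analysis described above involve only standard signal-processing steps; a minimal sketch follows, with synthetic stand-in series (6 s cadence, matching the WFR survey resolution) in place of the actual measurements:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0 / 6.0                      # sampling rate for 6 s cadence [Hz]
b, a = butter(2, [1.5e-3, 4.0e-3], btype="bandpass", fs=fs)

t = np.arange(0, 7200, 6.0)         # two hours of samples [s]
rng = np.random.default_rng(0)
# Synthetic stand-ins sharing a ~2.6 mHz modulation plus noise:
common = np.sin(2 * np.pi * 2.6e-3 * t)
hiss = common + 0.5 * rng.standard_normal(t.size)
flux = common + 0.5 * rng.standard_normal(t.size)

hiss_f = filtfilt(b, a, hiss)       # zero-phase band-pass filter
flux_f = filtfilt(b, a, flux)
r = np.corrcoef(hiss_f, flux_f)[0, 1]   # Pearson correlation of filtered series
print(f"correlation coefficient: {r:.3f}")

Applied to the measured hiss intensity and electron flux, steps of this kind underlie the correlation coefficients quoted in the text.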
The electron flux variation observed by Van Allen Probe B may be caused by ULF wave modulation, since they have similar time periods. Figure 3 shows the variation of electron fluxes at different energy channels observed by both Van Allen probes. Figure 4 summarizes the Pc4-5 ULF waves from Van Allen Probe B during the time interval of interest (20:00-22:00 UT). Dynamic spectrograms of the ULF wave power are shown for the three components of the magnetic field (in mean field-aligned, geocentric solar magnetospheric, GSM, coordinates) along with the y component of the electric field in modified geocentric solar ecliptic (MGSE) coordinates. Band-pass filtered time series (1.5-4 mHz) are shown below each dynamic spectrogram. The parallel magnetic field (B_para) and the y component of the electric field in MGSE coordinates (E_y) have a similar frequency peak at ∼2.6 mHz. The wave spectra of the E_y and B_para components suggest that the compressional mode and shear mode are likely coupled.

The correlation of the ULF waves with the energetic electron fluxes at different energy channels is shown in Fig. 5. Figure 5a illustrates the filtered E_y component of the electric field between 1.5 and 4 mHz. Since Van Allen Probe B is near noon, the E_y component approximately represents the electric field in the azimuthal direction. Band-pass filtered electron fluxes normalized by unperturbed levels at different energy channels are shown in Fig. 5b. The vertical black lines indicate the minima of the E_y component. The electron fluxes at various energies show a modulation period very similar to that of E_y. Moreover, these fluxes exhibit an energy-dependent phase shift with respect to E_y. The phase of the electron flux oscillations with respect to E_y is closest to 180° out of phase at ∼466 keV. At lower energies, the phase of the peak electron fluxes relative to the E_y minimum varies but is not 180° out of phase. For the observed modulated hiss, the minimum resonant energy is tens of keV (Fig. 1), and thus the electron flux at energies below 100 keV plays the dominant role in hiss amplification. Although these low-energy electrons (30-100 keV) are not exactly in drift resonance with the observed ULF waves, their modulation is highly relevant to the presence of ULF waves. These low-energy electrons may be accelerated by the ULF waves during the first half cycle and then decelerated, so that there is no net energy gain. This behavior was also demonstrated in the drift-resonance theory, in which the peak electron fluxes should show the 180° phase shift at the resonant energy (Southwood and Kivelson, 1981).
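For reference, the drift-resonance condition invoked here (Southwood and Kivelson, 1981) can be written compactly, with ω the ULF wave angular frequency, m the azimuthal wave number, and ω_d(E, L) the energy-dependent bounce-averaged drift frequency of the electrons:

\omega - m\,\omega_d(E, L) = 0.

Electrons satisfying this condition (here near 466 keV) oscillate 180° out of phase with E_y, while electrons away from resonance acquire the energy-dependent phase shifts seen in Fig. 5b.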
Meanwhile, Van Allen Probe A detected hiss emissions in a similar frequency range, as shown in Fig. 6. During this time period, Van Allen Probe A was located at lower L shells (2.6 < L < 5.3) and later magnetic local times (14.9 < MLT < 18.0). The hiss intensity also exhibited modulation in the electric and magnetic field, as shown in Fig. 6b and c, respectively. However, different from the observation by probe B, the hiss intensity is dominantly modulated by the variation in the plasma density. Figure 6d shows the density profile obtained from EMFISIS (black) and EFW (red). Examples of evident modulation by variation in plasma density are highlighted with grey blocks. According to ray tracing simulations (Chen et al., 2012c), hiss waves tend to propagate toward regions of higher density, resulting in higher wave intensity there. The white lines in Fig. 6e show the minimum resonant energy corresponding to a frequency of 40 Hz (magenta line in Fig. 6b). There is no clear correlation between the hiss intensity and electron flux, suggesting that the modulations are mainly caused by the plasma density variation. We also calculated the convective linear growth rates for parallel-propagating whistler mode waves, as shown in Fig. 6g. The growth rate profile shows little correlation with that of the observed hiss intensity, indicating that these waves are not locally excited.

Figure 7 illustrates the comparison of hiss wave frequency spectra observed by Van Allen Probes A (Fig. 7a-b) and B (Fig. 7c-d). At the beginning of the emission, around 20:20 UT, the hiss wave intensity as a function of frequency observed by Van Allen Probe A presents a minimum at ∼200 Hz (indicated by the white arrows in Fig. 7a and b). This feature is similar to the observation by Van Allen Probe B (Fig. 7c and d), where the modulation of hiss wave power below 100 Hz is correlated with the calculated wave growth rate (Fig. 1d) based on the observed electron distribution. The hiss wave frequency spectra and structures observed by probe A are similar to those observed by probe B, but the energy spectra of energetic electrons are significantly different. Therefore, the hiss emission observed by probe A may be the result of wave propagation from the source region in the outer plasmasphere, further modulated by the local plasma density variation.

The minimum resonant energy calculated for the observed hiss wave frequency is consistent with the energy of injected electrons. The hiss wave intensity was modulated by the injected energetic electrons, which were in turn modulated by ULF waves. In the meantime, Van Allen Probe A also observed similar hiss emissions at lower L shells, probably due to propagation from the source region in the outer plasmasphere. Different from the observation by probe B, the hiss wave intensity observed by probe A is predominantly affected by the background plasma density. The modulation of hiss intensity by plasma density could be due to the effect of ray focusing at a high-density region during propagation (Chen et al., 2012c).

Figure 8 summarizes the processes discussed in this study. The injected energetic electrons with energies of tens to hundreds of keV drift from the nightside to the dayside in the outer plasmasphere. Simultaneously, the ULF waves modulate the energetic electron fluxes. The modulated energetic electrons then lead to the modulation of the hiss intensity via local amplification. These features were all well captured by Van Allen Probe B. During the same time period, probe A, at a later MLT and lower L shell, observed hiss emissions which may originate from the source region in the outer plasmasphere.
Chorus waves, which are intense coherent electromagnetic emissions exhibiting discrete rising or falling tones, are believed to be generated through cyclotron resonance with anisotropic electrons (Kennel and Petschek, 1966; Anderson and Maeda, 1977; Meredith et al., 2001; Li et al., 2009). It has been shown that ULF waves can modulate chorus intensity by modulating the background magnetic field and/or plasma density, which affect the number of energetic electrons resonant with chorus waves (Li et al., 2011). Moreover, the ULF wave-induced modulation of chorus could have an impact on electron precipitation, leading to pulsating aurora (Jaynes et al., 2015). Similar modulations may also be captured in hiss wave intensity if hiss is locally amplified. However, unlike chorus, plasmaspheric hiss waves are commonly known to be structureless (Thorne et al., 1973), and wave propagation is believed to be important for the measured hiss wave intensification (Bortnik et al., 2008, 2009; Chen et al., 2014). The hiss wave intensity is typically modulated by the variation in the background plasma density (Chen et al., 2012c). Nonetheless, our study showed the first evidence of hiss wave modulation caused by injected electrons modulated by ULF waves, clearly indicating that the hiss is locally amplified in the outer plasmasphere. It also provides an interesting link between ULF waves and hiss waves, which lie in two distinct frequency ranges but both play important roles in radiation belt electron dynamics.

Competing interests. The authors declare that they have no conflict of interest.

Figure 1. Plasmaspheric hiss modulation caused by injected electrons observed by Van Allen Probe B from 20:00 to 22:00 UT on 12 January 2014. (a) AE index; frequency-time spectrogram of (b) wave electric field and (c) wave magnetic field spectral density in the WFR channel; (d) frequency spectrum of convective linear wave growth rates; (e) background magnetic field intensity; (f) calibrated plasma density based on EFW and EMFISIS; (g) spin-averaged electron flux measured by MagEIS; (h) electron pitch angle anisotropy; (i) pitch angle distribution of electrons at 54 keV. The white dash-dotted line in (b) represents the lower hybrid resonance frequency (f_LHR). The magenta line in (b) indicates 40 Hz. The white dashed line in (c) indicates 100 Hz. The black lines in (g, h) represent the minimum resonant energy of electrons interacting with the waves at 40 Hz. The dashed vertical lines mark the modulation of the electron flux at 54 keV (i).

Figure 2. (a) Integrated hiss intensity from 20 to 1000 Hz; (b) integrated spin-averaged electron flux from 30 to 200 keV; (c) filtered integrated electron number flux (black) and filtered magnetic wave intensity of hiss (blue); (d) filtered plasma density (green) and filtered magnetic wave intensity of hiss (blue); (e) filtered pitch angle anisotropy (red) and filtered magnetic wave intensity of hiss (blue). The vertical dashed lines depict the same times as those in Fig. 1.

Figure 3. Variation of electron fluxes at different energies observed by Van Allen Probe A (a) and Van Allen Probe B (b). In (b), the modulation of electron fluxes was observed by Van Allen Probe B between 20:00:00 and 22:00:00 UT in association with ULF waves, and the dispersed electron injection was observed at ∼19:30:00 UT.

Figure 4.
Figure 4. Summary of the Pc4-5 ULF wave frequency spectra from Van Allen Probe B during the time interval of interest (20:00-22:00 UT). Dynamic spectrograms are shown for the three components of the magnetic field (in mean field-aligned, GSM coordinates) along with the y component of the electric field in MGSE coordinates. Band-pass filtered time series (1.5-4 mHz) are shown below each dynamic spectrogram. The black dashed lines indicate the frequency at ∼2.6 mHz.

Figure 6. The observation of waves and electron fluxes by Van Allen Probe A during the same period as that in Fig. 1. (a) AE index; (b) frequency-time spectrogram of wave electric field and (c) wave magnetic spectral density in the WFR channel; (d) plasma density obtained by EFW (red) and EMFISIS (black); (e) spin-averaged electron flux measured by MagEIS; (f) electron pitch angle anisotropy; (g) convective wave growth rates. Grey block areas indicate the intervals of hiss modulation by variation of plasma density. The magenta line in (b) indicates 40 Hz. The black dashed line in (c) indicates 100 Hz. The white lines in (e, f) represent the minimum resonant energy of electrons for the waves at 40 Hz.

Figure 7. The wave electric (a) and magnetic (b) spectral density observed by Van Allen Probe A and the wave electric (c) and magnetic (d) spectral density from Van Allen Probe B. Note that at the beginning of the emissions around 20:20 UT, the hiss wave intensity as a function of frequency presents a minimum at ∼200 Hz (white arrows) for the observations from both Van Allen Probes A and B.

Figure 8. An illustration showing the energetic electron trajectory (green), ULF waves (pink) and hiss intensity modulation (blue). Injected electrons from the nightside drift to the post-noon sector (green arrow) in the outer plasmasphere, where they provide a source of free energy for hiss wave generation. During the period of electron injection, the electrons are modulated by ULF waves (magenta), which leads to the modulation of hiss wave amplification (blue), as observed by Van Allen Probe B. The hiss waves are probably generated in the outer plasmasphere and then propagate to lower L shells, as observed by Van Allen Probe A.
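The band-pass filtered time series compared in Figs. 2 and 4 isolate the ∼2.6 mHz (Pc4-5) modulation from the slowly varying background. A minimal sketch of such 1.5-4 mHz filtering, assuming a hypothetical uniform ∼11 s sampling cadence (the actual instrument cadence is not stated here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ulf(x, fs_hz, f_lo=1.5e-3, f_hi=4.0e-3, order=2):
    """Zero-phase band-pass filter restricting a time series to 1.5-4 mHz,
    the Pc4-5 ULF range used for the filtered flux/hiss comparisons."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs_hz)
    return filtfilt(b, a, x)

# Synthetic demonstration: a 2.6 mHz oscillation on top of a slow trend
fs = 1.0 / 11.0                       # assumed ~11 s cadence [Hz]
t = np.arange(0.0, 7200.0, 11.0)      # two hours of samples [s]
x = np.sin(2 * np.pi * 2.6e-3 * t) + 0.5 * t / t.max()
x_filtered = bandpass_ulf(x, fs)      # trend suppressed, 2.6 mHz retained
```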
'Monumentum aere perennius' – Discussions and Decisions by the Synod of Dort on the Translation of the Bible

The Synod of Dordrecht 1618/19 was not only the most ecumenical synod of reformed churches in history, but is also famous for reaching closure, with the formulation of the Canons of Dort, on the highly controversial discussions of election, grace, predestination, free will and other related theological themes that disturbed the Netherlands during the first two decades of the 17th century. Unfortunately, in the wake of this, other crucial matters that were also dealt with at the Synod tend to be obscured. The critical issue of Bible translation is one such example. Although this theme appears to stand in the shadow of the contentious debates on election etc., the importance of the decisions of the Synod on the principles of Bible translation, which gave rise to the well-known Dutch "Statenvertaling" (State Translation), remains unassailed to this day. These include principles such as translating from the original languages, staying as close as possible to the original source text, remaining as faithful as possible to the typical Hebrew and Greek idiom, as well as the use of an unadulterated, understandable language as target language – with special consideration of and respect for the Names of the Lord, while also taking other important translations into account.

Key concepts: Statenvertaling (Dutch State Translation); Dordrecht/Dort; Bible translation; Translation principles

1 This article is based on research done for a Reformation Conference in South Africa during August 2018 (Pretoria, Cape Town and Vryburg), in commemoration of the Dordrecht Synod (1618/19), as well as a paper delivered in Heidelberg, Germany, on 26 July 2019 at the international conference 400 Years Synod of Dort (in Heidelberg and Dordrecht).
A part of this research was published in a conference volume (d'Assonville, 2019:49-77).

Return to Dort ... the "truth of 1618 and 1619"

It was in a remote part of the Bo-Karoo, a semi-desert in Southern Africa, that a remarkable piece of church history was re-enacted approximately 160 years ago. Writing at that time, Hester du Plessis (née Venter), the wife of elder I.D. du Plessis, described the events that resulted in the establishment of the Colesberg Reformed Church on 8 December 1860 as an appeal to the "truth of 1618 and 1619" (in Postma, 1905). It is clearly the Synod of Dort that she was referring to - the Three Forms of Unity and the Church Order of Dort in particular. "Back to Dort!" This truly was the motto that expressed the diligent endeavours of a multitude of congregation members in the middle of the 19th century - in the classis of Graaff Reinet at that stage as well as, simultaneously, in the ZAR (Transvaal) and the OFS (Oranje Vrystaat/Orange Free State). As a theological motivation for the average member, "God's infallible Word" was particularly useful, with an appeal to the Belgic Confession, Articles 7, 27-29. This slogan was confessional in character; this naturally implies a specific concern for the reformed confession, but also, in a broader sense, the "mutual agreement" (= common accord) regarding the church order. It assumes the absolute primacy of the Lord's Word, the Bible. But, and this is germane to our subject, both bring to light a very important facet: the availability of the Scriptures in the language of the believers; the Bible in their own language. This historical snippet is an example of similar reformation movements that took place in different parts of the world in the nineteenth century.

The Word of God in the vernacular

Right from the start of the Reformation it was evident that the Bible's availability in the vernacular would be a priority - which necessarily entailed the priority of accurate and faithful Bible translation. This too was the case at Dordrecht. And 230 years after the Synod of Dort, in the nineteenth century, this continued to be the abiding concern of a gathering of simple believers in the Karoo, apparently without a single theologian or preacher in their midst. In 1889, about 30 years after the events in Colesberg, S.J. du Toit in the Paarl - one of the founders of the GRA (Genootskap van Regte Afrikaners) and the father of Totius (J.D. du Toit) - published a book called "The Bible in Afrikaans". In this work he mounts a spirited argument for this noble ideal (cf. d'Assonville, 1999:244).

The Synod of Dort and Scripture - the preamble

Early on in the proceedings, less than a week after the official opening of the Synod of Dort on 13 November 1618, the sixth session of the synod occurred on 19 November 1618. The Acta (minutes) of the Synod read: "After the Praeses opened with the normal prayer, they began to discuss the possibility of a new and better translation of the Bible from the original languages into Dutch" (cf. Acta 1618/19:18; Kaajan, s.a.:86ff.; De Kooter, 2018). As with most decisions or records of church meetings, it should be noted that minutes usually indicate a certain progression of events, or 'prehistory' - sometimes brief, but often more extensive than generally presumed. The latter was the case with the famous Great Synod of Dort; the ideal of a Dutch Bible translation had travelled a long road by this time.
This 'prehistory' was a troubled one in the Low Countries ever since the Synod of Emden in 1571 (cf. Goeters, 1971 and also Rutgers, 1899:90). For specific reasons, the Synod of Emden took place in Germany (Lomberg, 1973:7-35), but, as with this synod in 1571, the Acta of synods that occurred in the Netherlands subsequent to Emden (1574, 1578, 1581, 1586 - cf. Rutgers, 1899:267, 367, 426-427, 534, 608 et seq.) mention the need for - and the expressed desire and striving for - a "correct translation of the Bible in the Dutch language" (cf. Goeters, 1971:56; Rutgers, 1899:90). In this regard, Nauta relates how Helmichius presented his report at a Particular Synod of North Holland in Amsterdam in 1607 - an argument for the continued efforts to complete a new and thorough translation of the Bible in Dutch. Helmichius, says Nauta, "recalled how this issue (the matter of a Dutch translation of the Bible - VEd'A) had remained on the agenda since 1571, when it had been addressed by the delegates of Cologne during the Synod of Emden ..." (Nauta, 1937:2). He then quotes Helmichius himself: "...how many general synods and annual synods of Holland have dealt with this, holding diligent discussions and debates, as well as investigating various methods of translation, with the knowledge - the resolution, even - of the government of the provinces of the Low Lands ..." (Nauta, 1937:2-4).

To go even further back in this prehistory, we need to return, quite literally, to the beginning of the Reformation, and to the first Bible translations that arose as the fruit of the Reformation. Luther's famous translation, which he began during his exile at Wartburg in 1521-1522, deserves particular mention (cf. Blanke, 2005:258-265). Within the same time-frame, Ulrich Zwingli's reformation work in Zürich encouraged the fertile climate which produced the translation known as the Zürich Bible (cf. Beutel, 1998:1500; Campi, 2005:1947). It was not long before a number of other translation initiatives saw the light, the most famous of which are the Tyndale (English) and the Olivétan (French - the later Geneva translation; cf. Neuser, 1989:87 et seq.) translations. Similarly, there were a number of different Dutch translations in circulation in the Netherlands by the close of the 16th century. All this is concrete evidence that the Reformation doctrine of sola scriptura both presupposed and required (and resulted in) faithful translations of the Bible from the original languages. In terms of tracing the development of Bible translation as such, the prehistory goes much further back into the past.

A brief excursion - Bible translation in the Early Church

It is a matter of foundational importance to realise that the principle of translation, i.e. the aspect of the translatability of Scripture as the Word of God, arises from Scripture itself. Indeed, there are many instances in the New Testament in which the Greek translation of the Old Testament, the Septuagint, is used. The first parts of this Greek translation of the Old Testament, which was done at different times and periods, date back to the third century B.C. - more than a century after the completion of the canonical books of the Old Testament (cf. Dogniez, 1998:1487-1491). The principle of the authoritative translation of Scripture is thus embedded in the Word itself. Regarding the Bible itself, there were already translations of the New Testament in the 2nd century, as well as of the entire Bible.
Some of the earliest include the Syriac Aramaic translation (the Peshitta - actually about six versions) and various Old Latin translations (the Vetus Latina or Itala). The Latin translations in particular rose fairly quickly to prominence as the knowledge and mastery of Greek dwindled in the Roman Empire. In addition, the growing authority of Rome as capital city meant the Latin translations were increasingly valued in the life of (especially) the Western church. Other well-known translations from the early centuries include the Coptic translation (2nd-4th century), the Ethiopian translation (4th century), the Gothic translation of the 4th century (the famous Wulfila Bible), the Armenian translation (5th century), and the Georgian translation (5th century) (cf. Ebertshäuser, 2006:18-20). The Roman Empire's official recognition of the Christian faith in 313 AD gave rise to the pursuit of a single Latin translation, which in turn led to Jerome's (Hieronymus') task in 382 of producing this translation - or at least co-ordinating existing translations such as the Vetus Latina into this Latin translation. His contribution was essential in establishing the principle that translation needs to take place from the original languages - in other words, from the Hebrew and Aramaic texts in the case of the Old Testament, rather than, as some would have had it, from the Greek translation (the Septuagint). Jerome's (Hieronymus') translation and editing in the late 4th century AD resulted in a work which would be well received and widely recognised - whence the name Vulgata, which implies a common and popular reception. This Latin translation has continued to play an important role in the church and in the world up to today. This is an important fact, because almost 1200 years after Hieronymus' Latin translation, at the time of the Reformation, it would again precipitate the vital question of the availability of the Bible in the vernacular - against fierce and mighty opposition from Rome. This defining issue would play an important role at the Synod of Dort, and continues to do so today. The matter of the canonical books, and the inclusion and status of the apocryphal books, would also have important consequences at Dort, and up to our present day.

A thousand years later

After a leap of more than a thousand years, the issue of Bible translation would again be thrust onto centre stage in church history. It is remarkable that wherever the Spirit of God was bringing forth new life and the accompanying repentance and conversion, it went hand-in-hand with new initiatives in Bible translation. This was evident, for example, among the Hussites in Bohemia, the Waldensians in Italy and France, and the Lollards in England (Ebertshäuser, 2006:21). It could be said that all instances of genuine Reformation in dark times are accompanied by the rejection of false teaching and the proclamation of true Biblical doctrine. An inevitable result of the appeal to Scripture as the only authoritative standard for true doctrine was that the urgent need for thorough and authoritative Bible translations that could be read and understood by ordinary Christians became increasingly apparent (Ebertshäuser, 2006:20) - and this was no different at the Synod of Dort. The short overview above is important when considering this topic, because it brings us to a highly significant pre-Reformation translation, namely the 14th century Wycliffe translation by John Wycliffe (1324-1384).
This work (translated in part from the Latin Vulgate, and partially assembled from existing sections of the Bible already in English) was a forerunner of the Tyndale translation. The latter was undertaken by William Tyndale (ca. 1494-1536) as the first English translation from the original languages; he began in 1525 and completed it while on the run for his life. The Tyndale translation (along with the Geneva Bible - not to be confused with the Genevan translation of our time) was in turn a precursor of the later King James Bible, which itself grew out of the Reformation - the fruit of the Reformation in England, in other words. The topical relevance of the King James translation for the Synod of Dort becomes clear when it is noted that it was precisely the King James Bible that was put forward as an example and motivation for the principles of a new, authoritative Bible translation from the original languages at the seventh sitting of the synod on Tuesday 20 November 1618 - a matter which the synod would eventually decide in favour of (Acta 1618/19, s.a.:19 et seq.).

Bible translations from the time of the Reformation

A more in-depth discussion of Bible translations from the time of the Reformation is beyond the scope of this article, but it is important to realise that the Statenvertaling - as a notable outcome of the Synod of Dort - did not originate in a vacuum. It is contained within a considerably broader stream of Bible translations which emerged at a similar time during the Reformation (the Luther, Zürich, Olivétan and Tyndale translations), or which flowed directly out of the Reformation (such as the King James Bible and the Dutch Statenvertaling). A few examples will serve to illustrate this:

In German there were: Luther's German translation (with a substantial contribution by Melanchthon) - the New Testament appeared in 1522, and the complete Bible in 1534, with the last revision by Luther himself appearing in 1545; 5 the Zürich translation (Huldrich Zwingli and associates) - sections of this translation were available between 1524 and 1529, and in complete form in 1531, a full three years before the Luther translation was published in its complete form; and the Lübeck translation (1533/1534). This translation also appeared before Luther's, although it relied heavily on the latter. The Lübeck translation was in 'Nederduits', a German version of "Plat" - a North German dialect, distinct from (the later development of) High German. This must not be confused with 'Nederlands' or Dutch, which was also known as 'Nederduits' at that time. This translation is also known as the Bugenhagen translation, due to the contribution by Bugenhagen, a colleague of Luther's.

In French there were the translation by Jacques Lefèvre d'Étaples (Faber Stapulensis, 1455-1536), of which the New Testament was published in 1523 and the complete Bible in 1528, and Robert Olivétan's translation (1535). The latter translation, by Calvin's cousin, was later revised in various editions in Geneva, and proved popular with French reformed churches for more or less 300 years.

In Italy it is presumed that a translation was already in circulation among the Waldensians in the 13th century. Two Italian translations (from Latin) by Niccolò Malermi and Antonio Brucioli appeared in the 16th century, leading to severe persecution by the Roman Catholic Church.
In 1607 an Italian translation, made from the original languages by the reformer Giovanni Diodati, was published in Geneva; this translation is still in use today (Ebertshäuser, 2006:23). There were similar Spanish and English translation initiatives (such as Tyndale's, from 1525), but the details of these are beyond the scope of this article (Ebertshäuser, 2006:23). What is of relevance to this topic is the fact that none of the translations that are still recognised and used today (Luther, King James etc.) were the first translations of the Bible into that particular language. We could rather point out that it is the culmination of centuries' worth of translation work that found its highest expression in such translations. Things were no different with the Statenvertaling. Worthy of special mention, however, are the principles of Bible translation, and the standards to which a translation should conform, that were established at the Synod of Dort. It is for precisely this reason that this topic should form part of any Dort 400 commemoration.

5 It is less well-known that there have been 18 printed German translations since the Middle Ages, before the Reformation (cf. Landgraf).

3. The Statenvertaling - monumentum aere perennius

Without delving too deeply into the historical details, we will now focus on the principles of Bible translation, as determined by the Synod of Dort in 1618/1619, and the ramifications of the decisions made at Dort regarding a Bible translation. We will also discuss the implications for us today. Reference has already been made to the fact that a number of Dutch translations or partial translations were already in use at the start of the 17th century. Many copies of the Deux Aes Bible were in circulation. By the last decade of the 16th century criticism of the Deux Aes Bible had been voiced for a while, with the renowned and influential Marnix (Philips of Marnix of St Aldegonde, 1540-1598) being the most vociferous critic. As an example, his 1594 letter to the learned Hebraist, Johannes Drusius, furnished a number of reasons for his description of the Deux Aes Bible as being "so faulty that a totally new edition is required" (in Nauta, 1937:6). And 13 years before that (in 1581) Helmichius, in a passing remark to his friend, Arnoldus Cornelisz, said that a new translation of the Bible was "truly necessary" (Nauta, 1937:6). Helmichius reported on the progress of the work at the Particular Synod of North Holland in 1607, as noted by Nauta (1937:1 et seq.). There will be no further discussion of his report at this point, except to mention that it established the urgent need for a thorough and faithful translation into Dutch from the original languages. The matter dragged on, with the translation work taking place in a piecemeal fashion, until the Synod of Dort in 1618/1619 would finally tackle the project head-on. Nauta words his conclusion to this prehistory aptly: "Only the great Synod of Dordrecht of 1618 brought an end to the prolonged period of failed attempts to provide a new Bible for the people" (Nauta, 1937:9). Nevertheless, it is important to note that the long and tiresome labour undertaken by so many diligent servants was not in vain.
Nauta's conclusion recognises this, and indicates that the Synod of Dort in 1618/1619 did not take place in a vacuum - in contrast with the tendency to pluck the "Great Synod" out of its historical context, and to consider it anachronistically; the same applies to the issue of Bible translation: "This does not mean that all the hard work and effort of more than 45 years - at least since 1571 - has been fruitless. Quite the contrary. Apart from the fact that the synod ordered the translators to use Marnix and Helmichius' notes, the churches at Dordrecht - when making decisions about translation - could gain from the experience acquired through the discussions and translation attempts over the years. And so, firm and general convictions could be established that would help answer several questions that come to the fore when dealing with a new Bible translation" (Nauta, 1937:9).

A prominent point on the agenda, and the King James translation

How heavily the need for a new, thorough Dutch translation from the original languages weighed on the minds and hearts of those at the Synod of Dort in 1618/1619 is apparent from the fact that it was, in a manner of speaking, at the top of the agenda - the first order of business after the delegates' letters of credential had been received, evaluated and accepted. The opening prayer of this sixth session of the synod - the first session at which the matter of a new Dutch Bible translation was addressed - is considered even by Kaajan to be the official opening prayer of the synod: "The session in which the gravamen concerning the new Bible translation was dealt with as an item on the agenda [19 November] was opened by the president, pastor Bogerman, with such an exceptional prayer that it served as the official opening prayer, as it were, of the synod" (Kaajan, s.a.:86). It also became apparent at this point that foreign delegates would play a valuable role in the proceedings. The English delegates, for example, gave a meticulous account of the translation of the King James Bible, which had finally been published seven years before, in 1611, highlighting which principles of translation they had applied.

Principles of translation and rules/guidelines for the translators

Discussions regarding the purposes and principles of the new Dutch translation took place over eight sittings of the synod (Sessions 6 to 13), from Monday 19 November to Monday 26 November. The most important decisions concerning the principles, rules and guidelines that were to govern the translation project can be summarised as follows: It must be a "better" translation, "from the original languages directly into Dutch". The need and urgency was great; as in previous synods it was determined "that this task should be carried out diligently, quickly, and competently, in the shortest time possible..." The synod decided that the translation needed to be completely new, not "merely editing the existing Dutch versions, but at the same time avoiding the annoyance of making jarring changes; taking from previous translations things that would not violate the truth, purity, and the character of the Dutch language..." (cf. Acta 1618/19, s.a.:19). Therefore, this translation had to be from the original languages, from the Greek and Hebrew texts. Nevertheless, the translation process still needed to consult and take into account the best existing translations, as well as the interpretations, explanations and decisions of various scholars (cf. Acta 1618/19, s.a.:20).
In conjunction with this, additional rules were set in place for the translators (cf. Acta 1618/19, s.a.:20):

• Care has to be taken to follow the original texts as closely as possible, with the idiomatic expressions of the original language being retained as far as the Dutch language allowed. Hebrew or Greek expressions that are too difficult to retain in translation have to be carefully recorded in the marginal notes.
• In cases where it is necessary to add words to the text in order to facilitate a better grasp of the meaning, the addition needs to consist of as few words as possible. The addition is required to be in a different font, and in parentheses, in order to differentiate it from the original text.
• A short table of contents has to be included at the start of each book and chapter, and cross references to other parts of Scripture are to be inserted in the margins.
• Apart from an occasional note to briefly explain the reasons for choices made in the translation of difficult passages, the synod determined that it is neither necessary nor advisable to include comments on doctrinal aspects of the text.

There were also discussions on other aspects, which are briefly noted below.

Canonical and apocryphal books

A detailed and lengthy discussion concerning the apocryphal books was held at the ninth session of the synod, on Wednesday 21 November. 9 It is important to remember that this discussion had in view only the apocryphal books of the Old Testament - in other words, the books that are, admittedly, included in the Septuagint, but that do not form part of the Hebrew Canon (the Tenach, which we know as the Old Testament - cf. the Belgic Confession, articles 4 & 6; the discussion at Dordrecht was about those mentioned in article 6). The books that originated after the time of the New Testament, known as the New Testament apocrypha, did not receive even a mention at Dordt.

9 Cf. Neuser (1989:83-103) for a thorough discussion of the reformed view on the apocryphal books of the Old Testament.

It was unanimously concluded that the Old Testament apocrypha are merely human writings, with some sections consisting of fictional stories as well as false teachings. There were even instances in which the apocryphal books contradicted the canonical books. The question was whether it was appropriate to include the apocryphal books in one volume with the holy, inspired and canonical books of the Bible (cf. Acta 1618/19, s.a.:20). This deliberation was put forward and voted on at the tenth session, on Thursday 22 November 1618. With the support of the majority it was decided to translate the apocryphal books from Greek into Dutch, but with less diligence and care than with the translation of the canonical books (cf. Acta 1618/19, s.a.:20, 21). In what seems to have been an extremely intensive discussion, the synod felt that it would be desirable if the apocryphal books were not published in the same volume as the Holy Scriptures. Nevertheless, the apocryphal books had long been published in Protestant Bible translations in the same volume as the canonical books, both inside and outside the Netherlands (cf. Neuser, 1989:83 et seq.). Examples in this regard were the Luther, Zürich, Castellio and the French Genève [Olivétan] translations, as well as the King James translation.
Therefore the Synod judged that it "could cause mild annoyance and even slander" if this were no longer the case and the apocryphal books were now published separately from the (canonical) Scriptures (Acta 1618/19, s.a.:21). This decision was made with many reservations and qualifications. The apocryphal books had to be separated "... from the canonical books by a substantial space between the two, [they should be distinguished] by a distinctive title page ... in which it needs to be explicitly indicated that these books are human and therefore apocryphal" (Acta 1618/19, s.a.:21). Furthermore, the apocryphal books needed to be printed "in a smaller font, distinct from the fonts used in the canonical books, so that notes could be made in the margins at the places where the truth of the canonical books is contradicted, especially where the Roman apologists found material to argue against the truth from the canonical books" (Acta 1618/19, s.a.:21). The printers had to ensure that the apocryphal books were "... bound on their own, with page numbers that differed from the canonical books, so that it would be immediately evident that they were not canonical ..." (Acta 1618/19, s.a.:21). The printers therefore had to number the pages of the apocryphal books differently and independently from those in the canonical books of Scripture (Acta 1618/19, s.a.:21). An interesting departure from existing editions of the Bible was the synod's decision to place the apocryphal books after the books of the New Testament, to emphasise the distinction between them and the canonical books of the Bible (Acta 1618/19, s.a.:21). Up until this point, it had been customary to place the apocryphal books directly after the Old Testament. This practice is still followed in some editions of the modern Luther Bible, for example.

Grammatical and lexicographical aspects

There were two issues especially that needed to be clarified before the translation work began; these were on the agenda for the 12th session on Saturday 24 November 1618 (cf. Van Vlis). There were other matters for consideration as well, e.g. the Names of God, the division of chapters and verses etc. (cf. Acta 1618/19, s.a.:23), but the details cannot be discussed here. An issue of great importance was the question of how the Lord should be referred to in the second person in the translation. Formerly, the second person singular "du" was used in Dutch, where currently "jij" or "je" is used to refer to someone in the second person. Where the second person plural in Dutch used to be "gij" or "jij", it is now "jullie". Both "jij" and "gij" were thus plural forms. In due course, however, "jij" or "gij" replaced "du" as the singular form. To avoid ambiguity, the plural form "liede" or "lui" (meaning people) was added to "jij", and over a period of time the plural form "jullie" ("jij" plus "lui") came into use. It is against this background that the discussion that unfolded at the 12th sitting of the Synod of Dort (cf. Acta 1618/19, s.a.:22) needs to be understood. This sitting, which took place on Saturday 24 November 1618, was the occasion of an intensive and contentious debate on this issue. Eventually, with majority support, the decision was made that the Dutch translation would use the plural form "Gij" to refer to (speak to) God in the second person (Acta 1618/19, s.a.:22). The decision would guide and shape Dutch Biblical language for centuries to follow.
While this discussion may look to some like pedantic hairsplitting, it demonstrates how the synod took great care and expended effort to ensure that the translation would honour the Lord in the manner in which He would be referred to and addressed in the second person.

The rendering of JHWH

The two options being considered were either to render the consonants JHWH (יהוה) as Jehovah (the consonants with the vocalisation for Elohim) or with the appellation "Heere" (meaning "Lord" in the Dutch of that time). The latter followed the example of the Septuagint, which rendered JHWH as Κύριος (Lord). The synod settled on the second variant, but specified the use of capital letters, namely LORD (HEERE). This was then approved (Acta 1618/19, s.a.:22, 23).

Completion of the translation

It was estimated at first that the translation work would take four years. This was completely unrealistic; the work only began seven years after the synod, in 1626. Due to a number of setbacks (including translators falling ill, and even dying, a translator who needed to be ransomed from Spanish imprisonment, and an outbreak of the plague), the translation and revision were only completed by 1635. The final product was handed to the States General in 1637. The "acceptance and introduction" was not without opposition either. But that is another story ... And thus the illustrious Statenvertaling is a pivotal outcome of the Synod of Dort 1618/1619 - indeed a "monumentum aere perennius" (as mentioned by Kaajan, s.a.:86). It is a substantial confirmation of the recognition and authority of Scripture as the Word of God. This translation is in use up to our present day. A complete revision in modern Dutch was published in 2010, with further editions and corrections having appeared since then. This Revised State Translation has been well received and is used in various churches in the Netherlands.

The significance of Dort in the question of Bible translation principles

It would be a big mistake to think that the Synod of Dort was only concerned with election and predestination and other related themes. The first mistake with this kind of thinking is that one tries to separate the questions that were discussed and answered at the Synod of Dort 1618/19 from the rest of the confession and faith. A second mistake is that one separates the whole issue that was eventually dealt with by formulating and accepting the Canons of Dort from the rest of the agenda of Dort. This is but one reason why the matter of Bible translation, which actually enjoyed the very first attention at the Synod, is so important and may not be neglected in any study of the Canons. Bible translation cannot be considered apart from the Doctrine of Scripture. The reformed Doctrine of Scripture, as made evident at the Synod of Dort in 1618/1619 when the commitment to the reformed confession was expressed, is most concretely displayed in how we "rightly handle the word of truth" (2 Tim 2:15); this is especially evident in the principles of Bible translation. To mention but one foundational premise, and surely its most important facet: the Synod of Dort accepted that Scripture is the Word of God (cf. d'Assonville, 1998). Herein lies the power of the Canons of Dort: they do neither more nor less than render the teachings of Jesus Christ - about the Doctrine of Election, and much more besides. Research has already indicated that the principles of translation established by the Synod of Dort in 1618/1619 remained valid for many subsequent Bible translations.
They include (cf. d'Assonville, 2004):

• "The translation must accord as closely as possible with the original source text;
• "the typical Hebrew and Greek idiom must be taken into account as far as possible;
• "the language of translation must be pure;
• "the Hebrew name Jahwe must be rendered in capitals or at least differentiated from the translation of Adonai."

Conclusion

Much water has flowed into the sea since the Synod of Dort in 1618/1619 - and much philosophical water too. There has been a prodigious development in the areas of translation theory and the philosophy of language, specifically during the 20th century and the past few decades. This is not the place to enter into a debate on the merits and demerits of these developments. Those who participated in the Synod of Dort could obviously not foresee the later developments in language and the philosophy of language, and the shifts in perspectives on language and communication. Disciplines and terms such as "structuralism", "discourse analysis", "post-structuralism", "deconstructuralism", "semiotics" etc. would only come into being three centuries later. The 20th century's most illustrious thinkers about language played no role at all - how could the later influence of Bertrand Russell, Ferdinand de Saussure, Ludwig Wittgenstein, Michel Foucault, Jacques Derrida, Jürgen Habermas and others have been anticipated? The fact is that 400 years ago foundational decisions regarding translation work were made without any foreknowledge of things that would follow in the disciplines of philosophy and the philosophy of language. But does that mean that the principles of translation established at Dort 1618/1619 are outdated, and of no more value to us today? What Dort teaches us regarding principles of translation is that there is an inherent and direct relationship between the view of Scripture, confession and the translation of Scripture. The confirmation of the Belgic Confession, 17 i.a., as a confession of the churches at the same Synod that agreed on the Canons of Dort cannot be considered separately from the decision process that resulted in the principles for Bible translation.

17 Article 3, considered alongside articles 4-7 of the Belgic Confession, comes especially to mind: We confess that this Word of God was not sent, nor delivered by the will of man, but that holy men of God spoke as they were moved by the Holy Ghost, as the apostle Peter says. And that afterwards God, from a special care which he has for us and our salvation, commanded his servants, the prophets and apostles, to commit his revealed word to writing; and he himself wrote with his own finger the two tables of the law. Therefore we call such writings holy and divine Scriptures.

The principles of Bible translation, as determined at Dordrecht in November 1618, were based on the assumption that translation is about the faithful rendering of that which the Lord says in his Word. When measured against the Belgic Confession (as well as the Heidelberg Catechism), these principles of translation receive a value and significance that reaches beyond Dort and its historical meaning, all the way to the Statenvertaling, casting it in a new light. By this light we see that it is not only the Statenvertaling that is a "monumentum aere perennius" (a monument more durable than bronze); the principles of translation, fixed in place at the Synod of Dort 1618/1619, are also "monumenta aere perenniora".
Functional Analysis Validation of Micro and Conventional Injection Molding Machines Performances Based on Process Precision and Accuracy for Micro Manufacturing

Micro polymer parts can usually be manufactured either by conventional injection moulding (IM) or by micro-injection moulding (µIM). In this paper, functional analysis was used as a tool to investigate the performances of IM and µIM used to manufacture the selected industrial component. The methodology decomposed the production cycle phases of the two processes and attributed functions to part features of the two investigated machines. The output of the analysis was aimed at determining the causal chains leading to the final outcome of the process. Experimental validation of the functional analysis was carried out by moulding the same micro medical part in thermoplastic elastomer (TPE) material using the two processes by means of multi-cavity moulds. The produced batches were assessed using a precision scale and a high-accuracy optical instrument. The measurement results were compared using capability indexes. The data-driven comparison identified and quantified the correlations between machine design and part quality, demonstrating that the µIM machine technology better meets the accuracy and precision requirements typical of micro manufacturing productions.

Introduction

Conventional injection moulding (IM) is the most used process for the manufacturing of polymer parts, since it enables the mass-production of net-shaped components. In the last decades, the miniaturization of components has become one of the principal technological drivers in many engineering sectors [1]. In order to meet the consequent growing demand for micro components, conventional injection moulding (IM) was downscaled into micro-injection moulding (µIM) [2]. The two technologies have the same process cycle phases (i.e., filling, packing, cooling, demoulding) but, at the same time, fundamental differences deriving from the smaller dimensional scale exist between the process chains associated with both processes. In particular, specific micro tooling processes [3-5], micro scale measuring techniques [6] and new design approaches [7,8] must be adopted when dealing with micro scale polymer processing [9,10]. New injection moulding machines have also been developed: conventional ones embed a reciprocating screw, while those dedicated to µIM typically have a screw for plasticising pellets and a separate plunger (diameter of 5 mm down to 2 mm) for metering and injection [11]. Such an alternative architecture increases the accuracy of polymer melt dosing and provides higher injection speeds because of the lighter and more controllable injection plungers. This directly results in higher repeatability and improved replication fidelity. These features make µIM the preferred method for the manufacturing of polymer micro parts [12]. However, it is worth noticing that polymer micro parts can be manufactured by both IM and µIM. IM is usually employed in the plastic industry when small batches of micro components are needed and, therefore, the investment related to the acquisition of a dedicated µIM machine is not sustainable or justified. Although the differences between using an IM and a µIM machine are well known, no study reports an investigation aimed at correlating the functionality of the two machine architectures to the actual dimensional capabilities of both the macro and the micro process.
In fact, the literature has mostly focused on differences in terms of morphology, demonstrating that the size of crystalline entities is significantly influenced by the dimensional scale of the moulded component [13,14]. In order to investigate the impact of a particular machine layout on its function, functional analysis and axiomatic design represent powerful tools. Functional Analysis (FA) allows the identification of the functions performed by the product and of the components of the structure that carry out these functions [15-18]. It has already been used in the field of injection moulding by the authors [19] to analyse two different machine designs. Axiomatic Design, correlating functional requirements with parameters, allows a comparison between different design arrangements [20]. Functional Requirements are defined as the "minimum set of independent requirements that completely characterize the functional needs of the product in the functional domain". The fundamental axioms can be read together as "Among all the designs that satisfy the independence axiom, the one with the minimum information content is the best design". This means that Design Parameters, which are the physical variables characterizing the physical entities, are related to Functional Requirements in such a way that specific parameters can be adjusted to satisfy their corresponding requirements without affecting the others. Since axiomatic design links functions with corresponding features, it is highly relevant for both designers and production engineers.

In this paper, an IM and a µIM machine were directly compared by using FA and then axiomatic design in order to demonstrate the differences in terms of process capabilities. The relevant machine macro-functions, corresponding to the main phases of the moulding process, were used to divide the conventional and the micro process into functional maps. The map differences represented the functional key aspects, allowing the identification of critical components and possible working problems. The axiomatic analysis began with an identification of the design key features that affect the feeding phase (and therefore the metering) and the injection phase of the process. Those two phases are the most different in implementation and the most critical ones. For such a reason, these two were first compared in an aggregate axiomatic analysis and then the entire axiomatic matrix of each machine was built. The same micro part was moulded with the two machines and dimensionally assessed. Data on precision (i.e., repeatability) and accuracy (i.e., closeness to target, which in the context of the moulding processes considered in this research is represented by the cavity dimensions) of IM and µIM were gathered in order to validate the functional and axiomatic analysis results.

Case Study

The investigated micro part was a thermoplastic elastomer (TPE) component for medical applications with a nominal mass of 20 mg. TPE was selected for its elastic properties as well as a level of mouldability that enabled an effective and repeatable micro replication process [21]. Figure 1 shows the geometry of the micro part, which is cylindrical and has a through hole generated by a pin coaxial to the cavity. The main dimensional features of the part are shown, namely the inner and outer top diameters (IDt and ODt), the inner and outer bottom diameters (IDb and ODb), and two lengths (L1 and L2).
In particular, IDb and ODt were chosen as indicators for the comparison of µIM and IM since they are geometries originating from the replication of the cavity wall and the pin. There exists a significant distinction between these two situations: the polymer is allowed to shrink freely in correspondence with the outer diameter, while it undergoes a constrained shrinkage where inner diameters are involved, thus generating residual stresses that enhance the deviation with respect to the mould dimensions. The other geometries were also assessed in order to determine the volume of each moulded part, which was then employed for the density calculation. The density is a particularly relevant output of any moulding process since it is an indication of the holding phase performance. A higher resulting density means that the packing phase was particularly effective in achieving a high shrinkage compensation. Compensating for shrinkage allows a higher replication degree of the moulded part with respect to the cavity geometry, higher dimensional accuracy and lower warpage (i.e., smaller form errors) to be obtained. The dimensional tolerances were specified as ±50 µm on the considered geometries. The polymer used for both IM and µIM experiments was a Thermolast® grade from Kraiburg TPE GmbH (Waldkraiburg, Germany) having a nominal density of 0.89 g/cm³. The viscosity and pressure-specific volume-temperature plots of the material are presented in Figure 2.

IM and µIM Set-Ups

IM experiments were performed using an Allrounder 270 U injection moulding machine from Arburg (Lossburg, Germany) equipped with an 18 mm diameter reciprocating screw and capable of a maximum clamping force of 400 kN. A two-plate mould with four cavities was used (see part, gate and runner system layout in Figure 3a).
The volume of the feed system was equal to 980 mm³, accounting for 91% of the total amount of injected polymer. The four injection moulded parts account for a total of 96.8 mm³ (the nominal volume of one part based on design specifications is 24.2 mm³), equal to 9.0% of the total injection volume. The usage of submarine pin gates allowed automatic detachment of the parts from the feed system [22]. µIM experiments were carried out with a state-of-the-art MicroPower 15 micro injection moulding machine from Wittmann-Battenfeld (Vienna, Austria). This machine features a 14 mm diameter plasticisation screw and a 5 mm injection plunger. The maximum clamping force is equal to 150 kN. A two-plate micro injection moulding tool with four cavities was used with this machine (see part, gate and runner system layout in Figure 3b). The feed system was designed with a submarine gate and had a total volume of 174 mm³, thus representing 64.3% of the total injected shot. Correspondingly, the four injection moulded parts, which account for a total of 96.8 mm³, represent 35.7% of the total micro injection volume. By comparing this value with that of the previous case, it is clear that µIM allowed the amount of material waste to be consistently reduced, representing a valuable improvement with respect to production cost reduction, material consumption and production sustainability. Table 1 shows the optimized settings for the two processes. The same levels of holding pressure, melt temperature and mould temperature were kept in order to minimize the sources of variation in the comparison between IM and µIM. As for the other process parameters, a higher value of injection speed was used with the µIM machine in order to balance the smaller injection section. µIM was set on a shorter cycle time due to the smaller amount of polymer injected into the cavity.

Measurement Strategy and Uncertainty Evaluation

After discarding the first 50 shots, 10 consecutively injected parts were collected for each of the four mould cavities and then weighed, for both the IM and µIM batches, using a scale having 0.1 mg resolution (AW220, Shimadzu Corp., Kyoto, Japan). The 80 moulded micro components were also dimensionally assessed. In particular, the diameters were measured with a 3D focus variation microscope (Alicona InfiniteFocus, Alicona Imaging GmbH, Raaba, Austria) with a 5× magnification objective (0.41 µm vertical resolution and 1.75 µm lateral digital resolution). To do this, the top and bottom sides of each part were acquired and then levelled by applying a planar correction to remove any influence of tilting. After this operation, the measurands were extracted by fitting the points of the measured circles (see Figure 4) with the software MountainsMap® (Digital Surf, Besançon, France). Each acquisition was repeated three times. The two lengths L1 and L2 were measured with an optical CMM (DeMeet 220, Schut Geometrical Metrology, Groningen, The Netherlands) having a 0.5 µm resolution. Based on the measurements of the six dimensions (four diameters: ODt, IDt, ODb, IDb; two lengths: L1, L2), the volume V of each moulded part was calculated. The density was then calculated as the ratio of mass and volume. The cavities of both the IM and µIM moulds were measured with an optical microscope having a 2.6 µm lateral resolution (Infinity X-32, DeltaPix, Smørum, Denmark).
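The shot-volume fractions quoted above and the mass/volume density evaluation reduce to simple arithmetic; the short sketch below reproduces them (the function and variable names are ours, for illustration only):

```python
PART_VOLUME_MM3 = 24.2   # nominal single-part volume from the design
N_CAVITIES = 4

def part_fraction(feed_system_mm3):
    """Fraction of the injected shot that ends up in the four parts."""
    parts_mm3 = N_CAVITIES * PART_VOLUME_MM3      # 96.8 mm^3 per shot
    return parts_mm3 / (parts_mm3 + feed_system_mm3)

print(f"IM:  {part_fraction(980.0):.1%}")   # -> 9.0% of the shot
print(f"uIM: {part_fraction(174.0):.1%}")   # -> 35.7% of the shot

def density_g_cm3(mass_mg, volume_mm3):
    """Part density from the measured mass and the reconstructed volume;
    mg/mm^3 is numerically identical to g/cm^3."""
    return mass_mg / volume_mm3
```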
In particular, the geometries corresponding to ODt and IDb were measured in order to calibrate the process for the comparison of the achieved precision and accuracy. Any influence induced by differences of the mould dimensions between the conventional injection moulding tool and the micro injection moulding tool was thereby eliminated from the process analysis. The measurement uncertainty U was evaluated by applying the method described in ISO 15530-3 [24]. This evaluation technique is based on the substitution method, which allows the error of the measuring instrument to be estimated by repeated measurements on a calibrated artefact that is similar to the actual measurand. Two calibrated artefacts were used: a calibrated circle for the focus variation measurements and calibrated lines for the optical CMM measurements. Four uncertainty contributions were taken into account: u_cal, the uncertainty of the calibrated artefacts; u_p, introduced by the measurement procedure and calculated as the standard deviation of 20 repeated measurements on the artefact; u_w, associated with material and manufacturing variations of the actual measurand; and u_res, introduced by the limited resolution of the instrument. u_w was calculated as the standard deviation of the repeated part measurements,

$u_w = \sqrt{\tfrac{1}{n-1}\sum_{i=1}^{n}(M_i - \bar{M})^2}$, with $n = 3$,

where M is the vector listing the three repeated measurements for any of the six measurands. The contributions were then combined using the law of propagation of uncertainty to determine the expanded uncertainty U,

$U = k\sqrt{u_{cal}^2 + u_p^2 + u_w^2 + u_{res}^2}$,

where k is the coverage factor of 2 selected to achieve an approximated 95% confidence. Tables 2 and 3 show the uncertainty budgets for the IM and µIM parts respectively. Considering the 50 µm tolerance, uncertainty-to-tolerance ratios ranging between 3% and 5% were attained, thus confirming that the employed measuring instruments were suitable for the task [25]. By applying the rule of propagation of uncertainty [26] to the volume and consequently the density formulas, the expanded uncertainty for the density was calculated. In particular, the expanded uncertainty U for the density was on average equal to 0.0014 g/cm³ and 0.0017 g/cm³ for the IM and µIM parts respectively. Such values are much lower than the nominal density: they represent 0.16% and 0.19% of the density of the moulding material in the case of IM and µIM respectively. This result confirms that the selected measurement chain was capable of providing a sufficiently accurate output.

Functional and Axiomatic Results

A machine's performance is influenced by its design. The combined application of functional analysis and axiomatic design, as done below, allowed the two different moulding processes to be compared. The methodology can be extended to any number of machines, which is not the focus of the present study. In the following analyses, the Arburg machine architecture and the Battenfeld patent by Ganz [27] were used as the principal sources of information with regard to machine designs. Design differences, highlighted by the following functional and axiomatic considerations, refer to the different machine assemblies (see Figure 5). The functional analysis of the two machines was organized following the main phases of the moulding process reported below (see Figure 6). Such phases were used to divide the conventional and the micro process into functional maps. The plastication phases for the IM and µIM machines are reported in Figures 7 and 8 respectively. The screw in both machines carries out the plastication phase.
In the functional analysis, plastication is divided into a solid sub-phase and a liquid sub-phase, corresponding to the two physical states of the material during this process step. The functional maps are similar, but there are two important differences:

• The friction generated by the IM machine is greater, since its screw has a bigger diameter than that of the µIM machine and is also heavier.
• The last functional block of the phase, referred to as "store", takes place in front of the screw in the IM machine, while in the µIM machine the liquid is stored at the beginning of the bored hole (indicated as 9 in Figure 5).

The second phase (feeding) is carried out by the bored hole in the µIM machine and, again, by the screw in the conventional one. The IM machine thus follows the previous functional map: the injection chamber is fed by rotating the screw, so there was no need to generate a new functional map, as the feeding is performed simultaneously with the plastication. With the µIM machine, on the other hand, the feeding phase is carried out through the bored hole, which guides the molten material and controls the metered volume by means of the pressure sensor situated in the hole (the bored hole and pressure sensor are indicated as 9 and 10 in Figure 5). The functional map for the µIM machine feeding phase is reported in Figure 9.

The last two phases (injection and packing) are performed by the screw in the IM machine (Figures 10 and 11) and by the plunger in the µIM machine (Figure 12). In the IM machine, the screw stops rotating, begins to accelerate and, at the same time, starts injecting into the mould. The µIM machine, in contrast, performs a rapid sequence of injection and packing with the plunger, which contacts the liquid material when its acceleration has already begun (in the IM machine, the screw begins its acceleration when the liquid material is already accumulated ahead of it). The functional analysis gives evidence that:

• The sealing function provided by the plunger and by the screw (and consequently the backflow pressure) are very different during the injection phase. Almost no backflow is observed with the µIM machine compared to the IM machine. This is due to the plunger's smaller diameter (and thus its tighter tolerance) and to the "sealing effect" produced by solidified material (remaining from previous injections) close to the front of the injection plunger.
• The effect of the air and pneumatic energy expelled from the mould during the injection phase differs between the two machines (as highlighted by the bold block in Figure 11). To accelerate the screw in IM, a certain stroke is necessary, which implies a certain volume of air in front of the screw. Conversely, owing to its smaller mass, the plunger requires a shorter stroke to reach the same speed, and its smaller diameter implies a reduced air storage.

Based on these aspects, it can be concluded that the critical aspects (friction, backflow and pneumatic air) have a different impact on the two machines. In particular, they are more relevant in the IM machine, thus decreasing its controllability and consequently its performance. Considering the two most critical process phases (metering and injection) in the two machines, an axiomatic comparison can highlight how their design differences affect their performance.
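Before detailing the comparison, the axiomatic vocabulary used below (coupled versus uncoupled designs) can be made concrete with a small matrix check. This is a minimal sketch: the X placements in the toy matrices are illustrative only and are not taken from Tables 8 and 9.

```python
import numpy as np

def classify_design(A):
    """Classify an axiomatic design matrix (rows = functions, cols = parameters).

    Diagonal -> uncoupled; triangular -> decoupled; anything else -> coupled."""
    A = np.asarray(A, dtype=bool)
    if A.shape[0] != A.shape[1]:
        return "coupled"  # off-square matrices cannot be uncoupled
    if np.array_equal(A, np.diag(np.diag(A))):
        return "uncoupled"
    if np.array_equal(A, np.tril(A)) or np.array_equal(A, np.triu(A)):
        return "decoupled"
    return "coupled"

# Toy 3x3 matrices: several functions sharing parameters vs one parameter each
im_like  = [[1, 1, 0],
            [0, 1, 1],
            [1, 0, 1]]   # functions share parameters -> coupled
uim_like = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]   # one parameter per function -> uncoupled

print(classify_design(im_like))   # coupled
print(classify_design(uim_like))  # uncoupled
```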
In both machines, it is in fact the screw that carries out the metering of the liquid material during the feeding phase. However, this component differs between the two machines. The screw of the IM machine has a greater diameter (D_screw) and a different shape, whereas the metering section is longer in the µIM machine, assuring improved control over the fed liquid. Since the µIM screw diameter is smaller, a tighter diameter tolerance can also be obtained than for the IM machine screw (D_screw, T_screw, L_screw). Besides the benefits of these design conditions, the µIM machine also features an additional key component for guaranteeing high-quality metering: a pressure sensor in the bored hole that accurately measures the state of the liquid material in front of the injection plunger. These differences are represented in Table 4, which is arranged into three blocks (red, green and yellow) that influence the performance of the machines. The red block affects the control of the liquid volume, while the green block affects the liquid's backflow. The yellow block is present only in the µIM machine, and its function is to measure the metered volume of the polymer. The tighter diameter tolerance and the presence of the yellow block increase the precision of the µIM machine. Conversely, the less accurate screw facilitates backflow, while the absence of a proper measuring system hinders precise metering of the molten plastic.

The injection phase is performed by the screw in the IM machine and by the plunger in the µIM one. The plunger (l_plunger) is shorter than the screw (L_screw) and has a smaller diameter (d_plunger and t_plunger). The µIM machine is also equipped with a strain gauge sensor on the back of the injection plunger [14]. These different features are listed in Table 5. The main implication concerns the different masses of the screw (red block) and the plunger (green block), which determine their different capabilities of accelerating and decelerating. The piston is lighter than the screw, so it can be rapidly accelerated and decelerated even when reaching high injection speeds. Another important difference is that liquid material is already present in front of the screw when it is accelerated to the desired injection speed: screw acceleration and liquid injection are simultaneous events in the conventional machine. Moreover, when the screw accelerates it might also drag plastic pellets and molten liquid. In view of the above, the screw, which acts as a valve in the conventional machine, does not have sharp on-off states, because its behaviour is difficult to control, while the plunger, thanks to its smaller mass and the presence of the sensor, can be fully controlled. Furthermore, the shape of the IM screw (see Figure 13) negatively influences the control of its on-off behaviour, whereas the cylindrical form of the plunger does not cause the same effect. The plunger can therefore act in the µIM machine as a perfect valve, executing both the closing and the opening functions with higher precision. Considering both tables (Tables 4 and 5), it can be observed that in the IM machine the listed features are the same for injection and metering, while in the µIM machine different features are involved in the two phases.
Extending the previous axiomatic considerations to build the typical axiomatic matrix, it is possible to observe how the principal machine functional requirements (matrix rows) match their design parameters (matrix columns). Tables 6 and 7 describe the design parameters of the IM and µIM machines respectively, while Tables 8 and 9 present the axiomatic matrices in which these design parameters are used. The machine functional requirements refer to the previous functional analysis and its main phases. Among the µIM design parameters listed in Table 7 are:

• the length between the plunger's starting and end points within the injection chamber;
• D_bored hole: the diameter of the bored flow path (9 in Figure 5);
• t_plunger: the plunger diameter tolerance;
• Def. Sensor S.G.: the deformation of the strain gauge sensor, which corresponds to the quantity measured by the sensor (Young's modulus plus sensor geometry).

In Table 8, the axiomatic matrix for the IM machine, the functional requirements (store (pellets), feed, seal, store and meter in the feeding phase; move and seal in the injection phase) are mapped onto the design parameters D_notchscrew, D_screw, N of turns, T_screw, L_screw and stroke (l). Feed and meter each involve three design parameters, seal and store in the feeding phase each involve two, and several functions share the same parameters.

In particular, the screw diameter parameter is omitted for the µIM machine because it is negligible compared to the screw length in that machine. From a functional perspective, given the µIM screw design, the material feeding is performed by the screw length rather than by the screw diameter. Since the gap between the screw and the barrel is constant in the µIM machine, the diameter has less influence than the length of the active screw: the fed volume can be calculated as the cross-sectional area of the gap multiplied by the feeding length (i.e., the result of n rotations of the screw multiplied by the screw pitch). Unlike the previous functional analysis, Tables 8 and 9 also include the "store (pellets)" function among the functional requirements and the notch screw diameter among the design parameters. This phase was not considered in the previous functional maps, but it is inserted in the axiomatic matrices in order to make them complete.

The two matrices (Tables 8 and 9) show that, from the axiomatic perspective, the µIM machine presents an uncoupled design, each machine function being carried out by a different design parameter (diagonal matrix). The IM machine, on the other hand, presents a coupled design. The complete axiomatic analysis thus confirms the uncoupled design of the µIM machine and the presence of "control functions" (shown in yellow in Table 9) that belong only to the µIM machine design. Axiomatic design theory states that an uncoupled design outperforms a coupled one: no further optimization of the system is required, since each parameter can be managed separately from all the others. For these reasons, the new µIM design outperforms the standard design of a conventional IM machine.

Experimental Results

In order to understand the impact of the conclusions of the functional analysis and axiomatic design in terms of manufacturing precision and accuracy, the two batches produced by the IM and µIM machines were compared. The replication performance of IDb and ODt was evaluated by means of a shrinkage indicator S, defined as

S = (D_polymer − D_mould) / D_mould,    (4)

where D_polymer and D_mould represent the same diameter measured on the moulded parts and on the mould respectively.
The indicator S allows the real shrinkage of the polymer to be evaluated, since the influence of the mould dimension is eliminated through the normalization.

Comparison Based on Replication of Diameters

The results of the IDb measurements are shown in Figure 14. What stands out is that µIM attained a better replication for all cavities, the S value always being closer to 0, i.e., to perfect replication of the mould feature. In fact, IDb shrank five times more with IM than with µIM. This improvement was due to a more efficient filling phase in µIM, resulting from the faster injection, and to a more effective holding phase, caused by the faster switch-over and smaller injection volume (see Table 5 and the related discussion). As for the different cavities, both technologies resulted in a balanced multi-cavity replication process: the interval bars overlap for the four cavities of both IM and µIM. µIM also leads to a better repeatability of IDb, its interval bars always being smaller than in the IM case.

Figure 15 shows the results for the replication of ODt. µIM provided a generally better replication in this case as well. However, the benefit of using the micro-scaled technology is not as evident as with IDb. Here, a certain deviation between the different cavities was observed for both IM and µIM: cavities 1 and 3 of IM were replicated at a level comparable to µIM, while cavities 2 and 4 provided a lower replication performance, proving that cavity unbalance still affected the outcome of the process. µIM also provided results that varied with the cavity, although less so than the conventional technology.

When comparing the replication performances of the two diameters, it can be observed that the benefit introduced by the micro injection moulding machine was more pronounced for IDb: for ODt, using µIM instead of IM did not bring the substantial replication improvement observed for IDb. This difference might be due to the fact that IDb was obtained by replicating a pin, while ODt was obtained by replicating an outer geometry. The polymer was thus free to shrink when generating ODt, but not in the case of IDb, since the presence of the pin did not allow a free contraction of the polymer. Such a constrained deformation generates a concentration of residual stresses in correspondence with internal geometries such as holes, which increases the shrinkage of the moulded part once it is ejected from the cavity [23]. Since the shrinkage of IDb substantially decreased when applying µIM, it may be that the use of µIM instead of IM reduced the residual stresses and the resulting shrinkage.

In order to compare the repeatability of IDb and ODt achieved with the two processes, the capability index Cp was used. This parameter evaluates the variability of a process with respect to the imposed design specifications [29] and is calculated as

Cp = (USL − LSL) / (6σ),

where USL and LSL are the upper and lower specification limits set by the tolerance, and σ is the standard deviation of the results. A higher Cp is the result of a more precise, i.e., repeatable, process. In manufacturing, values larger than 1.33 are considered satisfactory, as the process then operates at a four-sigma performance level. In the case of S_IDb and S_ODt, USL and LSL were set by applying Equation (4) to the tolerance of 50 µm. The target, i.e., the mean between USL and LSL, was set at perfect replication, i.e., at S equal to 0.
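The two indicators introduced above reduce to a few lines of code. The sketch below uses the definitions of S and Cp as written here; the 2000 µm mould diameter and the IDb part measurements for a single cavity are hypothetical values chosen only for illustration:

```python
import numpy as np

def shrinkage(d_polymer_um, d_mould_um):
    """Normalized shrinkage indicator S = (D_polymer - D_mould) / D_mould."""
    return (d_polymer_um - d_mould_um) / d_mould_um

def cp(values, usl, lsl):
    """Process capability index Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6.0 * np.std(values, ddof=1))

# Hypothetical IDb measurements (um) for 10 parts against a 2000 um mould pin
d_mould = 2000.0
d_parts = np.array([1988.0, 1992.0, 1985.0, 1990.0, 1987.0,
                    1993.0, 1986.0, 1991.0, 1989.0, 1984.0])

s_values = shrinkage(d_parts, d_mould)
usl, lsl = 50.0 / d_mould, -50.0 / d_mould   # +/-50 um tolerance mapped onto S
print(f"mean S = {s_values.mean():.4f}, Cp = {cp(s_values, usl, lsl):.2f}")
```

With this made-up spread of about 3 µm, the sketch returns a Cp of roughly 5.5, of the same order as the µIM result discussed next.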
Figure 16 shows the distribution of S_IDb values gathered from the four cavities. It can be seen that µIM yielded a more precise production, the data being less dispersed than for IM. It is also clear that the parts manufactured with the micro-scaled technology have an IDb much closer to the mould dimensions, as anticipated above. The Cp values for the two cases are reported in Table 10. For both processes, a Cp larger than 1.33 was attained. However, µIM proved much more repeatable, with a Cp of 5.11, confirming the results of the functional and axiomatic analyses in Section 3.1 and the related conclusions concerning the design differences between the IM and µIM machines. The distributions of the S_ODt results are shown in Figure 17. For this measurand, the two distributions overlapped, demonstrating that the data gathered with the two technologies were more comparable than in the previous case. However, a narrower distribution, and consequently a more precise production, was once again attained with µIM, as confirmed by the higher Cp value (see Table 10).

Comparison Based on Density Results

The results of the density calculations are reported in Figure 18. What stands out is that the parts moulded using µIM had a significantly higher density, closer to the nominal data-sheet value of the material, 0.89 g/cm3. The reason lies in the more effective holding phase achieved with the µIM machine. The holding pressure acts on the specific volume of the polymer melt: an effective application of the holding pressure increases the density of the final part by minimizing the reduction of specific volume suffered by the polymer melt and the consequent shrinkage. In addition, the smaller feed system adopted with the µIM machine postponed the freezing of the gate, thus leaving a wider window open for the action of the holding pressure. Furthermore, the functional analysis in Section 3.1 gives evidence of the smaller amount of air in the µIM injection chamber and of its easier evacuation, supporting these experimental results. As for the dispersion of the results, comparing the distributions provided by the two processes (see Figure 19) shows that the IM parts had a more heterogeneous density. The adoption of µIM was therefore also beneficial with respect to repeatability. This was most probably caused by the enhanced precision of the µIM machine (due in turn to its electric drives, lighter injection piston and more homogeneous polymer melt), which resulted in a more repeatable injection procedure.

Conclusions

The present paper compared IM and µIM by means of functional analysis and experimental data when moulding the same micro TPE component. The functional analysis was used to identify the main differences between the conventional and the micro injection moulding machines used in the experimental campaign. In particular, each of the phases carried out by the machines (plastication, feeding, injection and packing) was analysed in terms of functionality by considering the different energies involved. The two machines were also compared by building axiomatic matrices, which allowed the functions to be assigned to the main design parameters. The functional analysis led to the following conclusions:

• The friction generated by the IM machine during the plastication phase was greater than that of the µIM machine because of the bigger screw size and mass.
• The smaller dimensions and tighter tolerances of the µIM machine screw make the metering procedure more precise and accurate and drastically reduce the backflow effect in the injection phase.
• The presence of the bored hole and pressure sensor in the µIM machine provides a closed-loop control that is absent in IM.
• The µIM machine plunger, being lighter than the IM machine screw, allows a faster acceleration, yielding a more effective injection phase.
• The IM screw begins the injection phase while still accelerating. The µIM plunger, instead, begins the injection phase after reaching the desired injection speed, thus allowing a more accurate and precise injection phase.
• In accordance with the functional analysis conclusions, the axiomatic design showed that the IM machine has a coupled design, whereas the µIM machine has an uncoupled one, allowing a higher controllability of each design parameter.

The experimental observations allowed the following conclusions:

• Selecting the µIM process resulted in a great reduction of material waste, its feed system being much smaller than that adopted with IM.
• µIM provided a relevant replication improvement compared to IM for the inner bottom diameter IDb of the moulded parts. The replication was also more precise, resulting in a higher Cp value.
• As for the outer top diameter ODt, µIM improved the replication accuracy and precision, although to a lesser extent than for the other measurand. This discrepancy could be due to the fact that IDb, being an inner geometry subject to constrained shrinkage, is more influenced by residual stress build-up, whereas ODt is less sensitive to the machine design and performance.
• The µIM process provided parts with a substantially higher and more homogeneous density among the four cavities. This clearly proved that the more repeatable injection phase and more effective packing phase better compensated the volumetric shrinkage of the polymer towards the end of the moulding process.

The results of this research unveiled the design reasons behind the higher performance of the µIM machine. This was achieved by reverse engineering the IM and µIM machine designs and by establishing links between the theoretical analyses (functional analysis and axiomatic design) and the experimental results. The interpretation of these results provides valuable insight for both production engineers and micro product designers: on the one hand, to improve such a challenging manufacturing process as µIM, and on the other, to develop new micro product designs that can take full advantage of the possibilities offered by the µIM technology.
Multiband photometry of a Patroclus-Menoetius mutual event: Constraints on surface heterogeneity

We present the first complete multiband observations of a binary asteroid mutual event. We obtained high-cadence, high-signal-to-noise photometry of the UT 2018 April 9 inferior shadowing event in the Jupiter Trojan binary system Patroclus-Menoetius in four Sloan bands: $g'$, $r'$, $i'$, and $z'$. We use an eclipse lightcurve model to fit for a precise mid-eclipse time and estimate the minimum separation of the two eclipsing components during the event. Our best-fit mid-eclipse time of $2458217.80943^{+0.00057}_{-0.00050}$ is 19 minutes later than the prediction of Grundy et al. (2018); the minimum separation between the center of Menoetius' shadow and the center of Patroclus is $72.5\pm0.7$ km, slightly larger than the predicted 69.5 km. Using the derived lightcurves, we find no evidence for significant albedo variations or large-scale topographic features on the Earth-facing hemisphere and limb of Patroclus. We also apply the technique of eclipse mapping to place an upper bound of $\sim$0.15 mag on wide-scale surface color variability across Patroclus.

INTRODUCTION

The origin and nature of the Jupiter Trojans have remained an enigma for many decades. The central question remains whether these objects orbiting in 1:1 mean motion resonance with Jupiter formed in situ or were scattered inward from the outer Solar System and captured into resonance during a period of dynamical instability sometime after the end of planet formation (Morbidelli et al. 2005; Tsiganis et al. 2005). While recent numerical modeling has demonstrated the consistency of the latter scenario with current theories of late-stage giant planet migration (e.g., Roig & Nesvorný 2015), the definitive answer to the question of the Trojans' formation location will invariably come from a more detailed understanding of the physical properties and composition of these objects.

The discovery of Menoetius, the nearly equal-size binary companion of Patroclus (Merline et al. 2001), established the first multiple system in the Trojan population and provided the first estimate of a Trojan's bulk density. Subsequent analyses using resolved imaging (Marchis et al. 2006; Grundy et al. 2018), thermal spectroscopy during mutual events (Mueller et al. 2010), and stellar occultations (Buie et al. 2015) have refined the density estimate to the current value of 1.08 ± 0.33 g/cm3. This low density indicates that Patroclus-Menoetius's bulk composition is dominated by ices, with significant porosity, similar to density measurements of cometary nuclei. Such a compositional model points strongly to an outer solar system origin of the Trojans.

Theories of binary asteroid formation center around two processes: capture or coeval formation. The former involves stochastic close encounters between two bodies, with capture occurring via dynamical friction from surrounding objects, energy exchange during gravitational scattering off a third body, or capture of fragments from a collision (e.g., Goldreich et al. 2002). Within the context of dynamical instability models of solar system evolution, Patroclus-Menoetius could have formed via capture early on during the planet formation stage, after the planet formation stage but prior to the instability in the outer Solar System, or following the scattering of Trojans into their current orbits.
The latter process of coeval formation produces binaries through the gravitational collapse of locally concentrated swarms of planetesimals (e.g., Nesvorný et al. 2010). While coeval formation has a strong tendency to produce near-equal binary components, capture typically results in large size discrepancies between the two components. The near-equal sizes of Patroclus and Menoetius therefore point toward coeval formation. Furthermore, coeval formation always produces companions with identical compositions, while capture scenarios can yield heterogeneous pairs. Detailed study of Kuiper Belt binaries has revealed a preponderance of equal-color pairs, whereas the average system colors span the full range of colors seen in the overall population (Benecchi et al. 2009). If recent dynamical instability models are correct, and the Trojans were scattered into their current orbits from the outer Solar System, then one would expect Patroclus-Menoetius to also have identical colors as a result of coeval formation in the early Solar System.

Comparisons of the properties of the two binary components provide a powerful empirical test of binary formation theories. In particular, the measurement of discrepant physical properties between Patroclus and Menoetius would immediately rule out coeval formation. It has been hypothesized for over a decade that the Trojans comprise two color sub-populations with distinct photometric and spectroscopic characteristics (e.g., Roig et al. 2008; Wong et al. 2014), and within the framework of dynamical instability models, these two sub-populations formed in different regions of the outer protoplanetary disk (Wong & Brown 2016). If Patroclus and Menoetius are found to belong to different sub-populations, then the binary system must have formed via capture during or after the period of dynamical instability, when the two sub-populations first mixed.

The unique nature of the Patroclus-Menoetius system has made it a prime target for detailed study, and it is one of five Trojan asteroids that will be visited by the space probe Lucy. An extensive effort has begun to better characterize the Trojan targets in order to maximize the mission's scientific yield. In 2017-2019, Patroclus-Menoetius was in a mutual event season, during which eclipse and occultation events were visible from Earth. We obtained multiband photometric observations of an inferior shadowing event as Menoetius' shadow passed across Patroclus on UT 2018 April 9. In this paper, we present high-cadence, high-signal-to-noise lightcurves in four bands and fit the eclipse lightcurves to produce a precise mid-eclipse timing and an estimate of the relative separation of the eclipsing components at mid-eclipse. We also use the technique of eclipse mapping, a first in the study of binary asteroids, to derive constraints on surface heterogeneity from the resultant color lightcurves.

OBSERVATIONS AND DATA ANALYSIS

We observed the UT 2018 April 9 Patroclus-Menoetius inferior eclipsing event using the then newly-installed Wafer-scale Imager for Prime (WaSP) instrument on the 200-inch Hale Telescope at Palomar Observatory. The science detector in WaSP is a 6144×6160 CCD with a pixel scale of 0.18″. We chose a 2048×2048 sub-array to reduce readout time and increase the cadence of our observations.
As the shadow of Menoetius passed across the surface of Patroclus, we imaged the system in four Sloan filters, g', r', i', and z', with individual exposure times of 30, 20, 20, and 45 s, respectively, which yielded a target signal-to-noise of at least 100 in all bands. Filters were cycled in the order g'-r'-i'-z', producing a uniform cadence of roughly 5.5 minutes in each band after accounting for readout and filter changes. Bias frames and dome flats were acquired at the beginning of the night, prior to the science observations.

Observing conditions at Palomar ranged from average to poor throughout the night. The sky was mostly clear, with a few isolated bands of thin, high-altitude clouds passing through at various points during the night. The seeing was poorest at the beginning of the observations, prior to the start of the eclipse; before UT 5:00, the typical seeing exceeded 1.6″, going as high as 2.1″ at times. The remainder of the night saw significantly better seeing, averaging around 1.2″-1.3″, with the exception of a roughly 30-minute period around UT 8:00, when there was a spike in the seeing to over 1.6″, likely associated with the passage of a few tenuous bands of high-altitude clouds across the vicinity of the observing field. There was also an increase in the seeing during the final 45 minutes of observation. These periods of relatively poor seeing can be identified by the corresponding notable increase in scatter in the lightcurves during those times.

Figure 1. Apparent magnitude lightcurves of the Patroclus-Menoetius system prior to, during, and following the inferior eclipsing event in the Sloan g', r', i', and z' bands. The vertical axis denotes increasing brightness (decreasing magnitude). Periods of larger scatter correspond to times of poorer observing conditions and higher seeing. The overall increased scatter in the z'-band lightcurve is attributed to discernible residual fringing on the images.

Image processing and photometric calibration were carried out using standard techniques. After the images were bias-subtracted and flat-fielded, the centroid positions and fluxes of bright sources in each image were obtained using SExtractor (Bertin & Arnouts 1996). These sources were then matched with stars in the Pan-STARRS DR1 catalog (Flewelling et al. 2016) to produce an astrometric solution and a photometric zeropoint. Our pipeline then automatically queried the JPL Horizons database for the position of Patroclus-Menoetius at the time of each exposure, identified the corresponding source on the image, and computed its apparent magnitude. Photometric extraction was carried out using a variety of fixed circular apertures with diameters ranging from 8 to 24 pixels, choosing for each exposure the aperture that minimizes the resultant photometric error. The median optimal aperture diameters in the four bands are 20, 11, 16, and 17 pixels, corresponding to radii of 1.80″, 0.99″, 1.44″, and 1.53″, respectively. In Figure 1, the apparent magnitude lightcurves are plotted in each band; the individual 1σ uncertainties are the quadrature sum of the propagated photometric errors stemming from the measured fluxes and the zeropoint uncertainties. The eclipse produced a roughly 0.15 mag dimming of the total system brightness in each of the four bands. The median photometric uncertainties are 0.0079, 0.0085, 0.0067, and 0.0074 mag in g'-, r'-, i'-, and z'-band, respectively. A handful of outliers are discernible, for example, two in the r'-band lightcurve at around UT 8:00 and 10:20.
Visual inspection of these images did not reveal cosmic rays or any obvious chip artifacts that could have affected these points. By changing the extraction aperture used for those exposures, we found that the saliency of these outliers varied notably, suggesting a non-astrophysical cause. We also note that all of the outlier exposures occurred during the periods of increased seeing mentioned previously. We have chosen to leave them in the lightcurves presented in this paper.

In the z'-band images, there was discernible residual fringing on the flux arrays, even after flat-fielding, particularly in the northeast corner. While the target mostly avoided the regions of the detector with the most severe residual fringing, there is still a noticeable effect in the z'-band lightcurve, manifested by the larger scatter in the photometry on short timescales and larger-than-expected photometric zeropoint errors. We do not attempt to correct for fringing, and while we present the z'-band lightcurve in Figure 1, we do not utilize or discuss the z'-band photometry in the following analysis.

3.1. Eclipse lightcurve fit

To derive estimates of the mid-eclipse time and the extent of the eclipsed region, we use a custom transit model to fit the i'-band lightcurve, which has the smallest median photometric error. Since the eclipsed region of Patroclus is non-illuminated, we can equivalently model the eclipse event as an occultation. The mutual orbit of the binary system is consistent with circular, so we fix the eccentricity to zero. We fix the orbital period and semimajor axis to the values reported and assumed in the mutual event predictions of Grundy et al. (2018): P = 4.282680 days, a = 688.5 km. Both components are significantly non-spherical, and modeling from occultation and rotational phase curves yields a triaxial radius ratio of α : β : γ = 1.3 : 1.21 : 1; the long dimension of each object lies along the line connecting the two objects, while the shortest dimension is aligned with the angular momentum vector of the binary system (Buie et al. 2015). During a mutual event, the sky-projected shapes of Patroclus (1) and Menoetius (2) are ellipses with semimajor axis values of β1 = 117 km, γ1 = 98 km and β2 = 108 km, γ2 = 90 km, respectively.

We fit for the center of eclipse time Tc and the apparent orbital inclination i, which is defined relative to the sky plane so that i = 90° is a perfectly edge-on occultation where the centers of the two objects align at mid-event. For each pair of Tc and i values in the Markov Chain Monte Carlo (MCMC) chain, we use the orbital shape and period to derive the relative separation vector between the two components at every point in the time series. To compute the amount of Patroclus blocked by Menoetius' shadow, we use a Python-based code (https://github.com/chraibi/EEOver) to calculate the overlapping area of the two ellipses, based on the algorithm described in Hughes & Chraibi (2012). We also fit for a constant multiplicative factor to normalize the out-of-eclipse lightcurve to unity.

We modify the transit model to account for the fact that Menoetius is illuminated, which dilutes the transit signal relative to the case where the secondary is dark. If the lightcurve of the eclipsed object Patroclus is modeled as λ(t), then the total lightcurve of the binary system is (λ(t) + f2)/(1 + f2), where f2 is the brightness of the secondary Menoetius relative to Patroclus.
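The dilution by the illuminated secondary can be written compactly. A minimal sketch, using the published semi-axes and assuming identical albedos (so that f2 is simply the area ratio):

```python
# Sky-projected semi-axes (km; Buie et al. 2015)
b1, g1 = 117.0, 98.0   # Patroclus
b2, g2 = 108.0, 90.0   # Menoetius

f2 = (b2 * g2) / (b1 * g1)   # brightness ratio for identical albedos

def diluted(lam):
    """System lightcurve when only Patroclus (lam, 1 = uneclipsed) dims."""
    return (lam + f2) / (1.0 + f2)

# A hypothetical 10% dimming of Patroclus alone appears roughly halved
# in the combined lightcurve, because the uneclipsed secondary dilutes it.
print(f"f2 = {f2:.3f}")
print(f"system flux for lam = 0.90: {diluted(0.90):.4f}")
```

Because f2 is close to 0.85, any dimming of Patroclus alone appears roughly halved in the combined lightcurve.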
If Patroclus and Menoetius were identical in albedo, the brightness ratio would equal the ratio of sky-projected areas: f2 = β2γ2/β1γ1. While it is reasonable to assume that the two components are largely identical in composition and should therefore have very similar albedos, given the likely formation mechanism of such near-equal mass binaries (see Section 1) and the markedly narrow albedo distribution of the Trojan asteroid population as a whole (e.g., Romanishin et al. 2018; Fernández et al. 2003), we nevertheless account for our uncertainty in the albedos of the individual components: we set a multiplicative scaling factor on f2 and place a Gaussian prior on its value, centered on unity with a standard deviation of 20%, consistent with the variance in the measured geometric albedos of large Trojans (e.g., Fernández et al. 2003; Romanishin et al. 2018).

The best-fit eclipse lightcurve is plotted in Figure 2. We have removed the fourth data point prior to the final fit, which is more than 3σ discrepant from the best-fit eclipse model. The lightcurve is normalized such that the combined out-of-eclipse brightness of Patroclus and Menoetius is unity. The scatter in the residuals is 0.0048, compared to a median relative flux uncertainty of 0.0030, indicating significant non-white noise in the lightcurve, attributable to the periods of poorer observing conditions at the beginning and towards the end of the night.

We measure a mid-eclipse time (in Julian days) of

Tc = 2458217.80943 (+0.00057/−0.00050),

which corresponds to UT 2018 April 9 7:25:35 with an uncertainty of 46 s. This is 19 minutes later than the predicted center of eclipse in Grundy et al. (2018). Meanwhile, we obtain a precise relative inclination estimate of i = 83.95 ± 0.06 deg, from which we can compute the sky-projected separation dmin between the center of Patroclus and the center of Menoetius's shadow at mid-eclipse:

dmin = a cos(i) = 72.5 ± 0.7 km.    (2)

Grundy et al. (2018) reported a predicted minimum separation between the centers of the two eclipsing bodies of 69.5 km. The greater separation derived from our fit indicates a more grazing shadowing event than predicted and points toward a slight inaccuracy in the orbital pole obliquity calculated in Grundy et al. (2018). We remind the reader that during this event, it is the shadow of Menoetius that occults Patroclus; the disk of Menoetius itself does not interact with the disk of the primary.

3.2. Surface properties

Various physical and compositional properties of the surface are expressed in the eclipse lightcurves. Within a single photometric band, comparison between the observed lightcurve and the best-fit eclipse model provides constraints on albedo variations across the eclipsed region of the primary, as well as on the shapes of both binary components. Significant covariant deviations of the residuals from a flat line may indicate patches of enhanced or reduced reflectivity on the primary, or significant deviations of the limb from a sky-projected ellipse. Examining the residuals from our best-fit eclipse model in Figure 2, we do not discern any statistically significant deviations indicating non-uniform reflectivity or non-ellipsoidal shapes for the primary disk and secondary shadow. Leveraging photometric lightcurves at multiple wavelengths provides additional information about the level of color variation across and between the two binary components.
As the shadow of Menoetius eclipses Patroclus, the contribution of the shadowed region to the average color of the system is removed. By examining the resultant color lightcurves, one can piece together the color distribution of the eclipsed region in a technique known as eclipse mapping. This powerful method allows one to extract spatial information about the target from spatially unresolved images.

For each pair of photometric lightcurves, we use linear interpolation between adjacent points in the second lightcurve's time series to calculate the magnitudes in the second filter at the time sampling of the first lightcurve's time series. We then subtract the resampled lightcurves from one another, adding the propagated uncertainties in quadrature. Figure 3 shows the three color lightcurves derived from the g'-, r'-, and i'-band lightcurves in Figure 1. We have omitted the color lightcurves involving z'-band due to the effect of residual fringing (see Section 2).

Figure 3. Color lightcurves derived from the photometric lightcurves in Figure 1, showing minimal variations during the shadowing event. The vertical solid and dashed lines indicate mid-eclipse and the beginning/end of the eclipse event, respectively. Almost all points in the color lightcurves are consistent with a flat line to within 1.5σ. The two notable outliers at around UT 8:00 and 10:20 in the g' − r' and r' − i' lightcurves stem from two outlier points in the r'-band lightcurve (see Figure 1).

The color lightcurves are generally very smooth, with no large deviations and almost all points lying well within 1.5σ of the average color across the observations. We note that the regions with increased short-term variation and the largest color deviations correspond precisely to the periods during our observations when seeing was poor and highly variable (see Section 2). Given the grazing nature of this eclipse event, we are only sensitive to very large color variations on small scales. The most stringent constraints on color variability can be derived from comparing the mid-eclipse color, when the eclipsed region is at its maximum, with the out-of-eclipse color. For all color lightcurves, the mid-eclipse color value is well within 1σ of the out-of-eclipse color, so we place 1σ upper bounds on the color variability using the median color uncertainty from the lightcurves, σc. To quantify these constraints, we consider two cases.

The first case seeks to constrain the difference between the average color c* of the eclipsed region on Patroclus and the average color c of the uneclipsed regions on both objects. The change in the measured color of the combined system between the out-of-eclipse baseline and mid-eclipse is weighted by the ratio of the maximum eclipsed area A* to the uneclipsed area A1 + A2 − A*, where A1 = πβ1γ1 and A2 = πβ2γ2 are the sky-projected areas of Patroclus and Menoetius, respectively. The maximum eclipsed area of Patroclus, as derived from our eclipse model fit in Section 3.1, was 12.4% of its sky-projected disk: A* ≈ 4470 km2. From here, the difference in color Δc1 ≡ |c* − c| is given by

Δc1 = σc (A1 + A2 − A*) / A*.

For the g' − i' color variability, for example, we have σc = 0.0092 and establish an upper limit of Δc1 = 0.13 mag, with similar constraints for the other colors.

The second case assumes that the two components have different colors, c1 and c2, but are individually uniform in color. A similar derivation yields the following expression for Δc2 ≡ |c2 − c1|:

Δc2 = σc (A1 + A2)(A1 + A2 − A*) / (A2 A*).

The constraints on Δc2 are much looser.
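The two bounds follow directly from the expressions above (which are our reconstructions of the area weighting described in the text); a minimal numerical sketch:

```python
import math

# Sky-projected areas (km^2)
A1 = math.pi * 117.0 * 98.0    # Patroclus
A2 = math.pi * 108.0 * 90.0    # Menoetius
Astar = 0.124 * A1             # maximum eclipsed area (12.4% of Patroclus' disk)

sigma_gi = 0.0092              # median g'-i' color uncertainty (mag)

dc1 = sigma_gi * (A1 + A2 - Astar) / Astar                     # eclipsed region vs rest
dc2 = sigma_gi * (A1 + A2) * (A1 + A2 - Astar) / (A2 * Astar)  # component vs component

print(f"dc1 (g'-i') <= {dc1:.2f} mag, dc2 (g'-i') <= {dc2:.2f} mag")
```

The printed values reproduce the 0.13 and 0.28 mag limits quoted for g' − i'.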
For g' − i', this upper limit is Δc2 = 0.28 mag. Starting with the second constraint, we see that the small maximum shadow coverage of Patroclus prevents us from deriving particularly useful upper limits on the difference in color between the two components. For comparison, the two color sub-populations in the Trojans have mean g − i colors of 0.73 and 0.86 (Wong et al. 2014; Wong & Brown 2015), so a larger eclipsed area and/or more precise photometry would be needed to confidently rule out a binary comprised of components from two different sub-populations using lightcurves like these. Typical color differences between the components of KBO binary systems are also significantly smaller than our upper bound (e.g., Benecchi et al. 2009).

The first constraint reflects the level of large-scale surface inhomogeneity across Patroclus. This much more stringent constraint suggests that the surface of Patroclus is quite homogeneous. Comparing with other ice-rich asteroids and satellites that have well-mapped surface color distributions, we find that larger bodies such as Pluto, Europa, Ceres, and Triton display significantly higher levels of color variability than Patroclus across physical scales comparable to the relative area probed by our eclipse measurements. In addition, those objects display significant localized albedo variations across their surfaces, which we do not detect on Patroclus.

The relative homogeneity of Patroclus is consistent with theories regarding the formation and evolution of Trojans and similar objects. Whereas larger bodies like the Galilean satellites and dwarf planets accreted sufficient material to gravitationally circularize, internally differentiate, and, in some cases, bind tenuous atmospheres, leading to secondary geological processes that continue to be active in the present day, smaller bodies like the Trojans would have formed as undifferentiated ice-rock agglomerations, similar to cometary nuclei, without sufficient gravity or internal heating to undergo further physical or compositional alteration (e.g., Wong & Brown 2016). These primitive objects would have a uniform composition throughout and would develop a homogeneous irradiation mantle across their entire surfaces. Such a formation scenario does not preclude occasional surface inhomogeneities due to minor cratering events. Areas of pristine material excavated by impacts might have a much higher albedo than the ∼5% typical of Trojans (e.g., Fernández et al. 2003). Likewise, these newly-exposed regions might have a distinct color from the rest of the radiation-reddened surface (Wong & Brown 2016). Both the reflectivity and the color inhomogeneities would be detectable using high-precision multiband lightcurves of mutual events similar to the ones presented in this work.

SUMMARY

In this paper, we presented multiband photometric observations of the UT 2018 April 9 inferior shadowing event in the Patroclus-Menoetius system. Our short-cadence, high-signal-to-noise lightcurves provided a precise mid-eclipse timing measurement, Tc = 2458217.80943 (+0.00057/−0.00050), which is later than the prediction of Grundy et al. (2018) by almost 20 minutes. Eclipse lightcurve modeling showed that the eclipse magnitude was slightly smaller than predicted, with a minimum separation distance of 72.5 ± 0.7 km between the centers of Patroclus and Menoetius' shadow at mid-eclipse.
Through an analysis of the color trends derived from the photometric lightcurves, we placed a moderately tight upper bound on the level of surface variability across Patroclus, in agreement with the predictions from formation models of primitive icy bodies. Meanwhile, the grazing nature of the event prevented us from ruling out a mixed binary scenario with components from different color sub-populations. Nevertheless, our analysis demonstrated the applicability of the eclipse mapping technique to the study of binary asteroids. Future work combining the observations of Patroclus-Menoetius from the 2017-2019 mutual event season with previous measurements will greatly improve the orbital parameters of the system. New orbital fits and shape models will enable more detailed planning of the Lucy flyby encounter of the Patroclus-Menoetius system in 2033.
Shape-IT: new rapid and accurate algorithm for haplotype inference

Background

We have developed a new computational algorithm, Shape-IT, to infer haplotypes under the genetic model of coalescence with recombination developed by Stephens et al in Phase v2.1. It runs much faster than Phase v2.1 while exhibiting the same accuracy. The major algorithmic improvements rely on the use of binary trees to represent the sets of candidate haplotypes for each individual. These binary tree representations (1) speed up the computation of the posterior probabilities of the haplotypes by avoiding the redundant operations made in Phase v2.1, and (2) overcome the exponential nature of the haplotype inference problem through a smart exploration of the most plausible pathways (i.e., haplotypes) in the binary trees.

Results

Our results show that Shape-IT is several orders of magnitude faster than Phase v2.1 while being as accurate. For instance, Shape-IT runs 50 times faster than Phase v2.1 to compute the haplotypes of 200 subjects on 6,000 segments of 50 SNPs extracted from a standard Illumina 300K chip (13 days instead of 630 days). We also compared Shape-IT with other widely used software, Gerbil, PL-EM, Fastphase, 2SNP, and Ishape, in various tests: Shape-IT and Phase v2.1 were the most accurate in all cases, followed by Ishape and Fastphase. In terms of speed, Shape-IT was faster than Ishape and Fastphase for datasets smaller than 100 SNPs, but Fastphase became faster (though still less accurate) on larger SNP datasets.

Conclusion

Shape-IT deserves to be extensively used for regular haplotype inference, but also in the context of the new high-throughput genotyping chips, since it makes it possible to fit the genetic model of Phase v2.1 on large datasets. This new algorithm based on tree representations could be used in other HMM-based haplotype inference software and may apply more broadly to other fields using HMMs.

Background

The recent advent of genotyping chips, which can analyze up to 500,000 single nucleotide polymorphisms (SNPs) per individual, offers a powerful tool for large-scale association studies of human diseases. The most common approach to finding genes possibly implicated in a disease relies on comparing the distributions of SNP markers in patients and controls. An approach to increase the power of such studies is to focus on more complex markers which implicitly capture the linkage disequilibrium (LD) between SNPs: the combinations of SNP alleles on the same chromosome, called haplotypes. Haplotypes are of great interest for studying complex diseases, since they generally derive from chromosomal fragments that are transmitted from one generation to the next or that may have a biological meaning, such as the promoter or the exons of a gene [1]. Beyond the biomedical applications, the comparison of haplotype distributions between populations also provides new insights into the diversity, history and migrations of human populations. For instance, several studies [2-6] have recently highlighted that the genetic diversity of the human genome is organized in regions called haplotype blocks, in which SNPs exhibit a high degree of LD and few common haplotypes. These haplotype blocks are delimited by recombination hotspots, and chromosomes can thus be viewed as mosaics of common haplotypes.
The recently developed HapMap project, dedicated to establishing a dense map of SNPs and LD in various human populations [7-9], has emphasized the interest of haplotypes for studying human diversity. Regular genotyping (based on PCR/sequencing or on chips) provides the genotype at each SNP but does not allow the determination of the haplotypes (i.e., the combination of SNP alleles on each chromosome), and current experimental solutions to this problem are still expensive and time-consuming [10,11]. Clark was the first to introduce a computational alternative [12]: the determination of haplotypes via a parsimony criterion, which leads to a minimal set of haplotypes sufficient to explain the entire population. Since then, efficient statistical algorithms have been developed under the random mating assumption, where the observed genotypes are formed by sampling two unknown haplotypes independently. This assumption, coupled with a probabilistic model for the haplotypes, makes it possible to define the likelihood of the observed genotypes as a function of the model parameters. Thus, in order to infer haplotypes, the most likely parameter values are estimated via an Expectation Maximization (EM) algorithm or a Gibbs sampler (GS) algorithm on the observed genotypes.

The first EM-based model estimated the most likely haplotype frequencies for the observed genotypes without making any assumption on the mutation and recombination history of the haplotypes [13]. Many software packages were built on this simple model, the best known certainly being PL-EM [14]. Later on, two new models were developed based on the idea that haplotypes arise through mutation and recombination events from a few founder haplotypes. In Gerbil [15], haplotype blocks are strictly defined by dynamic programming, and within each block the haplotypes are derived through mutations from founder haplotypes. On the other hand, in Fastphase [16], HIT [17], and HINT [18], both mutation and recombination events on founder haplotypes are simultaneously modeled through a hidden Markov model (HMM). All these methods estimate founder haplotypes from observed genotypes via EM algorithms. For the GS-based algorithms, the general scheme relies on sampling haplotypes for a genotype as a function of all the haplotypes currently assigned to the other genotypes. The model of Haplotyper [19] simply favors haplotypes which have already been assigned to many genotypes. In Phase v1.0 [20], the idea was to favor the sampling of haplotypes which likely coalesce with the already assigned ones. Finally, in Phase v2.1 [21,22], the sampled haplotypes are mosaics of the previously sampled ones, modeled in a HMM. Recently, an alternative to the statistical algorithms was proposed in 2SNP [23], which computes LD measures for all pairs of SNPs and then resolves genotypes by finding maximum spanning trees.

Several studies have suggested that the HMM-based methods are the most accurate for inferring haplotypes [17,18,24], certainly because of the flexible definition of the haplotype blocks, which generally depends on the physical distance between SNPs [16]. Among the HMM-based methods, Phase v2.1 is often considered the most accurate developed so far [24-30], which explains why it is widely used in genetic association studies [31-33] and why it was used to phase the genotype data of the HapMap project [8]. The strength of Phase v2.1 probably comes from two particularities.
First, the HMM is built during the GS iterations with a number of haplotypes proportional to the number of genotypes, in contrast to other HMM-based methods, which define a fixed number of founder haplotypes. Second, the haplotypes are inferred by summing over all the possible hidden state sequences of the HMM (forward algorithm), whereas many other HMM-based methods infer haplotypes by sampling only the most probable hidden sequence in the HMM (Viterbi algorithm). However, the required running time increases dramatically with the number of SNPs, since the search space grows exponentially. This prevents the easy use of Phase v2.1 on the current high-throughput chips. This fact previously motivated us to develop Ishape [27], which matches Phase v2.1's accuracy while maintaining feasible running times. For that, we used a two-step strategy: (1) we defined a limited space of possible haplotypes with a rapid pre-processing algorithm based on bootstrapped EM haplotype estimations; (2) on this limited set of haplotypes, we then used an accurate Phase-like algorithm. The rapidity of the first step is made possible by an iterative implementation of the EM algorithm which avoids any exponential growth of the space of possible haplotypes and includes the SNPs one after the other during the computations. In practice, Ishape runs up to 15 times faster than Phase v2.1 (for up to 100 SNPs) with similar accuracy in populations with high LD, such as Caucasian genomes. In this work, we present major improvements which greatly reduce the computational time of Phase v2.1. These improvements have been implemented in the software package Shape-IT and compared to the widely used competitor software.

Notations (Figure 1)

Let us assume we have a sample of n genotypes G = {G1, ..., Gn} describing the allelic content of n diploid individuals over s SNPs. A genotype is split into a haplotype pair by setting the phases between its z heterozygous SNPs (z ≤ s); the number of distinct haplotype pairs consistent with a genotype is then 2^(z−1). Let S = {S1, ..., Sn} denote the total haplotype space, where Si is the space of possible haplotype pairs associated with the i-th genotype. Moreover, we assume we have the recombination parameters ρ = {ρ1, ..., ρ(s−1)} for the s−1 intervals between the s SNPs of the sample, as described by Stephens et al [22].

Gibbs sampler algorithm

The GS algorithm considers the haplotype reconstructions of the n individuals as a set of n random variables H = {H1, ..., Hn} with sampling spaces in S, and it estimates the conditional joint distribution of H given G and some recombination parameters ρ: Pr(H | G, ρ). In simple words, it computes a conditional probability for each haplotype pair of S in light of the observed genotypes G and the recombination pattern between the SNPs. Given these probabilities, the haplotype frequencies and the most likely haplotype pair for each genotype are straightforward to obtain. Each haplotype pair Hi is sampled from the conditional distribution proposed by Fearnhead and Donnelly [34] and Li and Stephens [35], called the FDLS distribution in the following; it is computed thanks to a hidden Markov model for haplotypes described in the next section. The important fact here is that the computation of Pr(Hi | H−i, ρ) constitutes the most time-consuming part of the GS, since it has to be done over a space of possible haplotype pairs which grows exponentially with the number of heterozygous SNPs.

An iteration of the GS algorithm updates successively the haplotypes of the n individuals of G in a randomly initialized order of treatment. Between iterations, according to the Metropolis-Hastings acceptance rates described by Stephens et al [22], we accept or reject updates of the recombination parameters ρ.
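The overall structure of the sampler can be sketched in a few lines. This is a schematic only: sample_fdls and update_rho are assumed callbacks standing in for the FDLS sampling and the Metropolis-Hastings update of ρ, not the actual Shape-IT routines.

```python
import random

def initial_pair(genotype):
    """Arbitrary starting phase for a genotype coded 0/1/2 per SNP:
    put one alternate allele of each heterozygous SNP on haplotype 1."""
    h1 = [0 if a == 0 else 1 for a in genotype]
    h2 = [0 if a <= 1 else 1 for a in genotype]
    return h1, h2

def gibbs_sampler(genotypes, rho, n_iter, sample_fdls, update_rho):
    """Schematic Gibbs sampler over haplotype reconstructions H = {H_1..H_n}."""
    n = len(genotypes)
    H = [initial_pair(g) for g in genotypes]
    for _ in range(n_iter):
        for i in random.sample(range(n), n):    # random order of treatment
            H[i] = sample_fdls(i, H, rho)       # draw from the FDLS distribution
        rho = update_rho(H, rho)                # Metropolis-Hastings step on rho
    return H
```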
An iteration of the GS algorithm corresponds to update successively the haplotypes of the n individuals of G given a randomly initialized order of treatment. Between iterations, according to the Metropolis Hasting acceptance rates described by Stephens et al [22], we accept or reject Computation of a haplotype pair probability in a HMM (Figure 2) First of all, we assume that genotypes are produced by sampling independently two haplotypes according to their respective probabilities, which yields: The conditional probability π of haplotype h reflects how likely h corresponds to an "imperfect mosaic" of the other haplotypes {h 1 , ..., h 2n-2 } [22]. The underlying idea is that haplotype h has been probably created through the generations as a recombined sequence of haplotypes from the pool {h 1 , ..., 1 ≤ j ≤ s and 1 ≤ k ≤ 2n-2. A hidden state q j (k) of λ corresponds to the allele of haplotype h k at SNP j and it is linked to all the hidden states q j+1 (l) (1 ≤ l ≤ 2n-2) at SNP j+1 in order to model all the possible recombination jumps of haplotypes between SNPs j and j+1 ( Figure 2). Then, a sequence of s hidden states in λ through the s SNPs corresponds to a particular mosaic of {h 1 , ..., h 2n-2 }. The probability of observing h = {o 1 , ..., o s } in λ is computed thanks to transition probabilities between hidden states which mimic recombination and thanks to emission probabilities from hidden alleles to observed alleles which mimic mutation. Similar hidden Markov models have been proposed, but they generally rely on a limited number of founder haplotypes where the most likely transition and emission probabilities are estimated from observed genotype data via an EM algorithm [17,18]. Here, the emission and transition probabilities are defined with prior distributions depending respectively on a constant mutation parameter and on the variable recombination parameters ρ . The objective of this section is not to fully describe the probabilistic model of transitions and emissions since this has already been done by Stephens and Scheet [22]. Instead, we focus on how the haplotype probability is computed in such a HMM λ from transition and emission probabilities. We thus assume that the following quantities are known as set up by Stephens and Scheet: • The transition probability a j (l,k) from the state q j (l) of haplotype h l for SNP j to the state q j+1 (k) of haplotype h k for SNP j+1. If l ≠ k then a j (l,k) is the probability for h l to be recombined with h k between SNP j and SNP j+1 (large dashed arrows in Figure 2). And conversely, if l = k then a j (l,l) is the probability for h l to be not recombined between the two SNPs (plain arrows in Figure 2). • The emission probability b j (k) of the hidden allele of q j (k) in the observed allele o j of h (small dashed arrows in Figure 2). If the hidden allele is different from the observed one, then b j (k) corresponds to the probability that the hidden allele q j (k) has been altered in o j by a mutation event. Else, b j (k) corresponds to the probability that no mutation has occurred. In the HMM λ, the probability of a hidden states' sequence is given by the product of the corresponding transition probabilities. And the probability to observe h = {o 1 , ..., o s } given a particular hidden states' sequence is obtained by the product of the probabilities for the hidden alleles to be emitted in the observed ones. 
Finally, to compute the probability Pr(h|λ), one must sum up the probabilities of observing h over all (2n-2)^s possible sequences of s hidden states. An alternative to this expensive computational approach is to define a forward probability α_j(k) as the probability for the incomplete observed sequence {o_1, ..., o_j} to be emitted by all the possible hidden sequences that end at state q_j(k). Then, the partial posterior probability π_j of h up to SNP j can be written as follows:

π_j(h | h_1, ..., h_(2n-2), ρ) = Σ_(k=1..2n-2) α_j(k)   (2)

And the total probability of h over the s SNPs becomes: π(h | h_1, ..., h_(2n-2), ρ) = π_s(h | h_1, ..., h_(2n-2), ρ). The computations of α_j(k) for k = 1, ..., 2n-2 and j = 1, ..., s are efficiently done by a recursive algorithm for HMM called the forward algorithm [36]. It starts from the initial values α_1(k) = Pr(q_1(k)) b_1(k), and recursively computes the α_(j+1) values from the α_j values as follows:

α_(j+1)(k) = b_(j+1)(k) Σ_(l=1..2n-2) α_j(l) a_j(l,k)   (3)

Figure 2: Representation of the execution trellis of the hidden Markov model used to compute the probability of a haplotype.

Representing the haplotype pair space S_i as an exhaustive list has two drawbacks. First, as shown in Figure 3A, several haplotypes of S_i differ only in the last SNPs, while the computation of the forward values α starts each time from the first SNP. Second, the list of haplotypes grows exponentially with the number of heterozygous SNPs, which prevents any application with a high number of SNPs. To partially overcome this problem, a "divide and conquer" solution called "partition-ligation" (PL) was first proposed by Niu et al [14,19,21]. It has been included in the Phase v2.1 algorithm as follows: it first divides the genotypes into segments of limited size (typically 5-8 SNPs), determines the most probable haplotypes on each segment with complete runs of the GS, and then progressively ligates haplotypes of the adjacent segments in several runs until completion. When two adjacent segments are ligated, the space S of candidate haplotype pairs is initialized from all combinations of the most probable haplotypes previously found in each segment. However, the PL procedure remains computationally expensive because it implies 2s/p - 1 (where p is the size of the partitions) complete runs of the algorithm, each time on a quadratic number of combinations of adjacent plausible haplotypes.

Computation of the FDLS distribution from a complete binary tree by Shape-IT (Figure 3B)

To compute the FDLS distribution while avoiding any redundant calculations of α values, our algorithm uses a complete binary tree (called the haplotype tree in the following) instead of an exhaustive list to represent the haplotype pair space S_i. It can be viewed as an extension of the forward algorithm which computes the probabilities of observing in the HMM λ several pairs of sequences classified into a binary tree, rather than observing a unique sequence. Such a haplotype tree is easily derived from a partition of genotype G_i into m unambiguous segments g_1, ..., g_m: each one starts from a heterozygous SNP, includes all the following homozygous SNPs, and ends before the next heterozygous SNP. A node of the haplotype tree corresponds to a genotype segment g_j, and its two children nodes correspond to the two possible switch orientations with the following segment, (g_(j+1), ḡ_(j+1)) and (ḡ_(j+1), g_(j+1)), where g_(j+1) and ḡ_(j+1) denote the two complementary haplotype segments. Then, a single path from the root to a leaf corresponds to a single possible haplotype pair of S_i (Figure 3B). To compute the FDLS distribution efficiently, Shape-IT explores the haplotype tree with a single recursive algorithm which combines the reconstruction of the haplotypes and the calculation of the associated α forward values. In practice, it iterates over the nodes in level order (i.e.
segment order) to avoid building the whole haplotype tree in memory beforehand. When visiting a node with the associated genotype segment (g, g'), the algorithm recursively builds a quadruplet q = {h, α, h', α'}, where h and h' are the two haplotypes, with respective forward values α and α', corresponding to the currently explored path in the haplotype tree. Once all the nodes have been visited, the haplotype pairs of S_i and the FDLS distribution are given respectively by the haplotypes and the forward values of the quadruplets associated with the leaf nodes. This approach is implemented in Algorithm 1 (Figure 4). This algorithm avoids all the unnecessary forward value computations made when using the representation by haplotype lists. However, the haplotype tree to be explored still grows exponentially with an increasing number of heterozygous SNPs: it results in a list L whose size is multiplied by two at each level explored (Figure 4). As with the classical haplotype list approach, this algorithm could simply be embedded in a PL strategy: first, a haplotype tree would be derived for each segment of genotype, and then the most probable adjacent subtrees would be determined and combined until completion. We have used an alternative strategy described in the next paragraph.

Computation of the FDLS distribution from an incomplete binary tree by Shape-IT (Figure 3C)

In practice, the number of haplotype pairs sufficiently probable to be sampled in the FDLS distribution grows roughly linearly with the number of SNPs instead of exponentially. As an alternative to the classical and expensive PL strategy, we have thus modified our recursive algorithm to explore only the paths in the haplotype tree which correspond to the most plausible haplotype pairs. In other words, our algorithm aims at identifying an incomplete binary tree that contains only the most plausible haplotype pairs (Figure 3C). For that, recursions are made only on nodes exhibiting a probability, as given by expressions (2) and (1), greater than an initially defined threshold f. In practice, this results in maintaining a list L of quadruplets of limited size for each level of the tree explored, which no longer grows exponentially with the number of heterozygous SNPs. The corresponding modifications made to Algorithm 1 are implemented in Algorithm 2 (Figure 5). Obviously, the value of the threshold f affects the number of quadruplets kept at each level of the haplotype tree and thus the number of haplotype pairs on which the FDLS distribution is computed: it influences the diversity of haplotypes to be captured and, consequently, the computational effort needed. However, the strength of our algorithm clearly lies in the greatly reduced complexity of the FDLS computation step with respect to the number of SNPs. Moreover, compared to the 2s/p - 1 complete runs of the GS required by the PL strategy, it treats all the SNPs in a single run.
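As a rough illustration of this thresholded exploration, the following simplified Python sketch (not the Shape-IT C++ implementation; extend_forward stands for a segment-wise application of the forward recursion sketched earlier, and the probability test is a simplification of expressions (2) and (1)) keeps only the plausible partial haplotype pairs at each level:

def explore_incomplete_tree(root_quadruplets, segments, extend_forward, f):
    """Level-order exploration keeping only plausible partial haplotype pairs.
    Each quadruplet is (h, alpha, h2, alpha2): two partial haplotypes and
    their forward-value vectors over the pool of conditioning haplotypes."""
    level = root_quadruplets
    for g, g_bar in segments:  # each genotype segment and its complement
        next_level = []
        for h, alpha, h2, alpha2 in level:
            # two children: the two phase orientations of the next segment
            for s1, s2 in ((g, g_bar), (g_bar, g)):
                a1 = extend_forward(alpha, s1)
                a2 = extend_forward(alpha2, s2)
                # prune branches whose partial pair probability is below f
                if a1.sum() * a2.sum() > f:
                    next_level.append((h + s1, a1, h2 + s2, a2))
        level = next_level
    return level  # leaves: surviving haplotype pairs and forward values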
Methods

We have implemented our algorithm in the software package Shape-IT, publicly available at http://www.griv.org/shapeit/. We have extensively compared Shape-IT with the widely used haplotype inference software 2snp [23], Gerbil [15], Fastphase [16], PL-EM [14], Ishape [27] and Phase v2.1 [21,22] on 3 kinds of datasets described hereafter. All software were run with default parameters on a standard 2 GHz computer with 1 GB of RAM. In the comparisons, we have tried to work as close as possible to real conditions: on the one hand, we have used tightly linked SNPs, such as those used in single-gene fine mapping, and on the other hand, we have used TagSNPs with a low level of LD, which correspond to the worst conditions to infer haplotypes. Finally, we have also estimated the running times required by the most accurate software to infer the haplotypes of a 300 K Illumina chip.

Figure 4: Algorithm 1 to compute the FDLS distribution on the complete haplotype tree, using expressions (3) and (1). Input: a genotype G_i partitioned into m segments.

Single gene datasets

First, we have used genotypes for which the haplotypes have been completely determined experimentally: the GH1 [37] and ApoE [38] genes. The GH1 dataset contains 14 SNPs for 150 Caucasian individuals and the ApoE dataset contains 9 SNPs for 90 individuals of mixed ethnic origins. For each gene, we have additionally generated 100 replicates by randomly masking 5% of the alleles in order to simulate real experimental conditions (missing data). On these datasets, we have measured the IER (Individual Error Rate) and the MER (Missing data Error Rate), which correspond respectively to the percentage of individuals incorrectly inferred and to the percentage of missing data incorrectly inferred. Although of limited size, these two genes are very useful to compare precisely the haplotype frequency estimations made by the algorithms via the I_F coefficient [25], since haplotype frequencies are commonly used by geneticists in genetic association studies.

HapMap trio datasets

Second, we have worked on trio genotypes (2 parents and 1 child) derived from the HapMap project [7,8]. We have collected five regions of 10 Mb on chromosomes 1, 2, 3, 4 and 5 in African (YRI) or European (CEU) populations. The 10 resulting chromosomal regions have been preprocessed with the Haploview software [39] to remove SNPs with Mendelian inconsistency or with insufficient minor allele frequency (MAF). From these chromosomal regions, we have generated several HapMap datasets according to the choices of markers described in Table 1 [24,27]. On all these trio genotypes, the parent haplotypes can be partially obtained (about ~80% of the phases between adjacent heterozygous SNPs are determined), and we have measured the running times of the various algorithms and the SER (Switch Error Rate) of the haplotypes inferred by the various software. The SER corresponds to the percentage of known phases between adjacent heterozygous SNPs (obtained thanks to the trio affiliations) that are incorrectly inferred [22,27]. It is more adapted than the IER for large numbers of SNPs because the IER does not differentiate between one or several heterozygous SNPs incorrectly inferred.

Figure 5: Algorithm 2 to compute the FDLS distribution on the incomplete haplotype tree, using expressions (2), (3) and (1). Input: a genotype G_i partitioned into m segments.

To investigate the impact of low LD on haplotype inference, we have also used a set of 15,000 adjacent Tag SNPs picked from the large arm of chromosome 12 and found in the 300 K Illumina chips.
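The SER reduces to a simple count over trio-resolved phases; a minimal sketch (Python, illustrative only, with a hypothetical 0/1 phase encoding and None for phases left unresolved by the trios):

def switch_error_rate(true_phases, inferred_phases):
    """SER: percentage of known phases between adjacent heterozygous SNPs
    that are incorrectly inferred. Unknown phases are None and skipped."""
    known = [(t, p) for t, p in zip(true_phases, inferred_phases) if t is not None]
    if not known:
        return 0.0
    errors = sum(t != p for t, p in known)
    return 100.0 * errors / len(known)

# Example: 1 wrong phase out of 4 trio-resolved phases -> SER = 25%
print(switch_error_rate([0, 1, None, 1, 0], [0, 0, 1, 1, 0]))  # 25.0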
GRIV cohort datasets

Third, we have generated large SNP datasets from subjects of the GRIV (Genomics of Resistance to Immunodeficiency Virus) cohort genotyped with the 300 K Illumina chip. The GRIV cohort comprises about 400 Caucasian subjects collected for genomic studies in AIDS [1,40-43]. These datasets were used to estimate the running times required by the most accurate software to infer the haplotypes of a 300 K Illumina chip. For that, we have generated several datasets extracted from the GRIV 300 K Illumina genomic data (see Table 5).

Figure 6: Accuracy of the different values tested for the threshold f in Shape-IT (grey boxes) compared to Phase v2.1 (black line).

Results

Empirical determination of the threshold f (Figure 6)

As discussed in the Algorithm section, Shape-IT relies on a threshold f to discard some branches of the haplotype binary trees. We have therefore tested several values for f: the accuracy is clearly stable for values below 0.01. Since the running time was optimal for f = 0.01, we have used this value as the default in all the following comparisons.

Comparisons on the single gene datasets (Tables 2 and 3)

On these datasets, Shape-IT, Ishape and Phase v2.1 clearly give the best haplotype reconstructions and frequency estimations compared to the other software. One can notice that Ishape seems to be slightly more accurate than Shape-IT and Phase v2.1. For the completion of missing data, all the methods (except 2snp) perform very similarly.

Comparisons on the HapMap trio datasets (Tables 1 and 4)

In terms of accuracy, Shape-IT and Phase v2.1 outperform all the other methods. Ishape comes second but its accuracy drops when dealing with larger numbers of Tag SNPs. Fastphase comes third, but it seems to work relatively better when the datasets get bigger. 2snp, Gerbil, and PL-EM do not match the accuracy of the other software. All the software show higher error rates when the number of Tag SNPs increases, which is probably the consequence of the increasing complexity of the LD pattern when dealing with limited numbers of individuals. In terms of speed, the fastest software is clearly 2snp. For relatively small numbers of SNPs, PL-EM and Gerbil are also very fast, but they become very slow when the number of SNPs increases or when the LD pattern gets more complex to capture. Among the 4 most accurate software (Phase v2.1, Fastphase, Ishape, and Shape-IT), Phase v2.1 is the slowest, Shape-IT is the fastest for small and medium-sized SNP samples (< 100 SNPs), and Fastphase becomes faster for larger numbers of SNPs (see additional file 1).

Comparisons on the GRIV cohort datasets (Table 5)

On these datasets, Shape-IT runs between 15 and 150 times faster than Phase v2.1, depending on the segmentation strategy used (50, 100 or 200 SNPs) and the number of genotypes in the population (100, 200 or 300). Fastphase remains the fastest software, but it is closely followed by Shape-IT. The increase in SNP and genotype numbers strongly cripples Phase v2.1 and Ishape, while it is better handled by Shape-IT and Fastphase.

Discussion and conclusion

We have developed a new algorithm derived from the Phase v2.1 Gibbs sampler scheme. We have improved the most time-consuming steps by using binary tree representations and by avoiding the PL procedure thanks to an incomplete exploration of binary trees. The resulting software, Shape-IT, is extremely accurate like Phase v2.1, but may run up to 150 times faster, as shown in our tests.
These results have an important impact on the computation of haplotypes in genome scans, as shown in Table 5: with a segmentation into 50-SNP windows, Phase v2.1 would require 443 days for 200 individuals (34 times more than Shape-IT) and 1372 days for 300 individuals (49 times more). The gain of time using Shape-IT is thus considerable and practically very useful to exploit datasets derived from large-scale genotyping chips. An important aspect of this work is that other haplotype inference software relying on HMMs may benefit from implementing this new binary tree representation of the observed genotypes. Moreover, we have not found a description of this algorithm in the literature, although it might be useful for other fields using HMMs.

Table 2: For the various software tested, we measured the percentage of individuals incorrectly reconstructed (IER), the percentage of missing data incorrectly inferred (MER), and the distance between real and inferred haplotype frequencies (I_F) on the ApoE gene with complete genotypes and 5% random missing genotypes.

Table 3: For the various software tested, we measured the percentage of individuals incorrectly reconstructed (IER), the percentage of missing data incorrectly inferred (MER), and the distance between real and inferred haplotype frequencies (I_F) on the GH1 gene with complete genotypes and 5% random missing genotypes.

Table 4: Results of the various tested software on the HapMap trio datasets described in Table 1. For each software tested, the mean percentage of heterozygous markers incorrectly inferred (SER) is shown in the upper-left corner, and the mean running time in seconds is shown in the lower-right corner.

Table 5: Estimated running times in days of the 4 most accurate software to infer the haplotypes of 100, 200, or 300 genotypes derived from Illumina 300 K chips partitioned into segments of either 50, 100, or 200 SNPs. For each combination of #SNPs and #genotypes, the running time estimations were extrapolated from the measures performed on 10 datasets extracted from the GRIV cohort 300 K Illumina chip genomic data.

SNPs/segment  #genotypes  Fastphase  Ishape  Shape-IT  Phase v2.1
50            100         10         29      10        151
100           100         6          37      12        519
200           100         6          41      19        3,137
50            200         21         34      13        443
100           200         21         119     29        2,739
200           200         21         124     37        7,601
50            300         37         113     28        1,372
100           300         41         268     52        6,514
200           300         42         261     81        12,757

Availability

Programming language: C++. Do not forget to read the manual file, manual_ShapeITv1.0.pdf, to get detailed information. The software remains confidential until publication of the work. It will be freely available to academics, and a licence will be needed for non-academics (patented for business and commercial applications).
Screening of depression with an assessment of the socioeconomic status of patients in the primary care network in the large industrial city of Eastern Siberia

The aim was to compare the relationship between the severity of depression symptoms and socioeconomic and demographic factors among the unorganized population of Krasnoyarsk in 2006 and 2012, and to compare the prevalence of these symptoms over the analyzed period.

Materials and methods. Two sample groups were selected from the unorganized population permanently residing in the territory of Krasnoyarsk in 2006 and 2012. In both cases, the severity of depression was evaluated with the Hospital Anxiety and Depression Scale, Depression subscale (HADS-D).

Results. In both sample groups, the frequency of depression was associated with age. In 2012, socioeconomic factors of depression were revealed: lack of higher education, widowhood, unemployment and family poverty. A significant decrease in the frequency of increased (39.1% versus 16.4%) and clinical depression (14.6% versus 4.5%) was found over the period from 2006 to 2012.

Conclusions. In 2012, the frequency of an above-normal depression level according to HADS-D in the working-age population was largely determined by the influence of socioeconomic factors. A decrease in the frequency of increased and clinical levels of depression among the adult population of Krasnoyarsk over the period from 2006 to 2012 was established.

INTRODUCTION

In recent years, the problem of growing mood disorders in people of working age has attracted increasing attention from domestic and foreign researchers. The World Health Organization estimates that by 2020, after cardiovascular disease (CVD), depression will be the second major cause of work disability. Meanwhile, in Russia, there is still significant variation in the data on the prevalence of depression in the general medical care network, which is explained by limited screening for depression symptoms at the outpatient stage as well as by the lack of a unified method of diagnosis [1]. Academician A.B. Smulevich developed a technique to detect anxiety and depressive disorders using psychometric scales, in which preference was given to subjective questionnaires. Their completion requires neither the involvement of a psychiatrist nor any special data-interpretation skills from general practitioners [2]. Numerous studies have confirmed the validity of the Hospital Anxiety and Depression Scale (HADS), used in our research, for diagnosing depression in the general medical care network and in the general population. Besides, this questionnaire is simple and requires little time for the patient to fill in [1,3,4]. Currently, depressive disorders play a key role in the development of cardiovascular diseases (CVD). Depression is considered to be a bridge between social factors, such as income level and family material security, and biological risk factors (RF) [5]. It is well known that low social status is associated with an unfavorable behavioral profile (smoking and alcohol abuse), which is triggered by stress and depression [6]. At the same time, there is no consensus on how social factors influence the course of depression in people of working age. A number of researchers believe that a lower risk of depression among working individuals of older age is primarily related to their somatic condition. However, there exists a different point of view claiming that employment is the major factor in protection from depression [6,7]. In the study by O.V.
Tsygankova, the absence of family, low income, age and unemployment demonstrated a strong correlation with a high frequency of subdepression in patients with coronary artery disease (CAD) [8]. According to A.V. Orlov, the high occurrence of depression among the adult population of St. Petersburg is primarily related to low income. In addition, other studies have shown that higher levels of depression are associated with low levels of education rather than with family well-being [5,9]. In the study by E.V. Lebedeva (2018), social adaptation disorders (income management, family problems) showed a significant correlation with affective disorders among patients with CAD [10]. The only major study on the prevalence of depression conducted in the city of Krasnoyarsk indicated a link between depression and hypertension; no studies have been done there on the correlation between depression and socioeconomic factors. Hence, our work is relevant and of scientific value [11,12].

The aim of the study was to assess the correlation between depression symptoms and socioeconomic and demographic factors, as well as to estimate the occurrence of depression symptoms, in two independent sample groups formed from the unorganized population of Krasnoyarsk in 2006 and 2012 using psychometric testing (HADS, Depression subscale).

MATERIALS AND METHODS

This work analyzes two independent studies. The first study was conducted in 2006 within the framework of the regional targeted program "Prevention and treatment of hypertension". The other was conducted in 2012 during the multi-center study "Epidemiology of Cardiovascular Diseases in Regions of the Russian Federation - 2012" (ESSE-RF epidemiological study). The latter is the most recent epidemiological study of the frequency of CVD risk factors, including depression symptoms, among the adult population. The coordinators of the study in Krasnoyarsk, Yu.I. Greenstein and M.M. Petrova, focused on the analysis of the traditional risk factors; attempts to jointly estimate the risk factors and depression have not yet been made [12]. S.A. Shalnova's analysis provides psychometric data for the 10 regions participating in the study; similar data for Krasnoyarsk (frequency of increased-level and clinical depression according to HADS-D, gender aspects of depression) are not available [1]. Moreover, no similar studies of random samples were performed in Krasnoyarsk after 2012. The obtained data can be used to develop measures aimed at timely screening for and prevention of depression in the general medical care network.

In both cases, random samples were formed using the Kish selection grid, taking into account clustering principles and age and gender representation (25-64 years old) [12]. In 2006, 322 people were included in the study in 10 clinics in Krasnoyarsk. The sizes of the representative samples were determined on the basis of the method proposed by V.I. Paniotto (2003), according to which, for a population of more than 100 thousand people, 400 respondents should be screened [13]. 322 people agreed to participate in the survey - 105 men (32.6%) and 217 women (67.4%); the response rate was 80.2%. In 2012, 1,123 patients from 4 clinics were examined; the response rate was 80%. Valid HADS-D data were obtained from 1,120 respondents: 408 men (36.4%) and 712 women (63.6%).
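For illustration, the decrease in increased-level depression reported in the abstract (39.1% of the 322 respondents in 2006 versus 16.4% of the 1,120 respondents in 2012) can be checked with a chi-squared test; a minimal sketch in Python (scipy rather than the authors' SPSS workflow; the counts are reconstructed by rounding from the published percentages, so they are approximate):

from scipy.stats import chi2_contingency

# Approximate counts of respondents with HADS-D >= 8,
# reconstructed from the reported percentages.
dep_2006, n_2006 = round(0.391 * 322), 322    # ~126 of 322
dep_2012, n_2012 = round(0.164 * 1120), 1120  # ~184 of 1,120
table = [[dep_2006, n_2006 - dep_2006],
         [dep_2012, n_2012 - dep_2012]]
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {df}, p = {p:.3g}")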
The sociodemographic factors assessed in 2006 were age, absence of higher education and disability. In 2012, the analyzed factors were absence of higher education, absence of family, widowhood, unemployment, and disability. A comparison of the frequencies of the studied parameters in 2006 and 2012 is shown in Table 1. The manifestation of depression symptoms was assessed with the depression subscale of HADS, whose reliability, sensitivity and specificity in Russia were established during the COMPASS program. It has been proven that, by using this technique with optimal cut-off points, the risk of missing depression is low. When interpreting the results, the total score on the "Depression (D)" subscale was taken into account: 0-7 points - absence of depression; 8-10 points - subclinical depression; 11 points and more - clinical depression; 8 points and more - increased depression level, i.e., the combined indicator of subclinical and clinical depression [3,5]. The data were statistically processed by means of SPSS version 23 (USA) and Microsoft Excel (2010) spreadsheets. The study applied non-parametric criteria: the Mann-Whitney U test for paired comparisons and the Kruskal-Wallis test for multiple comparisons. Quantitative data are presented as the median (Me) with lower (Q25) and upper (Q75) percentiles; qualitative data are presented as relative frequencies (%). Qualitative comparisons were performed using Pearson's chi-squared test (χ2), taking into account the degrees of freedom (df). The critical level of statistical significance in the null hypothesis tests was assumed to be 0.05 or less. The article discusses only statistically significant relationships.

RESULTS

The results of the assessment of depression symptoms in both studies are presented in Tables 2 and 3. In 2012, the incidence of depression symptoms showed a pronounced dependence on risk factors. In 2006, absence of higher education did not affect the frequency of an increased depression level (χ2 = 1.3, df = 1, p = 0.262), but in 2012 this factor was associated with a higher frequency of an increased depression level (χ2 = 6.6, df = 1, p = 0.010). In 2012, widowhood was associated with a higher incidence of an increased depression level (χ2 = 6.6, df = 1, p = 0.010), and unemployment with a three-fold increase in the likelihood of developing clinical depression (χ2 = 6.8, df = 1, p = 0.009 with Yates's correction for continuity). In families with low material well-being, increased and clinical depression levels were significantly more common than in families with better material security (χ2 = 21.3, df = 1, p < 0.001 and χ2 = 14.3, df = 1, p < 0.001, respectively). To analyze the relationship between depression and age, individuals were divided into 2 groups: 25 to 44 years old and 45 to 64 years old. In 2006, the analysis showed a predominance of persons with increased (χ2 = 10.8, df = 1, p = 0.001) and clinical depression (χ2 = 7.8, df = 1, p = 0.005) in the older age group (≥45 years). In 2012, the association of age with the incidence of depression symptoms was weaker in both strength and statistical significance, and it reached significant differences between groups only with regard to clinical depression (χ2 = 5.5, df = 1, p = 0.019).

According to the literature, symptoms of depression among women are 2-4 times more common (especially in the postmenopausal period) than among men. In the HAPIEE study, in a sample of 2,151 respondents aged 45-64 years, the frequency of depression according to the Center for Epidemiologic Studies Depression (CES-D) scale reached 44% among women and 23% among men. In the work by O.V.
Tsygankova, among 245 patients with CAD aged 35-65 years, the probability of detecting subdepression using the Zung Self-Rating Depression Scale was 41.2% in women versus 24.0% in men [7,8,14]. At the same time, in Krasnoyarsk, the groups of patients with depressive disorders in 2004-2008 were comparable in terms of gender composition [11]. It should also be noted that in the ESSE-RF study across 10 regions of Russia, men and women differed only slightly in the frequency of depression symptoms: 20.6% of women and 20.0% of men had HADS-D ≥ 8. In the work by F.N. Jacka (2011), which studied 2,957 Norwegians aged 46-49 years, an above-normal depression level according to HADS was registered in 9.6% of men and 7.6% of women [1,15]. Our study confirms that low education level and family poverty are factors associated with low social status and a high prevalence of depression [1,11]. In 2012, the link between the prevalence of depression symptoms and family poverty was stronger and more significant than the link with a low level of education.

DISCUSSION

In 2004-2008, in Krasnoyarsk, patients with depressive disorders according to the Beck Depression Inventory had a lower level of education [11]. In the study by A.V. Orlov, among 1,600 residents of St. Petersburg aged 25-64 years, regression analysis with correction for gender and biological risk factors did not confirm a correlation between HADS depression symptoms and a low level of education [5]. Loneliness and the death of loved ones are predominant risk factors for depression [6]. According to our data, the frequency of depression was highest in the oldest age group of 55-64 years (26.7%). In 2006, the frequency of depression symptoms did not depend on the level of employment; unemployment was associated with a greater frequency of clinical depression in 2012. At the same time, as presented in the work by O.V. Tsygankova, subsyndromal manifestations of depression in women show a stronger connection with unemployment than with age and the absence of family [8]. There is an apparent trend toward a weakening of the correlation between age and the frequency of depression. According to V.V. Gafarov, an age of 45 years and older does not affect the prevalence of depression [16]. Meanwhile, in both of our studies, individuals with clinical depression according to the HADS-D scale were most likely to be in the older age group. A similar increase in the gradient of depression symptoms with age was noted in S.A. Shalnova's study; however, in the HAPIEE study among residents of Novosibirsk, this association turned out to be insignificant [1,7,16]. The results of the study are significantly limited by a number of factors: differences in the ages of the sample groups; differences in the number of people with disabilities in the two groups (which can be explained by natural mortality over a period of 6 years); and differences in the number of people with higher education (29.2% versus 46.3%, probably due to population growth by 2012). At the same time, disability and low education level in 2006 did not affect the variability of depression symptoms. Hence, some aspects of our study should be investigated further.

CONCLUSION

Nowadays, the effect of socioeconomic factors on the variability of depression symptoms has increased. While in 2006 such correlations were not observed, in 2012 we were able to identify the social categories of individuals that are most susceptible to depression.
Over the period from 2006 to 2012, a decrease in the frequency of increased and clinical depression levels according to HADS-D was observed among the population of Krasnoyarsk.
A Simple and Efficient Sampling-based Algorithm for General Reachability Analysis

In this work, we analyze an efficient sampling-based algorithm for general-purpose reachability analysis, which remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems. By sampling inputs, evaluating their images in the true reachable set, and taking their ε-padded convex hull as a set estimator, this algorithm applies to general problem settings and is simple to implement. Our main contribution is the derivation of asymptotic and finite-sample accuracy guarantees using random set theory. This analysis informs algorithmic design to obtain an ε-close reachable set approximation with high probability, provides insights into which reachability problems are most challenging, and motivates safety-critical applications of the technique. On a neural network verification task, we show that this approach is more accurate and significantly faster than prior work. Informed by our analysis, we also design a robust model predictive controller that we demonstrate in hardware experiments.

Introduction

Figure 1: ε-RANDUP consists of three simple steps: 1) sampling M inputs x_i in X, 2) propagating these inputs through the reachability map f, and 3) taking the ε-padded convex hull Ŷ_M to approximate the reachable set Y.

Forward reachability analysis entails characterizing the reachable set of outputs of a given function corresponding to a set of inputs. This type of analysis underpins a plethora of applications in model predictive control, neural network verification, and safety analysis of dynamical systems. Sampling-based reachability analysis techniques are a particularly simple class of methods to implement; however, conventional wisdom suggests that if insufficient representative samples are considered, these methods may not be robust in that they cannot rule out edge cases missed by the sampling procedure. Alternatively, by leveraging structure in specific problem formulations or computational methods designed for exhaustivity (e.g., branch and bound), a large range of algorithms with deterministic accuracy and performance guarantees have been developed. However, these methods often sacrifice simplicity and generality for their power, motivating the development of algorithms that avoid such restrictions. In this work, we analyze a simple yet efficient sampling-based algorithm for general-purpose reachability analysis. As depicted in Figure 1, it consists of 1) sampling inputs, 2) propagating these inputs, and 3) taking the ε-padded convex hull of these output samples. We refer to this RANDomized Uncertainty Propagation algorithm as ε-RANDUP: it is simple to implement, benefits from statistical accuracy guarantees, and applies to a wide range of problems including reachability analysis of uncertain dynamical systems with neural network controllers. Importantly, ε-RANDUP fulfills key desiderata that a general-purpose reachability analysis algorithm should satisfy:

• it works with any choice of possibly nonlinear reachability maps and non-convex input sets,

• its estimate of the reachable set is conservative with high probability and tighter than prior work,

• it is efficient and does not require precomputations, which is a key advantage for learning-based control applications where uncertainty bounds and models are updated in real-time.
Our main contribution is a thorough analysis of the statistical properties of ε-RANDUP. Specifically:

1. We prove that the set estimator converges to the ε-padded convex hull of the true reachable set as the number of samples increases. Our assumption about the sampling distribution is weaker than in related work and implies that sampling the boundary of the input set is sufficient. This asymptotic result justifies using ε-RANDUP as a trustworthy baseline for offline validation whenever the reachability map and the input set are complex and no tractable algorithm exists.

2. We derive a finite-sample bound for the Hausdorff distance between the output of ε-RANDUP and the convex hull of the true reachable set, assuming that the reachability map is Lipschitz continuous. This result informs algorithmic design (e.g., how to choose the number of samples to obtain an ε-accurate approximation with high probability), sheds insight into which problems are most challenging, and motivates using this simple algorithm in safety-critical applications.

We demonstrate ε-RANDUP on a neural network controller verification task and show that it is highly competitive with prior work. We also embed this algorithm within a robust model predictive controller and present hardware results demonstrating the reliability of the approach.

Related work

Reachability analysis has found a wide range of applications, ranging from model predictive control (Schürmann et al., 2018), robotics (Shao et al., 2021; Lew et al., 2022), and neural network verification (Tran et al., 2019; Hu et al., 2020), to orbital mechanics (Wittig et al., 2015). Reachability analysis is particularly relevant in safety-critical applications which require the strict satisfaction of specifications. For instance, a drone transporting a package should never collide with obstacles and should respect velocity bounds for any payload mass in a bounded input set. In contrast to stochastic problem formulations, which typically consider the inputs as random variables with known probability distributions (Webb et al., 2019; Sinha et al., 2020; Devonport and Arcak, 2020), we consider robust formulations, which are of interest whenever minimal information about the inputs is available. Deterministic algorithms are often tailored to the particular parameterization of the reachability map and to the shape of the input set. For instance, one finds methods specifically designed for neural networks (Tran et al., 2019; Ivanov et al., 2019; Hu et al., 2020), nonlinear hybrid systems (Chen et al., 2013; Kong et al., 2015), linear dynamical systems with zonotopic (Girard, 2005) and ellipsoidal (Kurzhanski and Varaiya, 2000) parameter sets, etc. We refer to (Liu et al., 2021) and (Althoff et al., 2021) for recent comprehensive surveys. Such algorithms have deterministic accuracy guarantees but require problem-specific structure that restricts the class of systems they apply to. Given the wide range of applications of reachability analysis, there is a pressing need for the development and analysis of simple algorithms that can be applied to general problem formulations. On the other hand, sampling-based algorithms reconstruct the reachable set from sampled outputs. The stochasticity is typically controlled by the engineer, who selects the number of samples and their distribution. A key strength of this methodology is the possible use of black-box models with arbitrary input sets, which allows using complex simulators of the system.
For instance, kernel-based methods (De Vito et al., 2014; Rudi et al., 2017; Thorpe et al., 2021) have been proposed as a strong approach for data-driven reachability analysis. Kernel-based methods are highly expressive, as selecting a completely separating kernel (De Vito et al., 2014) enables reconstructing any closed set to arbitrary precision given enough samples. Their main drawback is the potentially expensive evaluation of the estimator for a large number of samples. Its implicit representation as a level set is also not particularly convenient for downstream applications. Sampling-based reachable set estimators with pre-specified shapes have been proposed to simplify computations and downstream applications. Recently, (Lew and Pavone, 2020) proposed to approximate reachable sets with the convex hull of the samples, but this approach is not guaranteed to return a conservative approximation. Ellipsoidal and rectangular sets are computed in (Devonport and Arcak, 2020) using the scenario approach, but this work tackles a different problem formulation with inputs that are random variables with known distribution. To tackle the robust reachability analysis problem setting, (Gruenbacher et al., 2022) use a ball estimator that bounds the samples. Their statistical analysis is restricted to ball-parameterized input sets, uniform sampling distributions, and smooth diffeomorphic reachability maps that represent the solution of a neural ordinary differential equation (Chen et al., 2018) from the input set. In practice, using an outer-bounding ball is more conservative than taking the convex hull of the samples, see Section 6. In this work, we slightly modify RANDUP (Lew and Pavone, 2020) with an additional ε-padding step to yield finite-sample outer-approximation guarantees. Our analysis leverages random set theory (Matheron, 1975; Molchanov, 2017), which provides a natural mathematical framework to analyze the reachable set estimator. We characterize its accuracy using the Hausdorff distance to the convex hull of the true reachable set, which provides an intuitive error measure that can be directly used for downstream control applications. Our analysis draws inspiration from the vast literature on statistical geometric inference, which proposes different set estimators including unions of balls (Devroye and Wise, 1980; Baillo and Cuevas, 2001), convex hulls (Ripley and Rasson, 1977; Schneider, 1988; Dumbgen and Walther, 1996), r-convex hulls (Rodriguez-Casal and Saavedra-Nieves, 2016, 2019; Arias-Castro et al., 2019), Delaunay complexes (Boissonnat and Ghosh, 2013; Aamari, 2017; Aamari and Levrard, 2018), and kernel-based estimators (De Vito et al., 2014; Rudi et al., 2017). This research typically makes assumptions about the set to be reconstructed (e.g., that it is convex (Dumbgen and Walther, 1996) or has bounded reach (Cuevas, 2009)) and considers points that are directly sampled from this set. In this work, we derive similar results for reachable sets given known properties of the input set, the reachability map, and the chosen input sampling distribution.

Problem definition

In this section, we introduce our notation and problem formulation. Due to space constraints, we leave measure-theoretic details to Appendix A.
We denote by λ(·) the Lebesgue measure over R^p, by Γ(·) the gamma function, by H(A) the convex hull of a subset A ⊂ R^n, by A^c = R^n \ A its complement, by ∂A its boundary, by ⊕ the Minkowski sum, by B(x, r) := {y ∈ R^n : ‖y − x‖ ≤ r} the closed ball of center x ∈ R^n and radius r ≥ 0, and by B̊(x, r) the open ball. The family of nonempty compact subsets of R^n is denoted by K. For any A ∈ K and d > 0, D(A, d) := min{m ∈ N : there exist a_1, ..., a_m ∈ R^n with A ⊂ B(a_1, d) ∪ ... ∪ B(a_m, d)} denotes the d-covering number of A.

Let X ⊂ R^p be a compact nonempty set of inputs and let f : R^p → R^n be a continuous function. In this work, we tackle the general problem of reachability analysis, i.e., characterizing the set of reachable outputs y = f(x) for all possible inputs x ∈ X. This problem is also often referred to as uncertainty propagation. Mathematically, the objective consists of efficiently computing an accurate approximation of the reachable set Y ⊂ R^n, which is defined as

Y := f(X) = {f(x) : x ∈ X}.   (1)

To tackle this problem, ε-RANDUP relies on the choice of three parameters: a number of samples M ∈ N, a padding constant ε > 0, and a sampling distribution P_X on measurable subsets of R^p. As depicted in Figure 1, ε-RANDUP consists of sampling M independent identically-distributed inputs x_i in X according to P_X, of evaluating each output y_i = f(x_i), and of computing the ε-padded convex hull

Ŷ_M := H({y_1, ..., y_M}) ⊕ B(0, ε).   (2)

Our analysis hinges on the observation that the reachable set estimator Ŷ_M is a random compact set, i.e., Ŷ_M is a random variable taking values in the family of nonempty compact sets K. We refer to Appendix A for rigorous definitions using random set theory. Intuitively, different input samples x_i induce different output samples y_i and thus different set estimates Ŷ_M. To quantify the accuracy of the estimator, we use the Hausdorff distance, defined for any A, B ∈ K as

d_H(A, B) := max{ sup_(a∈A) inf_(b∈B) ‖a − b‖, sup_(b∈B) inf_(a∈A) ‖a − b‖ }.   (3)

This metric induces a topology and an associated σ-algebra, which enables rigorously defining random compact sets as random variables and describing their convergence; see Appendix A. Interestingly, the distribution of a random compact set is characterized by the probability that it intersects any given compact set. We use this fact in Sections 4 and 5, where we characterize the probability that the set estimator Ŷ_M intersects well-chosen sets along the boundary of the true reachable set. By analyzing the distribution of Ŷ_M, this approach allows bounding the Hausdorff distance between Ŷ_M and the convex hull of the true reachable set H(Y) with high probability.
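To make the three steps concrete, here is a minimal sketch in Python/NumPy (illustrative only; not the released implementation, and the example map and the boundary sampler are our own choices — the asymptotic analysis below shows boundary sampling can suffice):

import numpy as np
from scipy.spatial import ConvexHull

def eps_randup(f, sample_input, M, eps, rng=np.random.default_rng(0)):
    """Returns (hull vertices of the output samples, eps).
    The estimated reachable set is H(vertices) dilated by B(0, eps)."""
    xs = np.stack([sample_input(rng) for _ in range(M)])  # M inputs in X
    ys = np.stack([f(x) for x in xs])                     # images f(x_i)
    hull = ConvexHull(ys)
    return ys[hull.vertices], eps

# Example: X = unit disk, sampled uniformly on its boundary; f mildly nonlinear.
def boundary_sampler(rng):
    theta = rng.uniform(0, 2 * np.pi)
    return np.array([np.cos(theta), np.sin(theta)])

verts, eps = eps_randup(lambda x: np.array([2 * x[0], x[1] + 0.1 * x[0] ** 2]),
                        boundary_sampler, M=1000, eps=0.02)
print(len(verts), "hull vertices; pad by", eps)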
Asymptotic analysis

In this section, we provide an asymptotic analysis under minimal assumptions about the input set and the reachability map (namely, that X is compact and f is continuous). To enable the reconstruction of the true convex hull H(Y) using the sampling-based set estimator Ŷ_M, we make one assumption about the sampling distribution P_X for the inputs x_i. Note that by definition, P_X(X) = 1.

Assumption 1 P_X(f^(−1)(A)) > 0 for any open set A ⊂ R^n such that ∂Y ∩ A ≠ ∅.

This assumption states that the probability of sampling an output arbitrarily close to any point on the boundary of the true reachable set is strictly positive. In other words, the boundary of the reachable set should be contained in the support of the distribution of the output samples y_i. Assumption 1 is weaker than the associated assumption in (Lew and Pavone, 2020, Theorem 2), which can be restated as "P_X(f^(−1)(A)) > 0 for any open set A ⊂ R^n such that Y ∩ A ≠ ∅". Indeed, Assumption 1 only considers open neighborhoods of the boundary ∂Y, as opposed to all open sets intersecting Y. Selecting a sampling distribution P_X that satisfies Assumption 1 is easy. For instance, if X has a smooth boundary (see Assumption 4), then the uniform distribution over X satisfies Assumption 1. Assumption 1 is sufficient to prove that the random set estimator Ŷ_M converges to the ε-padded convex hull of Y as the number of samples M increases. Below, we prove a more general result which allows for variations of the padding radius as the number of samples increases.

Theorem 1 Let (ε_M) be a sequence of padding radii converging to some constant ε̄ ≥ 0, and let Ŷ_M denote the estimator (2) with padding radius ε_M. Then, under Assumption 1, Ŷ_M almost surely converges to H(Y) ⊕ B(0, ε̄) in the Hausdorff distance as M → ∞.

Proof We refer to Appendix B.1. We leverage (Molchanov, 2017, Proposition 1.7.23), which states sufficient conditions for the convergence of random compact sets, and use properties of the convex hull to relax the corresponding assumption in (Lew and Pavone, 2020) with Assumption 1.

Practically, Theorem 1 justifies using ε-RANDUP for general continuous maps f and compact sets X. This consistency result implies that choosing any converging sequence of padding radii (e.g., ε_M = 1/M) guarantees the convergence of the random set estimator Ŷ_M to the ε̄-padded convex hull of the true reachable set. As a particular case, selecting a constant padding radius ε_M = ε (which yields ε-RANDUP) guarantees that Ŷ_M converges to the ε-padded convex hull H(Y) ⊕ B(0, ε). Compared to (Lew and Pavone, 2020, Theorem 2), which only treats the case of constant zero padding radii ε_M = ε̄ = 0 (i.e., without ε-padding the convex hull of the output samples), Theorem 1 allows for variations of the padding radii ε_M and is proved under weaker assumptions. Instead of relying on ε-covering arguments (e.g., see Corollary 1 in (Dumbgen and Walther, 1996), which assumes that Y is convex), we use (Molchanov, 2017, Proposition 1.7.23) to conclude asymptotic convergence. This proof technique allows deriving a general result that does not depend on the exact sampling density along the boundary ∂Y and uses a sequence of padding radii ε_M converging arbitrarily slowly to some constant ε̄ ≥ 0.

Finite-sample analysis

Theorem 1 provides asymptotic convergence guarantees that support the application of ε-RANDUP in general scenarios (e.g., as a baseline for offline validation in complex problem settings), but it does not provide the finite-sample guarantees that are of practical interest in safety-critical applications. Deriving stronger statistical guarantees requires leveraging more information about the structure of the problem. We derive finite-sample rates under general assumptions in Section 5.1 and analyze a particular case in Section 5.2. We discuss practical implications of our results in Section 5.3.

General finite-sample statistical guarantees

To derive convergence rates and outer-approximation guarantees given a finite number of samples M, we first make an assumption about the smoothness of the reachability map f.

Assumption 2 The reachability map f : R^p → R^n is L-Lipschitz continuous, i.e., ‖f(x) − f(x′)‖ ≤ L‖x − x′‖ for all x, x′ ∈ R^p.

Next, we make an assumption about the sampling distribution P_X along the input set boundary ∂X.

Assumption 3 There exists a constant Λ_L > 0 such that P_X(B(x, ε/(2L))) ≥ Λ_L for all x ∈ ∂X.

Given any boundary input x ∈ ∂X, the constant Λ_L characterizes the probability of sampling an input x_i that is ε/(2L)-close to x. Selecting a sampling distribution that satisfies Assumption 3 is simple; we provide examples in Sections 5.2 and 6. As we show next, these two assumptions are sufficient to derive finite-sample convergence rates for ε-RANDUP. Recall that D(∂X, d) denotes the d-covering number of ∂X, which is necessarily finite by the compactness of X.

Theorem 2 Let δ_M := D(∂X, ε/(2L)) (1 − Λ_L)^M. Then, under Assumptions 2 and 3 and assuming that ∂Y ⊆ f(∂X), with probability at least 1 − δ_M, H(Y) ⊆ Ŷ_M and d_H(Ŷ_M, H(Y)) ≤ ε.

Proof We refer to Appendix B.2 for a complete proof.
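Inverting the bound of Theorem 2 gives a rule of thumb for choosing M: requiring δ_M ≤ δ yields M ≥ log(δ/D)/log(1 − Λ_L). A minimal sketch (Python; the covering number is replaced by the generic and typically conservative bound quoted in Section 5.3, so the output is an assumption-laden estimate, not a tight value):

import math

def required_samples(eps, L, delta, d, n, Lambda_L):
    """Smallest M with D * (1 - Lambda_L)**M <= delta, using the generic
    covering-number bound D(dX, r) <= (2*d*sqrt(n)/r)**n with r = eps/(2L),
    where d = sup_{x in dX} ||x||. All inputs are user-supplied assumptions."""
    D = (2 * d * math.sqrt(n) / (eps / (2 * L))) ** n
    return math.ceil(math.log(delta / D) / math.log(1 - Lambda_L))

# e.g. eps = 0.02, L = 1, failure probability 1e-4 on a planar problem:
print(required_samples(eps=0.02, L=1.0, delta=1e-4, d=1.0, n=2, Lambda_L=0.02))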
Using a similar analysis, one could derive convergence rates for the ε-padded union-of-balls estimator (Devroye and Wise, 1980; Baillo and Cuevas, 2001) that would depend on the ε-covering number of the entire input set, D(X, ε). In the general case, D(∂X, ε) ≤ D(X, ε): Theorem 2 indicates that using a convex hull is more sample-efficient than a union of balls (assuming that ∂Y ⊆ f(∂X), see Appendix B.2 for further details). It is better suited if Y is convex or if an approximation of H(Y) is sufficient for the downstream application, as is usual in control applications, which typically use convex reachable set approximations, see (Lew and Pavone, 2020).

Figure 2 (bottom): if X^c is not r-convex, it is still possible to find a conservative approximation that is r-convex.

Analysis of a particular setting: smooth input set and continuous distribution

In many applications, the boundary of the input set is smooth (e.g., X is a 2-norm ball). In this setting, we can apply Theorem 2 to derive finite-sample guarantees for general continuous sampling distributions. We state this smoothness assumption below.

Assumption 4 X^c is r-convex for some r > 0. Equivalently, for any x ∈ ∂X, there exists x̄ ∈ X such that x ∈ B(x̄, r) ⊆ X.

Assumption 4 guarantees that for any parameter x on the boundary ∂X, one can find a ball of radius r contained in X that also contains x, see Figure 2. This assumption corresponds to a general inwards-curvature condition on the boundary ∂X. It is a common assumption in the literature (Walther, 1997; Rodriguez-Casal and Saavedra-Nieves, 2016, 2019; Arias-Castro et al., 2019) and is related to the notion of reach (Federer, 1959; Cuevas, 2009; Aamari, 2017), which bounds the curvature of the boundary ∂X. To guarantee its satisfaction, one can replace X with X ⊕ B(0, r) (Walther, 1997) before performing reachability analysis, which would yield a more conservative estimate of Y. Next, we state an assumption about the sampling distribution P_X: the sampling distribution admits a lower-bounded continuous density.

Assumption 5 There exist a constant p_0 > 0 and a continuous density function p_X : X → R with p_X(x) ≥ p_0 for all x ∈ X, such that P_X(A) = ∫_A p_X(x) dx for any measurable subset A ⊂ X.

For instance, the uniform distribution over X satisfies this assumption. Similarly to Assumption 3, this density assumption can be relaxed to neighborhoods of ∂X; we leave this extension for future work. We obtain the following corollary.

Corollary 1 Let δ_M := D(∂X, ε/(2L)) (1 − p_0 Λ_(r,L))^M. Then, under Assumptions 2, 4 and 5 and assuming that ∂Y ⊆ f(∂X), with probability at least 1 − δ_M, H(Y) ⊆ Ŷ_M and d_H(Ŷ_M, H(Y)) ≤ ε.

Proof We refer to Appendix B.3. We first prove that Assumptions 4 and 5 imply that Assumption 3 holds with Λ_L = p_0 Λ_(r,L). The finite-sample bound then follows by applying Theorem 2.

The constant Λ_(r,L) corresponds to the p-dimensional Lebesgue volume of two hyperspherical caps and can be computed analytically, see (Li, 2011; Petitjean, 2013) and Appendix C.

Insights: the difficulty of reachability analysis and algorithmic design

Theorem 2 reveals which characteristics of the problem make reachability analysis challenging:

• Assuming the smoothness of f is necessary: given an input set X and a sampling distribution P_X, one can construct problems for which sampling-based reachability analysis algorithms require arbitrarily many samples to compute an ε-accurate approximation of Y, see Section 6.1. To derive finite-sample rates, assuming that the reachability map f is L-Lipschitz (Assumption 2) is necessary if only assumptions on input coverage density (Assumption 3) are available.
• The smoother the easier: a smaller Lipschitz constant L and a larger radius parameter r induce tighter bounds in Theorem 2, requiring a smaller number of samples M to obtain a desired accuracy with high probability 1 − δ_M. Indeed, such conditions guarantee a lower bound on the probability of sampling outputs y_i = f(x_i) ∈ Y that are close to the boundary ∂Y, which is necessary to accurately reconstruct the true convex hull of the reachable set from samples.

• Scalability: by Theorem 2, the number of samples required to reach a desired ε-accuracy with high probability depends on the covering number. This constant characterizes the size of the parameter space in terms of dimensionality (the number of different parameters) and volume (variations of each parameter). Given any X ∈ K and d = sup_(x∈∂X) ‖x‖, a simple and general bound for the covering number is D(∂X, ε) ≤ (2d√n/ε)^n (Shalev-Shwartz and Ben-David, 2009).

Results and applications

We perform a sensitivity analysis in Section 6.1 to illustrate the insights from Theorem 2. In Section 6.2, we compute the reachable sets of a dynamical system with a simple neural network policy and compare with prior work. Finally, in Section 6.3, we embed ε-RANDUP in a model predictive control (MPC) framework to reliably control a robotic platform. Our code and hardware results are available at https://github.com/StanfordASL/RandUP and https://youtu.be/sDkblTwPuEg. All computation times are measured on a computer with a 3.70GHz Intel Core i7-8700K CPU.

Sensitivity analysis

We analyze the sensitivity of ε-RANDUP to the sampling distribution and the smoothness of the reachability map. We consider a 2-dimensional input ball X = B(0, 1) and the map f(x) = (Lx_1, x_2) with L ≥ 1. Clearly, X^c is 1-convex and f is L-Lipschitz continuous, so Corollary 1 applies for any sampling distribution satisfying Assumption 5. We consider a distribution P_X^α that depends on a parameter α ≥ 1, such that P_X^α varies from a uniform distribution over X for α = 1 to a uniform distribution over the boundary ∂X as α → ∞. Given δ_M = 10^(−3), we determine the minimum padding ε guaranteed by our finite-sample bounds. We take M = 1000 samples and present results in Figure 3. We observe better performance than the predicted finite-sample bounds, and distributions with a higher probability of sampling close to the boundary (i.e., larger values of α) perform better, corresponding to lower Hausdorff distance errors. Also, ε-RANDUP performs better on problems with smoother reachability maps, as is visible from both our empirical evaluation and our theoretical bounds on the Hausdorff distance. This validates the discussion in Section 5.3.

Verification of neural network controllers

Figure 4: Reachable sets computed in Section 6.2 for a total prediction horizon N = 9. Sets from the formal method REACHLP are shown in green; dashed sets correspond to no input splitting, straight-line sets correspond to splitting X_0 into 16 components. We use M = 10^3 samples for all sampling-based methods and ε = 0.02.

Next, we consider the verification of a neural network controller u_t = π_nn(x_t) for a known linear dynamical system x_(t+1) = Ax_t + Bu_t, where t ∈ N denotes a time index, and x_t ∈ R^2 and u_t ∈ R denote the state and control input. Given a rectangular set of initial states X_0 ⊂ R^2, the problem consists of estimating the reachable set at time t ∈ N, defined as X_t := {x_t : x_(s+1) = Ax_s + Bπ_nn(x_s), s = 0, ..., t−1, x_0 ∈ X_0}. By identifying the reachability map f with the t-step closed-loop dynamics and the input set X with X_0, we see that this problem fits the mathematical form described in Section 1.
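A minimal sketch of how such a closed-loop verification problem maps onto the sampling framework (Python/NumPy; the matrices and the tiny random ReLU policy below are placeholders, not the controller of (Everett et al., 2021)):

import numpy as np

# Placeholder closed-loop system: x_{t+1} = A x_t + B pi_nn(x_t), with a
# small random ReLU network standing in for the verified controller.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 2)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)

def pi_nn(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2  # two-layer ReLU policy

def rollout(x0, t=9):
    x = x0
    for _ in range(t):
        x = A @ x + (B @ pi_nn(x)).ravel()
    return x

# rollout is the reachability map f of the t-step problem; it can be passed
# directly to the eps_randup sketch above with X = X_0 (e.g., a rectangle).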
We use a ReLU network π_nn from (Everett et al., 2021) with two layers of 5 neurons each. We compare ε-RANDUP with the formal method REACHLP (Everett et al., 2021) and with two recently-derived sampling-based approaches: the kernel method proposed in (Thorpe et al., 2021) and GOTUBE (Gruenbacher et al., 2022). We implement GOTUBE using the ε-RANDUP algorithm, replacing the last convex hull bounding step with an outer-bounding ball. As ground truth, we use the reachable sets from ε-RANDUP with ε = 0 and M = 10^6, which is motivated by the asymptotic results from Theorem 1 and was previously done in (Everett et al., 2021). We refer to Appendix E.2 for details and present results in Figures 4 and 5. Formal methods that explicitly bound the output of each layer of the neural network can guarantee that their reachable set approximations are always conservative. However, obtaining tight approximations with REACHLP requires splitting the input set: a computationally expensive procedure (Fig. 5, bottom). Figures 4 and 5 show that REACHLP is more conservative than ε-RANDUP even when considering polytopic outputs with eight facets. As shown in Figure 4 (right), the conservatism of these methods increases over time. This shows that even when considering small neural networks, verifying safety specifications over long horizons remains an open challenge. Sampling-based approaches do not suffer from the long-horizon conservatism of formal methods. This comes at the expense of probabilistic guarantees (which rely on knowledge of the Lipschitz constant of the model), as opposed to deterministic conservatism guarantees. ε-RANDUP and GOTUBE have comparable computation times and are significantly faster than the other approaches. ε-RANDUP is significantly more accurate than prior work, especially for larger values of M. Also, the results from Theorem 2 allow for principled hyperparameter selection for ε-RANDUP: given ε = 0.02, sampling 1400 uniformly-distributed inputs on ∂X is sufficient for the output sets to be conservative with probability at least 1 − 10^(−4) (for L = 1, see Section E.2). These experiments show that for short-horizon problems (5 steps) with relatively simple network architectures, both REACHLP and ε-RANDUP return accurate reachable set approximations. For longer-horizon problems (9 steps) with networks of moderate dimensions (which allows using existing methods to pre-compute a Lipschitz constant, see (Fazlyab et al., 2019) and Section D), ε-RANDUP is guaranteed to efficiently return non-overly-conservative reachable set approximations with high probability. Finally, though we do not present such results here, the generality of ε-RANDUP allows it to tackle complex model architectures (see (Lew et al., 2022) for experiments with longer horizons and more complex networks with uncertain weights) for which no alternative methods exist, albeit without finite-sample accuracy guarantees.

Application to robust model predictive control

Finally, we show that ε-RANDUP can be embedded in a robust MPC formulation to reliably control a planar spacecraft system actuated by cold-gas thrusters. Its state at time t ≥ 0 is denoted as x_t ∈ R^6 and its control inputs are given as u_t ∈ R^3.

Figure 6: Using a model predictive controller that does not account for the uncertain dynamics (middle) leads to unsafe behavior, colliding with an obstacle and causing the optimization problem to be infeasible at run-time (right).
We use an auxiliary linear feedback controller (Lew et al., 2022) and an uncertain linear model x_{t+1} = f(x_t, u_t, m, F) that depends on an uncertain mass m ∈ [10, 18] kg (depending on the payload transported by the robot and the current weight of the gas tanks) and an unknown force F = (F_x, F_y) ∈ [−0.015, 0.015]^2 N that accounts for the tilt of the table. To control the system from an initial state x_0 ∈ R^n to a goal region X_goal ⊂ R^n while minimizing fuel consumption and remaining in a feasible set X_free (i.e., avoiding obstacles and respecting velocity bounds), we consider an MPC formulation in which μ = (μ_0, …, μ_N) and ν = (ν_0, …, ν_{N−1}) are optimization variables representing the nominal state and control trajectories, (m̄, F̄_x, F̄_y) = (14, 0, 0) are nominal parameter values, x_goal ∈ X_goal is the center of the goal set, and the reachable sets X_t(ν) ⊂ R^n are defined as X_t(ν) = {x_t = f(·, ν_{t−1}, m, F) ∘ ··· ∘ f(x_0, ν_0, m, F) : (m, F) ∈ [10, 18] × [−0.015, 0.015]^2}. The numerical implementation is described in (Lew and Pavone, 2020). With a Python implementation, ε = 0.03, and M = 10^3, our MPC controller runs at 10 Hz, which is sufficient for this platform and could be improved, e.g., by parallelizing computations on a GPU. We compare with an MPC baseline that does not consider uncertainty over the parameters (i.e., assumes (m, F) ∈ {14} × {(0, 0)}). As shown in Figure 6 and in the attached video, this baseline is unsafe and collides with an obstacle. In contrast, our reachability-aware controller is recursively feasible, satisfies all constraints, and allows safely reaching the goal. These experiments motivate the development of efficient reachability algorithms that can be embedded in generic control frameworks to account for uncertain parameters.

Conclusion

We derived new asymptotic and finite-sample statistical guarantees for ε-RANDUP, a simple yet efficient algorithm for reachability analysis of general systems. We demonstrated its efficacy for a neural network verification task and its applicability to robust model predictive control. In future work, we will investigate tighter finite-sample bounds by leveraging further information about the smoothness of the input set boundary ∂X. Of practical interest is investigating which sampling distributions enable better sample efficiency, interfacing ε-RANDUP with Lipschitz constant computation methods (e.g., (Fazlyab et al., 2019) for neural networks), exploring methods to scale to high-dimensional input spaces, and applying the technique to safety-aware reinforcement learning.

L. Dümbgen and G. Walther. Rates of convergence for random approximations of convex sets. Advances in Applied Probability, 28(2), 1996.

Appendix A. Formal definitions and random set theory

As a complement to Section 3, this section provides a formal description of ε-RANDUP using random set theory. Since the set estimator Ŷ_M in (2) is a random variable, describing its measurability properties is important to formally analyze its convergence properties (in an appropriate topology, which we define using the Hausdorff distance). In particular, random set theory provides a rigorous framework to characterize the probability distribution of Ŷ_M in Theorems 1 and 2.
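Both the experiments above and the analysis below quantify accuracy through the Hausdorff distance. As a small, self-contained illustration of this metric between finite point sets (numpy assumed; this is a didactic sketch, not the paper's evaluation code):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A (m, n) and B (k, n):
    the maximum of the two directed distances
    sup_a inf_b ||a - b|| and sup_b inf_a ||a - b||."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (m, k) pairwise
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Example: a coarse sample of the unit circle vs. a fine one.
t_coarse = np.linspace(0, 2 * np.pi, 8, endpoint=False)
t_fine = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
A = np.stack([np.cos(t_coarse), np.sin(t_coarse)], axis=1)
B = np.stack([np.cos(t_fine), np.sin(t_fine)], axis=1)
print(hausdorff(A, B))  # small but nonzero: the coarse set misses the arcs
```

In the experiments, one would compare the hull vertices returned by ε-RANDUP against a densely sampled ground-truth set in exactly this way.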
We denote K for the family of nonempty compact subsets of R^n, B(R^n) for the Borel σ-algebra for the Euclidean topology on R^n associated to the usual Euclidean norm ‖·‖, λ(·) for the Lebesgue measure over R^p, H(A) for the convex hull of a subset A ⊂ R^n, ⊕ for the Minkowski sum, B(x, r) = {y ∈ R^n : ‖y − x‖ ≤ r} for the closed ball of center x ∈ R^n and radius r ≥ 0, B̊(x, r) for the open ball, and ∂A for the boundary of any A ⊂ R^n.

A.1. Random set theory

Our analysis hinges on the observation that the set estimator Ŷ_M is a random compact set, i.e., Ŷ_M is a random variable taking values in the family of nonempty compact sets K. To characterize the accuracy of our estimator, we use the Hausdorff metric, which is defined for any A, B ∈ K in (3) as d_H(A, B) = max{sup_{a∈A} inf_{b∈B} ‖a − b‖, sup_{b∈B} inf_{a∈A} ‖a − b‖}. This metric induces the myopic topology on K (Molchanov, 2017) with its associated generated Borel σ-algebra B(K). (K, B(K)) is a measurable space, which motivates the following definition:

Definition 1 (Random compact set) Let (Ω, G, P) be a probability space. A map Ŷ : Ω → K is a random compact set if {ω ∈ Ω : Ŷ(ω) ∈ Y} ∈ G for any Y ∈ B(K).

Since a random compact set Ŷ is a random variable with values in K, its distribution is characterized by the probability P(Ŷ ∈ Y) that it takes values in a measurable subset of compact sets Y ∈ B(K). Equivalently (Molchanov, 2017), the law of Ŷ is characterized by the capacity functional T_Ŷ : K → [0, 1], defined for any K ∈ K as T_Ŷ(K) = P(Ŷ ∩ K ≠ ∅). This functional describes the probability that Ŷ intersects any compact set K. We will analyze this functional to prove the convergence of our set estimator in Theorems 1 and 2. Our asymptotic convergence result in Theorem 1 relies on (Molchanov, 2017, Proposition 1.7.23), which provides sufficient conditions for the convergence of random closed sets. We restate this result in the particular case of a sequence of random compact sets.

Theorem 3 (Convergence of Random Sets to a Deterministic Limit (Molchanov, 2017)) Let Y ∈ K and let {Ŷ_M}_{M=1}^∞ be a sequence of random compact sets. Assume that
• (C1) for any K ∈ K such that Y ∩ K = ∅, P(Ŷ_M ∩ K ≠ ∅ infinitely often) = 0;
• (C2) for any open subset G ⊂ R^n such that Y ∩ G ≠ ∅, P(Ŷ_M ∩ G = ∅ infinitely often) = 0.
Then, the sequence of random compact sets {Ŷ_M}_{M=1}^∞ almost surely converges to Y (in the myopic topology), i.e., almost surely, d_H(Ŷ_M, Y) → 0 as M → ∞.

A.2. Sampling-based reachability analysis

As discussed in Section 3, ε-RANDUP relies on the choice of three parameters:
• a number of samples M ∈ N,
• a padding constant ε > 0,
• a probability measure P_X on (R^p, B(R^p)) such that P_X(X) = 1.
This algorithm consists of sampling M independent identically-distributed inputs x_i according to P_X, of evaluating each output sample y_i = f(x_i), and of computing the reachable set estimator (2). Formally, let (Ω, G, P) be a probability space such that the x_i's are G-measurable independent random variables whose laws P_X satisfy P_X(A) = P(x_i ∈ A) for any A ∈ B(R^p). Then, the y_i's are independent random variables whose laws P_Y satisfy P_Y(B) = P(y_i ∈ B) = P_X(f^(−1)(B)) for any B ∈ B(R^n). It follows that Ŷ_M : Ω → K is a random compact set satisfying Definition 1. Intuitively, different input samples x_i(ω) induce different output samples y_i(ω), resulting in different approximated compact reachable sets Ŷ_M(ω) ∈ K, where ω ∈ Ω.

B.1. Proof of Theorem 1

We first restate Assumption 1 and Theorem 1 from Section 4. The proof proceeds by exhibiting two sets G¹_∂ and G²_∂ such that the convex hull of two points y_1 ∈ G¹_∂ and y_2 ∈ G²_∂ intersects G (i.e., H({y_1, y_2}) ∩ G ≠ ∅).
With this fact, we quantify the probability that Ŷ_M has at least one vertex in G¹_∂ and one in G²_∂, which guarantees that Ŷ_M intersects G. We will select specific sets G^j_∂ as a function of G later in the proof. Next, note that Ŷ_M contains H(Y_M) by construction, so the second inequality holds. Combining the last three results, we obtain the following sufficient condition (C2.3) for (C2). Finally, we combine (6) and (7) as follows. For M ≥ 0, from (6), the first event holds with probability one. Therefore, by the above, and given the right choice of G¹_δ, G²_δ, by convexity of H(Y) (see also Figure 7), combining this result with (7), we obtain that P(Ŷ_M ∩ G = ∅ i.o.) = 0. This concludes the proof of (C2).

Footnote 6: More generally, given A ⊂ K and B ⊂ R^n an open set, A ∩ B ≠ ∅ implies A ∩ (B ⊖ B(0, ε)) ≠ ∅ for some ε > 0.

B.2. Proof of Theorem 2

We start with four intermediate results and then prove Theorem 2. We use the notations introduced in Section A throughout this section.

Lemma 2 Under Assumption 2, for any ε > 0, D(∂Y, ε) ≤ D(∂X, ε/L).

Lemma 3 Under Assumption 2, for any x ∈ R^p, y = f(x), and any δ > 0, P_Y(B(y, δ)) ≥ P_X(B(x, δ/L)).

Lemma 4 Let ε > 0 and define π(∂Y, Y_M) = sup_{y∈∂Y} min_{1≤i≤M} ‖y_i − y‖. Then, P(π(∂Y, Y_M) > ε) ≤ D(∂Y, ε/2) · sup_{y∈∂Y} (1 − P_Y(B(y, ε/2)))^M.

Lemma 5 Let ε ≥ 0 and let Ỹ ∈ K be such that ∂Y ⊆ Ỹ ⊕ B(0, ε) and Ỹ ⊆ Y. Then, d_H(H(Ỹ), H(Y)) ≤ ε.

Lemma 4 is the key to deriving Theorem 2. It is first derived in (Dumbgen and Walther, 1996) in the convex problem setting. Notably, Lemma 4 does not require the convexity of Y.

Proof of Lemma 2. Let F_∂X ⊆ ∂X be a minimum (ε/L)-covering for ∂X, so that |F_∂X| = D(∂X, ε/L) and for any x ∈ ∂X, there exists x_i ∈ F_∂X such that ‖x − x_i‖ ≤ ε/L. Then, F_∂Y = f(F_∂X) is an ε-covering for ∂Y. Indeed, for any y ∈ ∂Y, there exists some x ∈ ∂X such that y = f(x), and there exists some x_i ∈ F_∂X with ‖x − x_i‖ ≤ ε/L, so that ‖y − f(x_i)‖ ≤ L‖x − x_i‖ ≤ ε. Therefore, since F_∂Y is an ε-covering for ∂Y and |F_∂Y| = |F_∂X| = D(∂X, ε/L), we obtain that D(∂Y, ε) ≤ |F_∂Y| = D(∂X, ε/L), which concludes this proof.

Proof of Lemma 4. Let F_∂Y ⊆ ∂Y be a minimum ε-covering for ∂Y, so that |F_∂Y| = D(∂Y, ε) and for any y ∈ ∂Y, there exists y_j ∈ F_∂Y with ‖y − y_j‖ ≤ ε. The conclusion follows (see also Schneider, 2014).

With these results, we prove Theorem 2 below. We first restate it for better readability. Remark: the assumption ∂Y ⊆ f(∂X) holds if the reachability map f is open, e.g., if it is a submersion (its differential is surjective). If ∂Y ⊄ f(∂X), then one could modify Theorem 2 by replacing Assumption 3 with "Given ε, L > 0, there exists Λ_L > 0 such that P_X(B(x, ε/(2L))) ≥ Λ_L for all x ∈ X" (i.e., one should sample over the entire set X and not only along the boundary) and by defining δ_M = D(X, ε/(2L))(1 − Λ_L)^M, which corresponds to the worst probability over y ∈ ∂Y of not sampling some y_i that is ε-close to y.

First, we derive a bound for π(∂Y, Y_M). Using the fact that the samples y_i are i.i.d., the probability that no sample falls in a given ball B(y, ε/2) equals (1 − P_Y(B(y, ε/2)))^M. From Lemma 3, for any x ∈ R^p, y = f(x), and ε > 0, P_Y(B(y, ε)) ≥ P_X(B(x, ε/L)). Since for all y ∈ ∂Y, there exists x ∈ ∂X such that y = f(x), we combine the two previous results with Lemma 2 to obtain P(π(∂Y, Y_M) > ε) ≤ D(∂X, ε/(2L)) · sup_{x∈∂X} (1 − P_X(B(x, ε/(2L))))^M. In particular, using Assumption 3, P(π(∂Y, Y_M) > ε) ≤ D(∂X, ε/(2L))(1 − Λ_L)^M = δ_M. To complete the proof of Theorem 2, we use Lemma 4 together with Lemma 5: with probability at least 1 − δ_M, ∂Y ⊆ Y_M ⊕ B(0, ε), and since Y_M ⊆ Y with probability one (footnote 8), d_H(H(Y_M), H(Y)) ≤ ε. Therefore, with Ŷ_M = H(Y_M) and Assumption 3, combining the last inequalities, if d_H(Ŷ_M, H(Y)) ≤ ε, then Y ⊆ H(Y) ⊆ Ŷ_M ⊕ B(0, ε). The conclusion follows.

Footnote 8: Since P_X(X) = 1, we have that P_Y(Y) = 1, so that Y_M ⊆ Y with probability one.

Proof of Lemma 6. By Assumption 4, X^c is r-convex. Thus, for any x ∈ ∂X, there exists some x̃ ∈ Int(X) and a (closed) ball B(x̃, r) such that x ∈ B(x̃, r) and B(x̃, r) ⊆ X. Since x ∈ ∂X and B(x̃, r) ⊆ X, ‖x − x̃‖ = r. Let ε ≥ 0 and r̄ = (r, 0, …, 0) ∈ R^p.
Then, by translational and rotational invariance of the Lebesgue measure, λ(X ∩ B(x, ε)) ≥ λ(B(0, ε) ∩ B(r̄, r)). As this holds for any x ∈ ∂X, we obtain that inf_{x∈∂X} λ(X ∩ B(x, ε)) ≥ λ(B(0, ε) ∩ B(r̄, r)). Then, we restate Corollary 1 and prove it below.

Appendix D. Computing the Lipschitz constant of a ReLU network from samples

In this section, we show that sampling gradients enables obtaining the Lipschitz constant of a neural network with ReLU activation functions with high probability. Consider a feed-forward ReLU neural network f : R^n → R^n with ℓ ∈ N layers, given as f(x) = W_ℓ φ_{ℓ−1}(· · · φ_0(W_0 x + b_0) · · ·) + b_ℓ, where (W_k, b_k)_{k=0}^ℓ are the network weights and biases, and each φ_k : R^{n_k} → R^{n_{k+1}} is defined as φ_k(x) = (ϕ(x_1), …, ϕ(x_{n_k})) with ϕ(z) = max(0, z). Note that f is piecewise-affine. Let X ⊂ R^n be a non-empty compact set. Let A = {A_1, …, A_N} be the set of all polytopes A_i ⊆ X where f|_{A_i} is affine, which we call the activation regions of f. Let Λ_N = min_{i=1,…,N} λ(A_i)/λ(X) be the smallest (normalized Lebesgue) volume of all activation regions. Note that f is Lipschitz continuous over X, since it is continuous and restricted to a compact subset. Thus, for some L ≥ 0, ‖f(x) − f(y)‖ ≤ L‖x − y‖ for all x, y ∈ X. Further, since f is piecewise-affine, f is L-Lipschitz continuous with L = max_{i=1,…,N} {‖∇f(x_i)‖ for some x_i ∈ A_i}. We propose the following sampling-based method to recover the Lipschitz constant L: 1. Draw M random samples x_i in X according to the uniform probability measure over X. 2. Return the estimate L̂ = max_{i=1,…,M} ‖∇f(x_i)‖. In general, with this approach, providing statistical guarantees on whether L̂ is a valid Lipschitz constant for f is challenging; the analysis would rely on the Hessian of f, which is a priori unknown. In this specific setting, f is piecewise-affine, which we leverage in the analysis below: with high probability over the M samples, L̂ = L.

Proof. Since L = max_{i=1,…,N} {‖∇f(x_i)‖ for some x_i ∈ A_i}, a sufficient condition for f to be L̂-Lipschitz continuous is that at least one point x_j was sampled in each region A_i. Thus, by a union bound, P(L̂ = L) ≥ 1 − N(1 − Λ_N)^M.

Appendix E.2. Verification of neural network controllers: experimental details

We use the initial set X_0 = [2.5, 3] × [−0.25, 0.25] and a ReLU network π_nn from (Everett et al., 2021) with two layers of 5 neurons each. We compare ε-RANDUP with the formal verification technique REACHLP (Everett et al., 2021) and the sampling-based approaches presented in (Thorpe et al., 2021) and (Gruenbacher et al., 2022). We use the Abel kernel K(x_1, x_2) = exp(−‖x_1 − x_2‖/0.05) for the kernel method (Thorpe et al., 2021) due to its separating property (De Vito et al., 2014). To implement GOTUBE (Gruenbacher et al., 2022), we use the ε-RANDUP algorithm where we replace the last convex hull bounding step with an outer-bounding ball. We use a uniform sampling distribution for all methods. As ground truth, we use the reachable sets from ε-RANDUP with ε = 0 and M = 10^6, which is motivated by the asymptotic results from Theorem 1 and was previously done in (Everett et al., 2021). Next, we provide further details on the evaluation of the finite-sample bound: given ε = 0.02, sampling M = 1400 inputs that are uniformly distributed on the boundary ∂X is sufficient to ensure that the approximated reachable sets from ε-RANDUP are conservative with probability greater than 1 − 10^(−4). This result relies on the Lipschitz constant of the closed-loop system, which we set to L = 1 to evaluate this bound since the neural network controller leads to closed-loop stability. Alternatively, one could use a formal method to compute a bound on this constant (in contrast to using a formal method for reachability analysis, computing this Lipschitz constant only needs to be done once and can be done offline) or sampling-based methods with a large number of samples (see Section D for an analysis).
Since the input set is given as X = [2.5, 3] × [−0.25, 0.25], we have D(∂X, ε/(2L)) ≤ 2·(0.5 + 0.5)/(2(ε/(2L))) + 1 = 2L/ε + 1, i.e., 2/ε + 1 for L = 1. Finally, since we sample according to a uniform distribution on the boundary and the input set is rectangular, the coverage constant Λ_L in Assumption 3 can be set to Λ_L = (2·(ε/(2L)))/(4·0.5) = ε/(2L), i.e., ε/2 for L = 1. Thus, with ε = 0.02 (which leads to more accurate reachable set approximations than alternative approaches, see Figure 5), from Theorem 2, choosing M ≥ (log(δ_M) − log(D(∂X, ε/(2L)))) / log(1 − Λ_L) ≈ 1376 is sufficient to be conservative with probability at least 1 − 10^(−4).
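This sample-size calculation is easy to reproduce. The following sketch evaluates the bound under the stated assumptions (L = 1, the rectangular input set above, and uniform sampling on its boundary):

```python
import numpy as np

# Quantities from the Appendix E.2 calculation (L = 1 assumed).
eps, L, delta = 0.02, 1.0, 1e-4
D = 2.0 * L / eps + 1.0   # covering-number bound D(bd X, eps/(2L)) for the rectangle
Lam = eps / (2.0 * L)     # boundary coverage constant Lambda_L

# Theorem 2: M >= (log(delta) - log(D)) / log(1 - Lambda_L) samples suffice.
M = (np.log(delta) - np.log(D)) / np.log(1.0 - Lam)
print(int(np.ceil(M)))    # ~1376: conservative with probability >= 1 - 1e-4
```

Note how the required M grows only logarithmically in 1/δ_M, so tightening the confidence level is cheap compared to tightening the accuracy ε.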
2021-12-13T02:15:30.382Z
2021-12-10T00:00:00.000
{ "year": 2021, "sha1": "70980575ba92b9d28e6efe72e2f99a266b2a40f2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e76cbcf28e03d3af11b15be5bc08d5239c4b7d7e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
210841691
pes2o/s2orc
v3-fos-license
Impact of decompressive laminectomy on the functional outcome of patients with metastatic spinal cord compression and neurological impairment

Metastatic spinal cord compression (MSCC) is a frequent phenomenon in advanced tumor diseases with often severe neurological impairments. Affected patients are often treated by decompressive laminectomy. To assess the impact of this procedure on Karnofsky Performance Index (KPI) and Frankel Grade (FG) at discharge, a single center retrospective cohort study of neurologically impaired MSCC patients treated with decompressive laminectomy between 2004 and 2014 was performed. 101 patients (27 female/74 male; age 66.1 ± 11.5 years) were identified. Prostate was the most common primary tumor site (40%) and progressive disease was present in 74%. At admission, 80% of patients were non-ambulatory (FG A–C). Imaging revealed predominantly thoracic MSCC (78%). Emergency surgery (< 24 h) was performed in 71% and rates of complications and revision surgery were 6% and 4%, respectively. At discharge, FG had improved in 61% of cases, and 51% of patients had regained ambulation. Univariate predictors for not regaining the ability to walk were bowel dysfunction (p = 0.0015), KPI < 50% (p = 0.048) and FG < C (p = 0.001) prior to surgery. In conclusion, decompressive laminectomy showed beneficial effects on the functional outcome at discharge. A good neurological status prior to surgery was a key predictor for a good functional outcome.

Introduction

Spinal metastases are a common manifestation of malignant diseases and have been reported in autopsy studies in 30-70% of cancer patients since the 1950s [1-3]. Due to improvements in the diagnosis and treatment of cancer, along with an aging population, the number of patients surviving years beyond their cancer diagnosis has increased, and consequently also the incidence of spinal metastases [4-6]. Breast, prostate, lung and kidney tumors most commonly disseminate into the spine [7]. Metastases are thereby most frequently located within the thoracic spine, followed by the lumbar and cervical spine [7,8]. In more than 30% of cases, spinal metastases are discontinuously located on multiple vertebral levels [9,10]. Despite local back pain being the initial symptom in most patients, spinal metastases are frequently not diagnosed before neurological deficits occur [9,11,12]. These may include sensory and motor disturbances as well as autonomic dysfunction [11,13]. Progression of the epidural masses leads to metastatic spinal cord compression (MSCC) and might finally result in complete and irreversible paraplegia, unless timely treatment is initiated [14]. This most serious and devastating sequel of spinal metastases is termed malignant epidural spinal cord compression (MESCC) and occurs in 3-5% of all cancer patients [15,16]. Although MESCC does not directly alter life expectancy, its severe clinical course results in rapid deterioration of neurological function culminating in a paraplegic status. Finally, this loss of ambulation leads to a significant reduction of the patients' quality of life [7,11]. It is understood that MESCC has to be treated as an oncological emergency, requiring rapid decision-making if neurological function should be preserved [13,17]. In this context, early therapeutic intervention as well as a good neurological status prior to treatment initiation have repeatedly been associated with a better functional outcome [18-20].
Treatment options for MSCC include the administration of corticosteroids, chemotherapy, different forms of radiotherapy as well as different surgical approaches [6,17,21]. Surgery, however, remains the only treatment option leading to immediate relief of neural compression. In addition, it can ascertain histopathological diagnosis [17]. Indications widely accepted for decompression surgery include rapid neurologic deterioration, pain unresponsive to conservative treatment or radio-resistant tumors [22]. Decompressive laminectomy has been the surgical treatment of choice for MSCC patients, lowering mortality and morbidity rates [15], but several reports on inadequate decompression and poor neurological outcome have initiated a critical discussion about the use of this technique [9,23-28]. Apart from that, individualized surgical approaches were further developed [29-31], and despite the fact that the presence of spinal metastases makes most subsequent therapies palliative, radical surgical approaches encompassing gross total tumor resection with replacement of vertebral bodies combined with anterior or posterior stabilization were established in order to offer further treatment alternatives aiming for oncological cure [32-35]. Nevertheless, the indication for surgery has to take into account that patients with spinal metastases often suffer from multiple disseminated metastases and severe comorbidities, and thus mostly are in a reduced general condition with limited life expectancy [19,36,37]. Considering these issues, radical and curative tumor resection often appears challenging when surgery should not impair the patients' remaining quality of life [38]. Although several studies have evaluated prognostic factors that may affect survival [39-41] or the psychological status of MSCC patients, only limited information is available on their quality of life before and after treatment [42-46]. Especially in cancer patients, quality of life is strongly dependent on the ambulatory status, which in turn is mostly affected by MSCC. Independent of comorbidities and tumor expansion, decompressive laminectomy remains a straightforward surgical technique that might have the potential to improve neurological function in selected MSCC patients, potentially preventing loss of ambulation and improving quality of life. The aim of the current study therefore was to present data on the early postoperative ambulatory status of neurologically impaired MSCC patients without spinal instability who were surgically treated by decompressive laminectomy and to identify factors associated with regaining the ability to walk.

Patient selection

A single center retrospective analysis of all consecutive patients with metastatic spinal cord compression who underwent decompressive laminectomy with the primary goal of maximum posterior decompression at our institution between 2004 and 2014 was performed. Adult patients (≥ 18 years) with neurological impairment at admission, a tissue-proven diagnosis of a solid primary tumor and evidence of MSCC by an epidural mass on imaging were further analyzed. Patients with pain as their only symptom at admission as well as radiosensitive tumors originating from the bone marrow, the cartilages or the lymphatic system and tumors originating from the central nervous system were excluded.
Furthermore, cases in which spinal instability according to the Spinal Instability Neoplastic Score (SINS > 12) was present and in which additional stabilization of the vertebral column was required were excluded as well (Fig. 1). The local standing committee of ethical practice approved the protocol of this study.

Clinical evaluation and outcome assessment

Information was collected from the patients' hospital records including demographics, clinical presentation and duration of symptoms, preoperative imaging findings, surgical details, perioperative management and surgical or non-surgical complications, as well as the pre- and postoperative neurological status. Perioperative mortality was defined as death during the in-hospital stay. For morphological evaluation of MSCC, the 6-point Epidural Spinal Cord Compression (ESCC) scale [47] was determined as a consensus decision of three independent raters on preoperative imaging. To determine spinal stability, the SINS score [48], which assesses tumor-related instability by adding together scores for spinal location, pain, lesion bone quality, radiographic alignment, vertebral body collapse and posterolateral involvement of the spinal elements, was calculated for every patient [49]. Furthermore, the modified Tokuhashi score [39] was determined for each patient. This score uses six parameters (general condition, extraspinal bone metastases, metastases in the vertebral body, metastases to major organs, primary tumor site, spinal cord palsy) ranging from 0 to 5 points with a total score of 15 points and can be used for pretreatment evaluation of metastatic spinal tumor prognosis [39]. The Karnofsky performance status (KPS) scale [50] and Frankel Grade (FG) [51] at admission and on the day of discharge, obtained by the treating physicians, were collected to assess the patients' functional outcome. The ambulatory status at discharge was thereby used as the primary outcome parameter, and ambulation was defined as a Frankel Grade of D or E.

Statistical analysis

For statistical comparison, subgroups of patients with and without an ambulatory status at admission as well as at discharge were formed. The p-values for categorical variables (gender, primary (first) symptom, ambulation, imaging, location of metastases, complications, revisions, etc.) were calculated with Fisher's exact test. For comparison of continuous variables (age, inpatient stay, number of metastases, time from onset to surgery, ESCC, Tokuhashi score, KPS, FG, strength level, duration of paresis, time point of surgery, etc.), a two-sided Student's t test was used. Additionally, associations between the described variables and regaining ambulation at the time of discharge were assessed in univariate analysis. No adjustment for multiple testing was performed as this was an exploratory analysis. All statistical analyses were conducted using GraphPad Prism 7.0b. A p-value < 0.05 was considered statistically significant.

Patient demographics

A total of 101 eligible patients (74 male, 27 female) with a mean age of 66.1 ± 11.5 years (mean ± SD) was identified. Spinal metastases originated from the prostate in 40 (40%), the lung in 23 (23%), and the breast in 11 (11%) of cases. Other tumors (including kidney, melanoma, larynx, and GI tract) accounted for 19 (19%) of the metastases. Most patients (74%) were in a progressive stage of the underlying malignant disease with at least one additional, extraspinal metastasis.
In eight patients (8%), the existence of a malignant disease had still been unknown at the time of presentation (Table 1).

Imaging

MR images of the spine were obtained in 93 patients (92%). Due to contraindications to MR imaging, the remaining 8% of patients received CT scans only. Thirty-one patients (31%) had a single metastasis in only one vertebral body, whereas 70 patients (69%) presented with multiple lesions, sometimes located in distant parts of the spinal column. Most metastases involved the thoracic spine (n = 79, 78%), whereby spinal levels Th 4-7 were affected in a majority of cases (43%), followed by the lumbar (n = 15, 15%) and the cervical spine (n = 7, 7%). The cervico-thoracic or thoraco-lumbar junctions were affected in 3 (3%) and 1 (1%) case, respectively. Morphological evaluation of MSCC revealed an ESCC grade of 1a in 1 (1%), of 1c in 2 (2%), of 2 in 26 (26%) and of 3 in 72 (71%) patients. No patient had an ESCC grade of 0 or 1b. Spinal stability measured by the SINS score showed completely stable conditions in 81% of cases (n = 82) and an average SINS score of 5 ± 2.26 (mean ± SD). Intermediate stability was present in 19 patients (19%) and no patient had an unstable spine (Table 1).

Clinical presentation

The most relevant symptoms reported by the patients prior to admission, and mostly the reason for patient referral to our institution, were motor palsy in 63% of cases (n = 64), followed by pain in 20% (n = 20) and sensory deficits in only 12% (n = 12) of cases. These symptoms had been present for a median of 5 days prior to hospitalization (IQR 2-14 days). Neurological examination at admission revealed paresis in 101 patients (100%) with muscle strength of grade 3 or less according to the British Medical Research Council (BMRC) grading system [52] and thus the inability to move the corresponding extremities against gravity. Sensory deficits were present in 83 patients (82%), and abnormal urinary sphincter function was present in 60 patients (60%), whereas bowel dysfunction only occurred in 25 patients (25%). Nearly half of the patients suffered from back pain (n = 49, 49%) while radiating pain was rare (n = 13, 13%). Most importantly, all patients (100%) showed impaired ambulation (FG A-D) and 81 patients (80%) had even completely lost ambulation at admission (FG A-C). Nearly all patients (96%) thus were unable to work or carry out normal activities of daily living as measured by the Karnofsky Performance Index (KPI score < 80%) (Tables 2 and 3).

Surgical management and complications

Following informed consent, surgical treatment was performed as an emergency procedure within 24 h after admission in 72 cases (71%). The overall median time to surgery was 13 h (IQR 8-24.75 h) after admission, and 65 h (IQR 32.5-100 h) after loss of ambulation. Due to the vast progression of tumor disease, patients showed severe systemic co-morbidities with an ASA score (American Society of Anesthesiologists Physical Status Classification System score) of III in 62% (n = 60) and IV in 15% (n = 15) of cases. Intraoperatively, a median of 2 spinal segments (IQR 1-2) were posteriorly decompressed by laminectomy. Surgery-related complications occurred in four patients (4%), consisting of three cases of secondary hemorrhage, which all required revision surgery, and one case of wound infection, which required revision surgery as well. Additionally, general complications occurred in two patients (2%), both displaying symptoms of cardiorespiratory insufficiency.
One of those two patients developed a myocardial infarction and died during the in-hospital stay. The overall complication rate was therefore 6%, the revision rate 4% and the mortality rate 1%. Patients could be discharged from the surgical ward after 9 ± 4.7 days (mean ± SD) (Tables 2 and 3).

Postoperative outcome and impact on ambulation

At discharge, 83 patients (84%) reported that their symptoms had overall improved. Especially palsies showed good recovery (improvement in 73% of cases), followed by alleviation of pain (radiating pain in 54% and back pain in 47% of cases), whereas sensory deficits as well as bladder or bowel dysfunction were often persistent (improvement in 18%, 24%, and 20% of cases, respectively). Preoperatively impaired neurological function (Frankel Grade A-D) had improved by at least one Frankel Grade in 61% of patients at discharge (Fig. 2a). Notably, 25% of all severely impaired patients (Frankel Grade A and B prior to surgery) and 51% of all non-ambulatory patients (Frankel Grade A-C) had regained ambulation after surgery (Fig. 2b). Overall, 61 patients (61%) were ambulatory at discharge (Frankel Grade D and E) compared to 20 patients (20%) prior to surgery. Functional improvement in the KPI score was observed in 75 patients (75%), and at discharge, 27% of patients had a KPI score ≥ 80 compared to 4% prior to surgery (Tables 2, 3, 4).

Comparison of preoperative ambulatory and non-ambulatory patients

Statistical analysis of 20 ambulatory (Frankel Grade D-E) and 81 non-ambulatory (Frankel Grade A-C) patients prior to surgery revealed significant differences in perioperative variables (Table 3): Non-ambulatory patients more frequently had paresis as their first symptom (p < 0.05), whereas preoperatively ambulatory patients more commonly were suffering from pain (p < 0.05). Furthermore, the median KPI was lower for non-ambulatory patients compared to ambulatory patients (p < 0.01). At admission, radiating pain was more common in ambulatory patients (p < 0.01), whereas non-ambulatory patients experienced bladder and bowel dysfunction more frequently (both p < 0.01). While all patients suffered from motor palsy when admitted to our institution, its duration was shorter but its degree higher (p < 0.01 and p < 0.001, respectively) in non-ambulatory patients. Non-ambulatory patients more often showed spinal cord compression with no visible cerebrospinal fluid around the spinal cord.

Identification of factors affecting postoperative ambulation

In univariate analyses, male sex, a better neurological status prior to surgery (for Frankel Grade and KPI), the absence of bladder or bowel dysfunction as well as a lower degree of motor palsy and a lower Tokuhashi score were associated with an ambulatory status at the time of discharge. No other factors were significantly correlated with the ability to walk after surgery (Table 3). Patients who regained ambulation at discharge had presented with a median duration of their first symptom of 4 days (IQR 2.5-10.5 days) compared to 6.5 days (IQR 2-14) in patients who remained non-ambulatory, and a median duration of muscle weakness of 3 days (IQR 2-7 days) compared to 4 days (IQR 1-13.5 days). These differences, however, did not reach statistical significance. No further clinical, imaging, surgical or pathological parameter significantly affected the recovery of ambulation at discharge (Table 4).
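As a hedged illustration of the univariate comparisons described in the statistical analysis section (Fisher's exact test for categorical variables), the following sketch shows how such a 2 × 2 association could be tested in Python with scipy; the counts below are hypothetical and are not the study data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = bowel dysfunction at admission (yes/no),
# columns = ambulatory at discharge (yes/no). Not the actual study counts.
table = [[5, 20],
         [56, 20]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)  # p < 0.05 would indicate an association
```

Fisher's exact test is appropriate here because several of the reported subgroups are small, where a chi-squared approximation would be unreliable.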
Discussion

In this study of 101 neurologically impaired MSCC patients without spinal instability who received decompressive laminectomy, 74% showed improved motor function and 51% had regained the ability to walk at discharge, while the overall complication rate as well as revision and mortality rates (6%, 4%, and 1%, respectively) were low. In univariate analyses, absence of bowel dysfunction, better neurological status as well as smaller surgery in terms of decompressed spinal levels were associated with postoperative regaining of the ability to walk. It is noteworthy that in contrast to many other published series [53,54], all MSCC patients in our study had impaired motor function and 80% were unable to walk prior to surgery. To our knowledge, our study is the only clinical series that solely focuses on the surgical treatment of neurologically impaired MSCC patients. Additionally, our study population was older (66.1 ± 11.52 years, mean ± SD) and had more extensive metastatic disease (74% with extraspinal metastasis) than many of the MSCC patient cohorts in the literature [55]. Furthermore, all MSCC patients who were treated by decompressive laminectomy in our study had a SINS score between 0 and 12, and therefore no relevant spinal instability. It needs to be emphasized that MSCC patients who underwent other surgical procedures (e.g. posterolateral fusion), which are mostly required when spinal instability is present, were excluded in our current study. Our findings hence should only be applied to MSCC patients with neurological impairment, a SINS score ≤ 12 and extensive metastatic disease with limited life expectancy.

Differences in characteristics of preoperative ambulatory and non-ambulatory patients

Loss of ambulation due to MSCC is mainly caused by motor palsy and spinal ataxia. Back pain or radiating pain may limit the patients' mobility to some extent as well, but the objective Frankel Grade we used to assess the ambulatory status of MSCC patients does not capture these symptoms. Our findings reflect the often rapid progression of MSCC into MESCC, which makes MSCC an oncological emergency [13,16]. As expected, the KPI was lower in non-ambulatory patients, since it is influenced by the patients' ability to walk. Further imaging analyses revealed a trend towards thoracic localization of spinal metastases in non-ambulatory patients with a higher rate of radiological signs of myelopathy, which might be affected by the anatomical narrowing of the spinal canal in this region. Pretreatment evaluation of prognosis by the modified Tokuhashi score predicted a shorter survival period for non-ambulatory patients. However, it must not be forgotten that this score itself already includes KPI and Frankel Grade as two of its six prognostic factors. In addition, due to recent improvements in specific cancer therapies, and hence increased survival time of some MSCC patients, the modified Tokuhashi score, in which the primary tumor constitutes a major factor in estimating life expectancy, is thought to be increasingly limited [39,56]. Non-ambulatory MSCC patients have been described to require more extensive surgery in terms of decompressed vertebral levels and to incur more complications [18]. Due to possible difficulties in decompressing the spine in these cases, it has been recommended to perform early surgical interventions before MSCC patients become non-ambulatory [34,35,57,58].
In our study, there were no statistically significant differences in the extent or duration of surgery or the length of hospital stay between preoperative ambulatory and non-ambulatory patients. However, complications and revision surgeries only occurred in non-ambulatory patients, which might be influenced by their worse overall health status, assessed by preoperative ASA scores. Likewise, time to surgery was shorter for non-ambulatory patients. In contrast to other studies, these findings did not reach statistical significance in our analysis. The indication to perform early surgery on ambulatory MSCC patients without neurological impairment in order to prevent surgical complications should therefore be critically discussed [18].

Decompressive laminectomy to maintain or regain ambulatory ability

In their recent multicenter randomized study, Patchell et al. compared radiotherapy alone with both surgery and radiotherapy and revealed that aggressive surgical decompression and instrumented stabilization had half the mortality rate compared to radiotherapy alone. Additionally, patients in the surgical arm retained the ability to walk for significantly longer than those in the radiotherapy arm without spending increased time in the hospital [59]. Although the study has been critically discussed due to a possible selection bias towards better outcome in the surgical arm as well as poor functional results after radiotherapy alone when compared with the literature [60], it confirmed the importance of surgery in the treatment of MSCC patients. Today, extensive surgical techniques to treat MSCC patients with, e.g., circumferential instrumentation and fusion or corporectomy and cage graft placement from an anterolateral, posterolateral or retroperitoneal approach are available [61]. It has to be noted that the goals of surgery with such approaches usually go beyond restoration or preservation of neurological function and include deformity correction and stabilization as well as oncologic control [62]. However, rates of complications for the surgical treatment of MSCC patients reported in the literature with more extensive approaches are high and range between 10 and 48% [54,55,63-68]. Our current data underline this problem: our MSCC patients were older, mostly had progressive disease, a reduced functional status (KPI) prior to surgery and severe systemic symptoms (ASA 3 or 4). These are some of the typical risk factors for such local and systemic complications after surgery [55]. Laminectomy, a surgical technique that allows fast decompression of the spinal cord in cases of MSCC with the possibility of obtaining a histological sample or further tumor debulking, has been pushed into an increasingly marginal role in the last decades [69]. Although surgical complication rates are generally low, the technique has fallen into disrepute for causing vertebral collapse and possible neurologic deterioration, which in turn may have resulted in the increased use of radiotherapy for MSCC treatment in the past [7]. Nevertheless, our data suggest that decompressive laminectomy might provide significant outcome benefits for a specific cohort of MSCC patients. In our study, all patients had a SINS score < 13, and therefore no evidence for spinal instability.
The SINS score was specifically developed to assess the stability of the spine in MSCC patients and has been proven reliable and reproducible, with a sensitivity and specificity for potentially unstable lesions of 95.7% and 79.5%, respectively [49]. In addition, 98% of the patients in our series had an ESCC scale of 2 or 3 and therefore profound spinal cord compression, 100% suffered from motor weakness at admission and 80% were unable to walk prior to surgery, a considerably higher proportion of non-ambulatory patients than reported in comparable series [55]. In our study, no MSCC patient lost the ability to walk after surgery, 74% had functional improvement at discharge and 51% had regained the ability to walk, while surgical complication, general complication and mortality rates (4%, 2% and 1%, respectively) were low. Even completely paraplegic patients regained the ability to walk at discharge after emergency decompressive laminectomy in 25% of cases. Like other authors, we found that a better neurological status (KPI > 40%, FG > C) prior to surgery is associated with the ability to walk at discharge [34,35,70,71]. Moreover, our data suggest that a higher KPI (> 40%) and better FG (> C) at admission are predictors even for non-ambulatory patients to regain the ability to walk after surgery. Surprisingly, duration of motor weakness or duration of the inability to walk prior to surgery had no significant impact on the ambulatory status at discharge, although trends towards shorter durations could be observed. Likewise, an earlier time point of surgery after admission of MSCC patients (< vs. > 24 h) showed no association with postoperative ambulation. We assume that these findings might be related to the small sample size in our study. Nevertheless, in order to alleviate damage to the spinal cord and thus allow for better recovery of neurological function, prompt surgical intervention should be performed in MSCC patients before edema, venous congestion and secondary vascular injury due to compression occur [18,59]. In our analyses, a lower modified Tokuhashi score (0-8) as well as the presence of bladder and bowel dysfunction at admission were associated with the inability to walk at discharge. Moreover, the presence of bowel dysfunction was a predictor for non-ambulatory patients to remain unable to walk after surgery. Although the Tokuhashi score itself is partly determined by the patients' ambulatory status, we deem it a useful tool to predict not only survival but also postoperative ambulation. Interestingly, Tokuhashi et al. already recommend conservative treatment for MSCC patients with a total score of 8 or less due to a predicted survival period of < 6 months [39]. To this recommendation, our data add the finding that these patients may also have a worse functional outcome when treated surgically. The presence of bowel dysfunction at admission might be an additional prognostic factor to predict the postoperative functional outcome of MSCC patients.

Limitations

Our study is primarily limited by its retrospective design and the corresponding lack of a prospective follow-up assessing the long-term neurological status, development of spinal instability and the survival of MSCC patients. Moreover, we are unable to present data on further adjuvant treatments. Although we demonstrate objective and immediate effects of decompressive laminectomy on the ambulatory status, the course of ambulation over time, which is expected to decline depending on, e.g., local radiation or local tumor recurrence, remains unknown.
Similarly, possible secondary instability in, e.g., patients with laminectomy over the cervico-thoracic or thoraco-lumbar junction cannot be addressed. However, information on the direct effects of the surgical treatment on the functional status is equally important for affected patients and treating physicians. Secondly, due to its single center design and its relatively long time period, our study is prone to selection bias and heterogeneity in treatment due to secular changes. Nevertheless, decompressive laminectomy as a surgical technique did not change during the 10-year period of our analysis and there was no significant difference in surgery time or rate of complications between patients who were operated on within the first 5 vs. the last 5 years of the study. Thirdly, the onset of motor symptoms, usually reported by the patients themselves, is only loosely defined in our series, which limits our results regarding neurologic improvement and outcome after surgery. Prospective studies are certainly needed to provide better data on the long-term effect of decompressive laminectomy and to guide clinical decision-making in the surgical treatment of MSCC patients.

Conclusion

Our data demonstrate a beneficial effect of decompressive laminectomy on the ambulatory status at discharge in the treatment of 101 neurologically impaired MSCC patients: 61 (61%) patients could walk at discharge compared to only 20 (20%) who were able to ambulate preoperatively. More importantly, patients with preserved sensation only or even complete loss of any motor or sensory function (FG A + B) regained ambulation in 25% of cases. Additionally, surgical (4%) and general complications (2%) as well as mortality (1%) after decompressive laminectomy were low. In univariate analysis, the absence of bowel dysfunction as well as a better neurological status prior to surgery were associated with postoperative regaining of the ability to walk.
2020-01-22T14:02:02.279Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "15694a39389564d1f0ab5db8eaf2850cedd09731", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10585-019-10016-z.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "761693c270bb21505f809a599cbcd0f95906c302", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203003079
pes2o/s2orc
v3-fos-license
The role of mentors in addressing issues of work–life integration in an academic research environment

Introduction: There is growing evidence for both the need to manage work–life conflict and the opportunity for mentors to advise their mentees on how to do this in an academic research environment. Methods: A multiphase approach was used to develop and implement an evidence-informed training module to help mentors guide their mentees in issues of work–life conflict. Analysis of existing data from a randomized controlled trial (RCT) of a mentor training curriculum (n = 283 mentor/mentee dyads) informed the development of a work–life mentoring module which was incorporated into an established research mentor training curriculum and evaluated by faculty at a single academic medical center. Results: Only 39% of mentors and 36% of mentees in the RCT indicated high satisfaction with the balance between their personal and professional lives. The majority (75%) of mentors and mentees were sharing personal information as part of the mentoring relationship, which was significantly associated with mentees' ratings of the balance between their personal and professional lives. The effectiveness of the work–life module was assessed by 60 faculty mentors participating in a mentor training program at an academic medical center from 2013 to 2017. Among the respondents to the post-training survey, 82.5% indicated they were very/somewhat comfortable addressing work–life issues with their mentees as a result of the training, with significant improvements (p = 0.001) in self-assessments of mentoring skill in this domain. Conclusions: Our findings indicate that a structured training approach can significantly improve mentors' self-reported skills in addressing work–life issues with their mentees.

Introduction

There is a growing body of evidence indicating that work-life integration is one of the most pressing challenges facing faculty in academic medical centers [1-7]. Work-life conflict has been identified as a key factor affecting faculty retention in academic medicine [5], particularly among women [4]. Most leading academic medical centers have developed and implemented institutional policies to address work-life challenges, including tenure deferment, part-time employment status, maternity/paternity leave, and job sharing [8,9]. However, evidence suggests that there has been variable uptake and acceptance of these policies [1,3,9]. Complementing institutional policies to promote work-life integration, effective mentoring is a critical determinant for career satisfaction in academic medicine [2,10,11]. However, advising about balancing work and family life was the mentoring role least likely to be reported by junior faculty in a large national survey of K award recipients, reported by only about 20% of both male and female junior faculty with dedicated mentors [2]. Dissatisfaction with the balance between personal and professional life was also common among both men (40%) and women (52%) in this sample. Given the growing evidence base for both the need to manage work-life conflict and the opportunity for mentors to provide role modeling, support, and guidance to their mentees on how to do this in an academic research environment, a structured approach to addressing work-life issues in academic mentoring is needed [1,2,4,5]. Total Leadership® is a leadership development program created by Stewart Friedman at the University of Pennsylvania Wharton Business School [12].
It is unique among leadership development programs in that it defines successful leadership as a function of a person's ability to identify and align specific goals in four domains of life: work, family, community, and self. Achieving and sustaining these so-called "four-way wins" is considered central to becoming a more effective leader. Total Leadership provides a structured series of activities, which includes designing behavior change experiments to produce sustainable progress toward achieving self-defined goals in the four life domains with the support of "coaching teams," consisting of other program participants. Total Leadership's structured approach made it an attractive program to adapt into a training curriculum to encourage mentors in an academic research setting to address issues of work-life integration with their mentees. We approached this initiative with a three-phase longitudinal research program. In Phase 1, we sought to understand the extent to which academic research mentors and mentees share information from their personal lives as part of the mentoring relationship, and how this impacts mentees' assessments of their satisfaction with the balance between their personal and professional lives. For this phase of the project, we utilized baseline data from a prior randomized controlled trial (RCT) of a mentor training program known as Mentor Training for Clinical and Translational Researchers [13,14]. In Phase 2, we utilized results from the Phase 1 analysis to adapt elements of Total Leadership into a work-life mentoring module and incorporated this into the established research mentor training curriculum. Finally, in Phase 3, we implemented and evaluated the self-reported effectiveness of the work-life mentoring module as assessed by faculty mentors at the University of Pennsylvania Perelman School of Medicine and its affiliate, the Children's Hospital of Philadelphia, who were participating in the mentor training curriculum.

Approach for Phase 1

Analyses for Phase 1 drew on the baseline data from an RCT designed to test the effectiveness of a research mentor training curriculum, Mentor Training for Clinical and Translational Researchers [13,14]. Data were collected via structured interviews in 2010 from 16 academic health centers across the USA and Puerto Rico. The sample included 283 faculty research mentors and 283 of their matched mentees, who consisted of early career faculty, postdocs, and graduate students. All interviews were conducted in person by trained research assistants at each site. The original clinical trial data collection was approved by the University of Wisconsin-Madison IRB (IRB #: M-2010-1053). The current analysis was exempt from IRB review given that it involved the analysis of existing data recorded in a manner by which subjects could not be identified. Baseline data collected from both mentors and mentees on a wide range of issues relevant to a mentoring relationship included the validated Mentoring Competency Assessment (MCA), in which mentors' skills are rated on a 7-point Likert scale ranging from 1 (not at all skilled) to 7 (extremely skilled) [15]. Other assessments included satisfaction with the respondents' professional life, as well as the balance between their personal and professional lives, also measured on a 7-point Likert scale ranging from 1 (very unsatisfied) to 7 (very satisfied), and an assessment of the climate of the mentee's work environment, ranging from 1 (very negative) to 7 (very positive).
Among the items characterizing the mentoring relationship was one asking respondents to rate the degree to which they know about each other's personal life (e.g., family, hobbies, interests outside of work) on a 7-point scale ranging from 1 (We know nothing about each other's personal life) to 7 (We know a lot about each other's personal life). Data from the baseline surveys were used in the current analysis to avoid introducing any effect of the training program. Our primary outcome of interest was the mentee rating of work-life satisfaction. To facilitate interpretation of analyses, we collapsed the 7-point scale into a three-level outcome variable: low satisfaction (ratings of 1-3), moderate satisfaction (ratings of 4 or 5), or high satisfaction (ratings of 6 or 7). Descriptive statistics were used to summarize the distribution of mentor and mentee ratings of professional satisfaction as well as work-life satisfaction. Bivariate analyses were then conducted on a number of candidate mentor, mentee, and mentoring relationship/environmental factors to explore their association with mentees' assessments of high work-life satisfaction. Factors that were associated with mentee work-life satisfaction were entered into a multivariable logistic regression model to ascertain the independent contribution of each factor to mentee work-life satisfaction. Odds ratios and 95% CIs were calculated.

Approach for Phase 2

Phase 2 activities consisted of "off-line" development of a new module in the mentor training curriculum through the adaptation of Total Leadership content, informed by the analyses conducted in Phase 1. Existing content in the 8-hour Mentor Training for Clinical and Translational Researchers curriculum is organized into roughly 1-hour standalone modules covering a variety of topics (introduction, maintaining effective communication, aligning expectations, assessing understanding, addressing equity and inclusion, fostering independence, promoting professional development, and articulating a mentoring philosophy) [13]. The Total Leadership® program consists of a longitudinal series of exercises that culminate in the design and implementation of a behavior change experiment intended to better align the goals one identifies in each of the four domains of life: work, family, community, and self [12]. We adapted and integrated selected content from the program into the structure of the mentor training curriculum, resulting in a new module entitled Enhancing Work-Life Integration, with a companion facilitator's guide. The first activity requires participants to define a personal vision statement to identify what is most important to them. A "four-way assessment" is then conducted so participants can identify discrepancies between their perceived relative importance of each life domain and how much time and attention they are currently spending in that domain. Informed by these activities, participants design a behavior change experiment, intended to mitigate a discrepancy between the perceived level of importance in each domain and one's actual allocation of time and attention. Participants are divided into groups of 3-4 individuals to serve as "coaches" for one another, troubleshooting the implementation of experiments, and providing mutual support and accountability to ensure that experiments are completed. All work-life integration experiments are designed to be conducted over the 12-week time period during which the mentor training curriculum is implemented.
In-person activities are supplemented by readings from the Total Leadership book [12], which was provided at no cost to workshop participants. Materials developed for the work-life mentoring module are available via the CIMER website https://cimerproject.org/ along with materials for all other mentor training topics.

Approach for Phase 3

The research mentor training program has been offered each year since 2012 to faculty in the Tenure, Research and Clinician-Educator tracks at the "late Assistant Professor" (i.e., years 7-9 of appointment) rank or higher at the University of Pennsylvania Perelman School of Medicine. Participation is voluntary and is limited to a maximum of 15 faculty per session. The Enhancing Work-Life Integration module was developed in 2012 and initially integrated into the curriculum in 2013, resulting in a mentor training curriculum of five 2-hour sessions spaced 2-3 weeks apart. Prior to the start of each program, participating faculty completed a baseline survey ascertaining demographic information, characteristics of their mentoring experience, and self-assessments of a number of mentoring skills and behaviors using the MCA [15]. Faculty were surveyed within 1 week following the completion of the training program in order to ascertain feedback on the content and implementation of the training program, as well as any self-reported changes in mentoring skills assessments. New survey items were developed specific to the Enhancing Work-Life Integration module and incorporated into the standard assessment of the mentor training curriculum. Implementations for years 2013-2015 used a pre-post survey design. For implementations in 2016 and 2017, a retrospective pre-post survey design was used. This adjustment was made to reduce respondent fatigue, as the retrospective pre-post was found to be a reliable means of self-assessing skill gains. The questions remained the same as in previous years, but faculty were only asked to assess skill gains at the completion of the training. The data collection for this phase of work was approved by the IRB at the University of Wisconsin-Madison (protocol #s: 2017-0026, 2016-0458, 2015-0871, 2015-0042, 2013-0732). Data from both baseline and post-training surveys were used to assess the implementation and effectiveness of the new work-life mentoring module. Descriptive statistics were used to summarize the distribution of faculty characteristics, their ratings of specific mentoring skills and behaviors, and their feedback on components of the Enhancing Work-Life Integration module. The Wilcoxon signed-rank test was used to compare faculty assessments of specific mentoring skills and behaviors after vs. before participating in the mentor training program. P values were calculated for the difference in median ratings pre- vs. post-training.

Phase 1 Results

The majority of mentors in the RCT were male (60%), white (91%), and had a mean age of 50.5 years (range: 31-81 years). Most were full or associate professors and reported extensive mentoring experience (average of 15 years, standard deviation 8.0 years). The mentors' most common research focus area was clinical research (66%) and the remainder included laboratory, behavioral, and community engaged research. The mentees' mean age was 35.9 years (range: 25-61), 42% were male and 74% self-identified as white, with 30% self-selecting other racial categories. Most mentees were funded by career development awards or postdoctoral fellowships.
Phase 1 Results

The majority of mentors in the RCT were male (60%) and white (91%), with a mean age of 50.5 years (range: 31-81 years). Most were full or associate professors and reported extensive mentoring experience (average of 15 years, standard deviation 8.0 years). The mentors' most common research focus area was clinical research (66%), and the remainder included laboratory, behavioral, and community engaged research. The mentees' mean age was 35.9 years (range: 25-61); 42% were male and 74% self-identified as white, with 30% self-selecting other racial categories. Most mentees were funded by career development awards or postdoctoral fellowships. The majority conducted clinical research (69%), and the remainder were engaged in the full spectrum of clinical and translational research (for more information, see Pfund et al. 2014).

Mentors and mentees had a similar distribution (weighted kappa = 0.33, 95% CI 0.24, 0.42) of responses to the question about knowledge of each other's personal lives. Fig. 1 provides the distribution of responses for both mentors and mentees, indicating that approximately 3 out of 4 respondents in each group indicated moderate to high knowledge of each other's personal lives. Fig. 2 provides the distributions of the ratings of satisfaction with professional lives, as well as the balance between personal and professional lives, for both mentors and mentees. The majority of both mentors and mentees indicated fairly high ratings of satisfaction with their professional lives, with 78% of mentors and 58% of mentees indicating high satisfaction for this domain. The distribution of ratings for the balance between personal and professional lives indicated lower ratings of satisfaction, with only 39% of mentors and 36% of mentees indicating high satisfaction. Although the overall distributions of ratings for personal/professional life satisfaction were similar for mentors and mentees, they were not significantly correlated with one another (weighted kappa = 0.02).

Because the mentee's rating of satisfaction with the balance between their personal and professional life was our primary outcome of interest, we examined the association between several mentor, mentee, and mentoring relationship characteristics and the mentee's rating, grouped as high (6, 7) vs. moderate/low (1-5) satisfaction. There was no association (p > 0.05) between the mentee's satisfaction rating and the mentor's or mentee's gender or race, the mentor's age or years of mentoring experience, the mentee's academic rank or productivity as measured by number of grants submitted, or indicators of specific characteristics of the mentoring relationship such as responsiveness of mentors and helpfulness of feedback as assessed by mentees. Notably, there was no evidence (p = 0.56) for an association between gender concordance in the mentoring relationship and the mentee's satisfaction with the balance between their personal and professional lives. In addition, there was no association between the mentee's satisfaction with the balance between their personal and professional life and their own assessment of the knowledge of each other's personal lives (p = 0.07). Conversely, an association was noted (p < 0.05) between the mentee's personal/professional balance satisfaction and the mentor's academic rank; there was a "U-shaped" relationship, with lower ratings of personal/professional satisfaction among mentees with associate professor mentors. Associations were also found with the mentee's assessment of the work climate (p = 0.03), the mentee's assessment of the overall quality of their mentoring (p = 0.03), the mentee's age (younger age associated with higher satisfaction, p = 0.01), and finally, the mentor's reported knowledge of each other's personal life (p = 0.03). Table 1 provides results of the multivariable logistic regression analyses.
After adjustment for all variables in the model, only the academic rank of the mentor (i.e., Professor), the mentee's (younger) age, and the mentor's knowledge of each other's personal lives remained significantly associated with high mentee satisfaction with the balance between their personal and professional lives. Mentees whose mentors reported a high degree of shared personal knowledge were over twice as likely to report high satisfaction with the balance between their personal and professional lives as compared to mentees whose mentors reported low or moderate knowledge of each other's personal lives. This finding is notable in light of the distribution of the mentors' satisfaction with their own personal/professional balance, suggesting that independent of their own personal/professional satisfaction, mentors' knowledge of their mentees' personal lives is associated with improved perceptions of personal/professional satisfaction on the part of mentees.

Phase 3 Results

From 2013 to 2017, a total of 60 faculty participated in mentor training sessions which included Enhancing Work-Life Integration. Participant characteristics (n = 55 completed responses) are summarized in Table 2. Participants provided mentoring to a wide range of research trainees including undergraduates, PhD and Masters students, medical students, postdoctoral fellows, and medical specialty fellows, as well as junior faculty.

Feedback on the overall mentor training program was positive, with 46/50 (92%) respondents indicating the training was a valuable use of their time and 44/50 (88%) indicating they were likely or very likely to recommend the training to a colleague. Further, 47/50 (94%) indicated that they had already made or were planning to make changes in their mentoring practice as a result of the training. Feedback on the Enhancing Work-Life Integration module demonstrated that 33/40 (82.5%) respondents self-rated their behavior change experiment as moderately or very successful, and 45/47 (95.7%) respondents were possibly or very likely to continue the experiment following the training. Finally, among the 40 respondents to the post-training survey question, "How comfortable would you be addressing work-life integration with your mentees," 15 (37.5%) indicated they were very comfortable, 18 (45%) indicated they were somewhat comfortable, 6 (15%) indicated that they were somewhat uncomfortable, and only 1 (2.5%) indicated he/she was very uncomfortable.

Participants compared their self-assessments in each mentoring domain immediately following the training vs. prior to the training [15]. For the true pre-post surveys, overall MCA skills assessment scores increased from a median (sd) of 4.3 (0.63) to 5.41 (0.46), p < 0.002, and from a median (sd) of 4.2 (0.70) to 5.27 (0.64), p < 0.001, for the retrospective pre-post surveys. Of specific relevance to this project, the median (sd) competency score for "Helping mentees balance work with professional life" increased as well.

Discussion

Over half of both mentors and mentees in our national sample of dyads indicated low to moderate levels of satisfaction with work-life balance, proportions similar to those found by DeCastro in a separate sample of junior faculty with K awards [2].
Despite relatively low ratings by mentors of satisfaction with their own personal/professional balance, a significant proportion of mentors and mentees were sharing personal information as part of the mentoring relationship, and such information sharing was associated with higher ratings of work-life satisfaction by mentees. These findings supported the development of a structured mentor training curriculum focused on both improving work-life integration for mentors and encouraging interactions between mentors and mentees focused on issues of work-life integration. This was accomplished by adapting content from a leadership development program into a proven-effective research mentor training curriculum. The new Enhancing Work-Life Integration module was well received by faculty participants and resulted in both direct benefit to the faculty mentors, in the form of their own successful work-life integration experiments, and significant improvements in their self-assessed competency in addressing work-life issues with their mentees.

While the findings in Phase 1 from a prior randomized trial of the mentor training program cannot establish a cause-effect relationship between mentors discussing work-life issues and mentees' perceptions of work-life satisfaction, our findings indicate that a structured approach designed to help mentors guide their mentees in managing work-life conflict can significantly improve mentors' self-reported mentoring skills in this domain [2,10,11]. Such training can encourage constructive expansion of traditional mentoring activities to address this area of increasing importance to junior faculty and research trainees and should complement institutional policies and other efforts to facilitate better work-life integration for faculty in academic medicine [1,3,8,9].

The voluntary nature of the mentor training program evaluated in Phase 3 likely self-selected for faculty who were motivated to participate in such professional development programs and more likely to perceive benefit from participating. The nearly universal positive feedback from participating faculty with a wide range of research interests and mentoring experience indicates that such a program is likely applicable to a varied faculty phenotype in any academic medical center. However, all participating faculty were from a single academic medical center and its affiliated free-standing children's hospital, thus limiting the potential generalizability of these findings. The Enhancing Work-Life Integration module is now available and being used in trainings nationally via NRMN and has been adopted into other mentee training programs.

The current research program did not include longitudinal follow-up of faculty who participated in the mentor training program to gain insight on how they ultimately used the training to address issues of work-life conflict with their mentees, nor could it assess the impact of this training on their mentees' assessments of work-life satisfaction. Faculty were encouraged to utilize the Total Leadership materials with small groups of their mentees who are at similar career stages to facilitate peer-to-peer mentee coaching groups akin to the experience from the training program. Future research should assess the implementation of work-life integration discussions and activities in the mentoring relationship to better define how the training is best translated into mentoring practice.
In addition, further research is needed to determine the impact of this mentor training curriculum on outcomes of importance to mentees, such as their perceived satisfaction with their work-life integration and the contribution made by their mentors. This would help further guide mentors in addressing this key topic with mentees and inform institutional practices to address work-life conflicts.
A note on Bogomolov-Gieseker type inequality for Calabi-Yau 3-folds

The conjectural Bogomolov-Gieseker (BG) type inequality for tilt semistable objects on projective 3-folds was proposed by Bayer, Macri and the author. In this note, we prove our conjecture for slope stable sheaves with the smallest first Chern class on certain Calabi-Yau 3-folds, e.g. quintic 3-folds.

1. Introduction

1.1. Motivation and result. Let $X$ be a smooth projective 3-fold over $\mathbb{C}$. Given an element $B + i\omega \in \mathrm{NS}(X)_{\mathbb{C}}$ with $\omega$ ample, the heart of a bounded t-structure $\mathcal{B}_{B,\omega} \subset D^b\mathrm{Coh}(X)$ was constructed in [3], following the construction of Bridgeland's stability conditions on projective surfaces [5], [1]. The notion of tilt stability on $\mathcal{B}_{B,\omega}$ was introduced in [3], and a conjectural Bogomolov-Gieseker (BG) type inequality among Chern characters of tilt semistable objects in $\mathcal{B}_{B,\omega}$ was proposed in [3, Conjecture 1.3.1]. Our conjecture in [3] turned out to imply several very important results: construction of Bridgeland stability on projective 3-folds [3], Fujita's conjecture in birational geometry [2], and Ooguri-Strominger-Vafa's conjecture in string theory [9].

In this note, we report partial progress toward the conjectural BG type inequality in [3]. When $B = 0$, the first Chern class on the heart $\mathcal{B}_{0,\omega}$ is always nonnegative, and plays the role of the rank on the category of coherent sheaves. A hopeful approach toward the proof of the main conjecture in [3] is to use an induction argument on the first Chern classes of tilt semistable objects, as in the proof of the BG type inequality without $\mathrm{ch}_3$ (cf. [3, Theorem 7.3]). As a first step of this induction argument, the conjectural BG type inequality should be solved when the tilt semistable object has the smallest first Chern class. In this case, the required result is formulated in the following conjecture for slope stable sheaves (cf. [3, Conjecture 7.2.3]).¹

¹ The statement of [3, Conjecture 7.2.3] was more general than Conjecture 1.1 and the formulation is slightly different. When Pic(X) is generated by one element, they are obviously equivalent.

The above conjecture was studied in [3, Example 7.2.4] for rank one torsion free sheaves. In this case, the inequality (1) is reduced to a Castelnuovo type inequality for low degree curves in $X$. On the other hand, the higher rank case was not studied in [3]. The purpose of this article is to show that, when $X$ is a certain Calabi-Yau 3-fold, the inequality (1) is reduced to a Castelnuovo type inequality even in the higher rank case. The main result is as follows:

Theorem 1.2. Let $X$ be a smooth projective Calabi-Yau 3-fold such that $\mathrm{Pic}(X)$ is generated by $\mathcal{O}_X(H)$ for an ample divisor $H$ in $X$. Suppose that the inequalities (2) and (3) hold for any one dimensional subscheme $C \subset X$ with $C \cdot H < H^3/2$. Then $X$ satisfies Conjecture 1.1. Furthermore, the inequality (1) is an equality only when $E = \mathcal{O}_X(H)$.

As we discussed in [3, Example 7.2.4], a typical (and important) example satisfying the assumption of Theorem 1.2 is a quintic 3-fold in $\mathbb{P}^4$. Therefore we obtain the following corollary:

Corollary 1.3. Let $X \subset \mathbb{P}^4$ be a smooth quintic 3-fold. Then $X$ satisfies Conjecture 1.1.

In the case of quintic 3-folds, the conditions $c_1(E) = [H]$, $\mathrm{ch}_2(E) \cdot H > 0$ and the Bogomolov-Gieseker inequality [4], [6] restrict the rank of $E$ to at most five. So a priori, the sheaf $E$ could have $\mathrm{rank}(E) \ge 2$. On the other hand, we do not know any example of such a sheaf $E$ with $\mathrm{rank}(E) \ge 2$ (cf. Remark 2.5).
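The arithmetic behind this rank bound is elementary; the following lines sketch it under one standard formulation of the classical Bogomolov-Gieseker inequality for $\mu_H$-stable torsion free sheaves, namely $\mathrm{ch}_2(E) \cdot H \le c_1(E)^2 \cdot H / (2\,\mathrm{ch}_0(E))$ (this normalisation is an assumption of the sketch, not a quotation from the note):

```latex
% Back-of-the-envelope check for a quintic 3-fold, where H^3 = 5.
\begin{align*}
  c_1(E) = [H] &\;\Longrightarrow\; c_1(E)^2 \cdot H = H^3 = 5,\\
  \mathrm{ch}_2(E)\cdot H
    = \frac{\bigl(c_1(E)^2 - 2c_2(E)\bigr)\cdot H}{2}
    \;\in\; \tfrac{1}{2} + \mathbb{Z}
    &\;\Longrightarrow\; \mathrm{ch}_2(E)\cdot H > 0 \text{ forces }
      \mathrm{ch}_2(E)\cdot H \ge \tfrac{1}{2},\\
  \tfrac{1}{2} \;\le\; \mathrm{ch}_2(E)\cdot H \;\le\; \frac{5}{2\,\mathrm{ch}_0(E)}
    &\;\Longrightarrow\; \mathrm{ch}_0(E) \le 5.
\end{align*}
```

Every rank from one to five survives this constraint, which is why the higher rank case must still be addressed.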
The result of Corollary 1.3 means that $\mathrm{ch}_3(E)$ should obey the desired inequality (1), if such a sheaf $E$ exists. In general, the third Chern character $\mathrm{ch}_3(E)$ is known to be bounded by a certain polynomial of $\mathrm{ch}_0(E)$, $\mathrm{ch}_1(E)$ and $\mathrm{ch}_2(E)$; see [8]. However, the evaluation in [8] is not strict enough to show the inequality (1). Although the hypersurface restriction of $E$ plays an important role in [8], we do not take the hypersurface restriction. Instead we take the universal extension and the classical Bogomolov-Gieseker inequality to evaluate the dimensions of cohomology groups. As far as the author knows, such a method is not seen in the literature.

Acknowledgement. This work is supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. This work is also supported by Grant-in-Aid for Scientific Research grant (22684002), and partly (S-19104002), from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

Notation and convention. In this note, all the varieties are defined over $\mathbb{C}$. We say $X$ is a Calabi-Yau 3-fold if $\dim X = 3$, its canonical line bundle is trivial and $h^1(\mathcal{O}_X) = 0$. For an ample divisor $H$ in a 3-fold $X$ and a torsion free sheaf $E$ on $X$, its slope is denoted by
$$\mu_H(E) = \frac{c_1(E) \cdot H^2}{r(E)},$$
where $r(E)$ is the rank of $E$. The notion of slope stability is defined in the usual way (cf. [7]). For a subscheme $Z \subset X$, the defining ideal sheaf of $Z$ is denoted by $I_Z$.

2. Proof of Theorem 1.2

2.1. Some lemmas. The key ingredient for the proof of Theorem 1.2 is the following two lemmas, which may be well-known. For the lack of references, we give the proofs.

Lemma 2.1. Let $E$ be a slope stable sheaf on $X$ with $c_1(E) = [H]$, and let
$$0 \to \mathcal{O}_X \otimes \mathrm{Ext}^1(E, \mathcal{O}_X)^{\vee} \to E' \to E \to 0 \tag{4}$$
be the universal extension. Then the sheaf $E'$ is also slope stable.

Proof. We prove the assertion by induction on $\mathrm{ext}^1(E, \mathcal{O}_X)$. When $\mathrm{ext}^1(E, \mathcal{O}_X) = 0$, the assertion is obvious. Suppose that $\mathrm{ext}^1(E, \mathcal{O}_X) > 0$, and take a non-zero element $a \in \mathrm{Ext}^1(E, \mathcal{O}_X)$. The element $a$ corresponds to the extension
$$0 \to \mathcal{O}_X \to E_a \to E \to 0. \tag{5}$$
We show that $E_a$ is slope stable. Suppose by contradiction that $E_a$ is not slope stable. Then there is a saturated subsheaf $F \subset E_a$ such that $F$ is slope stable and
$$\mu_H(F) \ge \mu_H(E_a). \tag{6}$$
If we write $c_1(F) = k[H]$, then $k \ge 1$, hence $\mathrm{Hom}(F, \mathcal{O}_X) = 0$. It follows that the composition
$$F \hookrightarrow E_a \twoheadrightarrow E \tag{7}$$
is non-zero, which implies $\mu_H(F) \le \mu_H(E)$. Combined with (6), we obtain the inequality
$$\frac{H^3}{r(E) + 1} \le \frac{k H^3}{r(F)} \le \frac{H^3}{r(E)}.$$
The above inequality immediately implies $k = 1$ and $r(F) = r(E)$. Then $\mu_H(F) = \mu_H(E)$, and since $F$ and $E$ are slope stable with the same slope, the non-zero morphism (7) is an isomorphism. However this contradicts the fact that the sequence (5) is non-split.

Let $V_a$ be the $\mathbb{C}$-vector space $\mathrm{Ext}^1(E_a, \mathcal{O}_X)^{\vee}$ and take the universal extension
$$0 \to \mathcal{O}_X \otimes V_a \to E'_a \to E_a \to 0. \tag{8}$$
Applying $\mathrm{Hom}(-, \mathcal{O}_X)$ to the sequence (5), we see that $\mathrm{ext}^1(E_a, \mathcal{O}_X) = \mathrm{ext}^1(E, \mathcal{O}_X) - 1$. Hence $E'_a$ is slope stable by the assumption of the induction. On the other hand, composing the sequence (5) with (8), we obtain the exact sequence
$$0 \to \mathcal{O}_X \otimes (V_a \oplus \mathbb{C}) \to E'_a \to E \to 0.$$
It is easy to see that the above sequence is identified with the sequence (4), hence $E' \cong E'_a$ is slope stable.

Lemma 2.2. In the situation of Lemma 2.1, suppose that $r(E) \ge 2$ and there is a non-zero element $s \in H^0(X, E)$. Then for the associated exact sequence
$$0 \to \mathcal{O}_X \stackrel{s}{\to} E \to F \to 0,$$
the sheaf $F$ is slope stable.

Proof. We first show that $F$ is torsion free. If $F$ has a torsion, there is an exact sequence
$$0 \to \mathcal{O}_X \to A \to T \to 0, \tag{9}$$
where $T$ is a non-zero torsion sheaf and $A \subset E$ is a rank one torsion free sheaf. If $\dim \mathrm{Supp}(T) = 2$, then $c_1(A) = k[H]$ with $k \ge 1$, which contradicts the fact that $E$ is slope stable. Therefore $\dim \mathrm{Supp}(T) \le 1$, hence $\mathrm{Ext}^1(T, \mathcal{O}_X) = 0$. Therefore the sequence (9) splits, which contradicts the fact that $A$ is torsion free.

Next suppose that $F$ is not slope stable.
Then there is a slope stable sheaf $G$ and a surjection $F \twoheadrightarrow G$ satisfying
$$\mu_H(G) \le \mu_H(F).$$
Also, since there is a surjection $E \twoheadrightarrow F \twoheadrightarrow G$ and $E$, $G$ are slope stable,
$$\mu_H(E) < \mu_H(G).$$
If we write $c_1(G) = k[H]$, the above inequalities read
$$\frac{H^3}{r(E)} < \frac{k H^3}{r(G)} \le \frac{H^3}{r(E) - 1}.$$
It is immediate to see that there is no solution $(k, r(G))$ satisfying the above inequality and $r(G) < r(E) - 1$. Hence $F$ is slope stable.

As a corollary of Lemma 2.2, we have the following:

Corollary 2.3. In the situation of Lemma 2.1, there is an exact sequence of the form
$$0 \to \mathcal{O}_X^{\oplus k} \to E \to F \to 0, \tag{10}$$
such that $F$ is either a rank one torsion free sheaf or a slope stable sheaf with $r(F) \ge 2$ and $h^0(F) = 0$.

Proof. We show the assertion by induction on the invariant $\theta(E)$. The assertion is obvious when $\theta(E) = 0$. Suppose that $\theta(E) > 0$, i.e. $h^0(E) \neq 0$ and $r(E) \ge 2$. Then there is a non-zero element $s \in H^0(X, E)$. If we take the exact sequence
$$0 \to \mathcal{O}_X \stackrel{s}{\to} E \to F_s \to 0, \tag{11}$$
then $F_s$ is slope stable by Lemma 2.2. By applying $\mathrm{Hom}(\mathcal{O}_X, -)$ to the sequence (11), we see $h^0(F_s) = h^0(E) - 1$. Hence we have $\theta(F_s) = \theta(E) - 1$, and by the assumption of the induction, there is an exact sequence
$$0 \to \mathcal{O}_X^{\oplus (k-1)} \to F_s \to F \to 0, \tag{12}$$
such that $F$ is a rank one torsion free sheaf or a slope stable sheaf with $r(F) \ge 2$ and $h^0(F) = 0$. The desired exact sequence (10) is obtained by combining the sequence (12) with (11).

2.2. Proof of Theorem 1.2.

Proof. Let $X$ be as in the statement of Theorem 1.2, and $E$ a slope stable sheaf on $X$ with $c_1(E) = [H]$ and $\mathrm{ch}_2(E) \cdot H > 0$. By Corollary 2.3, there is an exact sequence of the form
$$0 \to \mathcal{O}_X^{\oplus m} \to E \to F \to 0, \tag{13}$$
such that either $F$ is a rank one torsion free sheaf or a slope stable sheaf with $r(F) \ge 2$ and $h^0(F) = 0$. Note that in the first case, we have $F \cong \mathcal{O}_X(H) \otimes I_Z$ for a subscheme $Z \subset X$ with $\dim Z \le 1$. We evaluate $\mathrm{ch}_3(E) = \mathrm{ch}_3(F)$ by dividing into the following three cases.

In this case, we have, by the Serre duality, $\mathrm{ext}^1(F, \mathcal{O}_X) = h^2(X, F)$, which is zero by the cohomology exact sequence associated to the sequence
$$0 \to \mathcal{O}_X(H) \otimes I_Z \to \mathcal{O}_X(H) \to \mathcal{O}_Z(H) \to 0$$
and the Kodaira vanishing $h^2(\mathcal{O}_X(H)) = 0$. Hence the sequence (13) splits if $m > 0$, which contradicts the slope stability of $E$. Therefore $E \cong \mathcal{O}_X(H) \otimes I_Z$, and a direct computation of the Chern character of $\mathcal{O}_X(H) \otimes I_Z$ implies the inequality (1); the equality holds only when $Z = \emptyset$.

In this case, $\mathrm{ch}_2(E) \cdot H = \mathrm{ch}_2(F) \cdot H > 0$ is equivalent to
$$C \cdot H < H^3/2.$$
Applying the assumption (3), we obtain the required bound on $\mathrm{ch}_3(E)$. Therefore the inequality (1) holds.
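The equality case can be checked directly; the following lines record the routine Chern character computation for the line bundle $\mathcal{O}_X(H)$ (standard facts, included here only for convenience):

```latex
% ch of a line bundle is the exponential of its first Chern class;
% on a 3-fold the expansion truncates in degree 3.
\begin{align*}
  \mathrm{ch}(\mathcal{O}_X(H)) &= e^{H} = 1 + H + \frac{H^2}{2} + \frac{H^3}{6},\\
  \mathrm{ch}_2(\mathcal{O}_X(H)) \cdot H &= \frac{H^3}{2} > 0,\qquad
  \mathrm{ch}_3(\mathcal{O}_X(H)) = \frac{H^3}{6}.
\end{align*}
```

So $E = \mathcal{O}_X(H)$ (the case $Z = \emptyset$) satisfies the hypotheses on $E$, and by Theorem 1.2 it is the unique case in which the inequality (1) becomes an equality.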
Obstructive Sleep Apnoea and Lipid Metabolism: The Summary of Evidence and Future Perspectives in the Pathophysiology of OSA-Associated Dyslipidaemia

Obstructive sleep apnoea (OSA) is associated with cardiovascular and metabolic comorbidities, including hypertension, dyslipidaemia, insulin resistance and atherosclerosis. Strong evidence suggests that OSA is associated with an altered lipid profile including elevated levels of triglyceride-rich lipoproteins and decreased levels of high-density lipoprotein (HDL). Intermittent hypoxia; sleep fragmentation; and consequential surges in the sympathetic activity, enhanced oxidative stress and systemic inflammation are the postulated mechanisms leading to metabolic alterations in OSA. Although the exact mechanisms of OSA-associated dyslipidaemia have not been fully elucidated, three main points have been found to be impaired: activated lipolysis in the adipose tissue, decreased lipid clearance from the circulation and accelerated de novo lipid synthesis. This is further complicated by the oxidisation of atherogenic lipoproteins, adipose tissue dysfunction, hormonal changes, and the reduced function of HDL particles in OSA. In this comprehensive review, we summarise and critically evaluate the current evidence about the possible mechanisms involved in OSA-associated dyslipidaemia.

Introduction

Obstructive sleep apnoea (OSA) is a common disorder which is characterised by recurrent collapses of the upper airways during sleep. Intermittent hypoxia (IH) and sleep fragmentation are the most important factors in the pathomechanism of OSA, resulting in sympathetic overdrive, oxidative stress and systemic inflammation. These derangements lead to cardiovascular and metabolic alterations, such as atherosclerosis, hypertension, insulin resistance and dyslipidaemia, ultimately contributing to cardiovascular morbidity and mortality [1]. Dyslipidaemia is an independent risk factor for cardiovascular morbidity [2]. There is also strong evidence supporting the association of OSA with an altered lipid profile: elevated triglyceride (TG), total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) concentrations with a corresponding reduction in high-density lipoprotein cholesterol (HDL-C) levels were commonly found in patients with OSA [3,4].

Understanding the mechanisms linking OSA to lipid abnormalities is of major clinical importance, as they could represent treatable traits (i.e., choosing the right lipid-lowering medication and lifestyle changes), and also highlights the importance of active screening for dyslipidaemia in patients with OSA. The aim of this review is to summarise and critically evaluate the current evidence about the possible mechanisms involved in OSA-associated dyslipidaemia. Naturally, we focus on human studies; however, we briefly discuss animal models and highlight if the research was conducted in humans or animals.

The Physiological Role of Chylomicrons

Dietary TGs are hydrolysed by several lipases (for example pancreatic and gastric lipases) to free fatty acids (FFAs) and monoacylglycerol (MAG) to be absorbed by the enterocytes in the small intestine [5].
FFAs can be transported by passive diffusion or by fatty acid transporters, such as cluster determinant 36 (CD36) or fatty acid-transport protein 4 (FATP4). Dietary cholesterol esters (CEs) are hydrolysed to FFAs and free cholesterol (FC). Several FC transporters were identified on the enterocytes, such as the Niemann-Pick C1-like protein (NPC1L1), ATP-binding cassette protein G5 (ABCG5) and G8 (ABCG8) and scavenger receptor class B type I (SR-BI). They are also expressed in the apical membrane of hepatocytes [6].

The chylomicrons (CMs) are large TG-rich lipoproteins containing apolipoprotein B-48 (apoB-48) which are usually formed by dietary FFA absorbed from the small intestine [7]. The most common role of CMs is to transport dietary cholesterol and TGs to the peripheral tissues and to the liver; the process is called "exogenous lipid transport" (Figure 1). CMs also have an active role in enterohepatic cholesterol transport. Around 1000 g of biliary cholesterol is secreted to the intestine every day. Thus, the majority of CM-transported cholesterol derives from the reabsorption of biliary cholesterol [8]. CMs are synthetised in the endoplasmic reticulum (ER) and Golgi apparatus of the enterocytes. First, in the ER, the previously hydrolysed FFAs, FC and MAG are resynthesised. FATP4 converts FFAs to fatty acyl-CoA (FFA-CoA) [9], and thus FFA-CoA and MAG can be converted to diacylglycerol (DAG) by monoacylglycerol acyl transferase 2 (MGAT2) [10,11]. Then, diacylglycerol acyl transferase 1 (DGAT1) further converts DAG to TAG.

The Role of Lipoprotein Lipase

LpL hydrolyses the VLDL- and CM-associated TAGs to FFAs and MAGs, which are taken up by the target cells. This enzyme is mainly produced by adipose tissue and skeletal and cardiac muscle and transported to the luminal surface of endothelial cells by the glycosylphosphatidylinositol-anchored high-density lipoprotein-binding protein 1 (GPIHBP1) [47]. This endothelial LpL pool is referred to as the functional LpL [48]. LpL activity is regulated by different physiological stimuli in a tissue-specific manner. In white adipose tissue, LpL activity is increased by the postprandial state and decreased by fasting [49]. On the contrary, fasting activates the myocardial LpL [50]. Finally, in skeletal muscle, LpL activity is promoted by acute exercise [51]. ApoC-II is the main cofactor of LpL activity [18], whereas apoC-I and apoC-III have been shown to inhibit LpL activity [52]. Moreover, some members of the family of angiopoietin-like proteins, such as ANGPTL3 (hepatocyte), ANGPTL4 (adipocyte) and ANGPTL8, also promote the inhibition of LpL [53,54]. Several hormones, such as insulin, glucocorticoids and adrenalin, stimulate LpL activity in the adipose tissue [6].

The Physiological Role of HDL

HDL is a major mediator in reverse cholesterol transport (RCT). RCT is termed as a cholesterol transport from the peripheral cells (including macrophages) back to the hepatocytes for further metabolism [55]. In general, HDL particles comprise a hydrophobic core with CE and TG covered by PL, FC and apolipoproteins (apoA-I, A-II, A-IV, A-V, C-I, C-II, C-III, E, F, J, M). Various HDL particles highly differ in their size, shape, proportion of proteins and lipids and biological activities [56]. The two main forms of HDL are the small and poorly lipidated discoid HDL (also known as preβ-HDL) and the larger, CE/TG-containing spherical HDL (also known as α-HDL) [56,57]. Spherical particles represent the majority of HDL particles in the circulation [57].
HDL2 particles are larger and lipid-rich but less dense, and HDL3 particles are smaller, lipid-poor and dense [58]. These can further be divided into HDL3c, HDL3b, HDL3a, HDL2a and HDL2b fractions [57]. Further subclasses can be identified by gel electrophoresis: large (HDL1-3), intermediate and small (HDL8-10) subfractions [31]. The small HDL8-10 particles are atherogenic through easy penetration into the endothelium and low recognition by HDL receptors [59,60].

The main structural apolipoproteins of HDL are apoA-I (70%) and apoA-II (20%) [61]. ApoA-I plays a role in activating LCAT and also has anti-inflammatory and antioxidant effects [62,63]. ApoA-II is an important inhibitor of LpL, directly and indirectly by replacing apoC-II in VLDL. Moreover, it also has a cofactor activity for LCAT and CETP [64]. ApoM accounts for approximately 5% of HDL proteins. It plays a role in lipid transfer into nascent HDL [65] and enhances the cholesterol efflux from foam cells [66]. Noteworthily, apoM is a carrier of sphingosine-1-phosphate (S1P) mentioned below [67]. Other apolipoproteins constitute a minor amount of HDL, such as apoA-IV, V, C-I, C-II, C-III, D, E, J and L [68]. It is important to note that apoJ, or clusterin, has anti-apoptotic, anti-atherogenic and anti-inflammatory properties and is involved in lipid transport, forming HDL particles [69].

ApoA-I is mainly produced by the liver (70%) [70] and partly by the intestine (30%) [71]. Lipid-poor apoA-I binds ATP-binding cassette transporter A1 (ABCA1) on peripheral cells (such as hepatocytes and macrophages [72]), resulting in FC and PL transport from the cells to apoA-I [73]. Two apoA-I molecules with FC and PL form a discoidal HDL formation [57]. Noteworthily, these particles can also be produced from surface components of the catabolism of TG-rich lipoproteins after the LpL hydrolysis [56]. The discoidal HDL formation reacts quickly with lecithin cholesterol acyltransferase (LCAT), which transfers a fatty acid from lecithin to FC, resulting in CE. After esterification and incorporation of more apoA-I by LCAT, the HDL particle becomes a mature spherical form (small HDL3, large HDL2) [55,57], which is dynamically modified in the RCT. Phospholipid transfer protein (PLTP) transfers more PL and FC from VLDL to HDL, enhancing the LCAT reaction and resulting in HDL2 with increased size [74]. PLTP can lead to the fusion of HDL particles with a consequential production of small lipid-poor apoA-I/PL complexes [75].

The mature HDL particles can be cleared from the circulation by two main pathways: (1) The main receptor in the RCT is SR-BI, which is expressed on hepatocytes and steroidogenic cells. SR-BI has an affinity for the CE and apoA-I content of HDL particles [76,77]. The hepatic HDL uptake is stimulated by HL [78]. (2) The other mechanism is the indirect pathway, in which spherical HDL particles are modified by CETP. CETP is mainly produced by hepatocytes and adipocytes and circulates with HDL [79]. CETP transports CE from HDL towards apoB-containing lipoproteins (mainly LDL, but also VLDL and CM) in exchange for TG in the opposite direction. The transfer activity of CETP is regulated by the triglyceride levels [80]: in the physiological state, predominantly CEs are transported from HDL to apoB-containing lipoproteins with a minor transfer of TG in the opposite direction. In hypertriglyceridaemia, increased concentrations of apoB-containing lipoproteins are available as potential acceptors for CEs.
Moreover, CETP also transports TG from TG-rich lipoproteins (VLDL, CM) to LDL and HDL, resulting in small, dense and TG-rich particles [80]. The TG and PL content of these HDL2 particles can be further hydrolysed by HL, resulting in lipid-poor small HDL3 particles, which interact with ABCA1 for the next HDL cycle [56,78]. The CE content of the apoB-containing particles is taken up by the hepatic LDLR.

It is important to mention that, by taking cholesterol from foam cells, HDL has a protective role against atherosclerosis [81-83]. HDL also inhibits LDL oxidation. Small HDL3 particles are more resistant to oxidative damage than HDL2 particles and inactivate the products of LDL lipid peroxidation [84,85]. Several HDL-associated apolipoproteins [56] and HDL-bound paraoxonase-1 (PON-1) possess antioxidant properties [86]. HDL displays anti-inflammatory effects by decreasing the expression of inflammatory cytokines and adhesion molecules and inhibiting inflammatory cell activation [87,88].

Animal Models

Animal models allow experimental investigation of OSA-related processes in isolation and have been extensively used to explore the relationship between dyslipidaemia and OSA. The effects of IH were the most widely investigated [89]. These models allow researchers to precisely define the major parameters of IH, such as the frequency or the severity of the hypoxic events [90]. However, it is important to consider that experimental IH episodes cause hypoxia in animals that is significantly more severe than that experienced in humans. For a realistic simulation of IH, SaO2 should be much lower in mice than the SaO2 observed in patients [90]. Most of the experimental studies investigated whether IH regulates the expression of different transcription factors involved in the lipid metabolism. The regulation of hypoxia-inducible factor-1 (HIF-1), SREBP-1 and stearoyl-coenzyme A desaturase 1 (SCD-1) was investigated in detail in rat and mouse models [91-93]. The consequences of dyslipidaemia, such as atherosclerotic lesions associated with IH, can also be studied more precisely in animal models [94]. In humans, IH and sleep fragmentation are closely interrelated [95], and animal models could better separate these entities. On the other hand, dyslipidaemia in humans is complicated by genetic factors, diet, exercise, abdominal obesity, the presence of comorbidities and medications. Therefore, complex animal models which study numerous heterogeneous processes simultaneously are warranted [94].

Calorie Intake in OSA

Excessive calorie intake is a main cause of obesity, which is the most important risk factor for OSA. Indeed, patients with OSA tend to consume high-calorie diets [96]. Hunger and food intake are controlled by the balance of a number of hormones, such as leptin, ghrelin, insulin, cholecystokinin, glucagon-like peptide 1 (GLP-1) and peptide YY [97]. However, increased levels of GLP-1 and gastric inhibitory polypeptide/glucose-dependent insulinotropic polypeptide were found in patients with OSA [98]. Moreover, IH seemed to upregulate the expression of peptide YY, GLP-1 and neurotensin in enteroendocrine cells [99]. These hormones have an anorexigenic influence on the enteric nervous system. As a vicious circle, sleep fragmentation in OSA attenuates hypothalamic leptin receptors, resulting in cravings for high-energy foods [100]. The consequences of this leptin resistance are an increase in fat mass and weight gain, worsening obesity [100].
The ingested fat is the main drive of CM production [101], leading to further alterations in the lipid profile.

Intestinal Lipid Absorption in OSA

OSA is associated with postprandial hyperlipidaemia [102]. Indeed, using oral retinyl palmitate, the retinyl esters incorporated into CM had an earlier peak under IH than under normoxia [103]. Although patients with OSA have higher postprandial TG levels, experimental IH in this group did not result in a further increase in TG levels [104]. High postprandial TG levels could be due to accelerated intestinal absorption. For instance, the FFA transporter CD36 is upregulated by HIF-1 [105]. However, CD36 expression is also upregulated by the peroxisome proliferator-activated receptor-gamma (PPAR-γ), the expression of which was reported to be decreased in OSA [106]. Nevertheless, intrahepatic CD36 was increased in mice exposed to IH [107].

Bile acids act as natural detergents: they emulsify dietary fat into smaller lipid droplets, making the digestion by lipases easier. Cytochrome P450 7A1 (CYP7A1), an important enzyme in bile acid synthesis, was repressed by HIF-1α under hypoxia, suggesting altered bile acid production [108]. However, the effects of IH on bile acid synthesis and absorption have not been investigated. Similarly, gastric and pancreatic lipases were not studied in OSA.

Impaired Intravascular Lipolysis and Uptake by the Periphery: LpL Dysfunction in OSA

A well-described mechanism for OSA-associated hyperlipidaemia is the impaired clearance of circulating lipoproteins by LpL (Figure 2). Drager et al. showed that the functional clearance rate of CEs and TGs was significantly lower among patients with OSA compared to controls [103]. This delayed clearance was correlated with the depth of nocturnal hypoxaemia (MinSatO2) and disease severity (apnoea-hypopnoea index (AHI)) [109]. In human preadipocytes exposed to 24 hours of hypoxia in vitro, a 6-fold decrease in LpL activity was detected [110]. Serum LpL concentrations were lower in patients with OSA compared to controls and negatively correlated with disease severity [111].

Several OSA-associated mechanisms can lead to the altered function of LpL, including IH, oxidative stress, inflammation, catecholamines and hormones.
IH itself is a potent inhibitor of LpL [103], and the degree of hypoxia correlates with the delay in TG clearance [112,113]. Serum LpL concentrations correlated with markers of nocturnal hypoxia, such as the oxygen desaturation index (ODI) [114] and nocturnal SpO2 [111]. In animal models of OSA, CIH increased the levels of adipose ANGPTL4 in an HIF-1α-dependent manner [103], and ANGPTL4 levels correlated with the severity of nocturnal desaturation [115]. Moreover, an antibody against ANGPTL4 increased the activity of LpL in the adipose tissue and the lung [112]. However, Mahat et al. failed to demonstrate any differences in postprandial LpL activity or ANGPTL4 expression between normoxia and IH [110], suggesting other, ANGPTL4-independent, regulatory mechanisms during CIH [112]. In vivo, higher concentrations of plasma ANGPTL4 and ANGPTL8 were measured in patients with OSA compared to the controls [116]. Higher serum levels of ANGPTL3 were detected in patients with OSA and coronary artery disease (CAD) compared to patients having OSA alone [117].

PPAR-γ is a main regulator of several genes associated with lipid metabolism, including LpL [118], and it is downregulated by hypoxia in an HIF-1α-dependent manner [119]. Jun et al. detected that acute hypoxia decreased PPAR-γ expression, resulting in downregulated LpL in mice [113]. However, hypoxia had no effect on the expression of GPIHBP1, which is the carrier of LpL [113].

Inflammation was also found to impair the function of LpL in several ways. Interleukin-1 (IL-1) and tumour necrosis factor-α (TNF-α) decrease the activity of LpL in vitro [120,121] and in vivo [122], at transcriptional [123] and post-transcriptional levels [124]. Circulating LpL levels inversely correlated with CRP levels, emphasising the inhibitory role of inflammation in LpL function [111].

OSA is characterised by increased sympathetic activity [125]. Early studies indicate that catecholamines reduce LpL activity directly [126,127] and indirectly through the activation of ANGPTL4 [128]. Insulin activates LpL in the adipose tissue [129] and downregulates the expression of ANGPTL3 [130]. However, insulin resistance (IR) decreases the activity of LpL. In line with this, HOMA-IR, the marker of IR, negatively correlated with LpL [111]. Leptin decreases the activity of LpL directly [131] and indirectly by decreasing the expression of ANGPTL3 [132]. Leptin levels were elevated in OSA [133]. Decreased levels of adiponectin, detected in OSA [134], were associated with lower LpL activity independently of systemic inflammation [135].

In conclusion, the impaired function of LpL in OSA leads to decreased lipid uptake by the peripheral tissues, resulting in an increase in circulating CM and VLDL-C levels.

Increased Lipid Production in the Liver

The lipid production in the liver is influenced by three main mechanisms: (1) de novo lipogenesis of the hepatocytes, (2) FFA delivery and uptake from the periphery and (3) availability of lipids and carbohydrates.
Previous evidence suggested that IH activates SREBP-1, the key transcriptional factor involved in lipid biosynthesis, through HIF-1α activation [91,92]. SREBP-1 upregulates SCD-1. SCD-1 is responsible for the synthesis of monounsaturated FAs (MUFAs) [93], which are substrates for PL, TG and CE synthesis [145]. As mentioned above, the HIF-1α/SREBP-1/SCD-1 pathway was widely investigated in OSA (Figure 2). Mice with partial HIF-1α deficiency exhibited lower hepatic mRNA and protein levels of SCAP and SCD, lower hepatic protein levels of SREBP-1 and lower hepatic fat accumulation compared to the wild-type mice [92]. In a SCAP-deficient mouse model, 5 days of IH did not influence the levels of serum and hepatic lipids or the expression of SREBP-1, SCD-1 and HMG-CoA reductase [138]. Furthermore, SCD-1 deficiency in mice abolished the IH-induced increase in hepatic SCD-1 and plasma VLDL-C levels and atherosclerosis in the ascending aorta [146].

The duration of IH seems to influence lipid production in OSA. Five days of IH exposure increased the serum levels of total cholesterol, HDL-C, PL and TG, the hepatic TG content and SREBP-1, and the protein and mRNA levels of SCD-1 [91]. However, genetically obese leptin-deficient rats that had higher baseline lipid values did not show changes in serum lipid profile after 5 days of IH compared to the lean rats. The authors concluded that short-term IH upregulates lipid biosynthesis but does not affect it in the presence of pre-existing lipid alterations [91]. On the contrary, genetically obese rats exposed to 12 weeks of IH experienced elevated TG and PL levels as well as increased SREBP-1 and SCD-1 transcription [147].

The severity of IH may also affect lipid production. The ubiquitination of HIF-1α leads to the proteasomal degradation of the HIF-1α protein and depends on the O2 tension [148,149]. In the study of Li et al., only severe IH (oxygen nadir of 5% compared to 10%) increased the hepatic SCD-1 levels [3]. The authors hypothesised that moderate IH did not prevent HIF-1α from proteasomal degradation [3].

In addition, oxidative stress contributes to hepatic lipid overproduction in two ways. Firstly, reactive oxygen species (ROS) stabilise HIF-1α [150]. Secondly, ROS induce lipid peroxidation in the liver [3]. Lipid peroxidation leads to hepatic inflammation and fibrosis resulting in nonalcoholic steatohepatitis (NASH) [151]. The pathomechanism of NASH in OSA was reviewed in detail previously by Mesarwi et al. [151]. IH also enhances hepatic lipid production through the increased sympathetic tone, which has a stimulatory effect on VLDL secretion [152].

However, IH alone did not seem to be enough to cause dyslipidaemia in animal models. In atherosclerosis-resistant mice (C57BL/6J), atherosclerosis was observed only in those exposed to both IH and a cholesterol-rich diet, but not in those exposed to a cholesterol-rich diet or to IH alone [94]. Moreover, the combination of IH and a cholesterol-rich diet was associated with a marked progression of dyslipidaemia. The authors suggested that the presence of dyslipidaemia due to genetic or environmental factors is required for the atherogenic consequences of CIH [94]. In line with this, twin studies showed genetic susceptibility to the development of dyslipidaemia [153] and OSA too [154]. In our previous twin study, we detected a heritable relationship between TG levels and sleep parameters (AHI, ODI, TST90%), suggesting a common genetic background [155].
The genetic link between OSA and TG levels has recently been confirmed in a genome-wide association study [156]. Most notably, dyslipidaemia and OSA share common genetic loci, such as PPAR-γ [157,158] or APOE polymorphisms [159].

Hepatic lipid accumulation and hepatic insulin resistance can enhance the lipid alterations in OSA. The hepatic lipid accumulation is the consequence of the FFA overload from the periphery due to adipose tissue dysfunction with increased lipolysis and altered lipid clearance by LpL. The coexistence of insulin resistance may also increase VLDL production. In insulin resistance, insulin loses the ability to promote the degradation of apoB [160]. The accumulated lipid content undergoes lipid peroxidation under IH, leading to NASH [151]. Moreover, the lipid overproduction leads to increased VLDL production and export to the circulation.

Abnormal Modifications of LDL in OSA

LDL modification is one of the most important consequences of oxidative stress and inflammation. LDL can be modified in the extracellular space or in the lysosome of macrophages [161] by enzymatic (such as myeloperoxidase (MPO)) and non-enzymatic (such as desialylation, glycosylation, direct interaction with ROS) mechanisms. Not only the lipids but also the protein components of LDL can be modified [162]. Small dense LDL (sdLDL) particles associated with hypertriglyceridaemia are often desialylated, which is the most frequent modification of LDL. Due to their decreased affinity for LDL-R, their longer circulation time makes them susceptible to other modifications [163], including glycosylation [164] and oxidation [165]. Oxidised LDL (oxLDL) particles were found to have pro-inflammatory and atherogenic potential contributing to atherosclerosis (Figure 3). OxLDL particles can be hydrolysed by PON-1 associated with HDL [166].

Pro-atherogenic sdLDL3-7 subfractions were significantly higher in the OSA group [31]. SdLDL particles were independently associated with OSA in non-obese participants [167]. LDL size was independently associated with metabolic syndrome in OSA [168]. However, Liu et al. did not detect a correlation between OSA severity measures and sdLDL [169]. Only a few studies investigated oxLDL in OSA; oxLDL levels were found to be increased in OSA in most [170-173] but not all studies [174,175]. A recent meta-analysis concluded that oxLDL levels are increased in OSA [176]. However, studies that matched for age or BMI between patients with OSA and controls showed no significant difference in oxLDL levels [176]. Furthermore, endothelial lectin-like oxidised low-density lipoprotein receptor-1 (LOX-1) was upregulated in OSA [172]. LOX-1 is the main receptor for oxLDL on endothelial cells and orchestrates the expression of adhesion molecules and may induce atherosclerosis in OSA [177].
HDL Dysfunction in OSA

HDL is converted to a dysfunctional form with impaired physiological effects due to IH, oxidative stress and inflammation [178] (Figure 3). The dysfunctional HDL comprises lower CE, oxidised PL, increased TG and decreased apoA-I content, serum amyloid A (SAA) and several inflammatory proteins, such as complement C3 [178,179]. There is some evidence that IH and inflammation [180] downregulate molecules in the RCT, such as ABCA1 [181] and SR-BI [91]. Short-term IH (5 days) decreased liver SR-BI protein levels independently of obesity in a mouse model. However, obese mice had lower baseline SR-BI levels than lean mice [91]. On the contrary, long-term IH (4 weeks) did not cause a change in hepatic SR-BI levels [3].

Oxidative stress enzymes associated with OSA [182], such as MPO, excessively oxidise HDL. The oxidative modification of apoA-I leads to its inability to interact with ABCA1, resulting in decreased premature HDL and impaired cholesterol efflux [183,184]. Other oxidised components of HDL, such as oxidised PLs [185] or FFAs [186], can also impair the functions of apoA-I by destroying its structure [187]. Although the functionality of apoA-I seems to be altered, its levels were not affected in OSA [171]. Decreased activity of PON-1 is also associated with HDL dysfunction [188]. Circulating levels of PON-1 were lower in subjects with OSA than in controls [189-193]. The higher levels of apoJ, or clusterin, in OSA [200,201] may suggest its protective function in the HDL metabolism.

Several studies evaluated the circulating HDL-C concentrations in OSA and reported decreased HDL-C levels in most [202,203] but not all cases [31]. In the study of Tan et al., OSA-associated HDL dysfunction was measured as a reduced capacity of HDL to inhibit LDL oxidation [171]. Patients with OSA presented a higher degree of HDL dysfunction with a consequentially higher concentration of oxLDL, independent of cardiovascular comorbidities. HDL dysfunction was more strongly correlated with disease severity than HDL-C concentration [171]. In another study, HDL2 and HDL3 levels were correlated with IR, but not with OSA severity or the degree of hypoxia. The authors concluded that IR plays a role in OSA-related dyslipidaemia [169]. In a recent study, despite similar HDL-C levels between the OSA and control groups, the participants with OSA had higher pro-atherogenic small HDL8-10 subfractions and decreased anti-atherogenic large HDL1-3 subfractions [31]. Moreover, not only OSA severity but also sleep fragmentation was inversely correlated with HDL-C and HDL1-3 subfractions [31].

The atherogenic index of plasma (AIP) is a biomarker of atherosclerosis and coronary heart disease which is calculated as log(TG/HDL-C) [204] and reflects the dysregulation between anti- and pro-atherogenic lipoproteins. Previous studies found significantly higher AIP values among participants with OSA compared to the controls [205-209]. AIP was higher in patients with OSA and associated with disease severity [206,207,209] and daytime sleepiness in some [209] but not all studies [208].
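Since AIP is a simple derived quantity, a one-line computation suffices; the sketch below is illustrative only, assuming both lipids are given in the same units (e.g., mmol/L) and that the base-10 logarithm is used, as is common in the AIP literature:

```python
import math

def atherogenic_index_of_plasma(tg_mmol_l: float, hdl_c_mmol_l: float) -> float:
    """AIP = log10(TG / HDL-C), with both lipids in the same units."""
    return math.log10(tg_mmol_l / hdl_c_mmol_l)

# Example: TG = 1.7 mmol/L, HDL-C = 1.0 mmol/L -> AIP ~ 0.23,
# a value often cited as high-risk in the AIP literature.
print(round(atherogenic_index_of_plasma(1.7, 1.0), 2))
```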
Increased Intracellular Lipolysis in Adipose Tissue

Fatty acids are mainly stored in the form of TAG in adipocytes [210]. This storage can be mobilised in three main steps: (1) Adipocyte triglyceride lipase (ATGL) catalyses the hydrolysis of TAG to DAG and FFAs [211]. (2) The hydrolysis of DAG is catalysed by hormone-sensitive lipase (HSL), resulting in MAG and FFAs [212]. (3) Finally, monoacylglycerol lipase (MGL) completes the hydrolysis, producing FFAs and glycerol [213].

Dysregulated peripheral lipolysis has been associated with OSA (Figure 2). IH leads to increased sympathetic activity [214], and elevated levels of catecholamines are major activators of lipolysis [215]. In healthy subjects, increased sympathetic tone with consequential higher HSL expression was detected after two weeks of IH [216]. In mice, IH-induced lipolysis and decreased adipocyte size were detected [217]. In line with this, IH resulted in an increase in lipolysis rate by 211% and a decrease in intracellular lipid stores by 37% in human adipocytes too [218]. However, IH did not seem to affect postprandial lipolysis in lean healthy men [110].

Obesity is associated with higher basal levels of lipolysis [231]. Leptin exerts lipolytic activity [232], whilst adiponectin has an inhibitory effect on catecholamine-induced lipolysis [233]. In line with this, increased levels of leptin [234] and decreased levels of adiponectin [235] were reported in OSA. Insulin is the main negative regulator of lipolysis. Insulin resistance is associated with the loss of the suppressive effects of insulin [236]. Moreover, the anti-lipolytic effect of insulin depends on the O2 tension of adipose tissue [237]; in hypoxia, it seems to be inhibited [238]. It is important to note that fragmented sleep leads to the nocturnal secretion of adrenocorticotropin and cortisol [239], which enhance lipolysis [240].

Adipose Tissue Dysfunction

Obesity is the most important risk factor for OSA. At least 30% of obese patients have OSA, and 60% of the patients with OSA are obese [241,242]. The dysfunction of adipose tissue is an important contributor to the metabolic consequences of OSA [243]. White adipose tissue (WAT) is the most important energy storage. High levels of circulating FFAs force WAT to store lipids via two mechanisms: through increases in the number (hyperplasia) and the size (hypertrophy) of the adipocytes [244]. In contrast to hyperplasia, hypertrophy induces pathological changes in the adipose tissue by activating stress pathways, such as endoplasmic reticulum stress, oxidative stress and inflammation [245]. IH induces specific changes in WAT even in the absence of obesity [246]. However, adipocyte hypertrophy and hyperplasia are not always present in IH-induced adipose tissue dysfunction. Some previous studies detected shrunken adipocytes in the WAT of non-obese mice exposed to IH [247,248]. Moreover, IH reduced fat mass by inducing lipolysis [217]. Whereas the morphological changes of WAT are different between IH and obesity, they share the consequential abnormalities.

Inflammation in Adipose Tissue

The larger size of adipocytes reduces the vascularity of hypertrophic adipose tissue, resulting in lower oxygen tension and hypoxic damage. The consequential hypoxia contributes to inappropriate angiogenesis mediated by vascular endothelial growth factor (VEGF) [249]. Furthermore, IH activates HIF-1α and NF-κB, consequently resulting in an increased production of cytokines and adipokines [243]. In contrast to the healthy state characterised by anti-inflammatory immune cells, such as M2-type macrophages, T-helper 2 (Th2) cells, regulatory T cells and anti-inflammatory mediators (IL-10 or adiponectin), hypertrophic WAT is infiltrated by pro-inflammatory immune cells, mainly by CD8+ cytotoxic T cells and Th1 cells, leading to the production of pro-inflammatory cytokines (TNF-α, IL-6) [246].
Moreover, hypoxic and inflammatory changes result in macrophage polarisation from the M2 type to the M1 type. In lean mice exposed to IH, reduced M2-type and increased M1-type macrophage infiltration were also detected in adipose tissue [250]. M1-type macrophages enhance the inflammation, producing further cytokines, such as monocyte chemoattractant protein-1 (MCP-1). MCP-1 is an important regulator of macrophage tissue infiltration and chemotaxis of monocytes [251]. Moreover, it is secreted from adipose tissue into the circulation and may increase the hepatic expression of SREBP-1 [251]. Increased plasma levels of MCP-1 were detected in patients with OSA irrespective of obesity and correlated with ODI [252,253]. Furthermore, in the presence of IH, human adipocytes have a higher sensitivity to express pro-inflammatory genes [254].

Role of Adipokines

Leptin is a master regulator of food intake and body energy balance, and its levels were shown to be increased in obesity [255], diabetes [256] and cardiovascular diseases [257,258]. Leptin levels were widely investigated in OSA and found to be increased [133,259-263] even after adjustment for obesity [261]. OSA-associated hyperleptinaemia was related to disease severity measures, such as AHI [133,235,259,260], TST90% [235] and MinSatO2 [261,264]. However, high levels of leptin contribute to leptin resistance by downregulating its cellular responses [265]. Leptin resistance, with the loss of the physiological functions of leptin, also plays a role in OSA-associated metabolic alterations [266]. In a recent animal model, leptin injection did not decrease the food intake of rats exposed to IH [267]. Moreover, IH resulted in a reduced expression of leptin receptors, suggesting the role of leptin resistance in OSA [267,268]. Sleep fragmentation attenuates leptin signalling in the hypothalamus, resulting in consequential high-calorie food intake enhancing obesity [100]. However, sleep fragmentation itself was not found to influence circulating leptin levels [269]. Obese patients with OSA have dysfunctional adipose tissue with adipocyte hyperplasia, which increases leptin production [270]. Independently of obesity, IH can itself induce leptin secretion via activating the sympathetic nervous system, the renin-angiotensin system and the hypothalamic-pituitary-adrenal axis [246,266]. Moreover, leptin gene expression is induced by HIF-1α [271].

Leptin may contribute to lipid alterations in OSA. Leptin activates hepatic lipid production [152] and peripheral lipolysis [232] through the activation of the sympathetic nervous system and by increasing the expression of SREBP-1 and SCD-1 [272]. Moreover, it decreases the activity of LpL [131]. The dissociation between high leptin levels and leptin action is caused by leptin resistance and attenuated leptin signalling in the liver [273]. A recent study found that leptin levels in OSA correlated positively with TG and negatively with HDL-C concentrations [274]. Leptin can also lead to oxidative stress by activating the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase [275].

Adiponectin is another important adipokine with anti-inflammatory and antioxidant properties, and its levels are inversely correlated with various disorders, such as obesity [276] and hypertension [277]. Lower adiponectin levels were detected in patients with OSA compared to controls [134,278] and were correlated with disease severity independently of obesity [279].
However, some studies found comparable [280] or even higher [274] adiponectin levels in patients compared to controls. IH suppresses adiponectin expression both directly and indirectly through increased sympathetic activation [281]. Adiponectin increases the production of apoA-I and ABCA1 and induces HDL assembly [282,283]. It correlates positively with HDL-C levels independently of obesity [284]. Adiponectin enhances the catabolism of VLDL by activating LpL [285]. Moreover, it increases the mRNA levels of the VLDL-R in skeletal muscle cells [286]. In line with this, there is a negative correlation between VLDL-C and adiponectin levels [287]. Another anti-inflammatory and antioxidant adipokine is omentin, which was detected at lower concentrations in OSA and correlated positively with HDL levels [280].

Altered Hormone Production

Several other hormones have an impact on lipid metabolism, such as cortisol [288], growth hormone (GH) [289] and insulin [290]. GH deficiency is known to be associated with lipid alterations [291], and GH levels are decreased in OSA [292]. Cortisol overproduction is strongly associated with dyslipidaemia [293], and high cortisol concentrations have been detected in OSA [294]. Insulin activates LpL in the adipose tissue [129] and inhibits lipolysis [236]. Moreover, it promotes the degradation of apoB [160], leading to decreased hepatic production of apoB-containing lipoproteins. As OSA is associated with insulin resistance, these effects are mitigated.

Sleep Stages

Rapid eye movement (REM) sleep is associated with higher sympathetic tone [295]. REM and non-REM (NREM) sleep influence the production of several hormones, such as cortisol [296] and GH [297]; GH is mainly produced during N3 sleep [297]. Some patients have a disproportionally higher burden of obstructive events in REM than in NREM sleep, and these patients have a higher risk for hypertension, diabetes and cardiovascular disease [298]. Only a few studies have investigated the association between sleep stages and lipid abnormalities in OSA. Interestingly, the AHI measured in the REM phase (AHI-REM) correlated with TG levels in only one study [299] and showed no correlation with lipid parameters in another [300]. Xu et al. found an independent association between AHI-REM and increasing levels of TG, HDL-C and apoE; however, this association became insignificant when only patients with an AHI-NREM or AHI-REM < 5/h were analysed [301]. In contrast, AHI-NREM correlated with TG, apoB [299,301], HDL-C, apoA-I [299], LDL-C and cholesterol levels [301]. Slow-wave sleep duration and REM latency were independently and inversely associated with cholesterol and LDL-C levels [302]. In conclusion, it could be postulated that NREM sleep has the greatest impact on lipid alterations in OSA.

Endothelial Dysfunction

Endothelial dysfunction is defined as an impairment in the vasodilatory ability of the vessels (mainly due to compromised nitric oxide (NO) availability) leading to altered oxygenation, oxidative stress, vascular inflammation and consequent atherosclerosis. IH has a direct detrimental effect on endothelial function [303-307]. OxLDL particles also impair eNOS function by decreasing its expression [308] and by decreasing L-arginine availability [309]. Moreover, oxLDL increases iNOS expression and ROS generation [308].
Systemic Inflammation and Consequent Atherosclerosis

OxLDL particles increase the levels of adhesion molecules (VCAM-1, P-selectin) on the endothelium, resulting in enhanced leukocyte recruitment [310]. In OSA, these molecules are also overexpressed through IH and oxidative stress in an NF-κB-dependent fashion [311-313]. The adhesion molecules promote the capture of circulating leukocytes on the endothelium and slow their rolling, thereby facilitating their extravasation [314]. Moreover, oxLDLs have a greater affinity for scavenger receptors, such as LOX-1 on endothelial and smooth muscle cells [315] and CD36 on macrophages [316]. The activated macrophages increase their CD36 expression, facilitating uncontrolled oxLDL uptake [317], and release pro-inflammatory cytokines (IL-1, TNF-α) [318]. This activation of innate immunity is a key mechanism of foam cell formation in atherosclerosis. Notably, adaptive immune cells are also activated: B-cell-derived plasma cells produce antibodies against oxLDL, and antigen-specific T cells produce further cytokines, resulting in enhanced inflammation [319]. HDL dysfunction in OSA also contributes to atherosclerosis [87,88]. The anti-inflammatory and anti-atherogenic effects of HDL are mainly mediated by sphingosine-1-phosphate (S1P). S1P decreases the expression of several inflammatory cytokines (such as TNF-α) and increases the expression of eNOS [320], improving endothelial function [321]. Elevated S1P enrichment was found in HDL3 particles [322]. HDL particles also enhance eNOS function by binding to SR-BI expressed on endothelial cells [323]. HDL is an important inhibitor of platelet activation and aggregation as well as of coagulation factors, such as factor X and tissue factor [324].

Insulin Resistance

Dyslipidaemia can cause insulin resistance. Increased FFA levels due to increased lipolysis reduce insulin-mediated glucose uptake in skeletal muscle by interrupting insulin signalling [325]. Moreover, FFAs activate the NF-κB pathway, resulting in the production of pro-inflammatory cytokines such as TNF-α, IL-1β and IL-6 in the peripheral tissues. Systemic low-grade inflammation reduces the responsiveness of the peripheral tissues to insulin, leading to insulin resistance [326].

6. The Effect of OSA Therapy on the Lipid Metabolism

6.1. The Effect of CPAP Therapy

Continuous positive airway pressure (CPAP) is the gold-standard treatment for OSA [327]. Several studies have investigated the effect of CPAP on the plasma or serum lipid profile in OSA. Various durations of CPAP (from 8 weeks to 6 months) effectively decreased TG, TC, LDL-C and apoB and increased HDL-C levels [328-332]. However, in some studies these effects depended on sufficient therapy adherence [331]. On the contrary, some studies failed to demonstrate improvements in lipid levels: TG, TC and HDL-C levels did not change after 6 weeks to 4 months of CPAP therapy [333-336]. The effect of CPAP therapy on the lipid profile was also investigated in meta-analyses. Nadeem et al. evaluated 29 articles including 1958 participants with therapy durations ranging from 2 days to 1 year [337]. They concluded that there was a significant reduction in TC (−5.66 mmol/L) and LDL-C (−0.49 mmol/L) levels, whereas TG levels did not change (−0.05 mmol/L); HDL-C levels increased after the therapy (+0.21 mmol/L) [337].
Xu et al. analysed the results of six studies including 456 subjects with therapy durations of 2-24 weeks [338]. CPAP therapy significantly reduced only the TC levels (−0.15 mmol/L); TG (0.00 mmol/L), LDL-C (−0.04 mmol/L) and HDL-C (−0.02 mmol/L) levels did not differ between the CPAP and sham-CPAP/control groups [338]. According to their subgroup analysis, younger subjects, more obese patients and patients with a longer duration of CPAP showed a significant decrease in TC concentrations (−0.27, −0.24 and −0.20 mmol/L, respectively). The authors postulated that CPAP therapy may not have any clinically relevant effect on circulating lipid levels [338]. In the meta-analysis of Lin et al., six studies with 699 subjects met the inclusion criteria [339]. The therapy duration was 4-24 weeks. Significant improvements in TC (−6.23 mg/dL), TG (−12.60 mg/dL) and HDL-C (−1.05 mg/dL) levels were detected, but LDL-C concentrations did not decrease (−1.01 mg/dL) after CPAP therapy. Moreover, moderate-to-severe OSA, daytime sleepiness, short-term CPAP treatment and good compliance were associated with the changes in lipid profile [339]. In a recent paper by Chen et al., 14 studies with 1792 subjects were included [340]. The therapy duration was 4-48 weeks. CPAP therapy significantly decreased TC levels (−0.09 mmol/L); however, it failed to change the levels of TG (0.07 mmol/L), LDL-C (−0.06 mmol/L) or HDL-C (−0.03 mmol/L). The authors did not find any confounders of the effect of CPAP treatment on lipid profile changes [340]. CPAP may nevertheless improve some aspects of dyslipidaemia. For example, CPAP decreases the levels of several inflammatory molecules by mitigating hypoxia [341], reduces sympathetic activity [342], decreases cortisol levels [343] and improves insulin sensitivity [344]. CPAP increased LpL concentrations after 3-6 months in patients with OSA [111,114]. The fractional clearance rate (FCR) of TG showed a 5-fold increase after 3 months of CPAP therapy, but the FCR of CE was unchanged [109]. Circulating FFAs, which are markers of increased lipolysis, decreased after CPAP [345]. In line with this, CPAP withdrawal dynamically increased nocturnal FFA levels [346]. CPAP reduced markers of lipid peroxidation, such as malondialdehyde levels [347], and decreased endothelial LOX-1 expression [348]. However, it did not influence oxLDL levels after 1 year of therapy in patients with OSA who had comorbidities [170]. In summary, the previous studies investigating the effect of CPAP on lipid profiles were inconclusive. The studies were heterogeneous, with different designs and sample sizes. The negative results of some studies may suggest that CPAP treatment alone does not improve lipid profiles in patients with OSA. Dyslipidaemia in OSA is strongly associated with comorbidities, such as obesity, insulin resistance and cardiovascular diseases, which also need to be addressed with pharmacological interventions. Furthermore, the differences between CPAP trials could be due to differences in diet, which was often uncontrolled in these studies. Most importantly, the effect of CPAP on triglyceride levels was more pronounced and more sustainable when it was combined with weight loss [349].
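The meta-analyses quoted above report effect sizes in a mix of mmol/L and mg/dL, which complicates direct comparison. As a side note for comparison (the conversion factors are standard clinical chemistry constants; the snippet itself is only an illustration, not part of any of the cited analyses), the mg/dL figures of Lin et al. can be converted to mmol/L as follows:

```python
# Standard conversion factors: 1 mmol/L cholesterol = 38.67 mg/dL,
# 1 mmol/L triglyceride = 88.57 mg/dL (they differ because of molar mass).
CHOL_MGDL_PER_MMOLL = 38.67   # applies to TC, LDL-C and HDL-C
TG_MGDL_PER_MMOLL = 88.57     # applies to triglycerides

def mgdl_to_mmoll(value_mgdl, factor):
    """Convert a lipid concentration change from mg/dL to mmol/L."""
    return value_mgdl / factor

# Changes reported by Lin et al. (mg/dL), converted for comparison with the
# meta-analyses that report mmol/L.
print(mgdl_to_mmoll(-6.23, CHOL_MGDL_PER_MMOLL))   # TC:    approx. -0.16 mmol/L
print(mgdl_to_mmoll(-12.60, TG_MGDL_PER_MMOLL))    # TG:    approx. -0.14 mmol/L
print(mgdl_to_mmoll(-1.05, CHOL_MGDL_PER_MMOLL))   # HDL-C: approx. -0.03 mmol/L
```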
The Effect of MAD Therapy

A mandibular advancement device (MAD) is an alternative therapy option for OSA [350]. Only a few studies have evaluated the impact of MAD on the lipid profile in OSA. Interestingly, Recoquillon et al. detected a significant increase in TG levels after 2 months of effective MAD therapy, whilst the other investigated lipid parameters (TC, LDL-C, HDL-C) were unchanged [351]. There was no improvement in the lipid profile after 12 months of MAD therapy in the study of Venema et al. [352]. Silva et al. compared the effectiveness of MAD on the metabolic profile with that of CPAP: CPAP was more effective than MAD in reducing TC and LDL-C levels after 12 months [353].

The Effect of Upper Airway Surgery

The effect of upper airway surgery on the lipid profile in OSA has been poorly investigated. Li et al. investigated the postoperative lipid profile in patients with OSA who underwent uvulopalatopharyngoplasty (UPPP) or nasal surgery [354]. In patients who underwent UPPP, serum TC and HDL-C levels improved significantly; in patients who underwent nasal surgery, these values did not change. Patients with isolated hypertriglyceridaemia showed significant improvements in serum TG and HDL-C levels [354]. Another study detected a UPPP-induced decrease in TG and TC levels after a 3-year follow-up [355].

Discussion of Major Findings and Further Research Directions

As outlined above, intermittent hypoxia, oxidative stress and consequent systemic inflammation may result in lipid alterations in OSA. Although most of the studies investigating these pathways were performed in vitro or in animal models, the results have also been confirmed in humans. Although large population-based studies concordantly report OSA-related dyslipidaemia, they usually did not control for diet, regular exercise or lipid-lowering medications, which could introduce bias. Clinical studies on large groups of patients are warranted to control for these factors. Furthermore, multiple mediators that are involved in dyslipidaemia (see Section 2) have not yet been investigated in OSA. Coexistent disorders, such as obesity, insulin resistance and nonalcoholic steatohepatitis, may also lead to systemic inflammation and dyslipidaemia. This could be one reason for the inconclusive results of CPAP on the lipid profile. CPAP treatment alone may not be able to improve the lipid profiles of patients with OSA; thus, parallel treatment of these comorbidities is essential to improve dyslipidaemia. Studies should also focus on which patients benefit the most from an intervention with CPAP. As dyslipidaemia is strongly linked to OSA, patients should be actively screened for lipid abnormalities and cardiovascular complications. The detailed lipid profile of patients with OSA should be measured at the screening visit and later under CPAP therapy. Patients with lipid abnormalities detected during OSA management should also be referred to the appropriate specialty. Compared to single lipid components, the use of lipid components in combination with measures of abdominal obesity could better identify those patients who are at higher cardiovascular risk [356].

Conclusions

In summary, OSA is associated with altered lipid metabolism and results in elevated circulating lipid levels. Intermittent hypoxia, oxidative stress and inflammatory mechanisms lead to altered lipid profiles in OSA. Dyslipidaemia promotes endothelial dysfunction and consequent atherosclerosis, leading to increased cardiovascular morbidity and mortality. However, OSA-associated comorbidities might enhance these alterations. Further well-designed studies investigating potential causative associations between dyslipidaemia and OSA and involving CPAP treatment are warranted.
Future studies should also take into consideration the role of OSA-related comorbidities in the pathomechanism of OSA-related dyslipidaemia. We strongly advocate measuring blood lipids in patients with OSA to estimate and ultimately reduce cardiovascular risk in clinical practice.
2022-11-04T19:26:42.513Z
2022-10-29T00:00:00.000
{ "year": 2022, "sha1": "13533153b66049931fc6cb55fb6e0bebfd20b290", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9059/10/11/2754/pdf?version=1667036404", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e276498c637cfb436c145ca3b985a96aad3b9ae4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7821581
pes2o/s2orc
v3-fos-license
Helplessness and perceived pain intensity: relations to cortisol concentrations after electrocutaneous stimulation in healthy young men

Background: Uncontrollable aversive events are associated with feelings of helplessness and cortisol elevation and are suitable as a model of depression. The high comorbidity of depression and pain symptoms and the importance of controllability in both conditions are clinically well known, but empirical studies are scarce. The study investigated the relationship of pain experience, helplessness, and cortisol secretion after controllable vs. uncontrollable electric skin stimulation in healthy male individuals.

Methods: Sixty-four male volunteers were randomly assigned to receive 30 controllable (self-administered) or uncontrollable (experimenter-administered) painful electric skin stimuli. Perceived pain intensity (PPI), subjective helplessness ratings, and salivary cortisol concentrations were assessed. PPI was assessed after stress exposure. For salivary cortisol concentrations and subjective helplessness ratings, areas under the response curve (AUC) were calculated.

Results: After uncontrollable vs. controllable stress exposure, significantly higher PPI ratings (P = 0.023), higher subjective helplessness AUC (P < 0.0005) and higher salivary cortisol AUC (P = 0.004, t-tests) were found. Correlation analyses revealed a significant correlation between subjective helplessness AUC and PPI (r = 0.500, P < 0.0005), between subjective helplessness AUC and salivary cortisol AUC (r = 0.304, P = 0.015), and between PPI and salivary cortisol AUC (r = 0.298, P = 0.017).

Conclusions: The results confirm the impact of uncontrollability on stress responses in humans; the relationship of PPI with subjective helplessness and salivary cortisol suggests a cognitive-affective sensitization of pain perception, particularly under uncontrollable conditions.

Background

Uncontrollability of unpleasant life events and aversive stressors seems to be one of the most important determinants of the physiological and psychological stress response [1-3]. Learned helplessness theory has shown that repeated exposure to non-contingent feedback, i.e., a lack of correlation between behavior and aversive consequences, may lead to negative affective, motivational, and cognitive sequelae, including blunted and lowered affect, hopelessness, low self-esteem, motivational deficits and a cognitive bias towards low self-efficacy and controllability expectancies [4,5]. Besides these psychological effects of experiencing uncontrollable stress, activation of the hypothalamic-pituitary-adrenal (HPA) axis, mainly with elevated corticosteroid levels, has repeatedly been found after uncontrollable stress [2,6]. Persisting HPA-axis activity and hypercortisolism are assumed to be linked to depression and related disorders in humans [7]. Depressive and pain-related syndromes, on the other hand, often co-occur. The comorbidity of depression and chronic pain is very high [8]; other pain syndromes with a high prevalence of depressive symptoms comprise fibromyalgia [9] and low back pain [10]. According to clinical and brain imaging studies, affective and cognitive factors seem to play a crucial role in modifying and modulating pain experience [11-16]. Cognitions of helplessness, loss of control, rumination and negative future expectations seem to be related to enhanced affective pain experience [9,10,17-20].
Thus, a relationship between helplessness, HPA-axis activation and pain seems to exist in clinical states and disorders, but the findings are controversial. While acute uncontrollable painful stress seems to be regularly followed by a cortisol response [2], blunted cortisol responses and low awakening cortisol levels have been found in chronic pain syndromes, e.g., in fibromyalgia [21,22]. However, even in patients with chronic pain, affective distress seems to be related to helplessness and enhanced cortisol secretion [23]. Basic psychological stress research in this area is widely lacking. The present study investigated salivary cortisol responses, subjective helplessness, and perceived pain intensity (PPI) after controllable and uncontrollable stress in healthy males using an electric skin stimulation procedure. Mildly painful stimuli were used because the main focus of the present study was PPI in relation to experimentally induced uncontrollability, not pain induction per se. It was hypothesized that PPI is intensified, and related to salivary cortisol secretion, after uncontrollable conditions and experimentally induced subjective helplessness.

Subjects and Design

Healthy male volunteers (age 18-45 years) were recruited by advertisement. After an extensive screening interview, individuals with a history of severe medical disease or with a psychiatric disorder or psychotherapy (current or within the last two years) were excluded. Additionally, volunteers taking any medication potentially interfering with cortisol secretion (e.g., hormones, anti-inflammatory compounds) were excluded. No drinking or eating was allowed for at least 2 hours prior to the experiments (4:30-7 p.m.). All experiments were carried out at the Department of Psychology, University of Giessen. The present data are part of a larger project also comprising pre-studies, a study with an attention task (one week apart), and several additional assessments not reported here. Approval by the Institutional Review Board was granted, and all subjects had given written informed consent after the procedure had been explained as completely as possible. During the screening session, the electrical stimulus procedure (see below) was explained and individually tested in each participant (1-3 stimuli with the same intensity as used in the study). Sixty-four subjects were randomly assigned to one of the experimental conditions (controllable vs. uncontrollable, see below). The standardized study protocol comprised baseline (20 min), anticipation (10 min), stress exposure (10 min), and post-stress relaxation (20 min) periods. During the baseline period, the participants were informed about the protocol and completed short questionnaires on socio-demographic data and subjective helplessness. In the anticipation period, two silver stimulus electrodes were placed on the non-dominant forearm and fixed with a stretch band, followed by information about the subsequent stress procedure. During the anticipation period, three test trials were carried out.

Procedure

Mild electric cutaneous stimulation was used to induce completely harmless but potentially painful stimuli according to the literature; the DC electric shock was generated by a transformer/capacitor device [24,25].
In a pre-test with 20 healthy students, the lowest intensity that was judged at least "mildly painful" in at least 50% of trials (200/400 trials; 5-point scale of perceived pain: not at all - threshold - mild - moderate - severe) was determined (4.5 points on a scalable potentiometer with an arbitrary intensity scale of 1-10). This stimulus intensity (approx. 10 mA) was used in the present trial to ensure that all subjects received a comparable physical stimulus intensity. All participants were exposed to 30 stimuli with a mean inter-stimulus interval of about 20 s (10 min duration of stress exposure). Thirty-two subjects were investigated in each group. Under the controllable condition (C), the subjects could apply the stimulus within an interval of 10 s at their choice by pressing a button located on the desk. To start a single trial, a green LED in front of the participants was activated. If a participant decided not to press the button, the stimulus was automatically applied after 10 s. In both cases the green LED changed to a red one, and the stimulus generator was blocked (to prevent more than one stimulus within one interval). A new trial was indicated by a change of LED activation (from red to green) after the end of the 20-s interval. Under the uncontrollable condition (UC), the stimuli were applied by the experimenter according to a random schedule within the 10-s interval; all other features of the experiment were identical.

Assessments

During baseline, anticipation, immediately after the stimulus series, and at the end of the experimental session (relaxation), subjective helplessness was assessed using a previously developed and validated 5-point Likert scale (0-4) consisting of six items ("I feel helpless", "I can (not) influence the situation", "I feel at a loss", "I feel confused", "the situation is inscrutable", "I have (no) control") [25,26]. The scale has good internal consistency (Cronbach's coefficient alpha > 0.80). Perceived pain intensity (PPI) was judged on a 100-mm visual analog scale (VAS). Saliva was collected four times for 5 min, at the end of the baseline (20 min), anticipation (10 min), stress exposure (10 min), and post-stress relaxation (20 min) periods, using commercial cotton rolls (Salivette®, Sarstedt AG). After centrifugation, saliva specimens were analyzed in duplicate. Free cortisol concentrations were determined using commercial sensitive ELISA assays; inter-assay and intra-assay variation was <12%, and the lower detection limit was 1.0 nmol/l.

Data Analysis

Values are reported as means and standard deviations. For subjective helplessness and salivary cortisol concentrations, areas under the response curve (AUC) were calculated according to the trapezoid rule as outlined in the literature [27]. Because of the design and the objective of the present study - to sensitively investigate changes in cortisol secretion following a mild stressor in the afternoon - AUCs with respect to increase (AUCi) were calculated [27]. Negative AUC values could be expected due to the circadian rhythm; in line with the recommendations in the literature, negative AUC values were regarded as an "index of decrease" and entered into the statistical analyses [27]. PPI was derived as a single assessment after stress exposure. After testing for normal distribution with Kolmogorov-Smirnov tests (all P-values > 0.15), group differences were analyzed with unpaired t-tests. Relationships between parameters were evaluated with Pearson correlation coefficients.
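The AUCi computation described above lends itself to a compact implementation. The following is a minimal sketch (the sample values are hypothetical; the actual study data are not reproduced here):

```python
import numpy as np

def auc_increase(times, values):
    """AUC with respect to increase (AUCi), trapezoid rule.

    times  : measurement time points (e.g., cumulative minutes from baseline)
    values : measured concentrations/ratings at those time points
    AUCi subtracts the rectangle spanned by the baseline value, so negative
    results indicate a decrease relative to baseline ("index of decrease").
    """
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    auc_ground = np.trapz(values, times)               # trapezoid rule, total AUC
    baseline_area = values[0] * (times[-1] - times[0]) # rectangle at baseline level
    return auc_ground - baseline_area

# Hypothetical example: cortisol (nmol/l) at end of baseline, anticipation,
# stress exposure and relaxation periods.
t = [0, 10, 20, 40]
cortisol = [6.0, 6.5, 8.2, 7.1]
print(auc_increase(t, cortisol))  # positive value -> net increase over baseline
```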
The level of statistical significance was set at α = 0.05.

Results

The mean age was 25.1 +/- 3.2 years, and 90% of the participants were students. No differences emerged between the groups with respect to age, smoking status (52% never smoking, 42% more than 5 cigarettes per day), alcohol consumption (5% never drinking, 59% more than 2 drinks a week), or body mass index (mean 22.3 +/- 1.7 kg/m²). Figure 1 and Figure 2 show the course of subjective helplessness ratings and salivary cortisol concentrations under the controllable and uncontrollable conditions. Baseline subjective helplessness ratings were low and comparable in both groups. Under the uncontrollable stress condition, a sharp increase in subjective helplessness ratings occurred after stress exposure, whereas the ratings decreased after the controllable condition. Under baseline conditions, salivary cortisol concentrations were not significantly different (uncontrollable vs. controllable condition). The course of salivary cortisol concentrations in the group under the controllable condition closely followed the circadian rhythm of cortisol secretion, while salivary cortisol concentrations increased slightly in the group under the uncontrollable condition during anticipation and stress exposure. Table 1 shows the descriptive results in the total group and the group comparisons (controllable vs. uncontrollable condition) for PPI, subjective helplessness ratings (AUC), and salivary cortisol concentrations (AUC). The AUCs indicate a significantly higher response of cortisol secretion and subjective helplessness after uncontrollable conditions (P < 0.01). The AUC of the helplessness ratings was highly correlated with the simple difference between the helplessness ratings after stress exposure and at baseline (r(ΔSHL, AUC) = 0.93, P < 0.0005). Mean AUCs of subjective helplessness and salivary cortisol concentrations were negative after controllable stress conditions, indicating a decrease compared to baseline. PPI was also significantly more pronounced (P < 0.05) after uncontrollable vs. controllable stress exposure. Table 2 reports the correlations between PPI, subjective helplessness, and salivary cortisol in both experimental groups and in the total sample. In the total group, significant relationships were found between PPI and subjective helplessness ratings (P < 0.001) as well as salivary cortisol concentrations (P < 0.01), and between subjective helplessness ratings and salivary cortisol concentrations (P < 0.05). Correlations in the subgroups (controllable and uncontrollable stress conditions) revealed a significant correlation between PPI and salivary cortisol concentrations (AUC) only in the subgroup with uncontrollable stress exposure. The differences between the correlations in the controllable and uncontrollable conditions were statistically not significant (P > 0.10).

Discussion

The main finding of the present study was an association of perceived pain intensity with salivary cortisol responses and subjective helplessness after uncontrollable electrical stimuli in healthy young men. After uncontrollable stress exposure, significantly higher pain perception and helplessness ratings as well as a significantly more pronounced salivary cortisol response were found compared to the controllable stress condition.
Moreover, correlation analyses revealed significant positive associations between the three parameters in the total sample, without significant differences between the correlations in the controllable and uncontrollable conditions. Thus, subjective helplessness seems to be a potent cognitive mediator of pain evaluation and HPA-axis activation. Enhanced pain intensity experience after uncontrollable stress exposure and during states of helplessness is in line with previous findings in healthy subjects and patients with pain syndromes [8,20,28,29]. On the other hand, cortisol elevation following uncontrollable aversive stress has been a basic finding since the early studies of learned helplessness theory [2,4,30]. However, the relationship between uncontrollable and potentially painful stress, subjective helplessness, and perceived pain intensity has not yet been sufficiently studied. Our results fit closely with very recent data from an interventional study with repetitive transcranial magnetic stimulation (rTMS) [17]. The authors showed that fast left prefrontal rTMS acutely suppressed the analgesic effects of perceived controllability on the emotional dimension, but not on the sensory/discriminatory component, of pain perception. After rTMS, perceived uncontrollability of a painful task was related to an emotionally more distressing pain perception; the findings were hypothetically linked to fast activation of left prefrontal cortical areas [17]. Clinical studies in patients with often chronic pain syndromes, however, seem to contradict the present findings. In several studies, lower mean diurnal cortisol levels were found in patients with chronic pain [28,31], particularly with fibromyalgia [22]. After metyrapone-induced hypocortisolism, an increase in mechanical pain sensitivity was found in healthy volunteers [21]. The cortisol response after acute stress in patients with chronic pain seems to be either within the normal range (in patients with chronic pelvic pain) or reduced (in fibromyalgia) [32]. In a recent study by this group [23], diurnal salivary cortisol release was associated with depression in patients with fibromyalgia, but not with perceived pain. Another recent study investigated the impact of perceived control during a cold pressor test and the influence of active coping on the salivary cortisol response and reported a weak interaction of high perceived control and active coping on higher cortisol responses, which occurred only in women [33]. In men, a reverse picture emerged. The authors claim that cortisol elevations after acute painful stress could be an adaptive neuroendocrine mechanism and interpreted their result as evidence that active coping and perceived control could potentiate adaptation [33]. Although an adaptive function of cortisol responses after acute uncontrollable painful stress cannot be ruled out, converging evidence shows that negative cognitive and affective factors intensify both HPA-axis activation and pain perception. Anticipatory and evaluative cognitions seem to be crucial for pain processing [8,15,34] and the cortisol response [28,35]. Most likely, blunted HPA-axis reactivity and hypocortisolism, as seen in post-traumatic stress disorder and fibromyalgia, are consequences of chronic stress and a prolonged period of HPA-axis hyperactivity [36]. Our study suggests that acute painful stimulation is not followed by HPA-axis activation under controllable conditions and when the perceived level of helplessness is low.
Under such conditions, pain was perceived as less severe compared to uncontrollable stress exposure and states of induced helplessness. However, generalization of our findings should be limited to healthy young men; gender differences in stress response and pain perception should be taken into account [31,33]. An influencing factor which was not ruled out in the present study is tobacco smoking. Smoking can activate the HPA axis, but non-smokers and smokers were equally distributed across both experimental groups. Salivary cortisol responses were relatively small due to the mild stimulation compared to other stressors [37]; the pain stimulation procedure used in the present study was quite artificial and might have led to a stimulation of both non-nociceptive and nociceptive fibers. Additionally, stress induction and measurement of altered pain intensity were implemented concurrently. Stressor modality, intensity and the temporal pattern of stress exposure all seem to influence pain processing [38] and cortisol responses. The present findings are, therefore, in need of replication.

Conclusions

The study presents experimental data from healthy males corroborating the hypothesis that perceived controllability of painful stimuli is crucial for perceived pain intensity and HPA-axis activation. The findings can help clinicians substantiate and foster cognitive-psychotherapeutic approaches to prevent and treat helplessness in the context of pain management.
2017-06-26T17:50:38.422Z
2011-06-30T00:00:00.000
{ "year": 2011, "sha1": "0edb1eab07a954de9de55a20ad97a460d679e3ba", "oa_license": "CCBY", "oa_url": "https://bpsmedicine.biomedcentral.com/track/pdf/10.1186/1751-0759-5-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0edb1eab07a954de9de55a20ad97a460d679e3ba", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247071457
pes2o/s2orc
v3-fos-license
Development and Evaluation of the Performance of ANN-Based Statistical Downscaling Models for Daily and Monthly Precipitation

Statistical downscaling techniques represent a quantitative relationship between large-scale atmospheric variables (predictors) and local-scale meteorological variables (predictands) such as precipitation. This study uses large-scale atmospheric predictor variables derived from the National Centre for Environmental Prediction and National Centre for Atmospheric Research (NCEP/NCAR) reanalysis data set and precipitation at Izmir city stations as the predictand. The purpose of this study is to develop statistical downscaling models for daily and monthly precipitation over Izmir city by using Artificial Neural Network (ANN) methods and to compare the performance of those models. The results revealed that the performance of the daily model improves when the daily results are aggregated. Although the performance of the daily model gives fair (not high) results (e.g., R2 ranging from 0.362 to 0.331), the aggregated model gives very good results at the monthly level.

INTRODUCTION

Global climate change can lead to hydrological changes that will affect almost all aspects of human activity, e.g., regional water availability, agricultural productivity and flood control. It is therefore necessary to understand how a change in global climate could affect regional water availability (Kusangaya et al. 2014). General circulation models (GCMs) are among the most important tools for studying climate change. GCMs are numerical models that describe atmospheric processes through mathematical equations. GCMs, which have been developed over several decades, are able to simulate features of the global climate, represent various earth systems including the atmosphere, oceans, land surface, and sea ice, and offer considerable potential for studying climate change (Liu et al. 2016). The performance of GCMs is poor at the smaller spatial and temporal scales relevant to regional climate, because the spatial resolution of their grids is too coarse to resolve many important sub-grid-scale processes; therefore, GCM outputs are often unreliable at regional scales (Xu 1999). One possible solution to this problem is to use downscaling techniques to downscale the output from GCMs or reanalysis datasets to a higher resolution in space and time. The basic idea of downscaling techniques is to transfer large-scale changes in atmospheric variables to local weather (Hanssen-Bauer et al. 2005). Downscaling methodologies are classified into two main groups: dynamic downscaling and statistical downscaling (Fistikoglu and Okkan 2011). Dynamic downscaling, which uses regional climate models (RCMs) to produce higher-resolution outputs, is very computationally intensive, very complex, and requires substantial computational resources; therefore, it is not the best option for studies that require a quick response. The statistical downscaling methodology requires less computational effort than dynamical downscaling applications and is statistically sound. Statistical downscaling consists of a varied group of approaches that are often relatively simple to implement but require an adequate amount of quality observed data (Trzaska and Schnarr 2014). Statistical downscaling can be divided into three main categories: weather classification, weather generators, and transfer functions.
Weather classification methods classify weather patterns based on their synoptic similarity and then establish quantitative relationships to assign predictands (Bardossy et al. 2005; Fistikoglu and Okkan 2011). Although these methods are appropriate for downscaling non-normal distributions, they require a large amount of observed data (more than 30 years) to evaluate all probable weather conditions, and they are more computationally intensive than linear approaches (Trzaska and Schnarr 2014). Weather generator methods replicate the statistical features of local weather under the conditions imposed by the large-scale variables (Fistikoglu and Okkan 2011; Kilsby et al. 2007); these methods are widely used in temporal downscaling. Transfer function methods build relationships between large-scale atmospheric variables and local surface variables and are considered the most common methods in statistical downscaling. Transfer functions use various techniques, including linear and non-linear regression, artificial neural networks, redundancy analysis, and canonical correlation analysis (Benestad et al. 2007; Wilby et al. 2004). The applications of transfer functions range from linear and nonlinear regression types to artificial neural networks (ANNs), principal component analysis (PCA), canonical correlation, and redundancy analysis (Schoof et al. 2007; Fistikoglu and Okkan 2011). Studies comparing different statistical downscaling methods are now relatively common (Singh and Kumar 2020; Liu et al. 2016). The results of these studies have shown that different methods perform differently in a given area, and that a given method performs differently in different study areas. Many studies have implemented statistical downscaling models using monthly precipitation as the predictand (Okkan and Kirdemir 2016), and relatively few previous studies have focused on daily precipitation (Singh and Kumar 2020). This paper takes both daily and monthly precipitation into consideration. In this study, a statistical downscaling method based on ANN is applied to predict precipitation over the city of Izmir. The purpose of the study is to develop two different models. The first downscales daily precipitation; from its output, cumulative rainfall is computed for successively longer periods (from two days upwards) until monthly precipitation is reached by aggregating the rainfall day by day. The second model downscales monthly rainfall directly using monthly data. At the end of the paper, we compare the results of the monthly model with the monthly cumulative results of the daily model.

STUDY REGION AND DATASETS

The study region is Izmir city, the third-largest city in Turkey, located on the Aegean coast (see Fig. 1). Izmir city and its environs reflect typical Mediterranean climate characteristics. Precipitation in the study area is around 610 mm per year and is most abundant in winter; up to 80% of the total annual precipitation falls between November and March.

Predictands

Many stations are available within the region; six stations were selected for this study. The available data and the locations of the stations are listed in Table 1.
Daily and monthly precipitation data from meteorological stations in Izmir city were used as predictands. The neighbouring stations around Izmir station were used to test the results of the downscaling models with regard to the spatial consistency of the daily and monthly rainfall estimates. The required data were collected and extracted mainly from the Turkish State Meteorological Service. It is worth mentioning that the stations (Adnan Menderes, Selcuk, Seferihisar, Cesme, and Odemis) were selected because they are geographically close to each other and share similar meteorological and atmospheric conditions. Besides, the stations show a strong positive correlation (coefficient of determination, R2) with the main station used in this paper, Izmir station. As clearly shown in Table 2, at the daily level, the correlations between the main station (Izmir) and the Adnan Menderes Havalimani and Seferihisar stations are 0.72 and 0.65, respectively, while the Odemis, Cesme and Selcuk stations recorded relatively low R2 compared to the other stations. At the monthly level, on the other hand, a strong correlation was observed between all stations, ranging between 0.75 and 0.91 (see Table 2).

Predictors

Approaches to predictor selection reported in the literature include correlation analysis and partial correlation (Anandhi et al. 2009; Wilby and Wigley 2000). According to previous studies (Chen et al. 2010; Wilby et al. 1999), the predictors can be selected using stepwise regression and correlation analysis. In this study, the main criteria for choosing the relevant predictors are that the predictors should be physically and conceptually sensible for the predictand (Wilby et al. 1999) and that they have an acceptable correlation with the predictand, which in this study is daily precipitation (Fistikoglu and Okkan 2011). Therefore, 12 large-scale atmospheric variables were selected as predictors, taking the precipitation generation mechanism into consideration. The selected NCEP/NCAR reanalysis variables are listed in Table 3. The NCEP/NCAR data set is considered to represent the atmospheric conditions of the study area considerably well (Fistikoglu and Okkan 2011); it has a spatial resolution of 2.5° x 2.5°, and the grid points used span latitudes from 36.25°N to 38.75°N and longitudes from 26.25°E to 28.75°E (see Fig. 1). The data were downloaded from the NCEP/NCAR reanalysis project web site, http://www.esrl.noaa.gov/psd/data/grided/data.necp.reanalysis.html.

Fig. 2: Flow chart of the proposed methodology.

Statistical downscaling models can be defined as Y = F(X), where Y is the predictand, representing the daily/monthly precipitation, and X represents the predictors. The predictand is the target data, i.e., the small-scale variable, which in this study is precipitation from the meteorological stations in Izmir city (see Table 1). The predictors are the input, i.e., the large-scale atmospheric variables from the NCEP/NCAR reanalysis data set (see Table 3).
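The correlation-based predictor screening described above can be sketched compactly. The following is an illustrative example only (the file names and the correlation threshold are assumptions, not taken from the study; physical plausibility still has to be checked manually):

```python
import pandas as pd

# Candidate large-scale NCEP/NCAR variables (columns) and observed station
# precipitation, aligned on a common daily date index.
predictors = pd.read_csv("ncep_predictors.csv", index_col=0, parse_dates=True)
precip = pd.read_csv("izmir_precip.csv", index_col=0, parse_dates=True)["precip"]

# Pearson correlation of each candidate predictor with the predictand.
corr = predictors.corrwith(precip)

# Keep predictors whose correlation magnitude exceeds an illustrative threshold.
threshold = 0.2
selected = corr[corr.abs() >= threshold].index.tolist()

print(corr.sort_values(ascending=False))
print("Selected predictors:", selected)
```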
Statistical Downscaling using ANN

ANN was used as a well-known machine learning technique to estimate the observed daily rainfall from the large-scale atmospheric reanalysis parameters (Wilby et al. 1998). ANN refers to computing systems whose central theme is borrowed from the analogy of biological neural networks. ANNs are able to learn and generalize from examples to produce meaningful solutions to problems (Jiang and Cotton 2004). The ANN is utilized as a practical black-box tool for developing a non-linear regression between the large-scale atmospheric dataset (predictors) and observed daily rainfall (predictands) (Harpham and Wilby 2005; Khan et al. 2006). The ANN structure designed here consists of three types of layers. The first is the input layer, with one neuron for each predictor variable; here, the input layer is where the predictors (large-scale atmospheric variables) are defined. The second is the hidden layer, which may have several neurons; all hidden neurons transform the inputs nonlinearly into another dimension through weights and a bias term, as shown in Eq. (1):

h_j = f( Σ_i w_ij x_i + b_j )    (1)

where x_i are the inputs, w_ij the connection weights, b_j the bias of hidden neuron j, and f the transfer function. In this study, the number of hidden neurons was set to twice the number of predictors (determined by trial and error). The last layer is the output layer (predictand), which in this study contains the daily/monthly precipitation variable from the station records (see Fig. 3). Three popular transfer functions were tried out in the ANN construction trials: tangent sigmoid, linear, and log-sigmoid. In this study, the tangent sigmoid function, Eq. (2), was found suitable:

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))    (2)

The search for the best values of the weights and biases is referred to as the ANN learning phase, which is carried out with a known input and output set. Each training step involves a set of inputs passed forward through the network to generate trial outputs, which are then compared to the observed outputs. When the difference (residual) exceeds the desired value, the error is passed back through the network, and the training algorithm adjusts the connection weights based on the error. This process is referred to as back-propagation. Once the comparison error has been reduced to an acceptable level for the entire training set, the training phase is complete. After training, the network is evaluated using a set of cases withheld from it during the training session. Once completed, the model is ready to be applied to any other situation. The network was trained using the Levenberg-Marquardt feed-forward back-propagation algorithm, Eq. (3):

w_(k+1) = w_k - (J^T J + μ I)^(-1) J^T e    (3)

where J is the Jacobian matrix, w is the parameter vector, μ is the Marquardt parameter, and e is the vector of residuals. The Levenberg-Marquardt algorithm is a second-order non-linear optimization technique that is typically faster and more reliable than standard back-propagation (Fistikoglu and Okkan, 2011). To calibrate the models, the time series of Izmir station was divided into training and test datasets: the precipitation data of the years 1948 to 1990 served as the training period, while the data recorded from 1991 to 2018 were employed as the testing period. To examine and evaluate the performance of the models, the coefficient of determination (R2) and the root mean square error (RMSE) were used.

RESULTS AND DISCUSSION

The performance of the model is evaluated by comparing the results of the statistically downscaled precipitation P(t)_down with the observed precipitation P(t)_obs from station data. The cumulative precipitation was obtained by aggregating the daily unit to larger units (i.e., 2, 3, 4, 5, 6, 7, 15, and 30 days). Eq. (6) expresses the aggregation process mathematically:

P(n)_cum = Σ_{i=1}^{n} P(i)    (6)

where P(n)_cum is the cumulative precipitation aggregated from daily precipitation, and n is the number of days.
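The overall pipeline (a tanh-hidden-layer network with twice as many hidden neurons as predictors, daily downscaling, and n-day aggregation as in Eq. (6)) can be sketched as follows. This is a simplified illustration with synthetic stand-in data: scikit-learn does not implement the Levenberg-Marquardt algorithm used in the study, so the quasi-Newton L-BFGS solver is substituted here, and all variable names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

# X: daily large-scale predictors (n_days x 12), y: observed daily precipitation.
# Illustrative random data stands in for the NCEP/NCAR and station series.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = np.maximum(0.0, X @ rng.normal(size=12) + rng.normal(size=1000))

X_scaled = StandardScaler().fit_transform(X)

# Hidden layer with twice as many neurons as predictors and a tanh activation,
# mirroring the design choices above; L-BFGS stands in for Levenberg-Marquardt.
model = MLPRegressor(hidden_layer_sizes=(24,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X_scaled, y)
y_hat = model.predict(X_scaled)

def aggregate(daily, n):
    """Eq. (6): non-overlapping n-day cumulative precipitation."""
    m = (len(daily) // n) * n
    return daily[:m].reshape(-1, n).sum(axis=1)

for n in (1, 7, 30):
    r2 = r2_score(aggregate(y, n), aggregate(y_hat, n))
    rmse = mean_squared_error(aggregate(y, n), aggregate(y_hat, n)) ** 0.5
    print(f"{n:2d}-day aggregation: R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```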
The performance of the model for all units aggregated from the daily unit, together with the descriptive statistics of the training time series, is presented in Table 4; Table 5 shows the same statistics for the test time series. The results show that the mean values of the observed and downscaled precipitation are very close to each other in both the training and testing periods, regardless of the aggregation unit. However, in terms of standard deviation, as the aggregation unit becomes larger, the gap between the observed and downscaled values decreases, a strong indication that the performance of the model is enhanced as the aggregation unit increases. The same conclusion can be drawn when comparing the coefficient of variation between the observed and downscaled precipitation, as Fig. 4 illustrates. Fig. 7 shows that the differences between the box plots of the observed and downscaled precipitation decrease for long durations, such as 15 and 30 days. To spatially validate the model at the neighbouring stations, the concept of the aggregation unit was also applied there; the model behaviour remains valid for these stations (R2 increases as the aggregation unit increases), as Fig. 6 and Table 6 indicate. To sum up, the performance of the model clearly improves as the aggregation unit increases: although the model gives fair results at the daily level, a significant improvement was recorded at the 30-day level (e.g., R2 is more than 0.73).

CONCLUSION

The ANN technique was used to establish statistical downscaling models for estimating station-based precipitation from the large-scale NCEP/NCAR atmospheric variables. The downscaling models were established using the records of Izmir station, and the nearby stations were utilized for spatial validation of the model. Daily and monthly precipitation were downscaled in separate models, and the performance of these models was then compared. In the daily model, although the performance is fair (not high) at the daily level, a significant improvement was recorded as the aggregation period increased. That means the accuracy at the weekly precipitation level is significantly better than at the daily level, and the accuracy at the monthly level is better than at the weekly level (e.g., R2 is 0.36, 0.59 and 0.73 for the daily, weekly, and monthly aggregation, respectively, for Izmir station in the training period). The model behaviour also remains valid for the neighbouring stations. In the monthly downscaling model, where mean monthly large-scale atmospheric variables were used, the model gives very good results in both the training and test periods (e.g., R2 ranging from 0.65 to 0.73). In general, the performance of the statistical downscaling models shows that the ANN approach produces very good results for the monthly model, while the daily model produces fair (not high) results. This is explained by the presence of uncertainty in daily predictors and predictands, which is much higher than that in monthly data (Khan et al., 2006). In addition, the ANN downscaling model simulates the mean values very well: because the ANN minimizes the MSE to obtain the best results, it produces small amounts of precipitation on dry days, but the mean values are nearly the same as the observed values. Hence, the aggregated daily sums approach the observed cumulative values as the aggregation period increases. For these reasons, the monthly downscaling models have relatively strong performance compared with the daily downscaling. The findings of the aggregated monthly model and the monthly model are similar; however, the aggregated monthly model provided slightly better results. Although the accuracy of the aggregated monthly model is higher, it requires a significant amount of time and effort. This study therefore recommends that, if monthly precipitation is required for climate studies, the monthly downscaling approach could conveniently be preferred over daily downscaling because of the processing time. Overall, these results indicate that the ANN method has proven to be a good tool for producing local surface variables such as precipitation, especially at the monthly level, from large-scale atmospheric variables.
Fig. 1: Location of the selected meteorological stations within the study area.
Fig. 3: Structure of the downscaling model.
Fig. 6: The performance of the model (R2) improves with an increase in the duration.
Table 2: Summary of the R2 between the precipitation at Izmir station and at the other stations for the period 1948-2018.
Table 4: Summary of the results of the daily model for the training period for Izmir station.
Table 5: Summary of the results of the daily model for the testing period for Izmir station.
Table 6: Summary of the R2 values of precipitation for 1-day and cumulative durations.
Table 8: The performance of the cumulative monthly results and the monthly model.
2022-02-24T16:18:07.039Z
2022-02-22T00:00:00.000
{ "year": 2022, "sha1": "29f0e67629ae22f3ee9d948f68a476dee43fde8d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21203/rs.3.rs-1363323/v1", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "b855ad2de7449bf12e52f6e77a172ca4951c4f19", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [] }
238747003
pes2o/s2orc
v3-fos-license
Fast FMCW Terahertz Imaging for In-Process Defect Detection in Press Sleeves for the Paper Industry and Image Evaluation with a Machine Learning Approach

We present a rotational terahertz imaging system for inline nondestructive testing (NDT) of press sleeves for the paper industry during fabrication. Press sleeves often consist of polyurethane (PU), which is deposited by rotational molding on metal barrels; the outer surface is then mechanically processed in several milling steps. Due to a stabilizing polyester fiber mesh inlay, small defects can form on the sleeve's backside already during the initial molding; however, they cannot be visually inspected until the whole production process is completed. We have developed a fast-scanning frequency-modulated continuous wave (FMCW) terahertz imaging system, which can be integrated into the manufacturing process to yield high-resolution images of the press sleeves and can therefore help to visualize hidden structural defects at an early stage of fabrication. This can save valuable time and resources during the production process. Our terahertz system can record images at 0.3 and 0.5 THz, and we achieve data acquisition rates of at least 20 kHz, exploiting the fast rotational speed of the barrels during production to yield sub-millimeter image resolution. The potential of automated defect recognition by a simple machine learning approach for anomaly detection is also demonstrated and discussed.

Introduction

Terahertz technologies, and in particular their application in nondestructive testing (NDT) for quality control and/or defect recognition, are on their way into industrial markets and real-world applications in production environments, maintenance tasks, and other areas of quality assessment [1-4]. The terahertz terminology in these contexts commonly refers to electromagnetic radiation in the region of the electromagnetic spectrum with frequencies between 0.1 and 10 THz, corresponding to free-space wavelengths of 3 mm to 30 µm. Terahertz waves offer a number of characteristic properties especially interesting for the NDT investigation of nonconducting materials and industrial components or products made from these materials. First, terahertz radiation can penetrate many common production materials at low absorption rates and good penetration depths, in particular plastics and polymer compounds [4], glass fiber-reinforced (GFR) composite materials [5-9], wood [10,11], paper [12] and cardboard [13], dry and wet paint layers or other coatings [4,14-16], and many more. At the same time, the small terahertz wavelengths of a few millimeters down to several tens of micrometers constitute an ideal premise for imaging techniques [17-21], with image resolutions on the order of typical, relevant defect sizes in components produced from the above materials. Compared to other established NDT technologies such as ultrasound inspection, X-ray screening and computed tomography, terahertz waves offer the unique combination of low photon energies and low radiation power - making them harmless to biological tissue and safe to use in industrial contexts - and, due to the electromagnetic nature of the waves, the possibility of contact-free operation with no need for a coupling medium to penetrate the materials under investigation [3,22].
The radiation can be easily guided and focused according to the specific context of application with quasi-optical lenses [23], typically produced from low-cost materials such as the polymers PE and PTFE, with diffractive elements [24], or with simple metallic mirrors. In addition, guiding and focusing of the radiation with easy-to-fabricate dielectric waveguide antennas has been demonstrated [25,26]. Combining all the above properties and advantages over other NDT techniques, and given the increasing availability of sources, detectors and receivers, terahertz technology has today reached a level of maturity to be implemented in industrial production environments or processes of quality control and to offer a valuable benefit in the optimization of these processes. Among the typical real-world NDT scenarios where terahertz imaging in particular can be (or is already being) employed are packaging control [13], production lines in the polymer and plastics industry for the detection of defects or the inspection of welding processes, manufacturing of GFR composites for lightweight construction [7,8,27], for example in the automotive, aviation and space industry [9,28,29], food inspection [30,31], the investigation of thermal and electrical insulation materials [26,32], but also fields of biomedical applications [33], artwork conservation [34-37], and many more. In this article we present a rotational terahertz imaging system, using the example of in-process inspection and defect detection in the production of large press sleeves (roller covers) for the paper industry. Press sleeves made from polyurethane (PU) are used in the paper industry on large rotating roller presses to extract residual water from the still wet paper pulp under high pressure. The dimensions of such roller presses commonly reach up to 15 m in length and up to around 1.5 m in diameter. Hence, typical surface areas of 50 square meters and more are covered by the relatively thin press sleeves of several millimeters in thickness. During the manufacturing of the sleeves, the PU material is molded onto large rotating metal barrels and stabilized by an inlaid fine fiber mesh, which is woven onto the barrels prior to sleeve production. When the PU is applied onto the barrels with the fiber mesh around them, small defects in the form of mostly spherical air inclusions, with diameters from a few millimeters down to less than 0.5 mm, can form on the sleeves' backside in contact with the metal barrels, in particular at the grid crossing points of the fiber mesh. These defects cannot be visually identified until the whole press sleeve is finally removed from the supporting barrels, when the whole production process involving many time-consuming steps of surface processing is completed. However, the formation of the defects may influence the structural integrity and lifetime of the press sleeves in the actual paper production, where downtimes in the production lines due to damaged or worn-out press sleeves can easily become very expensive. We developed a terahertz imaging system which can reveal defects on the press sleeves' backsides at an early stage in the production process, when they are still mounted onto the supporting, rotating metal barrels.
Hence, faulty sleeves with too many such defects can already be separated out before further surface processing, and the terahertz imaging can therefore add great value to the optimization of the sleeve manufacturing, saving valuable time and production costs. This application scenario demonstrates in an exemplary way the great potential and evident benefits the use of terahertz technology can have in industrial contexts where other inspection technologies cannot be easily applied. Our terahertz system consists of frequency-modulated continuous wave (FMCW) terahertz transceivers based on all-electronic, waveguide-integrated components for the terahertz frequency range. We developed two transceivers with working frequencies around 300 GHz and 500 GHz and with corresponding total sweep bandwidths of 90 and 160 GHz, respectively (we have reported on comparable measurement units in previous publications [28,29]). The FMCW technique enables depth-resolved measurements to generate 3D volumetric terahertz images [7,38] of the press sleeves in order to be able to separate and investigate the sleeves' backsides where the defect formation occurs. We integrate our terahertz transceivers into a linear translation stage which is placed in front of the rotating metal barrels with mounted press sleeves, and by linear translation along the barrel's rotational axis we record a spiral imaging path across the press sleeves' surfaces. In this way, we manage to limit the need for interaction with the production machines to a minimum level (no direct communication with the rotational mechanics is required) and at the same time we exploit the very high rotation speeds of the sleeve production machine of up to 150 rpm. In order to meet the requirements in terms of image pixel resolution down to around 0.5 mm, we tuned our terahertz transceivers to reach data acquisition rates of up to 20 kHz for a single-point measurement. We show in this article that the above concept is suitable for the inline NDT inspection and defect recognition of press sleeves in the industrial production environment. Since the surface area of the press sleeves is quite large, manual inspection for possible defects over such an area can become very time consuming and the risk of overlooking smaller defects becomes quite high. On the other hand, the large sleeve area with a relatively limited number of round defects, which show up with good contrast in the acquired terahertz images, constitutes a promising situation for automated image processing and defect detection by machine learning (ML) approaches [39]. Application of ML techniques to terahertz measurements has been reported many times before, however, mostly in terms of direct application of the ML methods to the quite complex terahertz signals (in pulsed time-domain systems or continuous-wave systems [40]) and employing various sophisticated ML concepts such as artificial neural networks (ANNs) [39,41], random forests, support vector machines (SVMs) and many others (see [42] and references therein). There exist only a few examples where ML is applied to the acquired terahertz images in an image processing sense, in which a direct evaluation of the image content itself is performed rather than of the measured signals. One reason may well be that large amounts of terahertz image data with reasonable quality - and relevance in terms of realistic, not artificially implemented defects - are often not readily available, as is commonly required for the training of most of the above ML methods.
In addition, in many contexts the spectroscopic information contained in the terahertz data may be of great use. However, in defect detection in industrial production environments, often the mere existence of an anomaly showing up in terahertz intensity images may well be sufficient to sort out a product, without any need for further knowledge about the precise terahertz signature of the defect. We demonstrate here that the processing of simple terahertz intensity images (measured in reflection or transmission) can already yield enough information for an automated or semi-automated inline quality control. Some examples of ML techniques applied to terahertz images for defect or abnormality detection are References [41,[43][44][45]. We note that there exist a number of works on the topic of object recognition and image segmentation in terahertz images, which could possibly be translated to the task of defect detection in production materials and components. The measurement scenario we present in this work offers two main benefits for the use of automated defect detection in the recorded terahertz images. First, the use of FMCW transceivers allows us to use some a priori knowledge of the investigated samples, namely, that we can pre-select a specific depth layer where the defects occur (here: the sleeves' backsides) out of the full volumetric image data which is acquired. Second, huge intact sleeve areas are compared to relatively few, small defects, and thus outlier or anomaly detection ML methods should be a natural approach to our specific task of defect recognition. We demonstrate that even with a simple statistical multivariate Gaussian anomaly detection approach [46,47], we can already achieve good detection accuracy on our measured terahertz data sets. Naturally, with increasing operation time of the imaging system in the press sleeve production, large amounts of terahertz image data can be obtained, which could be used for the training of further, more complex ML algorithms. Nevertheless, even with our straightforward approach we can provide ML-based support to the manual work of quality control personnel.

Materials and Methods

In this section we describe the details of our terahertz setup for the imaging of paper press sleeves. First, we present the terahertz FMCW transceivers we used for our measurements. We then explain the imaging setup we have realized to obtain 3D volumetric images of large press sleeve areas with very little need to interfere with the actual sleeve production process, as desired in the early stage of production where our measurements take place.

Terahertz FMCW Transceivers

For the terahertz imaging of press sleeves we employ two all-electronic, waveguide component-based FMCW terahertz transceivers with operation frequencies around 300 and 500 GHz and with sweep bandwidths of 90 and 160 GHz, respectively. We choose these particular transceivers for a good combination of penetration depth in the press sleeves' PU material and high spatial resolution to identify sub-surface defects on the rear side of the sleeves. We employ two slightly different setups in our two measurement units, as shown in the schematics in Figure 1 (a photograph of the two sensor units mounted on top of each other is shown in Figure 3 in Section 2.3). The 300 GHz transceiver uses an active frequency multiplier (AFM) with a multiplication factor of 6 driven by a voltage-controlled oscillator (VCO) to generate linear frequency ramps in the W-band between 70 and 110 GHz.
On the other hand, the 500 GHz transceiver employs AFMs with multiplication factor 12 to generate frequency ramps in the 115 to 175 GHz range. In both transceiver setups, the AFM output is multiplied by another frequency multiplier with multiplication factor 3, yielding sweep frequencies from 230 to 320 GHz and 350 to 510 GHz, respectively. Note that the exact operation conditions and usable bandwidths of the electronics offer some tuning range and depend on the specific components of the multiplier chains and attached antennas. The terahertz radiation (Tx) is coupled out via directional output couplers with attached horn antennas designed for the respective waveguides of the two frequency bands. We use quasi-optical PTFE-lens systems (50 mm focal length) to focus the terahertz radiation onto the target under test. The reflected terahertz signals (Rx) from the target - in detail: from reflecting interfaces within the terahertz-transparent target - are received by the same quasi-optics and horn antennas of the transceivers and fed to (third) subharmonic Schottky-diode mixers. There, the received signals are mixed with reference frequency ramps generated in a second AFM for heterodyne operation. The resulting intermediate frequency (IF) beat signals f_b are sampled in a data acquisition unit (DAQ) at 10 MHz sampling rate; we integrate delay lines in our measurement system to obtain IFs between 1 and 4 MHz to stay below the Nyquist-Shannon frequency of the DAQ. For a single reflecting interface at distance d to the transceiver, the sampled beat frequency f_b directly correlates with the time of flight τ of the received Rx signals compared to the Tx reference frequency ramps [38]:

f_b = (B_sweep / T_sweep) τ, (1)

where B_sweep is the bandwidth and T_sweep is the sweeping time of a single linear frequency ramp of the respective transceiver. The distance d to the target's reflecting interface can then simply be deduced from

d = c τ / 2, (2)

with c the effective speed of light in the material of the object under test and the factor 2 stemming from the measurement in reflection geometry.

Figure 1. Schematic of the FMCW terahertz transceivers. Linear voltage ramps from a data acquisition unit's (DAQ) analog output drive the voltage-controlled oscillators (VCOs) at frequencies from 12 to 18 GHz for the 300 GHz and 9 to 15 GHz for the 500 GHz system, respectively. The frequencies are then multiplied in waveguide component-based multiplier chains to the desired target frequencies of 230 to 320 GHz and 350 to 510 GHz. We use waveguide horn antennas in combination with quasi-optical lens systems to focus the outgoing radiation (Tx) onto the press sleeves. The reflected radiation (Rx) is collected by the same quasi-optics and guided to Schottky-diode receivers and mixed with the VCOs' reference output ramps. The generated difference frequency signals are sampled by 10 MHz ADC input channels of the DAQ.

For terahertz-transparent target materials, the measurement signal constitutes the sum over all single (and multiple) reflections within the target under test, superposed in the receiving mixer. Note that such multiple reflections can be particularly important, e.g., for signal modeling approaches in high-resolution thickness measurements with terahertz FMCW systems [48]. In terms of signal processing, the real measurement signal sampled in the DAQ is bandpass filtered and converted into an analytic signal, which is subsequently windowed by an appropriate window function and then Fourier-transformed into the frequency domain.
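To make the FMCW evaluation chain concrete, the following Python sketch simulates an ideal beat signal for a single reflector and turns it into a depth profile along the lines just described (analytic signal, windowing, Fourier transform). All parameter values and helper names are illustrative assumptions loosely modeled on the 500 GHz unit, not the exact configuration of the deployed system.

```python
import numpy as np
from scipy.signal import hilbert, windows

# Illustrative parameters (assumptions, roughly following the 500 GHz unit).
B_SWEEP = 160e9   # sweep bandwidth [Hz]
T_SWEEP = 200e-6  # duration of one linear ramp [s]
FS = 10e6         # DAQ sampling rate [Hz]
C = 3.0e8         # speed of light in air [m/s]

def beat_signal(distance, n_samples=2000):
    """Ideal IF beat signal for a single reflector, following Eqs. (1)-(2)."""
    tau = 2.0 * distance / C                # round-trip time of flight
    f_b = (B_SWEEP / T_SWEEP) * tau         # beat frequency, Eq. (1)
    t = np.arange(n_samples) / FS
    return np.cos(2.0 * np.pi * f_b * t)

def a_scan(signal):
    """Convert one sampled sweep into a depth profile (A-scan)."""
    analytic = hilbert(signal)              # complex analytic signal
    windowed = analytic * windows.hann(len(signal))
    spectrum = np.abs(np.fft.fft(windowed))[: len(signal) // 2]
    freqs = np.fft.fftfreq(len(signal), d=1.0 / FS)[: len(signal) // 2]
    depths = freqs * C * T_SWEEP / (2.0 * B_SWEEP)  # invert Eqs. (1)-(2)
    return depths, spectrum

depths, profile = a_scan(beat_signal(distance=0.30))
print("reflector found at %.3f m" % depths[np.argmax(profile)])  # ~0.300 m
```

For a reflector at 0.30 m, the beat frequency evaluates to 1.6 MHz, comfortably below the 5 MHz Nyquist limit of the 10 MHz DAQ, and the peak of the Fourier spectrum recovers the target distance.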
A more detailed discussion of the signal processing steps can be found, e.g., in Reference [48]. The theoretical range resolution for the FMCW sensors is again directly related to the sensors' sweep bandwidth via

Δr = c / (2 n B_sweep), (3)

with n the refractive index of the penetrated material. For our sensor configurations we find maximum range resolutions (in air, n = 1) of roughly Δr = 1.6 mm for the 300 GHz system and Δr = 1 mm for the 500 GHz system at full sweep bandwidths. We note that the simultaneous enhancement of range and lateral resolution in FMCW imaging systems by computational image processing has recently been reported [49]. In standard configuration, our terahertz sensors operate over large sweep bandwidths of 90 and 160 GHz around the 300 and 500 GHz center frequencies, respectively. The duration of one single linear frequency sweep in both cases is 200 µs for 2000 sampling points, mainly defined by the digital-to-analog converters (DACs) of the 10 MHz DAQ unit driving the VCOs. As a result, the typical maximum data acquisition rate for a full frequency ramp without any signal averaging is 5 kHz. However, for the specific application of NDT of large press sleeves for the paper industry, we had to realize significantly higher single-point measurement rates of up to 20 kHz to address the high rotational velocities of the press sleeves and the desired spatial resolutions of the resulting terahertz images (see Section 2.2 for details). We achieve this by cropping the sweep bandwidths B_sweep while keeping the slope B_sweep/T_sweep of the frequency ramps in (1) constant, to ensure that the IF frequency remains below 5 MHz, satisfying Nyquist's sampling theorem for the maximum 10 MHz sampling rate of our DAQ. Thus, for an effective 20 kHz data acquisition rate, the remaining sweep bandwidths of the two terahertz sensors amount to 27 GHz for the 300 GHz unit and 45 GHz for the 500 GHz unit. Although the maximum range resolutions after (3) are therefore reduced to 5 mm and 3.3 mm (in air, n = 1), respectively, we show in our measurement results below that this is still sufficient for a discrimination of the press sleeves' front and back sides for defect detection purposes. Both terahertz transceivers are equipped with quasi-optical focusing lens setups with focal distances of 50 mm. The wavelength limits of the lateral resolution amount to roughly 1 mm and 0.6 mm for the 300 and 500 GHz systems, respectively. We note, however, that in many imaging scenarios, defects with diameters below the theoretical resolution limit of the measurement setup can still be inferred from full 2D cross-sectional image data - especially in cases of spatial oversampling [50] - even though the defects are not fully resolved in the strict sense of the technical term. Altogether, with the described FMCW measurement approach we obtain single-point terahertz depth profiles (A-scans) from our terahertz FMCW transceivers at kHz measurement rates. Finally, in order to acquire 3D volumetric terahertz images, the terahertz sensors have to be combined with appropriate scanning mechanics to form 2D depth profiles along a single line (B-scans) or across a 2D surface (C-scans). In the application scenario presented in this contribution, we employ the FMCW transceivers in a rotational imaging setup, which is described in detail in the following section. For the NDT inspection of press sleeves, this enables us to monitor the hidden backside of the sleeves, where typically the formation of defects occurs during production.
Terahertz Imaging Setup

Schematic views of our terahertz FMCW imaging setup for the NDT inspection of press sleeves for the paper industry are depicted in Figure 2. During production, the PU press sleeves are molded onto large rotating metallic barrels with diameters on a meter scale. In order to obtain volumetric quasi-3D terahertz images of large sleeve areas, the terahertz sensors are mounted on a mechanical translation stage at the height of the barrel's rotational axis. The sensors are operated in continuous data acquisition mode, recording a continuous stream of terahertz FMCW sweeps - i.e., single-point depth profiles (A-scans) - during measurement. As the sensor moves along the linear translation stage while the metal barrel with press sleeve is spinning, a spiral imaging path across the surface is recorded. In the current implementation of the imaging system, no rotational encoder information is used for automatic synchronization of our terahertz measurements with the metal barrel's rotation. We therefore attach a small metal strip onto the press sleeves parallel to the translation axis (y axis), which produces a strong spike in the terahertz reflection signal on every revolution (x axis). We implemented an edge detection algorithm searching for the maximum terahertz signal per revolution to align the acquired stream of terahertz data along the metal strip. In this way, the recorded data is unrolled along the spiral imaging path and 3D volumetric terahertz images of the press sleeves are obtained. Therefore, the measurement setup operates completely independently of the rotating metal barrel with surrounding press sleeves, as long as a constant rotational speed of the barrel is ensured. With this approach we can integrate our measurement system into the given circumstances of the production environment with no further need of higher-level communication with the rotational axis of the manufacturing machine. During production, the press sleeves together with the supporting metal barrels rotate at quite high rotational velocities of up to 150 rpm. We designed our terahertz imaging system in such a way that we can exploit these high velocities to obtain terahertz images of the entire sleeve area at reasonable scanning times, defined only by the velocity of the linear translation stage. The fast rotational velocities together with typical sleeve diameters of about 1.1 m result in significant surface velocities of up to 10 m/s of the transceivers scanning along the spiral trajectory across the sleeves' surface. Therefore, fast data acquisition rates of up to 20 kHz for the full FMCW sweeps are required to realize pixel resolutions of roughly 0.5 mm along the direction of the circumference (x) in the final terahertz images. Note that for a similar resolution along the translational axis, a linear velocity of roughly 1 mm/s is required. For typical sleeve lengths of up to 13 m, the total image acquisition time for an entire sleeve is approximately 4 h. Since we integrate our measurements directly into the manufacturing process, this does not add significantly to the total time of sleeve production, and the terahertz imaging can even be combined with mechanical surface processing steps of comparable time consumption.
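As a plausibility check of these numbers, the following short sketch recomputes the surface velocity, pixel sizes, and total scan time from the parameters quoted in the text (rotation speed, sleeve diameter, acquisition rate, translation velocity); the parameter values are taken from the description above, while the variable names are my own.

```python
import math

RPM = 150            # rotational speed of the barrel [1/min]
DIAMETER = 1.3       # sleeve diameter [m]
RATE = 20e3          # single-point acquisition rate [Hz]
V_LINEAR = 1e-3      # translation velocity along the barrel axis [m/s]
LENGTH = 13.0        # sleeve length [m]

rev_per_s = RPM / 60.0
surface_speed = math.pi * DIAMETER * rev_per_s   # speed along the spiral path
pixel_x = surface_speed / RATE                    # pixel along circumference
pixel_y = V_LINEAR / rev_per_s                    # axial advance per revolution
scan_hours = LENGTH / V_LINEAR / 3600.0

print("surface speed: %.1f m/s" % surface_speed)           # ~10.2 m/s
print("pixel (circumference): %.2f mm" % (pixel_x * 1e3))  # ~0.51 mm
print("pixel (axial): %.2f mm" % (pixel_y * 1e3))          # ~0.40 mm
print("full-length scan: %.1f h" % scan_hours)             # ~3.6 h
```

The outputs reproduce the figures given above: roughly 10 m/s surface speed, about 0.5 mm pixels, and an acquisition time of approximately 4 h for a 13 m sleeve.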
It should be mentioned that this simple approach comes with some minor drawbacks. First, the terahertz images to some extent show jittering from line to line, because it cannot be guaranteed that the exact moment of passing the metal strip coincides with the identical position within one frequency sweep performed by the terahertz sensor from one roundtrip to the next. This, however, is a fundamental limitation which cannot be easily removed by interpolation onto a finer pixel grid. Higher-level synchronization with the rotational axis' motion controller would be required to address this issue. Here, we deliberately did not pursue this approach in order to preserve the simplicity of the presented measurement procedure. Second, we rely on the constant velocity of the rotational (and translational) axis to obtain images with constant resolution in the x (and y) direction over the whole image. Potential fluctuations in one or both velocities could in principle lead to a distortion of the acquired terahertz images. Nevertheless, we have already successfully demonstrated a comparable imaging approach in a previous work, where a 5-axis milling machine was combined with a dual-frequency terahertz sensor to obtain 3D volumetric images of aircraft radomes [28,29] with an even more complex conical geometry compared to the cylindrical press sleeves presented here. There, additional position information of the 5-axis machine was used for the alignment of the volumetric terahertz image data after each measurement.

Figure 3 shows our terahertz sensors set up in a laboratory-scale test setup, which has been designed to mimic the real-world situation in the production environment of the real paper press sleeves. We used this setup for preliminary tests on relevant model samples of press sleeves, i.e., cut-out pieces of full-scale press sleeves, which contained real defects of various sizes. For the test setup, we attached our terahertz FMCW transceivers to a vertical linear translation stage, which was placed in front of a rotation table. The terahertz sensors were aligned to point at the vertical rotational axis of the rotation table and could be moved over a linear travel range of 450 mm. We aimed to emulate our final application scenario using a metal cylinder with 30 cm diameter onto which we mounted pieces of press sleeves with real defects from the manufacturing process. With rotational velocities around 60 rpm we reach surface velocities similar to those in the full-scale imaging setup described above. We note again that the imaging setup could also be used to record terahertz images of noncylindrical objects as long as a certain degree of rotational symmetry is given. Although the image reconstruction method at this stage relies on a constant surface velocity and an adaptation of the rotational speed during the measurement may not be feasible, it may depend on the requirements of the specific NDT application whether a distortion of the terahertz images after data alignment can be accepted at the benefit of this simple and fast imaging concept for rotationally symmetric objects. The measurement process with our laboratory setup works similarly to the method described above in Section 2.2 for the final application scenario, except that for testing purposes only a segment of a press sleeve was used. Note in Figure 3 that the press sleeve segment only partially covers the metal cylinder and the transition from the press sleeve to the metal cylinder itself (instead of an additional metal strip as in the final setup) serves as the reference metal edge for the terahertz data alignment.
Figure 4a shows a piece of paper press sleeve we used as a model sample for the preliminary tests. The sample shows a number of defects, namely clearly visible larger defects of sizes from roughly 1.2 mm down to 0.8 mm (measured with a mechanical caliper), and a number of smaller pinhole defects with less than 0.5 mm in diameter. The defects tend to form at the crossing points of the sleeves' fiber mesh inlays and thus are arranged rather regularly across the sample area. The magnified photograph shows the regular arrangement of some pinhole defects. We mounted the piece of press sleeve on the metal barrel as shown in Figure 3 and performed terahertz measurements with both FMCW transceiver units at 300 and 500 GHz operation frequencies. The measurements were recorded at a data acquisition rate of 20 kHz at a rotational speed of 60 rpm and a translational velocity of 0.5 mm/s of the linear axis. With these parameters, we achieve a surface pixel size of approximately 0.5 mm along the circumference (x axis) and 0.5 mm along the linear axis (y axis). Recall that with a realistic diameter of the metal barrel of up to 1.5 meters and rotational velocities of up to 150 rpm, the resolution of the final measurement setup amounts to approximately 0.5 mm along the circumference and 0.2 mm along the linear axis.

Figure 4. Terahertz images of the press sleeve sample mounted as in Figure 3. The images were recorded at a data acquisition rate of 20 kHz with the two terahertz transceivers at 300 GHz and 500 GHz center frequency. Defects down to 0.8 mm (circles) as well as an unexpected larger defect inside the sleeve (rectangle) can be detected with both systems. The imaging system working around 500 GHz can even reveal a large number of the pinhole defects, the ones marked with yellow arrows corresponding to the pinholes in (a).

Preliminary Studies on a Laboratory Scale Model

Terahertz images of the test sample are shown in Figure 4b. The images represent cross-sections (C-scans) of the press sleeve sample at a depth close to its backside in contact with the metal cylinder, at around 8 mm below the outer surface. In both measurements, the defects down to a size of roughly 0.8 mm can be clearly recognized (marked by the yellow circles). However, the measurement at 500 GHz also reveals most of the smaller pinhole defects distributed along the regular grid crossing points of the fiber mesh inlay (marked by yellow arrows in the figure and corresponding to the pinholes in Figure 4a). Thus, when penetration depth and/or dynamic range of the 500 GHz transceiver unit are sufficient (which is in particular the case for the PU press sleeves), our measurement system can detect small defects even slightly below the corresponding free-space wavelength of 0.6 mm. We observe that the terahertz images are overlaid by a number of interference fringes, which were caused by a deformation of the press sleeve being stretched onto the metal cylinder with the help of two tension belts - the cut-out sample with original sleeve diameter of 1.1 m could not be mounted perfectly flat onto the smaller diameter of the lab-scale metal barrel. In the final measurement scenario, such interference patterns are not present in the terahertz images (see Section 3.1). We also note that a larger material defect is found inside the sleeve material (marked by the yellow rectangles) in the lower right corner of the test sample, which was not expected from visual inspection.
This underlines once more the great value of terahertz NDT imaging in general for the detection of hidden defects inside terahertz-transparent materials. We finally note that the costs of waveguide components for all-electronic terahertz FMCW transceivers usually grow significantly with the desired output frequency, and a trade-off between the required detectable feature sizes of the particular application and the hardware costs of the measurement system should be considered. For our scenario of defect detection in paper press sleeves, we find the 500 GHz FMCW transceiver to best meet the requirements of the specific application. Nevertheless, the above results also prove the applicability of the 300 GHz measurement unit when slightly less spatial resolution of the imaging system may be sufficient.

Results and Discussion

In this section we present terahertz imaging results of on-site measurements of full-scale press sleeves for the paper industry in a typical production environment. The resulting image data was first investigated manually to assess the feasibility of the measurement approach. We find that the imaging system with the 500 GHz terahertz transceiver yields promising results for the inline NDT detection of typical defects in press sleeve production. We also applied rudimentary machine learning to the acquired terahertz images to demonstrate the future potential of automated defect recognition, of particular interest for the inspection of large press sleeve surface areas, where manual quality control can be quite complex and time consuming.

Measurement of Press Sleeves in Real-World Scenario

We installed our terahertz imaging system in a real production environment of press sleeves for the paper industry at Voith Group. Figure 5 shows a photograph of the system during the measurement of a press sleeve on the rotating metal barrel. In this particular setup, the terahertz transceiver was mounted on a linear stage with a travel range of 30 cm. For the first imaging run, the rotational speed of the supporting metal barrel was set to 60 rpm and the linear axis' velocity was 0.5 mm/s, yielding image pixel resolutions of 0.2 mm along the circumference (x axis) and 0.5 mm along the linear axis (y axis). With a sleeve diameter of 1.1 m, the total imaged area was approximately one square meter. The time of image acquisition with the above settings amounted to roughly 10 minutes when scanning over the full 30 cm travel range of the linear stage. Figure 6 presents terahertz images of a press sleeve acquired in this production environment, as shown in Figure 5. The wide image at the top shows a segment of 25 × 110 cm² out of the total scan area of 1 square meter of the press sleeve - note that it is not possible to display terahertz images of the entire measured surface with reasonable detail on a typical high-resolution computer screen (or paper printout), since the actual size of relevant defects would be well below the size of a single pixel. For a visual inspection of the terahertz images, a possible scenario could be to show segments of the full sleeve area in a continuous scrolling video mode (for an example, please see Supplementary Material).

Figure 5. The terahertz imaging system set up in a realistic production environment of press sleeves for the paper industry.
The sleeves are investigated within the usual production process to detect possible invisible defects at an early stage of the production line. An area of around 1 square meter of sleeve surface was recorded in this study, limited only by the total travel range of the linear translational axis. Due to the high rotational velocities of up to 150 rpm and large sleeve diameters of up to 1.3 m, the terahertz FMCW transceivers are operated at very high data acquisition rates of 20 kHz to yield the desired image resolutions of around 0.5 mm.

The image in Figure 6 represents a cross-sectional layer (C-scan) of the sleeve area at a depth of 8 mm below the sleeve's surface. The terahertz data was acquired with the 500 GHz transceiver at 20 kHz measurement rate. Recall from Section 2.1 that in this configuration the depth resolution amounts to approximately 3.3 mm for a single layer. A cutout of a single line scan (B-scan, different press sleeve) is shown in the bottom left image in the same figure. The bright signal on the left of the B-scan represents the reference metal edge on the sleeve's surface (z = 0 mm) used for the alignment of the measured terahertz data. Next to the reference edge, the terahertz data of the press sleeve is seen, with the top layer representing the sleeve's surface and the layer below representing the signal at the sleeve's backside at the interface with the supporting metal barrel. A number of defects can be identified distributed over the terahertz image of the selected sleeve segment, as marked with blue circles in the figure. For better visibility, two segments of 200 × 200 mm² and 100 × 100 mm², respectively, are shown on a magnified scale below the full image. It can be recognized that the size of most of the defects is roughly comparable to the size of the rectangular background pattern generated by the sleeve's fiber mesh inlay. We therefore assess that pinhole defects down to at least 1 mm in diameter can be reliably detected by our terahertz imaging system, and even smaller defects may be identified. It should be noted that detected defects need not necessarily be located at the sleeve's rear side when they appear in the cross-sectional C-scan images at the respective depth layer but may represent shadows of invisible defects within the PU material (compare Section 2.3). We present a further imaging result of another press sleeve in order to demonstrate the feasibility of our terahertz NDT approach under real operation conditions of the production environment. Beforehand, during the molding process of the sleeve material, the velocity of the production machine was deliberately perturbed to induce typical production defects at a defined area on the sleeve's backside for our measurement campaign. The press sleeve had a diameter of 1.3 m and we performed our measurements at a realistic rotational speed of 150 rpm of the metal barrel, where the full 20 kHz data acquisition rate of the terahertz transceivers is required to achieve the relevant lateral image resolution of approximately 0.5 mm per pixel in both directions. The image in Figure 7 shows again a cutout segment of the total 0.3 × 4 m² measured sleeve surface area, where the area with the forced production defects between 50 and 120 mm in the y direction is clearly revealed by the terahertz imaging system. The magnified inset shows again that defects down to the (sub-)millimeter scale and the sleeve's fiber mesh inlay are well resolved even under these real-production conditions.
Note that the metal edge reference appears as a dark area on the left because the depicted image shows the sleeve's backside while the terahertz radiation was blocked by the metal edge attached to the sleeve's surface.

Figure 6. Terahertz image (C-scan) of a larger segment (1100 × 250 mm²) of a press sleeve measured at the production site as in Figure 5 with the 500 GHz transceiver at 20 kHz measurement rate. The image shows a depth layer close to the sleeve's backside in contact with the metal barrel (see B-scan in the lower left corner) at 8 mm below the surface. The magnified images indicate that the size of most of the defects is comparable to the grid generated by the sleeve's fiber mesh inlay, roughly amounting to approximately 1 mm in diameter or smaller.

Automatic Detection of Defects

In the above results, we have demonstrated that our terahertz FMCW imaging system can clearly visualize typical pinhole defects of millimeter sizes during the production of press sleeves for the paper industry. However, it can be quite a complex task to manually investigate typical full sleeve areas of 50 m² and more. Therefore, an automated defect recognition is desired, which can help NDT personnel with a pre-selection of interesting sleeve segments. If possible, a fully automated image processing may even eliminate the need for any manual assessment of the acquired terahertz data, once a reliable detection model has been trained. Here, we briefly show promising results of the application of a simple machine learning (ML) approach for outlier or anomaly detection in the terahertz data acquired during our measurement campaign. For the automated defect detection, we implemented a statistical anomaly detection model based on a multivariate Gaussian distribution. Before the actual training of the model, we make use of the volumetric nature of our recorded FMCW terahertz data and preselect the depth layer as displayed before in Figure 6. We split the full recorded terahertz image from the measurement described in Section 3.1 into rectangular image segments of 100 × 50 pixels to generate a total of 1740 samples. Since we apply supervised, statistical anomaly detection for the training of the algorithm, we manually label the image segments where a defect could be visually identified as positive (1) and the remaining samples without defects as negative (0). In the data of the press sleeve shown in Figure 6 we find 26 image segments with visible defects. Due to the extremely low positives-to-negatives ratio in the data set, a simple anomaly detection approach based on a multivariate Gaussian distribution should already yield reasonable precision and recall for automated defect detection. For such an approach, it is beneficial when the training data contains as few outliers as possible. We therefore select a total of 60% of the data samples from areas of the sleeve containing only a total of 3 positive labels (0.3% ratio) for the training of our model, and the remaining 40% of the samples containing a total of 23 positive labels (3.3% ratio) as a cross-validation set. We then define a two-dimensional feature space for the terahertz data based on the intensity values of each image segment as obtained from the data of the terahertz transceivers. Recall that by using the image from Figure 6 we have already pre-selected a slice from the full volumetric data set where possible defects usually appear on a mostly homogeneous background overlaid with the weak regular grid pattern of the fiber mesh inside the sleeve.
Our first data feature X1 represents the total dynamic range of each sample, which should be a good indicator for areas with general inhomogeneities in the sleeve material. In addition, we define a second feature X2 as the standard deviation of the minimum signal value per column in order to discriminate between an overall distribution of higher dynamic range (e.g., due to the regular grid pattern) and highly localized low intensity values. Hence, we calculate our model's input features as

X1 = log(max(X) − min(X)), X2 = log(std(min(X, dim = 1))),

where X is the set of training or cross-validation samples, respectively, and scale our features by mean normalization. To illustrate our choice of features, Figure 8a gives some examples of image segments with defects (and non-defects) detected by our algorithm in the top row. In the bottom row, the corresponding minimum intensity per column of the above images is plotted. It can be recognized that typical pinhole defects as in example 1 produce a narrow spike in the minimum intensity, which has a high standard deviation and thus a large X2 value. On the other hand, e.g., image 4 may show a slightly increased dynamic range due to the regular mesh pattern but has a small standard deviation of the minimum value. Figure 8b visualizes the feature space of our data set and the results of the outlier classification with the above parameters. Green crosses represent the samples manually labeled as non-defect and red crosses represent samples manually labeled as defect. We train a multivariate Gaussian distribution on the selected training set with few positive (defect) samples to obtain an optimal Gaussian fit to the large, defect-free areas of the press sleeve as the basis for the anomaly detection model. The distribution is naturally centered around zero due to the feature scaling (numerical expectation values are µ = [−2.033 × 10⁻¹⁶, −1.08 × 10⁻¹⁶]) and has variances σ² = [0.005, 0.006] in X1 and X2, respectively. In order to define the decision boundary for labelling samples as outliers, we maximize in an iterative process the F1 score of our model applied to the cross-validation data set with its higher positives-to-negatives ratio. As usual, the F1 score is defined as

F1 = 2 tp / (2 tp + fp + fn),

with tp, fp, and fn the total numbers of true positives, false positives, and false negatives, respectively. The F1 score saturates at a value of 0.913 after around 3000 iterations (see inset in Figure 8b) and we find an optimal probability decision boundary of ε = 0.0112 for our outlier detection, illustrated by the orange contour line in the figure. Finally, when we apply the anomaly detection to the whole terahertz image of the press sleeve, we detect 28 image segments with defects compared to 26 manually labeled positive samples. The detected outliers are marked by black circles in Figure 8b and the respective image segments are marked by red borders in the terahertz image in Figure 9. Note that some of the defects lie on the borders of the 100 × 50 pixel image segments and are detected as two different defects by the algorithm (for example, see the magnified inset with blue borders). In a real application scenario, such a doubling should not be of major concern, as long as the defects are detected at all. In total, we find that our automated defect detection misses only 2 out of the 26 manually labeled image segments containing real defects, yielding a defect detection accuracy of 92% in the given press sleeve measurement.
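As an illustrative aside, the following Python sketch reproduces the structure of this detector on synthetic data: segment-wise features X1 and X2, a multivariate Gaussian fitted to (nearly) defect-free training segments, and a probability threshold ε chosen by maximizing the F1 score on a validation set. The synthetic segments, array shapes, helper names, and the grid search for ε (in place of the paper's iterative maximization) are my own assumptions, not the published pipeline.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def features(seg):
    """Two features per 100 x 50 pixel segment, as defined above."""
    x1 = np.log(seg.max() - seg.min())      # total dynamic range
    x2 = np.log(seg.min(axis=0).std())      # spread of the column minima
    return np.array([x1, x2])

def make_segment(defect=False):
    """Synthetic stand-in for one terahertz image segment."""
    seg = 1.0 + 0.05 * rng.standard_normal((100, 50))   # homogeneous background
    if defect:
        r, c = rng.integers(10, 90), rng.integers(5, 45)
        seg[r - 2:r + 2, c - 1:c + 2] -= 0.6            # localized intensity dip
    return seg

train = np.array([features(make_segment()) for _ in range(600)])  # defect-free
val_y = np.array([1 if i % 30 == 0 else 0 for i in range(300)])
val = np.array([features(make_segment(defect=bool(y))) for y in val_y])

model = multivariate_normal(mean=train.mean(axis=0),
                            cov=np.cov(train, rowvar=False))

def f1(eps):
    pred = model.pdf(val) < eps             # low density -> outlier (defect)
    tp = np.sum(pred & (val_y == 1))
    fp = np.sum(pred & (val_y == 0))
    fn = np.sum(~pred & (val_y == 1))
    return 2 * tp / max(2 * tp + fp + fn, 1)

eps_grid = np.logspace(-6, 2, 200)
best_eps = eps_grid[np.argmax([f1(e) for e in eps_grid])]
print("best epsilon: %.3g, F1 = %.3f" % (best_eps, f1(best_eps)))
```

Because defect-free segments vastly outnumber defective ones, the Gaussian fit describes the normal background well and localized dips fall far into the tails of the fitted density, which is exactly the situation exploited in the measured data.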
We note that after omitting the double detections, no further false positive detections occur in this given data set. Thus, even with this simple ML approach we achieve good reliability to support the manual inspection of large-area press sleeves in the production process by automated defect recognition. Nevertheless, numerous methods exist to further improve the reliability of ML-based anomaly detection, e.g., one-class SVMs [51], convolutional neural networks (CNNs) [52], histogram methods, auto-encoders etc., in multiple variants for semi- or unsupervised learning. An extensive review and further reading can be found, e.g., in References [46,47] and references therein.

Figure 9. Illustration of the outcome of the automated defect recognition on the whole measured press sleeve area with the previously trained ML anomaly detection model. The red rectangles mark the image segments labeled as outliers (defects) by the algorithm. Double detection occurs when the defects extend from one image segment to an adjacent one (see magnification). The large green rectangle marks the section of the sleeve displayed before in Figure 6. In total, 24 out of 26 manually labeled defects are correctly recognized as outliers, yielding a detection accuracy of 92%.

Conclusions

We presented in this work the successful implementation of a terahertz imaging system for the NDT inspection of press sleeves for the paper industry. Defects in the sleeves' PU material may form at an early stage of manufacturing on the sleeves' inaccessible backside, which stays in contact with a supporting metal barrel during the whole production process. This makes a visual inspection impossible before the production of a sleeve is finished and it is removed from the barrel. In order to overcome this issue and to prevent unnecessary production costs for potentially faulty press sleeves, our imaging system can be directly integrated into the production process to identify possible defects at a very early stage of production. We designed our inspection system based on all-electronic FMCW terahertz transceivers working at 300 GHz and 500 GHz center frequencies, optimized for high data acquisition rates of up to 20 kHz. Therefore, we can exploit the fast rotational speeds of the sleeves mounted on the rotating metal barrels during manufacturing to yield high spatial image resolutions of approximately 0.5 mm, and at the same time achieve reasonable image recording times for the large surface areas of typical press sleeves (typically tens of square meters) with large diameters of up to 1.3 m and lengths of up to 15 m. Since the terahertz measurement can be integrated at almost any point in the production line, the measurement time itself does not add significantly to the total time of sleeve production. Our study showed that an inspection for defects should - besides aspects of optimization of the production routines and time/cost savings - be performed as early as possible after the molding of the sleeves, before any further surface processing (e.g., milling of drainage grooves) has occurred. We demonstrated the feasibility of the general measurement concept in laboratory-scale studies, where we were able to detect typical defects of diameters down to approximately 0.8 mm and, under good conditions, even smaller pinhole defects which can form at the crossing points of a fiber mesh inlay inside the PU sleeves.
In order to achieve the high data acquisition rates for the required high surface velocities of up to 15 m/s of the rotating sleeves, we reduced the standard bandwidth of our FMCW transceivers to a level where we could still separate the front and back sides of the sleeves via the FMCW principle while improving the measurement rate from our usual 5 kHz up to 20 kHz. We then performed a number of measurements in a real production environment and showed the good performance of our NDT imaging system also under these conditions. At the current stage, an attached metal strip is employed to align the continuously recorded terahertz data into 3D volumetric terahertz images. While this concept requires no access to the rotational mechanics of the sleeve production, using the information of an additional rotation encoder could in the future make this process even less prone to adjustment errors of the reference edge. We demonstrated measurement results obtained with a limited translational axis of a maximum of 30 cm length to image a sleeve surface area of 1.2 m². In our ongoing work, we are integrating a larger linear stage to be able to image much larger lengths of the press sleeves in a single measurement, where also a continuous display of the already measured sleeve area in the form of, e.g., waterfall diagrams can be implemented for continuous process monitoring. Currently, linearly moving milling tools are already being used for the mechanical processing of the sleeves' surface along a spiral path, very similar to the imaging path of our terahertz sensor. In the future, our relatively compact terahertz transceivers could possibly be mounted directly on the same sliding platforms as the milling tools, to easily gain access to the full length of the press sleeves. With a first and rudimentary implementation of an ML algorithm for anomaly detection, we demonstrated that the acquired terahertz images are ideally suited for an automated image processing and defect recognition task. Large homogeneous sleeve areas containing only relatively few and small defects can make a manual inspection of the acquired terahertz images quite challenging. On the other hand, this low defect-to-non-defect ratio constitutes a promising starting point for typical approaches of anomaly detection. We showed in this work that with a two-dimensional multivariate Gaussian fit, we could train an anomaly detection algorithm to yield very high detection accuracy on the measured press sleeves. We also applied the trained model to other areas of the press sleeve with even fewer defects (data not shown) and achieved comparably good defect detection rates. This underlines that with relatively simple ML approaches, the NDT of large-area press sleeves can to a large extent be automated in order to support the work of quality control personnel. Again, an early stopping of the production process when relevant hidden defects are detected can greatly enhance the production efficiency in terms of cost and time consumption. In our current implementation, we do not correct for double detection of defects, multiple defects within one image segment, and other artifacts mainly introduced by the current choice of image segment size and the definition of the classes "defect" and "no defect". Optimization of the model's hyperparameters and error handling of the above cases could further improve the detection accuracy of the proposed ML approach.
With growing amounts of terahertz data on press sleeves, more complex ML algorithms could be investigated, allowing even for unsupervised learning of various types of defects for a more sophisticated assessment of relevant irregularities in press sleeve production.
Kantorovich type topologies on spaces of measures and convergence of barycenters

We study two topologies $\tau_{KR}$ and $\tau_K$ on the space of measures on a completely regular space generated by Kantorovich-Rubinshtein and Kantorovich seminorms analogous to their classical norms in the case of a metric space. The Kantorovich-Rubinshtein topology $\tau_{KR}$ coincides with the weak topology on nonnegative measures and on bounded uniformly tight sets of measures. A sufficient condition is given for the compactness in the Kantorovich topology. We show that for logarithmically concave measures and stable measures weak convergence implies convergence in the Kantorovich topology. We also obtain an efficiently verified condition for convergence of the barycenters of Radon measures from a sequence or net weakly converging on a locally convex space. As an application it is shown that for weakly convergent logarithmically concave measures and stable measures convergence of their barycenters holds without additional conditions. The same is true for measures given by polynomial densities of a fixed degree with respect to logarithmically concave measures.

Introduction

The geometry and topology of spaces of measures on metric spaces have become an important direction in probability theory over the past two decades. A particular role in these studies is played by the Kantorovich metrics W_p (also called Wasserstein metrics in part of the literature) and similar Kantorovich-Rubinshtein (or Fortet-Mourier) metrics, see, for example, [2], [4], [5], [6], [9], [14], [20], [23], [24], [27], [29], [30]. These metrics are traditionally considered on probability or nonnegative measures, where they are related to the weak topology, but are also defined on all measures, although on the whole space of signed measures they induce topologies not comparable with the weak one. However, for more general spaces analogous constructions have not been studied in detail, although they were already considered by Castaing, Raynaud de Fitte and Valadier [19, Section 3.4] in the framework of Young measures. The goal of our paper is to further develop Kantorovich type seminorms in the case of completely regular spaces. We consider two natural locally convex topologies τ_K and τ_KR on the space of Radon measures, corresponding to the classical Kantorovich and Kantorovich-Rubinshtein (or Fortet-Mourier) norms. We show that the topology τ_KR coincides with the weak topology on the cone of nonnegative measures and also on bounded uniformly tight sets of measures. For a separable space, it coincides with the weak topology on weakly compact sets. A simple sufficient compactness condition is obtained for τ_K, which combines the uniform tightness with some uniform integrability of quasi-metrics defining the topology. Kantorovich type seminorms can be used for the analysis of barycenters. Along these lines we show convergence of the barycenters of weakly convergent logarithmically concave and stable measures. Moreover, the same is proved for measures given by polynomial densities of a fixed degree with respect to logarithmically concave measures.

Kantorovich and Kantorovich-Rubinshtein seminorms

We recall (see details in [22]) that the topology of a completely regular space X is generated by a family of pseudo-metrics Π (a pseudo-metric differs from a metric by the property that it can be zero on distinct elements).
A finite nonnegative Borel measure µ on a topological space is called Radon if for every Borel set B and every ε > 0 there exists a compact set K ⊂ B such that µ(B\K) < ε. A signed Borel measure µ is called Radon if such is its total variation |µ| = µ⁺ + µ⁻, where µ⁺ and µ⁻ are the positive and negative parts of the measure µ. On Radon measures, see [11]. A set M of Radon measures on X is called uniformly tight if for every ε > 0 there is a compact set K such that |µ|(X\K) < ε for all µ ∈ M. Let M_r(X) denote the linear space of all Radon measures on X and let M_r^+(X) and P_r(X) be its subsets consisting of nonnegative measures and probability measures, respectively. Let M_r^Π(X) denote the subset of M_r(X) consisting of measures µ for which the function x ↦ p(x, x₀) belongs to L¹(|µ|) for all p ∈ Π for some (then for all) x₀ ∈ X. Note that this class depends on our choice of Π; say, for X = (0, 1) with the usual metric and the family Π reducing to it, all measures belong to M_r^Π(X), but the situation changes if for Π we take all continuous metrics. In the case of a normed space X for Π we shall take only the norm, and in the case of a locally convex space for Π we shall take a collection of seminorms defining the topology (which amounts to taking the collection of all continuous seminorms); in these cases we shall write M_r^1(X) in place of M_r^Π(X) and speak of measures with finite first moment. The image of a Radon measure µ on X under a continuous mapping F to a topological space Y is the Radon measure µ∘F⁻¹ given by the equality

µ∘F⁻¹(B) = µ(F⁻¹(B)) for all Borel sets B ⊂ Y.

The weak topology on M_r(X) is the topology of duality with the space C_b(X) of bounded continuous functions, which is generated by all seminorms of the form

µ ↦ |∫_X f dµ|, f ∈ C_b(X).

The space M_r(X) of all Radon measures on a metric space (X, d) can be equipped with the classical Kantorovich-Rubinshtein norm

‖µ‖_KR = sup{ ∫_X f dµ : f ∈ Lip_1(d), sup_x |f(x)| ≤ 1 },

where Lip_1(d) denotes the class of functions f with |f(x) − f(y)| ≤ d(x, y). The subspace M_r^1(X) of all measures for which for some x₀ ∈ X (then for all x₀) the function d(x, x₀) is integrable can be equipped with the Kantorovich norm

‖µ‖_K = |µ(X)| + sup{ ∫_X f dµ : f ∈ Lip_1(d), f(x₀) = 0 }.

The topology generated by the Kantorovich-Rubinshtein norm coincides with the weak topology on the cone of nonnegative measures and also on compact sets in the weak topology, although on the whole space these two topologies are noncomparable in nontrivial cases. For a general completely regular space X both norms have natural analogs in the form of collections of seminorms: for every pseudo-metric d one can define ‖µ‖_KR,d and ‖µ‖_K,d on the corresponding spaces. It is shown below that the topology τ_KR generated by all such Kantorovich-Rubinshtein type seminorms coincides with the weak topology on the cone of nonnegative measures and also on weakly compact sets (for a separable space). We give a simple sufficient condition for the compactness in the topology τ_KR and the similarly defined topology τ_K. In the case of a locally convex space X we deduce from this convergence of the barycenters of weakly convergent measures (a precise formulation is given below). It is also shown that if X is a Fréchet space, then every set compact in the topology τ_K is concentrated on some separable reflexive Banach space E compactly embedded into X and is compact in the topology τ_K on E generated by the stronger topology from E.
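As a small computational illustration of these norms for discrete measures (my example with hypothetical data, not from the paper): by Kantorovich-Rubinshtein duality, for probability measures µ and ν on the real line the Kantorovich norm of the difference µ − ν equals the optimal transport cost W₁(µ, ν), which SciPy can evaluate directly.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two discrete probability measures on the real line (hypothetical data).
support_mu, weights_mu = np.array([0.0, 1.0]), np.array([0.5, 0.5])
support_nu, weights_nu = np.array([0.25, 2.0]), np.array([0.7, 0.3])

# Kantorovich duality: ||mu - nu||_K equals the optimal transport cost
# W_1(mu, nu) for the metric d(x, y) = |x - y|.
w1 = wasserstein_distance(support_mu, support_nu,
                          u_weights=weights_mu, v_weights=weights_nu)
print("||mu - nu||_K = W_1(mu, nu) =", w1)

# The Kantorovich-Rubinshtein norm additionally bounds the test functions
# by 1 in absolute value; for probability measures it coincides with W_1
# computed with the truncated metric min(d(x, y), 2), so it never exceeds w1.
```

This makes tangible the difference between the two norms discussed above: the Kantorovich norm feels distant mass at its full cost, while the Kantorovich-Rubinshtein norm caps the contribution of far-away mass.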
Let us connect the topology τ_KR with the weak topology. The first assertion in the next theorem can be derived from [19, Lemma 1.3.3], but we include a short justification for completeness.

Theorem 2.1. Suppose that the topology in X is generated by a family of pseudo-metrics Π. Then the weak topology on the set M_r^+(X) is generated by the family of seminorms ‖·‖_KR,p, p ∈ Π. In addition, the weak topology is generated by these seminorms on every set in M_r(X) that is bounded in variation and uniformly tight.

Proof. Let p ∈ Π. We denote by X/p the quotient space of equivalence classes [x]_p (where x and y are equivalent if p(x, y) = 0) with the metric

p̃([x]_p, [y]_p) := p(x, y).

The canonical mapping π_p : x ↦ [x]_p is continuous. Note that the equality ‖µ∘π_p⁻¹‖_KR,p̃ = ‖µ‖_KR,p is true. Indeed, for every function f ∈ Lip_1(p̃) on X/p the composition f∘π_p belongs to the set Lip_1(p), so ‖µ∘π_p⁻¹‖_KR,p̃ ≤ ‖µ‖_KR,p. On the other hand, if f ∈ Lip_1(p), then we set g([x]_p) := f(x). The function g is well-defined, since the function f is Lipschitz in the pseudo-metric p. Moreover, g ∈ Lip_1(p̃) and we have

∫_{X/p} g d(µ∘π_p⁻¹) = ∫_X f dµ,

whence the opposite inequality follows. It suffices to prove the coincidence of the weak topology and τ_KR on P_r(X). If a net of measures µ_α ∈ P_r(X) converges weakly to a measure µ ∈ P_r(X), then their images under the mapping π_p are Radon on the metric space X/p and converge weakly to the image of µ. Therefore, ‖µ_α − µ‖_KR,p = ‖(µ_α − µ)∘π_p⁻¹‖_KR,p̃ → 0. Conversely, let µ_α → µ with respect to all seminorms ‖·‖_KR,p. Then we have convergence of the integrals of all functions of the form f(x) = min(p(x, x₀), c). This implies convergence µ_α(U) → µ(U) on all open sets U of the form U = {x : f(x) < t} with µ-zero boundary, which implies weak convergence (see [14, Theorem 4.3.11]).

Let us prove the second assertion. Let S be a bounded and uniformly tight set in M_r(X). Suppose that a net of measures µ_α from S converges weakly to a measure µ ∈ M_r(X). Let us show that for every pseudo-metric p ∈ Π we have convergence ‖µ_α − µ‖_KR,p → 0. We can assume that ‖µ_α‖ ≤ 1 for all α. Given ε > 0, we can find a compact set K such that |µ|(X\K) < ε and |µ_α|(X\K) < ε for all α. Note that the set of restrictions to K of the functions from Lip_1(p) bounded by 1 in absolute value is compact in the space C(K) with the sup-norm by the Arzelà-Ascoli theorem. Therefore, it contains a finite ε-net f_1, …, f_m. Take an index α₀ such that for all α ≥ α₀ we have |∫ f_i dµ_α − ∫ f_i dµ| ≤ ε, i = 1, …, m. Then for every f ∈ Lip_1(p) with |f| ≤ 1 and every α ≥ α₀ the integrals of f against µ_α and µ differ in absolute value by at most 5ε, since the integrals over K differ by at most 3ε (passing to the nearest f_i) and the absolute values of the integrals over the complement of K are estimated by ε. Hence ‖µ_α − µ‖_KR,p → 0.

Remark 2.2. The topologies τ_KR and τ_K are introduced precisely in the same way on the space of all Baire measures M_σ(X) or on its subspace M_τ(X) of τ-additive measures (see [11]). The previous theorem with the same proof remains valid for τ-additive measures.

Proposition 2.3. If a completely regular space X is separable or possesses a countable collection of continuous functions separating points, then the weak topology coincides with the topology τ_KR on weakly compact sets in M_r(X).

Proof. Since on a compact space every weaker topology coincides with the original one, it suffices to verify that weak convergence of a net of measures from a weakly compact set S implies convergence in the topology τ_KR under one of our two conditions. Let X be separable and p ∈ Π. Then the image X_p of X under the indicated factorization is a separable metric space with the completion Z_p. The image of S is compact in M_r(Z_p). Since Z_p possesses a countable collection of continuous functions separating points, the compact image of S is metrizable in the weak topology. Hence it suffices to use that any weakly convergent sequence in M_r(Z_p) also converges in the Kantorovich-Rubinshtein norm (see, e.g., [26] or [14, Exercise 3.5.22]).
The second case is similar: here the compact set S itself is metrizable (because a countable family of continuous functions separating points gives a countable family of bounded continuous functions separating measures), hence its image is metrizable as well. Therefore, it suffices to verify our assertion for countable sequences of Radon measures on a metric space, which reduces to the case of a separable space. Note that a similar assertion is true for the space M_σ(X) of Baire measures on a separable space X. Recall (see [14, Theorem 4.8.3]) that a set M in M_σ(X) is contained in a weakly compact set precisely when for every sequence of functions f_n ∈ C_b(X) pointwise decreasing to zero (in the formulation of the cited theorem it is mistakenly said "converging" in place of "decreasing", but the proof deals with decreasing sequences; the case of nonnegative measures is covered by Theorem 4.5.10 with a correct formulation) one has

lim_{n→∞} sup_{µ∈M} |∫_X f_n dµ| = 0.

This criterion extends at once to weakly complete sets in the space of Radon measures (or in the case where all Baire measures on X have Radon extensions). There is a simple sufficient condition for convergence in the topology τ_K.

Proposition 2.4. Suppose that a net of measures µ_α ∈ M_r^Π(X) converges to a measure µ ∈ M_r^Π(X) in the topology τ_KR (for nonnegative measures or for measures from a bounded and uniformly tight family this is equivalent to weak convergence). If every pseudo-metric p from Π satisfies the condition of uniform integrability

lim_{R→∞} sup_α ∫_{{p(·,x₀)≥R}} p(x, x₀) |µ_α|(dx) = 0,

then {µ_α} converges in the topology τ_K. In the case of probability measures this is also a necessary condition. Finally, in the case of a countable sequence of measures, convergence in τ_KR can be replaced with weak convergence.

Proof. Let p ∈ Π and ε > 0. There is R > 0 such that the integral of p(x, x₀) over the set {p(·,x₀) ≥ R} is less than ε for every measure |µ_α|. Next we take an index α₀ such that ‖µ_α − µ‖_KR,p ≤ ε/R for all α ≥ α₀. Let f ∈ Lip_1(p) with f(x₀) = 0 and set f_R := max(−R, min(f, R)); then f_R/R belongs to Lip_1(p) and is bounded by 1 in absolute value. Hence the integrals of f_R against µ and µ_α with α ≥ α₀ differ in absolute value by at most ε. Clearly, |f| ≤ p(·, x₀) and |f_R| ≤ p(·, x₀), so the integrals of f and f_R against µ_α differ in absolute value by at most 2ε. Then the same is true for µ. Therefore, the difference of the integrals of f against µ and µ_α with α ≥ α₀ does not exceed 3ε. Hence ‖µ − µ_α‖_K,p ≤ 4ε. For nonnegative measures or measures from a bounded uniformly tight family, weak convergence is equivalent to convergence in the topology τ_KR. It is readily seen that for probability measures the converse is also true. Finally, for a countable sequence of measures µ_n, as above, it suffices to consider the case of a complete metric space, but then we arrive at the case of a uniformly tight family. As one can see from Example 3.2 below, for nets of signed measures, convergence in the topology τ_KR cannot be replaced with weak convergence. Let us give a sufficient condition for the compactness of sets in M_r(X) in the topology τ_KR and for sets in M_r^Π(X) in the topology τ_K.

Proposition 2.5. Let S ⊂ M_r(X) be a bounded and uniformly tight set. Then S has compact closure in the topology τ_KR. Moreover, if S ⊂ M_r^Π(X) and every pseudo-metric p from Π satisfies the condition of uniform integrability

lim_{R→∞} sup_{µ∈S} ∫_{{p(·,x₀)≥R}} p(x, x₀) |µ|(dx) = 0,

then S has compact closure in the topology τ_K.

Proof. It follows from the assumption that S has compact closure in the weak topology, and the previous theorem states that on S it coincides with the topology τ_KR. The second assertion follows by the previous proposition. Indeed, every net in S contains a subnet {µ_α} ⊂ S converging weakly and in the topology τ_KR. The limiting measure µ belongs to M_r^Π(X). Indeed, for every p ∈ Π and R > 0, letting f_R := min(p(·, x₀), R), we obtain that the integrals of f_R against |µ| are bounded by the supremum of the integrals of p(·, x₀) against the measures |µ|, µ ∈ S, which is finite by the assumed uniform integrability; letting R → ∞ we conclude that p(·, x₀) is |µ|-integrable. Hence the subnet converges in the topology τ_K by Proposition 2.4.
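The role of the uniform integrability condition can be seen in a standard escaping-mass example (my illustration, not from the paper): on the real line, the measures µ_n = (1 − 1/n)δ₀ + (1/n)δ_n converge weakly, and hence in τ_KR, to δ₀, yet their Kantorovich distance to δ₀ remains 1 because the first moments do not converge. The snippet below verifies this numerically.

```python
import numpy as np
from scipy.stats import wasserstein_distance

for n in (10, 100, 1000):
    support = np.array([0.0, float(n)])
    weights = np.array([1.0 - 1.0 / n, 1.0 / n])

    # Kantorovich (W_1) distance of mu_n to the Dirac measure at 0: stays 1.
    w1 = wasserstein_distance(support, np.array([0.0]), u_weights=weights)

    # The tail integral of p(x, 0) = |x| never becomes uniformly small:
    # the first moment of mu_n equals 1 for every n.
    first_moment = np.dot(np.abs(support), weights)

    # A bounded Lipschitz test function, e.g. f = min(|x|, 1), witnesses
    # KR-convergence: its integral against mu_n tends to 0.
    kr_test = np.dot(np.minimum(np.abs(support), 1.0), weights)

    print(f"n={n}: W1={w1:.3f}, first moment={first_moment:.3f}, "
          f"int min(|x|,1) d mu_n = {kr_test:.4f}")
```

The W₁ distance and the first moment stay at 1 while the bounded-Lipschitz integral decays like 1/n, which is exactly the gap between τ_KR and τ_K closed by the uniform integrability hypothesis.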
Indeed, to complete the proof of Proposition 2.5: for every $p \in \Pi$ and $R > 0$, letting $p_R := \min(p(\,\cdot\,, x_0), R)$, we obtain from the convergence of the integrals of the bounded Lipschitz functions $p_R$ and the uniform integrability condition that $p(\,\cdot\,, x_0)$ is $|\mu|$-integrable, that is, $\mu \in M_r^\Pi(X)$.

We now prove that a uniformly tight family of Radon measures on a Banach space with a uniformly integrable norm remains uniformly tight with some stronger norm, and this norm is also uniformly integrable (so that this family is contained in some compact set in the Kantorovich norm). More precisely, this family turns out to be uniformly tight on a compactly embedded separable reflexive Banach space with a uniformly integrable norm. The result for a single Borel probability measure on a separable Banach space was proved in [10], extending Buldygin's theorem [17]. The proof employs the known construction of Grothendieck (see [15, §2.5]). Let $B$ be a bounded absolutely convex set in a locally convex space $X$. Denote by $E_B$ the linear span of $B$ equipped with the norm
$$p_B(x) := \inf\{t > 0 \colon x \in tB\},$$
which is the Minkowski functional of the set $B$. If $X$ is sequentially complete, then $E_B$ is a Banach space.

Theorem 2.6. Let $X$ be a sequentially complete locally convex space whose topology is generated by an increasing sequence of continuous seminorms $p_n$, and let $M$ be a family of Radon measures on $X$ that is bounded in variation and uniformly tight and for which every $p_n$ is uniformly integrable. Then there is a linear subspace $E \subset X$ with the following properties:
(i) the space $E$ with some norm $\|\cdot\|_E$ is a separable reflexive Banach space whose closed unit ball is compact in $X$;
(ii) the family $M$ is concentrated and uniformly tight on $E$ and $\|\cdot\|_E$ is also uniformly integrable.

Proof. We can assume that all measures $\mu \in M$ are nonnegative. We need the following technical assertion. Let $p_n \le p_{n+1}$ for all $n$. Then there is a sequence of continuous seminorms $q_n$, which generates the original topology of $X$, and there is a sequence of positive numbers $\alpha_n$ decreasing to zero such that $q_n \le q_{n+1}$ and
$$\lim_{n \to \infty} \sup_{\mu \in M} \sum_{k=n}^\infty \mu(x \colon q_k(x) > k\alpha_k) = 0.$$
Indeed, using the uniform integrability of the seminorms $p_n$, one finds increasing numbers $N_n$ with $N_{n+1} > 2^n N_n$ for which the corresponding tail probabilities are suitably small, and with the aid of these numbers one defines the required seminorms $q_k$ and numbers $\alpha_k$.

For every $n \in \mathbb{N}$ there is a compact set $K_n$ in the set $U_n := \{x \colon q_n(x) \le n\}$ such that for all $\mu \in M$ we have $\mu(\alpha_n U_n \setminus \alpha_n K_n) < 2^{-n}$. Then we set $K := \bigcup_{n=1}^\infty c_n K_n$ with suitable coefficients $c_n \in (0, \alpha_n]$. Note that the set $K$ is totally bounded. Indeed, given $\varepsilon > 0$, take $n_0 \in \mathbb{N}$ and $\delta > 0$ such that $\alpha_{n_0} < \delta$ and $\{x \colon q_{n_0}(x) \le \delta\}$ lies in the open ball of radius $\varepsilon$ in the metric of $X$ centered at zero. Then, since the sequence $\{\alpha_n\}$ decreases, we obtain that the compact sets $c_n K_n$ are contained in this ball for all $n \ge n_0$. The remaining compact sets are also covered by finitely many balls of radius $\varepsilon$. The closed absolutely convex hull $V$ of $K$ is also precompact (see [15]). Moreover, the construction yields an estimate of $\int_{\{p_V > n\}} p_V \, d\mu$ in terms of the tail sums $\sum_{k \ge n} \mu(x \colon q_k(x) > k\alpha_k)$. The right-hand side of the last inequality tends to zero as $n \to \infty$ uniformly in $\mu \in M$, which implies the uniform integrability of the function $p_V$ with respect to the family of measures $M$.

We now prove the existence of a separable reflexive Banach space $E$ satisfying conditions (i) and (ii). There is a convex balanced compact set $W$ such that $V \subset W$, the Banach space $(E_W, p_W)$ is separable and reflexive and $K$ is also compact in the norm $p_W$ (see [15, Corollary 2.5.12]). Then $p_W \le p_V$, which shows that $p_W$ is also uniformly integrable. Moreover, all Borel sets in $E_W$ are Borel in $X$, since the image of a Borel set under a continuous injective mapping from a Polish space to a metric space is Borel (see [11, Theorem 6.8.6]). Therefore, the measures from $M$ can be restricted to the Borel $\sigma$-algebra of the Banach space $E_W$, and they are concentrated on this space and uniformly tight, since $V$ is compact in $E_W$ and
$$\mu(x \colon p_W(x) > n) \le n^{-1} \int_X p_W \, d\mu,$$
which tends to zero as $n \to \infty$ for each measure $\mu$ in $M$.
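A simple concrete instance of this construction (a standard example, stated here only for illustration) is provided by ellipsoids in $X = \ell^2$: for a sequence $a_n > 0$ with $a_n \to 0$, the set
$$B = \Big\{ x \in \ell^2 \colon \sum_n a_n^{-2} x_n^2 \le 1 \Big\}$$
is compact and absolutely convex, and $E_B$ consists of all sequences with $p_B(x) = \big(\sum_n a_n^{-2} x_n^2\big)^{1/2} < \infty$. Thus $E_B$ is a separable Hilbert (in particular, separable reflexive Banach) space whose closed unit ball $B$ is compact in $\ell^2$, exactly as in condition (i) of the theorem above.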
Remark 2.7. Let $(X, d)$ be a metric space, $x_0 \in X$ a fixed point, $q \ge 1$, and let $M_r^q(X)$ be the space of Radon measures with finite moment of order $q$, that is, measures $\mu$ such that the function $d(x, x_0)^q$ is $|\mu|$-integrable for some $x_0$. Recall that for any $q \ge 1$ the subspace of probability measures $P_r^q(X)$ can be equipped with the $q$-Kantorovich metric $d_{K,d,q}$ defined by
$$d_{K,d,q}(\mu, \nu) := \inf_{\sigma \in \Pi(\mu, \nu)} \Big( \int_{X \times X} d(x, y)^q \, \sigma(dx\, dy) \Big)^{1/q},$$
where $\Pi(\mu, \nu)$ is the set of probability measures on $X \times X$ with projections $\mu$ and $\nu$ on the factors. The metric $d_{K,d,q}$ with $q > 1$ is not generated by a norm (unlike the case $q = 1$, where $d_{K,d,1}(\mu, \nu) = \|\mu - \nu\|_K$), but the corresponding Kantorovich norm generates the same topology on $P_r^q(X)$ as $d_{K,d,q}$ (see [14, Corollary 3.3.7], where there is a misprint in the formula: the norm $\|\cdot\|_K$ should be replaced with $\|\cdot\|_{KR}$). So in the general case of a family of pseudo-metrics $\Pi$ we can introduce the Kantorovich topology $\tau_{K,q}$ on $M_r^q(X)$ generated by the seminorms $\|\cdot\|_{K,p,q}$ with $p \in \Pi$. The same reasoning as above leads to the following result for $\tau_{K,q}$.

Proposition 2.8. Suppose that a net of measures $\mu_\alpha \in M_r^q(X)$, where $q \ge 1$, converges to a measure $\mu \in M_r(X)$ in the topology $\tau_{KR}$ (for nonnegative measures or measures from a bounded and uniformly tight family this is equivalent to weak convergence). If for every pseudo-metric $p$ from $\Pi$ we have
$$\lim_{R \to \infty} \sup_\alpha \int_{\{p(x, x_0) \ge R\}} p(x, x_0)^q \, |\mu_\alpha|(dx) = 0,$$
then $\mu \in M_r^q(X)$ and $\{\mu_\alpha\}$ converges to $\mu$ in the topology $\tau_{K,q}$.

Convergence of barycenters

We say that a Borel measure $\mu$ on a locally convex space $X$ has a mean (or barycenter) $m_\mu \in X$ if $X^* \subset L^1(|\mu|)$ and for every $f \in X^*$ we have
$$f(m_\mu) = \int_X f(x) \, \mu(dx).$$
In the case of a Banach space $X$ with a Radon measure $\mu$, the mean exists if the norm is $\mu$-integrable. In this case $m_\mu$ is the Bochner integral
$$m_\mu = \int_X x \, \mu(dx).$$
A similar statement is true in any quasi-complete locally convex space (see [15, Corollary 5.6.8]): for the existence of the mean, it is sufficient to have the integrability of all seminorms from a family generating the topology of this space (which is equivalent to the integrability of all continuous seminorms). It is worth noting (although we do not use it below) that consideration of convergence of barycenters in locally convex spaces reduces to Banach spaces by means of the factorizations used above and the following simple observation: if a net of elements $v_\alpha$ in a locally convex space $X$ and an element $v \in X$ are such that $Tv_\alpha \to Tv$ for every continuous linear operator $T$ on $X$ with values in a normed space, then $v_\alpha \to v$ in $X$. Indeed, for each continuous seminorm $p$ on $X$ the linear subspace $Y = p^{-1}(0)$ is closed, so the quotient space $X/Y$ is normed with the norm $\|[x]\| := p(x)$, and the natural projection $x \mapsto [x]$ is linear and continuous. It follows from this observation that if a net of measures $\mu_\alpha \in M_r(X)$ and a measure $\mu \in M_r(X)$ with barycenters in a locally convex space $X$ are such that for each normed space $Y$ and each continuous linear operator $T \colon X \to Y$ the barycenters of $\mu_\alpha \circ T^{-1}$ converge to the barycenter of $\mu \circ T^{-1}$, then $m_{\mu_\alpha} \to m_\mu$. Indeed, the barycenter of $\mu_\alpha \circ T^{-1}$ is $Tm_{\mu_\alpha}$. Note also that if a Borel measure $\mu$ has a barycenter and a continuous seminorm $q$ is $\mu$-integrable, then from the Hahn-Banach theorem and the definition of the barycenter we obtain
$$q(m_\mu) \le \int_X q(x) \, |\mu|(dx).$$
As a consequence of the results of the previous section (see Proposition 2.4), we obtain the following sufficient condition for convergence of barycenters.

Corollary 3.1. Suppose that a sequence of measures $\mu_n \in M_r^\Pi(X)$ possessing barycenters converges weakly to a measure $\mu \in M_r^\Pi(X)$ possessing a barycenter and that every pseudo-metric $p \in \Pi$ satisfies the condition of uniform integrability from Proposition 2.4. Then $m_{\mu_n} \to m_\mu$. In the case of probability measures, the same is true for nets.
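Two elementary computations may clarify these notions (standard facts, recorded only as illustrations). For the $q$-Kantorovich metric, the only coupling of two Dirac measures is $\delta_{(x,y)}$, whence
$$d_{K,d,q}(\delta_x, \delta_y) = d(x, y) \quad \text{for every } q \ge 1;$$
and for a discrete probability measure $\mu = \sum_{i=1}^k t_i \delta_{x_i}$ on a locally convex space the definition of the mean immediately gives $m_\mu = \sum_{i=1}^k t_i x_i$, since $f(m_\mu) = \sum_{i=1}^k t_i f(x_i) = \int_X f \, d\mu$ for every $f \in X^*$.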
The following example shows that the second assertion of Corollary 3.1 can fail for a net of signed measures.

Example 3.2. Let $X = \ell^1$ with the standard basis $\{e_j\}$. A basic neighborhood of zero $U$ in the weak topology is determined by finitely many functions $f_1, \ldots, f_n \in C_b(X)$ and a number $\varepsilon > 0$. Choose a nonzero vector $(c_1, \ldots, c_{n+1})$ with $\sum_{j=1}^{n+1} c_j f_i(e_j) = 0$ for all $i \le n$, which is possible, since the corresponding homogeneous linear system has more unknowns than equations. By normalization, we can achieve the equality $\sum_j |c_j| = 1$. Now for the basic neighborhood of zero in the weak topology we define a measure by the formula
$$\mu_U := \sum_{j=1}^{n+1} c_j \delta_{e_j}.$$
Then by construction $\mu_U \in U$, and this measure is concentrated on the unit sphere. The set of basic neighborhoods of zero is directed with respect to the inverse inclusion: a neighborhood $V$ is declared to be larger than a neighborhood $U$ if $V \subset U$. By definition, the constructed net of measures $\mu_U$ converges weakly to zero. Finally, the mean of the measure $\mu_U$ equals $\sum_j c_j e_j$, therefore, we have the equality
$$\|m_{\mu_U}\| = \sum_{j=1}^{n+1} |c_j| = 1.$$
In this example all measures in the net are absolutely continuous with respect to the measure $\sum_n 2^{-n} \delta_{e_n}$. Note that similarly one can construct a net converging in the stronger topology of duality with the space of all bounded Borel functions.

In the general case, weak convergence of measures $\mu_n$ to $\mu$ and weak convergence of measures $\nu_n = f_n \cdot \mu_n$ to a measure $\nu$ do not imply that $\nu$ is absolutely continuous with respect to $\mu$. For example, the measures $(1 - n^{-1})\delta_0 + n^{-1}\delta_1$ converge weakly (and even in the total variation norm) to $\delta_0$, but the measure $\delta_1$ is mutually singular with $\delta_0$. However, under the following additional condition this implication is true.

Lemma 3.3. Suppose that Radon probability measures $\mu_\alpha$ on a completely regular space $X$ converge weakly to a Radon measure $\mu$ and the measures $\nu_\alpha = f_\alpha \cdot \mu_\alpha$ converge weakly to a Radon measure $\nu$. Assume also that
$$\sup_\alpha \int_X |f_\alpha|^2 \, d\mu_\alpha < \infty.$$
Then $\nu$ is absolutely continuous with respect to $\mu$.

Proof. Let $K$ be a compact set such that $\mu(K) = 0$. Suppose that $\nu(K) = \delta > 0$ (the case $\nu(K) < 0$ is similar). Pick $R > 1$ such that
$$\int_{\{|f_\alpha| > R\}} |f_\alpha| \, d\mu_\alpha < \delta/2 \quad \text{for all } \alpha,$$
which is possible by the Chebyshev inequality and the bound on the integrals of $|f_\alpha|^2$. We can find an open set $U$ such that $K \subset U$ and $\mu(\overline{U}) = \mu(U) < \delta(2R)^{-1}$, where $\overline{U}$ is the closure of $U$. This is possible, since we can take some open set $U_0$ with $K \subset U_0$ and $\mu(U_0) < \delta(2R)^{-1}$, then find a continuous function $f$ with values in $[0, 1]$ for which $f|_K = 1$ and $f|_{X \setminus U_0} = 0$, and, finally, for $U$ we can take the set $\{f > c\}$, where $c \in (0, 1)$ is picked such that $\mu(f^{-1}(c)) = 0$. Then $\mu_\alpha(U) \to \mu(U)$ by Alexandrov's criterion (see [14, Corollary 4.3.5]), hence $\mu_\alpha(U) < \delta(2R)^{-1}$ for all $\alpha$ large enough. For such $\alpha$ we finally obtain
$$|\nu_\alpha(U)| \le \int_U |f_\alpha| \, d\mu_\alpha \le R\,\mu_\alpha(U) + \int_{\{|f_\alpha| > R\}} |f_\alpha| \, d\mu_\alpha < \delta,$$
which gives the estimate $|\nu(U)| \le \delta$, hence $\nu(K) \le \delta$, which contradicts our assumption.

Logarithmically concave and stable measures

Now we investigate convergence of logarithmically concave measures and their means. Recall that a Radon probability measure $\mu$ on a locally convex space $X$ is called logarithmically concave if $\mu$ satisfies the inequality
$$\mu(tA + (1 - t)B) \ge \mu(A)^t \mu(B)^{1-t}, \quad t \in [0, 1],$$
for all compact sets $A$ and $B$. This definition is also equivalent to the property that for every continuous linear operator $T$ from $X$ to $\mathbb{R}^n$ the measure $\mu \circ T^{-1}$ has a density of the form $\exp(-V)$ with respect to Lebesgue measure on some affine subspace with a convex function $V$ (see [16], [12]). The class of logarithmically concave measures contains all Gaussian measures, i.e., measures for which all continuous linear functionals are Gaussian random variables. We need the following estimate due to C. Borell (see [16] or [12, Theorem 4.3.7]). Let $\mu$ be a logarithmically concave measure on a locally convex space $X$ and let $A$ be an absolutely convex Borel set with $\theta := \mu(A) > 0$. Then
$$1 - \mu(tA) \le \theta \Big( \frac{1 - \theta}{\theta} \Big)^{(t+1)/2}, \quad t \ge 1.$$
This estimate implies that, for any Borel seminorm $q$ such that $\mu(q > 1) = 1 - \theta < 1/2$, one has
$$\mu(q > t) \le M(\theta)\, e^{-\alpha(\theta) t}, \quad t \ge 1, \tag{4.1}$$
with some constants $\alpha(\theta) > 0$ and $M(\theta)$ depending only on $\theta > 1/2$. Therefore,
$$\Big( \int_X q^p \, d\mu \Big)^{1/p} \le C(p, \theta), \quad p \ge 1,$$
with some constants $C(p, \theta)$ depending only on $p$ and $\theta > 1/2$. We apply inequality (4.1) to prove the following sufficient condition for convergence of means of logarithmically concave measures. Note that all Radon Gaussian measures have barycenters, but for logarithmically concave measures this is known only under the assumption of sequential completeness of the space, as for general measures with finite first moment.
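For orientation, it may be useful to check (4.1) on the simplest examples (standard computations, given only as illustrations): for the standard Gaussian measure $\gamma$ on $\mathbb{R}$ and $q(x) = |x|/c$ with $c > 0$ chosen so that $\gamma(q \le 1) = \theta > 1/2$, the classical Gaussian tail bound gives
$$\gamma(q > t) = \gamma(|x| > ct) \le e^{-c^2 t^2/2}, \quad t \ge 0,$$
which is even stronger than the exponential decay in (4.1); while for the two-sided exponential density $\tfrac{1}{2}e^{-|x|}$ (also logarithmically concave) one has $\mu(|x| > t) = e^{-t}$, so the exponential rate in (4.1) cannot be improved within this class.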
Theorem 4.1. Suppose that a net of logarithmically concave Radon probability measures $\mu_\alpha$ on a locally convex space converges weakly to a Radon measure $\mu$. Then, for every continuous seminorm $q$, there is $\kappa > 0$ such that the integrals of $\exp(\kappa q)$ with respect to $\mu_\alpha$ converge to the integral of $\exp(\kappa q)$ with respect to $\mu$. Therefore, for every $r > 0$ we have
$$\lim_\alpha \int_X q^r \, d\mu_\alpha = \int_X q^r \, d\mu.$$
Finally, if $\mu_\alpha$, $\mu$ have barycenters, then $m_{\mu_\alpha} \to m_\mu$.

Proof. Take $c > 0$ such that $\mu(q < c) > \theta > 1/2$. Since $\{\mu_\alpha\}$ converges weakly to $\mu$ and the set $\{q < c\}$ is open, by Alexandrov's theorem we have $\mu_\alpha(q < c) > \theta$ for all $\alpha$ larger than some $\alpha_0$. Then by (4.1) we obtain the inequality
$$\mu_\alpha(q > ct) \le M(\theta)\, e^{-\alpha(\theta) t}, \quad t \ge 1, \ \alpha \ge \alpha_0.$$
The integrals of $\exp(\kappa q)$ with $\kappa < \alpha(\theta)/c$ against the measures $\mu_\alpha$, $\alpha \ge \alpha_0$, are therefore uniformly bounded, which gives the uniform integrability of these functions for smaller $\kappa$. Consequently, the integrals of $\exp(\kappa q)$ converge (see [14, Theorem 4.3.15]), which yields convergence of the integrals of $q^r$. Convergence of means follows from Corollary 3.1.

Corollary 4.2. In the previous theorem one has also convergence in the topology $\tau_{K,q}$ with any $q \ge 1$ introduced in Remark 2.7.

For a Radon probability measure $\mu$ on a locally convex space, we denote by $\mathcal{P}_d(\mu)$ the set of all $\mu$-measurable polynomials of degree $d \ge 0$, i.e., $\mu$-measurable functions possessing versions that are polynomials of degree $d$ on $X$ in the usual algebraic sense (this is equivalent to the property that the restrictions to all affine lines are polynomials of degree $d$). For a Gaussian measure, every measurable polynomial of degree $d$ is the limit almost everywhere and in $L^2$ of a sequence of polynomials of the form $f(l_1, \ldots, l_n)$, where $f$ is a polynomial of degree $d$ on $\mathbb{R}^n$ and the $l_j$ are continuous linear functionals. It is not known whether this is true for all logarithmically concave measures; about measurable polynomials, see [13].

For measurable polynomials on a space equipped with a logarithmically concave measure two very important estimates are known with constants independent of the measure. The first one (obtained in [18], [25] in the finite-dimensional case and extended in [3] to the infinite-dimensional case) gives an estimate for small values:
$$\mu\big(x \colon |f(x)| \le \varepsilon \|f\|_{L^1(\mu)}\big) \le C(d)\, \varepsilon^{1/d}, \quad f \in \mathcal{P}_d(\mu), \ \varepsilon > 0, \tag{4.2}$$
with $C(d)$ depending only on $d$. The second one (see [7], [8], [3]) gives the equivalence of all $L^p$-norms on $\mathcal{P}_d(\mu)$:
$$\|f\|_{L^p(\mu)} \le C(d, p)\, \|f\|_{L^1(\mu)}, \quad f \in \mathcal{P}_d(\mu), \ p \ge 1. \tag{4.3}$$

Theorem 4.3. (i) Suppose that $\mu_\alpha$ are logarithmically concave Radon probability measures on a locally convex space, $f_\alpha \in \mathcal{P}_d(\mu_\alpha)$, and the probability measures $\nu_\alpha = f_\alpha \cdot \mu_\alpha$ converge weakly to a Radon measure $\nu$. In addition, if $\{\nu_\alpha\}$ is uniformly tight, which holds automatically in the case of a weakly convergent countable sequence on a Fréchet space, then $\{\mu_\alpha\}$ is also uniformly tight and has a limit point $\mu$ that is a logarithmically concave measure; moreover, the measures $\mu$ and $\nu$ are equivalent.

(ii) Suppose that $\{\mu_\alpha\}$ is a uniformly tight family of logarithmically concave measures on a locally convex space and for each $\alpha$ there is a measure $\nu_\alpha = f_\alpha \cdot \mu_\alpha$ with $f_\alpha \in \mathcal{P}_d(\mu_\alpha)$. If the family $\{\nu_\alpha\}$ is bounded in variation, then it is uniformly tight.

Proof. (i) It follows from Lemma 3.3 that the measure $\nu$ is absolutely continuous with respect to $\mu$. The absolute continuity of $\mu$ with respect to $\nu$ follows from the same lemma applied to the probability measures $f_\alpha \cdot \mu_\alpha$ and the measures $\mu_\alpha$ given by the densities $f_\alpha^{-1}$ with respect to $f_\alpha \cdot \mu_\alpha$ (note that $f_\alpha(x) > 0$ for $\mu_\alpha$-a.e. $x$). These densities satisfy the hypotheses of the lemma, since the required uniform bound follows from the small value estimate (4.2) applied to the polynomials $f_\alpha$.

(ii) The norms $\|f_\alpha\|_{L^2(\mu_\alpha)}$ are uniformly bounded by some number $M$ as explained above. Hence for every Borel set $B$ we have $|\nu_\alpha|(B) \le M^{1/2} \mu_\alpha(B)^{1/2}$, which yields the claim.

It remains unclear in the considered situation with a countable sequence whether the measures $\mu_n$ must converge (even if they are uniformly tight, it is not clear whether the limit point is unique). This is unclear even in the case of Gaussian measures $\mu_n$ (if they are different).
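A one-dimensional sanity check of (4.3) (a standard Gaussian computation, included only as an illustration): for the standard Gaussian measure $\gamma$ on $\mathbb{R}$ and the degree-2 polynomial $f(x) = x^2$ one has $\|f\|_{L^1(\gamma)} = \mathbb{E}\,x^2 = 1$ and $\|f\|_{L^2(\gamma)} = (\mathbb{E}\,x^4)^{1/2} = \sqrt{3}$, so $\|f\|_{L^2} \le C(2,2)\,\|f\|_{L^1}$ holds with a modest constant; the point of (4.2) and (4.3) is that such constants can be chosen independently of the measure and, in particular, of the dimension.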
Stable measures form another important class of probability distributions (see [11], [28], [31]). Recall that a Radon probability measure $\mu$ on a locally convex space $X$ is called stable of order $p \in (0, 2]$ if for every $\alpha > 0$ and $\beta > 0$ there is a vector $v$ such that the image of $\mu$ under the mapping $x \mapsto (\alpha^p + \beta^p)^{1/p} x + v$ equals the convolution of the images of $\mu$ under the homotheties with the coefficients $\alpha$ and $\beta$. In other words, if $\xi$ and $\eta$ are independent random vectors with distribution $\mu$, then $\alpha\xi + \beta\eta$ has the same law as $(\alpha^p + \beta^p)^{1/p} \xi + v$. The case $p = 2$ corresponds to Gaussian measures, and this is the only intersection with the class of logarithmically concave measures. Stable measures of order $p > 1$ possess barycenters. Indeed, as shown in [1], in this case all measurable seminorms are integrable; hence the barycenter exists in the case of a complete space $X$, so it exists in the completion, but it is readily seen from the definition that it must belong to the original space. Note also that if a net of measures $\mu_\alpha$ that are stable of orders greater than some $p_1 > 1$ converges weakly to a Radon measure $\mu$, then $\mu$ is also stable of some order $p \ge p_1$. Indeed, it is known (see [21]) that if all one-dimensional projections of a measure $\nu$ are stable, then they are stable of the same order $\gamma$, and if $\gamma > 1$, then $\nu$ is stable of order $\gamma$. In order to apply this result from [21] we can take a linear topological embedding of $X$ into a suitable power $\mathbb{R}^T$ of the real line and obtain that $\nu$ is stable on $\mathbb{R}^T$, but then it remains stable on $X$, because $X$ is $\nu$-measurable in $\mathbb{R}^T$ by the Radon property (there is a sequence of compact sets $K_n$ in $X$ with $\nu(K_n) \to 1$, and these sets are also compact in $\mathbb{R}^T$). Hence it suffices to consider the one-dimensional case, where there is a countable sequence $\mu_{\alpha_n}$ of elements of the original net converging to $\mu$. Passing to a subsequence, we can assume that the orders $p_n$ of $\mu_{\alpha_n}$ converge to some $p \in [p_1, 2]$. The Fourier transform of $\mu_{\alpha_n}$ has the form (see [31])
$$\exp\big[i t a_n - c_n |t|^{p_n}\big(1 - i b_n\, \mathrm{sign}(t) \tan(\pi p_n/2)\big)\big].$$
It follows that $a_n \to a$, $b_n \to b$, $c_n \to c$, so that the Fourier transform of $\mu$ has the same form with $(a, c, p, b)$, hence $\mu$ is stable of order $p$.

Theorem 4.4. Suppose that a net of measures $\mu_\alpha$ that are stable of orders greater than some $p_1 > 1$ converges weakly to a Radon measure $\mu$. Then they converge in the Kantorovich topology. Hence their barycenters converge to the barycenter of $\mu$.

Proof. As explained above, the measure $\mu$ is also stable of some order in $[p_1, 2]$. It suffices to show that for every continuous seminorm $q$ there is a number $r > 1$ such that the integrals of $q^r$ with respect to $\mu_\alpha$ are uniformly bounded for $\alpha$ larger than some $\alpha_0$; this estimate completes the proof.

Corollary 4.5. In the previous theorem one has also convergence in the topology $\tau_{K,q}$ with any $q < p_1$ introduced in Remark 2.7.
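A classical one-dimensional illustration of these notions (standard facts about stable laws, recorded only for orientation): the symmetric $p$-stable law with characteristic function $\varphi(t) = e^{-|t|^p}$ satisfies the defining identity, since for independent copies $\xi$, $\eta$ one has
$$\mathbb{E}\, e^{it(\alpha\xi + \beta\eta)} = e^{-(\alpha^p + \beta^p)|t|^p} = \mathbb{E}\, e^{it(\alpha^p + \beta^p)^{1/p}\xi};$$
moreover, for $p < 2$ one has $\mathbb{E}|\xi|^r < \infty$ exactly when $r < p$, which explains both why barycenters exist for orders $p > 1$ and why in Corollary 4.5 only the topologies $\tau_{K,q}$ with $q < p_1$ appear.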
2022-08-05T06:41:37.055Z
2022-08-03T00:00:00.000
{ "year": 2022, "sha1": "a81dd0d5bf7e962b8ac57036165bba168698da34", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a81dd0d5bf7e962b8ac57036165bba168698da34", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
267375310
pes2o/s2orc
v3-fos-license
Early Biofilm Formation on the Drain Tip after Total Knee Arthroplasty Is Not Associated with Prosthetic Joint Infection: A Pilot Prospective Case Series Study of a Single Center
Background: Periprosthetic joint infection (PJI) is a devastating complication of arthroplasties that can occur during surgery. The purpose of this study was to analyze biofilm formation through microbiological culture tests and scanning electron microscopy (SEM) on the tip of the surgical drain removed 24 h after arthroplasty surgery. Methods: A total of 50 consecutive patients were included in the present prospective observational study. Drains were removed under total aseptic conditions twenty-four hours after surgery. The drain tip was cut into three equal parts of approximately 2-3 cm in length and sent for culture, culture after sonication, and SEM analysis. The degree of biofilm formation was determined using a SEM semi-quantitative scale. Results: From the microbiological analysis, the cultures of four samples were positive. The semi-quantitative SEM analysis showed that no patient had grade 4 biofilm formation. A total of 8 patients (16%) had grade 3, and 14 patients (28%) had grade 2. Grade 1, scattered cocci with immature biofilm, was observed in 16 patients (32%). Finally, 12 patients (24%) had grade 0, with a total absence of bacteria. During the follow-up (up to 36 months), no patient showed short- or long-term infectious complications. Conclusions: Most of the patients who underwent primary total knee arthroplasty (TKA) showed biofilm formation on the tip of the surgical drain 24 h after surgery, even though none showed mature biofilm formation (grade 4). Furthermore, 8% of patients were characterized by positivity of the culture analysis. However, none of the patients included in the study showed signs of PJI up to 3 years of follow-up.

Introduction

Closed-suction drains are widely used after elective or emergency orthopedic surgery to prevent hematoma formation and local fluid accumulation, reducing limb swelling and promoting wound healing [1,2]. Recently, the systematic use of drains after orthopedic procedures has been questioned [3,4]. Hence, a recent systematic review reported that the routine use of drains does not add any benefits after total knee arthroplasty (TKA) [5]. Moreover, a recent randomized controlled trial reported that the application of intraarticular suction drainage following TKA did not show any effects on postoperative hemoglobin levels, blood loss, prosthetic knee range of motion, Knee Injury and Osteoarthritis Outcome Score (KOOS), or the length of hospital stay. Conversely, the authors observed that drains were associated with a greater complication rate: infection and knee stiffness. Therefore, they concluded that the usage of a suction drain may not provide any benefit and may have adverse effects in the TKA procedure [6].

Indeed, there is increasing evidence that suction drains may increase the incidence of local wound complications and infections, especially if they remain in situ for more than twenty-four hours [7,8]. Moreover, microbial agents are known to form biofilm on the surface of several devices (e.g., drains or prostheses), thereby increasing their resistance to antimicrobial agents. Furthermore, materials such as hydroxyapatite and other bioactive substances may be more prone to bacterial adhesion than bioinert metals such as titanium or stainless steel [9].
Periprosthetic joint infection (PJI) is a devastating complication of arthroplasties. The diagnosis of PJI still remains difficult, particularly in its acute postoperative stage, owing to postoperative inflammation [10]. Early infections are mostly due to contamination of the surgical site during surgery or in the first postoperative days [11,12]. The International Consensus Meeting on Musculoskeletal Infection (2018) stated that bacterial biofilm formation is a continuum and that biofilm can form after just one day. Next-generation sequencing has demonstrated a joint microbiome both in native joints (the microbiome of osteoarthritis) and in prosthetic joint environments without any clinical evidence of PJI [13]. The systematic culture of the drain tip has been addressed in many studies, but the prognostic value of the culture result after elective orthopedic surgery is still unclear [14,15]. Dower et al. found that biofilm in closed-suction drains is already formed by the second postoperative hour in patients undergoing breast augmentation surgery [16]. Akiyama et al. inoculated Staphylococcus aureus onto animal wounds and found the development of a cluster of cells (characteristic of a biofilm) 6-24 h after inoculation [17]. To the best of our knowledge, there are no studies evaluating biofilm formation after total knee replacement. The primary aim of the present prospective observational study was to determine biofilm formation on the tip of the drain of TKA removed 24 h after surgery. The secondary aim was to analyze whether any biofilm formation on the drain tip, or its possible positivity on culture examination, was associated with the development of PJI at a minimum follow-up of 3 years.

Materials and Methods

The study was designed as a prospective study of a single center. Fifty consecutive patients who underwent primary TKA at Clinical Orthopedics, Department of Clinical and Molecular Sciences, Università Politecnica delle Marche, from November 2019 to June 2020 were prospectively included in the present study if they fulfilled the following inclusion criteria: diagnosis of primary symptomatic knee osteoarthritis, age 60-80 years, and Kellgren-Lawrence (KL) grade IV. Patients with a previous history of knee infection, rheumatic disease, and/or systemic or local infection (i.e., cystitis or pneumonia in the previous 6 months) were excluded. Patients with a surgical time greater than 90 min (from incision to closure) were excluded. All patients received the study protocol and provided their consent. The study was approved by the Internal Review Board (2022-377). All procedures followed the ethical standards of the responsible committee on human experimentation (institutional and national) and were in accordance with the Helsinki Declaration of 1975, as revised in 2008. Ethical compliance was obtained according to the Italian legislative decree of 14 May 2019, n. 52, containing amendments to the legislative decree of 6 November 2007, n. 200, implementing Directive 2005/28/EC. Patients were evaluated pre- and postoperatively (3 years of follow-up). Oxford knee score (OKS), pain (VAS), and any sign of PJI were noted.
Surgical Technique

The senior surgeon performed all surgeries. TKAs were performed using the medial parapatellar approach, and a cemented cruciate-retaining or posterior-stabilized total knee prosthesis (Persona knee prosthesis, Zimmer Inc., Warsaw, IN, USA) was used. The tourniquet was routinely used from the beginning of the surgery until the end of cementation. Prophylactic antibiotics (i.e., 2 g of cefazoline) were routinely administered 30 min before incision. A 16-inch suction drain was routinely placed intra-articularly. All patients were given a standardized postoperative rehabilitation program that consisted of a four-point gait pattern within the first 2 weeks after surgery. Crutches were used for the initial 4 weeks. For the subsequent 8 weeks, low-impact physical activities such as walking, swimming, and static cycling were recommended.

Drain Removal, Biofilm Assessment, and Culture Analysis

The polyurethane drains were removed twenty-four hours after surgery by the same team (MG, MS, and LF). Before removal, a meticulous asepsis of the skin around the drain was performed with a 10% aqueous povidone-iodine solution. If the tip of the drain inadvertently came into contact with the skin, the patient was excluded from the study. The drain tip was cut into three equal parts of approximately 2-3 cm in length, maintaining aseptic conditions. Three different specimens of the same drain were obtained (Figure 1). One specimen was sectioned longitudinally into two halves, appropriately fixed, and then prepared for SEM analysis. The other two specimens were collected in two different sterile containers and analyzed, respectively, using routine microbiological drain tip culture and sensitive broth culture after drainage sonication.
Microbiological Culture Analysis

The specimens intended for microbiological analysis were promptly stored in sterile containers and delivered to the microbiology laboratory of "Azienda Ospedaliera-Universitaria delle Marche" within 30 min. A specimen for each drain underwent a sonication procedure in order to improve the sensitivity of the microbiological analysis, given the attachment of biofilm to the drain's surface.

Sonication procedure: the contents of the drainage tip were opened under a laminar flow hood, the components were covered for at least 90% of their volume with sterile physiological solution or Ringer's solution, the container was closed, and the sonication bath (BactoSonic-Bandelin, BANDELIN, Berlin, Germany) was prepared by filling the tub with sterile water. The container was vortexed for 30 s and sonicated at 30-40 kHz and 0.22 ± 0.04 W/cm² for 5 min. A total of 0.1 mL of sonicate was inoculated into plates of Columbia Blood Agar, Chocolate Agar, and MacConkey Agar. The plates were incubated in a 5% CO₂ atmosphere at 37 °C for 4 days, and MacConkey Agar at 37 °C for 48 h. Then, 0.1 mL was inoculated in Blood Agar and the incubation time was extended to 14 days at 37 °C in anaerobiosis. To identify bacteria and fungi, a protein profile obtained using matrix-assisted laser desorption ionization-time of flight (MALDI-TOF MS, Biomerieux Italia, Grassina, Italy) mass spectrometry was used. The mass peaks achieved by the test strains were compared to those of known reference strains. Currently, it is possible to distinguish an organism from an isolate within a short time frame. For antimicrobial susceptibility testing, the automatic instrumentation VITEK-2 (Biomerieux Italia) or the broth microdilution test SENSITITRE (Thermo Fisher Scientific, Waltham, MA, USA) was used. A second specimen of the drain underwent culture analysis. Hence, the tip of the drain was unrolled on a plate with the aid of sterile tweezers. The same culture media were used, with the same incubation methods as described above.

Scanning Electron Microscopy Analysis

The samples intended for SEM analysis were fixed in 2.5% glutaraldehyde (MERCK, Rahway, NJ, USA) in 0.1 M sodium cacodylate buffer (Sigma, St.
Louis, MO, USA, C-0250) for 1 h and then rinsed with 0.1 M sodium cacodylate buffer solution. After osmium tetroxide (Electron Microscopy Sciences, Hatfield, PA, USA) fixation, another cacodylate buffer washing was performed, and complete dehydration was achieved in a graded alcohol series and hexamethyldisilane. The samples were then mounted on aluminum stubs, gold-sputtered, and observed with an SEM Philips XL 20 (FEI, Milan, Italy) at an accelerating voltage of 20 kV at 1500×, 2000×, and 4000× magnification (Figure 2). The degree of biofilm formation was determined using a semi-quantitative grading scale (Table 1 and Figures 3-6). For the SEM analysis, the samples were longitudinally cut in half and both surfaces were observed. From each patient, five different images were taken at 500× magnification. Two reviewers (MMB and CL) independently reviewed all scans, and differences were resolved by a third reviewer (AG), who independently reviewed all cases of disagreement.
Table 1. Semi-quantitative scale of biofilm formation (Supplementary Materials).
Grade 0: Absence of pathogenic cells.
Grade 1: Scattered cocci, immature biofilm (covering < 25% of the specimen surface).
Grade 2: Partially mature biofilm (covering < 50% of the specimen surface).
Grade 3: Mature biofilm (covering between 50 and 75% of the specimen surface).
Grade 4: Mature biofilm (covering the entire specimen surface).

Statistical Analysis

Data were collected and organized using Excel (v. 16.77.1) (Microsoft, Redmond, WA, USA). Categorical variables were expressed as numbers and percentages. Continuous variables were expressed as means and standard deviations (SD) or medians and interquartile ranges (IQR) for normally and non-normally distributed data, respectively. The Shapiro-Wilk test was used to assess the normality of continuous data. All continuous variables were found to be non-normal, so a nonparametric approach was appropriate. The non-parametric Friedman test was used for comparing pre- and postoperative data. All statistical analyses were performed using SPSS (Version 28.0.1, IBM Corp., Armonk, NY, USA). A p < 0.05 was considered significant. An a priori power analysis was performed with G*Power. Considering the F test, one tail, an effect size of 0.3, an alpha error of 0.5, and a study power of 95%, 28 patients were required for the study.

Microbiological Results

From the microbiological analysis, four (8%) patients were characterized by culture positivity; specifically, two (4%) patients were positive for Staphylococcus epidermidis on the culture of the drain specimen without sonication, whereas two (4%) patients were positive for Enterococcus faecium on the culture performed after sonication. None of these patients were positive on both analyses. No polymicrobial cultures were detected. From the SEM analysis, it was observed that the same samples were characterized by biofilm formation. Specifically, the two (4%) patients with Enterococcus faecium positivity had a biofilm score of grade 1, whereas those with Staphylococcus epidermidis positivity had a biofilm score of grade 3. The semi-quantitative SEM analysis showed that no samples were characterized by grade 4 biofilm formation. A total of 8 (16%) patients had grade 3 and 14 (28%) patients had grade 2. Grade 1, scattered cocci with immature biofilm, was observed in 16 (32%) patients. Finally, in 12 (24%) patients a total absence of bacteria was reported (grade 0).
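To make the grading of Table 1 easy to reuse in analysis scripts, the mapping from an estimated surface coverage to a grade can be encoded as follows. This is a minimal sketch of our own, assuming a numeric per-specimen coverage estimate; the function name and arguments are hypothetical and were not part of the study protocol, and the qualitative maturity descriptors of grades 3-4 are represented here only through the coverage thresholds.

def biofilm_grade(coverage_percent: float) -> int:
    """Map the estimated % of specimen surface covered by biofilm to a Table 1 grade."""
    if coverage_percent <= 0:
        return 0  # grade 0: absence of pathogenic cells
    if coverage_percent < 25:
        return 1  # grade 1: scattered cocci, immature biofilm
    if coverage_percent < 50:
        return 2  # grade 2: partially mature biofilm
    if coverage_percent <= 75:
        return 3  # grade 3: mature biofilm covering 50-75% of the surface
    return 4      # grade 4: mature biofilm covering the entire surface

# Grade distribution reported above (grades 0-4): 12, 16, 14, 8, and 0 of 50 patients.
counts = {0: 12, 1: 16, 2: 14, 3: 8, 4: 0}
assert sum(counts.values()) == 50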
Clinical Results

Patients' characteristics are reported in Table 2. The mean follow-up was 40 months, ranging from 38 to 45 months. All patients reported a significant clinical improvement in terms of OKS and VAS after surgery at a minimum of 3 years of follow-up (p < 0.05) (Table 3). No patients characterized by culture positivity and/or biofilm formation reported superficial wound infection, fistula, or any other sign of PJI during the 3-year follow-up period.

Discussion

The most important finding of the present study was that 76% of patients (38 of 50) who underwent primary TKA showed biofilm formation on the tip of the surgical drain 24 h after surgery, even though none showed mature biofilm formation (grade 4). Furthermore, 8% of patients were characterized by culture positivity. However, none of the patients included in the study showed signs of PJI up to 3 years of follow-up. From our results, it can be noted that patients with a positive culture showed no concordance between the conventional analysis and the culture performed after sonication, whereas one would expect conventional culture positivity to be accompanied by culture positivity after sonication. A possible explanation is that contamination could lead to culture positivity on conventional analysis but not after sonication, owing to the absence of biofilm formation. On the other hand, positivity after sonication might not correspond to positivity on conventional analysis because of the low sensitivity of the analysis or the senescence of the biofilm [18].

Our results are in line with previous research. Using next-generation sequencing technology, Tarabichi et al. reported the presence of microbes in 35.3% of primary arthroplasties (hip and knee) without any sign of PJI, demonstrating the presence of a microbiome in prosthetic joint environments [13]. Furthermore, Dower et al. found that biofilm in closed-suction drains is already formed by the second postoperative hour in patients undergoing breast augmentation surgery [16]. Sambanthamoorthy et al. found mature Staphylococcus aureus biofilm formation at 12 h in an in vitro flow-cell study [19]. It is imperative to consider that bacterial colonization and the formation of biofilm represent a common complication of invasive devices [20,21]. Indeed, drawing a comparison with urologic surgery, the urinary catheters used to empty the bladder and collect urine in a drainage bag can be subject to bacterial colonization. Several studies reported that biofilms readily develop on the inner or outer surfaces of urinary catheters upon insertion [22]. Analogously to our results, the dominant organisms isolated from urinary catheters were S. epidermidis, S. aureus, Enterococcus faecalis, and Escherichia coli [23]. However, as reported in previous studies, it should be considered that colonization and infection remain two different processes; therefore, the isolation of microorganisms from urinary catheters does not mean infection [24]. Indeed, in orthopedic surgery, several studies showed that the use of drainage did not significantly increase the risk of wound or deep infection after TKA [25,26]. In light of our results, no patients showed symptoms or signs of PJI at 3 years of follow-up. On the other hand, a significant clinical improvement in terms of OKS and VAS was observed. However, it remains to be elucidated whether the presence of biofilm might be predictive of a higher risk of PJI.
Limitations

The present study has limitations that warrant disclosure. Firstly, a semi-quantitative grading scale was used for the SEM evaluation of biofilm formation. However, no other methods have been found in the literature, and the focus of the present study was to assess the presence or absence of biofilm on the drain tip; the semi-quantitative scale was the only method available to report the results. A quantitative molecular approach such as real-time PCR would have been the best method for the analysis of the presence of bacteria, but its substantial costs did not allow us to use it. The sample size was limited, but this reflects the pilot nature of the study, considering also the high costs of SEM analyses. The follow-up was limited.

Conclusions

In our population of 50 patients who underwent primary TKA, 76% showed immature or partially mature biofilm formation on the tip of the surgical drain at 24 h from surgery. Instead, 8% of patients were characterized by culture positivity of the surgical drain. No patients showed symptoms or signs of PJI at 3 years of follow-up. Further studies are needed to establish whether the formation of biofilm and/or culture positivity determines a higher risk of future PJI, or whether they only represent contamination.

Figure 1. (A) The removal of the surgical drain on the first day after TKA surgery, performed in a sterile and standardized manner. (B) Surgical drain tip section on a sterile surface.

Figure 2. Low-magnification SEM image showing the tip of the drain sectioned longitudinally.

Figure 3. Grade 0: absence of pathogenic cells. (A) Scattered red blood cells (*) are visible in the background; no pathological cells are detectable. (B) Blood clot.

Figure 4. Scattered cocci, immature biofilm (covering < 25% of the specimen surface). (A) Bacterial cells appear irregular, without a matrix covering (orange arrow). (B-D) Cells were arranged either as individual cells or as short chains with a filamentous projection of extracellular matrix (orange arrows). Red blood cells (*).

Figure 5.
Partially mature biofilm (covering < 50% of the specimen surface). (A) Bacterial cells are arranged in a cluster, with extracellular matrix in the background. (B) High magnification: bacterial cells arranged in a cluster (orange arrow). Red blood cells (*).

Figure 6. Mature biofilm (covering between 50 and 75% of the specimen surface). (A-D) Images of biofilm structure. Biofilms show a cobwebbed appearance, with an amorphous polymeric extracellular matrix surrounding (orange arrow) and interconnecting bacteria. Red blood cells (*).
2024-02-02T16:08:39.715Z
2024-01-31T00:00:00.000
{ "year": 2024, "sha1": "c71f939b733147dc05f43e7eedc3f5e601137946", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/12/3/366/pdf?version=1706696794", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "612ac272603a5693eb841f35702e8a7698194212", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }